Real-time Procedural Modeling and Rendering of Volumetric Clouds
Problem statement
Clouds play an important role in the rendering of 3D outdoor scenes. The realism of such scenes depends on the colors and shapes of the clouds, which are themselves governed by complex physical laws. Many methods have been developed to generate extremely realistic cloud images. Unfortunately, these methods are computationally expensive and cannot be used in real-time applications. In most games, clouds are instead represented using 2D images. This strategy is efficient but usually leads to a lack of realism when representing low-level clouds. The goal of this project is to create a real-time volumetric cloud generation and rendering system allowing interactive visualization of 3D outdoor scenes. The visual aspect of the clouds should depend on a few user-defined parameters so that the result remains controllable.
Proposed strategy
Modeling & Dynamics
We propose to use a 3D uniform grid of densities to represent the cloudscape. To generate plausible cloud dynamics inspired by simple meteorological rules, we use a cellular automaton (CA) similar to the one used by Dobashi et al. [Dob00]. However, running a CA on a fine grid covering the whole sky would be too computationally expensive. Instead, we propose to use this method to generate a dynamic coarse structure of the clouds, refined afterwards with a higher-frequency fractal noise. To control the behavior of the clouds during the execution of the CA, we set the state-transition probabilities of each voxel using procedural 3D noise. With this method, a user can easily create procedural animated clouds from just a few user-friendly parameters (e.g., sky coverage).
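To make the coarse modeling step concrete, the following C++ sketch shows one update of a single voxel, in the spirit of the boolean growth rules of Dobashi et al. [Dob00] combined with our noise-driven transition probabilities. The field names, the noise3D() helper, and the probability mapping are illustrative assumptions, not our exact implementation.

    #include <random>
    #include <vector>

    // One CA cell: the three boolean fields of Dobashi et al. [Dob00].
    struct Cell {
        bool hum; // humidity: vapor available for cloud growth
        bool act; // activation: phase transition about to occur
        bool cld; // cloud: the cell currently contains cloud matter
    };

    // Assumed helper: any procedural 3D noise in [0,1] (e.g., Perlin noise).
    float noise3D(float x, float y, float z);

    // True if any of the 6 direct neighbors has its 'act' bit set.
    bool anyNeighborAct(const std::vector<Cell>& g, int x, int y, int z, int dim) {
        const int d[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        for (const auto& o : d) {
            int nx = x + o[0], ny = y + o[1], nz = z + o[2];
            if (nx < 0 || ny < 0 || nz < 0 || nx >= dim || ny >= dim || nz >= dim)
                continue;
            if (g[(nz * dim + ny) * dim + nx].act) return true;
        }
        return false;
    }

    // One CA step for one voxel. The per-voxel probabilities are derived
    // from procedural noise, which is how the user steers the sky coverage.
    Cell updateCell(const std::vector<Cell>& g, int x, int y, int z, int dim,
                    std::mt19937& rng) {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        const Cell& c = g[(z * dim + y) * dim + x];

        // Noise-driven transition probabilities (scaling chosen for illustration).
        float n    = noise3D(x * 0.05f, y * 0.05f, z * 0.05f);
        float pHum = 0.10f * n;          // chance that humidity appears
        float pAct = 0.01f * n;          // chance of spontaneous activation
        float pExt = 0.10f * (1.0f - n); // chance that cloud is extinguished

        Cell next;
        next.cld = c.cld || c.act;                                     // activation becomes cloud
        next.act = !c.act && c.hum && anyNeighborAct(g, x, y, z, dim); // growth spreads
        next.hum = c.hum && !c.act;                                    // humidity is consumed

        if (u(rng) < pHum) next.hum = true;
        if (u(rng) < pAct) next.act = true;
        if (u(rng) < pExt) next.cld = false;
        return next;
    }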
Scene setup
In order to fully cover the sky from the camera viewpoint, we map the resulting 3D uniform grid of densities onto the visible portion of a spherical layer similar to the one used by Schneider and Vos [Sch15]. This spherical layer is centered on a point obtained by projecting the position of the camera onto the ground and translating it in the -y direction. This configuration gives the impression that clouds descend toward the horizon from the camera viewpoint. When the camera moves in the xz plane, we shift the texture coordinates to convey motion by scrolling the clouds across the sky.
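As a minimal sketch of this setup, the C++ snippet below computes the shell center below the camera and the distance at which a view ray enters the layer. The Vec3 type, the offset value, and the helper names are assumptions made for illustration.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct Shell {
        Vec3  center; // below the camera's ground projection
        float rInner; // radius of the cloud layer bottom
        float rOuter; // radius of the cloud layer top
    };

    // Center the shell on the camera's ground projection, pushed down
    // along -y so the layer curves toward the horizon.
    Shell makeShell(const Vec3& camPos, float offset, float rInner, float rOuter) {
        return Shell{Vec3{camPos.x, -offset, camPos.z}, rInner, rOuter};
    }

    // Nearest positive intersection of a normalized ray with a sphere;
    // returns a negative value when the ray misses (quadratic solve).
    float raySphere(const Vec3& o, const Vec3& d, const Vec3& c, float r) {
        Vec3 oc{o.x - c.x, o.y - c.y, o.z - c.z};
        float b    = oc.x * d.x + oc.y * d.y + oc.z * d.z;
        float q    = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
        float disc = b * b - q;
        if (disc < 0.0f) return -1.0f;
        float s = std::sqrt(disc);
        float t = -b - s;
        return (t > 0.0f) ? t : -b + s;
    }

For a camera inside the inner sphere, the raymarching interval of a view ray is then [raySphere(o, d, center, rInner), raySphere(o, d, center, rOuter)].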
Rendering
The clouds are rendered by raymarching through the spherical layer, using the single-scattering approximation for lighting. Our implementation uses the Henyey-Greenstein phase function, which results in a higher probability of light scattering in the forward direction. To give artists control over light propagation inside the clouds, we replace the physically-correct expression of optical thickness by the simple product of the density integrated in the voxel and a user-controlled scalar. The colors of the view samples are then composited in front-to-back order. Multiple scattering events are neglected by the single-scattering approximation, which makes the clouds appear too dark. To brighten the clouds at nearly no cost, we apply tone mapping; the tone mapping is slightly shifted toward blue to fake the blue tint due to the reflection of the sky on the clouds.
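As a hedged illustration of this lighting model, the sketch below combines the Henyey-Greenstein phase function with one front-to-back compositing step. Here 'sigma' stands for the artist-controlled extinction scalar mentioned above; the scalar radiance and the variable names are simplifications for illustration.

    #include <cmath>

    constexpr float kPi = 3.14159265f;

    // Henyey-Greenstein phase function. cosTheta is the cosine of the angle
    // between the view ray and the light direction; g in (0,1) biases the
    // scattering forward, brightening clouds seen against the sun.
    float henyeyGreenstein(float cosTheta, float g) {
        float g2 = g * g;
        return (1.0f - g2) /
               (4.0f * kPi * std::pow(1.0f + g2 - 2.0f * g * cosTheta, 1.5f));
    }

    // One front-to-back compositing step along the view ray. The extinction
    // is the sampled density times the user-controlled scalar 'sigma', which
    // replaces the physically-correct optical thickness as described above.
    void compositeStep(float density, float sigma, float ds, float lightEnergy,
                       float& transmittance, float& radiance) {
        float stepT = std::exp(-sigma * density * ds); // Beer-Lambert over the step
        radiance += transmittance * (1.0f - stepT) * lightEnergy;
        transmittance *= stepT;
        // The march can stop early once transmittance is close to zero.
    }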
Optimizations & Implementation
We use adaptive raymarching to speed up rendering in cloud-free regions of the sky. The same number of subdivisions is made on each view ray, but we traverse the ray more or less quickly according to the presence of clouds at each sampling point. If the density estimated at a sampling point is lower than a given threshold (i.e., the sample is in a cloud-free region), we advance along the ray with a large view step. Otherwise, we first move backward by one large view step before advancing with standard view steps, in order to better sample the boundaries of the clouds. We also implemented another optimization proposed by Schneider and Vos [Sch15], which consists in updating only every other pixel at each frame when rendering the clouds; the color of each remaining pixel is simply reused from the previous frame. Finally, a basic LOD scheme has been implemented so that less complex noise is used to refine clouds far from the camera. Our solution has been implemented on the GPU: the modeling and dynamics of the clouds are computed in three passes, each executed as an OpenGL compute shader.
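The adaptive traversal can be summarized by the loop below, which reuses Vec3 and compositeStep() from the previous sketches. sampleDensity(), inScattered(), the thresholds, and the step sizes are illustrative placeholders rather than our exact values.

    #include <algorithm>

    // Assumed helpers: density lookup in the refined grid, and in-scattered
    // light at a sample (single scattering with the phase function above).
    float sampleDensity(const Vec3& p);
    float inScattered(const Vec3& p);

    float marchClouds(const Vec3& origin, const Vec3& dir,
                      float tEntry, float tExit, float sigma) {
        const int   kSamples   = 128;   // same sample budget for every view ray
        const float kThreshold = 0.01f; // below this: cloud-free sample
        const float fineStep   = 10.0f;
        const float largeStep  = 4.0f * fineStep;

        float transmittance = 1.0f, radiance = 0.0f;
        float t    = tEntry;
        bool  fine = false; // are we currently sampling inside/near a cloud?
        int   empty = 0;    // consecutive cloud-free fine samples

        for (int i = 0; i < kSamples && t < tExit && transmittance > 0.01f; ++i) {
            Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
            float density = sampleDensity(p);
            if (!fine) {
                if (density < kThreshold) {
                    t += largeStep;                      // skip empty sky quickly
                } else {
                    t = std::max(tEntry, t - largeStep); // back up: boundary overshot
                    fine  = true;
                    empty = 0;
                }
            } else {
                compositeStep(density, sigma, fineStep, inScattered(p),
                              transmittance, radiance);
                empty = (density < kThreshold) ? empty + 1 : 0;
                if (empty * fineStep >= largeStep) fine = false; // resume long strides
                t += fineStep;
            }
        }
        return radiance;
    }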
Conclusion
The results we obtain are quite convincing. The CA adds realism to the cloud dynamics by making clouds appear, disappear, merge, or split. The use of procedural 3D noise to control the CA allows the user to control the global aspect of the sky, while letting the system handle the dynamics and refinement of the generated coarse density structures.
Many improvements could be proposed to enhance our system, including finer artistic control of the cloudscape (e.g., localized control by picking), the implementation of different weather conditions that could indirectly control the CA used to generate the coarse cloud structure, or the implementation of procedural high-level clouds and air traffic.
Bibliography
[Sch15] Schneider A. and Vos N. (2015). The Real-Time Volumetric Cloudscapes of Horizon: Zero Dawn. Advances in Real-Time Rendering in Games, ACM SIGGRAPH.
[Dob00] Dobashi Y., Kaneda K., Yamashita H., Okita T., and Nishita T. (2000). A Simple, Efficient Method for Realistic Animation of Clouds. Proceedings of ACM SIGGRAPH 2000.