Unity Baked Volumetrics

A graphics solution for completely baked volumetric lighting, designed to be very lightweight and inexpensive for VR (and non-VR).

Results

Screenshots: sponza, industrial, yakohama, and church test scenes.

Features

TLDR: This is basically lightmapping but for volumetric fog.

NOTE: Built on the Built-In Render Pipeline (but can be adapted to work with the SRPs).

Information

How it works

To start, you define a box volume within your scene: where it is located, and its bounds. The resolution of the 3D texture is computed from the size of the individual voxels (you can also set a custom resolution by hand). Next, you choose to sample lighting from the scene light probes, light probe proxy volumes, or a voxel tracer. After that, there is an option to set the fog density of the volume (constant, luminance-based, height-based, etc.). The final step generates the 3D texture, which is saved to disk.
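The resolution step above can be sketched as follows. This is an illustrative stand-in, not the project's actual code; `volume_resolution` and its parameters are assumptions.

```python
# Sketch: computing the 3D texture resolution for a box volume
# from a target voxel size, as described above.
import math

def volume_resolution(bounds_size, voxel_size):
    """bounds_size: (x, y, z) extents of the box volume in world units.
    voxel_size: desired edge length of one voxel, in world units.
    Returns the (width, height, depth) of the baked 3D texture."""
    return tuple(max(1, math.ceil(axis / voxel_size)) for axis in bounds_size)

# A 10 x 5 x 20 unit volume at 0.5-unit voxels bakes to a 20 x 10 x 40 texture.
print(volume_resolution((10.0, 5.0, 20.0), 0.5))  # (20, 10, 40)
```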

For the shader, we sample that 3D texture and raymarch through it, testing against the scene depth buffer. The ray terminates if it intersects the scene, leaves the volume bounds, or if the accumulated density becomes too thick. While raymarching, we also jitter the samples to improve quality. The final result is then lerped with the scene color buffer based on transmittance.
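The raymarch described above can be sketched in pure Python (the real version is a shader; names like `sample_fog` and the 0.01 transmittance cutoff are illustrative assumptions):

```python
# Sketch: raymarching a baked 3D fog texture against scene depth,
# with jittered samples and early termination, as described above.
import random

def raymarch(sample_fog, ray_origin, ray_dir, scene_depth, volume_min, volume_max,
             step_size=0.25, max_steps=64, jitter=True):
    """Returns (accumulated_color, transmittance). sample_fog(p) -> (rgb, density)."""
    # Jitter the starting offset to trade visible banding for noise.
    t = random.random() * step_size if jitter else 0.0
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for _ in range(max_steps):
        if t > scene_depth:            # ray intersected scene geometry
            break
        p = [o + d * t for o, d in zip(ray_origin, ray_dir)]
        if any(c < lo or c > hi for c, lo, hi in zip(p, volume_min, volume_max)):
            break                      # ray left the volume bounds
        rgb, density = sample_fog(p)
        absorb = density * step_size
        for i in range(3):
            color[i] += rgb[i] * absorb * transmittance
        transmittance *= max(0.0, 1.0 - absorb)
        if transmittance < 0.01:       # fog too thick; stop early
            break
        t += step_size
    return color, transmittance

def composite(scene_rgb, fog_rgb, transmittance):
    """Lerp the scene color buffer with the fog based on transmittance."""
    return [s * transmittance + f for s, f in zip(scene_rgb, fog_rgb)]
```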

Multiple Implementations

There are multiple versions of this effect: two post-processing versions, and a scene-based solution...

Post Processing V1

This is the first implementation of the effect implemented as a post-process. It does exactly as described in the How it works section.

Post Processing V2

This is similar to Post Processing V1, but adds an optimization: the volumetrics are rendered at half resolution and then upsampled.
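The half-resolution path can be sketched as below. This is an assumed illustration, not the project's exact code; a nearest-neighbor upsample is shown for brevity, while a real implementation would likely use a depth-aware bilinear upsample to avoid edge artifacts.

```python
# Sketch: render the fog at half resolution, then upsample back to full size.
def half_resolution(width, height):
    """Dimensions of the half-resolution render target."""
    return (max(1, width // 2), max(1, height // 2))

def upsample_nearest(half_res_image):
    """half_res_image: 2D list of pixels. Returns an image twice the size,
    duplicating each pixel into a 2x2 block (nearest-neighbor)."""
    out = []
    for row in half_res_image:
        doubled = [px for px in row for _ in range(2)]
        out.append(doubled)
        out.append(list(doubled))
    return out

small = [[1, 2],
         [3, 4]]
print(upsample_nearest(small))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```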

Scene Based

This implementation is a scene-based effect. It is for circumstances where you can't create custom post-processing effects. Why the heck would this be the case? A good example is VRChat, where you can't make custom post-processing effects, but you can still make shaders within the scene itself.

This REQUIRES camera depth texture generation to be enabled. It is enabled automatically for deferred rendering, but not for forward rendering by default. You can enable it from C# scripting (e.g. by setting `camera.depthTextureMode |= DepthTextureMode.Depth;`). However, if you don't have access to the main camera's properties, there are other ways of enabling it.

Camera Depth Texture Trick 1: If the post-processing stack is available, enabling ambient occlusion will set the camera depth texture generation flag. (For low overhead, put the AO quality settings at their lowest if you don't intend to use the effect. The intensity value also needs to be greater than 0, otherwise the effect won't be active.)

Camera Depth Texture Trick 2: Courtesy of orels1: create a directional light that casts shadows, and set its culling mask to a specific layer. This will cause Unity to enable camera depth texture generation.

This will be elaborated on, but it has been tested and works on the Oculus/Meta Quest 2.

Advantages/Drawbacks

Advantages

Drawbacks

Future Plans/Ideas

Credits