Global Illumination

Global illumination (GI) refers to rendering algorithms that simulate indirect lighting — the way light bounces between surfaces in a scene, illuminating areas that aren't directly lit by a light source. It's the difference between a scene that looks like a video game and one that looks like a photograph. Without GI, shadows are pure black, interiors lack ambient light, and colored surfaces don't bleed their hue onto neighboring objects.

The physics is straightforward: photons bounce. A red wall reflects red light onto a white floor. Sunlight enters a window and scatters through a room. Every surface becomes a secondary light source. Simulating this accurately requires solving the rendering equation — an integral over all possible light paths in a scene — which is computationally intractable for real-time applications.
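The rendering equation referred to above (Kajiya, 1986) states that the light leaving a point is its emission plus all incoming light, weighted by the surface's reflectance and the angle of incidence, integrated over the hemisphere:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

The difficulty is that the incoming radiance L_i at one point is itself the outgoing radiance of other points, so the equation is recursive: solving it exactly means accounting for every chain of bounces in the scene.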

Offline renderers (used in film and architecture) solve this through path tracing: firing millions of virtual rays and tracking their bounces. Pixar, ILM, and Weta use path tracing for feature films, where a single frame can take minutes to hours to render. The results are physically accurate but far too slow for interactive use.
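The core of path tracing is Monte Carlo integration: sample random directions over the hemisphere, evaluate incoming light along each, and average. The sketch below, a simplified illustration rather than any production renderer's code, estimates the reflected light for a diffuse surface under a hypothetical sky that brightens toward the zenith; the function and variable names are invented for this example.

```python
import math
import random

def sample_hemisphere_cosine():
    # Cosine-weighted direction on the hemisphere around normal (0, 0, 1);
    # the probability density of this sampling is cos(theta) / pi.
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def sky_radiance(direction):
    # Hypothetical environment light: brightest at the zenith, dark at the
    # horizon. A real path tracer would trace a ray here and recurse on
    # whatever surface it hits, tracking further bounces.
    return max(0.0, direction[2])

def estimate_outgoing_radiance(albedo, n_samples=100_000):
    # Monte Carlo estimate of the reflected term of the rendering equation
    # for a Lambertian surface (BRDF = albedo / pi). With cosine-weighted
    # sampling, (BRDF * L_i * cos_theta) / pdf simplifies to albedo * L_i.
    total = 0.0
    for _ in range(n_samples):
        total += albedo * sky_radiance(sample_hemisphere_cosine())
    return total / n_samples

# The analytic answer for this sky is albedo * 2/3; the estimate
# converges toward it as the sample count grows.
print(estimate_outgoing_radiance(albedo=0.6))  # ≈ 0.4
```

With few samples the estimate is noisy, which is exactly why film renderers fire millions of rays per frame — and why real-time systems lean on denoising instead.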

Real-time GI has historically relied on approximations. Lightmaps pre-bake indirect lighting into textures — fast to render but static, unable to respond to moving objects or changing time of day. Light probes sample indirect lighting at discrete points in space and interpolate between them at runtime, capturing broad color and brightness but missing fine detail. Screen-space GI estimates bounced light from what's visible on screen, missing contributions from off-screen geometry.
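The probe idea can be illustrated with a toy interpolation scheme. This sketch assumes a simple inverse-distance-weighted blend between two pre-baked probes; the `blend_probes` function and the flat probe list are inventions for illustration — real engines use structured probe grids or tetrahedral meshes and richer encodings such as spherical harmonics.

```python
import math

def blend_probes(point, probes):
    # Inverse-distance-weighted blend of pre-baked probe irradiance.
    # `probes` is a list of (position, rgb_irradiance) pairs — a toy
    # stand-in for an engine's probe grid.
    weights = [1.0 / max(math.dist(point, pos), 1e-6) for pos, _ in probes]
    total = sum(weights)
    blended = [0.0, 0.0, 0.0]
    for w, (_, irradiance) in zip(weights, probes):
        for c in range(3):
            blended[c] += (w / total) * irradiance[c]
    return blended

probes = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),   # probe near a red wall
          ((2.0, 0.0, 0.0), (0.0, 0.0, 1.0))]   # probe near a blue wall
print(blend_probes((1.0, 0.0, 0.0), probes))  # midpoint → [0.5, 0.0, 0.5]
```

The blend is cheap enough to run per object every frame, but because the probes themselves are baked offline, the result goes stale the moment geometry or lighting changes — the limitation noted above.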

The breakthrough in real-time GI came with hardware-accelerated ray tracing, introduced by NVIDIA's RTX architecture in 2018. Dedicated RT cores on the GPU trace rays in real time, enabling limited but convincing indirect lighting. Unreal Engine 5's Lumen system combines multiple GI techniques — screen-space tracing, signed distance fields, and hardware ray tracing — to provide fully dynamic global illumination at interactive frame rates. This was a landmark: for the first time, large-scale open worlds could have realistic bounced lighting without pre-computation.

Neural approaches are the next frontier. Neural rendering techniques use trained networks to denoise sparse ray-traced samples or to predict indirect lighting from learned priors. NVIDIA's DLSS (Deep Learning Super Sampling) already uses AI to reconstruct high-resolution frames from lower-resolution ray-traced inputs. The convergence of GPU computing, AI inference, and traditional graphics is making photorealistic real-time rendering increasingly accessible.

Further Reading