What is global illumination, and what techniques are used to tackle it in real-time applications? The talk briefly covers algorithms like instant radiosity, light propagation volumes, and voxel cone tracing. Additional details are in the slide notes.
materials (describing how photons interact with surfaces and volumes; aka objects’ appearance)
camera (describing how photons are gathered and displayed)
Materials emit photons, scatter them, and absorb them. That’s what’s happening in the scene. The final output of all these effects is captured by the camera system. And that’s it!
GI doesn’t actually exist per se. For example, camera effects exist independently of the actual scene and light: they take input, modify it, and output. Materials also exist independently of the actual setup: they take input, modify it, and output new values. GI, however, is very dependent on the scene itself. GI is a consequence of the scene (the scene’s geometry and the materials applied). Therefore, GI is an effect, and in real-time graphics we simulate effects. Path tracing is not a simulation; it is an evaluation of the processes that happen in the scene (emit, modify by BRDF, bounce). But in real-time graphics, GI is one of the consequences of the light transport. In reality, GI is the process itself, but we want to simplify and simulate it, and therefore we’ve created different effects that can be separated and evaluated independently (shadows, AO, indirect illumination...), all of which are in fact parts of one overall bigger thing called global illumination.

GI is a set of algorithms that calculate how much and what kind of light arrives at a certain point in the scene. You can think of it as an irradiance calculation. It combines effects such as direct lighting, ambient occlusion, indirect illumination, shadowing, and caustics. GI algorithms are approaches for effectively and efficiently computing irradiance in a spherical domain, for every point in space. A GI algorithm can include all of the mentioned effects (shadows, AO, indirect illumination...) or only some of them, but they all serve to efficiently compute irradiance values. Those values, combined with the material’s properties, are used to evaluate the radiance that forms the final output of the rendering, just before being processed by the camera.
Examples. In CG, we always simulate some effects separately, while in reality they all happen simultaneously. We are just trying to mimic nature the best we can.
What is GI trying to solve? The infinite light bounces that happen throughout the scene and eventually converge to some stable state. We try to calculate, or at least approximate, the irradiance or radiance values in such a converged state.
There are even more techniques and algorithms than mentioned here, but these are probably the major ones. There are also screen-space techniques like SSDO (screen-space directional occlusion) and Deep G-buffers, as well as some offline techniques that use geometric approximations, like surfels.
For most of the techniques, the slides will just briefly cover the algorithm. This talk is not intended to give an in-depth explanation of the techniques, but rather to provide insight into what different approaches are used, how people were thinking about the problem, and what strategies were then developed.
SSDO: https://people.mpi-inf.mpg.de/~ritschel/Papers/SSDO.pdf
Deep G-buffer: http://graphics.cs.williams.edu/papers/DeepGBuffer14/
Surfels: http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/
The most straightforward simulation of photon interaction with the scene. For each direction, trace the light path a photon would take. Sample and gather over the hemisphere, and do that recursively.
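A minimal sketch of that recursive gather; the helper types (Vec3, Ray, Hit) and functions (intersect, sampleHemisphere, brdf) as well as the constants are assumptions, not something from the talk:

```cpp
// Sketch of one recursive path tracing step. MAX_DEPTH, SAMPLES, Vec3, Ray,
// Hit, intersect(), sampleHemisphere() and brdf() are assumed helpers.
Vec3 trace(const Scene& scene, const Ray& ray, int depth) {
    Hit hit;
    if (depth > MAX_DEPTH || !intersect(scene, ray, hit))
        return Vec3(0);                        // path left the scene
    Vec3 radiance = hit.material.emission;     // photons emitted at this point
    for (int i = 0; i < SAMPLES; ++i) {        // Monte Carlo gather over hemisphere
        float pdf;
        Vec3 wi = sampleHemisphere(hit.normal, pdf);          // random direction
        Vec3 Li = trace(scene, Ray(hit.pos, wi), depth + 1);  // recurse
        radiance += brdf(hit, wi, -ray.dir) * Li
                  * dot(wi, hit.normal) / (pdf * SAMPLES);
    }
    return radiance;
}
```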
Don’t sample; instead, evaluate the surface-to-surface view properties of the entire scene. Progressively iterate, where each step recalculates the illumination of a patch coming from all the other patches. Only diffuse surfaces (usually). Used for architectural visualization. Usually not real-time.
Patch == smooth, gradually changing piece of surface
Radiosity: https://en.wikipedia.org/wiki/Radiosity_%28computer_graphics%29
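One such iteration step could look like the sketch below; the view-factor matrix is assumed precomputed, and Patch/Vec3 are hypothetical types:

```cpp
// One Jacobi-style radiosity step: B_i = E_i + rho_i * sum_j F_ij * B_j.
// formFactor[i][j] ("how well patch i sees patch j") is precomputed.
void radiosityStep(std::vector<Patch>& patches,
                   const std::vector<std::vector<float>>& formFactor) {
    std::vector<Vec3> next(patches.size());
    for (size_t i = 0; i < patches.size(); ++i) {
        Vec3 gathered(0);
        for (size_t j = 0; j < patches.size(); ++j)
            gathered += formFactor[i][j] * patches[j].radiosity; // light from j
        next[i] = patches[i].emission + patches[i].reflectance * gathered;
    }
    for (size_t i = 0; i < patches.size(); ++i)
        patches[i].radiosity = next[i];   // repeat until convergence (or good enough)
}
```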
Paper: http://www.vis.uni-stuttgart.de/~dachsbcn/download/rsm.pdf
RSM and instant radiosity: http://www.bpeers.com/blog/?itemid=517
Instead of sampling illumination from other points like in path tracing, spread the light from a surface onto other surfaces. At each ray-tracing hit, we generate a light and “spread” it into the scene (render the scene with that light). We then sum the contributions of all hits to get a final result that approximates how the light is distributed throughout the scene.
Instead of ray tracing into the scene, use an RSM to generate VPLs. We can utilize the GPU to find all the VPLs and then use a deferred pipeline to efficiently render the many lights that together approximate the indirect illumination.
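Roughly, the VPL generation could look like this; the RSM accessors and importanceSampleRSM are hypothetical names, not from the papers linked above:

```cpp
// Sketch: turn RSM texels into VPLs. Each VPL is later rendered as an
// ordinary point light in the deferred pipeline, additively blended.
std::vector<VPL> gatherVPLs(const RSM& rsm, int count) {
    std::vector<VPL> vpls;
    vpls.reserve(count);
    for (int i = 0; i < count; ++i) {
        int2 texel = importanceSampleRSM(rsm, i);    // prefer high-flux texels
        VPL v;
        v.position = rsm.position(texel);            // world position from RSM
        v.normal   = rsm.normal(texel);
        v.flux     = rsm.flux(texel) / float(count); // split the light's power
        vpls.push_back(v);
    }
    return vpls;  // deferred pass: shade the G-buffer once per VPL and sum
}
```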
Somewhat the “inverse” of path tracing. Shoot photons from the light source and bounce them around the scene. When sampling the hemisphere for a certain hit, use the local neighbors of the sample to get a better approximation of the lighting coming from that particular direction.
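The gather is usually a density estimate over the k nearest stored photons; kNearest over some kd-tree, and the Photon/PhotonMap types, are assumed here:

```cpp
// Sketch of the photon-map radiance estimate at a hit point:
// sum BRDF-weighted photon powers over the disc the k photons cover.
Vec3 estimateRadiance(const PhotonMap& map, const Hit& hit, int k) {
    float radius;  // set to the distance of the farthest of the k photons
    std::vector<Photon> nearest = kNearest(map, hit.pos, k, radius);
    Vec3 flux(0);
    for (const Photon& p : nearest)
        flux += brdf(hit, p.incomingDir, hit.viewDir) * p.power;
    return flux / (M_PI * radius * radius);  // density estimate over disc area
}
```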
Quick and dirty intro to spherical harmonics: what a cosine wave is to Fourier analysis, Legendre polynomials are to spherical harmonics. We use basis functions that get multiplied by calculated coefficients to get an approximation of the spherical function. More coefficients, more accuracy in our approximation. If we need just low-frequency data from the spherical function, we save massive amounts of memory by storing it as a few SH coefficients rather than storing an entire spherical map (environment map/cube map).
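In standard notation (not written out in the slides), the projection and reconstruction of a spherical function f over the basis functions Y are:

\[
c_\ell^m = \int_{S^2} f(\omega)\, Y_\ell^m(\omega)\, d\omega,
\qquad
f(\omega) \approx \sum_{\ell=0}^{n-1} \sum_{m=-\ell}^{\ell} c_\ell^m\, Y_\ell^m(\omega)
\]

An order-n expansion needs only n² coefficients per color channel (e.g. 9 floats for n = 3), versus a full environment/cube map.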
SH lighting: (paper) http://www.research.scea.com/gdc2003/spherical-harmonic-lighting.pdf, (slides) https://graphics.cg.uni-saarland.de/fileadmin/cguds/courses/ss15/ris/slides/RIS18Green.pdf
An efficient representation of irradiance maps: https://cseweb.ucsd.edu/~ravir/papers/envmap/envmap.pdf
Stupid SH tricks: http://www.ppsloan.org/publications/StupidSH36.pdf
BRDF shading using SH: http://www.ppsloan.org/publications/shbrdf_final17.pdf
Precompute and arrange SH probes around the scene; at run time they can be picked and used to evaluate the lighting in real-time.
http://developer.amd.com/wordpress/media/2012/10/Tatarchuk_Irradiance_Volumes.pdf
http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/
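A sketch of the trilinear probe lookup at shading time; ProbeGrid, SH9 and the small vector types are assumptions:

```cpp
// Blend the 8 SH probes at the corners of the grid cell containing point p.
// The blended SH is then evaluated in the direction of the surface normal.
SH9 sampleIrradianceVolume(const ProbeGrid& grid, Vec3 p) {
    Vec3 cell = (p - grid.origin) / grid.spacing;
    Int3 base = floorToInt(cell);
    Vec3 t = cell - toVec3(base);              // fractional position in the cell
    SH9 result{};
    for (int c = 0; c < 8; ++c) {
        Int3 offs((c >> 0) & 1, (c >> 1) & 1, (c >> 2) & 1);
        float w = (offs.x ? t.x : 1 - t.x)     // trilinear weight of this corner
                * (offs.y ? t.y : 1 - t.y)
                * (offs.z ? t.z : 1 - t.z);
        result += grid.probe(base + offs) * w;
    }
    return result;
}
```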
Precompute the radiance transfer functions and store them in an SH representation, per object. This radiance transfer is independent of the actual lighting in the scene and accounts for effects like self-shadowing and self-reflection. We can then combine the PRT with the actual lighting in the scene (which can even be dynamic) to compute the final output of the shading.
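For the diffuse case this boils down to a dot product of SH coefficient vectors; the names below are illustrative, not from the talk:

```cpp
// Sketch of diffuse PRT shading: baked per-vertex transfer coefficients
// (they already encode self-shadowing/interreflection) dotted with the
// scene lighting projected into SH. The lighting may change every frame.
Vec3 shadePRT(const float transfer[9], const Vec3 lightSH[9]) {
    Vec3 exitRadiance(0);
    for (int i = 0; i < 9; ++i)
        exitRadiance += transfer[i] * lightSH[i];  // SH dot product
    return exitRadiance;
}
```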
Generate VPLs from RSM data. But instead of rendering the scene with the VPLs, encode them as SH and inject them into a grid. From there, propagate their contribution within the grid. Very good, much pretty. (check the final link’s slides)
Cascaded LPV (paper): http://www.vis.uni-stuttgart.de/~dachsbcn/download/lpv.pdf
Crytek: http://www.crytek.com/download/Light_Propagation_Volumes.pdf
More Crytek: http://www.crytek.com/cryengine/cryengine3/presentations/cascaded-light-propagation-volumes-for-real-time-indirect-illumination (this has awesome ppt slides)
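One propagation step might look like the sketch below; Grid3D/SH4 and fluxThroughFace are assumptions, and the real scheme (see the Crytek slides) is more careful about reprojecting light through the faces of the destination cell:

```cpp
// Sketch of one LPV propagation iteration: every cell receives the
// SH-encoded intensity its 6 axis neighbors send through the shared face.
// src/dst are ping-ponged; accum holds the running total used for lighting.
void propagateStep(const Grid3D<SH4>& src, Grid3D<SH4>& dst, Grid3D<SH4>& accum) {
    const Int3 dirs[6] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (Int3 c : src.cells()) {
        SH4 incoming{};
        for (Int3 d : dirs)
            if (src.contains(c - d))                        // neighbor on the -d side
                incoming += fluxThroughFace(src[c - d], d); // light entering via face
        dst[c] = incoming;    // feeds the next propagation step
        accum[c] += incoming; // accumulated over all steps
    }
}
```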
We train a neural network that takes a few input parameters such as position, normal, light direction… against a ground-truth rendering of the scene’s pixels (a path tracer feeds its output into the neural network as training data). To avoid using massive neural nets, we can train multiple smaller nets, one for each spatial part of the scene; at runtime, we search through a spatial structure to find the appropriate net for evaluation at the given coordinates.
Choosing good input parameters is important. But it yields good results.
RRF. Notice the caustics and the glossy surface. It’s all real-time, and you can change the lighting parameters.
RRF. Glossy everywhere? No problem. Want to be able to change material properties in real time? No problem either.
I shamelessly stole a bunch of pictures from different papers and presentation slides; I hope the authors don’t mind. Thanks!
PHOTONS EVERYWHERE
PROBLEMS OF COMPUTER GRAPHICS
generate digital imagery, so it looks “real”
only two problems:
brdf (diffuse, glossy, specular reflections)
btdf (refraction & transmission)
bssrdf (subsurface scattering)
resolution + fov
hdr & tonemapping
bloom & glow
GI is a consequence of how photons are scattered around the scene
GI is an effect, i.e. it doesn’t exist per se and is dependent on the scene
In CG terminology, GI is a set of algorithms that compute (ir)radiance
for any given point in space, in the spherical domain
That computed irradiance is then used in combination with the material’s properties at that particular point in space for the final calculation of the radiance
Radiance is used as the input to the camera system
global illumination sub-effects:
color bleed/indirect illumination
check if surface is lit directly
check how “occluded” the surface is and how hard it is for the light to reach that point in space
color bleed / indirect illumination
is reflected light strong enough so even diffuse surfaces “bleed” their color onto their surroundings (non-emitters behave like light sources)
is enough of the light reflected/refracted to create some interesting bright patterns
how does participating media interact with the light
describes how light is scattered around the scene, how light is transported through the scene
what interesting visual effects start appearing because of such light transport
[render comparison labels: sh · sh + ind.illum. · sh + vol. + ind.illum. · sh + caustics + ind.illum. + ao · sh + ind.illum. + ao]
FORMULATION OF THE PROBLEM
analytically calculate or approximate the irradiance over the sphere, for a certain point in space, in a converged state
how much each point [A] contributes to every other [B] in the scene
how much [A->B] influences point [A]
how much does that influence [B] back
recursive, but it can converge and reach a certain equilibrium
[all light bounces]
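These bullets are a restatement of the rendering equation; in standard notation (not written out in the slides):

\[
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i
\]

The recursion hides in L_i: the light arriving at x along ω_i is the outgoing radiance L_o of whatever point is visible in that direction, which is why the process is recursive yet can converge to an equilibrium.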
sample the hemisphere over the point with Monte Carlo
for every other sample, do the same thing recursively
for each surface-light path interaction, we evaluate the incoming light against the bsdf of the material
straightforward implementation of light bounces
very computationally expensive, not real-time
very good results, ground truth
for each surface element (patch), calculate how well it can see all other patches (view factor)
progressively recalculate the illumination of a certain patch from all other patches
start with direct illumination injected and iterate until convergence (or good enough)
only diffuse reflections
can be precomputed and is viewpoint-independent
REFLECTIVE SHADOW MAPS (RSM)
from the light’s perspective: depth, position, normal, flux
sample RSM to approximate lighting
the idea is used in other more popular algorithms
ray trace from the light source into the scene
for each hit, generate VPL and render the scene with it
gather the results
mix between sampling and radiosity
INSTANT RADIOSITY V2
don’t raytrace, but instead use RSM
use RSM to approximate where to place VPLs
deferred render with many lights
shoot photons from light source into the scene
gather nearby photons to calculate approximate radiance
good for caustics
“spherical Fourier decomposition”
Legendre basis functions that can be added together to represent the spherical domain function
calculate lighting at the point in space and save in SH representation
build grid of such SH values
interpolate in space (trilinear)
build acceleration structure for efficiency (octree)
PRECOMPUTED RADIANCE TRANSFER
precomputed SH for an object that accounts for self-shadowing and self-interreflection
independent of the lighting
DEFERRED RADIANCE TRANSFER VOLUMES
bake manually/auto placed probes that hold PRT data
create grid and inject PRT probes into it, interpolated between manually selected locations
use local PRT probe * lighting to get the illumination data
[CASCADED] LIGHT PROPAGATION VOLUMES ([C]LPV)
generate VPLs using RSM
inject VPL data into 3D grid of SH probes
propagate light contribution within the grid, iteratively going from one cell to another
[figure: sample lit surface elements (a set of regularly sampled VPLs of the scene, from the light) → discretize the initial VPL distribution by the regular grid and SH → light propagation in the grid → scene illumination with the grid]
VOXEL CONE TRACING (SPARSE VOXEL OCTREE GI)
rasterize scene into 3d texture
generate mip levels and octree for textures
sample with cone tracing
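A sketch of a single cone traced through the voxel mip chain; sampleLevel, Texture3D and the constants are assumptions:

```cpp
// Step along the cone axis; the wider the cone footprint, the coarser the
// mip level sampled. Front-to-back alpha compositing accumulates radiance.
Vec4 traceCone(const Texture3D& voxels, Vec3 origin, Vec3 dir, float halfAngle) {
    Vec4 accum(0);                           // rgb = radiance, a = occlusion
    float dist = VOXEL_SIZE;                 // start just off the surface
    while (accum.a < 1.0f && dist < MAX_DIST) {
        float diameter = 2.0f * dist * tanf(halfAngle);  // cone footprint
        float mip = log2f(diameter / VOXEL_SIZE);        // footprint -> mip level
        Vec4 s = sampleLevel(voxels, origin + dir * dist, mip);
        accum += (1.0f - accum.a) * s;       // front-to-back compositing
        dist += diameter * 0.5f;             // step scales with footprint
    }
    return accum;
}
```

A few such cones (one per BRDF lobe, or a handful spread over the hemisphere) replace the thousands of rays a path tracer would need, which is what makes this approach real-time.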