2. Confetti
• Think-tank for the game- and movie-related industries
• Middleware provider
– Aura – Dynamic Global Illumination System
– PixelPuzzle – PostFX pipeline
– Ephemeris – Dynamic Skydome system
-> licenses come with full source code
• Services for:
– Hardware vendors
– Many game developers (“Engine Tuner”)
• Provides software solutions for games and movies, and
tools for GPU manufacturers
• http://www.conffx.com
11. History
• The first HDR rendering pipeline appeared in a
DirectX SDK in 2004
• From then on, we called the collection of image-
space effects at the end of the rendering
pipeline the Post-Processing Pipeline
12. History
• The idea was to re-use resources (render
targets) and data in the Post-Processing Pipeline
to apply effects like
– Tone mapping + HDR rendering == dynamic contrast
operator
– Camera effects like Depth of Field, Motion Blur, lens
flare
– Color filters like contrast, saturation, color additions
and multiplications
• One of the first treatments of a collection of
effects in a Post-Processing Pipeline was given at
GDC 2007 [Engel2007]
13. History
• Since then
– Numerous new tone mapping operators have been
introduced [Day2012]
– New, more advanced Depth of Field algorithms
with shaped bokeh have appeared
• … but nothing changed fundamentally
14. Call for a new Post-Processing Pipeline
• RGB is not a suitable color space for PostFX
-> we should use a color space with a dedicated
luminance channel
• Global tone mapping operators didn’t work
out well in practice, only on paper
-> your artists probably limit the luminance values, and
therefore the tone mapping operator, because the textures
“blow out”
15. Call for a new Post-Processing Pipeline
• A fixed global gamma adjustment at the end
of the pipeline is a waste of cycles
– because it does a “similar thing” as the tone
mapper … why not make it dynamic and part
of the tone mapper?
– … we can also make it local, instead of just
adjusting gamma globally
16. Call for a new Post-Processing Pipeline
• We can also add more stages to the Post-
Processing Pipeline
• Adding screen-space effects:
– Ambient occlusion, by occupying the fourth channel of
the PostFX render targets or the luminance channel
– Skin
– Reflections, by utilizing the “same” blur kernels
• Any new Post-Processing Pipeline needs to be
written in compute shaders
-> substantial bandwidth savings and speed
increases
17. Simplified Pipeline Stages from 2007
[Pipeline diagram, reconstructed as a list of stages:]
• Primary Render Target (+ Z-Buffer)
• Downsample to ½ size -> Downsample to ¼ size
• Bright-Pass Filter -> Gauss Filter -> Gauss Filter II
• Measure Luminance: 64x64 RT -> 16x16 RT -> 4x4 RT -> 1x1 RT luminance
• Adapt Luminance: 1x1 RT
• Depth of Field
• Tone Mapping
• Color Filters
• Gamma Control
• Frame-Buffer
18. Yxy Color Space
• Instead of running the Post-Processing
Pipeline in RGB we can run it in Yxy
• The Y channel then holds the luminance
19. Yxy Color Space
• To apply any tone mapping operation, we
have to convert RGB into a space that lets us
easily separate out luminance
-> tone mapping is applied to luminance*
* … don’t apply it to an RGB value … an artist will notice
20. Yxy Color Space
• If we use Bloom, we have to apply tone
mapping twice in our pipeline
-> once in the Bright-Pass filter and once in
the final pass
-> that means we convert from RGB to
luminance and from luminance to RGB twice
21. Yxy Color Space
• If we run the whole pipeline in a color space
that holds luminance in a dedicated channel,
we only convert into this color space once, and
back once at the end of the pipeline
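As a concrete illustration, the RGB <-> CIE Yxy round trip can be sketched as follows. This is a minimal CPU-side sketch in Python; the matrices assume linear Rec.709/sRGB primaries with a D65 white point, and in the actual pipeline this math runs per-pixel on the GPU.

```python
# Linear Rec.709/sRGB <-> CIE XYZ matrices (D65 white point, assumed here).
RGB_TO_XYZ = (
    (0.4124564, 0.3575761, 0.1804375),
    (0.2126729, 0.7151522, 0.0721750),
    (0.0193339, 0.1191920, 0.9503041),
)
XYZ_TO_RGB = (
    ( 3.2404542, -1.5371385, -0.4985314),
    (-0.9692660,  1.8760108,  0.0415560),
    ( 0.0556434, -0.2040259,  1.0572252),
)

def mul(m, v):
    """3x3 matrix times 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def rgb_to_yxy(rgb):
    """Linear RGB -> Yxy: Y is luminance, (x, y) are chromaticities."""
    X, Y, Z = mul(RGB_TO_XYZ, rgb)
    s = X + Y + Z
    if s == 0.0:                       # black: chromaticity is undefined,
        return (0.0, 0.3127, 0.3290)   # fall back to the D65 white point
    return (Y, X / s, Y / s)

def yxy_to_rgb(yxy):
    """Yxy -> linear RGB, inverting the projection above."""
    Y, x, y = yxy
    if y == 0.0:
        return (0.0, 0.0, 0.0)
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return mul(XYZ_TO_RGB, (X, Y, Z))
```

The pipeline converts once on entry with `rgb_to_yxy`, runs every PostFX stage on the scalar Y channel, and converts back once with `yxy_to_rgb` at the end.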
22. Yxy Color Space
• For Bloom we can run the bright-pass filter in
one channel
-> a single-channel filter maps well to modern
scalar GPU hardware
-> speed-up
23. Yxy Color Space
• To apply SSAO … just “mix” it into your Y
channel
• … and re-use a blur kernel you use anyway to
blur it
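A sketch of what the two slides above describe: with the pipeline in Yxy, both the bright pass and the SSAO “mix” become scalar operations on the Y channel alone. The threshold value and the multiplicative AO mix are illustrative assumptions, not the talk’s exact choices.

```python
def bright_pass(y, threshold=1.0):
    """Bright-pass filter on luminance only: keep energy above threshold.

    One scalar channel instead of three RGB channels per pixel.
    """
    return max(y - threshold, 0.0)

def mix_ssao(y, ao):
    """'Mix' a screen-space ambient occlusion term (0..1) into Y.

    The AO buffer can then be blurred with a kernel the pipeline
    already runs anyway.
    """
    return y * ao
```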
24. Yxy Color Space
• Why the Yxy color space? Comparison of color spaces
for High-Dynamic Range Rendering:

Color Space | # of cycles (encoding) | Bilinear Filtering | Blur Filter | Alpha Blending
RGB         | -                      | Yes                | Yes         | Yes
HSV         | ~34                    | Yes                | No          | No
CIE Yxy     | ~19                    | Yes                | Yes         | No
L16uv*      | ~19                    | Yes                | Yes         | No
RGBE        | ~13                    | No                 | No          | No

* based on Greg Ward’s LogLuv model
25. Dynamic Local Gamma
• From linear gamma to sRGB:
float3 Color;
// sRGB encode: linear segment near black, power curve above
Color = (Color <= 0.00304)
      ? Color * 12.92
      : 1.055 * pow(Color, 1.0 / 2.4) - 0.055;
26. Dynamic Local Gamma
• It seems everyone is using a global Gamma
setting that is the same for every pixel on
screen
• We propose changing Gamma per-pixel
-> in fact, perceived gamma differs
depending on brightness
-> we just didn’t implement it this way …
27. Dynamic Local Gamma
• The human eye’s visual gamma changes the
perceived luminance for various adaptation
conditions [Bartleson 1967] [Kwon 2011]
• If the eye’s adaptation level is low, the
exponent for Gamma increases
28. Dynamic Local Gamma
Changes in relative brightness contrast as a function of
relative luminance and adaptation luminance levels,
following the results of [Bartleson 1967]
29. Dynamic Local Gamma
• Local Gamma varies with luminance [Kwon
2011]*
γ_v = 0.444 + 0.045 · ln(L_an + 0.6034)
Y_Yxy = L^(γ_v)
30. Dynamic Local Gamma
• γ_v is changed based on the luminance value of
the current pixel
-> that means each pixel’s luminance value might be
gamma-corrected with a different exponent
-> with the equation above, the exponent (== gamma
value) is in the range of 0.421 to 0.465
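The quoted exponent range can be verified with a small sketch of the [Kwon 2011] formula. Treating the pixel’s normalized luminance in 0..1 as the adaptation luminance L_an is the simplifying assumption used here.

```python
import math

def gamma_v(l_an):
    """Per-pixel gamma exponent from [Kwon 2011]:
    gamma_v = 0.444 + 0.045 * ln(L_an + 0.6034)
    """
    return 0.444 + 0.045 * math.log(l_an + 0.6034)

def apply_dynamic_gamma(luminance):
    """Gamma-correct a luminance value with its own local exponent."""
    return luminance ** gamma_v(luminance)
```

For l_an in 0..1 this yields exponents from about 0.421 (dark adaptation) up to about 0.465 (bright adaptation), matching the range on the slide.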
31. Dynamic Local Gamma
Applied Gamma Curve per-pixel based on luminance of pixel
• Eye’s adaptation == low -> blue curve
• Eye’s adaptation value == high -> green curve
32. Dynamic Local Gamma
• L^(γ_v) works with any tone mapping operator
• Example: [Reinhard]
– Artistically desirable to burn out bright areas
– Source art is not always HDR
– Leaves output in 0..1
• Y_Yxy = LumCompress(L)^(γ_v)
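Put together, a minimal sketch of Y_Yxy = LumCompress(L)^γ_v using the extended Reinhard operator as LumCompress; the white-point value is an illustrative assumption.

```python
import math

def reinhard(l, l_white=4.0):
    """Extended Reinhard operator: luminance l_white maps exactly to 1.0,
    so brighter values burn out, which is often artistically desirable."""
    return l * (1.0 + l / (l_white * l_white)) / (1.0 + l)

def tone_map_luminance(l):
    """Compress luminance, then apply the dynamic local gamma exponent."""
    gamma = 0.444 + 0.045 * math.log(l + 0.6034)  # [Kwon 2011]
    return reinhard(l) ** gamma
```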
33. Dynamic Local Gamma
• What is the visual difference?
• Dynamic light & shadow information is considered for
gamma
-> new information introduced in the pipeline is used to
modify the gamma correction
-> changes “naturally” depending on how bright or dark a
scene is
• Light looks better / shadows look better
34. Depth of Field
• When using cameras, this refers to the range of
distances in a scene that appears in focus
• To add artistic effect and/or focus attention
• Physically correct parameters and calculations
needed for a proper looking depth of field
– Tradeoff: parameters are not artist friendly
• Need a fast and efficient way to generate a
high quality Depth of Field effect
35. Depth of Field
• Circle of Confusion (CoC)
– When using a lens to produce an image, the size
of the resulting “spot” produced by a point in the
scene
• The Depth of Field is the region where the CoC is smaller than the
resolution of the human eye (or of the display medium)
36. Depth of Field
• Circle of Confusion (CoC)
– Affected by:
• F-stop - ratio of focal length to aperture size
• Focal length – distance from lens to image in focus
• Focus distance – distance to plane in focus
37. Depth of Field
• Calculating the Circle of Confusion (CoC) [Potmesil1981]:

CoC = (aperture size · focal length · (focus distance − depth)) /
      (depth · (focus distance − focal length))

f-stop = focal length / aperture size

• CoC is negative for the far field, positive for the near field
• Convert CoC from meters to pixel units to find the
effective CoC on screen
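A sketch of the calculation above in Python; all distances are in meters, and the sensor width and image resolution used for the pixel conversion are illustrative assumptions.

```python
def coc_meters(depth, focus_distance, focal_length, aperture):
    """[Potmesil1981] CoC diameter in meters.

    Negative in the far field (depth beyond the focus distance),
    positive in the near field, zero at the focus plane.
    """
    return (aperture * focal_length * (focus_distance - depth)
            / (depth * (focus_distance - focal_length)))

def coc_pixels(coc_m, sensor_width_m=0.036, image_width_px=1920):
    """Convert a CoC diameter from meters to on-screen pixel units."""
    return coc_m * image_width_px / sensor_width_m
```

For example, a 50 mm lens at f/2.8 (aperture = focal length / f-stop) focused at 5 m gives a negative CoC at 10 m and a positive one at 2 m, matching the sign convention on the slide.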
38. Depth of Field
• Basic depth of field effect:
– Calculate CoC for each pixel
– Use CoC to generate separate results for near field
and far field
• Flat, shaped kernel useful for producing Bokeh effect
– Combine the far field and the in-focus field based on CoC
– Combine with the near field based on CoC and
near-field coverage
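The combine steps above can be sketched as two successive blends. This is an illustrative simplification: real implementations blend blurred color buffers per pixel, and the weights here stand in for a normalized |CoC| and a near-field coverage term in 0..1.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by t."""
    return a + (b - a) * t

def combine_dof(focus, far, near, far_weight, near_coverage):
    """Blend the far field over the in-focus field by the far CoC weight,
    then blend the near field on top by its coverage."""
    color = lerp(focus, far, min(max(far_weight, 0.0), 1.0))
    return lerp(color, near, min(max(near_coverage, 0.0), 1.0))
```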
39. Depth of Field
Red = max CoC (near field CoC)
Green = min CoC (far field CoC)
43. References
• [Bartleson 1967] C. J. Bartleson and E. J. Breneman, “Brightness function: Effects of
adaptation,” J. Opt. Soc. Am., vol. 57, pp. 953-957, 1967.
• [Day2012] Mike Day, “An efficient and user-friendly tone mapping operator”,
http://www.insomniacgames.com/mike-day-an-efficient-and-user-friendly-tone-
mapping-operator/
• [Engel2007] Wolfgang Engel, “Post-Processing Pipeline”, GDC 2007
http://www.coretechniques.info/index_2007.html
• [Kwon 2011] Hyuk-Ju Kwon, Sung-Hak Lee, Seok-Min Chae, Kyu-Ik Sohng, “Tone
Mapping Algorithm for Luminance Separated HDR Rendering Based on Visual
Brightness Function”, online at http://world-comp.org/p2012/IPC3874.pdf
• [Reinhard] Erik Reinhard, Michael Stark, Peter Shirley, James Ferwerda,
"Photographic Tone Reproduction for Digital Images",
http://www.cs.utah.edu/~reinhard/cdrom/
Editor’s notes
When the eye is adapted to dark lighting conditions, the curve is rounder
-> darker colors get more precision
If the eye is adapted to bright lighting conditions, the curve is flatter …
-> precision is distributed more evenly
γ_v changes based on the luminance value of the current pixel
-> that means the current luminance value is gamma corrected with a different exponent
-> the gamma correction value is in the range of 0.421 to 0.465
Important Overview Points about DOF
Depth of field effect: what “depth of field” actually refers to when using cameras
Depth of field is useful for adding artistic effect and/or focusing the attention in a scene or during active gameplay
To get a proper, high quality depth of field, you need to use physically correct parameters and calculations, which requires understanding how a camera works. The tradeoff is parameters are not as artist friendly, because you need to understand a camera to adjust depth of field properly.
Need a fast and efficient way to generate a high quality depth of field
Overview of Basic DOF knowledge
Circle of confusion calculation performed for each pixel of the downsampled PostFX buffers
All distances are in meters, including the CoC size
Convert CoC from meters to pixel units to determine size of CoC blur
Calculate circle of confusion for each pixel, reconstructing the depth in meters