A common misconception is that if you work solely in the domain of video you have no need for floating point. But just because your output will ultimately be restricted to the 0.0 to 1.0 range doesn't mean that overbright values above 1.0 won't figure into the images you create.
In Figures 11.12a, b, and c, each of the bright Christmas tree lights is severely clipped when shown in video space, which is not a problem so long as the image is only displayed, not adjusted. Figure 11.12b is the result of following the rules by converting the image to linear before applying a synthetic motion blur: the lights create pleasant streaks, but their brightness has disappeared. In 11.12c the HDR image is blurred in 32 bpc mode, and the lights have a realistic impact on the image as they streak across; even stretched out across the frame, the streaks remain brighter than 1.0. Although this printed page is not high dynamic range, this example shows that HDR floating point pixels are a crucial part of making images that simulate the real world through a camera, no matter what the output medium.
Figures 11.12a, b, and c. An HDR image (a) is blurred without floating point (b) and with floating point (c). (HDR image courtesy Stu Maschwitz.)
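The effect of clipping before versus after a blur can be sketched numerically. The following is a minimal illustration in Python, not anything After Effects does internally: a one-dimensional box filter stands in for motion blur, and a single overbright pixel stands in for one of the Christmas lights.

```python
# Hypothetical sketch: why blurring clipped pixels kills highlights,
# while blurring HDR floating point pixels preserves them.

def box_blur(row, radius):
    """Average each pixel with its neighbors within `radius` (a crude blur)."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# A dim scene with one bright light: 8.0 is far above monitor white (1.0).
scene = [0.05] * 5 + [8.0] + [0.05] * 5

# Video-space workflow: values are clipped to 1.0 *before* the blur,
# so the light's extra energy is discarded and the streak goes dull.
clipped = [min(v, 1.0) for v in scene]
ldr_blur = box_blur(clipped, 2)

# Floating point workflow: blur the HDR values first, clip only for
# display. The streak stays pinned at display white well away from
# the light itself, because the blurred values are still above 1.0.
hdr_blur = [min(v, 1.0) for v in box_blur(scene, 2)]

print(max(ldr_blur))          # peak well below 1.0: the light is lost
print(hdr_blur.count(1.0))    # several pixels still at display white
```

The only difference between the two results is the order of the clip and the blur; that ordering is exactly what working in 32 bpc changes.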
Floating point's benefits aren't restricted to blurs, however; blurs just happen to make the stark difference easy to see. Every operation in a compositing pipeline gains extra realism from the presence of floating point pixels. The simple act of combining one layer with another via a blending mode benefits hugely; this may be the area above all where the old ways start to look like cheap tricks compared with the brave new world of HDR.
Figures 11.13a, b, and c feature an HDR image on which a simple composite is performed, once in video space and once using linear floating point. In the floating point version, the dark translucent layer acts like sunglasses on the bright window, revealing extra detail exactly as a filter on a camera lens would. The soft edges of a motion-blurred object also behave realistically as bright highlights push through. Without floating point there is no extra information to reveal, so the window looks clipped and dull and motion blur doesn't interact with the scene properly.
Figures 11.13a, b, and c. A source image (a) is composited without floating point (b) and with floating point (c). (HDR image courtesy Stu Maschwitz.)
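The sunglasses effect described above can be sketched numerically as well. This is a hypothetical illustration, not After Effects code: a dark translucent layer is multiplied over two window pixels that differ in brightness but both sit above monitor white.

```python
# Hypothetical sketch: a dark layer multiplied over a bright window,
# once on clipped video-space pixels and once on HDR floating point pixels.

def multiply(layer, strength):
    """A crude multiply blend: darken every pixel by `strength`."""
    return [v * strength for v in layer]

def to_display(row):
    """Clip to the displayable 0.0-1.0 range."""
    return [min(max(v, 0.0), 1.0) for v in row]

# Two window pixels: both overbright, but one is 1.5x brighter.
window_hdr = [2.0, 3.0]

# Video space: the pixels were clipped on the way in, so they are
# identical; darkening them reveals nothing new.
window_ldr = to_display(window_hdr)                 # both become 1.0
ldr_result = to_display(multiply(window_ldr, 0.3))  # flat, detail gone

# Floating point: the dark layer acts like sunglasses, pulling the
# over-range detail back into visible range -- the contrast survives.
hdr_result = to_display(multiply(window_hdr, 0.3))

print(ldr_result)
print(hdr_result)
```

In the clipped version the two pixels come out identical; in the floating point version their 1.5x brightness relationship reappears once the multiply pulls them below 1.0, just as a physical filter on a lens would reveal it.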
And now, a two-part news flash for anyone wondering what this has to do with your own work:
1. After Effects can work with high dynamic range color: 32 bpc floating point pixels that preserve values above 1.0.
2. After Effects can also blend layers in linear light, compositing with radiometrically linear rather than gamma-encoded color.
Just to be perfectly clear, you can have number 1 without bothering with number 2 in After Effects. It's a nifty way to pull off some cool lighting effects, such as the look of the lightsaber in Chapter 14, "Pyrotechnics: Fire, Explosions, Energy Phenomena," simply by creating some threshold values that, when boosted above monitor range, exhibit the glowing effects of high-intensity light.
Even more surprising, After Effects allows number 2 without number 1 as a prerequisite; linear blending is maintained even when a project is switched to 8 bpc or 16 bpc mode, unless you choose specifically to disable it.
But compositing naturally in an HDR world means working with radiometrically linear, or scene-referred, color data. For the purposes of this discussion, this mode is perhaps best called "linear light compositing," or "linear floating point," or simply "linear." The alternative mode to which you are accustomed is "gamma-encoded," or "monitor color space," or simply "video."
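The difference between the two modes comes down to where the math happens. Here is a rough sketch, using a simple 2.2 power function as a stand-in for an actual monitor transfer curve, of a 50% blend of black and white computed both ways:

```python
# Hypothetical sketch: blending in gamma-encoded "video" space versus
# radiometrically linear space. A plain 2.2 gamma is assumed here; real
# monitor curves (e.g. sRGB) differ slightly.

GAMMA = 2.2

def to_linear(v):
    """Decode a gamma-encoded value to linear light."""
    return v ** GAMMA

def to_video(v):
    """Re-encode a linear value for display."""
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Video-space blend: average the encoded values directly.
video_blend = (black + white) / 2

# Linear blend: decode, average the actual light, re-encode for display.
linear_blend = to_video((to_linear(black) + to_linear(white)) / 2)

print(video_blend)               # 0.5
print(round(linear_blend, 3))    # noticeably higher than 0.5
```

The linear result encodes to roughly 0.73 and so displays brighter than the video-space result; it corresponds to physically averaging two light sources, which is why blending modes in linear mode behave more like real light.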
Bits per Channel