Digital images are made up of numbers. The fundamental particle of a digital image is the pixel: the number of pixels you capture determines the image's size and aspect ratio. It's tempting to use the term resolution, but doing so often confuses matters more than it clarifies them. Why?
Pixels and Resolution
Strictly speaking, a digital image in its pure Platonic form doesn't have resolution; it simply has pixel dimensions. It only attains the attribute of resolution when we realize it in some physical form, displaying it on a monitor or making a print. But resolution isn't a fixed attribute.
If we take as an example a typical six-megapixel image, it has the invariant property of pixel dimensions, specifically, 3,072 pixels on the long side of the image, 2,048 pixels on the short one. But we can display and print those pixels at many different sizes. Normally, we want to keep the pixels small enough that they don't become visually obvious, so the pixel dimensions essentially dictate how large a print we can make from the image. As we make larger and larger prints, the pixels become more and more visually obvious until we reach a size at which it just isn't rewarding to print.
Just as it's possible to make a 40-by-60-inch print from a 35mm color neg, it's possible to make a 40-by-60-inch print from a six-megapixel image, but neither of them is likely to look very good. With the 35mm film, you end up with grain the size of golf balls, and with the digital capture, each pixel winds up being just under 1/50th of an inch square, big enough to be obvious.
Different printing processes have different resolution requirements, but in general, you need not less than 100 pixels per inch, and rarely more than 360 pixels per inch, to make a decent print. So the effective size range of our six-megapixel capture is roughly from 20 by 30 inches downward, and 20 by 30 is really pushing the limits. The basic lesson is that you can print the same collection of pixels at many different sizes, and as you do so, the resolution (the number of pixels per inch) changes, but the number of pixels does not. At 100 pixels per inch, our 3072-by-2048-pixel image will yield a 30.72-by-20.48-inch print. At 300 pixels per inch, the same image will make a 10.24-by-6.83-inch print. So resolution is a fungible quality: you can spread the same pixels over a smaller or larger area.
To find out how big an image you can produce at a specific resolution, divide the pixel dimensions by the resolution. Using pixels per inch (ppi) as the resolution unit and inches as the size unit, if you divide 3,072 (the long pixel dimension) by 300, you obtain 10.24 inches for the long dimension; if you divide 2,048 (the short pixel dimension) by the same quantity, you get 6.826 inches for the short dimension. At 240 ppi, you get 12.8 by 8.53 inches. Conversely, to determine the resolution you have available to print at a given size, divide the pixel dimensions by the size, in inches. The result is the resolution in pixels per inch. For example, if you want to make a 10-by-15-inch print from your six-megapixel, 3,072-by-2,048-pixel image, divide the long pixel dimension by the long dimension in inches, or the short pixel dimension by the short dimension in inches. In either case, you'll get the same answer, 204.8 pixels per inch.
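The arithmetic above is simple enough to sketch in a few lines of Python (the function names here are just for illustration):

```python
# Size in inches = pixel dimension / resolution (ppi);
# resolution in ppi = pixel dimension / size in inches.

def print_size_inches(pixels, ppi):
    """Print dimension in inches when `pixels` are spread at `ppi` pixels per inch."""
    return pixels / ppi

def resolution_ppi(pixels, inches):
    """Resolution available when `pixels` are spread over `inches` of paper."""
    return pixels / inches

long_px, short_px = 3072, 2048  # the six-megapixel capture from the text

print(print_size_inches(long_px, 300))   # 10.24 inches
print(print_size_inches(short_px, 240))  # about 8.53 inches
print(resolution_ppi(long_px, 15))       # 204.8 ppi for a 10-by-15-inch print
```

Note that the two functions are the same division viewed from opposite directions, which is why the resolution and the print size trade off against each other while the pixel count stays fixed.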
Figure 2-1 shows the same pixels printed at 50 pixels per inch, 150 pixels per inch, and 300 pixels per inch.
Figure 2-1. Image size and resolution
But each individual pixel is defined by a set of numbers, and these numbers also impose limitations on what you can do with the image, albeit more subtle limitations than those dictated by the pixel dimensions.
Bit Depth, Dynamic Range, and Color
We use numbers to represent a pixel's tonal value (how light or dark it is) and its color: red, green, blue, yellow, or any of the myriad gradations of the various rainbow hues we can see.
In a grayscale image, each pixel is represented by some number of bits. Photoshop's 8-bit/channel mode uses 8 bits to represent each pixel, and its 16-bit/channel mode uses 16 bits to represent each pixel. An 8-bit pixel can have any one of 256 possible tonal values, from 0 (black) to 255 (white), or any of the 254 intermediate shades of gray. A 16-bit pixel can have any one of 32,769 possible tonal values, from 0 (black) to 32,768 (white), or any of the 32,767 intermediate shades of gray. If you're wondering why 16 bits in Photoshop gives you 32,769 shades instead of 65,536, see the sidebar "High-Bit Photoshop," later in this chapter (if you don't care, skip it).
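The level counts above follow directly from the bit depth: n bits yield 2**n possible values. A quick sketch (Photoshop's 0-to-32,768 range for 16-bit, described in the sidebar, is the one exception to the power-of-two rule):

```python
def tonal_values(bits):
    """Number of distinct tonal values an n-bit pixel can hold: 2**bits."""
    return 2 ** bits

print(tonal_values(8))   # 256 values, from 0 (black) to 255 (white)
print(tonal_values(16))  # 65,536 in principle; Photoshop actually uses
                         # the range 0..32768, i.e. 32,769 values
```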
So while pixel dimensions (the number of pixels) describe the two-dimensional height and width of the image, the bits that describe each pixel produce a third dimension that describes how light or dark each pixel is, hence the term bit depth.
Some vendors try to equate bit depth with dynamic range. This is largely a marketing ploy, because while there is a relationship between bit depth and dynamic range, it's an indirect one.
Dynamic range in digital cameras is an analog limitation of the sensor. The brightest shade the camera can capture is limited by the point at which the current generated by a sensor element starts spilling over to its neighbors (a condition often called "blooming") and produces a featureless white blob. The darkest shade a camera can capture is determined by the more subjective point at which the noise inherent in the system overwhelms the very weak signal generated by the small number of photons that hit the sensor; the subjectivity lies in the fact that some people can tolerate a noisier signal than others.
One way to think of the difference between bit depth and dynamic range is to imagine a staircase. The dynamic range is the height of the staircase. The bit depth is the number of steps in the staircase. If we want our staircase to be reasonably easy to climb, or if we want to preserve the illusion of a continuous gradation of tone in our images, we need more steps in a taller staircase than we do in a shorter one, and we need more bits to describe a wider dynamic range than a narrower one. But more bits, or a larger number of smaller steps, doesn't increase the dynamic range, or the height of the staircase.
RGB color images are composed of three 8-bit or 16-bit grayscale images, or channels: one representing shades of red, the second representing shades of green, and the third representing shades of blue. Red, green, and blue are the primary colors of light, and combining them in different proportions allows us to create any color we can see. So an 8-bit/channel RGB image can contain any of 16.7 million unique color definitions (256 x 256 x 256), while a 16-bit/channel image can contain any of some 35 trillion unique color definitions.
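Those color counts are just the per-channel level count cubed; a quick check in Python, using 32,769 as Photoshop's 16-bit level count from earlier in the chapter:

```python
levels_8bit = 256     # 8 bits per channel: values 0..255
levels_16bit = 32769  # Photoshop's 16-bit encoding: values 0..32768

rgb_8 = levels_8bit ** 3    # three channels, so levels cubed
rgb_16 = levels_16bit ** 3

print(f"{rgb_8:,}")   # 16,777,216 -- roughly 16.7 million
print(f"{rgb_16:,}")  # about 3.5e13 -- "some 35 trillion"
```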
Either of these may sound like a heck of a lot of colors, and indeed, they are. Estimates of how many unique colors the human eye can distinguish vary widely, but even the most liberal estimates are well shy of 16.7 million and nowhere close to 35 trillion. Why then do we need all this data?
We need it for two quite unrelated reasons. The first one, which isn't particularly significant for the purposes of this book, is that 8-bit/channel RGB contains 16.7 million color definitions, not 16.7 million perceivable colors. Many of the color definitions are redundant: Even on the very best display, you'd be hard pressed to see the difference between RGB values of 0,0,0 and 0,0,1 or 0,1,0 or 1,0,0, or for that matter between 255,255,255 and 254,255,255 or 255,254,255 or 255,255,254. Depending on the specific flavor of RGB you choose, you'll find similar redundancies in different parts of the available range of tone and color.
The second reason, which is extremely significant for the purposes of this book, is that we need to edit our images (particularly our digital raw images, for reasons that will become apparent later), and every edit we make has the effect of reducing the number of unique colors and tone levels in the image. A good understanding of the impact of different types of edits is the best basis for deciding where and how you apply edits to your images.
Gamma and Tone Mapping
To understand the key difference between shooting film and shooting digital, you need to get your head around the concept of gamma encoding. As I explained in Chapter 1, digital cameras respond to photons quite differently from either film or our eyes. The sensors in digital cameras simply count photons and assign a tonal value in direct proportion to the number of photons detected; they respond linearly to incoming light.
Human eyeballs, however, do not respond linearly to light. Our eyes are much more sensitive to small differences in brightness at low levels than at high ones. Film has traditionally been designed to respond to light approximately the way our eyes do, but digital sensors simply don't work that way.
Gamma encoding is a method of relating the numbers in the image to the perceived brightness they represent. The sensitivity of the camera sensor is described by a gamma of 1.0: it has a linear response to the incoming photons. But this means that the captured values don't correspond to the way humans see light. The relationship between the number of photons that hit our retinas and the sensation of lightness we experience in response is approximated by a gamma of somewhere between 2.0 and 3.0, depending on viewing conditions. Figure 2-2 shows the approximate difference between what the camera sees and what we see.
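To make the idea concrete, here is a minimal sketch of a gamma-encoding curve in Python. The exponent 2.2 is a common convention chosen here only for illustration; as noted above, human response corresponds to a gamma of roughly 2.0 to 3.0 depending on viewing conditions.

```python
def gamma_encode(linear, gamma=2.2):
    """Map a linear sensor value in 0..1 to a perceptually more uniform value."""
    return linear ** (1.0 / gamma)

# A surface reflecting a quarter of the light is not perceived as a quarter
# as bright; gamma encoding places it well above the numeric midpoint:
print(gamma_encode(0.25))  # about 0.53
print(gamma_encode(0.25, gamma=1.0))  # 0.25: gamma 1.0 leaves values linear
```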
Figure 2-2. Digital capture and human response
How the camera sees light
How the human eye sees light
I promised that I'd keep this chapter equation-free (if you want more information about the equations that define gamma encoding, a Google search on "gamma encoding" will likely turn up more than you ever wanted to know), so I'll simply cut to the chase and point out the practical implications of the linear nature of digital capture.
Digital captures devote a large number of bits to describing differences in highlight intensity to which our eyes are relatively insensitive, and a relatively small number of bits to describing differences in shadow intensity to which our eyes are very sensitive. As you're about to learn, all our image-editing operations have the unfortunate side effect of reducing the number of bits in the image. This is true for all digital images, whether scanned from film, rendered synthetically, or captured with a digital camera, but it has specific implications for digital capture.
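A hypothetical 12-bit linear capture makes this concrete: each one-stop drop in brightness halves the light, and so halves the number of levels left to describe it. (The 12-bit figure is an assumption for illustration; the principle holds for any linear capture.)

```python
levels = 2 ** 12  # 4096 levels in a hypothetical 12-bit linear capture

# Each f-stop down represents half the light, so it gets half of the
# levels remaining below it.
remaining = levels
for stop in range(1, 7):
    in_this_stop = remaining // 2
    print(f"stop {stop}: {in_this_stop} levels")
    remaining //= 2
# The brightest stop gets 2048 levels; six stops down, only 64 remain
# to describe an entire stop of shadow detail.
```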
With digital captures, darkening is a much safer operation than lightening, since darkening forces more bits into the shadows, where our eyes are sensitive, while lightening takes the relatively small number of captured bits that describe the shadow information and spreads them across a wider tonal range, exaggerating noise and increasing the likelihood of posterization. With digital, you need to turn the old rule upside down: you need to expose for the highlights, and develop for the shadows!