How a Digital Camera Sees


For the most part, your digital camera works just like any film camera dating back to the origins of photography: a lens focuses light through an aperture and a shutter onto a focal plane. The only thing that makes a camera a digital camera is that the focal plane holds a digital image sensor rather than a piece of film.

Many factors affect your final image, from the quality of your lens to your (or your light meter's) choice of shutter speed and aperture. One downside of digital photography is that the imaging technology itself is a fixed, unchangeable part of your camera: you can't swap out the image sensor if you don't like the quality of its output.

Like a piece of film, an image sensor is light sensitive, be it a CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) chip. The surface of the chip is divided into a grid of photosites, one for each pixel in the final image. Each photosite has a diode that is sensitive to light (a photodiode). The photodiode produces an electric charge proportional to the amount of light it receives during an exposure. To create an image, the voltage of each photosite is measured and converted to a digital value, thus producing a digital representation of the pattern of light falling on the sensor.

This process of sampling, or measuring the varying amounts of light that were focused onto the surface of the sensor, yields a big batch of numbers, which, in turn, can be processed into a final image.
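To make this concrete, here's a minimal sketch (in Python) of that analog-to-digital step. The full-well charge and the 8-bit depth are illustrative assumptions, not real camera internals; actual sensors use deeper bit depths and far more involved readout circuitry.

    # A toy model of a sensor's analog-to-digital conversion. The charge
    # values and 8-bit depth are illustrative assumptions only.

    def quantize(charge, full_well=1.0, bit_depth=8):
        """Map an analog photosite charge (0..full_well) to a digital value."""
        levels = 2 ** bit_depth - 1
        charge = max(0.0, min(charge, full_well))  # clip at saturation
        return round(charge / full_well * levels)

    # Varying amounts of light across four photosites become plain numbers:
    charges = [0.02, 0.37, 0.74, 1.0]
    print([quantize(c) for c in charges])  # [5, 94, 189, 255]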

As you may have already noted, knowing how much light struck a particular photosite doesn't tell you anything about the color of the resulting pixel. Rather, all you have is a record of the varying brightness, or luminance, values that fell across the image sensor. This is fine if you're interested in black-and-white photography. If you want to shoot full color, though, things get a little more complex.

Mixing a batch of color

You've probably had some experience with the process of mixing colors. Whether from getting paint mixed at the hardware store or learning about the color wheel in grade school, you probably already know that you can mix together a few primary colors of ink to create every other color. Light works the same way.

However, whereas the primary colors of ink are cyan, magenta, and yellow, the primary colors of light are red, green, and blue. What's more, ink pigments mix together in a subtractive process. As you mix more, your colors get darker until they turn black. Light mixes together in an additive process; as you mix more light, it gets brighter until it ultimately turns white.
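A few lines of Python make the additive behavior easy to see; the 8-bit channel values here are just a convenient convention.

    # Additive mixing: combining full-strength red, green, and blue
    # light yields white. Channels are 8-bit values (0-255).

    def add_light(*colors):
        """Additively mix RGB triples, clipping each channel at 255."""
        return tuple(min(sum(ch), 255) for ch in zip(*colors))

    red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
    print(add_light(red, green))         # (255, 255, 0): yellow
    print(add_light(red, green, blue))   # (255, 255, 255): white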

In 1861, James Clerk Maxwell and Thomas Sutton performed an experiment to test a theory about a way of creating color photographs. They shot three black-and-white photographs of a tartan ribbon, fitting the camera with a different colored filter for each shot: one red, one green, and one blue. Later, they projected the images using three separate projectors, each fitted with the appropriate red, green, or blue filter. When the projected images were superimposed over each other, they yielded a full-color picture.

Some high-end digital cameras use this same technique to create color images. Packing three separate image sensors, each fitted with a red, green, or blue filter, these cameras record three separate images and then combine them to create a full-color image. The problem with this multisensor approach is that image sensors are expensive and large, and by the time you add in the requisite storage and processing power, you have a huge camera that's prohibitively expensive. Some of these cameras are also very slow to create an image, making them impractical for shooting live subjects.

Most digital cameras use a single image sensor and get around the color problem by using a lot of math and some clever interpolation schemes to calculate the correct color of each pixel.

In these types of cameras, each photosite is covered with a colored filter: red, green, or blue. Red and green filters alternate in one row, and blue and green alternate in the following row (Figure 2.1). There are twice as many green photosites because your eye is much more sensitive to green than to any other color.

Figure 2.1. In a typical digital camera, each pixel on the image sensor is covered with a colored filter: red, green, or blue. Though your camera doesn't capture full color for each pixel, it can interpolate the correct color of any pixel by analyzing the color of the surrounding pixels.


This configuration of filters is called the Bayer pattern after Dr. Bryce Bayer, the Kodak scientist who thought it up in the early 1970s. Obviously, this scheme still doesn't achieve a color image. For that, the filtered pixel data must be run through an interpolation algorithm, which calculates the correct color for each pixel by analyzing the color of its filtered neighbors.
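For the curious, here's a small Python sketch of the layout just described. The row and column conventions are one common arrangement of the pattern; some cameras start the grid differently.

    # Generate the Bayer pattern: rows alternate between red/green and
    # green/blue pairs, so half of all photosites end up green.

    def bayer_filter(row, col):
        """Return the filter color ('R', 'G', or 'B') at a photosite."""
        if row % 2 == 0:                      # even rows: R G R G ...
            return 'R' if col % 2 == 0 else 'G'
        return 'G' if col % 2 == 0 else 'B'   # odd rows:  G B G B ...

    for r in range(4):
        print(' '.join(bayer_filter(r, c) for c in range(6)))
    # R G R G R G
    # G B G B G B
    # R G R G R G
    # G B G B G B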

For example, let's say that you want to determine the color of a particular pixel that has a green filter and a value of 100%. If you look at the surrounding pixels, with their mix of red, blue, and green filters, and find that their values are all also 100%, then it's a pretty safe bet that the correct color of the pixel in question is white, since 100% of red, green, and blue yields white (Figure 2.2).

Figure 2.2. To calculate the true color of the 100% green pixel in the middle of this grid, you examine the surrounding pixels. Because they're all 100%, there's a good chance that the target pixel is pure white, since 100% of red, green, and blue combines to make white.
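In code, the simplest possible version of this neighbor analysis looks something like the Python sketch below. It just averages like-filtered neighbors; real demosaicing algorithms are far more sophisticated (and, as noted later, proprietary).

    # A deliberately naive demosaicing step in the spirit of Figure 2.2:
    # estimate a green-filtered pixel's missing red and blue values by
    # averaging its like-filtered neighbors. Values are fractions of
    # full brightness.

    def estimate_color(green_value, neighbors):
        """neighbors: list of (filter_color, value) pairs around the pixel."""
        def average(color):
            values = [v for c, v in neighbors if c == color]
            return sum(values) / len(values)
        return (average('R'), green_value, average('B'))

    # Every surrounding photosite reads 100%, so the pixel comes out white:
    ring = [('G', 1.0), ('B', 1.0), ('G', 1.0),
            ('R', 1.0),             ('R', 1.0),
            ('G', 1.0), ('B', 1.0), ('G', 1.0)]
    print(estimate_color(1.0, ring))  # (1.0, 1.0, 1.0) -> pure white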


The pixel may be some other color: a single dot of color in a field of white. However, pixels are extremely tiny (in the case of a typical consumer digital camera, 26 million pixels could fit on a dime), so the odds are small that the pixel is a color other than white. Nevertheless, there can be sudden changes of color in an image, as is the case, for example, in an object with a sharply defined edge. To help average out the colors from pixel to pixel, and so improve the chances of an accurate calculation, digital cameras contain a special filter that blurs the image slightly, smearing the color across neighboring photosites. While blurring an image may seem antithetical to good photography, the amount of blur introduced is small enough that it can be corrected for later in software.

Although calculating white is easy enough to understand, interpolating a subtle range of full color is obviously far more complicated. This process of interpolation is called demosaicing, a cumbersome word derived from the idea of breaking down the chip's mosaic of RGB-filtered pixels into a full-color image. There are many different demosaicing algorithms, and because a camera's ability to demosaic accurately has a tremendous bearing on the overall color quality and accuracy of its images, these algorithms are closely guarded trade secrets.

The Bayer pattern is an example of a color filter array, or CFA. Not all cameras use an array of red, green, and blue filters. For example, some cameras use cyan, yellow, green, and magenta arrays. In the end, a vendor's choice of CFA doesn't really matter as long as the camera yields color that you like.

No matter what type of color filter array the camera uses, a final image from a digital camera consists of three separate color channels, one each for red, green, and blue information. Just as in Maxwell and Sutton's experiment, when these three channels are combined, you get a full-color image (Figure 2.3).

Figure 2.3. Your camera creates a full-color image by combining three separate red, green, and blue channels. If you're confused by the fact that an individual channel appears in grayscale, remember that each channel contains only one color component, so the red channel contains just the red information for the image; brighter tones represent more red, darker tones, less.
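Here's a tiny Python sketch of that final step: three grayscale channels zipped together into RGB pixels. The 2-by-2 "images" are, of course, stand-ins for real channel data.

    # Combine three grayscale channels into one RGB image, much as
    # Maxwell and Sutton superimposed three filtered projections.

    def merge_channels(red, green, blue):
        """Zip per-channel brightness rows into rows of (R, G, B) pixels."""
        return [list(zip(r_row, g_row, b_row))
                for r_row, g_row, b_row in zip(red, green, blue)]

    R = [[255, 0], [128, 0]]
    G = [[255, 0], [128, 255]]
    B = [[255, 0], [128, 0]]
    print(merge_channels(R, G, B))
    # [[(255, 255, 255), (0, 0, 0)], [(128, 128, 128), (0, 255, 0)]]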


FOVEON X3: LOTS OF PIXELS, NO INTERPOLATION

You may have heard of a type of image sensor made by a company called Foveon. Foveon's sensors are unique in that they don't require any demosaicing. Rather than using an array of single photosites covered with filters, the Foveon sensor exploits the fact that silicon absorbs different wavelengths of light at different depths. Blue is absorbed near the surface, green farther down, and red even farther. The Foveon sensor can take readings at different depths, allowing it to measure all three color signals at the same location.

This lack of interpolation means that the Foveon sensor is not susceptible to interpolation errors or certain types of artifacts that can occur in a "normal" digital image sensor. On the downside, the three absorption layers have some overlap, which means that a photon with a color somewhere between green and red can be absorbed by either layer. Consequently, getting a clear measurement of the three separate RGB channels can be tricky.
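To see why the overlap matters, consider this Python sketch. The absorption coefficients are invented purely for illustration and bear no relation to the real Foveon response curves.

    # Invented absorption matrix: each row shows how much of the true
    # red, green, and blue light a given layer soaks up. Note the overlap.
    crosstalk = [[0.8, 0.2, 0.0],   # "red" layer also catches some green
                 [0.2, 0.7, 0.1],   # "green" layer overlaps both neighbors
                 [0.0, 0.2, 0.8]]   # "blue" layer also catches some green

    def layer_readings(true_rgb):
        """Simulate the three depth readings for incoming (R, G, B) light."""
        return [sum(c * t for c, t in zip(row, true_rgb)) for row in crosstalk]

    # Pure green light registers in all three layers, so recovering clean
    # RGB values means mathematically unmixing the readings:
    print(layer_readings((0.0, 1.0, 0.0)))  # [0.2, 0.7, 0.2]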

While the Foveon chip can produce excellent images, Foveon-equipped cameras don't yet show a clear advantage over cameras equipped with the more prevalent CCD and CMOS sensors.




