How Your Camera Makes a JPEG Image


When you shoot with your camera in JPEG mode (or in TIFF mode, if the camera provides such a feature), a lot of things happen after the sensor makes its capture. First, the image data is amplified. The signals generated by your camera's image sensor are actually quite weak and need to be boosted to a level that's easier to work with. When you increase the ISO setting on your camera, you're doing nothing more than increasing the amount of amplification that is applied to the original data. Unfortunately, just as turning up the volume on your stereo produces more noise and hiss, turning up the amount of amplification in your camera increases the amount of noise in your image (Figure 2.4).

Figure 2.4. As you increase the ISO setting of your camera (often done when shooting in low light), you increase the amount of noise in your image.
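If you're curious what that amplification looks like in numbers, here's a rough Python sketch. The signal level, noise level, and gain values are invented for illustration; this isn't how any camera's firmware actually works.

```python
import numpy as np

rng = np.random.default_rng(0)

# An invented, uniform mid-gray signal plus a little random sensor noise.
signal = np.full(10_000, 50.0)
raw = signal + rng.normal(0.0, 2.0, size=signal.shape)

# Higher ISO = more gain, and the noise gets boosted right along with the signal.
for gain in (1, 4, 16):
    amplified = raw * gain
    print(f"gain {gain:2d}x -> mean {amplified.mean():7.1f}, noise (std dev) {amplified.std():5.1f}")
```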


Once the signal is amplified, the data is passed to your camera's on-board computer, where a number of important image processing steps occur.

Demosaicing

First, the raw image data is passed through the camera's demosaicing algorithms to determine the actual color of each pixel. Demosaicing is a very complex process, and while the computing power in your digital camera would make a desktop computer user of 15 years ago green with envy, camera manufacturers sometimes have to take shortcuts with their demosaicing algorithms to keep the camera's performance up.
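To get a feel for what demosaicing involves, here is a deliberately crude Python sketch that fills in the missing colors of an RGGB Bayer mosaic by simple bilinear interpolation. Real cameras use far more sophisticated (and usually proprietary) algorithms; the kernels and layout below are just one common textbook approach.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """mosaic: 2-D float array holding an RGGB Bayer pattern."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0   # red photosites
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0   # blue photosites
    g_mask = 1.0 - r_mask - b_mask                        # green photosites

    def fill(mask, kernel):
        # Average whatever same-colored neighbors fall under the kernel.
        values = convolve(mosaic * mask, kernel, mode="mirror")
        weights = convolve(mask, kernel, mode="mirror")
        return values / weights

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float)
    return np.stack([fill(r_mask, k_rb), fill(g_mask, k_g), fill(b_mask, k_rb)], axis=-1)
```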

Colorimetric interpretation

Though we noted that each photosite has a red, green, or blue filter over it, we never defined what "red," "green," and "blue" are. Your camera is programmed with colorimetric information about the exact color of these filters and so can adjust the overall color information of the image to compensate for the fact that its red filters, for example, may actually have a bit of yellow in them.
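In practice, this compensation often amounts to multiplying each pixel's values by a small matrix that describes the camera's particular filters. The numbers below are invented for illustration; every camera model ships with its own measured values.

```python
import numpy as np

# A made-up 3 x 3 matrix mapping one hypothetical camera's native RGB
# responses to a standard RGB space. Each row sums to 1 so whites stay white.
CAMERA_TO_RGB = np.array([
    [ 1.65, -0.50, -0.15],
    [-0.20,  1.45, -0.25],
    [ 0.05, -0.55,  1.50],
])

def correct_color(pixels):
    """pixels: array of shape (..., 3) holding camera-native RGB values."""
    return pixels @ CAMERA_TO_RGB.T
```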

Color space

After all this computation, your camera's computer will have a set of color values for each pixel in your image. For example, a particular pixel may have measured as 100% red, 0% green, and 0% blue, meaning it should be a bright red pixel. But what does 100% red mean? 100% of what?

Your eye can see a tremendous range of colors, and your computer may be able to mathematically represent an even larger range of colors, so 100% red to your computer may be very different from 100% red to your eye. Similarly, your camera has a particular range, or gamut, of colors that it is capable of capturing and recording.

To ensure that different devices and programs understand what "100% red" means, your image is mapped to a color space. Color spaces are simply specifications that define exactly what color a particular color value, such as 100% red, corresponds to.

Most cameras provide a choice of a couple of color spaces, usually sRGB and Adobe RGB. Your choice of color space affects the appearance of your final image because the same color values map to different actual colors in different spaces. For example, 100% green in the Adobe RGB color space is defined as a much more saturated color than 100% green in the sRGB space.

All of these color spaces are smaller than the full range of colors that your eye can see, and some of them may be smaller than the full range of colors that your camera can capture. If you tell your camera to use sRGB, but your camera is capable of capturing, say, more saturated blues than the sRGB color space provides for, then the blue tones in your image will be squeezed down to fit into the sRGB space. Your image may still look fine, but if you had shot in a larger color space, there's a chance that the image could have looked much better.

After demosaicing your image, the camera converts the resulting color values to the color space you've selected. (Most cameras default to sRGB.) Note that you can change this color space later using your image editor.
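The code below illustrates the idea using approximate, rounded conversion matrices for the two spaces (both relative to a D65 white point). The exact values aren't the point; what matters is that the same numbers land on different real-world colors.

```python
import numpy as np

# Approximate matrices converting linear RGB values to device-independent CIE XYZ.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
ADOBE_RGB_TO_XYZ = np.array([
    [0.5767, 0.1856, 0.1882],
    [0.2973, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9913],
])

green = np.array([0.0, 1.0, 0.0])   # "100% green" as a linear RGB triple

print("sRGB green      ->", SRGB_TO_XYZ @ green)
print("Adobe RGB green ->", ADOBE_RGB_TO_XYZ @ green)
# Same numeric value, two different actual colors -- which is exactly why an
# image has to be tagged with the color space it was encoded in.
```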

White balance

As technologically impressive as it may be, your digital camera still can't do one simple thing that your eye does without your even noticing. If you look at a colored object indoors, under artificial lighting, and then take the same object outdoors and look at it under sunlight, it will appear to be the same color in both places. Unfortunately, your digital camera can't manage such a feat so easily. (And before your film-shooting friends get too smug, remember that film has the same trouble: you have to buy different types of film for indoor and for outdoor shooting.)

Different types of light have different color characteristics, measured as color temperatures on the Kelvin (K) scale. To properly interpret the color in your image, your digital camera needs to know what type of light is illuminating your scene.

White balancing is the process of calibrating your camera to match the current lighting situation. Because white contains all colors, calibrating your camera to represent white accurately automatically calibrates it for every other color in your scene.
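Under the hood, that calibration usually boils down to scaling the red, green, and blue channels by different amounts so that a known neutral reference comes out neutral. Here's a bare-bones Python sketch of the idea; the card reading is invented, and real cameras fold this into their color-matrix processing.

```python
import numpy as np

def white_balance(image, neutral_rgb):
    """Scale each channel so a known neutral reference comes out gray.

    image:       float array of shape (H, W, 3), values 0.0-1.0
    neutral_rgb: the RGB the camera recorded for something that should be
                 neutral white or gray
    """
    neutral = np.asarray(neutral_rgb, dtype=float)
    gains = neutral.max() / neutral        # boost the channels the light starved
    return np.clip(image * gains, 0.0, 1.0)

# Under tungsten light, a white card might come out warm and orange:
card_reading = [0.90, 0.70, 0.45]
photo = np.random.default_rng(1).random((4, 4, 3))   # stand-in image data
print(white_balance(photo, card_reading)[0, 0])
```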

Your camera will likely provide many different white balance settings, from an auto white balance mode that tries to guess the proper white balance, to preset modes that let you specify the type of light you're shooting in, to manual modes that let you create custom white balance settings. These settings don't have any impact on the way the camera shoots or captures data. Instead, they affect how the camera processes the data after it's been demosaiced and mapped to a color space (Figure 2.5).

Figure 2.5. The same image shot with three different white balance settings: daylight, cloudy, and tungsten. As you can see, if you tell your camera that you're shooting under a different type of light, it interprets the colors it "sees" very differently.


If you're shooting in auto white balance mode, the camera employs special algorithms to identify what the correct white balance setting should be. Though most auto white balance mechanisms these days are very sophisticated, even the best ones can still be confused by mixed lighting situations: shooting in a tungsten-lit room with sunlight streaming through the windows, shooting into a building from outside, and so on. Fortunately, as you'll see, when shooting raw, these situations are no longer a concern, since raw processors allow you to adjust the white balance of an image after the fact.

If you're using a preset white balance or manual mode, the camera adjusts the color balance of the image according to those settings.

Gamma, contrast, and color adjustments

Say you expose a digital camera sensor to a light and it registers a brightness value of 50. If you expose the same camera to twice as much light, it will register a brightness value of 100. Although this makes perfect sense, it is, unfortunately, not the way your eye works. Doubling the amount of light that hits your eye does not result in a doubling of perceived brightness, because your eye does not have a linear response to light.

Our eyes are much more sensitive to changes in illumination at very low levels than they are to changes at higher levels. Because we need to be able to spot a wild animal creeping up on us in the dark, or find the light switch in a dark bathroom in the middle of the night, our eyes have evolved to be more sensitive to very subtle changes in low-light intensity. Because of this, we don't perceive changes in light in the same linear way that a digital camera does.

To compensate for this, a camera performs gamma correction, applying a mathematical curve to the image data to make its brightness levels match what the eye would see. Gamma correcting an image redistributes its tones so that there's more contrast at the extreme ends of the tonal spectrum. A gamma-corrected image has more subtle changes in its darkest and lightest tones than does an uncorrected, linear image (Figure 2.6). You'll learn more about gamma correction in Chapter 6.

Figure 2.6. The upper image shows what your camera sees when looking at a black-to-white gray ramp. Each value is precisely double the previous value. The lower image shows how your eye perceives the same ramp. Note that your eye registers many more fine gradations of dark and bright tones.
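In its simplest form, gamma correction is just a power curve applied to the linear values, something like the sketch below. (The sRGB standard and real cameras use slightly more elaborate curves, so treat this as an illustration of the shape, not the exact math.)

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """linear: float array of sensor values scaled to the range 0.0-1.0."""
    return np.power(np.clip(linear, 0.0, 1.0), 1.0 / gamma)

ramp = np.linspace(0.0, 1.0, 6)
print(np.round(ramp, 2))                # evenly spaced linear values
print(np.round(gamma_encode(ramp), 2))  # the dark end gets stretched apart
```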


Next, the camera adjusts the image's contrast and color. These alterations are usually fairly straightforward: an increase in contrast, perhaps a boost to the saturation of the image. Most cameras let you adjust these parameters using the built-in menu system, and most cameras provide some variation on the options shown in Figure 2.7.

Figure 2.7. On most cameras, the contrast, saturation, sharpening, and other in-camera image processing can be controlled through a simple menu.


These options usually don't provide a fine degree of control, but they can improve the overall quality of an image if the camera's default settings are not to your taste.

Noise reduction and sharpening

Many cameras employ some form of noise reduction. In addition, some cameras automatically switch on an additional, more aggressive noise reduction process when you shoot using long exposure times.

All cameras also perform some sharpening of their images. How much varies from camera to camera, and the sharpening amount can usually be adjusted by the user. This sharpening is partly to compensate for the blurring that is applied to even out color variations during the demosaicing stage (Figure 2.8).

Figure 2.8. This figure shows the same image sharpened two different ways. The image on the left has been reasonably sharpened, but the one on the right has been aggressively oversharpened, resulting in a picture with harsher contrast.
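Most in-camera sharpening is some variant of unsharp masking: blur a copy of the image, then exaggerate the difference between the original and the blurred copy. Here's a minimal single-channel Python sketch; the radius and amount values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(channel, radius=1.0, amount=0.6):
    """channel: 2-D float array (one image channel), values 0.0-1.0."""
    blurred = gaussian_filter(channel, sigma=radius)
    return np.clip(channel + amount * (channel - blurred), 0.0, 1.0)
```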


8-bit conversion

Your camera's image sensor attempts to determine a numeric value that represents the amount of light that strikes a particular pixel. Most cameras capture 12 bits of data for each pixel on the sensor. In the binary counting scheme that all computers use, 12 bits allow you to count from 0 to 4,095, which means that you can represent 4,096 different shades between the darkest and lightest tones that the sensor can capture.

The JPEG format allows only 8-bit images. With 8 bits, you can count from 0 to 255. Obviously, converting from 12 bits to 8 means throwing out some data, which shows up in your final image as a loss of fine color transitions and details. Though this loss may not be perceptible (particularly if you're outputting to a monitor or printer that's not good enough to display those extra colors anyway), it can have a big impact on how far you can push your color corrections and adjustments.
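The arithmetic of the squeeze looks roughly like this (real cameras do the conversion after gamma correction, so this is only the simplest version of the idea):

```python
import numpy as np

raw_12bit = np.array([0, 1024, 2048, 3072, 4095])         # sample 12-bit values

eight_bit = np.round(raw_12bit / 4095 * 255).astype(np.uint8)
print(eight_bit)                                           # [  0  64 128 191 255]

# 4,096 possible input levels now share only 256 output levels, so roughly
# 16 distinct 12-bit values collapse into each 8-bit value.
```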

JPEG compression

With your image data interpreted, corrected, and converted to 8-bit, it's ready to be compressed for storage on your camera's media card. JPEG compression exploits the fact that your eye is more sensitive to changes in luminance than it is to changes in color.

Although your camera represents color as a combination of red, green, and blue values, there are many other color models that can be used. For example, color can also be represented as a combination of hue, lightness, and saturation values. To begin the process of JPEG compression, your camera converts your image to a color model (typically YCbCr) in which each pixel is represented by separate luminance and color values. It then sets the luminance information aside and leaves it undisturbed.
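For the curious, the split into luminance and color values typically looks something like this (these are the standard BT.601 weights used by JPEG; the sketch skips the offsets and rounding a real encoder applies):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array of shape (..., 3), values 0.0-1.0."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b        # luminance
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b        # blue color difference
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b        # red color difference
    return np.stack([y, cb, cr], axis=-1)
```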

The color information is divided into a grid of cells measuring 8 pixels by 8 pixels. Each 8 x 8 cell is then averaged to reduce it from a mosaic of slightly varied colors to a single square of more uniform color. Now that there is far less color variation, a simple lossless compression scheme, like the kind you might use when you zip or stuff a file before sending it via e-mail, can be applied to greatly reduce the size of the color information.
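Here's a toy version of that color averaging, following the simplified description above. (A real JPEG encoder subsamples and transforms the color data rather than literally averaging it, but the effect on fine color detail is similar.)

```python
import numpy as np

def average_blocks(channel, block=8):
    """Replace each block x block tile of a 2-D array with its average value."""
    h, w = channel.shape
    out = channel.astype(float).copy()
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            out[y:y + block, x:x + block] = channel[y:y + block, x:x + block].mean()
    return out
```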

When the color and luminance information are recombined, your eye doesn't necessarily notice the loss of color detail, thanks to its greater sensitivity to luminance than to color.

If you apply a lot of JPEG compression or recompress an image several times, you can actually see the grid pattern begin to emerge in your image (Figure 2.9).

Figure 2.9. In this rather extreme example of JPEG compression, you can see the posterization effects that can occur from aggressive compression (or repeated recompression) as well as the blocky artifacts that JPEG compression can produce.


File storage

Finally, the image is written to your camera's memory card. These days, many cameras include sophisticated memory buffers that let them process images in an on-board RAM cache while simultaneously writing out other images to the storage card. Good buffer and writing performance is what allows some cameras to achieve high frame rates when shooting in burst mode.

Your camera also stores important EXIF (Exchangeable Image File Format) information in the header of the JPEG file. The EXIF header contains, among other things, all of the relevant exposure information for your shot. Everything from camera make and model to shutter speed, aperture setting, ISO speed, white balance setting, exposure compensation setting, and much more is included in the file, and you can reference all of this information in almost any image editor (Figure 2.10).

Figure 2.10. Most image editors let you read the EXIF information that your camera stores with each image.
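If you'd rather poke at the EXIF data yourself, a few lines of Python with the Pillow library will do it; "photo.jpg" below is just a placeholder for one of your own files.

```python
from PIL import Image, ExifTags

with Image.open("photo.jpg") as img:
    exif = img.getexif()

# Top-level tags: camera make, model, capture date, and so on.
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, tag_id), ":", value)

# Exposure details (shutter speed, aperture, ISO) live in a sub-block.
for tag_id, value in exif.get_ifd(0x8769).items():
    print(ExifTags.TAGS.get(tag_id, tag_id), ":", value)
```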




