When it comes to digital cameras, most consumers (and salespeople) seem obsessed with megapixels. That's because "everybody knows" that more pixels mean better images (they don't, by the way). I've seen almost as much obsession in the scanner section of the computer store... where consumers are snapping up 9600 DPI scanners with slide adapters.
What's lacking in all of this hoopla is a clear understanding of what pixels are and just how many you need. See, the more pixels you have (whether captured with your digital camera or acquired with a scanner), the more RAM you need and the more hard-drive space it takes to store them all. So it behooves you to understand a little of the technology behind the images you want to capture, manipulate, output, and store.
In the Beginning...
Essentially, computers and video devices use pixels to express image information. Each pixel is a small square of light. The pixel is the smallest portion of an image that a computer is capable of displaying or printing. Too few pixels and images appear "blocky," as there is not enough detail to work with. Too many pixels and the computer or output device just gets bogged down.
Computers use pixels to build with, like a child might use these wooden blocks.
But where did the term pixel come from? The word pixel is an abbreviation of picture element, coined to describe the photographic elements of a television image. In 1969, writers for Variety magazine took pix (a 1932 abbreviation of pictures) and combined it with element to describe how TV signals came together. There are even earlier reports of Fred C. Billingsley coining the word at NASA's Jet Propulsion Laboratory in 1965. While the exact origins of the word may be disputed, the meaning is not. The word pixel quickly caught on, first in the scientific communities in the 1970s and then in the computer-art industry in the mid 1980s.
A close-up of TV "picture elements," or pixels.
The red circle shows an enlargement of the image. Notice how you can see actual pixels when you increase the magnification of an image. These squares of light are the building blocks of all digital photos.
A portmanteau of the nonverbal kind.
So What Are Megapixels?
When you shop for a digital camera, you are bombarded with talk of megapixels. Consumers are often misled about what megapixels are and how many they need. A megapixel is simply a unit of measurement: 1 million pixels. It is commonly used to describe how much image data a digital camera can capture. As with your car, just because your tank can hold more gallons of gas doesn't mean it's more fuel-efficient or better than your co-worker's.
Digital cameras use card-based storage, like this compact flash card, to hold the captured pixels.
For example, if a camera can capture pictures at 2048 x 1536 pixels, it is referred to as having 3.1 megapixels (2048 x 1536 = 3,145,728).
If you were to print that picture on paper at 300 PPI (pixels per inch), it would roughly be a 7" x 5" print. Professional photographers need more pixels than this, but a consumer may not. It all depends on how the pixels are meant to be displayed.
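The arithmetic above is easy to sketch in a few lines of Python. This is just an illustration of the math in the text, not a tool from any camera maker; the function names are my own.

```python
# Convert pixel dimensions into megapixels and an approximate
# print size at a given resolution (pixels per inch).

def megapixels(width, height):
    """Total pixels captured, expressed in megapixels (millions of pixels)."""
    return width * height / 1_000_000

def print_size(width, height, ppi=300):
    """Approximate print dimensions in inches at a given PPI."""
    return width / ppi, height / ppi

w, h = 2048, 1536
print(f"{megapixels(w, h):.1f} megapixels")        # 3.1 megapixels
pw, ph = print_size(w, h)
print(f'{pw:.1f}" x {ph:.1f}" at 300 PPI')         # 6.8" x 5.1" at 300 PPI
```

Run it with other dimensions and you can see why a camera advertised as "3.1 megapixel" yields roughly a 7" x 5" print at 300 PPI.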
The more pixels you capture, the larger the image is (both in disk space and physical print size). Consumer uses such as email or inkjet prints are less demanding than professional uses such as billboards or magazines. Professionals need more megapixels than consumers do, which is one reason high-end cameras cost more: they are targeted at people who make money by taking pictures.
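To see why more pixels mean more disk space and RAM, here is a rough sketch assuming uncompressed 24-bit color (3 bytes per pixel). Files saved to a memory card are usually JPEG-compressed and much smaller, but the memory needed while editing an image is closer to this figure.

```python
# Estimate the uncompressed size of an image in memory,
# assuming 24-bit color (3 bytes per pixel). JPEG files on
# disk are smaller; this is the raw, uncompressed footprint.

def uncompressed_megabytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(f"{uncompressed_megabytes(2048, 1536):.1f} MB")   # 9.0 MB
```

Double the pixel dimensions and the footprint quadruples, which is why high-megapixel images fill hard drives (and strain older computers) so quickly.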