1.3. Media Computation: Why Digitize Media?
Let's consider an encoding that would be appropriate for pictures. Imagine that pictures were made up of little dots. That's not hard to imagine: Look really closely at your monitor or at a TV screen and see that your images are already made up of little dots. Each of these dots is a distinct color. You may know from physics that colors can be described as the sum of red, green, and blue. Add the red and green to get yellow. Mix all three together to get white. Turn them all off, and you get a black dot.
What if we encoded each dot in a picture as a collection of three bytes, one each for the amount of red, green, and blue at that dot on the screen? We could collect a bunch of these three-byte sets to specify all the dots of a given picture. That's a pretty reasonable way of representing pictures, and it's essentially how we're going to do it in Chapter 4.
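The three-bytes-per-dot idea can be sketched in a few lines of Python. This is just an illustration of the encoding, not the book's actual picture classes from Chapter 4: each dot is a set of three values from 0 to 255, and a picture is simply a collection of them.

```python
# Each dot (pixel) is three bytes: amounts of red, green, and blue,
# each ranging from 0 (off) to 255 (full strength).
yellow = (255, 255, 0)   # red + green = yellow
white = (255, 255, 255)  # all three at full strength
black = (0, 0, 0)        # all three off

# A tiny 2x2 "picture" is just a collection of these three-byte sets.
picture = [
    [yellow, white],
    [black, (255, 0, 0)],  # bottom-right dot is pure red
]

# Storage needed: 3 bytes per dot.
width, height = 2, 2
print(width * height * 3)  # 12 bytes for this tiny picture
```

The same arithmetic scales up: a 1024-by-768 picture needs 1024 × 768 × 3 = 2,359,296 bytes.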
Manipulating these dots (each referred to as a pixel or picture element) can take a lot of processing. There can be thousands or even millions of them in a picture. But the computer doesn't get bored, and it's mighty fast.
The encoding that we will be using for sound involves 44,100 two-byte sets (each called a sample) for each second of time. A three-minute song requires 15,876,000 bytes: 44,100 samples per second, times 2 bytes per sample, times 180 seconds. Doing any processing on this takes a lot of operations. But at a billion operations per second, you can do lots of operations to every one of those bytes in just a few moments.
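The storage arithmetic above is worth working through explicitly:

```python
# How many bytes does a three-minute song take at this encoding?
samples_per_second = 44100  # samples in each second of sound
bytes_per_sample = 2        # each sample is a two-byte set
seconds = 3 * 60            # a three-minute song

total_bytes = samples_per_second * bytes_per_sample * seconds
print(total_bytes)  # 15876000
```

At a billion (1,000,000,000) operations per second, even ten operations on every one of those bytes is well under a second of computing.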
Creating these kinds of encodings for media requires a change to the media. Look at the real world: It isn't made up of lots of little dots that you can see. Listen to a sound: Do you hear thousands of little bits of sound per second? The fact that you can't hear little bits of sound per second is what makes it possible to create these encodings. Our eyes and ears are limited: We can only perceive so much, and only things that are just so small. If you break up an image into small enough dots, your eyes can't tell that it's not a continuous flow of color. If you break up a sound into small enough pieces, your ears can't tell that the sound isn't a continuous flow of auditory energy.
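The "breaking up" of sound can be sketched concretely. The snippet below is an illustration only, using an assumed 440 Hz tone (a musical A): a continuous wave gets chopped into discrete measurements, 44,100 of them per second, far too closely spaced for our ears to notice the gaps.

```python
import math

# "Break up" a continuous 440 Hz tone into discrete samples.
sampling_rate = 44100  # samples per second
frequency = 440        # cycles per second (a musical A)

# Take just the first ten samples of the continuous sine wave.
samples = [math.sin(2 * math.pi * frequency * (i / sampling_rate))
           for i in range(10)]

print(len(samples))          # 10 discrete pieces of sound
print(round(samples[0], 3))  # 0.0 -- the wave starts at zero
```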
The process of encoding media into little bits is called digitization, sometimes referred to as "going digital." Digital means (according to the American Heritage Dictionary) "Of, relating to, or resembling a digit, especially a finger." Making things digital is about turning something continuous and uncountable into something that we can count, as if with our fingers.
Digital media, done well, feel the same to our limited human sensory apparatus as the original. Phonograph recordings (ever seen one of those?) capture sound continuously, as an analog signal. Photographs capture light as a continuous flow. Some people say that they can hear a difference between phonograph recordings and CD recordings, but to my ear and most measurements, a CD (which is digitized sound) sounds just the same, maybe clearer. Digital cameras at high enough resolutions produce photograph-quality pictures.
Why would you want to digitize media? Because it's easier to manipulate, to replicate, to compress, and to transmit. For example, it's hard to manipulate images that are in photographs, but it's very easy when the same images are digitized. This book is about using the increasingly digital world of media and manipulating it, and learning computation in the process.
Moore's Law has made media computation feasible as an introductory topic. Media computation relies on the computer doing lots and lots of operations on lots and lots of bytes. Modern computers can do this easily. Even with slow (but easy to understand) languages, even with inefficient (but easy to read and write) programs, we can learn about computation by manipulating media.