Monochrome television involves a single signal: brightness, or luma, usually given the symbol Y. (Although more properly it's Y', a gamma-corrected value.) Color TV requires three times the information: red, green, and blue (RGB). Unfortunately, color TV had to be broadcast in the same spectrum, with the same channel bandwidth, as monochrome, so having three times the information was a bit of a problem.
Engineers at RCA developed a truly brilliant solution. They realized that the frequency spectrum of a television signal had gaps in it largely unoccupied by picture content. They encoded color information in a sine wave called a subcarrier and added it to the existing luma signal; the subcarrier's frequency fit within one of the empty gaps in monochrome's spectrum, so the luma information was largely unaffected by the new chroma information, and vice versa. The resulting signal remained compatible with existing monochrome receivers and fit in the same broadcast channels: they had managed to squeeze the quart of color information into the pint pot of existing monochrome broadcasting.
In this composite color system, the TV recovers color by comparing the amplitude and phase of the modulated subcarrier with an unmodulated reference signal. Differences in phase (the angle you can see on a vectorscope) determine the hue of the color; the amplitude (the distance of a color from the center of the vectorscope's display) indicates its saturation. The reference signal is transmitted on every scanline; it's the colorburst signal that you can see between the horizontal sync pulse and the left edge of the active picture.
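The phase-and-amplitude decoding described above amounts to converting the two demodulated color-difference signals into polar coordinates. Here's a minimal sketch in Python; the variables u and v stand for the demodulated color-difference pair, and the function name is ours:

```python
import math

def hue_saturation(u, v):
    """Convert a demodulated chroma pair into vectorscope terms."""
    hue = math.degrees(math.atan2(v, u)) % 360  # phase angle -> hue
    sat = math.hypot(u, v)                      # vector length -> saturation
    return hue, sat
```

Rotating the chroma vector changes hue without touching saturation, which is exactly why phase errors distort color while leaving brightness alone.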
The picture monitor shows horizontal sync, blanking, and colorburst using its pulse-cross function; the waveform shows the same information in the analog signal itself.
Composite color has some side effects, of course. As we said, the luma and chroma signals are largely (but not completely) unaffected by each other. Certain spatial frequencies, often seen in herringbone jackets or other densely patterned clothing, can overlap the frequency used by the subcarrier and show up as cross-color artifacts, a shimmering moiré of false color floating on top of the affected image. Conversely, sharp color transitions along edges, as with brightly colored text or graphics, often show up as "dot crawl" or "chroma crawl," a cross-luma artifact appearing as a series of colored dots marching up or across an otherwise straight edge.
In NTSC composite color, both the hue and the saturation are subject to distortion during recording and transmission, so NTSC monitors have adjustments for hue (make the faces green! No, make the faces purple!) and saturation. For this reason, NTSC (short for National Television Systems Committee) is often referred to as Never Twice the Same Color. PAL (Phase Alternating Line) adds signal alternations to automatically compensate for hue errors, so PAL sets have only saturation controls.
To fix distorted hue, use FCP's (one-way) Color Corrector (Video Filters > Color Correction > Color Corrector) or the Proc Amp's Phase control (Video Filters > Image Control > Proc Amp). To fix saturation, use the Sat control in the Color Corrector 3-way (Video Filters > Color Correction > Color Corrector 3-way) for real-time correction. You can also use the Sat control in the one-way Color Corrector, the Chroma control in the Proc Amp, or the Desaturate filter (Video Filters > Image Control > Desaturate), which lets you increase or decrease saturation.
Additionally, the carriage of color on a subcarrier limits the detail you can resolve in the color; sharp transitions in color show up as dot crawl instead of a crisp edge. NTSC and PAL limit chroma resolution to under a quarter of luma resolution. Fortunately, the human eye is much less sensitive to color details than brightness details; NTSC and PAL were designed with these limitations in mind.
Finally, adding color to the earlier monochrome standard caused interference with the sound signal when sound and picture were modulated for broadcast. To avoid interference, the picture frequencies (frame rate as well as color subcarrier) were slowed by one part in a thousand. The resulting broadcasts could still be played on existing monochrome receivers with no problem, but that one part in a thousand slowed the field rate from 60 Hz to 59.94 Hz and the frame rate to 29.97 Hz, and drop-frame timecode had to be used (when, later on, timecode was invented) to keep NTSC's times in sync with wall clocks, a very important consideration for broadcasters.
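The drop-frame bookkeeping this slowdown forces can be sketched numerically. Drop-frame timecode skips frame numbers ;00 and ;01 at the start of every minute except multiples of ten, so the labels keep pace with wall-clock time even though no actual frames are dropped. A minimal sketch, assuming the standard 29.97 fps rules (the function name is ours):

```python
NTSC_FRAME_RATE = 30 * 1000 / 1001   # 29.97002997... Hz after the slowdown

def dropframe_tc(frames):
    """Frame count -> drop-frame timecode (skips ;00 and ;01 each minute,
    except every tenth minute)."""
    fps = 30
    per_min = fps * 60 - 2           # 1798 frame numbers in a "dropped" minute
    per_10min = per_min * 10 + 2     # 17982 frames per 10-minute block
    tens, rem = divmod(frames, per_10min)
    extra_min = 0 if rem < fps * 60 else 1 + (rem - fps * 60) // per_min
    frames += 2 * (tens * 9 + extra_min)  # re-insert the skipped numbers
    ff = frames % fps
    ss = frames // fps % 60
    mm = frames // (fps * 60) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

For example, the frame after 00:00:59;29 is labeled 00:01:00;02, while the tenth minute begins at 00:10:00;00 with nothing skipped.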
Broadcasting color is one thing. Recording it on tape, with all the instabilities and variability of electromechanical systems, is something else altogether. Many different approaches have been used, with differing benefits and side effects.
In the early days of color, the composite signal was recorded directly on tape, using 1-inch VTRs. Because the color is carried in the phase and amplitude of a high-frequency signal (3.58 MHz in most NTSC systems, 4.43 MHz in PAL), any timebase errors (minor variations in playback speed that change the apparent phase of the color signal) result in ruined color. Playing back direct color recordings requires timebase correction, using memory buffers to smooth out mechanically-induced jitter. Timebase correctors (TBCs) for direct color playback were very expensive and were essentially specific to the format and type of machine being used.
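To get a feel for how unforgiving subcarrier phase is, note that the hue error in degrees is roughly 360 times the subcarrier frequency times the timing error. A quick back-of-the-envelope calculation (the timing error figure here is illustrative):

```python
f_sc = 3.579545e6           # NTSC color subcarrier frequency, Hz
dt = 10e-9                  # a mere 10 nanoseconds of timebase error
hue_error = 360 * f_sc * dt # resulting hue shift, in degrees
```

Ten nanoseconds of jitter, far too small to matter in a monochrome picture, shifts hue by roughly 13 degrees, which is why direct color recording needed timebase correction at all.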
Heterodyne or color-under recording developed as a way to record and play back color without the need for the expensive TBC. Inside the VTR, the chroma signal is "heterodyned down" to a much lower frequency and recorded "under" the luma signal. The magic of heterodyning is that when the playback signal is "heterodyned up" again, most of the timebase errors cancel themselves out, which reproduces usable color. Color-under is used in ¾-inch, VHS, Video8, S-VHS, and Hi8 formats, among others.
TBC-free color comes at a price: the chroma resolution of the color-under signal is reduced, and the precise timebase of the color subcarrier becomes muddled. The muddled subcarrier makes it impossible to separate the luma and chroma information as accurately as with direct color, so color-under recordings are harder to process in downstream equipment such as proc amps, TBCs, and frame synchronizers, and almost invariably suffer from diminution of high-frequency details.
In the mid-1980s, the Hawkeye/Recam/M, Sony Betacam, and Panasonic MII formats introduced component recording. A major problem with composite video is that once the luma and chroma are mixed together, it's pretty much impossible to recover the separate luma and chroma signals completely intact. Although this is irrelevant for final broadcast, since analog broadcasting uses composite color, it makes manipulation of the image in post production difficult, and dubbing composite color signals across multiple generations, even with TBCs, results in considerable quality loss.
Instead of subcarrier-modulating the signal, Betacam and MII record the luma component and two color-difference components separately, in different parts of the video track. Because the color signal is never modulated on a phase-critical subcarrier, it isn't subject to hue-distorting timebase errors or the resolution limitations imposed by subcarrier modulation.
Color-difference components fall under the general nomenclature of "YUV", although there are several variants (Y'UV, Y'/R-Y/B-Y, Y'PbPr, Y'CbCr) depending on the exact format or signal connection being used. There's nothing mysterious about YUV color; it's a simple transformation of RGB signals by matrix multiplication. YUV signals offer several advantages over RGB in the rough-and-tumble world of analog recording and transmission:
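That matrix transformation is simple enough to write out. Here's a minimal sketch using the Rec. 601 luma weights and the classic analog U/V scale factors; the exact scaling varies with the variant in use, so treat the constants as one representative choice:

```python
def rgb_to_yuv(r, g, b):
    """RGB -> Y'UV (Rec. 601 luma weights; U, V are scaled B-Y' and R-Y')."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted sum of R, G, B
    u = 0.492 * (b - y)                    # scaled blue color difference
    v = 0.877 * (r - y)                    # scaled red color difference
    return y, u, v
```

Note that pure white (1, 1, 1) yields Y' = 1 with both color differences at zero; a neutral gray scale carries no chroma at all, which is part of what made the scheme compatible with monochrome sets.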
Gain imbalances between RGB signals show up as very noticeable color casts in the reproduced picture. The same degree of imbalance between YUV signals appears as a slight change in overall brightness or a change in saturation of some colors. To see what gain imbalances do to images, follow these steps:
Timing differences between RGB signals appear as bright fringes of contrasting colors along the edges of objects, whereas in YUV the same amount of delay shows itself as laterally displaced colors with much milder edge effects. Follow these steps to see what timing differences do in RGB and YUV:
Finally, as mentioned before, the human eye is less sensitive to color resolution than to brightness resolution. By transcoding color into YUV components, it's possible to reduce the bandwidth taken by the color components considerably without markedly affecting picture quality.
In digital recording and transmission, gain and timing differences are rendered moot, but the bandwidth reduction of the chroma components is still quite valuable for eking out the most efficient storage on tape and disk. So most digital recorders employ YUV components with reduced-resolution chroma.
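Reduced-resolution chroma usually means horizontal subsampling: in a 4:2:2 system, for instance, each pair of luma samples shares a single chroma sample. A minimal sketch of one way such subsampling might be done (real equipment uses proper low-pass filtering rather than a bare average):

```python
def subsample_422(chroma_row):
    """Halve horizontal chroma resolution by averaging adjacent pairs."""
    return [(chroma_row[i] + chroma_row[i + 1]) / 2
            for i in range(0, len(chroma_row) - 1, 2)]
```

Eight chroma samples in, four out: the luma row keeps its full resolution while the chroma rows carry half the data.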
The high-end production market still has a need for keeping as much of the raw picture information as possible; the new HDCAM-SR format allows recording of full-bandwidth RGB signals. HDCAM-SR is discussed in Lesson 9.