Color Television


Monochrome television involves a single signal: brightness, or luma, usually given the symbol Y. (Although more properly it's Y', a gamma-corrected value.) Color TV requires three times the information: red, green, and blue (RGB). Unfortunately, color TV had to be broadcast in the same spectrum as monochrome, using the same channel bandwidth, so having three times the information was a bit of a problem.

Engineers at RCA developed a truly brilliant solution. They realized that the frequency spectrum of a television signal had gaps in it largely unoccupied by picture content. They encoded color information in a sine-wave called a subcarrier and added it to the existing luma signal; the subcarrier's frequency fit within one of the empty gaps in monochrome's spectrum, so the luma information was largely unaffected by the new chroma information, and vice versa. The resulting signal remained compatible with existing monochrome receivers and fit in the same broadcast channels: they had managed to squeeze the quart of color information into the pint pot of existing monochrome broadcasting.
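The interleaving trick shows up in the arithmetic: NTSC's color subcarrier sits at an odd half-multiple (455/2) of the horizontal line rate, so its spectral energy falls between the luma line-rate harmonics. A quick sketch (the 4.5 MHz figure is the sound-carrier spacing from which the color line rate was derived):

```python
# NTSC derived its color line rate from the 4.5 MHz sound-carrier spacing,
# then placed the color subcarrier at an odd half-multiple (455/2) of it,
# interleaving chroma energy into the gaps between luma's line harmonics.
line_rate = 4_500_000 / 286        # Hz; about 15,734.27 lines per second
subcarrier = line_rate * 455 / 2   # Hz

print(round(line_rate, 2))    # 15734.27
print(round(subcarrier, 2))   # 3579545.45 -- the familiar "3.58 MHz"
```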

In this composite color system, the TV recovers color by comparing the amplitude and phase of the modulated subcarrier with an unmodulated reference signal. Differences in phase (the angle you can see on a vectorscope) determine the hue of the color; the amplitude (the distance of a color from the center of the vectorscope's display) indicates its saturation. The reference signal is transmitted on every scanline; it's the colorburst signal that you can see between the horizontal sync pulse and the left edge of the active picture.
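The vectorscope relationship is just polar coordinates on the two demodulated color-difference signals. A minimal sketch (burst-relative angle offsets and axis scaling omitted):

```python
import math

def vectorscope_reading(b_y, r_y):
    """Polar view of demodulated chroma: the angle corresponds to hue,
    the radius (distance from the vectorscope's center) to saturation."""
    hue_degrees = math.degrees(math.atan2(r_y, b_y)) % 360
    saturation = math.hypot(b_y, r_y)
    return hue_degrees, saturation

# Equal color-difference components land at a 45-degree angle:
print(vectorscope_reading(0.3, 0.3))
```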

The picture monitor shows horizontal sync, blanking, and colorburst using its pulse-cross function; the waveform shows the same information in the analog signal itself.

Composite color has some side effects, of course. As we said, the luma and chroma signals are largely (but not completely) unaffected by each other. Certain spatial frequencies, often seen in herringbone jackets or other densely patterned clothing, can overlap the frequency used by the subcarrier and show up as cross-color artifacts, a shimmering moiré of false color floating on top of the affected image. Conversely, sharp color transitions along edges, as with brightly colored text or graphics, often show up as "dot crawl" or "chroma crawl," a cross-luma artifact appearing as a series of colored dots marching up or across an otherwise straight edge.

In NTSC composite color, both the hue and the saturation are subject to distortion during recording and transmission, so NTSC monitors have adjustments for hue (make the faces green! No, make the faces purple!) and saturation. For this reason, NTSC (short for National Television System Committee) is often said to stand for Never Twice the Same Color. PAL (Phase Alternating Line) adds signal alternations that automatically compensate for hue errors, so PAL sets have only saturation controls.

Tip

To fix distorted hue, use FCP's (one-way) Color Corrector (Video Filters > Color Correction > Color Corrector) or the Proc Amp's Phase control (Video Filters > Image Control > Proc Amp). To fix saturation, use the Sat control in the Color Corrector 3-way (Video Filters > Color Correction > Color Corrector 3-way) for real-time correction. You can also use the Sat control in the one-way Color Corrector, the Chroma control in the Proc Amp, or the Desaturate filter (Video Filters > Image Control > Desaturate), which lets you increase or decrease saturation.


Additionally, the carriage of color on a subcarrier limits the detail you can resolve in the color; sharp transitions in color show up as dot crawl instead of a crisp edge. NTSC and PAL limit chroma resolution to under a quarter of luma resolution. Fortunately, the human eye is much less sensitive to color details than brightness details; NTSC and PAL were designed with these limitations in mind.

Finally, adding color to the earlier monochrome standard caused interference with the sound signal when sound and picture were modulated for broadcast. To avoid the interference, the picture frequencies (the frame rate as well as the color subcarrier) were slowed by one part in a thousand. The resulting broadcasts still played on existing monochrome receivers with no problem, but that one part in a thousand slowed the field rate from 60 Hz to 59.94 Hz and the frame rate to 29.97 Hz, and drop-frame timecode had to be used (when, later on, timecode was invented) to keep NTSC's times in sync with wall clocks, a very important consideration for broadcasters.
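The slowdown and the drop-frame bookkeeping can be checked with a little arithmetic (a sketch, not a full timecode implementation): drop-frame skips frame labels :00 and :01 at the start of every minute except minutes divisible by ten.

```python
# The 1000/1001 slowdown: 60 Hz fields become 59.94, 30 Hz frames become 29.97.
frame_rate = 30 * 1000 / 1001            # about 29.97 frames per second
frames_per_hour = frame_rate * 3600      # frames actually shown in a wall-clock hour

# Drop-frame timecode drops labels :00 and :01 at the start of every minute
# except minutes divisible by 10 -- 2 labels in 54 of the 60 minutes.
labels_per_hour = 30 * 3600 - 2 * 54

print(round(frames_per_hour, 1))   # 107892.1
print(labels_per_hour)             # 107892 -- timecode stays in step with the clock
```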

Color Recording

Broadcasting color is one thing. Recording it on tape, with all the instabilities and variability of electromechanical systems, is something else altogether. Many different approaches have been used, with differing benefits and side effects.

Direct Color

In the early days of color, the composite signal was recorded directly on tape, using 1-inch VTRs. Because the color is carried in the phase and amplitude of a high-frequency signal (3.58 MHz in most NTSC systems, 4.43 MHz in PAL), any timebase errors (minor variations in playback speed that change the apparent phase of the color signal) result in ruined color. Playing back direct color recordings requires timebase correction, using memory buffers to smooth out mechanically-induced jitter. Timebase correctors (TBCs) for direct color playback were very expensive and were essentially specific to the format and type of machine being used.

Color-Under

Heterodyne or color-under recording developed as a way to record and play back color without the need for the expensive TBC. Inside the VTR, the chroma signal is "heterodyned down" to a much lower frequency and recorded "under" the luma signal. The magic of heterodyning is that when the playback signal is "heterodyned up" again, most of the timebase errors cancel themselves out, which reproduces usable color. Color-under is used in ¾-inch, VHS, Video8, S-VHS, and Hi8 formats, among others.
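The cancellation can be sketched numerically: on record, chroma is mixed down against a converter frequency; on playback, the up-converter tracks the off-tape timing error, so the error subtracts out. (The 688 kHz under-carrier below is illustrative; the exact figure varies by format.)

```python
F_SC = 3_579_545.0     # NTSC color subcarrier, Hz
F_UNDER = 688_000.0    # illustrative color-under frequency (varies by format)

def played_back_subcarrier(speed_error):
    """A timebase error scales every off-tape frequency by (1 + e), but the
    playback up-converter is locked to the same off-tape timing, so the
    error cancels when the chroma is heterodyned back up."""
    off_tape_chroma = F_UNDER * (1 + speed_error)
    converter = F_SC + F_UNDER * (1 + speed_error)   # stable crystal + tracking part
    return converter - off_tape_chroma

print(played_back_subcarrier(0.0))               # 3579545.0
print(round(played_back_subcarrier(0.001), 3))   # 3579545.0 -- the 0.1% error cancels
```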

TBC-free color comes at a price: the chroma resolution of the color-under signal is reduced, and the precise timebase of the color subcarrier becomes muddled. The muddled subcarrier makes it impossible to separate the luma and chroma information as accurately as with direct color, so color-under recordings are harder to process in downstream equipment such as proc amps, TBCs, and frame synchronizers, and almost invariably suffer from diminution of high-frequency details.

Component

In the mid-1980s, the Hawkeye/Recam/M, Betacam, and MII formats introduced component recording. A major problem with composite video is that once the luma and chroma are mixed together, it's pretty much impossible to recover the separate luma and chroma signals completely intact. Although this is irrelevant for final broadcast, since analog broadcasting uses composite color, it makes manipulation of the image in post production difficult, and dubbing composite color signals across multiple generations, even with TBCs, results in considerable quality loss.

Instead of subcarrier-modulating the signal, Betacam and MII record the luma component and two color-difference components separately, in different parts of the video track. Because the color signal is never modulated on a phase-critical subcarrier, it isn't subject to hue-distorting timebase errors or the resolution limitations imposed by subcarrier modulation.

Color-difference components fall under the general nomenclature of "YUV", although there are several variants (Y'UV, Y'/R-Y/B-Y, Y'PrPb, Y'CrCb) depending on the exact format or signal connection being used. There's nothing mysterious about YUV color; it's a simple transformation of RGB signals by matrix multiplication. YUV signals offer several advantages over RGB in the rough-and-tumble world of analog recording and transmission:

Gain imbalances between RGB signals show up as very noticeable color casts in the reproduced picture. The same degree of imbalance between YUV signals appears as a slight change in overall brightness or a change in saturation of some colors. To see what gain imbalances do to images, follow these steps:

1.

Exit FCP if it's running.

2.

Install AJW's filters from the DVD included with this book: Drag the folder AJW's Filters (Media > Lesson 07) into your Mac's Plugins folder (Macintosh HD > Library > Application Support > Final Cut Pro System Support > Plugins, where "Macintosh HD" is the name of your system disk).

If you're working on a shared system and don't have the permissions to drop the folder in the prescribed location, you can install it just for yourself in the Plugins folder in your home directory (YourHomeDirectory > Library > Preferences > Final Cut Pro User Data > Plugins). (FCP 1 through FCP 3 used different script locations; look in your User Manual for script installation instructions.)

If the filters don't appear when you run FCP, make sure the AJW's Filters folder and its contents are both readable and writable. Change the permissions as necessary, and the filters should show up inside FCP.

3.

Start FCP and load the DVStressTest3.tif clip into the Viewer.

4.

Put the clip into a 720 x 480-pixel Timeline and set the playhead so that the clip shows up in the Canvas. Double-click the clip in the Timeline to load it in the Viewer, then select the Viewer's Filters tab.

5.

Apply the Effects > Video Filters > AJW's Filters > Channel Balance [ajw] filter to the clip.

6.

Set the Green / Cb Gain slider to 70%, simulating an amplitude imbalance.

7.

Change the Color Space setting from RGB to YCrCb (YUV) and back again while looking at the results in the Canvas.

8.

Play with other Gain settings for the various channels and watch what happens in RGB mode and in YUV mode.

Timing differences between RGB signals appear as bright fringes of contrasting colors along the edges of objects, whereas in YUV the same amount of delay shows itself as laterally displaced colors with much milder edge effects. Follow these steps to see what timing differences do in RGB and YUV:

1.

You already installed AJW's filters in the previous exercise, right?

2.

Start FCP and load the DVStressTest3.tif clip.

3.

Put the clip into a 720 x 480-pixel Timeline and set the playhead so that the clip shows up in the Canvas.

4.

Double-click the clip in the Timeline to load it in the Viewer, then select the Viewer's Filters tab.

If you're lazy like I am, FCP still has the clip loaded from the previous exercise. That'll do fine. Turn off or delete any filters already applied to the clip.

5.

Apply the Effects > Video Filters > AJW's Filters > Channel Offset [ajw] filter to the clip.

6.

Set the Green / Cb Horizontal slider to 10, simulating a timing difference.

7.

Change the Color Space setting from RGB to YCrCb (YUV) and back again while looking at the results in the Canvas.

8.

Play with other Horizontal settings for the various channels and watch what happens in RGB mode and in YUV mode.

Finally, as mentioned before, the human eye is less sensitive to color resolution than to brightness resolution. By transcoding color into YUV components, it's possible to reduce the bandwidth taken by the color components considerably without markedly affecting picture quality.

In digital recording and transmission, gain and timing differences are rendered moot, but the bandwidth reduction of chroma components is still quite valuable for eking out the most efficient storage on tape and disk. So most digital recorders employ YUV components with reduced-resolution chroma.
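As noted earlier, the YUV family is just a matrix transformation of gamma-corrected RGB. A minimal sketch using the Rec. 601 luma weights (the scaling of the color-difference components differs slightly among the Y'PrPb, Y'CrCb, and other variants):

```python
def rgb_to_color_difference(r, g, b):
    """Rec. 601-style transform: luma plus two scaled color-difference
    signals. Inputs are gamma-corrected R'G'B' values in the range 0..1."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma (Y')
    pb = (b - y) / 1.772                    # scaled B - Y
    pr = (r - y) / 1.402                    # scaled R - Y
    return y, pb, pr

# Neutral gray carries no color-difference energy: y = 0.5, pb = pr = 0.
print(rgb_to_color_difference(0.5, 0.5, 0.5))
```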

  1. Look, if you haven't yet installed AJW's filters from the previous exercises, let's just admit that you're skimming the lesson and that you might as well skip this exercise, too!

  2. Start FCP and load the DVStressTest3.tif clip.

    This synthetic test pattern has full resolution in all channels.

  3. Put the clip into a 720 x 480-pixel Timeline and set the playhead so that the clip shows up in the Canvas. Double-click the clip in the Timeline to load it in the Viewer, then select the Viewer's Filters tab. If you have any filters applied to this clip, turn them off or delete them.

  4. Apply the H. Chroma Blur [ajw] filter (Effects > Video Filters > AJW's Filters > H. Chroma Blur [ajw]) to the clip.

  5. Set the Blur Amount slider to 2, and the Chroma Shift slider to 0.

    This simulates 4:2:2 sampling as used in Digital Betacam, D-5, and other high-quality, 601-specification video formats. (We'll define "601" later in the lesson.)

  6. Turn the filter on and off while looking at the Canvas.

    Even with chroma blurred, the overall appearance of the clip is acceptable (thus bringing to mind the old phrase, "it's only television").

  7. Set the Blur Amount slider to 4.

    This simulates 4:1:1 sampling (which we'll define later in the lesson), as used in NTSC DV25 and the DVCPRO formats. It's also close to the look of Betacam SP, although Beta SP's luma signal isn't as sharp.

  8. Turn the filter on and off while looking at the Canvas.

    Even 4:1:1 isn't too bad.

  9. Turn the filter off or delete it from the clip's filter list.

  10. Apply the Channel Blur [ajw] filter (Effects > Video Filters > AJW's Filters > Channel Blur [ajw]) to the clip.

  11. Set Color Space to YCrCb (YUV), set the Green / Cb slider to 2, and set the Blue / Cr slider to 2.

    These settings roughly simulate 4:2:0 color sampling as used in PAL DV25, over-the-air DTV, and DVD MPEG-2 formats.

  12. Turn the filter on and off while looking at the Canvas.
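The blur in these steps approximates what reduced chroma sampling does. Here's a toy sketch of 4:2:2-style horizontal subsampling (nearest-neighbor reconstruction for clarity; real equipment filters more gently):

```python
def subsample_chroma_422(chroma_row):
    """Keep one chroma sample per two luma samples (4:2:2-style), then
    repeat each kept sample on reconstruction -- luma resolution is
    untouched while horizontal chroma resolution is halved."""
    kept = chroma_row[::2]
    rebuilt = []
    for sample in kept:
        rebuilt += [sample, sample]
    return rebuilt[:len(chroma_row)]

print(subsample_chroma_422([10, 12, 50, 52, 90, 92]))   # [10, 10, 50, 50, 90, 90]
```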

The high-end production market still has a need for keeping as much of the raw picture information as possible; the new HDCAM-SR format allows recording of full-bandwidth RGB signals. HDCAM-SR is discussed in Lesson 9.



Apple Pro Training Series. Optimizing Your Final Cut Pro System. A Technical Guide to Real-World Post-Production