Symptom #1: Banding and Striping in a Video Frame
"Why is there banding or striping across my gradients in a rendered video frame?"
RGB has a much larger color space than YUV. At 8-bit color depth, RGB can represent more than 16 million distinct colors, at or just beyond the level at which human eyes can distinguish one shade from the next.
RGB is also convenient because it is the standard across most computer systems. As software and hardware become more powerful, greater bit depths are likely to become available: 10-bit color raises the number of possible color values to over a billion, and 16-bit pushes it into the trillions.
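The arithmetic behind those counts is simple: each of the three channels gets 2 to the power of the bit depth distinct levels, and every combination of the three is a different color. A quick sketch:

```python
# Number of representable colors for a given per-channel bit depth
# (three channels: R, G, B).

def color_count(bits_per_channel: int) -> int:
    levels = 2 ** bits_per_channel  # distinct values per channel
    return levels ** 3              # every R/G/B combination

print(color_count(8))   # 16,777,216 -- "over 16 million"
print(color_count(10))  # 1,073,741,824 -- over a billion
print(color_count(16))  # 281,474,976,710,656 -- into the trillions
```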
If 8-bit is so great, who needs 10-bit or higher? The answer lies in color definition. Computer systems are powerful, but they can only do the best they can with the information you give them. If you shoot a sunset horizon that ranges from middle gray to an orange at nearly 100 IRE, your eyes, and perhaps a 16-bit RGB system, will see a seemingly infinite gradation between the two values. But even though an 8-bit system can represent millions and millions of color variations, that number isn't infinite. So when the computer generates RGB values for the gradient, it has to round off colors it can't match exactly. When it does, you see banding: boundary stripes across the frame wherever the color space has no RGB values to define the in-between shades.
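The rounding described above can be sketched in a few lines. Here we sample a subtle gradient spanning only a 5% luminance range (like sky near a sunset) and round each sample to the nearest code value; many neighboring pixels collapse onto the same code, and each run of identical codes is one visible band:

```python
# Minimal sketch of gradient banding from quantization.

def quantize(value: float, bits: int) -> int:
    """Round a 0.0-1.0 sample to the nearest integer code value."""
    return round(value * (2 ** bits - 1))

width = 1920  # one scanline of an HD frame
samples = [0.50 + 0.05 * x / (width - 1) for x in range(width)]

codes_8 = [quantize(s, 8) for s in samples]
codes_10 = [quantize(s, 10) for s in samples]

# Each distinct code value is one visible "band" across the scanline.
print(len(set(codes_8)))   # only a handful of bands at 8 bits
print(len(set(codes_10)))  # roughly four times as many at 10 bits,
                           # so each step is far subtler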
This is where greater bit depth comes to the rescue. A 10-bit system generates many more color variations, making banding far less likely to occur. At 16-bit color depth, you would be hard-pressed to come up with gradients that band outside of specialized test patterns. In general, with all but the worst cameras, you will see very little of the banding just described, because the camera's circuitry handles much of the color rounding and error correction before the frame is recorded onto tape. A DV camera, using the 8-bit DV codec, will generally record beautiful levels of saturation and chromatic detail.
Where you have to choose your codec carefully is in the post-production phase. When you bring footage into the system, you have to make sure that the great gradients and minute detail you shot stay in the footage when you render it, so that beautiful sunset vista shots don't end up looking like the wrong sort of rainbows. If you see banding after rendering footage that had none before you added effects, you may need to move up to a higher bit-depth codec. This may be more expensive and may involve adding hardware for uncompressed video, but it's likely worth the added expense when you have to have the cleanest gradients possible.
Greater bit depth also provides more "headroom" when translating between color spaces. Converting from 8-bit RGB to 10-bit YUV results in better color accuracy than converting to 8-bit YUV, because the rounding errors in the intermediate values are smaller.
Symptom #2: Output Differs from Onscreen View
"Why does the video output look different from what I see in the Canvas and Viewer?"
YUV NTSC and PAL have much smaller color spaces than RGB. Although YUV can be stored at the same 8-, 10-, and 16-bit depths, implying just as many possible chroma variations, it is limited to the color range of the broadcast television standards, which were locked into place over 40 years ago. Although television sets have improved over the past four decades, the color standards have never changed, in order to ensure backward compatibility with every color broadcast television set.
YUV is considered a lowest common denominator in video, because, even with relatively inexpensive cameras, you can acquire color and luma detail that will not be seen once the footage is lowered to the broadcast standard, for instance when viewed on a home television set. Ideally, you want to get the best quality footage with the highest range possible so that when the footage is tailored for broadcast, it maintains as much quality as possible.
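The loss described above can be sketched as a simple clipping operation. This is a simplification (a real studio-range conversion also rescales values), but clipping is what destroys out-of-range detail: 8-bit studio-range video reserves luma codes 16-235, so "superwhite" highlight detail above that window collapses to a single value.

```python
# Simplified sketch of detail lost when footage is made broadcast-legal.

LEGAL_MIN, LEGAL_MAX = 16, 235  # 8-bit studio-swing luma limits

def clip_to_legal(luma_code: int) -> int:
    """Clamp a full-range 0-255 luma code to the broadcast-legal window."""
    return max(LEGAL_MIN, min(LEGAL_MAX, luma_code))

# Three distinct highlight values collapse into one after clipping:
print([clip_to_legal(code) for code in (236, 245, 255)])  # [235, 235, 235]
```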
When editing in FCP, you have to use not only the computer's monitor for the Viewer and Canvas, but also a true video monitor connected to your video output. The big difference between the two is that the FCP Viewer and Canvas use the RGB color space to display on the computer screen, while your analog video output uses the much more limited YUV range.
Unless you will be restricting your video to computer monitors, it's imperative that you view your video on a properly calibrated video monitor. This is also true when using the Digital Cinema Desktop Preview feature, which sends the video output to a second (or even the primary) computer monitor, since that monitor still displays in the RGB color space.
One unique option for users of uncompressed output devices such as capture cards and media converters is referred to as a video desktop preview. Some devices let you use a YUV-type production monitor as an extra computer monitor. The feature lets you drag application windows such as the Viewer or Canvas, or even Photoshop images, into a broadcast color space to see what they will look like before you send them out the real video output. This can be very convenient if you are developing content in applications that have no native video output. Check your device documentation to see whether your output device offers this tool.