NTSC Decoding Basics (Part 1)

They were some smart guys

By the early 1950s, a color transmission system had been adopted with the requirement that it be compatible with the enormous number of monochrome receivers already in consumers' hands. By that time the channel allocations had been settled, each with a prescribed 6 MHz bandwidth. Fundamentally, if one used three monochrome cameras, each with one of the three primary color filters in front of its lens, a color image could be captured and transmitted to a tri-color receiver system. That is straightforward, but it would consume three full-bandwidth television channels to carry one color program. Clearly this was unacceptable, so the goal became a method of fitting all the color information and the monochrome detail into a single TV channel. Obviously, this was accomplished, and therein lies the elegance of the NTSC system.

Take a look at Figure 1. You will see that a color TV camera does, in fact, incorporate three full-bandwidth image sensors, each "seeing" only one primary color…red, green, or blue. The elegance lies in recognizing which information from the three sensors is identical and which is unique. Much of it is identical, or redundant, from a transmission point of view. The redundant information is the brightness of the scene and the accompanying detail…exactly the features seen in a monochrome picture. In other words, if we add together the individual images from the three sensors, in the proper proportions, we obtain a black and white picture having full brightness and maximum detail. In the NTSC system, we refer to this composite image as the luminance, or 'Y', information. In the camera, this Y information becomes the reference for determining what image information is unique. (A small sketch of the summing arithmetic follows Figure 1.)

Figure 1
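To make that summing arithmetic concrete, here is a minimal sketch in Python. The 0.299/0.587/0.114 weights are the standard NTSC luma coefficients; the function name and layout are simply our illustration, not anything defined by the standard.

def luminance(r, g, b):
    # Weighted sum of the three sensor outputs. The weights reflect the
    # eye's differing sensitivity to the primaries: green contributes the
    # most to perceived brightness, blue the least.
    return 0.299 * r + 0.587 * g + 0.114 * b

# Equal, full-scale output from all three sensors produces peak white:
assert abs(luminance(1.0, 1.0, 1.0) - 1.0) < 1e-9

Note that the three signals are not added in equal parts; it is the weighting that makes the Y channel match what a monochrome camera viewing the same scene would have produced.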

Now, if we compare (subtract) the Y information against the output of each of the three color sensors, we obtain a signal that represents the unique information provided by that sensor. Evaluating these signals shows that they carry less image detail…that is, lower frequency information…than the Y channel. Conveniently, this suits the human eye, which resolves fine detail far better in brightness than in color. The eye utilizes two types of image sensors…rods and cones. Rods are sensitive to brightness; cones are the receptors for color. In dim light, where the rods dominate, we perceive image detail first and color second.
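The following rough sketch shows the idea, using made-up sample values and a deliberately crude filter of our own choosing (none of the numbers below are NTSC-specified): the difference signals are formed by subtraction and can then be low-pass filtered, while Y keeps its full detail.

def moving_average(signal, width=2):
    # A crude low-pass filter: each sample becomes the average of itself
    # and its neighbors, smoothing away fine (high-frequency) variation.
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - width):i + width + 1]
        out.append(sum(window) / len(window))
    return out

# A few samples along one scan line from the three (gamma-corrected) sensors:
r = [0.9, 0.9, 0.2, 0.2, 0.8, 0.8]
g = [0.1, 0.1, 0.7, 0.7, 0.8, 0.8]
b = [0.1, 0.1, 0.2, 0.2, 0.1, 0.1]

y = [0.299 * ri + 0.587 * gi + 0.114 * bi for ri, gi, bi in zip(r, g, b)]
r_minus_y = moving_average([ri - yi for ri, yi in zip(r, y)])  # softened
b_minus_y = moving_average([bi - yi for bi, yi in zip(b, y)])  # softened
# Y is left untouched, so the picture keeps its full detail.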

Returning to Figure 1, we see that the R, G, and B components are full bandwidth. Each is modified by gamma correction. Gamma is another involved subject, but briefly, it is the name given to the phenomenon whereby a CRT's light output is not linear with respect to the electrical input supplied to it. For many years, gamma correction circuitry was expensive. Performing the correction once in the camera, rather than in every receiver, made home TV sets much less expensive, and we live with that decision yet today. The gamma corrector is an amplifier that transforms the signal from each sensor with the inverse of the function expected at the CRT display. The result is a picture with the appearance of a linear gray scale, or brightness characteristic…that is, one that looks normal. At this juncture, luminance becomes an imprecise term, since that word properly describes the signal before the gamma circuitry modifies it. From now on we'll refer to it as luma, or the Y channel. [Gamma is a good topic for a later issue, as it has significant impact on the operation of modern day projector technologies.]
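As a minimal sketch of what the gamma corrector does, assume the commonly quoted CRT exponent of about 2.2 (the exact figure, as noted, is a deeper subject, and the names below are ours).

CRT_GAMMA = 2.2  # an assumed, commonly quoted value for CRT displays

def gamma_correct(v):
    # Pre-distort the linear sensor signal (0 to 1) with the inverse of
    # the CRT's power-law response.
    return v ** (1.0 / CRT_GAMMA)

def crt_response(v):
    # The CRT itself: light output rises as a power of the drive signal.
    return v ** CRT_GAMMA

# Correction in the camera and the CRT's nonlinearity cancel, leaving a
# linear gray scale at the screen:
for level in (0.25, 0.5, 0.75):
    assert abs(crt_response(gamma_correct(level)) - level) < 1e-9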

The matrix circuit in Figure 1 performs the math on the three signals (R, G, and B) so as to provide the reference luma signal, Y, and the unique color information, referred to as difference signals. In our system we say R-Y (that's red minus Y) and B-Y (blue minus Y). Now, because Y is itself a fixed combination of R, G, and B, only two of the three possible difference signals are independent: the third can be calculated whenever the other two are known. Sound familiar from algebra class? This means we can throw away one channel of color information and derive it later, as long as we have two color difference signals and Y for comparison. At this point it is perhaps obvious why component video is important in our industry: it carries exactly these key, separate components of the image information. These difference components are lower bandwidth than the Y signal.
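For the algebraically inclined, here is a sketch of that recovery, in the same illustrative Python as before. Because the luma weights sum to one, the three difference signals obey 0.299(R-Y) + 0.587(G-Y) + 0.114(B-Y) = 0, and that identity is the whole trick.

def encode(r, g, b):
    # Camera end: form Y and the two transmitted difference signals.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y

def recover_green(y, r_minus_y, b_minus_y):
    # Receiver end: since the weighted differences sum to zero,
    # G-Y follows directly from the two that were transmitted.
    g_minus_y = -(0.299 * r_minus_y + 0.114 * b_minus_y) / 0.587
    return y + g_minus_y

y, ry, by = encode(0.6, 0.3, 0.1)
assert abs(recover_green(y, ry, by) - 0.3) < 1e-9  # green comes back exactly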
