Digital Video: Are You 4:2:2 Compliant?
by Steve Somers, V.P. Engineering
Maintaining the original quality of an analog event is paramount in the television broadcast industry. This concern alone drove the development and adoption of the ubiquitous digital recording format referred to by various names: 4:2:2 Component Digital, CCIR 601, D1, or perhaps SDI (Serial Digital Interface).

Why is digital "anything" better than its analog version? Digital is "better" because it can replicate information consistently with little or no degradation from time to time, or copy to copy. The information used to replicate an analog event is saved as a stream of numbers representing "samples" of the analog level over time. In other words, as the analog event occurs, the recording system precisely measures the video level at regularly controlled time intervals and stores only the value measured. Capturing data from something at regular intervals is called "sampling." If the recorded samples are played back at the same sampling interval through the reverse process, the original event is restored. The value here is obvious for the recording of any analog event.

The design of a digital recording system involves many decisions, among them: How often should we sample? What measurement resolution do I need? Do I have enough storage space for all these numbers? Can I transmit the information cost effectively? Can I do it in real time?
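The sampling idea described above can be sketched in a few lines of code: measure a waveform at regular intervals and keep only the numbers. The test tone and rates here are purely illustrative choices, not figures from this article.

```python
import math

def sample(signal, fs_hz, duration_s):
    """Return (time, value) samples taken every 1/fs seconds."""
    n = int(duration_s * fs_hz)
    return [(k / fs_hz, signal(k / fs_hz)) for k in range(n)]

# A 1 kHz "analog" test tone, sampled at 8 kHz (well above Nyquist).
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples = sample(tone, 8000, 0.001)   # one millisecond of signal
print(len(samples))                   # 8 samples captured
```

Playing those stored values back at the same 8 kHz interval through a reverse (D-to-A) process would restore the original tone.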
Digitizing video data generates lots of numbers to store. Receiving the RGB image from the camera sensors would generate three full-bandwidth data channels requiring considerable storage and processing speed. But storing the component version of the video image requires less bandwidth…just as it does in the processing chain that creates composite NTSC. So, devising a system that digitizes and stores the analog component signal makes sense. This is just what the CCIR 601 4:2:2 digital component system does. And it's not compressed.
YCrCb…Key to Digital DNA
Recall that component video is made up of the Y channel (higher bandwidth) and two color difference components, R-Y and B-Y (lower bandwidth). Component digital terminology for these signals is Y, Cr, Cb. A separate A-to-D (analog-to-digital) channel is dedicated to digitizing each of the components. The sampling rate must be highest for the Y channel and can be lower for the two color difference channels. According to the father of sampling theory, Nyquist, a signal must be sampled at a frequency of at least two times the highest input frequency. This ensures that byproducts of the sums and differences of the sampling process will not erroneously appear within the passband of the intended signal. When this problem does occur, we call it aliasing.
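The folding behavior behind aliasing is easy to compute. The helper below, a sketch of the standard alias formula rather than anything from this article, shows where an input frequency lands after sampling:

```python
def alias_frequency(f_in, fs):
    """Frequency (Hz) at which f_in appears after sampling at fs."""
    f = f_in % fs
    return min(f, fs - f)   # fold back into 0 .. fs/2

# At the 13.5 MHz luma rate (introduced below), a 7 MHz input would
# violate Nyquist and fold back into the video passband:
print(alias_frequency(7e6, 13.5e6) / 1e6)   # 6.5 (MHz)
# A 5 MHz input, within normal video bandwidth, passes unaliased:
print(alias_frequency(5e6, 13.5e6) / 1e6)   # 5.0
```

This is why an anti-alias filter must limit the input bandwidth to less than half the sample rate before the A-to-D converter.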
Now, although a number of frequencies satisfy Nyquist and could be used to digitize quality video, the exchange of video information between standards was a great concern both in the U.S. and Europe. The 4:2:2 standard uses a 13.5 MHz sample rate because it satisfies the roughly 12 MHz minimum demanded by Nyquist and divides evenly into the line rates of both NTSC and PAL, which supports data transfers between the two. Remember that no subcarrier frequency exists in component video, so there is no issue with the relationship between the sampling frequency and a subcarrier. Both standards have 720 active video samples per line; only the number of lines in each frame changes (485 active for NTSC and 576 active for PAL). This makes video transfers much simpler.
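The arithmetic behind the 13.5 MHz choice can be checked directly. Using the standard line rates (NTSC's line rate is 4.5 MHz / 286; PAL's is 15,625 Hz), 13.5 MHz yields a whole number of sample periods per total line in both systems:

```python
ntsc_line_rate = 4.5e6 / 286   # ≈ 15734.27 Hz (525/60 systems)
pal_line_rate = 15625.0        # Hz (625/50 systems)

print(round(13.5e6 / ntsc_line_rate))   # 858 samples per total line
print(round(13.5e6 / pal_line_rate))    # 864 samples per total line
```

Both totals contain the same 720 active samples per line; the remainder falls in horizontal blanking. That shared active structure is what makes standards conversion simpler.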
Figure 1
The chroma channels (Cr, Cb) are sampled at a rate conveniently one half the luma sample rate, or 6.75 MHz. So, one half as many samples of each chroma difference channel are taken per line (360) as compared to the Y channel (720). This provides acceptable results because the chroma bandwidth is lower. All sampling is locked to the horizontal scan time such that each sample is in a predictable location, thus creating an orthogonal sampling structure (see Figure 1). To maintain a sampling reference, one sample each of Y, Cr, and Cb is locked to the 50% point of the falling edge of the analog sync pulse. These relationships explain the terminology "4:2:2" for the sampling of Y, Cr, and Cb. In simple terms, it means that for every 4 samples of Y, we take 2 samples of Cr and 2 samples of Cb during a horizontal line scan time. Add these sample rates together…13.5 MHz plus 6.75 MHz plus 6.75 MHz…and you get 27 MHz, which is the master clocking frequency for component digital. See Figure 2.
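The rates above add up exactly as described; a short sketch using only the article's numbers:

```python
y_rate, c_rate = 13.5e6, 6.75e6          # Hz: luma and each chroma channel
master_clock = y_rate + c_rate + c_rate
print(master_clock / 1e6)                # 27.0 MHz master clock

# Per active line: 720 Y samples, half as many of each chroma channel.
y_samples, cb_samples, cr_samples = 720, 360, 360
print(y_samples + cb_samples + cr_samples)   # 1440 data words per active line
```

Those 1,440 words per active line, clocked out at 27 MHz, are exactly what the parallel interface described below carries.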
Figure 2
Other Cool Stuff
One constant in video, whether analog or digital, is the sync timing. Since component digital video is represented by a series of binary numbers, a code may be assigned for the beginning of blanking (EAV, end of active video) and another for the end of blanking (SAV, start of active video) without wasting code space replicating a repetitive event. The receiving system can reconstruct the blanking and sync by decoding those two unique codes. In this way, nearly the entire video blanking interval can be used to transmit a host of other digital information, including digital audio data, line numbering, error checking data, ancillary data, time code, and so on.
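A receiver spots these codes by their unique timing reference sequence: a preamble of 0x3FF, 0x000, 0x000 (word values reserved for this purpose) followed by a status word whose H bit distinguishes EAV from SAV. The sketch below illustrates the detection idea; the word stream is hand-made for the example, not captured data.

```python
def find_timing_codes(words):
    """Yield (index, 'EAV' or 'SAV') for each timing reference found."""
    for i in range(len(words) - 3):
        if words[i] == 0x3FF and words[i+1] == 0 and words[i+2] == 0:
            h_bit = (words[i+3] >> 6) & 1    # H=1 marks EAV, H=0 marks SAV
            yield (i, 'EAV' if h_bit else 'SAV')

stream = [0x200, 0x3FF, 0x000, 0x000, 0x274,   # EAV status word (H set)
          0x040, 0x040,                        # blanking-level words
          0x3FF, 0x000, 0x000, 0x200]          # SAV status word (H clear)
print(list(find_timing_codes(stream)))
# [(1, 'EAV'), (7, 'SAV')]
```

Because 0x3FF and 0x000 never appear as video data, the receiver can re-synchronize from these codes alone, freeing the rest of the blanking interval for ancillary data.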
A Parallel Universe
Early on, the component digital data stream was 8 bits wide (256 levels). Eventually, the data stream widened to 10 bits (1,024 levels). The two extra bits significantly improved the signal-to-noise ratio and reduced rounding errors during data transfers. So, the data stream is moving 10-bit wide data words at 27 MHz. For those who interface component digital systems over relatively short distances [short means 150 feet or less], connecting these systems with the prescribed 25-pin DB style connector is typical. Running ten differential data pairs along with ancillary differential lines in a 25-conductor cable requires some attention to cable construction detail. Those of you with a fondness for assembling 15-pin HD VGA connectors will love this interface. Data and/or clock skew occurring over non-matched conductors will corrupt transmission. Within the broadcast production plant, the amount of real estate required for 25-pin connectors and their connections becomes an issue…particularly for routing equipment. Because of cabling and connectors, most of us will not work with component digital in the parallel connection scheme. Nor will we desire to. The closest we usually get is running a printer cable to a computer; and that's limited to ten feet maximum with predictable results.
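How much do those two extra bits buy? The textbook rule of thumb, roughly 6 dB of quantization SNR per bit, puts a number on it. This is the generic quantizer formula, not a figure quoted in this article:

```python
def quantization_snr_db(bits):
    """Theoretical best-case SNR for an N-bit quantizer (full-scale sine)."""
    return 6.02 * bits + 1.76

print(2 ** 8, 2 ** 10)    # 256 vs 1024 code levels
improvement = quantization_snr_db(10) - quantization_snr_db(8)
print(round(improvement, 2))    # about 12 dB better at 10 bits
```

Four times the code levels, and roughly 12 dB less quantization noise, which is why rounding errors through repeated processing passes drop noticeably at 10 bits.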
Enter SDI
The Serial Digital Interface simplifies the connection and routing of component digital signals to one coaxial cable. Most likely, you have already encountered this connection. Like composite NTSC, it can be routed with one BNC cable or optical fiber from place to place, but that's where the similarity ends. This ability comes at the cost of a higher data rate…specifically 10 times the parallel clock! A serializer takes each incoming parallel data word and lines up its bits one after the other to create a fully serial data stream that can flow on one cable. Arriving at the resulting speed of the data stream requires some complex math…10 bits times 27 MHz equals 270 megabits per second! The signal is unbalanced and sourced/terminated in 75 ohms just like regular video signals. The signal level is specified to be 0.8 volts peak-to-peak, +/- 10%. The serial digital interface is fully described in SMPTE 259M.
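The serializer idea is simple enough to sketch: flatten each 10-bit parallel word into a run of single bits (SMPTE 259M sends each word least significant bit first). The words below are arbitrary examples:

```python
def serialize(words, bits_per_word=10):
    """Flatten parallel words into one serial bit stream, LSB first."""
    stream = []
    for w in words:
        for b in range(bits_per_word):
            stream.append((w >> b) & 1)
    return stream

bits = serialize([0x3FF, 0x000, 0x200])
print(len(bits))             # 30 bits for 3 parallel words
print(10 * 27e6 / 1e6)       # 270.0 Mbit/s: 10 bits at the 27 MHz word rate
```

Ten bits out for every word in is exactly the 10x rate multiplication described above.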
The high data rate of serial digital requires careful attention to cabling and equalization. No clock signal is transmitted with the data. Instead, the data coding (scrambling) arrangement ensures that enough data transitions occur to allow full recovery of the data clock from the data itself. On the receiving end of an SDI transmission, a phase-locked loop locks onto the 270 Mbps signal, derives the clock, and divides the rate by 10 to recreate the original 27 MHz clock. Word alignment is obtained by recognizing the EAV and SAV sequences, which correspond to sync in the data stream. The data is decoded, or descrambled, and then deserialized via a shift register. See Figure 3.
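The scrambler at the heart of this scheme is a small shift-register circuit. The sketch below models a self-synchronizing scrambler of the kind SMPTE 259M uses (generator x^9 + x^4 + 1, followed by NRZI coding); treat it as an illustration of the technique rather than a verified implementation of the standard:

```python
def scramble(bits):
    """Self-synchronizing scrambler, generator x^9 + x^4 + 1."""
    out, reg = [], [0] * 9          # reg holds the last 9 output bits
    for d in bits:
        s = d ^ reg[8] ^ reg[3]     # taps at delays 9 and 4
        out.append(s)
        reg = [s] + reg[:-1]        # shift the register
    return out

def descramble(bits):
    """Inverse operation: same taps, fed from received bits."""
    out, reg = [], [0] * 9
    for s in bits:
        out.append(s ^ reg[8] ^ reg[3])
        reg = [s] + reg[:-1]
    return out

def nrzi(bits):
    """NRZI coding (the x + 1 stage): a 1 toggles the line, a 0 holds it."""
    level, out = 0, []
    for b in bits:
        level ^= b
        out.append(level)
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0] * 3
print(descramble(scramble(data)) == data)   # True: round trip recovers data
```

The scrambler breaks up long runs of identical bits so the receiver's phase-locked loop always sees transitions to lock onto, and NRZI makes the stream polarity-insensitive.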
Figure 3
SDI and SHR
In 1990 I wrote the specification for the "perfect" coax cable for use in computer interfacing applications. That spec turned out to be our SHR cable, with -3 dB performance at 200 MHz over 100 feet in an RG-6 size. What does this have to do with SDI? Back then, this cable required fabrication technology that was very expensive; it was three to four years before it became cost-feasible. Meanwhile, in '93, Belden released their 1694A cable to the market to support SDI. Up until that time, everyone used Belden 8281 for long-run, low-loss situations. With the growing popularity of SDI in video production systems, a lower loss cable became necessary. Amazingly, the 1694A and the Extron SHR cable specs are nearly identical.
SHR cable can be used to connect equipment having the SDI feature. Now, here's where routing SDI and routing standard analog video signals part company. Although the SHR cable spec highlights a 200 MHz attenuation point, that does NOT mean a cable run longer than 100 feet will not be useful; the 200 MHz marker is simply useful in spec'ing cable. With SDI, we are dealing with a digital signal, not an analog signal with amplitude-sensitive issues. As long as the bit transitions can be detected reliably, an SDI signal may be run long distances (hundreds of feet). We talk about the 3 dB point on analog signals because it relates directly to image quality. For SDI, all we need do is recover the digital numbers that represent the signal values. For quality transmission and recovery of data on SDI, you must pay attention to proper routing and termination techniques.
Words to the Wise
Like analog signals, SDI data can be corrupted by improper termination or routing that results in cable reflections. Maintaining a clean distribution path with SDI means that decoding will largely be a function of the decoder sensitivity at the receiving end. Assuming that bit transitions are recognizable, the decoder will only be limited by its peak-to-peak sensitivity. Be sure to apply all the analog signal techniques you have learned to realize the most from a digital component application.