How the TV Process Works
Why do you need to know how the TV process works? This is another of those "knowledge is power" things. The more you know about the TV process the easier it will be to use the tools in creative new ways and to solve the inevitable problems that crop up during TV productions. So, let's start at the beginning with...
Fields and Frames

When you get right down to it, both motion pictures and TV are based solidly on an illusion. Strictly speaking, there is no subject matter "motion" in TV or motion picture images.

Interestingly, the foundation for motion pictures was established in 1877 with a $25,000 bet. For decades an argument had raged over whether a racehorse ever had all four hooves off the ground at the same time. (Some people must have a lot of time on their hands to sit and debate things like this!) In an effort to settle the issue once and for all, an experiment was set up in which a rapid sequence of photos was taken of a running horse. And, yes, it was found that for brief moments a racehorse does have all four feet off the ground at the same time.

This experiment established something even more important: it was discovered that if a sequence of still pictures was presented at a rate of about 16 or more per second, the individual pictures would blend together, giving the impression of a continuous, uninterrupted image. In this case, of course, the individual pictures varied slightly to reflect changes over time, and the illusion of motion was created when the pictures were presented in an uninterrupted sequence.

In the illustration on the right you can more clearly see how a sequence of still images creates an illusion of movement. A more primitive version of this can be seen in the "moving" lights of a theater marquee or the "moving" arrow of a neon sign suggesting that you come in and buy something.

Although early silent films used a basic frame (picture) rate of 16 to 18 per second, when sound was introduced this rate was increased to 24 per second, primarily to meet the quality needs of the sound track. (Actually, to reduce flicker, today's motion picture projectors use a two-bladed shutter that projects each frame twice, giving an effective rate of 48 images per second.)
Unlike broadcast television, which has frame rates of 25 or 30 per second depending on the country, film has maintained a worldwide, 24-frame-per-second sound standard for decades. The NTSC (National Television System Committee) system of television used in the United States, Canada, Japan, Mexico, and a few other countries reproduces pictures (frames) at a rate of approximately 30 per second. Of course, this presents a bit of a problem in converting film to TV (mathematically, 24 doesn't go into 30 very well), but we'll worry about that later.

A motion picture camera records a completely formed picture on each frame of film, just like the still pictures on a roll of film in a 35mm camera. The motion picture camera simply takes the individual pictures at a rate of 24 per second.

Things are different in TV. In a video camera each frame is comprised of hundreds of horizontal lines, and along each of these lines there are thousands of points of brightness and color information. This information is electronically discerned in the TV camera (and later reproduced on a TV display) in a left-to-right, top-to-bottom scanning sequence, similar to the movement of your eyes as you read a section of this page.

To reduce flicker and brightness variations during the scanning process, as well as to solve some technical limitations, it was originally decided to divide the scanning process into two halves: the odd-numbered lines are scanned first, and then the even-numbered lines are interleaved in between to create a complete picture. Not surprisingly, this process is referred to as interleaved or interlaced scanning.

In this enlarged TV image illustration we've colored the odd lines green and the even lines yellow to separate them. By removing the color we can see how they combine to create the black-and-white video picture on the right. (A color TV picture, which is a bit more complex, will be described later.)
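Although the details of film-to-TV conversion are deferred until later, the arithmetic behind the standard solution can be sketched now. The technique (commonly called 3:2 pulldown, a name not yet introduced in this text) holds alternating film frames for two and then three video fields, so that every 4 film frames fill 10 fields, turning 24 film frames per second into 60 fields (30 interlaced frames). The Python below is an illustrative sketch, not broadcast code:

```python
def three_two_pulldown(film_frames):
    """Map 24-fps film frames onto 60 fields/s (30 fps interlaced) video.

    Successive film frames are held for 2 and then 3 video fields in an
    alternating pattern, so 4 film frames fill 10 fields (5 video frames).
    """
    fields = []
    for i, frame in enumerate(film_frames):
        hold = 2 if i % 2 == 0 else 3   # the alternating 2-3-2-3 pattern
        fields.extend([frame] * hold)   # repeat the frame for that many fields
    return fields

# 4 film frames become 10 fields: A A B B B C C D D D
fields = three_two_pulldown(["A", "B", "C", "D"])
```

Because 24 × 2.5 = 60, the average hold of 2.5 fields per film frame is exactly what makes the two rates line up.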
Each of these half-frame passes (either all of the odd- or all of the even-numbered lines, or the green or the yellow lines in the illustration) is called a field. The completed (two-field) picture, as we've previously noted, is called a frame. Once a complete picture (frame) is scanned, the whole process starts over again. The slight changes between successive pictures are fused together by human perception, giving the illusion of continuous, uninterrupted motion.

Today, rather than using an interlaced approach to scanning, some video systems (including computer monitors and some of the new digital television standards) use a progressive or non-interlaced scanning approach, in which the lines are reproduced in a simple 1-2-3 sequence rather than an odd (1-3-5) and then even (2-4-6) sequence. Progressive scanning has a number of advantages, including greater clarity and the ability to more easily interface with computer-based video equipment, but it places greater technical demands on the TV system. The interlaced approach, although necessary before recent advances in technology, results in some minor picture artifacts, or distortions in the picture, including variations in color. As we will see in the next module, the specifications for digital and high-definition television (DTV/HDTV) allow for both progressive and interlaced
scanning.

The lens of a television camera forms an image on a light-sensitive target inside the camera, in the same way a motion picture camera forms an image on film. But instead of film, television cameras commonly use a solid-state, light-sensitive receptor called a CCD (charge-coupled device) or a CMOS (complementary metal-oxide semiconductor, for those who really need to know what such things stand for!) sensor. Both of these are commonly referred to as "chips," and they are able to detect brightness differences at different points throughout the image area.

The target area of a chip (the small rectangular area near the center of this photo) contains from hundreds of thousands to millions of pixel (picture element) points, each of which can electrically respond to the amount of light focused on its surface. A very small section of a chip is represented below, enlarged several thousand times, with the individual pixels shown in blue.

The differences in image brightness detected at each of these points on the surface of the chip are changed into electric voltages. Electronics within the camera's scanning system regularly check each pixel area to determine the amount of light falling on its surface. This sequential information is directed to an output amplifier along the path shown by the red arrows. The readout of this information is continually repeated, creating a constant sequence of changing field and frame information. (This process, especially as it relates to color information, will be covered in more detail in Module 15.)

In a sense, this whole process is reversed in the TV receiver: the pixel-point voltages generated in a camera are changed back into light, which we see as an image on our TV screens.
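The field-and-frame structure described above can be sketched in a few lines of Python. Treating a frame simply as a list of scan lines (a deliberate simplification; real video lines carry brightness and color samples), the two interlaced fields are just the odd- and even-numbered lines, and weaving them back together reconstructs the full frame:

```python
def split_into_fields(frame):
    """Split a frame (a list of scan lines) into its two interlaced fields.

    Lines are numbered from 1, so the "odd" field holds lines 1, 3, 5...
    (list indices 0, 2, 4...) and the "even" field holds lines 2, 4, 6...
    """
    odd_field = frame[0::2]    # every other line, starting with line 1
    even_field = frame[1::2]   # every other line, starting with line 2
    return odd_field, even_field

def weave_fields(odd_field, even_field):
    """Interleave the two fields back into a complete frame."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.extend([odd_line, even_line])
    return frame

frame = [f"line {n}" for n in range(1, 7)]   # a toy 6-line frame
odd, even = split_into_fields(frame)
assert weave_fields(odd, even) == frame      # two fields rebuild the frame
```

Each call to `split_into_fields` corresponds to one complete scan: the odd field is read out first, the even field second, and the pair together make one frame.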
Electronic signals as they originate in microphones and cameras are analog (also spelled analogue) in form. This means that the equipment detects signals in terms of continuous variations in relative strength or amplitude. In audio this translates into volume or loudness; in video it's the brightness component of the picture. As illustrated above, these analog signals can be changed into digital data (computer 0s and 1s) before progressing through subsequent electronic equipment.

The top part of the illustration below shows how an analog signal can rise and fall over time to reflect changes in the original audio or video source. (This is the same as the pink waveform represented at the left of the drawing above.) To change an analog signal to digital, the wave pattern is sampled at a high rate of speed, and the amplitude at each of those sampled moments (shown in blue-green on the left) is converted into a number equivalent. These numbers are simply combinations of the zeros and ones used in computer language. Since we are dealing with numerical quantities, this conversion process is appropriately called quantizing.

Once the information is converted into numbers, we can do some interesting things (generally, special effects) by adding, subtracting, multiplying, and dividing those numbers. The faster all this is done, the better the audio and video quality will be, but this also means that more data, or bandwidth, will be involved. Thus, we are frequently dealing with the difference between high-quality equipment that can handle ultra-high-speed data rates and lower-level (less expensive) consumer equipment that relies on a lower sampling rate. This answers the question as to why some video recorders cost $500 and others cost $100,000.
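The sample-and-quantize process just described can be sketched in Python. The function below, the sampling rate, and the 8-bit (256-level) resolution are all illustrative choices for this sketch, not values from the text; the idea is simply that a continuously varying amplitude is measured at regular moments and each measurement is rounded to the nearest of a fixed set of numbered levels:

```python
import math

def quantize(signal, sample_step, levels):
    """Sample an analog waveform and quantize each sample to an integer code.

    signal: a function of time returning an amplitude in [-1.0, 1.0]
    sample_step: time between samples (smaller step = higher sampling rate)
    levels: number of discrete amplitude steps (e.g. 256 for 8-bit)
    """
    samples = []
    t = 0.0
    while t < 1.0:                                   # one second of signal
        amplitude = signal(t)                        # sample the wave...
        code = round((amplitude + 1.0) / 2.0 * (levels - 1))
        samples.append(code)                         # ...and store a number
        t += sample_step
    return samples

# A 1 Hz sine wave sampled 8 times per second, quantized to 8 bits
codes = quantize(lambda t: math.sin(2 * math.pi * t), 1 / 8, 256)
```

Raising the sampling rate (a smaller `sample_step`) or the number of levels makes the digital copy track the analog original more faithfully, which is exactly the quality-versus-bandwidth trade-off the paragraph above describes.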
Compared to a digital signal, an analog signal would seem to be the most accurate and ideal representation of the original. While this may initially be true, a problem arises from the need for constant amplification and re-amplification of the signal throughout every stage of the audio and video process. Whenever a signal is reproduced and amplified, noise is inevitably introduced, which degrades the signal. In audio this can take the form of a hissing sound; in video it appears as a subtle background "snow" effect.

By converting the original analog signal into digital form, this noise buildup can be virtually eliminated, even when the signal is amplified or "copied" dozens of times. Because digital signals are limited to the form of zeros and ones (0s and 1s, or binary computer code), no "in between" information (spurious noise) can creep in to degrade the signal. When we focus on digital audio, we'll delve more deeply into some of these issues.

Today's digital audio and video equipment has borrowed heavily from developments in computer technology; so heavily, in fact, that the two areas seem to be merging. Satellite services such as DISH and DirecTV make use of digital receivers that are, in effect, specialized computers. Progressive radio and TV stations have already switched over to digital signal processing. And you probably listen to music recorded on a shirt-pocket-sized device that is capable of storing several hours of digitized music. Some of the advantages of digital electronics in video production are discussed here.
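Why copying degrades analog signals but not digital ones can be demonstrated with a small simulation (the function names and the noise level are invented for this sketch). Every copy, analog or digital, picks up a little random noise; the difference is that a digital copy snaps each value back to the nearest legal level (0 or 1), discarding the noise before it can accumulate:

```python
import random

def analog_copy(samples, noise=0.02):
    """Each analog generation adds random noise that can never be removed."""
    return [s + random.uniform(-noise, noise) for s in samples]

def digital_copy(bits, noise=0.02):
    """A digital generation picks up the same noise, but each value is
    snapped back to the nearest legal level (0 or 1), discarding it."""
    noisy = [b + random.uniform(-noise, noise) for b in bits]
    return [1 if v >= 0.5 else 0 for v in noisy]

original = [0, 1, 1, 0, 1, 0, 0, 1]
copy = original
for _ in range(100):              # one hundred generations of copying
    copy = digital_copy(copy)
assert copy == original           # still a bit-perfect copy
```

Run the same loop with `analog_copy` and the samples drift further from the original with every generation; that accumulated drift is the hiss or "snow" described above. (The digital copy stays perfect here because the simulated noise is far smaller than the gap between the 0 and 1 levels; noise large enough to push a value past the halfway threshold would flip a bit.)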