Updated: 10/25/2003

 
Module 8

 

  

How the TV Process Works

Why do you need to know how the TV process works?

Well, this is another of those "knowledge is power" things. The more you know about the TV process the easier it will be to use the tools in creative new ways, and to solve the inevitable problems that crop up during TV productions.

So, let's start at the beginning with...

   

 

Fields and Frames

When you get right down to it, both motion pictures and TV are based solidly on an illusion. Strictly speaking, there is no "motion" in TV or motion picture images.

Interestingly, the foundation for motion pictures was established in 1877 with a $25,000 bet. For decades an argument had raged over whether a race horse ever had all four hooves off the ground at the same time. (Some people must have a lot of time on their hands to sit and debate things like this!)

In an effort to settle the debate once and for all, an experiment was set up in which a rapid sequence of photos was taken of a running horse. And, yes, the photos showed that for brief moments a race horse does have all four feet off the ground at the same time.

But this experiment established something even more important. It was discovered that if this sequence of still pictures was presented at a rate of about 16 or more per second, these individual pictures would blend together, giving the impression of a continuous, uninterrupted image. In this case, of course, the individual pictures varied slightly to reflect changes over time, and the illusion of motion was created when the pictures were presented in an uninterrupted sequence.

In the illustration on the right you can more clearly see how a sequence of still images can create an illusion of movement.

A more primitive version of this can be seen in the "moving" lights of a theater marquee or "moving" arrow of a neon sign suggesting that you come in and buy something.

Although early silent films used a basic frame (or picture) rate of 16 to 18 per second, when sound was introduced this rate was increased to 24 per second, primarily to meet the quality needs of the sound track. (Actually, to reduce flicker, today's motion picture projectors use a two-bladed shutter that projects each frame twice, giving an effective rate of 48 images per second.)

Unlike broadcast television, which has frame rates of 25 or 30 per second depending on the country, film has for decades maintained a worldwide 24-frame-per-second standard for sound film.

The NTSC (National Television System Committee) system of television used in the United States, Canada, Japan, Mexico, and a few other countries, reproduces pictures (frames) at a rate of approximately 30 per-second.

Of course, this presents a bit of a problem in converting film to TV  (mathematically, 24 doesn't go into 30 very well), but we'll worry about that later.
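As a rough illustration of the mismatch, consider that one second of NTSC video contains about 60 interlaced fields, but only 24 film frames are available to fill them. The minimal Python sketch below is purely illustrative (not a description of any particular conversion system); it shows why an even split is impossible and how the common workaround, often called 2:3 (or 3:2) pulldown, alternates film frames between two and three video fields.

```python
# A minimal sketch (illustrative only) of the 24-to-30 frame-rate mismatch.
# One second of NTSC video contains about 60 fields, but only 24 film frames
# exist to fill them, so 60 / 24 = 2.5 fields per film frame -- not a whole
# number. The common workaround (often called "2:3 pulldown") alternates film
# frames between 2 and 3 fields so the average works out to 2.5.

FILM_FPS = 24
VIDEO_FIELDS_PER_SECOND = 60   # ~30 frames x 2 interlaced fields

fields_per_film_frame = VIDEO_FIELDS_PER_SECOND / FILM_FPS
print(fields_per_film_frame)   # 2.5 -- no even split

# Alternating 2 and 3 fields per film frame fills exactly 60 fields.
pattern = [2, 3] * (FILM_FPS // 2)
print(sum(pattern))            # 60
```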

A motion picture camera records a sequence of completely formed pictures on each frame of film, just like the still pictures on a roll of film in your 35mm camera. The motion picture camera just takes the individual pictures at a rate of 24 per-second.

Things are different in TV. In a video camera each frame consists of hundreds of horizontal lines. Along each of these lines there are thousands of points of brightness and color information. This information is electronically discerned in the TV camera (and then later reproduced on a TV display) in a left-to-right, top-to-bottom scanning sequence. This sequence is similar to the movement of your eyes as you read a section of this page.

To reduce flicker and brightness variations during the scanning process, as well as to solve some technical limitations, it was decided to divide the scanning process into two halves. The odd-numbered lines are scanned  first and then the even-numbered lines are interleaved in between to create a complete picture. Not surprisingly, this process is referred to as interleaved or interlaced scanning.

Note the extreme closeup of a section of a TV image shown on the left below. In the illustration on the right we've colored the odd lines green and the even lines yellow so you can see how they combine to create a full video picture. (A color TV picture, which is a bit more complex, will be described later.)

Each of these half-frame passes (either all of the odd or even-numbered lines, or the green or the yellow lines in the illustration) is called a field. The completed (two-field) picture is called a frame, as we've previously noted.

Once a complete picture (frame) is scanned, the whole process starts over again. The slight changes between successive pictures are fused together by human perception, giving the illusion of continuous, uninterrupted motion.
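To make the interlacing idea concrete, here is a minimal sketch (in Python, purely illustrative) that treats a frame as a simple list of scan lines, pulls out the odd- and even-numbered lines as two fields, and then interleaves them back into a complete frame.

```python
# A minimal sketch of interlaced scanning, assuming a frame is simply a list
# of horizontal scan lines (line numbering starts at 1, as in the text).

frame = [f"line {n}" for n in range(1, 11)]   # a tiny 10-line "frame"

odd_field  = frame[0::2]   # lines 1, 3, 5, ... scanned in the first pass
even_field = frame[1::2]   # lines 2, 4, 6, ... scanned in the second pass

# Interleaving the two fields reconstructs the complete frame.
rebuilt = [None] * len(frame)
rebuilt[0::2] = odd_field
rebuilt[1::2] = even_field
assert rebuilt == frame
```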

Today, rather than using an interlaced approach to scanning, some video systems (including computer monitors and some of the new digital television standards) use a progressive or non-interlaced scanning approach, in which all of the lines (odd and even) are scanned and reproduced in a single top-to-bottom pass.

Progressive scanning has a number of advantages, including greater clarity and the ability to more easily interface with computer-based video equipment. At the same time, it also places greater technical demands on the TV system.

The interleaved approach, although necessary before recent advances in technology, results in some minor "picture artifacts," or distortions in the picture, including variations in color. Most of today's TV receivers still rely on the interleaved approach. 

As we will see in the next module, the specifications for digital and high-definition television (DTV/HDTV) allow for both progressive and interlaced scanning.
  

The Camera's Imaging Device

The lens of the television camera forms an image on a light-sensitive target inside the camera in the same way a motion picture camera forms an image on film. But, instead of film, television cameras commonly use solid-state, light-sensitive receptors called CCDs (charge-coupled devices, or "chips") that are able to detect brightness differences at different points throughout the image.

The target area of the CCD (the small rectangular area near the center of this photo) contains from hundreds of thousands to millions of pixel (picture element) points, each of which can electrically respond to the amount of light focused on its surface.  

A very small section of a CCD is represented below—enlarged several thousand times. The individual pixels are shown in blue. The differences in image brightness detected at each of these points on the surface of the CCD are changed into electric voltages.

Electronics within the camera scanning system regularly check each pixel to determine the amount of light falling on its surface. This sequential information is directed to an output amplifier along the path shown by the red arrows. The sequential readout of information is continually repeated, creating a constant sequence of changing field and frame information. (This process, especially as it relates to color information, will be covered in more detail in Module 15.)
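The readout sequence itself can be pictured with a small sketch. This is not how any real camera's electronics are programmed; it simply models the CCD target as a grid of brightness values and walks it left to right, top to bottom, as described above.

```python
# A rough sketch of the sequential pixel readout described above.
# The CCD target is modeled as a 2-D grid of brightness values (0-255);
# the readout walks it left to right, top to bottom, producing the stream
# of values sent along to the output amplifier.

def read_out(ccd_target):
    """Yield pixel brightness values in scanning order."""
    for row in ccd_target:            # top to bottom
        for brightness in row:        # left to right
            yield brightness          # one "voltage" per pixel

ccd_target = [
    [ 10,  40,  90],
    [120, 200, 255],
]
signal = list(read_out(ccd_target))
print(signal)   # [10, 40, 90, 120, 200, 255]
```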

In a sense, this whole process is reversed in the TV receiver. The pixel-point voltages generated in a camera are then changed back into light, which we see as an image on our TV screens.


Analog and Digital Signals

Electronic signals as they originate in microphones and cameras are analog (also spelled analogue) in form. This means that the equipment detects signals in terms of continuous variations in relative strength or amplitude. In audio this would be loudness; in video it would be the brightness component of the picture.

As illustrated on the left, in professional facilities these signals are then changed into digital data (computer 0s and 1s) before progressing through subsequent electronic equipment.

 

The top part of the illustration on the left shows how an analog signal can smoothly rise and fall over time to reflect changes in the original audio or video source.

In order to change an analog signal to digital, the wave pattern is sampled at a high rate of speed and the amplitude at each of those sampled moments is converted into a number equivalent.

It's as if each of the blue columns on the left (which represents a corresponding point on the analog signal above) is instantly assigned a numerical value before it is sent out to represent the original signal. Since we are dealing with numerical quantities, this conversion process is appropriately called quantizing.
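A simplified sketch of sampling and quantizing (in Python, with an arbitrary sine wave standing in for the analog signal and an assumed 8-bit, 256-level scale) looks like this:

```python
# A simplified sketch of analog-to-digital conversion: the smoothly varying
# analog wave is sampled at regular intervals and each sampled amplitude is
# rounded ("quantized") to the nearest whole-number level.

import math

def analog_wave(t):
    """Stand-in for a smoothly varying analog signal (amplitude -1 to +1)."""
    return math.sin(2 * math.pi * t)

SAMPLE_RATE = 16      # samples per cycle -- higher means better fidelity
LEVELS = 256          # 8-bit quantizing: 256 possible numeric values

samples = []
for n in range(SAMPLE_RATE):
    amplitude = analog_wave(n / SAMPLE_RATE)            # sample the wave
    level = round((amplitude + 1) / 2 * (LEVELS - 1))   # quantize to 0-255
    samples.append(level)

print(samples)    # the digital stand-in for the original analog wave
```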

The faster all this is done, the better the audio and video quality will be, of course—but also the more "space" (bandwidth) that's required to record or transmit the signal. Thus, we are frequently dealing with the difference between high-quality equipment that can handle ultra high-speed data rates, and lower-level (less expensive) consumer equipment that relies on a lower sampling rate. This answers the question as to why some video recorders cost $500 and others cost nearly $100,000.
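The arithmetic behind that tradeoff is simple: the raw data rate is the sampling rate multiplied by the number of bits used for each sample. The figures below are illustrative only, not the specifications of any particular recording format.

```python
# Back-of-the-envelope arithmetic (illustrative numbers only): doubling the
# sampling rate doubles the "space" needed to record or transmit the signal.

bits_per_sample = 8

for samples_per_second in (13_500_000, 27_000_000):
    bits_per_second = samples_per_second * bits_per_sample
    print(f"{samples_per_second:>12,} samples/s -> {bits_per_second / 1e6:6.1f} Mbit/s")
```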

Actually, our ears and eyes don't need every bit of the information in a continuous analog wave to get a true impression of the original signal. If the sampling rate is fast enough, we won't notice the "holes" (the spaces between the blue lines above) in the data stream.

Thus, original analog audio and video signals are always "compressed" to some degree in the analog-to-digital conversion process.  The issue of "quality" rests on how much.  We'll revisit this issue a bit later when we focus on the issue of compression.

Once the information is converted into numbers, we can do some very interesting things (generally, special effects) by adding, subtracting, multiplying and dividing the numbers.
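For example, once each pixel is just a number, brightening an image or dissolving (mixing) between two images becomes plain arithmetic. The tiny lists and function names below are hypothetical, used only to show the idea.

```python
# Once picture information is just numbers, "special effects" become simple
# arithmetic. A hypothetical sketch: brighten one image and cross-fade (mix)
# it with another by adding and multiplying pixel values.

def brighten(pixels, amount):
    return [min(255, p + amount) for p in pixels]

def cross_fade(pixels_a, pixels_b, mix):
    """mix = 0.0 shows only image A, 1.0 shows only image B."""
    return [round(a * (1 - mix) + b * mix) for a, b in zip(pixels_a, pixels_b)]

image_a = [10, 50, 90, 130]
image_b = [200, 180, 160, 140]

print(brighten(image_a, 40))              # [50, 90, 130, 170]
print(cross_fade(image_a, image_b, 0.5))  # halfway between the two images
```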

Compared to the digital signal, an analog signal would seem to be the most accurate and ideal representation of the original signal. While this may initially be true, problems arise from the need to repeatedly amplify and re-amplify the signal at every stage of the audio and video process.

Whenever a signal is reproduced and amplified, noise is inevitably introduced, which degrades the signal. In audio this can take the form of a hissing sound; in video it appears as a subtle background "snow" effect.

By converting the original analog signal into digital form, this noise buildup can be virtually eliminated, even though the signal is amplified or "copied" dozens of times. Because digital signals are limited to zeros and ones (0s and 1s, or binary computer code), no "in between" information can creep in to degrade the signal.
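A toy comparison makes the point. In the sketch below (illustrative values only), each analog "copy" adds a little random noise, while each digital copy simply reproduces the 0s and 1s, so after thirty generations the digital version is still identical to the original.

```python
# A toy comparison of generational copying. Each analog copy adds a little
# random noise; a digital copy reproduces the 0s and 1s exactly.

import random

def analog_copy(signal, noise=2.0):
    return [s + random.uniform(-noise, noise) for s in signal]

def digital_copy(bits):
    return list(bits)   # 0s and 1s copy perfectly

analog = [100.0, 150.0, 200.0]
digital = [1, 0, 1, 1, 0]

for _ in range(30):        # thirty "generations" of copying
    analog = analog_copy(analog)
    digital = digital_copy(digital)

print(analog)    # noticeably drifted from the original values
print(digital)   # [1, 0, 1, 1, 0] -- identical to the original
```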

When we cover digital audio, we'll delve more deeply into some of these issues.

Today's digital audio and video equipment has borrowed heavily from developments in computer technology—so heavily, in fact, that the two areas seem to be merging.

Today, satellite services such as DISH and DirecTV make use of digital receivers that are, in effect, specialized computers. Progressive radio and TV stations have already switched over to digital signal processing. And, very possibly, you regularly listen to music recorded on a shirt pocket-sized device that is capable of storing several hours of digitized music.

Some of the advantages of digital electronics in video production are discussed here.


 




© 1996 - 2003, All Rights Reserved. For Direct Internet Use Only.