AnimeMusicVideos.org > Guide Index
Video can be either interlaced or progressive. In progressive video, every pixel on the screen is refreshed in order (in the case of a computer monitor) or simultaneously (in the case of film). Interlaced video is refreshed twice every frame: first every even scanline is redrawn, and then every odd scanline. This means that while NTSC has a framerate of 29.97 frames per second, the screen is actually being partially redrawn 59.94 times a second. In other words, a half-frame is being drawn to the screen roughly every 60th of a second. This leads to the notion of fields.
We already know that a frame is a complete picture that is drawn onto the screen. But what is a field? A field is one of the two half-frames in interlaced video. In other words, NTSC has a refresh rate of 59.94 fields per second.
This has very important ramifications when it comes to working with digital video. When working on a computer, it's very easy to resize your video down from 720x480 to something like 576x384 (a simple reduction in frame size). However, if you're working with interlaced video, this is an extremely bad idea. Resizing video to a lower resolution essentially takes a sample of the pixels from the original source and blends them together to create the new pixels (again, a gross simplification, but it should suffice). This means that when you resize interlaced video you wind up blending scanlines together, and those scanlines may belong to two completely different images! For example:
Image in full 720x480 resolution
Enlarged portion - notice the interlaced scanlines are distinct.
Image after being resized to 576x384
Enlarged portion - notice some scanlines have been blended together!
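To see why the blending happens, here is a minimal pure-Python sketch. The tiny 8-line, one-pixel-wide "frame" and the pixel values are invented for illustration: even scanlines come from one field (value 0), odd scanlines from the other field (value 255), i.e. two different moments in time, and a naive 2:1 vertical resize averages adjacent scanlines together.

```python
# Hypothetical interlaced "frame": 8 scanlines, 1 pixel wide.
# Even rows come from field A (value 0), odd rows from field B (value 255).
frame = [0 if row % 2 == 0 else 255 for row in range(8)]

# Naive 2:1 vertical resize: average each pair of adjacent scanlines.
resized = [(frame[i] + frame[i + 1]) / 2 for i in range(0, 8, 2)]

print(resized)  # every output scanline is a 50/50 blend of the two fields
```

Every resulting scanline is a mush of both fields, which is exactly the ghosting you see in the enlarged portion above.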
This means that you are seriously degrading your video quality by doing this! If you want to avoid it, you have a couple of options, namely deinterlacing and inverse telecine.
Deinterlacing is the process of interpolating the pixels in between the scanlines of a single field (in other words, reconstructing a semi-accurate companion field, which turns the field into a full frame). This has several drawbacks: not only does it take a lot of processing power, it is also inherently error-prone. You would really only want to do this in cases where inverse telecine isn't an option.
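As a rough illustration of the interpolation involved, here is a sketch on a hypothetical one-column frame where one field has been discarded (real deinterlacers work on full images and are far more sophisticated than this):

```python
# Hypothetical one-column "frame": even rows are the field we kept,
# odd rows are missing (None) after discarding the other field.
field = [10, None, 30, None, 50, None, 70, None]

def deinterlace(lines):
    """Fill each missing scanline by averaging its vertical neighbours.
    Assumes the kept field occupies the even rows, as above."""
    out = list(lines)
    for i, v in enumerate(out):
        if v is None:
            above = out[i - 1]
            below = out[i + 1] if i + 1 < len(out) else above
            out[i] = (above + below) / 2
    return out

print(deinterlace(field))  # [10, 20.0, 30, 40.0, 50, 60.0, 70, 70.0]
```

The interpolated lines are only guesses, which is why deinterlacing is inherently error-prone on detailed or fast-moving footage.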
Originally people needed a way to display stuff recorded on film on a television set. This posed a problem: how does one turn 24 Frames per Second film into 59.94 Fields per Second video? The process invented to do this is called 3:2 pulldown or telecining. Telecining involves manipulating the film to turn it into a format which can be watched on a TV.
The first thing that is done is that the film is slowed down by 0.1% to make it 23.976 frames per second. This is done because NTSC's 29.97 FPS is itself 0.1% slower than 30 FPS. From now on we will refer to the two rates as 24 and 30 for simplicity. Now comes the problem: how do we turn 24 FPS into 30 FPS? Those of us who have taken elementary algebra (which I hope is most of you) can see that the greatest common factor of the two numbers is 6 (24 = 6x4 and 30 = 6x5). This means that if we insert an extra frame after every 4 frames of film, we will have 30 FPS video.
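The naive insertion scheme just described can be sketched in a few lines (letters stand in for film frames):

```python
film = list("ABCD") * 6   # 24 film frames, i.e. one second of film

# Naive 24 -> 30 conversion: repeat every fourth frame once.
video = []
for i, f in enumerate(film):
    video.append(f)
    if i % 4 == 3:
        video.append(f)

print(len(film), len(video))   # 24 30
print(video[:5])               # ['A', 'B', 'C', 'D', 'D']
```

Six duplicates are inserted per second, which gives exactly 30 frames.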
There's a problem, however: this causes the video to stutter slightly, as we're essentially duplicating a frame every sixth of a second. So what can we do? Well, we can take advantage of the fact that television is interlaced and manipulate the fields which make up the 5 frames we're creating. To do this, we alternate between two and three fields for each film frame that we output (thus the term 3:2 pulldown). If we take four film frames and divide each into odd and even fields, we get the following:
Now let's interlace together the second and third frame of every series, to give us the following:
Here we can see what telecined video looks like. We've taken the second frame and stretched its fields across two frames, while the even field of the first frame in the series sticks around for an extra 60th of a second, and the odd field of the third frame does as well.
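One common phase of the pulldown pattern can be sketched as follows, with each output frame represented as a (top field, bottom field) pair. The exact phase varies between implementations (and from the diagrams above), so treat this as illustrative rather than definitive:

```python
def telecine(frames):
    """3:2 pulldown sketch: every 4 progressive film frames become
    5 interlaced video frames, each a (top_field, bottom_field) pair.
    This is one common phase of the pattern; real telecine hardware
    may start on a different phase."""
    out = []
    for a, b, c, d in zip(*[iter(frames)] * 4):
        out += [(a, a), (b, b), (b, c), (c, d), (d, d)]
    return out

print(telecine(list("ABCD")))
# [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
```

Note that only two of the five output frames actually mix fields from different film frames; the other three are clean progressive pictures that happen to be stored as field pairs.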
This gives us an interesting opportunity: if we have a video source that has undergone telecining, we can put it through a process that removes it, appropriately called inverse telecining. This essentially reconstructs the original 4 frames from every 5, turning the source back into progressive video. This has many advantages, most notably that you have fewer frames to store, so each can be given more bits or the whole file will take less space.
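Assuming the illustrative AA/BB/BC/CD/DD pulldown phase (an assumption for this sketch; real inverse-telecine filters have to detect the phase, and cope with edits that break the pattern), undoing the pulldown just means discarding the duplicated fields and re-pairing the rest:

```python
def inverse_telecine(video):
    """IVTC sketch: every group of 5 interlaced (top, bottom) frame
    pairs in the AA/BB/BC/CD/DD phase collapses back to 4 progressive
    frames. Assumes that fixed phase throughout."""
    out = []
    for f1, f2, f3, f4, f5 in zip(*[iter(video)] * 5):
        out.append(f1[0])   # frame 1 is a clean A/A pair
        out.append(f2[0])   # frame 2 is a clean B/B pair
        out.append(f3[1])   # C's fields are split across frames 3 and 4
        out.append(f5[0])   # frame 5 is a clean D/D pair
    return out

telecined = [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
print(inverse_telecine(telecined))  # ['A', 'B', 'C', 'D']
```

The duplicated fields carry no new information, which is exactly why the reconstructed 23.976 FPS version compresses better than the 29.97 FPS original.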
Here's an example of video before and after the inverse telecining process (or after and before the telecining process, if you want to call it that):
Notice that the B fields from the second and third frames have been reconstructed into one frame, which has become the second frame of the series. As you can see, inverse telecining dramatically improves the video quality when viewed on a computer monitor.
If you are using a codec that supports 23.976 FPS video, I highly suggest editing in this format. It requires an extra step of preparation, but you wind up with smaller or better-looking files in the end.
However, if you're editing in something like DV, which does not support any framerate besides 29.97 or 25, then you do not have this option.