When storing video digitally there are two philosophies you can follow: RGB and YUV. Each has a few variations that change how accurate they are, but that's it (i.e. RGB16, RGB24, RGB32, and then YUY2 and the other YUV formats).
RGB stores video rather intuitively. It stores a color value for each of the 3 color levels, Red Green and Blue, on a per pixel basis. The most common RGB on computers these days is RGB24 which gives 8 bits to each color level (that's what gives us the 0-255 range as 2 to the 8th power is 256), thus white is 255,255,255 and black is 0,0,0.
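To make the bit allocation concrete, here is a small Python sketch (the function names are my own, just for illustration) that packs the three 8-bit channels of an RGB24 pixel into 3 bytes and splits them back out:

```python
def pack_rgb24(r, g, b):
    """Pack three 8-bit channel values (0-255 each) into one 24-bit integer."""
    assert all(0 <= c <= 255 for c in (r, g, b))
    return (r << 16) | (g << 8) | b

def unpack_rgb24(pixel):
    """Split a 24-bit integer back into its (R, G, B) channels."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

# White is 255,255,255 and black is 0,0,0, exactly as described above.
white = pack_rgb24(255, 255, 255)
black = pack_rgb24(0, 0, 0)
```

Each channel gets its own byte, which is where the 24 in RGB24 comes from (RGB32 just adds a fourth byte, usually for alpha).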
YUV colorspace is a little different. Since the human eye perceives changes in brightness better than changes in color, why not focus more on brightness than on the actual color level? In YUV colorspace you have 3 values: Luminance, or just Luma (which is the brightness level), abbreviated as Y. U is the Blue Difference sample, and V is the Red Difference sample. What does the "difference" mean? U and V record how far the blue and red channels stray from the brightness: roughly U = B - Y and V = R - Y, scaled and offset to fit the available range.
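As a rough sketch of the idea, here is an RGB-to-YUV conversion using full-range BT.601 ("JPEG-style") coefficients - my choice for illustration; the exact constants vary between standards:

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601-style RGB -> YUV conversion (one common variant).

    Y is a weighted brightness average (green counts most, matching how
    the eye sees), while U and V are the scaled blue- and red-difference
    signals, offset by 128 so they fit in 0-255.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 128 + 0.564 * (b - y)   # blue difference, B - Y
    v = 128 + 0.713 * (r - y)   # red difference,  R - Y
    clip = lambda x: max(0, min(255, round(x)))
    return clip(y), clip(u), clip(v)
```

Note that any pure gray (including white and black) has zero color difference, so U and V sit at their neutral midpoint of 128 and all the information lives in Y.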
YUV is generally stored in 16 bits per pixel: every pixel gets its own 8-bit Luma sample, but each pair of pixels shares one 8-bit U and one 8-bit V sample, so Chroma averages out to 4 bits per pixel per sample. This works by sampling Chroma only half as often as Luma - every 2 pixels share the same color values.
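A quick back-of-the-envelope sketch of where the 16 bits per pixel comes from (each sample is still a full 8 bits; it is the sharing that saves space):

```python
def avg_bits_per_pixel(y_samples, u_samples, v_samples, pixels_in_group):
    """Average storage cost per pixel when chroma samples are shared.

    Every sample (Y, U or V) is 8 bits; a "group" is the run of pixels
    that shares one set of chroma samples.
    """
    total_samples = y_samples + u_samples + v_samples
    return total_samples * 8 / pixels_in_group

# 4:2:2 (e.g. YUY2): 2 pixels share one U and one V -> 16 bits/pixel
yuy2 = avg_bits_per_pixel(y_samples=2, u_samples=1, v_samples=1, pixels_in_group=2)
# No subsampling (4:4:4): every pixel carries its own U and V -> 24 bits/pixel
full = avg_bits_per_pixel(1, 1, 1, 1)
# 4:2:0: a 2x2 block of 4 pixels shares one U and one V -> 12 bits/pixel
yv12 = avg_bits_per_pixel(4, 1, 1, 4)
```

So 4:2:2 YUV costs the same per pixel as RGB16 while keeping full-resolution brightness, which is exactly the trade-off described above.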
So basically YUV stores more relevant data at a lower accuracy than RGB.
This is important because when you convert between the two colorspaces, either you lose some data, or assumptions have to be made and data must be guessed at or interpolated.
The top image is an original and below it is the same image sampled with YUV 4:2:0; notice how the colors along the hairline at the top left become muddled because of the chroma averaging between pixels.
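To see that muddling numerically, here is a toy sketch (again using full-range BT.601-style coefficients, my choice for illustration) that puts a pure red pixel next to a pure blue one, averages their chroma the way shared sampling does, and converts back to RGB - neither pixel survives the round trip:

```python
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    clip = lambda x: max(0, min(255, round(x)))
    return clip(y), clip(128 + 0.564 * (b - y)), clip(128 + 0.713 * (r - y))

def yuv_to_rgb(y, u, v):
    clip = lambda x: max(0, min(255, round(x)))
    r = y + 1.402 * (v - 128)
    g = y - 0.344 * (u - 128) - 0.714 * (v - 128)
    b = y + 1.772 * (u - 128)
    return clip(r), clip(g), clip(b)

red, blue = (255, 0, 0), (0, 0, 255)
y1, u1, v1 = rgb_to_yuv(*red)
y2, u2, v2 = rgb_to_yuv(*blue)

# Chroma subsampling: the two pixels keep their own Luma but share one
# chroma pair, here taken as the average of their U and V values.
shared_u, shared_v = (u1 + u2) // 2, (v1 + v2) // 2

red_back = yuv_to_rgb(y1, shared_u, shared_v)
blue_back = yuv_to_rgb(y2, shared_u, shared_v)
# Both pixels come back shifted toward purple - that detail is gone for good.
```

This is exactly what happens along a sharp color edge like the hairline: the brightness edge survives, but the two colors bleed into each other.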
Converting back and forth between colorspaces is bad because you can lose detail, and it also slows down the process. Thus you want to avoid colorspace conversions as much as possible. But how?
It's important to know how your various programs deal with video.
Premiere, and almost all video editing programs, work in RGB because it's easier to deal with mathematically. Premiere demands all incoming video be in RGB32 - that is, 24-bit color plus an 8-bit alpha channel - and will convert any YUV footage you give it.
AVISynth itself can work in either colorspace, but YUV is preferred and most (if not all) AVISynth filters run in YUV colorspace.
TMPGEnc's VFAPI plugins all operate in RGB colorspace because all of its filtering and processing runs in RGB. This is also true of FlaskMPEG.
VirtualDub runs in RGB when you use Normal Recompress or Full Processing Mode (in the Video dropdown menu). All of VirtualDub's internal functions and filters run in RGB colorspace only. However, Fast Recompress doesn't decode the video to RGB, and instead just shunts whatever your source is into the compressor you've selected - thus if your source is YUV it shunts the video data as YUV into the video compressor.
This is important because almost all distribution video codecs run in YUV colorspace. This includes DivX, XviD, MPEG-1, MPEG-2, MPEG-4, DV, etc. HuffYUV (guess where the name comes from) works in YUV natively, but you can indeed compress RGB video data with it (the files will just be bigger). There's an option in the codec controls to automatically convert incoming RGB video data to YUV in order to save space if you want to.
Thus using Fast Recompress in VirtualDub (or, by the same token, NanDub) is not only the fastest way to transcode video but also the least costly in terms of colorspace conversions. The drawback is that you cannot use any of VirtualDub's filters in Fast Recompress mode - VirtualDub never even touches the incoming video stream. So how can you filter? Use AVISynth!
By scripting all your filters in AVISynth and operating in YUV colorspace, you can avoid costly color conversions. The optimal scenario involves only 2 colorspace conversions: the MPEG2 from your DVD starts out in YUV, gets converted to RGB for editing in Premiere, gets converted back to YUV when you export with HuffYUV, and from that point on stays in YUV all the way through to the final video compressor. By doing this you save not only time but also quality, by avoiding needless colorspace conversions.
If you do need an RGB process after your editing, export the footage as RGB and apply that process before doing any YUV filtering. This is easy in AVISynth because you can keep track of the colorspace in use - but more about this later :)