1. Is there a way to show only the chroma channels? I've searched in a few places but haven't found an answer yet. I know a few ways to show only luma, at least...
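For reference, the luma-only tricks I have in mind are just the standard built-ins, along these lines:

    Greyscale()       # throw away the colour so only luma shows
    # or
    Tweak(sat=0)      # same idea: zero the saturation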
2. I've noticed that everyone seems to recommend Spline36Resize instead of Spline64Resize, which seems like it would be higher quality. Is there a reason for this aside from speed?
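If it matters, the kind of side-by-side I could use to eyeball the difference is something like this (source and resolution are just placeholders):

    src = AviSource("test.avi")
    a = src.Spline36Resize(848, 480).Subtitle("Spline36")
    b = src.Spline64Resize(848, 480).Subtitle("Spline64")
    StackHorizontal(a, b)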
3. Obviously you lose chroma resolution when changing from RGB to YUY2 or YV12, but what kind of loss is there when going from those back to RGB? I know I've seen Mister Hatt comment that it isn't entirely lossless. I'm still new to colorspaces in general, so I don't really know why, though I do understand the basics.
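In case it helps to show what I mean, this is the sort of round-trip test I could run to see the loss for myself (the filename is a placeholder, and I'm assuming the source loads as RGB32):

    src  = AviSource("test.avi")                     # assume an RGB32 source
    trip = src.ConvertToYV12().ConvertToRGB32()      # there and back again
    Subtract(src, trip).Levels(120, 1, 136, 0, 255)  # stretch the levels so the rounding/subsampling error becomes visible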
4. I've seen nnedi2_rpow2 recommended for resizing with powers of two, and also not recommended. I was wondering when it's actually appropriate to use it.
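The usage I've seen suggested looks roughly like this (the parameter name is as I remember it from the plugin's readme, so treat it as a guess):

    nnedi2_rpow2(rfactor=2)        # NNEDI2 doubles the size in each dimension
    Spline36Resize(1280, 720)      # then a normal resizer takes it to the final size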
5. When using a d2v source, my usual routine is to decimate frames, run cleanup filters, and then use changefps to bring the footage back up to NTSC video (some cons require this, and it gives a little more precision when editing the actual AMV). My reasoning is that this is correct, if redundant, since decimating first lets the temporal filters work better. Is this normal, or just unnecessary?
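Roughly what one of these scripts looks like for me at the moment (the filter choices and filename are just placeholders):

    MPEG2Source("episode.d2v")     # DGDecode
    Telecide(guide=1)              # field matching (Decomb)
    Decimate(cycle=5)              # 29.97 -> 23.976
    # ...cleanup / temporal filters here...
    ChangeFPS(30000, 1001)         # back up to NTSC for the con / for editing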
6. This is the longest question. I've seen some amazing results with time scaling in some videos, and I've been experimenting with different ways of doing it. My current thinking is that changefps would be best for anime in most cases, since anime is often animated at a low framerate anyway and the motion isn't precise, while in most other cases changefps would introduce visible jerkiness, such as when used on computer graphics or live action. I also know about convertfps for blending, which seems like a bad idea in a lot of cases unless you really need smooth motion and a little blending is acceptable. Finally, there is interpolation. I would like to experiment with this method more, but the only tool I've found so far is MSU FRC. Don't get me wrong, it can work pretty well sometimes, but most of the results I get are unusable. What I want to know is: am I on the right track, and is there anything I've missed?
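For reference, here is how I understand the first two options, both going from film rate up to NTSC (I've left out the MSU FRC call because I don't remember its exact syntax off-hand):

    ChangeFPS(30000, 1001)     # duplicates frames -> exact timing, but can look jerky
    ConvertFPS(29.97)          # blends neighbouring frames -> smooth, but slightly ghosty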
The way I do my time scaling at the moment is a changefps line followed by an assumefps("ntsc_video") line, which allows me to very precisely change how fast or slow I want the video to play. I mount scripts and use them in Vegas with all the slow filters commented out while editing and re-enabled for rendering. It runs fast enough on my computer. Am I getting any real benefit from doing it this way, or is it just slow, clunky, and unnecessary when I could be using the time scaling in Vegas?
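In other words, the pair of lines looks something like this (the 0.8 is just an example speed factor):

    AviSource("clip.avi")          # placeholder source at 29.97 fps
    ChangeFPS(FrameRate * 0.8)     # keep only 80% as many frames per second
    AssumeFPS("ntsc_video")        # retag to 29.97, so the clip now plays back about 1.25x faster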
7. Last one. Just curious where everyone gets all these different plug-ins aside from doom9 and avisynth.org.
Thanks for reading. These seem to be all the avisynth questions I have for now, at least.
