A few things:
If you want to use TDeint to catch any remaining combed frames, it needs to be done immediately after using TFM, not at the end of the script:
- Code: Select all
TFM(order=-1, mode=5, PP=1, field=-1, slow=2)
TDeint(mode=2, mtnmode=3, blim=100, clip2=last) # clip2=last passes TDeint the output of the TFM call above
You may also want to pass the d2v parameter to TFM so it can pull information straight from your DGIndex project file:
- Code: Select all
TFM(d2v="your_source.d2v", order=-1, mode=5, PP=1, field=-1, slow=2) # substitute the path to your actual .d2v
Crop before resizing, or make sure to resize to a standard size after doing a resize->crop. Whether you want to compensate for the slight aspect ratio distortion is up to you, but part of the problem may be that you're running some of those filters at a frame size that isn't a mod value they're optimized for. You were trying to work with an 840x462 frame. Ick. For that matter, in the second script, if the source has been resized to 1920x1080, the Crop values taken from the 848x480 script will be wrong when applied to that larger frame.
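To illustrate the mod-friendly approach, here's a sketch; the Crop values are made up for the example, not taken from your script:

```avisynth
Crop(8, 2, -8, -2)       # example values only -- substitute your own, keeping them even
Spline36Resize(848, 480) # 848x480 is mod-16; 840x462 is not mod-16 in either dimension
```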
The gradfun filters have long been superseded by flash3kyuu_deband (aka f3kdb) and Dither Tools. My personal preference is f3kdb, but either one should work fine. They also have the fringe benefit of being able to dither up to 16 bits, so you can do more 'native'* high-bit-depth encoding in x264 than just handing x264-10bit an 8-bit source (provided you're using a patched build of x264 that lets you set the --input-depth).
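For reference, a minimal f3kdb call that debands and dithers its output up to 16 bits might look like the following; the strength and grain values are illustrative, not tuned for any particular source:

```avisynth
f3kdb(range=15, Y=64, Cb=64, Cr=64, grainY=32, grainC=32, output_mode=1, output_depth=16)
# check the f3kdb docs for which output_mode (stacked vs. interleaved) your x264 pipeline expects
```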
*even if artificially-produced
Finally, this is an aside, but I generally go for clip editing rather than converting an entire block of episodes (also because I'm rather strapped for hard drive space). This can be done in an efficient way if you plan things out ahead of time:
- In addition to your regular HQ script with all of the filtering and slow options, prepare an 'LQ' script that's reduced down to only those filters that affect frame position - IVTC and Deint, basically. You don't want to do any cropping or other filtering, since this is going to be more or less throwaway. If you prefer, you can resize to a smaller resolution using BilinearResize() - use Bilinear, because it's fast and produces a soft, acceptable image that's easy to compress.
- Convert the LQ script's output to MJPEG at a bitrate where it still looks okay. For 432x240, 900-1500 kbps is acceptable. For 848x480, you'll probably want to raise it to 2500-3000 or so.
- Track through the MJPEG copy in VirtualDub and write down frame ranges you want to use.
- When you're finished, create a bunch of small scripts that consist of only two things: an Import() line which references your HQ script(s), and a line that uses Trim() to isolate the clip itself.
- You can actually just edit with these trim scripts if you want (if your NLE accepts AviSynth scripts as input, anyway), but it may cause issues with running out of memory or general instability. So you may want to use VirtualDub to convert these small scripts to clips using your lossless codec of choice and edit with those instead.
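For the record, one of those trim scripts is literally just two lines; the filename and frame numbers below are made up:

```avisynth
Import("HQ_episode01.avs") # your full filtering script
Trim(1000, 1250)           # the clip's frame range, as noted from the MJPEG copy
```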
Command line-savvy users can automate two big parts of the above process: generating the Import/Trim scripts is trivially batchable, and converting the trim scripts to lossless can be done with a one-line for-loop that invokes ffmpeg to do the conversion.
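As a sketch of the script-generation half in bash, assuming a hypothetical clips.txt that lists one "start end name" triple per line:

```shell
# Batch-generate Import/Trim scripts from a frame-range list.
# Sample clips.txt (normally you'd write this while tracking through the MJPEG copy):
cat > clips.txt <<'EOF'
100 250 op
5000 5120 scene1
EOF

# One two-line trim script per entry; HQ_episode.avs is a placeholder name.
while read -r start end name; do
  printf 'Import("HQ_episode.avs")\nTrim(%d, %d)\n' "$start" "$end" > "clip_${name}.avs"
done < clips.txt
```

The conversion half is then something like `for f in clip_*.avs; do ffmpeg -i "$f" -c:v utvideo "${f%.avs}.avi"; done`, assuming an ffmpeg build compiled with AviSynth input support.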
You can of course use a variant of this if you still want to edit with full episodes instead of clips: all you have to do is edit with the MJPEG copies, and then swap them out for the HQ scripts at the end (this is the traditional interpretation of the 'Bait-and-Switch' method described in the AVTech guides).