by Qyot27 » Sun Nov 18, 2007 7:32 pm
I think it would be advisable to explain some of the differences when encoding AAC streams, particularly because the High Efficiency profile (HE-AAC) isn't meant to be used at high bitrates, which is what videos distributed locally usually aim for, and yet I'm seeing it used anyway. The default Low Complexity (LC) profile should be used instead whenever the bitrate isn't intentionally low, as it would be for streaming (i.e. under 80 kbps, which is roughly the cutoff for HE-AAC's usefulness with normal audio streams).
The reason is that, unless the encoder is wonky, HE-AAC achieves the quality it does by halving the sample rate, so the low bitrate doesn't hurt the core AAC stream, and then uses SBR (Spectral Band Replication) to attempt to reconstruct the lost high frequencies. SBR can't perfectly replicate that data, though, and at higher, near-transparent bitrates (which are the bitrates I'm seeing used as it is), LC does a better job of preserving audio quality because it doesn't throw the high-frequency information away in the first place.
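To make the tradeoff concrete, here's a rough sketch of the bandwidth split, plus the ~80 kbps rule of thumb from above. This is a simplified illustration, not how any real encoder decides things (the function names and exact cutoffs are my own, and real encoders lowpass somewhat below Nyquist):

```python
# Simplified illustration of the HE-AAC bandwidth split described above.
# In HE-AAC the AAC core runs at half the output sample rate, so only the
# lower half of the spectrum is coded directly; SBR guesses the rest.

def sbr_split_hz(output_rate_hz):
    """Return (end of directly coded band, end of SBR-reconstructed band)."""
    core_rate = output_rate_hz / 2        # AAC core runs at half rate
    core_nyquist = core_rate / 2          # directly coded content ends here
    output_nyquist = output_rate_hz / 2   # SBR has to fake up to here
    return core_nyquist, output_nyquist

def pick_profile(bitrate_kbps):
    """Rule of thumb from the post: HE-AAC only pays off below ~80 kbps."""
    return "HE-AAC" if bitrate_kbps < 80 else "LC-AAC"

coded, total = sbr_split_hz(44100)
print(f"Core encodes 0-{coded:.0f} Hz; SBR reconstructs {coded:.0f}-{total:.0f} Hz")
print(pick_profile(64))   # streaming-range bitrate -> HE-AAC
print(pick_profile(128))  # local-distribution bitrate -> LC-AAC
```

So at a 44.1 kHz output rate, everything above roughly 11 kHz is SBR's guesswork, which is exactly the information LC at a decent bitrate keeps for real.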
Videos I've downloaded that actually used HE-AAC correctly had very audible degradation of the sound quality, although they certainly sounded better than the bitrate used (mainly 64 kbps) would have otherwise. However, there are truckloads of files that superficially report HE-AAC while CoreAAC reports no sample rate doubling, and the files really are still 44.1 or 48 kHz. That means either the High Efficiency signalling isn't actually there and the report is a false positive, or it is there but isn't doing anything.
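That false-positive case boils down to a simple consistency check. Here's a hypothetical sketch of it (this is my own helper, not CoreAAC's actual logic): a genuine HE-AAC stream has its AAC core at half the output rate, so on decode SBR doubles the sample rate; if the claimed profile is HE-AAC but no doubling happens, the claim is suspect.

```python
# Hypothetical sanity check mirroring the CoreAAC observation above:
# genuine SBR means the decoded output rate is twice the AAC core rate.

def sbr_looks_genuine(claimed_profile, core_rate_hz, decoded_rate_hz):
    """Return True if the HE-AAC claim is consistent with SBR upsampling."""
    if claimed_profile != "HE-AAC":
        return True  # nothing to verify for plain LC streams
    return decoded_rate_hz == 2 * core_rate_hz

print(sbr_looks_genuine("HE-AAC", 22050, 44100))  # real SBR: rate doubled
print(sbr_looks_genuine("HE-AAC", 44100, 44100))  # the false positive I keep seeing
```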
It might also help to reiterate that iTunes Plus purchases can be converted without needing to be decrypted first. Even if that's redundant and completely obvious, it's worth noting, especially since the Plus versions are also higher bitrate than typical Store purchases.