is a pretty good place to check out for guides and so on.
Onto the questions.
As the name says, the encoder encodes to a certain format, while a decoder decodes from it.
The exact implications depend on whether the codec itself is lossy or lossless.
The original idea is that an uncompressed source is simply too big. So there are two things you can do:
1) You want to edit it, so you want to retain quality while still cutting down the filesize, and possibly avoid hitting disk speed limits or spending too much CPU time on decoding. The solution is to use a lossless codec.
2) You want to share it. You'll then use a lossy codec to cut the filesize while losing quality in a way that's hopefully not too visible (it's still not going to be lossless, though).
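The two cases can be seen in miniature with plain Python. This is a toy illustration, not a real audio or video codec: zlib stands in for a lossless codec (every byte comes back), and a crude quantization step stands in for a lossy one (the original values are gone for good).

```python
import zlib

# Lossless: zlib round-trips the data exactly while shrinking it.
data = b"the same frame repeated " * 100
packed = zlib.compress(data)
assert zlib.decompress(packed) == data   # every byte recovered
print(len(data), "->", len(packed))      # smaller file, zero quality lost

# Lossy (toy): quantize sample values down to far fewer levels.
samples = [12345, 12346, 12350, 20000]
quantized = [s // 256 for s in samples]  # much coarser representation
restored = [q * 256 for q in quantized]
assert restored != samples               # information is lost for good
print(samples, "->", restored)
```

The lossy version is much smaller per value, but no decoder can ever give you the original samples back; it can only give you something close.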
When the data is uncompressed, anything that can deal with a stream of the given type (video or audio) is going to support it, because there is no special knowledge involved. It's as if I gave you an Italian book of arithmetic exercises: you'd still be able to solve them, because arithmetic is the same all over the world; you don't really need to know Italian to do the exercises.
But what if I told you to read a history book in Italian? Well, you'd need to learn Italian first. You'd need to decode the language. Of course, you could say that the information in the book was originally just information in the strict sense: people did stuff, and that's not language-related. That information was then encoded in a language so it could be shared in the most efficient way; you could have made extensive drawings to depict what happened, but those would take more time and space to move around, wouldn't they?
This is basically the idea behind encoding and decoding.
Why would you want them separate? Well, you have a standard. By separating the encoder and the decoder, you let people focus on one at a time, which gives you optimized results on both ends, so you can save even more space and decoding time. After all, encoding and decoding are quite opposite processes and require different things to be done.
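To make the encoder/decoder split concrete, here's a toy run-length "codec". It has nothing to do with H.264; it just shows the shape of the idea: two separate functions doing opposite jobs against one agreed-upon format, and since this one is lossless, decoding gives back exactly what was encoded.

```python
# Toy run-length codec: the "format" is a list of (char, count) pairs.
# Encoder and decoder are independent pieces that only share the format,
# the same way x264 and your player's H.264 decoder only share the spec.

def rle_encode(s: str) -> list[tuple[str, int]]:
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

row = "aaaaabbbcc"
encoded = rle_encode(row)            # [('a', 5), ('b', 3), ('c', 2)]
assert rle_decode(encoded) == row    # lossless round trip
```

Anyone could write a better `rle_encode` without touching `rle_decode`, and every existing decoder would still read its output. That's exactly why separating the two sides of a standard pays off.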
Clearly, once you have an encoded file, decoding it means you can see/hear what it originally represented (albeit, in the case of H.264, with the differences implied by lossy encoding to save space; yes, you can use H.264 for lossless encoding too, but I'd much rather avoid talking about that now).
And when you have an uncompressed file, you can just encode it so you can save space and easily decode it at a later time when the need arises.
Now, let's clarify formats a bit further.
Let's say you have an apple and an orange. You want to keep them on your table. You could just leave them lying around on their own, or you could put them in a basket.
This is basically the idea behind containers and audio and video streams.
AVI, MKV, and MP4 are container formats. They are the baskets in which you'd put the fruit.
AVC, ASP, AAC, and MP3 are the fruits themselves, what you ultimately want to eat.
You use the basket for convenience, but you aren't necessarily forced to use one to keep the fruit around. Without it, though, it gets kind of hard to move the fruit around and tell that it belongs together.
So basically what we do is, we say "this video goes with this audio" by putting them together so they can be played back at the same time.
On the other hand, you can also use the basket to just have only the apple or the orange there.
Mainconcept (which is an AVC encoder) can give you the fruit. You can tell it to put it in a basket (.mp4), possibly along with the audio; or you can tell it to just leave it on its own on the table (.avc; in MPEG terms, this is called an elementary stream).
For various reasons (which I don't feel like explaining right now, but will explain in a later post if you want to know), it's best to just output in a lossless format.
You could export straight-up uncompressed, but that would take even more space. With a lossless format you save space while still retaining all the quality.
For audio it's actually common to just leave it uncompressed (i.e., in PCM format) and only encode it lossily later.
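Leaving audio as PCM is affordable because uncompressed audio, while big, is nowhere near video-sized. A quick back-of-the-envelope calculation, assuming standard CD-quality parameters (44.1 kHz, 16-bit, stereo) rather than any particular project's settings:

```python
# Size of uncompressed PCM audio at CD quality.
sample_rate = 44_100       # samples per second
channels = 2               # stereo
bytes_per_sample = 2       # 16 bits

bytes_per_second = sample_rate * channels * bytes_per_sample
assert bytes_per_second == 176_400            # ~172 KiB every second

mb_per_minute = bytes_per_second * 60 / 1_000_000
print(f"{mb_per_minute:.1f} MB per minute")   # about 10.6 MB/min
```

Roughly 10 MB per minute is perfectly manageable on a working drive, which is why people often don't bother losslessly compressing audio until the final lossy encode.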
For video, there are a number of lossless formats that have been used over the years (you've probably seen Huffyuv, Lagarith, and Ut Video mentioned around).
It's common to use the AVI container to store lossless data; you could use other containers as well, but AVI just sort of stuck around over time, mostly because most lossless codecs have a VfW (Video for Windows) interface, so it's easier to do it all in one fell swoop: encoding to a lossless format while muxing audio and video together, especially from the NLE. You'll realize that exporting from Vegas or Premiere is limited compared to what you could theoretically do, which is also one of the reasons it's better to do the final distribution encode with dedicated software.
So now you have your AVI with lossless video and uncompressed audio. You could play this back directly in your player, but sharing it around would take a lot of time, and it still eats a huge chunk of your hard disk. That's why we use lossy codecs. Yes, you can feed this AVI to an AVC encoder (remember, AVC is the format, so there are many encoders that produce it: x264 is one specific encoder; Mainconcept made their own as well, which is the one you have in Vegas; and there are other encoders too), and get your encoded file.
As for .mp4/.avc, the answer is above: since .avc is an elementary stream for the video, you wouldn't have the audio (the elementary stream for AVC also doesn't include framerate info, so that's another can of worms). But you aren't forced to use MP4; you can put AVC video along with AAC audio in MKV as well, for example.
The various containers offer different features, so you'd have to look at what each one offers to decide what suits your needs (though if you just have to release audio+video, there isn't much difference at all between MP4 and MKV).