Quote:
Radioactive is right. If you have two files each containing a pure 1 kHz tone, they will sound exactly the same; you can't have one that has more "depth" than the other. If you were playing a 1 kHz tone through a speaker and recorded it from different places, then you wouldn't have just a pure 1 kHz tone. You'd have the echo and the effect of the room on the tone. Moving it farther away gives a different effect on the sound.
Right on. And this goes for lossy or lossless compression. Keep in mind lossless doesn't mean high quality; it only describes how the data is encoded and stored. You could sample a song at a miserably low rate (50 kb/s worth of raw data) and run it through a lossless compressor, and it would sound terrible and be missing most of the original information, yet still be "lossless."
The difference is in what happens to the data each time the file is compressed: a lossless engine preserves the bits exactly (at whatever sample rate was selected). You could re-compress that 50 kb/s file 1000 times and it would sound exactly like the first time you did it, because the file would be bit-for-bit identical (assuming no errors in the method used).
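That round-trip stability is easy to demonstrate. A minimal sketch using Python's `zlib` (a general-purpose lossless compressor standing in for FLAC here; the principle is the same):

```python
import zlib

# Any byte stream stands in for audio samples here.
data = bytes(range(256)) * 100

# Compress and decompress 1000 times. Lossless means the bytes
# never change, no matter how many round trips you make.
current = data
for _ in range(1000):
    current = zlib.decompress(zlib.compress(current))

print(current == data)  # True: bit-for-bit identical after 1000 passes
```

Swap `zlib` for FLAC, ALAC, or ZIP and the result is the same: the decode is always an exact inverse of the encode.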
However, if you took a 320 kb/s MP3 and kept running it through various encoders, the quality would degrade with each pass. Why?
A lossless engine looks for patterns in the bits and builds a reference table: if the same run of 50 bits keeps showing up, you store it once in the table and replace every occurrence with a short (8-bit, or whatever) reference you keep pulling from, and thus shorten up the space required.
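A toy sketch of that table idea (real codecs use LZ-style dictionaries and entropy coding, but the principle is just this):

```python
# Toy dictionary coder: a repeated pattern goes into a reference
# table once, and each occurrence in the stream becomes a short code.
pattern = b"0110100101"          # stands in for a recurring 50-bit run
stream = pattern * 200           # highly repetitive "audio" data

table = {0: pattern}             # one-entry reference table
encoded = bytes([0]) * 200       # each repeat stored as a 1-byte reference

# Decoding just pulls from the table -- nothing is ever lost.
decoded = b"".join(table[code] for code in encoded)
print(decoded == stream)          # True
print(len(encoded), len(stream))  # 200 vs 2000: 10x smaller, zero loss
```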
A lossy engine instead looks for sections where it can remodel how the waveform is represented, hopefully in a manner that is imperceptible, but using less data to represent it.
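A crude illustration of "remodel with less data": coarsely quantizing each sample of a 1 kHz tone. (This is a toy quantizer, not what MP3 actually does; MP3 works in the frequency domain with a psychoacoustic model, but the trade is the same: fewer bits, permanent loss.)

```python
import math

# One lossy "pass": keep the overall shape of the waveform but
# quantize each sample coarsely so it takes fewer bits to store.
samples = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(441)]

step = 1 / 8                     # coarse quantizer step size
lossy = [round(s / step) * step for s in samples]

# The decoded data is no longer identical: some detail is gone for good.
max_err = max(abs(a - b) for a, b in zip(samples, lossy))
print(0 < max_err <= step / 2)   # True: error is bounded, but nonzero
```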
This is why a color .gif can look so much worse than a .jpg (.gif is lossless, .jpg is lossy). A .jpg can carry more color information, while a .gif is locked to a limited palette stored in its data table. If you expanded the .gif's table size (thus increasing the file size) you'd be allotted more color information (which is pretty much what a bitmap, raw, or targa is). JPEG gets away with it by storing color hue separately from brightness, because our eyes are much more sensitive to how light or dark a color is than to what color it actually is. Each pixel gets its own lightness information, but color is stored in 2x2 or 4x4 segments and interpolated at decompression.
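The 2x2 color trick above boils down to averaging. A toy sketch (pixels shown as separate brightness and color values; a real JPEG works in the Y'CbCr color space):

```python
# Toy 2x2 chroma subsampling: brightness is kept per pixel, color is
# averaged over each 2x2 block and spread back out at decompression.
luma   = [[10, 12], [11, 13]]       # one brightness value per pixel: kept
chroma = [[100, 104], [96, 100]]    # one color value per pixel: subsampled

# Store a single color value for the whole 2x2 block...
block_chroma = sum(sum(row) for row in chroma) // 4

# ...and spread it back over the block when decoding.
rebuilt = [[block_chroma] * 2 for _ in range(2)]
print(luma)     # [[10, 12], [11, 13]] -- brightness untouched
print(rebuilt)  # [[100, 100], [100, 100]] -- color detail averaged away
```

Four color values became one: a 4x saving on the color channel that the eye rarely notices.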
Make sense? I might be going a little too in depth on that, but it's a complicated subject. The short of it is that the MP3 will clip out data in busy spots or cut range to try to match a bit rate. If your source is a perfect repeating waveform (a 1 kHz tone, or even a 20 kHz tone) the two will almost certainly do a fantastic job of compressing it without any perceivable loss. If it has a lot of other stuff mixed in (natural sound from a recording studio?) then the MP3 will make sacrifices to match its selected bit rate.
Starting from a 'pure' source, with the same bit rate for both engines? I'd choose the MP3 for a one-shot encoding. But we cheat: a 320 kb/s MP3 vs. a 1000+ kb/s FLAC, and in this case we usually start from a matched source (someone already made a digital data set on a CD), so we might as well just copy exactly what's on the CD so as not to tinker with the resulting bits at all.