Quote:
Originally Posted by CSMR
While I don't say that your conclusion is wrong, your argument is. For instance, take an encoder that keeps only the top n bits of each 16-bit sample and a decoder that replaces the dropped bits with zeros. It is certainly true that compressing to 14 bits per sample, then decompressing and compressing to 12 bits per sample, is the same as compressing the original to 12 bits per sample.
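(For concreteness, CSMR's toy codec fits in a few lines of Python. This is my own sketch, names and sample values mine; it shows why the chain really does collapse for that particular codec, which is exactly the simplification being pushed back on below.)

    def truncate(samples, n):
        # Keep only the top n of 16 bits; the decoder's zero-fill is
        # already implicit because the masked-off bits are zeros.
        mask = ~((1 << (16 - n)) - 1) & 0xFFFF
        return [s & mask for s in samples]

    samples = [0xABCD, 0x1234, 0x0F0F]
    # 16 -> 14 -> 12 bits gives exactly the same result as 16 -> 12 bits.
    assert truncate(truncate(samples, 14), 12) == truncate(samples, 12)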
As breez says, it's more complex than that. It might be easier to think of it in visual lossy terms. A JPEG compresses a photo by comparing nearby pixels: to lower the amount of information, those color differences get shifted or eliminated, often forming visible blocks, and fine detail is lost. A second round then compares the already-shifted colors to their neighbors, treats the block edges (not the pixels) as the real detail, and creates larger blocks. You can reach the same state by over-compressing in a single pass, but on a second round you hit it earlier, because the acceptable loss of detail is already starting from a larger scale. This is the famous lossy-transcoding problem: artifacts built on artifacts.
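If you want to see the blocks-on-blocks effect for yourself, here's a rough sketch with Pillow; the file names and quality settings are just placeholders I picked:

    from PIL import Image

    img = Image.open("photo.png").convert("RGB")

    # First generation: heavy JPEG compression introduces 8x8 block artifacts.
    img.save("gen1.jpg", quality=30)

    # Second generation: the encoder now treats gen1's block edges as real
    # detail and quantizes around them, compounding the damage.
    Image.open("gen1.jpg").save("gen2.jpg", quality=30)

Compare gen2.jpg with gen1.jpg around hard edges and you can watch the blocks compound.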
It's probably best to think of it not as hitting a bit limit, but as a feedback loop: the encoder re-evaluates the signal and resets its rules at each compression, so WAV/AIFF straight to 192 kbps ends up different from WAV/AIFF to 320, then 256, then 224, then 192.
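As a sketch of those two paths (this assumes the LAME command-line encoder is installed; file names are placeholders):

    import subprocess

    def encode(src_wav, out_mp3, kbps):
        # CBR MP3 at the given bitrate.
        subprocess.run(["lame", "-b", str(kbps), src_wav, out_mp3], check=True)

    def decode(src_mp3, out_wav):
        subprocess.run(["lame", "--decode", src_mp3, out_wav], check=True)

    # Path A: straight to 192 kbps.
    encode("master.wav", "direct_192.mp3", 192)

    # Path B: 320 -> 256 -> 224 -> 192, decoding between each step.
    src = "master.wav"
    for kbps in (320, 256, 224, 192):
        encode(src, f"chain_{kbps}.mp3", kbps)
        decode(f"chain_{kbps}.mp3", f"chain_{kbps}.wav")
        src = f"chain_{kbps}.wav"

direct_192.mp3 and chain_192.mp3 won't be identical; each re-encode re-evaluates (and re-damages) what the previous one left behind.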
The trick is knowing when this is not noticeable. Three things:
a) Sometimes the lossy steps are so small the damage is invisible. Take a JPEG at 1% compression (a 99% quality setting), recompress it at 2% (98%), and try to find the difference (there's a sketch of this test just after this list).
b) Sometimes the original source doesn't contain the extra information to begin with. Take a JPEG of a big blue square, or a 1920s vocal recording, and show me the difference between high and low compression (let alone with or without recompressing/transcoding). I've heard some Rat Pack material that doesn't suffer at 128 kbps because the masters are so bad.
c) When transcoding (say, protected AAC or WMA to MP3, or even protected MP3 to non-crippled MP3), if the resulting second-generation file has a (sometimes much) higher bitrate, I've found its artifacts may not be audible. I do this with audiobooks, creating ~57 kbps files from 32 kbps ones. There's 'headroom', if you will: the new artifacts are there, they're just far, far smaller than those from the first step.
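Here's one way to run the 'find the difference' test from (a), again with Pillow; the file names are placeholders:

    from PIL import Image, ImageChops

    Image.open("photo.png").convert("RGB").save("q99.jpg", quality=99)

    # Recompress the first generation at a nearly identical setting.
    Image.open("q99.jpg").save("q98.jpg", quality=98)

    diff = ImageChops.difference(Image.open("q99.jpg").convert("RGB"),
                                 Image.open("q98.jpg").convert("RGB"))
    print(diff.getbbox())     # None if the two are pixel-identical
    print(diff.getextrema())  # per-channel max error; typically tiny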
I think the evils of transcoding are somewhat based on fear. On P2P networks you never know how many steps a file has been through. Say the band releases its music on its site as 192 kbps QT files, then someone converts that to 192 kbps CBR MP3, then a friend needs it at 128 for a flash player, then another person thinks they can bump it back up to 192 for 'greater quality' and posts it again, and so on. It's easier just to say: never transcode.
But if you're talking about a single transcode (two encodes) from very high to medium or low bitrate, I think the differences are slight; in this case the second compression does far more damage than the transcoding itself adds. With most music, WAV/AIFF to 128 versus WAV/AIFF to 320 to 128 is going to be only very slightly audibly different, and I doubt most people could tell which is which. It would be easy enough to do a comparison with some files (see the sketch below), but again, musical complexity is a big part of it.
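A crude numeric version of that comparison, assuming both 128 kbps files have been decoded back to 16-bit WAV (e.g. with lame --decode); the file names below are placeholders, and MP3 encoder delay can misalign the two files, so an actual listening test is the real measure:

    import wave
    import numpy as np

    def read_samples(path):
        with wave.open(path, "rb") as w:
            return np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

    direct = read_samples("direct_128.wav").astype(np.float64)
    chained = read_samples("chained_128.wav").astype(np.float64)
    n = min(len(direct), len(chained))  # encoder padding can differ

    rms = np.sqrt(np.mean((direct[:n] - chained[:n]) ** 2))
    print(f"RMS difference: {rms:.1f} (full scale is 32768)")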
However, again: transcoding is certainly not ideal. It's just that every time it comes up on Head-Fi, there are six quick posts about baby Jesus crying or homeland security or something. Remember, between people making MP3s from FM, using MiniDiscs with non-ATRAC lossy files, or even the new iPod Shuffle fill option, this is going on all the time and will keep going on. Yet the world still spins.