pinnahertz
Headphoneus Supremus
You say this:
> I can't speak for ALL lossy CODECs, but I can tell you that you're wrong about MP3.

How can he be "wrong" when he's speaking about his observations? And how can anything you've said about MP3 apply to his being "wrong" when he didn't use that codec (he used AAC)?
Then you say:
> When you initially compress a file, it does indeed do its best to "throw away unnecessary information". However, the process is not as simple as identifying what doesn't matter and deleting it, and re-encoding something that has already been encoded WILL produce "generational degradation". Basically, the encoder does NOT "just throw away the information you won't miss". What it does is to divide the audio signal into a bunch of frequency bands, each for a short block of time, decide how much "important" information is contained in each, and then divide its "quality/priority" depending on how important the information is that's contained in each. It may discard some information entirely, while other information is simply encoded at lower quality. Each "section" of the information is encoded at the least quality for which "you won't notice the difference" - and the decision of what that will be depends on psychoacoustic properties like masking. Therefore, the majority of information in an MP3 encoded file is neither full quality, nor minimum quality, but somewhere in-between - encoded at "just high enough quality" that you won't notice the loss.
> Note the emphasized word above. HOWEVER, in no part of this process is there any sort of specific identification of how each individual sound was treated, and so no way to ensure that the process won't be applied repeatedly to a given section. Therefore, if a given frequency/time slice has been encoded with a lot of quantization error (because it was deemed to contain "unimportant content"), and you re-encode it, it will AGAIN be encoded with a lot of quantization error - and those errors will compound. If you take a file that's been encoded at 128k VBR MP3 and re-encode it at the same settings, either as is or after converting it back into a WAV file, you will probably not lose much ADDITIONAL quality (because pretty much the same decisions are being made), however the encoder will NOT "simply leave it as is" either. It will be re-encoded, AGAIN with encoding that introduces further quantization errors, so the total sum of the errors will increase. (The result is that areas which are considered unimportant will get significantly worse when you re-encode them, because they will have been encoded at poor quality twice instead of once. Areas which are deemed more important will suffer less degradation, because they will have been encoded twice, but both at a higher quality setting, which causes less loss of quality. You may argue that, since those areas were unimportant to begin with, the additional loss of quality won't matter - but it is there - and the overall quality will decrease with repeated generations.)

...which is, if I'm allowed to be as binary as you, an inaccurate explanation of how MPEG coding works.
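For anyone who wants the band-splitting/masking idea in concrete form, here's a toy Python sketch (numpy only). It is not LAME, not any real MP3 or AAC encoder, and the "masking" rule is a made-up stand-in; it only shows the general shape we're both describing: split a short block into frequency bands and quantize each band more coarsely where the model says the error should be inaudible.

```python
# Toy illustration of perceptual bit allocation - NOT a real MP3/AAC encoder.
import numpy as np

def encode_block(block, n_bands=32):
    """Quantize one block of samples band-by-band (toy perceptual coder)."""
    spectrum = np.fft.rfft(block)
    bands = np.array_split(spectrum, n_bands)
    coded = []
    for band in bands:
        energy = np.mean(np.abs(band) ** 2) + 1e-12
        # Made-up stand-in for a psychoacoustic model: louder bands are given
        # a larger quantization step (coarser coding), on the assumption that
        # the signal in that band masks the resulting error.
        step = np.sqrt(energy) / 64.0
        coded.append(np.round(band / step) * step)
    return np.concatenate(coded)

def decode_block(coded_spectrum, n_samples):
    """Inverse transform back to the time domain."""
    return np.fft.irfft(coded_spectrum, n=n_samples)

rng = np.random.default_rng(0)
fs, n = 44100, 1152   # 1152 samples = one MP3 frame, just for flavor
x = np.sin(2 * np.pi * 1000 * np.arange(n) / fs) + 0.01 * rng.standard_normal(n)
y = decode_block(encode_block(x), n)
print("RMS error after one pass:", np.sqrt(np.mean((x - y) ** 2)))
```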
OK, Mr. Black and White, now please explain (with minimal verbosity...if possible) how Bigshot was able to repeatedly recode without observable quality loss.
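Anyone can check this for themselves; the experiment is trivial to script. A minimal sketch, assuming ffmpeg is installed and using placeholder file names and a placeholder bitrate: decode and re-encode the same material for N generations, then compare the last generation to the original by ear (or ABX it).

```python
# Generational recoding test with ffmpeg (file names and bitrate are placeholders).
import subprocess

GENERATIONS = 10          # however many passes you want to test
src = "input.wav"         # placeholder source file

for gen in range(1, GENERATIONS + 1):
    m4a = f"gen{gen:02d}.m4a"
    wav = f"gen{gen:02d}.wav"
    # Encode the current WAV to AAC, then decode it back to WAV so the next
    # generation re-encodes a decoded copy rather than the original.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:a", "aac", "-b:a", "256k", m4a], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", m4a, wav], check=True)
    src = wav

# Then ABX input.wav against gen10.wav, or null one against the other.
```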
Then you say:
> With lossy compression, the analogy of a photocopier is quite valid,...

NO, it's not. NO visual analogies...again...please!!! Again, if I'm permitted to be as binary as you are, your visual analogies...all of them...are wrong. Copiers don't use perceptual coding!!!! They are just lossy. PLEASE let's not spend time trying to defend your pointless analogies.
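To make the copier point concrete: a copier adds fresh, independent degradation on every pass, while a deterministic quantizer that reuses the same decisions largely reproduces its previous output after the first pass. A toy numpy sketch with made-up numbers, not a model of any real codec (real encoders land somewhere in between):

```python
# Contrast: "photocopier" (fresh noise every pass) vs. fixed-grid quantizer
# (repeats its own decisions after the first pass). Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)

copier = x.copy()
quantized = x.copy()
step = 0.1
for gen in range(1, 6):
    copier = copier + 0.02 * rng.standard_normal(copier.shape)  # copier: new error each pass
    quantized = np.round(quantized / step) * step               # quantizer: same grid each pass
    print(f"gen {gen}: copier RMS error {np.sqrt(np.mean((copier - x) ** 2)):.4f}, "
          f"quantizer RMS error {np.sqrt(np.mean((quantized - x) ** 2)):.4f}")

# The copier's error keeps growing with every generation; the quantizer's error
# stops changing after the first pass, because re-quantizing values that are
# already on the grid leaves them where they are.
```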