Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Nov 27, 2017 at 12:39 PM Post #2,746 of 3,525
You say this:
I can't speak for ALL lossy CODECs, but I can tell you that you're wrong about MP3.
How can he be "wrong" when he's speaking about his observations? And how can anything you've said about mp3 apply to his being "wrong" when he didn't use that codec (he used AAC)?

Then you say:
When you initially compress a file, it does indeed do its best to "throw away unnecessary information". However, the process is not as simple as identifying what doesn't matter and deleting it, and re-encoding something that has already been encoded WILL produce "generational degradation".

Basically, the encoder does NOT "just throw away the information you won't miss". What it does is divide the audio signal into a bunch of frequency bands, each for a short block of time, decide how much "important" information is contained in each, and then assign each a "quality/priority" depending on how important the information in that band is. It may discard some information entirely, while other information is simply encoded at lower quality. Each "section" of the information is encoded at the least quality for which "you won't notice the difference" - and the decision of what that will be depends on psychoacoustic properties like masking.

Therefore, the majority of information in an MP3 encoded file is neither full quality, nor minimum quality, but somewhere in-between - encoded at "just high enough quality" that you won't notice the loss.
...which is, if I'm allowed to be as binary as you, an inaccurate explanation of how MPEG coding works.
HOWEVER, in no part of this process is there any sort of specific identification of how each individual sound was treated, and so no way to ensure that the process won't be applied repeatedly to a given section. Therefore, if a given frequency/time slice has been encoded with a lot of quantization error (because it was deemed to contain "unimportant content"), and you re-encode it, it will AGAIN be encoded with a lot of quantization error - and those errors will compound.

If you take a file that's been encoded at 128k VBR MP3 and re-encode it at the same settings, either as is or after converting it back into a WAV file, you will probably not lose much ADDITIONAL quality (because pretty much the same decisions are being made); however, the encoder will NOT "simply leave it as is" either. It will be re-encoded, AGAIN with encoding that introduces further quantization errors, so the total sum of the errors will increase.

(The result is that areas which are considered unimportant will get significantly worse when you re-encode them, because they will have been encoded at poor quality twice instead of once. Areas which are deemed more important will suffer less degradation, because they will have been encoded twice, but both at a higher quality setting, which causes less loss of quality. You may argue that, since those areas were unimportant to begin with, the additional loss of quality won't matter - but it is there - and the overall quality will decrease with repeated generations.)
Note the emphasized word above.

OK, Mr. Black and White, now please explain (with minimal verbosity...if possible) how Bigshot was able to repeatedly recode without observable quality loss.
With lossy compression, the analogy of a photocopier is quite valid,...
NO, it's not. NO visual analogies...again...please!!! Again, if I'm permitted to be as binary as you are, your visual analogies...all of them...are wrong. Copiers don't use perceptual coding!!!! They are just lossy. PLEASE let's not spend time trying to defend your pointless analogies.
 
Nov 27, 2017 at 1:33 PM Post #2,747 of 3,525
OK.... blunt statements.... blunt answers.

He claimed that:
"I've experimented a lot with compression codecs and there's something most people don't know about how the way they work. If you compress a song and it throws out inaudible information to make the file smaller; if you run it through the compression again, it has already thrown out all the inaudible information, so it makes no change. You can compress a file over and over and it doesn't degrade. Once it's been compressed, it won't compress any further unless you change the data rate."

I see specific claims there:
1) "it makes no change"
2) "You can compress a file over and over and it doesn't degrade"

To me, the intent of the statement is crystal clear...... He is claiming that, if you run an encoded file through the encoder again, the file will remain the same, and the encoder DOESN'T CHANGE ANYTHING. (The presumption being that, since the file has already been encoded, and any necessary changes have already been performed, there's nothing else to change.)

This is absolutely NOT true for MP3 - because, EVERY time you run an MP3 encoder, it deconstructs the audio file, then re-constructs it, after determining the maximum amount of quantization error that will be "audibly undetectable". And, each time this process is repeated, with no knowledge of the previous encoding, and no option to avoid encoding altogether because the file is already "fully optimized", the quantization errors will accumulate.... resulting in more errors overall. The file will not get smaller, but it will change (unless the encoder is smart enough to discard the newly encoded file when it discovers it is not smaller.... which some may be.)

I don't know the details of how AAC works - but I have seen it claimed that it works "similarly to MP3".....

You can find the details about how MP3 works (without the heavy math) here:
https://arstechnica.com/features/2007/10/the-audiofile-understanding-mp3-compression/

You will note that there is no part of the process where processing is avoided because it won't deliver an improvement.
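
For anyone who would rather check the "it makes no change" claim at the data level than argue about it, here is a minimal sketch of the experiment, assuming ffmpeg with the libmp3lame encoder is installed and on the PATH ("original.wav" is a hypothetical input file). If a pass truly made no change, two consecutive generations would decode to identical samples and print identical hashes:

```python
import hashlib
import subprocess

def encode_mp3(src, dst, bitrate="128k"):
    # One generation: decode whatever src is and re-encode it as 128k MP3.
    subprocess.run(
        ["ffmpeg", "-y", "-v", "error", "-i", src,
         "-c:a", "libmp3lame", "-b:a", bitrate, dst],
        check=True)

def pcm_hash(path):
    # Hash the decoded 16-bit PCM rather than the file itself, so container
    # metadata can't mask (or fake) a difference in the audio.
    out = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-f", "s16le", "-"],
        check=True, capture_output=True).stdout
    return hashlib.sha256(out).hexdigest()

src = "original.wav"  # hypothetical source file
for gen in range(1, 11):
    dst = f"gen{gen}.mp3"
    encode_mp3(src, dst)
    print(f"generation {gen:2d}: decoded-PCM hash {pcm_hash(dst)[:16]}")
    src = dst  # feed this generation back into the encoder
```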

 
Nov 27, 2017 at 1:52 PM Post #2,748 of 3,525
OK.... blunt statements.... blunt answers.

He claimed that:
"I've experimented a lot with compression codecs and there's something most people don't know about how the way they work. If you compress a song and it throws out inaudible information to make the file smaller; if you run it through the compression again, it has already thrown out all the inaudible information, so it makes no change. You can compress a file over and over and it doesn't degrade. Once it's been compressed, it won't compress any further unless you change the data rate."

I see specific claims there:
1) "it makes no change"
2) "You can compress a file over and over and it doesn't degrade"
1. "audibly"
2. "audibly"
To me, the intent of the statement is crystal clear...... He is claiming that, if you run an encoded file through the encoder again, the file will remain the same, and the encoder DOESN'T CHANGE ANYTHING. (The presumption being that, since the file has already been encoded, and any necessary changes have already been performed, there's nothing else to change.)
No, you've applied binary filtering again. That's NOT what he's saying, that's what you're reading.
This is absolutely NOT true for MP3 - because, EVERY time you run an MP3 encoder, it deconstructs the audio file, then re-constructs it, after determining the maximum amount of quantization error that will be "audibly undetectable". And, each time this process is repeated, with no knowledge of the previous encoding, and no option to avoid encoding altogether because the file is already "fully optimized", the quantization errors will accumulate.... resulting in more errors overall.
Your argument is theoretical. You don't supply any actual proof of anything. And your explanations are inaccurate anyway.
The file will not get smaller, but it will change (unless the encoder is smart enough to discard the newly encoded file when it discovers it is not smaller.... which some may be.)
Again, theoretical. Have you tried this? IF it doesn't get smaller, what's being changed? I'm not saying you're wrong, I'm saying you have supplied no proof. Regardless, that's not what Bigshot is getting at anyway. He's saying it's not audibly changed by successive recodes...up to a point. He's making a claim based on personal observation that includes a gray area...that, I'm not surprised, misses you entirely.
I don't know the details of how AAC works - but I have seen it claimed that it works "similarly to MP3".....
Yeah, well if you don't know how AAC works but state, definitively, that someone making observations about AAC is wrong, guess who's actually wrong?
You can find the details about how MP3 works (without the heavy math) here:
Don't care, we're not discussing mp3 (only or even specifically) anyway...YOU are...we aren't.
 
Nov 27, 2017 at 3:13 PM Post #2,749 of 3,525
Just give it a try and recompress a maxed out AAC file. I think you'll be surprised at how well it recompresses. Trying it is better than arguing about it!
 
Nov 27, 2017 at 5:48 PM Post #2,750 of 3,525
I've read those sentences pretty carefully..... and I can't seem to find the word "audibly" in either claim.
I have no problem with saying that "it won't make an audible difference.... up to a point".
However, saying that "it makes no difference" is incorrect - since it does in fact make a difference, which is cumulative, and eventually becomes audible.
(And that truth makes it a bad idea to multiply-encode audio: even though in some instances it might not produce audible issues, in other instances it might.)

The idea that "nothing is being changed because the file isn't getting smaller" is simply logically invalid.
A significant reduction in size is one external indicator of a successful compression.......
But lack of a change in size suggests nothing at all - besides the obvious.

Incidentally, I know the single most important piece of information about ALL LOSSY ENCODERS....... they're LOSSY.
A LOSSLESS CODEC must retain all of the original content without alteration.
This is a relatively simple task, and one which is easily proven to be true (we can do a bit compare between the copy and the original).
A LOSSY CODEC admits that it is going to alter the information, but asks us to believe that "the change will be small enough that we'll never notice".
Clearly, claiming that something will be damaged, but "we won't notice the damage" is the exceptional claim requiring the exceptional proof.
Furthermore, while I recall many instructions and manufacturer's recommendations that lossy compression should be applied as the last step before distribution....
I don't recall EVER reading a claim that "it's OK to repeatedly apply lossy compression to a file".
In fact, most SPECIFICALLY suggest that, if you wish to re-encode a file with different settings, you start with a fresh LOSSLESS ORIGINAL.

I don't doubt that certain lossy CODECs, when applied to certain content, with certain settings, produce minimal additional changes if applied a second time.
However, even though you may luck out and suffer no ill effects, applying multiple iterations of lossy compression is surely about as far from "best practices" as you can get.
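
The bit compare mentioned above is easy to run yourself. Here is a minimal sketch, assuming ffmpeg is on the PATH and "original.wav" is a hypothetical 16-bit test file; ffmpeg's md5 muxer checksums the decoded samples rather than the container, so it shows directly whether a round trip preserved every bit:

```python
import subprocess

def encode(src, dst):
    # Let ffmpeg pick the codec from the output extension (.flac, .mp3).
    subprocess.run(["ffmpeg", "-y", "-v", "error", "-i", src, dst], check=True)

def decoded_md5(path):
    # Prints e.g. "MD5=9e107d9d372bb6826bd81d3542a419d6" for the PCM data.
    out = subprocess.run(["ffmpeg", "-v", "error", "-i", path, "-f", "md5", "-"],
                         check=True, capture_output=True, text=True).stdout
    return out.strip()

encode("original.wav", "copy.flac")  # lossless: must decode to identical bits
encode("original.wav", "copy.mp3")   # lossy: admits up front that it will not

print("original:", decoded_md5("original.wav"))
print("flac:    ", decoded_md5("copy.flac"))  # matches the original
print("mp3:     ", decoded_md5("copy.mp3"))   # does not match
```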

1. "audibly"
2. "audibly"
No, you've applied binary filtering again. That's NOT what he's saying, that's what you're reading.
Your argument is theoretical. You don't supply any actual proof of anything. And your explanations are inaccurate anyway.
Again, theoretical. Have you tried this? IF it doesn't get smaller, what's being changed? I'm not saying you're wrong, I'm saying you have supplied no proof. Regardless, that's not what Bigshot is getting at anyway. He's saying it's not audibly changed by successive recodes...up to a point. He's making a claim based on personal observation that includes a gray area...that, I'm not surprised, misses you entirely.
Yeah, well if you don't know how AAC works but state, definitively, that someone making observations about AAC is wrong, guess who's actually wrong?
Don't care, we're not discussing mp3 (only or even specifically) anyway...YOU are...we aren't.
1. "audibly"
2. "audibly"
No, you've applied binary filtering again. That's NOT what he's saying, that's what you're reading.
Your argument is theoretical. You don't supply any actual proof of anything. And your explanations are inaccurate anyway.
Again, theoretical. Have you tried this? IF it doesn't get smaller, what's being changed? I'm not saying you're wrong, I'm saying you have supplied no proof. Regardless, that's not what Bigshot is getting at anyway. He's saying it's not audibly changed by successive recodes...up to a point. He's making a claim based on personal observation that includes a gray area...that, I'm not surprised, misses you entirely.
Yeah, well if you don't know how AAC works but state, definitively, that someone making observations about AAC is wrong, guess who's actually wrong?
Don't care, we're not discussing mp3 (only or even specifically) anyway...YOU are...we aren't.
 
Nov 27, 2017 at 6:06 PM Post #2,751 of 3,525
I don't doubt it.....

However, I would expect a slow drift down in quality over multiple iterations.

At each iteration the compression algorithm is going to decide how much "quality priority" to give each frequency and time slice.
Those with lowest priority will simply be discarded - which takes them out of the picture.
Those with highest priority will be encoded well, and so should suffer little change.
However, I expect those in the middle, which are allocated just enough priority that they suffer significant quantization errors that fall just below the threshold of audibility, will gradually deteriorate.
(I would expect each iteration to be audibly identical to the previous ones...... but for drift between later iterations and the original to become more noticeable as the iteration count increases.)
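
That middle case is easy to mimic with a toy quantizer. This is only an illustration in numpy, not a real codec: re-quantizing with the exact same step is a no-op, which is why a same-settings re-encode can lose very little; but as soon as the step the encoder picks shifts between passes, the errors stop cancelling and start to compound.

```python
import numpy as np

rng = np.random.default_rng(0)
band = rng.uniform(-1.0, 1.0, 100_000)   # stand-in for one time/frequency slice

def quantize(x, step):
    # Plain uniform quantizer standing in for one band's quantization.
    return np.round(x / step) * step

rms = lambda e: float(np.sqrt(np.mean(e ** 2)))

gen1 = quantize(band, 0.05)              # first encode
gen2_same = quantize(gen1, 0.05)         # second pass, identical decision
gen2_diff = quantize(gen1, 0.043)        # second pass, slightly different step

print("gen 1 error:               ", rms(gen1 - band))
print("gen 2 error (same step):   ", rms(gen2_same - band))  # no extra loss
print("gen 2 error (changed step):", rms(gen2_diff - band))  # errors compound
```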

 
Nov 27, 2017 at 6:11 PM Post #2,752 of 3,525
I've read those sentences pretty carefully..... and I can't seem to find the word "audibly" in either claim.
You are far too literal. It's right there between the lines where most of us can see it. If not, Bigshot will correct me.
I have no problem with saying that "it won't make an audible difference.... up to a point".
However, saying that "it makes no difference" is incorrect - since it does in fact make a difference, which is cumulative, and eventually becomes audible.
(And that truth makes it a bad idea to multiply-encode audio: even though in some instances it might not produce audible issues, in other instances it might.)
There is nothing anyone could post that covers gray areas that will ever satisfy you, though.
The idea that "nothing is being changed because the file isn't getting smaller" is simply logically invalid.
What logic is invalid? If a codec's purpose by design is to result in less data, and after a second pass through it the amount of data didn't change, then what did?
A significant reduction in size is one external indicator of a successful compression.......
Well let's see now: if reduction in size is the goal...make that the ONLY goal...of a lossy codec, then what is the other indicator of successful bit-rate reduction?
But lack of a change in size suggests nothing at all - besides the obvious.
Now THAT's what I would call logically invalid!
Incidentally, I know the single most important piece of information about ALL LOSSY ENCODERS....... they're LOSSY.
A LOSSLESS CODEC must retain all of the original content without alteration.
This is a relatively simple task, and one which is easily proven to be true (we can do a bit compare between the copy and the original).
A LOSSY CODEC admits that it is going to alter the information, but asks us to believe that "the change will be small enough that we'll never notice".
Clearly, claiming that something will be damaged, but "we won't notice the damage" is the exceptional claim requiring the exceptional proof.
Furthermore, while I recall many instructions and manufacturer's recommendations that lossy compression should be applied as the last step before distribution....
I don't recall EVER reading a claim that "it's OK to repeatedly apply lossy compression to a file".
In fact, most SPECIFICALLY suggest that, if you wish to re-encode a file with different settings, you start with a fresh LOSSLESS ORIGINAL.
But we are not talking about lossless codecs now, are we? In fact, Bigshot didn't mention just any lossy codec, he was specific: AAC. You can expound on lossless codecs all you want, that is a tangential discussion.
I don't doubt that certain lossy CODECs, when applied to certain content, with certain settings, produce minimal additional changes if applied a second time.
However, even though you may luck out and suffer no ill effects, applying multiple iterations of lossy compression is surely about as far from "best practices" as you can get.
Obviously! And nobody has argued against that either. Your binary filter is working full tilt again. The example was specific: multiple passes through a high-rate AAC codec. Your binary filter has blocked the fact that codecs are adjustable as to the target bit rate, and that codecs are not all the same, and that the number of recodes through a single specific codec may actually not result in audible changes, or indeed, any changes. Codecs are not binary in their action, they are not just on or off. That means results will vary too.

Your example is polarized, and silly: an additional pass produces minimal changes? Really? How's that codec set? I can show you a second pass that is totally destructive, and one that is transparent...oh, and the grays in between them.

"...surely about as far from "best practices" as you can get."???? Why are you even mentioning this? How insulting! Do you actually think everyone here are so stupid and unaware that we'd consider a dozen or so recodes part of "best practices"? Get real, and try to have a little respect. Some of us are audio professionals, even!
 
Nov 27, 2017 at 6:25 PM Post #2,753 of 3,525
I don't doubt it.....

However, I would expect a slow drift down in quality over multiple iterations.

At each iteration the compression algorithm is going to decide how much "quality priority" to give each frequency and time slice.
Those with lowest priority will simply be discarded - which takes them out of the picture.
Those with highest priority will be encoded well, and so should suffer little change.
However, I expect those in the middle, which are allocated just enough priority that they suffer significant quantization errors that fall just below the threshold of audibility, will gradually deteriorate.
(I would expect each iteration to be audibly identical to the previous ones...... but for drift between later iterations and the original to become more noticeable as the iteration count increases.)
So what you're doing here is a flat-out refusal to even try it, instead expounding on your personal opinions of what will happen. Is that correct? And all of this while still admitting to not being familiar with the codec in question.

Wow!
 
Nov 27, 2017 at 10:06 PM Post #2,754 of 3,525
I've read those sentences pretty carefully..... and I can't seem to find the word "audibly" in either claim.

"Audibly" is a given when talking about lossy. Everyone knows that lossy is different than lossless. There just isn't an audible difference if the bit rate is high enough and if it hasn't been transcoded too many times. Personally, "audibly" is what I worry about. I'm generally a happy and satisfied person. I don't feel anything lacking in my feelings of security that I have to fill with bitrate. As long as it sounds the same, for my purposes it *is* the same.

I would have assumed that re-encoding AAC 320 VBR over and over would create generation loss. But I tried it and found out that you have to re-encode a whole lot of times to create any audible difference. I was surprised to find that out. There's no point arguing about something that is simple to try for yourself. AAC is a damn good codec. For my purposes, it's interchangeable with lossless. But it's a lot smaller. I use it exclusively.
 
Nov 28, 2017 at 9:42 AM Post #2,755 of 3,525
I'm not sure that I agree with your primary assertion that "everyone knows the difference between lossy and lossless". From my experience, a lot of people seem NOT to know the difference.
That's why I tend to argue against statements which might conceivably mislead people who actually don't know into thinking that they are the same.

I think we're sort of discussing two different things here.

As far as I'm concerned, in terms of the technology, there's nothing to try.
Whenever I listen to a piece of music I've never heard before, I don't know what it sounds like, so I'm relying on my system to let me find out - by playing it accurately.
We all KNOW that lossy CODECs alter the information; they "say so right on the package"; so there's nothing to question.
To me the choice between lossless and lossy would be like the choice between buying a GPS that at least claims that it will take me to the exact right address.....
And one that is advertised as: "It never takes you to the exact right place; in fact it specifically avoids taking you to the exact right place; but it will get you close enough that you won't mind".
Personally, rather than wonder how big the error is, I'd rather just buy the one that takes me to the right place.
(And, in order to convince me to deliberately take the inaccurate version, they're going to have to offer a pretty compelling reason....... and, to me, smaller file size just isn't a compelling reason.)
That's why I personally am never going to try or use lossy compression..... because, to me, it has at least potential serious drawbacks, and no significant benefits.

However, as far as my statement about cumulative errors summing..... well, that's just math.
If you were to ask me "what does 2 + 2 = " I wouldn't go out and buy a bunch of marbles, put two in my left hand, two in my right hand, then put them together and confirm that I now have four on the table.
I would use math and logic to figure out what to expect..... based on how the process works.
Now, on every lossy audio CODEC I've ever read the description of, there is a design intent to ensure that the first generation copy will be "audibly identical to the original" - at least as much as possible.
However, I've never seen any that claim that there is any mechanism included that will prevent iterative changes from summing to a value greater than a single change.

If you were to tell me that you were going to walk for one block in a random direction from your home.... we can both agree that you will end up one block from home.
However, if you were to tell me that you're going to walk for one block in a random direction, then, starting from there, walk for one block in a random direction, and repeat the process five times.....
MATH tells me that, at least some of the time, you will end up more than one block from home.
(Remember that we've included nothing in the process to ensure that this doesn't happen.)
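
That claim is easy to verify with a short simulation (plain Python, no dependencies): five one-block steps, each in a uniformly random direction.

```python
import math
import random

random.seed(1)
trials = 100_000
farther_than_one_block = 0

for _ in range(trials):
    x = y = 0.0
    for _ in range(5):
        angle = random.uniform(0.0, 2.0 * math.pi)
        x += math.cos(angle)
        y += math.sin(angle)
    if math.hypot(x, y) > 1.0:
        farther_than_one_block += 1

print(f"{100.0 * farther_than_one_block / trials:.1f}% of walks "
      "end more than one block from home")
```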

However, I don't dispute that running a lossy CODEC multiple times on the same content MAY, IN SOME CASES, still result in a final copy that is audibly indistinguishable from the original.
And neither do I dispute that, in a specific situation, and with a specific CODEC, a certain person may have had that experience.
However, I do oppose making it a general statement, when the science suggests that we're looking at the exception and not the general case.

(And, yes, if someone were to suggest that "a cup of Drano is a great cure for a stomach ache" I would probably argue against that too...... WITHOUT trying it.)

 
Nov 28, 2017 at 10:20 AM Post #2,756 of 3,525
1)
Some things are grey - and some are black and white - but, in many cases, which applies to something depends on your point of view.
For example...... in terms of DATA, the question is clearly black and white: either data is retained accurately or it is not.
I can do a bit compare - and the result will be a simple black and white pass or fail.
(Personally, I like black and white: I can run a checksum on my music library and KNOW, with absolute certainty, that it's exactly the same as yesterday..... and nothing has been changed.)

In terms of technology, lossy CODECs aren't "a grey area" at all.
Lossy CODECs discard information.... this is a given.
Likewise, either the result is or is not audibly identical to the original.... and that's also black and white.
The only grey area I see would be with lower quality CODECs.... where we concede that the losses are audible, but there is a question of opinion about whether the loss is justified.
(The "grey" arises because it's a matter of opinion whether the loss is significant or not.... and whether we consider the cost to be justified by the benefits.)
There is also an area of UNCERTAINTY..... it may turn out that, on 95% of all files processed, the result is perfect, but on 5% it is not......
(If so, we may still claim - in black and white - that "on 95% of a random selection of processed files nobody can hear the difference".)

2)
My problem with your "size assertion" is simply that it isn't true.
Your assumption that "if the file is the same size then it contains the same amount of data" is entirely incorrect.
It is in fact quite simple to make a file larger or smaller without changing the amount of data it contains; or to add or remove data without changing the size of the file.

When we initially run the CODEC, we can assume that, if the file got smaller, then information was discarded.......but that is a VERY special case.
There are several unstated assumptions on which that assertion is based...... and lots of exceptions.
For example, I can compress a file using FLAC, and the file will get smaller, but NO information will have been discarded.
Likewise, any process that makes the information LESS CORRECT, but does so in a way that doesn't result in LESS information, may leave the file the same size or even make it larger.
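
The FLAC point is trivial to demonstrate. Here is a minimal sketch using Python's zlib as a stand-in for FLAC: the compressed blob is smaller, yet decompressing it returns every original byte, so nothing was discarded.

```python
import zlib

original = b"A" * 500 + bytes(range(256)) * 2   # toy data with some redundancy
packed = zlib.compress(original, level=9)

print(len(original), "bytes ->", len(packed), "bytes")   # smaller file...
print("bit-identical after round trip:",
      zlib.decompress(packed) == original)               # ...nothing lost
```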

Your base assertion that "reduction in size is the goal...make that the ONLY goal...of a lossy codec" is incorrect.
The goal of a lossy CODEC is to reduce the size of the file while avoiding altering the contents in an audible way....
And the indicator of success would be that the file has gotten smaller but remains audibly the same.

However, there are several possible "indicators of failure"........
- if the file got larger that would be a definite fail
- if the file sounded audibly different that would be a definite fail
- if the file sounded the same and remained the same size that would be a sort of null result (a waste of time but no harm done).

There is also the potential for "generational failure"...... which is a concept that is applied deliberately in certain copy protection schemes (including the original CD-R music protection scheme).
In "generational failure" the copy is functionally the same as the original - but only in CERTAIN regards - while being very different in other ways.
In one such copy protection scheme, the user was allowed to make a copy of an "original".
However, even though the copy was AUDIBLY identical to the original, the user was unable to make a copy of that copy.
Therefore, while the copy was AUDIBLY identical, it was inferior in OTHER WAYS.
(For a user whose goal was strictly to listen, the copy was 100% perfect; for a user who wished to copy it, it was "broken".)

 
Nov 28, 2017 at 10:51 AM Post #2,757 of 3,525
That is useful to know...... and I suspect part of that may be due to the fact that AAC is a proprietary standard.
As a result it is more tightly controlled and far more consistent.

One of the strengths and weaknesses of MP3 has always been that the encoding process is entirely open.
All MP3 decoders are supposed to follow a given standard - so, if you play the same MP3 file on different players, you should get the same exact result.
However, the MP3 ENCODER is "open"..... the only real requirement is that the file it outputs will play in a standard decoder (which does set lots of practical constraints).
On the plus side, this encourages programmers to think up new and better encoding methods...... and new and better ways to apply perceptual coding.
However, on the downside, it means that two MP3 encoders may produce different results, from the same input file, even if the same exact settings are applied.
So it is literally true that a given MP3 encoder may produce better results with certain content and poorer results with others.
The upside of this is that it encourages competition and product improvements; the downside is that you never know exactly what you're getting when you receive an "MP3" file.
(I recall one product from years ago that actually encoded each file three times, using three different encoders, then prompted the user to "pick the version that sounded better" - individually for each song.)

I also do apologize for seeming to be so "pedantic" about the subject of lossy encoders.
However, far from what some folks seem to think, it has been my experience that many people DO NOT understand the difference between lossless and lossy encoders.
For example, I frequently encounter "audio CDs" made from iTunes..... whose owners seem to have no idea that, even though the CD itself is lossless, since their SOURCE was a lossy AAC file, the CD they have is NOT identical to a real purchased CD copy.
I haven't looked lately, but iTunes used to offer a very misleading "make a CD" option in the menu - with no warning that the CD you make will be different than a CD you might purchase.
I see this as a dangerous side effect of "too many grey areas" - which is why I tend to view such grey areas as a significant problem.
I have no problem whatsoever when people use AAC or MP3...... but, when someone presents me with a CD, which he says is "just a CD", but then I find out it was sourced from an MP3 or AAC file, THEN I have a major problem.
Unfortunately, from my experience, an awful lot of people do NOT know enough to make that distinction.

Many non-technical people I talk to seem to understand that there are different ways of encoding a file - but don't understand the lossy/lossless distinction....
Likewise, many people apparently don't understand that JPG is a lossy image CODEC, and that the encoding used on Blu-Ray discs is also lossy (just less so than the one used by their favorite streaming service).

It's also worth noting that, because each MP3 encoder is slightly different, generational copy issues will vary depending on which encoder is used.
(For example, encoding a file a second time using the same encoder as was originally used is likely to produce less loss of quality - especially if the same settings were used - while encoding a file a second time on a different encoder, or using different settings, is more likely to result in different information being discarded each time, with an overall higher loss of accuracy.)
It makes sense that this would be less important with AAC - because the encoders themselves, being proprietary, should be consistent.

"Audibly" is a given when talking about lossy. Everyone knows that lossy is different than lossless. There just isn't an audible difference if the bit rate is high enough and if it hasn't been transcoded too many times. Personally, "audibly" is what I worry about. I'm generally a happy and satisfied person. I don't feel anything lacking in my feelings of security that I have to fill with bitrate. As long as it sounds the same, for my purposes it *is* the same.

I would have assumed that re-encoding AAC 320 VBR over and over would create generation loss. But I tried it and found out that you have to re-encode a whole lot of times to create any audible difference. I was surprised to find that out. There's no point arguing about something that is simple to try for yourself. AAC is a damn good codec. For my purposes, it's interchangeable with lossless. But it's a lot smaller. I use it exclusively.
 
Nov 28, 2017 at 11:43 AM Post #2,758 of 3,525
Some of your info is out of date there... AAC isn't proprietary. It's been an open standard for years now. Open doesn't mean that people can tinker with the encoding process and make their own version of AAC. Every current AAC encoder works exactly the same. The encoding and decoding is performed by stock cut and paste burned right into the chips of the DAC. There's no difference in quality. It's a standard, even if it is an open standard. And it doesn't work the same as an MP3... it's an MP4 which is a totally different and more advanced compression scheme. AAC is audibly transparent, which means to human ears, it's identical to the original. No loss in fidelity. And generation loss is also transparent for more generations than anyone would be likely to need to re-encode. From a practical standpoint it's all positives and no drawbacks.

I think you're projecting a bit on other people about lossy. Everyone knows that it throws out data. It says so right there in the name "lossy". They just don't care because it's inaudible information. I don't care about things I can't hear. I never have. I focus on improving things I *can* hear. That gets me a lot further when it comes to sound quality, because the best sounding systems sound the best because of the way they present the core audible frequencies. What you hear is what matters. A truly great system will sound just as good with high rate lossy as they do with lossless. The only argument I've heard in favor of lossless is that it assuages people's OCD. I can totally understand that. If I had anxiety over bitrates, I'd want to make sure my file sizes were portly too I suppose.

Lossy... lossless... none of that matters. What matters is how the music sounds. Audio reproduction has advanced to the point where bitrates don't matter. They're nothing more than advertising points, especially in blu-ray where absurd bitrates are touted as being "necessary". The truth is that redbook is plenty and high bitrate AAC sounds exactly like lossless. So if it doesn't make you neurotic to be throwing out unnecessary bits, then it makes sense to use it. My whole music library is AAC 256 VBR. I've ripped tens of thousands of CDs to the music server and boxed the discs up in the garage. It all fits on one disc drive. It's sorted automatically by iTunes. And it sounds perfect on my best equipment. For me, that's all a win with no loss.

The reason I share my listening test with people is so they can find out for themselves where their line of transparency lies. That is very important. If you know that, then you don't have to worry about lossy throwing out something important. You know that above a certain rate, it's identical to lossless. That should be a comforting thought.
 
Nov 28, 2017 at 12:27 PM Post #2,759 of 3,525
However, far from what some folks seem to think, it has been my experience that many people DO NOT understand the difference between lossless and lossy encoders.

How could they? I have an education that makes the difference trivial to me, but most people have other kinds of education, or even lack higher education altogether. Lossy encoders remove perceptual redundancy while lossless encoders remove information redundancy. One needs to understand these concepts in order to understand the difference. This is how you can explain them in layman's terms to someone who doesn't understand them:

If you have five 10 dollar notes and one 1 dollar note in your wallet, instead of saying you have $10, $10, $10, $10, $10 and $1, you use lossless coding and say you have 5 * $10 + $1, because that's a much shorter expression and no information is lost. You can also use lossy coding: you give $1 to a hobo on the street and say you have $50. That's very short, but you lost $1, and also the information about what kind of notes you have.
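
That analogy even fits in a few lines of toy Python, with run-length coding standing in for the lossless encoder and rounding standing in for the lossy one:

```python
notes = [10, 10, 10, 10, 10, 1]

# Lossless: a shorter description, but the exact wallet can be rebuilt.
lossless = [(notes.count(v), v) for v in sorted(set(notes), reverse=True)]
print("lossless:", lossless)                    # [(5, 10), (1, 1)]
restored = [v for count, v in lossless for _ in range(count)]
print("restored exactly:", restored == notes)   # True

# Lossy: even shorter ("about $50"), but the $1 note is gone for good.
lossy = round(sum(notes), -1)
print("lossy:", lossy)                          # 50 - the $1 is unrecoverable
```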
 
Nov 28, 2017 at 12:30 PM Post #2,760 of 3,525
What does the hobo have? Is he stuck with ear buds or something?
 
