Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Nov 22, 2017 at 4:57 PM Post #2,731 of 3,525
There's something here that seems obvious to me, and probably to you, but I would like to clarify it for everyone else........

I agree entirely that "audibly transparent means that the quality of the sound exceeds your ability to hear it" - and, by that definition, I agree with virtually everything you've ever said.
Where we seem to disagree is in terms of usage.
You seem quite content to declare that "if I can't hear the slightest flaw when I listen to this file today then it's good enough".
Personally, I am far less certain than that...... mostly because I suspect that my needs may change later.
If I were to determine with absolute certainty that a given "level of accuracy" was "absolutely audibly perfect" to me today.....
When I buy my next file, I would still buy the one that's "200% as good as I need" - just to provide myself a margin of error... in case something changes...
(Like I DO decide to turn the volume up a bit on the quiet parts; or I buy a new pair of speakers and, on them, certain things become more obvious).
I can honestly say that, contrary to the alarmist title applied to this thread, I have rarely been disappointed later to find out that I bought something BETTER THAN what I needed.
(I wouldn't pay $100 for a 24/96k copy which didn't sound better to me..... but I'd cheerfully pay an extra $5 to get 200% of the quality I really need instead of just exactly 100.0000000%...... I like safety margins.)

JUST TO BE PERFECTLY CLEAR........ since I may appear to be "taking the other side on this issue".......... I'm not.
I agree entirely that it's extremely useful to have ALL the information.
After all, nobody can make an informed decision without all the information.
I don't personally buy high-resolution files because I'm convinced they sound better.....
And, since that's not the reason I buy them, I doubt anything we find out here will convince me not to.....
However, I am still interested to know whether the differences are really easily audible or not.....
(And, yes, it's also interesting to know whether they're "downright obvious" or "only maybe a tiny bit audible with music I'm really familiar with" or "not audible at all".)
And, for others who MAY be buying high-res downloads based solely on claims of superiority which they may not find to be true, this information will be even more valuable.
There's no such thing as "bad information" - as long as it's accurate and clearly stated.

I might mention one other (small) area where I disagree with you.
I personally enjoy certainty... and I find uncertainty... disquieting.
Therefore, I really do enjoy listening to something more when I am absolutely certain that it is the best copy I have access to... and being unsure of that does reduce my enjoyment - at least a little.
And, when I look at images on my calibrated monitor, I do enjoy them a little more knowing that they're right because my calibration is current.
I may enjoy them quite a bit on an uncalibrated monitor - but I enjoy them just a tiny bit more with that last little nagging doubt removed.
(And I suspect that lots of "audiophiles" feel this same way about the music they listen to.)


Audibly transparent means that the quality of the sound exceeds your ability to hear it. Beyond that point, better quality doesn't matter because it can't be heard. If you can't hear something, it's irrelevant to your enjoyment of recorded music. No one wants to pay more or suffer inconvenience because of stuff that doesn't matter.

You see, there's a difference here... I've spent a great deal of time trying to carefully document my perception. I spent two weeks straining to hear differences between a wide range of rates and codecs and a wide range of different kinds of music. I've also invested a great deal of time sharing this test with a wide range of people, and I know the results of those tests. You just have a concept of sound purity that you have faith in, but you haven't made much effort to see if it's a valid concept. I understand how you can't be sure where the line of transparency lies. And I'm sure you can understand how I can be pretty sure.

To know something, you have to want to know. At least half the time, I hand out this test and I never hear back from the people. They aren't really interested enough to put their ears on the line like this. Ignorance is more comfortable to them because they can remain the same and their thinking won't have to change. Changing your mind can hurt. Other people will fudge the text reports from Foobar or cherry pick their tests to make it look the way they want it to look or engage in logical fallacies to prop up their weak theories. That goes beyond ignorance into disingenuousness. But we're in Sound Science here. We enjoy finding out. We don't want to prop ourselves up and pat ourselves on the back. We just want to know so we can apply that knowledge to get better sound out of our audio systems.

If you make the effort to find out the truth, you don't need to depend on faith. Knowing what matters and what doesn't is the highest level of understanding.
 
Nov 22, 2017 at 5:31 PM Post #2,733 of 3,525
I guess so. 35 dB is about as low a noise floor as most living rooms have, too.
 
Nov 22, 2017 at 5:35 PM Post #2,734 of 3,525
Pretty much....

There's also a context here.

For environmental noise measurements, you want a microphone that measures down to very quiet levels....
but you may not be especially concerned with really good frequency response accuracy.
For room and speaker calibration, you want a microphone that is exceptionally flat (or well calibrated)....
but, since you're going to run test tones well above the noise floor to ensure accurate measurements, you probably don't care so much about superior noise performance or superior flatness at very low SPLs.

A cheap "general purpose" meter will probably not be very good for either.... but will more or less get the job done.
However, you're going to pay a lot more for a meter that's really good for EITHER purpose......
And you're going to pay a much bigger premium for one that's very good for both....

Isn't 35 dB actually the limit for cheap (or even not so cheap) SPL meters?
 
Nov 22, 2017 at 6:29 PM Post #2,735 of 3,525
You seem quite content to declare that "if I can't hear the slightest flaw when I listen to this file today then it's good enough".
Personally, I am far less certain than that...... mostly because I suspect that my needs may change later.

Think about it logically... If you are ripping a file from a CD and the compressed version sounds completely identical to the original CD, then for the purposes of listening to it, it's perfect. You won't need better-than-identical sound in the future. It can't sound any better than the CD it's ripped from, and your ears aren't going to hear any better 20 years from now. It's a good idea to bump the bitrate up a notch just to cover some particularly difficult-to-compress sounds. But I can tell you that I have ripped over 10,000 CDs to AAC 256 VBR and I have never run across an artifact in any of my rips.

I've experimented a lot with compression codecs, and there's something most people don't know about the way they work. When you compress a song, the codec throws out inaudible information to make the file smaller; if you run it through the same compression again, all the inaudible information has already been thrown out, so it makes no change. You can compress a file over and over and it doesn't degrade. Once it's been compressed, it won't compress any further unless you change the data rate. This means, if the file is audibly transparent and you need to transcode it to some other new format, as long as the new format is also audibly transparent, there will be no degradation in the sound quality.
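The intuition behind this claim can be sketched with a toy model. A bare uniform quantizer is nothing like a real MP3/AAC encoder (which works on windowed, transformed data with psychoacoustic analysis), but it shows why re-applying the *same* lossy step can be a no-op: once the data sits on the quantizer grid, rounding again changes nothing.

```python
# Toy model of the "no further loss" intuition: a uniform quantizer.
# Real MP3/AAC encoders are far more complex; this only illustrates
# why re-applying the SAME lossy step can be a no-op once the data
# already sits on the quantizer grid.

def quantize(samples, step):
    """Round each sample to the nearest multiple of `step` (the lossy part)."""
    return [round(s / step) * step for s in samples]

signal = [0.13, -0.42, 0.77, 0.31]
first_pass = quantize(signal, 0.1)        # information is discarded here
second_pass = quantize(first_pass, 0.1)   # ...but nothing further is lost

assert first_pass == second_pass          # idempotent at identical settings
assert first_pass != signal               # the first pass was still lossy
```

Whether real codecs behave this way across generations is exactly what the rest of this thread argues about; the toy model only captures the best case of identical settings.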

If you have recorded and mixed the song yourself, it makes sense to archive your mix in the full quality without compression. You might want to go back into the track and remix it and higher bitrates will be useful to you. But for the purposes of listening to music in the home, there is no functional difference between 24/96 or a FLAC file or a high bitrate lossy file that has reached the level of audible transparency. You can pack hard drives with big fat WAV files or high bitrate/high sampling rate files, or lossless files if you want. But there is no audible advantage to it. All that extra file size might as well be excelsior in a shipping box. It's just bulk with no purpose.

Now I can't speak to how you feel about having hard drives full of big fat files. If you sleep better at night knowing you are storing a bunch of inaudible bits for posterity, that's fine. But that has nothing to do with sound quality, it has nothing to do with compatibility in the future, and it has nothing to do with logic. It's just plain OCD - the digital audio version of excessive hand washing. Your hands are clean. Your files are audibly perfect. You don't need to wash them again. You don't need to store a bunch of bits and bytes that you can't hear anyway.

The key issue here is that it doesn't make any practical difference if a file is lossless or lossy as long as it is audibly transparent.
 
Nov 22, 2017 at 8:25 PM Post #2,736 of 3,525
You can compress a file over and over and it doesn't degrade. Once it's been compressed, it won't compress any further unless you change the data rate. This means, if the file is audibly transparent and you need to transcode it to some other new format, as long as the new format is also audibly transparent, there will be no degradation in the sound quality.
In my experience that's not true. It may still be audibly transparent after a few transcodings, but at some point it won't be any more. Here's an example of how it sounds after 100 transcodings (wav->X->wav) for mp3 (preset insane) and vorbis (q10) respectively: SymphonyNo5.zip
 

Attachments

  • SymphonyNo5.zip
    3 MB
Nov 22, 2017 at 8:34 PM Post #2,737 of 3,525
I did 10 transcodings of AAC 256 VBR, which is my standard, and it sounded fine. I figured since I've been encoding for 15 years or so and I haven't had to re-encode yet, 150 years' worth should be safe. How likely is it that you're going to do 100 transcodings in your lifetime? Also, perhaps it's different with an older Fraunhofer MP3. AAC is a much more sophisticated codec.

The only caveat is that iTunes tends to boost the volume a hair each time, so you can't normalize all the way up to zero or it will clip.
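The caveat can be illustrated with a toy hard-clipper: a peak already at 0 dBFS has nowhere to go when gain is added, while a peak with headroom absorbs the boost. The +0.5 dB per pass used here is an invented figure for illustration, not a measured iTunes value.

```python
# Sketch of why repeated small gain boosts clip a 0 dBFS peak.
# The 0.5 dB boost is a hypothetical per-transcode figure.

def boost_and_clip(samples, gain_db):
    """Apply a linear gain, hard-clipping at digital full scale (+/-1.0)."""
    gain = 10 ** (gain_db / 20)
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

peaks_at_full_scale = [1.0, -1.0, 0.5]   # normalized "all the way up"
peaks_with_headroom = [0.7, -0.7, 0.35]  # roughly 3 dB of safety margin

clipped = boost_and_clip(peaks_at_full_scale, 0.5)
safe = boost_and_clip(peaks_with_headroom, 0.5)

assert clipped[0] == 1.0                 # the peak got flattened (clipped)
assert max(safe) < 1.0                   # headroom absorbed the boost
```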
 
Nov 24, 2017 at 10:32 AM Post #2,738 of 3,525
I did 10 transcodings of AAC 256 VBR, which is my standard, and it sounded fine. I figured since I've been encoding for 15 years or so and I haven't had to re-encode yet, 150 years' worth should be safe. How likely is it that you're going to do 100 transcodings in your lifetime? Also, perhaps it's different with an older Fraunhofer MP3. AAC is a much more sophisticated codec.

The only caveat is that iTunes tends to boost the volume a hair each time, so you can't normalize all the way up to zero or it will clip.
With the exact same codec settings each recode, the damage should be minimal. However, if you change codec settings each time, or even one time, or change to a different codec (mp3 > aac), you'll take a pretty obvious hit as perceptual coding is re-done with different rules.
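The "different rules" effect can be pictured with a toy model: two uniform quantizers with mismatched step sizes stand in for two dissimilar codecs. Real codecs differ in far more than step size, so this only captures the grid-mismatch intuition, but the shape of the result is the same: re-rounding on the same grid is harmless, re-rounding on a different grid compounds error.

```python
import math

# Toy illustration of "perceptual coding re-done with different rules":
# mismatched quantizer steps stand in for dissimilar codecs.

def quantize(samples, step):
    return [round(s / step) * step for s in samples]

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

signal = [math.sin(0.37 * n) for n in range(1000)]

same_twice = quantize(quantize(signal, 0.1), 0.1)   # same "codec" again
mismatched = quantize(quantize(signal, 0.1), 0.07)  # different "codec"

# Recoding on the same grid adds nothing further; a mismatched grid
# re-rounds every value, and the two errors compound.
assert rms_error(signal, same_twice) < rms_error(signal, mismatched)
```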
 
Nov 24, 2017 at 11:20 AM Post #2,739 of 3,525
That could be. But if AAC 256 VBR is audibly transparent and the codec you're transcoding to is audibly transparent, then the results should be audibly transparent. A new codec from the future might be able to throw out more info, but it would be inaudible info if both are transparent.
 
Nov 24, 2017 at 1:20 PM Post #2,740 of 3,525
That could be. But if AAC 256 VBR is audibly transparent and the codec you're transcoding to is audibly transparent, then the results should be audibly transparent. A new codec from the future might be able to throw out more info, but it would be inaudible info if both are transparent.
Yes, but each codec arrives at "audibly transparent" differently, and when the algorithms don't match, recoding causes more data loss. Cascading dissimilar codecs definitely results in audible degradation more quickly and with fewer recodes.

A number of years ago I evaluated a broadcast digital STL (studio-transmitter link) made by Dolby using AC3. The radio station also used an on-air computer audio system that stored files as MP3 in a WAV container. I found the degradation from the combination of MP3 and AC3 easily audible, even though each was transparent by itself. This became a non-issue when both the on-air system and the STL went to uncompressed audio, but the situation did reveal that recoding twice with very different codecs was audible.

None of this should be an issue now of course.
 
Nov 24, 2017 at 1:50 PM Post #2,741 of 3,525
That's interesting. I should try my transcode test with 320 LAME alternating with 320 AAC for ten generations and see what happens.
 
Nov 24, 2017 at 5:52 PM Post #2,742 of 3,525
Pinnahertz is totally right here. You can't cascade independently transparent codecs and assume a transparent result. It should be "illegal" to do further lossy coding, because the result can be surprisingly bad depending on how the codecs work. You do lossy coding once and that's it!
 
Nov 24, 2017 at 9:27 PM Post #2,743 of 3,525
Going between codecs will necessitate an intermediate WAV file, so you need to watch out for quantization errors building up as well.
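As a sketch of how those errors can build up: a straight 16-bit round-trip at an identical level is idempotent, so this toy uses a tiny per-generation gain change (0.1%, an arbitrary figure) to stand in for anything that moves the samples off the 16-bit grid between generations.

```python
import math

# Sketch of quantization error accumulating across intermediate 16-bit
# WAV files. The 0.1% per-generation gain is an invented stand-in for
# anything that shifts samples off the 16-bit grid between passes.

def int16_roundtrip(samples):
    """Emulate writing a 16-bit intermediate WAV and reading it back."""
    return [round(s * 32767) / 32767 for s in samples]

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

GAIN, GENERATIONS = 1.001, 100
signal = [0.5 * math.sin(0.37 * n) for n in range(1000)]

# 100 generations, re-quantizing to 16 bits after every gain change...
many_steps = signal
for _ in range(GENERATIONS):
    many_steps = int16_roundtrip([s * GAIN for s in many_steps])

# ...versus applying the total gain once and quantizing once.
ideal = [s * GAIN ** GENERATIONS for s in signal]
one_step = int16_roundtrip(ideal)

err_once = rms([a - b for a, b in zip(one_step, ideal)])
err_many = rms([a - b for a, b in zip(many_steps, ideal)])
assert err_many > err_once   # the repeated round-trips accumulate error
```

In practice dithering each intermediate file is the standard defense against exactly this accumulation.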
 
Nov 27, 2017 at 10:52 AM Post #2,744 of 3,525
I can't speak for ALL lossy CODECs, but I can tell you that you're wrong about MP3. When you initially compress a file, it does indeed do its best to "throw away unnecessary information". However, the process is not as simple as identifying what doesn't matter and deleting it, and re-encoding something that has already been encoded WILL produce "generational degradation". Basically, the encoder does NOT "just throw away the information you won't miss". What it does is divide the audio signal into a bunch of frequency bands, each for a short block of time, decide how much "important" information is contained in each, and then allocate quality/priority to each band depending on how important its contents are. It may discard some information entirely, while other information is simply encoded at lower quality. Each "section" of the information is encoded at the least quality at which "you won't notice the difference" - and the decision of what that will be depends on psychoacoustic properties like masking. Therefore, the majority of information in an MP3-encoded file is neither full quality, nor minimum quality, but somewhere in-between - encoded at "just high enough quality" that you won't notice the loss.

HOWEVER, in no part of this process is there any sort of specific identification of how each individual sound was treated, and so no way to ensure that the process won't be applied repeatedly to a given section. Therefore, if a given frequency/time slice has been encoded with a lot of quantization error (because it was deemed to contain "unimportant content"), and you re-encode it, it will AGAIN be encoded with a lot of quantization error - and those errors will compound. If you take a file that's been encoded at 128k VBR MP3 and re-encode it at the same settings, either as-is or after converting it back into a WAV file, you will probably not lose much ADDITIONAL quality (because pretty much the same decisions are being made); however, the encoder will NOT "simply leave it as is" either. It will be re-encoded, AGAIN with encoding that introduces further quantization errors, so the total sum of the errors will increase. (The result is that areas which are considered unimportant will get significantly worse when you re-encode them, because they will have been encoded at poor quality twice instead of once. Areas which are deemed more important will suffer less degradation, because they will have been encoded twice, but both times at a higher quality setting, which causes less loss of quality. You may argue that, since those areas were unimportant to begin with, the additional loss of quality won't matter - but it is there - and the overall quality will decrease with repeated generations.)

With lossy compression, the analogy of a photocopier is quite valid, and illustrates the situation quite well. If you make a copy of a good quality original on a good quality photocopier there will be little loss. And, if you copy that copy again on that same photocopier, there will again be little loss. However, if you make a copy on a poor quality photocopier, a lot of quality will be lost; and more quality will be lost if you make a copy of that copy at the same poor quality. In this analogy, the way MP3 encoding works is that, for each frequency slice, for each window interval, the encoder "decides how important the content is and how good a copy it needs to make to avoid audibly noticeable degradation". However, at least some quality is lost in each pass, and that loss of quality WILL COMPOUND. (The file won't get smaller, because some of the data being encoded the second time will be quantization errors caused by the first encode process... so the amount of "useful data" won't change much, but the "useless data" will be quite different, and most of it (but not all of it) will be discarded when you encode it the second time.)
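The compounding argument can be sketched with a deliberately crude toy model. Here a pretend one-band "codec" derives its quantizer step from the measured average level, standing in for real psychoacoustic bit allocation (a huge simplification). Because pass 1's quantization errors shift the measured level, pass 2 derives a slightly different grid and re-rounds every sample: the second encode does NOT simply leave the data as-is.

```python
import math

# Crude one-band "codec": its quantizer step depends on the measured
# signal level, the way a real encoder's bit allocation depends on
# psychoacoustic analysis. Everything here is a made-up simplification.

def toy_encode_decode(samples):
    level = sum(abs(s) for s in samples) / len(samples)
    step = level / 16            # "bit allocation" derived from analysis
    return [round(s / step) * step for s in samples]

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

signal = [math.sin(0.37 * n) for n in range(1000)]
gen1 = toy_encode_decode(signal)
gen2 = toy_encode_decode(gen1)

assert rms_error(signal, gen1) > 0   # the first pass is lossy
assert gen2 != gen1                  # the second pass re-rounds again
```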

Think about it logically... If you are ripping a file from a CD and the compressed version sounds completely identical to the original CD, then for the purposes of listening to it, it's perfect. You won't need better-than-identical sound in the future. It can't sound any better than the CD it's ripped from, and your ears aren't going to hear any better 20 years from now. It's a good idea to bump the bitrate up a notch just to cover some particularly difficult-to-compress sounds. But I can tell you that I have ripped over 10,000 CDs to AAC 256 VBR and I have never run across an artifact in any of my rips.

I've experimented a lot with compression codecs, and there's something most people don't know about the way they work. When you compress a song, the codec throws out inaudible information to make the file smaller; if you run it through the same compression again, all the inaudible information has already been thrown out, so it makes no change. You can compress a file over and over and it doesn't degrade. Once it's been compressed, it won't compress any further unless you change the data rate. This means, if the file is audibly transparent and you need to transcode it to some other new format, as long as the new format is also audibly transparent, there will be no degradation in the sound quality.

If you have recorded and mixed the song yourself, it makes sense to archive your mix in the full quality without compression. You might want to go back into the track and remix it and higher bitrates will be useful to you. But for the purposes of listening to music in the home, there is no functional difference between 24/96 or a FLAC file or a high bitrate lossy file that has reached the level of audible transparency. You can pack hard drives with big fat WAV files or high bitrate/high sampling rate files, or lossless files if you want. But there is no audible advantage to it. All that extra file size might as well be excelsior in a shipping box. It's just bulk with no purpose.

Now I can't speak to how you feel about having hard drives full of big fat files. If you sleep better at night knowing you are storing a bunch of inaudible bits for posterity, that's fine. But that has nothing to do with sound quality, it has nothing to do with compatibility in the future, and it has nothing to do with logic. It's just plain OCD - the digital audio version of excessive hand washing. Your hands are clean. Your files are audibly perfect. You don't need to wash them again. You don't need to store a bunch of bits and bytes that you can't hear anyway.

The key issue here is that it doesn't make any practical difference if a file is lossless or lossy as long as it is audibly transparent.
 
Nov 27, 2017 at 12:09 PM Post #2,745 of 3,525
There's more than one MP3. Fraunhofer MP3 is pretty primitive compared to LAME, and MP4 codecs like AAC are a step beyond both of them. I'm mainly speaking about AAC. I haven't done generation tests with the older codecs, only AAC. Generation loss is minimal to the point of insignificance with AAC 320 VBR with a normal number of generations (under 10). For the purposes of listening to music in the home, high bitrate AAC is pretty much the same as lossless. You'd have to come up with a pretty extreme circumstance to find a difference between them.

But unless I'm mistaken, you're speaking entirely in theory about this. You haven't actually tried it yourself. It's easy to find out for yourself. Just take a great sounding CD. Rip it to WAV and normalize it down to 85%. Convert it to AAC 320 VBR then back to WAV. Rinse and repeat to 10 generations, then compare your original WAV to the 10th generation AAC. Then you'll know for sure what the difference is. I've done this. I know. The results surprised me because I was expecting significant degradation like you are.
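For the comparison step, a "null test" (subtract the two files and measure what's left) puts a number on the difference instead of relying on ears alone. A minimal sketch, assuming 16-bit mono WAV files; the file paths are placeholders, and the AAC encode/decode itself would be done with an external tool that isn't shown here.

```python
import math
import struct
import wave

def read_wav_mono(path):
    """Read a 16-bit mono WAV into a list of floats in [-1, 1]."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return [s / 32768.0
            for s in struct.unpack("<%dh" % (len(frames) // 2), frames)]

def null_test_db(path_a, path_b):
    """RMS level of the difference signal in dBFS; lower = closer match."""
    a, b = read_wav_mono(path_a), read_wav_mono(path_b)
    n = min(len(a), len(b))
    rms = math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(n)) / n)
    return 20 * math.log10(max(rms, 1e-12))   # floor avoids log10(0)
```

Note that lossy decoders can add a few samples of delay, so in a real generation test the files should be time-aligned before subtracting or the null result will be misleadingly bad.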

The great thing about lossy is that it's so easy to do listening tests to determine exactly what the thresholds are and how it works, but I guess most people don't bother. They go with what seems logical to them or a gut feeling instead of making the minimal effort to find out for themselves. It isn't like a xerox. A xerox throws out stuff you can clearly see. High bitrate lossy is audibly transparent. It throws out things you can't hear. Perhaps after 100 generations it would be audible. But most people will never transcode more than 2 or 3 times in their lifetimes. For the purposes it was designed for it works better than your theories say it does.
 
