Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Sep 11, 2015 at 12:21 PM Post #1,231 of 3,525
All this discussion of ABX, Meyer and Moran, later experiments etc. is all well and good, but I can't get my head around the first-order givens, and hence why the burden of proof should not fall squarely on those claiming that high-res audio sounds better.
 
If 24-bit, compared to 16-bit, only offers an increase in dynamic range that is of no practical use, and lowers a noise floor that is already very low and masked by any musical content, why should there be any theoretical advantage in reproducing sound?
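(As a minimal sketch of the arithmetic behind that point, using the standard dynamic-range approximation for ideally dithered PCM - about 6.02 dB per bit plus 1.76 dB:)

```python
# Theoretical dynamic range of an ideally dithered PCM channel:
# the SNR of a full-scale sine over the quantization noise floor.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (13, 16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# 13-bit: ~80 dB, 16-bit: ~98 dB, 24-bit: ~146 dB
```

Even 16-bit's ~98 dB already spans from a quiet room to the threshold of pain; the extra ~48 dB of 24-bit sits below any realistic playback noise floor.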
 
Secondly, a hundred years of testing has established that the hearing range of humans lies within a 20 Hz to 20 kHz band.  So why should there be any theoretical advantage in extending the range for music playback?  I understand the arguments - none of them convincing, or proven since the advent of oversampling - that a steep cut-off may result in errors below the threshold.  Surely, if that ever was an issue, it would have been resolved in the early 80s?  I know of no peer-reviewed papers proving otherwise.
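(To make the band-limiting point concrete, a sketch of sampling a near-Nyquist tone at 44.1 kHz and reconstructing it at a higher rate - assuming NumPy/SciPy; the residual is limited by the quality of the interpolation filter, not by any information lost below Nyquist:)

```python
import numpy as np
from scipy.signal import resample_poly

fs_lo, fs_hi, f = 44_100, 192_000, 19_000.0  # tone near the top of the audible band
x_lo = np.sin(2 * np.pi * f * np.arange(fs_lo) / fs_lo)    # 1 s sampled at 44.1 kHz
x_up = resample_poly(x_lo, 640, 147)                       # 44.1 kHz -> 192 kHz
ideal = np.sin(2 * np.pi * f * np.arange(fs_hi) / fs_hi)   # same tone, sampled at 192 kHz
mid = slice(fs_hi // 10, -fs_hi // 10)                     # skip filter edge effects
err = x_up[mid] - ideal[mid]
print(f"peak reconstruction error: {20 * np.log10(np.max(np.abs(err))):.1f} dBFS")
```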
 
Lastly, if we look at progress in high-res video technology, e.g. OLED displays, higher resolution makes a difference there because smaller pixels yield sharper pictures.  This is not the same as audio, where higher bit depth does not increase the accuracy of the sound wave within the band-limited frequency range.
 
As an analogy with the 44.1 vs 96 or 192 debate, what would everyone think if TV manufacturers started marketing sets claiming higher-quality pictures because they extended the frequency bandwidth deep into infrared or ultraviolet?  It appears that only in audiophile land do people think such claims are credible.

 
But your analogies aren't actually correct.
 
When HD TVs (1920 x 1080) first appeared, it was NOT universally agreed that the extra resolution actually made a significant difference for most customers. Many people in fact argued that there was very little HD content available, and that using "full 1080p HD resolution" on a screen smaller than 30" was a total waste anyway - because nobody could see the difference between 720p and 1080p on a screen that small. However, today, almost every TV of any size is full 1080p HD, and we're having the same argument about 4k.
 
The problem with your argument is that the basic premise is limited. Yes, if there were absolutely reliable proof that frequency response above 20 kHz absolutely, positively produces no audible difference, then it would be unnecessary (although I'm still not convinced that having a "safety margin" above the bare minimum isn't still a good idea). However, the proof you're offering isn't at all "absolute" or "conclusive". In fact, most of those tests were conducted with inadequately sized sample groups, using obsolete equipment, and frequently using dubious test methodology. The fact that twenty or thirty people, using 1980s-vintage technology and 1980s-vintage recordings, couldn't reliably hear a difference is NOT compelling proof that the difference doesn't exist - at least not to me. And, if we were in fact to prove, with properly sized and run tests, that the difference wasn't audible with the best equipment available today, that wouldn't constitute evidence about whether there might be a difference that is audible with the equipment available in twenty years. I simply don't believe that we actually understand 100.0% of how human hearing works; especially since human hearing takes place partly in the brain - and we certainly don't understand anywhere near 100% of how THAT works.
 
(The reality is that there have been several tests run in recent times which tend to suggest that frequency response above 20 kHz can in fact produce audible effects - in different ways and with different implications. The recent AES paper seems to show that a small sample of individuals was able to "beat the odds" in terms of telling whether a given sample was high resolution or not. Another test I recall reading about produced a result that demonstrated that, while the participants didn't hear what they considered to be an audible difference with band-limited content, the location of instruments in the sound stage was perceived as being shifted with the band-limited version, which is in fact "an audible effect". Note that I don't consider either of those results to be "compelling" either but, when balanced against tests run decades ago, with the audio equipment then current, I think they raise enough questions to make it unreasonable to "fall back" on those outdated results as being "absolute facts" without confirmation.)
 
Sep 11, 2015 at 12:51 PM Post #1,232 of 3,525
yup, tv resolution alone means nothing; in the end what matters is the angular resolution of the eye. so unless it's considered in conjunction with screen size and distance from the viewer, it doesn't work as an analogy, and we can't even determine a threshold of visibility.
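(Rough arithmetic for that, assuming the common figure of ~60 pixels per degree as the acuity limit of 20/20 vision; the screen size and viewing distance are just example values:)

```python
import math

def pixels_per_degree(px_width: int, screen_width_m: float, distance_m: float) -> float:
    """Horizontal pixels falling within one degree of visual angle at screen center."""
    m_per_degree = 2 * distance_m * math.tan(math.radians(0.5))
    return (px_width / screen_width_m) * m_per_degree

# A 50" 16:9 screen (~1.11 m wide) viewed from 2.7 m:
for label, px in (("1080p", 1920), ("4k", 3840)):
    print(f"{label}: {pixels_per_degree(px, 1.11, 2.7):.0f} px/deg")
# If 1080p already exceeds ~60 px/deg at this distance, 4k adds nothing visible.
```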
 
Sep 11, 2015 at 12:52 PM Post #1,233 of 3,525
we all know that ABX isn't perfect; how can anything that depends on human senses, while self-administered, hope to be perfect? even masturbation can always be improved.
but while you're explaining how some things may elude an ABX test, we have a vast majority of people who are misinformed about sighted evaluation, and who read - and believe - reviews from guys who claim to be beyond bias. "oh I'm experienced enough to blablablah" kind of nonsense, even from professionals.
to me this is the real problem in audio communities - not that a few guys, out of the minority who use ABX, sometimes think too highly of the conclusion of a test.
each time someone writes about ABX not being a panacea, you have 10 dudes who misread it as "I should keep doing no control at all and trust myself, because it's better".
 
 
let's first take care of the ship sinking; after that we'll argue about the best way to navigate.

 
I agree with you wholeheartedly.
 
So, in case there's any confusion about my opinion here.....
 
We are ALL subject to expectation bias (sometimes called the placebo effect). What we see, hear, taste, and feel is influenced to a significant degree by what we expect. This is true for all humans (at least all humans tested so far). None of us is immune and, while we can acknowledge it, and even do our best to avoid it or compensate for it, it is inescapable. I would also agree that a significant percentage of the "subjective" opinions of most audiophiles - including myself - is quite probably based on bias or mis-perception.
 
There is also a tendency for us humans, especially audiophiles, to confuse science with practicality... and I'm considering this to be a discussion of science rather than practicality. To put it bluntly, I've owned dozens of pairs of speakers in my life, and about a dozen different makes and models of headphones, and I currently own several thousand CDs - and probably about a hundred "high-res remasters" of various albums. Of those, if I exclude situations where the difference is likely to be due simply to obvious differences in the mastering itself, there are about a dozen instances where I'm quite certain that I can hear an actual difference, and only then if I listen using one particular pair of speakers, or two specific pairs of headphones, and only then if I listen very carefully and concentrate on listening for a difference. And, even then, I only notice it by direct A/B comparison - and I almost certainly wouldn't notice it if I were to walk out of the room and come back in.
 
Therefore, if you want to discuss whether there is a "significant difference" or "a difference most people would hear" I would probably agree that it would be unlikely. And, when and if actually asked whether high-res remasters are in fact better, my answer is virtually always that: "I can tell you that I have lots of high-res remasters that sound obviously better than the original; but I'm not sure whether it's because they're high-res or simply because they're mastered better." However, as a point of science, I certainly haven't seen proof that convinces me that there is absolutely positively no audible difference - and, even if someone were to prove to me that I personally really can't detect an audible difference, that still wouldn't prove that nobody else on Earth can do so.
 
I also agree that unbiased testing is the best way to determine for sure, and that ABX testing works pretty well to eliminate a significant amount of the bias that normally leads people to reach unreasonable and inaccurate conclusions. (It's also about the only type of testing you can do for yourself, without a huge budget and a very large group of cooperative friends.) And, in fact, if anyone wishes to determine for themselves whether THEY can hear a difference, using their equipment, and their favorite source material, then performing an ABX test is almost certainly the best way to go about it (and the ABX test plugin for Foobar2000 is an excellent way to do it). I would also consider it to be a very fair argument that any difference that cannot be detected using a plain old ABX test isn't "significant" or "important" to most people (and, even if limiting the bandwidth to 20 kHz really does shift the violin three inches to the left, I don't personally care about that either).
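For anyone running that kind of self-test, the statistics are simple enough to check directly - a minimal sketch of the usual one-sided binomial calculation (plain Python, assuming nothing beyond p = 0.5 per trial under the null hypothesis):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials`
    ABX trials by pure guessing (one-sided binomial, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12 of 16 correct: guessing gets there only ~3.8% of the time.
print(f"p = {abx_p_value(12, 16):.4f}")
```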
 
Sep 11, 2015 at 1:03 PM Post #1,234 of 3,525
   
While perhaps not practical, this is not an issue with an ABX test, even using the popular ABX plugin for the free Foobar2000 music player.  There is nothing preventing you from making files long enough to accomplish exactly what you are describing.  I think you would have a difficult time establishing that a lower-quality version is actually giving you a headache.

 
Agreed. The problem with testing some of these things is the number of test subjects and the time required. For example, in order to test whether "version x" is more fatiguing than "version y" with any degree of certainty and accuracy we need to get four or five hundred people in a room, get them all to listen to each for three or four hours, then repeat that every day for a few weeks. Unless you have a huge budget, it would be impractical to do this in the format of a proper ABX test.
 
However, a less formal version, which might consist of playing "version x" through the loudspeakers in your local library on Mondays and Wednesdays for a month, and "version y" on Tuesdays and Thursdays, keeping track of how many people complain that the music is annoying, and the average time each patron stays in the library before they leave, and correlating the results, might actually be something that could be arranged (perhaps by the library in cooperation with a local university).
 
And, if you didn't notice the flaw there, we should alternate which days of the week each version plays on, to rule out the possibility that there's some other reason why patrons stay for less time on Tuesdays or get more headaches on Wednesdays. This is the sort of detail that people who do these sorts of tests for a living have to account for.
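(A sketch of that counterbalancing step - the day assignments and week count are made up for illustration:)

```python
import random

DAYS = ["Mon", "Tue", "Wed", "Thu"]

def counterbalanced_schedule(weeks: int, seed: int = 0) -> list:
    """Reshuffle which version plays on which days each week, so that
    day-of-week effects can't masquerade as a difference between versions."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(weeks):
        days = DAYS[:]
        rng.shuffle(days)
        schedule.append({"x": sorted(days[:2]), "y": sorted(days[2:])})
    return schedule

for week, plan in enumerate(counterbalanced_schedule(4), start=1):
    print(f"week {week}: version x on {plan['x']}, version y on {plan['y']}")
```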
 

 
Sep 11, 2015 at 1:20 PM Post #1,235 of 3,525
   
Agreed. The problem with testing some of these things is the number of test subjects and the time required. For example, in order to test whether "version x" is more fatiguing than "version y" with any degree of certainty and accuracy we need to get four or five hundred people in a room, get them all to listen to each for three or four hours, then repeat that every day for a few weeks. Unless you have a huge budget, it would be impractical to do this in the format of a proper ABX test.
 
However, a less formal version, which might consist of playing "version x" through the loudspeakers in your local library on Mondays and Wednesdays for a month, and "version y" on Tuesdays and Thursdays, keeping track of how many people complain that the music is annoying, and the average time each patron stays in the library before they leave, and correlating the results, might actually be something that could be arranged (perhaps by the library in cooperation with a local university).
 
And, if you didn't notice the flaw there, we should alternate which days of the week each version plays on, to rule out the possibility that there's some other reason why patrons stay for less time on Tuesdays or get more headaches on Wednesdays. This is the sort of detail that people who do these sorts of tests for a living have to account for.
 

 
I wasn't necessarily thinking about large-scale, authoritative testing.  If I were getting headaches and suspected it was due to the format of the file, I'd want to at least try to test this on myself.  Maybe simply making copies of my favorite few albums at different quality levels and then having the playlist shuffled would be enough.
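A sketch of how one might blind such a self-test - the file names here are hypothetical, and note that tags or file sizes could still give the versions away unless stripped:

```python
import csv
import random
import shutil
from pathlib import Path

# Hypothetical inputs: the same album rendered at different quality levels.
SOURCES = ["album_16_44.flac", "album_24_96.flac", "album_24_192.flac"]

def make_blind_copies(out_dir: str = "blind_test", copies_each: int = 3) -> None:
    """Copy each version several times under opaque names, keeping the
    answer key in a file you don't open until the listening is done."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    tracks = SOURCES * copies_each
    random.shuffle(tracks)
    with open(out / "answer_key.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for i, src in enumerate(tracks, start=1):
            blind_name = f"track_{i:02d}.flac"
            shutil.copy(src, out / blind_name)
            writer.writerow([blind_name, src])

make_blind_copies()
```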
 
Sep 11, 2015 at 2:12 PM Post #1,236 of 3,525
  Therefore, if you want to discuss whether there is a "significant difference" or "a difference most people would hear" I would probably agree that it would be unlikely. And, when and if actually asked whether high-res remasters are in fact better, my answer is virtually always that: "I can tell you that I have lots of high-res remasters that sound obviously better than the original; but I'm not sure whether it's because they're high-res or simply because they're mastered better." However, as a point of science, I certainly haven't seen proof that convinces me that there is absolutely positively no audible difference - and, even if someone were to prove to me that I personally really can't detect an audible difference, that still wouldn't prove that nobody else on Earth can do so.

 
The way to test hi-res isn't to compare it to a CD that might be a different mastering; you reduce the hi-res version to Redbook and then have a go. I mean, any remastering that required that I AB quick-switch to hear a difference would make me question just how much work the mastering engineer put into the thing!
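A sketch of that reduction step (assuming the `soundfile` and SciPy libraries, a 96 kHz source, and hypothetical file names) - downsample, dither, and truncate, so the format itself is the only remaining variable:

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

def to_redbook(src: str, dst: str) -> None:
    """Reduce a 96 kHz hi-res file to 44.1 kHz / 16-bit Redbook."""
    x, fs = sf.read(src)
    assert fs == 96_000, "this sketch assumes a 96 kHz source"
    y = resample_poly(x, 147, 320, axis=0)   # 44100/96000 = 147/320
    lsb = 2 ** -15                           # one 16-bit LSB at +/-1.0 full scale
    tpdf = (np.random.rand(*y.shape) - np.random.rand(*y.shape)) * lsb
    y = np.clip(y + tpdf, -1.0, 1.0)         # TPDF dither before truncation
    sf.write(dst, y, 44_100, subtype="PCM_16")

to_redbook("master_24_96.flac", "master_redbook.flac")
```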
 
Sep 11, 2015 at 3:13 PM Post #1,237 of 3,525
   
The way to test hi-res isn't to compare it to a CD that might be a different mastering; you reduce the hi-res version to Redbook and then have a go. I mean, any remastering that required that I AB quick-switch to hear a difference would make me question just how much work the mastering engineer put into the thing!

 
Agreed - although "remastering" is a very flexible term, and seems to mean different things to different people. To me, the 24/192k remasters of the Grateful Dead Studio Albums sound a lot different - and a lot better - than all of the previous versions of the same albums I've heard (and I've read descriptions of the significant processing and signal "repair" that was done along the way). However, I've definitely got a few other 24/192k "remasters" that sound so identical to previous versions that I'm pretty sure the ONLY difference is that they ran the same exact master tape through the converter at 24/192k instead of 16/44k. (And that's giving them the benefit of the doubt that they didn't simply upsample the 16/44k version.)
 
The problem with simply down-converting from the 24/192k version to produce an "equivalent 16/44k version" is that ANY sample rate conversion involves some filtering, and so the conversion process itself will in fact alter the signal slightly. Even taking a 16/44k signal, converting it to 24/192k, then converting it back to 16/44k using the same program - which should produce no difference at all - usually yields an audible difference... so the conversion process itself is NOT audibly transparent. (If you can't hear a difference, then it will prove that the conversion and the difference in sample rate, taken together, aren't audible with your sample content, which would be sufficient for the folks looking to prove that the difference doesn't exist; but, if you do hear a difference, it won't be possible to tell whether the difference is due to the difference in sample rate, or to the conversion process itself, or both.)
 
Therefore, as I've said before, anyone considering whether to purchase a "remastered" version of an album is probably better served by reading a few reviews about the particular version they're considering buying, and deciding in general whether it is likely to be an improvement, than by worrying about the sample rate it happens to be offered at... (and I personally tend to be willing to spend a few dollars more for the higher-resolution version - but more as a matter of "insurance" than out of any specific expectation that it will be better). The simple reality is that, technical realities aside, the whole "high-res file craze" has provided an excellent excuse for the latest wave of "remasters" and "reissues" and, for whatever reason, many of them are in fact very good. Everyone should also remember that, in the end, even if it turns out that 24/192k is capable of sounding audibly better than 16/44k, that's still only going to be true in a specific situation if the master is good enough for the difference to matter, and if the engineering and conversion are good enough to preserve that difference.
 
Sep 11, 2015 at 3:24 PM Post #1,238 of 3,525
The problem with simply down-converting from the 24/192k version to produce an "equivalent 16/44k version" is that ANY sample rate conversion involves some filtering, and so the conversion process itself will in fact alter the signal slightly. Even taking a 16/44k signal, converting it to 24/192k, then converting it back to 16/44k using the same program - which should produce no difference at all - usually yields an audible difference... so the conversion process itself is NOT audibly transparent. (If you can't hear a difference, then it will prove that the conversion and the difference in sample rate, taken together, aren't audible with your sample content, which would be sufficient for the folks looking to prove that the difference doesn't exist; but, if you do hear a difference, it won't be possible to tell whether the difference is due to the difference in sample rate, or to the conversion process itself, or both.)

 
Of course it involves filtering, but at some point you have to say: this is how we test the same content at different rates. Even feeding the recorded signal into two different paths for hi-res and Redbook could lead to differences not due only to the bit depth or sample rate. But I strongly disagree that interpolating to 192 and back down to 44.1 will be "usually audible." Once again, you can look at and listen to the difference between the original 44.1 and the up/down version, and here the comparison is even more apt because there wasn't any hi-res material to begin with.
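That round trip is easy to measure rather than argue about - a sketch using SciPy's polyphase resampler (one resampler among many; other tools will behave differently):

```python
import numpy as np
from scipy.signal import resample_poly

def round_trip_residual_db(x: np.ndarray) -> float:
    """Peak level of (round-tripped minus original), in dB re full scale."""
    up = resample_poly(x, 640, 147)     # 44.1 kHz -> 192 kHz
    back = resample_poly(up, 147, 640)  # 192 kHz -> 44.1 kHz
    n = min(len(x), len(back))
    residual = back[:n] - x[:n]
    return 20 * np.log10(np.max(np.abs(residual)) + 1e-12)

# Broadband noise is a worst case: its content just below 22.05 kHz is what
# the two interpolation filters touch, so the residual concentrates there.
x = 0.7 * (2 * np.random.rand(44_100 * 5) - 1)
print(f"peak residual: {round_trip_residual_db(x):.1f} dBFS")
```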
 
Sep 11, 2015 at 3:44 PM Post #1,239 of 3,525
   
Of course it involves filtering, but at some point you have to say: this is how we test the same content at different rates. Even feeding the recorded signal into two different paths for hi-res and Redbook could lead to differences not due only to the bit depth or sample rate. But I strongly disagree that interpolating to 192 and back down to 44.1 will be "usually audible." Once again, you can look at and listen to the difference between the original 44.1 and the up/down version, and here the comparison is even more apt because there wasn't any hi-res material to begin with.

 
I guess "usually" is a vague term.
 
I've tried that test (converting a 16/44k original up to 24/192k and then back to 16/44k) with a few programs - and the difference was sometimes rather audible. However, most of the "higher end" programs offer multiple options for dithering and filtering whenever you do a sample rate conversion, as well as various tradeoffs between cutoff frequency, cutoff sharpness, impulse response, and processing time, and at least some of those options are in fact audibly different. I wouldn't rule out the possibility that at least some of those combinations and options may turn out to be inaudible, but that would itself need to be tested. (And I'm not specifically aware of a certain combination of program and settings that I would assume to produce an inaudible conversion.)
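The dither part, at least, is easy to demonstrate in isolation - a sketch comparing 16-bit quantization with and without TPDF dither on a very quiet tone (the undithered error is harmonic distortion correlated with the signal; the dithered error is slightly louder but benign, signal-independent noise):

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
x = 1e-3 * np.sin(2 * np.pi * 1_000 * t)  # a -60 dBFS tone, near the 16-bit floor

def quantize16(x: np.ndarray, dither: bool) -> np.ndarray:
    lsb = 2 ** -15
    noise = (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb if dither else 0.0
    return np.round((x + noise) / lsb) * lsb

for dither in (False, True):
    err = quantize16(x, dither) - x
    print(f"dither={dither}: error RMS {20 * np.log10(np.sqrt(np.mean(err ** 2))):.1f} dBFS")
```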
 
Of course, as I mentioned, if the test were to conclude that there was no audible difference, then that conclusion would support both claims: that the differences between the different sample rates were inaudible, and that any anomalies caused by the conversion process itself were also inaudible (ignoring the slight possibility that differences caused by each individually might cancel out).
 
Sep 11, 2015 at 3:51 PM Post #1,240 of 3,525
   
I guess "usually" is a vague term.
 
I've tried that test (converting a 16/44k original up to 24/192k and then back to 16/44k) with a few programs - and the difference was sometimes rather audible. However, most of the "higher end" programs offer multiple options for dithering and filtering whenever you do a sample rate conversion, as well as various tradeoffs between cutoff frequency, cutoff sharpness, impulse response, and processing time, and at least some of those options are in fact audibly different. I wouldn't rule out the possibility that at least some of those combinations and options may turn out to be inaudible, but that would itself need to be tested. (And I'm not specifically aware of a certain combination of program and settings that I would assume to produce an inaudible conversion.)
 
Of course, as I mentioned, if the test were to conclude that there was no audible difference, then that conclusion would support both claims: that the differences between the different sample rates were inaudible, and that any anomalies caused by the conversion process itself were also inaudible (ignoring the slight possibility that differences caused by each individually might cancel out).

 
You'll have to describe what rather audible means, because what I tend to get from sox (for a difference) using the typical methods is some content at -70dBFS between 20-22kHz and then dither noise at about -110dB for the rest of the frequency range.
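For anyone wanting to quantify a difference track the same way, a sketch of the band-level measurement (the `diff` signal here is a placeholder; in practice it would be the original minus the converted version):

```python
import numpy as np

def band_level_db(x: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Approximate RMS level of x within [lo, hi) Hz, via an FFT band sum."""
    spec = np.fft.rfft(x) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    band = (freqs >= lo) & (freqs < hi)
    rms = np.sqrt(2 * np.sum(np.abs(spec[band]) ** 2))
    return 20 * np.log10(rms + 1e-12)

fs = 44_100
diff = 1e-5 * np.random.randn(fs)   # placeholder difference signal
print(f"20-22 kHz: {band_level_db(diff, fs, 20_000, 22_050):.1f} dBFS")
print(f"below 20 kHz: {band_level_db(diff, fs, 20, 20_000):.1f} dBFS")
```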
 
Sep 11, 2015 at 4:14 PM Post #1,241 of 3,525
Of course it involves filtering, but at some point you have to say: this is how we test the same content at different rates. Even feeding the recorded signal into two different paths for hi-res and Redbook could lead to differences not due only to the bit depth or sample rate. But I strongly disagree that interpolating to 192 and back down to 44.1 will be "usually audible." Once again, you can look at and listen to the difference between the original 44.1 and the up/down version, and here the comparison is even more apt because there wasn't any hi-res material to begin with.

Yes, I'd have to agree. I've taken "purported" 24/96 or 24/192 files, down/up converted with Audacity's default settings and then nulled in ADM, which reports a null down around -85dB. Sure, when you listen to the difference track there is something there, but, with the best will in the world, nobody is going to hear it in the presence of the original signal at that sort of level.

There's a guy on YouTube who's made videos showing the same thing when he nulls down/up converted files, so I'd have to say any software that produces audible differences due to down/up-converting is pretty lousy and needs to be avoided.
 
Sep 11, 2015 at 5:14 PM Post #1,242 of 3,525
Yes, I'd have to agree. I've taken "purported" 24/96 or 24/192 files, down/up converted with Audacity's default settings and then nulled in ADM, which reports a null down around -85dB. Sure, when you listen to the difference track there is something there, but, with the best will in the world, nobody is going to hear it in the presence of the original signal at that sort of level.

There's a guy on YouTube who's made videos showing the same thing when he nulls down/up converted files, so I'd have to say any software that produces audible differences due to down/up-converting is pretty lousy and needs to be avoided.

Yeah, that's why I use dBpoweramp - it's great software.  If your conversion software actually produces an audible difference when downsampling from 192 to 48, the fact is it's a problem with the software, not with the act of downsampling itself.
 
Sep 11, 2015 at 5:26 PM Post #1,243 of 3,525
   
You'll have to describe what rather audible means, because what I tend to get from sox (for a difference) using the typical methods is some content at -70dBFS between 20-22kHz and then dither noise at about -110dB for the rest of the frequency range.

 
To me "rather audible" means that, when I switch back and forth, I hear an obvious difference.
 
Unfortunately, unless you take actual measurements, the various settings in the various conversion programs aren't comparable. For example, in iZotope RX3's Resample module, I get to pick a new Sample Rate, a Filter Steepness (from 0 to 2000, with a default of 834), a Cutoff Shift (from 0.7 to 1.4) that adjusts the cutoff frequency up or down, and a pre-Ringing setting (that goes from 0 to 1.0), while in Adobe Audition I get to choose between "high quality" and "fast processing" - which I'm pretty sure means that the high-quality option uses a filter with more taps. And I'm sure I could look up whether each of those uses FIR or IIR filters, how many taps each uses, and at least some of the filter parameters, although most programs tend to omit many of those important details. And, of course, other programs offer other options.

My point there is that the settings you choose when doing resampling, and how audible (or not) any of them are, is a whole subject in and of itself... so the first part of determining whether the difference in sample rates was audible would be to choose a method for performing the conversion, and then to test whether THAT was in fact inaudible or not. (And, if you go to some of the pro-audio forums, you'll find lively discussions about which converters sound better, which ones are more transparent, and which settings are best for particular kinds of music.)
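To illustrate just the "steepness" knob in isolation - a sketch of how tap count trades against transition-band attenuation in a plain windowed FIR design (using SciPy's firwin; real resamplers use more sophisticated designs, so this is only indicative):

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 192_000
cutoff = 21_000  # anti-alias cutoff for a 44.1 kHz target, in Hz

for numtaps in (101, 1001, 10001):
    h = firwin(numtaps, cutoff, fs=fs)
    w, H = freqz(h, worN=8192, fs=fs)
    level_db = 20 * np.log10(np.abs(H) + 1e-12)
    at_22k = level_db[np.argmin(np.abs(w - 22_050))]
    print(f"{numtaps:>5} taps: {at_22k:7.1f} dB at 22.05 kHz")
# More taps -> a narrower transition band -> less energy leaking past Nyquist.
```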
 
I also notice that your chosen resampler - SoX - offers quite a few options; in fact, their example graph of a 96k-to-44k conversion shows the transient response plots for twelve different combinations of settings, all of which look - and potentially sound - different. (There are several other threads where the audibility of those types of differences is discussed and argued.) Which of those should we consider to be "right"? And that's just one parameter that can be adjusted. (And we should note that all of those graphs show ways in which SoX ALTERS the original signal during the conversion process.) There's also a link on the SoX page to a website that shows the performance of a whole slew of sample rate converter programs (SoX ranks very well with certain settings; many other popular programs do not.)
 
(I'm not trying to derail this discussion onto a siding... I'm merely pointing out that converting a 96k file to 44k without introducing any audible artifacts isn't at all a trivial proposition, and even such a seemingly simple step as "making a good 44k version of a 96k file" is a lot less straightforward than it seems at first.)
 
Another factor that most of the participants here seem to ignore is that different DACs process different sample rates differently (most DACs are programmed to use different oversampling multipliers depending on the sample rate of the input - because hardware limitations prevent them from applying high oversampling multipliers to inputs that are already at high sample rates). This means that, even assuming that you did have two samples, at different sample rates, but otherwise audibly identical, they could sound audibly different when played on a specific DAC because that particular DAC responds differently to the different sample rates. (The much maligned recent AES paper addressed that issue by having all of their samples played at 192k - with the "44k" samples filtered AS IF they were prepared for being recorded at 44k. Thus their samples could be expected to exhibit any audible differences due to the bandwidth limitation of the filtering, but, at the same time, avoid differences due to how the individual DAC used handles different sample rates. In other words they used "44k audio played at 192k" for their "44k samples".)
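(A purely hypothetical illustration of that last point - the modulator rates are typical of delta-sigma parts, but no specific DAC is being described:)

```python
# Many oversampling DACs run their modulator near one fixed rate, so the
# oversampling ratio shrinks as the input sample rate rises.
MODULATOR_TARGET = {44_100: 352_800, 48_000: 384_000}  # 44.1k / 48k families

def oversampling_ratio(fs_in: int) -> int:
    family = 44_100 if fs_in % 44_100 == 0 else 48_000
    return MODULATOR_TARGET[family] // fs_in

for fs in (44_100, 96_000, 192_000):
    print(f"{fs} Hz input -> {oversampling_ratio(fs)}x oversampling")
# 44.1 kHz gets 8x; 192 kHz only 2x - a different filter path for each rate.
```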
 
Again, however, finding that no audible differences existed in spite of these other variables would prove that none of them was audible.
 
Sep 11, 2015 at 9:55 PM Post #1,245 of 3,525
Re - Frodeni

With respect, I think it is a bit unfair to accuse me of shutting down debate. This is, after all, a "sound science" forum, and the questions I raised are quite fundamental to the science. If you are putting forward a proposition that violates what we know about human hearing and logic (e.g. why would an apparent issue with 16-bit not manifest itself even more with analogue's roughly 13-bit equivalent?), then it is perfectly valid to ask for the evidence, and the maths, that you say support the proposition. If these questions about the logic and evidence behind propositions which challenge what we know technically and physiologically about music playback and music perception cannot be asked on a sound science forum, then of what use is it?
 
