Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Nov 2, 2017 at 9:49 AM Post #2,431 of 3,525
If we add pre-ringing to that recorded bell sound, we will alter its original envelope, which begins rather sharply, into one that ramps up more gradually.
I personally don't know how much that would affect how our brain interprets the location of the "real bell" - but maybe we need to find out.

Therefore, it's not unreasonable to wonder if, while your brain is identifying the pitch you're hearing by using your ear like a spectrum analyzer...
another section has already tentatively identified the approximate location of the sound based on the beginning edges of the sound envelopes.
(Which might suggest that altering the shape of those envelope leading edges might affect that part of the result.)

For example, we could ask people to locate a bunch of objects in the sound field, and rate their accuracy ("point to where it sounds like the violin is coming from").
We could then ask them to repeat the test with test samples recorded at various sample rates and see if their accuracy is the same for each - or not.

An interesting and not at all unreasonable point. However, it is an invalid one, and there are several reasons for my saying so:

1. I agree that pre-ringing effectively changes the envelope, but for that pre-ringing to have any effect, it must be audible/detectable (a short sketch after this list illustrates the effect itself). Let's say hypothetically that it may not be consciously audible but is detectable, in terms of the brain's interpretation of location. If the location of the bell is not where I want it to be, I (as a mix engineer) can simply change it; if it's not where I expect it to be, I would typically investigate why. Never have I found pre-ringing to be the cause of a bell (or any other sound) not being where I expect it to be.

2. Rather ironically, the 1997 Theiss study I mentioned previously (Phantom source perception in 24bit @ 96kHz digital audio) set out to test exactly what you are suggesting. As I mentioned, there was a supplemental test on general perceived sound quality performed under less formal circumstances, and it's this test which is frequently quoted by those who have a hi-res agenda, but the main experiments were formal DBTs (double-blind tests) designed specifically to test localisation and resulted in the conclusion that: "Analyses of the data showed that the hypothesis that localization accuracy improves with higher sampling rates above the professional 48kHz standard has to be rejected". (I linked to the paper above so you can read the details for yourself.)

3. As is frequently the case, the actual reality of the behaviour of sound and the practical realities of recording it are ignored. Very rarely (and pretty much never for a commercial music release) would a single violin be recorded with a single microphone placed a few inches from the instrument. The transients and frequency content of instruments are, however, typically measured and quoted this way. What an instrument actually sounds like from such close proximity is different, often vastly different, from what is expected and what would be heard by the audience. We've got absorption and reflections to consider, which result in very significantly different transients, frequency content and dynamic range from what we would measure just a few inches away. This is with actual live acoustic sound; if in addition we factor in mic response, timing differences between mics and more than one violin, we've got transients smeared all over the place - and by all over the place I'm talking tens of milliseconds up to seconds, not the few microseconds which can be detected with a test signal! And this is assuming any transients still even exist from a listening position in the audience; in many/most cases they won't! All this applies to any instrument/sound, even a snare drum rimshot, although with a rimshot there would typically still be a transient, just a very time-smeared and different transient from the one originally created.
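To make point 1 concrete, here is a minimal Python sketch (numpy/scipy assumed; the 20 kHz cutoff and 255-tap length are arbitrary illustrative choices, not values from any of the studies discussed) showing how a symmetric, linear-phase lowpass filter places energy ahead of a click's main peak - the pre-ringing in question:

```python
# Minimal sketch of linear-phase pre-ringing (illustrative values only).
import numpy as np
from scipy import signal

fs = 44100                                 # sample rate, Hz
h = signal.firwin(255, 20000, fs=fs)       # symmetric FIR lowpass -> linear phase

x = np.zeros(1024)
x[512] = 1.0                               # idealised single-sample "click"
y = np.convolve(x, h)

peak = np.argmax(np.abs(y))                # main arrival of the filtered click
pre = y[peak - 40:peak]                    # ~0.9 ms of output BEFORE the peak
print("energy ahead of the peak:", np.sum(pre ** 2))   # non-zero: pre-ringing
```

Whether that altered leading edge is ever detectable in practice is exactly what points 1 and 2 address.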

One of the difficulties facing us as hobbyists is the available information. The research and knowledge which covers most of our hobby is typically not led by independent scientific research, it's led by industry and as such is often proprietary and not available. For example, the start of digital audio is arguably Nyquist's theory in 1924, which actually belonged to AT&T, although they allowed it to be published as a scientific paper. However, most of the testing, data and research is not published science, and even when there is independent scientific research, it's often lagging many years behind industry research and sometimes lacking crucial factors.

Then there's people like me, who actually use the results of that industry research day in and day out. For example, I studied, critically compared, then bought and was using greater than 16 bit technology every day, a good 8 years before >16 bit even became available to consumers. Another example: the K-weighted filter used in loudness normalisation was the result of a lot of rigorous testing (perceptual DBTs) by the ITU's members, such as the BBC, ORTF and many, many others, but none of that research is published anywhere as far as I'm aware, and the ITU specifications which resulted from it have since been modified, after people like me used them every day and discovered the deficiencies/loopholes.

On top of this, while some front-line companies are effectively unbiased, some industry organisations represent a membership which includes powerful manufacturers and distributors and are not in practice always entirely unbiased, the AES being an example. And finally, as you have mentioned, there is often great financial incentive to fund and publish research which demonstrates a positive result (that hi-res provides a tangible benefit, for example) but relatively little or none at all to demonstrate a negative. All of this results in a knowledge landscape which is often extremely difficult to navigate and therefore relatively easy to abuse!
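As an aside on that K-weighted filter, here is a rough single-channel sketch of what it does (numpy/scipy assumed; the biquad coefficients are the 48 kHz values published in ITU-R BS.1770, while the gating stages of the full loudness measurement are omitted):

```python
# Rough sketch of BS.1770 K-weighting + ungated loudness (48 kHz only).
import numpy as np
from scipy import signal

def k_weight(x):
    # Stage 1: high-shelf (models the acoustic effect of the head)
    b1 = [1.53512485958697, -2.69169618940638, 1.19839281085285]
    a1 = [1.0, -1.69065929318241, 0.73248077421585]
    # Stage 2: RLB high-pass (revised low-frequency B-curve)
    b2 = [1.0, -2.0, 1.0]
    a2 = [1.0, -1.99004745483398, 0.99007225036621]
    return signal.lfilter(b2, a2, signal.lfilter(b1, a1, x))

def loudness_lkfs(x):
    y = k_weight(x)
    return -0.691 + 10 * np.log10(np.mean(y ** 2))

fs = 48000
t = np.arange(fs) / fs
print(loudness_lkfs(np.sin(2 * np.pi * 997 * t)))   # ~ -3.0 LKFS, per the spec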

G
 
Nov 2, 2017 at 11:45 AM Post #2,432 of 3,525
When people say things like "Let's say for the sake of argument..." or "Assuming that something that isn't audible may be unconsciously perceived..." for me, it's like stepping over the line into fantasy land.

I believe in the inaudibility of inaudible sound.
 
Nov 2, 2017 at 12:31 PM Post #2,433 of 3,525
I basically agree with everything you've said..... but I would add a few qualifications.

First, I would point out that, to an extent, there are two different discussions going on here. One is about whether there is ANY audible difference between high-resolution and "ordinary" recordings; the other is about whether there is a SIGNIFICANT audible difference. As a mix engineer, you undoubtedly listened to what you were mixing very carefully. However, I'll bet you didn't ask a dozen people to listen to the high-res and standard version of each, on a half dozen different DACs, and a dozen different brands of speakers, and ask each to tell you exactly where in space the bell seemed to be located - and whether it seemed to occupy a distinct location or its apparent location was slightly blurred. Therefore, I would certainly support your assertion that "you've never known it to make a significant difference" - but that falls short of the scientific assertion that "no human being will be able to audibly discern a difference".

I suspect that some people are reading this as a scientific inquiry, while others are reading it as a practical discussion about what's worthwhile (I'm in that first category). In terms of your last point, I suspect you're right, and very little - if any - music would actually allow for such a distinction. However, speaking as a scientist, if a single human, using a single test tone, can reliably hear a difference, then we must concede that "there is an audible difference"..... and the fact that it is so rarely audible that it doesn't justify spending extra for one type of recording or the other is a separate discussion altogether.

Second, considering how the state of the art has changed, I'm not sure I would consider ANY study performed with 1997 vintage A/D converters and DACs to be especially definitive.
Twenty years is a long time, and the technology really has changed significantly.... and a lot of equipment from back then really was audibly inferior to much of what we have now.

I entirely agree with you that the differences are, at most, very small.... certainly much smaller than many other factors.... and so quite probably are insignificant to most people.
(I would also comment that, considering how good a well mastered CD CAN sound, it's sort of sad that so many recent ones sound so bad.)
And, no, I'm not personally convinced that I could hear the difference between a really well mastered 16/44k file and an equivalent 24/192k version of it.

I also agree that the information currently available is confusing, and seemingly often deliberately misleading......
Starting with scope images showing that different DACs deliver different outputs when presented with a totally invalid single-sample transient test signal.
I've been considering this as a scientific discussion rather than a practical one.
(I'm betting that I could in fact think up a test signal where the difference might be audible - but it might not at all be representative of real live music).
However, I would agree that the differences have, at least so far, not been shown to be "significant - in a practical sense".
(I would also agree that most of the differences people claim to hear seem likely to have been based on bias rather than on reality.)

I would also note that it can be very difficult to perform what I would consider to be "fully detailed tests" on subjects like we're discussing.
I can start by trying to compare the 24/192k version of a certain album to the 16/44k version.
However, if I treat the 24/192k version as my master, then I must perform a sample rate conversion to generate the 16/44k version - which introduces another processing step - which includes filtering. (A short sketch below makes that step explicit.)
And, if I instead obtain both versions from someone else, then I have to wonder whether they have applied exactly the same parameters to both.
And I also have to wonder if the particular DAC I've chosen happens to perform slightly differently at different sample rates - for whatever reasons.
To be honest, this makes me doubt that anyone will ever bother to perform a rigorous scientific analysis of the subject.
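For what it's worth, that conversion step is easy to show - a small scipy sketch (the test tone is just a stand-in for a real master; a real release pipeline would also dither and quantise to 16 bits):

```python
# Sketch: deriving a 16/44.1 release from a 24/192 master is itself a
# processing step - resample_poly applies exactly the kind of low-pass
# filter being discussed here.
import numpy as np
from scipy import signal

fs_hi = 192000
t = np.arange(fs_hi) / fs_hi
master = 0.5 * np.sin(2 * np.pi * 1000 * t)    # pretend 24/192 master

# 44100/192000 reduces to 147/640; this call low-pass filters then decimates
release = signal.resample_poly(master, 147, 640)
print(len(master), "->", len(release), "samples per second")
# (a real release would additionally be dithered and quantised to 16 bits)
```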

 
Nov 2, 2017 at 1:26 PM Post #2,434 of 3,525
there are two different discussions going on here. One is about whether there is ANY audible difference between high-resolution and "ordinary" recordings; the other is about whether there is a SIGNIFICANT audible difference. As a mix engineer, you undoubtedly listened to what you were mixing very carefully. However, I'll bet you didn't ask a dozen people to listen to the high-res and standard version of each, on a half dozen different DACs, and a dozen different brands of speakers.

There's a third option... There shouldn't be any audible difference on any recording.

I've supervised more mixes than I can count in some very good sound studios in Hollywood. The last step is to output the mix to 16/44.1 and for everyone involved to compare it to the original still in the board for final sign-off. If I had ever heard a difference between the two, I would have thrown up a red flag, as would the engineers and talent. The equipment in the room is always carefully calibrated to be consistent and perfect. It represents the reference standard. We never spent much time worrying about how a mix would sound on uncalibrated equipment or DACs that performed out of spec, because the range of error would be so broad there would be no point. We approved 16/44.1 on the reference system and made sure it matched everyone's intentions. And the bounce-down never sounded different at all.

I think you're operating beyond the range of reality. It's great to finesse the details, but they have to be perceivable. And the level of finessing that makes sense for a recording studio is greater than the level required to play back that recording in the home. I can see arguing for the need to keep noise floors down in a mix where you're boosting levels on multiple channels, but when I sit in my living room and listen to an album, audibly transparent is audibly transparent.
 
Nov 2, 2017 at 3:48 PM Post #2,435 of 3,525
I entirely agree with you that the differences are, at most, very small.... certainly much smaller than many other factors.... and so quite probably are insignificant to most people.

The point I was trying to make is that tests using constructed signals, Dirac pulses or single sine waves are effectively constructed to be audible; they are designed to improve the possibility of differences being audible. With real music recordings the transients are smeared all over the place, there's all sorts of masking going on, and noise from processing and/or the recording environment. If we can't hear artefacts even with clean, specifically designed test signals, we can completely forget about it with commercial audio. I know that audiophiles often scream the opposite - maybe we can hear "things" with real music that we can't with test signals - but beyond marketing and their own anecdotal evidence, there's absolutely no reliable or scientific evidence which supports that assertion; in fact the evidence completely contradicts such a belief. For example, jitter detection has been quoted down to about 20ns with test signals, but with music as the signal most test subjects were unable to discriminate jitter below 500ns, and the lowest achieved was 200ns. We get a similar picture with pretty much anything we test.
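A rough way to see why those two sets of jitter numbers can coexist: sinusoidal jitter frequency-modulates the signal, and for small jitter the sidebands sit at about 20*log10(pi*f0*tau) relative to the carrier (the standard narrowband-FM approximation; the 10 kHz tone below is an arbitrary choice):

```python
# Approximate sideband level produced by sinusoidal jitter of peak size tau.
import math

def sideband_dbc(f0_hz, tau_s):
    # narrowband FM: sideband/carrier ~ beta/2, with beta = 2*pi*f0*tau
    return 20 * math.log10(math.pi * f0_hz * tau_s)

for tau in (20e-9, 200e-9, 500e-9):        # the thresholds quoted above
    print(f"{tau * 1e9:4.0f} ns jitter on a 10 kHz tone: "
          f"{sideband_dbc(10e3, tau):6.1f} dBc")
```

Even 500ns of jitter only produces sidebands around -36dB on a 10kHz tone, and with music those artefacts sit right next to strong maskers - consistent with the much higher thresholds found with real material.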

G
 
Nov 2, 2017 at 4:33 PM Post #2,436 of 3,525
You'll get no disagreement there from me.

My point was simply that, from a scientific point of view, if I'm trying to prove whether a difference is audible or not, I am in fact going to do my best to construct a test signal where it will be audible..... because, if it is audible under ANY conditions, then I have proven the assertion that "it is audible". If I fail entirely, after making a reasonably thorough and competent attempt to prove my assertion, then we can reasonably conclude that it ISN'T audible under any test conditions we could currently devise. And, if I succeed, and it does turn out to be audible with some specialized test signal, then we can move on to determine whether it is audible under "reasonable and practical" conditions, and how much that should concern the average consumer. I also agree that it should be possible to construct a test that is more sensitive to any difference that actually exists than any sort of listening under "normal conditions" - because the whole point of a test protocol is to maximize your chances of a definite result. (And, while I've heard a few valid points, I tend to agree that most claims to the contrary are simply ways of rationalizing that they didn't get the "obvious positive result" they expected.)

My honest assessment of the current status of this argument is this..... A significant number of audiophiles are convinced that the difference is so obvious that it should be easily audible. Based on this assertion, several studies have been performed, most of which have so far failed to produce any positive results. (But, of course, a lot of people who are simply "believers" aren't going to believe any results that conflict with their beliefs anyway.) However, because all of the studies I've read about have also been deeply flawed, I do not consider their results to be conclusive. If and when a properly designed and executed test shows positive results, then we can move on to wondering about whether the results are meaningful with normal music, in normal listening conditions. And, if a properly designed and executed test FAILS to produce positive results, then obviously there will be no next stage.

Historically, however, I remember a time when many people insisted that a good quality cassette recording was "indistinguishable from the original" - which I don't think most people would claim today. (Remember "Is it real or is it Memorex?"). I also remember when MP3's were touted as "being indistinguishable from the original" - because "the psychoacoustic research has all shown that nothing audible is being omitted from them". However, the technology changes, the quality of the master content we have available continues to improve - at least sometimes, and our expectations change. (Perhaps a good quality cassette recording was able to match the quality of a master tape; but that doesn't mean it can match the quality of a good quality modern digital master.) However, based on history, I'm not convinced that "there's no possible difference with high-def content, so we shouldn't even wonder". Personally, I would very much like to see results that can reasonably be considered to be conclusive - one way or the other - from a well designed and properly run test. However, I don't think we've reached that point yet... and I don't see any real movement in that direction.

As I've mentioned before, I don't think the sellers of high-res content will ever sponsor those tests - because the value of being proven right is outweighed by the risk of being proven wrong (and even being proven right - but by a narrow margin - would probably do more harm than good to their sales). Likewise, nobody has a vested interest in proving that high-res files aren't better (because nobody makes money by convincing you NOT to bother to buy that next remaster).

 
Nov 2, 2017 at 6:49 PM Post #2,437 of 3,525
Well if you want to prove that noise or distortion or super audible frequencies are audible (or at least perceivable) that isn't hard. Just put on some good headphones with a nice tight seal, get yourself a really powerful amp, and crank the volume to the max. You may end up deaf as a post, but enough volume will make just about anything perceivable. But that isn't the point. What matters is, "Is this an issue I should be concerned about when I go out shopping for home audio equipment?" If the answer is "no" then you're done and it's time to pay attention to something that really does matter.

When you start challenging yourself to hear things that are generally inaudible, that is a great time to step back and take stock of what you're actually interested in proving. Perhaps it's an ego thing... you desperately want to have golden ears so you can tell people that you need better equipment than them because you're special. Or maybe it's an intellectual interest that's gotten out of hand and gone down the rabbit hole of calculating everything out to the full decimal value of pi. Or maybe it's that you want to justify the money you've already spent on equipment that is audibly identical to much cheaper equipment. Whatever it is, making the inaudible audible isn't helpful to anyone. It doesn't help people who read your carefully written arguments choose audio equipment more wisely. It doesn't help scientists understand perceptual thresholds any better. They already know everything human ears can do. The only people arguing that ears have special powers that are heretofore unheard of are high end audio salesmen that want to play into your ego to sell you something you don't really need.

I have great respect for people who can take science and put it into practical perspective. Ethan Winer is one of those folks, and that's why I include his videos in my sig file. He knows both sides- the equipment side and the perception side- and he helps people understand what matters and what doesn't. There aren't a lot of audio equipment reviewers that fit in that category, but I can spot them when I see them and I listen to what they have to say. I don't have much patience at all for commentators who argue endlessly "purely in theory", because that kind of discussion has no end and it has no value. It's just a bunch of words being generated for their own sake. No one should be required to listen to that kind of stuff.
 
Nov 3, 2017 at 5:42 AM Post #2,438 of 3,525
Historically, however, I remember a time when many people insisted that a good quality cassette recording was "indistinguishable from the original" - which I don't think most people would claim today

Yes but "the original" (a studio reel-to-reel) wasn't itself even linear within the known limits of human hearing and although some may have said consumer cassettes were indistinguishable, no one (beyond maybe some marketers) would have said that cassettes were linear within the limits of human hearing, just that they were maybe good enough to be "indistinguishable". To show that cassettes were distinguishable, all we would have had to do was demonstrate that the non-linearities of cassettes within the limits of human hearing were detectable. The problem we have here is different though, 16/44 is linear within (and beyond) the known limits of human hearing. So, there is nothing there to detect! Therefore, before we could even get to the question hi-res being distinguishable we would first have to demonstrate that the known limits of hearing are incorrect and only then can we get to the question of whether the content which lies outside the currently known hearing limits is actually enough to be distinguishable. To prove/demonstrate both of these questions is a tall order, with a heavy burden of proof. While I agree that pretty much all the published tests have some flaw/s, tests which agree with the known limits of hearing have a lower burden of proof than those which contradict them.

When people say things like "Let's say for the sake of argument..." or "Assuming that something that isn't audible may be unconsciously perceived..." for me, it's like stepping over the line into fantasy land.

I have to agree with KeithEmo on this one, you are confusing "inaudible" with "not consciously aware of", which are two very different things! As a creator/sound engineer much of my time is spent manipulating that which the consumer will not be "consciously aware of" but none of my time is spent on that which is inaudible.

G
 
Nov 3, 2017 at 8:32 AM Post #2,439 of 3,525
Historically, however, I remember a time when many people insisted that a good quality cassette recording was "indistinguishable from the original" - which I don't think most people would claim today. [...] Personally, I would very much like to see results that can reasonably be considered to be conclusive - one way or the other - from a well designed and properly run test. However, I don't think we've reached that point yet... and I don't see any real movement in that direction.

I don't see the history-repeating argument being applied to the right period. But if I look at every single hi-res format that has come since CD, then for sure I can hear Shirley singing:
They say the next big thing is here,
That the revolution's near,
But to me it seems quite clear
That it's all just a little bit of history repeating.
:stuck_out_tongue_winking_eye:


As for not being sure of much of anything: of course even a billion failures wouldn't definitively prove there is nothing audible, and at least from a scientific perspective, the door is never closed. But at a more realistic and practical level, when there is so little sign of finding solid evidence that music requires more to be noticeably transparent, despite many years passing by, there comes a moment where it starts to feel like all the Clinton Benghazi investigations. It started like something reasonable. The first 5 investigations could maybe have been seen as really concerned people demanding the truth. But soon it clearly felt more like a "witch pursuit thing".

We have to admit that "the dignity of truth is lost with much protesting". So maybe it's time to let it go, until the day we actually get a relevant reason to question that issue again.

meanwhile insecure people will have to wait until DXD becomes the standard for streaming music. but let's not kid ourselves, you know that someone will then suggest more and complain about how artificial and lifeless DXD sounds. we'll blow the planet up long before audiophiles admit to a digital resolution being enough for their old ears.
 
Nov 3, 2017 at 9:29 AM Post #2,440 of 3,525
My problem with your assertion is very simple......

All of the facts we have about the known limits of human hearing relate to the very specific question of: can we perceive the presence of continuous, steady-state sine waves of a given frequency?
Therefore your assertion that "there's nothing to detect" is overgeneralized - based on the actual data.
You are generalizing the results of tests conducted using steady-state sine waves to all other possible situations involving sound.

We have plenty of data to say with relative certainty that: "We know most humans cannot hear a 25 kHz continuous sine wave".
However, we do not have enough data to claim that: "Therefore, we know that humans cannot discern a 5 microsecond timing difference between two channels".

We CANNOT reasonably claim that: "There's nothing to detect".
The best we can say is that: "Based on the limits determined under other conditions, we suspect that the differences won't be audible under any of the conditions we're discussing".

On your second assertion: "To show that cassettes were distinguishable, all we would have had to do was demonstrate that the non-linearities of cassettes within the limits of human hearing were detectable." (Including my statement that we're going to do the test using content derived from reel-to-reel master tapes.)

1) If we determine that no difference is audible, it could be because we're seeing the limits of tape masters rather than of human hearing. (Perhaps cassettes can reproduce master tapes "audibly perfectly" because, even though they have serious flaws, master reel-to-reel tapes have the same flaws.... so a weakness in our test is masking the audibility of the flaws in cassettes.)

2) Here's an even worse possibility. What if both open reel tapes and cassettes have flaws that are individually inaudible, but they interact to produce audible artifacts (maybe the noise on the cassette modulates the noise from the master tape)? If that were the case, it could turn out that cassettes are "audibly perfect" for recording live music, but NOT "audibly perfect" for reproducing music sourced from master reel-to-reel tapes.

And, to carry that back to digital audio, what if our 16/44k CD can reproduce music in a way that's absolutely audibly indistinguishable from the original live performance, but it makes the background noise from the master tape sound "odd"? In that case, a test using tape-mastered samples might point out a flaw that is NOT detectable with live music. (This is not as far out as you might think. Many digital VIDEO formats, including standard DVDs, do very well at reproducing visible details, yet alter the "background noise" and the "film grain" in very obvious and easily seen ways. This would be analogous to reproducing the music perfectly, but making the tape hiss sound different.... which would have to be considered to be "a perfect reproduction of the music" but NOT a perfect reproduction of the master tape.)

Yes but "the original" (a studio reel-to-reel) wasn't itself even linear within the known limits of human hearing and although some may have said consumer cassettes were indistinguishable, no one (beyond maybe some marketers) would have said that cassettes were linear within the limits of human hearing, just that they were maybe good enough to be "indistinguishable". To show that cassettes were distinguishable, all we would have had to do was demonstrate that the non-linearities of cassettes within the limits of human hearing were detectable. The problem we have here is different though, 16/44 is linear within (and beyond) the known limits of human hearing. So, there is nothing there to detect! Therefore, before we could even get to the question hi-res being distinguishable we would first have to demonstrate that the known limits of hearing are incorrect and only then can we get to the question of whether the content which lies outside the currently known hearing limits is actually enough to be distinguishable. To prove/demonstrate both of these questions is a tall order, with a heavy burden of proof. While I agree that pretty much all the published tests have some flaw/s, tests which agree with the known limits of hearing have a lower burden of proof than those which contradict them.



I have to agree with KeithEmo on this one, you are confusing "inaudible" with "not consciously aware of", which are two very different things! As a creator/sound engineer much of my time is spent manipulating that which the consumer will not be "consciously aware of" but none of my time is spent on that which is inaudible.

G
Yes but "the original" (a studio reel-to-reel) wasn't itself even linear within the known limits of human hearing and although some may have said consumer cassettes were indistinguishable, no one (beyond maybe some marketers) would have said that cassettes were linear within the limits of human hearing, just that they were maybe good enough to be "indistinguishable". To show that cassettes were distinguishable, all we would have had to do was demonstrate that the non-linearities of cassettes within the limits of human hearing were detectable. The problem we have here is different though, 16/44 is linear within (and beyond) the known limits of human hearing. So, there is nothing there to detect! Therefore, before we could even get to the question hi-res being distinguishable we would first have to demonstrate that the known limits of hearing are incorrect and only then can we get to the question of whether the content which lies outside the currently known hearing limits is actually enough to be distinguishable. To prove/demonstrate both of these questions is a tall order, with a heavy burden of proof. While I agree that pretty much all the published tests have some flaw/s, tests which agree with the known limits of hearing have a lower burden of proof than those which contradict them.



I have to agree with KeithEmo on this one, you are confusing "inaudible" with "not consciously aware of", which are two very different things! As a creator/sound engineer much of my time is spent manipulating that which the consumer will not be "consciously aware of" but none of my time is spent on that which is inaudible.

G
 
Nov 3, 2017 at 9:34 AM Post #2,441 of 3,525
meanwhile insecure people will have to wait until DXD becomes the standard for streaming music. but let's not kid ourselves, you know that someone will then suggest more and complain about how artificial and lifeless DXD sounds. we'll blow the planet up long before audiophiles admit to a digital resolution being enough for their old ears.

Yeah. I can't wait to see the debates over whether the pre-ringing at 160 kHz ruins the soundstage or not. :jecklinsmile:
 
Nov 3, 2017 at 9:39 AM Post #2,442 of 3,525
Historically, however, I remember a time when many people insisted that a good quality cassette recording was "indistinguishable from the original" - which I don't think most people would claim today. (Remember "Is it real or is it Memorex?"). I also remember when MP3's were touted as "being indistinguishable from the original" - because "the psychoacoustic research has all shown that nothing audible is being omitted from them". [...]

About this historical claim about tape recordings: the claim of "indistinguishable from the original" was experience based. And repeatable. If running the same test today, using the exact same setup, the experience would be about the same. That is what is reasonable to expect. There is a need to point out that this was typically pushed by clerks in stores, with speakers placed in stupid locations, with horrible room acoustics, the signal wandering through multiple analogue cable hubs, and so on. The background noise was insane. Once I got my first 3-head tape deck, I was shocked to get a delayed echo of the music all the time. Once I then went to any store, listening carefully, it was always there, for any playback, on any 3-head deck available at the time. All you had to do was alter the setup, making it possible to detect the flaw.

Insane use of theory is an age-old thing. Denying clearly audible flaws in music reproduction has been around forever. There were even people claiming to be musicians who were unable to fault a 64 kbit/s MP3. Knowing how disharmonic the flaws were, I find it hard to accept that any of those people had an ear for harmonics at all.

At some point, though, we will hit a roof at which the distribution format actually surpasses the ability of human hearing. Given the prevailing understanding in physics, it is difficult to understand this need for ultra-high dynamic range, way beyond 16 bit. The same goes for sampling rate: given that same understanding, it is difficult to see the need for these ultra-high sampling rates.

When people say things like "Let's say for the sake of argument..." or "Assuming that something that isn't audible may be unconsciously perceived..." for me, it's like stepping over the line into fantasy land.

I believe in the inaudibility of inaudible sound.

For any reasonable discussion of any piece of research, knowing the assumptions, the paradigm, the methodology, the methods and the setting is critical if we are to achieve any recoverability. If this is fantasy land, then all valid research is fantasy land.

Me too,
My "Audiophile" What moment was when I first read a DAC spec, 32bit at 384k samples per second.
So I went and bought a 16 bit ladder (mainly R2R) DAC for my Redbook rips and gave up on stupid numbers and went back to enjoying my music.

Ladder DACs are strange constructions. They may easily use components with, say, 1% accurate resistors, which gives you, at best, an accuracy of 100 to 1. That is only about 7 bits of DR (log2 of 100 is roughly 6.6).
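A crude Monte Carlo makes the scale of that visible. This is a simplified binary-weighted model, not a proper R-2R network analysis, and real ladder DACs use trimmed or matched parts - it only illustrates what untrimmed 1% weights would do at the mid-scale carry:

```python
# Crude Monte Carlo: 1% weight errors in a 16-bit binary-weighted DAC model.
# NOT a real R-2R network analysis - just an order-of-magnitude illustration.
import numpy as np

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(1000):
    w = 2.0 ** np.arange(16) * (1 + rng.uniform(-0.01, 0.01, 16))
    step = w[15] - w[:15].sum()          # mid-scale step 0111...1 -> 1000...0
    worst = max(worst, abs(step - 1.0))  # the ideal step is exactly 1 LSB
print(f"worst mid-scale step error: {worst:.0f} LSB "
      f"(~{np.log2(worst):.1f} bits of linearity lost)")
```

With 1% tolerances the mid-scale step can be off by several hundred LSBs, i.e. only about 6-7 of the 16 bits are trustworthy there - in line with the rough log2(100) estimate above.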

Then there is power supply ripple. Getting ripple below the 16-bit level? At least for high-end ATX power supplies, measuring anything significantly below 1000 to 1 for ripple is unheard of. Again, not anywhere near 16 bit.

http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story4&reid=527
"In these pictures, I see about 8mV and 5mV max for the minor rails. Can't get much better than that. The 12V rail is the worst one at 20mV." (Jonny Guru)

The way I see this, we are simply not there yet. The tech most people use cannot reproduce 16bit/44.1 with any real accuracy. Maybe a higher-res format helps reduce the flaws of current gear, sure. Personally, I prefer real-world testing, not using tech at all, to establish reasonable bounds on what humans can hear. There is plenty of research on simple audio, but if someone has done credible research on complex sounds, that research is lost in the noise, at least on me.

Trying to engage in conversation about the experience people have listening to music on their gear is a real letdown, at least to me. There is no common understanding, no formalisation of any sonic traits, and no will to create one. That includes this forum. It makes it almost impossible to arrive at a common and shared understanding of any experience. It is like a sauce, not a source. A source ought to move us forward, towards greater intersubjective understanding (remember, this is within the interpretive paradigm). Sure, it is a nice sauce, tasty and all. It is still just a sauce. This sauce thing will drive most people entering this field from, say, a physics perspective nuts.

Hopefully, moving into vector sound reproduction will help establish the bounds given by human hearing, as that will hopefully enable people to at least engage in conversation that makes sense, using the language they have at hand.
 
Nov 3, 2017 at 9:50 AM Post #2,443 of 3,525
We have plenty of data to say with relative certainty that: "We know most humans cannot hear a 25 kHz continuous sine wave".
However, we do not have enough data to claim that: "Therefore, we know that humans cannot discern a 5 microsecond timing difference between two channels".

Doesn't matter, because 16/44.1 audio can do 5 µs timing differences with complete ease.
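A minimal sketch of that point (numpy assumed): apply a 5 µs inter-channel delay as a pure fractional-sample phase shift, sample at 44.1 kHz, and read the delay straight back off the cross-spectrum, even though 5 µs is only about a fifth of a sample period.

```python
# Sketch: a 5 microsecond inter-channel delay survives 44.1 kHz sampling.
import numpy as np

fs, n = 44100, 1 << 14
x = np.random.default_rng(1).standard_normal(n)   # left channel: white noise

f = np.fft.rfftfreq(n, 1 / fs)
delay = 5e-6                                      # 5 us ~ 0.22 sample periods
X = np.fft.rfft(x)
y = np.fft.irfft(X * np.exp(-2j * np.pi * f * delay))  # right = left, delayed

phase = np.angle(np.fft.rfft(y) * np.conj(X))     # cross-spectrum phase
slope = np.polyfit(2 * np.pi * f[1:-1], phase[1:-1], 1)[0]
print(f"recovered inter-channel delay: {-slope * 1e6:.2f} microseconds")
```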

We CANNOT reasonably claim that: "There's nothing to detect".

Have you detected something? I haven't, but maybe that's because I listen to CDs for the music, not to detect limitations of 16/44.1 digital audio which I know are nearly impossible to detect at best.
 
Nov 3, 2017 at 10:16 AM Post #2,444 of 3,525
I don't disagree with you at all...... but you have to differentiate between "pure science", "practical science", and just plain old "common usage".
And I would also say that many audiophiles look at that distinction differently than "regular people".
(And this distinction exists in many areas.)

However, you also need to admit to some distinctions in the other direction.

When I measure lumber, I really don't need to have measurements accurate to 1/100 of an inch.
However, I still spent an extra $15 to buy the LASER ruler that was accurate to 1/100" instead of the one that was only accurate to 1/10".
Now, why, if 1/10" is plenty accurate, would I do that?
Will it ever REALLY matter?
Will I ever NEED to measure something to 1/100"?
In fact, it almost certainly won't matter, but I still prefer to have more accuracy than I need rather than risk less, so I paid a little extra for "insurance".
Likewise, when I used to wear a digital watch, I used to pay $10 more for the one that was accurate to 30 seconds a month instead of two minutes.
(I wouldn't pay $1000 more; but I also wouldn't expend a lot of effort to convince people not to spend the extra $10.)
And, will a $100k Lexus really get me to the corner market any better than my $20k Nissan?

Well, a lot of audiophiles seem to think the same way.
They may fancy they can really hear a difference.
Or they may just like the added assurance that they don't have to wonder if there's something better out there.
Or they may in fact just be buying bragging rights.
I remember occasionally saying something like: "Your clock must be wrong, because I KNOW my watch isn't more than a half a minute off".
Well, some audiophiles derive comfort when, after hearing something odd on a recording, they can say: "It must be a bad recording because I KNOW my equipment sounds right."
And, to people who think that way, it's worth a bit extra (or a lot extra) to take the step from "audibly good enough for most people" to "audibly perfect".

To the marketing department at iTunes, being able to say: "95% of listeners think it sounds perfect" is quite good enough.
To the marketing folks at Tidal, who justify their existence, and make their living, based on the other 5%, it would not be good enough.

And, when you get up into "audiophile land" the landscape becomes even stranger....
And there is a fine line between "things that are audibly better", "things that are technically better, even though the improvement may not be audible", and things everyone is just imagining.

From the title of this thread, the current discussion seems to be about the scientific absolute.
The title includes the assertion that "24 bit audio and anything over 48k is not only worthless but actually bad".
It does NOT assert that "16/44k is plenty good enough for most people, so you're probably wasting your money to pay extra for anything better".
(I probably wouldn't argue at all with that second version. However, to me, this thread is dedicated to a far more aggressive, and to me overreaching, claim.)

If you can reasonably suggest that no living human will ever be able to hear the difference - then the discussion is over.
If you cannot - then it becomes a discussion about priorities rather than absolutes.

Would you really want a TV that reproduced colors and brightness so accurately that you ended up with a sunburn after watching The Martian?
Probably not.....
But an audiophile just might.

(And, if you ever happen to see any of my posts on threads dedicated to "whether high-res downloads are worthwhile" you'll find that I universally suggest that people read the reviews about any given re-master, and decide whether to buy it based on the actual virtues of a given offering. I would certainly not recommend buying a high-res remaster that doesn't sound better than the copy you already have. However, if the new 24/192k remaster sounds really good, I also wouldn't suggest NOT buying it BECAUSE it's 24/192k..... and I'm not going to bother to convert it to 44k after I buy it just to save a few cents worth of storage space - even if I don't hear any difference when I do. I absolutely wouldn't be willing to live without a DAC that supports 24/192k..... NOT because it specifically sounds better, but simply because every conversion changes things, and I want to be able to play any file I come across as it sits.... it's just more convenient than being locked into some limitation that makes me do more work. And, yes, if they're selling a 44k version for $25 and a 24/192k version for $30, I probably will pay the extra $5 for insurance; after all, I'll bet they mastered it at 24/192k, so the 44k version went through an extra conversion, which may not be terrible, but I doubt it's going to improve anything - and may even have been deliberately tweaked to NOT sound as good as the premium version. :gs1000smile:)

 
Nov 3, 2017 at 10:18 AM Post #2,445 of 3,525
My problem with your assertion is very simple......
[1] All of the facts we have about the known limits of human hearing relate to the very specific question of: can we perceive the presence of continuous, steady-state sine waves of a given frequency?
[1a] However, we do not have enough data to claim that: "Therefore, we know that humans cannot discern a 5 microsecond timing difference between two channels".
[1b] We CANNOT reasonably claim that: "There's nothing to detect".
2) ... And, to carry that back to digital audio, what if our 16/44k CD can reproduce music in a way that's absolutely audibly indistinguishable from the original live performance, but it makes the background noise from the master tape sound "odd"?
[2a] This is not as far out as you might think. Many digital VIDEO formats, including standard DVDs, do very well at reproducing visible details, yet alter the "background noise" and the "film grain" in very obvious and easily seen ways. This would be analogous to reproducing the music perfectly, but making the tape hiss sound different....

1. No it's not; there have been all kinds of tests done, not just with single isolated sine waves. The aforementioned Theiss study used noise, for example. Single sine waves are good for certain tests because they are more audible.
1a. Well firstly, there is no timing difference between left and right channels with 44/16, none at all, nada. So there is nothing there to hear, regardless of the limits of human hearing! Secondly, timing inaccuracies within a channel of 44/16 are about half a million times below your quoted 5 microsecond determination!
1b. No, we CANNOT reasonably claim anything other than there's nothing there to detect!

2. We know very well how the uncorrelated noise floor of 16/44 works, and the very fact that it is uncorrelated means that it cannot interact with and change the noise on the original recording. AND that uncorrelated noise, in the band where our hearing is most sensitive, is down at about -120dB (with noise-shaped dither), WHERE YOU CANNOT HEAR IT, at least not without destroying your hearing (see the sketch after point 2a). BTW, the levels at which hearing is damaged were not ascertained with single sine waves!

2a. No, that's not analogous at all! It could be analogous if we were talking about a very lossy MP3, say a 96kbps MP3, but we're not; we're talking about 44/16 uncompressed, and therefore completely inaudible and UNCORRELATED digital noise artefacts, as explained in point 2!
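As promised under point 2, a minimal numpy sketch of dithered 16-bit quantisation: with TPDF dither the error is a steady, signal-independent noise floor. (The roughly -120dB figure in the most sensitive band additionally assumes noise-shaped dither, which this sketch omits; plain TPDF sits near -96dB relative to full scale.)

```python
# Minimal sketch: TPDF-dithered 16-bit quantisation error is uncorrelated noise.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)            # -6 dBFS, 1 kHz tone

lsb = 2.0 / 2 ** 16                               # 16-bit step over +/- 1.0
rng = np.random.default_rng(2)
dither = (rng.random(fs) - rng.random(fs)) * lsb  # triangular PDF, +/- 1 LSB
q = np.round((x + dither) / lsb) * lsb            # dithered 16-bit quantiser

err = q - x
print(f"error floor: {20 * np.log10(err.std()):.1f} dB re full scale")  # ~ -96
print(f"correlation with the music: {np.corrcoef(err, x)[0, 1]:+.4f}")  # ~ 0
```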

Your arguments are becoming more and more unreasonable, because we're talking about demonstrable measurements which are either so far beyond even the most optimistic limits of human hearing that it's laughable, or even more laughable because there are no differences to detect!

G
 
