Testing audiophile claims and myths
Apr 30, 2018 at 9:52 PM Post #7,186 of 17,336
If they die, you have objective data without them saying anything. In listening tests we have to rely on the judgement of listeners about what they heard. We don't get that objective data point.

Indeed, in medicine much instrumentation is used to determine efficacy. A tumor can be monitored for size during treatment, for example. A rash can be seen to disappear, a fever to come down. We don't have any of these tools in listening tests.


When I went to my audiologist last, he played fainter and fainter tones. It got to a point where I was wondering, "did he play something?" "Did I really hear something?" You know what I did? I guessed. Darn it if the audiologist did not have a poker face, not letting me know whether I was right or wrong. :D

So even in the cases of inaudibility, we still can get unreliable answers. This is why we use statistical analysis to determine the likelihood of any conclusions we want to draw.
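
For illustration, a minimal sketch of the kind of statistics involved, assuming a simple forced-choice test where guessing gives 50% per trial (the trial counts below are hypothetical, not from any real test):

```python
# A minimal sketch, assuming a forced-choice test where chance is 50% per trial.
# The trial counts below are hypothetical, not from any real test.
from scipy.stats import binomtest

trials = 16    # hypothetical number of trials in one listening session
correct = 12   # hypothetical number of correct answers

# One-sided question: could this score plausibly come from guessing alone?
result = binomtest(correct, trials, p=0.5, alternative='greater')
print(f"{correct}/{trials} correct, p = {result.pvalue:.3f}")
```

A p-value below whatever threshold we pre-committed to (commonly 0.05) only tells us the answers were unlikely to be pure guessing; it still tells us nothing about what the listener actually heard or why.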

If you're in a quiet room and I suddenly play loud white noise in it, you'll either startle or you won't. Having sensation as the outcome of the test doesn't necessarily make the test subjective. As you say, it's those tests where the signal-to-noise ratio gets low where we need some kind of method for dealing with uncertainty due to human confounding. I guess I don't view leaning on probability to handle part of that task as suddenly making things 'subjective', but rather uncertain.
 
Apr 30, 2018 at 10:14 PM Post #7,187 of 17,336
If you're in a quiet room and I suddenly play loud white noise in it, you'll either startle or you won't. Having sensation as the outcome of the test doesn't necessarily make the test subjective. As you say, it's those tests where the signal-to-noise ratio gets low where we need some kind of method for dealing with uncertainty due to human confounding. I guess I don't view leaning on probability to handle part of that task as suddenly making things 'subjective', but rather uncertain.

If someone jumps out of their chair due to a loud noise, that's an objective manifestation of what's still a subjective perception. If they're just verbally telling you what they heard, or think they heard, you have to go by what they say. If they're not sure about what they heard, and therefore not sure about what to say, the 'uncertainty' is in their subjective perception. The fact that they objectively said some words about their perception doesn't change that perception being subjective.

If you want more objectivity about perception itself, you can do brain measurements during the testing. But then you still have to deal with the problem of trying to relate any observed brain changes to subjective music perception.

Can't do 'sound science' which deals with music perception unless you go past the ear drum and address psychological aspects of perception.
 
Apr 30, 2018 at 11:10 PM Post #7,188 of 17,336
What the heck are you guys talking about? Even plain English gets a quibble out of you! "Black is actually white!" Please put whatever you want me to read in the first few sentences. I'm not motivated to mine through that verbal deluge.
 
May 1, 2018 at 12:08 AM Post #7,189 of 17,336
Can't do 'sound science' which deals with music perception unless you go past the ear drum and address psychological aspects of perception.

So now we have to be neurologists/psychologists in order to identify high fidelity or define high fidelity?

So where exactly does the bar stop rising?
 
May 1, 2018 at 5:52 AM Post #7,190 of 17,336
Nope. Just no.

What do you mean they "either hear it or they don't?" We have no idea what they are hearing. The only input we get is what they are saying. That involves hearing, perception and giving an answer.

What the heck are you guys talking about? Even plain English gets a quibble out of you! "Black is actually white!" Please put whatever you want me to read in the first few sentences. I'm not motivated to mine through that verbal deluge.

Actually, his first few sentences should be enough to understand where he's coming from. Regardless of what they perceive, the only information you get is what they tell you. This involves a subjective choice, because what they choose to tell you might or might not match their perception.

For instance, they might just choose to give random answers at some point, because they got bored of the test. Granted, not a likely scenario, but sufficient to illustrate that what they tell you is not necessarily equivalent to objective information.
 
May 1, 2018 at 6:38 AM Post #7,191 of 17,336
What the heck are you guys talking about? Even plain English gets a quibble out of you! "Black is actually white!" Please put whatever you want me to read in the first few sentences. I'm not motivated to mine through that verbal deluge.

Yeah I'm done as well. It's going where it always goes: everything human is unknowable and thus everything matters as much as anyone wants.
 
May 1, 2018 at 6:46 AM Post #7,192 of 17,336
So now we have to be neurologists/psychologists in order to identify high fidelity or define high fidelity?

So where exactly does the bar stop rising?

Absolutely. Perception occurs in the brain/mind, not the ear. And think of how complex music (not just sound) perception is. We’re not talking about whether the ear can transduce given frequencies, it’s about what signals go down the auditory nerves AND how the brain interprets them.

When you guys talk about our perception being biased (at the brain level), it would be more accurate to say that it’s subject to inconsistent errors - and those errors will occur during a blind testing procedure itself.

We have to use whatever science pertains to the questions we’re asking. If the questions are about objective sound in the air, we don’t need to know about ears and brains. If you want to know what nerve signals the ear is transmitting, you need to know about ear anatomy and physiology. If the questions are about sound and music perception, you need to know the relevant brain science and psychology; that’s a complex process and topic which isn’t fully understood.

We can try to make it easier by asking people to describe what they hear, but that will be a limited and sometimes erroneous description of what’s perceived. Think of the person who says ‘it sounds good’ or ‘it doesn’t sound good’ but struggles to articulate in what way and why it sounds good or not good. Emotions elicited by music are also an aspect of perception (a key reason we listen to music!), yet think of how limited our ability is to get a handle on our emotions at the conscious level.
 
May 1, 2018 at 9:51 AM Post #7,193 of 17,336
I tend to notice these sorts of differences most significantly with well-recorded wire brush cymbals - which consist of a complex series of many separate short sharp taps of metal on metal - each with its own overlapping pattern of ringing. What I tend to notice is that, on some recordings, the strike of a wire brush on a cymbal sounds like a simple burst of white noise (like a steam valve hissing), while on others it seems to sound more distinctly like a series of separate small metallic taps. My THEORY is that my brain is able to make some sort of sense of the semi-random pattern of sharp taps, slightly separated in both position and time; that some recordings reproduce this sense of distribution more accurately than others; and that, on recordings that do it especially well, some DACs tend to also reproduce it better than others. I do not purport to notice this distinction on all speakers, nor with most recordings... but, with certain speakers, and certain recordings, different DACs seem to reproduce it with slight differences.

I'll even suggest that, in a typical multi-track recording, the cymbal resides on a single track; so, in the final mix, the cymbal sounds all occur at a single physical position.

On a typical multi-track recording the cymbals will NEVER reside on a single track. With the exception of the Hi-Hat, the cymbals in a drum kit are rarely spot mic'ed and even if they were, there would still be very significant spill into the other mics. So, you get the cymbal sound from nearly all the individual mics but most particularly the stereo overheads (which of course also contain all the other instruments in the kit). In practice, at the small time-scales we're talking about, a drum kit recording is ALWAYS a terrible mess!

Typically we would have: 1 (sometimes 2) kick drum mics, 2 snare drum mics (top and bottom heads) but sometimes only 1, a Hi-Hat mic, a mic for each of the toms and a stereo overhead pair. The distance between the kick mic and the overhead mics is going to be around 6ft, so the time difference is going to be on the order of 5-6ms (as sound travels just over 1ft in a ms). The smallest time difference between kit mics will be about 0.7ms, while the biggest is about 5-6ms (if we ignore the likelihood of a room mic, which will have a delay of around 20ms or so). And of course, we're not just talking about time differences between the various mics and the overheads (which all vary between about 2-5ms) but also time differences between each of the individual mics, of which there'd be a minimum of 5 but probably 8-10. So the recording is a mess to start with, and then we use compression and other processors, such as EQ and reverb (with a pre-delay of around 15ms and a decay of 1-3 secs), which mess with the transients' shapes and timing even further.

So, do you hear this terrible mess on pretty much every rock/pop recording of the last 60 years, or do you hear a generally pretty tight/punchy drum kit? If you can't hear this terrible mess, then how can you hear the relatively minor/insignificant filter ringing buried within that terrible mess?
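
To put rough numbers on that, here's a small sketch of the arrival-time arithmetic (the distances are illustrative guesses, not from any particular session):

```python
# Illustrative sketch only: how far apart (in time) the same cymbal hit
# arrives at a few typical kit mics. Distances are assumed, not measured.
SPEED_OF_SOUND_FT_PER_MS = 1.13   # ~343 m/s, i.e. just over 1 ft per ms

mic_distances_ft = {
    "overhead pair": 3.0,
    "snare (top)":   4.5,
    "kick":          6.0,
    "room mic":     25.0,
}

for mic, feet in mic_distances_ft.items():
    delay_ms = feet / SPEED_OF_SOUND_FT_PER_MS
    print(f"{mic:13s} ~{delay_ms:4.1f} ms after the stick hits")
```

The same single hit therefore arrives at the various mics milliseconds apart, so once all those tracks are summed the transient is already smeared by orders of magnitude more than any sub-millisecond filter effect.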

We're used to talking about nano, pico and even femto secs here and sure, we can discriminate a difference caused by timing errors down into the few hundred nano-secs range, but we can't actually hear those timing errors as timing errors, our ears are nowhere near that sensitive. In fact, even under ideal circumstances, we can't discriminate timing much below about 2ms; we hear it as a variation of phasing/frequency, not as separate events in time, and a drum kit recording is hardly "ideal circumstances"! The common problem in the audiophile world, and even here in the sound science forum, is that we don't consider what it is we're actually reproducing. There is a general ignorance of music itself, of its performance, of its perception, of the recording and production of music, and therefore of how measurements, the science and the tested limits of human hearing actually apply in practice (in terms of scale and context) to what it is we're trying to reproduce!

[1] However, if the cymbal itself is recorded in stereo, or using multiple microphones, the fact that the wires actually strike the cymbal in slightly different locations, spread out in both time and space, somehow allows my brain to more easily identify them as separate events... even though they are very closely spaced in time.

[2] As I said, this is a theory.... perhaps someday I or someone else will test it.

1. Again, the cymbal itself would be recorded both with a stereo (overhead) pair and with multiple mics, and it would be spread across both time and space, as would ALL the instruments in the kit and not only the cymbal/s. If you can hear it in the cymbals, why can't you hear it in the even more distinct snare drum or hi-hats? You've taken theory and/or the tested limits of discrimination, ignored the actual practicalities/realities of music recording and production and come up with your own theory which sounds perfectly plausible to you and anyone else unaware of those practicalities/realities. What you're suggesting is not absolutely impossible but A) Is very unlikely and B) Is typically very undesirable anyway! And C) There's a much more likely explanation you're ignoring. Have you ever heard of a "Sizzle Cymbal"? Sizzle cymbals are not uncommon and could easily account for what you're hearing, with no need to resort to magic or the most extreme hearing thresholds. The individual taps of the rivets can be heard quite distinctly under some practical/realistic circumstances, given speakers or HPs with an accurate or emphasised HF response, but it's often quite near the edge of audibility, so slight differences in level, small differences in sitting position (relative to the speakers) or HP placement could cause it to become inaudible.

2. It has been tested, every day for about 50 years, by thousands of music engineers all over the world!

Additionally, as amirm has correctly stated, there is nothing in real sound which is anything like an impulse used for testing. Sure, there are transient peaks which can be very slightly similar but then real sound never contains ONLY a transient, there are ALWAYS other components to the sound after the transient.

G
 
May 1, 2018 at 11:00 AM Post #7,194 of 17,336
Absolutely. Perception occurs in the brain/mind, not the ear. And think of how complex music (not just sound) perception is. We’re not talking about whether the ear can transduce given frequencies, it’s about what signals go down the auditory nerves AND how the brain interprets them.

When you guys talk about our perception being biased (at the brain level), it would be more accurate to say that it’s subject to inconsistent errors - and those errors will occur during a blind testing procedure itself.

We have to use whatever science pertains to the questions we’re asking. If the questions are about objective sound in the air, we don’t need to know about ears and brains. If you want to know what nerve signals the ear is transmitting, you need to know about ear anatomy and physiology. If the questions are about sound and music perception, you need to know the relevant brain science and psychology; that’s a complex process and topic which isn’t fully understood.

We can try to make it easier by asking people to describe what they hear, but that will be a limited and sometimes erroneous description of what’s perceived. Think of the person who says ‘it sounds good’ or ‘it doesn’t sound good’ but struggles to articulate in what way and why it sounds good or not good. Emotions elicited by music are also an aspect of perception (a key reason we listen to music!), yet think of how limited our ability is to get a handle on our emotions at the conscious level.
of course it's about whether the ear can transduce the signal. that's all it does: pressure to mechanical movement, back to vibrations, then into an electrical signal. treating it as a given that the signal gets to our brain is just wrong.
because if the signal doesn't register, or only registers with extreme stimuli, then that would be absolutely enough to conclude that people are making crap up when discussing the audibly better high res.
even with a pure tone our sensitivity goes down rapidly in the high freqs, that is a fact. just like it's a fact that with music, our brain tends to notice less, not more, than with nominal test signals. same as with anything getting more complex, the brain's performance collapses as soon as there are many things to focus on at the same time. it's logical and matches the practical experience of our daily lives.
when we test pretty much anything, we get some hearing threshold with nominal test signals, and we get lower success when playing music. but somehow, when we test pure tones most people can't notice 20khz, yet discuss music and they all start claiming to notice missing content at even higher frequencies, like analogsurvi and his 2 DSD resolutions. it's slightly suspicious.

my personal hypothesis is simple. rolling off frequencies has little to no impact (aside from the amount of aliasing it might create and other gear-related issues with ultrasonic content), simply because our ears apply their own series of low passes starting at lower frequencies than redbook or most DACs do. so it's like cutting again to remove the markings from the first cut. even if, for some people (I'd expect youngsters, not veteran audiophiles), their own low pass is not enough to remove all the "markings" from the format's brickwall low pass, it's still going to be attenuated a good deal, at the edge of their audible range.
I don't see why some people (kids) couldn't notice that difference, the same way I can notice the attenuation in the treble from gear that starts its low pass so soon that you're already down by 2db at 14khz. but is that significant? is it what makes my music real? when some of my IEMs roll off like crazy starting at 10khz, I still subjectively feel like most of the music is there. but those guys are talking about significant differences at the edge of what's audible, where nobody has much sensitivity to begin with.
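
a rough sketch of that idea, just to show the shape of it - the filter orders and corner frequencies are made-up assumptions, not measurements of any real DAC or IEM:

```python
# made-up example: a gentle, early treble roll-off vs a steep low pass
# near the edge of the redbook band (neither models any real device)
import numpy as np
from scipy import signal

fs = 44100  # redbook sample rate

gentle = signal.butter(2, 10_000, btype='low', fs=fs, output='sos')   # early, shallow roll-off
steep  = signal.butter(16, 21_000, btype='low', fs=fs, output='sos')  # near-brickwall at the band edge

def gain_db(sos, freq_hz):
    # magnitude response of the filter at a single frequency, in dB
    _, h = signal.sosfreqz(sos, worN=[freq_hz], fs=fs)
    return 20 * np.log10(np.abs(h[0]))

for name, sos in (("gentle 10khz", gentle), ("steep 21khz", steep)):
    print(f"{name}: {gain_db(sos, 14_000):6.1f} dB at 14khz, "
          f"{gain_db(sos, 20_000):6.1f} dB at 20khz")
```

the gentle filter is already down a noticeable amount in the mid-treble where our hearing is still sensitive, while the steep one only does anything right at the edge of the (adult) audible range.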

if blind tests have taught me anything, it's that most audiophiles claiming to hear something are fooling themselves. I cannot say that nobody can hear changes from high res, but I can absolutely say that most audiophiles are unable to identify the oh so obvious superiority of highres in a blind test. which brings up another question: would this even be a debate if everybody was required to first pass a blind test before running his mouth about the audible superiority of high res? if we had the numbers of people noticing something, and the numbers of people purchasing highres, everybody would have to admit that high res is mostly something for .... lol .... objectivists.

guys, we really have some rebranding to do. we demand subjective blind tests and don't care for the superior objective fidelity of highres. let's just admit it, we're the subjectivist sub section of this forum.
 
May 1, 2018 at 11:04 AM Post #7,195 of 17,336
I get that, but if you limit the scope of the forum to objective stuff (physics and technology) and ignore subjective aspects, you have a limited definition of sound science, and issues of psychoacoustics, blind testing, etc. have no relevance. From what I've seen so far, the discussions aren't by any means limited to objective aspects. ...
If the questions are about sound and music perception, you need to know the relevant brain science and psychology; that’s a complex process and topic which isn’t fully understood.

Since you started posting here you keep making the same mistake. It's been explained to you several times but after debating it once or twice, you just ignore the explanation and then carry on repeating that same mistake. Then, when accused of repeating the same mistake, you respond by repeating the same mistake again???!

Again, the job of audio reproduction equipment is to reproduce the audio signal which has been produced. This is NOT a hard concept to grasp! Questions of how we perceive sound/music are relevant and part of sound science, but they are NOT relevant to audio reproduction equipment! They're ONLY relevant to what we put in that signal in the first place (the artists and engineers) and to what happens to the signal after reproduction. To measure the performance of audio equipment all we need are objective measurements: does the output signal match the input signal within the limits of audibility? That's it, no subjective aspects involved.
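
As a sketch of that principle, a very basic null test might look like the following (the file names are hypothetical, and a real comparison also needs sample-accurate time alignment and proper level matching; this only shows the idea):

```python
# Basic null test sketch: subtract a device's output capture from the input
# signal and report the residual level. Purely illustrative.
import numpy as np
import soundfile as sf   # third-party: pip install soundfile

x, fs_in = sf.read("input.wav")     # hypothetical: the signal fed to the device
y, fs_out = sf.read("output.wav")   # hypothetical: a capture of the device's output
assert fs_in == fs_out, "sample rates must match"

n = min(len(x), len(y))
x, y = x[:n], y[:n]

# Crude level match: scale the capture to the input's RMS before differencing.
y = y * np.sqrt(np.mean(x**2) / np.mean(y**2))

residual = x - y
null_depth_db = 10 * np.log10(np.mean(residual**2) / np.mean(x**2))
print(f"Residual relative to input: {null_depth_db:.1f} dB")
```

The more negative that figure, the closer the output is to the input; whether whatever residual remains actually matters is then a question of audibility thresholds, not opinion.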

[1] Unfortunately, it is not. There are tons of instruments with overtones way past 20 kHz, which can be sensed by humans - even if we do not hear them with our ears as pure sine waves, they are perceived in other ways. Some people are more sensitive to this, some less, some not at all - but stating that response above 20k does not matter is just plain wrong.
[2] PCM itself is not NEARLY as perfect and foolproof with regards to timing as its proponents would like us to believe. ...
[3] DSD is inherently free from this defect that can and does occur at least sometimes with PCM - and may be the primary reason as to why I prefer it - BY FAR - to any PCM.
[4] I will NEVER agree on unimportance of frequencies above 20 kHz as far as recreation of space and original acoustics of the venue, where music is performed or has been recorded is concerned; NEVER - EVER !!!

1, 2 & 3. You didn't answer my question: where do you get all this utter nonsense? Do you just make it up yourself or do you use some audiophile nonsense database?

4. Fortunately, the facts do not depend on when, or even if, you ever agree with them. If you want to contradict the actual facts then no problem but you MUST back up your claims with something other than just your belief. Otherwise we have no option other than to treat your claims as pure ignorance based assumptions and complete nonsense/falsehoods and, if you keep doing it, as trolling!!

G
 
May 1, 2018 at 11:27 AM Post #7,196 of 17,336
Since you started posting here you keep making the same mistake. It's been explained to you several times but after debating it once or twice, you just ignore the explanation and then carry on repeating that same mistake. Then, when accused of repeating the same mistake, you respond by repeating the same mistake again???!

Again, the job of audio reproduction equipment is to reproduce the audio signal which has been produced. This is NOT a hard concept to grasp! Questions of how we perceive sound/music are relevant and part of sound science, but they are NOT relevant to audio reproduction equipment! They're ONLY relevant to what we put in that signal in the first place (the artists and engineers) and to what happens to the signal after reproduction. To measure the performance of audio equipment all we need are objective measurements: does the output signal match the input signal within the limits of audibility? That's it, no subjective aspects involved.



1, 2 & 3. You didn't answer my question: where do you get all this utter nonsense? Do you just make it up yourself or do you use some audiophile nonsense database?

4. Fortunately, the facts do not depend on when, or even if, you ever agree with them. If you want to contradict the actual facts then no problem but you MUST back up your claims with something other than just your belief. Otherwise we have no option other than to treat your claims as pure ignorance based assumptions and complete nonsense/falsehoods and, if you keep doing it, as trolling!!

G

+1
"we have no option other than to treat your claims as pure ignorance based assumptions and complete nonsense/falsehoods and, if you keep doing it, as trolling!!"
 
May 1, 2018 at 11:39 AM Post #7,199 of 17,336
On a typical multi-track recording the cymbals will NEVER reside on a single track. With the exception of the Hi-Hat, the cymbals in a drum kit are rarely spot mic'ed and even if they were, there would still be very significant spill into the other mics. So, you get the cymbal sound from nearly all the individual mics but most particularly the stereo overheads (which of course also contain all the other instruments in the kit). In practice, at the small time-scales we're talking about, a drum kit recording is ALWAYS a terrible mess!

Typically we would have: 1 (sometimes 2) kick drum mics, 2 snare drum mics (top and bottom heads) but sometimes only 1, a Hi-Hat mic, a mic for each of the toms and a stereo overhead pair. The distance between the kick mic and the overhead mics is going to be around 6ft, so the time difference is going to be on the order of 5-6ms (as sound travels just over 1ft in a ms). The smallest time difference between kit mics will be about 0.7ms, while the biggest is about 5-6ms (if we ignore the likelihood of a room mic, which will have a delay of around 20ms or so). And of course, we're not just talking about time differences between the various mics and the overheads (which all vary between about 2-5ms) but also time differences between each of the individual mics, of which there'd be a minimum of 5 but probably 8-10. So the recording is a mess to start with, and then we use compression and other processors, such as EQ and reverb (with a pre-delay of around 15ms and a decay of 1-3 secs), which mess with the transients' shapes and timing even further.

So, do you hear this terrible mess on pretty much every rock/pop recording of the last 60 years, or do you hear a generally pretty tight/punchy drum kit? If you can't hear this terrible mess, then how can you hear the relatively minor/insignificant filter ringing buried within that terrible mess?

We're used to talking about nano, pico and even femto secs here and sure, we can discriminate a difference caused by timing errors down into the few hundred nano-secs range, but we can't actually hear those timing errors as timing errors, our ears are nowhere near that sensitive. In fact, even under ideal circumstances, we can't discriminate timing much below about 2ms; we hear it as a variation of phasing/frequency, not as separate events in time, and a drum kit recording is hardly "ideal circumstances"! The common problem in the audiophile world, and even here in the sound science forum, is that we don't consider what it is we're actually reproducing. There is a general ignorance of music itself, of its performance, of its perception, of the recording and production of music, and therefore of how measurements, the science and the tested limits of human hearing actually apply in practice (in terms of scale and context) to what it is we're trying to reproduce!



1. Again, the cymbal itself would be recorded both with a stereo (overhead) pair and with multiple mics, and it would be spread across both time and space, as would ALL the instruments in the kit and not only the cymbal/s. If you can hear it in the cymbals, why can't you hear it in the even more distinct snare drum or hi-hats? You've taken theory and/or the tested limits of discrimination, ignored the actual practicalities/realities of music recording and production and come up with your own theory which sounds perfectly plausible to you and anyone else unaware of those practicalities/realities. What you're suggesting is not absolutely impossible but A) Is very unlikely and B) Is typically very undesirable anyway! And C) There's a much more likely explanation you're ignoring. Have you ever heard of a "Sizzle Cymbal"? Sizzle cymbals are not uncommon and could easily account for what you're hearing, with no need to resort to magic or the most extreme hearing thresholds. The individual taps of the rivets can be heard quite distinctly under some practical/realistic circumstances, given speakers or HPs with an accurate or emphasised HF response, but it's often quite near the edge of audibility, so slight differences in level, small differences in sitting position (relative to the speakers) or HP placement could cause it to become inaudible.

2. It has been tested, every day for about 50 years, by thousands of music engineers all over the world!

Additionally, as amirm has correctly stated, there is nothing in real sound which is anything like an impulse used for testing. Sure, there are transient peaks which can be very slightly similar but then real sound never contains ONLY a transient, there are ALWAYS other components to the sound after the transient.

G
Well, I could not have described the typical situation regarding the audibility of various aspects of recording and playing back music better than you just did.

For the sake of simplicity and not to burden the readers with any additional information, I will concentrate on this one thing - multimiking.

You have, quite correctly, described just what havoc it creates with timing - and what a terrible, hopeless mess it ultimately conveys to the listener - instead of the real sound as would be heard by a person attending the real music event played live.
With errors in the millisecond - up to half ( - gulp ! ) second - range, ANY difference/superiority of DSD vs PCM ( microsecond range ) would be lost - so, if you actually do record music in your studio as you have described above - then yes, DSD sounds just the same as PCM...

There ARE recording techniques that do preserve time cues, down to infinitesimally small amounts of time, intact. The simplest and most effective of them is binaural - and it WILL, mercilessly so, show the superiority of DSD over PCM.
Not to mention the effect a recording made this way - even if recorded in MP3 - will have on listeners accustomed to only the multitracking diet, once they hear anything of the sort for the first time.

Now, you can decide to stick to what everybody has been doing for the last 50 years - wrongly so, IMO - or try to grasp an idea how it could possibly be made better.

An analogy that has nothing to do ( on second thought, it DOES .... *think*... ) with sound:

Only you can decide whether you are still going to continue to "cook" scrambled eggs, no matter which of the many recipes ( due to the willing or unwilling inability to leave the eggs as intact as possible, a deliberate decision to have them always scrambled, the possibility to still use not-so-fresh-eggs-anymore, and other practical concerns, such as the economics of using existing recording equipment that paid for itself long ago ) -
or whether you will try your first shot at just - plain and simple - fried eggs; the effort and expense required to do so be damned.

With as few frills as possible being the ultimate goal.
 
May 1, 2018 at 11:50 AM Post #7,200 of 17,336
of course it's about whether the ear can transduce the signal. that's all it does: pressure to mechanical movement, back to vibrations, then into an electrical signal. treating it as a given that the signal gets to our brain is just wrong.
because if the signal doesn't register, or only registers with extreme stimuli, then that would be absolutely enough to conclude that people are making **** up when discussing the audibly better high res.
even with a pure tone our sensitivity goes down rapidly in the high freqs, that is a fact. just like it's a fact that with music, our brain tends to notice less, not more, than with nominal test signals. same as with anything getting more complex, the brain's performance collapses as soon as there are many things to focus on at the same time. it's logical and matches the practical experience of our daily lives.
when we test pretty much anything, we get some hearing threshold with nominal test signals, and we get lower success when playing music. but somehow, when we test pure tones most people can't notice 20khz, yet discuss music and they all start claiming to notice missing content at even higher frequencies, like analogsurvi and his 2 DSD resolutions. it's slightly suspicious.

my personal hypothesis is simple. rolling off frequencies has little to no impact (aside from the amount of aliasing it might create and other gear-related issues with ultrasonic content), simply because our ears apply their own series of low passes starting at lower frequencies than redbook or most DACs do. so it's like cutting again to remove the markings from the first cut. even if, for some people (I'd expect youngsters, not veteran audiophiles), their own low pass is not enough to remove all the "markings" from the format's brickwall low pass, it's still going to be attenuated a good deal, at the edge of their audible range.
I don't see why some people (kids) couldn't notice that difference, the same way I can notice the attenuation in the treble from gear that starts its low pass so soon that you're already down by 2db at 14khz. but is that significant? is it what makes my music real? when some of my IEMs roll off like crazy starting at 10khz, I still subjectively feel like most of the music is there. but those guys are talking about significant differences at the edge of what's audible, where nobody has much sensitivity to begin with.

if blind tests have taught me anything, it's that most audiophiles claiming to hear something are fooling themselves. I cannot say that nobody can hear changes from high res, but I can absolutely say that most audiophiles are unable to identify the oh so obvious superiority of highres in a blind test. which brings up another question: would this even be a debate if everybody was required to first pass a blind test before running his mouth about the audible superiority of high res? if we had the numbers of people noticing something, and the numbers of people purchasing highres, everybody would have to admit that high res is mostly something for .... lol .... objectivists.

guys, we really have some rebranding to do. we demand subjective blind tests and don't care for the superior objective fidelity of highres. let's just admit it, we're the subjectivist sub section of this forum.

Fully agreed that if there's no difference in the signal transmitted by the auditory nerves, there can be no audible difference in perception (aside from effects on perception through other senses, like LF vibration).

I don't know if high-res produces any audible difference, and have no opinion on it. If it does, I'm open to the possibility there could be other factors involved besides frequency content.

Going back to some of the recent discussion, I think it's important to make the distinction that the question we're really asking is whether listeners can have a different musical experience due to potential auditory differences between physical systems A and B during normal listening; and if so, what is the nature and extent of those differences? That's not quite the same as asking whether a listener's ear can consistently detect differences in a test protocol, nor whether the listener's brain can consistently perceive and report on such differences in the test protocol. There can be differences in perception, especially at the subconscious level, which the listener has limited conscious awareness of and limited ability to reliably report. Maybe such differences would be too subtle to matter, but for me that's an open question also. There's a ton of cognitive processing going on at the subconscious level which shapes how we experience music. Researchers have made some progress in understanding it, but the field is still in an early stage.
 
