is this really a problem with blind tests?
Jul 6, 2016 at 8:28 PM Post #76 of 126
   
Yes, from the point of view that psychoacoustics is merely one branch, a sub-set of sound science.
 
 
No, here we don't agree. Commonly today, particularly in the audiophile sector of the market, audio design is dictated by marketing departments rather than by engineers or scientists. Marketing depts are commonly unconcerned about whether there is a difference, whether or not it's potentially audible or, even if "yes" to both these questions, whether or not that potentially audible difference is actually an improvement. As far as marketing depts are concerned, all these issues can easily be overcome with standard, long-established marketing techniques (pseudo-science, doctored comparisons, testimonials, shills/incentivised reviewers, etc.); there are much bigger fish to fry! For example, many/most/all audiophile DAC (and DAC chip) manufacturers would consider supporting the 192/24 (and higher) formats to take precedence over the fact that the only potentially audible difference is actually a loss of fidelity (compared to lower sample rates/bit depths).
 
G

 
 
Regarding the question about the differences between two sources A and B in the case that the null test reveals a peak level of -100 dBFS.
 
You say that some situations don't require psychoacoustics to make a determination. But it seems to me like you are invoking psychoacoustics twice in saying that A and B are indistinguishable.
 
First of all, the null test gives us the signal A - B. But that's not what we hear. We hear A first, and then B. Who is to say that we actually perceive the signal "A - B"? 
 
I mean, if we do something like take some music A and then add a signal B which is a sine wave at -20 dBFS, then we probably would hear B as its own separate signal. But what evidence is there that we hear, in the general case, the difference between A and B as "A-B"?
 
Second, you said that a signal must be at least at the level of the background noise to be audible. Now I don't know if this is what you were saying, but doesn't this require psychoacoustics to determine? I'm thinking of how the background noise might be pink noise, while the signal is a pure sine wave at 2000 Hz (a pretty sensitive band of the ear). Whether this is audible is a matter of psychoacoustics and requires experimentation, does it not?
 
It may be true that audiophile design is dictated by marketing concerns, but ultimately I'm interested in what's really true--what differences are really audible.
 
Maybe we should get more specific. Let's talk about high bit rate/ bit depth formats. What does the evidence say? What bit rate and depth is essentially perfect? (That is, no higher rate/depth would improve fidelity.) What is the evidence for this?
 
Jul 6, 2016 at 8:28 PM Post #77 of 126
 
I can't talk for science or the guys who did the research on echoic memory. it doesn't seem logical for me to try finding a small difference inside a huge passage, from a basic logic of seeking something, but also because of how our brain seems to work (as far as we know). the brain seems to collect a few seconds of audio data and then interpret and store the result. so using long samples, wouldn't that mean basically doing plenty of successive short sample tests? I imagine it's better if the 2 samples are grouped in the same packet of information so that they're compared directly at the interpretation stage, instead of having one recalled from memory while the other one is just being interpreted.
 
I don't see an actual situation for the case you're trying to make. it may very well exist, but I don't have any idea when it comes to test audible differences.

 
Regarding the bold passage, maybe it "seems" that way to you. As a professional musician, it "seems" to me that we collect little bits of information across an entire musical phrase in order to make sense of it. But don't we need hard evidence rather than going with how it "seems"? Because if the brain cannot pull in all the significant information in a short sample, then a lot of blind tests aren't very revealing. 
 
Keep in mind it's not just what the ear is physically capable of hearing. And it's not just what the brain stores upon hearing a short signal. It's what information is available to be compared in echoic memory. Who is to say that every single bit of information is available in the neural net that does the echoic comparison? Doesn't that need evidence to determine?
 
Jul 6, 2016 at 9:27 PM Post #78 of 126
 
First of all, the null test gives us the signal A - B. But that's not what we hear. We hear A first, and then B. Who is to say that we actually perceive the signal "A - B"? 
...
 
Maybe we should get more specific. Let's talk about high bit rate/ bit depth formats. What does the evidence say? What bit rate and depth is essentially perfect? (That is, no higher rate/depth would improve fidelity.) What is the evidence for this?

 
A null test would tell us if A and B are identical. If A and B are identical, then we know that they are identical. What we perceive doesn't change that they are identical. 
 
Regarding sampling rates and bit depths, that horse has been flogged, laid out, flogged some more, rattled upside the head, then flogged again. There are a few threads near the top here that have a couple hundred pages talking about it. Parts of them are pretty interesting, they extend back a few years. Sadly you'll notice that if the discussion happened a few years ago, every single person talking any amount of sense has "Banned" next to their user name.
 
Jul 6, 2016 at 9:49 PM Post #79 of 126
   
A null test would tell us if A and B are identical. If A and B are identical, then we know that they are identical. What we perceive doesn't change that they are identical. 
 
Regarding sampling rates and bit depths, that horse has been flogged, laid out, flogged some more, rattled upside the head, then flogged again. There are a few threads near the top here that have a couple hundred pages talking about it. Parts of them are pretty interesting, they extend back a few years. Sadly you'll notice that if the discussion happened a few years ago, every single person talking any amount of sense has "Banned" next to their user name.

Regarding a null test, the page that someone linked earlier in this thread said that a null test is computing the signal A minus B... i.e. adding A to the inverse phase version of B. If that result is ZERO, then they are identical. However, if the result is not zero, then you can do analysis on the result, like computing peak dBFS, or a spectrum analysis, or whatever. Of course, any two real-world analog signals are never identical, but the page mentioned running a null test on digital effects, which could potentially give a zero result.
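To make that arithmetic concrete, here is a minimal Python/numpy sketch of the subtraction described above (my own illustration, not the tool from that page; "a.wav" and "b.wav" are placeholder names, and the files are assumed to be the same length and already time- and level-aligned):

```python
# Minimal null-test sketch: subtract B (phase-inverted addition) from A and
# report how big the residual is. Assumes WAV files with matching length,
# sample rate and channel count that are already aligned.
import numpy as np
import soundfile as sf

a, fs = sf.read("a.wav")   # placeholder file names
b, _ = sf.read("b.wav")

diff = a - b               # "A plus the inverse-phase version of B"

if not np.any(diff):
    print("Null is exactly zero: the two files are identical.")
else:
    peak_dbfs = 20 * np.log10(np.max(np.abs(diff)))
    rms_dbfs = 20 * np.log10(np.sqrt(np.mean(diff ** 2)))
    print(f"Residual peak: {peak_dbfs:.1f} dBFS, RMS: {rms_dbfs:.1f} dBFS")
    # Keep the residual as float so very quiet differences aren't quantised away.
    sf.write("difference.wav", diff, fs, subtype="FLOAT")
```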
 
The question of null test came up when I was talking about comparing two signals, A and B. It seemed like Gregorio was not addressing directly the point that we don't hear "A-B" but rather A followed by B.
 
I see one thread about bit rates. It would be nice if a thread were stickied, but I don't see any. I'll poke around. Maybe if I come up with some specific questions, people here can refer me to an older thread that maybe isn't near the top.
 
EDIT: oh just one question about the bit rates that maybe you or Gregorio can answer now. Is there a bit rate/depth that is psychoacoustically "perfect"? In other words, based on known science, can we answer that question directly? Or are there too many variables? For instance, maybe variables in playback systems affect the answer, or the application of lossy compression, or the type of music (maybe its dynamic range?) etc. etc. such that there is no one answer to that question.
 
Jul 6, 2016 at 10:29 PM Post #80 of 126
   
 
Regarding the question about the differences between two sources A and B in the case that the null test reveals a peak level of -100 dBFS.
 
You say that some situations don't require psychoacoustics to make a determination. But it seems to me like you are invoking psychoacoustics twice in saying that A and B are indistinguishable.
 
First of all, the null test gives us the signal A - B. But that's not what we hear. We hear A first, and then B. Who is to say that we actually perceive the signal "A - B"? 
 
I mean, if we do something like take some music A and then add a signal B which is a sine wave at -20 dBFS, then we probably would hear B as its own separate signal. But what evidence is there that we hear, in the general case, the difference between A and B as "A-B"?
 
Second, you said that a signal must be at least at the level of the background noise to be audible. Now I don't know if this is what you were saying, but doesn't this require psychoacoustics to determine? I'm thinking of how the background noise might be pink noise, while the signal is a pure sine wave at 2000 Hz (a pretty sensitive band of the ear). Whether this is audible is a matter of psychoacoustics and requires experimentation, does it not?
 
It may be true that audiophile design is dictated by marketing concerns, but ultimately I'm interested in what's really true--what differences are really audible.
 
Maybe we should get more specific. Let's talk about high bit rate/ bit depth formats. What does the evidence say? What bit rate and depth is essentially perfect? (That is, no higher rate/depth would improve fidelity.) What is the evidence for this?


The null tells us how similar or different something is. No one is saying we hear A-B. Prior work shows that if the null is somewhere around -60 to -70 dB you won't hear it as different. It will be too similar.
 
If you want an example you can go to the Liberty Instruments page: http://www.libinst.com/diffmaker_example_files.htm  At the bottom are some files you can listen to in their free DiffMaker software. Some files have a marching band mixed in at about -60 dB and some don't. See if you can hear it.
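If you would rather roll your own version of that demo than install DiffMaker, the mixing step is only a couple of lines. A rough Python sketch (the file names are placeholders, and it assumes both files share the same sample rate and channel count; the real demo files are the ones linked above):

```python
# Sketch of the DiffMaker-style demo: mix a second recording in at roughly
# -60 dB relative to full scale and see whether it is audible under the
# louder programme material.
import numpy as np
import soundfile as sf

choir, fs = sf.read("choir.wav")   # placeholder: the louder programme
band, _ = sf.read("band.wav")      # placeholder: the quiet "hidden" signal

n = min(len(choir), len(band))
choir, band = choir[:n], band[:n]

gain = 10 ** (-60 / 20)                     # -60 dB as a linear factor (0.001)
band = band / np.max(np.abs(band)) * gain   # normalise the band, then drop it 60 dB

mix = choir + band
mix = mix / max(1.0, float(np.max(np.abs(mix))))   # avoid clipping on write

sf.write("choir_plus_band.wav", mix, fs)    # compare against the plain choir file
```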
 
In cases of a null at -100 dB, yes, prior work, some of which would be psychoacoustics, indicates you need not experiment. Just know it has been worked out that you can't hear it. You could try some tests yourself to convince yourself. You will only be wasting time convincing yourself of something already known, but if you need convincing it isn't a waste.
 
Jul 6, 2016 at 10:41 PM Post #81 of 126
  Regarding a null test, the page that someone linked earlier in this thread said that a null test is computing the signal A minus B... i.e. adding A to the inverse phase version of B. If that result is ZERO, then they are identical. However, if the result is not zero, then you can do analysis on the result, like computing peak dBFS, or a spectrum analysis, or whatever. Of course, any two real-world analog signals are never identical, but the page mentioned running a null test on digital effects, which could potentially give a zero result.
 
The question of null test came up when I was talking about comparing two signals, A and B. It seemed like Gregorio was not addressing directly the point that we don't hear "A-B" but rather A followed by B.
 
I see one thread about bit rates. It would be nice if a thread were stickied, but I don't see any. I'll poke around. Maybe if I come up with some specific questions, people here can refer me to an older thread that maybe isn't near the top.
 
EDIT: oh just one question about the bit rates that maybe you or Gregorio can answer now. Is there a bit rate/depth that is psychoacoustically "perfect"? In other words, based on known science, can we answer that question directly? Or are there too many variables? For instance, maybe variables in playback systems affect the answer, or the application of lossy compression, or the type of music (maybe its dynamic range?) etc. etc. such that there is no one answer to that question.


James Johnston (researcher with AT&T, formerly Bell Labs), one of the better informed people about what can be heard, said that if we had a 65 kHz sample rate and used a gentler low pass filter starting from 25 kHz, we could be about as sure as sure can be that it would be audibly perfect for every human. I forget what bit depth, but 20 bit would probably do it. He said 44.1 was almost enough, but some very small number of young people with the most extended frequency range of human hearing (1% or so of young adults can hear 22 or 23 kHz at high sound levels) might in theory hear some artefacts. He said 48 kHz was close enough that it could surely never cause audible issues with humans. But if you just wanted airtight audible perfection then 65 kHz sampling and rolling off starting at 25 kHz would do it. He also pointed out that no one has good data showing the 44.1 or 48 kHz rates are in fact audibly imperfect, just that in theory they could be to a few. 65 kHz isn't a standard available rate, though, so go 88/24 or 96/24 and you should have no worries.
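To put rough numbers on that suggestion (my own back-of-the-envelope Python, not Johnston's figures):

```python
# How much room each sample rate leaves for the reconstruction filter, plus the
# textbook dynamic range of an ideally dithered n-bit channel (~6.02*n + 1.76 dB).
def transition_band(sample_rate_hz, passband_hz):
    """Bandwidth available for the anti-alias/reconstruction filter to roll off."""
    return sample_rate_hz / 2 - passband_hz

print(transition_band(65_000, 25_000))   # 7500.0 Hz: a very gentle slope will do
print(transition_band(44_100, 20_000))   # 2050.0 Hz: hence the steep filters at 44.1k

for bits in (16, 20, 24):
    print(bits, "bits:", round(6.02 * bits + 1.76, 1), "dB of dynamic range")
```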
 
Jul 6, 2016 at 10:47 PM Post #82 of 126
 
The null tells us how similar or different something is. 
 
[1] No one is saying we hear A-B. Prior work shows that if the null is somewhere around -60 to -70 dB you won't hear it as different. It will be too similar.
 
[2] If you want an example you can go to the Liberty Instruments page: http://www.libinst.com/diffmaker_example_files.htm  At the bottom are some files you can listen to in their free DiffMaker software. Some files have a marching band mixed in at about -60 dB and some don't. See if you can hear it.
 
In cases of a null at -100 dB, yes, prior work, some of which would be psychoacoustics, indicates you need not experiment. Just know it has been worked out that you can't hear it. You could try some tests yourself to convince yourself. You will only be wasting time convincing yourself of something already known, but if you need convincing it isn't a waste.

 
 
Your marching band example is at odds with my point, however. The "marching band signal" is a quiet signal mixed in with a louder choir. The question is whether a quiet signal like that is audible in the presence of the louder one.
 
But A and B, in my example, are not produced by adding a signal to an existing one. They are the output of two different devices that are fed the same input signal. Therefore, the difference of the two, A-B, is irrelevant to what the ear hears.
 
Whereas in the marching band example, let's say B is the choir alone, and A is "choir + marching band." Then A-B would be the marching band. 
 
Two different situations.
 
Gregorio implied that no psychoacoustics is necessary to answer my original point, so that's what I'm disagreeing with. But you also raise a point, which is that these are two different situations, and the psychoacoustical experiments from one don't apply to the other.
 
Jul 7, 2016 at 2:37 AM Post #83 of 126
 
 
I can't talk for science or the guys who did the research on echoic memory. it doesn't seem logical for me to try finding a small difference inside a huge passage, from a basic logic of seeking something, but also because of how our brain seems to work (as far as we know). the brain seems to collect a few seconds of audio data and then interpret and store the result. so using long samples, wouldn't that mean basically doing plenty of successive short sample tests? I imagine it's better if the 2 samples are grouped in the same packet of information so that they're compared directly at the interpretation stage, instead of having one recalled from memory while the other one is just being interpreted.
 
I don't see an actual situation for the case you're trying to make. it may very well exist, but I don't have any idea when it comes to test audible differences.

 
Regarding the bold passage, maybe it "seems" that way to you. As a professional musician, it "seems" to me that we collect little bits of information across an entire musical phrase in order to make sense of it. But don't we need hard evidence rather than going with how it "seems"? Because if the brain cannot pull in all the significant information in a short sample, then a lot of blind tests aren't very revealing. 
 
Keep in mind it's not just what the ear is physically capable of hearing. And it's not just what the brain stores upon hearing a short signal. It's what information is available to be compared in echoic memory. Who is to say that every single bit of information is available in the neural net that does the echoic comparison? Doesn't that need evidence to determine?


good luck getting hard evidence about how the brain works and how that translates subjectively. or anything that is true in all circumstances for all humans. even saying humans have 10 fingers is false for some. you're expecting too much from us or science. I'm being cautious because anything touching the human brain is still more mystery than anything else. but as gregorio said, the unknown is the human part, not the sound part. we can analyze and identify the differences from 2 sounds. if you're sure longer time helps you, you could try series of 2 blind tests yourself: one with a short sample where the most significant difference has been picked objectively, and another blind test where you would listen to the full track each time. and then check if you have more success with the longer test. I don't, but I'm no musician, no golden ear, and what I test may be the reason why short samples work better. I would never try a blind test to check if something is fatiguing, or making me sleepy, etc.  I test stuff that a simple test can evaluate for myself.
 
 
and IMO the question isn't to know if the time lapse in echoic memory is perfect(it's most likely not, because... what's perfect in this world?^_^). the question is more about how much more corrupted the information gets once we let time pass, to find the ideal compromise between getting enough data to process, and getting too much time to keep it accurate. you seem to think that we need many seconds to get something, while what I'm concerned about is the music we heard at the beginning of the listening becoming unreliable information by the time the listening ends.
it's trivial to create an example where I'm right, by making the listening extremely long. and one where you're right, by making the sample shorter than an audible frequency.
 
Jul 7, 2016 at 3:49 AM Post #84 of 126
 
good luck getting hard evidence about how the brain works and how that translates subjectively. or anything that is true in all circumstances for all humans. even saying humans have 10 fingers is false for some. you're expecting too much from us or science. I'm being cautious because anything touching the human brain is still more mystery than anything else. but as gregorio said, the unknown is the human part, not the sound part. we can analyze and identify the differences from 2 sounds. if you're sure longer time helps you, you could try series of 2 blind tests yourself: one with a short sample where the most significant difference has been picked objectively, and another blind test where you would listen to the full track each time. and then check if you have more success with the longer test. I don't, but I'm no musician, no golden ear, and what I test may be the reason why short samples work better. I would never try a blind test to check if something is fatiguing, or making me sleepy, etc.  I test stuff that a simple test can evaluate for myself.
 
 
and IMO the question isn't to know if the time lapse in echoic memory is perfect(it's most likely not, because... what's perfect in this world?^_^). the question is more about how much more corrupted the information gets once we let time pass, to find the ideal compromise between getting enough data to process, and getting too much time to keep it accurate. you seem to think that we need many seconds to get something, while what I'm concerned about is the music we heard at the beginning of the listening becoming unreliable information by the time the listening ends.
it's trivial to create an example where I'm right, by making the listening extremely long. and one where you're right, by making the sample shorter than an audible frequency.

 
Come on, I'm not asking you to prove something "for all humans," or "for all circumstances." Let's just see any evidence at all for the accuracy of echoic memory.
 
Regarding the bold portion of your post, I would agree with you that it is reasonable to suppose that information/memories in the nervous system decay or get corrupted over time, although we would of course like some data about that (say, a model of the auditory brain circuits). So that's not what I'm talking about. Let's see if we can get on the same page about this.
 
A musical phrase usually contains a large number of similar, but slightly different, details. Maybe it has 15 notes, each with an attack. So that's 15 similar attacks. Maybe the notes themselves (the sustained sound) contain lots of spikes or little transients, so that's hundreds of transients. Maybe the instrument is a piano, so each note contains an attack as well as a characteristic decay (evolution of timbre), and in that case notes in similar registers would be similar. 
 
Of course I'm a classical musician, so I speak only of what I know. It seems to me that classical music is rich with subtle details, and that means it's a stringent test of all sorts of things---the hearing of the musician or conductor, the aesthetics of the recording engineer, and the fidelity of the playback system. 
 
So in classical music (and perhaps in other genres) a "musical impression" is made from a large number of individual details and the interrelationship of those details. It's just like a painting--the overall impression is formed from shapes and colors occurring all over the canvas.
 
So making the sample longer gives the brain a chance to perceive the interrelationship of all those details.
 
EDIT: I just realized I didn't quite finish my point. We know from ordinary experience that when you are exposed to something (information) repeatedly, it's easier to remember. This is why it would be wrong to assume that listening to 10 seconds of music (given that most music repeats similar details non-stop) is just a series of 1 second "perceptual formations" each of which fades. 
 
Jul 7, 2016 at 7:22 AM Post #85 of 126
 
First of all, the null test gives us the signal A - B. But that's not what we hear. We hear A first, and then B. Who is to say that we actually perceive the signal "A - B"?

 
No one can say that. What we can say is that if in a null test A - B (phase flipped) = zero, then A = B; there is no difference, period, and therefore obviously no audible difference. However, you are also correct with your question "who is to say that we actually perceive the signal as A - B?". Actually, we can go a step further than your question because we can demonstrate that under certain circumstances, even when A and B are absolutely identical, we can clearly perceive an obvious difference! Here's a famous example. It's for this reason that I've specifically asked you in previous posts the question: "are you talking about what is audible or what is perceivable?". Your reply has been that you want the truth and want to know what is audible. The danger here and the problem with so many of the discussions here on head-fi is that many audiophiles do not know or refuse to accept the fact that audibility and perceivability are two different things. This is why the science of psychoacoustics exists in the first place! If audibility and perceivability were the same thing, we wouldn't need psychoacoustics as everything we perceive would be explained by sound science and ear physiology. 
 
 
Second, you said that a signal must be at least at the level of the background noise to be audible. Now I don't know if this is what you were saying, but doesn't this require psychoacoustics to determine? I'm thinking of how the background noise might be pink noise, while the signal is a pure sine wave at 2000 Hz (a pretty sensitive band of the ear). Whether this is audible is a matter of psychoacoustics and requires experimentation, does it not?

 
Possibly, to an extent but it depends: Certainly we can manufacture a sine wave in the critical hearing band and depending on the frequency content of the background noise we would be able to hear that sine wave well below the noise floor. This raises several points:
 
1. This is a manufactured scenario. In practice the difference between two musical signals is very unlikely to be a pure sine wave and, even if it is, it's even more unlikely to be a sine wave which just happens to fall in the critical hearing band; it's more likely to fall in a band to which we are insensitive.
 
2. Even if we take this scenario to its absolute theoretical limit, by saying that there is zero noise floor or, in the case of our example, that we are able to hear a pure sine wave which is 50dBSPL below our (50dBSPL) noise floor, we still have to have a system capable of a >100dBSPL dynamic range and actually be playing our music at greater than 100dBSPL. If we play back our music at a peak level of 100dBSPL then our -100dBFS sine wave will be peaking at 0dBSPL, IE. our speaker/s will be outputting our sine wave at no more than the nominal threshold of hearing and therefore it's obviously not audible (see the arithmetic sketched after point 4). Depending on how much the music to which we're listening has been compressed, >100dBSPL peaks are likely to be uncomfortably loud and still potentially damaging (though not instantly).
 
3. Even ignoring the noise floor of the listening environment, let's not forget that the most dynamic commercial recordings have a range of about 60dB (and most have 40dB or less), so we're talking about a difference signal which is at least 100 times lower in level than the noise floor of the recording itself!
 
4. Taking all the above factors into account cumulatively, it's inconceivable that in any real world situation a -100dB difference signal could be audible. However, while "inconceivable" is enough to safely say it's inaudible, it's not really enough in scientific terms. This is why I stated: "1. Is there actually a difference and [if so] 2. What is the difference and is it even potentially audible?". If that difference signal is actually limited to the critical hearing band, then its potential of being audible increases.
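The arithmetic behind point 2 is trivial; a quick Python sketch of it (my own illustration of the figures above, nothing more):

```python
# A residual at -100 dBFS, on a system calibrated so that 0 dBFS peaks hit a
# given SPL, ends up at (peak playback SPL - 100) dB SPL.
def residual_spl(peak_playback_dbspl, residual_dbfs):
    return peak_playback_dbspl + residual_dbfs   # residual_dbfs is negative

print(residual_spl(100, -100))   # 0 dB SPL: at the nominal threshold of hearing
print(residual_spl(110, -100))   # 10 dB SPL: still buried under a ~50 dB SPL room
```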
 
 
Let's talk about high bit rate/ bit depth formats. What does the evidence say?

 
Maybe we should deal with this in the other thread you've started.
 
G
 
Jul 7, 2016 at 3:37 PM Post #86 of 126
   
No one can say that. What we can say is that if in a null test A - B (phase flipped) = zero, then A = B; there is no difference, period, and therefore obviously no audible difference. However, you are also correct with your question "who is to say that we actually perceive the signal as A - B?". Actually, we can go a step further than your question because we can demonstrate that under certain circumstances, even when A and B are absolutely identical, we can clearly perceive an obvious difference! Here's a famous example. It's for this reason that I've specifically asked you in previous posts the question: "are you talking about what is audible or what is perceivable?". Your reply has been that you want the truth and want to know what is audible. The danger here and the problem with so many of the discussions here on head-fi is that many audiophiles do not know or refuse to accept the fact that audibility and perceivability are two different things. This is why the science of psychoacoustics exists in the first place! If audibility and perceivability were the same thing, we wouldn't need psychoacoustics as everything we perceive would be explained by sound science and ear physiology. 
 
 
 

 
Regarding a -100 dBFS signal in the presence of a 0 dBFS signal, I don't want to get hung up on the number -100. Aren't you describing the difference signal again as if it were something that actually existed? If you say "the difference signal is audible" that implies there is some physical signal that equals A-B, when no such signal exists. I don't mean to harp on this point but it seems to keep getting lost. I want to make sure we draw a distinction between the question "Can A and B be distinguished by sound?" and "Is A minus B audible?"
 
What does psychoacoustics say about it? Reading some of the links provided in this thread, I see only experiments asking the question "Can A be distinguished from A+B?" where B is usually a low-level signal. But that's a different question!
 
I understand that we can perceive changes when there are none. I think another distinction needs to be made, however. It's not merely "audibility" and "perceivability." 
 
Let's use the word "audible" for now to mean you can become consciously aware of a signal or difference between two signals in a reliable way (i.e. mirrors the reality of the signal).
 
I'll make a rough analogy. The ear is like a microphone and the lower brain is like a device that processes the signal, such as a spectrum analyzer. Consciousness (or higher brain) is like the room in which the spectrum analyzer is located, in which is visible not only the readout of the spectrum analyzer but other images as well.
 
We can ask these questions about a sound X:
 
"Can the microphone/ear pick it up?" (Does it have sufficient resolution, bandwidth, noise floor, etc.) The answer must be "yes" for X to be audible.
 
"What is happening in the lower brain?" Three things can go wrong in the lower brain: insufficient resolution, filtering, and distortion.
 
(1) Insufficient resolution would be the signal getting swamped by the noise, or fading due to memory inadequacies, that kind of thing.
 
(2) Filtering is the idea that not all the raw information gets through to consciousness. We are never aware of every bit of information, but rather experience a condensed version of reality.
 
(3) "Optical illusions" in the visual domain are an example of distortion, such as the way we see a curved line when it actually straight. I don't know much about "audio illusions" but I'm sure there are some.
 
Any of these things could potentially affect audibility. Distortion may not prevent a person from being aware that something is there, but might be a form of misdirection instead, causing a person to fail a listening test because they have wrong expectations.
 
"What is happening in consciousness?" A person is aware, during a listening test, of things besides the sound. So do they fail because they get distracted or focus on the wrong sensations?
 
Let's use the term "Reason 1" to describe inaudibility due to ear or lower brain resolution limitations, filtering, or distortion. Let's say Reason 2 is distraction or wrong focus at the conscious level.
 
I think it's important to understand that Reason 1 and Reason 2 are two totally different phenomena. 
 
When I say I'm "interested in reality" and "what is really audible" what I mean is that I'm interested in Reason 1.
 
In any area of science, a test can give a null result due to Reason 2. In medicine there is the placebo effect. A null result is deeply dissatisfying when we don't know whether Reason 1 or Reason 2 is at work. In a sense we haven't learned anything at all! 
 
What is maddening is there's no reason to believe Reasons 1 and 2 can't be investigated and understood. It's not like these are imaginary phenomena. There is no reason to believe we can't make theories about them and test these theories, in principle. Yet there is a deep limitation in practice.
 
Jul 7, 2016 at 6:05 PM Post #87 of 126
   
 
Your marching band example is at odds with my point, however. The "marching band signal" is a quiet signal mixed in with a louder choir. The question is whether a quiet signal like that is audible in the presence of the louder one.
 
But A and B, in my example, are not produced by adding a signal to an existing one. They are the output of two different devices that are fed the same input signal. Therefore, the difference of the two, A-B, is irrelevant to what the ear hears.
 
Whereas in the marching band example, let's say B is the choir alone, and A is "choir + marching band." Then A-B would be the marching band. 
 
Two different situations.
 
Gregorio implied that no psychoacoustics is necessary to answer my original point, so that's what I'm disagreeing with. But you also raise a point, which is that these are two different situations, and the psychoacoustical experiments from one don't apply to the other.


If two amps or two other devices differ ever so slightly (-60 dB or smaller), the differences are closely correlated with the signal, which means they are much more likely to be masked. If a signal at -60 dB is completely different and uncorrelated to the larger signal, masking is much less effective and much more variable, meaning the second condition should be heard more easily than the first. So if -60 dB is enough to make its presence inaudible, your comparison of more similar signals means they will be even less audible. As to why you say the difference between A and B is irrelevant to what one hears: well, you said it, but that doesn't mean it makes sense, because it doesn't. When two signals are similar enough you can't tell them apart. When they are different enough you can. A null test is just another approach to specifying how different one signal is from another. THD and noise levels are another approach.
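A crude way to hear that correlated-vs-uncorrelated distinction for yourself is sketched below (Python, my own rough illustration rather than a calibrated masking experiment; "music.wav" is a placeholder):

```python
# Correlated vs uncorrelated differences at equal RMS. The correlated error here
# is clipping distortion (only present where the music is loud, so it tends to be
# masked); the uncorrelated error is noise at the same RMS (obvious in quiet bits).
import numpy as np
import soundfile as sf

music, fs = sf.read("music.wav")      # placeholder test track
if music.ndim > 1:
    music = music[:, 0]               # keep it mono for simplicity
music = music / np.max(np.abs(music))

err_corr = np.clip(music, -0.5, 0.5) - music          # signal-shaped error

rng = np.random.default_rng(0)
noise = rng.standard_normal(len(music))
err_uncorr = noise * np.sqrt(np.mean(err_corr ** 2) / np.mean(noise ** 2))

def write_norm(name, x):
    sf.write(name, x / max(1.0, float(np.max(np.abs(x)))), fs)

write_norm("music_plus_correlated_error.wav", music + err_corr)      # tends to hide
write_norm("music_plus_uncorrelated_error.wav", music + err_uncorr)  # tends to stick out
```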
 
Jul 8, 2016 at 12:00 AM Post #88 of 126
 
If two amps or two other devices differ ever so slightly (-60 dB or smaller), the differences are closely correlated with the signal, which means they are much more likely to be masked. If a signal at -60 dB is completely different and uncorrelated to the larger signal, masking is much less effective and much more variable, meaning the second condition should be heard more easily than the first. So if -60 dB is enough to make its presence inaudible, your comparison of more similar signals means they will be even less audible. As to why you say the difference between A and B is irrelevant to what one hears: well, you said it, but that doesn't mean it makes sense, because it doesn't. When two signals are similar enough you can't tell them apart. When they are different enough you can. A null test is just another approach to specifying how different one signal is from another. THD and noise levels are another approach.

 
I didn't say "differences between A and B", I said the signal resulting from the subtraction of B from A, i.e. "A-B".
 
Think about it this way. Suppose someone tells you that a listener has a choice of two signals, A and B. They want you to analyze the signals and comment on what the listener might hear in each one, and how they might experience a difference.
 
Now suppose that the only information you are provided with is the signal "A-B". What would you be able to determine about this listener's experience? What wouldn't you be able to determine?
 
The answer to that question tells you something about how relevant A-B is.
 
Jul 8, 2016 at 4:11 AM Post #89 of 126
If you mean to ask whether it's actually physically possible to extract the difference signal ("A minus B") from signals A and B, the answer is yes in most cases, and certainly yes in the case of comparing digital files. If comparing analog / amplified output, you could also digitize the signal using a high quality line input and ADC, align the signals in time and amplitude, then invert one of them to produce again your subtracted output.
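The align-then-subtract step might look something like this in Python (a rough sketch only: it assumes mono captures at the same sample rate and ignores clock drift between the two devices, which a serious comparison would also have to correct):

```python
# Align two captures of the same programme in time and amplitude, then subtract.
import numpy as np
import soundfile as sf
from scipy.signal import correlate, correlation_lags

a, fs = sf.read("capture_a.wav")   # placeholder captures, assumed mono, same rate
b, _ = sf.read("capture_b.wav")

# Time alignment: shift b by the lag that maximises its cross-correlation with a.
lag = correlation_lags(len(a), len(b))[np.argmax(correlate(a, b))]
if lag > 0:
    b = np.concatenate([np.zeros(lag), b])   # a lags b: delay b
else:
    b = b[-lag:]                             # b lags a: trim b's start
n = min(len(a), len(b))
a, b = a[:n], b[:n]

# Amplitude alignment: least-squares gain that best maps b onto a.
g = np.dot(a, b) / np.dot(b, b)

residual = a - g * b                         # the "A minus B" difference signal
null_depth = 20 * np.log10(np.max(np.abs(residual)) / np.max(np.abs(a)))
print(f"residual peak is {null_depth:.1f} dB below A's peak")
sf.write("residual.wav", residual, fs, subtype="FLOAT")
```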

Similar ideas were put into practice even before the advent of digital audio. I believe it first involved Quad, the amplifier company, demonstrating in the 1970s that its amplifiers had achieved audible perfection by producing an inaudible error signal. A direct description of that experiment escapes me at the moment; the closest I can find is this pdf, which mentions the experiment in passing but also puts forward an analog circuit for reproducing it, along with several interesting observations on the audio industry in 1977 that give me a big case of deja vu:

http://www.keith-snook.info/wireless-world-magazine/Wireless-World-1977/Audible%20amplifier%20distortion%20is%20not%20a%20mystery.pdf

It's been demonstrated, for many of the A and B signals that people are interested in ABXing, that the difference signal, A minus B, is inaudible when played at the same volume that one would listen to A and B at (same volume in the sense of maintaining the relationship between the volume of signal A and the volume of the error signal A-B within it). Inaudible means that when you turn the difference signal on and off, the subject cannot tell that you're playing anything at all, at any point in time, even in an anechoic chamber. Many people regard such cases of A and B to be categorically impossible to ABX--if somebody shows a positive ABX result for such an A-B pair, it must be because they have done something wrong...

... and the reason for that would be that people assume that listening for something (a flaw, say, an injected 50Hz hum, similar to the case you put forward) in total silence would be inordinately easier than picking it out from within loud music. Do you think this is not the case?

It's true that in many cases the difference between A and B is not heard as A minus B; it is usually a lot less obvious. That is the premise for most of the lossy audio compression codecs we have today: what kinds of error signals would be least obviously heard, or not heard at all, in the given piece of music / audio "A" to be encoded, and how do we make use of that, given that we can encode various pieces of audio "B" that differ from "A" in various ways but take up much, much less storage space in the encoding / decoding protocol we have chosen?

Blind testing was performed extensively in the research for these modern lossy compression codecs.
 
Jul 8, 2016 at 6:57 AM Post #90 of 126
 
 
good luck getting hard evidence about how the brain works and how that translates subjectively. or anything that is true in all circumstances for all humans. even saying humans have 10 fingers is false for some. you're expecting too much from us or science. I'm being cautious because anything touching the human brain is still more mystery than anything else. but as gregorio said, the unknown is the human part, not the sound part. we can analyze and identify the differences from 2 sounds. if you're sure longer time helps you, you could try series of 2 blind tests yourself: one with a short sample where the most significant difference has been picked objectively, and another blind test where you would listen to the full track each time. and then check if you have more success with the longer test. I don't, but I'm no musician, no golden ear, and what I test may be the reason why short samples work better. I would never try a blind test to check if something is fatiguing, or making me sleepy, etc.  I test stuff that a simple test can evaluate for myself.
 
 
and IMO the question isn't to know if the time lapse in echoic memory is perfect(it's most likely not, because... what's perfect in this world?^_^). the question is more about how much more corrupted the information gets once we let time pass, to find the ideal compromise between getting enough data to process, and getting too much time to keep it accurate. you seem to think that we need many seconds to get something, while what I'm concerned about is the music we heard at the beginning of the listening becoming unreliable information by the time the listening ends.
it's trivial to create an example where I'm right, by making the listening extremely long. and one where you're right, by making the sample shorter than an audible frequency.

 
Come on, I'm not asking you to prove something "for all humans," or "for all circumstances." Let's just see any evidence at all for the accuracy of echoic memory.
 
Regarding the bold portion of your post, I would agree with you that it is reasonable to suppose that information/memories in the nervous system decay or get corrupted over time, although we would of course like some data about that (say, a model of the auditory brain circuits). So that's not what I'm talking about. Let's see if we can get on the same page about this.
 
A musical phrase usually contains a large number of similar, but slightly different, details. Maybe it has 15 notes, each with an attack. So that's 15 similar attacks. Maybe the notes themselves (the sustained sound) contain lots of spikes or little transients, so that's hundreds of transients. Maybe the instrument is a piano, so each note contains an attack as well as a characteristic decay (evolution of timbre), and in that case notes in similar registers would be similar. 
 
Of course I'm a classical musician, so I speak only of what I know. It seems to me that classical music is rich with subtle details, and that means it's a stringent test of all sorts of things---the hearing of the musician or conductor, the aesthetics of the recording engineer, and the fidelity of the playback system. 
 
So in classical music (and perhaps in other genres) a "musical impression" is made from a large number of individual details and the interrelationship of those details. It's just like a painting--the overall impression is formed from shapes and colors occurring all over the canvas.
 
So making the sample longer gives the brain a chance to perceive the interrelationship of all those details.
 
EDIT: I just realized I didn't quite finish my point. We know from ordinary experience that when you are exposed to something (information) repeatedly, it's easier to remember. This is why it would be wrong to assume that listening to 10 seconds of music (given that most music repeats similar details non-stop) is just a series of 1 second "perceptual formations" each of which fades. 

evidence of the accuracy of echoic memory, well, does it matter? it's not like long listening bypasses echoic memory, so whatever inaccuracy there is will at the minimum be carried over into the next step of memorization. there is no memory of sound that we got without the echoic memory step anyway.
evidence that more time leads to more mistakes in recalling audio information, well, there is your everyday life experience. otherwise I remember 2 papers on the subject, but finding them again is another story. I have pdfs and bookmarks about audio that are like the treasure cave of Ali Baba. I know how to open the cave, but then it's just a giant mess of all the stuff I will mostly never read again that I didn't even care to rename properly for the sake of future search ^_^.
but in any case you won't be satisfied by those, as testing short vs longer recall requires a short sample in the first place so that it doesn't exceed the echoic step.
 
again what you explain sounds to me like when I would want to analyze music in my head to get a sense of something, a perceived preference or whatever. and that is not the purpose of a blind test!!!  I'm not saying long listening can't have its use or that we should judge music based only on short samples. let's make this very clear, I'm talking about trying to discriminate 2 audio samples here! nothing else.  now let's say I'm setting up my EQ: to decide if I'll keep it, just turning it ON and OFF a few times is not a good method because my impressions will be impacted by the "louder is better" feeling. instead I'll use that EQ for some time, then turn it off and listen again without any particular timing or agenda. only then will I decide if I preferred one or the other. that's taste.
but if my question was "is this EQ sounding audibly different compared to no EQ?" then of course I would just turn the EQ ON/OFF repeatedly while playing music and see if I detect a change.
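for what it's worth, the "louder is better" trap can also be taken out of that EQ comparison by level-matching the two versions before flipping between them. a minimal Python sketch (my own; a crude bass shelf stands in for whatever EQ you actually use, and the file name is a placeholder):

```python
# Level-matched "EQ on / EQ off" files, so a loudness difference doesn't pass
# for a quality difference. The "EQ" here is just a crude bass lift made by
# blending in a low-passed copy; substitute your real EQ chain.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

x, fs = sf.read("track.wav")          # placeholder track
if x.ndim > 1:
    x = x[:, 0]

sos = butter(2, 200, btype="low", fs=fs, output="sos")
eq = x + 0.5 * sosfiltfilt(sos, x)    # roughly a +3.5 dB lift below ~200 Hz

# Match overall RMS so neither version wins just by being louder.
eq *= np.sqrt(np.mean(x ** 2) / np.mean(eq ** 2))

peak = max(float(np.max(np.abs(x))), float(np.max(np.abs(eq))), 1.0)
sf.write("track_flat.wav", x / peak, fs)
sf.write("track_eq.wav", eq / peak, fs)   # now switch between the two blind
```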
 
