Is this really a problem with blind tests?
Jul 3, 2016 at 9:58 AM Post #61 of 126
  The general question is "Do A and B sound different?" and how we go about answering that. Let's assume they are definitely different signals.

 
OK, we'll assume that for the sake of taking the argument further, even though it's not an assumption we should generally make because there are many circumstances where identical signals can be perceived as different.
 
Originally Posted by johncarm
 
What calculations do you use to reduce that continuous signal to a single number, -100 dB?

 
As I've mentioned (twice now), a null test. Assuming the signals are not identical, the result of a null test will be the difference between the signals. We can then record and measure this resultant signal as we can any other; also for the sake of argument, we'll say it peaks at -100dBFS.
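For anyone following along, here is a minimal sketch of that measurement step. It is not gregorio's actual tooling; the 1kHz test tone, the noise level and the numpy-based helper below are purely illustrative assumptions.

```python
import numpy as np

def null_peak_dbfs(a, b):
    """Peak level, in dBFS, of the difference between two sample-aligned,
    level-matched signals scaled so that digital full scale = 1.0."""
    diff = a - b                      # the "null" (difference) signal
    peak = np.max(np.abs(diff))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Illustrative stand-ins for the outputs of two devices: a 1 kHz tone and a
# copy with a tiny amount of added noise.
fs = 44100
t = np.arange(fs) / fs
a = 0.5 * np.sin(2 * np.pi * 1000 * t)
b = a + 1e-5 * np.random.randn(fs)

print(f"null result peaks at {null_peak_dbfs(a, b):.1f} dBFS")  # roughly -87 dBFS
```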
 
  You said that the differences might be audible if the difference is greater than that, depending on the part of the spectrum. That sounds like you are referring to theories about thresholds of audibility, correct?

 
No, we can just as easily turn the argument on its head and arrive at the answer that way. For example, to be able to identify a signal, the signal obviously needs to be at or above the level of any background noise. If the level of background noise in the average sitting room is, say, 50dBSPL, then the level of our difference (null result) signal needs to be at least 50dBSPL. If our difference signal is at 50dBSPL and peaks at -100dBFS, the peak (0dBFS) level of our original signal/recording would therefore be 100dB higher, at 150dBSPL. Few, if any, consumers have a system capable of at least 100dB of dynamic range and a peak output of 150dBSPL at the listening position, but even if they did, 150dBSPL would cause physical damage to some of the structures in the human ear. Models of the threshold of audibility are therefore unnecessary in this example; knowledge of sound science and human ear physiology is enough to arrive at the "truth", without recourse to any type of listening test, which at best would only provide a certain probability of arriving at the "truth"!
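Spelled out as arithmetic, using the same assumed figures as the paragraph above (a 50dBSPL room noise floor and a null result peaking at -100dBFS):

```python
# Figures assumed in the post above, not measurements.
difference_peak_dbfs = -100.0   # peak level of the null (difference) signal
room_noise_dbspl = 50.0         # background noise of an average sitting room

# To lift the difference signal up to the room noise floor, playback gain must
# place -100 dBFS at 50 dBSPL, which puts full scale (0 dBFS) 100 dB higher.
required_peak_dbspl = room_noise_dbspl + (0.0 - difference_peak_dbfs)
print(required_peak_dbspl)      # 150.0 dBSPL at the listening position
```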
 
G
 
Jul 3, 2016 at 2:38 PM Post #62 of 126
   

 
I think the confusion is related to what a null test is. 
 
@johncarm this might help: http://music.tutsplus.com/tutorials/how-to-null-test-your-gear-part-1--cms-22425
 
Jul 3, 2016 at 7:45 PM Post #63 of 126
[1] For the first question, it would be better if you familiarized yourself with the basics of digital audio: what a sound is, what a digital sound is, how an audio editor works, sample rate, quantization; then go on to more advanced topics like spectrum analysis, RMS level, etc.
 
For the second topic, maybe Wikipedia can help you.
 
Answering your questions directly would fill entire pages.
I don't know. My advice is: experiment by yourself. This knowledge doesn't come from mathematical theorems. And the forum won't answer everything.

 
I was a double major in my undergraduate years: applied mathematics and music. In my math studies, we did Fourier Analysis. I also took a class called "Projects in Music and Science" where I used a spectrum analyzer and worked with digital signal theory. I am somewhat familiar with how analog Fourier Analysis is modified in the digital signal realm. So I am pretty sure I can follow the discussion here.
 
I know RMS is "root mean square" but I'm not sure how it is computed. Let's say we have a three-second signal that results from subtracting B from A. Let's say that the question is whether the difference between A and B is audible, and you are going to use RMS level to help answer that question. Is the RMS level a single number? You are reducing three seconds of signal to a single number, right? How is that done? Do you sample this signal and average the square of the samples, then take the square root? Or are you looking for peaks?
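For what it's worth, the calculation described in that last question is essentially the standard one. Below is a minimal sketch, assuming the difference signal is already an array of samples scaled so that full scale = 1.0; the three-second 440Hz tone is only a placeholder.

```python
import numpy as np

def rms_dbfs(x):
    """Square every sample, take the mean, then the square root: a single
    number summarising the whole stretch of signal, expressed in dBFS."""
    rms = np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

def peak_dbfs(x):
    """The other common single-number summary: the largest absolute sample."""
    peak = np.max(np.abs(x))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Placeholder "difference" signal: three seconds of a 440 Hz tone at 48 kHz.
fs = 48000
t = np.arange(3 * fs) / fs
x = np.sin(2 * np.pi * 440 * t)

print(peak_dbfs(x))  # about 0.0 dBFS
print(rms_dbfs(x))   # about -3.0 dBFS (a sine's RMS sits ~3 dB below its peak)
```

Both are single numbers summarising the whole excerpt; the -100dBFS figure discussed earlier is a peak value, as is clarified later in the thread.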
 
Jul 3, 2016 at 7:56 PM Post #64 of 126
   
OK, we'll assume that for the sake of taking the argument further, even though it's not an assumption we should generally make because there are many circumstances where identical signals can be perceived as different.
 
 
[1] As I've mentioned (twice now), a null test. Assuming the signals are not identical, the result of a null test will be the difference between the signals. We can then record and measure this resultant signal as we can any other; also for the sake of argument, we'll say it peaks at -100dBFS.
 
 
[2] No, we can just as easily turn the argument on its head and arrive at the answer that way. For example, to be able to identify a signal, the signal obviously needs to be at or above the level of any background noise. If the level of background noise in the average sitting room is, say, 50dBSPL, then the level of our difference (null result) signal needs to be at least 50dBSPL. If our difference signal is at 50dBSPL and peaks at -100dBFS, the peak (0dBFS) level of our original signal/recording would therefore be 100dB higher, at 150dBSPL. Few, if any, consumers have a system capable of at least 100dB of dynamic range and a peak output of 150dBSPL at the listening position, but even if they did, 150dBSPL would cause physical damage to some of the structures in the human ear. Models of the threshold of audibility are therefore unnecessary in this example; knowledge of sound science and human ear physiology is enough to arrive at the "truth", without recourse to any type of listening test, which at best would only provide a certain probability of arriving at the "truth"!
 
G

 
[1] A null test is subtracting the signal B from A, correct? That gives you a continuous signal of some duration. You are reducing this signal to a single number. What is the calculation? Note that I was a math/music double major and even did Fourier theory, so I can probably understand your answer.
 
[2] I don't know if you understand my question, because you give the impression of trying to avoid answering it. It's really very simple. Suppose we have two devices A and B, and we want to know if there is an audible difference. I started by suggesting we would want to run listening tests. You seemed to be inclined not to run listening tests. That is a little strange on the surface: claiming to know the answer to something without running the test. However, it could possibly make sense if you could do the following two things: (1) measure the difference between A and B, (2) refer to theories of audibility. Now you seem to dislike something about that statement. You give an example above trying to avoid reference to theories of audibility. Are you saying that there is no circumstance in which you need a theory of audibility/discrimination to answer the question "Does A sound different than B?"
 
Jul 3, 2016 at 8:01 PM Post #65 of 126
   
I don't know. My advice is: experiment by yourself. This knowledge doesn't come from mathematical theorems. And the forum won't answer everything.

 
Well, there are a lot of claims by audio researchers that are based on listening tests and theories about discrimination. For example, there are claims about how much mp3 at certain bit rates degrades the sound. In fact, the mp3 format itself is designed on the basis of theories about thresholds of audibility. If these audio researchers are going to make general claims, we certainly want their methodology to be sound.
 
It appears that a lot of listening tests are based on making comparisons in echoic memory. That suggests we want to be very sure that echoic memory provides an accurate comparison, one that can work with most of the details in a rich musical signal.
 
If the only experiments trying to establish that were based on frequency and intensity discrimination, then that suggests scientists actually know very little about what musical details can be compared in echoic memory.
 
Jul 3, 2016 at 9:18 PM Post #66 of 126
  Not quite able to parse your grammar. Are you saying that you don't believe C1 equals C2?

how dare you? my engrish is yes yes good good!

 
the more I read, the more I feel like you put too much stock in the idea that what we perceive is what is. a complex piece of music is still only one amplitude at instant T for a microphone or one ear. when we mix many microphones and output the music in stereo, it's still only one amplitude per channel at instant T, whatever the actual complexity of playing the piece or of how our brain will interpret it. the actual sound is something simple that can be decomposed into many simple elements. so even if there were no benefit to short samples for the quality of our memory, we would probably still favor them to look for differences by dichotomy.
about C1 and C2: echoic memory isn't an option. all memories go there and, if given enough time, move on to long-term memory with some potential differences (we know that because people tested start making more mistakes in recalling auditory cues after a given time). so there is no actual C1-equals-C2 IMO, because for a memory to be recalled from long-term memory you need to change at least the time variable (the length of the sample, or the time between sample and question), which would mean we're not testing sound differences anymore but the effect of time. and if you keep listening to a longer passage, then you've changed the music being tested.
 
 
you seem to take music like you would a speech: how we wouldn't understand the same meaning if we cut a sentence and tested only one or two words. but that's significant only if what we're looking for is the meaning of the sentence. if the test is about finding an audible difference between two recordings of a speech, the meaning of the sentence doesn't matter. we just have to find the most effective way to detect when something changes between the two. if it's something as big as a different meaning, then for sure we could just cut out the words that are different; we would get a conclusive result with just that. maybe just a syllable that is different enough would let everybody pass a blind test.
 
 
about scientists and claims: they're the very people who know better than to make definitive claims. when you say they claim this and that, usually they don't, or what they claim is tied to statistical results and the conditional truth of their own tests. the ones making weird night-and-day claims are the rest of us, poorly reading and interpreting their work.
 
Jul 3, 2016 at 10:27 PM Post #67 of 126
 
It's not quite right to say that I'm taking the passage as having musical meaning. It's simpler than that, really.
 
Let's say we have a musical passage M_1, which consists of a set of details D_1. The details could include note attacks, which have features such as the evolution of a spectrum over time. Often there are similar attacks throughout the passage with similar features such as the evolution of the spectrum. So D_1 will have many features that are similar to each other, e.g. notes that are played in a similar way.
 
Now let's say we run it through some equipment that distorts it slightly, equipment E. That gives a similar passage M_2, which consists of details D_2. Because it is only a slight distortion, D_2 will be similar to D_1. Also, just as D_1 contains many similar details to each other, D_2 will contain many similar details to each other.
 
Now there are several ways we can compare M_1 and M_2.
 
One way would be to listen to the entire passages. Say we listen to M_1. That means we will be hearing every detail in D_1. Because some of those details are similar, we will be hearing many similar details.
 
Then we listen to M_2 with details D_2. Again we are registering many details which are slightly different from those in D_1, but different in the same way (because the equipment E distorts every detail in the same way).
 
One important thing about listening to all of M_1 and all of M_2 is that you are hearing these similar details many times. 
 
Another way to compare would be to pick a one-second excerpt of M_1 and M_2 and compare them in echoic memory. In that case, you are hearing most details only once or twice.
 
It seems that scientific acoustical studies of audibility to answer questions like "What DACs are indistinguishable from each other?" or "What mp3 bit rates are good enough?" often rely on comparing things in echoic memory.
 
Therefore, it's an important question to ask, can you hear every significant difference between D_1 and D_2 in echoic memory?
 
If the only attempts to answer that question are experiments measuring discrimination of intensity levels and frequencies, that seems like a very poor understanding of the question.
 
Jul 4, 2016 at 4:54 AM Post #68 of 126
about hearing a difference only once with a short test sample: we tend to go back and forth as many times as we want in most tests, so we can very much become familiar with a difference from a short sample and a single difference. the point being to notice any difference; finding one is a conclusive result. trying to find all the differences is another kind of test.
 
about
can you hear every significant difference between D_1 and D_2 in echoic memory?

is it a trick question? ^_^  if we notice something, then it's significant for audibility. I don't get where you're going. I thought I had it, but I don't.
 
Jul 4, 2016 at 5:46 AM Post #69 of 126
 
Using a short sample is different in many ways from using a longer sample; it's not just echoic memory. But it appears to me that psychoacoustic researchers think pretty highly of echoic memory as an accurate tool for comparison. I should say that in a short sample you are hearing only one instance of a detail, say the attack of a note, rather than many similar instances.
 
I'm not saying this is a test about "finding all the differences." It's about asking whether echoic memory is the right tool for comparing two samples. If we are going to draw conclusions from such a test, conclusions about the best mp3 algorithms or the design of DACs or amplifiers, then we don't want a tool that allows significant differences to escape notice.
 
I probably should be clearer about my language. I was using "hear a detail" to mean it "enters your ear and registers in some way on your nervous system" and "notice a difference" to mean that you consciously have the impression of a difference (a real one). In this sense you can hear many details but not necessarily notice differences, depending on the test protocol.
 
Jul 4, 2016 at 6:19 AM Post #70 of 126
  A null test is subtracting the signal B from A, correct? That gives you a continuous signal of some duration.

 
Correct.
 
  You are reducing this signal to a single number. What is the calculation?

 
No, I'm not reducing it to a single number; I stated, for argument's sake, that the peak level of the difference signal was -100dBFS.
 
  I don't know if you understand my question, because you give the impression of trying to avoid answering it.

 
I did understand your question and I answered it concisely. It would seem that either you didn't read my answer or that you didn't understand it.
 
  You seemed to be inclined not to run listening tests. That is a little strange on the surface

 
Why? I've explained more than once that listening tests (even double blind ones) are relatively unreliable evidence. Using known scientific fact/s to arrive at the answer is always preferable.
 
claiming to know the answer to something without running the test. However, it could possibly make sense if you could do the following two things: (1) measure the difference between A and B, (2) refer to theories of audibility.

 
1. We can measure the difference with a null test.
2. What has a theory of audibility got to do with anything? Commercial music recordings always peak at (or extremely close to) 0dBFS, which is 100dB higher in level than our example difference signal. Question 1: Does your entire playback equipment chain actually provide a dynamic range of at least 100dB? If the answer is "no", then any theory of audibility is completely irrelevant, because if your sound system is physically incapable of producing that signal then it's obviously inaudible; there is no signal! Question 2: Even if the answer to question 1 is "yes", can your system output a peak level roughly 100dB higher than the noise floor of your listening environment (i.e. roughly 150dBSPL, given the average home noise floor of 50dBSPL)? If the answer is "no", then the level of your difference signal is going to be lower than your background noise. Even if the answer is "yes", you've still got the problem of trying to identify a signal near the noise floor while suffering severe pain and actual physiological damage to your ears. Severe pain and actual physical damage automatically put us outside any sensible definition of "audible range", and therefore any theories of audibility are irrelevant.
 
  You give an example above trying to avoid reference to theories of audibility.

 
Of course, that goes without saying! If a question can be answered with actual provable fact/s, why would you instead choose to answer it with some relatively vague/generalised theories?
 
  Are you saying that there is no circumstance in which you need a theory of audibility/discrimination to answer the question "Does A sound different than B?"

 
No, are you not reading what I am saying? I stated in post #51: "If that difference [signal] is higher [than "below -100dB"], it might be audible, depending on how much higher and where in the frequency spectrum the differences are. In which case, we might be forced to use a blind test, because despite the fact they have potential flaws, they have far fewer flaws than the remaining alternatives (sighted tests or anecdotal evidence for example)."
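One way to express the decision rule in that post #51 quote as code; the cutoff value and the function name are illustrative, not anything specified beyond the quoted wording.

```python
def blind_test_needed(difference_peak_dbfs, cutoff_dbfs=-100.0):
    """Paraphrase of the quoted rule: a null result below the cutoff is treated
    as inaudible; anything higher might be audible (depending on how much
    higher and where in the spectrum), so a blind test may be warranted."""
    return difference_peak_dbfs > cutoff_dbfs

print(blind_test_needed(-120.0))  # False -> treated as inaudible, no test needed
print(blind_test_needed(-60.0))   # True  -> potentially audible, consider a blind test
```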
 
G
 
Jul 4, 2016 at 6:56 AM Post #71 of 126
   

 
We may have to clarify some terminology.
 
You say that "sound science" is different from "psychoacoustics" which is different from engineering, and so forth. That's okay, but from a larger point of view, I'm interested in the process of making good/accurate audio equipment, digital formats, listening rooms, etc. So it all comes together. This forum is called "Sound Science" and I once met a guy who did experiments on mp3 bit rates who called himself a "scientist."
 
A "theory of audibility/discrimination" can probably be a number of different things. It could be about what small signals are audible in the presence of a larger signal, for instance. It could be about what changes in frequency of a tone are detectable. It could be about whether two different signals can be told apart. So I don't want to muddy the water and conflate all these theories. I will do my best to be clear what I'm talking about.
 
So let's review. We have two samples, A and B and ask the question "Is the difference audible?" First of all, we agree that this is an important question to be able to answer, right? Do we also agree that scientists and engineers make decisions about the audio design on the basis of their understanding of how to answer that question?
 
Let's see if we are on the same page so far.
 
 
 

 
Jul 4, 2016 at 7:51 AM Post #72 of 126
   
I was a double major in my undergraduate years: applied mathematics and music. In my math studies, we did Fourier Analysis. I also took a class called "Projects in Music and Science" where I used a spectrum analyzer and worked with digital signal theory. I am somewhat familiar with how analog Fourier Analysis is modified in the digital signal realm. So I am pretty sure I can follow the discussion here.


 
In this case I apologize. The question about how to get the -100dB number from the subtraction seemed a bit strange, so I assumed, wrongly, that you didn't know what a waveform was.
As said by Gregorio, we should look at the value of the highest peak, or more precisely, the level, in dBFS, of the sample that has the highest absolute value.
 
It appears that a lot of listening tests are based on making comparisons in echoic memory. That suggests we want to be very sure that echoic memory provides an accurate comparison, one that can work with most of the details in a rich musical signal.
 
If the only experiments trying to establish that were based on frequency and intensity discrimination, then that suggests scientists actually know very little about what musical details can be compared in echoic memory.

I agree that blind tests with samples chosen in advance by someone other than the listener are not the most sensitive.
 
Blind listening tests in high fidelity differ from usual scientific blind tests because we are testing extremely improbable hypotheses.
 
Usually, in scientific research, we look for average thresholds and aim for a statistical risk of error below 5% in the case of a positive result.
 
First, since the hypotheses tested are improbable, we can't be happy with a 5% probability of error. Take, for example, the effect of mains plugs on the sound. If someone scores 5/5 in an ABX test, what is the most probable explanation?
a) The listener just guessed right five times in a row
b) The type of plug has an effect on the sound that is not measurable and unexplained, but audible
Most of us would choose a), and the test was completely useless.
 
The more unusual the claim under test, the more robust the statistics must be. All the more so because these tests have already been performed many times (80 times for high-definition audio, according to the recent meta-analysis published by the AES). The fact that we are retesting the same thing over and over strongly weakens the evidential value of any single statistical result.
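To put numbers on the 5/5 example: under the guessing hypothesis each trial is a fair coin flip, so 5/5 occurs about 3% of the time by luck alone, and once the same improbable claim has been retested dozens of times, a lucky run somewhere becomes almost inevitable. A rough sketch; the trial counts below are examples, not figures from any particular study.

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more correct answers in n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(5, 5))    # 0.03125 -> 5/5 happens ~3% of the time by luck
print(p_at_least(13, 16))  # ~0.0106 -> a more demanding pass criterion

# If the same (false) claim were retested in 80 independent short tests, the
# chance that at least one of them hits 5/5 by luck alone is large:
p_single = p_at_least(5, 5)
print(1 - (1 - p_single) ** 80)  # ~0.92
```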
 
Second, we are not looking for the average threshold of audibility; we are looking at whether at least one person in the world can hear it under the most favorable conditions. And this calls for a completely different experimental setup. This is where you are right in questioning the arbitrary use of short samples.
For this kind of test, we must start from an observation, which means that we must first have an audible difference under normal listening conditions. This point is quite difficult because, usually, people who can hear a difference don't like to perform blind tests. The difference seems so obvious to them that blind listening looks absurd.
Then, if we can set up a blind test, it is the one who can hear the difference who must decide on the setup. Only he or she can tell whether a short sample or a long sample is needed.
 
That's where I think the study of echoic memory vs. long-term memory is completely irrelevant. The right choice of samples should not be decided by theories of human audition, but by the one who can hear the difference. I mean, for claims that would be completely unexplained if they were true.
For claims that have already been understood, theory can tell us which sample is good: sharp transients and high frequencies for mp3, extremely high dynamics for 24-bit recordings. But for cables or standard DACs, since theory tells us nothing, only the listener can tell which sample will be best.
 
Jul 4, 2016 at 8:19 AM Post #73 of 126
  You say that "sound science" is different from "psychoacoustics"

 
Yes, from the point of view that psychoacoustics is merely one branch, a sub-set of sound science.
 
Originally Posted by johncarm
 
We have two samples, A and B and ask the question "Is the difference audible?" First of all, we agree that this is an important question to be able to answer, right?

 
Yes, although the point I'm making is that to arrive at the question "is the difference audible?" we first need to ask: 1. Is there actually a difference? and 2. What is the difference and is it even potentially audible? If the answer to either of these questions is "no" then we need go no further, because the answer to our original question ("is the difference audible?") is automatically "no". This is important because ...
 
Originally Posted by johncarm
 
Do we also agree that scientists and engineers make decisions about the audio design on the basis of their understanding of how to answer that question?


 
No, here we don't agree. Commonly today, particularly in the audiophile sector of the market, audio design is dictated by marketing departments rather than by engineers or scientists. Marketing depts are commonly unconcerned with whether there is a difference, whether or not it's potentially audible or, even if "yes" to both these questions, whether or not that potentially audible difference is actually an improvement. As far as marketing depts are concerned, all these issues can easily be overcome with standard, long-established marketing techniques (pseudo-science, doctored comparisons, testimonials, shills/incentivised reviewers, etc.); there are much bigger fish to fry! For example, many/most/all audiophile DAC (and DAC chip) manufacturers would consider supporting the 192/24 (and higher) format/s to take precedence over the fact that the only potentially audible difference is actually a loss of fidelity (compared to lower sample rates/bit depths).
 
G
 
Jul 4, 2016 at 10:33 AM Post #74 of 126
Using a short sample is different in many ways from using a longer sample; it's not just echoic memory. But it appears to me that psychoacoustic researchers think pretty highly of echoic memory as an accurate tool for comparison. I should say that in a short sample you are hearing only one instance of a detail, say the attack of a note, rather than many similar instances.
 
I'm not saying this is a test about "finding all the differences." It's about asking whether echoic memory is the right tool for comparing two samples. If we are going to draw conclusions from such a test, conclusions about the best mp3 algorithms or the design of DACs or amplifiers, then we don't want a tool that allows significant differences to escape notice.
 
I probably should be clearer about my language. I was using "hear a detail" to mean it "enters your ear and registers in some way on your nervous system" and "notice a difference" to mean that you consciously have the impression of a difference (a real one). In this sense you can hear many details but not necessarily notice differences, depending on the test protocol.


I can't talk for science or the guys who did the research on echoic memory. it doesn't seem logical to me to try to find a small difference inside a huge passage, both from the basic logic of searching for something and because of how our brain seems to work (as far as we know). the brain seems to collect a few seconds of audio data and then interpret and store the result. so wouldn't using long samples basically mean doing plenty of successive short-sample tests? I imagine it's better if the two samples are grouped in the same packet of information so that they're confronted directly at the interpretation stage, instead of having one recalled from memory while the other one is just being interpreted.
 
I don't see an actual situation for the case you're trying to make. it may very well exist, but I don't have any idea of one when it comes to testing audible differences.
 
Jul 4, 2016 at 3:40 PM Post #75 of 126
there are a few published reports - but as far as I know not replicated, not generally accepted
 
Kunchur and Oohashi both claim it required a minute or so for their subjects to respond to some "conventionally ultrasonic" content
 
https://en.wikipedia.org/wiki/Hypersonic_effect
 
there are plenty of questions about both researchers' methods; one used music, the other, test signals
 
