is this really a problem with blind tests?
Jun 27, 2016 at 1:15 PM Post #16 of 126
But it was your fault and nobody else's that your original thread title and first few posts were total bollocks; I can't imagine how they could have been any further from what you claimed you wanted to discuss. Bluntly, you were being disingenuous in subsequently claiming you weren't criticising blind testing; it was only the "heat" you got that made you backtrack, so spare us the guff. Your silence on that thread since speaks volumes, and there are questions on that thread you haven't answered. I guess the magic disappeared for you once the thread ceased to be a blind-test kicking contest.

And nobody was interested in a discussion involving more "could it be's" than several seasons of Ancient Aliens. I wonder why?

 
According to my dictionary the definition of criticize is "indicate the faults of (someone or something) in a disapproving way." I was not disapproving of blind tests as a whole. From my perspective, you are just oversensitive to the flaws of blind testing being talked about. Like how reginalb below would have needed it to be a flaw "inherent" in blind testing in order for me to be able to use that phrase... so a flaw that is present in all blind tests (or discussions about blind tests) that have ever been conducted or discussed. That is an absurdly high standard that forces people to walk on eggshells. But hey, if you want Sound Science to stay a cloistered echo chamber, that's your choice.  

 I laid out what I would have liked to hear in one of the final posts, and no one had anything to contribute on those subjects. In my opinion, the reason that no one was interested in discussing "could it be's" was because the people here are probably technicians by training more than scientists, and not because there is a generally agreed upon (outside of this forum) sacred standard of evidence that qualifies or disqualifies a discussion as being worthy of the title "science".
 
 
Quote:
 
 
It does get tiresome. You still haven't figured out why people were annoyed, just FYI. For example, I would quote something you said and respond to it directly, line by line, and you would reply that I wasn't even reading what you were saying. You can't say X=Y and then get mad at people for saying, "No it doesn't," with the claim that it's not about Y and X. 
 
But again, if you're trying to describe a flaw of blind testing it has to be a flaw inherent to blind testing. If you are criticizing the design of a particular blind test, that's fine, but you aren't criticizing blind testing, just a flawed implementation of it.

To your first point, that is your opinion. From my point of view, you still haven't figured out how to read my posts and are still confusing territoriality with scientific scruples.
 
Jun 27, 2016 at 1:31 PM Post #17 of 126
   
According to my dictionary the definition of criticize is "indicate the faults of (someone or something) in a disapproving way." I was not disapproving of blind tests as a whole. From my perspective, you are just oversensitive to the flaws of blind testing being talked about. Like how reginalb below would have needed it to be a flaw "inherent" in blind testing in order for me to be able to use that phrase... so a flaw that is present in all blind tests that have ever been conducted or discussed. That is an absurdly high standard that forces people to walk on eggshells. But hey, if you want Sound Science to stay a cloistered echo chamber, that's your choice.  

 I laid out what I would have liked to hear in one of the final posts, and no one had anything to contribute on those subjects. In my opinion, the reason that no one was interested in discussing "could it be's" was because the people here are probably technicians by training more than scientists, and not because there is a generally agreed upon (outside of this forum) sacred standard of evidence that qualifies or disqualifies a discussion as being worthy of the title "science".
 
 
Quote:
To your first point, that is your opinion. From my point of view, you still haven't figured out how to read my posts and are still confusing territoriality with scientific scruples.

 
Are you just a really good troll? If so, bravo. I mean, I've almost been baited into a potential ban by you numerous times now.
 
Jun 27, 2016 at 2:13 PM Post #18 of 126
   
You can move back and forth in time
 

 
What does that mean? You are controlling the source, A, B, or X, right? How do you control the time in these tests?
 
 
It's entirely possible that you think you're recalling it a lot better than you are. 
 

 
Perceiving sound is how musicians and instrument makers navigate toward a polished result.
 
If someone said, "I used a device that measured changes in my position to navigate from LA to New York City," and they really got to New York City, that would be pretty strong evidence that the device was functioning well.
 
 
 
 
But the real answer is that years and lots of studies are what got to the answer you're looking for. The ability to switch quickly has been the only method by which humans are able to get past (or fail to get past) their hubris about the ability of their ears and actually identify differences between sources. 

 
But how were these studies designed? Who were the listeners and what were the signals?
 
Which is a limitation of human hearing. Masking means that humans often can't hear quiet sounds immediately before much louder ones, depending on how much quieter the first signal is relative to the second and how close together they are in time. 
 

 
The point is that in music, small details contribute to the musical effect. They are integral to it. They are not the "font" in a "book."
 
Jun 27, 2016 at 3:27 PM Post #19 of 126
There is no problem with blind tests at all,
 
the problem starts immediately when people outside of the original tests start interpretation beyond the limits of the test.
A test is conducted in a specific setting, in specific conditions (equipment, music examples), with specific participants and what not.
 
The results of this test are LIMITED to these specific conditions and can NOT be extrapolated to other equipment, other music or other participants.
Simple as that. If only 5 people out of 50 in a test were able to detect a difference in SQ, that doesn't mean that 10% of the general public will be able to pick up the same difference with different music. Generalization isn't a good thing in general, but when it comes to blind test results it is particularly wrong.
 
So people are mostly getting completely riled up over differences in interpretation that are not valid in the first place.
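As an aside, even before changing the music or the listeners, a 5-out-of-50 result carries real sampling uncertainty on its own. A minimal sketch of a 95% Wilson confidence interval for that proportion:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion (z = 1.96 for ~95%)."""
    p = successes / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

low, high = wilson_interval(5, 50)
print(f"5/50 detected a difference: 95% CI roughly {low:.0%} to {high:.0%}")
```

That works out to roughly 4% to 21%, so even under the exact original test conditions the "10%" figure is a loose estimate; extrapolating it to different equipment, music, or listeners stretches it further still.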
 
 
Jun 27, 2016 at 3:34 PM Post #20 of 126
  There is no problem with blind tests at all,
 
the problem starts immediately when people outside of the original tests start interpretation beyond the limits of the test.
A test is conducted in a specific setting, in specific conditions (equipment, music examples), with specific participants and what not.
 
The results of this test are LIMITED to these specific conditions and can NOT be extrapolated to other equipment, other music or other participants.
Simple as that. If only 5 people out of 50 in a test were able to detect a difference in SQ, that doesn't mean that 10% of the general public will be able to pick up the same difference with different music. Generalization isn't a good thing in general, but when it comes to blind test results it is particularly wrong.
 
So people are mostly getting completely riled up over differences in interpretation that are not valid in the first place.
 

 
Well there is a "body of knowledge" in sound science. For example there are theories about the limits of human hearing, or the limits of audio memory. I don't know how these theories were arrived at, but it makes sense that blind tests are involved. And at least some members of the sound science forum are making claims that apply to most or all people, such as the idea that audiophile cables are snake oil.
 
Jun 27, 2016 at 3:57 PM Post #21 of 126
   
Well there is a "body of knowledge" in sound science. For example there are theories about the limits of human hearing, or the limits of audio memory. I don't know how these theories were arrived at, but it makes sense that blind tests are involved. And at least some members of the sound science forum are making claims that apply to most or all people, such as the idea that audiophile cables are snake oil.

 
Well, that is based on more than just the body of knowledge in sound science. (EDIT: Removed a line, sorry johncarm, thought I was replying to someone else)
 
That audiophile cables are snake oil comes largely from physics and an understanding of electronics in general. Especially with digital cables, many claims made by both manufacturers and "enthusiasts" are literally impossible based on physics. I mean, the bit either gets there or it doesn't; if the receiving end reconstitutes the signal, it does so perfectly. As someone who went from managing communications in the Army (signal), to software development, and now product management in software, I can tell you with certainty that I have read claims about digital cables that are simply impossible. Note that none of my experience is related to audio, but it definitely applies. 
 
You don't need to read a single AES study, or anything related to audio, to understand why "[This unnamed] USB [cable] transfers digital audio from a computer to a USB DAC. Standard USB cables are not up to the task of large bit rate information so [company name redacted] has optimized this cable for audio transmision of large High Res files...." is patently absurd.
 
DXD is 8.4672 Mbit/s per channel. Sure, there are a couple of USB 2.0 cables that can't handle that, but they're broken and not to spec. Any to-spec USB 2.0 cable can handle that with a yawn. USB 1.x can handle that. These data rates are trivial for USB. So that marketing statement, lifted directly from the website of a large Head-Fi-oriented online store, is just a flat lie.
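For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch. It assumes 24-bit/352.8 kHz PCM and compares against the nominal 480 Mbit/s USB 2.0 signaling rate; real-world throughput is lower because of protocol overhead, but the margin is still enormous.

```python
# Rough sanity check: DXD PCM payload rate vs. the nominal USB 2.0 signaling rate.
SAMPLE_RATE_HZ = 352_800   # DXD sample rate (24-bit / 352.8 kHz PCM assumed)
BITS_PER_SAMPLE = 24
USB2_SIGNALING_MBPS = 480.0

def dxd_bitrate_mbps(channels: int) -> float:
    """PCM payload bit rate in Mbit/s for the given channel count."""
    return SAMPLE_RATE_HZ * BITS_PER_SAMPLE * channels / 1e6

for channels, label in [(1, "mono"), (2, "stereo")]:
    rate = dxd_bitrate_mbps(channels)
    share = 100.0 * rate / USB2_SIGNALING_MBPS
    print(f"DXD {label}: {rate:.4f} Mbit/s ({share:.1f}% of USB 2.0's nominal rate)")
```

Even stereo DXD works out to roughly 17 Mbit/s, a few percent of what a to-spec USB 2.0 link signals at.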
 
Jun 27, 2016 at 3:58 PM Post #22 of 126
there's a lot known about the electronics, recording, and playback from the EE "Signals and Systems" perspective - only a small part of the known psychoacoustic limits is needed to determine that the objective, technical performance of many analog audio electronics is better than the relevant demonstrated limits of audio perception
 
since there are audio electronics that are designed to "add color" we have to make some distinctions - but electronics designed for literal signal fidelity aren't hard to make or buy
 
some cable choices do matter in some transducer interface situations - a guitar preamp cable's capacitance changes the audio-frequency peaking with inductive pickups, and at the limits some longer speaker cabling has known, predictable, measurable losses that barely reach the currently known limits of human perception
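To put a rough number on the guitar-cable case: the pickup inductance and the cable capacitance form a resonant circuit, and more cable capacitance pulls the resonant peak down in frequency. The component values below are illustrative assumptions (a few henries for a passive pickup, a few hundred picofarads of total cable capacitance), not measurements of any particular rig.

```python
import math

def resonant_freq_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of the pickup inductance + cable capacitance (damping ignored)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

PICKUP_L_HENRY = 2.5                      # assumed passive pickup inductance
for cable_c_pf in (200, 500, 1000):       # assumed total cable capacitance in pF
    f = resonant_freq_hz(PICKUP_L_HENRY, cable_c_pf * 1e-12)
    print(f"{cable_c_pf:4d} pF of cable -> resonant peak near {f / 1000:.1f} kHz")
```

With those assumed values the peak moves from roughly 7 kHz down to about 3 kHz as the capacitance rises, which is squarely in the range where it audibly changes a guitar's tone.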
 
for "reasonable" equipment, analog audio line level home length audio signal interconnect, merely competent, at the level of Blue Jeans/Mogami basic offerings doesn't introduce errors anywhere near human hearing thresholds
 
Jun 27, 2016 at 4:07 PM Post #23 of 126
   
Well there is a "body of knowledge" in sound science. For example there are theories about the limits of human hearing, or the limits of audio memory. I don't know how these theories were arrived at, but it makes sense that blind tests are involved. And at least some members of the sound science forum are making claims that apply to most or all people, such as the idea that audiophile cables are snake oil.

 
There is a very long history of psychophysics research (well over 100 years), and there are many decent resources out there on the web; even the Wikipedia page is okay for starters, and a quick Google search on psychophysics will pull up loads of decent starting points. 
 
It is a fascinating field and well worth having a look at. Do searches on discrimination thresholds and so on - there is a lot out there.
 
Jun 27, 2016 at 4:08 PM Post #24 of 126
   
[1]What does that mean? You are controlling the source, A, B, or X, right? How do you control the time in these tests?
 
 
 
[2]Perceiving sound is how musicians and instrument makers navigate toward a polished result.
 
If someone said, "I used a device that measured changes in my position to navigate from LA to New York City," and they really got to New York City, that would be pretty strong evidence that the device was functioning well.
 
 
 
 
[3] But how were these studies designed? Who were the listeners and what were the signals?
 
 
[4] The point is that in music, small details contribute to the musical effect. They are integral to it. They are not the "font" in a "book."

 
[1] In some ABX tests, such as comparisons of sources in Foobar, you can easily and repeatedly replay any section of the file, as many times as you would like. 
 
[2] Not following how that is related to the fact that auditory memory is likely very short. With a GPS device, it's easier to quantify accuracy, though.
 
[3] I provided one example, but it is true that a lot of AES studies are pretty poorly designed. It seems that they are rarely designed by experienced researchers, unfortunately. Maybe I've just been reading the wrong ones, but study design is certainly a problem that I've seen with a lot of these AES studies.
 
[4] I certainly agree with that, and don't think many people here would argue with it. Analogies with regards to audio are very rarely any good.
 
Jun 27, 2016 at 4:18 PM Post #25 of 126
The point is that icebear said there is no problem with blind tests as long as you don't generalize beyond the test conditions. But if there is a body of knowledge about psychoacoustic hearing limits, then that knowledge is applied in a widespread way. So there must be some widespread conclusions being drawn from blind tests.
 
So there is a body of knowledge that results from blind tests. These blind tests were done under certain conditions. It seems that those conditions, in practice, are usually quick switching and short-duration signals.
 
Yet, we know that in music you can't hear very much of a sound in a short-duration signal.
 
So what is the proof that short-duration ABX tests can be generalized to be accurate in a widespread manner?
 
Jun 27, 2016 at 4:27 PM Post #27 of 126
   
[1] In some ABX tests, such as comparisons of sources in Foobar, you can easily and repeatedly replay any section of the file, as many times as you would like. 
 
[2] Not following how that is related to the fact that auditory memory is likely very short. With a GPS device, it's easier to quantify accuracy, though.
 
[3] I provided one example, but it is true that a lot of AES studies are pretty poorly designed. It seems that they are rarely designed by experienced researchers, unfortunately. Maybe I've just been reading the wrong ones, but study design is certainly a problem that I've seen with a lot of these AES studies.
 
[4] I certainly agree with that, and don't think many people here would argue with it. Analogies with regards to audio are very rarely any good.

 
[1] But is this the way tests were done to establish hearing limits?
 
[2] GPS is a measure of absolute position. But comparing two sounds is a matter of noticing relative changes. Musicians must make comparisons between sounds that occur hours or days apart. By doing so, they navigate in the general direction of improvement of their sound. Instrument builders do the same thing. Experimental changes to an instrument take hours or days to execute. It is impossible to do quick switching.
 
[3] I'm talking about the studies done to determine that audio memory is short. How were they designed?
 
[4] okay.
 
Jun 27, 2016 at 4:33 PM Post #29 of 126
   
Citation?
 
What exactly do you mean by short duration (< 1 s? < 50 ms?) - see the psychophysics research...


Put it this way. We have two signals, A & B. We have a test design that includes signals of duration D. So the question is, can you or can't you hear all the differences between A and B in a signal of duration D? A well-designed blind test would choose D so that the answer is "can."
 
Or put it another way: when listening to music signal M, either you *do* hear more of the details in a longer sample of M, or you *don't*.  A well-designed blind test would be based on the answer to that question.
 
Jun 27, 2016 at 4:36 PM Post #30 of 126
 
Put it this way. We have two signals, A & B. We have a test design that includes signals of duration D. So the question is, can you or can't you hear all the differences between A and B in a signal of duration D? A well-designed blind test would choose D so that the answer is "can."
 
Or put it another way: when listening to music signal M, either you *do* hear more of the details in a longer sample of M, or you *don't*.  A well-designed blind test would be based on the answer to that question.

 
 
You are not answering my questions.
 
