KeithEmo
Member of the Trade: Emotiva
Joined: Aug 13, 2014 | Posts: 1,698 | Likes: 868
Someone suggested the idea of a Kickstarter campaign to finance some tests.
I think that would be an excellent idea.
I just want to make it perfectly clear that I am 100% behind doing proper testing.
But I am very much against reaching premature conclusions based on the results of inadequate tests.
I've designed proper scientific tests - which is why I'm unwilling to ignore serious flaws when I see them.
It's not really all that difficult to do it right.
However, sadly, it rarely seems to work out that way.
And, when it comes to testing, the biggest part of the cost is usually time and labor.
This is what makes it expensive for a company to do it - unless they are certain of a return on their cost.
On the other hand, an audio club or group of enthusiasts can save a lot of that cost by using volunteer labor.
Another way to improve rigor is to encourage "self-selection".
You DO NOT arbitrarily choose "a good pair of headphones" to use for the test.
You hold a meet - and invite everyone to "bring their most revealing pair of headphones".
Then you create a set of test files containing varying, known amounts of THD, ringing, and perhaps noise.
Then you test the test equipment itself...
With the goal of finding out which of those headphones make the known differences easiest to notice.
This gives you the best chance of detecting unknown audible differences if they really exist...
And makes it credible when you say you did your best to detect differences if there are any...
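The steps above can be sketched in code. This is a minimal, hypothetical example (assuming Python with NumPy; the tone frequency, use of a single 3rd harmonic, and duration are all illustrative choices, not anything from the post): it generates a test tone with a target amount of THD, then measures the file afterward to confirm it really contains the intended distortion before anyone listens to it.

```python
# Hypothetical sketch: build calibration files with known THD, then verify
# them by analysis, so headphones can later be ranked by how easily they
# reveal a known flaw. Parameter choices here are illustrative assumptions.
import numpy as np

def tone_with_thd(freq_hz, thd_percent, fs=48000, seconds=2.0):
    """A test tone plus a 3rd harmonic sized to hit a target THD."""
    t = np.arange(int(fs * seconds)) / fs
    fundamental = np.sin(2 * np.pi * freq_hz * t)
    # With a single harmonic, THD equals the harmonic/fundamental amplitude ratio.
    harmonic = (thd_percent / 100.0) * np.sin(2 * np.pi * 3 * freq_hz * t)
    signal = fundamental + harmonic
    return signal / np.max(np.abs(signal))  # normalize to avoid clipping

def measured_thd(signal, freq_hz, fs=48000):
    """Confirm the file actually contains the intended distortion (FFT ratio)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    fund = spectrum[np.argmin(np.abs(freqs - freq_hz))]
    harmonics = [spectrum[np.argmin(np.abs(freqs - n * freq_hz))]
                 for n in (2, 3, 4, 5)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fund

clean = tone_with_thd(1000, 0.0)
dirty = tone_with_thd(1000, 1.0)          # a "1% THD" test file
print(round(measured_thd(dirty, 1000), 2))  # reports ~1.0, the requested 1% THD
```

The verification function is the important part: it closes the loop between what you intended to put in the file and what is actually there, which is exactly the discipline the post argues for.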
As far as I'm concerned, the basic methodologies of various blind testing methods aren't the big problem.
The biggest problem is simply sloppy test procedure.
For example, if you were trying to test whether a certain additive made a noticeable change in the taste of a product, you would start by adding known amounts of the substance, and you would analyze your samples to make sure they really contained the proper amounts before letting anybody taste them.

Yet, in contrast, in most of the various tests that have been run about "audible differences", nobody bothered to get out an analysis microphone and confirm whether that ringing, or ultrasonic content, or whatever, was actually ARRIVING AT THE EARS OF THE TEST SUBJECTS.

(It seems pretty obvious that you cannot conclude that something is "inaudible" if you haven't first confirmed that it is arriving at the ears of your test subjects. Yet almost all tests omit this basic step, preferring to rely on the idea that "good speakers" are delivering the test signal properly, without actually confirming it.)
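The "confirm it arrives at the ears" check can be sketched as well. This is a hypothetical illustration (assuming Python with NumPy; the band edges, threshold, and simulated capture are invented for the example): given a measurement-microphone capture taken at the listening position, it checks whether the ultrasonic band carries any meaningful energy relative to the whole signal before anyone is allowed to call that content "inaudible".

```python
# Hypothetical sketch of the verification step described above: before
# declaring ultrasonic content inaudible, confirm it survived the playback
# chain by analyzing a measurement-mic capture at the listening position.
# Band edges and the -40 dB floor are illustrative assumptions.
import numpy as np

def band_energy_db(capture, fs, lo_hz, hi_hz):
    """Relative energy (dB) in [lo_hz, hi_hz) versus the whole spectrum."""
    spectrum = np.abs(np.fft.rfft(capture)) ** 2
    freqs = np.fft.rfftfreq(len(capture), 1 / fs)
    band = spectrum[(freqs >= lo_hz) & (freqs < hi_hz)].sum()
    return 10 * np.log10(band / spectrum.sum() + 1e-20)

def ultrasonics_arrived(capture, fs=96000, floor_db=-40.0):
    """True if the 20-40 kHz band is within floor_db of the total energy."""
    return band_energy_db(capture, fs, 20000, 40000) > floor_db

# Simulated one-second capture: 1 kHz tone plus a 25 kHz component at -20 dB.
fs = 96000
t = np.arange(fs) / fs
capture = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 25000 * t)
print(ultrasonics_arrived(capture, fs))  # the 25 kHz content is present
```

In a real test the `capture` array would come from a calibrated analysis microphone rather than a simulation; the point is that the presence check is cheap compared to the cost of a conclusion drawn without it.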
You guys keep piling up imagined reasons why existing blind test methodologies may be flawed or not sensitive enough. But the correct next step would be to devise and conduct improved blind tests using your purported state-of-the-art equipment and improved methodologies, not to sit here blabbing about why we should accept your SIGHTED test results. SIGHTED tests are the lowest of the low in terms of methodology and can never be accepted as any sort of evidence, no matter how much detail you go into about the perceived differences or how blindingly obvious you swear the differences are!
Really, given the amount of fruitless bickering recorded here, this thread should be locked until somebody posts results from a new blind test they have actually conducted!