Quote:
Originally Posted by aristos_achaion
So, I'm curious...how would you use a DBT to determine whether, say, the Zu Mobius cable on the HD650 was snake oil or not?
Showing that it's not is easier than the opposite.
You need to find some people who can hear the improvement.
Then you have to design an experimental setup, in cooperation with the listeners. The cable must not be detectable by the listener in any way: weight, stiffness, noise... The operator must not put the headphones on the listener's head, because then the test wouldn't be double blind. You must design a setup where the listener can take the headphones and put them on without touching the cable, or even moving it. Ideally, the headphones should not move at all: the listener should put his head below the headphones instead of putting the headphones on.
Switching cables is also a challenge: the listener must hear absolutely nothing, neither the noise of the plugs nor the sound of the cables being set down.
These constraints must preserve the conditions in which the listener can hear the difference. The DBT setup must not introduce any variable that may prevent the listener from hearing the difference, such as the inability to move the head or to set the headphones in the right position on the head...
The listener may then choose the test protocol that suits him or her best: ABX or any other random sequence. A fixed success criterion (the acceptable probability of guessing) must be agreed on by everyone involved, taking into account the expectations of the people who are going to read the result.
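As a rough sketch (my addition, not from the original post), the "chances of guessing" for a simple ABX run can be computed as a binomial tail probability, since each guessed trial is a coin flip:

```python
from math import comb

def abx_p_value(successes: int, trials: int) -> float:
    """Probability of getting at least `successes` correct answers
    out of `trials` purely by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

# Example: 14 correct out of 16 trials by pure guessing
print(round(abx_p_value(14, 16), 4))  # → 0.0021
```

The agreed success criterion is then a threshold on this value, e.g. "the result counts only if the guessing probability is below 0.001."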
The expected number of listeners, and all other published tests, must be taken into account, because their existence drags down the significance of the result. The more attempts are made, the greater the risk of a false success. The real probability of guessing must be evaluated in light of these data.
If the test is meant to become a "classic" that other forum members may repeat, then we have no control at all over the number of listeners and sessions, and therefore no control over the real significance of the result.
In this case, significance can be drastically restored by introducing trial sessions. Any listener who wants to take the challenge must first pass a trial session, and the result of that trial session is discarded, whatever it is. If your target p-value is 0.001, introducing a trial session with the same criterion raises the significance by a factor of 1000, which means that you don't have to worry about significance until about 1000 other people have tried.
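The arithmetic behind the trial-session argument can be sketched as follows; the numbers (p = 0.001 per session, 1000 listeners) are illustrative assumptions, not figures from the post:

```python
p = 0.001   # chance of passing one session purely by guessing
n = 1000    # number of listeners who attempt the challenge

# Without a trial session: chance that at least one pure guesser "succeeds"
fwer_plain = 1 - (1 - p) ** n

# With a discarded trial session using the same criterion:
# a guesser must pass twice in a row, so the per-listener chance is p * p
fwer_trial = 1 - (1 - p * p) ** n

print(f"{fwer_plain:.3f}")  # ≈ 0.632 — a false success is more likely than not
print(f"{fwer_trial:.4f}")  # ≈ 0.001 — back to the intended significance level
```

This is why, under these assumptions, a single discarded trial session buys back roughly the factor of 1000 that a thousand uncontrolled attempts would otherwise cost.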
Then, when you get a success, you have to make some measurements in order to find the origin of the difference. It may have been a loose contact, for example.
Then you need to reproduce the success. Things are much easier here, because we now know what kind of music sample, sample duration, listening volume, and repetition sequence can work.
It is also possible to record the signal at both ends of both cables, if the measurement device has a high enough impedance, in order to see which cable is the most transparent. If the difference is still audible in the recordings, reproduction becomes much simpler: the samples can be uploaded, and anyone can try to ABX them without having to set up a physical double-blind test.
The way the samples were obtained might still need to be reproduced, though, in order to rule out a fault in the setup (for example, something broken in the cable that was not obvious during measurements).
That is, if you want to prove that the cables sound different. Proving that one of them is snake oil is another matter.
You might begin by looking for claimed technical superiority and measuring it to check. But that won't change the opinion of those who can hear a difference.
The only way would be, through a very long process, to organize DBTs with a representative selection of trusted people who can hear the difference: reviewers, forum members, etc.
All of them must have been able to find the difference easily. Then each DBT must be set up according to that listener's habits and requirements for an optimal listening experience. And every failed DBT must be repeated, with training of the listeners, until they can clearly tell whether the difference is there or not, and why. Every suggestion that might explain a failure must lead to a new DBT setup that takes the objection into account.
Once all loopholes have been addressed, and all the people who claimed to hear the difference have recognized that the test conditions were perfect, and that the difference was thus just a product of their imagination, then we can conclude that the cable does not have the properties everyone thought it had.
It still might not be snake oil, but it would mean that no one has yet found what it improves over a standard cable.
In order to confirm a difference with several independent DBTs and some subsequent measurements, expect about six months, with, say, between 2 and 10 meetings.
In order to prove that the sonic properties attributed to a given device are not real, expect several years of work.