Originally Posted by SunByrne
Well, we would in fact know something. For instance, here's a hypothesis which is still tenable in the face of the current data (I'm not saying I agree with this, just that it's still tenable): people can't really tell the difference and are just responding randomly. If we could reject the null on a test of association, we could reject that explanation.
I couldn't agree more. In fact, I routinely recommend rejection for submissions to the scientific journals for which I review, usually on the basis of flawed experimental methodology/statistics.
I just didn't think this was quite the right context for being quite so exacting about the statistics.
I've also rejected one or two manuscripts for bad statistics/experimental design. However, I look at the design before even considering the statistics. In this test, the question appears to have been "which cable is which?" However, the subjects appear to have had no control conditions whatsoever. What basis would they have for making an identification? Unless they had outside experience with one or more of the cables in question, they would be making a blind guess, and the expected result of a blind guess is, of course, random.
If I give a subject an unlabelled headphone and ask, "Is this a Grado HP-1?", and the subject has never heard (or seen) a Grado HP-1, I'd expect a fairly random response. In fact, if I gave the subject three completely unfamiliar headphones and asked them to identify a Sennheiser HD-650, a Sony SA-5000, or an AKG K-701, and they had never seen or heard any of them, how could you get anything besides random responses? And yet I suspect that the majority of listeners who have heard those three headphones would agree that they do not sound the same; I don't think the "can we detect headphone differences?" question even attracts controversy. The listeners would simply not have had the experience needed to identify the sonic signatures of the headphones, and so the results would be random.
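For what it's worth, the blind-guess intuition is easy to check with a quick simulation (a sketch of the hypothetical three-headphone scenario above, not anything from the actual cable test): if a subject who has never heard any of the three headphones simply assigns the three labels to the three units at random, the expected number of correct identifications per subject is exactly one, because a random permutation of n items has on average exactly one fixed point, regardless of n.

```python
import random

def simulate_blind_guessing(n_subjects=5000, n_items=3, seed=42):
    """Simulate subjects with no listening experience: each subject
    assigns the n_items labels to the n_items units at random.
    Returns the average number of correct identifications per subject."""
    rng = random.Random(seed)
    labels = list(range(n_items))
    total_correct = 0
    for _ in range(n_subjects):
        guess = labels[:]
        rng.shuffle(guess)  # a uniformly random assignment of labels
        total_correct += sum(g == t for g, t in zip(guess, labels))
    return total_correct / n_subjects

# Averages close to 1.0 correct answer per subject, for any n_items.
print(simulate_blind_guessing())
```

The point of the exercise: entirely naive subjects produce data that looks exactly like this, so "one in three correct, on average" tells you nothing about whether the items are audibly different.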
If the analogous experiment to the one reported, using items with known audible differences, would be expected to generate random data, why would we try to draw conclusions from this one? An experiment expected to generate random results yielded random results. I suppose you could do the analysis to show that the results were truly random, but suppose you got statistical significance. What scientific hypothesis would be supported by a significant result? If there isn't a clear answer to that question, the statistics are wasted effort.
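For readers curious what "doing the analysis" would actually involve, here is a sketch of the standard test against chance, with entirely made-up numbers (none of these figures come from the reported test): an exact one-sided binomial test of whether the number of correct identifications exceeds what blind guessing predicts.

```python
from math import comb

def binomial_p_value(successes, trials, p_chance):
    """One-sided exact binomial test: the probability of observing at
    least `successes` correct answers in `trials` independent attempts,
    if each attempt succeeds with chance probability `p_chance`."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical example: 20 subjects pick between 2 cables (chance = 0.5)
# and 13 pick correctly.
p = binomial_p_value(13, 20, 0.5)
print(f"p = {p:.3f}")  # p = 0.132 -- not significant at the usual 0.05 level
```

Even if such a test did come out significant, the original objection stands: with no control condition and no prior exposure to the cables, it is unclear what hypothesis a significant result would actually support.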
Incidentally, the Rat Shack "Fusion" cables, since discontinued, offered very nice sound for the money.