Quote:
Quote:
AV Review spent 6 months with $205,000 worth of equipment measuring 60 HDMI cables. They found that, at certain lengths, certain cables failed to transmit a signal that kept a clear 'eye' where the 1s and 0s were clearly represented. We already know that TVs can suffer from crackles, snow and lines, and it was assumed that the failed cables would be the ones to show such faults. But they did not. Two of the lowest-performing cables had to be joined into a 65-foot run before such signal degradation appeared.
What happened was that a difference was found by measuring, but in actual use that difference made no visible difference. The failure was in going from theory to practice. You can measure all you want, but the final test is whether it is audible or visible.
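To put rough numbers behind that theory-versus-practice gap, here is a little Python sketch of my own (a toy model, not AV Review's actual test rig; the noise levels are made-up stand-ins for cable length and quality). It shows how the measured 'eye' can shrink a long way before a single bit is decoded wrongly:

Code:
import numpy as np

# Toy model: ideal 0/1 levels plus Gaussian noise standing in for cable loss.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=100_000)
ideal = bits.astype(float)

for noise in (0.05, 0.10, 0.20):            # made-up levels: short, long, too-long cable
    received = ideal + rng.normal(0.0, noise, size=bits.size)
    decided = (received > 0.5).astype(int)  # receiver slices at the mid-level

    # Worst-case eye opening: lowest received 1 minus highest received 0.
    eye = received[bits == 1].min() - received[bits == 0].max()
    errors = int(np.sum(decided != bits))
    print(f"noise {noise:.2f}: eye opening {eye:+.3f}, bit errors {errors}")

In a run like this the middle case usually still decodes with zero bit errors even though its measured eye is only a fraction of the first case's; only the worst case produces errors, which is where the snow and sparkles would finally show up. Measurable difference first, visible difference much later.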
Sighted and blind testing are indicators of whether that difference is audible. I prefer blind, as it removes other causes and leaves the cable, or whatever is under test, to perform on its own.
Measuring the performance of cables with a HATS in a soundproof room still needs to pass that theory-to-reality test.
It does.
The problem with DBT (or one of the issues) is that if you agree that music is complex, and also agree that DBT requires time, then I have already done a non-peer-reviewed study showing that analyzing complex data under pressure and with limited time cannot be done accurately.
I don't have the thread any more, but essentially what was done was this: a photo of a very complex painting was taken, and one of the colors was changed in several spots. Users were allowed unlimited time to say whether or not the two images were different and to point out where they differed.
No one got the question completely right (identifying where all the differences were). Yet once the differences were circled, they were easily spotted.
I believe that if someone had spent a long time looking, they would eventually have seen the differences, as the differences were clearly visible once shown.
So DBT isn't necessarily reliable either. Personally, I would rather trust peer-reviewed measurements from reliable sources than spotty DBT results and subjective testimony.
Dave
I like your picture analogy, but I have a different take on it. Blind testing is there to see whether or not humans can identify a difference.
A - If the testers easily, quickly and accurately spot a difference, then there is a big difference.
B - If it takes time and the accuracy level falls then the difference is smaller.
C - If there is a difference, but hardly anyone gets it, then the difference is so small as to be insignificant.
D - If there is no difference and people fail to find a difference, then there is no difference.
E - If there is no difference and people report finding a difference, then the difference is no longer in the item being tested, the difference is now in the tester.
The latter is what I believe is the case with audiophiles and cables: cables may look different and have different constructions, but when it comes to using your ears only, those differences vanish.
So with the picture analogy, the test showed C, though given a bit more time, maybe B. That does not make the test unreliable.
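For anyone who wants to see how A to E fall out of the numbers, here is a quick Python sketch of a one-sided binomial test for an ABX-style blind trial (the trial counts and scores below are made-up examples, not data from any real test):

Code:
from math import comb

def p_value(correct: int, trials: int) -> float:
    """Chance of scoring at least `correct` out of `trials` by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Made-up scores to illustrate the A-E cases above.
examples = [
    ("A: obvious difference", 19, 20),
    ("B/C: small difference", 13, 20),
    ("D/E: guessing level",   11, 20),
]

for label, correct, trials in examples:
    print(f"{label}: {correct}/{trials} correct, p = {p_value(correct, trials):.4g}")

Scoring 19/20 would happen by luck about 2 times in 100,000, so the difference is real and easy to hear (A). 13/20 comes out around p = 0.13, which with only 20 trials cannot be separated from luck, so it is either a small difference or none (B, C or D). 11/20 is right at guessing level (p = 0.41), which is what both D and E look like once the listener can no longer see which cable is which.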