Head-Fi.org › Forums › Equipment Forums › Sound Science › ABX Reliability

ABX Reliability - Page 5

post #61 of 73


 

Quote:
Originally Posted by Slaughter View Post

the funny thing is....they just made up the 95% rule for audio.


That's actually the standard criterion for statistical hypothesis testing.

 
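For context, the "95% rule" is simply the conventional 5% significance level used throughout statistics, not something invented for audio. A quick sketch (with a hypothetical run of 16 trials) of how that criterion translates into a required ABX score, using only the standard library:

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing (chance p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Smallest score that clears the conventional 0.05 threshold in 16 trials.
trials = 16
threshold = next(k for k in range(trials + 1) if p_value(k, trials) < 0.05)
print(threshold, p_value(threshold, trials))  # 12 correct, p ≈ 0.038
```

So in this hypothetical setup, 12 or more correct out of 16 would count as statistically significant, while 11 out of 16 (p ≈ 0.105) would not.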

post #62 of 73
Quote:
Originally Posted by Slaughter View Post

I'm good now. Bring it on...


You still don't know what an ABX test is; it is getting tiresome, not to say pathetic.

post #63 of 73
Quote:
Originally Posted by eucariote View Post

Quite good actually.  Here is how hypothesis testing is done:

 

H0: the difference between a and b is not detectable

H1: the difference between a and b is detectable

...


Equations and numbers.  :)

That jogs the memory.  I think I'm going to get the hang of how to do this stuff again.
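The H0/H1 setup quoted above boils down to a one-sided binomial test: under H0 the listener is guessing, so each trial is a coin flip. A sketch with hypothetical numbers (15 correct out of 20 trials):

```python
from math import comb

def abx_test(correct: int, trials: int, alpha: float = 0.05):
    """Test H0 ('difference not detectable', p = 0.5) against
    H1 ('difference detectable', p > 0.5) with a one-sided binomial test.
    Returns the p-value and whether H0 is rejected at level alpha."""
    p = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
    return p, p < alpha

# Hypothetical run: 15 correct identifications in 20 trials.
p, detectable = abx_test(15, 20)
print(f"p = {p:.4f}, reject H0: {detectable}")  # p ≈ 0.0207, reject H0: True
```

Note how close the boundary is: 15/20 rejects H0, but 14/20 (p ≈ 0.058) does not.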

post #64 of 73
Quote:
Originally Posted by Slaughter View Post

Bob, I assume you failed the test, even though there was in fact a measurable difference. I preferred image B over image A in the side-by-side comparison, even though I failed in a blind test; does this mean that I don't in fact prefer image B?


Looking at both images at the same time, you concluded you preferred  B.  Say I am the image store and you choose to buy B to take home and enjoy.  You pay the money and I slip B into the sack and off you go.  At home, you take it out.  As it happens, I accidentally gave you A.  How will you ever know if you can't tell the difference when looking at them separately?

post #65 of 73

I am just very happy that I no longer wonder whether an expensive cable would really make a difference, and that I can have fun making my own cables without reducing the sound quality of my hi-fi.

post #66 of 73

If there were no swap delay, anyone who could see the differences side by side would get it right 100% of the time. The long delay just makes this test silly; it's like listening to one headphone, then quickly taking it off and plugging in the next one to find the differences. This is not standard ABX at all.


Edited by hannyjuca - 5/19/10 at 4:48pm
post #67 of 73

If there actually were a difference, we would not be so concerned by ABX tests. ABX tests have shown that either there is no difference or the difference is so small that we cannot reliably spot it.

post #68 of 73

Actually, that's the catch: with a properly executed ABX test you can spot minimal changes in the samples, provided the device used for the test reproduces enough detail to do so.

post #69 of 73
Quote:
Originally Posted by hannyjuca View Post

Actually, that's the catch: with a properly executed ABX test you can spot minimal changes in the samples, provided the device used for the test reproduces enough detail to do so.


So do you think the ABX/blind tests I linked to at the start of this thread are properly executed?

post #70 of 73
Quote:
Originally Posted by Prog Rock Man View Post




So do you think the ABX/blind tests I linked to at the start of this thread are properly executed?


Not at all; it has a swap delay.

post #71 of 73

To me, delay is an issue that depends on how well you think you can remember sound. If it's familiar test tracks and you're listening for specifics, I'd say you can go a long time between switches. If you are unfamiliar with the music, then the less time the better.

 

In any case, if the differences really were as audiophiles describe them, hearing them should be easy.

post #72 of 73
Quote:
Originally Posted by hannyjuca View Post




Not at all; it has a swap delay.


Not only that, but you only get to see A and B at the very beginning and cannot refer back to A and B during the test.  So you get to see A and B at the beginning (better commit them perfectly to memory) and then get to see 20 versions of X one at a time for the duration of the test.  A proper ABX test would allow you to refer back to A and B at any time that you want.

 

An ABX test should be designed so that you are testing whether the subject can detect a difference and properly match X with A or B.  The test should be designed to minimize other factors that may influence the test.

 

The ABX test by Sieveking introduces a source of error due to forcing you to memorize the colors of A and B for the duration of the test.  The test becomes more of a psychology experiment on color memory than a proper ABX test to show whether you can or cannot detect a difference between A and B.

 

The Sieveking ABX test is an example of how not to design a proper ABX test.

 

There is always going to be some psychology-experiment component to an ABX test, relating to memory of audio (or, in this case, color) and other such factors. The goal when designing an ABX experiment should be to minimize those other factors, not maximize them. The Sieveking test instead maximizes the influence of color memory rather than minimizing it.
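The design principle here can be made concrete. Below is a minimal sketch of an ABX trial loop in which the listener may replay A, B, or X freely before answering each trial, so nothing has to be committed to memory for the duration of the test; `present` and `ask` are hypothetical callbacks, not part of any real test software:

```python
import random

def run_abx(sample_a, sample_b, trials, present, ask):
    """Minimal ABX trial loop (sketch). `present(label, sample)` plays or
    shows a sample; `ask()` returns one of "play A", "play B", "play X",
    "answer A", "answer B". References A and B remain available on every
    trial, as a proper ABX protocol allows."""
    correct = 0
    for _ in range(trials):
        x_is_a = random.random() < 0.5        # X is assigned per trial
        x = sample_a if x_is_a else sample_b
        while True:
            cmd = ask()
            if cmd == "play A":
                present("A", sample_a)        # re-hear reference A anytime
            elif cmd == "play B":
                present("B", sample_b)        # re-hear reference B anytime
            elif cmd == "play X":
                present("X", x)
            elif cmd in ("answer A", "answer B"):
                if (cmd == "answer A") == x_is_a:
                    correct += 1
                break                          # commit answer, next trial
    return correct
```

A listener who can genuinely tell A from B will score near `trials`; a guesser will hover around half, which is exactly what the binomial significance test then evaluates.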

post #73 of 73

Ham Sandwich said it all.


Edited by hannyjuca - 5/20/10 at 7:51pm