24-bit, SACD and DVD-A recordings. (first update)
Sep 23, 2008 at 1:05 PM Post #16 of 23
Quote:

Originally Posted by nick_charles
I would not base my choice of medium on even a 10% chance of hearing a difference, given simple samples and well-trained young ears; possibly if I were 20 years younger


Are you sure? If I guaranteed you that, for a spend of $1000, one out of every ten of your CDs would sound better, every time, every set of ten CDs ... I think many would pay that.

It's not about the probability that A is better than B as estimated using some objective mass-testing protocol. Who cares? It is about the size of the effect for you: how often you will enjoy A over B. If never, then fine. But if you can consistently prefer A to B in one out of every 10 (blind) trials, and detect no difference the other 9, then buy A.
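
To put rough numbers on this, here is a back-of-envelope sketch (Python, standard library only; the 16-trial session length and the 10% detection rate are just illustrative assumptions), modeling a listener who truly hears the difference on 1 trial in 10 and guesses on the rest:

Code:

from math import comb

def p_correct(detect_rate):
    # Correct ABX answer: detect the difference, otherwise guess 50/50.
    return detect_rate + (1.0 - detect_rate) * 0.5

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p): the one-sided p-value.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 16                      # an illustrative ABX session length
p = p_correct(0.10)         # expected correct rate = 0.55
# Smallest score that reaches p < 0.05 under the null of pure guessing:
need = next(k for k in range(n + 1) if binom_tail(n, k, 0.5) < 0.05)
print(need, "out of", n, "needed to pass")                           # 12 out of 16
print("chance this listener passes:", round(binom_tail(n, need, p), 3))  # ~0.085

Under those assumptions a real but small effect passes the session less than one time in ten, so "no significant difference" and "worth owning A" can coexist.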
 
Sep 23, 2008 at 3:00 PM Post #17 of 23
Quote:

Originally Posted by wavoman
The statistical reasoning presented by some posters here, while correct, is based on classic significance testing, which begins with the assumption (the "null hypothesis") that there is no difference between the two systems, and will not move from that hypothesis unless there is major-league evidence in the other direction. It also assumes a homogeneous population vis-a-vis the ability to detect a difference.

Not appropriate here.



I think this is still the way it should be. If an inventor produces a new system and declares it "better", you really do want to see some hard evidence, and you want to be cautious at the least. So the inventor has to provide evidence that system B is better than system A. Since "better" is difficult to quantify (and preference can go against technical superiority, see CD vs LP ;) ), we often settle on testing for a difference. If B is not different from A, then B cannot be better than A.

As for the distribution of discriminatory abilities, this is a straw man. Look at the most interesting AES listening-test papers: they all describe their listening populations, and often (Benjamin and Gannon, Ashihara et al., Meyer and Moran, Blech and Yang) they deliberately choose subjects whose abilities should be better than average. In short, they often go out of their way to give system B as strong a chance as possible.


Quote:

Think like this: suppose there is in fact a small difference, not always obvious, but some of the time A sounds better than B to some individuals (and never the other way).


This is a hypothesis that you must go out and find evidence for before going any further. If you can find that evidence, then you can proceed.

Quote:

With this hypothesis, people who really can hear a difference some of the time, and who would therefore like to own A instead of B, still post results that look like nothing more than chance.


If the data does not support the model, then the model has to be revisited.

Quote:

The way out of the bind is to isolate the subjects who show a preference for A over B (not significant, but in the right direction), and re-test them.


This is called cherry-picking. I have done it myself, and I have an ongoing debate with one of my committee members about how valid it is; in the academic sub-field in which I am nominally active it is a semi-common practice. You can justify it, but you really have to have a strong case to do so. I do not think it is a valid approach here.
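
To make the dispute concrete, here is a minimal Monte Carlo sketch (Python; all the numbers are hypothetical) in which every "listener" is a pure guesser. Selecting the high scorers and reusing the very scores that got them selected manufactures apparent skill; giving the same people a fresh session does not:

Code:

import random

random.seed(0)
N_SUBJECTS, N_TRIALS = 100, 10

def session():
    # One guessing listener's correct-answer count over N_TRIALS.
    return sum(random.random() < 0.5 for _ in range(N_TRIALS))

first = [session() for _ in range(N_SUBJECTS)]
picked = [s for s in first if s >= 7]    # "showed a preference for A"

# Invalid: quote the selecting scores as evidence -- well above chance.
print(sum(picked) / (len(picked) * N_TRIALS))

# Valid: an independent retest of the selected group -- back near 0.5.
retest = [session() for _ in picked]
print(sum(retest) / (len(retest) * N_TRIALS))

A retest on fresh trials is therefore the defensible version of the idea; significance claimed from the selecting data alone is the cherry-pick.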

Quote:

Moreover, the published tests are not well done: they don't simulate real listening, and they ask for difficult (A/B/X) instead of realistic (A>B) comparisons, etc. etc. ... all discussed in other threads.


First establish a difference, then worry about preference; without a difference, preference is meaningless. In any case, I and many others have successfully used ABX testing to show discrimination of real differences between things like codecs, file formats, distortion levels, volume levels, frequencies, and so on. ABX testing can be really sensitive, and there are positive results out there if you look for them.

The reason ABX testing gets a bad press in some quarters is that it flies in the face of accepted audiophile wisdom. It is uncomfortable to have evidence that contradicts a given world view.
 
Sep 23, 2008 at 3:01 PM Post #18 of 23
many modern PCM input DACs use Delta-Sigma modulation principles to get to analog out

http://www.iet.ntnu.no/~ivarlo/files...t_audiodac.pdf
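
for illustration, a toy first-order delta-sigma modulator (a minimal python sketch of the principle only, not any particular DAC's design) - the local average of the 1-bit output tracks the multi-bit input, which is why an analog low-pass filter can reconstruct the waveform from the bitstream:

Code:

import math

def sigma_delta(samples):
    # First-order modulator: integrate the error between the input and
    # the previous 1-bit output, then quantize to +/-1.
    acc, bits = 0.0, []
    for x in samples:                 # x assumed in [-1, 1]
        acc += x - (bits[-1] if bits else 0.0)
        bits.append(1.0 if acc >= 0 else -1.0)
    return bits

fs = 2_822_400                        # DSD64 rate: 64 x 44.1 kHz
sine = [0.5 * math.sin(2 * math.pi * 1000 * n / fs) for n in range(4096)]
bits = sigma_delta(sine)

# a crude 64-sample boxcar average stands in for the analog low-pass:
w = 64
recon = sum(bits[2048:2048 + w]) / w
print(round(recon, 1), round(sine[2048 + w // 2], 1))   # both near -0.5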

as a closed consumer distribution format SACD/DSD has advantages for Sony - the audible benefit vs other hi-rez formats is theoretically implausible once sample rates reach 96 kHz for 24-bit PCM (Sony's own recommendation for SACD/DSD is high-order analog reconstruction filtering of the DSD output above 50 kHz)

both SACD and DVD-A have information-theoretic superiority over CD - the practical audibility debate will likely go on for a while longer...
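
the raw payload numbers behind that superiority claim (back-of-envelope python, stereo, ignoring channel coding and container overhead):

Code:

def rate_mbps(fs_hz, bits, channels=2):
    # Raw uncompressed data rate in Mbit/s.
    return fs_hz * bits * channels / 1e6

print("CD    44.1 kHz / 16 bit  :", rate_mbps(44_100, 16))     # 1.4112
print("DVD-A 96 kHz / 24 bit    :", rate_mbps(96_000, 24))     # 4.608
print("DVD-A 192 kHz / 24 bit   :", rate_mbps(192_000, 24))    # 9.216
print("SACD  2.8224 MHz / 1 bit :", rate_mbps(2_822_400, 1))   # 5.6448

more raw bits is not the same as more audible resolution though - DSD's single bit only buys its dynamic range through noise shaping that piles quantization noise above the audio band, hence the reconstruction filtering mentioned above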
 
Sep 23, 2008 at 3:23 PM Post #19 of 23
Quote:

Originally Posted by wavoman
Are you sure? If I guaranteed you that, for a spend of $1000, one out of every ten of your CDs would sound better, every time, every set of ten CDs ... I think many would pay that.


You are changing the argument here. My premise is that if there is only a 10% chance that I can tell the difference between two systems, the outlay in new kit and new media (my collection is modest, but would not be cheap to replace) is not worth it. Others may feel differently; that is their choice.

You are presenting a case where I ***would*** be able to tell the difference 1 time in 10 if I buy component A. I still do not go for that, because 9 times out of 10 I would gain no benefit from component A. If A does not make all my CDs (or at least most of them) sound better to me, why am I bothering?
 
Sep 24, 2008 at 12:30 AM Post #20 of 23
Quote:

Originally Posted by nick_charles
... If A does not make all my CDs (or at least most of them) sound better to me, why am I bothering?


For rapture the one time in ten that it does!

BTW, I still strongly disagree with you over A/B/X, and over the statistics.

I will argue with you later, however. Also, YGPM.
 
Sep 24, 2008 at 6:15 PM Post #21 of 23
Quote:

Originally Posted by wavoman
It also assumes a homogeneous population vis-a-vis the ability to detect a difference.


This assumption only holds true if you are trying to assess the ability of the human ear in general.
But you can drop this assumption and still run the test. The only impact is that your conclusion is restricted to your set of listeners, and is no longer generalizable to the rest of humankind.

When this assumption does not hold, it is often because the listeners are more trained than average, which is good when the test result is negative: if even better-trained ears hear no difference, the negative result carries more weight.

Quote:

Originally Posted by wavoman
Think like this: suppose there is in fact a small difference, not always obvious, but some of the time A sounds better than B to some individuals (and never the other way).


Here, individual scores were reported separately, so there is no problem if only some individuals can hear the difference.

The paper says: "The listeners had complete operational control over the ABX software by means of a control unit, so they could determine the course and timing of the listening comparison process. This ability was an important factor in minimizing the previously mentioned risk of performance anxiety in the test subjects."

I don't see, then, why a subject would answer "X is A" or "X is B" when he doesn't hear any difference!

Quote:

Originally Posted by wavoman
Moreover, the published tests are not well done: they don't simulate real listening, and they ask for difficult (A/B/X) instead of realistic (A>B) comparisons, etc. etc. ... all discussed in other threads.


Since the listeners could determine the course of the comparison, they could listen to A/B/X as well as A/B.
Thus I don't see why a listener would deliberately choose A/B/X if he or she finds it more difficult than A/B.
 
Sep 28, 2008 at 5:25 AM Post #22 of 23
Pio -- it has been shown over and over that subjects in tests give biased responses (in other words, not what they really think) depending on how the questions are asked.

In fact, asking "are these the same" gets different results from asking "are these different"!

And asking "is X like A or B" will get people to make a choice (many times they HAVE TO make a choice to participate) when in fact they experience no difference. And so on ... the food industry has done tons of work on this stuff.

I think that when the paper says "control over the A/B/X box" they don't mean control over the protocol, just the timing of the samples. They still had to A-B-X ... in other words, hear X and declare it A or B (even if they heard no difference ... the idea being that the answers would then be random).

You and I actually agree on your other point -- these results should be taken individually, not aggregated. I was making the same point while arguing with another poster. Sorry if that was not clear -- we agree on this one!
 
Sep 29, 2008 at 6:30 PM Post #23 of 23
Quote:

Originally Posted by wavoman
Pio -- it has been shown over and over that subjects in tests give biased responses (in other words, not what they really think) depending on how the questions are asked.

In fact, asking "are these the same" gets different results from asking "are these different"!



This bias favours the null hypothesis, because if it is big enough to trouble people in blind conditions, it should trouble them even more in normal conditions, where many more influences can bias their perception.
 
