Not Just About Cables: Objectivism vs. Subjectivism in Audio
Oct 15, 2007 at 12:21 PM Post #76 of 89
Quote:

Originally Posted by infinitesymphony
I guess it boils down to this... Some people are more interested in the theory behind equipment, and others prefer to listen and draw their own conclusions. They're both valid methods. It's just that one is easier to prove than the other.



Not necessarily, no. If you're the listener and hear a difference, then that is enough proof for you. Something that looks good on paper doesn't have to sound good at all! It's just a question of which test method you value most; it comes down to preference again. Some trust their ears, some only the figures on paper.
 
Oct 15, 2007 at 12:44 PM Post #77 of 89
Quote:

Originally Posted by Riboge
Look, several people at Head-Fi have reported scoring consistently above 6 out of 10 on their attempts at blind testing, and a few have reported other audiophile groups, or at least some of their members, doing the same. That makes them promising subjects for a search for possible correlations. If some are found, then there is something to go on. If you wait for proof that meets your standard of expertise, you will wait forever.


Why do people even bother quoting ABX tests when the testers get "6/10"...? For goodness' sake, that's terrible. Do you even understand ABX and how the probability works? A score of 6/10 or better comes up nearly 38% of the time by pure guessing. If I were trying to prove something, that's a bad result...

Google gives:
http://www.provide.net/~djcarlst/abx_bino.htm
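
For what it's worth, here is a minimal sketch (Python, purely illustrative) of the binomial arithmetic behind that linked table: the chance of scoring k or better out of 10 trials by pure guessing.

```python
# Chance of scoring k or better out of 10 ABX trials by pure guessing (p = 0.5).
from math import comb

n = 10
for k in range(6, 11):
    p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    print(f">= {k}/10 by guessing: {p_value:.3f}")
# >= 6/10: 0.377   >= 7/10: 0.172   >= 8/10: 0.055   >= 9/10: 0.011   >= 10/10: 0.001
```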

---------------

On testing whether there is or is not a difference in the first place, subjective decisions about which is best aside, is it not possible to test this with equipment? Take cables, for example: if we think there is no difference between them but the faithful insist there is, pipe the same digital track through the different cables into a high-quality ADC at high sample rate and bit depth, and analyze the resulting files. The factors Steve listed above could be examined and would apparently serve as decent indicators.

Identical results may well be expected, yes, but that turns what is otherwise a negative argument into something falsifiable, and thus a valid hypothesis?
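
A minimal sketch of that capture-and-compare idea (Python with NumPy/SciPy; the file names and the capture setup are hypothetical): record the same track through each cable with the same ADC, align and level-match the two captures, then null one against the other and see how far down the residual sits.

```python
# Hypothetical null test: compare two ADC captures of the same track made
# through two different cables. File names are placeholders.
import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("capture_cable_a.wav")
rate_b, b = wavfile.read("capture_cable_b.wav")
assert rate_a == rate_b, "captures must share a sample rate"

a = a.astype(np.float64)
b = b.astype(np.float64)
if a.ndim > 1:            # keep one channel so the arithmetic stays 1-D
    a, b = a[:, 0], b[:, 0]

# Align the captures by cross-correlating a short chunk; start-time offsets
# between recordings are expected and are not a property of the cable.
# (This one-sided search assumes capture A began recording no later than B.)
chunk = rate_a // 10
lag = int(np.argmax(np.correlate(a[: 2 * rate_a], b[:chunk], mode="valid")))
b = b[: len(a) - lag]
a = a[lag : lag + len(b)]

# Level-match with a least-squares gain, then subtract.
gain = float(np.dot(a, b) / np.dot(b, b))
residual = a - gain * b

# Null depth: residual energy relative to the signal, in dB.
null_db = 10 * np.log10(np.sum(residual**2) / np.sum(a**2))
print(f"residual sits {abs(null_db):.1f} dB below capture A")
```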
 
Oct 15, 2007 at 1:46 PM Post #78 of 89
Quote:

Originally Posted by badmonkey
Why do people even bother quoting ABX tests when the testers get "6/10"...? For goodness' sake, that's terrible. Do you even understand ABX and how the probability works? A score of 6/10 or better comes up nearly 38% of the time by pure guessing. If I were trying to prove something, that's a bad result...



For heaven's sake, please read carefully before you pop off with such a comment. What I said was "above 6 out of 10". That means at least 7 out of 10, doesn't it? And I said "consistently". That means taking the test over and over, right? If you score 7 out of 10 on a testing procedure with a given number of trials and then repeat that result a few times, the odds that it was random guessing drop even further.
 
Oct 15, 2007 at 4:27 PM Post #80 of 89
Quote:

Originally Posted by edstrelow
What factual evidence is there for claims about the worth of headphones in the headphones section, e.g. that the Stax 007 is the best electrostatic headphone? Just a lot of individuals' opinions, some saying yea and some saying nay.


True, but I was referring to claims about audible differences between power cords and other snake oil stuff.


Regards,

L.
 
Oct 15, 2007 at 5:53 PM Post #81 of 89
Quote:

Originally Posted by badmonkey
A score of 7/10 comes up by guessing roughly 17% of the time, just under one in five.

It's about halfway between blind chance and certainty.



If someone does it consistently, as I said, the probability that it's chance drops fast. Do it two times in a row and you multiply the single-run probability by itself, giving about 3%, and so forth. And then there is the "at least", meaning some score 8 out of 10 or higher. Someone better than me at statistics might try determining the probability of chance if someone scores 7, 8, or 9 out of 10 two times in a row.
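
Since you ask, here is a rough pass at those numbers (Python; pure-guessing model, so every trial is a coin flip at p = 0.5):

```python
from math import comb

def p_at_least(k, n):
    """Probability of k or more correct out of n trials by pure guessing."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

p7 = p_at_least(7, 10)                        # one run at 7/10 or better, ~0.17
print(f"P(>=7/10 once)    = {p7:.3f}")
print(f"P(>=7/10 twice)   = {p7**2:.3f}")     # two independent runs, ~0.03

# The specific question: scoring 7, 8, or 9 out of 10 two runs in a row.
p_7to9 = sum(comb(10, k) for k in (7, 8, 9)) / 2**10    # ~0.17 for one run
print(f"P(7-9/10 twice)   = {p_7to9**2:.3f}")            # ~0.03

# Arguably the cleaner analysis pools both runs: 14 or more correct out of 20.
print(f"P(>=14/20 pooled) = {p_at_least(14, 20):.3f}")   # ~0.06
```

So two runs in a row at that level sits around the 3% mark, and pooling the trials into one longer test is usually the tidier way to report it.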
 
Oct 15, 2007 at 6:18 PM Post #82 of 89
One thing that DB test proponents always seem to fail to mention is that DB tests do not prove that the items under test are identical. They can only prove that they are different.

I use DB tests to compare equipment (including cables and amps) and am generally happy with the results. But occasionally, something like this happens: I test two items using DB tests and I can't tell them apart. I leave the new item hooked up to my system for an extended period, playing all sorts of recordings that I typically listen to, and after some time I believe I hear something different in a specific recording. I then DB test using the recording that sounds different, and I pass the DB test.

Again, DB tests can only prove differences. If you conclude that item A is equal to item B because of a DB test, you are misinterpreting DB test results.
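
To put that in numbers, here is a small sketch (Python; the trial count, the 5% pass criterion and the 70% "true" detection rate are all assumptions for illustration) of why a failed test is weak evidence of sameness:

```python
# Assumed setup: 16 ABX trials, "pass" means the score clears a 5% significance
# threshold under the guessing model; the listener's true hit rate is 70%.
from math import comb

def p_at_least(k, n, p):
    """P(k or more correct out of n) when each trial succeeds with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 16
# Smallest passing score whose pure-guessing probability is at or below 5%.
threshold = next(k for k in range(n + 1) if p_at_least(k, n, 0.5) <= 0.05)
print(f"pass requires >= {threshold}/{n}")     # works out to 12/16 here

# Chance that a listener who genuinely hears the difference 70% of the time
# actually clears that bar.
power = p_at_least(threshold, n, 0.7)
print(f"chance of passing = {power:.2f}")      # roughly 0.45
```

So a real but subtle difference fails this particular test more often than not, which is the point: failing to prove a difference is not the same as proving there is none.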
 
Oct 15, 2007 at 6:56 PM Post #83 of 89
Quote:

Originally Posted by Scrith
One thing that DB test proponents always seem to fail to mention is that DB tests do not prove that the items under test are identical. They can only prove that they are different.


Not always. For example:

Quote:

Originally Posted by Febs
While I am an advocate of ABX testing, I think that we need to be careful not to overstate what ABX tests show. A "failed" ABX test does not prove that people hear no difference. It simply means that it was not established that the participants did in fact hear a difference. This is a subtle distinction, but as we (rightfully) point out the logical fallacies in the arguments of others, we should not fall into the fallacy of claiming that the failure to prove a positive is proof of the negative.


Quote:

Originally Posted by Febs
The absence of double-blind tests does not prove that there are no real differences. It just means that no such differences have been established.

Don't get me wrong. I am an advocate of double-blind testing. But I think that we need to be careful to understand what such tests do and do not "prove" and not to overstate the conclusions that can be drawn from them.



 
Oct 15, 2007 at 9:00 PM Post #84 of 89
I sure hope we are not veering off into a DBT debate. The discussion has taken a pleasing turn toward calm, useful discourse and away from polemical dissension, crusading politicized views, baiting, and so on. So, while the explicit topic of this thread is seldom addressed directly, one can at least study how and why it has gone this way instead of the usual.
Quote:

Originally Posted by Scrith
One thing that DB test proponents always seem to fail to mention is that DB tests do not prove that the items under test are identical. They can only prove that they are different.

I use DB tests to compare equipment (including cables and amps) and am generally happy with the results. But occasionally, something like this happens: I test two items using DB tests and I can't tell them apart. I leave the new item hooked up to my system for an extended period, playing all sorts of recordings that I typically listen to, and after some time I believe I hear something different in a specific recording. I then DB test using the recording that sounds different, and I pass the DB test.

Again, DB tests can only prove differences. If you conclude that item A is equal to item B because of a DB test, you are misinterpreting DB test results.



What this also illustrates, along with your point, is how small and limited the differences can be, and often are, when there are differences at all. It is important not to make more of a difference than is warranted. Even the believers have to admit that the disproportion between the degree of difference or improvement and the increase in price is often even greater here than with other equipment. Not confusing the degree of excitement at hearing a difference with the degree of the difference itself would help a lot and would avoid understandably inflaming the skeptical.
 
Oct 15, 2007 at 9:25 PM Post #85 of 89
Quote:

Originally Posted by bigshot
The qualities of audio reproduction that affect sound quality are...

Frequency Response
Dynamics
Harmonic Distortion
Signal To Noise
Channel Separation
Pitch (aka wow and flutter)
Phase / Timing

Did I miss any?

See ya
Steve



Yes, because I believe you're not looking quite deep enough. With amplifiers, for instance, it's not enough to look at FR, THD, channel separation and so forth at the designated test level; it's necessary to examine the amplifier's characteristics at many power levels, ranging from milliwatts all the way up to full power, as well as its overload behaviour. A class AB amplifier with feedback will often measure quite well from several watts up to full power, but measure it at fractions of a watt and its performance falls apart. The THD will actually start rising once you drop below a watt or so, which is indicative of poor low-level linearity, and that destroys low-level detail in the signal. Very simple to check, but very few people do it.
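
As a rough illustration of that kind of sweep, here is a sketch (Python/NumPy; the 1 kHz test tone is synthetic and the level values are placeholders, since in practice each capture would come off the amplifier's output through an attenuator and an ADC) of estimating THD from an FFT at several output levels rather than only at the rated power:

```python
import numpy as np

def thd_percent(capture, fs, f0, n_harmonics=5):
    """THD of a captured sine: summed harmonic amplitudes relative to the fundamental."""
    windowed = capture * np.hanning(len(capture))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(capture), 1 / fs)

    def peak(f):
        # largest bin within a couple of bins of the target frequency
        idx = int(np.argmin(np.abs(freqs - f)))
        return spectrum[max(idx - 2, 0): idx + 3].max()

    fundamental = peak(f0)
    harmonics = np.sqrt(sum(peak(k * f0) ** 2 for k in range(2, n_harmonics + 2)))
    return 100 * harmonics / fundamental

fs, f0 = 48000, 1000
t = np.arange(fs) / fs   # one second of samples

# Stand-in for real captures: a 1 kHz tone with a trace of 3rd harmonic.
for level in (0.01, 0.1, 1.0, 10.0):     # pretend these are watts into the load
    capture = level * (np.sin(2 * np.pi * f0 * t) + 0.001 * np.sin(2 * np.pi * 3 * f0 * t))
    print(f"{level:>6} W: THD = {thd_percent(capture, fs, f0):.3f} %")
```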

Moving on to speakers, Stereophile's tests are a start, but they're still missing far too much. For starters, all the curves are smoothed over, so you never see what's really happening; you want an unsmoothed set of curves to catch all the monkey business that speakers are fond of. The other thing they never show is the distortion curve of a speaker with respect to frequency, that is, how much 2nd, 3rd, 4th and 5th harmonic it's producing at any given frequency. Pro-sound manufacturers will have these curves for their products, though usually only for the 2nd and 3rd harmonics. It doesn't make much sense to buy an amplifier with 0.0001% THD when the speaker has 10% THD.

A lot of work also needs to be done with those cumulative spectral decay plots. Stereophile measures them from one location, and unfortunately so do many speaker manufacturers. The problem is that stored energy and resonances can often be emitted in narrow beams only a few degrees wide; if one of those beams hits the microphone, or bounces off a wall to the mic, it registers in the CSD as a ridge of energy, and otherwise it's completely missed. You can often hear this for yourself by walking around a speaker: on many designs the sound will suddenly change character several times as you move around the front hemisphere. Those are the beams of stored energy, along with the effects of edge diffraction. To spot them, a CSD plot needs to be taken every couple of degrees all around the speaker, looking for sudden transitions in the shape of the plots. Those are a bad thing: they destroy detail, muddy the soundstage, smear transients and kill the tone of instruments. In a good speaker the plot changes smoothly as one moves around it, without any sudden discontinuities.
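
Here is a bare-bones sketch of that kind of survey (Python/NumPy; the per-angle impulse-response files are hypothetical, and the decay surface is a simplified sliding-window waterfall rather than a textbook apodized CSD):

```python
import numpy as np

def decay_surface(impulse, fs, n_slices=30, window_ms=10.0):
    """Rows = successively later start times, columns = frequency bins, values in dB."""
    win_len = int(fs * window_ms / 1000)
    impulse = np.pad(impulse, (0, win_len))        # guard against short captures
    window = np.hanning(win_len)
    rows = []
    for i in range(n_slices):
        start = i * win_len // n_slices            # slide the start time forward
        segment = impulse[start:start + win_len] * window
        rows.append(20 * np.log10(np.abs(np.fft.rfft(segment)) + 1e-12))
    return np.array(rows)

fs = 48000
previous = None
for angle in range(0, 185, 5):                     # front hemisphere, every 5 degrees
    impulse = np.load(f"ir_{angle:03d}deg.npy")    # placeholder file names
    surface = decay_surface(impulse, fs)
    if previous is not None:
        # Crude discontinuity metric: how much the decay surface jumps between
        # neighbouring microphone positions; big jumps flag those narrow beams.
        jump = float(np.sqrt(np.mean((surface - previous) ** 2)))
        print(f"{angle - 5:3d} -> {angle:3d} deg: {jump:5.1f} dB rms change")
    previous = surface
```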

There's a lot more but unfortunately I don't have the time to get into it at the moment.
 
Oct 15, 2007 at 9:45 PM Post #86 of 89
You're talking about "how", while I was talking about "what". If we could just get people to describe sound using the technical terms I listed, instead of vague, flowery adjectives, we'd be achieving a lot. THEN we could move on to trying to determine the best ways to measure what people are hearing.

That said, I agree with the subjectivists that sometimes technical measurement can get into the area of Brobdingnagian discussions of how to break an egg. What really matters is what we hear.

Perhaps we need a third category... someone who believes in scientific testing, but realizes the limitations of human hearing. That's what I am. Perhaps that's a "pragmatist".

See ya
Steve
 
Oct 15, 2007 at 10:14 PM Post #87 of 89
Quote:

Originally Posted by bigshot
You're talking about "how", while I was talking about "what". If we could just get people to describe sound using the technical terms I listed, instead of vague, flowery adjectives, we'd be achieving a lot. THEN we could move on to trying to determine the best ways to measure what people are hearing.


Unfortunately many people are afraid of numbers & graphs, which is a nice way of saying they probably wouldn't understand them.

Quote:

That said, I agree with the subjectivists that sometimes technical measurement can get into the area of Brobdingnagian discussions of how to break an egg. What really matters is what we hear.


Agreed.

Quote:

Perhaps we need a third category... someone who believes in scientific testing, but realizes the limitations of human hearing. That's what I am. Perhaps that's a "pragmatist".


It also works the other way, since scientific testing has its limits too. For instance, I could have two speaker drivers which are identical save for magnet material, one being Alnico and the other ferrite. Same flux density, same efficiency, same FR, same CSD and impulse plots, effectively identical in every measurement I could think of and make. And yet they don't sound the same, for whatever reason, a mystery I'm still working on.
 
Oct 16, 2007 at 9:03 PM Post #88 of 89
Sorry about sidetracking a bit with DBT stuff. And I should have used the word "often" rather than "always" when describing how the true value of DB testing gets overlooked.

One other phenomenon I've noticed with equipment comparisons (as mentioned here in this thread to some extent) is the tendency to exaggerate differences between similar equipment, using interesting language, in order to draw some kind of conclusion. I first noticed this when reading DAC comparisons (here at Head-Fi and elsewhere): people would compare two DACs and then write long, complex descriptions of how the DACs were different. Based on some of those comparisons, one would often think there was a world of difference between, say, a Benchmark DAC1 and a Lavry DA10. However, at one point I did some testing of these specific DACs on my own and discovered that, yes, there are differences, but that they were relatively small compared with, for example, the differences between headphones. But mentioning that reduces the significance of the comparison, I suppose, so paragraphs must be written about how the two DACs in question are different from each other.

Anyway, just another thing to look out for...comparisons between two devices (especially two with a big price difference, it seems) almost never conclude with something like "and they sounded pretty much the same."
 
