Head-Fi.org › Forums › Equipment Forums › Sound Science › Proposal to ban posts that question validity of DBT from Science Forum

Proposal to ban posts that question validity of DBT from Science Forum - Page 2

post #16 of 89
Quote:
Originally Posted by Chef View Post

....Otherwise you're going to have to listen to people like mulveling every time you try to make progress.
That personal attack was completely uncalled for Chef.

Quote:
Originally Posted by terriblepaulz View Post

Banning any topic, IMO, degrades the forum and reinforces the perception that head-fi operates for the benefit of its sponsors and not its members. Nobody's time is so valuable that they need to be protected from misinformed opinions. Of course the answer would be to remove the DBT restriction from the cable forum, but that will not happen.

The problem in the sound science forum (again IMO) is assertions of fact without citation or based only on an individual's subjective experience. But that usually prompts some rhetorical beat downs, so self-policing seems to work better than a ban.
I agree with you.

USG
post #17 of 89
I like Eucariote's idea:

Quote:
Originally Posted by eucariote View Post
But it does get a little tiring having to re-explain the fundamentals every time someone new logs on, as to why one 'I hear a difference' is not enough to settle any issue. Perhaps a better idea is to have a sticky outlining the tenets of hypothesis testing (much like the headphone FAQs) so discussions don't return to first principles every fifth post.
post #18 of 89
Quote:
Originally Posted by JxK View Post
I think that true double blind testing is something that is out of the reach of most people as it is prohibitively expensive and fraught with logistical errors. Large sample size composed of "audiophiles", music lovers, laymen, etc. Multiple copies of headphones per person to account for driver variability. Established controlled conditions with only one variable...The list just goes on and on.
Finally! Someone else who gets to the core problem with DBT in respect to a small audio community. Not a single audio organisation, not even Sennheiser, would be able to recoup the cost of such a huge R&D expense. Medical researchers can because a majority will pay for their lives, not for extra "air" between instruments (if research does indeed confirm such a phenomenon).
post #19 of 89
You mistake ABX tests for statistical studies.
post #20 of 89
Quote:
Originally Posted by SP Wild View Post
Finally! Someone else who gets to the core problem with DBT in respect to a small audio community. Not a single audio organisation, not even Sennheiser, would be able to recoup the cost of such a huge R&D expense. Medical researchers can because a majority will pay for their lives, not for extra "air" between instruments (if research does indeed confirm such a phenomenon).
Harman (JBL) do extensive double blind testing on their speakers and have published several interesting reports from their research, some of Sean Olive's stuff (such as just how bad audio reviewers are compared to audio retailers) is really fascinating.
post #21 of 89
Sorry, I kinda meant DBT's with respect to fringe subjects such as cables.

Quote:
Originally Posted by Pio2001 View Post
You mistake ABX tests for statistical studies.
Which I believe is kinda necessary for such a polarising subject as cables, where I'd imagine head-fi'ers swing about 50/50 either way.
post #22 of 89
Quote:
Originally Posted by SP Wild View Post
Not a single audio organisation - not even Sennheiser will be able to recoup the cost of such a huge R&D expense.
Phuie! I bet the membership here could pull it off ...


I think one of the problems with ABX testing audio is that there is so much to listen for; a simple "that sounds better/worse" is somewhat meaningless.

Simple ABX tests, done by single individuals who sum up the multitude of characteristics of an audio signal and grade it on a single axis of "better/worse", may be the cause of the problem with ABX testing in audio.

I suggest that a system of subjective testing that acts to evaluate audio on many characteristics using audio material that highlights those characteristics would be more productive.


To the OP's topic, I think the diversity of opinion and education makes constructive dialog slow ... but more sure, over time.

Though I would add that better educated participants make for more productive dialog.
post #23 of 89
Quote:
Originally Posted by Shark_Jump View Post

Additionally claims of improvements gained by a piece of audio equipment should be discouraged without at least an intention to do a DBT or ABX.
+1! One guy was almost offended (?) when I asked whether he had done at least some kind of blind testing in order to verify his conclusion that different USB cables make a BIG (quote) difference in SQ. If we assume that head-fi is not some kind of kindergarten, then the OP's suggestion is very timely and welcome (-:
post #24 of 89
Quote:
Originally Posted by Tyll Hertsens View Post
To the OP's topic, I think the diversity of opinion and education makes constructive dialog slow ... but more sure, over time.

Though I would add that better educated participants make for more productive dialog.
Well put. That's what I was trying to say in opposing a ban. Keeping the discussion open results in the loss of time. Bans result in the loss of (potentially valuable) ideas.
post #25 of 89
Quote:
But you just contradicted yourself: ABX tests tell you what they tell you, but AB tests tell you nothing? (I'm inferring this by your suggestion that they aren't worth anything.)
Experimenter expectancy is so fatal to the reliability of data that yes, there is really nothing to be gained from a sighted test. Yeah, if you do a blind test where you stab someone with a knife, and one where you poke them with a bat, you're probably going to get the same result, but that doesn't prove anything. There have been so many studies on the effect of experimenter expectancy, and so many scandals from before it was widely observed, that it's pretty ludicrous anyone would trust the results of an AB test they didn't already think they knew the answer to. For example (HARHARHAR) take two knives and tell a person one has been sharpened and is brand new, while the other is old and hasn't been changed in a long time. The knives are actually the same. Do you think there'd be a disproportionate number of people saying the supposedly sharp one was actually better? That's the problem with testing audio equipment you can see. You're influenced by its aesthetic appeal, what you've read about its brand, and how expensive it is. Tests like that are not legitimate, and I wouldn't spend money depending on their results. It's as good as not doing the AB test in the first place.

An ABX test is valuable because, if the results come back positive, we know something for sure: there's a detectable difference. If an ABX test comes back negative it doesn't conclusively prove that there's no difference to be heard, and it also doesn't prove that my ears won't be better than whoever performed the ABX test, but at least it tells me what we don't know. An AB test tells you neither, because it can't distinguish between a placebo effect and a real effect, as the variables coexist. You have to isolate variables to know exactly what is affecting what. An AB test is only useful if you're trying to prove something about the person, and you already know about the objects they're testing.
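As a rough illustration of why a positive ABX result carries weight: the chance of a listener scoring well by pure guessing can be computed directly from the binomial distribution. A minimal sketch in Python (the 16-trial/12-correct figures are illustrative numbers of my own, not from this thread):

```python
from math import comb

def abx_p_value(correct, trials, p_chance=0.5):
    """One-sided binomial p-value: probability of getting at least
    `correct` answers out of `trials` by guessing alone."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# A listener who scores 12 of 16 ABX trials:
p = abx_p_value(12, 16)
print(round(p, 4))  # 0.0384 -- below the conventional 0.05 criterion
```

So 12/16 is unlikely (though not impossible) to happen by chance, which is exactly the limited but real "we know something for sure" the poster describes; a lower score simply leaves the question open.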
post #26 of 89
Quote:
Originally Posted by nick_charles View Post
Harman (JBL) do extensive double blind testing on their speakers and have published several interesting reports from their research, some of Sean Olive's stuff (such as just how bad audio reviewers are compared to audio retailers) is really fascinating.
I second this, Sean Olive's blog is very instructive and shows in detail how to do proper Design of Experiments and eliminate potential sources of bias. They use special acoustically transparent but optically opaque screens to prevent the sight of the speakers from influencing results, and developed a motorized speaker rig to rotate speakers while keeping the same placement.

Perhaps Tyll will give us something like that in his new venture.
post #27 of 89
The scientific method works great when you can boil your model down to relatively few, simple variables. Then it's fairly straightforward to conduct an experiment and make the leap from hypothesis to conclusion. You can make a prediction, based on weather models, of what the weather is LIKELY to be on Sunday, but you can't PROVE it one way or the other (except by waiting until Sunday). It's far too complex for that. That's why it's not knowledge - it's a prediction.

The problem with any hypothesis involving the human perception of sound - and this most certainly includes the whole class of "can anyone hear any difference between X and Y model of component/cable" - is that the human sensory system is impossibly complex and unreasonable to model. Am I saying I think that people can hear a difference between say power cord sheathing? NO, but on the other hand ABX testing is an incredibly weak tool for proving that they can't. If all the tests for hearing a difference come out negative, then absolutely it is much more LIKELY that nobody can hear a difference. That is not PROOF.

The human brain is the most advanced general purpose pattern matcher out there. Most importantly, it can interpolate, extrapolate and FILL IN GAPS in datasets based on a vast body of prior experience - both recent and long in the past. The brain inherently WANTS to match patterns, and it has some incredibly advanced techniques for doing so - probably many we're not even aware of yet. What could that POSSIBLY have an effect on? Perhaps ABX testing of extremely similar signals sampled in relatively short intervals. So it's my proposal that the seemingly simple logical leap from "nobody could hear a difference in our ABX tests" to "therefore nobody can hear a difference in what we were testing, under any circumstances" is NOT necessarily a valid one. Especially for cases that are testing differences near the true thresholds of the human sensory system. You have the monster variable of the human brain potentially masking minute differences based on context. At least, the usual ABX formula of rapid switching on short samples over a short period of time should be re-examined.

THAT is why I feel an ABX test, especially of the garden variety most often described, generally lacks rigor as a tool for disproving that humans can hear a difference in ___. Certainly, it is NOT something to mandate a belief in! That is ridiculous!
post #28 of 89
^
Proofs exist in mathematics, not empirical science. No credible scientist would ever discount the possibility of new data that contradicts old findings. However, the method for accepting a new factual hypothesis is always the same: assume the null hypothesis until data shows otherwise to a commonly accepted criterion (p < .05). If scientific hypotheses were tested in the reverse direction, we could assume that we are all the holy creation of the Flying Spaghetti Monster (or anything else) until proven otherwise.

Here is a refresher on the scientific method from another post. This is the technique that would be used to test, for example, the claim: "I can hear the difference between cable x and y", just as it would be used to test "drug x reduces the mortality rate of y". Yes, it can even be done with, in, or on perception. Lather, rinse and repeat as necessary.
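To make the "assume the null until p < .05" rule concrete, here is a small simulation of my own (the 16-trial session and 12-correct cutoff are assumed illustrative values, not from the quoted post): under the null hypothesis that a listener is purely guessing, only a few percent of ABX sessions clear the threshold by luck.

```python
import random

def abx_session(trials=16):
    """One ABX session for a listener with no real ability (pure guessing)."""
    return sum(random.random() < 0.5 for _ in range(trials))

random.seed(0)
sessions = 10_000
# How often does pure guessing reach 12/16, a conventional p < .05 cutoff?
false_pass_rate = sum(abx_session() >= 12 for _ in range(sessions)) / sessions
print(false_pass_rate)  # close to the exact binomial tail of about 0.038
```

That residual ~4% false-pass rate is why a single passed session is evidence, not proof, and why repeating the test (lather, rinse, repeat) tightens the conclusion.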
post #29 of 89
Quote:
Originally Posted by mulveling View Post

The problem with any hypothesis involving the human perception of sound - and this most certainly includes the whole class of "can anyone hear any difference between X and Y model of component/cable" - is that the human sensory system is impossibly complex and unreasonable to model. Am I saying I think that people can hear a difference between say power cord sheathing? NO, but on the other hand ABX testing is an incredibly weak tool for proving that they can't. If all the tests for hearing a difference come out negative, then absolutely it is much more LIKELY that nobody can hear a difference. That is not PROOF.

The human brain is the most advanced general purpose pattern matcher out there. Most importantly, it can interpolate, extrapolate and FILL IN GAPS in datasets based on a vast body of prior experience - both recent and long in the past. The brain inherently WANTS to match patterns, and it has some incredibly advanced techniques for doing so - probably many we're not even aware of yet. What could that POSSIBLY have an effect on? Perhaps ABX testing of extremely similar signals sampled in relatively short intervals. So it's my proposal that the seemingly simple logical leap from "nobody could hear a difference in our ABX tests" to "therefore nobody can hear a difference in what we were testing, under any circumstances" is NOT necessarily a valid one. Especially for cases that are testing differences near the true thresholds of the human sensory system. You have the monster variable of the human brain potentially masking minute differences based on context. At least, the usual ABX formula of rapid switching on short samples over a short period of time should be re-examined.

THAT is why I feel an ABX test, especially of the garden variety most often described, generally lacks rigor as a tool for disproving that humans can hear a difference in ___. Certainly, it is NOT something to mandate a belief in! That is ridiculous!
I like this stuff ^ !

Ditch the testing, I say. If a difference is so minute then it's not worth worrying about; just enjoy the music.
post #30 of 89
I couldn't care less about DBT. But if it floats someone's boat, good for them. I don't care enough to want to ban it. I will just go back to listening to music.