royalcrown
500+ Head-Fier
Joined: Sep 27, 2006
Posts: 714
Likes: 11
Quote:
Originally Posted by PhilS
Maybe you could repeat your reply to my previous post with quick-switching, as I don't see why the putting analogy isn't pretty good. I think you're focusing on the trees rather than the forest when you say that we know how people react to bullets. (I'm not being critical; I'm just trying to explain my difficulties with the argument.) To me, if a believer says quick-switching is something that can prevent people from hearing differences in ABX tests, I don't see why it isn't reasonable to accept a positive result and reject a negative result when quick-switching is utilized. The positive result may mean (1) the differences were so obvious that the quick-switching was not enough of a hindrance to overcome them, or (2) quick-switching is not a hindrance at all. OTOH, if the result is negative, we don't know that quick-switching is not a hindrance. Maybe that is what caused the negative result.
The reason the bullet analogy doesn't convince me is that the two situations rest on different assumptions. For one, putting is purely a matter of skill, whereas here the question is whether a difference can be detected in the first place. With putting, the only variable we're trying to isolate is the skill of the subject; with audio components, we're trying to isolate both the skill of the subject and the equipment. That aside, the big problems I have with the assumptions are as follows:
1) With bullets flying over someone's head, we a) know that the bullets are there and b) know that the bullets have effects that can foil the test. With quick-switching, we only know that quick-switching is employed; we don't know whether it has any effect that can foil the test. It's undeniable that loud noises and the fear of death will affect just about any test, let alone one concerning golf putting; it's common knowledge that those test conditions are awful. However, it's not common knowledge that quick-switching is inherently bad. In fact, we have empirical evidence that quick-switching increases the chances of success in other areas of audio testing (e.g., codec testing).
2) If listeners were passing ABX tests, nobody would argue that quick-switching mucks them up; the only reason quick-switching (or any other aspect of the testing) is even suspected of making a difference is that ABX tests don't reveal differences between components. But suspecting a factor merely because of a trend in test results is not enough; there needs to be something more to give the argument credence.
Of course, this isn't limited to believers. I suspect that if ABX tests did produce positive cable results, more skeptics would argue the tests were flawed than would admit it (not so much in the Sound Science section as in other subforums and on other forums altogether), and they would probably make similar arguments (that some factor or other caused faulty results). Nevertheless, if all the evidence we have that quick-switching ruins tests is the fact that ABX tests aren't producing positive results, that's not enough proof. It's a possibility, but we don't usually take a possibility seriously unless we have reasonable cause to; I don't, and I don't think most people do or should, consider every conceivable possibility and give them all equal weight unless there's sufficient and equal probable cause for each one.
The reason I think biases are at play here is that I've seen empirical evidence of placebo/bias effects in many tests: in a sham swap, the testers pretend to switch a cable out while actually leaving the setup unchanged, and subjects still believe they hear differences between the cables. So there must be some bias influence at work in testing, but I haven't seen compelling evidence of other factors having a comparably large impact.
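The sham-swap evidence described above can be sketched as a toy simulation. Everything here is a hypothetical assumption for illustration (the bias rate and subject count are invented, not figures from any actual test): the "cable swap" never happens, yet a biased listener still reports a difference with some probability.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# Hypothetical parameters, not measured data:
REPORT_DIFFERENCE_RATE = 0.7  # assumed chance a biased subject reports a difference
N_SUBJECTS = 100

# In a sham swap the setup is never actually changed, so any "heard"
# difference can only come from bias/placebo.
reports = [random.random() < REPORT_DIFFERENCE_RATE for _ in range(N_SUBJECTS)]
n_heard_difference = sum(reports)

print(f"{n_heard_difference}/{N_SUBJECTS} subjects reported a difference "
      f"despite an unchanged setup")
```

The point of the sketch is only that a nonzero report rate with zero physical change is direct evidence of bias, which is exactly the kind of positive evidence the post argues is missing for the quick-switching objection.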
Quote:
Originally Posted by mike1127
You say it pretty well here, just as you have all throughout this thread. I think maybe Royalcrown's point is that SOME believers would base their entire argument on the success or failure of a test. In other words, this strawman "believer" being criticized by Royalcrown thinks like this:
- Someone didn't pass an ABX test.
- Therefore, ABX tests are flawed.
- Now let me invent some reasons why.
There may in fact be a few people who think like that, but certainly none of them have participated in this sound science forum during the time I've been here.
That's not what I'm saying at all. If you want me to translate it into "stat-speak": I don't think there's reasonable cause to assume that p (the probability of a correct identification) is greater than 0.5, for all of the reasons listed above. I try to refrain from that kind of usage because it's imprecise and impedes clarity, but if you insist. I was referring to your original post on quick-switching and imagination contamination, before you started referring to statistics. I don't know how many more times I have to say this: the thread is not about statistics.
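For readers unfamiliar with the "stat-speak" being declined here: in an ABX test the null hypothesis is that the listener is guessing, i.e. p = 0.5, and a score is judged by the binomial probability of doing at least that well by chance. A minimal sketch (the 12-of-16 criterion is a common convention in ABX testing, not something specified in this thread):

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more successes in n independent Bernoulli(p) trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under pure guessing (p = 0.5), scoring 12 or more out of 16 ABX trials
# happens less than 4% of the time, which is why 12/16 is often treated
# as a significant (positive) result:
print(round(p_at_least(12, 16), 4))  # prints 0.0384
```

Note the asymmetry this makes explicit: a score near 8/16 is consistent both with p = 0.5 and with p only slightly above 0.5, which is why a negative result alone cannot settle whether quick-switching is a hindrance.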