mike1127
Member of the Trade: Brilliant Zen Audio
Joined: Oct 16, 2005 · Posts: 1,114 · Likes: 25
Quote:
Originally Posted by wavoman
This is turning into a SmellyGas love fest, but the following, already quoted by mike and others, is just perfect: ... For testing cables I have a solution. Blind testing at home, relaxed, with no partner.
That would be amazing. I appreciate your effort.
One question. Because listeners differ in discriminatory ability, wouldn't you have to test each individual many times? Say 75% of the listeners either lack discriminatory ability or self-choose a poor method of evaluating the differences. Wouldn't that "drag down" the power of the test and overwhelm the 25% who can tell a difference?
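The dilution worry is easy to put numbers on. Here's a quick back-of-the-envelope sketch (all figures are hypothetical assumptions, not anything from the proposed test: 16 blind trials per listener, a pass mark of 12 correct, skilled listeners right 90% of the time per trial, everyone else guessing at 50%):

```python
import math

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k correct in n trials."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

TRIALS = 16       # blind trials per listener
THRESHOLD = 12    # correct answers needed to "pass" (p < 0.05 under pure guessing)
P_GUESS = 0.5     # per-trial accuracy of a guesser
P_SKILLED = 0.9   # assumed per-trial accuracy of a genuinely discriminating listener

pass_skilled = binom_tail(TRIALS, THRESHOLD, P_SKILLED)
pass_guesser = binom_tail(TRIALS, THRESHOLD, P_GUESS)

# Pooled population: 25% skilled, 75% guessing (or using a poor method)
pooled = 0.25 * pass_skilled + 0.75 * pass_guesser
print(f"skilled listener passes: {pass_skilled:.3f}")
print(f"guesser passes:          {pass_guesser:.3f}")
print(f"pooled pass rate:        {pooled:.3f}")
```

Under these made-up numbers a skilled listener passes almost every time, yet the pooled pass rate across the whole group sits below 30%, which looks like weak evidence if results are averaged over everyone. That's the argument for testing individuals many times and reporting per-listener results rather than a group aggregate.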
By the way, I am more concerned with people self-choosing a poor method of evaluating the differences than with a lack of discriminatory ability. My hypothesis is that cable differences are in fact real, and that people who try different cables and live with them long-term have an unconscious way of settling on what they like. But I wince when I see them trying to compare cables consciously. They seem to have no ability to control the conditions of listening. Not their fault... it is damn difficult to do. I think I may have found a way to do it, in my current protocol. Of course, only for me. Other people may have to find their own way.
I see this as probably the most difficult problem in testing subtle differences. How does the listener control their attention? How does the tester give instructions so the listener uses their attention in a consistent way? The best way of doing so might be different for every listener.
EDIT: One way of running this test might be to mail one cable at a time and have the participant rate it relative to their usual setup. So they could answer: (1) Much better than my usual setup (2) Slightly better (3) About the same (4) Slightly worse (5) Much worse. This has many problems, but just throwing it out there as a suggestion.
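One way those 1-5 ratings could be checked for bias, just as a sketch: include control rounds where the mailed cable is secretly identical to the listener's usual one, then ask whether the ratings drift away from "about the same." A simple sign test does this with nothing but the standard library (the control-round idea and the example data here are my own hypothetical additions, not part of the protocol above):

```python
import math

def sign_test(ratings, midpoint=3):
    """Two-sided sign test: do ratings systematically deviate from 'about the same'?"""
    above = sum(1 for r in ratings if r > midpoint)
    below = sum(1 for r in ratings if r < midpoint)
    n = above + below              # ratings exactly at the midpoint are dropped
    if n == 0:
        return 1.0
    k = max(above, below)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n
    return min(1.0, 2 * tail)

# Hypothetical control-round data: ratings given to a cable that was
# secretly the listener's usual one. A small p-value would mean the
# rating method itself is biased, before any real comparison is trusted.
control_ratings = [3, 4, 3, 2, 3, 3, 4, 3]
print(f"p = {sign_test(control_ratings):.3f}")
```

If the control rounds come back unbiased, the same test applied to the real cable's ratings gives each listener an individual verdict, which fits the point above about per-listener rather than pooled evaluation.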