
Testing audiophile claims and myths

Discussion in 'Sound Science' started by prog rock man, May 3, 2010.
  1. Phronesis
    I agree with all of that. The question, though, is how to relate (a) our ability to detect differences in short segments to (b) differences experienced in longer-term normal listening, where conscious and subconscious perception may operate differently than they do in short-term testing focused on consciously detecting differences. I do suspect that null results in short-term testing indicate insignificant differences for long-term listening, but I'm not sure, and I'd like to see some solid evidence of the connection.

    For example, maybe a small difference in the short term (less or more bass, less or more of some type of distortion, etc.) isn't consistently consciously detectable or seems insignificant, but in the long term it may be significantly more pleasant or annoying, and the difference may be perceived mainly subconsciously without being able to consciously point out the difference. I'm sure many of us have had the experience of there being a slight hum or high-frequency whine in our sound system or environment, and we fail to notice it because it's constant, but we notice when it suddenly goes away, and we realize it was subconsciously bothering us all along.
    Last edited: Dec 18, 2018
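The worry about null results can be made concrete with a quick power calculation. Below is a minimal sketch (function names and numbers are illustrative, not from the thread) showing that even a listener who genuinely hears a difference on 70% of trials will often fail a short ABX run:

```python
from math import comb

def tail(trials: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, j) * p**j * (1 - p)**(trials - j)
               for j in range(k, trials + 1))

def power(trials: int, true_p: float, alpha: float = 0.05) -> float:
    """Chance that a listener with per-trial hit rate `true_p` scores
    well enough to be significant at level `alpha` under a guessing null."""
    # Smallest score that would be called significant if the listener
    # were merely guessing (p = 0.5 per trial).
    threshold = next(k for k in range(trials + 1)
                     if tail(trials, k, 0.5) <= alpha)
    return tail(trials, threshold, true_p)

# A listener who truly hears the difference 70% of the time "passes"
# a 16-trial ABX run only about 45% of the time.
for n in (16, 25, 40, 60):
    print(n, round(power(n, 0.7), 2))
```

So a single short null result is weak evidence of inaudibility; longer runs (or pooled runs) are needed before concluding much either way.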
  2. KeithEmo
    I'm inclined to agree with you.

    However, many listeners insist that they notice things over long periods of time, like "product A sounds more fatiguing than product B". I personally suspect that this is just a matter of bias. However, because we each perceive things differently, I'm not prepared to claim that I know for a fact that they're imagining it without some thorough testing. Perhaps their minds just process things somewhat differently than mine.

  3. Phronesis
    Yes, see my post above. Without evidence, I'm not quite ready to conclude that a difference not detected or found to be very small in the short term necessarily means that it's also insignificant in the longer term. We need to look at both magnitude of differences and the time over which those differences are experienced to understand the effects of the differences on listeners. And in this regard, the nature of the difference may matter a lot also - there are different types of differences, and type of difference may make a difference!
    analogsurviver likes this.
  4. analogsurviver
    I concur that differences usually too small to be perceived in a quick ABX can be a determining factor in the end. If I can't stand something while washing dishes or doing similar chores - and something else, under the same conditions, pleases me - WHAT do you think I will choose, even if the short-term DBT ABX revealed... nothing?
  5. Zapp_Fan
    This is a little late, but the LAME development community used to conduct a buttload of ABX tests - just about every new encoder preset was subjected to ABX testing of fair rigor, I believe. So if you want to see how this can be done over time, that would be a good example. Of course, that sort of testing is much easier to conduct than hardware testing, but the principles involved are the same.
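For reference, an ABX run is usually scored against the chance of guessing: with two equally likely answers per trial, the one-sided binomial tail gives the p-value. A minimal sketch (the 12-of-16 criterion is a common convention, not something specified in the thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of getting at least
    `correct` of `trials` ABX trials right by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A common passing criterion: 12 or more correct out of 16 trials.
print(f"{abx_p_value(12, 16):.4f}")  # 0.0384, below the usual 0.05 cutoff
```

Note that 11 of 16 already fails this criterion (p ≈ 0.11), which is why codec listening tests typically pool many trials and many listeners.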

    Also @Phronesis this is a little OT but what did you think of the new Jacob Collier album? I found it a bit soundtrack-y compared to previous stuff... going to see him live in March though.
  6. Phronesis
    The first time I heard the new album, I wasn't sure I liked it overall, but I liked some things and was intrigued. I've now listened to the album many times and I love it. IMO, the kid is a musical genius of a rare kind. I currently have the album cover as my avatar to pay tribute! Really looking forward to the upcoming three volumes, and what he does beyond that.
  7. Zapp_Fan
    Agree about the "musical genius" thing, even if I don't always love what he does with his talent, it's undeniable. I had the same feeling where I wasn't quite sure how I felt about the music at first, but he's a phenomenon either way. I'll give Djesse a few more listens... :)
  8. Phronesis
    I actually like the new album a lot more than his first one. For me, the first one is an appetizer which shows his potential, whereas the new album is like a full meal which unfolds like an epic journey. The stylistic twists and turns can be jarring at first, but once I sort of know they're coming, I can go with his flow and really like it.
  9. bigshot
    If you want to know the best way to conduct tests, doing some tests yourself will teach you that fast. Or just ask someone who does them. They’ll tell you. But asking for test results on how to conduct tests is crossing the line into absurdity.
    sonitus mirus likes this.
  10. Phronesis
    Not sure I agree. At this point, I've done a fair bit of testing, but I don't assume that my tests didn't have flaws I'm not aware of, nor that I'm interpreting the test results properly.

    In any case, I myself am not really looking for advice on testing, but rather info on tests which have already been done by others, which considered the kinds of variables I mentioned.
  11. KeithEmo
    Errrrrr....... not really.
    Almost every five year old has determined, after extensive testing, that wet mud makes excellent pies.
    Many have years of research, and stunning results, to back up their claims.
    However, as adults, we find their results... suspect.
    (They probably based their conclusions on how their pies look... whereas adults are more concerned with taste, safety, and nutritional value).

    One problem is that so many people think they're doing it right, or think they're being thorough, but they really aren't.
    Another problem is that so many people don't read the details - or manage not to absorb them.
    (And, in all fairness, it's possible that some of the details that appear to be missing aren't missing, but just weren't included in that magazine article.)

    Oddly enough, there are whole college courses in designing, performing, and documenting tests PROPERLY.
    And whole textbooks dedicated to the subject.
    Perhaps it isn't actually as simple as some people seem to think.

    If you want to test whether a difference will be audible with your system then testing it with your system is fine.
    However, if you want to produce results that are valid in the general case, that isn't good enough.
    (You will need to document why you are certain that some other system won't reveal differences that yours fails to.)

    And, no, it doesn't have to be all that difficult or expensive.
    For example, suppose you and five buddies did a great double-blind test of DACs, using five different headphones.
    Then say so in your conclusions.....
    List the DACS, and the headphones, and the source material you used.
    But be sure to mention that, since all the test subjects were between 40 and 50 years of age, you cannot rule out the possibility that younger listeners might hear something you didn't.
    And also be sure to mention that you used planar and dynamic headphones, so you cannot rule out the possibility that differences might be audible with electrostatic headphones either.
    And be sure to provide details of the test samples you used so, if someone else wants to duplicate your results, they can go out and purchase the same exact ones.
    (And, if we're talking about ultrasonics, provide some spectrum plots showing that the samples contain ultrasonics, and that your headphones are capable of playing them.)
    And then, after being thorough, and documenting it all carefully....
    Don't go out on a limb by claiming that people should generalize your results to ALL DACs, and ALL headphones, and ALL listeners.

    And, yes, I went to college for this sort of thing.
    And, yes, I did product comparisons for a living for several years (commercial computer products - mostly big network, communications, and security gear).
    And, yes, it included devising, performing, and documenting tests - both for publication and internal use by our customers.
    And that included justifying both that our results were valid and that they were actually demonstrating the differences they were designed to test for.
    And, yes, we were always extremely careful to include the limitations of our tests, rather than attempt to claim that they were true "everywhere, for everyone, forever".
    (We were paid by manufacturers to analyze how their products performed compared to their competitors and make suggestions for making their products more competitive.)

  12. KeithEmo
    In fact I think many of the tests listed at the beginning of the thread provide lots of useful information.
    However, many of them really need to be considered in context.
    For example, many tests have shown that FOR MOST PEOPLE, MOST OF THE TIME, CDs are audibly perfect.
    So perhaps now it's time to concentrate on the exceptions... and either rule each out or list it for further study.
    If CDs CAN sound near perfect - then why do they fail to do so so much of the time?
    And where is the exact gap between "near perfect" and "perfect" and can we narrow it?
    And can we perhaps establish a list of priorities about what needs to be fixed?

  13. Phronesis
    I agree.

    Design and interpretation of tests is a skill (and even somewhat of an art). Experts will generally have that skill to a much higher degree than amateurs and hobbyists, and the tests listed in the first post in this thread generally don't seem to have been done by experts.

    The other point to add is that tests have to be designed and interpreted in the context of a theoretical framework. If the framework has problems or important omissions, that can create problems in the design and interpretation of tests. If we're doing listening tests, we need to recognize our assumptions regarding how perception and memory are working when doing the tests.
  14. bigshot
    Tests are useful. Everybody should do them to understand for themselves how things work. They don't have to be perfect. They don't have to be conducted by PhDs. All that is required is a desire to know for yourself. If you don't do any tests yourself and you depend on authorities or your gut feelings or what large groups of people say or the size of the price tag, then you probably don't want to know for yourself.

    Testing isn't a tool for self validation, and it isn't a tool to prove someone is wrong. It's just a way to find out something. Once you find it out, that leads to more questions and more tests. The more you do that, the more you know. The best way to defend an incorrect conclusion is to refuse to do tests yourself and nitpick everyone else's tests, saying that they aren't good enough for you. If you can keep that up long enough, you can continue to believe a lie forever. Armchair quarterbacks don't know jack diddly. They just think they do.

    I am looking for someone who has access to a DAC that sounds clearly different through line out under reasonably careful comparison. If anyone has access to something like that and they would like to join in a test to determine if there is a difference and how it measures, let me know. There are a couple of other people who have PMed me who are also interested in participating. I've been asking for this for over a year now. It's interesting that so far no one has been able to help us with this. I think that might show that if a different sounding DAC exists, it must be pretty rare.
    Last edited: Dec 18, 2018
  15. Phronesis
    Fully agreed that people should do some testing themselves. Even if the testing isn't rigorous, it can still be very educational, and there's no better way to see the effects of expectation bias than to do some controlled testing and experience previously clear differences suddenly vanish, like discovering that a mirage was only a mirage. People will continue to 'trust their ears' until they find out for themselves, firsthand, that their ears aren't so trustworthy after all.

    That said, I'd still like to see some rigorous testing done which meets scientific standards, shows the effects of various variables, and links short term to long term. My interest isn't for any practical reasons (my existing headgear sounds plenty good to me, and I doubt there's much room for improvement), but more just for scientific curiosity.
