Testing audiophile claims and myths

Discussion in 'Sound Science' started by prog rock man, May 3, 2010.
  1. old tech
    It's been many years since I've read Stereophile, but as you say, yes, they did provide their own measurements (not sure if they still do); more often than not, though, there was a disconnect between those measurements and their reviews.
  2. Phronesis
    It's disappointing that good test data on this isn't generally available to the public. In any other area of science that interests me, I can quickly find lots of peer-reviewed journal papers, books by specialists, etc. in order to get up to speed on the topic and see where the state of the art is. There are multiple journals focused on music perception, yet sound perception related to audio gear is a neglected area of published academic research. Most of these questions could be put to bed if the needed data were publicly accessible.

    Maybe part of the problem is that, on the academic side, audio engineering isn't typically a university department like other engineering disciplines, and it's not specifically a scientific area either (as engineering generally isn't). And on the practice side, in the US there's no professional engineering license available for an audio engineer: https://ncees.org/engineering/pe/. In my area of engineering, it's the norm for engineers with the prerequisite experience to get a PE license, and all of the eligible engineers in my firm have it.
  3. KeithEmo
    And it's no great tragedy to accept those limitations.

    You cannot say that "nobody ever gets hit by lightning" (if you did you would be wrong).
    But you can say that "it's extremely unlikely that YOU will be hit by lightning".

    And most of us wouldn't pay extra for "special lightning insurance".

    Last edited: Dec 20, 2018
  4. KeithEmo
    As with any magazine - Stereophile thrives on debate and discussion....

    If everyone agreed on what they said there would be nothing to talk about... or write about.
    So, rather than take a side, they've chosen the somewhat interesting option of presenting all sides.

  5. KeithEmo
    There is some truth to that claim....

    However, surprisingly, as long as common sense wasn't especially uncommon...

    Most people managed to enjoy the buzz while they were children...
    And still managed to escape the trap, all on their own, long before reaching adulthood...

  6. KeithEmo
    I stand corrected - about the equipment and sample list in the Meyer and Moran study.
    The PDF reprints I acquired were only the study itself and didn't include the addendum.

    I would note, however,

    1) They described their primary system as: "has a wide frequency range, good definition and detail, and a stereo image with both specificity and depth. Pink noise measured with temporal averaging was very flat broadband." They also persisted in using descriptive terms like "large and capable monitors" when describing the other systems. However, they failed to provide actual measurements, so we have no idea what "wide frequency range" or "very flat" actually mean. In particular, since one clearly measurable result of inserting a "CD loop" in the signal chain would be to eliminate all frequencies above 22 kHz, they should have confirmed that the speakers and other equipment they used were in fact capable of reproducing those frequencies when they were present. And, no, "likely to" has no technical meaning.

    2) They also noted that: "The vast majority of productions have a minimum noise level that swamps the residual noise in the CD link, and no differences in the quality of that noise, or of reverberant tails, could be heard." This is an obvious assertion that the test samples they used had higher noise floors than the CD loop they were attempting to test. They basically stated that, as far as noise was concerned, they were essentially testing for audible differences made to the noise present in the samples, since that noise would have swamped any residual noise contributed by the CD loop. (I suspect that some modern digital recordings do in fact have a noise floor below that of a CD.)

    3) I will also repeat the same concern expressed by several other critics of the test. Many of their samples were "audiophile SACDs" or "audiophile recordings". However, they failed to document whether they actually contained any ultrasonic content or not. Since they DID include a full list in the addendum.... I wonder if anyone has actually analyzed those specific discs to determine whether any of them did in fact contain any spectral content that extended into ranges where "reducing it to CD quality" would have altered it.

    4) I would finally note - yet again - that AT MOST they could reasonably conclude a statistical result that applied to a specific sample group, specific test systems, and specific sample content. However, you CANNOT LOGICALLY prove the nonexistence of something using any amount of statistical data. (You can, however, INFER that it may be UNLIKELY.)

    5) And, since you insist on being pedantic, I will now be equally so. If they were attempting to show that no human being could hear a difference - as a general case - then they have automatically failed. It is simply a matter of definition. Such a test is simply impossible to perform. Since they didn't test every human being, or every sample, or every music system, they cannot make a valid absolute claim that extends to all of them. (This is no big surprise... proving a general negative is usually impossible.) Therefore, assuming that they were attempting to produce a valid result, and had a basic understanding of logic and testing, and had chosen to use a statistical method, at most they must have been trying to find out "whether a statistically significant number of people could hear a difference a statistically significant amount of the time". (And, yes, that is further limited to "in their test sample".)

    6) We do, however, seem to be in final agreement. What they offered was "reasonably compelling evidence"... but NOT "absolute proof". (And the term "reasonable" defines a matter of opinion on the part of the person assessing it.) And this is ALL I've been saying all along.
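The noise-floor comparison in point 2 can be made concrete with a quick back-of-the-envelope check. In the sketch below, the -70 dBFS figure for the combined mic/preamp/venue noise is an illustrative assumption, not a number from the study:

```python
bits = 16
# Theoretical noise floor of 16-bit PCM relative to a full-scale sine:
# roughly 6.02 dB per bit plus 1.76 dB.
cd_noise_dbfs = -(6.02 * bits + 1.76)        # ~ -98 dBFS

# Assumed combined noise of mics, preamps and recording venue
# (an illustrative figure, not a measurement from the study):
chain_noise_dbfs = -70.0

# How far the recording's own noise sits above the CD link's noise floor:
margin_db = chain_noise_dbfs - cd_noise_dbfs  # ~ 28 dB
amplitude_ratio = 10 ** (margin_db / 20)      # ~ 25x in amplitude

print(round(cd_noise_dbfs, 2), round(amplitude_ratio, 1))
```

On these assumed numbers the analog chain's noise sits tens of dB above the CD link's quantization noise, which is the sense in which one "swamps" the other.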

    Last edited: Dec 20, 2018
  7. KeithEmo
    That's still the way they do it.

    In full equipment reviews, they provide both a subjective review, and measurements, and often even comments about whether the measurements tend to support or conflict with the subjective assessment. Their measurements are usually quite thorough and complete, and it is up to the reader to decide how much credibility to assign to the subjective opinions of a specific reviewer.

    (And, yes, the subjective review often disagrees with the conclusions of the "technical review". They basically invite the reader to pick a side and express no overriding editorial opinion about which is "right". And, yes, they do make their living from advertising revenue. And, yes, as with any modern media publication, their main goal in order to attract readers is to "be interesting".)

  8. bigshot
    Damn! Those pesky numbers again! Why won't they validate my subjective impression?! I'm just going to ignore them and go over to sound science and sperg up threads with lengthy and repeated posts claiming measurements aren't enough and blind testing is fatally flawed!
  9. sonitus mirus
    I recall reading comments from David Moran (co-author of "Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback") in some discussion where he went on to state that they were looking to test what was then considered to be obvious improvements with specific SA-CDs, and that these were selected for their testing. In the end, many of these were found to be technically no different at all, yet some people were lauding them as superior. I think this strengthens the case that hi-res can and should be insignificant.

    I can't find the exact reference, as it was long ago. Here is a short discussion about some of the details from the testing.

  10. KeithEmo
    I suspect you're quite right....
    They were essentially "looking for statistical proof that most people wouldn't notice any difference".
    And, if that was their goal, then they did a pretty good job of it.
    They made a reasonable attempt to statistically detect a difference - and failed.
    From that, it's reasonable to INFER that it is unlikely that a significant difference exists.
    (Although I still insist that they failed to prove that their speakers and other equipment was really capable of reproducing the claimed differences if they did exist.)

    However, I do dislike your specific wording...
    I see nothing to suggest that: "hi-res SHOULD be insignificant".
    I would suggest that it's more accurate to say that: "hi-res can be and often IS insignificant".
    I would certainly agree that I've heard many high-res recordings that sounded no different than their CD counterparts.

    I should also point out that there is no specific NEED to attempt to prove the negative.
    When offering purchasing advice, it's perfectly adequate to offer a statistical assertion that "someone is really unlikely to notice any difference".

  11. sonitus mirus
    I carefully selected the word "should". I've seen no reliable and repeatable evidence to suggest that Red Book cannot be audibly transparent compared to hi-res. Though such things as different masters, design choices (like the Pono player using filters that perform oddly with 16/44.1), or issues introduced by conversion and/or encoding processes can create audible differences, or measurable differences suggesting that a difference might be audibly identifiable.

    Any hi-res music file (PCM-based) should be able to be converted to Red Book and sound identical to all humans on every system. There are, of course, potential exceptions. I feel strongly that any exceptions presented would be abnormal and something that could be completely avoided with relatively little effort or cost.
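The point about conversion can be illustrated with a minimal numpy sketch: at CD's 44.1 kHz rate, any tone above the 22.05 kHz Nyquist limit produces exactly the same sample values as a lower "alias" frequency, which is why a converter must filter ultrasonic content out rather than pass it through (the 30 kHz tone here is just an arbitrary ultrasonic example):

```python
import numpy as np

fs = 44_100                                        # CD sample rate; Nyquist = 22_050 Hz
n = np.arange(1_000)

ultrasonic = np.cos(2 * np.pi * 30_000 * n / fs)   # 30 kHz tone, above Nyquist
alias      = np.cos(2 * np.pi * 14_100 * n / fs)   # 44_100 - 30_000 = 14_100 Hz

# Sampled at 44.1 kHz, the two tones are indistinguishable sample-for-sample:
print(np.allclose(ultrasonic, alias))  # True
```

So a correct hi-res-to-Red-Book conversion necessarily removes ultrasonics; the only question is whether their removal is audible.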
  12. bigshot
    If high bitrate/high sampling rate audio sounds different than 16/44.1, then there's probably something wrong with the equipment it's being played on or the settings. The sound mixes I have supervised always end with a bounce down and side by side playback of the original mix and the 16/44.1 bounce down. Neither I nor the engineers I work with have ever heard any difference.
    Last edited: Dec 20, 2018
  13. KeithEmo
    The dictionary actually includes several different meanings for the word "should"....
    (This is one of those trivial distinctions that can lead to apparent disagreements that don't necessarily actually exist.)

    One meaning of the word "should" is essentially "can be expected to".
    I assume that is the meaning you're using.... in which case I definitely agree with you.
    (That definition refers to an expectation based on something like previous experience or known facts.)
    I would say it's reasonable to EXPECT a CD to sound audibly indistinguishable from a hi-res version of the same file.

    (Let me rephrase that as "I would definitely NOT expect a high-res version to sound better".)
    I may not be as convinced as you are that it will ALWAYS be true...
    But I do agree that, in most cases, it is.
    And, in many of the cases where high-res remasters do sound noticeably better, they have also been remastered or remixed, so the difference could be due to that.

    There is also another subtly different definition of the word "should" that suggests a sort of obligation or quality of right and wrong.... as in "you should always obey laws".
    (That definition suggests that it is PREFERABLE for something to "do as it should" rather than that we simply expect it to do so.)
    I would disagree with applying that definition of the word in this case.

    However, if exceptions DO exist, I would be very interested in knowing about them, and figuring out why and how they exist.
    If there are a few exceptions out there I want them in my collection - just as a demonstration of what is possible.

    I also suspect that I may be somewhat less optimistic than you are.
    For example, I agree that it is not at all difficult to perform a simple sample rate conversion without introducing obvious artifacts.
    Any competent programmer should be able to do the math correctly, and there are free programming libraries that do it very well which you can use instead.
    However, when the conversions performed using commercial audio editing products are compared, many are in fact found to produce obvious artifacts.
    (We cannot know whether they are due to simple incompetent programming - or whether someone "liked the way they sounded".)

    If you check out this website.... http://src.infinitewave.ca/
    You'll see that, of all the SRCs they tested, about half produced excellent results, but the other half produced obviously inferior results.
    Without getting into an endless debate about which flaws would be audible, it is obvious that some do a far more accurate job than others.
    This suggests that exceptions may not be as rare as you assume - or as we would like them to be.
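The difference between a careless and a careful sample-rate converter is easy to demonstrate. The numpy-only sketch below (the 30 kHz test tone and 511-tap filter length are arbitrary illustrative choices) decimates a 96 kHz signal to 48 kHz two ways: naively dropping samples folds the ultrasonic tone down to an audible 18 kHz alias, while low-pass filtering first suppresses it:

```python
import numpy as np

fs_in, fs_out = 96_000, 48_000
t = np.arange(fs_in) / fs_in                  # one second of signal
tone = np.cos(2 * np.pi * 30_000 * t)         # 30 kHz: above the 24 kHz output Nyquist

# Careless SRC: drop every other sample -> 30 kHz folds to 48 - 30 = 18 kHz.
naive = tone[::2]

# Careful SRC: windowed-sinc low-pass below the new Nyquist, then decimate.
n = np.arange(-255, 256)
fc = 22_000 / fs_in                           # normalized cutoff (cycles/sample)
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(len(n))
proper = np.convolve(tone, h, mode="same")[::2]

def peak_near(x, fs, f, width=500):
    """Peak spectral magnitude within +/- width Hz of frequency f."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[(freqs > f - width) & (freqs < f + width)].max()

print(peak_near(naive, fs_out, 18_000))   # ~0.5: a loud, clearly audible alias
print(peak_near(proper, fs_out, 18_000))  # orders of magnitude smaller
```

This is essentially what the infinitewave comparisons visualize: every SRC must do the low-pass step, and the ones that botch it leave exactly this kind of aliasing residue.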

  14. gregorio
    1. This is not the "What KeithEmo Suspects" forum. Neither is it the: "This is what would happen IF something which has been demonstrated to be false, turns out to be true" forum.
    1a. Of course there is. There's overwhelming evidence that the differences lie outside of human hearing and there are numerous controlled listening tests which support this fact. On the other side of the coin, there's no evidence of any differences that fall within the range of human audibility, and no controlled listening test has provided any evidence that they do. Therefore, CONTRARY to your claim, there is in fact an extremely good "legitimate justification" for the general claim that differences are not audible. And, as there's NO reliable evidence that differences are audible, there is NO "legitimate justification" to claim otherwise. Why don't you follow YOUR OWN ADVICE and "apply a little logic"?
    1b. Exactly! I have certainly "tested it fully and properly", numerous times and in numerous different ways, including ABX. That alone disproves your claim that "NOBODY ELSE" has tested it but additionally, MANY others have tested it exhaustively as well. "Your whole point" is therefore based on a FALSEHOOD!
    1. You keep repeating this BUT it is FALSE! You have absolutely zero evidence that the guy was "correct" 8/10 times.
    2. I agree but you seem to be arguing against yourself here!
    3. Exactly but again, you are arguing against yourself! There is reliable evidence to support an estimate of at least 60,000 people a year being struck by lightning and, as you say, "most of us wouldn't pay extra for lightning insurance". However, there is no reliable evidence that even a single person can hear the difference (between hi-res and CD), so why would anyone pay extra for "insurance" (a hi-res audiophile DAC)? Where's that application of logic YOU advised?
    1. Two obvious points that for some (inadvertent?) reason you failed to note: Firstly, the primary system was not the only system, they also used a $100,000 audiophile system, a university listening laboratory and a SACD mastering facility. I personally have never seen a SACD mastering facility which did not have super-tweeters or speakers capable of reproducing ultrasonic freqs. Secondly, the subjects' hearing response was tested and the upper limit was 16kHz - 18kHz (the young students). So provided the systems exceeded 18kHz, extending beyond 22kHz would have made no difference anyway.
    2. This is NOT the "What KeithEmo Suspects" forum!! And even if it were, your suspicion contradicts the facts/evidence. The COMBINATION of the noise floor of the mics, the mic pre-amps and the recording venue will "swamp" the digital noise floor of CD by a factor of at least 10 but, far more commonly, by a factor of 100 or more.
    3. Why would you want to repeat the same fallacy "expressed by several other critics of the test"? Yes, many of their samples were "audiophile SACDs/recordings" (although NOT all!!) but then the only people claiming a difference between (so called) "hi-res" and CD are audiophiles and those selling to them!!
    3a. It would be more than surprising if the "professional SACD test disk" didn't contain any ultrasonic content or in fact any of the other recordings but even if they didn't, what difference would it make? None of the test subjects had a hearing response beyond 18kHz!
    4. Yes but the sample group AND the test systems AND the sample content were ALL exceptional. So we can INFER that it is EXCEPTIONALLY UNLIKELY.
    4a. They were not trying to, and we do not need to LOGICALLY prove anything. You are making the claim that the differences are (or might be) audible, therefore the Burden of Proof is on you and so far you've failed to produce even any "reasonably compelling evidence" let alone proof!!
    4b. In addition to the "exceptionally UNLIKELY" logical inference of this test, we also have the "exceptionally UNLIKELY" inference from numerous other tests and the "exceptionally UNLIKELY" inference from all the objective (measured) differences. What is "exceptionally unlikely" plus "exceptionally unlikely" plus "exceptionally unlikely"? Add all this together and weigh it against the "reasonably compelling evidence" that we can hear a difference (of which there is none) and what is the inescapable LOGICAL INFERENCE?
    5. No, you are NOT being "equally pedantic", you are being unequally disingenuous! You know that is NOT what they were "attempting to show" because YOU, YOURSELF quoted what they were attempting to determine!
    6. Great, then we are in agreement. Again, all you have to do now is take your own advice and apply a little logic: Add this "compelling evidence" to all the other "reasonably compelling evidence" and weigh all of that against all the "reasonably compelling evidence" that differences are audible (of which there is NONE)!
    1. That's because you are ignoring the evidence. This isn't the "What KeithEmo Can't See" forum!
    2. It is not more accurate to say that (it is far less so) and this is NOT the "What KeithEmo Suggests" forum!

    Round and round and round we go!

  15. KeithEmo

    If you want to see the "proof" that Meyer and Moran had a subject who got "8/10 right" then you might try reading their report.
    It's in the results section..... page 776, top right, on the copy I linked to.


    The “best” listener score, achieved one single time, was 8 for 10, still short of the desired 95% confidence level. There were two 7/10 results. All other trial totals were worse than 70% correct.
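The quoted "8 for 10" arithmetic is easy to verify: under the null hypothesis of pure guessing (p = 0.5 per trial), the one-sided probability of scoring 8 or better out of 10 is just over 5%, so it does fall short of the 95% confidence threshold:

```python
from math import comb

n, k = 10, 8
# One-sided p-value: chance of k or more correct out of n by guessing alone.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(round(p_value, 4))  # 0.0547 -> just above 0.05, i.e. short of 95% confidence
```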

    Personally I thought it was interesting that many of their subjects who supposedly had "better" hearing actually liked the SACDs less....
    Not an especially significant result... but... interesting.


    And, just to be clear, it's really nice that you're convinced that "any SACD mastering studio would have supertweeters that go to 28 kHz"....
    And that you're quite sure that "$100k audiophile speakers" would absolutely be able to reproduce any differences that might possibly exist between CDs and SACDs....
    Personally, after having heard a lot of equipment, I'm not nearly as convinced about either of those as you are....
    However, real scientists don't take things like that on faith, from me or you, and don't expect their audience to do either - which is why they measure and document it.
    With science experiments you don't "just assume all the gear is doing what you think it's doing".... because often it doesn't.


    Finally, quoting you:
    "the subjects' hearing response was tested and the upper limit was 16kHz - 18kHz (the young students). So provided the systems exceeded 18kHz, extending beyond 22kHz would have made no difference anyway."
    That sure sounds like you're claiming to know the results before the test is even done.
    And that, not only are you certain of the results, but you expect us to take your word for them.
    Did it occur to you that "just on principle" it is worthless to claim to "test whether CDs are audibly different than SACDs" unless your test equipment is first SHOWN, and then documented, to accurately reproduce the measured differences?
    In order to be valid, and determine whether differences are audible, the test MUST be performed using equipment that is KNOWN to be able to reproduce any measurable differences that are there completely and accurately (not assumed to be able to).
    You cannot test for what isn't there.
    And "guessing" or "assuming" that the speakers you're using can deliver it is not at all good enough.

    That is NOT on Meyer and Moran; they made it plain exactly what they did and did not confirm about their test setup...
    For example, they said that they were using "really high end equipment", but NEVER stated that all of the equipment they used had actually been measured and its performance confirmed.
    What they tested for was "whether a certain group of people could hear a difference between certain SACDs and their CD equivalents, using certain test gear".
    They then proceeded to suggest that we should "trust" that the test gear they selected was up to the task - without actually confirming it.
    So, if that's all you really wanted to know, then their results were just fine.

    I give up.
    You may be a great mixing engineer.
    But you would definitely NOT have gotten a passing grade in the lab courses I took in college.
    (And neither would Meyer and Moran.)


    And, incidentally, there is no "burden of proof" "on me" - because I'm not claiming anything at all.
    I simply said "I don't know for sure".
    It is you who are making a claim.
    Therefore, the burden to prove it is on you.

    And, if you have indeed "tested it fully and properly", we'd love to see your fully documented test results.
    Otherwise we'll be glad to accept "your anecdotal opinion - based on years of professional experience".

    I am now going to join those who are waiting for actual legitimate scientific proof either way...
    If and when you have some I'll be very interested to see (read) it...
    (But arguing about who said or meant what seems quite unlikely to ever lead to it.)

