
Testing audiophile claims and myths

Discussion in 'Sound Science' started by prog rock man, May 3, 2010.
  1. KeithEmo

    After spending a lot of time critiquing the flaws in other test protocols... I decided to offer a very simple and effective one everyone can use on their own.
    (This can be a lot more difficult with hardware... but it's not terribly complicated when it only involves software.)

    This protocol can be adapted to a wide variety of situations.
    It avoids most of the weaknesses of the other protocols I've seen (for example, it allows you to choose the test content and associated equipment).
    It can be performed in a totally blind fashion.
    It can be "scored" in a variety of different ways - and can be self scored.
    It does require a small amount of help from an outside third party to set up (but it is completely "self operated"; they don't have to stay around to switch wires or push buttons).

    Let's assume that, as an example, we wish to determine "If a 24/192k lossless audio file is audibly altered by being converted to a 16/44k lossless audio file."

    The first step is to select a test sample - in the highest quality format we wish to test.
    So, in this case, choose a 24/192k file with which you are very familiar, and which you believe will be likely to lose quality by being converted.
    We need to start with that file in WAV format (because a WAV file of a given bit depth, sample rate, and duration will always be the same size).
    Name this file ORIGINAL.WAV

    Now, convert that file into 16/44k format, using any converter you choose, and any settings you choose (pick the ones you believe will be "audibly transparent").
    Now, convert that 16/44k file back into 24/192k format, again using whatever conversion software and settings you choose.
    Save the new file as a 24/192k file named CONVERTED.WAV

    You should now have two files of identical size and the same parameters.
    If you were to do a bit compare they would NOT be the same.
    However, they will be the same size, resolution, bit depth, etc.
    And, if the process of converting the original to 16/44k, and back again, was really audibly transparent, then they will be AUDIBLY identical.

    Now, put both files on a USB stick and give them to your helper.
    Instruct your helper to create a set of sample files.... named SAMPLE01.WAV, SAMPLE02.WAV.... through SAMPLE10.WAV.
    They are to copy ORIGINAL.WAV five times to make five of the samples.
    They are to copy CONVERTED.WAV five times to make five of the samples.
    They may decide which numbers to assign to which file any way they like (the assignment doesn't have to be "really random" as long as you don't know what it is).
    Your friend should also keep track of which are which - and list that information in a separate text file.
    ( A very simple program could be written to do this automatically.)
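    As a sketch of what that simple program might look like (hypothetical, written against the file names used above; the helper just runs it in the folder holding the two WAV files):

```python
# Hypothetical helper script for the protocol above: randomly assigns
# ORIGINAL.WAV and CONVERTED.WAV to SAMPLE01.WAV..SAMPLE10.WAV (five copies
# each) and writes the key to KEY.TXT, which the helper keeps hidden.
import random
import shutil

def make_blind_samples(seed=None):
    # Five copies of each source, in a shuffled order only the helper knows.
    sources = ["ORIGINAL.WAV"] * 5 + ["CONVERTED.WAV"] * 5
    random.Random(seed).shuffle(sources)
    key_lines = []
    for i, src in enumerate(sources, start=1):
        dest = f"SAMPLE{i:02d}.WAV"
        shutil.copyfile(src, dest)      # identical size and format either way
        key_lines.append(f"{dest} <- {src}")
    # The "Key" file the listener only opens after scoring their guesses.
    with open("KEY.TXT", "w") as key:
        key.write("\n".join(key_lines) + "\n")
```

The helper would call `make_blind_samples()` once and hand back the USB stick without the KEY.TXT file.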

    You may now conduct the test any way you like.
    You may listen to each sample as long as you like, as often as you like, and as many times as you like.
    You may even listen to all of them on a variety of different equipment if you like.
    You may simply attempt to guess whether each sample file is a copy of the original or the converted version.
    You may listen to both ORIGINAL.WAV and CONVERTED.WAV and then do a formal A/B/X test of the ten sample files.
    Or you may simply listen to the files in various orders, and note when you believe you hear a difference between two of them when you play them in sequence.
    (If they're "audibly identical" then you would NOT expect to hear differences between any two when played in any order.)

    Whatever way you choose to conduct the test....record your results.

    When you finally look at the "Key" text file....
    You will either find that your results have no correlation whatsoever with what the files really are (in which case they really are "audibly identical").
    Or you may find that you were able to group them with statistical significance...
    Or that you were able to reliably note differences when you played files from different sources one after the other...
    Or that you consistently noted that files from the same source sounded similar a larger percentage of the time.

    And, of course, be careful not to fall into the "false significance trap".
    (If you flip a coin enough times, odds are you WILL eventually throw five heads in a row, by pure random chance.)
    (It is quite possible that patterns may appear by random chance... which is why tests like this should be run many times or with many variations.)
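    To put numbers on that trap: treating each of the ten guesses as an independent coin flip (a simplification — knowing the split is exactly 5/5 makes the guesses slightly dependent), the chance of a given score arising by pure luck is a simple binomial sum:

```python
# How likely is a given score by pure guessing? With 10 samples and a 50/50
# guess per sample, 9+ correct happens by chance only ~1.1% of the time,
# while 7/10 happens ~17% of the time and proves very little.
from math import comb

def p_at_least(correct, trials=10, p_guess=0.5):
    """Probability of scoring `correct` or better by random guessing."""
    return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
               for k in range(correct, trials + 1))

print(round(p_at_least(9), 4))   # 0.0107
print(round(p_at_least(7), 4))   # 0.1719
```

This is why a single lucky-looking run means little, and repeated runs (or more trials) are needed before claiming an audible difference.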

    Obviously you can try this as often as you like, with as many files as you like, and YOU get to pick songs that you are familiar with.
    (And it requires minimal assistance from a friend who merely needs to be computer literate.)

    There are only two real requirements:
    1. That you start with the highest measured quality.
    (So, for example, if you want to compare CD quality and lossy compressed files, your ORIGINAL file should be the lossless one.)
    2. That your sample files end up being the same format and size.
    (So you can't tell which is which by looking at the sizes of the files, or the indicator in your player, or get a clue by which loads faster.)
  2. bigshot
    The Pioneer is rated at 45 watts per channel at 8 ohm. From what I'm reading on the Magnepan site, that would be a little low but close to the lower recommended limit. The Magnepans are 4 ohm which would shift that rating a bit. If they were underpowered, it would be noticeable in the bass, but Magnepans aren't known for their deep bass anyway. If there wasn't enough power to drive them, I would think that they would have noticed a difference. But as I said before, they were comparing sound quality, not power. I'm sure all of the amps were compared at the same loudness... they probably chose the loudest setting of the least powerful amp where they didn't clip and calibrated everything else to that.
    Last edited: Jun 12, 2019
  3. GearMe

    ^If that's the case, then it would be a flawed test in my book (i.e. for my use case).

    TBH..the Maggie's 4 Ohm rating probably wouldn't shift the Pioneer wpc rating much compared to some of the other, more robust amps (just guessing).

    FWIW, the Maggie's were rated at Sensitivity: 83-85 dB/W/m
    So...being generous and doubling the Pioneer's power output to 90 wpc (for argument's sake) would only yield 92.5 dB at 4 meters -- which I wouldn't be happy with!


    Checking a couple of SPL calculators, you need 400 watts to get 99 dB at 4 meters... If my logic's flawed, the SPL calculators are wrong, etc., feel free to explain so I can learn!
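    For what it's worth, those figures follow from the usual free-field approximation — SPL ≈ sensitivity + 10·log10(watts) − 20·log10(distance) — which reproduces both the 92.5 dB and the ~400 W numbers above (real rooms add some boundary gain, so treat this as a rough sketch):

```python
# Free-field point-source SPL estimate, and its inverse (watts needed for a
# target SPL). Rough sketch only: room gain and speaker type shift the result.
from math import log10

def spl_at_distance(sensitivity_db, watts, metres):
    """Sensitivity (dB/W/m) + power gain - inverse-square distance loss."""
    return sensitivity_db + 10 * log10(watts) - 20 * log10(metres)

def watts_for_spl(target_db, sensitivity_db, metres):
    """Power needed to hit target_db at `metres` under the same model."""
    return 10 ** ((target_db - sensitivity_db + 20 * log10(metres)) / 10)

print(round(spl_at_distance(85, 90, 4), 1))    # 92.5 -> the 90 wpc figure above
print(round(watts_for_spl(99, 85, 4)))         # ~400 W for 99 dB at 4 m
```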


  4. bigshot
    Why didn't they detect a difference in the test? I understand that theoretical numbers on a sheet of paper can indicate things, but it's been my experience that common knowledge on the thresholds of perception usually takes the worst case scenario and extends it a few notches further. This is true of a bunch of the metrics of sound... frequency, dynamics, noise floor, loudness, distortion, etc. In the real world with sound coming out of speakers playing music in a room, our ears aren't as sensitive as hearing test tones in an anechoic chamber.

    On the Magnepan site, they don't give a recommended power rating. They say that their customers have reported good results with amps ranging from 50 watts to 1000 watts. Is there something else that might be responsible for it? I doubt that the test ran the Pioneer through another amp to raise the power.

    By the way, 99dB is not a comfortable listening level. Around 80dB is as loud as I can tolerate for normal listening myself, and I usually listen around 65dB. Have you listened to very loud music and measured it with an SPL meter? If not, give it a try and see what you come up with as being comfortable. Most people don't listen to music at 99dB, and if they do, the distortion might be coming from their hearing, not the speakers. In fact, it wouldn't be wise to listen to a whole album that loud.

    What sort of wattage would be needed to push the Magnepans between 60 and 80dB?
    Last edited: Jun 13, 2019
    Steve999 likes this.
  5. analogsurviver
    All of the above is the main reason why I disagree with bigshot so much.

    Listening at home - and at home only - is subject to many limitations - self-imposed ones included. There may not be enough space for the listening room, the acoustic treatment may be limited or non-existent, the available power may be too low, the speakers used could not take the available power without severe distortion/damage, etc, etc.

    So, there are any number of reasons why listening levels at home are usually lower than listening live - MUCH lower in the majority of cases. I too have a friend who would pay for a front row ticket to the symphony - and try to "escape" somewhere in the back after the intermission - provided there are any free seats left, of course. He just does not like it loud - even live.

    Most audiophiles adjust the loudness according to whatever maximum level their room and equipment will allow. And are shocked to hear acoustic music live - because the peaks exceed whatever they are accustomed to listening to at home - considerably so.

    And that is why most commercially available recordings - even of classical music - are SEVERELY compressed/limited. One way or another ... Listen to any piano recording from the mainstream labels - and then listen to a piano live, in a reasonably sized concert hall - not a closet. A piano CAN get loud - very loud indeed - if the score demands it. And well below 1% of available recordings capture this dynamic range - which can, again, be played back at the correct SPL by fewer than 1% of speaker systems. So, even if approximating the real dynamic range/loudness at home is desired, it is VERY hard to realize in practice.

    Now, I am aware that the size of the listening room dictates the maximum "supportable" dynamic range/loudness. And that at home, within a tiny fraction of the space volume of the original venue, the loudness can not possibly be the same.

    But, this IS head-fi. And such limitations do not apply for headphone listening.

    Bottom line : listening at 99 dB is NOT harmful - not if the recording has not been doctored ( compressed/limited/mastered beyond death ). Those 99 dBs will be reached for about 1% of the total time - if not even considerably less. With a dynamic range of say 60 dB and above, the average listening level would be around 50 dB - not more.

    It is horrible to see what the loudness wars have done to the music delivered by digital means. Comparing the rip of the original analog recorded vinyl record to the currently available CD of the same recording is not going to put a smile on your face - either when looking at the files in an editor, or listening.

    I did LOTS of work on analog record playback during the time some of you were glad that I did not post in this thread. And have, as a "collateral damage", been forced to actually learn just how certain analog mastering engineers and record labels handled the task.

    There has been ONE vinyl record, which - up to now - proved to be "unplayable"; with the majority of phono playback equipment at least. It is Wagner music conducted by Carlos Paita https://en.wikipedia.org/wiki/Carlos_Païta - a 1969 recording by Decca, reissued on his own Lodia label in the 80s. https://www.discogs.com/Wagner-Carlos-Païta-New-Philharmonia-Orchestra-Tristan-und-Isolde-Der-Fliegende-Holländer-Die-Meis/release/10689527 A cartridge that can do it justice is anything from the upper third tier of Grado models from the late 70s/mid 80s ( it has to track cleanly at least 90 micrometer amplitude - some Grados of the period went past 110; bass and dynamic range without any hint of compression that most others can not even dream about ) in an arm that mates well with Grado ( not an easy task ).

    This recording is phenomenal ... regardless from which point of view. And, it is from 1969 ... - one wonders what went so damn wrong that today we are getting such scaled-down, limited-in-everything models of what was, obviously, possible in 1969.

    Needless to say, any amp/speaker combo capable of a maximum of 93 dB SPL has no place in listening to such great recordings. And most will have more problems at the soft end of this record's range - the room noise floor in most domestic settings during daytime is (too) high, and during night time you don't want to start a war with the neighbors - even if it is only about 2% of the time.

    Use any of the SPL calculators and input your conditions ( room size, speaker placement, listening distance ) to get an answer. There is one not so tiny detail usually omitted in SPL calculators; and that is the polar pattern of the speaker. For a typical box speaker, the polar pattern is ( at least for the lower frequencies, which actually define how loud it goes ) a point source - which falls off in SPL with the distance SQUARED - it means only one fourth of the SPL available at 1 metre is available at 2 metres. That's why box speakers pretty quickly "disappear" in large rooms - and large planars, which are a close approximation to a line source ( for which SPL falls off with distance in linear fashion ), can with proper placement in fact play loud enough in huge rooms.

    Dipoles ( most ESLs ) do in fact start their real life in rooms of 100 and more square metres - and they may never achieve the same loudness in small domestic rooms. In such large rooms, box speakers can be FAR too loud in proximity to the speakers - and "inaudible" in the far corner of the room. I am afraid no calculation can give you a precise answer with Maggies - but up to 80 dB SPL should be fine, with any of the amps tested by Stereo Review.
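    For reference, the textbook falloff rates behind this point/line source distinction can be sketched numerically: for an ideal point source, intensity falls with the square of distance, which works out to a 6 dB SPL loss per doubling of distance; an ideal line source loses only 3 dB per doubling. (This assumes ideal free-field sources — real speakers and real rooms sit somewhere in between.)

```python
# Idealized free-field falloff: point source loses ~6 dB SPL per doubling of
# distance (inverse-square law in intensity), line source loses ~3 dB.
from math import log10

def point_source_loss(d, ref=1.0):
    """SPL loss (dB) at distance d for an ideal point source."""
    return 20 * log10(d / ref)

def line_source_loss(d, ref=1.0):
    """SPL loss (dB) at distance d for an ideal line source (tall planar/ESL)."""
    return 10 * log10(d / ref)

for d in (1, 2, 4, 8):
    print(d, round(point_source_loss(d), 1), round(line_source_loss(d), 1))
# point source: roughly 0, -6, -12, -18 dB; line source: roughly 0, -3, -6, -9 dB
```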

    Except that 80 dB SPL is roughly the equivalent of a vehicle that can achieve a maximum speed of 40 miles per hour on a public road. It is simply not enough for safe driving under real conditions.
    Last edited: Jun 13, 2019
  6. gregorio
    1. Which is luckier as far as a sound science forum is concerned?: Having experts who can interpret the tens of thousands of tests every year OR, having marketers who just make-up whatever suits their agenda based on their FALSE assertion that these tests/evidence don't exist, then require the impossible (that science prove a negative, that their made-up nonsense could never be true) and then finally, they call this complete anti-science "pure science"? That's so absurd it's funny!!

    2. There aren't any "interesting results in there"! You don't seem to have much of a grasp of "science" ... how do you think we arrive at facts? Running tests, examining the results and what they show/demonstrate. So, assuming that by "interesting" you mean results that don't align with the asserted facts (and therefore could be used to support marketing nonsense), then that doesn't exist because the asserted facts include those results. What is interesting is that some of the asserted scientific facts are often a representation of an upper limit, that may only be true rarely and under extreme conditions. For example, the asserted fact of human hearing being 20Hz - 20kHz. We tested several thousand different teenagers over the course of several years; at a normal listening level the mean highest freq response was between 16kHz and 17kHz, only a tiny fraction could reliably hear 19kHz and not a single one of them could hear 20kHz.

    3. See #1!!
    3a. Huh? For example, if you want to find out "how fast a human can run", by your logic we would have to include kindergarten children and newborn babies (as they are a significant demographic). As we obviously can't test every single human baby, we cannot be certain that there isn't (or hasn't been) one somewhere in the world who can run faster than Usain Bolt. BTW, do you know many tall, professional athletes who are 5 years old? How about 5 year old experienced professional music/sound engineers? Doesn't any of this sound absurd to you? Apparently not, the logic of your argument/position has been put to you before (for example, point #3, post #12756) but you simply refuse to answer the question/s and just rephrase your absurd logic! Furthermore, you seem to have (yet again) created an analogy that's entirely counter-productive to the argument you intended! Yes, it would be "foolish" to exclude professionals but in the audiophile world that's exactly what happens, the results from professional engineers ARE routinely excluded/discounted/ignored!!
    1. "If you hear a difference but can't seem to find a measurement that would account for it", do a damned Null Test and that way you're unequivocally measuring ALL differences collectively, except what you are imagining!! You know this, it's been pointed out to you on numerous occasions but here you are yet again just completely ignoring this fact and restating the same made-up "maybe's" (and implying a common audiophile myth)!
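    For anyone who wants to try the null test mentioned above, here is a minimal sketch (assuming two equal-length, time-aligned, mono 16-bit PCM WAV files; a real null test also needs sample-accurate alignment and gain matching first):

```python
# Minimal null-test sketch: subtract two time-aligned WAV files sample by
# sample and report the residual level in dBFS. Assumes equal length, equal
# sample rate, mono 16-bit PCM; anything above a deep residual (say -60 dBFS
# or higher) means a real measurable difference exists between the files.
import wave
from array import array
from math import log10, sqrt

def null_residual_dbfs(path_a, path_b):
    with wave.open(path_a, "rb") as wa, wave.open(path_b, "rb") as wb:
        a = array("h", wa.readframes(wa.getnframes()))   # 16-bit samples
        b = array("h", wb.readframes(wb.getnframes()))
    diff = [x - y for x, y in zip(a, b)]
    rms = sqrt(sum(d * d for d in diff) / len(diff))
    if rms == 0:
        return float("-inf")             # perfect null: bit-identical audio
    return 20 * log10(rms / 32768.0)     # residual relative to full scale
```

This measures ALL differences between the two files at once, which is exactly the point gregorio is making.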

  7. TheSonicTruth
    "What went so damn wrong"?

    It's called the LOUDNESS WAR. And it has been going on since before digital. Digital simply allowed the loudness war to be conducted on a 'nuclear' scale, with DRC in-the-box and plugins much more powerful than any in the analog domain.

    Plus, you have to understand that bigshot, and Calbi, have paying customers. Just as do the big labels that employ their skills. Those clients are what drive the trend toward over-compressed CD and download versions of your favorite music.

    And of course, playing something at 80 dB SPL in your living room will sound louder than playing it at 80 dB SPL in a church sanctuary, auditorium, or Madison Square Garden. Loudness-mastered CDs - and excessive DRC & limiting in live sound - only exacerbate this phenomenon.
    Last edited: Jun 13, 2019
  8. Glmoneydawg
    80 dB is good policy if you want to be able to still enjoy your music into your golden years... I have friends that liked it loud; they are paying for it now.
    Steve999 and bigshot like this.
  9. GearMe
    Wow...60 to 80 dB? That's basically 'normal conversation' to 'dial tone' levels!
    What would your peaks get to playing back well-recorded music?

    Agreed, 99 dB is not comfortable for sustained levels and extended periods.
    That said, I'm interested in peak levels that can at least get close to replicating live conditions...understanding that recorded music is somewhat limited compared to the live experience.

    Yes, I've listened to music under measured conditions that peaked north of 100 dB. Most likely, you have as well?

    Normal concert levels peak in the 100-120 dB SPL levels (chart below). Heck, even Jazz concerts register in the 90's...

    As far as the test goes, I can't surmise why they didn't detect the difference. Maybe nobody cared about replicating live conditions? Which would be a huge miss in my book!
    Did you find an SPL number/range anywhere in the article?

    Also, given your point that our ears aren't as sensitive in real world/speakers conditions ('In the real world with sound coming out of speakers playing music in a room, our ears aren't as sensitive as hearing test tones in an anechoic chamber'), is it possible that this reduced sensitivity might 'bias' the results at lower listening levels? Meaning, that we'd be able to discern differences more easily in conditions that allowed for live music levels? Posing the question for the group to think on... I have no idea what the psychoacoustic science behind it is or the impact it would have on being able to discern equipment differences.

    Lastly, regarding the Maggie's -- having owned these speakers (and several other very nice ones), I can state that my experience was that the Maggie's definitely needed amps with 400+ wpc to achieve realistic sound levels in large rooms with a listening position 12-15 feet from the speakers.

    Regarding Magneplanar's amp recommendations, there's a reason the range goes to 1000 watts...the headroom is needed to sound realistic. As far as the 50 watt number, given this group's trust level for audio equipment vendors, I'm surprised that we'd consider that low number to be 'real'...snake-oil and all. :wink:

    Seriously, think about trying to sell speakers and telling people you need 400+ wpc amps to run them properly...not a message the marketing department would want to deliver.

    Last edited: Jun 13, 2019
  10. gregorio
    1. If you really did think that then why do you continually do the opposite: Not do reliable tests, deny the existence of reliable tests, ignore the weight of reliable evidence and then just restate audiophile myths (or invent new ones) as fact or possibilities even though they contradict the weight of reliable evidence?

    2. Many of us here have done reliable tests, in fact that's probably why many of us are here!!

    3. Firstly, the "popular view" on head-fi is that there are very audible differences between amps. Secondly, we have a wealth of evidence which demonstrates that when this "popular view" is subject to more reliable/controlled testing these very audible differences magically become inaudible.
    3a. The tests which support the unpopular view (that amps do not sound different) are "clearly compelling" because they are more reliable/controlled than the sighted tests which support the opposite (popular) view.
    3b. That's a misrepresentation! Virtually all listening tests have at least one flaw, even the most rigorous scientific ones. A rational mind, capable of critical thought, will weigh the flaw/s of a particular test, along with the wealth of other evidence/tests/facts and evaluate their significance.

    4. You are free to invent a misrepresentation of the posts here and then believe that misrepresentation. However, posting that misrepresentation as fact is discourteous to the point of insulting to those you are misrepresenting and the whole point of this forum in the first place!

    5. Ah, something we can completely agree on! Indeed, what justification could there be in spending good money to run a controlled/reliable test which demonstrates that although a product might be "a little bit better" on paper (measurements/specifications), that improvement is beyond the threshold of audibility? Who would "spend good money" to demonstrate the exact opposite of their marketing assertions?

    So as far as you are concerned: "Unplayable" = "Phenomenal". That really does explain so much of what you post. For everyone else of course, "unplayable" is the exact opposite of "phenomenal".
    Unfortunately, in addition to this completely opposite view to everyone else, you also include a bunch of obvious falsehoods: There is no loudness war in classical music. Classical music is never "severely compressed", except in a few special case uses, for example some radio broadcasts. The actual fact is that when compression is used on classical recordings, it is used very lightly and typically to reduce the dynamic range of the recording to the dynamic range which would be experienced in a live performance. So the actual facts are again the complete opposite of what you are falsely asserting!
    That's funny, for two reasons: Firstly, you seem to be quoting the inverse square law or rather mis-quoting it. Every doubling of distance, the SPL reduces by half, so at 2 meters the SPL would be half, not one fourth, of the SPL at one metre. Secondly, this does not only apply to speakers, it applies to all sound sources. Therefore, although a piano is capable of high SPLs when recorded with mics say only half a meter away from the piano strings, an audience sitting 10 or 20 meters away is actually only going to experience a tiny fraction of that SPL. From a typical concert audience position, a piano cannot get "very loud", in fact it's a relatively quiet instrument and its dynamic range is many times lower than what CD offers, even without noise-shaped dither!!

  11. TheSonicTruth
    And those are averages, not peaks.

    It's those averages, for longer periods of time, that can lead to hearing damage and potential loss, not the peaks.
  12. Phronesis
    I see that this thread came back to life. I only glanced at the recent posts. I doubt there's much to say beyond what's already been said many times. People may need to agree to disagree. Here are the conclusions I came to after months of talking to you guys, doing my own tests, and thinking about this stuff:

    - All or nearly all of the night and day differences people describe when comparing DACs, amps, cables, and connections are likely due to misperception. There may sometimes be subtle differences in the sound of these components.

    - Listening tests aren't a reliable way to discern subtle differences between the sound of components for normal long-term listening, because the listening tests involve short-term listening, and fallible auditory perception and memory. Listening to longer music segments in tests only exacerbates the memory issues. Blinding can help avoid false positive results, but it may result in more false negative results.

    - Perception is adaptive, resulting in perceived differences in the sound of components (including transducers) becoming less apparent over time.

    - If comparing decent sound systems, the quality of the recording affects the sound quality more than the sound system.

    - Paying too much attention to sound quality can interfere with enjoyment of music. If enjoyment of music is the goal, the best approach is to set up a decent sound system and then forget about sound quality, and just get into the music.
    Last edited: Jun 13, 2019
  13. GearMe
    Wholeheartedly agree with your last point... I'd rather hear a song I like on a transistor radio than one I don't on an awesome system!

    However, when possible, I'll opt for a great song on a great system :wink:
  14. Steve999
    I disagree with you on point two and I think point four is too over-generalized to be meaningful! Who cares? I sure don’t. Points one, three, and five I absolutely agree with! Most importantly, good to have you back!! Missed you! How long until you change your pic? :L3000:
    Last edited: Jun 13, 2019
    baseonmars likes this.
  15. analogsurviver
    @gregorio Regarding "unplayable"/"phenomenal" analog records: Record mastering, although not strictly art, is also not strictly science. And it is ALWAYS a human decision how far into the cutting capability one wants to actually go with the finished record.

    From the standpoint of ultimately achievable fidelity, it is desirable to cut at the maximum recording level - which may well be impossible due to the program material's running time. Even if there is room on the record to cut the required length of program, there IS a concern whether the "average high quality" customer has phono equipment capable of playing back such high amplitude/velocity grooves. And even if that is the case, there is a concern whether the equipment has been set up optimally.

    There is a reason for the metering "colours" as used in Audition CC - and numerous other editors: green below and up to -18 dBFS, yellow between -18 and -6 dBFS, red from -6 to 0 dBFS. The analog record reference level at 1 kHz is 5 cm/sec - and that corresponds to -18 dBFS. In theory, there should be no cutting above 0 dBFS - yet, in practice, there is. A recording level at least 1 dB - or 2, to stay on the safe side - lower is prudent if you have only one pass to record an unknown analog record. So, playing it safe would mean setting the recording level so that a 1 kHz analog signal at 5 cm/sec lands at -20 dBFS.
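    Those reference figures imply a simple conversion: if 5 cm/sec at 1 kHz sits at -18 dBFS, then a groove velocity v lands at -18 + 20·log10(v/5) dBFS. A small sketch (assuming exactly that -18 dBFS calibration):

```python
# Map groove velocity to digital level, given the common calibration where
# the 5 cm/sec, 1 kHz vinyl reference lands at -18 dBFS in the capture.
from math import log10

REF_VELOCITY_CM_S = 5.0    # standard 1 kHz reference velocity on vinyl
REF_LEVEL_DBFS = -18.0     # where that reference sits in the digital file

def velocity_to_dbfs(velocity_cm_s):
    """Digital level of a groove cut at `velocity_cm_s` under this calibration."""
    return REF_LEVEL_DBFS + 20 * log10(velocity_cm_s / REF_VELOCITY_CM_S)

print(round(velocity_to_dbfs(5), 1))    # -18.0 (the reference itself)
print(round(velocity_to_dbfs(40), 1))   # 0.1 -> a very hot cut just clips 0 dBFS
```

By this mapping, a cut at 8x the reference velocity just reaches 0 dBFS - which is why the extra 1-2 dB of headroom described above is prudent.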

    The above assumes a perfectly adjusted cartridge that has the required trackability to begin with. Not everyone has such a cartridge... and therefore, most cutting engineers are willing to sacrifice the ultimately achievable quality of the finished analog master for something more easily playable by lower quality cartridges and/or less optimally set up turntables. Here, peak cutting is very quickly limited to -3 dBFS or even lower - usually -6 to -7 dBFS. With corresponding losses in dynamic range / increases in record noise. And the corresponding ability to be played back by innumerably more real world record players. HERE is that never-ending dilemma faced by the record mastering engineer.

    Records that actually do come close to 0 dBFS are very scarce. But they do exist. Those that exceed 0 dBFS also exist - and are even scarcer. Now, I would have to check the actual file, but the record in question does approach/exceed 0 dBFS if the reference recording level at 1 kHz is set to -18 dBFS.

    No moving coil cartridge I am aware of can track those peaks without an audible protest - distortion.

    But those additional 3 dB of signal to noise ratio do contribute to overall better sound quality. I do not have any overall quieter Wagner orchestral recording on vinyl - and that is out of >2000 records. The dynamic range is huge. Neither is the low end better represented in any other record I have heard.

    I hope that clarifies "unplayable"/"phenomenal" in this - or any other - vinyl recording. This one IS "phenomenal" - IF played back with a superbly aligned and adjusted, superb-tracking cartridge. And it is "unplayable" if lesser or less well adjusted equipment is used.

    I also have a few other releases of the same original recording - on the same label, but from various countries. And there ARE marked differences in cutting levels - sometimes amounting to 7 dB and more. It is a compromise regarding the diameter at which the groove ends relative to the centre of the record - allowing less inner groove distortion with lesser styli - but paying the penalty in reduced dynamic range and higher noise. Again, the final result achieved depends on the playback equipment used. A cheap cartridge pushed too hard by a good record can actually sound bad - but no cartridge can make up for the loss on a too conservatively cut record. Compromises ...
