Testing audiophile claims and myths
May 19, 2015 at 3:48 AM Post #6,016 of 17,336

 
Thanks for the interesting link.  I found this little tidbit on page 193...
 
 I found 13 A flowing in a water pipe under the stage in a rehearsal hall.

 
Who needs wires anymore?  We'll just let the plumbing do double duty.
 

 
May 19, 2015 at 9:26 AM Post #6,017 of 17,336
  Following on, all the measurements we make are important.  It's sometimes not hard to show what happens or doesn't happen when one or more of them is way out of line.  Take 10% distortion at 1 kHz, for example.  Most of us would notice that (another assumption).  But there are a lot of instances where specs aren't all that different, yet people claim they hear a difference between two pieces.  In this case it's two pieces of Schiit.
 
This thread has been down this path many times.  I can't tell you how many times I have evaluated stuff before buying it and had a different reaction from that of others who reviewed it.  This points to a need for objective testing.  Since we know some of the numbers that tell us important things about gear, it's valuable to have them even if we can't explain why we hear a difference.  To go back to Occam's razor, the first and most obvious question to ask is "is there a difference in sound between the two (even if everyone agrees there is)?" 
 
You know where this is going.  If you don't do a DBT of some kind (maybe ABX, maybe not - there are other designs that can be used), you don't really know whether the gear sounds different, or how it sounds different.  Another way to say this: if you assume that all the reports are correct and they are not, you will never ask the next right question that would lead you to why they are different.
 
In the case being discussed, I don't think it's unlikely that there is a difference in sound between the two pieces.  I'm not contesting that.  I'm just saying that good methodology is to independently verify that listeners can tell the two pieces of gear apart.

Why do we always end up here?  And why is this so seldom done?  Why can we only talk about this here?  Life has so many imponderables.

 
I think I can answer your last question - about why we always end up discussing this yet nobody seems to actually do the tests...
 
The most obvious reason is that most of the people who have the budget and technical ability to run well designed and documented tests rarely have the motivation to do so. A company that sells expensive cables has little reason to run tests that might show their product is totally devoid of technical merit (even a test that shows their claims are exaggerated, or that some, but only a few, customers can hear a difference is going to cost them sales). Spending the money on more advertising is simply more likely to sell more product for them - and they are in the business of selling product, not of pure science, nor of educating the public. And their competitor, who sells cheap cables, which might well be just as good as the expensive ones, doesn't have the advertising budget to run the tests. And, since those cheap cables are the default choice for most people anyway, they really can't expect to sell many more of them even if they can prove theirs are as good as their more expensive competitor's. (Losing a dozen customers might cost the company that sells expensive cables thousands of dollars; winning those dozen customers isn't going to make the company that sells $10 cables much at all.) Can you honestly imagine Amazon spending thousands of dollars to prove that their $9 USB cable is as good as someone else's fancy $200 one?
 
Even when it comes to magazines (and web sites), discussing how cables sound, and arguing about how they sound, will generate a lot more site traffic than publishing a single study that actually answers the question. Besides which, they would lose a lot of advertising if they were to even suggest that the products being sold by many of their advertisers weren't worth buying. (Just as I'm sure the wine industry makes a lot more on "wine tasting" than they would on selling "the one really best brand of wine".) A few sites take the opposite approach, counting on being educational for their audience, and they DO sometimes run and publish this sort of test - Audioholics, for example.
 
Besides all that, industry courtesy prevents many companies from even considering attacking their competitors' products (more so in some industries than others).
 
I've joked that audio technology would make a good subject for a college term paper, but even there it would probably be viewed as a "silly and unimportant topic".
 
May 19, 2015 at 10:19 AM Post #6,018 of 17,336
   That is a conundrum.  How are the tools used to measure calibrated?  In analytical biochemistry we have some standards we use to make sure that a measurement taken by person A on day one is the same (within accepted variation) as the measurement taken by person B on another - and, depending on how critical the sample is, possibly across labs.
Is there a universal standard that all instruments use?  If you take 5 such devices for measuring the usual suspects, do they all give the same value?  First order, second order, third order harmonic distortion at 1 kHz (obviously the 2nd harmonic of 12 kHz is outside of audibility)?  Intermodulation distortion?  Group delay?  What else?  Even vs. odd harmonics?  Square wave response?  Ringing?
Are we measuring everything that can affect the quality of sound? 
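For what it's worth, pulling the individual harmonic levels out of an FFT is straightforward. A minimal sketch (assuming numpy is available; the tone and the distortion levels are invented for illustration, not taken from any instrument):

```python
import numpy as np

fs, f0 = 48000, 1000                      # sample rate and test tone (assumed)
t = np.arange(fs) / fs                    # exactly 1 second of signal
# a test tone with deliberately added 2nd and 3rd harmonic distortion
x = (np.sin(2 * np.pi * f0 * t)
     + 0.010 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.003 * np.sin(2 * np.pi * 3 * f0 * t))

spec = np.abs(np.fft.rfft(x)) / (len(x) / 2)   # with 1 s of data, bin k = k Hz
fund = spec[f0]
for n in (2, 3, 4):
    level = spec[n * f0] / fund
    print("H%d: %.4f (%.1f dB)" % (n, level, 20 * np.log10(level + 1e-12)))
```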

 
Someone else commented that many of the "test instruments" used by audiophiles are indeed not necessarily reliable or accurate, but I would say that, at the commercial level, the opposite is true. (They are indeed correct when it comes to "affordable" test equipment, and sound cards, many of which simply don't meet spec.) However, if you take any Audio Precision test set (which is the industry standard), and run the same test, it will give you similar answers. (And, when you spend $50k on a piece of test equipment, you keep it in good repair and calibration - although I don't think the APs require much maintenance.) Likewise, if you look at a waveform to see what the ringing looks like, it will look the same on any AP printout, or on the output from any good quality oscilloscope. (You may get slight variations in what you read or observe, but I don't think those variations have even been mentioned... nobody here is arguing a few percent either way.)
 
Someone else wondered what the smaller companies - who can't afford a $50k AP test set - use, and the answer there is that it varies anywhere between "lower cost but still reasonably accurate test equipment" and nothing at all. (There are several companies out there making expensive "audiophile USB cables" that don't work well at all, and don't even meet the minimum standards of a cheap data cable - presumably partly because the companies that make them don't own the equipment necessary to test them properly. Likewise, many boutique amplifier companies simply "play by ear" and don't actually test their designs or products at all. After all, the chef at your favorite restaurant probably doesn't send every new dish he creates out to a lab to be analyzed, right?) In the case of DACs, even an oscilloscope costing a few thousand dollars, and probably even a test run on a sound card, will usually allow you to see things like ringing. (Besides which, the chip manufacturers publish specs and oscilloscope images and, while people may question the scientific know-how and ethics of an audio cable or DAC manufacturer, I don't think anybody is accusing Texas Instruments or Wolfson of publishing inaccurate or falsified data on the data sheets for their chips.)
 
Unlike with chemical processes, I also don't think lack of calibration standards, or equipment inaccuracies, are much of a factor at the level of these discussions. Most of the recent discussions are qualitative rather than quantitative. We're not arguing about whether 1 ms of ringing on a DAC sounds audibly different than 2 ms, or whether only the longer period is audible; we're arguing about whether it's audible AT ALL. Likewise, nobody is disputing how significant the difference is between speaker cables; the discussion is "Is there a difference, or is it all snake oil?"
 
Unfortunately, the audio industry has a long history of claims with no scientific basis at all, and claims based on downright false science, which has caused some people to get so frustrated that they automatically assume that everything for which they don't understand or agree with the science must be fraudulent. This is why you see reactions like: "Unless you can prove it's real, I don't even want to discuss it." (Note that, when I say history, I'm not suggesting that it is past... a significant percentage of products sold today make unrealistic or false claims - or claims that are based solely on "subjective opinions".) Also, unfortunately, as is the case whenever the topic is very small differences in what humans perceive, psychological factors like the placebo effect have such a major effect on the results that they can sometimes be the ONLY actual cause of those results. (The huge market in snake oil is "powered" by people's desire to hear what they've been convinced they hear, or want to hear.) 
 
A lot of the problem is also that many audiophiles simply don't have a good grasp of the science involved. This makes them easy to fool with pseudo-science, but it also renders them unable to understand legitimate science when it's presented to them.... as with ringing.
 
When you see ringing in the output of an amplifier, it is a sign of instability, and implies certain flaws in circuit design. (Ripples or tilts in the top of a square wave generally result from errors in frequency response, and can be used to detect them.) In an amplifier, ringing is strictly energy at audio frequencies that doesn't belong there, usually caused by an instability in the circuit itself, and will show up in distortion figures.
 
However, ringing in a DAC occurs for wholly different reasons, and has different implications. In a DAC, the ringing normally seen is a result of the oversampling filter, and consists of energy that DOES belong there, but has been shifted to appear at the wrong times by the filter. This means that, if you take any sort of steady state distortion measurement, which sums and compares "the energy that belongs" and "the energy that doesn't belong" over some time interval, the energy the ringing contains doesn't count as an error (the energy sums correctly). This is how a DAC which shows very visible ringing can still measure with very low THD numbers; with a steady state signal the ringing won't be visible at all; with a transient signal, the ringing is part of the "legitimate signal" and in fact must be there for the total to sum correctly.
 
This being the case, for example, if you were to send a 5 ms burst of 1 kHz sine waves into a typical DAC, the output waveform would show something resembling your input signal, with ringing occurring both before and after it. If you were to test that DAC for THD, you would find that it measured very low - because that ringing is part of the signal that belongs there, so any test that sums the energy over time will not consider it to be distortion. However, if you were to instead measure the output at a whole series of instantaneous points in time before and after your input signal had stopped, you would find ringing present (and, if you considered that result "instantaneously", for those instants you would have 100% ringing and 0% "legitimate signal" - so, if you looked at it that way, at a point 1 ms after your impulse input was stopped, you would have 100% THD at the output).
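You can see both halves of this in a few lines of code. A minimal sketch (assuming numpy and scipy; the 255-tap lowpass is a stand-in for a real DAC's oversampling filter, not any particular chip's): the summed energy in and out match almost exactly, yet there is measurable output during the silence before the burst ever starts.

```python
import numpy as np
from scipy import signal

fs = 44100                               # sample rate in Hz (assumed)
taps = signal.firwin(255, 20000, fs=fs)  # linear-phase lowpass, 20 kHz cutoff
delay = (len(taps) - 1) // 2             # group delay of a linear-phase FIR

# The 5 ms burst of 1 kHz sine from the example above, padded with silence
t = np.arange(int(0.005 * fs)) / fs
x = np.concatenate([np.zeros(500), np.sin(2 * np.pi * 1000 * t), np.zeros(500)])

# Filter, then shift by the group delay so input and output line up in time
y = signal.lfilter(taps, 1.0, np.concatenate([x, np.zeros(len(taps))]))
y = y[delay:delay + len(x)]

# A measurement that sums energy over time sees almost no error...
print("energy in:  %.4f" % np.sum(x ** 2))
print("energy out: %.4f" % np.sum(y ** 2))

# ...but instant by instant there is output where the input was silent -
# and that output IS the (pre-)ringing
print("peak level before the burst starts: %.6f" % np.max(np.abs(y[:500])))
```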
 
Since the ringing in a DAC "really is" part of the signal being "distorted" by being shifted in time, whether we can hear the ringing or not (or whether it sounds different if it happens before or after the impulse) becomes a matter partly of physiology and partly of psychoacoustics. (We have signal occurring at times when it shouldn't, quite near in time to when it should occur, so the question is whether the main signal masks the signal that shouldn't be there. This masking could occur physically, in our ear, or psychoacoustically, in our brain.) Therefore, arguing that the steady state THD is so low it can't possibly be audible is a red herring. The real question is whether the ringing is masked by the main signal, and if it is, whether it always is or only under some circumstances. (The proponents of "apodizing filters" are quite convinced that post-ringing is better masked than pre-ringing, and so that mathematically shifting some of the ringing from before the impulse to after it makes the signal "sound better" - at least with certain signals - and claim to have demonstrated this. I personally believe that I've heard differences that are consistent with this claim. Since the subject of masking is still not "thoroughly understood", I consider this to be something worth testing.)
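The "shift the ringing" idea is also easy to illustrate. A hedged sketch (again assuming scipy; scipy's minimum_phase conversion is only a stand-in for the apodizing/minimum-phase filters actually used in DACs - the point here is where the ringing lands, not the exact filter):

```python
import numpy as np
from scipy import signal

lin = signal.firwin(255, 0.45)     # linear-phase lowpass (normalized frequency)
minph = signal.minimum_phase(lin)  # minimum-phase counterpart (a shorter filter
                                   # whose magnitude response only approximates
                                   # the square root of the original's, but the
                                   # location of the ringing is what matters here)

for name, h in [("linear phase", lin), ("minimum phase", minph)]:
    peak = int(np.argmax(np.abs(h)))   # taps before the peak are pre-ringing
    print("%s: main peak at tap %d of %d, energy before the peak: %.6f"
          % (name, peak, len(h), np.sum(h[:peak] ** 2)))
```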
 
(To me, since this is consistent with the science, I don't see it as especially unlikely to be true - and so it's clearly worth testing. Other folks here seem to find the science not to be credible, and so seem to require that time first be spent proving that there's something there worth testing - or even discussing.)
 
May 19, 2015 at 10:34 AM Post #6,019 of 17,336
(To me, since this is consistent with the science, I don't see it as especially unlikely to be true - and so it's clearly worth testing. Other folks here seem to find the science not to be credible, and so seem to require that time first be spent proving that there's something there worth testing - or even discussing.)


When can we expect to see your response to AudioBear's request?

http://www.head-fi.org/t/486598/testing-audiophile-claims-and-myths/5985#post_11613824

Here's mine.

http://www.head-fi.org/t/486598/testing-audiophile-claims-and-myths/6000#post_11614332

se
 
May 19, 2015 at 10:35 AM Post #6,020 of 17,336
  I am not overjoyed at the superfluous negativity from both of you, but the dialog is an excellent opportunity to review some basic critical thinking skills and the methodology of science.   Let me ask both of you to comment on the following.
 
Occam's razor says that among competing hypotheses that predict equally well, the one with the fewest assumptions should be selected (to test first).  How does each of your positions stack up against the razor?
 
As a practical matter, it's often the case that we test several hypotheses and often pick the easiest or least expensive or least time consuming first for obvious but illogical reasons.  My question to you both is:
 
1.  What are all the reasonable available hypotheses that explain the facts available to us?
 
2.  Which has the fewest assumptions?
 
3.  How can we test it?
 
Let's stick to the logical analysis of the phenomenon at hand, I really don't care which planet either of you comes from.

 
This all hinges on a single assumption:
We have a situation where a lot of people, including myself, claim to hear a difference.....
(We also have science that clearly shows that a significant difference does exist - so the only question is whether that difference is audible.)
 
Therefore it all hinges on which assumption you choose:
a) "Maybe they all claim to hear a difference because an audible difference really exists" or
b) "They're all either mistaken - or lying - for whatever reasons".
 
Considering both the science and the "state of the audio industry", I personally don't consider either assumption unreasonable....
 
Logically, it might seem to make sense to run two separate tests; first testing whether any audible difference exists; then, if that test shows a difference, testing what the difference is.
 
However, logistically, since it requires the same resources, test subjects, and test setup to test both of those assumptions, I think it's simpler to just assume that the difference exists, and test for that. (If our test of the difference shows that the difference is in fact "none", it will answer the first question; and, if the test shows an audible difference, then we can go on to learn some details about that difference.)
 
(We also seem to have slightly different goals. Since I already expect a difference to be there, and am quite convinced from my interpretation of the science and my personal experience that it is audible, I might hope to learn some details about how audible it is, and under what circumstances. I'm pretty sure some folks here are actually neutral, and simply want to learn the truth, while others are already convinced that there is no difference, either from their own personal experience or their interpretation of the science, and so their primary goal is to prove that those of us who claim to hear a difference are "imagining it".)
 
May 19, 2015 at 10:46 AM Post #6,021 of 17,336
This all hinges on a single assumption:
We have a situation where a lot of people, including myself, claim to hear a difference.....


Science relies on logic and reason. Yet your single assumption here is a logical fallacy known as appeal to popularity.


Logically, it might seem to make sense to run two separate tests; first testing whether any audible difference exists; then, if that test shows a difference, testing what the difference is.

However, logistically, since it requires the same resources, test subjects, and test setup to test both of those assumptions, I think it's simpler to just assume that the difference exists, and test for that. (If our test of the difference shows that the difference is in fact "none", it will answer the first question; and, if the test shows an audible difference, then we can go on to learn some details about that difference.)


Translation: Screw logic, I'm too lazy.

And I also note that you did not provide answers to the specific questions AudioBear put to us.

1. What are all reasonable available hypotheses that explain the facts available to us?

2. Which has the fewest assumptions?

3. How can we test it?

se
 
May 19, 2015 at 11:17 AM Post #6,022 of 17,336
  I am not overjoyed at the superfluous negativity from both of you, but the dialog is an excellent opportunity to review some basic critical thinking skills and the methodology of science.
...
1.  What are all the reasonable available hypotheses that explain the facts available to us?
 
2.  Which has the fewest assumptions?
 
3.  How can we test it?

 
I already responded to the part about assumptions....
 
My hypothesis is that different DAC filters really do sound different - specifically that there are audible differences that are due to their ringing characteristics.
 
And here are the things we need to do to test that:
 
1) Establish that a measurable difference actually exists. (We could use a specific DAC chip, and accept that the impulse response images provided by the manufacturer are legitimate, or we could run a quick test to verify that we do indeed get different ringing patterns with different switch settings. We specifically want to confirm that we can select two filters with equally flat frequency response and low distortion, one of which exhibits "symmetrical ringing" and the other "all post-ringing and no pre-ringing", since this is claimed to be the most audible difference.)
 
2) Design a test set to play our test signals to our test subjects. (We need to ensure that our test set can deliver our test signals reasonably accurately. I suspect that many speakers and headphones, probably due to mechanical ringing in the transducers, will simply mask the differences in the signals. Therefore, we need to ensure that the differences actually "play into the air" with the test speakers or headphones we choose. We can do this by recording the output with a good quality microphone and confirming that the differences are visible on an oscilloscope. We also MIGHT invalidate the entire procedure at this point if we're unable to find or create a transducer capable of accurately reproducing our test signals. Also note that it is NOT a requirement that our test speakers or headphones "sound good" as long as they are capable of reproducing our test signals.)
 
We will, of course, need some way to switch our signal to play through one or the other of the DAC filters. Since a slight tick or discontinuity when we switch filters is probably unavoidable, it makes sense to separate EVERY set of samples with a similar tick, to avoid test subjects "learning" that a slightly different sounding tick does or does not signify a filter change. Since we're testing a sound characteristic of DAC filters, it also makes sense to do the test "live"; we can't simply record samples and play them through a DAC - because the filter of THAT DAC will contaminate our samples.
 
3) Select some test signals (music) which can be shown to demonstrate our different test characteristics. (Since we now have test speakers which we know can accurately render our test signals, and a microphone that can be used to document that those differences are "reaching the air", we have to select sample material that provides the type of stimulus to allow those differences to be visible/audible. For example, if we see the differences in ringing - on an oscilloscope trace - with recorded cymbals, but not with recorded bass drums, then we need test samples with cymbals - where we can document that the differences actually exist in our test samples.)
 
4) Now we actually test.
 
4a) I would use paired samples to determine if the difference is audible. Assuming we call one filter "A" and the other "B", we make up a series of test samples, each of which consists of two short clips of music separated by a brief pause. For each sample, the music before the pause, and the music after the pause, will each be played through a randomly chosen filter. Therefore, each sample will randomly be either "A,A", "A,B", "B,A", or "B,B". The test subject will be presented with a reasonable number of test sample pairs, and asked whether they believe the two halves were played through the same filter or not in each case. Statistical analysis of the results will show if the subjects were indeed able to hear a difference.
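Building that balanced, randomized trial list is trivial to script. A hypothetical helper (the names and counts are made up for illustration):

```python
import random

def make_trials(n_trials=40, seed=1):
    """Balanced, shuffled list of filter pairings for the paired-sample test."""
    conditions = ["AA", "AB", "BA", "BB"]   # filter before / after the pause
    trials = conditions * (n_trials // len(conditions))
    random.Random(seed).shuffle(trials)
    return trials

# For each trial: play clip 1 through trial[0], tick, clip 2 through trial[1],
# then record whether the subject answered "same" or "different".
print(make_trials(8))
```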
 
4b) We could now also do a more standard ABX test to determine if test subjects were able to recognize one or the other filter. I would want to do this regardless of the results of the first part (so we don't need to wait for the results of the first part). We might find that, even though they claim not to hear a difference, subjects still "guess" correctly more often than chance would predict - which might suggest that they actually hear a difference but find it too slight to consider reliable.
 
So, in this stage, we get to test two things:
a) whether the subjects claim to hear a difference and, if so, whether their claim is validated
b) whether they claim NOT to hear a difference, yet their guesses suggest that they really are expressing a "weak preference"
(if they say they don't hear a difference, yet guess correctly a statistically significant percentage of the time, then we can deduce that they are subconsciously reacting to a difference)
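In both 4a and 4b the statistics come down to "k correct answers out of n trials, with a 50% chance of guessing right", so the same significance test covers both. A minimal sketch (assuming scipy; the trial counts are made up):

```python
from scipy.stats import binomtest

# Made-up example: 29 correct same/different (or ABX) answers in 40 trials.
# Under the null hypothesis of "no audible difference" every answer is a
# coin flip, so we ask how surprising 29/40 would be by pure guessing.
n_trials, n_correct = 40, 29
result = binomtest(n_correct, n_trials, p=0.5, alternative='greater')
print("p = %.4f" % result.pvalue)  # small p => unlikely to be guessing
```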
 
4c) Finally, since we've already collected our test subjects and equipment, I would run some sighted/interview tests - and I would probably do two phases there as well:
 
a) I would tell the subject what filter they were listening to and ask them to describe "how it sounds"
b) I would do a blind test, where I would have each subject write down their impressions of each filter without knowing which is which
 
This will give us additional information both about any real differences they hear (and the character of those differences), and about any differences that may be based on their expectations. (For example, if a lot of subjects describe the apodizing filter as "smoother" when they know they're listening to it, but describe one as "darker" and one as "brighter" when they don't know which is which, then we may conclude that they are biased to expect the apodizing filter to sound "smoother" when, in reality, while there is some difference audible, it is less well defined.)
 
If we wanted to test public perceptions and marketing success, we might even deliberately MIS-inform some subjects and record their reactions. (It would be very informative, for example, if test subjects described what they THOUGHT was the apodizing filter as being smoother, even if that wasn't really what they were listening to: it would provide statistical proof that their expectation was actually a more compelling reason to hear a difference than the actual difference, and that they had somehow come to expect "apodizing filters to sound smoother". This is something that the marketing department would very much like to know!)
 
I'm sure someone else can add some more things worth including - but there's a start....
 
May 19, 2015 at 11:32 AM Post #6,023 of 17,336
I already responded to the part about assumptions....


First of all, this has to do with the Schiit amps, not DACs.

Answer the questions. Particularly number two.

1. What are all reasonable available hypotheses that explain the facts available to us?

Your answer here:___________________________________________

2. Which has the fewest assumptions?

Your answer here:___________________________________________

3. How can we test it?

Your answer here:___________________________________________

se
 
May 19, 2015 at 11:40 AM Post #6,024 of 17,336
   
Someone else commented that many of the "test instruments" used by audiophiles are indeed not necessarily reliable or accurate, but I would say that, at the commercial level, the opposite is true.
...
(To me, since this is consistent with the science, I don't see it as especially unlikely to be true - and so it's clearly worth testing. Other folks here seem to find the science not to be credible, and so seem to require that time first be spent proving that there's something there worth testing - or even discussing.)

THANKS - this is very interesting! Off to work, so I'll delve into this later!
 
May 19, 2015 at 11:58 AM Post #6,025 of 17,336
@KeithEmo
 
Thank you for the experimental design above!
 
Brilliant start!  Even if it is not about amps, your response goes far beyond what I ever hoped to capture with my question and shines a light on how we can use science to answer questions.  I'll read and re-read it before I comment further.  At first read it sounds like a pretty darn solid approach.  I've always wanted to do false-information testing to expose how much is real and how much is expectation.  Great example of how we should be thinking, even if there are things overlooked (not much) or which need to be changed or added.
 
You do touch on a problem we need to crack. What you have outlined is a pretty good plan for a couple of doctoral dissertations.  It's going to take resources and time.  As you have pointed out it is not in the interests (or budgets) of the industry to do this kind of analysis although you would hope that people who design DACs are worrying about the answers to these questions.
 
You have given us much to think about.  You deserve some constructive responses.
 
May 19, 2015 at 12:11 PM Post #6,026 of 17,336
For any ABX test you can ask the test participants to describe the difference, if they perceive any.
That of course requires a large enough number of participants, since in analyzing the descriptions of the difference you will only count the results from people who were correct in finding the odd one out. Of course you can also do a descriptive analysis, using a trained panel comparing only two samples - which will be more expensive.
 
As with ANY measurement, you can increase the sensitivity of your method to the edge of what is technically possible. You can analyse the level of gold in sea water (which is surprisingly high). You can also screen the participants that you pick for the ABX testing. Getting people off the street for $20 will yield a different level of accuracy in detecting any difference than recruiting classical musicians and paying them $500 for a one-hour test. The type of music program used for the test will strongly influence the result. There are certain examples of acoustic music that will easily reveal a low-quality MP3 (e.g. a brush on a cymbal). Heavy guitar rock with 10% distortion added on purpose is not likely to reveal any difference.
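Sensitivity here is largely a numbers game: the smaller the real effect, the more trials you need before a statistical test can see it. A rough sketch (assuming scipy; the 60%-correct listener is an invented example):

```python
from scipy.stats import binom

# Invented example: a listener who is genuinely (but only slightly) better
# than chance, at 60% correct. How many trials before a 5% significance
# test is likely to catch that?
p_true, alpha = 0.60, 0.05
for n in (20, 50, 100, 200):
    k = int(binom.ppf(1 - alpha, n, 0.5)) + 1      # correct answers needed
    power = 1 - binom.cdf(k - 1, n, p_true)        # chance of reaching them
    print("n=%3d trials: need %3d correct; detection probability = %.2f" % (n, k, power))
```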
 
Basically this discussion will get nowhere, because no one will be able to present any kind of proof (either way) that goes beyond the exact parameters of how that particular test has been carried out. The results are ONLY valid in the given scope of the test.
It's close to what applies to statistics: "Never trust any statistics you haven't made up yourself"

 
As for measuring the performance of hifi gear ... of course measurements can reveal defects, but measurements cannot positively predict good sound.
How do you measure the ability of equipment to accurately reproduce a 3D-like soundstage, with acoustic instruments being played live in a real room?
 
May 19, 2015 at 1:12 PM Post #6,027 of 17,336
Originally Posted by icebear
...
 
Basically this discussion will get nowhere, because no one will be able to present any kind of proof (either way) that goes beyond the exact parameters of how that particular test has been carried out. The results are ONLY valid in the given scope of the test.
It's close to what applies to statistics: "Never trust any statistics you haven't made up yourself"

 
As for measuring the performance of hifi gear ... of course measurements can reveal defects, but measurements cannot positively predict good sound.
How do you measure the ability of equipment to accurately reproduce a 3D-like soundstage, with acoustic instruments being played live in a real room?

It all depends on the objective of the research, as captured in the hypothesis to be tested.  One can in fact design experiments that produce conclusions that apply universally.  If you want to compare brand x amp y to brand w amp z, your claim that the scope of the test is limited to the test might be true.  If the test was being done to determine how much ringing on a square wave was required to be audible, then, once independently verified and accepted by peers as solid research, the result contributes to our basic knowledge of auditory perception.
 
I don't think anyone who believes that scientific inquiry can help us understand the world around us would accept the claim that measurements can't predict good sound.  Most measurements are in fact used in the opposite sense: for example, making sure intermodulation distortion is low helps minimize a negative attribute.  No, we can't at the moment say what will sound good; largely we have focused on what sounds bad.  There is also the subjective problem that what sounds good to you may not sound good to me; some people like analog and tubey sounds, some don't, but we do agree they can (not must) sound different.  None of us likes the sound of fingernails scratching on a blackboard.  The purpose of doing the kind of research we are discussing is to further our understanding to the point that we will in fact be able to say what it is that reproduces soundstages etc.  It is totally within the realm of possibility that we can figure that out.  
 
As KeithEmo pointed out a few posts back, it just hasn't been in the industry's interest to do this research nor has there been support for independent investigators to do very much of it.
 
You have in effect issued an interesting challenge for researchers:  "How do you measure the ability of equipment to accurately reproduce a 3D-like soundstage, with acoustic instruments being played live in a real room?"  Good question.  How do we do that?  I think science can be used to answer that question.  I hope you're not suggesting it can't be answered.
 
May 19, 2015 at 1:30 PM Post #6,028 of 17,336
...  
You have in effect issued an interesting challenge for researchers:  "How do you measure the ability of equipment to accurately reproduce a 3D-like soundstage, with acoustic instruments being played live in a real room?"  Good question.  How do we do that?  I think science can be used to answer that question.  I hope you're not suggesting it can't be answered.

Positively not for the future - my crystal ball is a little fuzzy

but for the time being I haven't heard of anything that can be measured that directly relates to how a human brain perceives the soundstage - the positioning of sound sources in the recording room.
 
May 19, 2015 at 1:30 PM Post #6,029 of 17,336
There is also the subjective problem that what sounds good to you may not sound good to me; some people like analog and tubey sounds, some don't, but we do agree they can (not must) sound different.  None of us likes the sound of fingernails scratching on a blackboard.  The purpose of doing the kind of research we are discussing is to further our understanding to the point that we will in fact be able to say what it is that reproduces soundstages etc.  It is totally within the realm of possibility that we can figure that out.  


But that "subjective problem" is really BIG when you understand that it's about aesthetic experience. Science doesn't do so good at explaining at why one person likes one piece of art over another, or one song over another. Given that the preference for frequency response is tied to an aesthetic experience, we may never know how to predict that well. It's like trying to find out why some people would choose one temperature of light for viewing a painting over another. And the preference could be culturally derived, such as the current popularity of bassy consumer headphones. Is that based on human physiology? Or does it come from music preference? Or is it about shared experience?
 
May 19, 2015 at 2:09 PM Post #6,030 of 17,336
But that "subjective problem" is really BIG when you understand that it's about aesthetic experience. Science doesn't do so good at explaining at why one person likes one piece of art over another, or one song over another. Given that the preference for frequency response is tied to an aesthetic experience, we may never know how to predict that well. It's like trying to find out why some people would choose one temperature of light for viewing a painting over another. And the preference could be culturally derived, such as the current popularity of bassy consumer headphones. Is that based on human physiology? Or does it come from music preference? Or is it about shared experience?

I thought we're dealing with SQ.
 
