Testing audiophile claims and myths
Aug 4, 2013 at 5:57 AM Post #2,101 of 17,336
Quote:
1 - I assume that you apply the "jitter" by altering the values of successive samples. Theoretically, this may generate components outside the Nyquist bandwidth. Do you filter the results before writing the output file? "Real jitter" also generates components outside the Nyquist bandwidth.

 
The signal is oversampled for the jitter simulation (which is a variable delay modulated by a mix of filtered noise and several sine waves), to minimize aliasing and interpolation artifacts. Components are indeed produced above the original Nyquist frequency (e.g. 22050 Hz), but the downsampling process filters them out.
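For anyone who wants to see the general shape of such a simulation, here is a minimal sketch in Python. It is my own illustration rather than the actual test code: the 8x oversampling factor, the single 1 kHz modulator and the 10 ns peak deviation are assumptions standing in for the real filtered-noise-plus-sines mix.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100                                  # original sample rate
ratio = 8                                   # oversampling factor (assumed)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 3000 * t)            # 1 s test tone at 3 kHz

x_os = resample_poly(x, ratio, 1)           # oversample to reduce interpolation error
fs_os = fs * ratio
n = np.arange(len(x_os))

# Jitter model: a time-varying delay. A single sine here; the real model
# modulates with filtered noise plus several sines.
jitter_peak = 10e-9                         # 10 ns peak deviation (assumed)
delay = jitter_peak * np.sin(2 * np.pi * 1000 * n / fs_os)

# Read the oversampled signal at the jittered instants (linear interpolation).
x_jit = np.interp(n + delay * fs_os, n, x_os)

# The decimation filter removes components above the original Nyquist (22050 Hz).
y = resample_poly(x_jit, 1, ratio)
```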

 
Aug 4, 2013 at 6:05 AM Post #2,102 of 17,336
By the way, here is the link for more information and samples. Anyone who wants to try the test can submit a different sample (< 30 s length), or suggest changes to the jitter parameters or improvements to the model.
 
Aug 4, 2013 at 7:04 AM Post #2,105 of 17,336
Quote:
Well, let's say a guy gets a new shiny USB cable, installs it and suddenly his whole system sounds better. Measurements show the cable is identical to the old, cheap one.
 
the new USB cable performing no better = placebo
hearing an improvement = placebo effect

I think you missed my point, which was that "hearing an improvement" cannot (I thought) be called a placebo effect, because it's not measurable.
 
Since then, I've been corrected, as the placebo effect seems to also include subjective (not measured) improvements.
 
Nevertheless, I think it's important that those not familiar with the placebo effect in medicine know that the improvements following the use of a placebo are quite often measurable and objective. People actually get cured by taking a placebo. Knowing such things helps in putting things into perspective...
 
Aug 4, 2013 at 8:08 AM Post #2,107 of 17,336
Quote:
 
I would hardly consider popular opinion on internet forums a reliable basis for discussion. Especially when all those who share the opinion are also affected by the same well-known and proven psychological and psycho-acoustical effects. If you asked 10 people about this image, they would probably give the wrong answer, unless they already knew about the illusion. Does that make their opinion fact, or just confirm that their perception is affected by bias?
 
 
Under those conditions, it can never be proven that a difference does not exist, just like it cannot be proven that alien abduction does not happen. Audiophiles trust sighted testing (even though it has been proven to be prone to showing differences that do not exist, like when comparing an amplifier to itself without knowing about it), and look for every possible excuse to invalidate DBT. But is it really worth obsessing so much about a very low chance that the cable makes a difference that is in all likelihood minuscule at best anyway?

 
Well, that's a good way of putting it regarding obsessing over minuscule chances of making a difference, but you know a lot of audiophiles are pretty obsessive when they are willing to pay more than $25 for a cable, especially when it is provided without any measurements to prove that it makes a difference. I'm probably preaching to the wrong audience regarding the impracticality of DBTs and their tendency to show null results. To be honest, I have no idea why some things haven't been proven in double-blind tests, given that to my ears the differences are there to be heard. I also think this is why a lot of audiophiles don't place much weight on controlled testing: mainly because they trust themselves not to hold a bias, and they keep expensive equipment instead of sending it back at the end of the trial period. Sometimes audiophiles compare a range of cables at one time and describe very specific sound signatures, which would be a very complex cognitive bias indeed. And then other audiophiles who haven't even come across this opinion spontaneously describe the same attributes. For example, with USB cables certain models are consistently described as sounding brighter, warmer, or smoother etc., where to me it seems highly unlikely that different audiophiles are merely being conciliatory, given how audiophiles like to disagree with each other.
 
Again though, I have probably come to the wrong place to discuss why I find it highly unlikely that certain claims of audio skepticism are accurate.  I guess they should ban audiophiles from criticising sound science in the sound science forum right?

 
Aug 4, 2013 at 9:29 AM Post #2,108 of 17,336
Quote:
 
Sometimes audiophiles compare a range of cables at one time and describe very specific sound signatures, which would be a very complex cognitive bias indeed. And then other audiophiles who haven't even come across this opinion spontaneously describe the same attributes. For example, with USB cables certain models are consistently described as sounding brighter, warmer, or smoother etc., where to me it seems highly unlikely that different audiophiles are merely being conciliatory, given how audiophiles like to disagree with each other.
 

There do seem to be certain consistencies in what audiophiles 'hear' from given equipment that in testing shows negligible sonic differences, if any. That's simply down to visual stimuli and verbal descriptions of the product in question.
 
Aug 4, 2013 at 10:36 AM Post #2,109 of 17,336
Quote:
 
Again though, I have probably come to the wrong place to discuss why I find it highly unlikely that certain claims of audio skepticism are accurate.  I guess they should ban audiophiles from criticising sound science in the sound science forum right?

Skepticism is usually not about making claims but rejecting them for lack of evidence. Or because they are unreasonable. Or because they're just stupid.
 
Aug 4, 2013 at 10:57 AM Post #2,110 of 17,336
Quote:
 I guess my point was that skepticism is fine and all for saving money not buying "snake oil" but consider for a moment that this information cannot provide some kind of perfect prediction for how all components will perform, and that in some cases might overlook certain variables and mechanisms affecting performance.  

I don't want to presume that this statement is meant to imply there are mysterious and unmeasurable effects in the world of electronics that cause devices to interact unpredictably, and in a favorable way, with each other. Hopefully, that's not what was being said here. But just in case: that view is shared by many who also have in common a lack of understanding of electrical principles and measurements. Measuring and predicting "interactions" is not difficult, and there isn't anything audible that cannot be measured. The part of that science that is still under development is better correlation of measured results with their degree of audibility across demographics and population segments. What DBT does is isolate what's audible from bias, which is absolutely necessary for reliable data on the degree of audibility of anything.
Quote:
...a lot of what I have read regarding cables doesn't match my own experience, and if you can think of a double blind test that can compare SPDIF cables of different lengths (not saying that there is an ideal, just that they sound different in my experience) without adding anything to the signal chain, and without any wasted time or effort from myself being put in, I'm sure someone who has already ignored the sound science knowledge and drunk the kool aid will be willing to spend their time to prove what is already considered common knowledge outside of this subforum.

Designing a DBT that can compare SPDIF cables without adding anything to the signal chain is easy.  Wasting your time and effort...that in itself is subjective to you, and outside of scientific test design parameters.  
 
Aug 4, 2013 at 1:05 PM Post #2,113 of 17,336
Quote:
I'm probably preaching to the wrong audience regarding the impracticality of DBTs and their tendency to show null results.

 
Everybody knows that DBTs between some components are nontrivial to set up (between audio files, it's trivial). A lot of things aren't tested or could be tested more, sure.
 
So tests that aren't carefully controlled have a much higher tendency to produce non-null results. You reject the null: great. The problem is that there is pretty much no basis to make any claims about what caused what in the conclusions. If you don't properly control the well-known nuisance factors, you don't really have an idea of which "treatment" gave you that outcome. It could have been the treatments you were testing, like sound of device A vs. sound of device B. Or one of the unintended effects. If you can't honestly tell them apart in the data, you can't draw any good conclusions without resorting to leaps of faith—and you'll find that without good rationale, others may not take the leap with you.
 
Now, plenty of experiments with a double-blind procedure could be set up or have different parameters so as to increase the statistical power and make rejection of the null more likely. Some DBTs are more sensitive than others, yeah. Some DBTs could even be biased in other ways, resulting in null rejections that shouldn't happen, and of course you can always reject the null by chance even if there's no effect and the experiment was proper. But we expect DBTs to generate more null results than uncontrolled or less controlled tests, because we're throwing away a bunch of outcomes where you get a rejection due to some kind of bias or nuisance factor (i.e. not the ones you're trying to show). That's how it's supposed to be.
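To put numbers on that, here is a quick sketch (my own, not from anyone's post) of how trial count drives an ABX test's rejection criterion and its power; the 70% "true" detection rate is an assumed example. With only 10 trials, even a listener who genuinely hears the difference 70% of the time passes the 5% criterion only about 15% of the time, which is one mundane source of null results.

```python
from scipy.stats import binom

alpha = 0.05    # false-rejection rate we allow under the null (pure guessing)
p_true = 0.7    # assumed real detection rate of the listener

for n in (10, 16, 25, 50):
    # Smallest k with P(at least k correct | guessing) <= alpha.
    k_crit = next(k for k in range(n + 1) if binom.sf(k - 1, n, 0.5) <= alpha)
    # Chance that a genuine 70% detector reaches that criterion.
    power = binom.sf(k_crit - 1, n, p_true)
    print(f"n={n:3d}  need >= {k_crit:2d} correct  power = {power:.2f}")
```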
 
 
I guess they should ban audiophiles from criticising sound science in the sound science forum right?


 
Depends on the criticism. If you want to suggest a better method, better explanations, better data or otherwise point out weaknesses with what's current (i.e. improve the current science or refute it based on scientific process and principles), that's a good thing. If you want to express your opinions and perceptions—despite whatever it is that other people are saying—that should be fine. People are entitled to believe whatever they want regarding things that aren't dangerous. But if you want to criticize things based on some anecdotes of "I think I heard it and so did others", then that's not going to go very well. You're probably not going to convince many people in sound science without using some sound science.
 
Aug 4, 2013 at 1:25 PM Post #2,114 of 17,336
Quote:
 
[...] You're probably not going to convince many people in sound science without using some sound science.

Here you are, that's the kind of "brilliant" posts I was referring to.

 
Drez, you simply don't know how lucky you are to get such elaborate, meaningful, sensible, educational, balanced, sound replies.
 
Talking of which mikeaj, did you intend the pun in "without using some sound science"?

 
Aug 4, 2013 at 2:15 PM Post #2,115 of 17,336
Quote:
... the jitter audibility numbers which used to be thrown around, where more recent studies by Julian Dunn have revised the threshold down to 15-20 ps.
 
Sadly, Dunn is no longer with us, so none of his studies are terribly recent. However, in one of his last published papers he acknowledged the Benjamin and Gannon study, which placed the thresholds substantially higher; note also that Dunn (and Hawksford too) never did any empirical listening tests.
 
Quote:
let's face it most double blind tests are set up by audio skeptics
 
Do you have a citation to back this up, or is this just your opinion?
 
Quote:
who have a vested interest in not making fools of themselves by discrediting what they have been claiming for years.
 
Can you give one concrete example of this?
 
Quote:
Double blind testing methodologies are difficult to set up, time consuming (agreed), and prone to produce null results.
 
Actually, look hard enough and you can find several DBTs that produce non-null results; members here have succeeded at differentiating between codecs, filters and even between DACs. Where there is a significant objective difference between stimuli, detecting the difference is eminently possible. In fact, the rapid-switch DBT has been found to be pretty sensitive, as Tom Nousaine showed in the "Flying Blind" article.
 
Even the legendary Meyer and Moran study found non-null results on noise levels between high-res and Red Book (at sufficient gain). Humans have been found capable of detecting level differences of between 0.1 and 0.3 dB in single-frequency tests, or frequency differences of less than 1%.
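That 0.1-0.3 dB figure is easy to sanity-check on yourself. A minimal sketch (my own; the 1 kHz tone and the 0.2 dB offset are just assumed values) that writes two files you can then ABX in foobar2000 or any other blind comparator:

```python
import numpy as np
from scipy.io import wavfile

fs = 44100
t = np.arange(fs * 2) / fs                  # 2 seconds
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz at -6 dBFS

for name, gain_db in (("ref.wav", 0.0), ("louder.wav", 0.2)):
    y = tone * 10 ** (gain_db / 20)         # apply the level offset
    wavfile.write(name, fs, (y * 32767).astype(np.int16))
```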
 
The issue with tests is whether the difference between stimuli is above or below the JND, which depends on the type of stimulus: detecting jitter, for instance, is easier with a single high-frequency tone but much harder in music, where masking effects appear. When you look at the jitter sidebands from real measured digital devices, it is highly unusual to see distortion products that poke above -100 dB.
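For a sense of scale: with sinusoidal jitter of peak deviation dt on a tone at frequency f0, small-index FM theory puts the first sideband pair at roughly 20*log10(pi*f0*dt) relative to the carrier. A back-of-envelope check (mine, not taken from the measurements below):

```python
import math

def sideband_dbc(f0_hz, jitter_peak_s):
    """First jitter sideband relative to the carrier (small modulation index)."""
    return 20 * math.log10(math.pi * f0_hz * jitter_peak_s)

print(sideband_dbc(11025, 10e-9))    # ~ -69 dBc: 10 ns jitter on an 11 kHz tone
print(sideband_dbc(11025, 500e-12))  # ~ -95 dBc: 0.5 ns, already near -100 dB
```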
 
I've measured the effect of 10, 30 and 100 ns correlated jitter vs. no added jitter, and the effect is amazingly small: no more than 0.2 dB at any frequency point for 100 ns! http://hddaudio.net/viewtopic.php?id=15 - so far, from the same test data, only one person has been able to reliably detect the 10 ns jitter sample, which is broadly in line with what B & G found.
 
In short, the effect of jitter is vanishingly small on all but criminally incompetent gear (http://www.stereophile.com/content/mcintosh-ms750-music-server-measurements), so it is in no way puzzling that DBTs show poor human ability to detect jitter!
 
See also http://hddaudio.net/viewtopic.php?pid=117795#p117795 for what even a relatively cheap DAC can do to remove incipient jitter.
 
 
 
Clearly it is not reasonable to expect everyone discussing hifi components to go to this level of trouble, especially when the likely result will not provide any useful data for the discussion.

 
