A proposed optical digital cable test
Mar 16, 2010 at 2:09 AM Post #46 of 138
Keep in mind this is how I understand things, but I really don't have a clue.

I applaud the effort, but you're not going to prove anything.

As I understand it, you're going to do:
010101010 -> digital, digital, digital, digital -> 010101010

You're going to buffer the data, and that will remove jitter. All you're going to test is whether the data is lost, and it won't be unless you have bad equipment.

010101010 -> digital, digital, digital, digital -> 01 010 1010
This is what they claim causes the difference (notice the spaces).

What I don't understand is why DACs don't buffer (maybe they do, and this is all stupid).
 
Mar 16, 2010 at 3:58 AM Post #47 of 138
A few suggestions for your test-

1 - If you are going to align and trim the same sample of music ten times, you could reduce the risk of misalignment (and of throwing off the analyses) by burning a 1 minute segment only (or using a very short track) and analyzing it in its entirety. I have a program that can manipulate .wav files (Matlab) if you want help with that.

2- To address any criticisms of a spectral analysis missing other important features of sound (someone mentioned PRAT in your analog cable thread), I or google could also produce .wav files of square waves. A spectral analysis should reveal a series of odd harmonics of the fundamental frequency whose amplitudes drop off gradually (as 1/n).

3- Once you have the numbers in your spreadsheet, run an analysis of variance to see if there are any significant differences in the spectral power of the frequency bands/components (e.g. if there are 1000 components, compare two 10 x 1000 arrays from 10 replications of two cables). Excel can at least run one-way ANOVAs, which should suffice. Or you could post those numbers and I can run stats in a couple of minutes. A p-value of < .05 would indicate a statistically significant difference. However, any change in how the samples were acquired (see suggestion 1) could overwhelm subtle or nonexistent differences and give you a false positive.

4- Run these tests with your analog cables instead
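Suggestions 2 and 3 can be sketched in a few lines of Python (assuming NumPy/SciPy are available; the array sizes, noise level, and seed below are invented purely for illustration):

```python
import numpy as np
from scipy.stats import f_oneway

# --- Suggestion 2: spectrum of a square wave ---
fs = 44100                        # sample rate, Hz
f0 = 441                          # fundamental, chosen to land on an exact FFT bin
t = (np.arange(fs) + 0.5) / fs    # 1 second; half-sample offset avoids sign(0)
sq = np.sign(np.sin(2 * np.pi * f0 * t))
spec = np.abs(np.fft.rfft(sq)) / (fs / 2)   # amplitude spectrum, 1 Hz per bin

# Odd harmonics should fall off roughly as 1/n (4/pi, 4/3pi, 4/5pi, ...);
# even harmonics should be essentially absent.
a1, a2, a3 = spec[f0], spec[2 * f0], spec[3 * f0]

# --- Suggestion 3: one-way ANOVA per frequency band ---
# Two hypothetical 10-trial x 5-band arrays of spectral-power deviations (dB).
rng = np.random.default_rng(0)
cable_a = rng.normal(0.0, 0.001, size=(10, 5))
cable_b = rng.normal(0.0, 0.001, size=(10, 5))
pvals = [f_oneway(cable_a[:, k], cable_b[:, k]).pvalue for k in range(5)]
```

With real data you would replace the random arrays with the measured spectral powers from each trial; any band whose p-value falls below .05 would flag a difference, subject to the acquisition caveat in suggestion 1.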
 
Mar 16, 2010 at 12:28 PM Post #48 of 138
Thanks for the suggestions..

Quote:

Originally Posted by eucariote
A few suggestions for your test-

1 - If you are going to align and trim the same sample of music ten times, you could reduce the risk of misalignment (and of throwing off the analyses) by burning a 1 minute segment only (or using a very short track) and analyzing it in its entirety. I have a program that can manipulate .wav files (Matlab) if you want help with that.



I have a bank of wav file segments from 5s to 1 minute collected from my prior experiments, but if Matlab allows more precise alignment that might be useful. I got quite good at alignment to the point where I could get it down to about 1/100,000th of a second (in Audacity).


Quote:

2- To address any criticisms of a spectral analysis missing other important features of sound (someone mentioned PRAT in your analog cable thread), I or google could also produce .wav files of square waves. A spectral analysis should reveal a series of odd harmonics of the fundamental frequency whose amplitudes drop off gradually (as 1/n).


I have a bank of square waves from 50 Hz up to 10 kHz, but Audacity and Cool Edit Pro can also generate square waves.



Quote:

3- Once you have the numbers in your spreadsheet, run an analysis of variance to see if there are any significant differences in the spectral power of the frequency bands/components (e.g. if there are 1000 components, compare two 10 x 1000 arrays from 10 replications of two cables). Excel can at least run one-way ANOVAs, which should suffice. Or you could post those numbers and I can run stats in a couple of minutes. A p-value of < .05 would indicate a statistically significant difference. However, any change in how the samples were acquired (see suggestion 1) could overwhelm subtle or nonexistent differences and give you a false positive.


I was able to "prove" a significant difference between the cable directions in my prior tests using "directional" cables; with enough samples, even an average 0.001 dB difference becomes statistically significant.
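That point, that a trivially small mean difference becomes "significant" once the sample is large enough, is easy to demonstrate. A quick sketch with made-up numbers (a 0.001 dB mean offset buried in 0.01 dB of trial-to-trial noise; SciPy assumed):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
offset = 0.001   # dB: the tiny "directional" difference
noise = 0.01     # dB: trial-to-trial measurement noise

# Same tiny effect, two very different sample sizes
p_small = ttest_ind(rng.normal(offset, noise, 10),
                    rng.normal(0.0, noise, 10)).pvalue
p_large = ttest_ind(rng.normal(offset, noise, 100_000),
                    rng.normal(0.0, noise, 100_000)).pvalue
# With 100,000 samples per group the p-value collapses toward zero even
# though the 0.001 dB effect is far below audibility: statistical
# significance is not the same thing as a meaningful difference.
```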


Quote:

4- Run these tests with your analog cables instead


Shrug; I could test the final analog outputs by proxy, but this would be less pure and more prone to random variation.
 
Mar 16, 2010 at 6:55 PM Post #49 of 138
@ Nick Charles:

I have just finished reading “Theoretical and Audible Effects of Jitter on Digital Audio Quality” (by Benjamin and Gannon) that you recommended and I am pretty much baffled by what I read:
1) They acknowledge that real clock jitter is complex, and yet they choose to simulate different levels of jitter with sinusoids. I can't help but question whether a real-world jittery clock produces the same perfect sine waves at 20 kHz, or whether the phase noise generated by the clock and circuitry is more complex.
In my opinion, anybody with a critical mind should take any figure given by such a test with a grain of salt.
2) The DAC used for the test is not known.
3) And best of all... they used a cheap Sony MDR V6 (!!!!) for their listening tests!
Couldn't they get their hands on a better headphone set?

However, they noted some interesting things:
1) Training enabled people to be more sensitive to jitter.
2) And they concluded by saying that “it should not be assumed that jitter induced distortion is a non-issue”
3) They encourage pursuing all kind of distortions (including jitter) to the lowest levels.

Personally, I don't think that a test that uses such low-grade listening tools (Sony MDR V6) should be cited as an example for audiophiles. If I understand the intent of the authors correctly (I don't want to misquote them), they target the more common consumer market rather than the audiophile market.
 
Mar 16, 2010 at 8:28 PM Post #50 of 138
Quote:

Originally Posted by slim.a
@ Nick Charles:

I have just finished reading “Theoretical and Audible Effects of Jitter on Digital Audio Quality” (by Benjamin and Gannon) that you recommended and I am pretty much baffled by what I read:
1) They acknowledge that real clock jitter is complex, and yet they choose to simulate different levels of jitter with sinusoids. I can't help but question whether a real-world jittery clock produces the same perfect sine waves at 20 kHz, or whether the phase noise generated by the clock and circuitry is more complex.
In my opinion, anybody with a critical mind should take any figure given by such a test with a grain of salt.



The sinusoidal jitter represents signal-correlated jitter, which creates distinct distortion sidebands; this is the worst-case scenario. All waves have amplitude and frequency, an arbitrarily complex jitter waveform behaves fundamentally the same as any other, and the determinants of jitter distortion are the frequency and amplitude of the jitter signal. A more random pattern would be less problematic, as it would just produce random, non-signal-correlated noise, as in the Ashihara paper, which we know to be much less detectable.
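The sideband mechanism is easy to simulate. Below is a rough sketch (my own illustration, not taken from the paper): a 10 kHz tone sampled with sinusoidal jitter of amplitude 1 µs at 1 kHz, chosen wildly exaggerated so the effect is obvious. For small modulation depths the sidebands at f ± f_jitter have amplitude of roughly pi * f * A relative to the carrier:

```python
import numpy as np

fs = 44100            # sample rate, Hz
n = np.arange(fs)     # 1 second of samples
f = 10_000            # signal frequency, Hz (exact FFT bin)
fj = 1_000            # jitter (modulation) frequency, Hz
A = 1e-6              # jitter amplitude in seconds (1 us -- wildly exaggerated)

# Ideal sample times n/fs, perturbed by sinusoidal jitter
t = n / fs + A * np.sin(2 * np.pi * fj * n / fs)
x = np.sin(2 * np.pi * f * t)

spec = np.abs(np.fft.rfft(x)) / (fs / 2)   # amplitude spectrum, 1 Hz per bin

carrier = spec[f]                          # close to 1.0
sidebands = spec[f - fj], spec[f + fj]     # roughly pi * f * A each
```

Halving A halves the sidebands, and doubling f doubles them, which is exactly the point above: the frequency and amplitude of the jitter (times the signal frequency) determine the distortion, not the detailed shape of the jitter waveform.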

Quote:

2) The DAC used for the test is not known


Fig 12 tells you the jitter sensitivity of DAC B. You do not need to know who made it; that is just not relevant.

Quote:

3) And best of all ...they used a cheap Sony MDR V6(!!!!) For their listening tests!
Couldn’t they get their hand on a better headphone set?


The MDR V6 has a pretty flat FR, certainly in the critical 1 kHz to 4 kHz band, and low distortion! These look like decent headphones for detecting distortion. I could point you to some pretty strong subjective reviews of this pair for monitoring, i.e. critical listening tasks, but I don't believe in subjective reviews, so I won't.



Quote:

However, they noted some interesting things:
1) Training enabled people to be more sensitive to jitter.
2) And they concluded by saying that “it should not be assumed that jitter induced distortion is a non-issue”
3) They encourage pursuing all kind of distortions (including jitter) to the lowest levels.


1. Yes, it is a distortion and training always helps, but even the best-trained listeners had thresholds that were pretty big, certainly far bigger than you would find in *most* digital kit available. No subject at any point in any test ever detected jitter of less than 3 ns. AFAIK there is one music streamer (McIntosh) and one DVD/universal player (Oppo) with performance worse than that; most competent digital devices measure below 1 ns.

2. This is what is called in the trade a sop, they also said:

Quote:

"The influence of jitter in causing audible distortion was found to be less than anticipated by the authors, and less than that predicted by both the technical and consumer audio press. Jitter induced by the digital audio interface was *not found to be an audible problem* for any of the program material auditioned"


3. See above

Quote:

Personally, I don't think that a test that uses such low-grade listening tools (Sony MDR V6) should be cited as an example for audiophiles. If I understand the intent of the authors correctly (I don't want to misquote them), they target the more common consumer market rather than the audiophile market.


See above
 
Mar 16, 2010 at 9:03 PM Post #51 of 138
Let's assume for one second that jitter is audible. People who say they hear a difference between transports usually notice changes in the following aspects: soundstage and imaging, the frequency extremes (deep bass and high treble), and low-level details. Those same people take a lot of care in selecting the best DACs, interconnects, headphone amps and headphones. None of them would consider the Sony MDR V6 revealing enough for those characteristics.
So do you honestly believe that the Sony MDR V6 was good enough to reveal changes in soundstage, low-level details and extension at the frequency extremes?
Wouldn't a setup similar to the one used by M. N. Kunchur help get more accurate results?

There is limited measurement of the DAC used. We don't have detailed measurements, nor do we know the kind of design (power supply, output stage, feedback, ...). Again, if you refer to Kunchur's research, you will see that one has to be careful when selecting test equipment.

I always thought that science meant a critical approach. But sadly, that doesn't seem to be the case here.
If you have really convinced yourself that their method and tools were beyond reproach, I am truly saddened for you.

Scientists are not gods; they make assumptions and they also make mistakes. It has happened in many fields other than the audio industry. Kunchur's work is a step in the right direction. He didn't make assumptions about what was good enough, and he had surprising results. He could have used crappy equipment and concluded: the temporal resolution of human hearing is less than what we expected. Would that have been true according to your standards?
 
Mar 16, 2010 at 10:19 PM Post #52 of 138
Quote:

Originally Posted by slim.a
Let's assume for one second that jitter is audible.


We know it is audible, at about 10 ns according to B and G.


Quote:

People who say they hear difference between different transports usually notice changes in the following aspects: Soundstage & Imaging, Frequency extremes (deep bass and high treble) and low level details.


But we do not honestly know how many of these people actually hear a difference and how many merely think they hear a difference, so what they say they notice may or may not have a basis in reality. Unless you have a sample proven unequivocally to be able to detect differences, we are talking speculation here.

Quote:

Those same people take a lot of care in selecting the best DACs, interconnects, headphone amps and headphones. None of them would consider the Sony MDR V6 as being revealing enough for those characteristics.


Best defined as what? Highest priced, most lovingly reviewed by Stereophile, FOTM? What reliable criteria make item X better than Y, especially when we are talking about a very specific discrimination test? I have never heard the Sonys, but I think they have been used quite heavily in studio environments, where the ability to pick out differences may be important.

Quote:

So do you honestly believe that the Sony MDR V6 was good enough to reveal changes in soundstage, low level details and frequency extension at the extremes?


This begins to look a bit like snobbishness. That the Sonys are not a boutique headphone does not mean that they cannot be used to detect distortion. Again, they have a good FR and low distortion; that they are not $1000 is neither here nor there.


Quote:

Wouldn't a set up similar to the one used by M. N. Kunchur help get more accurate results?


I could not possibly speculate, except to say that it seems to have nothing at all to do with listening to music, which formed part of B and G's tests.

Quote:

There is limited measurement of the DAC used. We don't have detailed measurement nor the kind of design (power supply, output stage, feedback, ...). Again, if you refer to Kunchur research, you will see that one has to be careful when selecting test equipment.


The DAC used, DAC B, had a dynamic range (SNR) of 105 dB. I would have liked to see precise FR figures, but I don't need to know what power supply was used. If it had a flat FR and low distortion, that is all that is necessary on top of its known jitter rejection.

Quote:

I always thought that science meant a critical approach. But sadly, it doesn't seem to be the case here.
If you really convinced yourself that their method and tools were beyond reproach, I am really saddened for you.


A forced-choice design would have been a better method than up-down; that is a fair criticism, but forced choice is more likely to produce generous results, i.e. subjects reporting false detections. Beyond that the methods look OK to me: they keep all variables the same except one.

As for critical approach, part of my working life is reviewing academic papers for journals and conferences, I am fully aware of the value of a critical approach.

Quote:

Scientists are not gods, they make assumptions and they also make mistakes. It has happened in many fields other than the audio industry. Kunchur's work is a step in the right direction. He didn't make assumptions about what was good enough and had surprising results. He could have used crappy equipment and concluded : the temporal resolution of human hearing is less than what we expected. Would that have been true according to your standards?


Kunchur lowers the temporal resolution to about 4.7 microseconds, which is interesting in itself and a decent enough finding; Krumbholz (2003) already had it at ~10 microseconds, so this is a decent refinement but not orders of magnitude different. A microsecond is 1000 nanoseconds; in terms of jitter this is an eternity, orders of magnitude worse than what we get in real-world kit. And again, unless you listen to signal generators (which, contrary to what some folks think, I do not), this is wholly moot: the best audio system cannot reproduce the conditions that Kunchur created, so you will never be in a position where anything you actually listen to will show this limitation, even under the most extreme test.


More broadly, I find it somewhat amusing that the folks who are the most fervent jitter-worriers always hedge and fudge when empirical jitter papers are discussed but cannot point to a *single* reliable study that shows jitter to actually be an audible problem.
 
Mar 16, 2010 at 11:32 PM Post #53 of 138
Quote:

Originally Posted by nick_charles
Thanks for the suggestions..

I have a bank of wav file segments from 5s to 1 minute collected from my prior experiments, but if Matlab allows more precise alignment that might be useful. I got quite good at alignment to the point where I could get it down to about 1/100,000th of a second (in Audacity).



Well that should keep power analyses accurate for frequencies far beyond human hearing.

Quote:

I have a bank of square waves from 50 Hz up to 10 kHz, but Audacity and Cool Edit Pro can also generate square waves.


ok

Quote:

I was able to "prove" a significant difference between the cable directions in my prior tests using "directional" cables; with enough samples, even an average 0.001 dB difference becomes statistically significant.


If you're not sure how many samples will keep you from making a type 1 error, you could set a threshold of decibel difference at two standard deviations above/below the mean with the formula

B = 2s / sqrt(n)

where B is the bound/threshold below and above the sample mean, equal to two standard deviations (2 times s, the square root of the variance) divided by the square root of the sample size n.
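As a worked example of that bound (the ten trial deviations below are invented purely for illustration):

```python
import math

# Hypothetical per-trial dB deviations from a reference recording
trials = [0.0012, -0.0008, 0.0021, 0.0003, -0.0015,
          0.0009, -0.0002, 0.0018, -0.0011, 0.0005]
n = len(trials)
mean = sum(trials) / n
# Sample standard deviation (square root of the unbiased variance)
s = math.sqrt(sum((x - mean) ** 2 for x in trials) / (n - 1))
B = 2 * s / math.sqrt(n)   # the bound: two standard deviations over sqrt(n)

# A cable "difference" would only be worth discussing if the observed
# mean deviation fell outside the interval [-B, +B].
outside_bound = abs(mean) > B
```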
 
Mar 20, 2010 at 9:35 PM Post #54 of 138
I did a few dry runs to see if this was a goer. I will cut to the chase: there is enough random variation between trials to make it necessary to do at least 20 trials.

The long answer..

I did 4 trials recording the digital signal from my CD player to the PC using the cheapest optical cable I have, via the Edirol. At no point could I get a recording that was identical to the reference wav file. The level of deviation was small, never above 0.0046 dB, but it was still not perfect.

But there was also random variation between trials, reaching a maximum (max - min) of 0.01 dB at 20564 for one trial. It was normally a lot lower, a mean difference of 0.000639 dB between the highest and lowest values, but this is still enough to get in the way.

As I did more trials, the max difference between the average values of the recordings and the reference got smaller, down to a max of 0.002 dB. It is unlikely, however, that I will get it below 0.001 dB without 20 trials.

The possible sources of this variation are:

The CD player
The Cable
The Edirol
USB
The recording software
Human error

The Edirol seems the most likely culprit.
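The observation that averaging more trials pulls the numbers toward the reference is just the 1/sqrt(n) behaviour of random error. A quick sketch with synthetic spectra (the 0.001 dB noise figure, bin count, and seed are invented; real data would come from the captures):

```python
import numpy as np

rng = np.random.default_rng(2)
bins = 100     # spectral bins compared against the reference
trials = 20    # number of capture trials

# Each trial's per-bin dB deviation from the reference: pure random error here
devs = rng.normal(0.0, 0.001, size=(trials, bins))

single_max = np.abs(devs[0]).max()          # worst bin of one trial
mean_max = np.abs(devs.mean(axis=0)).max()  # worst bin after averaging 20 trials
# Averaging shrinks the random component by roughly sqrt(20) ~ 4.5x,
# leaving any systematic (cable-dependent) difference behind.
```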
 
Mar 21, 2010 at 8:06 AM Post #56 of 138
Quote:

Originally Posted by nick_charles
I did a few dry runs to see if this was a goer. I will cut to the chase: there is enough random variation between trials to make it necessary to do at least 20 trials.

The long answer..

I did 4 trials recording the digital signal from my CD player to the PC using the cheapest optical cable I have, via the Edirol. At no point could I get a recording that was identical to the reference wav file. The level of deviation was small, never above 0.0046 dB, but it was still not perfect.

But there was also random variation between trials, reaching a maximum (max - min) of 0.01 dB at 20564 for one trial. It was normally a lot lower, a mean difference of 0.000639 dB between the highest and lowest values, but this is still enough to get in the way.

As I did more trials, the max difference between the average values of the recordings and the reference got smaller, down to a max of 0.002 dB. It is unlikely, however, that I will get it below 0.001 dB without 20 trials.

The possible sources of this variation are:

The CD player
The Cable
The Edirol
USB
The recording software
Human error

The Edirol seems the most likely culprit.



Hi nick_charles,

I have been reading the following report on high-resolution DACs and jitter: http://www.iet.ntnu.no/courses/fe811...t_audiodac.pdf, and there is a lot of interesting stuff in it.

If you read page 18, it gives general indications on measuring jitter:

Quote:

2.3 Measurement of jitter
Jitter measurements on a data converter are usually done by analyzing the converted output of the device under test (DUT). Performance is then analyzed, and through the model for sampling jitter we can also find the jitter amount and transfer function. Dunn proposed several methods in [Dunn94], which have more or less become the standard for jitter measurements since. In this document we will consider measurements assuming the DUT is a DAC including an integrated or external SP-DIF receiver with clock recovery.
To perform the measurements, we need some equipment:
- A low-jitter digital test signal generator.
- A low-jitter interface (i.e. short cable) from generator to DUT.
- A low-jitter ADC for data acquisition.
- A high-resolution FFT analyzer (or software that does FFT) to evaluate the result in the frequency domain.
- For some tests, a delay modulator to generate jitter.
It is important that the test equipment does not contribute to the result; it is thus very important that the reference source and ADC have very low intrinsic jitter.


The methodology I copied is intended for measuring jitter in DACs, but I don't see why we shouldn't apply the same methodology and care when measuring digital cables.

Indeed, one of the requirements when measuring the jitter of a DAC is to use a short, low-jitter digital cable.
They mention several times in the report that AES/SPDIF cables can have jitter of a few ns (most of it suppressed by the DIRs and PLLs), especially on longer cables.

So if we applied their methodology, we should be able to measure differences between optical cables. Whether that is audible or not is another story, but at least we would have a few reference points.
 
Mar 21, 2010 at 4:13 PM Post #57 of 138
Quote:

Originally Posted by slim.a
So if you apply their methodology, we should have measured differences between optical cables. Whether that is audible or not is another story. But at least we would have a few reference points.


My intent was to measure the differences between cables using digital-to-digital captures, not going through the DA/AD stage. I can capture the analog outputs, but there is more variation in my ADC for this.

Despite the variability found, it may still be possible to do this to some extent, since with enough trials in a mixed (between-subjects/within-subjects) experiment you can separate the variability caused by each factor.

To my mind there are two big suspects for the variation found. The Edirol may not be passing the signal through untouched but resampling and downsampling it, and the variation may be due to dither (signal-decoupled random noise); the only viable USB alternative is the E-MU 0404, which is $185.

The second big suspect is the CD player; it is a modern player but a bitstream machine, and I am unsure whether this would make a difference.


I am beginning to doubt how much useful info I can gather; nevertheless, I committed to this, so I will carry on for a little while anyway, first finishing off the baseline tests, then getting a couple of other cables to try out. I'll resell them here.


As for audible differences I can make samples available including the reference wav files.
 
Mar 21, 2010 at 5:45 PM Post #58 of 138
I remember reading somewhere (or maybe in a long explanation on YouTube?) that on consumer-grade equipment you could record the same source 10 times and each time get 10 different checksums... mostly because the clocks on a PC are not accurate enough.

some leads: sample accurate recording - Google Search

I guess you're running these tests on Vista/W7? Because you really want to use HPET: Guidelines For Providing Multimedia Timer Support

and USB doesn't help either (click on "Technology"): Google Translate

If I ever remember the link I mentioned earlier, I'll report back... thanks for the tests!
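The checksum point is easy to reproduce: two captures that differ by even a single sample hash completely differently, which is why bit comparison is a much stricter test than listening. A minimal sketch (the synthetic bytes stand in for captured 16-bit PCM; real captures would be the wav files themselves):

```python
import hashlib
import struct

# Two "captures" of the same source: identical except for one 16-bit sample
capture1 = struct.pack('<8h', 0, 1000, 2000, 1000, 0, -1000, -2000, -1000)
capture2 = struct.pack('<8h', 0, 1000, 2000, 1001, 0, -1000, -2000, -1000)

md5_1 = hashlib.md5(capture1).hexdigest()
md5_2 = hashlib.md5(capture2).hexdigest()
# A single-sample difference changes the checksum entirely, so matching
# checksums across trials would prove the captures are bit-identical.
```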
 
Mar 21, 2010 at 6:25 PM Post #59 of 138
Would it be better to buy different-length plastic cables rather than multiple glass cables, then?

Also, I couldn't change my yourcablehookup order to add a cable for you, but I don't recommend you buy from them because they charge $15 shipping and a $5 fee for sub-$50 orders.
 
Mar 21, 2010 at 6:41 PM Post #60 of 138
Quote:

Originally Posted by leeperry
I remember reading somewhere (or maybe in a long explanation on YouTube?) that on consumer-grade equipment you could record the same source 10 times and each time get 10 different checksums... mostly because the clocks on a PC are not accurate enough.


Interesting. In my analog cable tests I did get a lot (relatively speaking) of variation, i.e. 100ths and often even 10ths of a dB, whereas with digital-to-digital captures the variations I am getting are much lower.

I updated the drivers for my Edirol and started over...
 
