Head-Fi.org › Forums › Equipment Forums › Sound Science › Jitter Correlation to Audibility

Jitter Correlation to Audibility - Page 10

post #136 of 302
Quote:
Originally Posted by UltMusicSnob View Post
 

Are we highly confident that there is no other possible source of differences in these tracks?

 

Well, you can subtract one sample from the other in an audio editor, and see (or, even better, listen to) how much difference there is.

 

Edit: you can also attenuate the difference signal, and add it to path30n to find a threshold where it becomes audible.


Edited by stv014 - 9/24/13 at 2:29am
post #137 of 302
Quote:
Originally Posted by stv014 View Post
 

 

Well, you can subtract one sample from the other in an audio editor, and see (or, even better, listen to) how much difference there is.

 

Edit: you can also attenuate the difference signal, and add it to path30n to find a threshold where it becomes audible.

Sound Forge's Statistics window reports the peak difference at around -77 dB, which doesn't seem like much.

 

However, when I boost it and listen, it doesn't sound like a pure noise signal. The difference file retains a shadow of the original sounds, with lots of distortion, of course.
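The null test described above can be sketched in a few lines of numpy. The signals here are synthetic stand-ins, not the actual path30 files; the small constant phase offset is an arbitrary assumption standing in for the jitter-induced difference.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs

# Hypothetical stand-ins for the two files: a tone and a copy with a tiny,
# constant phase offset standing in for the jitter-induced difference.
original = 0.5 * np.sin(2 * np.pi * 11025 * t)
modified = 0.5 * np.sin(2 * np.pi * 11025 * t + 1e-3)

# Null test: subtract one from the other and report the peak of the residue
# in dB relative to full scale (what an editor's statistics window shows).
diff = original - modified
peak_db = 20 * np.log10(np.max(np.abs(diff)))
print(round(peak_db, 1))  # → -66.0
```

To hunt for an audibility threshold as suggested earlier, the residue can be scaled by `10**(att_db / 20)` and mixed back into the clean file at progressively lower levels.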

post #138 of 302

It is not supposed to sound like pure noise. After all, it is phase modulation by noise and a mix of sine waves, and that produces sidebands around the original frequencies, in other words, a kind of non-harmonic distortion.
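The sideband mechanism can be demonstrated directly: a single (hypothetical) modulating tone phase-modulates the carrier, and the spectrum shows discrete sidebands at f0 ± fm rather than broadband noise. The frequencies and the 0.01 rad deviation below are illustrative assumptions, not the values used in the test files.

```python
import numpy as np

fs = 96000
t = np.arange(fs) / fs           # one second of signal -> 1 Hz FFT bins
f0, fm = 11025.0, 1000.0         # carrier and (hypothetical) modulating tone
beta = 0.01                      # peak phase deviation in radians

# Jitter modeled as phase modulation: x(t) = sin(2*pi*f0*t + beta*sin(2*pi*fm*t))
x = np.sin(2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t))

spec = np.abs(np.fft.rfft(x * np.hanning(fs)))
spec /= spec.max()               # 0 dB at the carrier

def level_db(f):
    return 20 * np.log10(spec[int(f)])

# Small-angle PM theory predicts sidebands at about beta/2 relative to the
# carrier, i.e. roughly -46 dB here, at f0 - fm and f0 + fm.
print(round(level_db(f0 - fm)), round(level_db(f0 + fm)))  # → -46 -46
```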

post #139 of 302
Quote:
Originally Posted by stv014 View Post
 

 

The lower-jitter (jl) version has ~35-36 ns RMS and ~260-270 ns peak-to-peak jitter. Note, however, that much of the peak-to-peak level comes from low frequency noise, which is not as audible as modulation by high frequency tones. With the noise components removed, leaving only the sidebands, the level is 32 ns RMS and 126 ns peak-to-peak. The other (j) files use the same modulator signal, but multiplied by 3.

 

We are still above the detection threshold for signal-correlated components established by Benjamin and Gannon (20 ns), and very close to the threshold for the random component established by Ashihara et al. (250 ns); of course, both are present, not just one or the other.

post #140 of 302
Quote:
Originally Posted by nick_charles View Post
 

We are still above the detection threshold for signal-correlated components established by Benjamin and Gannon (20 ns), and very close to the threshold for the random component established by Ashihara et al. (250 ns); of course, both are present, not just one or the other.

 

It seems some more level reduction is needed then, even though with path30jl.wav the difference signal is already very "quiet", despite the high amount of jitter added.


Edited by stv014 - 9/24/13 at 8:57am
post #141 of 302
Quote:
Originally Posted by nick_charles View Post
 

 

We are still above the detection threshold for signal-correlated components established by Benjamin and Gannon (20 ns), and very close to the threshold for the random component established by Ashihara et al. (250 ns); of course, both are present, not just one or the other.

And 20 ns is about an order of magnitude more jitter than we'd actually expect to see in any device that's not absolute junk, if I read the previous material correctly. And I would expect that it's the signal-correlated components that are contributing to the piano sound.

post #142 of 302
Quote:
Originally Posted by Digitalchkn View Post
 

 

The vast majority of the time, the audio section is sourced by, say, a 25 MHz AT-cut crystal running through some sort of multi-purpose clock generator/buffer that spits out all sorts of clocks for the CPU, memory, peripheral buses, I/O bridges, etc.  It is reasonably clean, except its PLL bandwidths are often above the audio frequencies. The higher-speed digital standards don't really care about what happens to clocks at audio frequencies.  I have personally measured plenty of motherboard clocks that wander around like crazy at low frequencies -- but they pass all the jitter requirements set out by that fancy 5 Gbps high-speed standard.

 

I did a few more tests of three different onboard audio outputs and a few sound cards; all recording was done with the setup described here, in 96/24 format.

 

ALC887 in a desktop PC (frequency = 11024.7515 Hz, jitter+noise = -79 dBr, which translates to 2.3 ns (RMS) of jitter, although not all of it is necessarily actual jitter):

ALC270 in a laptop (frequency = 11024.5878 Hz, jitter+noise = -86 dBr; it would look somewhat cleaner at 48 kHz sample rate):

ALC850 in an old desktop PC (frequency = 12000.1886 Hz, jitter+noise = -71 dBr); this is an outdated AC97 codec, with a lot of ultrasonic imaging, aliasing, high frequency roll-off, and other problems:

Sound Blaster Live! Value - now that is some really old hardware :) (frequency = 12001.973 Hz, jitter+noise = -64 dBr; the reason it is so high despite the clean-looking graph is a high amount of very low frequency random jitter):

Sound Blaster Audigy SE, 16/24-bit samples (frequency = 11998.3397 Hz, jitter+noise = -86 dBr); note that this card auto-mutes the output when there is no signal, and that most of the peaks on the 16-bit graph are part of the JTest signal:

   

Xonar D1, 16/24-bit samples - finally something that is not ancient and actually performs very well (frequency = 11024.9427 Hz, jitter+noise = -103 dBr):

   

 

The frequency values are the measured frequency of the recorded JTest tone; it should be 11025 Hz or 12000 Hz, depending on whether the DAC has hardware support for the 44.1 kHz sample rate. However, note that the sound card used for recording of course does not have a perfectly accurate clock frequency either, which skews all results somewhat. By timing the real duration of a 10-minute tone against the NTP-synchronized system clock of the PC, I approximated its clock to be ~34 ppm too "fast", but even that might not be correct.
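The duration-timing arithmetic can be sketched as follows. The measured duration here is a hypothetical number chosen to reproduce the ~34 ppm figure; the actual timing was not given.

```python
# Hypothetical numbers: a tone that should last exactly 600 s by sample count
# is timed at 599.9796 s against the NTP-disciplined system clock, meaning
# the sound card's clock runs slightly fast.
nominal_s = 600.0
measured_s = 599.9796

ppm_fast = (nominal_s - measured_s) / nominal_s * 1e6
print(round(ppm_fast, 1))  # → 34.0
```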

 

The jitter+noise figure is the total unweighted RMS level of the signal, referenced to the test tone, measured in a +/- 8 kHz band around it, with a narrow (~1 Hz bandwidth) notch at the tested frequency. In some cases, much of it is not actual jitter, however.
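For reference, here is one plausible convention for converting the dBr figure into RMS jitter. This is an assumption, not necessarily the exact formula stv014's tools use, but it reproduces the ~2.3 ns quoted for -79 dBr on the ALC887.

```python
import math

def sideband_db_to_jitter_ns(level_db, f0_hz):
    """Convert a jitter+noise level (dB relative to the carrier) to RMS jitter.

    Assumes the level is the total RMS of the modulation sidebands and uses
    delta_t = sqrt(2) * 10**(L/20) / (2*pi*f0) -- one common convention.
    """
    ratio = 10 ** (level_db / 20)
    return math.sqrt(2) * ratio / (2 * math.pi * f0_hz) * 1e9

print(round(sideband_db_to_jitter_ns(-79, 11024.75), 1))  # → 2.3
```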


Edited by stv014 - 9/25/13 at 4:31am
post #143 of 302
Quote:
Originally Posted by stv014 View Post
 

 

I did a few more tests of 3 different onboard audio outputs and a few sound cards, all recording was done with the setup described here in 96/24 format.

 

....

 

Thanks for the interesting results.

 

Digging around some implementation guides, it seems the Intel-based systems basically have a common design that sources the audio clock from a general-purpose system clock generator. The 48 MHz USB clock is taken into the I/O hub chip, which in turn generates a 24 MHz clock for an HDA codec such as the ALC887/ALC260.  The only requirements on that clock are stability within +/-100 ppm and jitter of less than 2 ns. The jitter is likely meant to be specified as cycle-to-cycle (judging by the fact that the clock is also used for a digital I/O bus), but that is not explicitly stated. The 24 MHz clock is likely run through another PLL internal to the codec chips to generate the DAC clock. In summary, clearly not an implementation with the audiophile in mind -- it just happens to be clean enough in the cases you looked at.

 

 

Did you detect any anomalies outside of the +/- 8 kHz window in any of the tests?  Did you run any significant system processing while doing these tests (e.g. heavy CPU processing, graphics, drive activity)?

 

Interesting to me that the ALC887 test shows asymmetric first-order spurs around the main tone. There is probably something else stirring.

 

 

I would expect the noise floor measurement to reveal obvious interference from external sources, whereas the tone measurement points to jitter + THD + other impairments.


Edited by Digitalchkn - 9/24/13 at 11:50am
post #144 of 302
Quote:

Originally Posted by Digitalchkn View Post

 

Did you detect any anomalies outside of the +/- 8 kHz window in any of the tests?

 

Well, here are a few of the graphs with 0-48 kHz bandwidth:

       

I did not check the others yet, however. The band-limiting was applied to exclude ultrasonic THD and imaging products, and low frequency noise.

 

Quote:

Originally Posted by Digitalchkn View Post

 

Did you run any significant system processing while doing these tests (e.g. heavy CPU processing, graphics, drive activity)?

 

Nothing significant, although for the Xonar D1 in particular I have some older graphs here (low system activity) and here (high CPU+HDD+GPU activity), where the difference is minor. For the onboard codecs, I would expect a more significant difference. It can obviously also vary with the PC the card is installed in. However, I think generally noise becomes an audible problem sooner than jitter when there are interference issues.

post #145 of 302
Quote:
Originally Posted by stv014 View Post
 

 

Well, here are a few of the graphs with 0-48 kHz bandwidth:

      

I did not check the others yet, however. The band-limiting was applied to exclude ultrasonic THD and imaging products, and low frequency noise.

 

 

Nothing significant, although for the Xonar D1 in particular I have some older graphs here (low system activity) and here (high CPU+HDD+GPU activity), where the difference is minor. For the onboard codecs, I would expect a more significant difference. It can obviously also vary with the PC the card is installed in. However, I think generally noise becomes an audible problem sooner than jitter when there are interference issues.

It could depend on the design. One of the major jitter sources in clocks comes directly from power supply noise, which is itself a function of activity in the system.  In principle, activity in the system could fall within the audio band and inject low-frequency noise (AC noise on top of the DC voltage) into the clock generators, causing added jitter on the clock sources (particularly single-ended clocks such as that 24 MHz clock source).  Naturally, the outboard card should behave more cleanly, since the card's designer has more freedom to regulate these aspects of the design.

post #146 of 302

I have a question for stv014 about your jitter tests.  When you record the J-Test signal, rarely are the two clocks running at the same speed (unless doing a loopback with one sound card).  Do you adjust for that in any way?  I find that if I don't adjust the speed in software, it artificially widens the base of the 11,025 Hz tone.  It doesn't affect the other jitter-induced tones spaced some hundreds of hertz from 11,025 Hz.  But often what looks pretty bad near the base of the tone ends up quite clean if you correct the basic speed of the recorded tone as much as possible.  Published FFTs often show the widened base of the tone and attribute it to low-frequency or close-in jitter.  But you get that same effect from a few ppm of speed difference between the ADC and DAC clocks.

post #147 of 302

stv014,

 

I noticed you wrote about checking the clock speeds with a 10-minute signal.  I usually look at the waveform in Audacity.  You see an up-and-down pattern rather than the straight line you get if both clocks run at the same speed.  The up-and-down pattern is caused by the peaks of the waveform shifting in phase as the clock speeds differ.  The time from one dip to the next is how long it takes the phase to shift by one sample period.  Take the time between two dips, multiply by the sample rate, and you have a ratio for the speed difference between the two clocks.

 

In the sample screenshot below there are 5.4 seconds between the dips (selected in darker gray).  5.4 seconds x 44,100 samples per second is 238,140 samples.  If you turn this ratio into ppm (invert the value and multiply by one million), you get right about 4.2 ppm of speed difference between the ADC and DAC clocks.  This way you don't need a 10-minute recording.  It also lets you check something like a ten-minute recording at the beginning, middle, and end, to see if the clocks' relative speed varies over longer times.
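The arithmetic above, spelled out in a few lines (using the source's own numbers):

```python
fs = 44100
dip_interval_s = 5.4                    # time between two dips in Audacity

# Samples elapsed while the phase slips by one full sample period
samples_per_slip = dip_interval_s * fs  # = 238,140 samples

# Invert the ratio and scale to parts per million
ppm = 1e6 / samples_per_slip
print(round(ppm, 1))  # → 4.2
```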

 


Edited by esldude - 9/24/13 at 8:07pm
post #148 of 302
Quote:
Originally Posted by esldude View Post
 

I have a question for stv014 about your jitter tests.  When you record the J-Test signal, rarely are the two clocks running at the same speed (unless doing a loopback with one sound card).  Do you adjust for that in any way?  I find that if I don't adjust the speed in software, it artificially widens the base of the 11,025 Hz tone.

 

I am aware of that fact, and my older Xonar D1 measurements did indeed correct the pitch error. However, these newer jitter tests were plotted using a large window size and a window type with sufficient sidelobe rejection to make the spectral leakage insignificant on the graphs. Here is a comparison with the commonly used Blackman-Harris window (note: these graphs use a different scale on both axes compared to the ones above, to make the difference more visible):

       

Since the original Y range was -140 to 0 dB, the sidelobes are entirely below that range, and the main lobe has a width of about 6.4 Hz, which translates to ~0.5 pixel on the original X scale (12050 Hz / 936 pixels = 12.87 Hz per pixel).
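The pixel arithmetic, for anyone checking along:

```python
# X axis of the original graphs: 12050 Hz span over 936 pixels, compared
# with the ~6.4 Hz main-lobe width of the analysis window.
span_hz, width_px = 12050, 936
hz_per_px = span_hz / width_px
lobe_px = 6.4 / hz_per_px
print(round(hz_per_px, 2), round(lobe_px, 2))  # → 12.87 0.5
```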

post #149 of 302
Quote:

Originally Posted by esldude View Post

 

I noticed you wrote about checking the clock speeds with a 10-minute signal.  I usually look at the waveform in Audacity.  You see an up-and-down pattern rather than the straight line you get if both clocks run at the same speed.  The up-and-down pattern is caused by the peaks of the waveform shifting in phase as the clock speeds differ.  The time from one dip to the next is how long it takes the phase to shift by one sample period.  Take the time between two dips, multiply by the sample rate, and you have a ratio for the speed difference between the two clocks.

 

In the sample screenshot below there are 5.4 seconds between the dips (selected in darker gray).  5.4 seconds x 44,100 samples per second is 238,140 samples.  If you turn this ratio into ppm (invert the value and multiply by one million), you get right about 4.2 ppm of speed difference between the ADC and DAC clocks.  This way you don't need a 10-minute recording.  It also lets you check something like a ten-minute recording at the beginning, middle, and end, to see if the clocks' relative speed varies over longer times.

 

I did not use a 10-minute recording to measure the relative speed of each DAC compared to the ADC. I simply used my sinetest utility (it can be downloaded from the link in my signature), which can measure frequency accurately (using quadratic interpolation on a large FFT with a Gaussian window) on the recorded JTest signals, which were of course much shorter than 10 minutes. :)
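A minimal sketch of the frequency-measurement idea (not stv014's actual sinetest code): with a Gaussian window, the log-magnitude spectrum of a sine is nearly parabolic, so a three-point parabolic fit around the FFT peak refines the estimate to a small fraction of a bin.

```python
import numpy as np

def estimate_freq(x, fs):
    """Estimate the frequency of a sine via parabolic interpolation on a
    Gaussian-windowed FFT's log magnitude."""
    n = len(x)
    # Gaussian window with std n/8 samples, truncated at +/- 4 sigma
    w = np.exp(-0.5 * ((np.arange(n) - n / 2) / (n / 8)) ** 2)
    mag = np.abs(np.fft.rfft(x * w))
    k = int(np.argmax(mag))
    # Fit a parabola through the log magnitudes of the peak bin and its neighbors
    a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # sub-bin offset of the true peak
    return (k + delta) * fs / n

fs, n = 96000, 1 << 16
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 11024.75 * t)   # hypothetical test tone
print(round(estimate_freq(tone, fs), 2))
```

Even though the FFT bins here are ~1.5 Hz apart, the interpolated estimate recovers the tone's frequency to a few millihertz, which is why a short recording suffices.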

 

However, I also tried to find out the absolute speed of the ADC itself, and that is more difficult without the right equipment for measuring frequency with high accuracy. That is why I played and recorded a very long tone in a loopback configuration on the same sound card, and measured its length in real time, in the hope that the system clock (whose speed is automatically adjusted using NTP) is more accurate and can serve as a reference, at least over a sufficiently long time.

post #150 of 302

This note is just to check that I haven't missed a follow-up file or recommendation. I think it would be useful to nail down how a real 2-3 ns jittered file compares to a non-jittered one. I'll leave it to the experts what sort of reference file to use. Something really recent, that we [somehow?] know had a very accurate clock at the ADC?
