Originally Posted by regal
It's pseudo-science because the DAC developers who come up with the jitter-reducing schemes don't have any reliable way to measure the jitter. ASTM doesn't have a test method for jitter in digital audio; until it does, it is just marketing and black-art stuff. Not saying it's meaningless, just saying it's definitely not an empirical science, yet.
Think about it this way: measuring jitter is a task of the same order as measuring the speed of light. It takes highly calibrated equipment with standards to make it credible.
There are a lot of indirect methods of measuring jitter, but they require a leap of faith that hasn't been accepted anywhere in the literature.
I sure hope that if that person ever needs an MRI, the converters inside the MRI gear operate with low enough jitter, or else the images will be fuzzy and distorted. Jitter is not some audio-only thing. Jitter is an issue for ALL conversion, be it medical, weighing scales, digital video, industrial, telecom... you name it. Jitter is a timing error. Given that nothing is ever perfect, there are always some timing errors, and the question is at what point the error starts to impact what you are trying to do.
Jitter is a part of electronics. There is equipment to measure jitter, and it is costly. Look at the Agilent (the original HP) and Tektronix sites. Some of that gear costs $100,000, and people in the technology field are not buying it just for the heck of it.
Jitter is becoming a major issue as digital communications speed up into the many-GHz range. At, say, 10 GHz, a clock cycle is 100 psec (50 psec up and 50 psec down), so you can see that a timing error of 60 psec can “ruin your day” (you miss or repeat data!). Measuring jitter for audio is still very costly. But good audio test gear is also costly. As a rule, good jitter test gear is over $10,000. Same for good audio test gear. I would call it costly science, not pseudo-science.
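As a quick back-of-the-envelope check on those numbers (just a sketch; the figures are the illustrative ones above):

```python
# Back-of-the-envelope: how much of a 10 GHz clock cycle does 60 psec of jitter eat?
f_clock = 10e9            # 10 GHz clock
period = 1.0 / f_clock    # one full cycle
half = period / 2         # half cycle: 50 psec up, 50 psec down

jitter = 60e-12           # the 60 psec timing error from above
print(f"cycle = {period * 1e12:.0f} psec, half cycle = {half * 1e12:.0f} psec")
print(f"60 psec of jitter is {jitter / period:.0%} of a full cycle")
```

With the edge able to wander over more than a whole half cycle, the receiver can latch the wrong bit, which is the “miss or repeat data” problem.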
For audio, jitter is important because the ear has an ability to hear tiny imperfections, and timing errors cause noise and distortions. The "basics" are not too difficult to understand:
Say you have a video camera and a video projector. The idea is to first “record” video frames with the camera, then to project them with the projector. But for the concept to work, both the camera and the projector must run at a very constant rate. Say your camera (the recording device) takes 50 picture frames each second. You want the time of each frame to be .02 second, because .02 × 50 is one second. You also want the projector (the playback device) to project 50 frames each second at the same rate, one frame every .02 second. What would happen if the projector speed is not steady? Say the first 25 frames take .01 second each, and the next 25 frames take .03 second each. You still have 50 frames in one second, but there are timing errors.
If your movie is of a still object, then there is not much harm done. But say you are filming a ball moving from the left side of the screen to the right in one second. If the first 25 frames are too fast and the next 25 are too slow, the ball will get to the middle of the screen in ¼ second, then it will slow way down and move through the rest of the screen in .75 second. That is a distortion due to timing errors, a jitter-induced distortion.
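To put actual numbers on the ball example, here is a small Python sketch (my own illustration, using the frame counts and timings from the example above):

```python
# The camera records a ball crossing the screen at constant speed:
# 50 frames, one every .02 second, position going from 0.0 (left) to 1.0 (right).
positions = [i / 49 for i in range(50)]

# A broken projector: first 25 frames shown every .01 second,
# next 25 shown every .03 second (still 50 frames in one second).
show_times, t = [], 0.0
for i in range(50):
    show_times.append(t)
    t += 0.01 if i < 25 else 0.03

# Frame 25 holds the mid-screen position, but it is shown far too early:
print(f"mid-screen position {positions[25]:.2f} shown at t = {show_times[25]:.2f} s "
      f"(should be about 0.50 s)")
```

The recorded frames are all correct; only the playback timing is wrong, and that alone distorts the motion.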
Video is not audio, and analogies may be misleading. I am just doing my best to show how a timing error can cause distortions. In audio, the idea is to take samples with the AD at time intervals as constant as possible (lowest jitter), and then play them back with the DA with the lowest jitter possible. If there is jitter, your playback will “place” the samples at the WRONG time. The output voltage of the DA will be distorted.
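Here is a minimal numeric sketch of that “wrong time” effect. It assumes purely random (Gaussian) jitter and a deliberately large 1 nsec RMS figure just to make the effect easy to print; real converter clocks differ in the details:

```python
import numpy as np

fs = 192_000                 # sample rate, Hz
f = 10_000                   # test tone, Hz
tj = 1e-9                    # 1 nsec RMS jitter (deliberately large for the demo)

rng = np.random.default_rng(0)
n = np.arange(8192)
t_ideal = n / fs                                 # where the samples SHOULD land in time
t_real = t_ideal + rng.normal(0.0, tj, n.size)   # where the jittered clock puts them

ideal = np.sin(2 * np.pi * f * t_ideal)
jittered = np.sin(2 * np.pi * f * t_real)        # same sample "numbers", wrong instants

err = jittered - ideal
print(f"RMS error: {err.std():.2e} "
      f"(about {20 * np.log10(ideal.std() / err.std()):.0f} dB below the tone)")
```

The sample values themselves never changed; only the instants did, and an error signal appears anyway.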
You can see that jitter has impact at BOTH the AD and the DA. You need to take the samples at the record side with low jitter. Once the samples are taken, they are just “numbers” describing the sample amplitudes. Those “numbers” (values) can be stored in a memory, on a CD, or whatnot. You can send the data down a cable, and timing is not much of an issue. You can even send music data in “chunks”, and timing jitter is not much of an issue as long as you receive the sample values as they were sent. But when you want to play back, the conversion of the numbers has to happen with good timing, to match what happened at the recording side. That means keeping the conversion jitter low.
How low? That is another matter. I already posted numbers showing the impact in bits. How low can one hear? I will let you argue that, and I did point out that the music itself has to do with it. As in the video example, a slow-moving object (or low-frequency content) is less impacted by jitter. A fast-moving object (or a fast signal at high amplitude) is very impacted by timing errors (jitter).
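There is a simple formula behind the “fast signal at high amplitude” point: the error a timing slip causes is roughly the time error times the signal slew rate, and for a full-scale sine the worst-case slew is 2·pi·f·A. The standard textbook relation for jitter-limited SNR follows from that; a quick sketch with illustrative numbers (not a claim about any specific converter):

```python
import math

def jitter_snr_db(f_signal, tj_rms):
    """Jitter-limited SNR of a full-scale sine: SNR = -20*log10(2*pi*f*tj)."""
    return -20 * math.log10(2 * math.pi * f_signal * tj_rms)

# Worst case for audio: a full-scale 20 kHz tone.
for tj in (1e-12, 100e-12, 1e-9):                # 1 psec, 100 psec, 1 nsec RMS
    snr = jitter_snr_db(20_000, tj)
    print(f"{tj * 1e12:7.0f} psec -> {snr:5.1f} dB (~{snr / 6.02:.1f} bits)")
```

Note how the same jitter costs nothing at low frequencies but several bits of resolution at the top of the audio band.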
I hope my post provides enough basic information to show that jitter is not just one more piece of made-up stuff, such as the "need to keep a minimum cable length".
And while direct measurements of jitter are costly and require much expertise, the IMPACT of jitter, in terms of distortions and noise, is measurable with a good audio test system. The key is to decide what kind of distortion impacts the ear, and at what level. Clearly, if I suddenly stopped your CD player for an hour and then restarted it, you would notice such a huge timing error. That is very extreme. Can you hear a 1 psec slowdown? 100 psec?
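That indirect route is exactly what an FFT on an audio test system shows. A small simulation of it (entirely my own illustration with arbitrary values; sinusoidal jitter is used because it produces clean, easy-to-spot sidebands around the test tone):

```python
import numpy as np

fs, f_tone, f_jit = 96_000, 11_025, 3_000  # sample rate, test tone, jitter frequency
a_jit = 2e-9                               # 2 nsec peak sinusoidal jitter

n = np.arange(65536)
t = n / fs + a_jit * np.sin(2 * np.pi * f_jit * n / fs)  # clock wobbling sinusoidally
x = np.sin(2 * np.pi * f_tone * t)                       # the "analog" output of a jittered DA

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n.size))) + 1e-20)
spec -= spec.max()                         # 0 dB = the test tone itself
freqs = np.fft.rfftfreq(n.size, 1 / fs)

# The jitter shows up as sidebands at f_tone +/- f_jit (dBc = dB relative to the tone):
for f_side in (f_tone - f_jit, f_tone + f_jit):
    k = int(np.argmin(np.abs(freqs - f_side)))
    print(f"sideband near {f_side} Hz: {spec[k - 2:k + 3].max():.0f} dBc")
```

No jitter analyzer is needed to see those sidebands; an ordinary FFT of the analog output reveals them, which is the sense in which the impact of jitter is measurable with audio gear.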
The science is there, and it is very solid and real. For audio, the question is the impact of jitter on the ear. For instrumentation and medical, the question may be “how accurate is the result”. For video, it may be about fuzzy pictures, distortion, coloration…
One can look at some "pictures" and explanations in my 1997 paper "On Jitter": http://www.lavryengineering.com/white_papers/jitter.pdf
I may be deviating from the subject of the thread again. I will take some time to answer off-topic questions and comments addressed to me on the Lavry Forum here at head-fi.