Originally Posted by gevorg
Are you following this thread, or just me?
Since your confusion sounds sincere, I'll clarify it for you:
In post #286, Dan made a very interesting statement that "DA jitter matters at ONE LOCATION - where the digital signal is altered to analog."
This statement made me think about software jitter, so I asked Dan about it in post #296. Software jitter is currently an interesting topic in computer audio that reappears in other threads and forums.
Then Steve replied to my question to Dan with his opinion on the issue (post #316). His reasoning, or lack thereof, gave no support for or against the idea of software jitter.
I hope you can get it from this point on.
Let me say it again as simply as I can:
At the record side:
Sound is air motion. It changes with the music. We capture it with a microphone and convert it to voltage. We amplify it. We take "snap shots" at a high rate, such as 44,100 times a second for each channel. Each sample contains a NUMBER describing the voltage at the sample time. If the air pressure was zero, the voltage was zero, and the sample value is zero. If the air pressure was high, the voltage was high, and so is the sample number. At 44.1KHz, the time between taking the snapshots (samples) is 22.675usec.
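To make the capture step concrete, here is a little sketch of my own (not anything from Dan or Steve), using a made-up 1 kHz test tone in place of real microphone voltage:

```python
import math

SAMPLE_RATE = 44100                # snapshots per second, per channel
INTERVAL = 1.0 / SAMPLE_RATE       # ~22.675usec between snapshots

def capture(duration_s, freq_hz=1000.0):
    """Take voltage 'snapshots' of a test tone at the sample rate.
    Each sample is just a NUMBER describing the voltage at that instant."""
    n = int(duration_s * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq_hz * i * INTERVAL) for i in range(n)]

samples = capture(0.001)           # one millisecond of "music"
print(len(samples))                # 44 samples in 1 ms at 44.1 kHz
print(round(INTERVAL * 1e6, 3))    # 22.676 usec between samples
```

Zero air pressure gives a zero sample value, high pressure a high value, exactly as described above.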
Storing the data:
We can store data on a hard drive, a memory stick or what not. In fact, if one had the patience to, one could write the sample values on a paper. That would call for a lot of paper. You can have an army of cavemen put the sample numbers on rocks. As long as you have the sample values and they are kept in order, you can reconstruct the music.
When storing the data, one does not need to worry about the timing. Say you want to copy a CD or send music over the internet; in such cases it may be of benefit to do it fast, and you can MOVE the data samples faster, so the time interval between samples can be much shorter than 22usec. For data transfer, timing is not an issue as long as the transfer is correct (no loss or alteration of data).
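To put the "timing does not matter for storage" point in concrete terms (a sketch of my own, with made-up sample values): a transfer is judged only on whether the numbers arrive intact and in order, which you can check with a hash, no matter how fast the copy ran.

```python
import hashlib

def checksum(samples):
    """Hash the sample values in order; transfer speed never enters into it."""
    h = hashlib.sha256()
    for s in samples:
        h.update(s.to_bytes(2, "little", signed=True))
    return h.hexdigest()

original = [0, 123, -456, 32767, -32768]   # 16-bit sample values
copy = list(original)                       # copied at ANY speed

# A bit-perfect transfer: same numbers, same order, so same checksum.
print(checksum(original) == checksum(copy))  # True
```

The cavemen chiseling numbers onto rocks would pass the same test, as long as they kept the order.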
Playing the music:
When playing music, we are reconstructing the sound. We want to duplicate the changing air motion. To have that happen, we need to duplicate the voltage.
We do not have the voltage, but we have the NUMBERS – the sample values, and we have them in order (sample #1, sample #2, sample #3…). So we convert the numbers to voltage – that is what a DA does.
But clearly in order to duplicate the original music, we now need to make sure that the conversion from numbers to voltage takes place AT THE SAME RATE as what happened at the record side. In the case of 44.1KHz we need to make sure that each new sample happens at the same 22.675usec time interval.
Clearly, if the timing were consistently short, the music duration would be shorter and the pitch would be higher. But that is not a jitter issue. Jitter is any timing error that makes the conversion of samples happen at the wrong time instead of the intended time. Jitter can increase noise and introduce unwanted sounds that were not part of the music.
The above is pretty simple to understand. Once you do, then you also understand that timing jitter IS CRITICAL at 2 locations. At the AD (recording) and the DA (playback). The timing at the record and playback should be MATCHED, and if it is not, you are ruining the reproduction. To have the timing matched, we keep the time interval between samples THE SAME FOR ALL SAMPLES. Any deviation from the ideal timing is CONVERSION JITTER.
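One way to see why conversion timing is critical (a toy simulation of my own, with an illustrative jitter figure I picked, not a measurement) is to evaluate a tone at slightly wrong instants and compare with the ideal instants. Outputting the right NUMBER at the wrong TIME is equivalent to outputting a slightly wrong voltage:

```python
import math
import random

SAMPLE_RATE = 44100
INTERVAL = 1.0 / SAMPLE_RATE
FREQ = 10000.0            # 10 kHz tone; faster-changing signals suffer more
JITTER_RMS = 2e-9         # 2 nsec of random timing error (illustrative)

random.seed(0)
total, N = 0.0, 44100
for i in range(N):
    ideal_t = i * INTERVAL
    jittered_t = ideal_t + random.gauss(0.0, JITTER_RMS)
    # Same sample number, wrong conversion instant -> wrong voltage:
    total += (math.sin(2 * math.pi * FREQ * jittered_t)
              - math.sin(2 * math.pi * FREQ * ideal_t)) ** 2
rms_error = math.sqrt(total / N)
print(rms_error)  # small but nonzero: conversion jitter shows up as added noise
```

Even nanosecond-scale timing errors produce a measurable error voltage, which is why the conversion clock matters so much more than the transfer clock.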
The data sent to a DA from say a computer, CD transport or other devices may have some timing errors, and that is DATA TRANSFER JITTER. If that jitter gets to be so high that the data communication breaks down, then we are in trouble. But such jitter is relatively huge. We are just moving data from one place to another, just like we do on the internet. As long as the data gets there, we can use it. The transfer rate for SPDIF is such that data changes once every 177nsec, and as long as we can “catch it” on the DA side we are fine.
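The 177nsec figure can be checked with a little arithmetic (my own back-of-envelope, assuming standard S/PDIF framing: 64 bits per stereo frame, biphase-mark coding with two half-cells per bit):

```python
SAMPLE_RATE = 44100        # frames per second (one frame = both channels)
BITS_PER_FRAME = 64        # 2 subframes x 32 bits in the S/PDIF format
CELLS_PER_BIT = 2          # biphase-mark coding: two half-cells per bit

cell_time_ns = 1e9 / (SAMPLE_RATE * BITS_PER_FRAME * CELLS_PER_BIT)
print(round(cell_time_ns, 1))  # 177.2 ns: shortest interval on the wire
```

So the wire can change state roughly every 177nsec, and the receiver only has to "catch" transitions at that coarse granularity; that is why transfer jitter budgets are so much looser than conversion jitter budgets.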
Once the data gets into the DA, we now must make sure to line the samples up with a proper time interval. We have to make sure that the time between samples is correct (22.675usec for 44.1KHz) AT THE LOCATION WHERE WE CONVERT THE NUMBERS TO VOLTAGE.
Of course, the life of a DA designer would be easier if the DA could receive samples that are already spaced correctly in time. But such is not the case.
The computer hardware is not at all an ideal environment for clean signals; it is a noisy place. The computer is a machine oriented towards computation and high-speed data handling. It is electrically noisy. The computer is not oriented towards time precision. The issues are hardware related, not software.
A good transport may output low jitter (good timing) data. But by the time the data gets to the DA, it has had to go through a cable, where it may pick up environmental electromagnetic noise. The SPDIF cable also serves as a ground connection between 2 chassis, so there is a ground loop issue, a termination accuracy issue, and more factors allowing more jitter than intended. And all of that jitter needs to be removed INSIDE THE DA, before the circuit that converts the samples (numbers) to analog voltage.
I would of course agree that if the incoming data has less DATA TRANSFER JITTER, the DA will have an easier time cleaning up such jitter. But in reality, there is always a need to do some serious jitter removal to get the CONVERSION JITTER to a low enough level.
The notion of software jitter is screwed up. A computer operates by assigning instructions to clocks. For example:
Instruction 1: send sample #1 on the next assigned clock
Instruction 2: send sample #2 on the next assigned clock
Instruction 3: send sample #3 on the next assigned clock
The instructions are controlled; the timing is assigned by the machine and is very deliberate. If you intended to send or process data at some clock and it did not happen, you have a breakdown. The software assumes that the timing will be perfect.
In reality, the clocks and the data timing will be off by some and that is jitter (nothing is ever perfect), but the timing errors are due to HARDWARE such as unsteady clock, electrical noise and more.
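A rough sketch of this division of labor (my own illustration, not anyone's actual driver code): software only queues the numbers in order; a physical clock decides WHEN each one is converted, and any wobble in that clock is hardware jitter.

```python
from collections import deque

buffer = deque()                # FIFO between software and DA hardware

def software_side(samples):
    """Software: push sample NUMBERS into the buffer, in order.
    It never controls the physical instant of conversion."""
    for s in samples:
        buffer.append(s)

def hardware_clock_tick():
    """Hardware: on each tick of the physical clock, convert one sample.
    Any wobble in THIS clock is conversion jitter - no software involved."""
    return buffer.popleft() if buffer else 0

software_side([10, 20, 30])
out = [hardware_clock_tick() for _ in range(3)]
print(out)  # [10, 20, 30] - order set by software, timing set by the clock
```

As long as the buffer never runs dry, the software's own scheduling hiccups are invisible at the output; only the clock's behavior is.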
Software jitter? I would ask for a CLEAR DEFINITION of what it means. For example: "Software jitter is a timing error due to (XXXXX)." Of course, in such a definition, XXXXX should not point at any jitter caused by hardware!