Let's fix it at the very end
Originally Posted by sejarzo
So, if we have a "better than average" transport, in good condition, playing a disc in as-new quality, how often does error correction typically kick in? Is it hundreds of times per second? Or once per hundred seconds? Somewhere in between?
Oh nice - may the username help me with the explanation then.
Your question is legitimate, but instead of "how often does error correction typically kick in?" you should ask "which error correction kicks in at all?". This is because it is a fundamental fact that, seen over a long enough distance or time, errors always occur. Every transfer or storage medium suffers from degradation per se, with the difference that digital information (which only exists on a logical level, by the way) is immune within a certain tolerance. The CD is no exception here. The read-out process could be called fairly analog (a high-frequency signal derived from the reflected laser beam), and at the raw level the digital data reconstructed from it will always contain errors. The typical raw error rates quoted vary from 10^-5 to 10^-6, which means up to 1 error every 100,000 bits. Assuming a data rate of about 4 Mbit/s, you can count on roughly 4 to 40 errors per second (!) when everything is in good shape, while CIRC can correct more than 200 per second in the worst case. Since there are several "levels" of error correction, the question is always how many errors occur at which level. The exact number of raw-level errors a pickup system is able to correct depends on the details of its decoding and correction implementation. E32 errors (three or more erroneous symbols at the C2 stage) exceed the correction capabilities of most devices, which will then fall back to error concealment.
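The back-of-the-envelope arithmetic above can be checked in a few lines (a sketch; the 4.3218 Mbit/s figure is the nominal CD channel bit rate, and the BER range is the one quoted above):

```python
# Rough estimate of raw (pre-CIRC) read errors per second on a CD.
CHANNEL_BIT_RATE = 4_321_800  # nominal CD channel rate, bits per second

def raw_errors_per_second(bit_error_rate: float) -> float:
    """Expected raw errors per second for a given raw bit error rate."""
    return CHANNEL_BIT_RATE * bit_error_rate

# Typical raw BER range quoted for CD: 1e-6 .. 1e-5
low = raw_errors_per_second(1e-6)   # roughly 4 errors/s
high = raw_errors_per_second(1e-5)  # roughly 43 errors/s
print(f"{low:.0f} .. {high:.0f} raw errors per second")
```

Even at the good end of that range, CIRC is correcting errors every second of playback; it is routine operation, not an exceptional event.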
So maybe you see the point. One has to strictly distinguish between error correction and error concealment. While the former is mandatory and fundamental for every digital system, the latter is the last resort when all other measures have failed: the player tries to mask the error as well as possible. There are also differences in implementation here. Someone has tested a few devices in this regard; if you're interested, I'll post the link.
To return to your basic question: with a disc in good condition and a transport that merits the name, there will always be errors, especially coming out of the C1 decoder. Don't worry about those, because C2 errors normally don't occur; if they do, something has already gone wrong. You can verify this yourself by recording the S/PDIF output of any decent player and comparing the result with what you get via digital audio extraction. After correcting the offsets (starting points), there will be no differences, provided the player's output is bit-perfect (not all are, which is a real shame).
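The offset-then-compare step can be sketched like this (a toy illustration on short sample lists; a real comparison runs over entire rips, and the function name here is my own):

```python
def find_alignment(ref, test, max_shift=64):
    """Return the shift that lines `test` up with `ref` sample-for-sample,
    or None if no shift up to max_shift gives a bit-perfect match."""
    for shift in range(max_shift + 1):
        n = min(len(ref), len(test) - shift)
        if n > 0 and test[shift:shift + n] == ref[:n]:
            return shift
    return None

rip = [101, 102, 103, 104, 105]          # samples from digital audio extraction
spdif = [0, 0, 101, 102, 103, 104, 105]  # same data, captured with a 2-sample offset
print(find_alignment(rip, spdif))        # prints 2: bit-perfect after alignment
```

If no shift produces a match, the player's output is not bit-perfect and something other than the start offset differs.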
For further information on CD technology, I can highly recommend the book "The Compact Disc Handbook" by Ken C. Pohlmann.
A note at the end: different players may sound different due to internal jitter and the different DACs used, and different transports may make an external DAC sound different due to jitter in the signal path. But as long as the data is correct, there is no direct connection between the drive and the sound.
I hope I could bring some light into the dark of myth and confusion.
Originally Posted by infinitesymphony
It's already been pointed out that computer drives and the mechanisms employed in audio players are not the same, so there's not much reason to compare them. What does fast, asynchronous reading prove about a device's ability to send synchronized timing signals?.
Actually, they don't differ as much as some here seem to think. Of course, PC drives are more advanced and thus probably more susceptible to the fake errors some commercial CDs employ (such as deliberately corrupted session pointers, which simple single-session devices like stand-alone CD players ignore), but when playing an audio CD, a PC drive behaves the same as any audio CD player, although its error concealment strategies may differ. Please note that the conversion timing is decoupled from the actual reading process there as well (keyword: FIFO). You are right that the process is more "realtime" in a stand-alone player and thus more sensitive to dropouts and errors, since there is little (if any) time for retries.
Originally Posted by infinitesymphony
Jitter has nothing to do with the actual D/A conversion, it relates to either the reading of the CD (doubling or skipping of audio samples), or timing errors in the reception of the digital information from transport to source.
I think you're mixing things up here. When it comes to the actual D/A conversion (still assuming the data per se is intact), the only thing that affects it is jitter. The skipping or repeating of audio samples (an unloved behavior of early drives) actually has nothing to do with the term "jitter". Although the word is sometimes used that way, it is an unfortunate choice of naming and only adds to the confusion already present.
Originally Posted by ezkcdude
Does jitter matter with EAC or CD-ROM? No, that's ridiculous.
Very interesting statement. And you're right: of course it is ridiculous. But why is it ridiculous?
You seem to implicitly recognize that the timing of the copying process can't have any impact on the actual data, because the destination (in most cases a HDD) doesn't care about delays: the data is reclocked anyway when it is read back. Good point, and you are correct. But why do you think differently when it comes to sending the same data to an external DAC? It is exactly the same situation, with the difference that the actual realtime process starts inside the DAC (in fact, S/PDIF is a realtime interface as well, but the data may be reclocked anyway regardless of this detail).
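The FIFO argument can be made concrete in a few lines (a sketch with an idealized buffer; real hardware drains it at a fixed sample clock, but the point about the data is the same):

```python
from collections import deque

def through_fifo(bursts):
    """Push data into a FIFO in irregular bursts, then clock it out in order.
    The output depends only on the data, never on the arrival pattern."""
    fifo = deque()
    out = []
    for burst in bursts:              # arrival chunking/timing may be erratic
        fifo.extend(burst)
        while fifo:                   # reclocked, steady drain
            out.append(fifo.popleft())
    return out

samples = list(range(10))
bursty = through_fifo([samples[:3], samples[3:4], samples[4:]])
steady = through_fifo([samples[:5], samples[5:]])
print(bursty == steady == samples)    # prints True
```

However the reads are chunked or delayed, the buffer output is the same sequence of samples; only the clock that drains the buffer matters for the conversion.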
Jitter in a transport system may lead to audible differences. I can't say much about that, since I have never heard any jitter artefacts myself. However, trying to fix it at the source is the wrong approach.
So my suggestion is: use any source that simply provides the correct data with timing a receiver can still lock onto, and spend all the effort on the DAC.