For the engineers, perhaps you can help me understand this. I've been told on occasion that error correction can address timing errors in digital signals. I'm not sure about this. Could anyone clarify this for me?
I'm not sure if you're asking the question you're intending to ask, but maybe it should be answered as is, first.
First off, what are we considering a timing error?
Every digital transmission involves sending a real-world analog signal of some kind. The receiving end just interprets the received analog signal in some predetermined way and uses that to recover the digital information encoded into that analog signal. On the transmitter side, there is a mapping of digital information to an analog signal, and more or less the reverse process is attempted at the receiving end. No analog system is perfect, so there are going to be some small drifts in timing, magnitude, phase, and so on. As long as those errors stay small, whether in timing or anything else, the receiver will still correctly interpret all the digital information that was encoded.
A timing error could be severe enough (or maybe you're talking about a different kind of timing) that the receiver doesn't even know it's supposed to be getting data. Whoops. Or, on a smaller scale, it could be bad enough that the received analog signal is distorted to the point that the receiver misinterprets what the digitally encoded original data was. Maybe one or two 0's or 1's get interpreted as the wrong thing.
Error correction can take two different forms: one is to use some kind of data redundancy scheme in the transmission, and the other is to provide a mechanism for retransmitting any "bad" (erroneous) data. Some systems use both kinds of error correction, some use one but not the other, and some use neither.
For an example of redundancy, suppose that instead of sending 1 0 1 0 1 1 1, I send 111 000 111 000 111 111 111 (I send each digit three times). If there are a few scattered errors and the receiver interprets the signal as 101 000 110 100 111 111 011, it can infer by majority vote that most likely 111 000 111 000 111 111 111 was sent (it's more likely that 1 digit was wrong in a grouping than 2, and that none were wrong than all 3), and so the original information was 1 0 1 0 1 1 1, which is correct. That's an extremely primitive and inefficient form of data redundancy, but it lets the receiver correct errors in the transmission.

For the other part of error correction, suppose it's determined that some data is corrupt, i.e. some bits are wrong. Then maybe the receiver can tell the transmitter to send the data again, and keep doing so until it gets the data correctly. (How can the receiver figure out that the data is bad? There are many ways, but often some cyclic redundancy check (CRC) or other kind of checksum is applied to the data. If there is an error, then with extremely high probability, a few calculations on the received data will reveal that some of it is faulty.)
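To make that concrete, here's a minimal sketch of the "send each digit three times, then take a majority vote" scheme described above. The function names are just made up for illustration; real systems use far more efficient codes (Hamming, Reed-Solomon, etc.) rather than plain repetition.

```python
def encode(bits):
    """Repetition code: send each bit three times, e.g. [1, 0] -> [1,1,1, 0,0,0]."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority vote over each group of three received bits."""
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

original = [1, 0, 1, 0, 1, 1, 1]
# The corrupted stream from the example above: 101 000 110 100 111 111 011
received = [1,0,1, 0,0,0, 1,1,0, 1,0,0, 1,1,1, 1,1,1, 0,1,1]
assert decode(received) == original  # every scattered single-bit error is corrected
```

Note that this only works while errors stay scattered: if two of the three copies in one group get flipped, the majority vote picks the wrong bit, which is why practical codes trade off redundancy against how many errors they can correct.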
So error correction can treat some of the symptoms of having timing errors, if they are actually problematic—they may not be, depending on what you consider a "timing error".
Since this is an audio forum, you're probably thinking about the context of sending data to DACs. Keep in mind that a DAC's performance depends (a little bit) on the timing of its own output. The timing of the signalling used to send data to the DAC is irrelevant, unless it actually impacts the DAC's output timing.
Note: for some S/PDIF DAC implementations, the timing of the signalling actually is quite relevant. That said, the magnitude of error is generally so low for most equipment that there is little or no established evidence that it has an audible, negative impact on the sound.
I assume jitter is the timing error. I've heard people throw around timing errors and data errors when discussing digital signals (whether HDMI or digital coax). Error correction as I understand it fixes data errors. It makes sure that the value to be output is correct. But to my knowledge, and you can correct me if I'm wrong, it doesn't say or do anything about *when* the DAC should output it.
Some people have told me that error correction does not deal with or address timing issues. That is handled separately. Other people tell me that timing issues are handled by error correction, in addition to data errors. So I'm getting confused here.
"Error correction" is used to mean (data) error correction. It doesn't fix timing issues. It can fix data errors, which can in turn result from timing issues.
Again, make a careful distinction between the timing of a digital transmission and the timing of the DAC's output. The timing of the DAC is mostly up to the DAC designer, whereas consumers tend to be fiddling around with and worrying (often without merit) about the data transmission. If the DAC is clocked completely independently of the data communication timing, then the data communication timing can be absolutely horrendous and not impact anything (unless it's so bad as to cause data errors).
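Here's a toy sketch of that "independently clocked" idea, with made-up function names (real DACs do this in hardware with FIFOs and a local oscillator, not Python): samples arrive at irregular, jittery times and go into a buffer, and the DAC reads them out at its own steady clock rate, so arrival jitter never reaches the output as long as the buffer neither underruns nor overruns.

```python
from collections import deque

buffer = deque()  # decouples jittery arrival timing from steady output timing

def on_data_received(sample):
    # Called at irregular (jittery) times by the transport side.
    buffer.append(sample)

def on_dac_clock_tick():
    # Called at precise intervals by the DAC's own local clock.
    if buffer:
        return buffer.popleft()  # output timing is set by the DAC clock alone
    return 0  # underrun: data didn't arrive in time (this WOULD be audible)

# Jittery arrivals, steady output:
for s in [10, 20, 30]:
    on_data_received(s)
out = [on_dac_clock_tick() for _ in range(3)]
assert out == [10, 20, 30]
```

The point of the sketch is the last branch: transmission timing only matters once it gets so bad that the buffer runs dry (or the data itself is corrupted), which matches the "unless it's so bad as to cause data errors" caveat above.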
Yes, but maybe that was misleading.
In practice, for wireline communications at the kinds of distances and interfaces we're talking about, data errors pretty much don't happen (as a result of timing issues or anything else). For example, standards-compliant USB cables need to support bit error rates on the order of 10^-12, or one wrong bit in 1,000,000,000,000 of them. If you're talking about jitter in particular, it has to be really, really, really bad to result in a data error.
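To put that 10^-12 figure in perspective, here's a back-of-envelope calculation for a CD-quality stream (the bitrate numbers are standard CD audio; the 10^-12 rate is the USB figure quoted above):

```python
ber = 1e-12                         # bit error rate, one wrong bit per 10^12
bitrate = 44_100 * 16 * 2           # samples/s * bits/sample * channels (CD audio)
bits_per_hour = bitrate * 3600      # ~5.08e9 bits in an hour of playback

expected_errors_per_hour = ber * bits_per_hour
# ~0.005 errored bits per hour, i.e. roughly one flipped bit per
# ~200 hours (over a week) of continuous playback, even before any
# error detection or correction is applied.
```

Which is why, at these interfaces and distances, "the data arrived wrong" is almost never the thing worth worrying about.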