Quote:
Originally Posted by LawnGnome
Show how those qualities affect audio.
You ready?
First, we need to realize that yes, indeed, bits are bits, and it's unlikely that a cable will cause any bits to drop. BUT, digital noise at frequencies well within the human hearing range can be caused by timing errors introduced by the cable, if it presents an impedance mismatch to either the transport or the DAC. Take a deep breath (this was gleaned from another audio board, as posted by an Audiogon member, and may be viewed as "unsubstantiated"), but it covers the topic quite well:
"Let's talk about jitter. Jitter is not a harmonic distortion. It is a clock timing error that introduces an effect called phase noise, either when a signal is sampled (at the A/D), when it is reconstructed (at the D/A), or both.
Think of it this way: a sine wave goes through 360 degrees of phase over a single cycle. Suppose we were to sample a sine wave whose frequency was exactly 11.025 kHz. This means that with a 44.1 kHz sample rate we would take exactly four samples of the sine wave every cycle. The digital samples would each represent an advance in the sine wave's phase by 90 degrees (1/4 of a cycle). The DAC clock is also supposed to run at 44.1 kHz; think of this as a "strobe" that occurs every 22.676 microseconds (millionths of a second) that tells the DAC when to create an analog voltage corresponding to the digital word currently in the DAC's input register. In the case of our sine wave, this creates a stairstep approximation to the sine wave, four steps per cycle. Shannon's theorem says that by applying a perfect low-pass filter to the stairsteps, we can recover the original sine wave (let's set aside quantization error for the moment... that's a different issue). Jitter means that these strobes don't come exactly when scheduled, but a little early or late, in random fashion. We still have a stairstep approximation to the sine wave, and the levels of the stair steps are right, but the "risers" between steps are a little early or late -- they aren't exactly 22.676 microseconds apart.
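The four-samples-per-cycle arithmetic in the paragraph above is easy to check with a few lines of Python. This is just a sketch of the numbers in the example (44.1 kHz sample rate, 11.025 kHz tone); none of it comes from a real DAC:

```python
import math

FS = 44_100.0          # sample rate (Hz)
F0 = 11_025.0          # tone frequency: exactly FS / 4
T = 1.0 / FS           # ideal sample period, about 22.676 microseconds

# Four ideal samples per cycle, each advancing the phase by 90 degrees.
samples = [math.sin(2 * math.pi * F0 * n * T) for n in range(8)]
# The phase steps through 0, 90, 180, 270 degrees, so the sample values
# cycle through 0, 1, 0, -1 (within floating-point rounding).
```

Run it and you can confirm the strobe spacing: 1/44,100 s is 22.676 microseconds, and the samples land on the 0, +1, 0, -1 pattern the quarter-cycle phase steps predict.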
When this stairstep is low-pass filtered, you get something that looks like a sine wave, but if you look very closely at segments of it, you will discover that they don't correspond to a sine wave at exactly 11.025 kHz but sometimes to a sine wave at a tiny bit higher frequency, and sometimes to a sine wave at a tiny bit lower frequency. Frequency is a measure of how fast phase changes. When a stairstep riser, which corresponds to 90 degrees of phase of the sine wave in our example, comes a little early, we get an analog signal that looks like a bit of a sine wave at slightly above 11.025 kHz.
Conversely, if the stairstep riser is a bit late, it's as if our sine wave took a bit longer to go through 1/4 of a cycle, as if it had a frequency slightly less than 11.025 kHz. You can think of this as a sort of unwanted frequency modulation, introducing broadband noise into the audio. If the jitter is uncorrelated with the signal, most of the energy is centered around the true tone frequency, falling off at lower and higher frequencies. If the jitter is correlated with the signal, peaks in the noise spectrum can occur at discrete frequencies. Of the two effects, I'd bet the latter is more noticeable and objectionable.
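To put a number on the "unwanted frequency modulation" idea: a riser that arrives early squeezes the quarter cycle into less than one sample period, so the apparent instantaneous frequency rises; a late riser stretches it and the frequency falls. A minimal sketch (the 1 ns jitter figure is an arbitrary illustration, not a measured value):

```python
F0 = 11_025.0                  # tone frequency (Hz)
T = 1.0 / 44_100.0             # ideal riser spacing: one quarter cycle
dt = 1e-9                      # assume 1 ns of jitter on one riser

# A quarter cycle normally spans T seconds. An early riser squeezes it,
# a late riser stretches it, shifting the apparent frequency of that
# little segment of the reconstructed sine wave.
f_early = 0.25 / (T - dt)      # slightly above 11.025 kHz
f_late = 0.25 / (T + dt)       # slightly below 11.025 kHz
```

Even a single nanosecond of timing error on a 22.676-microsecond step moves the apparent frequency of that segment by about half a hertz in this example, which is the phase-noise smearing described above.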
Where does jitter come from? It can come from trying to construct the DAC clock from the SPDIF signal itself. The data rate of the SPDIF signal is 2.8224 Mb/sec = 64 bits x 44,100 samples/sec (the extra bits are used for header info). The waveforms used to represent ones and zeroes are designed so that there is always a transition from high to low or low to high at each bit boundary, with a "zero" holding a constant level for the whole bit and a "one" having within it an extra transition from high to low or low to high (depending on whether the previous symbol ended "high" or "low"). Writing down an analysis of this situation requires advanced mathematics, so suffice it to say that if one does a spectrum analysis of this signal (comprising a sequence of square pulses), there will be a very strong peak at 5.6448 MHz (= 128 x 44.1 kHz). A phase-locked loop can be used to lock onto this spectral peak in an attempt to recover a 5.6448 MHz clock signal, and if we square up the sine wave, a simple 128:1 countdown divider will produce a 44.1 kHz clock. Simple, but the devil is in the details. The problem is that the bit stream is not a steady pattern of ones and zeroes; instead it's an unpredictable mix of ones and zeros. So if we look closely at the spectrum of the SPDIF waveform we don't find a perfect tone at 5.6448 MHz, but a very high peak that falls off rapidly away from that frequency. It has the spectrum of a jittered sine wave! This means the clock recovered from the SPDIF data stream is jittered.
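The line coding described above (transition at every bit boundary, an extra mid-bit transition for a "one") is biphase-mark coding, and it's small enough to sketch in Python. This is a toy encoder for illustration only; real SPDIF framing adds preambles and status bits that are ignored here:

```python
def biphase_mark(bits, start_level=0):
    """Toy biphase-mark encoder. Every bit cell begins with a level
    transition; a '1' adds a second transition at mid-cell, while a '0'
    holds its level for the whole cell. Returns one (first_half,
    second_half) level pair per input bit."""
    level = start_level
    cells = []
    for b in bits:
        level ^= 1             # transition at every cell boundary
        first = level
        if b:                  # a '1' transitions again at mid-cell
            level ^= 1
        cells.append((first, level))
    return cells

cells = biphase_mark([1, 0, 1, 1, 0])
```

Because the boundary transitions happen regardless of the data, the stream always carries the 5.6448 MHz clock component a PLL can lock to, while the data-dependent mid-cell transitions are exactly the "unpredictable mix of ones and zeros" that smears the spectral peak.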
The jitter is there due to the fundamental randomness of the data stream, not because of imperfections in transmitting the data from transport to DAC, or cable mismatch, or dropped bits or anything else. In other words, even if you assume PERFECT data, PERFECT cable, PERFECT transport, and PERFECT DAC, you still get jitter IF you recover the clock from the SPDIF data stream. (You won't do better using IMPERFECT components, by the way.) The way out of the problem is not to recover the DAC clock from the data stream. Use other means. For example, instead of direct clock recovery, use indirect clock recovery. That is, stuff the data into a FIFO buffer, and reclock it out at 44.1 kHz, USING YOUR OWN VERY STABLE (low-jitter) CLOCK -- not one derived from the SPDIF bitstream. Watch the buffer, and if it's starting to fill up, bump up the DAC clock rate a bit and start emptying the buffer faster. If the FIFO buffer is emptying out, back off the clock rate a bit. If the transport is doing its job right, data will be coming in at a constant rate, and ideally, that rate is exactly 44,100 samples per second (per channel). In reality, it may be a bit off the ideal and wander around a bit (this partly explains why different transports can "sound different" -- these errors can make the pitch a bit off, or make it wander around a tiny bit).
Note that recovering the DAC clock from the SPDIF data stream allows the DAC clock to follow these errors in the transport's data clock rate -- an advantage of direct clock recovery. But use a big enough buffer so that the changes to the DAC clock rate don't have to happen very often or be very big, and even these errors are overcome.
Thus indirect clock recovery avoids jitter and overcomes transport-induced data rate errors (instead of just repeating them). Better audio DACs, such as the Levinson 360S, use this FIFO buffering and reclocking idea to avoid jitter. In principle, a DAC that uses this kind of indirect clock recovery will be impervious to the electrical nuances of different digital cables meeting SPDIF interface specifications."
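The FIFO-and-reclock scheme the quote describes can be sketched as a toy Python model. To be clear, this is not how the Levinson 360S (or any real DAC) is implemented; the class name, buffer depth, and gain constant are all made up for illustration:

```python
import collections

class ReclockingDAC:
    """Toy model of indirect clock recovery: incoming samples go into a
    FIFO, and an independent local clock pulls them out. The local clock
    rate is nudged toward keeping the buffer half full, rather than being
    derived from the incoming bitstream."""

    def __init__(self, depth=1024, nominal_rate=44_100.0, gain=0.01):
        self.fifo = collections.deque(maxlen=depth)
        self.depth = depth
        self.nominal = nominal_rate   # the ideal 44.1 kHz output rate
        self.rate = nominal_rate      # current local clock rate (Hz)
        self.gain = gain              # how aggressively the rate is trimmed

    def push(self, sample):
        self.fifo.append(sample)      # data arriving from the transport

    def pop(self):
        # Trim the local clock: buffer filling -> speed up a bit,
        # buffer draining -> back off a bit, as in the quote.
        error = (len(self.fifo) - self.depth / 2) / (self.depth / 2)
        self.rate = self.nominal * (1 + self.gain * error)
        return self.fifo.popleft() if self.fifo else 0
```

The design point the quote makes is visible here: the output strobe comes from the local clock (`self.rate` stays within a fraction of a percent of nominal), so bitstream-level timing noise on the input never reaches the conversion instant; only slow average-rate drift does.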
So by this estimation, not only can errors occur as a result of signal-carrying cables (since they're not perfectly impedance matched and may be affected by EM, i.e., poor shielding, especially at SPDIF's transmission frequency... go read some transmission-line theory to see that a less-than-optimal impedance match can set up standing waves within the cable, causing timing errors, or jitter, yada yada), but by introducing jitter, they alter the frequency of the original signal (what we hear as pitch or tone) and potentially introduce spurious noise at frequencies which fall directly within the audio band that most of us can actually hear.
But, the flipside is that bits are bits...
regardless, my bits is music.
-reference thread, there's MUCH more worth reading here:
AudiogoN Forums: Why do digital cables sound different?