Perhaps I wasn't clear enough in my really long post, so here is another.
Jitter matters when you don't have flow control. As in my Toslink example, the sender is forced to send data at exactly the rate the receiver consumes it, so any significant timing variation causes problems. Without buffers, late data causes artifacts because it didn't reach the receiver in time; early data causes artifacts because the receiver isn't ready for it yet. Buffers can smooth out jitter -- the bigger the buffer, the less effect jitter has. But buffering has its limits: you can still overrun or underrun a buffer, and it introduces latency because you have to wait for the buffer to fill up Edit: [and for bits to travel through it].
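To make the buffer trade-off concrete, here's a toy simulation (the model and every number in it are mine, purely for illustration, not from any real audio gear): a jittery sender delivers 0, 1 or 2 samples per tick, the receiver always consumes exactly 1, and we count how often the buffer runs dry or spills over.

```python
import random

def simulate(buffer_size, n_samples=10_000, jitter=0.4, seed=42):
    """Toy jitter model: each tick the sender delivers 0-2 samples
    (jittery), the receiver always consumes exactly 1.
    Returns (underruns, overruns)."""
    rng = random.Random(seed)
    level = buffer_size // 2              # start half full
    underruns = overruns = 0
    for _ in range(n_samples):
        # Jittery arrival: usually 1 sample, sometimes 0 or 2.
        r = rng.random()
        arrived = 0 if r < jitter / 2 else 2 if r > 1 - jitter / 2 else 1
        level += arrived
        if level > buffer_size:           # overrun: drop the excess
            overruns += level - buffer_size
            level = buffer_size
        if level == 0:                    # underrun: nothing to play
            underruns += 1
        else:
            level -= 1                    # receiver consumes one sample
    return underruns, overruns
```

Run it with a tiny buffer and a big one and the big buffer absorbs the same jitter with far fewer underruns -- at the cost of the latency it takes to half-fill it before playback starts.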
Bits are not always bits: you can get errors from EM interference, impurities in the fibre, or whatnot. The probabilities can be reduced significantly, but when an error happens you're SOL -- either use the broken data, try to fix it, or drop it... unless you have flow control and error detection / correction. Even then, those only reduce the probability of errors getting through. In typical circumstances they're enough.
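Here's the "reduce, not eliminate" point in miniature, using a single parity bit (the simplest detection scheme I can think of -- real links use CRCs, but the principle is the same): one flipped bit gets caught, two flipped bits sail right through.

```python
def parity(bits):
    """Even parity over a list of 0/1 bits."""
    return sum(bits) % 2

def frame(bits):
    """Append a parity bit so the receiver can detect an odd number of flips."""
    return bits + [parity(bits)]

def check(framed):
    """True if the frame looks intact (even parity holds)."""
    return parity(framed) == 0

data = [1, 0, 1, 1]
f = frame(data)
assert check(f)                              # clean frame passes

one_flip = f.copy(); one_flip[0] ^= 1
assert not check(one_flip)                   # single bit error detected

two_flips = f.copy(); two_flips[0] ^= 1; two_flips[1] ^= 1
assert check(two_flips)                      # double error slips through
```

Stronger codes shrink that blind spot, but every scheme has one -- which is why detection without retransmission only lowers the odds.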
Why does the quality of digital^H^H^H^H^H^H^H S/PDIF cables matter? Because
THERE IS NO FLOW CONTROL AND NO ERROR MANAGEMENT!!!!! You are absolutely and completely at the mercy of jitter and errors. Don't believe me? There's less than a foot of wall between my computer and the washer / dryer. Every time the motors change speed or the buzzer goes off, the coax connection to my receiver goes absolutely bonkers!!!
The reason people prefer coax over optical is that, if you're sneaky, the sender can get limited feedback through the ground wire. The sender can get a feel for the jitter on the wire and compensate by speeding up or slowing down the data flow. Errors are another story: even if the sender knew about them, S/PDIF doesn't support retransmits.
Now, the Ethernet line from my Audiotron to my MP3 server runs along the same path. When the EM storm hits, the collision / error indicators on my hub light up. But, because Ethernet has extra bandwidth, flow control and error detection, Edit: [it can retransmit damaged packets and refill the player's buffers before they run out].
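That retransmit loop is the whole trick, so here's a stop-and-wait ARQ sketch of it (my own toy model -- the function names, checksum, and error rate are invented for the demo, not any real protocol's API): a noisy channel corrupts frames at random, and the sender just keeps resending each chunk until its checksum verifies at the far end.

```python
import random

def noisy_channel(payload, rng, error_rate=0.3):
    """Toy EM storm: with some probability, corrupt the checksum field."""
    if rng.random() < error_rate:
        return (payload[0], payload[1] ^ 0xFF)
    return payload

def send_reliably(chunks, rng, max_tries=50):
    """Stop-and-wait ARQ sketch: retransmit each chunk until its
    checksum verifies at the receiver. S/PDIF can't do this; a link
    with feedback and spare bandwidth can."""
    received = []
    for chunk in chunks:
        checksum = sum(chunk) & 0xFF
        for _ in range(max_tries):
            got_chunk, got_sum = noisy_channel((chunk, checksum), rng)
            if sum(got_chunk) & 0xFF == got_sum:   # "ACK" path
                received.append(got_chunk)
                break
            # "NAK" path: loop around and retransmit
        else:
            raise TimeoutError("link too noisy")
    return received

rng = random.Random(1)
data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert send_reliably(data, rng) == data    # every chunk arrives intact
```

Even with 30% of frames mangled, everything gets through -- as long as there's enough spare bandwidth to pay for the retries and enough buffer to hide them.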
As long as we have to put up with a crappy interlink like S/PDIF, people will have to spend hundreds or thousands to get the same performance as two $20 Ethernet cards and $10 worth of CAT5 cable.
Edit: [To be fair to S/PDIF: since it was based on AES/EBU, which was designed for studio work, the designers intentionally left out buffering, flow control and error management in favour of low latency & real-time performance. Studios could shell out the $$$ for super-high-quality XLR differential cables and pay for the extra hardware to ensure reliable digital transmission.]
Now, with Red Book audio, yes, there are error detecting and error correcting codes. Add EAC-like rereading algorithms and you can get perfect reproduction off most CDs. The problems arise when you have ridiculously damaged CDs or equipment that doesn't try hard enough.
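The rereading idea is easy to sketch. This is a simplified stand-in for what EAC actually does (EAC compares and re-rips in more sophisticated ways); here I just read the same sector several times and take a per-byte majority vote, with a made-up "flaky drive" that corrupts one byte per read:

```python
from collections import Counter

def reread_sector(read_fn, passes=5):
    """Rereading sketch: read the same sector several times and take a
    per-byte majority vote across the reads. Assumes errors are random,
    not stuck at the same spot every time."""
    reads = [read_fn() for _ in range(passes)]
    return bytes(Counter(col).most_common(1)[0][0] for col in zip(*reads))

# Toy drive: each read comes back with one corrupted byte, cycling
# through positions so the demo is deterministic.
TRUE_SECTOR = bytes(range(16))

def make_flaky_reader(sector):
    state = {"n": 0}
    def read():
        data = bytearray(sector)
        data[state["n"] % len(data)] ^= 0xFF   # one bad byte per read
        state["n"] += 1
        return bytes(data)
    return read

recovered = reread_sector(make_flaky_reader(TRUE_SECTOR))
assert recovered == TRUE_SECTOR    # the votes cancel the per-read errors
```

Every individual read is wrong somewhere, but since the errors land in different places, the vote recovers the sector exactly -- which is why "equipment that doesn't try hard enough" (one read, no retry) loses to software that does.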
At the end of the day, both camps are right. What's really wrong are the protocols and implementations used by consumer equipment. Edit: [Now everybody say sorry and make up.]