Originally Posted by AKG240mkII
I say it's YOU throwing in 'an interesting question'.
Either the 0s and 1s are there, or they are not.
If the cable complies with standards and has no mechanical defects, it will transfer all the 0s and 1s.
If not, the receiving end will know.
(There is a slight possibility of ground-loop issues in some cases; this could easily be audible.)
It's not alchemy, it's computer science.
Welcome to Head-Fi - I see you have taken the obligatory 1s and 0s pamphlet to heart. Now that I have grasped this, I have been able to fully resolve a theory of quantum gravity and solve world hunger at the same time. Seriously, sorry, it's just that I have heard that argument about a million times and don't find it cute anymore.
In fact, what I was saying is that some of the assumptions about timing variations from the computer or cable may be far from what is actually happening. Given what dvw was saying about the clock frequency of USB, it may not be possible for USB to send individual samples one at a time; it is instead probably sending a batch of samples in each packet. My guess is that asynchronous mode just makes it easier for the USB audio device to manage its buffer.
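A rough back-of-the-envelope sketch of that batching (my own numbers, assuming full-speed USB's 1 ms frame interval; high-speed USB uses 125 us microframes):

[code]
# Rough sketch: samples per USB frame at common audio rates.
# Assumes full-speed USB's 1 ms frame interval; purely illustrative.
FRAME_INTERVAL_S = 0.001

for rate_hz in (44_100, 48_000, 96_000, 192_000):
    samples_per_frame = rate_hz * FRAME_INTERVAL_S
    print(f"{rate_hz:>7} Hz -> ~{samples_per_frame:.1f} samples per 1 ms frame")
[/code]

Note that 44.1 kHz does not divide evenly into 1 ms frames, so the packet sizes have to vary (44 samples in one frame, 45 in another), which is exactly the kind of bookkeeping that asynchronous feedback lets the DAC control.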
Most decent USB transports now include galvanic isolation, but otherwise, if you are running a desktop PC, you can simply use the same earth for your DAC and the computer. If you are running a laptop, galvanic isolation might be necessary, as the laptop uses a floating ground. I don't think this is directly related to most USB cable sound-difference claims.
Under normal circumstances I don't think USB transports drop packets, as long as they have well-designed drivers and a large enough buffer on the computer side. Most of what I have been discussing is trying to figure out what causes some of the differences between computer hardware and software that I have observed, and continue to observe, given that there is little scientific explanation beyond simply writing these observations off as invalid, flawed, or unreliable. I personally find it highly unlikely that suggestion and cognitive bias are responsible for repeated and independent observations of variations between hardware and software, including cables, playback software, etc. Unfortunately, philosophical and financial factors have generally thwarted attempts to establish a scientific understanding of computer-based music transports.
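To make the "large enough buffer" point concrete, here is a toy model of my own (not any real driver's logic; the buffer sizes and service intervals are made up). The DAC drains samples at a fixed rate while the host tops the buffer up at jittered intervals; samples are only lost if the buffer ever runs dry:

[code]
# Toy playback-buffer model: the host tops the buffer up at jittered
# intervals while the DAC drains it at a fixed rate. An underrun
# (dropped samples, audible glitch) only happens if the buffer runs dry.
import random

DRAIN_PER_MS = 48  # samples consumed per millisecond at 48 kHz

def simulate(capacity_samples, bursts=10_000):
    random.seed(1)
    level = capacity_samples
    underruns = 0
    for _ in range(bursts):
        gap_ms = random.randint(5, 15)    # jittered host service interval
        level -= gap_ms * DRAIN_PER_MS    # DAC keeps consuming meanwhile
        if level < 0:
            underruns += 1
        level = capacity_samples          # host refills to the top
    return underruns

for cap in (256, 512, 1024):
    print(f"buffer = {cap:>4} samples -> {simulate(cap)} underruns")
[/code]

With a 1024-sample buffer even the worst 15 ms gap (720 samples) is absorbed and nothing is lost, which is why bit-perfect delivery is the easy part; the open question is everything else riding on the connection.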
For USB cables, I think the influence may be in how little noise and jitter they add to the digital signal pair and to the power and ground connections. Why do I think this might affect audio quality? Mostly because USB was not designed to handle real-time multimedia, unlike FireWire, PCI, etc. This may be a tenuous argument, but if you look at the FireWire and USB standards they are very different in both hardware and software. USB audio now uses asynchronous transfer, but I am not sure that this magically fixes the limitations of using USB for audio, in the same way that buffers do not magically fix timing variations that originate upstream, no matter what the manufacturers claim.
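For a sense of scale on the jitter side (my own arithmetic, not a measurement): the worst-case amplitude error from a timing error dt when sampling a full-scale sine at frequency f is roughly 2*pi*f*dt, since that is the sine's maximum slew rate:

[code]
# Back-of-the-envelope: worst-case error from sampling-clock jitter.
# For a full-scale sine at f Hz, the max slew rate is 2*pi*f per second,
# so a timing error of dt seconds gives an amplitude error ~2*pi*f*dt.
import math

def jitter_error_dbfs(f_hz, jitter_s):
    err = 2 * math.pi * f_hz * jitter_s   # fraction of full scale
    return 20 * math.log10(err)

for jitter_ps in (10, 100, 1_000, 10_000):
    db = jitter_error_dbfs(10_000, jitter_ps * 1e-12)
    print(f"{jitter_ps:>6} ps of jitter on a 10 kHz tone -> error around {db:.0f} dBFS")
[/code]

That works out to roughly -124 dBFS at 10 ps and -84 dBFS at 1 ns, so whether cable-induced jitter matters comes down to how much of it survives the DAC's own clock recovery, which is exactly the jitter-immunity question.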
As to why software might make a difference, the most convincing arguments I have heard are from the makers of Pure Music for Mac and JPlay for PC. They claim that through driver-level changes they can achieve more consistent latency, and that this, even with asynchronous transports, can lead to better sound quality. Who knows, maybe the latency is not directly having an influence but is instead affecting the electrical noise and voltage stability of the USB connection; nobody has really shown a causal link between any of these factors and sound quality. From my own observations, though, these factors can be heard quite easily with most equipment. If you can't hear a difference or don't believe me, that's fine; maybe your gear has better jitter immunity or something. For me it is worthwhile to pursue these things because I do hear differences, and I would suggest that they are at least worth investigating, if not investing in.
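If you want to see the kind of scheduling inconsistency those players claim to tame, here is a quick sketch of my own (the 1 ms period and sample count are arbitrary choices) that measures how late a general-purpose OS lets a periodic task run:

[code]
# Minimal sketch: measure scheduling jitter of a periodic 1 ms task,
# the sort of latency (in)consistency player software claims to improve.
import statistics
import time

PERIOD_S = 0.001   # aim for a 1 ms cycle, like a USB frame
N = 1000

deadline = time.perf_counter()
lateness_us = []
for _ in range(N):
    deadline += PERIOD_S
    while time.perf_counter() < deadline:   # busy-wait to the deadline
        pass
    lateness_us.append((time.perf_counter() - deadline) * 1e6)

print(f"mean lateness {statistics.mean(lateness_us):.1f} us, "
      f"stdev {statistics.stdev(lateness_us):.1f} us, "
      f"worst {max(lateness_us):.1f} us")
[/code]

Busy-waiting keeps those numbers tight at the cost of a pegged CPU core; swap the loop for time.sleep() and the worst-case lateness typically jumps by orders of magnitude, which is the trade-off this kind of player software is navigating.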