BackToAnalogue
100+ Head-Fier · Joined Oct 6, 2014 · Posts: 106 · Likes: 30
OK, it is impossible for a USB hub to affect the audio signal carried over Asynch USB in any way, other than the very slight possibility of mains noise, and on that score a hub is going to be quieter than a PC, which is why I use one. Everything in Asynch introduces delays, and within reason that doesn't matter unless you want it for AV; that is why it is called Asynch, i.e. no clock.
Not surprisingly, some of you haven't quite got my explanation of how an asynchronous protocol works. I speak as someone who has coded two asynchronous device drivers, one entirely in assembler, and I still remember the sinking feeling in my stomach both times I was told my name was against that work package: oh no. Those ran at vastly slower speeds than USB 2.0, but the principles are exactly the same.
Basically they do what is called a handshake: the receiver sends an ACK or a NAK after each packet. If the sender gets a NAK, it resends the packet and records an error. If it gets an ACK, it sends the next packet when it is ready to. Note the last part: whenever it feels like it, not when a clock tells it to. This means odd bits cannot be silently lost; if a packet does go wrong occasionally, the protocol deals with it and keeps a log somewhere. If NAKs start happening very often, the sender may try a few things to fix it, like slowing down a bit or other clever stuff, and there is probably code looking for patterns and deciding whether an error needs to be raised to the O/S. If the errors keep stacking up, it will eventually fall over and log a complete failure. So bits do not get lost or corrupted: you could stream a file backwards and forwards across an Asynch USB 2.0 interface any number of times and the file would still be bit-for-bit unchanged.
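To make that concrete, here is a toy stop-and-wait sender in C over a simulated lossy link. It is a sketch of the idea only: the link model, the 1-in-10 error rate and MAX_RETRIES are all invented for the demo; real USB 2.0 does its ACK/NAK handshaking in hardware.

```c
/* Toy stop-and-wait sender over a simulated lossy link. A sketch only:
   the link, the error rate and MAX_RETRIES are invented for the demo. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

enum reply { ACK, NAK };

#define MAX_RETRIES 8

/* Simulated link: NAKs roughly 1 packet in 10 so the retry path runs. */
static enum reply send_and_wait(int seq)
{
    (void)seq;
    return (rand() % 10 == 0) ? NAK : ACK;
}

/* Deliver one packet: resend on NAK, count errors, give up eventually. */
static bool deliver(int seq, long *naks)
{
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        if (send_and_wait(seq) == ACK)
            return true;      /* receiver has it; send the next packet
                                 whenever we feel like it, no clock */
        (*naks)++;            /* NAK: record the error and resend */
    }
    return false;             /* persistent failure: raise it to the O/S */
}

int main(void)
{
    long naks = 0;
    for (int seq = 0; seq < 1000; seq++) {
        if (!deliver(seq, &naks)) {
            fprintf(stderr, "complete failure at packet %d\n", seq);
            return 1;
        }
    }
    printf("1000 packets delivered intact, %ld NAKs retried\n", naks);
    return 0;
}
```

Every NAK gets retried, which is why the data that arrives is bit-for-bit identical to what was sent; the only thing errors cost is time.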
So when the guys at Schiit were testing all this, they will have had their shiny DACs connected to a few different PCs (and this is the problem: how many can we expect them to test?) streaming overnight. In the morning they will have looked at the error logs to see whether there were any problems. The first time you do this you expect to see lots of odd little errors here and there, just as you describe, and you hope for a nice warm feeling when you see that your code coped: no one died, no bits were lost, you emerge a hero. Well, not quite.
What I bet they saw, and what I saw too, was no errors at all. You end up having to find a way of injecting some just to test the bloody error handler. In real life, when there is a problem (which is rare), you see either an instantaneous complete failure or a few short bursts of errors followed by a complete failure.
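Because a healthy link produces no errors at all, the only way to exercise the handler is to break things on purpose. A sketch of that fault injection, again with an invented link model (the every-25th-send knob is just for the demo):

```c
/* Fault injection sketch: a clean link never runs the error handler,
   so deliberately NAK every Nth send. The knob value is invented. */
#include <stdio.h>

static long sends = 0;

/* fault_every = 0 models the real link (no errors, ever); a nonzero
   value injects a NAK on every Nth send to exercise the retry path. */
static int link_ok(int fault_every)
{
    sends++;
    return !(fault_every && sends % fault_every == 0);
}

int main(void)
{
    long injected = 0;
    for (int seq = 0; seq < 1000; seq++)
        while (!link_ok(25))     /* inject a NAK every 25th send */
            injected++;          /* handler runs: log it and resend */
    printf("1000 packets delivered, %ld injected NAKs handled\n", injected);
    return 0;
}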
But to make all this work, everything has to run a lot faster than it would with a clock keeping things in order. In exchange it eliminates all jitter except what remains between the DAC's own clock and the sample, which is very small.
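To put a rough number on "very small" (my own back-of-envelope with an assumed jitter figure, not a measurement): the worst-case amplitude error a timing error $\Delta t$ can cause is bounded by the slew rate of a full-scale sine at frequency $f$,

$$\Delta A \le 2\pi f A\,\Delta t,$$

so with f = 20 kHz and an assumed DAC clock jitter of 100 ps, ΔA ≈ 1.3×10⁻⁵·A, about −98 dB, below the ≈ −96 dB quantisation floor of 16-bit audio.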
I do quite like the 'more processing somehow leads to more noise' argument, because you can't actually show that it is mathematically incorrect, and there certainly is more processing. But it does rather have the feel of an idea dreamed up at a sales conference rather than at an audio engineering symposium. Am I wrong?