Low-Jitter USB: Dan Lavry, Michael Goodman, Adaptive, Asynchronous

May 25, 2010 at 7:28 PM Post #16 of 166

 
Quote:
...
The point I was making is that there are leading-edge and trailing-edge sync issues by the very nature of taking a bit stream, dividing it into slices, and reassembling it with clock-sync technology. Bits are lost (discarded) or padded (null padding) according to the SDO/BCLK data at the block interface.
...
I was saying that it is still a best effort no matter what, because the USB interface driver is not doing a 100% accurate job to begin with...

Does this apply to proprietary USB drivers as well, or only to the driver that comes with the OS?
 
 
May 25, 2010 at 11:34 PM Post #17 of 166
Someone is going to have to biff me and explain to me the following:
 
All bus protocols (all of them) have some level of jitter with respect to the clock syncing the bus, from PCI Express to USB.  Obviously, some bus protocols allow for retransmits as well as keeping the bit error rate in check with special encodings (8b/10b, as in SAS).
 
What folks are not addressing is that any jitter caused by the OS driver not delivering packets out of the HCI within the exact time window should be buffered at the other end. In other words, whether the master clock is at the host or the target, I would *THINK* that all DAC implementations would buffer the input and RECLOCK the bits feeding the DAC chip; the Windows driver discussion seems to me like hand-waving over the real issue (no offense to anyone; does it surprise you that a senior Unix kernel developer says that?).
 
Unless you are losing bits due to the sample size (which you aren't, AFAIK), the USB cable is simply a bit pusher that needs to push the bits in a relatively timely fashion (which it does in spades).  Putting driver implementation and interrupt latency aside, why oh why would I be that concerned over small jitter fluctuations due to clocking/sync issues as the initial bits are sent over the wire to the DAC?  The DAC is clearly going to buffer and RESAMPLE the bits with a much better clock to give me an accurate D/A conversion.
 
The only aspect of this I can't verify is whether, if you get so far out of sync over the USB cable that the connection literally has to resync itself while data is in flight, you will suffer bit loss (more likely a full interruption of playback). By the way, this is similar to network cards losing sync on a link due to improper autonegotiation implementations, which drop packets and cause TCP retransmits, etc.
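 
For anyone who wants to see the buffering argument in miniature, here is a toy sketch, not any real DAC's firmware: the 3 µs jitter figure, the 2 ms pre-buffer, and 44 samples per frame are made-up assumptions. Packets arrive with jittery timing, land in a FIFO, and are pulled out on a steady local clock; as long as the buffer never runs empty or full, output timing follows the local clock, not the packet arrivals. The toy also assumes the host and DAC average rates match exactly, which is precisely the part that adaptive and asynchronous modes handle differently.
 
```python
import random

FRAME_US = 1000.0              # nominal 1 ms USB frame period
JITTER_US = 3.0                # assumed arrival jitter of +/- 3 us (made-up figure)
SPF = 44                       # samples delivered per frame (roughly 44.1 kHz)
OUT_PERIOD_US = FRAME_US / SPF # steady local output clock period

buffer_fill = 2 * SPF          # start with about 2 ms of pre-buffered audio
next_arrival = FRAME_US + random.uniform(-JITTER_US, JITTER_US)
next_output = 0.0
min_fill = max_fill = buffer_fill
underruns = 0

for _ in range(500_000):                 # advance event by event
    if next_arrival <= next_output:      # a jittery packet lands in the FIFO
        buffer_fill += SPF
        next_arrival += FRAME_US + random.uniform(-JITTER_US, JITTER_US)
    else:                                # the steady local clock pulls one sample
        if buffer_fill > 0:
            buffer_fill -= 1
        else:
            underruns += 1               # would be an audible drop-out
        next_output += OUT_PERIOD_US
    min_fill = min(min_fill, buffer_fill)
    max_fill = max(max_fill, buffer_fill)

print(f"FIFO fill stayed between {min_fill} and {max_fill} samples; underruns = {underruns}")
```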
 
I don't get it, please help!  :)
 
May 25, 2010 at 11:48 PM Post #18 of 166


Quote:
 
 why oh why would I be that concerned over small jitter fluctuations due to clocking/sync issues as the initial bits are sent over the wire to the DAC?  The DAC is clearly going to buffer and RESAMPLE the bits with a much better clock to give me an accurate D/A conversion.
 


 
Most DACs don't have clocks.  I have seen a few good clock implementations, like the Pass D1 or the Tent DAC, but these are rare birds.  Most DACs with a clock are doing asynchronous resampling, which most of us aren't fond of.  And the whole buffer-in-a-DAC theory has been shot to death.  Do some searches; it is a lot more complex than you are making it out to be.
 
The best way to deal with jitter is to have one clock shared by the DAC (master) and the transport (slave), like the studios do.
 
May 26, 2010 at 3:22 AM Post #19 of 166
@Trogdor: USB audio DACs/receivers (like the PCM2707) do have a buffer of at least 1 ms worth of audio data.
 
@regal: What's wrong with ASRC?
 
@glt: Happens with proprietary stuff only, I think.
 
May 26, 2010 at 8:22 AM Post #20 of 166

It has been proven to change the sound signature; some people like it, some don't.  It doesn't eliminate any jitter, it spreads it out (alters it).  You have those who argue it pushes most of the jitter above the hearing limit, and you have those who say it sounds bad.  The implementations I've heard weren't to my liking.  If it were the "answer," we wouldn't even be talking about async USB.
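 
For readers who haven't met ASRC before, here is a deliberately crude sketch of the idea: linear interpolation only, with a fixed ratio. Real ASRC chips use long polyphase filters and continuously track the input/output clock ratio, and nothing here represents any particular chip. The point is that incoming samples are re-read at positions set by an estimated clock ratio, so input-clock timing errors are traded for interpolation error rather than passed straight through, which is why it is said to alter the jitter rather than remove it.
 
```python
import math

def asrc_linear(samples, ratio):
    """Resample 'samples' by 'ratio' = f_out / f_in using linear interpolation."""
    out = []
    pos = 0.0
    step = 1.0 / ratio            # how far to advance through the input per output sample
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append((1.0 - frac) * samples[i] + frac * samples[i + 1])
        pos += step
    return out

# A 1 kHz tone "recorded" at 44.1 kHz, re-rendered as if the output clock ran at 48 kHz.
# In a real ASRC the ratio is tracked continuously from the two clocks, not fixed.
src = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(441)]
dst = asrc_linear(src, 48000 / 44100)
print(len(src), "samples in ->", len(dst), "samples out")
```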
 
 
Also, a 1 ms buffer can in no way be enough to eliminate jitter.  Again, search, especially at diyhifi.org; there are true experts on that forum.
 
Quote:
 
@regal: What's wrong with ASRC?
 
 



 
May 26, 2010 at 10:25 AM Post #21 of 166
http://www.planetanalog.com/showArticle.jhtml;jsessionid=KSCVCEF4HQC15QE1GHPSKH4ATMY32JVN?articleID=12801991
 
Wow... that was informative (I am not qualified for a lot of it, but the basic problem of USB conversion is outlined).  Though I stand by my view that it's not really software related.
 
May 26, 2010 at 3:01 PM Post #22 of 166
 
Quote:
What I'm saying is that we are going to be experiencing some form or another of jitter (I know we are talking picoseconds here) for some time, until the method that Windows uses (only talking about Windows here) matures together with Intel's standards (and yes, AMD's) and other methods evolve to allow the waveform to be handled as a true stream and not as data packets synchronized to a clocking window.

 
When using asynchronous USB or networked (Ethernet) audio, none of this applies. Jitter can be made as low as the design and components will allow.  The difference with Adaptive is that the jitter is a function of how well you design the DLL or PLL and its loop filter.  You don't have any of this with Async.
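 
To illustrate the adaptive-mode point about the loop filter, here is a toy first-order tracking loop, an assumption-laden sketch rather than anyone's actual DLL/PLL design: the receiver estimates the incoming frame period through a one-pole low-pass filter, and the filter coefficient (the loop bandwidth) determines how much of the arrival jitter leaks into the recovered clock. The jitter figure is invented for illustration.
 
```python
import random

NOMINAL_US = 1000.0        # nominal 1 ms frame period
JITTER_US = 3.0            # assumed +/- 3 us arrival jitter (made-up figure)

def recovered_period(alpha, frames=20_000, seed=1):
    """Track the frame period with a one-pole low-pass filter of coefficient alpha."""
    rng = random.Random(seed)
    est = NOMINAL_US
    worst = 0.0
    for _ in range(frames):
        measured = NOMINAL_US + rng.uniform(-JITTER_US, JITTER_US)
        est += alpha * (measured - est)            # loop-filter update
        worst = max(worst, abs(est - NOMINAL_US))  # deviation of the recovered clock
    return worst

# A "fast" loop (large alpha) follows the incoming jitter; a slow loop averages it out.
for alpha in (0.5, 0.05, 0.001):
    print(f"alpha={alpha}: worst deviation of recovered period = {recovered_period(alpha):.3f} us")
```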
 
Steve N.
Empirical Audio
 
May 26, 2010 at 3:16 PM Post #23 of 166


 
Quote:
Someone is going to have to biff me and explain to me the following:
 
 I would *THINK* that all DAC implementations would buffer the input and RECLOCK the bits feeding the DAC chip; the Windows driver discussion seems to me like hand-waving over the real issue (no offense to anyone; does it surprise you that a senior Unix kernel developer says that?).
 
Unless you are losing bits due to the sample size (which you aren't, AFAIK), the USB cable is simply a bit pusher that needs to push the bits in a relatively timely fashion (which it does in spades).

When using Async, you are quite right.  There is always some level of buffering in the DAC USB interface.
 
Quote:
 Putting driver implementation and interrupt latency aside, why oh why would I be that concerned over small jitter fluctuations due to clocking/sync issues as the initial bits are sent over the wire to the DAC?  The DAC is clearly going to buffer and RESAMPLE the bits with a much better clock to give me an accurate D/A conversion.

 Again, with Async you are correct.  Not correct for Adaptive.
 
Quote:
 The only aspect of this I can't verify is whether, if you get so far out of sync over the USB cable that the connection literally has to resync itself while data is in flight, you will suffer bit loss (more likely a full interruption of playback). By the way, this is similar to network cards losing sync on a link due to improper autonegotiation implementations, which drop packets and cause TCP retransmits, etc.

 Yes, this is why some people have drop-outs with Async and Adaptive USB, and even with networked audio.  It's because their computer cannot keep up with the average streaming rate for whatever reason: slow I/O bus, USB contention, software interrupt conflicts, network traffic, etc.  These are not jitter, however; they are drop-outs.  Jitter is a totally different thing.
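 
To make the jitter-versus-drop-out distinction concrete, here is a toy sketch with invented numbers (a 10 ms host stall each second, a 2 ms pre-buffer, an 8 ms FIFO, nothing taken from a real driver): if the host fails to keep up, the receive FIFO runs dry and milliseconds of audio are simply missing, no matter how clean the DAC clock is. Isochronous USB audio does not retransmit missed frames.
 
```python
SPF = 48                  # samples per 1 ms frame at 48 kHz
FIFO_CAP = 8 * SPF        # assume an 8 ms receive FIFO
buffer_fill = 2 * SPF     # start with 2 ms of audio buffered
underruns_ms = 0

for ms in range(10_000):                  # 10 seconds of playback, in 1 ms steps
    host_stalled = (ms % 1000) < 10       # pretend the host stalls for 10 ms every second
    if not host_stalled:
        buffer_fill = min(buffer_fill + SPF, FIFO_CAP)  # an isochronous frame arrives
    if buffer_fill >= SPF:
        buffer_fill -= SPF                # the DAC consumes 1 ms of audio regardless
    else:
        buffer_fill = 0
        underruns_ms += 1                 # nothing to play: an audible drop-out

print(f"milliseconds of drop-out in 10 s of playback: {underruns_ms}")
```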

Steve N.
Empirical Audio
 
 
May 26, 2010 at 3:24 PM Post #24 of 166


 
Quote:
 
Most DACs don't have clocks.  I have seen a few good clock implementations, like the Pass D1 or the Tent DAC, but these are rare birds.  Most DACs with a clock are doing asynchronous resampling, which most of us aren't fond of.  And the whole buffer-in-a-DAC theory has been shot to death.  Do some searches; it is a lot more complex than you are making it out to be.
 
The best way to deal with jitter is to have one clock shared by the DAC (master) and the transport (slave), like the studios do.


This is actually ground that has been covered well also.
 
Most industry experts agree that the best scenario is to have the master clock inside the DAC and clock the source/transport as a slave.
 
If one tries to use an external clock, it is unaware of the sample rate.  Therefore, it cannot change automatically when the sample rate changes.  A master clock (or clocks) in the DAC, or a reclocker, can do this, however.
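 
A minimal sketch of why the clock source has to know the sample rate: a DAC-side master clock or reclocker typically switches between the two standard audio clock families (the 22.5792 MHz and 24.576 MHz values below are the common 512 x fs frequencies), while a fixed external clock cannot follow a rate change on its own. The selection function is hypothetical, not any product's firmware.
 
```python
CLOCK_44K1_FAMILY_HZ = 22_579_200   # 512 x 44.1 kHz (also serves 88.2 and 176.4 kHz)
CLOCK_48K_FAMILY_HZ = 24_576_000    # 512 x 48 kHz (also serves 96 and 192 kHz)

def master_clock_for(sample_rate_hz: int) -> int:
    """Pick the master-clock family for a given sample rate."""
    if sample_rate_hz % 44_100 == 0:
        return CLOCK_44K1_FAMILY_HZ
    if sample_rate_hz % 48_000 == 0:
        return CLOCK_48K_FAMILY_HZ
    raise ValueError(f"unsupported sample rate: {sample_rate_hz}")

for fs in (44_100, 48_000, 96_000, 176_400):
    print(fs, "Hz ->", master_clock_for(fs), "Hz master clock")
```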
 
Steve N.
Empirical Audio
 
 
May 27, 2010 at 4:38 AM Post #25 of 166
Sad thing is, I built a DAC that had a clock that would only work if it could slave the transport; however, I never got around to building a transport for it and sold it, having only ever run it in slave mode to the transport.  I'm not up to speed on digital theory (it wasn't a course when I was at Purdue) like some are.  I guess we all want easy implementations; I'm as guilty as the next guy.

 
Quote:
 

This is actually ground that has been covered well also.
 
Most industry experts agree that the best scenario is to have the master clock inside the DAC and clock the source/transport as a slave.
 
If one tries to use an external clock, it is unaware of the sample rate.  Therefore, it cannot change automatically when the sample rate changes.  A master clock (or clocks) in the DAC, or a reclocker, can do this, however.
 
Steve N.
Empirical Audio
 



 
May 28, 2010 at 8:44 PM Post #26 of 166
Guys, there is tons of jitter on the USB cable! It's not even funny. If you were to take the samples arriving from the USB cable and feed them straight to the DAC chip, it would sound mushy and dull with an extremely low level of detail (read: high jitter).
 
For the Linux kernel-mode guy: we are in your camp. CEntrance has written drivers for dozens of famous brands in the audio industry.
 
http://centrance.com/licensing/
 
But we are also hardware guys, and we know that if you put a jitter analyzer on the USB bus, you will see that there is a lot of instability there. In other words, the data arriving on the other side of that USB cable is not arriving at precisely even intervals. The "wobble" is due to the motherboard controller, which sits below the HCI driver in the stack (the actual link layer). That controller is anything but stable and could not do accurate time stamping if its life depended on it. It sends packets on a 1 ms period, but that 1 ms is sometimes 0.997 ms, sometimes 1.002 ms. This variability is jitter. And this is precisely why it doesn't matter whether you are using asynchronous or adaptive mode. Even if the device is clock-sourcing the computer, the link-layer chip on the motherboard will mess up the timing of any data the computer sends to the device, and you will need to reassemble it in the DAC device prior to presentation to the DAC chip itself. Take a look at the USB data lines with a scope and you will see how dreadfully unstable that traffic is.
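 
For the curious, this is roughly the measurement being described, sketched in software with simulated timestamps (the +/- 3 µs spread is made up; a real capture would come from a jitter analyzer or scope): take the frame arrival times, difference them into periods, and look at the spread around the nominal 1 ms.
 
```python
import random
import statistics

NOMINAL_MS = 1.0
rng = random.Random(0)

# Simulated capture: each frame lands within a few microseconds of the nominal 1 ms spacing.
timestamps = []
t = 0.0
for _ in range(10_000):
    t += NOMINAL_MS + rng.uniform(-0.003, 0.003)   # +/- 3 us of arrival wobble
    timestamps.append(t)

periods = [b - a for a, b in zip(timestamps, timestamps[1:])]
print(f"mean period : {statistics.mean(periods):.6f} ms")
print(f"min / max   : {min(periods):.6f} / {max(periods):.6f} ms")
print(f"p-p jitter  : {(max(periods) - min(periods)) * 1000:.1f} us")
print(f"RMS jitter  : {statistics.pstdev(periods) * 1000:.1f} us")
```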

Adaptive or asynchronous, what comes in from the USB cable needs to be cleaned up. Period. If you know how to clean it up, it doesn't matter which approach you use. If you don't, your equipment will not sound transparent. That's the end of the technical argument.
 
The rest is marketing from companies trying to position themselves in a better light through consumer deception disguised as technical expertise. Deception never works for long; eventually the facts take over.
 
May 29, 2010 at 7:00 PM Post #27 of 166
I think you missed the point:
 
It doesn't matter that the 1 ms isn't exact.  It's buffered a bit at the other end, so the only jitter I would be concerned about is AT the point of D/A conversion, not over the cable.  The links above, as well as Steve's response (and the links to his site), should clarify this fact.
 
May 30, 2010 at 8:26 PM Post #28 of 166
I think we are all making the same point, actually. If you clean the signal up at the DAC, it doesn't matter whether you use adaptive or asynchronous transfers on the USB cable. This is what Stereophile magazine just confirmed in its June issue, when it called the CEntrance DACport a "highly recommended buy." Like Jude said: device implementation is everything.
 
May 30, 2010 at 10:23 PM Post #29 of 166
Most industry experts agree that the best scenario is to have the master clock inside the DAC and clock the source/transport as a slave.

 
funny, I was reading about this yesterday: http://www.head-fi.org/forum/thread/185591/m-audio-audiophile-usb-anything-to-mod#post_2226722
 
What worries me about using this "Audiophile USB" as a transport would be the sample-rate accuracy (it's got only one PLL)... M2Tech boasts about spot-on sample rates at the end of that white paper: http://www.m2tech.biz/public/pdf/White%20Paper%20on%20hiFace.pdf
 
and even the Musiland in "high precision" mode fails blatantly: http://hifiduino.wordpress.com/2010/04/08/reading-sample-rate
 
the idea that I've never listened to my music at the right pitch is starting to grow on me, making me simultaneously sad and upset

 
May 31, 2010 at 9:13 AM Post #30 of 166

Yes, but when it comes to jitter, isn't async supposed to be theoretically easier to clean up and implement than adaptive, and therefore potentially yield better results at lower cost?
 
Quote:
Adaptive or asynchronous - what comes in from the USB cable needs to be cleaned up. Period. If you know how to clean it up, it doesn't matter which approach you use. If you don't, your equipment will not sound transparent. That's the end of the technical argument.



 
