
Low-Jitter USB: Dan Lavry, Michael Goodman, Adaptive, Asynchronous

Discussion in 'Computer Audio' started by jude, May 20, 2010.
  1. glt

     
    Quote:
    Does this apply to other proprietary USB drivers or only to the driver that comes with the OS?
     
     
  2. Trogdor
    Someone is going to have to biff me and explain to me the following:
     
    All bus protocols (all of them) have some level of jitter with respect to the clock syncing the bus.  From PCI express to USB.  Obviously some bus protocols allow for retransmits as well as self correcting BER using special encodings (8b/10b like in SAS).
     
    What folks are not addressing is that any jitter caused by the OS driver not delivering packets out the HCI within the exact time domain should be buffered at the other end. Whether the master clock is at the host or the target, I would *THINK* that all DAC implementations would buffer the input and RECLOCK the bits to feed into the DAC, i.e. the Windows driver stuff seems to me to be hand-waving over the real issue (no offense to anyone; does it surprise you that a senior Unix kernel developer says that?).
     
    Unless you are losing bits due to the sample size (which you aren't, AFAIK), then the USB cable is simply a bit pusher that needs to push the bits in a relatively timely fashion (which it does in spades).  Putting driver implementation and interrupt latency aside, why oh why would I be that concerned over small jitter fluctuations due to clocking/sync issues caused by the initial bits being sent over the wire to the DAC?  The DAC is clearly going to buffer and RESAMPLE the bits with a much better clock to give me an accurate D/A conversion.
     
    The only aspect of this I can't verify is that if you get so far out of sync over the USB cable that the connection literally has to resync itself while data is in flight, you will suffer bit loss (more likely a full interruption of playback). BTW, this is similar to network cards losing sync over a connection due to improper autonegotiation implementations, which will drop packets and cause TCP retransmits, etc.
     
    I don't get it, please help!  :)
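Trogdor's buffer-and-reclock argument can be sketched in a few lines of Python. This is a toy model with invented numbers, not measurements: packets arrive at jittery intervals, land in a FIFO, and the DAC side drains the FIFO on its own fixed clock, so the output spacing no longer depends on arrival jitter.

```python
from collections import deque

# Packets arrive from the "USB bus" at jittery intervals around 1 ms
# (illustrative numbers, not measured data).
arrival_times_ms = [0.0, 0.997, 2.002, 2.999, 4.003]
samples_per_packet = 44  # roughly 1 ms of audio at 44.1 kHz

fifo = deque()
for t in arrival_times_ms:
    # Each packet is buffered; its jittery arrival time is discarded here.
    fifo.extend([("sample", t)] * samples_per_packet)

# The DAC side drains the FIFO on its own stable clock: one sample
# every 1/44.1 ms, regardless of when the packets arrived.
local_clock_period_ms = 1.0 / 44.1
output_times_ms = [i * local_clock_period_ms for i in range(len(fifo))]

# Output spacing is uniform (to float precision): it is set by the
# local clock alone, not by the jittery arrival times.
deltas = [b - a for a, b in zip(output_times_ms, output_times_ms[1:])]
assert max(deltas) - min(deltas) < 1e-9
```

The model deliberately ignores the hard part the later posts argue about: how the local clock is derived (adaptive PLL vs. asynchronous local oscillator), which is where the real-world jitter differences come from.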
     
  3. regal


    Most DACs don't have clocks.  I have seen a few good clock implementations, like the Pass D1 or the Tent DAC, but these are rare birds.  Most DACs with a clock are doing async resampling, which most of us aren't fond of.  And the whole buffer-in-a-DAC theory has been shot to death.  Do some searches; it is a lot more complex than you are making it out to be.
     
    The best way to deal with jitter is to have one clock shared between the DAC (master) and the transport (slave), like the studios do.
     
  4. xnor
    @Trogdor: USB audio DACs/receivers (like the PCM2707) do have a buffer of at least 1 ms worth of audio data.
     
    @regal: What's wrong with ASRC?
     
    @glt: Happens to proprietary stuff only I think.
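xnor's "1 ms of audio data" figure is easy to put in concrete terms. A quick back-of-the-envelope, assuming CD-format stereo 16-bit PCM (my assumption for illustration, not a figure from the PCM2707 datasheet):

```python
# How much data is 1 ms of audio? (assumed CD-format parameters)
sample_rate_hz = 44_100    # CD sample rate
channels = 2               # stereo
bytes_per_sample = 2       # 16-bit PCM
buffer_ms = 1.0

frames = sample_rate_hz * buffer_ms / 1000            # frames in 1 ms
buffer_bytes = frames * channels * bytes_per_sample   # bytes in 1 ms

print(f"{frames:.1f} frames, ~{buffer_bytes:.0f} bytes")
# About 44 frames, i.e. under 200 bytes: tiny by memory standards,
# which is why even small USB receiver chips can afford it.
```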
     
  5. regal

    It has been proven to change the sound signature; some people like it, some don't.  It doesn't eliminate any jitter, it spreads it out (alters it).  You have those who argue it pushes most of the jitter above the hearing limit, and you have those who say it sounds bad.  The implementations I've heard weren't to my liking.  If it were the "answer", we wouldn't even be talking about async USB.
     
     
    Also, a 1 ms buffer in no way can be enough to eliminate jitter.  Again, search, especially at diyhifi.org; there are true experts on that forum.
     
     
  6. Trogdor
    http://www.planetanalog.com/showArticle.jhtml;jsessionid=KSCVCEF4HQC15QE1GHPSKH4ATMY32JVN?articleID=12801991
     
    Wow... that was informative (I am not qualified for a lot of it, but the basic problem of USB conversion is outlined).  Though I stand by my point that it's not really software-related.
     
  7. audioengr
     
    Quote:
     
    When using Async or networked (Ethernet), none of this applies. Jitter can be made as low as the design and components will allow.  The difference with Adaptive is that the jitter is a function of how well you design the DLL or PLL and its loop filter.  You don't have any of this with Async.
     
    Steve N.
    Empirical Audio
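Steve's point about the loop filter can be illustrated with a toy model: a first-order PLL behaves like a low-pass filter on incoming timing error, tracking slow wander while attenuating fast jitter. The noise level and loop coefficient below are invented for illustration and are not taken from any real adaptive-USB design.

```python
import math
import random

random.seed(0)

# Incoming packet timing error ("jitter") in nanoseconds, one value per
# 1 ms frame. Gaussian with 50 ns RMS: an invented, illustrative figure.
input_jitter_ns = [random.gauss(0, 50) for _ in range(10_000)]

# A first-order PLL acts as a one-pole low-pass filter on timing error:
# the recovered clock follows slow drift but smooths fast fluctuations.
# alpha stands in for the loop bandwidth (assumed value).
alpha = 0.01
recovered = []
acc = 0.0
for e in input_jitter_ns:
    acc += alpha * (e - acc)   # one-pole low-pass of the timing error
    recovered.append(acc)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

print(f"input jitter RMS  ~{rms(input_jitter_ns):.1f} ns")
print(f"recovered clock RMS ~{rms(recovered):.1f} ns")
```

The narrower the loop bandwidth (smaller alpha), the more high-frequency jitter is rejected, at the cost of slower lock. That trade-off is exactly the "how well you design the loop filter" part; an asynchronous design sidesteps it by using a free-running local oscillator instead.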
     
  8. audioengr


     
    Quote:
    When using Async, you are quite right.  There is always some level of buffering in the DAC USB interface.
     
    Quote:
     Again, with Async you are correct.  Not correct for Adaptive.
     
    Quote:
     Yes, this is why some people have drop-outs with Async and Adaptive USB and even networked audio.  It's because their computer cannot keep up with the average streaming rate for whatever reason: slow I/O bus, USB contention, software interrupt conflicts, network traffic, etc.  This is not jitter, however; these are drop-outs.  Jitter is a totally different thing.

    Steve N.
    Empirical Audio
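The drop-out vs. jitter distinction Steve draws can be made concrete with a toy underrun simulation: a drop-out happens when the buffer runs dry because the host stalls, no matter how clean the DAC clock is. All numbers below (buffer depth, stall timing) are invented for illustration.

```python
# Toy model: host delivers ~44 frames/ms, DAC consumes 44 frames/ms
# on its stable clock. A 20 ms host stall (e.g. USB contention or an
# interrupt conflict) drains the buffer and causes audible drop-outs.
buffer_frames = 441        # assumed: ~10 ms of headroom at 44.1 kHz
fill = buffer_frames

dropouts = 0
for ms in range(1000):     # simulate one second, one step per ms
    delivered = 0 if 500 <= ms < 520 else 44  # host stalls at t=500 ms
    fill = min(buffer_frames, fill + delivered)
    if fill >= 44:
        fill -= 44         # DAC consumes on schedule: no audible fault
    else:
        dropouts += 1      # underrun: a drop-out, not jitter
        fill = 0

print("drop-out milliseconds:", dropouts)
```

Note that nothing in this model involves clock quality: the DAC clock is perfect here, yet playback still breaks. That is why drop-outs point at the host/transport side while jitter points at the clock-recovery side.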
     
     
  9. audioengr


     
    Quote:

    This is actually ground that has been covered well also.
     
    Most industry experts agree that the best scenario is to have the master clock inside the DAC and clock the source/transport as a slave.
     
    If one tries to use an external clock, then it is unaware of the sample rate.  Therefore, it cannot change automatically when the sample rate changes.  A master clock (or clocks) in the DAC, or a reclocker, can do this, however.
     
    Steve N.
    Empirical Audio
     
     
  10. regal
    Sad thing is, I built a DAC that had a clock that would only work if it could slave the transport; however, I never got around to building a transport for it and sold it, having only ever run it in slave mode to the transport.  I'm not up to speed on digital theory (it wasn't a course when I was at Purdue) like some are.  I guess we all want easy implementations; I'm as guilty as the next guy.

     
     
  11. mgoodman
    Guys - there is tons of jitter on the USB cable! It's not even funny - If you were to take the samples arriving from the USB cable and feed them straight to the DAC chip, it would sound mushy and dull with extremely low level of detail (read: high jitter).
     
    For the Linux kernel-mode guy - we are in your camp: CEntrance has written drivers for dozens of famous brands in the Audio industry.
     
    http://centrance.com/licensing/
     
    But we are also hardware guys, and we know that if you put a jitter analyzer on the USB bus, you will see that there is a lot of instability there. In other words, the data arriving on the other side of that USB cable is not arriving at precisely even times. The "wobble" is due to the MOBO controller, which sits below the HCI driver in the stack (the actual link layer). That controller is anything but stable and cannot do accurate time stamping to save its life. It sends packets on a 1 ms period, but that 1 ms is sometimes 0.997 ms, sometimes 1.002 ms. This variability is jitter. And this is precisely why it doesn't matter if you are using asynch or adaptive. Even if the device is clock-sourcing the computer, the link layer chip on the motherboard will screw up the timing of any data that the computer sends to the device, and you will need to reassemble it in the DAC device prior to presentation to the DAC chip itself. Take a look at the USB data lines with a scope and you will see how dreadfully unstable that traffic is.

    Adaptive or asynchronous - what comes in from the USB cable needs to be cleaned up. Period. If you know how to clean it up, it doesn't matter which approach you use. If you don't, your equipment will not sound transparent. That's the end of the technical argument.
     
    The rest is marketing from companies trying to position themselves in a better light through consumer deception disguised as technical expertise. Deception never works for a long time - eventually facts take over.
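Michael's 0.997 ms / 1.002 ms figures imply microsecond-scale frame timing error; for comparison, one sample period at 44.1 kHz is about 22.7 µs. A quick calculation using his two numbers (the other values in the list are made up to fill out the example):

```python
# Frame periods nominally 1 ms; 0.997 and 1.002 are the figures quoted
# in the post above, the rest are hypothetical fill-in measurements.
nominal_ms = 1.0
observed_ms = [0.997, 1.002, 0.999, 1.001, 0.998]

errors_us = [(t - nominal_ms) * 1000 for t in observed_ms]  # microseconds
peak_to_peak_us = max(errors_us) - min(errors_us)

sample_period_us = 1e6 / 44_100  # about 22.7 us per sample at 44.1 kHz

print(f"peak-to-peak frame jitter: {peak_to_peak_us:.0f} us")
print(f"one 44.1 kHz sample period: {sample_period_us:.1f} us")
```

So the frame-level wobble is a sizable fraction of a sample period, far too large to feed a DAC chip directly, which is the post's point: whichever transfer mode is used, that timing has to be cleaned up inside the device.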
     
  12. Trogdor
    I think you missed the point:
     
    It doesn't matter that the 1 ms isn't exact.  It's buffered at the other end a bit, so the only jitter I would be concerned about is AT THE point of D/A conversion, not over the cable.  The links above, as well as Steve's response (and links to his site), should clarify this fact.
     
  13. mgoodman
    I think we are all making the same point, actually. If you clean the signal up at the DAC, it doesn't matter if you use adaptive or asynchronous transfers on the USB cable. This is what Stereophile magazine just confirmed in its June issue, when it called the CEntrance DACport a "highly recommended buy". Like Jude said, device implementation is everything.
     
  14. leeperry
     
    funny, I was reading about this yesterday: http://www.head-fi.org/forum/thread/185591/m-audio-audiophile-usb-anything-to-mod#post_2226722
     
    What worries me about using this "Audiophile USB" as a transport is the sample-rate accuracy (it's got only one PLL)... m2tech boasts about spot-on sample rates at the end of that white paper: http://www.m2tech.biz/public/pdf/White%20Paper%20on%20hiFace.pdf
     
    and even the Musiland in "high precision" mode fails blatantly: http://hifiduino.wordpress.com/2010/04/08/reading-sample-rate
     
    the idea that I've never listened to my music at the right pitch is starting to grow on me, making me simultaneously sad and upset
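leeperry's pitch worry is easy to quantify. Suppose a transport actually ran at 44,056 Hz instead of 44,100 Hz (a hypothetical figure chosen because 44,056 Hz is, to my knowledge, the old NTSC-locked audio rate, so it is in the plausible range such measurements discuss). The resulting pitch error in musical cents:

```python
import math

# Hypothetical clock error: DAC runs at 44,056 Hz instead of 44,100 Hz.
nominal_hz = 44_100
actual_hz = 44_056

ratio = actual_hz / nominal_hz
cents = 1200 * math.log2(ratio)   # 100 cents = one semitone

print(f"pitch error: {cents:+.2f} cents")
```

A deviation of under two cents is well below the commonly cited just-noticeable difference for pitch (around five cents), so a sample-rate error of this magnitude shifts pitch only imperceptibly, even if it offends on principle.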
     
  15. shamu144

    Yes, but when it comes to jitter, isn't async supposed to be theoretically easier to clean up and implement than adaptive, and therefore potentially yield better results at lower cost?
     
     