Quote:
Originally Posted by EliasGwinn
This isn't far from the way UltraLock works. The clock-portion of the incoming data is completely thrown out. The data is buffered, the ASRC determines the sample by averaging the incoming rate over a long period of time (64 samples), effectively eliminating any effect of jitter at or above 1 Hz.
Right, and that's really great for a whole lot of use cases - especially for any source/input connection that has incoming jitter, which is most of them. (However, it's not true that the "clock-portion of the incoming data is completely thrown out", because you MUST "average the incoming rate over a long period of time" in order to infer a smoothed sample rate for use by the ASRC.)
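To illustrate what "averaging the incoming rate" means in practice, here is a rough Python sketch of a windowed rate estimator. The window size and interfaces are my own illustrative assumptions, not the DAC1's actual implementation:

```python
from collections import deque

class RateEstimator:
    """Sketch: infer a smoothed incoming sample rate for an ASRC.

    Hypothetical illustration only -- the window size and method
    names are assumptions, not any product's real design.
    """

    def __init__(self, window=64):
        # Measured time gaps (seconds) between recent incoming samples.
        self.intervals = deque(maxlen=window)

    def observe(self, interval_s):
        """Record the measured time between two incoming samples."""
        self.intervals.append(interval_s)

    def smoothed_rate(self):
        """Average over the window; jitter much faster than the
        window length cancels out of the estimate."""
        if not self.intervals:
            return None
        avg_interval = sum(self.intervals) / len(self.intervals)
        return 1.0 / avg_interval
```

The point of the sketch is exactly the one above: the incoming clock edges are not used directly, but their long-term average timing still has to be observed to drive the ASRC's output rate.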
But as clever and effective as that is, it is a totally unnecessary mechanism for asynch USB - where there is no incoming playback clock signal and hence no input jitter, period.
Neither should there be. Computer audio systems should merely be data delivery systems. Basically they need to make sure the data arrives at the transceiver (the DAC) in a timely fashion and present a UI to the user for control...
Quote:
Originally Posted by EliasGwinn
You're talking about using I2S-type transmission...data with no clock.
Unless I'm mistaken, I2S sends an explicit clock signal. But you're right - I am talking about data with no clock. On the other hand, the data stream has semantics, including an attribute specifying the playback sample rate the receiving device must use. As you pointed out, this is done in electronic devices all the time:
Quote:
Originally Posted by EliasGwinn
The TAS1020B takes its cue from the incoming sample rate, adjusts its clock accordingly, and sends the data out at the native sample rate of the audio data.
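The idea of a clockless stream that still carries its playback rate as data can be made concrete. A minimal sketch of such a stream header follows; the field names are my own illustrative assumptions, not any real protocol:

```python
from dataclasses import dataclass

@dataclass
class AudioChunkHeader:
    """Hypothetical header for a clockless audio data stream.

    The point is that the stream carries the playback sample
    rate as metadata -- an attribute the receiving DAC obeys --
    rather than as an embedded clock signal.
    """
    sample_rate_hz: int   # rate the receiving DAC must play at
    bit_depth: int        # e.g. 16 or 24
    channels: int         # e.g. 2 for stereo
    frame_count: int      # audio frames in the payload that follows
```

The receiving device reads `sample_rate_hz` and sets its own local clock accordingly, which is the behavior described for the TAS1020B below.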
Quote:
Originally Posted by EliasGwinn
This involves using large buffers, buffer management, and some sort of sample-rate information that could usually be obtained by reading SOF.
I'm not familiar with the acronym "SOF". Buffer management is certainly needed, but "large buffers" may not be - more precisely, how large depends on the requirements of your application. As dvse asserts, buffer management already takes place on the playback computer today. This merely extends it to be ultimately controlled by the external DAC.
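To make "ultimately controlled by the external DAC" concrete, here is a rough Python sketch of asynchronous-style flow control: the DAC runs on its own fixed local clock and periodically tells the host how much data to send so the DAC's FIFO stays near a target fill level. The names and the simple proportional correction are illustrative assumptions, not the actual USB audio class feedback format:

```python
def frames_for_next_interval(fifo_level, fifo_target, nominal_rate_hz,
                             interval_s=0.001):
    """Sketch of DAC-driven flow control (illustrative only).

    fifo_level      -- frames currently buffered in the DAC
    fifo_target     -- desired fill level (e.g. half the FIFO)
    nominal_rate_hz -- the stream's stated sample rate
    interval_s      -- how often the DAC reports back to the host

    Returns how many frames the host should send next interval.
    """
    nominal = nominal_rate_hz * interval_s   # e.g. 44.1 frames per 1 ms
    error = fifo_target - fifo_level         # positive: FIFO running low
    correction = 0.1 * error                 # gentle proportional nudge
    return max(0, round(nominal + correction))
```

For example, with a 44.1 kHz stream and a 1 ms interval, a FIFO sitting exactly at its target asks for 44 frames, while a FIFO running 100 frames low asks for 54. The host never needs a low-jitter clock at all; it just keeps the DAC's buffer topped up.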
Quote:
Originally Posted by EliasGwinn
Ideally, every engineer would love to eliminate a problem without a downside. However, that scenario isn't really a 'tradeoff'. When a real tradeoff situation comes about, you must determine the best solution that achieves the most important objectives.
True, it's not a "tradeoff" as I stated it - apologies for abusing the term to make a point.
I think there may be actual tradeoffs here because the DAC1 has a broader set of use cases than a straight "asynch USB DAC", but the tradeoff is NOT about ASRC vs asynch mode USB. It's likely one or more of the following:
(1) You have built a product which does a really great job on input signals that have jitter. Why go to the expense of building and maintaining a second mechanism (for what might turn out to be a negligible or even entirely inaudible improvement) for one interface method?
(2) Asynch USB ideally relies on USB audio drivers built in to major OSes. Those (IIRC) don't support the highest sample rates and/or bit depths. You want to sell a product that supports as much as possible...but don't really want to ship custom USB audio drivers.
(3) You need to support really low latency playback, and you can't achieve it with asynch USB because of the distributed buffer management.
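On point (3), the latency cost of buffering is simple arithmetic; a quick sketch (the numbers are illustrative, not measurements of any product):

```python
def buffer_latency_ms(buffered_frames, sample_rate_hz):
    """Worst-case latency added by audio sitting in a playback buffer."""
    return 1000.0 * buffered_frames / sample_rate_hz

# Illustrative: a 4096-frame buffer at 44.1 kHz adds roughly 93 ms,
# while 441 frames adds exactly 10 ms.  Low-latency uses like live
# monitoring generally want the total round trip in the ~10 ms range,
# so generous distributed buffering can rule those use cases out.
```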
I can understand any of these tradeoffs leading you to avoid asynch mode USB (although the second is pretty weak because it (presumably) applies to your existing USB support). But if you're only addressing use cases that can be supported by asynch USB mode, I've not seen any reason to use ASRC instead.
Cheers,
Mazz.