Mazz
100+ Head-Fier
Joined: Apr 13, 2008 · Posts: 295 · Likes: 10
Quote:
Originally Posted by EliasGwinn
In a USB playback system, unless the driver and the media software are directly linked, the software operates using an internal clock. This data stream is then sent to the audio stack in the operating system.
Ah, yes, I see what you're saying. Let me rephrase it: in order to have only one clock in the playback data path, the audio player also needs to respond to the DAC's requests to speed up or slow down data transmission; otherwise you have two clocks and risk buffer under-runs or over-runs somewhere in the system. And we don't know whether any media players support that mode (nor whether the default OS drivers expose the APIs media players would need for it, if you also want to avoid custom drivers).
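Just to make that feedback idea concrete, here's a toy C sketch of the loop I mean. It has nothing to do with any real USB driver or player API; every name and constant here is a made-up assumption, purely to show the shape of the mechanism:

```c
/*
 * Toy model (NOT real USB/driver code) of DAC-driven flow control:
 * the DAC drains samples on its own clock and asks the player for
 * more or fewer samples per packet so its buffer stays near target.
 */
#include <stdio.h>

#define BUF_CAPACITY 4096
#define TARGET_FILL  (BUF_CAPACITY / 2)
#define NOMINAL_PKT  441          /* samples per 10 ms at 44.1 kHz */

static int fill = TARGET_FILL;    /* current DAC buffer fill level */

/* DAC side: drain samples at a rate set by the DAC's own clock.   */
static void dac_consume(int samples) { fill -= samples; }

/* Feedback: request more/fewer samples next packet, proportional
 * to how far the buffer is from its target fill level.            */
static int dac_request(void) {
    int correction = (TARGET_FILL - fill) / 8;   /* gentle slew    */
    return NOMINAL_PKT + correction;
}

/* Player side: honour the request (this is the mode whose support
 * in media players and default OS drivers is in question).        */
static void player_send(int samples) { fill += samples; }

int main(void) {
    /* Pretend the DAC clock runs ~0.1% fast: it drains ~441.4 on
     * average, yet the feedback keeps the buffer near target.     */
    for (int t = 0; t < 100; t++) {
        dac_consume(441 + (t % 10 == 0 ? 4 : 0));
        player_send(dac_request());
        printf("t=%3d fill=%d\n", t, fill);
    }
    return 0;
}
```

The point of the sketch is that the player never paces itself from its own clock; it only reacts to the DAC's requests, so the DAC's clock is the only clock governing playback.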
Quote:
Originally Posted by EliasGwinn
So, even if the computer did not re-sample, and the device was receiving bit-transparent (or otherwise high-precision) audio, the asynchronous method still may not have less jitter.
I don't understand why this would be the case. Under those circumstances there is only one data clock that affects audio output timing, so the jitter that other methods pick up from data generation on the computer and from data transmission disappears. Yes, you still have to arrange your software and communications stacks to avoid buffer under-runs and over-runs, but you get no jitter from data transmission between computer and DAC, nor from the computer's clock. True, you still have other sources of jitter - e.g. due to the DAC's own clock - but you have those under all possible circumstances anyway.
Are you saying this method might have more jitter than others because its DAC clock implementation has more jitter than an alternative method's? If so, that is an attribute of the implementation, not of the method. Or do you have some attribute of the method itself in mind?
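For a sense of scale on that residual DAC-clock jitter: the standard back-of-envelope result for sampling a full-scale sine of frequency f with RMS clock jitter tj is an SNR ceiling of about -20*log10(2*pi*f*tj). A few illustrative numbers (the jitter figures are assumptions I picked, not measurements of any product):

```c
/* Rule-of-thumb SNR limit from sampling-clock jitter on a
 * full-scale sine: SNR ~ -20*log10(2*pi*f*tj).  Illustrative
 * figures only; not measurements of any actual DAC.           */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

int main(void) {
    double f = 10e3;                          /* 10 kHz test tone */
    double tj[] = { 1e-9, 100e-12, 10e-12 };  /* 1 ns .. 10 ps RMS */
    for (int i = 0; i < 3; i++) {
        double snr = -20.0 * log10(2.0 * PI * f * tj[i]);
        printf("jitter %6.0f ps -> SNR limit %5.1f dB\n",
               tj[i] * 1e12, snr);
    }
    return 0;
}
```

That works out to roughly 84 dB at 1 ns, 104 dB at 100 ps, and 124 dB at 10 ps, i.e. the DAC clock needs to be down in the tens of picoseconds before it stops limiting 16-bit resolution at high audio frequencies.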
I think it's also worth pointing out that, were there truly only one clock affecting data output, sample rate conversion would be required on the computer rather than resampling. The sample rate conversion algorithm (assuming a Nyquist-limited original sampling process and a good-quality ADC clock, as for any good recording) is unique, entirely deterministic, and independent of the quality of the computer's clock, so if implemented properly it wouldn't introduce any errors. But it sounds like we don't live in a truly one-clock world today, so it doesn't really matter...
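To illustrate what I mean by "entirely deterministic": a fixed-ratio converter computes every output sample as the same weighted sum of input samples, so the same input always yields bit-identical output no matter how good or bad the computer's clock is. A toy windowed-sinc version in C (the tap count and window are assumptions for brevity; a real converter would use far more taps):

```c
/*
 * Toy deterministic sample-rate conversion, e.g. 44.1 kHz -> 48 kHz:
 * each output sample is a windowed-sinc weighted sum of input
 * samples at a fixed ratio.  No clock appears in the computation.
 */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846
#define HALF_TAPS 16   /* sinc kernel half-width, in input samples */

/* Evaluate the signal at fractional input position `pos` by
 * Hann-windowed sinc interpolation over the input buffer.         */
static double sinc_interp(const double *x, int n, double pos) {
    int center = (int)floor(pos);
    double acc = 0.0;
    for (int k = center - HALF_TAPS + 1; k <= center + HALF_TAPS; k++) {
        if (k < 0 || k >= n) continue;            /* zero-pad edges */
        double d = pos - k;
        double sinc = (d == 0.0) ? 1.0 : sin(PI * d) / (PI * d);
        double w = 0.5 * (1.0 + cos(PI * d / HALF_TAPS)); /* Hann  */
        acc += x[k] * sinc * w;
    }
    return acc;
}

int main(void) {
    /* 1 kHz sine sampled at 44.1 kHz, converted to 48 kHz.        */
    enum { N_IN = 441 };
    double in[N_IN];
    for (int i = 0; i < N_IN; i++)
        in[i] = sin(2.0 * PI * 1000.0 * i / 44100.0);

    double ratio = 44100.0 / 48000.0;  /* input step per output    */
    for (int j = 0; j < 10; j++)       /* first few output samples */
        printf("out[%d] = %+.6f\n", j, sinc_interp(in, N_IN, j * ratio));
    return 0;
}
```

Notice the result depends only on the data and the ratio, which is exactly why a properly implemented converter can't add jitter-like errors of its own.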