Quote:
Originally Posted by thomaspf
Hi Elias,
It's always great to have a company stand behind its products, but some of what you posted here differs a bit from my experience. So let me question some of the statements you made in order to avoid any confusion on this forum.
1. If your sound card driver uses kmixer, then the bits will get altered even if only a single stream is playing. What procedure did you use to test otherwise? You'd be the first to come to a different conclusion.
2. The Windows standard USB driver uses kmixer and any USB audio device using the standard driver will not work bit perfect unless you use kernel streaming. Vista which does not have kmixer anymore uses a different mixer but is still not bit perfect. Is Benchmark shipping with a different USB driver?
3. I was also intrigued by your statement about a clock recovery system in the DAC1. Is that a new feature in the USB model? As far as I understood up to now, the DAC1 does not use any form of clock recovery at all, but instead uses an AD1896 asynchronous sample rate converter running at a fixed frequency. While that reduces jitter, it also changes all the samples in the process, and therefore the DAC1 never actually plays bit perfect. Is that not correct?
Cheers
Thomas
Thomas,
I answered your first two questions in some detail in a previous post:
http://www.head-fi.org/forums/showpo...4&postcount=39
But here's the testing method:
"The testing consisted of the 'psuedo-random' bit-test that was mentioned in the press release. This is, quiet simply, testing "what-bits-go-in-and-what-bits-come-out". This is a standard test developed by Audio Precision, the leading audio electronics testing equipment manufacturer. When the Audio Precision (AP) sends a digital audio signal into a device, it checks to see if the exact same bits come out. So, for example, if the AP sends in 101100111000, a 'bit-transparent' data path will output the exact same bits: 101100111000. This was our testing proceedure."
As for the question about the DAC1 clocking, it is true that we convert the sample rate to the rate at which the D-to-A chip is most efficient. Your assumption that the D-to-A is not getting a bit-perfect data stream is correct, but this is by design. A converter chip is going to perform best at a specific frequency, due simply to the real-life limitations of semiconductors. So from a distortion-performance standpoint, it is best to convert to analog at the chip's "favorite" sample rate.
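To see why a sample-rate-converted stream can never be bit-identical to its input, here is a toy Python illustration. The linear-interpolation resampler and the output rate used below are only stand-ins for the example; they are not the AD1896's actual algorithm or the DAC1's actual internal rate.

Code:
# Conceptual sketch of why sample-rate conversion is not bit-transparent.
# A toy linear-interpolation resampler: converting material from its
# original rate to the converter chip's preferred internal rate rewrites
# every sample value, even though the waveform itself is preserved.
import numpy as np

def resample_linear(x: np.ndarray, fs_in: float, fs_out: float) -> np.ndarray:
    """Resample x from fs_in to fs_out by linear interpolation."""
    duration = len(x) / fs_in
    t_in = np.arange(len(x)) / fs_in
    t_out = np.arange(int(duration * fs_out)) / fs_out
    return np.interp(t_out, t_in, x)

if __name__ == "__main__":
    fs_in, fs_out = 44100.0, 110000.0        # hypothetical internal rate
    t = np.arange(4410) / fs_in
    original = np.sin(2 * np.pi * 1000.0 * t)  # 1 kHz test tone
    converted = resample_linear(original, fs_in, fs_out)

    # Same tone, but an entirely new set of sample values:
    print(len(original), "samples in ->", len(converted), "samples out")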
So this raises the question: "Why is it important to get bit-transparent audio from the computer if it's not bit-transparent by the time it gets to the D-to-A chip?"
The answer, of course, is: who knows what's happening to the audio as it goes through a computer system? It's so hard to tell what's happening, why, and what effect it's having on the audio. All we know is that we want the audio to come out untouched, and there is no reason why that shouldn't be the case. Also, you can be sure that any signal processing happening behind the scenes in the computer is not done with the D-to-A's best interests in mind, as it is with the DAC1.
Unlike in a computer, we know what is happening to the audio within the DAC1, every step of the way. And everything that happens is done with absolute care and deliberate design, with the goal of achieving the most accurate (least-distortion) conversion possible. If an ideal D-to-A chip existed that performed equally well at all sample rates, we would be using it, and so would every other D-to-A manufacturer. Unfortunately, real-life limitations must be taken into account. Well, perhaps I should say fortunately, because that's what makes an engineer's job special, exciting, and challenging. And, I should say, secure.