Benchmark DAC1 now available with USB
May 16, 2007 at 10:03 AM Post #511 of 3,058
Thanks, yourmando, for again sidestepping my specific argument and going after points irrelevant to it, like a politician

Perfect example: I say such and such measurement is important, and you say oh, there are these tons of other measurements.
Analogy: I go to the mechanic to have a detailed checkup on my car, and he checks everything but the brakes. I ask him to check the brakes, but he says, oh, but I checked all these dozens of other things, everything's perfect!
I think you see the point. Asking him to check the brakes doesn't mean I think they're in poor condition, only that they're important for proper functioning.

I think your defensiveness stems from a failure to see that my criticisms already presume the DAC1 is doing a good job. I'm simply pointing out gaps that leave your certainty of unrivaled superiority not fully justified, and your implication that any further improvements are beyond audibility entirely unsupported. I'm sure virtually all commercial DACs suffer from the issues I've mentioned, but that doesn't mean the issues don't exist. I'm not making a comparison here, as that's not something I'm really interested in.
 
May 16, 2007 at 11:48 AM Post #512 of 3,058
I remember when a device with 60dB of dynamic range was considered good. What would it take to *hear* the difference between a noise/distortion floor of -115dB versus one at -125dB? I'm interested in this because the effects of using software volume control seem very slight to me after looking at the Benchmark Wiki graphs.
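
As a crude back-of-the-envelope check (my own assumptions: music peaking at 0 dBFS, a 0 dB SPL threshold of hearing, no masking from the music itself), here's how loud playback would have to be before either floor matters:

Code:

# How loud must playback be before a noise floor at -115 or -125 dBFS
# rises above a 0 dB SPL threshold of hearing? (Assumes peaks at 0 dBFS
# and ignores masking by the program material.)
for floor_dbfs in (-115, -125):
    required_peak_spl = -floor_dbfs  # dB SPL needed at 0 dBFS peaks
    print(f"{floor_dbfs} dBFS floor: audible only if peaks exceed "
          f"{required_peak_spl} dB SPL")

Either way, the peaks would have to approach the threshold of pain before the difference between the two floors could matter.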
 
May 16, 2007 at 1:47 PM Post #514 of 3,058
Quote:

Originally Posted by EliasGwinn
Monkey,

Thank you very much for this very valuable feedback. I will bring this up with our production team.

Thanks,
Elias



Could you do us a favor and give us the thread type so we can replace screws on units in the field?
 
May 16, 2007 at 2:07 PM Post #515 of 3,058
Quote:

Originally Posted by lowmagnet
Could you do us a favor and give us the thread type so we can replace screws on units in the field?


4-40 thread, 1/4" length

Thanks,
Elias
 
May 16, 2007 at 3:30 PM Post #516 of 3,058
Crowbar,

I'll try to round up your questions and answer them here. Forgive me if I miss any...

1. Thermal Memory Distortion

This is, as you indicated, an often overlooked design consideration. I checked out the paper you linked (thank you for the references by the way...it makes for a much more constructive conversation when references are given). It seems they are referring to high-gain scenarios (specifically, power amps).

The DAC1 addresses thermal distortion by maintaining low-gain operating conditions and using stable resistors. The opamps used in the DAC1 are all operating in low-to-no gain, buffer-type applications. The gain is set with thin metal film resistors, which are (pardon the pun) very resistant to changes due to current and temperature. If thermal conditions were to affect the open-loop gain of the opamp, the overall (closed-loop) gain would not be affected much, because the metal film resistors in the feedback network are the dominant factor.

Also, the I-to-V converter is external to the D/A chip. This configuration is much less prone to thermal distortion than voltage-output converters.
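
To make the feedback argument concrete, here is a minimal numerical sketch (illustrative values of my choosing, not Benchmark's measurements):

Code:

import math

# Closed-loop gain of a feedback amplifier: A_cl = A_ol / (1 + A_ol * beta).
# With beta fixed by stable metal-film resistors, even a 50% thermal shift
# in the opamp's open-loop gain barely moves the closed-loop gain.
def closed_loop_gain(a_ol, beta):
    return a_ol / (1 + a_ol * beta)

beta = 1.0                             # unity-gain buffer (assumed)
nominal = closed_loop_gain(1e6, beta)  # open-loop gain of 10^6 (assumed)
drifted = closed_loop_gain(5e5, beta)  # open-loop gain halved by temperature
print(f"closed-loop gain change: {20 * math.log10(drifted / nominal):.1e} dB")
# prints roughly -8.7e-06 dB, i.e. essentially no change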

2. Jitter measurements

When you mentioned that we only tested the DAC1 with jitter frequencies up to 9 kHz, I was as confused as you were. In fact, we measure with jitter frequencies up to 100 kHz. I see where the miscommunication came about, though. On page 34 of the DAC1 USB manual:

http://www.benchmarkmedia.com/manual...USB_Manual.pdf

...we show a measurement of THD+N vs. Jitter Amplitude and Jitter Frequency. We show about 20 plots of THD+N vs. Jitter Amplitude, with each plot representing a different (constant) Jitter Frequency. The consecutive Jitter Frequency plots increase in 500 Hz intervals from 2 Hz to 9 kHz. This demonstrates that the THD+N vs. Jitter Amplitude plots do not change with Jitter Frequency.

The graph on the preceding page (33), however, is what you are looking for. It demonstrates the DAC1's Jitter Tolerance (Distortion vs. Jitter Freq) from 100 Hz to 100 kHz Jitter Freq's.

As can be seen from the two graphs, there is no change in performance with varying Jitter Amplitudes (up to 12.75 UI) or Frequencies (up to 100 kHz).
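
For anyone who wants to reproduce the idea behind these plots at home, here is a toy numpy model (my simplification, not Benchmark's Audio Precision setup) showing how sinusoidal jitter on the sampling clock creates sidebands around a test tone at the tone frequency plus and minus the jitter frequency:

Code:

import numpy as np

fs = 96_000                 # sample rate, Hz
f0, fj = 10_000, 3_000      # test tone and jitter frequencies (assumed)
jit_ns = 10.0               # jitter amplitude in ns (assumed, exaggerated)

n = np.arange(2 ** 16)
t = n / fs
# each sample is taken slightly early/late according to the jitter
x = np.sin(2 * np.pi * f0 * (t + jit_ns * 1e-9 * np.sin(2 * np.pi * fj * t)))

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-20)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in (f0 - fj, f0, f0 + fj):       # sidebands appear at f0 +/- fj
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:6.0f} Hz: {spec[k] - spec.max():7.1f} dBc")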

3. Improvements to the DAC1

There were several points you made about ways the DAC1 could be improved, so I'll try to address them all...

The D/A converter used in the DAC1 is the AD1853. This chip, as it is used in the DAC1, actually achieves linearity down into the -130's dB (nearly -140 dB). I will try to find our measurement graph for this and post it here.

The DAC1 achieves 21 bits of signal-to-noise ratio.

The DAC1 can accurately resolve the 24th-bit (although it will be below the noise floor).
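
For readers converting between bits and dB: the usual rule of thumb for an ideal converter is SNR = 6.02 x N + 1.76 dB for a full-scale sine. A quick sanity check of the figures above (my arithmetic, not an official spec sheet):

Code:

# Ideal-converter rule of thumb: SNR = 6.02 * N + 1.76 dB (full-scale sine).
def snr_db(bits):
    return 6.02 * bits + 1.76

def effective_bits(snr_db_value):
    return (snr_db_value - 1.76) / 6.02

print(f"ideal 21-bit SNR: {snr_db(21):.1f} dB")   # ~128 dB
print(f"ideal 24-bit SNR: {snr_db(24):.1f} dB")   # ~146 dB
print(f"120 dB SNR = {effective_bits(120):.1f} effective bits")

So "21 bits of SNR" corresponds to roughly 128 dB, and a fully resolved 24th bit would sit some 18 dB further down.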

4. Publishing performance plots of the DAC1

I am very glad to see customers analyzing and appreciating the performance plots which are offered in the manual. We would have put even more plots in the manual, but the manual is already too thick. But, due to the feedback I am hearing from you all, we will publish all performance plots of the DAC1 on our website. I will keep you posted (pardon the pun).

Thanks,
Elias
 
May 16, 2007 at 3:50 PM Post #517 of 3,058
Hi Elias,
Have you guys at Benchmark looked at designing a DAC with a discrete output stage rather than op-amps? Possibly transformer-based, such as Townshend use?

Cheers
 
May 16, 2007 at 4:07 PM Post #518 of 3,058
Hi Elias,

I finally found the time to answer you. Because I thought this topic could be of public interest, I'm answering you here. Maybe it would be a good idea to create a separate thread for the S/PDIF stuff at least, though.


I've checked the speed of the crystal as you advised. The clock is running at 28.322 MHz. However, I don't understand what you mean by "yours may or may not be upgraded". "Upgraded" sounds like modifying devices after they have already been sold, rather than manufacturing new models directly. Apart from that, I thought that *all* DAC1s were capable of 192 kHz - at least via the electrical input. So were there some that didn't accept 192 kHz even through the other inputs?

After all, if the crystal (and the rest of the parts) can support 192 kHz, you would just have to replace the optical receiver with a newer model, right?

However, I doubt the whole 192 kHz business is worth the fuss, since the DAC1 converts everything down to ~110 kHz anyway. The only (technical) advantages of playing back 192 kHz material would be avoiding the conversion to 96 kHz in software (which could be worse) and the theoretical benefit of about 14 kHz of extra sample rate (the difference between the internal rate and 96 kHz, resulting from the input's limitation).

Since the DAC1 is able to recognize sample rates over a wide range (not only the common steps of 44.1, 48, 96 kHz, etc.), I wonder whether even the older optical inputs might be able to receive a 110 kHz sample rate. Playing a file sampled at 192 kHz from the computer and resampling it to the DAC1's internal rate before sending it via Toslink could then work, and if the resampling is of the same quality as the DAC1's, no quality would be lost (in theory; I doubt I could hear any difference anyway).
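
As a quick experiment along those lines, here is a scipy sketch of the software side (the exact internal rate is my assumption; "about 110 kHz" is all that's been stated):

Code:

import numpy as np
from math import gcd
from scipy.signal import resample_poly

fs_in, fs_out = 192_000, 110_000    # target rate assumed to be ~110 kHz
x = np.random.randn(fs_in)          # stand-in for one second of audio

# resample_poly wants an integer up/down ratio: 110000/192000 = 55/96
g = gcd(fs_out, fs_in)
y = resample_poly(x, fs_out // g, fs_in // g)
print(len(x), "->", len(y), "samples")   # 192000 -> 110000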

In regard to S/PDIF TTL from PC drives, I did additional tests and also tried the DAC1 in conjunction with a Yamaha CD player which is said to handle the "valid" flag in the S/PDIF stream correctly.
Although this conforms to the standard, I found that with the DAC1 it is actually a disadvantage compared to the (strange) usage of the "non pcm" flag that PC drives seem to use instead. Why? Because the DAC1 doesn't indicate invalid S/PDIF samples. :frowning2:
This is a real pity, because this way I don't see anything. When using the PlexWriter as source (you can see it in the picture I linked, hehe), the "non pcm" LED at least lights up on most (unfortunately not all) errors. This is unusual, but still better than nothing.
After thinking about it, this is clear: even when (uncorrectable!) C2 errors occur, the S/PDIF stream itself stays valid, so of course the DAC1's "error" LED doesn't show this. "Non pcm" isn't set either (at least with common CD players as source). So the result is that the DAC1 simply doesn't have an LED to indicate standard-conforming C2 errors. That would be a real feature for people who want to know what condition their CDs are in (C2 errors can indicate an approaching death, so one had better make a copy before it's too late).

I also wonder how the DAC1 handles samples flagged as invalid (VALID=true; the boolean logic is inverted here). It doesn't indicate them, that's for sure. But is there a stage which performs interpolation, like the one built into every CD player, or are they simply ignored? One has to distinguish between errors on the path source --> DAC, which result in loss of sync (the "error" LED lights up), and erroneous content. The S/PDIF standard also specifies parity bits used for error detection. Does the DAC1 make use of them?
Some time ago I had a sound card featuring two Toslink interfaces (in and out). When using a pretty long (and cheap) cable between it and the CD player, the sound from the sound card was distorted and noisy. The amazing thing was that it even produced sound when I held the plug near the input; the greater the distance, the greater the distortion. The DAC1, however, either gives a perfect signal or none at all; there is no in-between. I wonder how it detects that the data is error-free before converting it to analog. You can find more information on the standard here:

http://www.epanorama.net/documents/audio/spdif.html
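
For reference, the parity check the standard describes is simple enough to sketch (subframe layout simplified from the page above; whether the DAC1 actually uses it is exactly the open question):

Code:

# S/PDIF subframe: 32 time slots. Slots 0-3 preamble, 4-27 audio sample,
# 28 = V (validity, 0 means VALID - inverted logic), 29 = U, 30 = C,
# 31 = P (parity, chosen so slots 4-31 contain an even number of ones).
def parity_ok(subframe):
    """subframe: sequence of 32 ints (0/1), one per time slot."""
    return sum(subframe[4:32]) % 2 == 0

def sample_valid(subframe):
    return subframe[28] == 0

frame = [0] * 32                     # made-up, all-zero subframe
print(parity_ok(frame), sample_valid(frame))   # True True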

It would be very interesting to know how the DAC1 handles and processes the whole S/PDIF data stream (interpolation, parity checks, etc.). If anyone can put together such an overview, it's YOU. :)

Thanks again, Elias!

little-endian
 
May 16, 2007 at 5:24 PM Post #519 of 3,058
Quote:

Originally Posted by CanMad
Possibly transformer based such as Ayre and Townshend use?


LOL dude, all the tube guys try to build output-transformerless amps, and you want to add transformers... All magnetic core transformers have hysteresis problems and low frequency distortion, and usually phase problems. This is not a hi-fi solution.
 
May 16, 2007 at 6:12 PM Post #520 of 3,058
Thanks for the comments.

Quote:

Originally Posted by EliasGwinn
It seems they are referring to high-gain scenarios (specifically, power amps)....metal film resistors which are (pardon the pun) very resistant to changes due to current and temperature....the overall (closed-loop) gain will not be affected much because the metal film resistors in the feedback network are the dominant factor.


You're right about modern resistors being sufficiently stable. However, negative feedback is used for correction of static distortions, and its ability to correct for dynamic distortions of this type is limited. Couple this with an opamp's extremely high loop gain...
I'm working on SPICE simulations based on thermal models for transistors. I'm not sure if I can easily modify them for opamps, but I'm going to try over the next month or so, as it's easier than the measurement setup described in that paper, since I don't have the right equipment.
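
In the meantime, here's a crude Python toy of the basic mechanism (my simplification, not the SPICE model): junction temperature follows dissipated power through a thermal time constant, and the resulting Vbe drift (about -2 mV/K) rides on the signal with a memory of its recent history:

Code:

import numpy as np

fs = 48_000
t = np.arange(fs) / fs
burst = np.sin(2 * np.pi * 100 * t) * (t < 0.5)   # tone burst, then silence

rth, tau, k_vbe = 50.0, 0.1, -2e-3   # K/W, seconds, V/K (assumed values)
power = 0.01 * (1 + burst) ** 2      # crude signal-dependent dissipation, W

# one-pole thermal low-pass: temperature lags the dissipated power
temp = np.zeros_like(power)
alpha = (1 / fs) / tau
for i in range(1, len(power)):
    temp[i] = temp[i - 1] + (power[i] * rth - temp[i - 1]) * alpha

err = k_vbe * temp                   # Vbe shift superimposed on the signal
print(f"peak Vbe shift during burst: {1e3 * np.abs(err).max():.2f} mV")
print(f"Vbe shift 50 ms after burst: {1e3 * abs(err[int(0.55 * fs)]):.2f} mV")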

Quote:

Also, the I-to-V converter is external of the D/A chip. This configuration is much less prone to thermal distortion then voltage-output converters.


Yes, that certainly helps. I've tried voltage-out DAC chips before and only one sounded good.

Quote:

I see where the miscommunication came about, though. On page 34 of the DAC1 USB manual


Yes, that clears it up.

Quote:

This chip, as it is used in the DAC1, actually achieves linearity down into the -130's dB (nearly -140 dB). I will try to find our measurement graph for this and post it here.


Here's the datasheet: http://www.analog.com/UploadedFiles/...ets/AD1853.pdf
And the one for the AD1955: http://www.analog.com/UploadedFiles/...ets/AD1955.pdf
Dynamic range and THD+N are about 5 dB better with the AD1955 (this can be seen in both the specs and the graphs). The linearity plots are in graphs 19 and 12, respectively. Though the AD1853's lowest bits seem to go lower, note the major kink around -117 dB (preceded by a droop). Though it's not very large, the scale is logarithmic, so good linearity at the lowest LSB levels on this graph matters less than behavior at much larger signal energies.
Overall though, the difference is not great. The digital filters the two chips use appear to be identical. I guess ADI saved money by not redesigning that part, hahaha
 
May 16, 2007 at 10:19 PM Post #521 of 3,058
Quote:

Originally Posted by EliasGwinn
The DAC1 achieves 21 bits of signal-to-noise ratio.

The DAC1 can accurately resolve the 24th-bit (although it will be below the noise floor).



Interesting statement, Elias.
I have already asked myself why Benchmark doesn't list the dynamic range of the DAC1. Although this value is often used interchangeably with the signal-to-noise ratio, some seem to distinguish between them. For example, according to the mastering engineer Bob Katz, one can hear details below the noise level, so the dynamic range can be greater than the SNR, especially in conjunction with dithering (as far as I remember, he gave 91 dB SNR and ~116 dB dynamic range for properly dithered 16-bit material).
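
Out of curiosity, I tried to reproduce that claim numerically (a toy demo of my own, not Katz's data): with TPDF dither, a tone below the 16-bit noise floor survives quantization and still stands out in a spectrum, i.e. the usable dynamic range exceeds the plain SNR figure:

Code:

import numpy as np

fs, f0, n = 48_000, 1_000, 1 << 16
t = np.arange(n) / fs
tone = 10 ** (-105 / 20) * np.sin(2 * np.pi * f0 * t)   # -105 dBFS tone

q = 2 / (2 ** 16)                                       # 16-bit step size
dither = (np.random.rand(n) - np.random.rand(n)) * q    # TPDF dither
quantized = np.round((tone + dither) / q) * q

spec = np.abs(np.fft.rfft(quantized * np.hanning(n)))
k = np.argmin(np.abs(np.fft.rfftfreq(n, 1 / fs) - f0))
print(f"tone stands {20 * np.log10(spec[k] / np.median(spec)):.0f} dB "
      f"above the per-bin noise floor")  # clearly detectable despite
                                         # sitting ~9 dB below the 16-bit SNR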

Now it would be interesting to know how large the dynamic range (!) of the DAC1 actually is. If it is really able to resolve the 24th bit, the dynamic range would have to exceed 140 dB. Is this the case?

I'm also confused about why word lengths of more than 20 bits are used at all, if no converter is actually able to reach such a huge SNR and dynamic range. Many devices don't even match 20-bit performance (by pure math).

I'm sure you can clarify this. Again, this would be worth its own thread.

little-endian
 
May 16, 2007 at 10:28 PM Post #522 of 3,058
Quote:

Originally Posted by little-endian
I'm also confused about why word lengths of more than 20 bits are used at all, if no converter is actually able to reach such a huge SNR and dynamic range. Many devices don't even match 20-bit performance (by pure math).


24 bits is more than the dynamic range of the human ear. The reason for long words is that most types of DSP processing effectively reduce the resolution, even if the DSP internally uses more bits. You can easily see this with an image editing program: look at the histogram of an image, apply some global processing such as equalization, then look at the histogram again. It will no longer be smooth and continuous; the effective quantization is worse.
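
If you don't have an image editor handy, here is the same experiment in a few lines of numpy (synthetic data and an arbitrary gain of my choosing):

Code:

import numpy as np

# Smooth 8-bit data, a global gain applied in float, then re-quantized
# to 8 bits: gaps (a comb) appear in the histogram and fewer of the 256
# levels remain occupied, i.e. the effective quantization got worse.
data = np.clip(np.random.normal(128, 40, 100_000), 0, 255).astype(np.uint8)
processed = np.clip(np.round(data * 1.3 - 30), 0, 255).astype(np.uint8)

print("levels occupied before:", len(np.unique(data)))
print("levels occupied after: ", len(np.unique(processed)))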
192 kHz sampling rates also don't make sense from an audibility standpoint, but they could be useful for gaining effective resolution through dithering, so it comes down to the same point as the increased word length.
 
May 16, 2007 at 11:26 PM Post #523 of 3,058
My apologies to everyone, I didn't mean to thread cr@p.

It would seem, however, that a transformer-based output stage might not measure the most accurately. I therefore understand that this would be undesirable in a device such as the Benchmark, which strives for the most accurate performance.

However, this is from the Townshend web page, for people's information. Make of it what you will:
Audio Amplifier
Normal practice is to use integrated circuit operational amplifiers in the audio signal path. Unfortunately, there are serious problems, even with the “Best” audio grade devices. The first problem is over-complication. These devices may contain up to 1000 transistors and resistors, every one of which has the capacity to lose a minute amount of fidelity. Secondly, the myriad resistors are not perfectly linear at very low voltages. The result is a component which has slight veiling and “grunge” distortion. Our simple discrete component fully class A operational amplifier simply doesn’t suffer from this.
Even the best mix of dual monolithic junction field effect transistors and bipolar junction transistors in a single ended pure class A configuration with optimum global feedback for the highest linearity and lowest distortion was still not good enough, so now the gain is provided by our unique EDCT wired step up transformer and unity gain buffer to eliminate all high order harmonics to bring the ultimate in fidelity. THD and IMD measured at better than -120dB.


I'd still be interested to know whether Benchmark has considered a discrete output stage, though.
 
May 16, 2007 at 11:43 PM Post #524 of 3,058
Quote:

Originally Posted by CanMad
Not to mention all the VERY positive reviews?


Reviews weren't based on blind tests, so they are irrelevant. Those people are hearing little but their psychological bias.

Quote:

Maybe the problems you mention are why they sound so good (I suspect probably not).


Some people like the sound of coloration. Why do you think Grado headphones are popular?

Quote:

Anyway the question was more about a discrete output stage avoiding op-amps than specifically a transformer based one.


I prefer discrete stages myself, and I have no problem with that. But the suggestion to add a distortion-producer a.k.a. transformer had to be addressed.

Quote:

I do realise that there are very good op-amps these days, but I don't think the Benchmark uses the latest and greatest (correct me if I'm wrong Elias).


I think the new LM4562 opamp used in the latest version is very nice. It's already very linear before any feedback, which is not commonly seen in opamps.

Quote:

When are you starting your own hi-fi company


Is that a challenge? One doesn't need to sell commercial equipment to practice electronics. I'm a long-time DIYer, and I've no doubt that I and a number of other DIYers around this and other forums can build a DAC that beats even Lavry and MSB gear at a fraction of the cost. As a DIYer, I have none of the economic considerations of a company that has to maximize profit.

Quote:

as you obviously know more about engineering than Charles Hansen and Max Townshend?


You're embarrassing yourself with an argumentum ad verecundiam here. I recommend taking a basic critical thinking course at your nearest college.

But if you're going to go this route, I suggest you research the enormous engineering effort that has gone towards eliminating transformers from the audio signal path, with the numerous OTL tube amp designs from a variety of companies, and multiple improvements of the technology over time such as Rozenblit's Transcendent OTL, and culminating in Berning's ingenious ZOTL circuit. Speaking of Berning, he has a great visual demonstration of the evils of transformer coupling; take a look at Figure 3 to see the distortion and hysteresis caused by a transformer in the signal path: http://www.davidberning.com/Transfer%20Char.htm

Quote:

Thanks for focusing on the one part of the question that you thought was bad.....like a politician!


Why focus on the other parts, if I didn't see anything wrong with them? Attention should be paid where there are improvements to be made.
 
