Head-Fi.org › Forums › Equipment Forums › Sound Science › does a 24bit dac use actual 16bit when reading 16bit?

does a 24bit dac use actual 16bit when reading 16bit?

post #1 of 12
Thread Starter 

behind the question is the "problem" (not really one) that 16-bit DACs don't really have 16 bits available, as some of the "paths" are used for other functions of the chipset, making most 16-bit DACs more like 12-bit DACs or something like that (I don't know if there is a usual value).

 

so when using a 24- or 32-bit DAC, does it use an actual 16 bits? or is it ******* stupid and uses, within those 16 bits, the ones already reserved for other tasks, turning a 24- or 32-bit DAC into a 12-bit one?

 

I guess my question is: do I get a real -96 dB noise floor on a 24-bit DAC, or (if only 12 bits are available) 12 × 6 = 72, so -72 dB?

post #2 of 12

I might not be right on this, but I don't think there is any other 'path' that reduces the resolution. The reduction of resolution is mostly caused by the SNR of the whole circuit - say a 16-bit DAC has a line output with 80 dB of dynamic range; then your effective resolution is only 80/6 ≈ 13 bits. So this is as much a problem for the analog stage as it is for the digital stage. Take the X5 as an example: it has a 24-bit DAC and the line-out dynamic range is about 115 dB, so you get an effective resolution of about 19 bits. The problem is, regardless of how great the DAC is, you are still limited by the system as a whole. To get a full 24 bits, you would need 144 dB of dynamic range, which is near impossible to achieve.
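The rule of thumb behind these numbers (roughly 6 dB of dynamic range per bit) can be sketched in a couple of lines of Python; `effective_bits` is a made-up helper name, not anything from a real library:

```python
# Rough "6 dB per bit" rule: effective resolution implied by a
# measured dynamic range. The 6.02 comes from 20*log10(2) per bit.
def effective_bits(dynamic_range_db: float) -> float:
    return dynamic_range_db / 6.02

print(round(effective_bits(80), 1))   # 80 dB line-out    -> 13.3 bits
print(round(effective_bits(115), 1))  # X5-style 115 dB   -> 19.1 bits
print(round(effective_bits(144), 1))  # full 24-bit theory -> 23.9 bits
```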

post #3 of 12

A 24-bit DAC will "use" all 16 bits, that is, it will accept data coming in packages of 16 bits. The actual resolution you get at the output, as ClieOS has written, depends on the noise performance of the circuit as a whole.

 

Could you write a little more about these "paths" used for other functions? Do you mean oversampling or something else?

post #4 of 12
Thread Starter 

thank you guys, I'm a little confused about my own question TBH (I need to stop asking stuff in the middle of the night, but hey, that's when my brain decides to ask ^_^).

I guess I talked about "paths" thinking about what was plugged into the chip and how it would interact with... stuff like the ground, maybe changing the actual values the DAC would output, or making noise. and that maybe one of those external "paths" would become open or closed depending on whether it was 16- or 24-bit music, thus really affecting the signal by "interfering" with it.

 

I'm looking for the reason why the noise on 24-bit is really lower (not thinking of theoretical quantization errors here, just how to actually reach -96 dB overall). is it something inside the 24-bit DAC chip, linked to the real value of the least significant bit (not the theoretical one)? or is it due to external factors, like a better overall design and implementation of the 24-bit chipset compared to 16-bit, because, well, why bother getting -110 dB noise everywhere if the chipset will never reach 96 dB?

is it possible, with care, to get a real 16 bits out of a 16-bit DAC?

my first idea was that with, say, R2R (because at least I understand a little how that works), the precision of the resistors would slowly but surely push the output values away from the theoretical ones, so that the least significant bit would carry an error margin producing that -80 dB noise ClieOS talks about for 16-bit DACs. but I don't know if there is any truth to that idea; I'm wild-guessing, trying to figure it out.

post #5 of 12

Two things are mixed up here, I think. First, the overall lower noise achievable with "24 bit" chips is possible because of... higher bit depth, meaning that the number of bits that constitute the package allows for higher dynamic range (from the simple fact that there are 24 of them). That much for theory.

 

In practice not all the dynamic range is used because, as you mentioned, the overall implementation allows for "only" 14, or 17, or 20 bits of dynamic range as the rest is covered by circuit noise. As far as I remember most modern DAC chips present in mainstream stuff (like wm8740 and higher, es9023 and higher, and other TI and Cirrus chips) easily achieve noise performance below -96 dB, so no worries here.

 

If you have a 16-bit DAC chip and you want to max out the dynamic range, then even if the noise or distortion artifacts inherent to the chip itself are at, say, -80 dB, a -110 dB noise floor everywhere else means the chip's noise or distortion will only increase by a tiny, tiny amount when measured input to output (the full circuit, not only the chip itself). These artifacts sum up rather than mask each other; the question is what the limiting factor in a given design is, and how any additional distortion can be minimised.
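That "sum up, not mask" point is easy to check numerically: uncorrelated noise floors add as powers, not as dB figures. A quick sketch (`sum_noise_db` is my own helper name):

```python
import math

# Uncorrelated noise floors combine by summing their powers.
def sum_noise_db(*levels_db):
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

# A -80 dB chip inside a -110 dB surrounding circuit:
print(round(sum_noise_db(-80, -110), 3))  # -79.996: barely above -80 dB
```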

post #6 of 12
Thread Starter 
Quote:
Originally Posted by MaciekN View Post
 

Two things are mixed up here, I think. First, the overall lower noise achievable with "24 bit" chips is possible because of... higher bit depth, meaning that the number of bits that constitute the package allows for higher dynamic range (from the simple fact that there are 24 of them). That much for theory.

that part is about how the least significant bit sets the value of the quantization noise generated by a conversion. I don't think it's the part I'm wondering about.

 

 

 

 

Quote:
Originally Posted by MaciekN View Post
 

In practice not all the dynamic range is used because, as you mentioned, the overall implementation allows for "only" 14, or 17, or 20 bits of dynamic range as the rest is covered by circuit noise. As far as I remember most modern DAC chips present in mainstream stuff (like wm8740 and higher, es9023 and higher, and other TI and Cirrus chips) easily achieve noise performance below -96 dB, so no worries here.

this is it!

and now I read ClieOS comment in a new light.

here is what I thought(so as it seems, not true at all):

I truly believed that the reason a 16-bit chip couldn't go all the way to -96 dB was an internal lack of precision. back to my R2R Illuminati theory: I thought that maybe, with two resistors used to divide the voltage, one wouldn't be exactly the same value as the other, so the output voltage of each division wouldn't be split perfectly in half and would slowly drift from the theoretical value. so the signal would end up with the precision of the least significant bit, minus the small voltage error caused by resistor differences (a fixed value), and thus a resulting noise not of -96 dB but maybe -90 or -85 dB, depending on how precise the resistors were.
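For what it's worth, this theory can be sanity-checked in simulation for the binary-weighted case, where mismatch really does stack the way described. This is a toy model, not a circuit simulation; the function name and the 0.1% tolerance figure are my assumptions:

```python
import math
import random

def simulate_usable_bits(n_bits=16, tol=1e-3, n_codes=2000, seed=0):
    """Binary-weighted DAC with each bit weight off by up to ±tol.

    Returns the usable resolution implied by the worst output error
    found over a random sample of input codes.
    """
    rng = random.Random(seed)
    full_scale = 2 ** n_bits
    # ideal weight of bit k is 2^k / 2^n of full scale; perturb it
    weights = [(2 ** k / full_scale) * (1 + rng.uniform(-tol, tol))
               for k in range(n_bits)]
    worst = 0.0
    for _ in range(n_codes):
        code = rng.randrange(full_scale)
        out = sum(w for k, w in enumerate(weights) if (code >> k) & 1)
        worst = max(worst, abs(out - code / full_scale))
    return math.log2(1 / worst)   # error fraction -> resolvable bits

# 0.1% parts in a 16-bit binary-weighted ladder:
print(f"usable resolution: ~{simulate_usable_bits():.1f} bits")
```

On this toy model the result lands well short of 16 bits (roughly the 10-13 bit region), which is exactly the kind of limit described above; R2R topologies and laser trimming exist to get around it.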

you confirm there is no reality to that? or maybe it is like that, but the error margin is kept low enough that the resulting noise from that specific error stays below -96 dB?

 

 

and here is what I understand from both you and ClieOS posts:

the chip actually has a circuit noise below the quantization noise (here I'm guessing, only if perfectly implemented), and it's only the DAC (as a complete device, not the chip alone) that makes 16-bit DACs only as good as 13-bit ones by the time the signal reaches the line output. is that right?

post #7 of 12

I'm not very familiar with R2R DACs, I only know the general working principle. A difference between resistor pairs seems relatively questionable: for example, you use two resistors to set gain in a voltage-feedback circuit, and there usually are no problems with channel imbalance resulting from any mismatch. It's probably not a problem if you use 1% (or better) resistors and measure them to get exact pairs, although in R2R any difference would stack through consecutive divisions...

 

There are many techniques developed to deal with quantization noise - dithering and other noise-shaping stuff - although these usually give "weighted" noise specifications (it should always be noted by the manufacturer). Modern DAC chips marketed as 24 bit may have quantization noise well below the -96 dB threshold even when they are not perfectly implemented; simply put, modern chips are designed so that even in a poor layout there is plenty of room for error. You are right that it is sometimes the analog stage that is the dominant noise contributor.
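As an aside, the effect of dither is easy to demonstrate numerically. Below is a plain-Python sketch (TPDF dither, all names my own): a sine at a quarter of a 16-bit LSB disappears completely under plain quantization, but survives, buried in noise, once dither is added.

```python
import math
import random

rng = random.Random(42)
LSB = 2.0 / 2 ** 16                       # one 16-bit step over [-1, 1)

def quantize(x, dither=False):
    if dither:
        # TPDF dither: sum of two uniform sources, ±1 LSB peak
        x += (rng.random() - 0.5) * LSB + (rng.random() - 0.5) * LSB
    return round(x / LSB) * LSB

# A sine wave at 1/4 LSB amplitude: below the smallest 16-bit step.
signal = [0.25 * LSB * math.sin(2 * math.pi * k / 64) for k in range(64)]

plain = [quantize(s) for s in signal]
print(all(v == 0.0 for v in plain))       # True: the signal is simply gone

# Dithered: average many quantized passes; the sub-LSB sine reappears.
avg = [sum(quantize(s, dither=True) for _ in range(4000)) / 4000
       for s in signal]
correlation = sum(a * s for a, s in zip(avg, signal))
print(correlation > 0)                    # True: dither preserved it
```

The averaging stands in for what an FFT noise-floor plot would show: with dither, the quantization error becomes signal-independent noise instead of erasing low-level detail.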

post #8 of 12
Thread Starter 

ok all this seems logical.

well, I was just a "bit" curious about the practical results, I guess, because theory only led me so far and I didn't want to make affirmations based only on the pretty "quantization noise" part, which is so convenient to understand as a discrete system.

 

thank you and if somebody knows about my R2R theory, I'm interested to know more about that too.

or even delta-sigma; I just avoided it, as basic delta-sigma is actually very noisy from what I understand, so it would just add confusion to my question about precision.

post #9 of 12

Well, it is worth pointing out that R2R ladder dacs solved some of the issues you have been pondering about unequal resistors.

R2R uses only two values of resistors: R and 2R, one twice the resistance of the other. There are more than two resistors in total, but only in those two values. It is also worth pointing out that such configurations output a current proportional to the digital value being represented. The variable current then feeds a current-to-voltage converter, which outputs a voltage of the appropriate amount.

 

Maybe this will help it make some sense:

 

http://www.tek.com/blog/tutorial-digital-analog-conversion-%E2%80%93-r-2r-dac

 

Also, in such an architecture you can keep stringing out the proper configuration for as many bits as you wish, as long as your I-V stage (current-to-voltage converter) is of sufficient quality.
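That "string out as many bits as you wish" property follows from the ladder's recursive structure: each R/2R node halves the voltage accumulated so far and adds the next bit's contribution. A sketch of the voltage-mode form (the post above describes the current-output form, but the arithmetic is the same; the function name is mine):

```python
def r2r_output(code, n_bits, vref=1.0):
    """Voltage-mode R2R ladder, modeled as the recursion each
    R/2R node performs: halve what came before, add this bit."""
    v = 0.0
    for k in range(n_bits):       # walk the ladder from LSB to MSB
        bit = (code >> k) & 1
        v = (v + bit * vref) / 2.0
    return v                      # equals vref * code / 2**n_bits

print(r2r_output(0b1000, 4))  # MSB alone  -> 0.5 (half of vref)
print(r2r_output(0b1111, 4))  # full scale -> 0.9375 (15/16 of vref)
```

Adding a bit just adds one more R/2R stage to the loop, which is why the architecture scales so naturally.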

 

Now things can go awry in a number of ways. You could have a good, linear 16-bit DAC putting out the appropriate signal, but buffer the I-V stage with a lousy analog circuit having poor SNR, and you lose some of your lower bits just from the signal being swamped by noise. But with a good analog buffer with low distortion and low noise, such a device will put out your signal cleanly with 96 dB of range to work with.

 

You seem to have had this picture in your mind of each resistor being smaller than the previous one, and in the very smallest resistors, inaccuracy in the resistance would hamper performance at the lowest signal levels. DACs can be and have been made that way; R2R was a solution to that issue. I believe the DAC form you are thinking of is called a binary-weighted resistor DAC: each resistor half the size of the next higher bit's. Here is a page I found that illustrates that for you, along with R2R DACs.

 

http://www.allaboutcircuits.com/vol_4/chpt_13/3.html

post #10 of 12
Thread Starter 

well, you too confirm the ability to actually keep noise down to -96 dB. so everybody seems OK on that one, at least as far as the DAC chip is concerned.

 

about the R2R: no, I never thought the resistors were of different nominal values per se. what I had in mind was what Maciek understood in his last post: that one resistor would be 2 ohm and the other, because of the margin of error, would be 2.001 ohm or something, and the next one might be back to 2 ohm, or maybe 1.9999999 ohm. the kind of error I would expect of anything produced in great numbers in a factory.

post #11 of 12

>16-bit linearity has been available for a while now in ADCs/DACs - maybe the earliest ADCs of the late 1980s, at the beginning of CD digital audio, weren't up to modern levels of differential linearity

 

but today, and for most of the last few decades, many flagship ADCs/DACs have been close to 20 bits of S/N and differential linearity - basically limited by analog noise in any real-world musical recording

 

with a high-linearity ADC in the studio and noise-shaped dither, 16-bit audio actually has the full linearity of the studio ADC encoded - using perceptual noise weighting, over 110 dB S/N is theoretically available on dithered 16/44 audio CD

 

to get the full advantage your DAC does have to be much better than 16-bit linear - but this is no problem today, when everybody who cares uses a 24-bit DAC with enough extra linearity to make the DAC errors smaller than any of the recording noise or your output electronics' noise floor

post #12 of 12
Thread Starter 

and it's a wrap. you synthesized pretty much everything I was asking about. congrats for actually understanding me (there is no converter/app for that), and thanks for posting that answer \o/

 

I'll leave you guys alone for now. but expect badly formulated questions from me anytime now! (I'm in the middle of looking at the common points between delta-sigma and what's used for DSD DACs; knowing nothing about it, it's gonna be epic).
