16-bit vs 24-bit -90 dB sine waves look different, yet the DAC upsamples incoming data to a higher bit depth?
Dec 26, 2016 at 8:09 AM Thread Starter Post #1 of 9

Edit:
Totally misunderstood up-sampling; what I read on it was misleading and presented it as some kind of advanced interpolation rather than just filtering, i.e. bandwidth limiting through a higher sampling rate (plus a digital filter, and an analogue HF low-pass filter to eliminate aliasing etc. on the upsampled waveform).
Since then I've "mathematically" (as in not using filter-design programs) made a rudimentary closed-form FIR digital filter for a DAC (see the rough sketch below).

/Edit
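
For illustration, a rough NumPy sketch of what "filtering at a higher sampling rate" means in an oversampling DAC: zero-stuff to twice the rate, then low-pass at the original Nyquist with a generic windowed-sinc FIR. This is an assumed, illustrative design, not the exact filter referred to above; real DACs use polyphase/half-band filters.

```python
import numpy as np

def oversample_2x(x, taps=63):
    """Zero-stuff by 2, then FIR low-pass at the original Nyquist.
    Illustrative sketch only; real DACs use polyphase/half-band filters."""
    stuffed = np.zeros(2 * len(x))
    stuffed[::2] = x                       # insert a zero between every sample
    n = np.arange(taps) - (taps - 1) / 2   # symmetric tap indices
    h = np.sinc(n / 2) * np.hamming(taps)  # windowed sinc, cutoff = half the new Nyquist
    h /= h.sum()                           # unity gain at DC
    return 2 * np.convolve(stuffed, h, mode="same")  # x2 restores amplitude

# A -90 dBFS 1 kHz sine, quantized to 16 bits, then 2x oversampled.
fs = 44100
t = np.arange(200) / fs
x16 = np.round(10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t) * 32767) / 32767
y = oversample_2x(x16)
```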

Looking at -90 dB sine wave tests, 16-bit is, well... 16-bit (stepped at -90 dB), vs 24-bit, which is smooth. Yet this still happens with DACs which upsample data to a higher number of bits?

16-bit: [image]

24-bit: [image]
The DAC the images are from upsamples source data to 32-bit (1.5 MHz).
 
Dec 26, 2016 at 8:43 AM Post #2 of 9
Are you asking if that looks like reasonable DAC output for 16-bit vs. 24-bit -90 dBFS sine input?
 
Dec 26, 2016 at 8:50 AM Post #3 of 9
  Are you asking if that looks like reasonable DAC output for 16-bit vs. 24-bit -90 dBFS sine input?

 
No. The DAC upsamples all incoming data to 32-bit. Yet when fed 16-bit data vs 24-bit data, you get different results for a sine wave.

If they are both received as 32-bit by the DAC, then shouldn't they both look like the 24-bit sine?

Upsampling within software from 16-bit to 24-bit gives a result which looks like the 24-bit sine, AFAIK. So DACs are "upsampling" data without actually changing it? If so, what for?
 
Dec 26, 2016 at 9:30 AM Post #5 of 9
   
No. The DAC upsamples all incoming data to 32-bit. Yet when fed 16-bit data vs 24-bit data, you get different results for a sine wave.

If they are both received as 32-bit by the DAC, then shouldn't they both look like the 24-bit sine?

Upsampling within software from 16-bit to 24-bit gives a result which looks like the 24-bit sine, AFAIK. So DACs are "upsampling" data without actually changing it? If so, what for?

 
If the sine wave *was already at 16-bits*, then the truncation errors (the square-wave-like distortions) are already there. Padding to 32-bits does nothing to change that. The DAC has no way of knowing that you wanted to feed it a pure sine wave rather than a sine wave + 16-bit truncation distortion.
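
A quick NumPy sketch (hypothetical values, not any particular DAC) of the point: quantize to 16 bits first and the staircase is already baked in; padding to 32 bits afterwards changes nothing.

```python
import numpy as np

fs = 44100
t = np.arange(500) / fs
sine = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)   # ideal -90 dBFS sine

q16 = np.round(sine * 32767) / 32767        # quantized to 16 bits (no dither)
q24 = np.round(sine * 8388607) / 8388607    # quantized to 24 bits

# "Convert" the 16-bit data to 32 bits by padding zero LSBs.
padded = (np.round(sine * 32767).astype(np.int64) << 16) / np.int64(32767 << 16)

print(np.allclose(q16, padded))    # True -> padding changed nothing
print(np.max(np.abs(q16 - q24)))   # the 16-bit staircase error is still there
```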
 
Dec 26, 2016 at 9:33 AM Post #6 of 9
That information is gone; there is no way for the DAC to know that it was originally a sine wave and that there should be samples at those intermediate quantization levels. All that is done (in software or in the DAC) is to add zeros to the LSB end of the sample, and perhaps add some dither.
 
Perhaps you are confusing upsampling the bit depth with upsampling the sample rate. When you increase the sample rate you can interpolate and generate intermediate samples.
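
In code terms, roughly (an illustrative sketch, not any specific DAC's behaviour): the bit-depth "conversion" is just a shift, so the represented voltages stay the same.

```python
import numpy as np

def pad_16_to_24(samples_16, dither=True):
    """Shift 8 zero LSBs in; optionally add TPDF dither of +/-1 new LSB.
    The represented voltages are unchanged; no lost detail comes back."""
    s24 = samples_16.astype(np.int32) << 8
    if dither:
        rng = np.random.default_rng(0)
        # Triangular dither: sum of two 1-bit uniforms, values in {-1, 0, +1}.
        d = rng.integers(0, 2, len(s24)) + rng.integers(0, 2, len(s24)) - 1
        s24 = s24 + d.astype(np.int32)
    return s24

x16 = np.array([0, 1, 1, 0, -1, -1], dtype=np.int16)   # a ~-90 dB sine in 16-bit
print(pad_16_to_24(x16, dither=False))                 # [0 256 256 0 -256 -256]
```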
 
Dec 26, 2016 at 1:09 PM Post #7 of 9
Upsampling/oversampling is used for sample rate; you make it all much harder to read and understand if you use that word for bit depth. For bit depth you just increase it (zero padding) and that's it. It doesn't add precision to the already existing 16-bit signal; you just avoid adding extra 16-bit quantization noise from the DAC itself.
And if you were to low-pass the first signal, the result would look more like a sine. It's because the sample rate is so high that the high frequencies aren't filtered out, which is why it looks like this. The actual sine is somewhere in there.
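
For example (a rough sketch; a real DAC's reconstruction filter is far better than a moving average), even a crude low-pass pulls a recognisable sine back out of the 16-bit staircase:

```python
import numpy as np

fs = 44100
t = np.arange(2000) / fs
sine = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)
q16 = np.round(sine * 32767) / 32767        # the stepped -90 dBFS signal

h = np.ones(15) / 15                        # crude 15-tap moving-average low-pass
smoothed = np.convolve(q16, h, mode="same")

# Mean squared error vs. the ideal sine: typically much smaller after smoothing.
print(np.mean((q16 - sine) ** 2), np.mean((smoothed - sine) ** 2))
```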
 
Dec 26, 2016 at 3:35 PM Post #8 of 9
Well, the first picture is effectively 1 bit: -90 dB is about 15 bits down, so 16-bit leaves only 2^1 levels. This is probably something like a 21-bit DAC, so the second one has about 6 bits, or 64 levels, and can actually still look like a sine wave. The issue here is that the bit-depth increase happens too late. If you were to do it first and then attenuate by 90 dB, the two would look similar. Increasing the sample rate would also help, I think, because then you are spreading the noise over a greater frequency range.
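
The arithmetic behind those numbers, roughly (each bit is worth about 6.02 dB):

```python
# Back-of-envelope: how many bits of resolution are left for a -90 dBFS signal.
bits_lost = 90 / 6.02                        # ~14.95 bits eaten by the attenuation
for word_bits in (16, 21, 24):
    remaining = int(word_bits - bits_lost)   # whole bits of resolution left
    print(f"{word_bits}-bit word: ~{remaining} bit(s) left = {2 ** remaining} levels")
# 16-bit: ~1 bit (2 levels); 21-bit DAC: ~6 bits (64 levels); 24-bit: ~9 bits (512 levels)
```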
 
Dec 26, 2016 at 11:20 PM Post #9 of 9
So I assumed that upsampling DACs essentially take, say, 16-bit data and, using a higher sample rate and a higher number of bits, do something along the lines of drawing a (not necessarily straight) line between each "point", depending on past and, to a small extent, future samples, to try to mimic a higher bit depth and sample rate?
 
 
OK, I read up on upsampling and oversampling, and I have majorly misunderstood what it actually is; it's nothing nearly as complex as I thought. Sorry, pointless thread.
(Why is the NOS DAC argument even a thing???)
 
