Err... so what you are trying to say is that you don't need higher bit depth but do want smaller quantization steps? But bit depth and quantization step size are linked by definition (in contrast to a gauge's size and its scale). So that's like saying you don't need higher bit depth but do want higher bit depth. Fine, I guess; you can "want" whatever you like, but I don't see how this relates to the discussion.
So if you cut off 20 cm from your 30 cm gauge, you have a 10 cm gauge that still measures in 0.1 mm graduations.
PCM works a bit differently.
The loudest level you can represent on the digital side is, by design, 0 dBFS.
Each bit adds roughly 6 dB of dynamic range, hence a 16 bit recording can resolve details down to -96 dBFS.
A 24 bit recording can resolve details down to -144 dBFS. This is a bit theoretical, as no recording or playback chain can resolve that, but 20 bits (-120 dBFS) might be the best we can obtain at present.
Likewise you can't reproduce frequencies above half the sample rate, so Red Book (CD) is limited to 44.1/2 = 22.05 kHz.
Likewise a 96 kHz recording can reproduce frequencies up to 48 kHz.
So by choosing your bit depth and sample rate, you can get the "gauge" you want.
Don't mistake the technical differences for audible differences; audibility is a completely different matter.
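As a quick sanity check on those numbers, here is a back-of-the-envelope sketch in plain Python (just the two rules of thumb above, nothing more):

# Rough PCM limits: ~6.02 dB of dynamic range per bit,
# and a top frequency of half the sample rate (Nyquist).
def noise_floor_dbfs(bits):
    return -6.02 * bits  # quantization noise floor relative to 0 dBFS

def nyquist_hz(sample_rate_hz):
    return sample_rate_hz / 2  # highest reproducible frequency

for bits in (16, 20, 24):
    print(f"{bits} bit -> noise floor around {noise_floor_dbfs(bits):.0f} dBFS")
for fs in (44100, 96000):
    print(f"{fs} Hz -> frequencies up to {nyquist_hz(fs) / 1000:.2f} kHz")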
I have a 10 cm rain gauge that measures in 1 cm graduations.
I have a 30 cm rain gauge that measures in 0.1 mm graduations.
I don't necessarily need a 30 cm gauge, but I want the 0.1 mm graduations the 30 cm gauge provides.
According to your analogy, I have an ADC that measures in 0.01 mm graduations and then a DAC which outputs a signal calculated from those measurements: the same signal that was recorded, with no graduations whatsoever! You don’t appear to understand how digital audio works.
It’s common for the audiophile crowd to get wrapped up in “increased bit depth = increased precision” without studying further to understand what that actually means in a digital audio context.
So why not demonstrate to yourself what changing the bit depth does? It’s pretty easy to do using Audacity or a similar audio editor. Take a 24-bit audio file and create a copy converted down to 16-bit with dither. Then load the original and the copy together and invert the copy to subtract it from the original.
You’ll be left with silence. The noise floor may still be difficult to detect even if you turn up the volume.
Why is it silent? Because the audio signals are 100% identical between the two files and they cancel each other out. You’re only left with the (virtually inaudible) dither noise present in the second file but not the first.
Put another way, you’re hearing the difference between the two files, and that difference is virtually nothing.
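If you’d rather script the experiment than click through Audacity, here’s a minimal numpy sketch of the same null test (my own quantize helper using plain TPDF dither, not Audacity’s noise-shaped default):

import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone at -6 dBFS

def quantize(x, bits):
    # Quantize to the given bit depth with TPDF dither
    # (difference of two uniform randoms, spanning +/- 1 LSB).
    q = 2.0 ** -(bits - 1)  # quantization step for a full scale of +/- 1.0
    dither = (rng.random(x.shape) - rng.random(x.shape)) * q
    return np.round((x + dither) / q) * q

# "Invert and mix" is just subtraction: what's left is the residual.
residual = signal - quantize(signal, 16)
rms_db = 20 * np.log10(np.sqrt(np.mean(residual ** 2)))
print(f"residual RMS: {rms_db:.1f} dBFS")  # about -96 dBFS: only the dither/quantization noise

The residual carries none of the music, only a flat, signal-independent noise floor, which is exactly what the inverted-copy test in Audacity demonstrates.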
I’ve been playing with this a bit today to remind myself how it all works. I have some 24-bit files I purchased just for this sort of experimentation a long while back. Converting from 24-bit to 16-bit with dither, I get exactly what I said when the 16-bit version is subtracted from the 24-bit one: silence.
Converting from 24-bit to 8-bit is more audibly interesting but still illustrates the point. Applying dither with 8-bit results in a much more easily audible noise floor, but it’s still quite a bit quieter than good old-fashioned tape hiss, and the audio is still very listenable. If you once again subtract the 8-bit version from the 24-bit one, you just get the (now louder) dither noise. The signal above that higher noise floor is still identical to what was in the 24-bit original, or they wouldn’t cancel each other out when one is inverted.
Interestingly, I’ve found some tracks (but only some) that exhibit distortion for the first second or two when converted to 8-bit with Audacity. You can tell when you play back the canceled audio and very briefly hear a tinny piece of the original signal still present. That then goes away, and the rest of the track has the signal completely canceled out with nothing but the dither noise left over, just as happened with the 16-bit conversion. I’m not knowledgeable enough on the topic to know why the residual wasn’t just uncorrelated noise for that first second or two in those cases. I’m interested to dig more into that, but it doesn’t really matter much since we don’t use 8-bit for audio consumption anyway. No such issues occur when going to 16-bit (or, if they do, they’re inaudible).
Otherwise, the 8-bit conversion serves as a good illustration of what changing the bit depth does.
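(Running the earlier numpy sketch with quantize(signal, 8) instead gives a residual around -48 dBFS: a much louder floor, as described above, but still nothing except noise.)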
Edit: danadam found that this is due to a bug in Audacity that causes the dither to be applied incorrectly when exporting to 8-bit! So if you’re using that program, you can just stick to converting 24-bit to 16-bit.
Audio like this goes to the DAC.
NO STAIR STEPS whatsoever.
You can even find a perfectly flat zero line at some moments in time.
Remember, the DAC and the high-frequency reconstruction filter work together.
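To see why there are no stair steps, here is a toy numpy sketch of what the reconstruction filter does (Whittaker-Shannon sinc interpolation, a simplified stand-in for a real DAC filter):

import numpy as np

fs = 8                                        # deliberately coarse sample rate
n = np.arange(16)
samples = np.sin(2 * np.pi * 1.0 * n / fs)    # a 1 Hz tone sampled at 8 Hz

t = np.linspace(0, (len(n) - 1) / fs, 1000)   # fine time grid for the "analog" output
# Each sample contributes one shifted sinc; their sum is the reconstructed waveform.
y = sum(s * np.sinc(fs * t - k) for k, s in enumerate(samples))
# Away from the edges of this short snippet, y traces the smooth original sine:
# no steps between samples, and it passes right through zero where the sine does.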
Applying dither with 8-bit results in a much more easily audible noise floor, but it’s still quite a bit quieter than good old-fashioned tape hiss,
...
Interestingly, I’ve found some tracks (but only some) that exhibit distortion for the first second or two when converted to 8-bit with Audacity.
It's probably dither with shaped noise if it is quieter than tape hiss. AFAIR that's the default in Audacity. It is possible that it is causing clipping and that's why the files don't null to just noise.
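If clipping is the culprit, the failure mode is easy to sketch: once samples are clamped at full scale, the error correlates with the music and no longer nulls to plain noise. A toy numpy illustration (Gaussian noise standing in for heavy shaped dither; not Audacity’s actual algorithm):

import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
loud = 0.999 * np.sin(2 * np.pi * 440 * t)            # program material close to 0 dBFS
noisy = loud + 0.02 * rng.standard_normal(t.shape)    # crude stand-in for strong dither
clipped = np.clip(noisy, -1.0, 1.0)                   # integer export clamps at full scale
residual = loud - clipped
# Around the waveform peaks the residual now tracks the music itself,
# which is why a null test would leave an audible sliver of the signal.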
Interestingly, I’ve found some tracks (but only some) that exhibit distortion for the first second or two when converted to 8-bit with Audacity.
...
Audacity doesn't "support" 8 bit format as far as I know. How did you create these 8 bit versions? One way to do it is to use the 16 bit format to carry a signal reduced to 8 bits:
1. Nyquist prompt: (mult s (/ 1.0 256)) ; scale down by 2^8 = 256, so the signal sits in the bottom 8 bits of a 16 bit word
2. Mix to 16 bit with dither
3. Nyquist prompt: (mult s 256) ; scale back up by 256 to restore the level
Depending on the version of Audacity this syntax may not work, but that's the idea.
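(In terms of the numpy sketch earlier in the thread, the same trick is quantize(x / 256, 16) * 256, which lands on exactly the same grid as quantize(x, 8): dividing by 2^8 parks the signal in the bottom 8 bits of the 16 bit word, and multiplying back restores the level.)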