Originally Posted by jcx
try a closer reading, which points out that 24 bits is not necessary for headroom in post-processing
and he fails to credit psychoacoustic noise-shaped dither of 16 bits with delivering a perceived noise floor and linearity below -96 dB - basically down to your DAC's limit
and today's PC sound software can use a 24 bit DAC to "preserve the bits" of a 16 bit stream with local EQ and digital volume applied
redithering (pointless at 24 bits, since Johnson noise in your electronics is already higher) can be done too in most of today's digital audio hardware
even a low-power 16 bit uC in a DAP can redither after digital volume and EQ - see the RockBox code - but many now have 24 bit DACs
Like I said, it points out both sides of the debate. I also read this part closely:
"Can we expect the same when moving from 16 bit (CD audio) to 24 bit?
If we look at the numbers, the answer is yes.
16 bit integer allows for 2^16 = 65536 different values
24 bit integer has 2^24 = 16777216 different values, a 256 times better resolution!
No doubt, this must be a substantial audible difference."
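For what it's worth, the raw arithmetic in that quote does check out - it's the "substantial audible difference" conclusion that the article goes on to question. A quick Python sanity check:

```python
# Distinct values for 16-bit vs 24-bit integer PCM
levels_16 = 2 ** 16   # 65536
levels_24 = 2 ** 24   # 16777216

print(levels_16, levels_24, levels_24 // levels_16)  # 65536 16777216 256

# Theoretical quantization noise floor: roughly 6.02 dB per bit
print(round(6.02 * 16, 1), "dB vs", round(6.02 * 24, 1), "dB")  # 96.3 dB vs 144.5 dB
```

So 24 bits really is 256 extra steps per 16-bit step - the debate is whether anything below the ~96 dB floor is audible over real electronics.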
"The headroom argument is no longer valid: one knows the maximum level when producing the final master.
Noise floor: your gear must have an S/N better than 96 dB, otherwise the extra bits that 24 offers will be drowned in the noise. But most gear does.
DSP: if you use digital volume control or any other kind of DSP, e.g. re-sampling, you can profit by using 24 bits.
In the case of 16 bits the result must be dithered, otherwise the artifacts of the DSP become audible.
Again 24 bits has the advantage because of its smaller quantization error.
If you have hardware supporting 24 bit words, padding 16 bit audio with 8 bits does the job too."
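The "dither after DSP" point is only a few lines of code. Here's a minimal, hypothetical sketch (my own function name, plain TPDF dither rather than the noise-shaped kind jcx mentions) of applying a digital volume to 16-bit samples and requantizing with dither instead of truncation:

```python
import random

def volume_with_dither(samples_16, gain):
    """Apply a digital volume 'gain' to 16-bit samples, then TPDF-dither
    the result back to 16 bits instead of truncating (a minimal sketch)."""
    out = []
    for s in samples_16:
        x = s * gain                             # DSP result, no longer on the 16-bit grid
        x += random.random() - random.random()   # TPDF dither: +/-1 LSB, triangular PDF
        q = max(-32768, min(32767, round(x)))    # requantize, clip to int16 range
        out.append(q)
    return out
```

Just truncating x instead would correlate the quantization error with the signal, which is exactly what the article means by "the artifacts of the DSP become audible".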
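And the padding trick at the end is literally just a left shift - e.g. (hypothetical helper name):

```python
def pad_16_to_24(sample_16):
    """Place a 16-bit sample in a 24-bit word by padding 8 zero bits
    below the LSB; the audio content is unchanged."""
    return sample_16 << 8

print(pad_16_to_24(1))      # 256: one 16-bit LSB spans 256 24-bit steps
print(pad_16_to_24(32767))  # 8388352: just under 24-bit full scale (8388607)
```

Those 256 empty steps under each 16-bit LSB are where the extra precision from digital volume or EQ can land before the DAC.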
Ok guys, don't mean to "try to exit the conversation" as another poster mentioned...but it's Millertime in So Cal.