infinitesymphony
Quote:
The limits of human audio perception are well established, running from 20 Hz to 20,000 Hz on average, with only young children able to hear the highest frequencies and the lowest ones generally felt rather than heard. Right now music is recorded at a bit depth of 24 bits, but from everything I've read (there are a couple of articles on this forum that cover it pretty well), what you get is increased dynamic range (headroom), not a better representation of the sound.
I'd be interested in a hypothesis on why more is better, rather than the presumption that it is. The best explanation for why more != better is gregorio's thread from two years ago.
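The dynamic-range half of that quote is easy to sanity-check. A minimal sketch (Python), assuming the textbook SNR formula for an N-bit uniform quantizer driven by a full-scale sine:

```python
# Theoretical SNR of an N-bit uniform quantizer with a full-scale
# sine input: SNR ~= 6.02 * N + 1.76 dB
for bits in (16, 24):
    print(f"{bits}-bit: {6.02 * bits + 1.76:.1f} dB")
# 16-bit: 98.1 dB, 24-bit: 146.2 dB -- the extra bits lower the
# noise floor (headroom); they don't add detail inside the audible band
```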
Yes, 20 Hz to 20 kHz is the established range for average human hearing. No disputes there. However, as stated earlier, higher sampling rates can yield more accurate frequency response within that audible range.
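Here's a quick numpy sketch of where that intuition comes from, assuming naive linear-interpolation reconstruction rather than a proper sinc (brick-wall) filter. The caveat: an ideal reconstruction filter recovers the tone exactly at either rate, which is the other side of this argument.

```python
import numpy as np

f = 18_000.0                                  # audible-band test tone
t_dense = np.arange(0, 0.01, 1 / 1_536_000)   # dense "analog" reference grid
reference = np.sin(2 * np.pi * f * t_dense)

for fs in (44_100, 192_000):
    t_s = np.arange(0, 0.01, 1 / fs)          # sample instants at rate fs
    samples = np.sin(2 * np.pi * f * t_s)
    recon = np.interp(t_dense, t_s, samples)  # piecewise-linear reconstruction
    rms_err = np.sqrt(np.mean((recon - reference) ** 2))
    print(f"{fs} Hz: RMS reconstruction error = {rms_err:.4f}")
```

With only ~2.5 samples per cycle at 44.1 kHz, the linear reconstruction misses badly; at 192 kHz it hugs the tone. How close real playback chains get to the ideal filter is the crux of the disagreement.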
gregorio's explanation hinges on the idea that dither solves everything: "The result is that we have an absolutely perfect measurement of the waveform plus some noise. In other words, by dithering, all the measurement errors have been converted to noise." So dither is perfect except for the fact that it's not: it adds randomized white noise to the signal. Instead of adding random noise to decorrelate the quantization error, why not start with a more accurate representation of the signal in the first place?
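To see what that trade-off looks like, here's a rough sketch of non-subtractive TPDF dither ahead of a uniform quantizer; the 8-bit depth and low signal level are exaggerated just to make the effect obvious:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits, dither=False):
    """Uniform mid-tread quantizer for signals in [-1, 1]; optional TPDF dither."""
    lsb = 2.0 / (2 ** bits)
    if dither:
        # Triangular-PDF dither: sum of two uniform +/-0.5 LSB sources
        x = x + (rng.uniform(-0.5, 0.5, x.shape)
                 + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round(x / lsb) * lsb

fs = 48_000
t = np.arange(fs) / fs
tone = 0.01 * np.sin(2 * np.pi * 1_000 * t)    # low-level 1 kHz tone

for dith in (False, True):
    err = quantize(tone, bits=8, dither=dith) - tone
    spectrum = np.abs(np.fft.rfft(err))
    # Spiky error spectrum = correlated distortion; flat = broadband noise
    label = "dithered  " if dith else "undithered"
    print(label, "peak/mean of error spectrum:",
          round(spectrum.max() / spectrum.mean(), 1))
```

Undithered, the error piles up in harmonics of the tone (a huge peak/mean ratio); dithered, it flattens into noise. That's the whole trade: correlated distortion swapped for a raised, random noise floor.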
Edit: Dig deeper than the first post in gregorio's thread and you'll see him correcting some of the mistakes in his own assertions.