SilentNote
So I've already learned about audio bit depth (16-bit vs. 24-bit, etc.) and understand that 16-bit audio has a quantization noise floor about 96 dB below peak level (dither trades the distortion for a slightly higher, benign noise floor), while 24-bit sits around 144 dB down. Considering that a typical "quiet" room is about 30 dB SPL, I really can't be bothered about noise at -96 dB. And if you play music at 110 dB SPL, the quantization noise sitting at roughly 14 dB SPL is the last thing you need to worry about (OSHA's permitted exposure at 110 dB is under 5 minutes per day in total). I listen to music with isolating IEMs at around 70 dB peak loudness, which puts the quantization noise at about -26 dB SPL, so I'll never hear it.
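The arithmetic above can be sketched in a few lines; this uses the standard ~6.02 dB-per-bit dynamic-range rule of thumb, and the 110 dB / 70 dB playback levels are the examples from the paragraph, not measured values:

```python
# Quantization noise floor at a given playback level (rough sketch).
# Dynamic range of an ideal N-bit quantizer is ~6.02 dB per bit
# (the full SNR formula adds another 1.76 dB, ignored here for simplicity).

def quantization_noise_floor_spl(peak_spl_db: float, bits: int) -> float:
    """Approximate SPL of the quantization noise when peak program
    level is played back at peak_spl_db."""
    dynamic_range_db = 6.02 * bits
    return peak_spl_db - dynamic_range_db

# 16-bit at 110 dB SPL: noise floor near 14 dB SPL (below a quiet room)
print(round(quantization_noise_floor_spl(110, 16), 1))  # ~13.7
# 16-bit at 70 dB SPL with isolating IEMs: well below 0 dB SPL
print(round(quantization_noise_floor_spl(70, 16), 1))   # ~-26.3
```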
Moving on to the sampling rate - I understand that according to the Nyquist-Shannon sampling theorem, to PERFECTLY reproduce a band-limited signal, you need to sample at more than twice its highest frequency. Since human hearing is limited to 20 kHz (maybe 21 kHz when I was 16), 44.1 kHz is sufficient to perfectly reproduce all sounds in the audible range. In fact, a low-pass (anti-aliasing) filter is applied before sampling to remove content above the Nyquist frequency of 22.05 kHz, preventing aliasing problems.
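A quick sketch of why that anti-aliasing filter matters: a tone above Nyquist doesn't disappear when sampled, it folds back into the audible band as an alias. The 25 kHz test frequency here is just an illustrative choice:

```python
import numpy as np

# A 25 kHz cosine (above the 22.05 kHz Nyquist limit of 44.1 kHz sampling)
# produces exactly the same sample values as a 19.1 kHz cosine:
# cos(2*pi*f*n/fs) == cos(2*pi*(fs-f)*n/fs) for integer n.
fs = 44_100
n = np.arange(1000)                 # sample indices
t = n / fs                          # sample times in seconds
above_nyquist = np.cos(2 * np.pi * 25_000 * t)
alias = np.cos(2 * np.pi * (fs - 25_000) * t)   # 19.1 kHz, in-band

# After sampling, the two tones are indistinguishable.
print(np.allclose(above_nyquist, alias))  # True
```

This is why the filter has to come before the ADC: once the samples are taken, the 25 kHz tone and its 19.1 kHz alias are literally the same data.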
I've also learned that 44.1 kHz was chosen because it was compatible with both NTSC and PAL video equipment, and 48 kHz because it was compatible with all motion picture frame rates. 24-bit recording is useful for mixing, but after mastering for playback, I can't find a reason any human could distinguish a perfect reproduction of an audio signal at 44.1/16 under reasonable listening conditions from a perfect reproduction at 96/24.
So it would seem that sample rates of 96 kHz / 192 kHz and a bit depth of 24 bits have nothing to do with the resolution of audio at the playback level?