From a purely neurophysiological point of view, healthy young humans up to the age of roughly 25 cannot process any sound lower than 20 Hz or higher than about 20 to 21 kHz with our cochlea and our end point in the brain, the auditory cortex.
To be accurate, I think we need to acknowledge that there are conditions under which this quote isn’t necessarily true. Some studies have demonstrated that some young adults can detect frequencies up to around 24kHz. However, this is uncommon and requires pure tones/sine waves at extremely high sound pressure levels (around 110dB or so). Such studies are rare and old, because it is now known that such levels can cause hearing damage after a relatively short exposure time, so it would be difficult or impossible to design a medically ethical study today. None of this applies to consumers listening to music though, because music never consists of just a single sine wave above 20kHz; its spectral energy typically peaks at around 200Hz, with relatively little or no energy above 20kHz. At reasonable listening levels, young adults can typically hear up to about 17.5kHz, not uncommonly only around 16kHz, and extremely rarely beyond about 19kHz.
In production and mastering, higher bit depths (e.g. 24-bit over 16-bit) give the potential for a lower noise floor, as well as more headroom for higher dynamic range, whereas a higher sampling frequency gives options for better low-pass (anti-aliasing) filters: you push the artifacts created by filtering up to higher, safely non-audible frequencies... and that's one part of the Nyquist–Shannon sampling theorem.
We have to be careful here because we have to separate file format from processing environment, a fact that is pretty much always omitted in audiophile marketing, reviews, etc. A 24-bit file format provides more headroom than a 16-bit file format when recording, but that’s pretty much where the benefit ends.

Using more than 16-bit is necessary when processing (mixing and applying effects) because error/noise occurs in the LSB (least significant bit) and sums with each successive process/plugin. Even with 24-bit, with enough channels and processing this can add up to an audible increase in the noise floor, which is why professional DAWs and hardware mixers have never operated internally at 16 or 24-bit. The internal (virtual) mixing and processing environment in professional DAWs today is 64-bit float, and even the very first commercially available digital mixer (from Yamaha, in 1987) used a 27-bit environment. In other words, the 16-bit or 24-bit file is loaded into the DAW and processed at 64-bit; noise/error from each process/processor is therefore confined to roughly the 64th bit, and even summing many thousands of channels and processors together does not result in noise/error anywhere near reproducible levels, let alone audible levels. The final stage, after all the processing has been completed, is to come out of the virtual environment and record the completed mix to a delivery file format (16- or 24-bit).
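To make the accumulation point concrete, here’s a minimal Python sketch (numpy only). The 100 stages and the small random gain changes are just hypothetical stand-ins for arbitrary processing, not a model of any real plugin chain: one path is re-quantised to 16-bit after every stage, the other stays in 64-bit float, and the resulting error relative to the original signal is printed.

```python
import numpy as np

def quantize(x, bits):
    """Round a float signal to the nearest step of a fixed-point grid."""
    scale = 2 ** (bits - 1) - 1
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
fs = 48_000
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # 1kHz tone at -6dBFS

stages = 100                                   # hypothetical number of processing stages
gains = rng.uniform(0.95, 1.05, size=stages)   # stand-ins for arbitrary gain changes

x16 = signal.copy()                            # path re-quantised to 16-bit after every stage
x64 = signal.copy()                            # path kept in 64-bit float throughout
for g in gains:
    x16 = quantize(x16 * g, 16)
    x64 = x64 * g

total_gain = np.prod(gains)                    # undo the net gain so both paths compare to the original
err16 = x16 / total_gain - signal
err64 = x64 / total_gain - signal

def rms_dbfs(e):
    return 20 * np.log10(np.sqrt(np.mean(e ** 2)) + 1e-300)

print(f"error after {stages} stages, 16-bit path: {rms_dbfs(err16):7.1f} dBFS")
print(f"error after {stages} stages, 64-bit path: {rms_dbfs(err64):7.1f} dBFS")
```

On a typical run the 16-bit path’s accumulated error sits well above the single-pass 16-bit noise floor (roughly 20dB higher for 100 stages, since independent errors sum in power), while the 64-bit path’s error stays hundreds of dB down, far below anything reproducible.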
So, if I'm right, and if you have extremely good recording equipment, say capable of up to 48 kHz, then in mastering you would be wise to double the frequency range to give enough leeway to apply any kind of filtering without producing artefacts, in order to keep the data as pristine as possible (for whatever reason, but mostly the archival/preservation aspects are cited).
Mmmm, not really. There was (20+ years ago) a legitimate reason for higher-than-48kHz sample rates, although it had nothing to do with filtering. Certain processes required higher than 24kHz audio frequencies: for example, vintage analogue limiters and compressors had particular sound signatures partially reliant on intermodulation distortion (IMD), where ultrasonic distortion caused IMD within the audible band. So to emulate such processing in the digital domain required calculating distortion products up to around 30kHz or so in order to derive the IMD, which obviously could not be accomplished if the maximum allowable frequency was 22.05kHz or 24kHz. So in the early days of higher-than-48kHz sample rates, applying such a processor (say, a vintage compressor emulation plugin) to a 44.1/48kHz file could quite easily be ABX’ed against the original analogue unit or against the same plugin operating on a 96kHz file. Hey presto, an audible difference between 48kHz and 96kHz!

However, that situation only existed for a few years. In the 2000s computers got much more powerful and plugin programming got much more sophisticated. Feed, say, a 48kHz recording into such a plugin in the mid-2000s and the plugin would internally upsample to 96kHz, calculate what it needed to in the ultrasonic frequencies, create the IMD in the audible range and downsample back to the input sample rate (48kHz), which obviously preserved the IMD in the audible range. Now 96kHz vs 44.1/48kHz could no longer be ABX’ed and the benefit of higher sample rate recordings no longer existed!
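To illustrate that upsample / process / downsample pattern, here’s a simplified Python sketch (numpy + scipy). The tanh saturator is just a generic stand-in for the nonlinear part of such a plugin, and the 15kHz test tone is chosen so that its 3rd harmonic (45kHz) lands above Nyquist; none of this is any real plugin’s algorithm.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48_000
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 15_000 * t)   # 15kHz tone: its 3rd harmonic (45kHz) is above Nyquist

def saturate(sig):
    # Generic memoryless saturation -- a stand-in for the nonlinear part of
    # a vintage-compressor emulation, not any real plugin's algorithm.
    return np.tanh(3.0 * sig)

# Naive: nonlinearity applied directly at 48kHz.
# The 45kHz distortion product cannot be represented and aliases down to 3kHz.
y_naive = saturate(x)

# Oversampled: upsample to 96kHz, apply the same nonlinearity, downsample back.
# The 45kHz product is represented correctly at 96kHz and then largely removed
# by the decimation filter instead of folding into the audible band at full level.
y_os = resample_poly(saturate(resample_poly(x, 2, 1)), 1, 2)

def spectrum_db(sig):
    win = np.hanning(len(sig))
    mag = np.abs(np.fft.rfft(sig * win))
    return 20 * np.log10(mag / mag.max() + 1e-12)

f = np.fft.rfftfreq(len(x), 1 / fs)
bin_3k = np.argmin(np.abs(f - 3_000))      # where the aliased product lands
print(f"aliased product at 3kHz, processed at 48kHz:            {spectrum_db(y_naive)[bin_3k]:6.1f} dB")
print(f"aliased product at 3kHz, oversampled to 96kHz and back: {spectrum_db(y_os)[bin_3k]:6.1f} dB")
```

Run at 48kHz, the 45kHz product folds back to 3kHz, squarely in the audible band; run at 96kHz and downsampled afterwards, the audible-band result stays clean, which is exactly why doing this internally removed the need for a high sample rate recording.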
Audiophiles seem very troubled by upsampling/oversampling, apparently without realising that the music they’re listening to has already been upsampled (and downsampled again) many times, commonly around a dozen times and occasionally several dozen times. One more time will not make any audible difference (unless you apply a deliberately inappropriate/broken filter).
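For anyone who wants to put a number on “one more time”, here’s a small sketch of a single 2x up/down round trip using scipy’s resample_poly. The test tones are arbitrary choices of mine, and scipy’s short default low-pass filter is far more modest than the resamplers built into production tools.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
rng = np.random.default_rng(1)
t = np.arange(2 * fs) / fs                     # two seconds of audio

# Band-limited test signal: a handful of tones spread across the audible band
freqs = [440, 1_000, 3_700, 9_200, 15_000]
x = sum(0.15 * np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi)) for f0 in freqs)

# One up/down round trip: 44.1kHz -> 88.2kHz -> 44.1kHz
y = resample_poly(resample_poly(x, 2, 1), 1, 2)

# Trim the filter's edge transients before comparing input and output
n = 1_000
err = y[n:-n] - x[n:-n]
rel_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)) / np.sqrt(np.mean(x[n:-n] ** 2)))
print(f"residual after one 2x round trip: {rel_db:.1f} dB relative to the signal")
```

Even with that modest default filter the printed residual lands tens of dB below the signal; the longer, purpose-built filters in dedicated audio resamplers push it lower still, well beyond anything audible (unless, as noted, the filter is deliberately broken).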
G