Sample Rate, Bit Depth and High Resolution Audio
Aug 13, 2019 at 1:36 PM Thread Starter Post #1 of 45

SilentNote
So I've already learned about audio bit depth (16-bit vs 24-bit etc.) and understand that 16-bit audio has a quantization noise floor about 96 dB below peak level (lower still in the audible band with noise-shaped dither), while 24-bit is around -144 dB. Considering that most "quiet" rooms sit around 30 dB, I really can't be bothered about a -96 dB quantization noise floor. And if you play music at 110 dB, I think quantization noise at 14 dB is the last thing you need to worry about (OSHA's permitted exposure at 110 dB is less than 5 minutes per day total). I listen to music with isolating IEMs at around 70 dB peak loudness, which means I'll never hear the quantization noise at -26 dB.
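Just to show the arithmetic, here's a rough back-of-the-envelope sketch in Python (using the textbook ~6 dB of dynamic range per bit; real dithered figures differ slightly):

```python
# Back-of-the-envelope numbers for quantization noise vs. playback level.
# Assumes the textbook approximation of ~6.02 dB of dynamic range per bit
# (add ~1.76 dB for a full-scale sine); dither shifts the exact figure a little.

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of an ideal quantizer with the given word length."""
    return 6.02 * bits

def noise_spl(peak_spl: float, bits: int) -> float:
    """Approximate SPL of the quantization noise when peaks reach peak_spl."""
    return peak_spl - dynamic_range_db(bits)

print(f"16-bit: ~{dynamic_range_db(16):.0f} dB, 24-bit: ~{dynamic_range_db(24):.0f} dB")
print(f"Noise at 110 dB peaks, 16-bit: ~{noise_spl(110, 16):.0f} dB SPL")
print(f"Noise at  70 dB peaks, 16-bit: ~{noise_spl(70, 16):.0f} dB SPL")
```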

Moving on to the sampling rate: I understand that according to the Nyquist-Shannon sampling theorem, to PERFECTLY reproduce a band-limited signal you need to sample at (greater than) twice its highest frequency. Since human hearing is limited to about 20 kHz (maybe 21 kHz when I was 16), 44.1 kHz is sufficient to perfectly reproduce everything in the audible range. In fact, a low-pass (anti-aliasing) filter is generally used to remove any content above the Nyquist frequency and prevent aliasing problems.
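As a toy illustration of the theorem (a hedged numpy sketch; the frequencies and window length are arbitrary), sinc interpolation of 44.1 kHz samples rebuilds a band-limited test signal between the samples, with the residual error coming from the finite window rather than the sample rate:

```python
import numpy as np

# Sample a band-limited test signal (1 kHz + 15 kHz, both below Nyquist) at 44.1 kHz.
fs = 44_100
t = np.arange(0, 0.01, 1 / fs)                    # 10 ms of samples
x = np.sin(2 * np.pi * 1_000 * t) + 0.3 * np.sin(2 * np.pi * 15_000 * t)

# Whittaker-Shannon reconstruction on an 8x finer time grid:
# x_hat(t) = sum_n x[n] * sinc((t - n/fs) * fs)
t_fine = np.arange(0, 0.01, 1 / (8 * fs))
x_hat = np.array([np.sum(x * np.sinc((tf - t) * fs)) for tf in t_fine])

# Compare with the true continuous signal, away from the edges of the finite window.
x_true = np.sin(2 * np.pi * 1_000 * t_fine) + 0.3 * np.sin(2 * np.pi * 15_000 * t_fine)
mid = (t_fine > 0.002) & (t_fine < 0.008)
err_db = 20 * np.log10(np.max(np.abs(x_hat[mid] - x_true[mid])) / np.max(np.abs(x_true)))
print(f"Peak reconstruction error (window interior): {err_db:.0f} dB")  # shrinks as the window grows
```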

I've also learned that 44.1 kHz was used because it was compatible with both NTSC and PAL video equipment, and 48 kHz is compatible with all motion picture frame rates. 24-bit recording is useful for mixing, but after mastering for playback, I can't seem to find a reason that any human could possibly differentiate a perfect reproduction of an audio signal at reasonable listening conditions (44.1/16) from a perfect reproduction at 96/24.

So it would seem that a sample rate of 96 kHz / 192 kHz and a bit depth of 24 bits have nothing to do with the resolution of audio at the playback level?
 
Aug 13, 2019 at 2:24 PM Post #2 of 45
That's about it. People demand it for psychological reasons, not sound quality ones. They feel more secure listening to a "higher quality" track than one that is at a "common" quality level. It's basically the "more is better" theory carried to extremes. You can show them that there is no scientific reason for a difference to exist. You can give them a controlled listening test and prove to them that they can't hear a difference. But it won't matter. They will go right back to wanting "HD audio" again. It has nothing to do with Sound Science, but they still try to make up scientific reasons why inaudible frequencies and inaudible noise floors MIGHT just be audible under very specific extreme situations that would never occur in the real world. It all goes back to their inability to feel secure. Too much is never enough. If you offered them something higher than 24/96, they would snap at that too. The audio industry plays into the delusion because it sells product.
 
Aug 13, 2019 at 2:44 PM Post #4 of 45
All we need for what?

LPs are well below 16/44.1 and people swear they sound better. 12/30 would probably not sound much different from anything higher for commercially recorded music played back in your living room.
 
Aug 14, 2019 at 2:04 AM Post #5 of 45
[Quoting post #1]

Sometimes there is a difference in sound due to factors other than sample rate and bit depth. For example, many hi-res releases are mastered differently from the regular CD or streaming versions; sometimes they sound better and other times not, it's very subjective.

The best way to test for yourself is to load a hi-res track into Foobar with the ABX plugin. Most people cannot tell the difference between a lossy 256 kbps MP3 file and CD quality, let alone CD quality against 24/96.
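If you do run an ABX, it's worth checking how likely your score is by pure guessing. A quick sketch (Python; the 13/16 and 9/16 figures are just examples):

```python
from math import comb

# Probability of getting at least `correct` answers out of `trials` ABX trials
# by coin-flipping (p = 0.5 per trial). Small values suggest a real audible difference.
def abx_p_value(correct: int, trials: int) -> float:
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"13/16 correct: p = {abx_p_value(13, 16):.3f}")  # ~0.011, unlikely to be luck
print(f" 9/16 correct: p = {abx_p_value(9, 16):.3f}")   # ~0.40, entirely consistent with guessing
```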
 
Aug 14, 2019 at 6:31 AM Post #6 of 45
So it would seem that a sample rate of 96 kHz / 192 kHz and a bit depth of 24 bits have nothing to do with the resolution of audio at the playback level?

Yes, that's effectively true. There were a few minor errors in your post but they don't affect this conclusion. For example, most 16 year olds have a high freq response up to around 18 kHz; you'd have to go back to the age of a young child for 21 kHz, and even then only some/few young children. Also, 44.1 kHz was chosen because it provided the duration required of a CD whilst still having a Nyquist point above the range of hearing; 48 kHz is more compatible with video and film rates. TV and film (with digital audio) have always been specified with a sample rate of 48 kHz, it's only CD that uses 44.1 kHz. Lastly, 24-bit makes no difference for mixing, it's only useful for recording (or transferring recordings for further mixing).

G
 
Aug 15, 2019 at 9:56 PM Post #7 of 45
[Quoting post #6]

When it comes to physiology, there is quite a range: there are young people who have already damaged their hearing and cannot hear high frequencies, and there are older people who can still hear above 12 kHz. I myself am 41, haven't subjected myself to much loud music, and can still hear 18 kHz+ frequencies. There are also studies suggesting that in certain situations people may have hearing sensitivity *slightly* above 20 kHz. I'm not saying we should aim for these exceptional circumstances. I think the main arguments for high-resolution masters have to do with more realistic modeling / less compression. I invested in SACD because of the native DSD recording of classical music and series like the RCA masters, which claimed more direct transfers of the studio master tapes to SACD (vs earlier titles on vinyl, which had noise and weren't as true to the obtainable quality).
 
Aug 16, 2019 at 3:34 AM Post #8 of 45
[1] When it comes to physiology, there is quite a range: there are young people who have already damaged their hearing and cannot hear high frequencies...
[1a] and there are older people who can still hear above 12 kHz.
[2] I myself am 41, haven't subjected myself to much loud music, and can still hear 18 kHz+ frequencies.
[3] There are also studies suggesting that in certain situations people may have hearing sensitivity *slightly* above 20 kHz.
[4] I'm not saying we should aim for these exceptional circumstances.
[5] I think the main arguments for high-resolution masters have to do with more realistic modeling / less compression.
[6] I invested in SACD because of the native DSD recording of classical music and series like the RCA masters, which claimed more direct transfers of the studio master tapes to SACD (vs earlier titles on vinyl, which had noise and weren't as true to the obtainable quality).

1. True, but that's why I said most, rather than all.
1a. Of course that depends on what you mean by "older". On average, the basic rule of thumb is that we lose (very roughly) about 1 kHz per decade. Some 70 year olds can still hear up to 12 kHz, but many/most, particularly in their late 70s, have significant loss even below 8 kHz, which affects speech intelligibility.

2. Very few late teens and early 20 year olds can hear 18 kHz+, so someone in their early 40s achieving that would be truly exceptional! In every case of such a claim I've come across, it's turned out not to be true: either there was some distortion (IMD, for example) at a lower freq they were hearing, their testing methodology was faulty, or both. I've seen quite a few people set up, say, an 18 kHz sine wave, increase the volume until they can hear it and then state they can hear 18 kHz. Typically, the volume at which they can "hear it" is so high that they've introduced some distortion, and even if there is no distortion, they're at a volume level above/well above safe limits (for lower freqs). The correct way to set up such a test (if you don't have an SPL meter) is to dial in a 3 kHz sine wave, set the volume to comfortably loud (approx 80 dB SPL), increase the sine wave's frequency WITHOUT changing that set volume and finally (at say 18 kHz) mute and unmute the signal, blind or preferably double blind, and correctly identify when it's unmuted. 16 kHz would be a very good result for someone in their early 40s. I can't say for sure that 18 kHz+ is absolutely impossible for that age group, but I've never seen or heard of it and it's at least highly unlikely.
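If you'd rather generate the test file with a script than set it up by hand, here's a rough sketch of the idea (Python; the file name, level and frequencies are just illustrative). The key point is that both tones are written at the same digital level, so you set the volume once on the 3 kHz tone and never touch it again:

```python
import wave
import numpy as np

fs = 44_100
amplitude = 0.25                  # identical linear level for both tones (~ -12 dBFS)

def tone(freq_hz: float, seconds: float) -> np.ndarray:
    t = np.arange(int(seconds * fs)) / fs
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# 3 kHz calibration tone, a gap, then the 18 kHz probe tone at the SAME level.
silence = np.zeros(fs)
signal = np.concatenate([tone(3_000, 3), silence, tone(18_000, 3), silence])

with wave.open("hearing_test.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)             # 16-bit PCM
    f.setframerate(fs)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```

For the blind part, someone (or a script) then randomly mutes/unmutes the 18 kHz segment while you log when you think it's playing.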

3. I've not seen or heard of such studies, do you have any links? There are two problems with your statement. Firstly, "hypersensitive hearing" does not mean one's threshold of audibility improves; it's a medical condition where the threshold of pain/discomfort is significantly lowered. For example, if a 3 kHz sine wave at 80 dB is the limit of comfort for someone with normal hearing, someone with "hypersensitive hearing" would reach that limit at say 60 dB, and 85 dB would be beyond their pain threshold. However, their threshold of audibility, the lowest level at which they can still hear the signal, is unchanged (i.e. roughly the same as someone without the condition). Secondly, there are some studies in which some (young adult) subjects demonstrated the ability to hear test tones (isolated sine waves) above 20 kHz, even as high as 24 kHz. However, that was at very high sound pressure levels, if I remember correctly in excess of 110 dB SPL. Those tests are a few decades old though and are unlikely to be repeated, as such levels are now considered potentially damaging (and therefore unethical). Also, these are isolated sine waves; all the evidence (psychoacoustic masking, etc.) suggests that even at such inadvisable levels, those same freqs would be inaudible within a music signal (where we invariably have far higher levels far lower in the spectrum).

4. Agreed, I don't think there's much of a music consumer market for recordings that contain nothing but ultrasonic sine waves replayed at extremely high levels! :) More seriously though, even actual music recordings containing freqs above around 16 kHz are problematic. Sure, we can use high sample rates and mics with extended freq responses (well beyond 16 kHz) and, if it's there in the first place, record very high/ultrasonic freqs, but none of the musicians, engineers or producers creating the recording can actually hear it, so we have no idea exactly what is there or how our mixing/processing is affecting it.

5. Those may be the main audiophile arguments but they're not factually accurate arguments, although it might (under certain very specific conditions) depend on what you mean by "master" and "realistic modelling". There's no need for, or advantage to, a high resolution distribution master, beyond the obvious marketing gimmick.

6. Native DSD recording of, say, classical music was just a marketing gimmick; it made no audible difference compared to standard CD. However, it could affect what we put on the SACD. SACDs and SACD players were relatively expensive compared to CDs and CD players, were not portable or rippable, and SACDs were therefore typically only played in relatively good listening environments on relatively good reproduction systems. This enabled producers and engineers to create mixes/masters specifically targeted at those relatively optimal listening conditions, e.g. potentially using less compression. In other words, an SACD mix will sometimes/often sound better than a CD mix on a good quality system, but that's not due to any intrinsic audible benefit/difference between the two media types; we could easily put that SACD mix on a CD without any audible difference.

G
 
Aug 16, 2019 at 10:01 AM Post #9 of 45
[Quoting post #8]

When it comes to testing hearing, I've used online sources. For example, on my laptop (which isn't capable of high volume) I start hearing the sweep around 17-18 kHz (so I can at least safely say I hear 17 kHz):

https://www.audiocheck.net/audiotests_frequencycheckhigh.php

The problem with trying to generalize by age group is that everyone has a different physiology, so you're looking at an average. That means there are outliers with hearing above and below "normal". Studies have put the normal range for a healthy young adult anywhere from roughly 15-20 Hz up to 18-20 kHz. We also have to factor in that our hearing ability changes throughout the day or at any given moment (from factors such as our ear muscles tensing or how healthy our inner-ear fluids are). I have seen at least one more recent study that found outliers with hearing above the 20 kHz range (under "ideal situations"). The paper I'm thinking of was linked in the Testing Audiophile Myths thread. Another extreme example of environment influencing our hearing is divers, who can detect sounds up to 100 kHz (where there's more interaction of water and bone conduction).

https://www.navy.mil/submit/display.asp?story_id=60632

Lastly, as for my claim about high-res media possibly having more "realistic modeling": I meant it may be more faithful to the studio master (tape or digital recording). I think we're in agreement in these areas, as I've hinted that the only audible difference between an SACD and a CD is due to the mixing.
 
Aug 17, 2019 at 5:23 PM Post #10 of 45
First off, I should disclose that I'm not trying to persuade anyone or prove anything, and I'm not going to 'walk' anyone through the text that follows by pointing to or citing anyone or anything. It's a well documented and easily searchable topic. If you think my "conjecture" below sounds plausible, you're free to search it out. Similarly, if you have any refuting conjecture (or even 'evidence'), I'm very open to researching that and changing my belief on the matter. I am not closed-minded.

To understand why 24-bit PCM does present a higher resolution during playback, we first need to agree on what resolution is. Resolution is the ability to 'resolve', or render, something as close as possible to the original analog waveform that was present during the recording. Now, we have no control over the ADC that was used, but let's assume that a 24/192 PCM master exists and that this is the 'best it can get'. From this master, the file is resampled (decimated) to other formats for sale, say 16/44 and 24/44.

So, if the original was 24-bit and we decimate it to 16-bit, we have indeed lost resolution. Why? To understand that, you have to understand the role of oversampling and interpolation (reconstruction) in sigma-delta DACs during playback:

1) In common D/S DACs, all original data points are discarded in favor of an 'interpolated' waveform. When you overlay a 16-bit file's raw data plot with the interpolated output from the DAC, you will observe slight alterations. This is true even when using a linear-phase, steep roll-off filter (the most accurate interpolator). IOW, the output waveform doesn't perfectly intersect the raw sample points, but is a best guess based on the sinc function (a mathematical calculation). With each round of oversampling, the waveform can get further away from the original data points. Closed-form filters do keep the original sample points, but come with other tradeoffs that are beyond the scope of this argument.

2a) 16-bit holds approx 65,536 possible amplitude values and has a dynamic range of about 96 dB. So during reconstruction/interpolation you get a precision of roughly 682 levels PER DECIBEL that will be used to interpolate, aka best-guess, the final output of the audio.

2b) 24-bit holds approx 16.7 million possible amplitude values and has a dynamic range of about 144 dB. So during reconstruction/interpolation you get a precision of roughly 116,000 levels PER DECIBEL that will be used to interpolate, aka best-guess, the final output of the audio.


Now let's consider the full chain of DSP happening here, which leads us to why 24-bit empirically has better resolution - again, resolution being defined as 'as close as possible to the original analog waveform'. And again, let's assume a 24/192 studio PCM master is the best it can get:

Step 1 - decimation: the resampling to 16-bit truncates the precision of amplitudes by a factor of 256. Dither is applied so that the quantization noise is not correlated with the waveform and sits near the noise floor. The resampling software attempts to model the original waveform, but MUST place each data point at one of 65,536 possible values. The original waveform has become very slightly distorted at this point because of the decimation.
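As a rough sketch of what that requantization step looks like in code (Python/numpy, assuming the audio is already in memory as float samples in the -1.0 to 1.0 range; this isn't any particular mastering tool's actual implementation):

```python
import numpy as np

rng = np.random.default_rng()

def to_16bit_tpdf(x: np.ndarray) -> np.ndarray:
    """Requantize float samples to 16-bit integers using triangular (TPDF) dither."""
    lsb = 1.0 / 32768.0                                          # one 16-bit quantization step
    dither = (rng.random(x.shape) - rng.random(x.shape)) * lsb   # TPDF noise, +/- 1 LSB peak
    y = np.round((x + dither) / lsb)                             # snap to the 65,536-level grid
    return np.clip(y, -32768, 32767).astype(np.int16)
```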

Step 2 - interpolation: during playback of the new 16-bit file, your DAC is attempting to reconstruct the original waveform but can only latch onto 682 amplitude levels per decibel. It overshoots and undershoots these interpolated values again at this stage and tosses out the original data points in favor of a newly reconstructed waveform based on the sinc function. So now you have a 'best guess' interpolation of a waveform that has already been 'best guessed' during decimation. These guesses can only be as precise as 65,536 levels per sample, or roughly 682 levels per decibel.

16-bit PCM that originated from a 24-bit master will always be 'twice removed' from the original digital master by the time it reaches your ears. However, if we use the 24-bit file for playback, we avoid the decimation round entirely and the interpolation works at the full 24-bit precision.

So with that said, I'll be the first to admit that my ears cannot always tell a difference between a 16-bit and a 24-bit file. Furthermore, when I do hear a difference I cannot say for sure whether it's because of different mastering, "fake hi-res" files that have just been upsampled, my gear, or even expectation bias. I firmly believe that many audiophiles buy hi-res files to err on the side of caution, or they buy into the 'higher rate is better' paradigm, feeling they might be missing out on something. There is a ton of psychological bias and belief in this hobby, and that makes it fun but a bit financially dangerous at the same time.
 
Aug 17, 2019 at 8:32 PM Post #11 of 45
Is "resolution" audible? If it isn't, it doesn't matter. What we can't hear doesn't make music better. I would like to see someone discern a difference between the same music at 16/44.1 and 24/96. I have done many checks like this, and I've never heard any difference. I'd like to see evidence that someone can. The science of audibility is as important as the science of digital sound... more important actually, because our ears are all we have to hear with.
 
Aug 17, 2019 at 11:52 PM Post #12 of 45
[Quoting post #10]

You used a lot of words to describe what quantization noise is. The thing is that at 16-bit, the quantization noise is at -96 dB, allowing for a dynamic range of 96 dB. I'm not saying that 24-bit has less or equal "resolution" than 16-bit, but does it matter any more? If you listen to your music at 110 dB (max total daily exposure is only a few minutes!) the quantization noise is only at 14 dB. Do you know what 14 dB sounds like? Do you know what 14 dB sounds like under a 110 dB signal? Do you know what 14 dB sounds like under a 50 dB signal? I know, I've tried it (not the 110 dB, of course).

Now use a practical listening volume, like a peak of 90 dB. Where's the quantization noise now? At -6 dB. Is that low enough? Does the weakest link in your audio chain even have a -96 dB noise floor? Does it really matter at this stage? For me, it's a resounding no. I can't be bothered to buy into the marketing hype of hi-res audio for something so insignificant. For all intents and purposes, the 16-bit version will sound identical to the 24-bit version.
 
Aug 18, 2019 at 1:08 AM Post #13 of 45
I'm not suggesting (and even said as much) that there is an audible difference between 16-bit and 24-bit. I primarily stressed the distortions in the interpolated waveform as it's resampled and reconstructed. There is no quantization noise created during playback (reconstruction) of a file, at least. Quantization noise is only generated during ADC or resampling/decimation, when the 'math' must pick an amplitude value that doesn't exactly match the antecedent source. Quantization noise is nothing more than a byproduct; it is not the culprit of resolution loss (in my opinion). My point was that a more accurate waveform can be interpolated at 24-bit than at 16.

Now, sure, we can discuss 'what's good enough' and whether 'more amplitude slots are really better'. In another life I would've argued that with you. I'm telling you patently, more data points is really better (given real world hardware limitations) and can be measured as such. I'm also agreeing that 65,536 is 'probably, for most people' good enough (close enough to the source - and any more is inaudible). I was teeing up an interesting way to look at your OP: that it's not about dynamic range, sampling frequency or even quantization error; it's about the shortcomings of resampling and the digital-to-analog conversion process (interpolation). The shortcomings being that DACs can approach IIR and true sinc reconstruction, but haven't yet and won't for a long time - no matter how many times they oversample or how many 'taps' they have. Added bit depth can help with these shortcomings in the meantime.

Now... I was going to let bits and pieces of the above unfold over a series of posts where fellow enthusiasts had enough respect for one another to be open minded and courteous but ........

You used a lot of words to describe what quantization noise is.

Do you know what 14 dB sounds like? Do you know what 14 dB sounds like under a 110 dB signal? Do you know what 14 dB sounds like under a 50 dB signal?

I find you condescending, so I'll leave the thread instead.
 
Aug 18, 2019 at 1:38 AM Post #14 of 45
Now, we have no control over the ADC that was used, but let's assume that a 24/192 PCM master exists and that this is the 'best it can get'. From this master, the file is resampled (decimated) to other formats for sale, say 16/44 and 24/44.

Yep, that's how audiophile myths/marketing commonly works: Make some (false) assumption/scenario, talk about "best guesses" and then (falsely) conclude the result is anything from barely/not always audible to a "night and day" difference.

Dealing with these in turn:

1. Your assumed scenario could only exist in the case of a 2-channel, direct-to-distribution 24/192 recording, i.e. with no mixing or mastering. This assumed scenario occurs in approximately 0% of commercial music recordings! In practice our master is never 24-bit; it's most commonly 64-bit float, or in earlier decades either 32-bit float or 48-bit fixed. Furthermore, during mixing and mastering there are likely to be several/many resampling processes. In fact not even the engineers themselves know how many, because many digital/plugin processors resample internally. So the actual scenario is a 64-bit master that includes at least a few (and commonly a couple of dozen) resampling processes, from which both the 24/192 and the 16/44 versions will be resampled. Using your words, both distribution versions will be once removed from the master.

2. Using the given assumption, it would not be a "best guess", it would be a perfect reconstruction, and stating otherwise contradicts the already proven Nyquist-Shannon theorem. In practice, the theorem cannot be perfectly implemented so we do not get absolutely perfect reconstruction, but we do get exceedingly close, certainly far closer than human ears could possibly detect. Which brings us to...

3. Audibility. Why don't you take a 24-bit music recording, truncate it to 16-bit and see for yourself if you can hear the truncation error? Good luck with that! Even if you could, that's NOT the scenario we're actually dealing with anyway. What we're actually dealing with, in the case of standard mastering practice, is a noise-shaped dithered word-length reduction with the dither/quantization noise down around -120 dB, and we can be certain that this is inaudible because your transducers cannot reproduce noise 120 dB below peak; it's swamped by the transducer's own internal noise. And even a hypothetical, never observed, superhuman hearing ability can't hear what isn't being reproduced in the first place!
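If anyone wants to try that, here's a rough sketch of the idea (Python/numpy; the random array is just a stand-in for a real recording loaded as floats, e.g. with the soundfile library), comparing the float/24-bit original with a plain 16-bit truncation and measuring what was actually lost:

```python
import numpy as np

def to_16bit_grid(x: np.ndarray) -> np.ndarray:
    return np.round(x * 32767) / 32767           # round to the 16-bit grid, no dither

master = np.random.uniform(-0.5, 0.5, 44_100)    # stand-in for one second of a real recording
cd_version = to_16bit_grid(master)

residual = master - cd_version                   # everything the 16-bit version "lost"
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(residual ** 2)))
print(f"Truncation error: {rms_dbfs:.0f} dBFS")  # on the order of -100 dBFS
```

You can also write out cd_version and the residual as files and listen: the residual has to be amplified enormously before it's audible at all.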

G
 
Aug 18, 2019 at 2:22 AM Post #15 of 45
[1] I'm not suggesting (and even said as much) that there is an audible difference between 16 bit and 24 bit.
[2] I'm telling you patently, more data points is really better (given real world hardware limitations) and can be measured as such.
[3] I find you condescending,
[3a] so I'll leave the thread instead.

1. You stated "I'll be the first to admit that my ears cannot always tell a difference between a 16 and 24 bit file." - This clearly suggests that some/most of the time there is an audible difference!

2. This is patently false! Given real-world hardware limitations, we cannot even fully reproduce 16-bit (noise-shape dithered) and can't possibly reproduce 24-bit without breaking the known laws of physics. And not only can we measure this, it has been measured countless times! According to your understanding, 1-bit SACD should have terrible audible resolution compared to 16-bit CD.

3. Hang on, you are "patently telling us" something that isn't even true and assertively making suggestions of audibility which also are not true (and are self-contradictory), yet we are the ones being "condescending"?
3a. Yep, typical. Regurgitate some audiophile marketing myths/falsehoods, then when challenged go off in a huff, complaining about behaviour that you yourself are guilty of!

G
 
