Why 24-bit audio and anything over 48k is not only worthless, but bad for music.

May 23, 2025 at 7:31 AM Post #3,677 of 3,692
How much better can sound reproduction get? Are we almost at the pinnacle, or is there still some way to go? Can we expect major improvements in the future?
Somehow I suspect AI will find an application here because the way things are going now AI will find its way into my bowl of breakfast porridge.

Maybe some new medium will come out and make everything we now use obsolete.
It had better be round. I like round.
 
May 23, 2025 at 8:57 AM Post #3,678 of 3,692
@eq1849: Don't you remember we've had this same discussion with you at least 5 times?
Admins should delete circular arguments that have already been asked and answered.
 
May 23, 2025 at 9:13 AM Post #3,679 of 3,692
Dave, how many quantization steps are there in the 23rd bit of 24-bit audio?
For the love of god, how dense can you be?

First of all, an infinite number of digits is not needed to encode the result of a measurement with finite precision. If an ADC has a precision of ±10 µV, using digits to show the result of the measurement down to the nanovolt range is completely pointless. The ADC can't measure voltage perfectly, so only a finite number of bits is needed. This should be easy enough for children to understand, so why can't you do it?
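To put a number on that, here's a back-of-the-envelope sketch of how many bits a ±10 µV measurement actually justifies. Note the ±1 V full-scale range is an assumption for illustration; the post only gives the precision.

```python
import math

# How many bits are meaningful for an ADC accurate to +/-10 uV?
# Full-scale range of +/-1 V (2 V span) is assumed for illustration;
# the post only specifies the +/-10 uV precision.
full_scale_span = 2.0      # volts, from -1 V to +1 V (assumption)
precision = 10e-6          # +/-10 uV, as stated in the post
meaningful_bits = math.ceil(math.log2(full_scale_span / precision))
print(f"about {meaningful_bits} bits resolve steps of that size")
```

Any bits beyond that count only encode measurement noise, not signal.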

Second, when a signal gets quantized, a small random "error" is made. Random error is just another name for noise. I put "error" in quotes because, for example, no error is made when a measurement taken with ±10 µV precision gets rounded to the nearest 10 nV.

With 16 bits there are ~65,000 levels. This is enough for playback because the error contributed by everything other than the rounding is what limits the precision of playback.
With 24 bits there are ~16.8 million levels. This is also enough for playback because the error produced by the quantization is not the limiting factor for precision.
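As a quick sketch of the arithmetic behind those level counts: levels double with each bit, and the textbook dynamic-range rule for an ideal quantizer with a full-scale sine is 6.02·N + 1.76 dB.

```python
# Level count and ideal full-scale-sine SNR per bit depth
# (standard rule of thumb: 6.02*N + 1.76 dB for N bits).
for bits in (13, 16, 24):
    levels = 2 ** bits
    snr_db = 6.02 * bits + 1.76
    print(f"{bits:2d} bits: {levels:>10,} levels, ~{snr_db:5.1f} dB SNR")
```

That's ~80 dB at 13 bits, ~98 dB at 16 bits, and ~146 dB at 24 bits, which is where the numbers in this thread come from.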
 
Last edited:
May 23, 2025 at 9:23 AM Post #3,680 of 3,692
Maybe some new medium will come out and make everything we now use obsolete.
I hope not, at least for a few million years, as the medium we currently use for music/sound/acoustics is air! :)
Somehow I suspect AI will find an application here because the way things are going now AI will find its way into my bowl of breakfast porridge.
Not sure; I can’t see how it would be useful for playback/reproduction. Maybe for acoustics or binaural processing, but it’s not really clear what it could accomplish, apart from marketing BS of course. It might provide an interesting toy/gimmick for consumers, for example being able to listen to just a particular instrument or sound in a mix. Certainly AI has applications in sound production, and indeed I’ve been using tools that employ machine learning for 7 or 8 years already.

G
 
May 23, 2025 at 9:34 AM Post #3,681 of 3,692
No. It doesn't call for any number of quantization steps. It only tells us how high the sampling frequency has to be. The properties of human hearing and the practicalities of music consumption call for about 8,000-10,000 steps (13 bits).

The sampling theorem, at 44.1khz, I believe would resolve the analog signal (infinity, infinity) as:

(Infinity, 44,100)

(Infinity, infinity)


I don’t think your 13 bit audio can do that, given (8192,44,100).

Which leads me to ask how have you determined 13 bit audio is enough?
 
Last edited:
May 23, 2025 at 10:09 AM Post #3,682 of 3,692
The sampling theorem, at 44.1khz, I believe would resolve the analog signal (♾️,♾️) as:

♾️, 44,100

♾️,♾️
You explain to us how you get an infinite analogue signal without breaking the laws of physics (and destroying the universe) and we’ll explain to you how many bits that would need! Are you crazy?

G
 
May 23, 2025 at 10:26 AM Post #3,683 of 3,692
In the attachment there are files with a repeating pattern: 0.5 seconds of silence followed by 0.5 seconds of 13-bit dither. One file uses flat dither, the other uses shaped dither (the "gesemann" filter in SoX).

At the highest volume level I ever listen at (to some classical), the shaped dither is barely audible. And that's when it's played on its own, not being masked by music. So to me, 13 bits is about right.

dither.flat.png

dither.gesemann.png
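For anyone who wants to reproduce something like the flat-dither file without SoX, here's a minimal NumPy sketch. This is not the SoX implementation, and the "gesemann" shaped filter is not modeled; it just quantizes silence to 13 bits with plain TPDF dither, which is what you'd be hearing in the flat case.

```python
import numpy as np

def quantize_with_tpdf_dither(x, bits, rng=np.random.default_rng(0)):
    """Quantize samples in [-1, 1) to `bits` with flat TPDF dither.

    TPDF (triangular) dither of 2 LSB peak-to-peak decorrelates the
    quantization error, turning it into benign broadband noise.
    """
    q = 2.0 ** (1 - bits)  # quantization step (1 LSB)
    dither = (rng.random(x.shape) - rng.random(x.shape)) * q  # triangular, +/-1 LSB
    return np.round((x + dither) / q) * q

x = np.zeros(22050)                       # 0.5 s of silence at 44.1 kHz
noise = quantize_with_tpdf_dither(x, 13)  # the audible residue in the test file
print(f"dither noise RMS: {20 * np.log10(noise.std() + 1e-12):.1f} dBFS")
```

The resulting noise floor sits around -78 dBFS, consistent with the ~80 dB figure for 13 bits discussed later in the thread.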
 


May 23, 2025 at 10:40 AM Post #3,684 of 3,692
In the attachment there are files with a repeating pattern: 0.5 seconds of silence followed by 0.5 seconds of 13-bit dither. One file uses flat dither, the other uses shaped dither (the "gesemann" filter in SoX).

At the highest volume level I ever listen at (to some classical), the shaped dither is barely audible. And that's when it's played on its own, not being masked by music. So to me, 13 bits is about right.

dither.flat.png
dither.gesemann.png
@danadam bringing the heat. Bit depth and sample rate are the two properties of digital audio most misinterpreted by audiophiles. They try to tie them to things they hear, but they fundamentally do not understand that the difference between 16-bit and 24-bit audio isn't more "dynamics" in the instruments; it's a lower noise floor. 192 kHz sample rates don't reproduce "details" better; they just mean a bunch of frequencies you can't hear can be reproduced (which is ultimately useful in the production workflow, so lossless pitch shifting/tempo alterations can happen, among other advantages).
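To put numbers on the bandwidth point: the Nyquist limit is half the sample rate, so a higher rate only extends the captured frequency range, it doesn't add "detail" below that limit.

```python
# Nyquist bandwidth per sample rate: fs/2 is the highest representable frequency.
for fs in (44100, 48000, 96000, 192000):
    nyquist = fs // 2
    note = ("covers the ~20 kHz audible band" if nyquist >= 20000
            else "short of 20 kHz")
    print(f"{fs:>6} Hz -> content up to {nyquist:>6} Hz ({note})")
```

Every standard rate from 44.1 kHz up already covers the audible band; everything above that is ultrasonic headroom.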
 
May 23, 2025 at 11:05 AM Post #3,685 of 3,692
The sampling theorem, at 44.1khz, I believe would resolve the analog signal (infinity, infinity) as:

(Infinity, 44,100)

(Infinity, infinity)
Nonsense. There is no noise-free situation ever in this universe, analog or digital. Also, 44.1 kHz only goes up to 22.05 kHz theoretically, and up to 20-21 kHz in practice.

I don’t think your 13 bit audio can do that, given (8192,44,100).
"My" 13 bit audio can do sound that is transparent to human ears in any sane listening scenario that doesn't lead to severe hearing damage. If you want audio that can first do mosquitos flying in extremely silent anechoic chamber followed by realistic cannon that will make you deaf in slit second, then sure, "my" 13 bit audio doesn't cut it. I don't want to loose my hearing and I don't listen to mosquitos+cannons music so 16 bit as is the standard is more than enough for me and should be for everybody else too.

Which leads me to ask how have you determined 13 bit audio is enough?
It provides about 80 dB of dynamic range. Human hearing and listening practicalities are such that about 70 dB is enough for just about anything, and having a little safety margin (the dynamic range doesn't need to be used optimally) is nice. You can try reducing bits yourself by truncating them, and you should find that around 10 or 11 bits, hearing the truncation becomes very hard. Vinyl offers about 10 bits' worth of dynamic range (good-condition records), and vinyl enthusiasts rarely whine about having too little dynamic range.
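A minimal sketch of that bit-reduction experiment, using a synthetic -20 dBFS sine rather than real music, and rounding rather than literal bit truncation (the error floor behaves the same way either way):

```python
import numpy as np

def truncate_bits(x, bits):
    """Requantize samples in [-1, 1) to `bits` by rounding (no dither),
    mimicking the 'reduce bits and listen' experiment described above."""
    q = 2.0 ** (1 - bits)  # quantization step (1 LSB)
    return np.round(x / q) * q

# 1 kHz sine at -20 dBFS, 1 second at 44.1 kHz: compare the error floor
# at a few bit depths.
t = np.arange(44100) / 44100.0
x = 0.1 * np.sin(2 * np.pi * 1000 * t)
for bits in (10, 13, 16):
    err = truncate_bits(x, bits) - x
    print(f"{bits:2d} bits: error RMS {20 * np.log10(err.std() + 1e-12):6.1f} dBFS")
```

Each extra bit pushes the error floor down by about 6 dB, which is why the audibility threshold lands somewhere around 10-13 bits in normal listening.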
 
Last edited:
May 23, 2025 at 12:05 PM Post #3,689 of 3,692
Controlled listening tests determine transparency. The thresholds known as JNDs (just noticeable differences) have been established, so you can look at measurements and be pretty clear whether something is transparent.
 
May 23, 2025 at 12:49 PM Post #3,690 of 3,692
Controlled listening tests determine transparency. The thresholds known as JNDs (just noticeable differences) have been established, so you can look at measurements and be pretty clear whether something is transparent.

What is the expected outcome of these listening tests you perform?

Is the expected outcome negatively biased? In other words, is failing the test the expected outcome?

If the bias is negative, and you keep failing tests where the expected outcome is failing, do you believe this provides strong or weak evidence to support your claims?

Conversely, if the expected outcome is negative, but some pass with statistical significance, would that provide strong or weak evidence?
 
Last edited:
