Really sorry if this has been covered, but I have two questions.
If I understand correctly, the bit depth (16/24) relates to the dB range the music has. Wouldn't it be possible for 24-bit to fit into the same dB range but with more accuracy? Certainly we don't want music playing at 150 dB, but perhaps that 60 or so dB range could have more detail using 24-bit, something like oversampling, I guess?
Same for resolution. Before I read it, I was actually thinking, "I wonder if they use 44.1 kHz to cover the 22 kHz range in music, so it could cover both halves of the wave." Turns out that seems right. Though I do wonder whether 96 kHz might be better at picking up out-of-phase sounds (if I'm saying that correctly), and not so much for playback of higher frequencies we can't possibly hear. Rather, higher resolution would work the same as in a picture: more detail within the hearing range we do have.
If anything I'm thinking applies or makes sense, I would suppose 24/96 is ideal. Both arguments pretty much say the same thing: more detail within the range we do listen to, not so much extending beyond that range. I would think that, being a digital format, it couldn't possibly capture every single bit of range in either frequency or decibel output, just come close enough that one would have a very hard time pointing it out. Would that make any sense, or am I completely off base?
I'll try the dumb explanation, as that's the only way I can think ^_^.
Let's say one sound is a wave, and another sound is another wave. When a band is playing, you record only one signal accumulating all the waves, like ripples on the surface of water that add up or cancel each other out, so the surface at any one point is always at only one position. Same for the album: the signal is always at only one amplitude at a time, however many instruments are recorded.
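A tiny sketch of that superposition idea (the frequencies and levels are made up, just for illustration):

```python
# Two "instruments" as sine waves; the recorded signal is just one
# value per instant -- their sum, like the water surface.
import math

SAMPLE_RATE = 44100                 # samples per second (CD rate)
guitar_hz, bass_hz = 440.0, 110.0   # hypothetical instrument pitches

mix = []
for n in range(100):
    t = n / SAMPLE_RATE
    guitar = 0.5 * math.sin(2 * math.pi * guitar_hz * t)
    bass = 0.5 * math.sin(2 * math.pi * bass_hz * t)
    # One amplitude at a time, however many sources are playing:
    mix.append(guitar + bass)

print(len(mix), min(mix), max(mix))
```

However many waves you add, the result is still a single sequence of amplitudes.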
When the music is recorded at 16-bit instead of 24-bit, you end up with each sample slightly different from the original, and to us that difference is as if another instrument had been added to the playing band. It doesn't change how the band sounds: just as, if you added one more guitar to the mix, the singer would still sing the same way, only the signal would be modified.
That's the magic of audio: quantization errors don't change the music, they just add some noise to it.
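A minimal sketch of that, assuming a made-up 997 Hz test tone and plain rounding to 16-bit levels (real converters add dither, which this ignores):

```python
import math

BITS = 16
STEP = 2.0 / (2 ** BITS)   # quantization step for samples in [-1, 1]

def quantize(x):
    """Round a sample to the nearest 16-bit level (sketch, no dither)."""
    return round(x / STEP) * STEP

# Hypothetical 997 Hz tone at 44.1 kHz, peaking at 0.9 of full scale:
signal = [0.9 * math.sin(2 * math.pi * 997 * n / 44100) for n in range(4410)]
error = [quantize(s) - s for s in signal]

# The residual is tiny "noise riding on the music": every sample's
# error stays below one quantization step.
print(max(abs(e) for e in error) < STEP)
```

The quantized track is the original signal plus that tiny error sequence, which is exactly the "extra instrument" described above.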
Now, does it matter? Well, that noise on a 16-bit track is 96 dB quieter than the maximum signal, so even with some headroom on the record, you can expect the noise to be a good 90 dB below the music. Remember that the quietest part of a song will rarely go 60 dB below the maximum loudness, so you pretty much end up with a slight hiss some 30 dB below the quietest sound recorded on the most dynamic album you own. That's how dramatic 16/44 really is ^_^.
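The arithmetic behind those figures, using the round numbers above as assumptions:

```python
# All values are the rough assumptions from the text, not measurements.
noise_floor_db = -96        # 16-bit quantization noise vs full scale
headroom_db = 6             # recording peaks a bit below full scale
quietest_passage_db = -60   # quietest music vs the loudest peak

noise_below_music = noise_floor_db + headroom_db     # noise vs the music
margin = quietest_passage_db - noise_below_music     # hiss below quietest sound
print(noise_below_music, margin)
```

So even on a very dynamic recording, the hiss sits around 30 dB below the quietest passage.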
You don't improve the music with hi-res; really, you only improve the silence. At both resolutions the music itself will be perfectly reproduced down to the quietest sound.
Going to 24-bit reduces the error value of the samples, so you end up with the band playing and some noise at -144 dB (in theory at least) instead of -96 dB. But it changes nothing about the sound of the band. That's the crazy cool thing about sound.
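The same back-of-envelope in code: each bit buys about 6 dB of dynamic range, so 16 bits gives roughly -96 dB and 24 bits roughly -144 dB for the theoretical noise floor.

```python
import math

# Theoretical quantization noise floor relative to full scale:
# 20 * log10(2^bits) dB of range, i.e. about 6.02 dB per bit.
floors = {bits: round(-20 * math.log10(2 ** bits), 1) for bits in (16, 24)}
print(floors)   # {16: -96.3, 24: -144.5}
```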
And getting a higher sample rate can yield the same kind of result, be it in timing error or in noise (the two are linked for waves). It is factually better, but could you actually hear the noise on a 16/44 track? If you could, then hi-res might be justified; if you couldn't (like everybody at normal listening levels), you're only improving something you don't hear, and most likely paying a premium price for it.
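One way to see what 44.1 kHz does and doesn't buy: a sketch showing that a hypothetical 30 kHz tone sampled at 44.1 kHz produces exactly the same samples as a (phase-inverted) 14.1 kHz tone. The rate is chosen to keep that fold-back above the audible band, not to store ultrasonics.

```python
import math

FS = 44100   # CD sample rate; Nyquist limit is FS / 2 = 22.05 kHz

# A 30 kHz tone is above Nyquist, so it aliases down to 44100 - 30000
# = 14100 Hz. Sample for sample, the two signals are indistinguishable:
ultrasonic = [math.sin(2 * math.pi * 30000 * n / FS) for n in range(441)]
alias = [-math.sin(2 * math.pi * 14100 * n / FS) for n in range(441)]

print(max(abs(a - b) for a, b in zip(ultrasonic, alias)) < 1e-9)
```

Below 22.05 kHz there is no such ambiguity, which is why 44.1 kHz already covers everything we can hear.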