I came across this thread by chance and I have read the first page with interest. I notice the thread is 75 pages long now. Unfortunately I don't have a couple of weeks spare to read it all, so apologies if my point has already been made/disputed/disproved.
Digital audio reproduces sound along two axes: time and amplitude. The waveform is a graph of amplitude, conventionally on the vertical (y) axis, against time on the horizontal (x) axis, and the shape of that graph determines which frequencies the sound contains.
One tends to imagine that if you sample a smooth analog curve every so often and draw a bar chart of the results, you get a jagged edge in place of the smooth curve. The more frequently you sample, the smoother and less jagged the digital representation, and as the sampling frequency approaches infinity you arrive at a perfectly smooth curve. That is the limiting process at the heart of calculus. In theory, though, you do not need to do this. It is not easily understood, and is in any case counter-intuitive, that by sampling at a higher frequency than is used in CD you do not get a closer approximation to the shape of the original analog waveform, but the Nyquist-Shannon sampling theorem proves it and I don't have the maths to argue. According to the theorem, a signal containing no frequencies above half the sampling rate can be reconstructed from its samples perfectly, not approximately: the DAC's reconstruction filter turns the "bars" back into the one and only band-limited curve that passes through those sample points. So 44.1kHz is enough to reproduce perfectly any waveform whose content stays below 22.05kHz, which comfortably covers the roughly 20kHz ceiling of human hearing. You would need a greater sampling frequency to reproduce accurately waveforms of higher frequency than the ear can detect, but we are here considering human audio.
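To make the "half the sampling rate" limit concrete, here is a small Python sketch, standard library only. The function name and the two test frequencies are just my choices for illustration. It shows the flip side of the theorem: a tone above 22.05kHz produces exactly the same samples as a mirror-image tone below it, so content above half the sampling rate cannot be captured at 44.1kHz at all, while everything below it is captured without loss:

```python
import math

FS = 44_100  # CD sampling rate in Hz

def sample_cosine(freq_hz, n_samples=8):
    """Sample a cosine of the given frequency at FS samples per second."""
    return [math.cos(2 * math.pi * freq_hz * n / FS) for n in range(n_samples)]

# A 19,100 Hz tone (below the 22,050 Hz limit) and a 25,000 Hz tone (above it)
# produce identical samples, because 25,000 = 44,100 - 19,100: the higher tone
# "aliases" onto the lower one and the two are indistinguishable once sampled.
below = sample_cosine(19_100)
above = sample_cosine(25_000)

for b, a in zip(below, above):
    assert math.isclose(b, a, abs_tol=1e-9)
```

Any frequency f above half the rate collides with its mirror image at FS - f in this way, which is why recordings are low-pass filtered before sampling.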
However, bit depth is a different matter. In practice, the roughly 144dB of dynamic range that 24-bit allows does not translate (and is not intended to translate) into 144dB of actual sound pressure, destroying the ear-drums. You can always turn the volume down, after all.
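Where those figures come from is a simple rule of thumb: each bit of linear PCM buys about 6dB of dynamic range. A few lines of Python (the function name is mine) show the arithmetic:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits), ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~96.3 dB  (CD)
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~144.5 dB
```

This is the ratio between the largest representable signal and the quantization noise floor, not a claim about how loud anything will actually be played.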
A single sample somewhere along the x-axis is, in 16-bit audio, a number from -32,768 to +32,767. If you try to code a value outside that range into a recording, it gets clipped off: the format simply has no way to represent 32,768 or above, and you usually end up with some very odd and unpleasant artefacts. There is a useful consequence here. Take a snatch of music (say a sine wave) ranging from e.g. -12,000, through zero, to +12,000; when this has been through a DAC and amplified out into speakers, it will play at a certain sound pressure level. If you change nothing else but double the numbers arithmetically, to range from -24,000 to +24,000, you double the amplitude, a rise of about 6dB (whether that sounds "twice as loud" is another matter, since loudness perception is roughly logarithmic). This makes trimming digital music up or down in level very easy.
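That trimming operation, and the clipping that bites when you push it too far, can be sketched in a few lines of Python (standard library only; the function name is just mine for illustration):

```python
def scale_16bit(samples, gain):
    """Multiply each 16-bit sample by `gain`, clipping to the legal range."""
    lo, hi = -32_768, 32_767
    return [max(lo, min(hi, round(s * gain))) for s in samples]

quiet = [-12_000, 0, 12_000]
print(scale_16bit(quiet, 2.0))  # [-24000, 0, 24000] -- amplitude doubled, ~+6 dB
print(scale_16bit(quiet, 3.0))  # [-32768, 0, 32767] -- 36,000 won't fit, so it clips
```

Real audio software works the same way in principle, though it typically dithers after scaling rather than merely rounding.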
What this is also saying is that even in CD-quality 16-bit sound, you can record 65,536 distinct levels of whatever instrument you are recording. Whether the human ear can distinguish between a sine wave ranging from -12,000 through zero to +12,000 and one doing the same but at 12,001, I do not know. Electronic keyboards used to have something like 127 different velocity values (the MIDI limit) according to how hard you hit the key. That was acknowledged to be inferior to the analog result of striking a piano key, but 65,536?
So what happens when you move to 24-bit sound? Strictly speaking, 24-bit samples are still whole numbers, running from -8,388,608 to +8,388,607, but an equivalent and perhaps more intuitive way to think of it is that the extra 8 bits behave like extra decimal places on the familiar 16-bit scale. Between every pair of adjacent 16-bit levels there are now 256 finer steps, so you can vary a sample not just from 12,000 to 12,001 but, in effect, from 12,000.6 to 12,000.7 and so on. In this way you get an increase in dynamic range by adding fractional precision to your amplitudes. Sounds that 16-bit had to round to the same whole number may now be represented by different, more precise values when recorded in High-Res.
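The "extra decimal places" idea can be demonstrated in a short Python sketch (standard library only; the function name and the two sample analog levels are my own choices for illustration). Two slightly different analog levels that 16-bit rounds to the same code stay distinct at 24-bit, and dividing the 24-bit code by 256 puts it back on the 16-bit scale, fraction and all:

```python
def quantize(x, bits):
    """Round an analog value in [-1.0, 1.0) to the nearest integer sample code."""
    full_scale = 2 ** (bits - 1)  # 32,768 for 16-bit; 8,388,608 for 24-bit
    return round(x * full_scale)

a, b = 0.366200, 0.366210                # two slightly different analog levels
print(quantize(a, 16), quantize(b, 16))  # both 12000 -- identical at 16-bit
print(quantize(a, 24), quantize(b, 24))  # two different codes at 24-bit

# Viewed on the 16-bit scale, the 24-bit code carries 8 extra bits, i.e. 256
# sub-steps between adjacent 16-bit levels -- the fraction 16-bit rounded away:
print(quantize(a, 24) / 256)             # ~11999.64 rather than a flat 12000
```

This is why the benefit of 24-bit shows up as a lower noise floor rather than a louder maximum: the top of the scale is unchanged, but the steps are finer.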
Whether many human ears can actually detect the difference is a fair question, but at least (unlike with sampling frequency, where the Nyquist-Shannon theorem settles the matter) nobody has yet come up with a mathematical proof that it makes no difference.