1. You are not looking at a waveform!!! You are looking at a graphical representation of the data points on your disk, NOT the waveform once it has been converted (via a dithering quantiser in the DAC)!
2. Again, NO! It will have zero error after dither.
3. No one is arguing for "good enough" (10bit audio); we are arguing for at least 40 times "better than we need", which is what we get with 16bit. What do you not understand?
4. Yes, white noise that's 40 times lower than the noise floor of any commercial recording! What is there here that you do not understand?
5. The only fact which remains is that you're refusing to understand how digital audio works and are using any audiophile myth you can find to prove a false point about 24bit! The fact is that with dither the waveform shape is perfect down to about -92dBFS, and below that it's still perfect but covered by white noise.
Please answer the question: how much more dynamic range do you want than the 40 to 100+ times more than is ever used, and why?
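(For what it's worth, that roughly -92dBFS figure is easy to sanity-check. Below is a minimal Python/numpy sketch of my own, not from either post: it quantises digital silence to the 16-bit grid with TPDF dither and measures the residual noise floor against the RMS of a full-scale sine, landing around -93 dB.)

import numpy as np

N = 48000 * 10                              # ten seconds of digital "silence"

# TPDF dither: the sum of two independent uniform variables, +/-1 LSB peak.
tpdf = (np.random.rand(N) - 0.5) + (np.random.rand(N) - 0.5)

# Quantise the dithered silence to whole 16-bit steps (in LSB units).
noise = np.round(tpdf)

rms = np.sqrt(np.mean(noise ** 2))          # ~0.5 LSB
sine_rms = (2 ** 15) / np.sqrt(2)           # RMS of a 0 dBFS sine, in LSBs
print(20 * np.log10(rms / sine_rms))        # roughly -93 dB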
G
I'm not sure why you are saying I'm not looking at the waveform; do you suggest I get inside the wires and watch the electrons drifting back and forth?
I don't think you realise what dither actually is. It's just noise added before quantisation to even out the quantisation errors; it doesn't make each little wavelet have the right shape, and each point will still be quantised to the wrong value.
There's a Wikipedia article that's fairly good at explaining:
https://en.wikipedia.org/wiki/Dither
Wikipedia said:
If a series of random numbers between 0.0 and 0.9 (ex: 0.6, 0.1, 0.3, 0.6, 0.9, etc.) are calculated and added to the results of the equation, two times out of ten the result will truncate back to 4 (if 0.0 or 0.1 are added to 4.8) and the rest of the times it will truncate to 5, but each given situation has a random 20% chance of rounding to 4 or 80% chance of rounding to 5. Over the long haul this will result in results that average to 4.8 and a quantization error that is random — or noise.
Note the phrase 'Over the long haul' as a clue to the temporal difficulty: the quantisation is still there, and dither is not magic; it just trades the repeating error for noise spread over time.
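To see what 'over the long haul' means in practice, here's a minimal Python sketch (my own illustration of the Wikipedia example, not anything from either post): every individual sample still quantises to a wrong value, 4 or 5, but the long-run average converges on 4.8.

import random

TRUE_VALUE = 4.8
N = 100_000

samples = []
for _ in range(N):
    # Uniform dither in [0.0, 1.0) added before truncation, mirroring
    # the 0.0-0.9 random numbers in the quoted passage.
    samples.append(int(TRUE_VALUE + random.random()))

print(set(samples))          # {4, 5} -- every sample is individually wrong
print(sum(samples) / N)      # ~4.8   -- the error averages out as noise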
This noise is of course itself quantised, and there are various home-brew dithers that sound better than others, but it's not analogue noise: it's still stuck to a fixed number of levels, like a randomised PWM.
For instance, a brief level of 0.5 bit, which truncation would flatten to 0, becomes with dither a random 50/50 split between 0 and 1, which one hears as noise. Not a nice noise as in analogue, but a rather coarse noise: try it on an 8bit waveform and listen (see the sketch below).
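If anyone wants to run that experiment, here's a rough Python sketch (numpy plus the stdlib wave module; the file names, tone, and levels are my own choices) that quantises a 440 Hz sine with a peak of 0.5 LSB to 8 bits, with and without TPDF dither. The truncated file is pure silence; the dithered one is the tone buried in exactly that coarse noise.

import wave
import numpy as np

RATE = 44100
t = np.arange(RATE * 3) / RATE                 # three seconds

# A 440 Hz sine with a peak of 0.5 LSB: truncation alone wipes it out.
signal = 0.5 * np.sin(2 * np.pi * 440 * t)

# TPDF dither, +/-1 LSB peak.
tpdf = (np.random.rand(len(t)) - 0.5) + (np.random.rand(len(t)) - 0.5)

def write_8bit(name, samples_lsb):
    # Truncate toward zero to whole LSBs, shift to the unsigned 8-bit
    # midpoint (128), clip, and write a mono 8-bit WAV file.
    data = np.clip(np.trunc(samples_lsb) + 128, 0, 255).astype(np.uint8)
    with wave.open(name, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(1)
        f.setframerate(RATE)
        f.writeframes(data.tobytes())

write_8bit("truncated.wav", signal)            # silence: the sine is gone
write_8bit("dithered.wav", signal + tpdf)      # the sine, under coarse noise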
Why exactly are you against 24bit? Do you need a bigger disk, or is it metered charges on the internet?
Here's a study that found 24bit was more perfect than 16bit BTW:
http://www.aes.org/tmpFiles/elib/20170620/18296.pdf
But even disregarding that, I'm puzzled by the fight for 'good enough' when 24/96 is not only easily achievable but seemingly used by everyone outside of audio: why have the HiFi crowd got the worst format? Is the obsolete CD format really that important anymore? I can't recall the last time I listened to a CD on a CD player; they arrive in the post and get ripped the same day.
Perhaps the lack of easily available iTunes downloads at 96/24 isn't the record companies' fault after all, but our fault for fighting and insisting we get sold an inferior format?
Why are we demanding mediocrity in audio? Do we turn down 100W amps because we may only ever use 2W? Is that speaker too good for us, so we get a lower-grade one? Is that steel beam holding up the house too good for the job, so we demand a smaller one? Format wars appear to be the only branch of HiFi where people demand less, rather than more.