Interesting arguments. Interesting because some of them are based on a perception of fact rather than the reality. For example, when looking at a waveform on a computer screen, what are you actually looking at? You're looking at a graphical representation of the digital data stored in the audio file. What you are not looking at is a representation of what the analogue waveform will look like once it's come out of a DAC. This is obvious if you think about it: how could a piece of software emulate the effects of all the different processes that take place in every DAC on the market?

In other words, of course a 44.1kFs/s file is going to look less detailed than a 96kFs/s file. The question is, is it any less accurate once it's converted back to an analogue waveform? The answer is no! The answer has to be no, otherwise the whole theory of digital audio is wrong and digital audio doesn't exist!! You need just over two sample points per cycle of a waveform in order to perfectly recreate that waveform; having more points is not going to make the recreated waveform any more perfect. That's why the Nyquist theorem states that the sampling frequency must be more than twice the highest audio frequency you want to encode.
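If you want to see the "two samples per cycle is enough" claim in action rather than take it on faith, here's a minimal numpy sketch. The tone frequency and sample rate are my own illustrative choices (nothing specific to 44.1k vs 96k); it samples a tone and then rebuilds the waveform at instants *between* the stored samples using the textbook sinc (Whittaker-Shannon) interpolation that an ideal DAC reconstruction filter approximates:

```python
import numpy as np

# Sample a 1 kHz tone at 8 kHz -- comfortably more than twice the
# tone's frequency, as Nyquist requires (rates chosen for illustration).
fs = 8000
f = 1000.0
n = np.arange(2 * fs)                    # two seconds of samples
x = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon (sinc) interpolation: rebuild the waveform at
# instants that fall between the stored samples.
t = np.linspace(0.9, 1.1, 50)            # seconds, away from the record's edges
recon = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])
truth = np.sin(2 * np.pi * f * t)

# The residual error is tiny, limited only by the finite record length,
# not by "missing detail" between the samples.
print(np.max(np.abs(recon - truth)))
```

The reconstruction lands on the true waveform even though no sample was stored at those instants, which is the whole point: extra samples beyond Nyquist add no accuracy to a band-limited signal.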
In response to frequencies above 22kHz having an effect on the frequencies in the hearing range: possibly, but what difference does it make? If anything in the hearing range is affected, those effects are themselves in the hearing range, so they would be encoded at 44.1kFs/s just as well as they would at 96 or 192.
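To make that reasoning concrete, here's an illustrative numpy sketch (the tone frequencies and the squared-term nonlinearity are my own toy example, not a model of any real system). Two ultrasonic tones pass through a nonlinearity, which creates an intermodulation difference tone at 1 kHz; that by-product sits squarely inside the hearing range, well under the 22.05 kHz limit of 44.1k sampling:

```python
import numpy as np

fs = 96000                       # analyse at a high rate (illustrative)
t = np.arange(fs) / fs           # one second of samples

# Two ultrasonic tones, both well above the hearing range.
x = np.sin(2 * np.pi * 25000 * t) + np.sin(2 * np.pi * 26000 * t)

# A toy nonlinearity (small squared term) creates intermodulation
# products, including the 26k - 25k = 1 kHz difference tone.
y = x + 0.1 * x**2

spectrum = np.abs(np.fft.rfft(y))
# With a 1-second record, FFT bin k corresponds to k Hz.
print(spectrum[1000])            # strong component at 1 kHz
```

The audible 1 kHz product shows up as a large spectral peak at bin 1000. Since it lies far below 22.05 kHz, a 44.1kFs/s recording captures it just as faithfully as a 96k one; only the inaudible ultrasonic parents would be discarded.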
I routinely use a system that has 48bit resolution; that's around 288dB of dynamic range. So it must sound wicked compared to 24bit when listening to completed mixes? Err, no. It makes no difference whatsoever, nor does comparing my 48bit system with 16bit. It's not unusual to find pop songs with a dynamic range of less than 10dB; for classical it's usually less than 50dB. 96dB (16bit) is more than enough, so why would you need 144dB (24bit)? Maybe you want to hear the tuba player's nose hairs vibrate, just before he plays a note and vapourizes your eardrums!
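Those dynamic-range figures fall straight out of the definition of a bit: each extra bit doubles the number of quantisation levels, which adds 20·log10(2) ≈ 6.02 dB. A quick sketch (the 96/144/288 figures quoted above use the common 6 dB-per-bit shorthand):

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the number of quantisation levels,
    # adding 20*log10(2) (about 6.02) dB of dynamic range.
    return 20 * math.log10(2) * bits

for bits in (16, 24, 48):
    print(bits, round(dynamic_range_db(bits), 1))
```

This gives roughly 96.3, 144.5 and 289.0 dB for 16, 24 and 48 bits, matching the round numbers in the post.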
Why do some of you refuse to believe that intrinsically 24/96 as a consumer format is no better than 16/44.1 and that any perceived difference is an effect of a DAC?
My guess is it's because it's difficult to get past the logical (but incorrect) assumption that more data must mean more detail, and therefore better quality.
Crowbar - The idea of recording is not to make the recording sound identical to the live performance. Most pop music cannot be performed acoustically, and even with classical music this statement is incorrect. Go listen to a french horn, tuba or even a flute up close. It sounds nothing like it does in a big concert hall from the audience's point of view, but we can't put the mic too far away or we'll get SNR problems and no clarity. So we have to fake it: we make value judgements about the perception of our target demographic and then we work to their expectation. We also have to fake it because recording equipment is far from perfect (sometimes deliberately so) and we have to make compensations.