Don’t let anyone tell you whether you can or cannot hear a difference. Download some sample files and find out for yourself what you can or can’t hear (using good equipment). I also wouldn’t trust anything anyone says here unless you find evidence or try it yourself; it seems people get off on sounding knowledgeable even though a lot of them don’t bother reading. I have even heard people claim there are tests proving there is no audible difference between 128 kbps MP3 files and lossless FLAC. Hey, if in your humble opinion you hear a difference and enjoy the music more, what gives them the right to tell you otherwise? Most of them will attack differing opinions because they feel those opinions discredit their own.
Given the existence of musical-instrument energy above 20 kilohertz, it is natural to ask whether that energy matters to human perception or to music recording. The common view is that energy above 20 kHz does not matter, but AES preprint 3207 by Oohashi et al. claims that reproduced sound above 26 kHz "induces activation of alpha-EEG (electroencephalogram) rhythms that persist in the absence of high frequency stimulation, and can affect perception of sound quality." [4]
Oohashi and his colleagues recorded gamelan to a bandwidth of 60 kHz, and played back the recording to listeners through a speaker system with an extra tweeter for the range above 26 kHz. This tweeter was driven by its own amplifier, and the 26 kHz electronic crossover before the amplifier used steep filters. The experimenters found that the listeners' EEGs and their subjective ratings of the sound quality were affected by whether this "ultra-tweeter" was on or off, even though the listeners explicitly denied that the reproduced sound was affected by the ultra-tweeter, and also denied, when presented with the ultrasonics alone, that any sound at all was being played.
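The band-splitting in that setup can be sketched in software. This is only an illustrative model, not the authors' actual electronics: the preprint does not specify the filter order or topology, so the 8th-order Butterworth design, the 192 kHz sample rate, and the test tone below are all assumptions chosen for demonstration.

```python
import numpy as np
from scipy import signal

# Illustrative sketch of a steep 26 kHz crossover like the one used to
# drive the "ultra-tweeter" and main speakers from separate amplifiers.
# FS, ORDER, and the test frequency are assumptions, not preprint values.
FS = 192_000        # sample rate high enough to carry content above 26 kHz
F_XOVER = 26_000    # crossover frequency given in the preprint
ORDER = 8           # "steep" is unspecified; 8th order is an arbitrary choice

sos_lo = signal.butter(ORDER, F_XOVER, btype="lowpass", fs=FS, output="sos")
sos_hi = signal.butter(ORDER, F_XOVER, btype="highpass", fs=FS, output="sos")

def split_bands(x):
    """Return (band below 26 kHz, band above 26 kHz) of signal x."""
    return signal.sosfilt(sos_lo, x), signal.sosfilt(sos_hi, x)

# Quick check: a 40 kHz tone, well inside the ultra-tweeter band, should
# land almost entirely in the high output and barely leak into the low one.
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 40_000 * t)
lo, hi = split_bands(tone)
print("low-band RMS:", np.sqrt(np.mean(lo**2)))
print("high-band RMS:", np.sqrt(np.mean(hi**2)))
```

Switching the ultra-tweeter on or off in the experiment corresponds to keeping or discarding the `hi` branch while the `lo` branch plays unchanged.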
From the fact that changes in subjects' EEGs "persist in the absence of high frequency stimulation," Oohashi and his colleagues infer that in audio comparisons, a substantial silent period is required between successive samples to avoid the second evaluation's being corrupted by "hangover" of reaction to the first.
The preprint gives photos of EEG results for only three of sixteen subjects. I hope that more will be published.
In a paper published in Science, Lenhardt et al. report that "bone-conducted ultrasonic hearing has been found capable of supporting frequency discrimination and speech detection in normal, older hearing-impaired, and profoundly deaf human subjects." [5] They speculate that the saccule may be involved, this being "an otolithic organ that responds to acceleration and gravity and may be responsible for transduction of sound after destruction of the cochlea," and they further point out that the saccule has neural cross-connections with the cochlea. [6]
Even if we assume that air-conducted ultrasound does not affect direct perception of live sound, it might still affect us indirectly through interfering with the recording process. Every recording engineer knows that speech sibilants (Figure 10), jangling key rings (Figure 15), and muted trumpets (Figures 1 to 3) can expose problems in recording equipment. If the problems come from energy below 20 kHz, then the recording engineer simply needs better equipment. But if the problems prove to come from the energy beyond 20 kHz, then what's needed is either filtering, which is difficult to carry out without sonically harmful side effects; or wider bandwidth in the entire recording chain, including the storage medium; or a combination of the two.
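Distinguishing the two cases above starts with measuring how much of a source's energy actually lies beyond 20 kHz. A minimal sketch of such a measurement follows; the signal here is synthetic (a 5 kHz tone plus a weaker 25 kHz component standing in for key-ring or muted-trumpet ultrasonics), whereas a real test would analyze an actual wideband recording.

```python
import numpy as np

# Estimate the fraction of a signal's energy above 20 kHz via the FFT.
# The test signal and its amplitudes are invented for illustration.
FS = 96_000                       # assumed wideband sample rate
t = np.arange(FS) / FS            # one second of samples
x = np.sin(2 * np.pi * 5_000 * t) + 0.1 * np.sin(2 * np.pi * 25_000 * t)

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / FS)
power = np.abs(spectrum) ** 2
frac_ultra = power[freqs > 20_000].sum() / power.sum()
print(f"fraction of energy above 20 kHz: {frac_ultra:.3f}")
```

Here the 25 kHz component carries about 1% of the total energy; whether equipment problems track that small ultrasonic fraction or the sub-20 kHz remainder is exactly the question the paragraph above poses.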
On the other hand, if the assumption of the previous paragraph be wrong — if it is determined that sound components beyond 20 kHz do matter to human musical perception and pleasure — then for highest fidelity, the option of filtering would have to be rejected, and recording chains and storage media of wider bandwidth would be needed.