I don't totally disagree with you, but I do believe you need to think about what you are saying here. Your suggestion is that the fundamental tones are not actually that fundamental! I say that because, to me, you are emphasizing the role of all the other sound outside the fundamentals that 'distinguishes' something (like an instrument). Just because you can slide an EQ around and hear something change for the better or for the worse, it doesn't follow that a problem like sibilance (which, again, is not necessarily bad) is solved at one frequency extreme or the other. My suggestion was that there is a difference between vocal and instrumental sibilance, and a difference between a sibilant recording and a sibilant speaker. The former you can personally do nothing about; the latter you can try to 'fix'. In a completed recording that you are playing back, tweaking a frequency for the voice also affects everything else in the recording, so, as you note, maybe the voice problem isn't as bad now, but the trumpet sounds a little flat. That is why it is so important to have a good recording and a good speaker; at the end of the day, the issues that may be audible to you are not really going away.
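To put a rough number on that point, here is a little Python sketch. The 6 kHz target, the cut depth, the Q, and the 'trumpet overtone' at 5.5 kHz are all made-up illustration values, not anything from a real recording; the filter itself is the standard RBJ cookbook peaking EQ.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs):
    """RBJ cookbook peaking-EQ biquad: returns (b, a) coefficients."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 44100
# A 6 dB cut aimed at a (hypothetical) sibilant spot around 6 kHz.
b, a = peaking_eq(6000.0, -6.0, 2.0, fs)

# How much does that same cut take off neighbouring content in the mix?
test_freqs = [6000.0, 5500.0, 1000.0]  # voice 'ess', nearby trumpet overtone, midrange
_, h = freqz(b, a, worN=test_freqs, fs=fs)
for f, mag in zip(test_freqs, np.abs(h)):
    print(f"{f:6.0f} Hz: {20 * np.log10(mag):+5.1f} dB")
# On a completed recording the filter can't tell the voice from the
# trumpet: everything near 6 kHz gets shaved, not just the 'ess'.
```

The overtone at 5.5 kHz loses several dB even though the cut was 'for the voice', which is exactly why playback EQ can only trade one problem for another.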
I am not an expert on sound by any means, but I wish I could find the link I posted eons back to a website where you could listen to slices of a radio broadcast (with some music) to discover why certain recordings are processed the way they are. Contrary to what you said, the article was actually about all the sound information that you DO want to process out to maximize vocal intelligibility and definition. Just because recording equipment captures the deep bass of a man's voice does not mean you want all that bass in the broadcast; it will be boomy, distracting, and 'unnatural'. It led by example, letting you listen to slices of the same recording over and over again. As it turned out, it was easy to hear how frequencies at, say, 200 Hz were where voices could sound muddy and bloated, while around 2 kHz they could become very sibilant and harsh. Without question, all frequencies were important to a well-done recording. However, if there was but one range of frequencies to keep, for the sake of actually being able to hear what people were saying at all (and still sounding somewhat human), it was right around 1 kHz. There was virtually NOTHING going on above 4 kHz, and anything much higher than that was basically inaudible: just scraps of sound that would be very easily missed. So, as I suggested, what I hold to be an important rule of audio wisdom is this: YES, it might be there, but IS IT AUDIBLE? Is it CRITICAL to reproduction of the sound? That is the most important question sometimes. Without question, there is potentially more vocal sibilance at 1-4 kHz than you will find at 15 kHz.
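Since I can't find that link anymore, here is a rough Python sketch of how you could run the same experiment yourself: slice a recording of speech into bands and listen to each one. The filename, the band edges, and the filter order are my own assumptions, and it presumes a mono, 16-bit file sampled at 44.1 kHz.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def band_slice(x, lo, hi, fs):
    """Keep only the lo..hi Hz band (4th-order Butterworth band-pass)."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# "speech.wav" is a stand-in name; use any mono 16-bit recording of
# talking, sampled at 44.1 kHz so the top band stays below Nyquist.
fs, speech = wavfile.read("speech.wav")
speech = speech.astype(np.float64)

bands = [(100, 300),      # where voices get boomy and muddy
         (700, 1400),     # the core of intelligibility, around 1 kHz
         (2000, 4000),    # where sibilance and harshness live
         (8000, 16000)]   # 'air' - listen for how little voice is up here
for lo, hi in bands:
    y = band_slice(speech, lo, hi, fs)
    wavfile.write(f"slice_{lo}-{hi}Hz.wav", fs, y.astype(np.int16))
```

Play the slices back to back: the 700-1400 Hz file is the one you can still follow as speech, and the 8-16 kHz file is mostly the scraps of sound I described.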
The presence of second-, third- . . . 90th-order 'harmonics' as the missing link of musical beauty sounds to me more like marketing-speak than science. Today's recordings are typically digital, giving audio engineers free rein to tweak every aspect of the recording, one instrument and vocal at a time. They can and will choose to shape the sound of everything recorded; they will set playback levels and speed, add artificial resonances, and on and on. So if they screw it up (and boy, do they frig with everything sometimes, to our loss), we the listeners are basically stuck with it. You may think the HE-400 will never sound 'real', and that is fine, but let's not conflate multiple subjects.
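Some quick arithmetic makes my skepticism concrete. This sketch assumes a 110 Hz fundamental (ballpark for a male singing voice) and a simple sawtooth-like 1/n amplitude rolloff, which is only a toy model, not how any particular voice or instrument actually behaves.

```python
import math

f0 = 110.0  # assumed fundamental in Hz
for n in (2, 3, 10, 45, 90):
    freq = n * f0
    level_db = 20.0 * math.log10(1.0 / n)  # level relative to the fundamental
    print(f"harmonic {n:2d}: {freq:7.0f} Hz, about {level_db:5.1f} dB down")
# Under this 1/n model the 90th harmonic sits near 9.9 kHz at roughly
# -39 dB: maybe present, but a long way from 'the missing link'.
```

So even granting those high-order harmonics exist in the recording, the question stands: at that level, are they audible, and are they critical?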
I get that there are big differences between real, live music and instruments on the one hand, and recording and playback electronics on the other. But I think you should take another look at my post. To what extent do you think the harmonics outside the fundamental tones are not only audible, but critical to the enjoyment of these headphones? Because I am not convinced that an issue like sibilance should be tackled in the 15 kHz range, when the human voice cannot physically produce a fundamental anywhere near that frequency.