I think what you mean is that real music gets convolved with your ear’s “self-distortion” function, whereas the reproduced signal gets convolved with both the amplifier’s distortion profile and your ear’s?
No, that’s not really what I meant. I was talking about what the signal you’re reproducing, the music recording itself, actually is: how is it created and what does it contain? The recording already has a “distortion profile”. Microphones and mic pre-amps produce distortion, often deliberately so; electric guitars are pretty much nothing but distortion, bass guitars almost the same; synth sounds typically have a lot of distortion; drum kits are massively processed/distorted; and then more distortion is typically added during mixing and mastering. 60 years ago that was achieved using tape saturation, overdriven mic amps, compressors and limiters. These days we can be far more choosy, with a whole raft of highly configurable distortion plugins, transient shapers, modelled vintage gear and endless tools to manipulate the frequency content of both the audio and the added distortion (PEQ being an obvious example).
The important part here is: on what basis is all this manipulation and distortion being added/created? Obviously it’s being done by the musicians (in the case of e-guitars, basses and synth patches), by the mix engineer and by the mastering engineer, and they are incredibly choosy (almost to the point of being anal) about exactly what distortion is applied and how much, but their basis for all these choices is their own human hearing. In other words, if we’re just talking about a sort of general human “distortion profile”, then that is already baked into the recording you’re reproducing, because a general human distortion profile was what the musicians/engineers were relying on when creating the recording’s content.

Take for example the equal loudness contours (I assume you’re familiar with them). Let’s say an amp manufacturer thinks to themselves: “I know, let’s create an EQ curve to compensate for the way we perceive music/sound, a sort of inverse of the equal loudness contour, e.g. a bit of a reduction around 3kHz because our hearing artificially boosts that frequency region, and a lot more bass and treble because our hearing rolls off a lot in those ranges.” That sounds like a good idea, but it would actually be a terrible one, because the mix and mastering engineers have created the mix and master according to their hearing, so the recordings you’re playing already automatically contain a compensation for the equal loudness contour; otherwise it would have sounded way wrong to the engineers. An amp that actually applied that compensation would effectively result in a playback where the equal loudness contour had been compensated for twice, and it would sound terrible (although you might find someone who likes it).
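The double-compensation point can be sketched numerically. A minimal sketch, with hypothetical per-band gains (not real equal-loudness data): if the mix already embeds the engineers’ hearing-based compensation, an amp that applies the same curve again pushes every band off by the full amount a second time.

```python
# Hypothetical "inverse equal loudness" compensation in dB per band:
# cut around 3 kHz, boost lows and highs (made-up values for illustration).
compensation_db = {"100 Hz": +6.0, "3 kHz": -4.0, "12 kHz": +5.0}

# The recording as mixed: the engineers' ears already applied this
# compensation, so these gains are baked into the master.
recording = dict(compensation_db)

# An amp that "helpfully" applies the same compensation curve again
# at playback time:
played_back = {band: recording[band] + compensation_db[band]
               for band in compensation_db}

for band in compensation_db:
    deviation = played_back[band] - recording[band]
    print(f"{band}: {deviation:+.1f} dB away from what the engineers intended")
```

Every band ends up deviating from the intended balance by exactly the compensation amount, applied a second time on top of the version the engineers already signed off on.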
The best thing you can do is just play back the recording with the best fidelity you can, and then the “distortion profile” will be as correct as practical. Incidentally, the amplifier’s distortion profile is irrelevant because in virtually all cases it’s inaudible; the only exception is one or two rare tube amps with such horrifically bad distortion that it’s actually audible (or user error, using the wrong amp for the task).
Definitely agree that you can't see real music as a superposition of pure tones which then get processed individually by the ear/brain.
Real music is of course just a superposition of pure tones, but we don’t perceive them as such; we just perceive sounds/instruments with a “timbre” rather than all the individual harmonics separately. The human ear is especially good at separating these sounds, though: current evidence suggests we’re better at this listening task than any other animal, even those with more sensitivity and a greater frequency range.
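To make the superposition point concrete, here’s a minimal numpy sketch (the harmonic amplitudes are made up for illustration): four pure tones summed into one waveform, which an FFT then separates back into its component partials.

```python
import numpy as np

sr = 8000                       # sample rate in Hz
t = np.arange(sr) / sr          # one second of samples
f0 = 220.0                      # fundamental frequency (A3)
amps = [1.0, 0.5, 0.25, 0.125]  # hypothetical harmonic amplitudes (the "timbre")

# Superpose the pure tones into a single waveform: what the ear receives
# is just this one pressure signal, not four separate ones.
note = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k, a in enumerate(amps))

# An FFT recovers the individual partials we just summed.
spectrum = np.abs(np.fft.rfft(note)) / (sr / 2)
freqs = np.fft.rfftfreq(sr, 1 / sr)
peaks = [round(f) for f in freqs[spectrum > 0.05]]
print(peaks)  # the component pure tones: [220, 440, 660, 880]
```

We don’t hear those four frequencies individually; we hear one note with a particular timbre, even though the waveform itself is nothing more than their sum.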
G