Your questions are all very good. These are problems the music and recording industries have been wrestling with basically since their inception. The smartest guy on this subject is probably Floyd Toole, formerly of Harman. According to Floyd, the "circle of confusion" — recordings are mixed on monitors that were themselves voiced using earlier recordings, which were mixed on still other monitors, so there is no fixed reference anywhere in the chain — makes it extremely difficult to maintain the fidelity of a performance from the recording to the listening environment. This is why good sound engineers get paid pretty good salaries.
Recently there have been attempts to virtualize performances to produce an exact reproduction (or as close as possible) at the ear, using various forms of 3D/binaural/immersive audio technology. I haven't kept up with it, but you can probably read about the tech on Tyll Hertsens' site, InnerFidelity.
What if there is no live recording space though, and the music is mostly or solely created from samples or synthetic sounds? That creates a different set of problems. And then you start to think about the differences and relationships between the mastering and listening spaces. That can be a trap as well though, because the intended listening space may be acoustically very different from the space in which the music is authored or mastered. When it comes to the artist's intent, I tend to feel the intended listening space matters more than the recording or mastering spaces. But that's just me.
Don't quote me on this, but I believe Floyd and the researchers at Harman concluded that tone controls (EQ) could be a useful tool to compensate for differences in how content is authored. I recall reading a white paper where they suggested that a simple bass and treble control could serve that purpose.
I use a compact mixer with bass, mid-range, and treble controls to compensate both for deficiencies in my headphones' response and for variations in the content I listen to — probably more the former than the latter. It's possible I've become too reliant on EQ for all of the above. My headphones sound pretty awful without it, though.
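For anyone curious what a "bass control" actually is under the hood: it's typically a low-shelf filter that boosts or cuts everything below a corner frequency. Here's a rough sketch in Python using the well-known RBJ Audio EQ Cookbook biquad formulas — this is just my illustration, not anything from the Harman paper, and the function names and the 200 Hz / +6 dB settings are arbitrary choices of mine.

```python
import math

def low_shelf(fs, f0, gain_db):
    """Biquad coefficients for a low-shelf ("bass control") at f0 Hz.

    Formulas from the RBJ Audio EQ Cookbook, shelf slope S = 1.
    Returns normalized (b, a) with a0 divided out.
    """
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    cosw, sinw = math.cos(w0), math.sin(w0)
    alpha = sinw / 2.0 * math.sqrt(2.0)
    sqA2a = 2 * math.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * cosw + sqA2a)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - sqA2a)
    a0 = (A + 1) + (A - 1) * cosw + sqA2a
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - sqA2a
    return [c / a0 for c in (b0, b1, b2)], [c / a0 for c in (a1, a2)]

def apply_biquad(x, b, a):
    """Direct-form I filtering of a list of samples."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        y.append(out)
        x1, x2, y1, y2 = s, x1, out, y1
    return y

fs = 48000
b, a = low_shelf(fs, 200.0, 6.0)  # +6 dB bass shelf, corner at 200 Hz

def peak_gain(freq):
    # One second of a unit-amplitude sine; measure the steady-state peak.
    x = [math.sin(2 * math.pi * freq * n / fs) for n in range(fs)]
    y = apply_biquad(x, b, a)
    return max(abs(s) for s in y[fs // 2:])

low = peak_gain(100.0)    # below the corner: boosted toward +6 dB
high = peak_gain(5000.0)  # well above the corner: essentially untouched
```

A treble control is the mirror image (a high-shelf, with the analogous cookbook formulas), and a mid control is usually a peaking filter — chain the three biquads and you've got the tone section of a mixer like mine.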