Actually, the waveform you are referring to is caused by reduced low-frequency content. There is no initial LF response that is larger. A square wave is a sum of sine waves at odd harmonics of the fundamental; the initial edge comes from the higher harmonics. So if this were a low-frequency part of music, the amp wouldn't respond with an initially higher level that fails to continue — it would simply respond at a lower level, and that is it.
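To make the harmonic decomposition concrete, here is a small numpy sketch (my own illustration, not from the post above): it builds a square wave from odd-harmonic sines and shows that the steepness of the edge grows with the number of high harmonics included.

```python
import numpy as np

fs = 48_000          # sample rate (Hz)
f0 = 100.0           # square-wave fundamental (Hz)
t = np.arange(0, 0.02, 1 / fs)

def square_partial(n_harm):
    """Fourier partial sum of a square wave:
    (4/pi) * sum over odd k of sin(2*pi*k*f0*t) / k."""
    y = np.zeros_like(t)
    for k in range(1, n_harm + 1, 2):
        y += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
    return y

few = square_partial(3)    # fundamental + 3rd harmonic: rounded edges
many = square_partial(99)  # many odd harmonics: sharp, fast edges

# the maximum sample-to-sample slope (edge steepness) comes
# from the high harmonics, not from any extra LF content
print(np.max(np.abs(np.diff(many))) > np.max(np.abs(np.diff(few))))
```

With only the first two terms the "square" is visibly rounded; adding harmonics sharpens the edge without raising the low-frequency content at all.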
That is another interpretation.
Fact remains that it takes a larger voltage swing in the amplifier, and greater headphone or speaker excursion, to reproduce a square wave through an amp with limited LF response than through a (near) DC-coupled amp — for the same reference level in the midrange, usually 1 kHz. That makes amps with limited LF response audibly "louder" in the bass.
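This larger required swing can be checked numerically. The sketch below (my own, with arbitrary example frequencies) passes a unit square wave through a simple first-order high-pass — a stand-in for an amp's LF rolloff — and compares steady-state peak levels; the tilted output swings well above the original ±1.

```python
import numpy as np

fs = 100_000   # sample rate (Hz)
f_sq = 50.0    # square-wave frequency (Hz), a bass-range signal
fc = 30.0      # high-pass corner (Hz) modelling the LF rolloff

t = np.arange(0, 0.2, 1 / fs)
x = np.sign(np.sin(2 * np.pi * f_sq * t))   # ideal square wave, peak = 1.0

# first-order discrete high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])
a = 1.0 / (1.0 + 2 * np.pi * fc / fs)
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = a * (y[n - 1] + x[n] - x[n - 1])

# compare peaks after the filter has settled (second half of the signal)
tail = slice(len(x) // 2, None)
print(np.max(np.abs(x[tail])), np.max(np.abs(y[tail])))
```

The high-passed square wave peaks noticeably above 1.0 (approaching 2.0 as the rolloff cuts deeper), so for the same nominal signal level the LF-limited chain really does need more voltage swing and more driver excursion.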
This fact also means that an LF-limited amp can reach its clipping point earlier than a (near) DC-coupled amp — for the same power rating and the same midrange reference level. This IS important, as bass is usually the part of the audible spectrum that takes the lion's share of the power. A clipped amp will definitely sound different from one still working within its limits. The difference can be more than 3 dB in the actual SPL achievable, and therefore clearly audible. 3 dB SPL is the difference between a 100 and a 200 W/ch amp, for example.
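The 100 vs. 200 W comparison is just the standard decibel arithmetic — a doubling of power is about 3 dB:

```python
import math

# SPL gain from doubling amplifier power, all else being equal:
# dB = 10 * log10(P2 / P1)
gain_db = 10 * math.log10(200 / 100)
print(round(gain_db, 2))  # ≈ 3.01 dB
```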
No matter how it is interpreted, it is a deviation from the original signal; it is a form of distortion, and it is audible. The problem is compounded by the fact that in any real scenario there are many (pre)amps connected in series: there can be quite a few between the microphone and the amp actually driving one's headphones or speakers. All those LF rolloffs simply add up, and by the time the final output is reached, the original waveform can be objectionably, audibly distorted.
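How much the rolloffs add up is easy to quantify for the simplest case. Assuming (my example, not the post's) a chain of identical first-order high-pass stages, the combined −3 dB point follows the standard result fc / sqrt(2^(1/n) − 1):

```python
import math

def effective_corner(fc, n):
    """-3 dB frequency of n identical cascaded first-order high-pass
    stages, each with corner fc: fc / sqrt(2**(1/n) - 1)."""
    return fc / math.sqrt(2 ** (1 / n) - 1)

# four stages with a seemingly harmless 20 Hz corner each
for n in (1, 2, 4):
    print(n, round(effective_corner(20.0, n), 1))
```

Four cascaded 20 Hz rolloffs behave like a single rolloff at roughly 46 Hz — well into the audible bass — which is the compounding effect described above.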
It is one reason why amplifiers can sound different, despite being used well within their power limits and despite specs well in excess of what is required in terms of noise and nonlinear distortion.
Edited by analogsurviver - 8/17/14 at 11:43pm