Ok, let me see if I can explain it a bit better.
To test for harmonic distortion, you feed the device being tested a tone made up of just a single frequency, let's say 1,000 Hz. Perhaps you remember how, back in the olden days, local TV stations would transmit color bars when they went off the air (or, back in the olden olden days, the test pattern with the Indian head on it). They'd also transmit a test tone, and that was a 1,000 Hz tone.
Anyway, if the device is perfectly linear, what you'll see on the output is only that 1,000 Hz tone. However, if it's non-linear and distorts the signal, you'll also see other frequencies at integer multiples of the test frequency. For example, with a 1,000 Hz tone you'd see frequencies at 2,000, 3,000, 4,000 Hz, etc. These are called harmonics, hence the term harmonic distortion.
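If it helps to see it, here's a quick Python sketch. The polynomial "device" is completely made up just to have something non-linear to feed the tone through; it's not anybody's actual transfer function:

```python
import numpy as np

fs = 48_000               # sample rate, Hz
f0 = 1_000                # the 1,000 Hz test tone
t = np.arange(fs) / fs    # one second of samples
tone = np.sin(2 * np.pi * f0 * t)

# Pretend "device": a perfectly linear one would just scale the tone,
# but this one adds small squared and cubed terms (a made-up non-linearity).
output = tone + 0.10 * tone**2 + 0.05 * tone**3

# Spectrum of the output; with exactly one second of audio,
# FFT bin k corresponds to k Hz.
spectrum = np.abs(np.fft.rfft(output)) / (len(output) / 2)

for n in range(1, 5):     # the test tone plus harmonics at 2, 3, 4 kHz
    print(f"{n * f0:>5} Hz: {spectrum[n * f0]:.4f}")
```

Run that and you'll see energy at 2,000 and 3,000 Hz that wasn't in the input. (The 4,000 Hz bin stays near zero only because this toy non-linearity stops at the cubed term; real devices usually produce a whole series of harmonics.)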
The levels of those harmonics relative to the level of the test tone give an indication of how much the device is distorting the signal being fed into it.
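To put an actual number on it, here's the usual arithmetic, with made-up amplitudes just for illustration: each harmonic's level relative to the test tone in dB, plus the standard THD figure, which is the RMS sum of the harmonics divided by the fundamental:

```python
import numpy as np

# Hypothetical measured amplitudes (linear scale), made up for illustration:
fundamental = 1.0                             # level at the 1,000 Hz test tone
harmonics = np.array([0.01, 0.003, 0.001])    # levels at 2, 3, 4 kHz

# Each harmonic's level relative to the test tone, in dB:
for n, level in enumerate(harmonics, start=2):
    print(f"harmonic {n}: {20 * np.log10(level / fundamental):.1f} dB")

# Total harmonic distortion (THD): RMS sum of harmonics over the fundamental.
thd = np.sqrt(np.sum(harmonics**2)) / fundamental
print(f"THD: {100 * thd:.2f}% ({20 * np.log10(thd):.1f} dB)")
```

Those dB numbers are what matter when comparing devices: -40 dB on the 2nd harmonic, for instance, means it's at 1% of the test tone's amplitude.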
The first big peak in the Headroom graphs represents the frequency of the test tone used for the measurement. And in order to make any meaningful comparison between headphones, the level of that tone should be the same for every headphone being tested, since it's the level of the harmonics relative to the level of the test tone that matters.
And that's why the first big peak in the measurements is at the same level from headphone to headphone.
Does this help?