And do remember that the nominal impedance (e.g. 600 ohm, 32 ohm) is NOT the impedance at each frequency.
And that's what I'm trying to show. The impedance peaks in theory cause voltage to drop, but in reality they also cause power to drop, and the difference in listening level (dB SPL) actually shrinks as output impedance increases.
That's the case for a DT990, with a 250 ohm nominal impedance and a roughly 350 ohm impedance peak around 100 Hz.
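As a quick sanity check on the power part, here's a minimal Python sketch using those DT990 numbers (the 1 Vrms drive level is just an assumption for illustration):

```python
from math import log10

Z_NOMINAL = 250.0  # ohms (DT990 nominal)
Z_PEAK = 350.0     # ohms (impedance peak around 100 Hz)

def power_mw(v_rms, z):
    # P = V^2 / Z, converted from watts to milliwatts
    return 1000.0 * v_rms ** 2 / z

p_nom = power_mw(1.0, Z_NOMINAL)   # 4.00 mW
p_peak = power_mw(1.0, Z_PEAK)     # ~2.86 mW
print(f"1 Vrms into {Z_NOMINAL:.0f} ohm: {p_nom:.2f} mW")
print(f"1 Vrms into {Z_PEAK:.0f} ohm: {p_peak:.2f} mW")
print(f"Power drop at the peak: {10 * log10(p_peak / p_nom):.2f} dB")  # ~ -1.46 dB
```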
[dB SPL @ 1 Vrms, x Hz] - 10*log10(1000 / [Impedance @ x Hz]) = [dB SPL @ 1 mW, x Hz]
Plug in the values for all frequencies (x).
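If it helps, here's that formula as a small Python sketch; the dB SPL @ 1 Vrms values are hypothetical placeholders (a flat 94 dB voltage response), not measured DT990 data:

```python
from math import log10

# Hypothetical sensitivity values -- NOT measured DT990 data, just a flat
# 94 dB SPL @ 1 Vrms placeholder so the conversion itself is visible.
db_spl_at_1vrms = {20: 94.0, 100: 94.0, 1000: 94.0}
impedance_ohm   = {20: 250.0, 100: 350.0, 1000: 250.0}  # peak at 100 Hz

def db_spl_at_1mw(db_at_1vrms, z):
    # [dB SPL @ 1 mW] = [dB SPL @ 1 Vrms] - 10*log10(1000 / Z)
    # (1000 / Z is the power, in mW, that 1 Vrms delivers into Z ohms)
    return db_at_1vrms - 10 * log10(1000.0 / z)

for f in sorted(impedance_ohm):
    s = db_spl_at_1mw(db_spl_at_1vrms[f], impedance_ohm[f])
    print(f"{f} Hz: {s:.2f} dB SPL @ 1 mW")
# With a flat dB-per-volt response, dB SPL/mW comes out ~1.46 dB higher at
# the 100 Hz impedance peak.
```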
I think you are just stating what I stated... in a different way.
Basically you are saying:
[dB at 1Vrms] - [dB re 1 mW of the power that 1 Vrms delivers at a certain Hz] = [dB SPL/mW]
But how does this show more efficiency at resonant frequencies?
I'm seeing 2 scenarios here:
If [dB at 1Vrms] is constant, then as impedance peaks at certain frequencies, [dB SPL/mW] increases along with those peaks; in that case, I can see how efficiency increases at the resonant frequency.
If [dB SPL/mW] is constant, then as impedance peaks at certain frequencies, [dB at 1Vrms] actually drops along with the power that 1 Vrms delivers.
Why? It's easy to see that 1000/350 is smaller than 1000/250, so 10*log10(1000/Z) shrinks as impedance rises.
So in that case, efficiency actually drops, and higher output impedance actually helps reduce the volume variation between frequencies.
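To put numbers on scenario 2, here's a quick sketch assuming a perfectly flat [dB SPL/mW] and the 250/350 ohm figures above; the source voltage is arbitrary since it cancels in the comparison:

```python
from math import log10, sqrt

Z_NOM, Z_PEAK = 250.0, 350.0  # ohms: nominal and ~100 Hz peak

def power_delivered(z, r_out, v_source=1.0):
    # Voltage divider: V_hp = V_src * Z / (Z + R_out), then P = V_hp^2 / Z
    v_hp = v_source * z / (z + r_out)
    return v_hp ** 2 / z

# Scenario 2: [dB SPL/mW] is flat, so the SPL difference between the peak
# and a non-peak frequency is just the power difference in dB.
for r_out in (0.0, 50.0, 120.0, sqrt(Z_NOM * Z_PEAK), 1000.0):
    diff = 10 * log10(power_delivered(Z_PEAK, r_out) / power_delivered(Z_NOM, r_out))
    print(f"R_out = {r_out:6.1f} ohm -> peak frequency is {diff:+.2f} dB vs nominal")

# The difference runs from about -1.46 dB at R_out = 0 toward +1.46 dB as
# R_out grows, crossing 0 dB at R_out = sqrt(Z_NOM * Z_PEAK) ~ 296 ohm: under
# this assumption, some output impedance really does even out the response.
```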
Or is there something else I'm missing?