
Would lack of amp power cause degradation to sound quality for high impedance headphones? - Page 5

Quote:
Originally Posted by xnor

[dB SPL @ 1 Vrms, x Hz] - 10*log10(1000 / [Impedance @ x Hz]) = [dB SPL @ 1 mW, x Hz]

Plug in the values for all frequencies (x).

And do remember that the nominal impedance (e.g. 600 ohm, 32 ohm) is NOT the impedance at each frequency.

Quote:
Originally Posted by liamstrain

And do remember that the nominal impedance (e.g. 600 ohm, 32 ohm) is NOT the impedance at each frequency.

And that's what I'm trying to show. The impedance peaks in theory cause voltage to drop, but in reality they also cause power to drop, and the difference in listening level (dB SPL) actually decreases as output impedance increases.

That's the case for a DT990 with a 250 Ohm nominal impedance and a 350 Ohm impedance peak in the region around 100 Hz.
Quote:
Originally Posted by xnor

[dB SPL @ 1 Vrms, x Hz] - 10*log10(1000 / [Impedance @ x Hz]) = [dB SPL @ 1 mW, x Hz]

Plug in the values for all frequencies (x).

I think you are just stating what I stated... in a different way.

Basically you are saying:

[dB at 1Vrms] - [dB produced by 1Vrms @ certain Hz] = [dB SPL/mW]

I'm seeing 2 scenarios here:

If [dB at 1Vrms] is constant, then as impedance peaks at certain frequencies, [dB SPL/mW] increases along with those peaks. In which case, I can see how efficiency increases at resonant frequency.

If [dB SPL/mW] is constant, then as impedance peaks at certain frequencies, [dB at 1Vrms] actually drops along with [dB produced by 1Vrms @ certain frequency].

Why? It's easy to see 1000/350 would be smaller than 1000/250.

So in that case, efficiency actually drops, and higher output impedance actually helps reduce volume variations between frequencies.

Or is there something else I'm missing?
Edited by Bill-P - 10/11/12 at 7:27am
Quote:
Originally Posted by Bill-P

[dB at 1Vrms] - [dB produced by 1Vrms @ certain Hz] = [dB SPL/mW]

You do not see that from that formula. It can be seen from the frequency response graphs, which are usually created using a low impedance amplifier (<2 Ω in Tyll Hertsens' setup) that can be considered a voltage source. Since you usually do not see a drop in the frequency response at the impedance peak(s), the efficiency has to be higher there. It also makes sense that resonance increases efficiency.

Edited by stv014 - 10/11/12 at 7:33am
Quote:
Originally Posted by Bill-P

And that's what I'm trying to show. The impedance peaks in theory cause voltage to drop, but in reality they also cause power to drop, and the difference in listening level (dB SPL) actually decreases as output impedance increases.

I am explaining this one last time. Impedance peaks do not cause voltage to drop. If anything, they cause voltage to drop less if you have an amplifier with a high output impedance. There is no discrepancy between theory and reality.

Power is a function of voltage multiplied by current, and current is a function of voltage divided by resistance. Same voltage but higher impedance means less current and therefore less power (needed).

Differences in dB SPL / frequency increase with higher output impedance, as does distortion.
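This point can be checked numerically with the usual voltage divider between the amp's output impedance and the headphone's impedance. The numbers below are illustrative, not measurements: a DT990-like load with 250 Ohm nominal impedance and a 350 Ohm bass peak, driven first from a near-zero and then from a high (120 Ohm) output impedance.

```python
import math

def level_change_db(z_load, z_out):
    """dB change at the driver terminals from the Zout/Zload voltage divider."""
    return 20 * math.log10(z_load / (z_load + z_out))

# Hypothetical DT990-like load: 250 ohm nominal, 350 ohm at the bass peak.
for z_out in (0.5, 120):
    spread = level_change_db(350, z_out) - level_change_db(250, z_out)
    print(f"Zout={z_out:>5} ohm: peak sits {spread:+.2f} dB above the rest")
```

With a near-zero output impedance the impedance peak shifts the level by well under 0.01 dB; with 120 Ohm in series the peak sits almost 1 dB above the rest of the response. So higher output impedance increases the frequency response variation (and raises, not drops, the level at the peaks).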

Quote:
That's the case for a DT990 with a 250 Ohm nominal impedance and a 350 Ohm impedance peak in the region around 100 Hz.

I don't think so.

Quote:

I think you are just stating what I stated... in a different way.
Basically you are saying:
[dB at 1Vrms] - [dB produced by 1Vrms @ certain Hz] = [dB SPL/mW]

No, I am not. I was posting exactly what I meant.

Quote:
I'm seeing 2 scenarios here:
If [dB at 1Vrms] is constant, then as impedance peaks at certain frequencies, [dB SPL/mW] increases along with those peaks. In which case, I can see how efficiency increases at resonant frequency.
If [dB SPL/mW] is constant, then as impedance peaks at certain frequencies, [dB at 1Vrms] actually drops along with [dB produced by 1Vrms @ certain frequency].
Why? It's easy to see 1000/350 would be smaller than 1000/250.
So in that case, efficiency actually drops, and higher output impedance actually helps reduce volume variations between frequencies.
Or is there something else I'm missing?

Neither of those values is constant.

No, efficiency doesn't drop and no, higher output impedance does not help.

Please think this through again and get the basics right next time.

Quote:
Originally Posted by xnor

I am explaining this one last time. Impedance peaks do not cause voltage to drop. If anything, they cause voltage to drop less if you have an amplifier with a high output impedance. There is no discrepancy between theory and reality.

Power is a function of voltage multiplied by current, and current is a function of voltage divided by resistance. Same voltage but higher impedance means less current and therefore less power (needed).

Differences in dB SPL / frequency increase with higher output impedance, as does distortion.

Please think this through again and get the basics right next time.

Sorry. I had it backwards there. You are right: impedance peaks don't drop voltage. I was thinking the nominal impedance would drop voltage relative to the impedance peak, but somehow I wrote it the other way around.

Let's make this short: does sensitivity increase at resonant frequencies?

And I'm not trying to rebut, state, or do anything of the sort if that's what you are thinking. I am trying to understand the underlying issue. So basically, I'm asking for the "basic" from you. If that's too much to ask, perhaps you can point out some articles for me to look at to get the same information.

Quote:
Originally Posted by stv014

You do not see that from that formula. It can be seen from the frequency response graphs, which are usually created using a low impedance amplifier (<2 Ω in Tyll Hertsens' setup) that can be considered a voltage source. Since you usually do not see a drop in the frequency response at the impedance peak(s), the efficiency has to be higher there. It also makes sense that resonance increases efficiency.

I know it shows in the graph, that's why I know something is missing with the formula.

If efficiency is higher at those peaks, then I think the rated sensitivity (dB SPL/mW) that manufacturers put on spec sheets is not the whole story, as that value should vary with efficiency across frequency. I just need a confirmation at this point.

Edited by Bill-P - 10/11/12 at 9:48am

Sensitivity defined as dB SPL @ 1 Vrms, x Hz is basically* shown in the frequency response graphs. So it is different for every headphone.

*) Actual voltage used for the measurement may be a lot lower, but that doesn't really matter. What's important is that the same voltage is used for all frequencies.

Quote:
Originally Posted by xnor

Sensitivity defined as dB SPL @ 1 Vrms, x Hz is basically* shown in the frequency response graphs. So it is different for every headphone.

*) Actual voltage used for the measurement may be a lot lower, but that doesn't really matter. What's important is that the same voltage is used for all frequencies.

So is it safe to assume sensitivity defined as dB SPL @ 1 mW varies depending on the frequency?

Yup. The sensitivity and impedance provided by manufacturers are taken at a single frequency. It is safe to assume that both are different at other frequencies.

Edited by xnor - 10/11/12 at 10:06am
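As a minimal sketch of the conversion formula quoted earlier in the thread, here is the voltage rating turned into a per-milliwatt rating at two points on an impedance curve. The 110 dB @ 1 Vrms figure and the two impedance values are made up for illustration.

```python
import math

def spl_at_1mw(spl_at_1vrms, impedance_ohm):
    """Convert dB SPL @ 1 Vrms to dB SPL @ 1 mW at the same frequency.

    1 Vrms into Z ohms delivers 1000/Z mW, i.e. 10*log10(1000/Z) dB
    re 1 mW, so that headroom is subtracted from the voltage rating.
    """
    return spl_at_1vrms - 10 * math.log10(1000 / impedance_ohm)

# Same voltage sensitivity, evaluated at two points on the impedance curve:
print(round(spl_at_1mw(110.0, 250.0), 1))  # 104.0 (at the 250 ohm nominal)
print(round(spl_at_1mw(110.0, 350.0), 1))  # 105.4 (at a 350 ohm peak)
```

Note how the per-milliwatt sensitivity comes out higher at the impedance peak even though the voltage sensitivity is the same, which is the efficiency increase discussed above.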

Usually sensitivity and impedance are given at 1 kHz.  Maybe it would make more sense to do some sort of averaging over the frequency range.

doctorhead.ru also shows averages (though different people would have different ideas about how to do that) in their FR and impedance graphs.

Thanks. That makes more sense.

Here's the last question I have: would the Fiio E11 be adequate for driving 250 Ohm headphones? Specifically, let's say it's the DT990 Pro that I'm concerned about.
Quote:
Originally Posted by Bill-P

Thanks. That makes more sense.
Here's the last question I have: would the Fiio E11 be adequate for driving 250 Ohm headphones? Specifically, let's say it's the DT990 Pro that I'm concerned about.

It depends on how loud you want to listen and what your expectations are (some people expect amps to add some kind of coloration, EQ settings, or maybe crossfeed). As always, you will have people saying, "no, it is not okay" and so on.

As far as I can tell, if you're not expecting any coloration from the amp, the E11 form factor and functionality is okay with you, and you are getting enough volume, I don't see any real evidence or believable (to me) theory that says that it would be inadequate.  Actually, it should do quite well.
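One way to sanity-check "enough volume" is to work out the voltage needed for a target peak level. The sketch below assumes commonly quoted DT990 Pro figures (96 dB SPL/mW, 250 Ohm) and a 110 dB SPL peak target; substitute the real numbers for your own unit and listening level.

```python
import math

def vrms_for_spl(target_db_spl, sens_db_per_mw, z_ohm):
    """Rms voltage needed for a target SPL, given dB SPL/mW sensitivity."""
    power_mw = 10 ** ((target_db_spl - sens_db_per_mw) / 10)  # mW required
    return math.sqrt(power_mw / 1000 * z_ohm)                 # V = sqrt(P * Z)

# 110 dB SPL peaks on an assumed 96 dB/mW, 250 ohm headphone:
print(f"{vrms_for_spl(110, 96, 250):.2f} Vrms needed")  # 2.51 Vrms needed
```

Whether a given portable amp can swing that kind of voltage cleanly into 250 Ohm is exactly the spec worth checking against its measurements before buying.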

So volume is an issue?

And according to measurements done by NwAvGuy, the E11 has odd phase characteristics at low frequencies. Wouldn't that distort bass to some extent?

Quote:
Originally Posted by Bill-P
And according to measurements done by NwAvGuy, the E11 has odd phase characteristics at low frequencies. Wouldn't that distort bass to some extent?

It is simply a side effect of the slight low frequency roll-off that is caused by the DC blocking. The E11 has somewhat more roll-off than the O2 - almost -1 dB at 5 Hz, and thus also more phase shift (note that the graphs are scaled differently and visually exaggerate the difference). Although it does seem to be higher than what it should be for a simple 6 dB/octave highpass filter, maybe there are multiple DC blocking capacitors, while the O2 only contains one per channel. However, it is still not significant enough to be much of an issue. It also does not depend on the impedance of the headphones driven, by the way.

Edited by stv014 - 10/12/12 at 1:04pm
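For comparison with the ideal case, the gain and phase of a single-pole (6 dB/octave) highpass are easy to compute. The corner frequency below is an assumption, chosen so the filter is about -1 dB at 5 Hz to match the roll-off mentioned above; it is not a measured E11 value.

```python
import math

def highpass_gain_db(f, fc):
    """Magnitude of a 1st-order (6 dB/oct) RC highpass at frequency f."""
    return -10 * math.log10(1 + (fc / f) ** 2)

def highpass_phase_deg(f, fc):
    """Phase lead of the same 1st-order highpass, in degrees."""
    return math.degrees(math.atan(fc / f))

fc = 2.54  # assumed corner: gives roughly -1 dB at 5 Hz
for f in (5, 20, 100):
    print(f"{f:>4} Hz: {highpass_gain_db(f, fc):+.2f} dB, "
          f"{highpass_phase_deg(f, fc):5.1f} deg lead")
```

Even at 20 Hz the ideal single-pole filter is only a few degrees of lead and well under 0.1 dB down, which supports the point that the measured shift, although somewhat larger, is unlikely to be audible.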

No, I don't think it depends on the impedance of the headphones driven, but I'd think it would cause some differences to be heard compared to O2.

Also, the FiiO E11 has a low/high power switch in the battery compartment that actually matters. In low power mode, the volume dial would not get past 3 before audible distortion could be heard on the DT990 Pro 250 Ohm. High power does better... though not by much.

Problem is... the amp ships set to "low power" by default, and if a user has no reason to fiddle around with the battery compartment, they won't know about the switch.

Quote:
Originally Posted by Bill-P

No, I don't think it depends on the impedance of the headphones driven, but I'd think it would cause some differences to be heard compared to O2.

Also, the FiiO E11 has a low/high power switch in the battery compartment that actually matters. In low power mode, the volume dial would not get past 3 before audible distortion could be heard on the DT990 Pro 250 Ohm. High power does better... though not by much.

Problem is... the amp ships set to "low power" by default, and if a user has no reason to fiddle around with the battery compartment, they won't know about the switch.

I think you're overestimating people's sensitivity to small, gradual phase shifts at the extremes of hearing, though maybe this is less absurd than fraction-of-a-dB differences in subsonics. I know people may have legitimate complaints with crossover circuits, but do you know what those are doing to the phase response, and at which frequencies? That's not even to mention systems with only one subwoofer instead of stereo subwoofers.

The E11 low power mode reduces the power supply rails, which generally should decrease the performance of the electronics a little and definitely makes the clipping point lower (which is what you notice on those headphones when turning the volume up).  Anyway, I agree about the obscurity of the setting, but that's not exactly relevant now, because you know it exists.
