OK, I'm finding it difficult to entirely understand the effects of source output impedance and headphone impedance on the frequency curve.
In essence I would just like to know enough about impedance matching to help me be competent at buying audio gear.
What I understand is that, in short, a larger source output impedance will generally cause a reduction in volume in headphones. So with a pair of headphones with a normal, non-flat impedance curve, the frequency regions with higher impedance would generally be louder than the regions with lower impedance, and the effect becomes significant when the output impedance is larger than 1/8 of the headphone impedance. For example, with a 40-ohm output impedance and 40-ohm headphones that have an impedance hump of 100 ohms at 80 Hz, I would hear a noticeable and presumably undesirable dB boost in the mid-bass.
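To check my own math here, a quick sketch of the voltage-divider model as I understand it (the 40-ohm and 100-ohm figures are just my hypothetical example from above):

```python
import math

# Fraction of the source voltage that reaches the headphone:
# Z_load / (Z_out + Z_load), expressed here in dB.
def level_db(z_load, z_out):
    return 20 * math.log10(z_load / (z_out + z_load))

z_out = 40.0                     # source output impedance (ohms)
nominal = level_db(40.0, z_out)  # at the 40-ohm nominal impedance
hump = level_db(100.0, z_out)    # at the 100-ohm hump around 80 Hz
print(f"mid-bass boost at the hump: {hump - nominal:.1f} dB")
```

If I've done that right, the 80 Hz hump ends up roughly 3 dB louder than the rest of the curve.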
Something I don't get is this: with portable players, isn't the usual concern that higher-impedance headphones are quieter? In other words, it looks as if, to some extent or at some point, a higher-impedance pair of headphones would produce lower volume than a lower-impedance one, which is the opposite of the situation above. But impedance matching still seems important, so that big swings in impedance don't distort the overall frequency curve. The higher the impedance of your headphones, the lower the volume. To what extent does this matter? Is it more or less of a concern if you follow the 1/8 rule?
So how do you know whether a pair of earphones or headphones is appropriate for a certain source? It seems you'd need to know a lot about the source's capabilities: its output impedance, plus its power output into various impedances. As an example, the Sansa Clip+ portable player can put out 16 mW into 16 ohms but only 0.8 mW into 300 ohms, and it has an output impedance of 1.1 ohms. If I had a pair of earphones with 1.1 ohms impedance at 20 Hz, 8 ohms at 80 Hz, and 300 ohms at 10 kHz, how exactly would the frequency curve respond? If everything I know is right, at 20 Hz there'd be a dip in volume. At 80 Hz the volume would be fine, and it would stay fine and perhaps keep climbing on the way to 10 kHz, at which point the Clip+'s lower power output would start to pull the volume back down?
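Running my hypothetical earphone through the same divider math (the impedance-vs-frequency points are made up for illustration; 1.1 ohms is the Clip+ output impedance quoted above):

```python
import math

def level_db(z_load, z_out):
    """Voltage level at the headphone, in dB relative to the source voltage."""
    return 20 * math.log10(z_load / (z_out + z_load))

z_out = 1.1  # Clip+ output impedance (ohms)
for freq_hz, z in [(20, 1.1), (80, 8.0), (10_000, 300.0)]:
    print(f"{freq_hz:>6} Hz @ {z:>5.1f} ohms: {level_db(z, z_out):+.2f} dB")
```

By that model the 20 Hz point sits about 6 dB down while the 10 kHz point is essentially untouched, which matches my guess about the dip, but it doesn't tell me where (or whether) the lower power output at 300 ohms kicks in.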
It's confusing, because it looks to me like there are two conflicting mechanisms: one raises volume as you raise headphone impedance, and the other does the opposite, and it's not immediately obvious how to figure out which earphones are appropriate for a source. I'm looking at getting a Sansa Clip+ to pair with my Etymotic HF5, which happens to be terribly matched with my current mp3 player (which doesn't have a line-out, so it shouldn't be amped). The HF5 is rated at 16 ohms impedance, but the curve actually climbs to 90 ohms and above as the frequency rises, as is typical of a balanced-armature IEM. That's a huge difference. According to what I know, the 1.1-ohm output impedance of the Clip+ means that for a while during the climb to 90 ohms, the volume will rise marginally with frequency. And then I suppose at some point the lower power output at higher impedances will start to pull volumes down as the frequency climbs; is that right?
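Putting numbers on the HF5 case (the 16-to-90-ohm swing and the 1.1-ohm Clip+ output impedance are the figures from above):

```python
import math

def level_db(z_load, z_out):
    # Voltage-divider level at the headphone, in dB relative to source voltage.
    return 20 * math.log10(z_load / (z_out + z_load))

z_out = 1.1  # Clip+ output impedance (ohms)
swing = level_db(90.0, z_out) - level_db(16.0, z_out)
print(f"level variation across the 16-90 ohm swing: {swing:.2f} dB")
```

If that's right, the whole 16-to-90-ohm swing amounts to only about half a dB, so "marginally" really does seem to be the operative word.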
I also happen to have a PC source for which I've got mismatched headphones. The source has a 40-ohm output impedance, so presumably I want headphones of 320 ohms or maybe even 600 ohms. Measurements show that as load resistance rises, the amp's output voltage keeps rising until it reaches 5.4 V at 600 ohms (41 mW), which is twice what's needed to hit 105 dB with headphones like the 600-ohm DT880. Those headphones reach an impedance peak of over 700 ohms, though. Would it be safe to assume that the PC amp could fully power the DT880? I have no idea what its output is like at 700+ ohms, but up to 600 ohms the voltage kept rising, and if it's twice as powerful as necessary to deafen you at 600 ohms, then it seems likely to be powerful enough, right? Or is that too strong an assumption? Would I be better off playing it safe and getting 300-ohm headphones like the HD650 or HD600, even though they don't quite satisfy the 1/8 rule with this source? Let's assume I have no opportunity to test these headphones, and can only choose based on others' descriptions and their impedance.
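Here's the extrapolation I'm implicitly making, written out. Treating the amp as a fixed voltage behind its 40-ohm output impedance is my assumption, and the open-circuit voltage is inferred from the 5.4 V reading at 600 ohms:

```python
z_out = 40.0                        # measured output impedance (ohms)
v_open = 5.4 * (600 + z_out) / 600  # inferred open-circuit voltage, ~5.76 V

def power_mw(z_load):
    v_load = v_open * z_load / (z_out + z_load)  # voltage divider
    return 1000 * v_load ** 2 / z_load           # P = V^2 / R, in mW

print(f"estimated power into 700 ohms: {power_mw(700):.1f} mW")
```

If the amp really does behave that way, power only drops slightly between 600 and 700 ohms, which would support the "probably powerful enough" guess; I'd just like to know whether that's a safe way to extrapolate.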
Sorry for the long post, and thanks - I appreciate any help.