Building a Headphone Measurement Lab
Jun 25, 2010 at 3:18 PM Post #316 of 355

 
Quote:
it should be possible to EQ any reasonable LTI response with today's DSP horsepower - but not necessarily with a simple "graphic equalizer"
 
right up to the limits of the Linear and Time Invariant assumptions
 
the Smyth SVS Realiser does it with in-ear microphones
 
of course even the same headphone can be audibly different depending on its positioning on your head


Quote:
As you say, EQ can only go so far.  You cannot EQ slowness away.  You cannot EQ resonant/ringing modes away completely.  You cannot easily EQ phase coherence when it's not there to begin with.  (You can to a degree with good DSP, but if you know enough to do that you also know you need better headphones.)  You can't make a silk purse out of a sow's ear.  Pigskin is still pigskin no matter what color you paint it. 
 
Headphones do NOT sound the same and cannot be made to sound the same when they're in a different grade of performance.

 
Quote:
How can you be so sure? Have you tried it? I guess not, because IR convolution works fairly well... but you're right that "You can't make a silk purse out of a sow's ear."


I have the Smyth Research Realiser, and have tried doing HPEQs with different headphones, and I'm sorry to say that you can't make up for the shortcomings of different headphones.  In fact, the Realiser scales well with better headphones. 
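For reference, the IR convolution mentioned above is easy to experiment with in software. A minimal sketch, assuming you already have a measured mono correction impulse response saved as a WAV (the file names here are hypothetical):
 
Code:
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, audio = wavfile.read("music.wav")        # hypothetical stereo track, shape (N, 2)
_, ir = wavfile.read("hpeq_ir.wav")          # hypothetical mono correction IR

audio = audio.astype(np.float64)
ir = ir.astype(np.float64)

# Convolve each channel with the correction filter, then normalize
# so the result can be written out without clipping.
out = np.column_stack([fftconvolve(audio[:, ch], ir) for ch in (0, 1)])
out /= np.abs(out).max()
wavfile.write("music_eq.wav", fs, out.astype(np.float32))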
 
-Ed
 
Jun 25, 2010 at 3:22 PM Post #317 of 355
SP Wild, I'm dismissing your ideas because they don't seem to be well thought-out and are just guesses, in my eyes.
 
You do realize that the left headphone driver radiates primarily to the left ear and the right one primarily to the right? You do know that the wavelength of a 100 Hz sound wave (many headphones' resonant frequency is close to this; see the impedance curve) is 3.4 m, or over 11 feet? You do realize that if you hold your headphones 30 cm away from your ears, the bass will be gone?
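(For anyone who wants to check the arithmetic, it's just wavelength = speed of sound / frequency:)
 
Code:
c = 343.0            # speed of sound in air at ~20 C, m/s
f = 100.0            # frequency, Hz
wavelength = c / f   # = 3.43 m
print(f"{wavelength:.2f} m = {wavelength * 3.281:.1f} ft")   # 3.43 m = 11.3 ft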
 
Feel free to explain how that will work with your idea that "leaked sound will reinforce or negate phasing in the other channel".
 
And what does "where the soundstage maxes out in its frequency response" even mean?
 
Regarding 4 and 6: don't be shocked, but a single driver alone, without any kind of housing, still has a resonant frequency.
 
 
I'm open to weird ideas, but this is too much. You might want to read up on some basics.
 
Jun 25, 2010 at 3:38 PM Post #318 of 355
When I speak of a "resonant frequency" I mean the cup and frame itself - not the driver.  The sound leakage of open headphones is really obvious: if you had a pair on turned down, and someone immediately next to you was listening to the same open headphone turned up, you would still hear it - and it is this residual sound that mixes with the driver output.  If you've never heard the "soundstage" of a headphone then you can forget about what I am trying to convey. 
 
Jun 25, 2010 at 4:15 PM Post #319 of 355
Quote:
When I speak of a "resonant frequency" I mean the cup and frame itself - not the driver.  The sound leakage of open headphones is really obvious: if you had a pair on turned down, and someone immediately next to you was listening to the same open headphone turned up, you would still hear it - and it is this residual sound that mixes with the driver output.  If you've never heard the "soundstage" of a headphone then you can forget about what I am trying to convey. 


So the cups and frame of your headphone are vibrating? That must be a bass monster!
I'm sorry, but there's a definition of "headphone resonant frequency" and it certainly is not what you're talking about.
 
I know that sound can leak from one channel into the other, but come on, you talked about negation of phasing.
And I didn't ask what "soundstage" is; read my post(s) again and please don't ignore my questions.
 
Jun 25, 2010 at 4:28 PM Post #320 of 355

 
Quote:
So the cups and frame of your headphone are vibrating? That must be a bass monster!
 
The D7000 uses this concept to develop its bass, and they are not light in bass!
 
I'm sorry, but there's a definition of "headphone resonant frequency" and it certainly is not what you're talking about.
 
Cup resonance is probably a better term.
 
I know that sound can leak from one channel into the other, but come on, you talked about negation of phasing.
 
The sound signal in music is mostly mono, with slight variation per driver to create stereo. If the same sound arrives a little later and a little lower in level, will it not interact with the wave energy, no matter how minutely?
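For scale, a minimal sketch (the leakage level and delay are illustrative guesses, not measured from any real headphone) of the comb-filter ripple you get when a delayed, attenuated copy sums with the original:
 
Code:
import numpy as np

delay_s = 0.0003   # assumed 0.3 ms leakage delay (illustrative)
gain = 0.1         # leaked copy assumed ~20 dB down (illustrative)

f = np.linspace(20, 20000, 2000)
# |H(f)| for H(f) = 1 + gain * exp(-j * 2*pi*f * delay)
mag_db = 20 * np.log10(np.abs(1 + gain * np.exp(-2j * np.pi * f * delay_s)))
print(f"ripple: {mag_db.max() - mag_db.min():.2f} dB peak-to-trough")   # ~1.7 dB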
 
And I didn't ask what "soundstage" is; read my post(s) again and please don't ignore my questions.
 
Where the cup resonates the most - it is at this frequency that the widest soundstaging occurs in a headphone.  The HD650 has its widest soundstage in the lower mids, the K701 is widest in the upper mids, and the HD595 is widest in the center mids.  To me, these same frequencies carry a resonant character (a shade of reverb) that correlates with the tone generated by gently tapping the cups or frame with your finger while wearing the can.  I also believe the driver is at its most efficient in this resonant zone.  The cup resonance is, in fact, the single most important factor in determining sonic signature.


 
 
Jun 25, 2010 at 4:54 PM Post #321 of 355


Quote:
Where the cup resonates the most - it is at this frequency that the widest soundstaging occurs in a headphone.  The HD650 has its widest soundstage in the lower mids, the K701 is widest in the upper mids, and the HD595 is widest in the center mids.  To me, these same frequencies carry a resonant character (a shade of reverb) that correlates with the tone generated by gently tapping the cups or frame with your finger while wearing the can.  I also believe the driver is at its most efficient in this resonant zone.  The cup resonance is, in fact, the single most important factor in determining sonic signature.
 
Did you make most of this stuff up? I agree with xnor on this one, unless you can find some empirical evidence to back up your hypotheses.
 
Jun 25, 2010 at 4:56 PM Post #322 of 355
Hold on fellas, I need to inject a little reality for a moment here...
 
Real soundstage is captured in the original recording, assuming that the recording was intended to preserve it.  Anything else that you may choose to describe as *soundstage* is nothing more than a psychoacoustic parlor trick.  Various implementations of electronic crossfeed (as implemented by the likes of Headroom and Meier Audio, etc.) may or may not enhance the crossfeed captured in the original recording, depending upon the original microphone placement.  In order for your reproduction chain to accurately reproduce the original soundstage of the recording, everything must be phase coherent across the spectrum.  True soundstage retrieval depends entirely upon the faithfulness of your reproduction chain.  Soundstage retrieval has nothing to do with the earcup construction of the headphone, open or closed.  If, however, the headphone is hindered in any way from faithfully reproducing frequency, phase, and time alignment, then soundstage will be concomitantly reduced.
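As an aside, electronic crossfeed of the kind mentioned above is conceptually simple. A minimal sketch (parameter values are illustrative only; this is not Headroom's or Meier's actual circuit): each channel is attenuated, low-passed, and delayed before being mixed into the opposite side, roughly mimicking sound wrapping around the head:
 
Code:
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs=44100, atten_db=-9.0, cutoff_hz=700.0, delay_ms=0.3):
    """Return (left_out, right_out) with basic crossfeed applied."""
    b, a = butter(1, cutoff_hz / (fs / 2))    # 1st-order low-pass
    gain = 10 ** (atten_db / 20)
    n = int(round(fs * delay_ms / 1000))
    def leak(x):
        y = gain * lfilter(b, a, x)           # attenuate and low-pass the leak
        return np.concatenate([np.zeros(n), y[:-n] if n else y])
    return left + leak(right), right + leak(left)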
 
Jun 25, 2010 at 5:07 PM Post #323 of 355
^ That makes a lot of sense to me ^

 
Quote:
Did you make most of this stuff up? I agree with xnor on this one, unless you can find some empirical evidence to back up your hypotheses.


It's not hard to get this evidence.  Using Tyll's head, play something in mono in one channel and turn it up.  Measure the sound bleeding into the other cup via the dummy head's other-channel microphone.  The level difference is the inherent crossfeed.
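If anyone runs that experiment, the number you want is easy to compute from the two microphone captures. A minimal sketch (the argument names are hypothetical; any two synchronized recordings will do):
 
Code:
import numpy as np

def bleed_db(driven_ear, opposite_ear):
    """Level of the leaked signal relative to the driven cup, in dB (negative)."""
    rms = lambda x: np.sqrt(np.mean(np.square(np.asarray(x, dtype=float))))
    return 20 * np.log10(rms(opposite_ear) / rms(driven_ear))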
 
Jun 25, 2010 at 7:45 PM Post #324 of 355
When I listen to music in foobar and look at the visualization, the top end (treble) of the "dynamic chart" (for lack of a better term) is already knocked down quite a few dB from the bass/mids. This is on music that doesn't have the greatest recordings, like metal and some rock. I imagine the better recordings would have had even more effort put into this.
 
I can understand that these frequency test studies, like HRTF studies, were most likely done with test signals all at the same power at different frequencies, which is why the HeadRoom "best graph" statement says a smooth 8-10 dB drop from 1 kHz up.
 
I wonder why everyone still thinks this is how a headphone graph should look with real music, considering this equalization seems to have been done in the mastering stages. I'm starting to think most of the Head-Fi "audiophiles" are really just bassheads that either 1) can't accept it or 2) just don't know they are, due to what people think a graph should look like.  
 
It would be nice if someone would explain this, because it seems to be a simple thing that has eluded everyone so far. Anyone who listens to music from a PC/Mac, please set the visualization to the frequency chart and see if your music does the same thing. Foobar2000 has the frequencies on the bottom and 0 to -60 dB in 10 dB steps on the right. It also just reads the signal of the music; adjusting the output volume doesn't affect it.
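If you want to check this outside a visualizer, here is a minimal sketch (the file name is hypothetical) that computes a track's long-term spectrum; real music typically shows exactly the treble roll-off described above:
 
Code:
import numpy as np
from scipy.io import wavfile

fs, data = wavfile.read("some_track.wav")             # hypothetical file name
mono = data.mean(axis=1) if data.ndim == 2 else data.astype(float)
spectrum = np.abs(np.fft.rfft(mono * np.hanning(len(mono))))
freqs = np.fft.rfftfreq(len(mono), 1 / fs)
level_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
for f in (100, 1000, 5000, 10000):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f:>6} Hz: {level_db[i]:6.1f} dB re peak")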
 
Jun 25, 2010 at 9:48 PM Post #325 of 355
My Sony MDR-V6s are completely closed, with very little cross-cup bleed. Unfortunately, due to limitations in amplifier design (from what I've read) there is a significant amount of crosstalk. I can mute the right channel and still have volume in that ear, from the other channel. This is not audio bleeding from one side to the other; it is quite clearly being generated by that transducer, due to problems with grounding.
 
Take from it what you will, but these cans are also well known for having negligible "soundstage" (see: parlor trick).
Quote:
^ That makes a lot of sense to me ^

 

It's not hard to get this evidence.  Using Tyll's head, play something in mono in one channel and turn it up.  Measure the sound bleeding into the other cup via the dummy head's other-channel microphone.  The level difference is the inherent crossfeed.

 
Jun 26, 2010 at 12:26 AM Post #326 of 355
I didn't quite understand everything you said; you seem to have mistaken one notion for another at times.
 
But let's talk about speakers for a moment. Their frequency response is supposed to be a flat line from 20 to 20,000 Hz, which means that if they are fed input signals of varying frequency at the same level (voltage), they will measure at the same volume on a dB meter. It has nothing to do with the frequency distribution of the music itself; it merely ensures that the frequency distribution in the music will not be changed by the speaker. If the music is bass heavy, the speaker will reproduce a bass-heavy sound; if it has a lot of treble, it will reproduce a lot of treble.
 
In addition to that, the sound engineers that mixed the music probably mixed it on speakers. Almost all studio monitors have an extremely flat frequency response curve (whether they are musical or not is another question), and all the equalizing was done using those monitors as a reference. Thus, if you want to hear what the sound engineers intended, you'd have to buy speakers with a flat FR (frequency response).
 
However, higher frequencies are attenuated faster when propagating in air or in most other media (that's why you only hear the bass when your neighbors play music). A good sound engineer mixes the music taking that parameter into account; that is to say, if the microphones are placed near the instrument but the listener would be in the middle of a concert hall, the high frequencies should be attenuated according to that distance. The final product, i.e. the music sold, would then be realistic as far as frequency distribution is concerned when listened to on a speaker with a flat FR at the same distance the sound engineer used (in real life a good sound engineer mixing "audiophile" music would master as if using a speaker with a flat FR at 2-3 m).
 
Knowing that, we can move on to the FR of headphones. With the transducers so close to the ear, high frequencies aren't attenuated by the time they arrive at the ear drum; that's why the FR curve of headphones should not be flat - higher frequencies should be reproduced at a lower volume to mimic the effect of air attenuation. Even so, the added high frequencies of headphones with a flatter FR make them sound more detailed than some speakers.
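To put rough numbers on the air-attenuation idea (the per-meter losses below are crude illustrative guesses, not measured absorption data):
 
Code:
# Toy estimate of high-frequency loss over listening distance.
freqs_hz = [1000, 4000, 10000, 16000]
loss_db_per_m = [0.005, 0.03, 0.15, 0.4]    # assumed air absorption, dB/m
distance_m = 2.5
for f, loss in zip(freqs_hz, loss_db_per_m):
    print(f"{f:>6} Hz: -{loss * distance_m:.2f} dB at {distance_m} m")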
 
I hope this answers your questions.
 
*The lack of content in the high frequencies is partly due to distance attenuation and partly due to the fact that instruments don't naturally produce those frequencies; the highest sung note is an F6, whose fundamental is about 1397 Hz. For other instruments, refer to this chart:
 
Quote:
When I listen to music in foobar and look at the visualization, the top end (treble) of the "dynamic chart" (for lack of a better term) is already knocked down quite a few dB from the bass/mids. This is on music that doesn't have the greatest recordings, like metal and some rock. I imagine the better recordings would have had even more effort put into this.
 
I can understand that these frequency test studies, like HRTF studies, were most likely done with test signals all at the same power at different frequencies, which is why the HeadRoom "best graph" statement says a smooth 8-10 dB drop from 1 kHz up.
 
I wonder why everyone still thinks this is how a headphone graph should look with real music, considering this equalization seems to have been done in the mastering stages. I'm starting to think most of the Head-Fi "audiophiles" are really just bassheads that either 1) can't accept it or 2) just don't know they are, due to what people think a graph should look like.  
 
It would be nice if someone would explain this, because it seems to be a simple thing that has eluded everyone so far. Anyone who listens to music from a PC/Mac, please set the visualization to the frequency chart and see if your music does the same thing. Foobar2000 has the frequencies on the bottom and 0 to -60 dB in 10 dB steps on the right. It also just reads the signal of the music; adjusting the output volume doesn't affect it.



 
Jun 26, 2010 at 12:37 AM Post #327 of 355


Quote:
I didn't quite understand everything you said; you seem to have mistaken one notion for another at times.
 
But let's talk about speakers for a moment. Their frequency response is supposed to be a flat line from 20 to 20,000 Hz, which means that if they are fed input signals of varying frequency at the same level (voltage), they will measure at the same volume on a dB meter. It has nothing to do with the frequency distribution of the music itself; it merely ensures that the frequency distribution in the music will not be changed by the speaker. If the music is bass heavy, the speaker will reproduce a bass-heavy sound; if it has a lot of treble, it will reproduce a lot of treble.
 
In addition to that, the sound engineers that mixed the music probably mixed it on speakers. Almost all studio monitors have an extremely flat frequency response curve (whether they are musical or not is another question), and all the equalizing was done using those monitors as a reference. Thus, if you want to hear what the sound engineers intended, you'd have to buy speakers with a flat FR (frequency response).
 
However, higher frequencies are attenuated faster when propagating in air or in most other media (that's why you only hear the bass when your neighbors play music). A good sound engineer mixes the music taking that parameter into account; that is to say, if the microphones are placed near the instrument but the listener would be in the middle of a concert hall, the high frequencies should be attenuated according to that distance. The final product, i.e. the music sold, would then be realistic as far as frequency distribution is concerned when listened to on a speaker with a flat FR at the same distance the sound engineer used (in real life a good sound engineer mixing "audiophile" music would master as if using a speaker with a flat FR at 2-3 m).
 
Knowing that, we can move on to the FR of headphones. With the transducers so close to the ear, high frequencies aren't attenuated by the time they arrive at the ear drum; that's why the FR curve of headphones should not be flat - higher frequencies should be reproduced at a lower volume to mimic the effect of air attenuation. Even so, the added high frequencies of headphones with a flatter FR make them sound more detailed than some speakers.
 
I hope this answers your questions.

Unfortunately, many typical run-of-the-mill studios, and even some really good ones, use the ubiquitous Yamaha NS-10 near-field monitor for mixdowns, whose midband frequency response totally sucks.  It is terribly colored.  The best studios are now starting to pay attention to high-accuracy monitoring cans and use them in conjunction with speakers to arrive at a good mix, along with replacing the NS-10s with something more accurate (almost anything is more accurate).
 
 
Jun 26, 2010 at 12:54 AM Post #328 of 355
That's why I said a "good" sound engineer: one who, if they do not have an accurate monitor, is familiar with the faults of what they are mastering on and compensates for them. That's the reason, I guess, for euphonic speakers or headphones that compensate for bad decisions at the mixing/mastering stage.
 
Quote:
Unfortunately, many typical run-of-the-mill studios, and even some really good ones, use the ubiquitous Yamaha NS-10 near-field monitor for mixdowns, whose midband frequency response totally sucks.  It is terribly colored.  The best studios are now starting to pay attention to high-accuracy monitoring cans and use them in conjunction with speakers to arrive at a good mix, along with replacing the NS-10s with something more accurate (almost anything is more accurate).

 
Jun 26, 2010 at 12:58 AM Post #329 of 355


Quote:
That's why I said a "good" sound engineer: one who, if they do not have an accurate monitor, is familiar with the faults of what they are mastering on and compensates for them. That's the reason, I guess, for euphonic speakers or headphones that compensate for bad decisions at the mixing/mastering stage.
 

Yeah, sometimes that helps...  

 
 
Jun 26, 2010 at 1:05 AM Post #330 of 355

 
Quote:
....
However, higher frequencies are attenuated faster when propagating in air or in most other media (that's why you only hear the bass when your neighbors play music). A good sound engineer mixes the music taking that parameter into account; that is to say, if the microphones are placed near the instrument but the listener would be in the middle of a concert hall, the high frequencies should be attenuated according to that distance. The final product, i.e. the music sold, would then be realistic as far as frequency distribution is concerned when listened to on a speaker with a flat FR at the same distance the sound engineer used (in real life a good sound engineer mixing "audiophile" music would master as if using a speaker with a flat FR at 2-3 m).
 
Knowing that, we can move on to the FR of headphones. With the transducers so close to the ear, high frequencies aren't attenuated by the time they arrive at the ear drum; that's why the FR curve of headphones should not be flat - higher frequencies should be reproduced at a lower volume to mimic the effect of air attenuation. Even so, the added high frequencies of headphones with a flatter FR make them sound more detailed than some speakers.
 
I hope this answers your questions.
 
*The lack of content in the high frequencies is partly due to distance attenuation and partly due to the fact that instruments don't naturally produce those frequencies; the highest sung note is an F6, whose fundamental is about 1397 Hz. For other instruments, refer to this chart:

Besides free-field HF attenuation, there are many other factors that proper headphone voicing compensates for, including the acoustic "shadow" created by one's pinna.
 
 
