crinacle's IEM FR measurement database
Apr 23, 2018 at 11:19 AM Post #797 of 1,335
I much prefer the look/format of your new graphs :) What's the new software you're using?

I hate to keep banging on this same old point, but why should we ignore the 8 kHz peak? If your distances are anything like those in an actual ear canal, it'll be there. And if not, it'll still be there, but at a slightly different frequency. What am I missing?

Consider it a kind of systematic error, I guess. For instance, if every single IEM you hear has an 8 kHz peak, then said peak really doesn't matter because you always hear it. If your ears have a resonance point of 10 kHz, you'll always hear a 10 kHz peak. And so it doesn't matter: since the peak is always there, it just becomes the norm.

How do you arrive at an 8 kHz resonant peak based on a 3mm separation?
Curious about the physics here...

It's not based on theory but on the data that I've collected. A certain distance from the reference plane results in a certain resonant peak. I don't have the exact specifics, but the average insertion depth on my coupler will usually create an 8 kHz peak. This will change with insertion depth, and I'm certain I'll get a different resonant peak with something like, say, an Etymotic ER4 with triple flanges inserted deeply into the coupler.
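For anyone curious about the arithmetic behind that: treating the residual length between the eartip and the mic as a tube effectively closed at both ends gives a half-wave resonance of f = c / (2L). This is only a sketch under that assumption; the poster's 8 kHz figure is empirical, and the real boundary conditions are messier. But it does show why deeper insertion (a shorter residual length) pushes the peak up in frequency.

```python
# Hedged sketch: estimate the first standing-wave resonance of the residual
# canal/coupler length between eartip and mic. Assumes a tube effectively
# closed at both ends (half-wave resonance, f = c / (2 * L)); the real
# boundary conditions are messier, and the 8 kHz figure above is empirical.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def half_wave_resonance_hz(residual_length_mm: float) -> float:
    """First half-wave resonance of a tube closed at both ends."""
    length_m = residual_length_mm / 1000.0
    return SPEED_OF_SOUND / (2.0 * length_m)

# Deeper insertion -> shorter residual length -> higher resonant frequency.
# These lengths are hypothetical, chosen only to bracket the 8 kHz case.
for length_mm in (21.4, 17.0, 12.0):
    print(f"{length_mm:5.1f} mm -> {half_wave_resonance_hz(length_mm):7.0f} Hz")
```

A residual length of about 21 mm lands near the 8 kHz peak under this assumption, which is at least in the right ballpark for a coupler.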
 
Last edited:
Apr 23, 2018 at 1:50 PM Post #800 of 1,335
Hi @crinacle - please forgive me if I've missed this, but are you applying any kind of compensation curve to your new measurements?

Also, do you have links to where you bought your new mic and coupler?

No compensation, it's raw IEC.

It's been a year since I bought the coupler so I'll have to look for the source again. PM me a reminder in a few days.
 
Apr 23, 2018 at 6:43 PM Post #802 of 1,335
No. There can be discrepancies in amplitude and frequency, but if there's a resonance in the coupler, the same thing will happen in your ear canal.



Resonance is just energy trapped or accumulated at a particular wavelength. The length (and volume, if you have transverse modes) plays a role, but the whole point of the coupler is to mimic the length and volume of the average ear canal. Likewise, the materials used (@crinacle's vinyl coupler) will mimic the inside of the ear canal, which also isn't perfectly reflecting. Measurement differences aren't an error. The fact that you'll hear something slightly differently from me doesn't mean your ears have an error; it just means they're slightly different.

There are much larger sources of discrepancy: the compensation curves for the mic and soundcard, the shift to some (arguably debatable) target loudness curve, and the variation in mic sensitivity as a function of frequency.
I've always had some concern about this. The standing wave involved will not be as intense through a curved or less regular shape than the coupler's. Skin is also one of the very best damping materials. I'm not arguing that the resonance is eliminated, since we all hear them; just that it's likely smaller than measured in most cases.

I suspect that if you measure at two sufficiently different insertion depths and the peak shifts in frequency as expected, you could replace it with the lower value from the two graphs and get something closer to right.
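That idea can be sketched very simply: since the spike moves with insertion depth, at any given frequency at most one of the two runs should be sitting on its peak, so a pointwise minimum suppresses it. The toy numbers below are purely illustrative; real data would be dB-vs-frequency sweeps off the coupler.

```python
# Sketch of the two-depth idea above: take the pointwise minimum of two runs
# so the depth-dependent resonance spike (which shifts between runs) drops
# out. Toy values only; real inputs would be dB magnitudes on a shared
# frequency grid.

def suppress_moving_peak(run_a, run_b):
    """Pointwise minimum of two equal-length FR curves (dB values)."""
    return [min(a, b) for a, b in zip(run_a, run_b)]

# Flat 0 dB baselines with a +10 dB spike at different bins per depth.
shallow = [0, 0, 10, 0, 0]   # peak at bin 2 (say, ~8 kHz)
deep    = [0, 0, 0, 10, 0]   # peak shifted up with deeper insertion
print(suppress_moving_peak(shallow, deep))  # -> [0, 0, 0, 0, 0]
```

The catch, of course, is that any genuine feature of the IEM that happens to sit under either peak also gets clipped, so this only gets you "closer to right", as the post says.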
 
Apr 23, 2018 at 7:13 PM Post #803 of 1,335
I've always had some concern about this. The standing wave involved will not be as intense through a curved or less regular shape than the coupler's. Skin is also one of the very best damping materials. I'm not arguing that the resonance is eliminated, since we all hear them; just that it's likely smaller than measured in most cases.

I suspect that if you measure at two sufficiently different insertion depths and the peak shifts in frequency as expected, you could replace it with the lower value from the two graphs and get something closer to right.
For sure. Although the skin covering the bony section of ear canal that's mostly exposed beyond the end of the eartip is very thin. My comment was just about whether these resonance spikes could be considered an 'error', and to that question, I'd say no. But I agree we've got a way to go in coming up with a suitably-accurate hardware model (or software correction) to perfectly match what each of us hear.

On a similar line of thought - I recently did the Smyth A16 realizer demo. It's very impressive, but it still struggles a bit with the imaging directly ahead of you. It's like Heisenberg's uncertainty principle - as soon as you put microphones in the ear canal, you change the response, so the transfer functions aren't perfect. I guess what's really needed is an improvement in mic technology, so we can put very, very tiny mics at the ends of our IEMs...?
 
Apr 24, 2018 at 8:30 AM Post #804 of 1,335
For sure. Although the skin covering the bony section of ear canal that's mostly exposed beyond the end of the eartip is very thin. My comment was just about whether these resonance spikes could be considered an 'error', and to that question, I'd say no. But I agree we've got a way to go in coming up with a suitably-accurate hardware model (or software correction) to perfectly match what each of us hear.

On a similar line of thought - I recently did the Smyth A16 realizer demo. It's very impressive, but it still struggles a bit with the imaging directly ahead of you. It's like Heisenberg's uncertainty principle - as soon as you put microphones in the ear canal, you change the response, so the transfer functions aren't perfect. I guess what's really needed is an improvement in mic technology, so we can put very, very tiny mics at the ends of our IEMs...?
Stuff right in front of you will either work or it won't. It depends on the frequency response for the vertical axis, and on you. Some people basically just rely on their eyes for anything in front, to estimate distance and a big part of the vertical axis. So when they see nothing, or a wall, the brain goes "nope, there's nothing here, let's put it in the head because that makes sense, on the nose for the lolz". As the eyes are clearly our dominant sense, it's not uncommon at all for everything to go to crap if they don't get the right cues.
But for other people, so long as the rest of the 'image' seems right, they will deduce the center to go with it and be bamboozled by the realism (I wonder if that can be trained by listening to music in the dark, or something like that?).
Of course, a small part of this could have to do with your own ear canal being significantly different from whatever they have modeled for the A16, but it wouldn't be my first guess.

In my case, playing basic music, no 3D DSP or anything: if I set up an IEM's response to be close to what I get from my speakers at 30°, which feels pretty flat to me and I like that, I end up with the center image on my forehead (and I don't like that...). With a different FR the mono sound comes back at eye level and often sits at a tiny distance, but then everything else sounds unbalanced IMO. I need a different signature for mono and for the rest, at the very least. Which is why I have high expectations for the A16, as it will do just that and then some, so I can stop using my weird attempt to make it myself with two impulses picked out of some HRTF that came close enough for me.
But not everybody has a messed-up head like I have. Many people get a convincing impression of depth even with normal music and the right IEM/headphone. And of those who don't, many have great success with something like OOYH. I do not. Different people are different :'(
 
Apr 24, 2018 at 10:34 AM Post #805 of 1,335
Stuff right in front of you will either work or it won't. It depends on the frequency response for the vertical axis, and on you. Some people basically just rely on their eyes for anything in front, to estimate distance and a big part of the vertical axis. So when they see nothing, or a wall, the brain goes "nope, there's nothing here, let's put it in the head because that makes sense, on the nose for the lolz". As the eyes are clearly our dominant sense, it's not uncommon at all for everything to go to crap if they don't get the right cues.
But for other people, so long as the rest of the 'image' seems right, they will deduce the center to go with it and be bamboozled by the realism (I wonder if that can be trained by listening to music in the dark, or something like that?).
Of course, a small part of this could have to do with your own ear canal being significantly different from whatever they have modeled for the A16, but it wouldn't be my first guess.

In my case, playing basic music, no 3D DSP or anything: if I set up an IEM's response to be close to what I get from my speakers at 30°, which feels pretty flat to me and I like that, I end up with the center image on my forehead (and I don't like that...). With a different FR the mono sound comes back at eye level and often sits at a tiny distance, but then everything else sounds unbalanced IMO. I need a different signature for mono and for the rest, at the very least. Which is why I have high expectations for the A16, as it will do just that and then some, so I can stop using my weird attempt to make it myself with two impulses picked out of some HRTF that came close enough for me.
But not everybody has a messed-up head like I have. Many people get a convincing impression of depth even with normal music and the right IEM/headphone. And of those who don't, many have great success with something like OOYH. I do not. Different people are different :'(
There's no model in the A16 - it's all based on measurements specific to the individual. According to Stephen Smyth, the imaging at the front is the part that's most difficult to get right, because it's the part most influenced by the shape of the pinnae. Imaging behind/above doesn't much depend on the ear shape and is really well captured (at least it was for me).

BTW, I also found out a juicy bit of gossip from Stephen Smyth. I've used OOYH and have a couple of licenses (I quite like it). Apparently all the HRTFs in OOYH were reverse-engineered from the A8. Which, to me, sort of puts OOYH in a sketchy gray area, ethically.
 
Apr 24, 2018 at 10:55 AM Post #806 of 1,335
Unless it's binaural, you won't get a good front-to-back perspective. I look for info and balance and get enough cues that it's not something that concerns me. As long as there's musical layering, I'm good. Width is another issue. Some IEMs with a lot of out-of-head width are simply a bit phasey and lose PRaT. It can sound open, still be mostly in your head, and be a very enjoyable experience with a high goosebump factor.
 
Apr 24, 2018 at 11:02 AM Post #807 of 1,335
There's no model in the A16 - it's all based on measurements specific to the individual. According to Stephen Smyth, the imaging at the front is the part that's most difficult to get right, because it's the part most influenced by the shape of the pinnae. Imaging behind/above doesn't much depend on the ear shape and is really well captured (at least it was for me).

BTW, I also found out a juicy bit of gossip from Stephen Smyth. I've used OOYH and have a couple of licenses (I quite like it). Apparently all the HRTFs in OOYH were reverse-engineered from the A8. Which, to me, sort of puts OOYH in a sketchy gray area, ethically.

I meant a model of compensation for the ear canal. Compared to IEM use, the ear canal isn't as significant (it's an acoustic chamber), but maybe they still compensate a little something by default based on an average ear canal? IDK.
About OOYH, it would certainly be a practical tool for doing it. That's very much what I was planning to do to get better sound on my portable gear and even IEMs once I get the A16. ^_^ Capturing impulses for 30° on a headphone with a close enough signature (or EQed), and using a convolver on a cellphone for a custom crossfeed.
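The convolution step of that custom-crossfeed plan can be sketched in a few lines: each input channel is run through a same-side (ipsilateral) and a cross-side (contralateral) impulse response, then summed per ear. The 3-tap impulses here are made up for illustration; real HRIRs captured at ±30° would be hundreds of taps, and a phone convolver would do this with FFTs rather than a direct loop.

```python
# Minimal sketch of the convolver-based crossfeed described above: each ear's
# output is (same-side input * ipsilateral IR) + (other-side input *
# contralateral IR). The impulse responses below are hypothetical 3-tap toys;
# real HRIRs measured at +/-30 degrees would be much longer.

def convolve(signal, impulse):
    """Direct-form FIR convolution (fine for short toy inputs)."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def crossfeed(left, right, ipsi, contra):
    """Stereo crossfeed via per-ear ipsi + contra mixing."""
    out_l = [a + b for a, b in zip(convolve(left, ipsi), convolve(right, contra))]
    out_r = [a + b for a, b in zip(convolve(right, ipsi), convolve(left, contra))]
    return out_l, out_r

ipsi = [1.0, 0.0, 0.0]    # near-ear path: mostly direct sound
contra = [0.0, 0.3, 0.1]  # far-ear path: delayed and attenuated
out_l, out_r = crossfeed([1.0, 0.0], [0.0, 0.0], ipsi, contra)
# A left-only impulse leaks into the right ear delayed and quieter:
# out_l -> [1.0, 0.0, 0.0, 0.0], out_r -> [0.0, 0.3, 0.1, 0.0]
```

The interaural delay and attenuation baked into the contralateral IR are exactly the cues a plain channel-mixing crossfeed approximates; picking the IRs from a measured HRTF, as the post describes, just makes those cues personal.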

Mono is influenced by the pinna mostly for altitude and for telling front from back; the real problem is that we get no interaural cues from a mono sound (captain obvious). But there are definitely differences between people that seem to go beyond simple morphology. I'd have to go look up where I remember reading about that.

ps: you're not off topic if nobody talks about it. ^_^
 
Apr 24, 2018 at 11:29 AM Post #808 of 1,335
I meant a model of compensation for the ear canal. Compared to IEM use, the ear canal isn't as significant (it's an acoustic chamber), but maybe they still compensate a little something by default based on an average ear canal? IDK.

They don't use any kind of model or compensation - it's all just transfer functions from measurements made open ear, receiving from the 16-speaker surround, to headphone measurements (HD800S).

BTW, @goodvibes - to all intents and purposes, what the A16 is trying to do is binaural, but it's trying to come up with a better binaural signal - one that was recorded on your head, rather than a random dummy head, since your brain has adapted to the unique shape of your own (inner and outer) ear. A generic dummy head is better than nothing, but tends to go a little bit wrong in the vertical imaging at the front.
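The transfer-function idea from the post above boils down to: equalize the headphone path so that what reaches the eardrum matches the open-ear speaker measurement, i.e. the correction is roughly (open-ear response) / (headphone-on-ear response) per frequency bin. The sketch below does this on toy per-bin magnitudes with a small epsilon regularizer; a real system like the A16 works on complex spectra with phase, smoothing, and a lot more care, so treat all the names and numbers here as illustrative.

```python
# Hedged sketch of the measurement-based correction described above: divide
# the open-ear (speaker) response by the headphone-on-ear response per bin to
# get a correction filter, so headphone * correction ~= open ear. Toy
# magnitudes only; real systems use complex spectra, phase, and smoothing.

def correction_filter(ear_mag, headphone_mag, eps=1e-6):
    """Per-bin magnitude correction; eps avoids division by near-zero bins."""
    return [e / (h + eps) for e, h in zip(ear_mag, headphone_mag)]

ear = [1.0, 0.8, 1.2, 0.5]   # toy open-ear magnitudes per frequency bin
hp  = [1.0, 1.0, 0.6, 0.5]   # toy headphone-on-ear magnitudes
corr = correction_filter(ear, hp)

# Applying the correction to the headphone path recovers the open-ear curve:
recovered = [h * c for h, c in zip(hp, corr)]
```

This is also why the measurement has to be individual: both the numerator and the denominator depend on your particular pinna and canal, so a filter derived from someone else's ears corrects toward the wrong target.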
 
