Recording Impulse Responses for Speaker Virtualization

Nov 18, 2019 at 12:30 PM Post #136 of 2,025
I noticed some problems with the HRIR measured after cutting the hooks off my mics. The overall tonal balance is a lot brighter now, which I don't like, and when I'm using room correction there is weird low frequency ringing that is very noticeable. I need to investigate this more. In the meantime I wouldn't recommend that anyone cut the hooks off their The Sound Professionals mics.
 
Nov 19, 2019 at 4:48 AM Post #137 of 2,025
Perhaps the glued ear plugs are adding physical damping to the microphone capsule which it's not designed for? Sucks.

I did some new recordings today and got the most holographic and realistic recording yet. I'm not sure if it's placebo, but I recorded at lower levels today and left more headroom. Perhaps a lower playback volume excites fewer resonances, which leads to a more accurate recording?
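For anyone who wants to sanity check this, a minimal sketch along these lines reports the peak level and remaining headroom of a recording before processing (the numpy/soundfile packages and the file name are my assumptions, nothing Impulcifer requires):

Code:
# Minimal sketch: report the peak level and headroom of a recorded sweep.
# The file name "recording.wav" is hypothetical; requires numpy and soundfile.
import numpy as np
import soundfile as sf

data, fs = sf.read("recording.wav")       # float samples in the range [-1, 1]
peak = np.max(np.abs(data))               # largest absolute sample over all channels
peak_db = 20 * np.log10(max(peak, 1e-9))  # peak level in dBFS
print(f"Peak level: {peak_db:.1f} dBFS, headroom: {-peak_db:.1f} dB")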

It's been a while since I used Impulcifer with proper open backs - using really open/light ones like the Grado GW100 makes the experience ridiculously real in the room you made the recording in. My main use case is mobile and when there's noise in the house, but it's so nice to have the Grados for late night listening sessions.

Edit: Also used the single speaker method to take a near-field (arm's length / 3 foot) measurement of one of my LS50s. The recording process was a breeze this time round with the new recorder.py compared to Audacity. The near-field one is really good for use with a monitor or tablet on the go. And there's no way I could fit my LS50s on my desk - but this way I get to use them virtually in the position they'd be on my desk.
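For reference, the basic idea of a combined play-and-record pass looks roughly like the generic sketch below. This is not Impulcifer's actual recorder.py, just an illustration using the sounddevice and soundfile packages with hypothetical file names:

Code:
# Generic simultaneous play-and-record sketch (not Impulcifer's recorder.py).
# Assumes the sounddevice and soundfile packages and hypothetical file names.
import sounddevice as sd
import soundfile as sf

sweep, fs = sf.read("sweep.wav")                          # test signal played through the speaker
recording = sd.playrec(sweep, samplerate=fs, channels=2)  # record both in-ear mics at the same time
sd.wait()                                                 # block until playback/recording is done
sf.write("recording.wav", recording, fs)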
 
Nov 19, 2019 at 12:18 PM Post #138 of 2,025
Perhaps the glued ear plugs are adding physical damping to the microphone capsule which it's not designed for?

I don't think it's the plugs, because there already is a silicone wrapper on the capsules. What I suspect is that the same problem has always been there. Depending on the recording, sometimes it's both ears, sometimes it's only one, and sometimes I've gotten both mics in a good placement. It could be that having the capsules closer to the ear canal opening, without the hooks messing around the pinna, captures the pinna response better, but the headphone compensation doesn't capture the pinna response when wearing headphones. Headphone compensation is supposed to cancel out the pinna effect when wearing headphones so that there would be only one pinna response in playback, namely the one captured in the HRIR. According to the literature I've read, headphone compensation isn't quite as solid a solution as I'd like.

I'm going to run some more experiments to investigate what the headphone compensation is doing to the frequency response and how the current measurements compare to the earlier ones made with the hooks still in place. At least now there should be superior reproducibility; I just have to crack the compensation. Maybe I'll implement manual EQ tools to see how that would work out.
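Conceptually, headphone compensation boils down to something like the sketch below: take the headphone's response measured with the same in-ear mics and invert it, with a limit so deep notches don't become huge boosts. This is only a simplified illustration of the idea, not the exact implementation, and the names are just for illustration:

Code:
# Rough illustration of headphone compensation (a simplified sketch, not the
# exact code): invert the headphone's own magnitude response measured with the
# in-ear mics so its pinna/coupling contribution cancels out in playback.
import numpy as np

def magnitude_db(ir, fs, n_fft=65536):
    """Magnitude response of an impulse response in dB (smoothing omitted for brevity)."""
    spectrum = np.abs(np.fft.rfft(ir, n_fft))
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    return freqs, 20 * np.log10(np.maximum(spectrum, 1e-9))

def compensation_curve(headphone_ir, fs, max_boost_db=12.0):
    """Inverse of the headphone response, limited so deep notches don't turn
    into huge, ringing boosts."""
    freqs, mag_db = magnitude_db(headphone_ir, fs)
    eq_db = -(mag_db - np.mean(mag_db))      # flatten around the average level
    return freqs, np.minimum(eq_db, max_boost_db)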

I've been wondering about the recording levels too. I get some pretty nasty resonances in my apartment if I crank up the volume too much. Definitely a nice thing about the ear plug attached mics is that I can now use high volumes without risking my hearing (or could if I didn't have the resonance issue). This measurement process definitely has a lot of practical problems one needs to overcome to get it perfect. It would be nice to have algorithmic solutions for all of these, but that might be naive. Channel balance definitely has a very big impact on the final results. This might be the main reason why my virtualized speakers with the HD 800 sound better than the physical speakers.
 
Nov 19, 2019 at 12:42 PM Post #139 of 2,025
Actually, I'm not sure if the new measurements are really worse than the best one I made with the hooks in place. The channel balance maybe wasn't as good, but that could be because I now have a surround recording and the trend balancer could be confused by the side, center and rear channels. I made a quick adjustment to only use the FL and FR channels for the channel balancing and it sounds very good. The other thing was level matching: the original with hooks is a lot louder, but when I boost the new one with Audacity to roughly the same loudness it's even better in a way. It's very hard to make conclusive observations because the brain adapts so fast to these changes.
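The idea behind the trend balancing, in very simplified form, is to fit a smooth trend to the left/right level difference over frequency and split the correction between the ears. A rough sketch of that idea, not the exact implementation and with hypothetical inputs:

Code:
# Very simplified sketch of trend-style channel balancing on FL and FR only
# (not the exact implementation). left_db/right_db are smoothed left/right ear
# magnitude responses on a log-spaced frequency grid (hypothetical inputs).
import numpy as np

def trend_balance(freqs, left_db, right_db, degree=3):
    log_f = np.log10(freqs)
    diff = left_db - right_db
    # Fit a low-order trend so only the broad tonal tilt gets corrected,
    # not narrow notches and peaks.
    coeffs = np.polyfit(log_f, diff, degree)
    trend = np.polyval(coeffs, log_f)
    # Split the correction: half taken off the louder ear, half added to the quieter one.
    return -trend / 2, trend / 2  # dB corrections for left and right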

I still have the low frequency ringing, which could be caused by the room acoustics or resonances, because I made some temporary adjustments to the room layout to try and capture better acoustics. The room correction FR plot would indicate this as well. There is a very steep rise at 55 to 65 Hz and two sharp spikes around 200 Hz: https://imgur.com/a/O2N7wco. These kinds of features in the EQ filter can easily cause ringing. Need to do something about the algorithm to avoid this...
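One possible way to avoid that, sketched below purely as an assumption about a fix rather than what the algorithm currently does, is to cap the boosts and smooth the correction curve before generating the filter:

Code:
# Sketch of one possible way to tame a room correction curve so steep, narrow
# boosts don't ring (an assumption about a fix, not the current algorithm).
import numpy as np
from scipy.ndimage import uniform_filter1d

def tame_correction(correction_db, max_boost_db=6.0, smooth_bins=15):
    limited = np.minimum(correction_db, max_boost_db)   # limit boosts, leave cuts alone
    return uniform_filter1d(limited, size=smooth_bins)  # smooth the remaining transitions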
 
Nov 19, 2019 at 2:53 PM Post #140 of 2,025
The trend channel balance algorithm you implemented has had the biggest impact on fidelity for me. Prior to that the sound was definitely out of my head and way better than anything else I'd experienced, but it didn't really sound like the system I measured - the characteristics of the speaker/room, I mean. Since the channel balance I can actually tell that it's my LS50s in the office, or my R300s in my theater. It actually sounds like my speakers.

That method has also made measurement much simpler - today I did just one speaker recording and compensated 3 headphones back-to-back. They all sounded identical to my ears - one was a cabled Creative Aurvana SE, another a Grado GW100 and finally a Bose 700. Prior to that, getting multiple headphones to work the same was very difficult; I used to have to check each HRIR, which was quite painful.

I didn't realise a surround measurement would mess with the trend setting. All my recordings have been surround. Now that I've become meticulous about saving recordings, I'm excited to re-process with only the L and R once you push it to master.

We've briefly talked about it before, but I find running a flat target via oratory's measurements worked really well for my IEMs. I know there's the 4 kHz resonance peak from the simulator that shouldn't be flattened. Perhaps you can develop a real flat target in AutoEQ that takes that into account?

The only issue I had with low volume recordings was the one I raised today on GitHub - one of the algorithms got confused by the low level. I know from calibrating my subs over the years that when you go for max SPL the waterfall plots used to fall apart. With the Behringer DAW I have and the XLR Sound Professionals mics, I should be able to go for really quiet recordings.
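A simple way to see how low you can go is to compare the sweep level to the noise floor in the recording. The sketch below is just a rough illustration, with a hypothetical file name and the assumption that the last second of the file is silence:

Code:
# Rough signal-vs-noise check for a quiet recording (hypothetical file name;
# assumes the last second of the file is silence).
import numpy as np
import soundfile as sf

data, fs = sf.read("recording.wav")
mono = data.mean(axis=1) if data.ndim > 1 else data
rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
signal_db = rms_db(mono[:-fs])   # everything except the silent tail
noise_db = rms_db(mono[-fs:])    # the silent tail
print(f"Signal: {signal_db:.1f} dBFS, noise floor: {noise_db:.1f} dBFS, "
      f"SNR: {signal_db - noise_db:.1f} dB")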

I'm selling some B&W 803s soon that I no longer use. Kinda cool that I can "copy" my speakers before selling them. Speaker piracy!

As well as getting a proper transform measurement completed for my AirPods Pro tomorrow, I plan on measuring the B&Ws too now that I've practiced the single speaker method. The single speaker method also makes the room EQ measurement much easier to set up because I don't need to sit on my sofa.
 
Nov 20, 2019 at 2:37 AM Post #142 of 2,025
We've briefly talked about it before, but I find running a flat target via oratory's measurements worked really well for my IEMs. I know there's the 4 kHz resonance peak from the simulator that shouldn't be flattened. Perhaps you can develop a real flat target in AutoEQ that takes that into account?
What exactly do you mean by flat? And what do you mean by a real flat target?

Maybe a stupid question, but how and at which step can I use the trend channel balance in Impulcifer?
No stupid questions here. In the final step, when you run impulcifer.py, you simply add the parameter --channel_balance=trend. That's literally the only thing you have to do.
 
Nov 20, 2019 at 3:13 AM Post #143 of 2,025
I made the naive assumption that the Out of Your Head, and now Impulcifer, recordings have all the HRTF information in them. Because most of the ear/headphones we use have some sort of target built into them that tries to emulate a loudspeaker (diffuse field, Harman, etc.), you need to flatten your headphone response, and then the HRIR sounds much more natural and not overly harsh. But I know the way I compute flat EQ from AutoEQ is wrong because it also flattens some of the resonances that shouldn't be flattened from the dummy head.

Now a "real flat" target for the purposes of Impulcifer would know that, say, Oratory's measurements use a certain Gras copuler that has certain challenges when it's a deep insert IEM and not to try and flatten resonances from the raw measurement results. Or mess with any EQ beyond 10khz because ear canal length will be the primary factor on response that high. But it'll still flatten the natural curve built in of the headphone because the HRIR contains all the room we need.

Does that make sense? Or am I just on the wrong track?
 
Nov 20, 2019 at 3:23 AM Post #144 of 2,025
No stupid questions here. In the final step, when you run impulcifer.py, you simply add the parameter --channel_balance=trend. That's literally the only thing you have to do.
So something like this:
python impulcifer.py --channel_balance=trend --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir" ?

But this does not change anything in the final HeSuVi file?
 
Nov 20, 2019 at 3:29 AM Post #145 of 2,025
Like that, yes. Of course it's possible that you already have good channel balance which doesn't require correction. You can check this by inspecting the Results.png graph in the plots folder when not using --channel_balance. If the purple difference curve looks flat, then you have naturally good channel balance. The trend balancing method doesn't affect the narrow notches and peaks in the difference curve but works on a bigger scale, balancing bass, mids and treble.
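If you want a number instead of eyeballing the plot, a quick sketch like the one below compares the left and right magnitude responses of a stereo impulse response directly (the file name and channel order are my assumptions, not Impulcifer output specifics):

Code:
# Quick numeric left/right balance check on a stereo impulse response WAV
# (file name and channel order are assumptions, not Impulcifer output specifics).
import numpy as np
import soundfile as sf

ir, fs = sf.read("hrir_front.wav")
left, right = ir[:, 0], ir[:, 1]
n_fft = 65536
mag = lambda x: 20 * np.log10(np.maximum(np.abs(np.fft.rfft(x, n_fft)), 1e-9))
freqs = np.fft.rfftfreq(n_fft, 1 / fs)
band = (freqs > 100) & (freqs < 10000)          # compare the broad audible range
diff = mag(left)[band] - mag(right)[band]
print(f"Mean L-R difference: {diff.mean():+.1f} dB (spread {diff.std():.1f} dB)")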
 
Nov 20, 2019 at 7:04 AM Post #146 of 2,025
(Attached plots: Results_HD555.png and Headphones_HD555.png)

The original curves look like that. It is not really flat, but still I don't hear any difference after balancing.
 
Nov 20, 2019 at 7:16 AM Post #147 of 2,025
I tried the transform headphone feature for my IEMs using my Bose 700 - it seems to work - the image is extremely realistic, but it's severely lacking in bass compared to no headphone compensation and EQing the AirPods Pro to flat.

This was the command I ran

Code:
python frequency_response.py --input_dir="rtings/data/onear/Bose Noise Cancelling Headphones 700" --output_dir="my_results/Bose 700 (AirPods Pro)" --compensation="rtings\resources\rtings_compensation_avg.csv" --sound_signature="results/rtings/avg/Apple AirPods Pro/Apple AirPods Pro.csv" --equalize --parametric_eq --max_filters=5+5 --ten_band_eq --bass_boost=4

I tried raising the bass boost to 10 but to no avail.

I need to troubleshoot more or pick up a pair of decent open backs that can act as a simulator headphone. The DT990 that I have will have too much treble sibilance for that purpose. Is there a way I can bake in the EQ compensation from oratory on my DT990s to remove the nasty treble and use that as the simulator headphone? That'd be the ideal headphone for me for critical Impulcifer use.

Edit: I also tried to use the trend balancer with a stereo only recording vs 7.1. The virtual center was much more accurate with the 7.1 recording. The stereo recording felt like it expanded the sound stage around me.

I also compared my headphone compensated Bose 700 recording with no headphone compensation and a flat EQ with oratory as a source. They're very, very similar, but the treble is more on point with the actual headphone comp.
 
Nov 20, 2019 at 10:05 AM Post #148 of 2,025
I made the naive assumption that the Out of Your Head, and now Impulcifer, recordings have all the HRTF information in them. [...] Does that make sense? Or am I just on the wrong track?

Something like this could be relevant for OOYH if it doesn't have headphone compensation. I remember reading that it doesn't, but I could just as easily be imagining this. If there is no headphone compensation, then some aspects of the frequency response would need to be flattened, but definitely not all. Basically it's very hard to say what would have to be done if the headphone compensation is missing entirely. Maybe compensate out the pinna response, but the headphone FR measurements include all the other aspects of the ear as well, like ear canal resonances, and these should not be flattened even with OOYH.

The HRIR doesn't contain anything beyond the ear canal opening, so those parts should not be touched. Unless the measurement has been made in the ear canal using silicone tube mics, but this is only ever used in academic situations and requires a physician to insert the tubes.

In conclusion: a flat target is never desired. Compensation for the headphone's pinna activation is, and it's done with headphone compensation in Impulcifer.



The original curves look like that. It is not really flat, but still I don't hear any difference after balancing.

Ooh, that's beautiful. No wonder the trend doesn't do anything because you already have near perfect channel balance. I've never managed to do a measurement like that. Enjoy!

I tried the transform headphone feature for my IEMs using my Bose 700 - it seems to work - the image is extremely realistic, but it's severely lacking in bass compared to no headphone compensation and EQing the AirPods Pro to flat.

Are you using this transform EQ during headphone compensation? Because the better way is to record the headphone compensation without any EQ and then apply the transform EQ during processing. This would mean you have to point the input dir to AirPods Pro and the sound signature to the Bose 700 results. Apply a 4 dB bass boost if you use pre-computed results. Then you take the minimum phase impulse response WAV file and copy it to the Impulcifer folder as eq.wav. This way the transform EQ, which turns the AirPods Pro into the Bose 700, will be incorporated in the produced HRIR.
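In other words, roughly mirror your earlier command with the roles swapped; something along these lines (the exact paths here are only illustrative and need to be checked against the AutoEQ repository layout):

Code:
python frequency_response.py --input_dir="rtings/data/inear/Apple AirPods Pro" --output_dir="my_results/AirPods Pro (to Bose 700)" --compensation="rtings/resources/rtings_compensation_avg.csv" --sound_signature="results/rtings/avg/Bose Noise Cancelling Headphones 700/Bose Noise Cancelling Headphones 700.csv" --equalize --bass_boost=4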

Your experience with stereo vs 7.1 balancing is probably due to the success of the recording process rather than the channel balancing algorithm.
 
Nov 20, 2019 at 11:06 AM Post #149 of 2,025
Ah, understood about the flat target. I still don't get why it sounds better and more natural to me - I just have to reduce the 4 kHz attenuation and it's virtually there for me.

Yep - I'm copying the eq.wav and doing the headphone compensation then. I guess I got it the wrong way round - will try again tomorrow.

Edit: the balancer issue was user error - I compensated the wrong headphones. The center image is slightly tighter with just the stereo recording. Would it be an idea to run the balancer per speaker?
 
Nov 20, 2019 at 11:34 AM Post #150 of 2,025
Ooh, that's beautiful. No wonder the trend doesn't do anything because you already have near perfect channel balance. I've never managed to do a measurement like that. Enjoy!

OK, that explains why I don't hear any difference. Actually this already sounds very close to the real speakers in the room. But the funny thing is that I had only done one test measurement before, this was the first real measurement with Impulcifer, and I had trouble with the microphone (PUI 5024HD capsule with XLR plug working at approx. 4 V) placement in my ears. I hope I find time on the weekend to do more measurements. I have also put together a second microphone working at 6 V and a microphone with a 3.5 mm plug (but this one looks a bit noisy). So far I have only measured with my HD 555 and I also want to test my AKG 701.
 
