Recording Impulse Responses for Speaker Virtualization
Nov 10, 2019 at 2:30 PM Post #121 of 1,816
Will give this a try as soon as I can. Does the channel balance logic have any consequences for the room correction? Ideally, with my next recordings, I want to work on getting that incorporated as well.
 
Nov 11, 2019 at 2:08 AM Post #122 of 1,816
Will give this a try as soon as I can. Does the channel balance logic have any consequences for the room correction? Ideally, with my next recordings, I want to work on getting that incorporated as well.
Channel balancing is done after the room correction, using the final results. Room correction is not needed for channel balance correction. New recordings aren't actually needed either; channel balance can be corrected from older existing recordings as well.
 
Nov 13, 2019 at 2:59 AM Post #123 of 1,816
I'm trying to make my own notes for the room correction measurement process and hopefully help improve the documentation :) Can you tell me if I've got it right?

0) Run a regular recording with the binaural mics, but do not process it yet.
1) Replace the CSV/text file with the mic calibration data from MiniDSP's website
2) Select the UMIK-1 as the default device in Windows and run the command below (I've renamed the default recording file to prefix it with room and added --channels=1)

python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL.wav"

3) Run

python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir"

Where I'm confused is how to tell Impulcifer that you've made a room correction recording. Step 3 is just the regular process - if I don't specify --no_room_correction, will it process the room correction by default? How will it know which file is the room correction recording?
 
Nov 13, 2019 at 3:45 AM Post #124 of 1,816
I'm trying to make my own notes for the room correction measurement process and hopefully help improve the documentation :) Can you tell me if I've got it right?

0) Run a regular recording with the binaural mics, but do not process it yet.
1) Replace the CSV/text file with the mic calibration data from MiniDSP's website
2) Select the UMIK-1 as the default device in Windows and run the command below (I've renamed the default recording file to prefix it with room and added --channels=1)

python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL.wav"

3) Run

python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --dir_path="data/my_hrir"

Where I'm confused is how to tell Impulcifer that you've made a room correction recording. Step 3 is just the regular process - if I don't specify --no_room_correction, will it process the room correction by default? How will it know which file is the room correction recording?
You got it pretty much right, with minor changes.

Impulcifer uses the same principle for finding the room recording files as it uses for the HRIR recording files: it looks at the file name patterns. For room recordings the pattern is room-<CH1>,<CH2>,...,<CHN>-left|right.wav. If such files exist, Impulcifer will do room correction unless --no_room_correction is given.
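As a rough sketch of that file name lookup - my own approximation of the pattern above, not Impulcifer's actual code:

import re
from pathlib import Path

# Approximation of the room recording file name pattern:
# room-<CH1>,<CH2>,...,<CHN>-left|right.wav
ROOM_PATTERN = re.compile(r'^room-([A-Z]{2}(?:,[A-Z]{2})*)-(left|right)\.wav$')

def find_room_recordings(dir_path):
    """Return (path, channel list, side) for every matching room recording."""
    found = []
    for path in Path(dir_path).glob('room-*.wav'):
        match = ROOM_PATTERN.match(path.name)
        if match:
            found.append((path, match.group(1).split(','), match.group(2)))
    return found

# find_room_recordings('data/my_hrir') would pick up files such as
# room-FL,FC,FR,SR,BR,BL,SL-left.wav and room-FL,FC,FR,SR,BR,BL,SL-right.wav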

Actually you have to run the room recording twice: first with the measurement mic in the same location where the left ear mic was, and then again at the location of the right ear mic:
python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL-left.wav"
python recorder.py --channels=1 --play="data/sweep-seg-FL,FC,FR,SR,BR,BL,SL-7.1-6.15s-48000Hz-32bit-2.93Hz-24000Hz.wav" --record="data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL-right.wav"

Or you could run it only once with the measurement microphone at the center of the head and copy that file as -left and -right, but then the results can't be guaranteed to be stellar. Room correction EQ extends all the way up to 20 kHz, and the frequency response can change quite a lot between two locations 8 cm apart, depending on the room. It could also be just fine. You can use webcam.html to help place the measurement mic in the correct location.
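If you do take that single center measurement route, duplicating the recording is just a file copy - a minimal sketch, with paths following the example commands above:

import shutil

# Duplicate one center-of-head room measurement for both sides
src = 'data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL.wav'
shutil.copy(src, 'data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL-left.wav')
shutil.copy(src, 'data/my_hrir/room-FL,FC,FR,SR,BR,BL,SL-right.wav')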

I'm going to implement general room.wav support which doesn't EQ up to such high frequencies, so the mic location won't have to be so precise.
 
Nov 13, 2019 at 5:30 AM Post #125 of 1,816
Excellent - I'll give it a shot now.

This morning was productive - I nailed a good recording with my LS50 setup. Balance was off, but my process of using the center channel of the 7.1.4 Atmos test tones worked well to figure out the balance. Impulcifer then adjusted it with the new commands. Compared even to my old recording, I like this one much better.

I did try playing around with the other balancing methods (mids and both avgs). They did funky things with the localisation for me, which felt very unnatural. The numerical balance is the best method.

Edit: Tried to place the UMIK properly, but because of my setup it's extremely difficult to get it in exactly the same position - I'm taking measurements on a sofa. I have an idea of using a laser level, which I'll try another time.

The next thing I'll do is wait for the AirPods Pro to get added to AutoEq so I can transform my over-ear measurements to them!
 
Nov 16, 2019 at 7:37 AM Post #126 of 1,816
I added another channel balancing strategy: "trend". This takes the frequency response difference between the left and right sides and smooths it heavily. This smoothed curve is then used as the equalization target for the right side. Because the smoothing is so heavy, this doesn't create the uncanny feeling that "avg" or "min" can in some cases, while still managing to balance bass, mids and treble. I tested it with two measurements, one with quite good natural channel balance and one with poor balance. I prefer trend over all other strategies for both. Here's a graph illustrating the trending.
[Graph illustrating the trend smoothing: h7shOMX.png]
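To make the idea concrete, here's a minimal sketch of a trend computation - a very wide moving average over the left-minus-right difference in dB. The smoothing amount is a guess; the real parameters in Impulcifer may differ:

import numpy as np
from scipy.ndimage import uniform_filter1d

def trend_curve(left_db, right_db, smoothing=0.2):
    """Heavily smoothed left-minus-right difference in dB, used as the
    equalization target for the right side."""
    diff = np.asarray(left_db) - np.asarray(right_db)
    window = max(3, int(len(diff) * smoothing))   # very wide moving average
    return uniform_filter1d(diff, size=window, mode='nearest')

# The right side is then equalized by this curve, which balances overall
# bass, mid and treble levels without chasing narrow peaks and dips.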
 
Nov 16, 2019 at 11:24 AM Post #127 of 1,816
I took the leap and cut the hooks off my mics and glued them to earplugs. Works fantastically, and now they are much more stable. I got very good results on the first try with good channel balance, although using balance correction improves it further. I'll do a surround setup measurement tomorrow.

https://imgur.com/a/0ELAqti
 
Nov 17, 2019 at 3:46 AM Post #128 of 1,816
Trend looks interesting - I assumed that's what room correction was going to be. You just flatten out the natural response. Did you compare trend to the (manual) numerical mode?

I've got to get into the habit of saving my damn recordings; I keep having to re-measure to try out new stuff!

The idea of gluing to a foam plug sounds ideal - they'll be much more stable then. That way you can also compensate multiple headphones with one recording because they'll stay put better. I ended up ordering the XLR version because I assumed a higher SNR would be better, so I'm a bit reluctant to cut into those more expensive ones.

Edit: I did have a recording saved. I just tried trend - works very well - the same as manual (numerical) correction really. A really nice feature to help make this a turnkey solution without much faffing around.

Edit: How do I get the chart for the channel balance output like you've shown above? --plot doesn't output it. I'm pretty surprised, because it sounds like my stereo imaging is better over the headphones than on my real speakers. I want to see what's going on in the charts! Amazing
 
Nov 17, 2019 at 5:27 AM Post #129 of 1,816
I calibrated my binaural mics against my MiniDSP UMIK-1. Quite surprisingly, the frequency response is within ±0.5 dB between 90 Hz and 9 kHz, with a roll-off on both ends. I expected bigger variation. There is a level difference between left and right, but I already suspected that.

Now I start to wonder what would happen if I didn't do headphone compensation at all, but instead used the calibration files produced for the binaural mics and baked in Harman target equalization for my headphones without the bass boost. Headphone compensation seems to be quite a tricky business because the headphone and mic placements affect the results more than I would like. It could be that the mic placement affects the HRIR measurements similarly and the headphone compensation is really needed even when the mics have been calibrated. I need to test this hypothesis.
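As a sketch of what that hypothetical workflow could look like - the function and curve handling are my own assumptions for illustration, not an Impulcifer feature:

import numpy as np

def hypothetical_compensation(measured_db, mic_cal_db, harman_db, bass_boost_db):
    """Use mic calibration plus a Harman-style target instead of headphone
    compensation. All inputs are magnitude responses in dB at the same
    frequency points; illustrative only."""
    true_response = np.asarray(measured_db) - np.asarray(mic_cal_db)  # undo the mics' own coloration
    target = np.asarray(harman_db) - np.asarray(bass_boost_db)        # Harman target without the bass boost
    return target - true_response                                     # EQ gain to apply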

There's a tool for the mic calibration now in research/mic-calibration: https://github.com/jaakkopasanen/Impulcifer/tree/master/research/mic-calibration

Here are the same images on Imgur for posterity, in case something happens to the readme: https://imgur.com/a/9YzJtwx

Trend looks interesting - I assumed that's what room correction was going to be. You just flatten out the natural response. Did you compare trend to the (manual) numerical mode?

I've got to get into the habit of saving my damn recordings; I keep having to re-measure to try out new stuff!

The idea of gluing to a foam plug sounds ideal - they'll be much more stable then. That way you can also compensate multiple headphones with one recording because they'll stay put better. I ended up ordering the XLR version because I assumed a higher SNR would be better, so I'm a bit reluctant to cut into those more expensive ones.

Edit: I did have a recording saved. I just tried trend - works very well - the same as manual (numerical) correction really. A really nice feature to help make this a turnkey solution without much faffing around.

Edit: How do I get the chart for the channel balance output like you've shown above? --plot doesn't output it. I'm pretty surprised, because it sounds like my stereo imaging is better over the headphones than on my real speakers. I want to see what's going on in the charts! Amazing

I did compare trend with the manual numerical method, and I prefer the trend method because it balances out bass, mids and treble. With the numerical method it's possible that male voices are in the center but female voices are off-center, or vice versa.

The trend chart is not included in Impulcifer; that was a one-time trick I did. You could add a new line, trend.plot_graph(), on line 271 in hrir.py if you wanted to see it.
 
Nov 17, 2019 at 5:43 AM Post #130 of 1,816
I saw that calibration rig you made - impressive DIYing.

I've been listening to the results of the latest HRIR I made with the trend - it's so much like I thought room correction would be. It seems to have improved everything. I'm noticing surround effects more, the harsh treble I sometimes thought I noticed has gone, and everything seems to have the smooth natural response of the LS50s. The thing I really noticed most was that the virtual center channel was so much more precise than my real speakers. Previously I'd been using ffdshow to output Dolby Pro Logic over headphones, like I do on my real system, to get a perfect center channel. Now I don't need to - the virtual center sounds like a real center, just like on one of the OOYH presets or Dolby Headphone.

And it's not just stereo music - in the Dolby Atmos test clips I play, I notice the side and rear channel placement much better when panning, e.g. jets flying from behind, to the side and then to the front. I guess the 7-channel test tones are easy to localise, but it's the panning effects that rely on good imaging between speakers to get right.

The room correction routine you've built - does that basically just target a Harman Room Loudspeaker Target? Or does it do anything with T20 etc?

It's such a shame that Atmos/DTS:X can't be decoded in Windows.
 
Nov 17, 2019 at 5:48 AM Post #131 of 1,816
I saw that calibration rig you made - impressive DIYing.

I've been listening to the results of the latest HRIR I made with the trend - it's so much like I thought room correction would be. It seems to have improved everything. I'm noticing surround effects more, the harsh treble I sometimes thought I noticed has gone, and everything seems to have the smooth natural response of the LS50s. The thing I really noticed most was that the virtual center channel was so much more precise than my real speakers. Previously I'd been using ffdshow to output Dolby Pro Logic over headphones, like I do on my real system, to get a perfect center channel. Now I don't need to - the virtual center sounds like a real center, just like on one of the OOYH presets or Dolby Headphone.

And it's not just stereo music - in the Dolby Atmos test clips I play, I notice the side and rear channel placement much better when panning, e.g. jets flying from behind, to the side and then to the front. I guess the 7-channel test tones are easy to localise, but it's the panning effects that rely on good imaging between speakers to get right.

The room correction routine you've built - does that basically just target a Harman Room Loudspeaker Target? Or does it do anything with T20 etc?

It's such a shame that Atmos/DTS:X can't be decoded in Windows.
I'm very glad it works so well for you. I might make trend the default if more people here try it out and report their experiences.

Currently room correction is only a minimum phase EQ, but I will try to implement decay management to ensure that all frequencies decay at the same speed. Also, suppressing reflections that happen within the first 30 milliseconds could help fool the brain into thinking it is hearing the recording venue's acoustics instead of the room where the listener is sitting right now.
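For reference, here's a generic cepstrum-based way to turn a desired EQ magnitude into a minimum phase impulse response - a textbook construction, not necessarily how Impulcifer implements it:

import numpy as np

def minimum_phase_eq(gain_db):
    """Turn a desired EQ magnitude (dB at linearly spaced frequencies from
    0 Hz to Nyquist) into a minimum phase impulse response via the real
    cepstrum."""
    magnitude = 10 ** (np.asarray(gain_db, dtype=float) / 20)
    n_fft = 2 * (len(magnitude) - 1)
    # Mirror the half spectrum into a full, conjugate-symmetric spectrum
    full = np.concatenate([magnitude, magnitude[-2:0:-1]])
    cepstrum = np.fft.ifft(np.log(np.maximum(full, 1e-8))).real
    # Fold the cepstrum, which forces the phase to be minimum phase
    folded = cepstrum.copy()
    folded[1:n_fft // 2] *= 2
    folded[n_fft // 2 + 1:] = 0
    return np.fft.ifft(np.exp(np.fft.fft(folded))).real

# The resulting FIR can then be convolved with each impulse response to
# apply the correction without pre-ringing or extra delay.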
 
Nov 17, 2019 at 7:22 AM Post #132 of 1,816
I have tested Impulcifer and it works great! I have measured only stereo speakers but want to add an LFE dummy head recording.
I just don't understand how I can add a "real" LFE channel in HeSuVi?
 
Nov 17, 2019 at 7:45 AM Post #133 of 1,816
I just reprocessed my recordings with the trend channel balance, and when swapping back and forth between the old HRIR without balancing and the new one with it, surround effects are better placed in the 'room' and the center channel sounds better localized. I watched a few scenes from favorite movies that I test often and definitely prefer the balanced HRIR - I think it sounds bigger and more spacious in the room. You're doing awesome work!
 
Nov 17, 2019 at 9:40 AM Post #134 of 1,816
I have tested Impulcifer and it works great! I have measured only stereo speakers but want to add an LFE dummy head recording.
I just don't understand how I can add a "real" LFE channel in HeSuVi?

You don't need to add an LFE track - if you run through Atmos/DTS:X test tones, you still get playback when they come to the LFE channel.

If the speakers you measured don't produce sub-bass, you can use Peace to boost those frequencies so that your playback will work. The speakers do produce them, just at a lower volume.
 
Nov 18, 2019 at 6:58 AM Post #135 of 1,816
You don't need to add an LFE track - if you run through Atmos/DTS:X test tones, you still get playback when they come to the LFE channel.

If the speakers you measured don't produce sub-bass, you can use Peace to boost those frequencies so that your playback will work. The speakers do produce them, just at a lower volume.

I already added a simple text file in HeSuVi that boosts the LFE channel before processing, but I don't think it sounds as good as a "real" LFE channel.
 
