Recording Impulse Responses for Speaker Virtualization
Dec 15, 2020 at 1:50 PM Post #496 of 1,816
In the pre plots (python impulcifer.py --test_signal="data/sweep-6.15s-48000Hz-32bit-2.93Hz-24000Hz.pkl" --no_room_correction --dir_path="data/my_hrir" --plot) you can see that the left channel shows more noise. Audacity shows me -45 dB on the left channel and -51.5 dB on the right channel. Only SR,BR.wav does not show this noise in the left channel. Look at the waveform where I compare your FC left measurement (the lower one) with my measurement (-56 dB). The noise on the left channel actually sounds like a noise I had with some PUI capsules when touching them.
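If you want to put numbers on that noise without opening Audacity, here is a minimal Python sketch that estimates the noise floor per channel in dBFS. The file name and the assumption that the first second of the recording is pre-sweep silence are mine, so adjust both to your own recording:

import numpy as np
import soundfile as sf

# Hypothetical path; point this at whichever recording shows the noise.
data, fs = sf.read("data/my_hrir/FC.wav")  # shape: (samples, channels)

# Assume the first second is pre-sweep silence and use it as the noise window.
noise = data[:fs]
rms = np.sqrt(np.mean(noise ** 2, axis=0))
noise_floor_db = 20 * np.log10(rms)  # dBFS relative to digital full scale
for ch, level in enumerate(noise_floor_db):
    print(f"channel {ch}: {level:.1f} dBFS")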

At what distance did you perform the measurement?


[Attached plots: FC-left.png, FC-right.png, Spurpanel001.png]
 
Dec 15, 2020 at 2:54 PM Post #497 of 1,816
Interesting. I'm about 1.5 m from the speakers. They are computer speakers.
Now I'm thinking: could the problem be that it's a 2.1 system rather than full-range 2.0?
To the PC it's connected as stereo 2.0, though.
 
Dec 25, 2020 at 7:58 AM Post #498 of 1,816
A little investigation into the plot error has led me here: https://stackoverflow.com/a/64971496. That answer says the problem is caused by a BLAS routine that was somehow broken by a recent Windows update. Hopefully Windows will get a fix soon, but until then you could try using conda instead of pip for managing the dependencies.

@musicreo @Benik3 are you on Windows, perhaps 32-bit or with an AMD CPU? I tested with the files supplied by @Benik3 but could not reproduce the problem. I'm running 64-bit Windows 10 on an Intel CPU.
 
Jan 4, 2021 at 12:01 AM Post #503 of 1,816
I'd like to see virtual surround with headphones extended to 7.1.4 (12 channels) or other immersive formats. Obviously this works well on the Realiser A16 (I have a 2U balanced unit), but its cost is beyond the audio budget of most.

So, how would one use Impulcifer to make the measurements and produce BRIR files for the height channels, and is there any available software to do the equivalent of HeSuVi/Equalizer APO for immersive channel configurations?

Given the BRIRs, I know I can do it using per-channel, per-ear IRs and guitar-cab convolver VSTs, but I would prefer something a little more user-friendly.

I guess non-realtime conversion of 7.1.4 audio files to binaural would be OK, but realtime would be best.

FYI my interest is because I've been experimenting with up/re-mixing of stereo or 5.1 to 7.1.4 with music source separation tools, along with my own upmixer, and would like to see an immersive ecosystem for playback.
 
Jan 4, 2021 at 5:30 AM Post #504 of 1,816
So, how would one use Impulcifer to make the measurements and produce BRIR files for the height channels

You could measure the height channels like every other channel, but you have to do the deconvolution in two steps. First do your 7.0 setup, then rename your height-channel recordings to FL, FR, SL, SR and run the deconvolution again. After that you can merge the two output files together.
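Merging the two outputs can be done with a small script. Here is a sketch of one way to do it in Python, assuming each run produced a hrir.wav in its own folder and that you simply want to append the height-channel pairs after the bed channels (the folder names are hypothetical, and the channel order your playback chain expects is up to you to verify):

import numpy as np
import soundfile as sf

# Hypothetical paths: one Impulcifer run for the 7.0 bed, one for the heights.
bed, fs = sf.read("data/my_hrir_bed/hrir.wav")          # (samples, channels)
heights, fs2 = sf.read("data/my_hrir_heights/hrir.wav")
assert fs == fs2, "both runs must use the same sample rate"

# Pad the shorter file so both have the same length, then append the
# height-channel pairs after the bed channels.
n = max(len(bed), len(heights))
bed = np.pad(bed, ((0, n - len(bed)), (0, 0)))
heights = np.pad(heights, ((0, n - len(heights)), (0, 0)))
merged = np.concatenate([bed, heights], axis=1)

sf.write("data/my_hrir_7.1.4/hrir.wav", merged, fs, subtype="FLOAT")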

and is there any available software to do the equivalent of HeSuVi/Equalizer APO for immersive channel configurations?

FYI my interest is because I've been experimenting with up/re-mixing of stereo or 5.1 to 7.1.4 with music source separation tools, along with my own upmixer, and would like to see an immersive ecosystem for playback.

I think there is one realtime solution: do the upmix in EQ-APO directly before the convolution.
 
Jan 4, 2021 at 10:39 PM Post #505 of 1,816
I guess it's a little off-topic, but I had some success doing the convolution with ffmpeg. At least for now, I'm doing it in three steps.

Convolve an input 7.1 file with the 7.1 impulse responses for the left ear and output as mono (first file is the input, second file is the IRs):

"ffmpeg.exe" -y -i "ChannelIDs714-sides first-mapped.wav" -i "A16_7.1_sides_first_left_ear.wav" -filter_complex "[0] [1] afir=dry=10:wet=10" -ac 1 Ch_id_left.wav

now the right ear (same input file):

"ffmpeg.exe" -y -i "ChannelIDs714-sides first-mapped.wav" -i "A16_7.1_sides_first_right_ear.wav" -filter_complex "[0] [1] afir=dry=10:wet=10" -ac 1 Ch_id_right.wav

then join the left and right ear mono files into a stereo (binaural) file:

"D:\Google Drive\12ch-mono-split\ffmpeg.exe" -y -i Ch_id_left.wav -i Ch_id_right.wav -filter_complex "[0:a][1:a]join=inputs=2:channel_layout=stereo[a]" -map "[a]" -acodec pcm_s24le Ch_id_binaurl.wav

Besides the details of the ffmpeg syntax, and how the impulses should be normalized, I don't know what to do with the LFE channel. At the moment I just have an all zeros IR, which zeros out the LFE. Should I just copy the Center channel IR in there?
 
Jan 5, 2021 at 6:50 AM Post #507 of 1,816
FYI my interest is because I've been experimenting with up/re-mixing of stereo or 5.1 to 7.1.4 with music source separation tools, along with my own upmixer, and would like to see an immersive ecosystem for playback.

Off-topic, but: what are you doing for the height channels in your upmixing, and since you mentioned realtime playback, is the upmixing realtime at all?
 
Jan 5, 2021 at 1:21 PM Post #509 of 1,816
What is the advantage of using ffmpeg compared to EQ-APO or ConvolverVST?

My goals are to 1) go beyond 7.1 and 2) have a tool chain that is more user-friendly (to install AND use) and cross-platform, e.g. drag and drop a multichannel surround file onto a converter that gives you binaural.

But realtime tools would also be good, especially as a plugin for popular players.

I have not looked at convolverVST, but VSTs are not super friendly for average users that don't have/use DAWs etc. Especially if you need multiple instances and routing to do surround virtualization for headphones.
 
Jan 5, 2021 at 3:06 PM Post #510 of 1,816
My goals are to 1) go beyond 7.1 [...]
But realtime tools would also be good, especially as a plugin for popular players.

You can work with virtual channels in EQ-APO, and that way you can handle more than 7.1.

I have not looked at convolverVST, but VSTs are not super friendly for average users that don't have/use DAWs etc. Especially if you need multiple instances and routing to do surround virtualization for headphones.

I mentioned ConvolverVST because I used it before EQ-APO for movie playback in MPC (there is a VST and a DirectShow filter), and I still use it in foobar2000 for converting up to 7.1 audio to binaural. I think it can work with more than 7.1, but I have never tested it. Compared to other convolution VSTs, it can handle multiple channels with just one instance.
 
