Mesh2hrtf
Apr 17, 2022 at 8:49 AM Post #76 of 93
It was me who mentioned it above, but not many audiologists are equipped with an Otoscan, and even those who are might not hand out the 3D scan, assuming it is in the right format in the first place.
Bear in mind that the ear canal scan needs to be attached to the 3D pinna scan at the correct angle.

Couldn't we fit something mouldable into the ear canal and create a 3D scan of that?

It seems like Mesh2HRTF, or at least Sergey (the YouTube guy from the tutorial), intends to simulate a blocked ear canal measurement, but why?
 
Apr 17, 2022 at 9:40 AM Post #79 of 93
What I have done is use the headphone compensation from Impulcifer and apply it in the virtualisation in HeSuVi.
I know, but it probably is not a perfect "fit" with the Mesh2HRTF results. Don't get me wrong, I am being a bit theoretical and idealistic here; the result may still be good enough. The point is that ideally both speakers and headphones would be measured at the eardrums, without anything (mic, cable) in the ear that could influence the measurement, which would be very difficult to say the least. :) That is exactly why I like the idea of scanning and modelling everything: then you don't have such problems.
Measuring at the entrance of a blocked ear canal often makes the virtual speakers sound a bit too bright compared to the real speakers. That is what the Smyth brothers are saying, and I have the same experience with the Smyth Realiser (which basically works the same as Impulcifer/HeSuVi but with head tracking). This can be solved with additional manual EQ, but it is very difficult to get right.
And now you are mixing two different methods, which may add to the imperfections.

But don't worry about it, I am just thinking out loud.
 
Apr 17, 2022 at 9:54 AM Post #80 of 93
No, it's cool, thinking like that advances how we go about doing things. I'm asking because you guys know much more than me; I'm just a hobbyist.
 
Apr 29, 2022 at 5:39 PM Post #82 of 93
When I have time I will look again into the Python SOFA API and do the same in Python, but this will have to wait for now.

The SOFA API for Python is needed: python3 -m pip install python-sofa --user

I think the Matlab code translated to Python should look like this (tested with Spyder 5.1.5):
import sys
sys.path.insert(0, '../../src')

import sofa
import numpy as np
import soundfile as sf

# location and filename of your Mesh2HRTF sofa file
folder = 'C:/Users/xxxx/'
file = 'HRIR_ARI_48000.sofa'
# location and filenames of your converted wav files
out = 'C:/Users/xxxx/'
outname = 'new_file_sofa(L-R-C-LFE-LS-RS-LB-RB).wav'
outname2 = 'new_file_sofa(hesuvi).wav'
outpath = out + outname
outpath1 = out + outname2

HRTF_path = folder + file
HRTF = sofa.Database.open(HRTF_path)
HRTF.Metadata.dump()

# source positions as (azimuth, elevation, distance)
# source_positions = HRTF.Source.Position.get_values(system="cartesian")
source_positions = HRTF.Source.Position.get_values(system="spherical")

# find the measurement index for each speaker angle at elevation theta = 0
# (azimuths are counter-clockwise, so the right-side speakers are 360 - angle)
theta = 0

def find_source(azimuth, elevation=theta):
    # raises a ValueError if the angle is not part of the evaluation grid
    return np.ndarray.item(np.argwhere(
        (np.round(source_positions[:, 1]) == elevation) &
        (np.round(source_positions[:, 0]) == azimuth)))

CH_L = find_source(30)
CH_R = find_source(360 - 30)
CH_C = find_source(0)
CH_LS = find_source(110)
CH_RS = find_source(360 - 110)
CH_LB = find_source(135)
CH_RB = find_source(360 - 135)

# read the left-ear (R=0) and right-ear (R=1) impulse response for every source
emitter = 0
ACH_LL = HRTF.Data.IR.get_values(indices={"M": CH_L, "R": 0, "E": emitter})
ACH_L = HRTF.Data.IR.get_values(indices={"M": CH_L, "R": 1, "E": emitter})
ACH_RL = HRTF.Data.IR.get_values(indices={"M": CH_R, "R": 0, "E": emitter})
ACH_R = HRTF.Data.IR.get_values(indices={"M": CH_R, "R": 1, "E": emitter})
ACH_CL = HRTF.Data.IR.get_values(indices={"M": CH_C, "R": 0, "E": emitter})
ACH_C = HRTF.Data.IR.get_values(indices={"M": CH_C, "R": 1, "E": emitter})
ACH_LSL = HRTF.Data.IR.get_values(indices={"M": CH_LS, "R": 0, "E": emitter})
ACH_LS = HRTF.Data.IR.get_values(indices={"M": CH_LS, "R": 1, "E": emitter})
ACH_RSL = HRTF.Data.IR.get_values(indices={"M": CH_RS, "R": 0, "E": emitter})
ACH_RS = HRTF.Data.IR.get_values(indices={"M": CH_RS, "R": 1, "E": emitter})
ACH_LBL = HRTF.Data.IR.get_values(indices={"M": CH_LB, "R": 0, "E": emitter})
ACH_LB = HRTF.Data.IR.get_values(indices={"M": CH_LB, "R": 1, "E": emitter})
ACH_RBL = HRTF.Data.IR.get_values(indices={"M": CH_RB, "R": 0, "E": emitter})
ACH_RB = HRTF.Data.IR.get_values(indices={"M": CH_RB, "R": 1, "E": emitter})
HRTF.close()

# 16 columns, one left/right ear pair per speaker in L-R-C-LFE-LS-RS-LB-RB order
# (the center pair is duplicated to act as the LFE pair)
audiodata = np.transpose([ACH_LL, ACH_L, ACH_RL, ACH_R, ACH_CL, ACH_C, ACH_CL, ACH_C,
                          ACH_LSL, ACH_LS, ACH_RSL, ACH_RS, ACH_LBL, ACH_LB, ACH_RBL, ACH_RB])
# reorder the columns into the 14-channel layout that HeSuVi expects
newaudiodata = audiodata[:, [0, 1, 8, 9, 12, 13, 4, 3, 2, 11, 10, 15, 14, 5]]
samplerate = 48000
sf.write(outpath1, newaudiodata, samplerate, 'PCM_32')
sf.write(outpath, audiodata, samplerate, 'PCM_32')
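If the find_source(...) lookup throws a ValueError, the requested angle is probably not part of your evaluation grid. A minimal check (a hypothetical snippet, assuming the same file location as above) prints which angles the sofa file actually contains:

import sofa
import numpy as np

HRTF = sofa.Database.open('C:/Users/xxxx/HRIR_ARI_48000.sofa')
pos = HRTF.Source.Position.get_values(system="spherical")
print("azimuths: ", np.unique(np.round(pos[:, 0])))
print("elevations:", np.unique(np.round(pos[:, 1])))
HRTF.close()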
 
May 6, 2022 at 9:50 AM Post #85 of 93
This should let us change the distance of the node positions to more than 1.5 meters.

@musicreo I've tried that script but I'm still struggling to use it in Python. Do I open cmd from the folder my .sofa file is in and just copy and paste the script? Or do I need to add python/python3 before it? I've tried both but it still isn't working.
I also downloaded the SOFA API, Spyder and Anaconda.
https://sourceforge.net/p/mesh2hrtf/wiki/Evaluation grids/



 
May 6, 2022 at 10:54 AM Post #86 of 93
Spyder is a Python GUI and has its own editor to run the script; for me that is much nicer than working with the cmd. If you are looking for a very beginner-friendly Python GUI, I suggest looking at "Thonny".

If you use the cmd, I suggest you save the script with an editor to some folder and name it, for example, "sofatowav.py". Then in the cmd go to the folder where the script is located with "cd c:\yourfoldername\". The script is started with "python sofatowav.py".
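Put together, the whole cmd session (using the example names from above) is just:

cd c:\yourfoldername\
python sofatowav.py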
 
Last edited:
May 6, 2022 at 2:26 PM Post #87 of 93
I've done what you suggested; now I get this:

C:\Users\morgi\Downloads\mesh2hrtf\musicero sofa to wav>python sofatowav.py
Traceback (most recent call last):
  File "sofatowav.py", line 6, in <module>
    import soundfile as sf
ModuleNotFoundError: No module named 'soundfile'
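A likely fix, assuming pip is set up as for python-sofa above: the soundfile module simply isn't installed yet, and installing it the same way should resolve the error:

python3 -m pip install soundfile --user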
 
May 7, 2022 at 5:06 AM Post #90 of 93
The documentation tells you how to do this plot: Open and plot HRTF

You could add the following to the code from above, before "HRTF.close()", and it will save the plot in the same folder as the wav file.

import matplotlib.pyplot as plt
import numpy as np

plotname = "source_positions.png"

def plot_coordinates(coords, title):
    # draw every source position as a small arrow in a 3D quiver plot
    x0 = coords
    n0 = coords
    fig = plt.figure(figsize=(15, 15))
    ax = fig.add_subplot(111, projection='3d')
    q = ax.quiver(x0[:, 0], x0[:, 1], x0[:, 2],
                  n0[:, 0], n0[:, 1], n0[:, 2], length=0.1)
    plt.xlabel('x (m)')
    plt.ylabel('y (m)')
    plt.title(title)
    # "out" is the wav output folder defined in the script above
    plt.savefig(out + plotname)
    return q

source_positionsplot = HRTF.Source.Position.get_values(system="cartesian")
plot_coordinates(source_positionsplot, 'Source positions')
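One small note, in case you run the script from the cmd instead of Spyder: the figure is only written to source_positions.png, nothing is shown on screen. Adding plt.show() after the plot_coordinates(...) call opens an interactive window as well.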
 
