Mesh2hrtf
Apr 12, 2022 at 2:14 AM Post #31 of 93
@morgin: I could guess it doesn't sound like speakers in a room, but is the sound outside your head? And how far? Have you also tried listening to a single channel and how far outside the head does that sound? (The Smyth Realiser has a "solo" function which lets you listen to a single virtual speaker of choice which can be very informative. In your case to do that you could use a test track that only has sound in one channel.)
I’m just going to bed now but will try it. The sound was like when I used the Dolby Atmos preset: the distance is the same, but sounds are clearer and more distinct. I wish I could adjust the width of the surround to match Impulcifer.

Also, there is no headphone compensation, so I might try an oratory EQ.
 
Apr 12, 2022 at 2:33 AM Post #32 of 93
Wow! OK, so I wasn't expecting that, but it's actually very good, and that's considering this was not the most detailed ear scan or perfect ear-microphone placement. It's way more detailed and crisp than my Impulcifer result, but I prefer Impulcifer because of the wide soundstage it gives, like the whole room is filled with sound. If there's a way to get this to be as wide, or to sound like there are speakers around me, it's a very, very good option. I would recommend you try this out if you have an iPhone X (newer models have a worse TrueDepth camera) and 16 GB of RAM or more. I would love to help if anyone wants to try it. I will be torn between using this and Impulcifer now.
That sounds promising.
@musicreo thank you again, I hope you PM me. Was it difficult to convert it into .wav, or will someone like me be able to do it in the future? I'm pumped to get better scans now.
It is not difficult. In MATLAB you can use the SOFA API to load the SOFA file into a struct. It should work similarly in Python. The MATLAB API also works with the free program Octave.
This would be the code:
SOFAstart;
% Load your impulse response into a struct
hrtf = SOFAload('HRIR_ARI_48000.sofa');
%% find the channels for 7.1
CH_L = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==30);
CH_R = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==360-30);
CH_C = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==0);
CH_LS = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==110);
CH_RS = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==360-110);
CH_LB = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==135);
CH_RB = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==360-135);
% combining the channels (the center channel is used twice: once as C, once as LFE)
audioch=[hrtf.Data.IR(CH_L,:,:),hrtf.Data.IR(CH_R,:,:),hrtf.Data.IR(CH_C,:,:),...
 hrtf.Data.IR(CH_C,:,:),hrtf.Data.IR(CH_LS,:,:),hrtf.Data.IR(CH_RS,:,:),...
 hrtf.Data.IR(CH_LB,:,:),hrtf.Data.IR(CH_RB,:,:)];
audioch=squeeze(audioch)'; %normal
audioch_hesuvi=audioch(:,[1 2 9 10 13 14 5 4 3 12 11 16 15 6]); %normal to hesuvi
% writing audio data
outfolder='C:\Users\matlab_privat\SOFA_myHRTF_project_merged\';
outname='HRIR_ARI_48000(L-R-C-LFE-LS-RS-LB-RB).wav';
outname2='HRIR_ARI_48000_hesuvi.wav';
Fs=48000;
audiowrite([outfolder,outname],audioch,Fs,'BitsPerSample',32);
audiowrite([outfolder,outname2],audioch_hesuvi,Fs,'BitsPerSample',32);

edit:
:) removed
 
Last edited:
Apr 12, 2022 at 7:21 AM Post #33 of 93
@morgin: I could guess it doesn't sound like speakers in a room, but is the sound outside your head? And how far? Have you also tried listening to a single channel and how far outside the head does that sound? (The Smyth Realiser has a "solo" function which lets you listen to a single virtual speaker of choice which can be very informative. In your case to do that you could use a test track that only has sound in one channel.)
The sounds with a single track are just outside my head, like where the headphones sit, so not like Impulcifer, where the sound is 6 ft away, as the speakers were when I did my measurements.

But I’m reading in the Mesh2hrtf tutorial that there are ways of simulating speaker positions and choosing how many you want. I just don’t know how to do that yet.

SOFAstart;
% Load your impulse response into a struct
hrtf = SOFAload('HRIR_ARI_48000.sofa');
%% find the channels for 7.1
CH_L = find(hrtf.SourcePosition:),2)==0 & hrtf.SourcePosition:),1)==30 );
CH_R = find(hrtf.SourcePosition:),2)==0 &
Why does the script have emojis? Surely that can’t be part of the code.

Is it a bad thing to mention this in the Impulcifer thread? After including oratory's EQ for my HD 560S, I think this is what I'll be using for watching movies, music, and gaming. I definitely want others to try this.

[Attached image: 678E4370-7535-420A-9EF0-331D1FBAD1A9.jpeg]

These sliders: is there a way to go above the limit of 30 by changing the program code?
 
Last edited:
Apr 12, 2022 at 9:30 AM Post #36 of 93
Why does the script have emojis? Surely that can’t be part of the code.
The Head-Fi website interprets it as BB code and changes :) into a smiley.
If you temporarily quote that post and press the BB code toggle (the "[ ]" button at the upper right of the area where you type your post), you will see the correct text (in this case starting after [ SIZE = 3 ] and ending before [ / SIZE ]; I inserted spaces in the SIZE bits here to avoid the same problem in this post), and you can then copy and paste that part into a text file.
(Afterwards delete the quote instead of posting it of course.)

[Edit: and that is exactly what happened to me now: : <without space> ) was changed into :) ]
 
Last edited:
Apr 12, 2022 at 9:54 AM Post #37 of 93
Code:
SOFAstart;
% Load your impulse response into a struct
hrtf = SOFAload('HRIR_ARI_48000.sofa');
%% find the channels for 7.1
CH_L = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==30);
CH_R = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==360-30);
CH_C = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==0);
CH_LS = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==110);
CH_RS = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==360-110);
CH_LB = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==135);
CH_RB = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==360-135);
% combining the channels (the center channel is used twice: once as C, once as LFE)
audioch=[hrtf.Data.IR(CH_L,:,:),hrtf.Data.IR(CH_R,:,:),hrtf.Data.IR(CH_C,:,:),...
hrtf.Data.IR(CH_C,:,:),hrtf.Data.IR(CH_LS,:,:),hrtf.Data.IR(CH_RS,:,:),...
hrtf.Data.IR(CH_LB,:,:),hrtf.Data.IR(CH_RB,:,:)];
audioch=squeeze(audioch)'; %normal
audioch_hesuvi=audioch(:,[1 2 9 10 13 14 5 4 3 12 11 16 15 6]); %normal to hesuvi
% writing audio data
outfolder='C:\Users\matlab_privat\SOFA_myHRTF_project_merged\';
outname='HRIR_ARI_48000(L-R-C-LFE-LS-RS-LB-RB).wav';
outname2='HRIR_ARI_48000_hesuvi.wav';
Fs=48000;
audiowrite([outfolder,outname],audioch,Fs,'BitsPerSample',32);
audiowrite([outfolder,outname2], audioch_hesuvi,Fs,'BitsPerSample',32);
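Since the post above notes this "should work similar in python", here is a rough numpy sketch of the same channel-selection and HeSuVi-reordering logic. This is an assumption-laden illustration, not any tool's official code: synthetic positions and impulse responses stand in for a real SOFA file (which could be loaded with a library such as sofar), and all names are illustrative.

```python
import numpy as np

# Synthetic stand-in for SOFA data: rows of (azimuth, elevation, radius)
# and an M x 2 x N array of impulse responses (2 ears, N samples each).
azimuths = np.array([0, 30, 330, 110, 250, 135, 225], dtype=float)
positions = np.stack(
    [azimuths, np.zeros_like(azimuths), np.ones_like(azimuths)], axis=1)
irs = np.random.randn(len(azimuths), 2, 256)

def channel_index(positions, azimuth, elevation=0):
    """Return the row index whose (azimuth, elevation) matches exactly."""
    idx = np.nonzero((positions[:, 1] == elevation)
                     & (positions[:, 0] == azimuth))[0]
    return int(idx[0])

# 7.1 speaker azimuths; the centre channel is reused as LFE, as in the
# MATLAB script above. Wrap 360-x into the 0..359 convention used here.
speaker_az = [30, 360 - 30, 0, 0, 110, 360 - 110, 135, 360 - 135]
rows = [channel_index(positions, az % 360) for az in speaker_az]

# Stack to (N, 16): for each speaker, its left- and right-ear IR side by side.
audioch = np.concatenate([irs[r].T for r in rows], axis=1)

# Reorder columns into HeSuVi's 14-channel layout (indices taken from the
# MATLAB post, converted from 1-based to 0-based).
hesuvi_order = np.array([1, 2, 9, 10, 13, 14, 5, 4, 3, 12, 11, 16, 15, 6]) - 1
audioch_hesuvi = audioch[:, hesuvi_order]
print(audioch.shape, audioch_hesuvi.shape)
```

Writing the result out would then be one call to an audiowrite equivalent, e.g. `soundfile.write` with a float subtype.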

In the tools on top of the posting area(where you can add emojis, pictures change font size etc), there's one for code that looks like this </> and avoids interpreting what's writen.

edit, did I just create a new mess with that? ^_^ ahahah, I'm a noob.
 
Last edited:
Apr 12, 2022 at 11:14 AM Post #38 of 93
A pure HRTF doesn't contain any echo or reverb, and therefore the sound will lack any sense of room or distance; it simulates the acoustics of an anechoic chamber.
The same experience is described with Genelec's "Aural ID" HRTF service, so a program for adding room reverb is desirable.
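To make the "no room" point concrete, one crude way to add a sense of space to an anechoic impulse response is to append a decaying noise tail. This numpy sketch is purely illustrative (it is not what Mesh2hrtf or any plugin mentioned here actually does); the RT60 value and wet level are arbitrary assumptions.

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(0)

# Anechoic stand-in HRIR: a short impulse response (e.g. from Mesh2hrtf).
hrir = np.zeros(256)
hrir[0] = 1.0

# Build a diffuse tail: white noise shaped by an exponential decay whose
# rate is set by a target RT60 (time for the level to fall by 60 dB).
rt60 = 0.3                       # seconds; a small-room value (assumption)
tail_len = int(fs * rt60)
t = np.arange(tail_len) / fs
decay = 10 ** (-3.0 * t / rt60)  # reaches -60 dB at t = rt60
tail = rng.standard_normal(tail_len) * decay * 0.05  # 0.05 = wet level

# Direct (anechoic) part first, then the reverberant tail; convolving
# audio with this combined IR adds diffuse reverberation.
hrir_with_room = np.concatenate([hrir, tail])
print(len(hrir_with_room))
```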


I don’t know how to apply the HpEQ from Impulcifer to the Mesh2hrtf HRIR. I’ll try 3D toolkit and Anaglyph.
The personal HpEQ is still indispensable for using the HRIR convolution properly. Doesn't Impulcifer store the HpEQ file separately anywhere?
 
Last edited:
Apr 12, 2022 at 12:47 PM Post #39 of 93
The personal HpEQ is still indispensable for using the HRIR convolution properly. Doesn't Impulcifer store the HpEQ file separately anywhere?
I'm sorry, but I don't know what an HpEQ is, or whether Impulcifer stores one, or even what I could do with either of those. Sorry for my incompetence.

so a program for adding room reverb is desirable.
I found Ambient Reverb, and it is widening the room. But I can't change or turn the knobs?
 
Last edited:
Apr 12, 2022 at 12:59 PM Post #40 of 93
I'm sorry but I don't know what a hpeq
HpEQ is the Smyth name for headphone compensation.

[Edit: But I don't know exactly how to use that separately with HeSuVi and your Mesh2hrtf-based HRIR.]
 
Last edited:
Apr 12, 2022 at 2:02 PM Post #41 of 93
In the end, an HpEQ is a set of EQ values and can therefore be loaded into HeSuVi's EQ bank, right?
There is an HpEQ section (based on AutoEq) in HeSuVi, where the IRs for headphone compensation are placed; we should be able to import our personal HpEQs there.
But I don't know where Impulcifer stores the HpEQ file; we need to ask jaakkopasanen.
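If the HpEQ can be exported as an impulse response, applying it is just one more convolution after the HRIR stage. A minimal numpy sketch of that idea (the one-tap filter here is a placeholder for a real measured compensation filter, and the whole snippet is an assumption about how such a filter would be applied, not HeSuVi's actual internals):

```python
import numpy as np

fs = 48000
# 1-second test tone standing in for an already binauralized signal.
signal = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)

# Placeholder headphone-compensation IR: a unit impulse, i.e. "no EQ".
# A real HpEQ would be a measured or derived FIR filter instead.
hpeq_ir = np.zeros(128)
hpeq_ir[0] = 1.0

# Convolve the signal with the compensation filter.
compensated = np.convolve(signal, hpeq_ir)
print(compensated.shape)
```

With the unit-impulse placeholder the output equals the input (padded by the filter length), which makes it easy to verify the plumbing before dropping in a real filter.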
 
Apr 12, 2022 at 2:05 PM Post #42 of 93
At the end an hpeq is a set of eq-values and therefore can be loaded into hesuvi's eq bank, right?
There is an hpeq section ( based on autoeq ) in hesuvi, where the irs for headphone compensation are placed, we should be able to import our personal hpeqs.
But I don't know where impulcifer stores the hpeq file, we need to ask jaakopasaanen.
Is this what gives us the speaker feeling, or is it just EQ for our headphones?
 
Apr 13, 2022 at 10:55 AM Post #44 of 93
Which device is most appropriate for a 3D scan including the ear canal, or is it more a question of the right handling?
An iPhone X/XS with Face ID, one of those Android smartphones with a ToF sensor like the Galaxy S10 5G, or a dedicated 3D scanner?
 
Apr 13, 2022 at 11:15 AM Post #45 of 93
Which device is most appropriate for a 3D scan including the ear canal, or is it more a question of the right handling?
An iPhone X/XS with Face ID, one of those Android smartphones with a ToF sensor like the Galaxy S10 5G, or a dedicated 3D scanner?
They have done the tests, and the iPhone X is the best model. I have both an iPhone X and an iPhone 13 Pro, and the iPhone X's TrueDepth camera is way more detailed.

This is from their tutorial:
  1. iPhone with Face-ID sensor – a lot of iOS devices have Face-ID feature with a structured light 3D sensor (this sensor has been independently evaluated for use as a 3D scanner - paper1, paper2, paper3). Scanning using iPhone TrueDepth sensor requires repeated attempts, care and practice, but can produce results that are detailed enough for as little as 10 Euros for a suitable app (assuming you have access to a compatible iOS device).
    • iPhone Face-ID method currently (2022) produces the best results among near-free 3D scanners.
    • There are non-Apple devices (both standalone and integrated into smartphones) with similar or better structured light sensors as in iPhones, but the difficult task is to find good 3D scanning software for those devices. Plus the cost of alternative devices often exceeds the cost of a second-hand iPhone with suitable hardware.
    • LIDAR sensor on some iOS devices and other smartphones (for 2022) have too low resolution to be useful for head scanning. The same issue applies to other long-range 3D scanners - not all 3D sensors are suitable for scanning small details at close range.

https://sourceforge.net/p/mesh2hrtf/wiki/Basic_tutorial_3d_scanning/
 
