Ok, thanks, sounds like the iPhone X / XS / XR is the way to go and is mostly approved for 3D ear scans, but will the ear canal also be captured?
morgin
100+ Head-Fier
No, usually it's a hole that you end up filling in Blender or Meshmixer. The ear canal needs to be plugged, just like when we put foams into our ears. We can experiment with how deep we want the canal and mics to be in the future. But with my scan I had blocked my canal, as there can't be any holes in the mesh.
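The "no holes in the mesh" requirement can be sanity-checked before simulation. A minimal sketch (not Mesh2HRTF's actual code, and no substitute for real mesh-repair tools like Blender or Meshmixer): a closed triangle mesh has every edge shared by exactly two triangles, so an open ear canal shows up as edges with only one adjacent face.

```python
# Quick sanity check that a triangle mesh is closed ("watertight"),
# which HRTF simulation requires. In a hole-free mesh every edge is
# shared by exactly two triangles; boundary edges reveal holes.

def is_closed_mesh(triangles):
    """triangles: list of (i, j, k) vertex-index triples."""
    edge_count = {}
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge = (min(a, b), max(a, b))
            edge_count[edge] = edge_count.get(edge, 0) + 1
    return all(n == 2 for n in edge_count.values())

# A tetrahedron is closed; removing one face opens a hole.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_closed_mesh(tetra))      # True
print(is_closed_mesh(tetra[:3]))  # False
```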
musicreo
100+ Head-Fier
- Joined
- Jun 25, 2011
- Posts
- 401
- Likes
- 151
For adding more "room", shouldn't it be possible to do an additional convolution with a room impulse response?
@morgin
Wait, one question:
If the HRTF is nothing more than a set of many HRIRs (in free field, without room reflections), then which of these positions are chosen for your HRTF-to-HRIR conversion?
Probably, yes, but isn't this to be done by a program like Mesh2HRTF simulating the individual ear shape in a room?
It all depends on where the room impulse response is measured, right?
morgin
100+ Head-Fier
@morgin
Wait, one question:
If the HRTF is nothing more than a set of many HRIRs (in free field, without room reflections), then which of these positions are chosen for your HRTF-to-HRIR conversion?
Do you have any phantom center with your HRIR generated from the 3D-scan HRTF?

To put it simply… no idea. I don't know much of how it works. I just follow the guides and magic happens.
It could be possible that the selected HRIR matches the left/right positions instead of FRONT left/FRONT right!
musicreo
100+ Head-Fier
@lohfcasa The image shows the source positions of the SOFA file from morgin. The head is at (0, 0, 0) m.
The more than 1500 source positions are at r = 1.2 m.

Who needs over a thousand positions within such a small radius? I was thinking of several meters and more.
Is Mesh2HRTF limited to a maximum radius of 1.2 m? That's a strange decision.
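For readers wondering what those grid coordinates mean: SOFA source positions are typically stored as (azimuth°, elevation°, radius m) around the head at the origin. A small sketch (the axis convention below is the usual SOFA spherical convention, stated here as an assumption) converting one grid point to Cartesian coordinates:

```python
# Convert a SOFA-style (azimuth, elevation, radius) source position to
# Cartesian coordinates, head at (0, 0, 0): x forward, y left, z up.
import math

def sofa_to_cartesian(azimuth_deg, elevation_deg, radius_m):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius_m * math.cos(el) * math.cos(az)
    y = radius_m * math.cos(el) * math.sin(az)
    z = radius_m * math.sin(el)
    return x, y, z

# A front-left stereo position (+30° azimuth, ear height, r = 1.2 m):
print(sofa_to_cartesian(30, 0, 1.2))
```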
morgin
100+ Head-Fier
I’m reading that the positions can be changed.
This is a short primer on the possibilities and limitations of individual HRTFs.
While the simulated (or measured) HRTF .sofa file is the key ingredient for accurate sound over headphones, it is important to optimize and take advantage of other aspects of the reproduction pipeline:
- Different SOFA binauralizers - There are different binaural synthesis approaches and different methods to switch or interpolate between sound source positions; therefore, different software that uses SOFA files can sound different and may require more or fewer computational resources.
- Headphone HpTF equalization - There are different headphone equalization approaches based on manual tuning or measurements of headphone transfer function (HpTF). Headphone equalization for HRTFs still has a lot of improvement potential.
- Different headphones and different types of headphones both have their hardware limitations and may be more or less difficult to equalize. For example, in-ear headphones have the best fit consistency, while open-back headphones are the easiest to measure and equalize using blocked-ear-canal microphones.
- The usual recommendation for headphones is to use open-back, over-ear headphones (see "FEC headphone" criteria in Møller et al. 1995) that have good re-seating consistency. But clearly it is possible to equalize any type of headphone to give desired sound in combination with individual HRTF.
- Headphone electronics (DAC, sound card and amplifier) may also be a limitation in some scenarios. Note that most headphones perform great on very inexpensive modern electronics (that is, additional improvements in DAC and amplification may not provide an audible difference).
- Headtracking is known to significantly improve externalization and overall immersion if it is possible to achieve low latency and reliable tracking.
- Upmixing - as HRTF binauralization is not limited to the number of physical speaker elements it is practical to perform advanced up-mixing of, for example, 2-channel stereo to some immersive multi-channel format before the sound is rendered to the headphones. Therefore just like in home cinema setups it is possible to experiment with different up-mixing algorithms.
- Sound source manipulations - Binauralizer algorithms can place each audio channel as an individual point-source or as a combination of multiple point-sources within a given area. Also it is possible to experiment with the preferred virtual sound source ("speaker") locations - for example, try placing stereo pair higher than usual (elevation angle) or change the default +/- 30 deg angle.
- Change the distance - it is possible to simulate HRTFs for different sound source distances (different simulation grids) and pick the most suitable option. (Currently there is almost no software that can dynamically use multiple sound source distances for the same angle, so the choice must be made by the user and stays constant for a given HRTF.)
- Room acoustics can be added to the binauralization to effectively achieve something like a virtual BRIR (note that normal HRTF binauralization may often be preferable because added room reflections reduce the clarity of the incoming signal). Room acoustics can be implemented by a wide range of methods starting from simple added reverb to convolution with recorded room impulse responses up to custom advanced simulated room impulse responses.
- Target curves - by exporting the HRTF frequency response for a specific source direction or an average of several relevant directions it is possible to study individual headphone frequency response requirements.
- Ambisonics and Immersive sound - provided there is a possibility to access and play back the relevant content, with binauralization it is possible to accurately experience multichannel or object-based content that otherwise requires a very advanced listening space with a precisely positioned and calibrated speaker system.
- VR, Gaming & Movies - As a minimum it should be possible to apply SOFA HRTF data to listen to accurate virtual 7.1 surround sound with existing content. Going forward, SOFA support could be added to more platforms and content such as VR and AR platforms (SteamVR, OpenXR, etc.), specific computer game engines (Unreal, Unity, etc.) or known audio technologies (Facebook 360, Dolby Atmos, etc.).
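The core rendering step that the binauralizer and upmixing points above rely on can be sketched in a few lines: each virtual speaker channel is convolved with the HRIR pair for its direction, and the per-ear results are summed. The one-sample "HRIRs" below are just toy gains; real HRIRs are hundreds of samples long.

```python
# Minimal sketch of binaural rendering of virtual speakers:
# per-channel convolution with the (left, right) HRIR, then summation.

def convolve(x, h):
    out = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            out[i + j] += xi * hj
    return out

def binauralize(channels, hrirs):
    """channels: {name: samples}; hrirs: {name: (left_ir, right_ir)}."""
    n = max(len(sig) + len(hrirs[name][0]) - 1
            for name, sig in channels.items())
    left, right = [0.0] * n, [0.0] * n
    for name, sig in channels.items():
        hl, hr = hrirs[name]
        for i, v in enumerate(convolve(sig, hl)):
            left[i] += v
        for i, v in enumerate(convolve(sig, hr)):
            right[i] += v
    return left, right

# Stereo pair at +/-30 degrees with crude level-difference "HRIRs":
channels = {"FL": [1.0, 0.0], "FR": [0.0, 1.0]}
hrirs = {"FL": ([0.9], [0.4]), "FR": ([0.4], [0.9])}
left, right = binauralize(channels, hrirs)
print(left, right)  # [0.9, 0.4] [0.4, 0.9]
```

Because rendering is per-channel, nothing limits the layout to physical speaker counts, which is what makes the upmixing and source-manipulation experiments described above practical.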
morgin
100+ Head-Fier
@musicreo I'm having a hard time working that script to convert my latest .sofa to .wav. If you have the time, could you do this one for me? It's done using better scans and merging, and also better mic placement.
https://mega.nz/folder/kZsSyAzB#zMmVjQLzIAQT7Y8pRkcsOA
>> hrtf = SOFAload('HRIR_ARI_48000.sofa');
>> %% find the channels for 7.1
>> CH_L = find(hrtf.SourcePosition,2)==0 & hrtf.SourcePosition,1)==30 );
                                  ↑
Invalid expression. When calling a function or indexing a variable, use
parentheses. Otherwise, check for mismatched delimiters.
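Judging by the "mismatched delimiters" error, the `(:,` column indexing seems to have been lost in the paste: the `find` call needs the column selectors restored on both `SourcePosition` terms (column 1 is azimuth, column 2 elevation in the usual SOFA layout). The same lookup, sketched in plain Python over a toy source-position list:

```python
# Find the HRIR measurement index for a given direction, the Python
# analogue of MATLAB's find() over the SourcePosition columns.

def find_channel(source_positions, azimuth_deg, elevation_deg=0.0):
    """Return indices whose (azimuth, elevation) match exactly."""
    return [i for i, (az, el, _r) in enumerate(source_positions)
            if az == azimuth_deg and el == elevation_deg]

# Toy grid: (azimuth deg, elevation deg, radius m)
positions = [(0, 0, 1.2), (30, 0, 1.2), (330, 0, 1.2), (30, 30, 1.2)]
print(find_channel(positions, 30))  # [1]  (front-left at +30 deg)
```

On a real simulated grid the desired angle may not be present exactly, so a nearest-neighbor match is usually safer than exact equality.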
What do you mean by "better mic placement", if it's a 3D scan?
morgin
100+ Head-Fier
You choose which triangle in the mesh you want the mic in, for each ear.
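Geometrically, picking a triangle effectively places the virtual mic at that mesh element; a natural representative point is the triangle's centroid, the mean of its three vertices. An illustrative sketch, not Mesh2HRTF's actual code:

```python
# Centroid of a chosen mesh triangle, used here as the nominal
# "mic" position for that element.

def triangle_centroid(v0, v1, v2):
    return tuple((a + b + c) / 3.0 for a, b, c in zip(v0, v1, v2))

print(triangle_centroid((0, 0, 0), (3, 0, 0), (0, 3, 0)))  # (1.0, 1.0, 0.0)
```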
What is the maximum radius Mesh2HRTF can simulate?
Is it possible to choose the positions of the virtual sound sources, if only a few are desired?
Mesh2HRTF and SOFA files are developed for object-based sound, VR and the like, but for stereo and multichannel playback such a huge number of sound source positions isn't required, nor very useful.
morgin
100+ Head-Fier
I was gonna ask that. Is there a benefit to having so many sources for 7.1 movies and games? Or am I just wasting time trying to get it to work? Is there any benefit over Impulcifer?
Is Mesh2HRTF better for gaming or music that supports virtual sound?
The radius, I read, can be changed and speaker positions can be put in. But the person making the tutorial said he won’t be going into all that.