Mesh2hrtf
Apr 13, 2022 at 2:09 PM Post #47 of 93
No, usually it's a hole that you end up filling in Blender or Meshmixer. The ear canal needs to be plugged, just like when we put foams into the ears. We can experiment with how deep we want the canal and mics to be in the future. But with my scan I had blocked my canal, as there can't be any holes in the mesh.
 
Apr 13, 2022 at 5:09 PM Post #53 of 93
@lohfcasa The image shows the source positions of the SOFA file from morgin. The head is at (0, 0, 0) m.
The more than 1500 source positions lie at r = 1.2 m.
[Attached image: Figure 2022-04-13 225819.png]
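In SOFA files the SourcePosition field is typically stored as (azimuth in degrees, elevation in degrees, radius in meters) with the head at the origin. A minimal Python/NumPy sketch of that convention (the example positions below are made up for illustration, not taken from morgin's file):

```python
import numpy as np

def sofa_spherical_to_cartesian(pos):
    """Convert SOFA-style (azimuth deg, elevation deg, radius m) rows
    to Cartesian (x, y, z) metres, head at the origin."""
    az = np.radians(pos[:, 0])
    el = np.radians(pos[:, 1])
    r = pos[:, 2]
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=1)

# Example: three sources on the r = 1.2 m sphere (front, left, directly above)
pos = np.array([[0.0, 0.0, 1.2],
                [90.0, 0.0, 1.2],
                [0.0, 90.0, 1.2]])
xyz = sofa_spherical_to_cartesian(pos)
print(np.round(xyz, 3))
```

Every row ends up at distance 1.2 m from the origin, which is what "source positions at r=1.2m" means in practice.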
 
Apr 13, 2022 at 6:57 PM Post #55 of 93
I’m reading that the positions can be changed.

This is a short primer on the possibilities and limitations of individual HRTFs.

While the simulated (or measured) HRTF .sofa file is the key ingredient for accurate sound over headphones, it is important to optimize and take advantage of the other aspects of the reproduction pipeline:

  1. Different SOFA binauralizers - There are different binaural synthesis approaches and different methods to switch or interpolate between sound source positions, so different software packages that use SOFA files can sound different and also require more or fewer computation resources.
  2. Headphone HpTF equalization - There are different headphone equalization approaches based on manual tuning or measurements of headphone transfer function (HpTF). Headphone equalization for HRTFs still has a lot of improvement potential.
  3. Different headphones, and different types of headphones, each have hardware limitations and may be more or less difficult to equalize. For example, in-ear headphones have the best fit consistency, while open-back headphones are the easiest to measure and equalize using blocked-ear-canal microphones.
    • The usual recommendation for headphones is to use open-back, over-ear headphones (see "FEC headphone" criteria in Møller et al. 1995) that have good re-seating consistency. But clearly it is possible to equalize any type of headphone to give desired sound in combination with individual HRTF.
  4. Headphone electronics (DAC, sound card and amplifier) may also be a limitation in some scenarios. Note that most headphones perform great on very inexpensive modern electronics (that is, additional improvements in DAC and amplification may not provide an audible difference).
  5. Headtracking is known to significantly improve externalization and overall immersion if it is possible to achieve low latency and reliable tracking.
  6. Upmixing - since HRTF binauralization is not limited by the number of physical speaker elements, it is practical to perform advanced upmixing of, for example, 2-channel stereo to some immersive multi-channel format before the sound is rendered to the headphones. Therefore, just like in home cinema setups, it is possible to experiment with different upmixing algorithms.
  7. Sound source manipulations - Binauralizer algorithms can place each audio channel as an individual point source or as a combination of multiple point sources within a given area. It is also possible to experiment with the preferred virtual sound source ("speaker") locations - for example, try placing the stereo pair higher than usual (elevation angle) or changing the default +/- 30 deg angle.
  8. Change the distance - it is possible to simulate HRTFs for different sound source distances (different simulation grids) and pick the most suitable option. (Currently there is almost no software that can dynamically use multiple sound source distances for the same angle, so the choice must be made by the user and stays constant for a given HRTF.)
  9. Room acoustics can be added to the binauralization to effectively achieve something like a virtual BRIR (note that normal HRTF binauralization may often be preferable because added room reflections reduce the clarity of the incoming signal). Room acoustics can be implemented by a wide range of methods starting from simple added reverb to convolution with recorded room impulse responses up to custom advanced simulated room impulse responses.
  10. Target curves - by exporting the HRTF frequency response for a specific source direction or an average of several relevant directions it is possible to study individual headphone frequency response requirements.
  11. Ambisonics and immersive sound - provided there is a way to access and play back the relevant content, binauralization makes it possible to accurately experience multichannel or object-based content that otherwise requires a very advanced listening space with a precisely positioned and calibrated speaker system.
  12. VR, Gaming & Movies - As a minimum it should be possible to apply SOFA HRTF data to listen to accurate virtual 7.1 surround sound with existing content. Going forward, SOFA support could be added to more platforms and content, such as VR and AR platforms (Steam VR, OpenXR, etc.), specific computer game engines (Unreal, Unity, etc.) or known audio technologies (Facebook 360, Dolby Atmos, etc.).
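To illustrate point 10 above, here is a hedged Python/NumPy sketch of deriving a "target curve" by averaging the magnitude responses of HRIRs from several relevant directions. The HRIRs here are synthetic placeholders; in practice you would read them from the SOFA file for the directions you care about:

```python
import numpy as np

def average_magnitude_db(hrirs, n_fft=512):
    """Average magnitude response (in dB) over a set of HRIRs.
    hrirs: array of shape (n_directions, n_samples)."""
    spectra = np.abs(np.fft.rfft(hrirs, n=n_fft, axis=1))
    mean_mag = spectra.mean(axis=0)          # average across directions
    return 20 * np.log10(np.maximum(mean_mag, 1e-12))  # avoid log(0)

# Synthetic stand-ins for HRIRs of a few relevant directions
rng = np.random.default_rng(0)
hrirs = rng.standard_normal((5, 256)) * np.hanning(256)

target_db = average_magnitude_db(hrirs)
print(target_db.shape)  # one dB value per frequency bin (n_fft // 2 + 1)
```

The resulting curve can then be compared against a headphone's measured response to study individual equalization requirements, as described in point 10.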
 
Apr 14, 2022 at 8:26 PM Post #56 of 93
@musicreo I'm having a hard time getting that script to convert my latest .sofa to .wav. If you have the time, could you do this one for me? It's done using better scans and merging, and also better mic placement.

https://mega.nz/folder/kZsSyAzB#zMmVjQLzIAQT7Y8pRkcsOA


>> hrtf = SOFAload('HRIR_ARI_48000.sofa');
>> %% find the channels for 7.1
>> CH_L = find(hrtf.SourcePosition:),2)==0 & hrtf.SourcePosition:),1)==30 );
CH_L = find(hrtf.SourcePosition:),2)==0 & hrtf.SourcePosition:),1)==30 );

Invalid expression. When calling a function or indexing a variable, use
parentheses. Otherwise, check for mismatched delimiters.
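The MATLAB error above comes from the missing opening parenthesis after SourcePosition: the line should read CH_L = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==30);. The same channel lookup, sketched in Python/NumPy on a made-up SourcePosition table (azimuth deg, elevation deg, radius m), just to show the logic:

```python
import numpy as np

# Made-up SourcePosition table: (azimuth deg, elevation deg, radius m)
source_position = np.array([[0.0, 0.0, 1.2],
                            [30.0, 0.0, 1.2],
                            [330.0, 0.0, 1.2],
                            [30.0, 45.0, 1.2]])

# Front-left channel of a 7.1 layout: azimuth 30 deg at 0 deg elevation
# (equivalent to the corrected MATLAB find() call above)
ch_l = np.where((source_position[:, 1] == 0) &
                (source_position[:, 0] == 30))[0]
print(ch_l)  # row indices of the matching measurement(s)
```

In a real file there would be one matching row per (azimuth, elevation) pair in the measurement grid; the index it returns is what selects the HRIR for that virtual speaker.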
 
Apr 15, 2022 at 7:40 AM Post #59 of 93
What is the maximum radius Mesh2HRTF can simulate?

Is it possible to choose the positions of the virtual sound sources, if only a few are desired?
Mesh2HRTF and SOFA files are developed for object-based sound, VR and the like, but for stereo and multichannel playback such a huge number of sound source positions isn't required, nor very useful.
 
Apr 15, 2022 at 7:51 AM Post #60 of 93
I was gonna ask that. Is there a benefit to having so many sources for 7.1 movies and games? Or am I just wasting time trying to get it to work? Is there any benefit over Impulcifer?

Is Mesh2HRTF better for gaming or music that supports virtual sound sources?

I read that the radius can be changed and that speaker positions can be put in. But the person making the tutorial said he won't be going into all that.
 
