Off Topic Thread: Off Topic Is On Topic Here
Apr 9, 2020 at 8:09 AM Post #76 of 184
[1] I think there is a fundamental misunderstanding here. A headphone matched to your HRTF and without phase lag/lead artefacts will not project a soundstage of its own. It'll take on the soundstage of the recorded content. If the content has a map relating to space as your brain deciphers it, you'll be able to perceive it. In my opinion, that is what I would call "accuracy".
[2] I'd like to be placed at the position of the mic.
[3] Comparing space with stereo recordings is futile, not because you can't perceive space but because there are multiple ways of creating space and at most one of them will be accurate to the recording ambience.

I think we're broadly in agreement here, although there are a couple of points worth clarifying:

1. The content effectively does (with a binaural recording) have "a map relating to space as" a brain deciphers it; the danger, though, is in asserting "your brain" or "you'll be able to perceive", because different people have significantly different brains/perceptions. Some people report that certain standard stereo recordings on headphones sound "just like being there" (even with height information); for others a relatively simple crossfeed is enough; a generic HRTF is enough for others; while a more comprehensive/personalized HRTF would be required by others, and the addition of head-tracking would encompass even more people. By the time we get to the last one, we've probably accounted for the vast majority of people but probably still not everyone.

2. Mmmm, possibly but if so, that's your personal preference and it might not even be true for you. Many audiophiles have asserted the same/similar but it's not applicable generally and often not applicable even to those making the assertion (although I obviously don't know if that's the case with you personally). Because:

3. With the exception of binaural recordings, virtually no recordings are ever accurate to the recording space, intentionally. You cannot be "placed at the position of the mic" for the vast majority of orchestral recordings, for example, because the "mic position" is actually 30 or more different positions, many metres apart, and this is true of both 2 channel stereo and surround recordings. And with the vast majority of popular music recordings, there never was a "recording ambience" but an artificial conglomeration of different ambiences, some/many of them generated by processors.

[1] I went through that video long ago and unfortunately it didn't answer my specific question.
[2] Regarding the visualizations using instruments, they can be deceptive.
[2a] I am looking for the pure math that deals with this. Like the link I posted above.
[3] And I don't see phase plots in any of his visualizations, nor did I see signals that can have phase deviations between different frequency components.

1. Then I must have misunderstood your specific question; I thought you were disputing the ability of PCM to represent timing differences less than the sample period?

2. They can be deceptive, but not in this case; the output to the oscilloscope is conclusive, and I don't see how the example you can try for yourself could be deceptive.
2a. It's your choice to only accept "pure math" rather than objective measurements or practical experiments, but that's a very specialist area. You'd probably need to look into the literature for certain DSP programmers/developers. Hydrogen Audio is about as far down that path as I personally have ventured and you can find several threads relating to this topic, which include some math and MATLAB examples/explanations. Just found this one, but I recall seeing several over the years.

3. Again, starting around 20:50 on the video it's not just a visualisation but a visualisation plus the proof at the output (with an oscilloscope) that temporal resolution exceeds the sample period.
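
If you'd rather verify this in code than trust the oscilloscope, here's a minimal sketch (Python/NumPy, with my own illustrative tone frequency and 5 us delay rather than the video's actual test signals) showing that a delay far smaller than the 44.1 kHz sample period survives sampling and can be recovered from the sampled data:

```python
# Two tones sampled at 44.1 kHz but offset by only 5 microseconds (well under
# the ~22.7 microsecond sample period): the offset is preserved by sampling
# and can be read back from the samples.
import numpy as np

fs = 44100            # sample rate (Hz)
f = 1000              # test tone frequency (Hz)
tau = 5e-6            # true delay: 5 us, a fraction of the 22.7 us sample period
N = fs                # one second of samples -> exactly 1000 cycles, no leakage

t = np.arange(N) / fs
x_ref = np.sin(2 * np.pi * f * t)          # reference tone
x_del = np.sin(2 * np.pi * f * (t - tau))  # same tone, delayed by 5 us, then sampled

# Recover the delay from the phase difference of the two sampled signals at the
# tone frequency (with N = fs the bin spacing is 1 Hz, so bin index f is exact).
X_ref = np.fft.rfft(x_ref)
X_del = np.fft.rfft(x_del)
phase_diff = np.angle(X_del[f]) - np.angle(X_ref[f])
tau_est = -phase_diff / (2 * np.pi * f)

print(f"true delay : {tau * 1e6:.3f} us")
print(f"estimated  : {tau_est * 1e6:.3f} us")   # ~5.000 us despite the 22.7 us grid
```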

G
 
Apr 9, 2020 at 8:37 AM Post #77 of 184
1. POT, KETTLE, BLACK!!!

2. And anyone else will see that a binaural recording is defined by having some form of HRTF, that without some form of HRTF a recording is NOT a "binaural" recording and they can see this by reading the quote in the post to which you're replying!

3. You've got to be joking?? My first response to you in this thread (post #22): "The argument of accurate spatiality is therefore similar to arguing that an image of a white unicorn is more accurate than an image of a pink unicorn! The exception, ironically, is binaural recordings (reproduced on headphones), which ARE spatially accurate, although only relative to a certain generic HRTF." and "HRTFs, etc, replicate all that spatial information being reduced down to the two datum points of your ear drums and therefore headphones with the correct HRTFs, etc., should be "better at localisation" than even the latest surround format with multiple speakers."

My next response to you (post #38): "In fact, this is entirely possible, this calculation is called a HRTF (Head Related Transfer Function). This is inherently BETTER at localisation than speakers, because ..."

And the next (post #47): "Please provide some supporting evidence that most people who've heard a binaural recording (with a compatible HRTF) have not been able to localise "anywhere"".

And the next (post #51): "I absolutely did not say "never mind binaural or not" and I've clearly been stating ears (which include pinna) and HRTFs, which also includes pinna."

And even my very first post in this thread (#2): "This OBVIOUSLY doesn't prove that for an individual listener in their own home, speakers are always better than headphones with a binaural recording suited to their HRTF. " ... And: "Given a good binaural recording suited to an individual's HRTF, headphones can indeed sound better than a 5.1 speaker setup..."

It's hard to imagine how your assertion (of "absolutely no disclaimer about binaural recording") could be more FALSE!!

4. But I'm NOT "back-pedalling", I'm doing the exact opposite!
4a. And "last time I checked most all recordings are not" Dolby Atmos, didn't seem to stop you from arguing about it though! However, the point (which I've also made clear) is that headphones can be better at localisation than speakers and there are enough binaural recordings (some of which are aimed at mass markets) to make this assertion realisable in practice (for some people) and not just a theoretical possibility.

5. And again, another FALSE quote attributed to me! I claimed there were TWO points and TWO sensory organs, don't you know what the word "binaural" means? Therefore:
5b. How do you think that humans decode localisation information, you think maybe our ears talk to each other and work it out themselves, without the involvement of the brain?
5c. You should indeed be "sorry" and probably sue for your class fees to be returned if your anatomy/physiology prof taught you that the auditory nerves are connected to each other rather than to the brain. You would have been far better served by spending a few minutes on Wikipedia:

"The sound localization mechanisms of the mammalian auditory system have been extensively studied. The auditory system uses several cues for sound source localization, including time- and level-differences (or intensity-difference) between both ears, spectral information, timing analysis, correlation analysis, and pattern matching. " ... "The brain utilizes subtle differences in intensity, spectral, and timing cues to allow us to localize sound sources."

6. And sad that I should have to!!

G

What is sad is that you write these long, randomly outlined responses to either take things out of context or have comprehension issues. Example: what do you think afferent and efferent nerve innervation means?? When I said "point" I was obviously meaning one point on each side of the head. How many times do I have to say sound first starts with sound interaction on the pinnae, which influences how frequencies get to the eardrum and inner ear? Researchers have even found an amplification through efferent nerves going to the inner ear. Also, if you're content with these long responses, how is one supposed to know to go back 16 posts to see that you mentioned binaural recording (when you could have included it with your claim that headphones can localize "anywhere")? Even in this post, you seem to be conflating HRTF with binaural recording again.
 
Apr 9, 2020 at 8:53 AM Post #78 of 184

"I thought you were disputing the ability of PCM to represent timing differences less than the sample period" - I am. He is only talking about higher frequency signals, I am talking about time/sample alignment. I've clearly defined what I find to be missing in that video, what the abstraction is masking (missing coverage on phase data), what I am expecting as a proof and I have also pointed out an article that theoretically derives some of that.

I will not accept the abstractions in the video as proof. Give me better coverage in areas I've asked for and I'll try to verify on my side and take the data in.
 
Apr 9, 2020 at 9:53 AM Post #79 of 184
"I thought you were disputing the ability of PCM to represent timing differences less than the sample period" - I am. He is only talking about higher frequency signals, I am talking about time/sample alignment. I've clearly defined what I find to be missing in that video, what the abstraction is masking (missing coverage on phase data), what I am expecting as a proof and I have also pointed out an article that theoretically derives some of that.

I will not accept the abstractions in the video as proof. Give me better coverage in areas I've asked for and I'll try to verify on my side and take the data in.


It's your responsibility to support claims you make, not the responsibility of everyone else. Is this going to turn into another "I hear differences in every USB cable but have no evidence" and "I hear differences in every music player but have no evidence" like your other threads on this forum?
 
Apr 9, 2020 at 10:03 AM Post #80 of 184

I don't know why you're so butthurt or why you want to sneak in an off topic discussion. In the USB cable thing I clearly said it was my "prediction". Differences in music player software have been measured independently by more than one person, so there is clear evidence on that front. So here it's nothing of that sort, neither the former nor the latter. Things have already been studied for timing precision requirements. I put forward a valid question on the ability to capture timing delay. I also embedded at least one link that explains the concept mathematically with derivations and visualizations. Don't tell me mathematics is subjective. I will not accept the video as conclusive proof because it misses a lot of coverage of scenarios that exist in real signals - and I specifically pointed those out.
 
Apr 9, 2020 at 10:13 AM Post #81 of 184
I would take the effort to code the same in MATLAB once I am clear with the thesis part. I'm more of a math/non-linearities guy who loves to look at the derivation in full with all the bounding criteria.

...[snip]...

I'll do my homework to derive the same thing before coming to a conclusion. I do it for everything, I don't like accepting something just because someone said so. In the end I'll have a much better understanding and conclusion.
Excellent! That's the approach I often take in these situations. I completely agree that your understanding will be better, if you prove it to yourself.
I've clearly defined what I find to be missing in that video, what the abstraction is masking (missing coverage on phase data), what I am expecting as a proof ...
[snip]
I will not accept the abstractions in the video as proof. Give me better coverage in areas I've asked for and I'll try to verify on my side and take the data in.
I understand this stuff pretty well (or at least in my mind, I do). I also understand the terms you use, but perhaps not so much the way you use them. You seem very clear in your own mind about what you know and what you want to know, but it is unclear to me. Perhaps I can help with the "coverage", but I don't want to put in a lot of time if I'm not really understanding what "coverage" you mean.
...I have also pointed out an article that theoretically derives some of that.
I am looking for the pure math that deals with this. Like the link I posted above.
I'm sorry I missed it. Can you post the link again?
"I thought you were disputing the ability of PCM to represent timing differences less than the sample period" - I am.
This is where I would start. I can easily tell you how to prove this mistake to yourself. This may focus further questions/explorations.

Good luck, and have fun!
 
Apr 9, 2020 at 10:18 AM Post #83 of 184
Apr 9, 2020 at 10:24 AM Post #84 of 184
By coverage I meant signals measured along with their phase, for starters. You can have a different wave with the same harmonic amplitude spectrum as a square wave, with the differences being in the relative phase. So when I want to analyse a signal via FFT, I expect a complex FFT, i.e. both amplitude and phase info. This will also expose any artefacts in the windowing method used. And in the case of timing differences, capturing the exact timing difference: in this case, whether something at a 10 us delay can be captured and reconstructed within 44.1 kHz sampling, if it is possible.
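
To make that concrete, here's a minimal sketch of the two checks I mean, using my own assumed parameters rather than any particular test signal: (a) equal magnitude spectra do not imply equal waveforms, so phase has to be shown; (b) a 10 us delay under 44.1 kHz sampling shows up as a linear phase term and can be read back.

```python
import numpy as np

fs = 44100
N = fs                      # 1 s window
t = np.arange(N) / fs
f0 = 100                    # fundamental of the test signal (Hz)

# (a) band-limited "square-like" wave vs. same harmonic amplitudes with scrambled phases
harmonics = np.arange(1, 40, 2)                      # odd harmonics 1..39
square = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in harmonics)
rng = np.random.default_rng(0)
scrambled = sum(np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi)) / k
                for k in harmonics)
mag_sq = np.abs(np.fft.rfft(square))
mag_sc = np.abs(np.fft.rfft(scrambled))
print("magnitude spectra match :", np.allclose(mag_sq, mag_sc, atol=1e-6))  # True
print("waveforms match         :", np.allclose(square, scrambled))          # False

# (b) delay the square-like wave by 10 us (a fraction of the 22.7 us sample
# period) via a frequency-domain phase ramp, then estimate the delay back
# from the phase difference at the fundamental.
tau = 10e-6
freqs = np.fft.rfftfreq(N, 1 / fs)
X = np.fft.rfft(square)
X_delayed = X * np.exp(-2j * np.pi * freqs * tau)
delayed = np.fft.irfft(X_delayed, n=N)
bin_f0 = f0                                          # exact bin: integer cycles in window
phase_diff = np.angle(np.fft.rfft(delayed)[bin_f0]) - np.angle(X[bin_f0])
print(f"recovered delay: {-phase_diff / (2 * np.pi * f0) * 1e6:.2f} us")     # ~10.00 us
```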
 
Apr 9, 2020 at 3:40 PM Post #85 of 184
It has been several centuries since any single person could absorb all knowledge, and even then it was truly rare. In recent times, it is simply true that, for all of us, what we don't know far exceeds what we know.

There are a few Zen-like tricks to deal with information overload...

• Know what you know, and know what you don't know. Try your best to just speak about the former.

• Listen to people carefully, because just about everyone knows something you don't.

• Consider your audience and convey what you know to them clearly in a way they can absorb.

• Truth is sometimes relative, not absolute. Seeing things from other points of view can reveal aspects of truth you wouldn't have noticed otherwise.

• Don't let your ego stand between you and knowledge, and don't let it prevent you from conveying your knowledge to others.

• Don't suffer fools gladly.

I'll let you guys go back to your Brobdingnagian discussions about which side of the egg to crack. Eventually, you can all declare yourselves the winner and go home.
 
Apr 9, 2020 at 7:44 PM Post #86 of 184
In this thread there is a lot going on: several complex subjects are touched on, many details are involved that are not mentioned but are sometimes assumed to be known, and some details are overlooked.

I want to try to clear a few things up.

First: there is one difference of opinion that maybe not everyone here is fully aware of, but which plays a key role in accepting or rejecting the "two datum points" idea.
Sound first interacts with our earlobes (or pinnae, which funnels frequencies on certain areas of the eardrum).
sound on eardrum starts with pinnae which filters frequencies on different areas of the ear drum (and is more involved than his claim of just one point audio sensation that's all decoded by brain).
Before arguing whether or not this is true and whether or not this - exciting different areas of the ear drum - plays a role in sound localisation, first I want to make sure that you @Davesrose understand what I, and I assume gregorio (again, @gregorio: correct me if I am wrong) think is actually happening:

The body, head, ear lobes, and pinnae filter the sound and the result is just that the frequency spectrum of the sound has changed when it reaches the ear drum. Different sound components coming from different directions have been filtered differently, and the frequency spectra of the different components have been changed differently. However, they arrive at the ear drum as one single sum signal with one total frequency spectrum. It is not about reaching different areas of the ear drum. The brain somehow separates different components from the sum signal. At that point in the total system - at the ear drums - we have just the 2 signals with all the information implicitly in there. (Well, all the information that is in there; it could very well be that not all components can be localised - or even separated - unambiguously from that. But information from other senses, memory, involuntary movements of the head providing another perspective, or simply already knowing where the sound is coming from, and who knows what more can solve ambiguities, and maybe some ambiguities remain.)

I tried to find information about the idea that exciting different areas of the ear drum could play a role in sound localisation, but I couldn't find anything. All I find is stuff in line with how I understand things. Yet, who knows, maybe you are right and it does play a role. But I strongly suspect that even if that were the case (which I think is very unlikely), it is not essential enough to make it impossible for headphones to create a very convincing 3D sound experience for at least the vast majority of people (assuming the proper signals can be generated for the headphones, in accordance with the listener's personal HRTF). Now for the moment, while I continue, I will assume that the "exciting different areas of the ear drum" idea is not playing a role, or not an essential one. We can later discuss this aspect separately, but first I want to make sure that you @Davesrose fully get what I, and I assume @gregorio, am thinking.

The next thing:

As I think several people already mentioned, the sound of the headphones is also filtered by (part of) the outer ear and the pinnae. However this is not a problem for binaural simulations (or binaural recordings). This filtering can be compensated for with EQ.

So, based on the above, if you want to create a 3D sound experience: generate the corresponding signals that should be arriving at the ear drums, apply the inverse EQ of what the signal will undergo on its way from the headphones to the ear drums (including the frequency response of the headphones themselves), and then the correct signals will arrive at the ear drums and the 3D experience will be realised.
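
To make that chain concrete, here is a minimal sketch with placeholder filters; the arrays below are stand-ins for illustration only, not real measured HRIRs or headphone responses such as the Realiser uses:

```python
# source -> HRIR for the desired direction (left & right)
#        -> inverse of the headphone-to-eardrum response
#        -> headphone drivers, so the intended signal arrives at each ear drum.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
source = np.random.randn(fs)            # 1 s of a dry mono source (placeholder)

# Placeholder binaural impulse responses for one direction: the right ear gets
# the sound slightly later and quieter than the left, mimicking ITD/ILD/pinna cues.
hrir_left = np.zeros(256); hrir_left[0] = 1.0
hrir_right = np.zeros(256); hrir_right[30] = 0.6          # ~0.68 ms later, attenuated

# Placeholder inverse headphone EQ (identity here; in practice the measured
# headphone-to-eardrum response would be inverted, within stability limits).
inv_hp_left = np.array([1.0])
inv_hp_right = np.array([1.0])

left = fftconvolve(fftconvolve(source, hrir_left), inv_hp_left)
right = fftconvolve(fftconvolve(source, hrir_right), inv_hp_right)
binaural = np.stack([left, right], axis=1)   # 2-channel signal for the headphones
```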

With normal loudspeakers you can never accomplish this, because you cannot independently control what sound arrives at each ear drum. (With crosstalk-free loudspeaker setups maybe you can.)

This is in accordance with my experience with the Smyth Realiser A16 and, as far as I know, the vast majority of people who have tried the A16 (with personally made measurements and head tracking active) claim that it really sounds like real speakers at a distance.

Now I want to point your attention to the following: please be aware that if you listen to certain content over virtual loudspeakers using the A16, you are actually listening to several sound rendering systems/concepts in a row. Let's say for the moment we listen to a stereo recording over 2 loudspeakers. The first is the principle of binaurally rendering loudspeakers in 3D space. The other is the principle of panning between two sound sources (the principle of stereo over loudspeakers).
The solo function of the A16, where you select one of the virtual loudspeakers to play on its own while the other loudspeakers are silent, is very interesting. When you use this, you take out the second principle (of panning) and get a nice demonstration of the first principle: binaurally rendering one sound source at one location in 3D space.
From my experience, the A16 puts that one sound source pin-point precise and rock steady in its intended position.
You could compare the following 2 situations:

Situation 1:
put 2 real speakers in front of you. Use a music track with one instrument, panned for example to the position halfway between the left speaker and the midpoint between the speakers.

Situation 2: let the A16 create one virtual speaker, placed halfway between the left real speaker and the midpoint between the two real speakers. Play a mono version of the above music track.

Now I predict that this comparison will reveal/support one thing that @gregorio was saying (and it for sure is what I experience):
The second situation will give a more precise and more realistic placement of the instrument in 3D space, at the location described. Because this placement does not rely on the principle of panning between two different sound sources in 3D space (which is a limited principle; as @gregorio mentioned, it just works for most people because in nature there are never 2 such sound sources, so most people's brains just interpret it as one sound being at the intended/panned location).
Instead it relies on the principle of binaural simulation that brings the proper signals to the ear drums, with the exact same directional cues that one real speaker in that position (or another sound source in that position) would have resulted in.
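
For concreteness, here's a rough sketch of what the "panning between two sound sources" principle in situation 1 amounts to: one mono signal is split over the two speaker feeds with a constant-power gain pair, and most listeners fuse the pair into a single phantom source. The pan law and position are illustrative assumptions, not taken from the A16 or any particular mixing console.

```python
import numpy as np

def constant_power_pan(mono, position):
    """position: 0.0 = hard left, 0.5 = centre, 1.0 = hard right."""
    theta = position * np.pi / 2
    return np.cos(theta) * mono, np.sin(theta) * mono   # (left feed, right feed)

mono = np.random.randn(44100)                            # 1 s placeholder mono source
left_feed, right_feed = constant_power_pan(mono, 0.25)   # halfway between left and centre
# Situation 2 instead renders ONE source binaurally at that location (see the
# HRIR sketch above), so no phantom-image trick is involved at all.
```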

You were the one who claimed headphones can localize from "anywhere" (with absolutely no disclaimer about binaural recording or virtual surround DSP).

To me it was perfectly clear that gregorio was not talking about normal recordings, but about the potential that headphones have given the proper input signals. What else could he have meant? Of course he knows that with standard normal stereo recordings this doesn't work (except sometimes for the odd person).

By the way: with a Realiser you can measure and binaurally simulate loudspeakers in any location you choose, even below yourself. It works perfectly!

How many times do I have to say sound first starts with sound interaction on the pinnae, which influences how frequencies get to the eardrum and inner ear?
That doesn't contradict the above! (Except for your interpretation that exciting different areas of the ear drum is important, which - again - I highly doubt and for certain is not essential.)
 
Apr 9, 2020 at 7:49 PM Post #87 of 184
In this thread there is a lot going on: several complex subjects are touched on, many details are involved that are not mentioned but are sometimes assumed to be known, and some details are overlooked.

Haha! The way you put it, it makes it sound like all this crankiness and posturing is important! I intended this thread to just be a place where people can blow off some steam so they don't pick fights in other threads, but I think I might have invented a perpetual motion machine... harnessing the massive power of ego and snark! All we need to do now is find some practical purpose for it! You guys are a lot of fun. It's like roller derby with college professors, or better yet, this Monty Python sketch...

 
Apr 9, 2020 at 8:37 PM Post #89 of 184
Haha! The way you put it, it makes it sound like all this crankiness and posturing is important! I intended this thread to just be a place where people can blow off some steam so they don't pick fights in other threads, but I think I might have invented a perpetual motion machine... harnessing the massive power of ego and snark! All we need to do now is find some practical purpose for it!
Of course it is extremely important whether or not the principle of generating correct binaural signals for headphones can potentially give a more realistic 3D sound than (normal) loudspeakers ever can! Even if it doesn't interest you, or if you don't see a useful practical application for yourself at this moment. Also, I find it important to try to solve miscommunications or misunderstandings if I suspect that they exist and possibly cause or contribute to personal conflicts.
Simulating loudspeakers this way is just one application, one that is relatively easy and very practical because you can use all existing content that was created with playback over loudspeakers in mind. The A16 is currently very expensive, but I am confident that more affordable alternatives will come in the not too distant future. There are many, many people who cannot put real speakers in their home, or have to mind the neighbours, or have bad acoustics, or who would like the sound of the very best speakers that they cannot afford but that can be simulated by the A16 or similar devices.
Talking about ego: I am very surprised, in a negative way, by how you have behaved throughout this entire discussion. Gregorio is right, you are deflecting. Now again, by suggesting this entire discussion is irrelevant, impractical and humorous.
 
Apr 9, 2020 at 8:41 PM Post #90 of 184
The body, head, ear lobes, and pinnae filter the sound and the result is just that the frequency spectrum of the sound has changed when it reaches the ear drum. Different sound components coming from different directions have been filtered differently, and the frequency spectra of the different components have been changed differently. However, they arrive at the ear drum as one single sum signal with one total frequency spectrum. It is not about reaching different areas of the ear drum. The brain somehow separates different components from the sum signal. At that point in the total system - at the ear drums - we have just the 2 signals with all the information implicitly in there. (Well, all the information that is in there; it could very well be that not all components can be localised - or even separated - unambiguously from that. But information from other senses, memory, involuntary movements of the head providing another perspective, or simply already knowing where the sound is coming from, and who knows what more can solve ambiguities, and maybe some ambiguities remain.)

I tried to find information about the idea that exciting different areas of the ear drum could play a role in sound localisation, but I couldn't find anything. All I find is stuff in line with how I understand things. Yet, who knows, maybe you are right and it does play a role. But I strongly suspect that even if that were the case (which I think is very unlikely), it is not essential enough to make it impossible for headphones to create a very convincing 3D sound experience for at least the vast majority of people (assuming the proper signals can be generated for the headphones, in accordance with the listener's personal HRTF). Now for the moment, while I continue, I will assume that the "exciting different areas of the ear drum" idea is not playing a role, or not an essential one. We can later discuss this aspect separately, but first I want to make sure that you @Davesrose fully get what I, and I assume @gregorio, am thinking.

That doesn't contradict the above! (Except for your interpretation that exciting different areas of the ear drum is important, which - again - I highly doubt and for certain is not essential.)

I'm not going to respond to Gregorio any longer, since he purposely convolutes things and even refuses to understand my statement about nerve innervation. I'm not sure how much research you have done on ear physiology, but my point is that the input stimulus for localization is not a single point on the eardrum. Sound perception isn't just how it's processed in your brain, or what your memory is. If you were to follow through with what I have mentioned (afferent and efferent neural pathways), you would see what kind of feedback loops there are in your outer, middle, and inner ear. These influence sound perception at any given time (where some could be reflexes from the brain, while others are biochemical homeostasis). To consider those implications: your perception of volume or pitch can change at any time. When it comes to the mechanical dynamics of the eardrum, it's not a static input of frequencies but one that changes with the direction of sound on the pinnae and ear canal.


https://www.frontiersin.org/articles/10.3389/fncir.2014.00116/full
"We first briefly turn to the three classes of acoustic cues that animals can theoretically use (for a more detailed description see Grothe et al., 2010). First there are spectral cues that change when a sound-source moves from one position in space to another. Such changes are most prominent when a sound moves in the vertical plane and thus thought of as monaural. Particularly in animals with prominent outer ears (pinnae), long ear canals and well-developed high-frequency hearing – i.e., most mammals – the complex reflection patterns created by the pinna and ear canal can lead to frequency-specific amplifications, attenuations and even cancelations (defined as so-called “head-related transfer functions,” HRTFs). These effects, however, are not fixed but depend on the direction from which the incoming sounds impinge on the pinna and ear canal. Moreover, the shape and size of the head, and even body posture, can modulate such effects (Blauert, 1997)."

And to get to a statement you have made: the brain receives "2 signals". Nerve innervation is more complex than that. The organ responsible for converting mechanical sound to nerve innervation is your cochlea, which has up to 3,500 inner hair cells (sensory) in each ear. There isn't just a direct connection of eardrum to these cells (and certain changes in the physiology of the middle ear bones or the fluid dynamics of the cochlea have an effect on perception).

http://www.hitl.washington.edu/projects/knowledge_base/virtual-worlds/EVE/III.A.2.Auditory.html
"The transmission of sound through the ear
Sound waves hitting the outer ear are both reflected (scattered) and conducted. Conducted sound waves will travel through the ear canal and will hit the eardrum causing it to be driven inwards. This portion of the process is measured in HRTF generation by embedded microphone. Although the remaining process is not modelled currently, it is offered to help understand how complex the human auditory system is and how much work remains in the 3D sound synthesis world. The inward force will cause the malleus and incus to push the stapes deep into the oval window of the inner ear. The surface area of the eardrum is 30 times greater than the stapes. This causes the pressure on the oval window to be 30 times greater than the original pressure on the eardrum. This pressure is needed for the stapes to be able to transfer the energy into the "perilymph". The basilar membrane of the perilymph is compressed inward by the movement of the stapes. The compression of the flexible membrane causes the round window to bulge into the middle ear. The organ of the Corti pivots in response to the movements of the basilar membrane. The action of the organ of the Corti and the tectoral membrane sliding against each other cause the hair of the hair cells to bend."

So in summary, treating the HRTF as a single point on each side of the head might be OK for certain modeling, but you may need to consider these other complex factors for more realistic modeling.
 
