Small correction.
Binaural doesn't necessarily require dummy outer ears.
Sometimes just a plain ball acting as a head baffle is used. Some setups include parts of the torso as well, and some add a hair approximation along with outer-ear approximators.
It is arguable that approximating outer ears is problematic, because outer ears differ in shape and size between listeners, which is precisely why some binaural recorders shun them.
This also then affects the ideal playback chain considerations, naturally.
Quote:
That's the reason why you really don't need one of those "5.1 headphones" to create surround sound effects, because as long as the source stream is really surround, you can emulate how they will sound and then apply that effect into headphone applications.
Yes. Unfortunately, emulation based on generic approximations is far from complete or believable (especially in depth separation).
Having compared CMSS (Creative), Sensaura (ex-Sensaura, now Creative), A3D (ex-Aureal, now buried by Creative), QSound (QSound) and Dolby Headphone (Dolby), I can personally testify that rear localisation (and front/rear separation in general) is still sorely lacking in generalised headphone virtualisation systems.
Of course, using one's own HRTF would be closer to the ideal and produce much more believable results, but this is laborious to do, and consumer 3D headphone virtualisation algorithms make no provision for it.
Sensaura did have a "Virtual Ear" app that allowed customisation of the HRTF by playing around with some basic filtering parameters, but this is not the same as being able to supply one's own measured HRTF as the filtering input.
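To make the "HRTF as a filtering input" idea concrete, here is a minimal sketch in Python of how a measured HRTF (as a time-domain HRIR pair) would be applied: convolve the mono source with the left-ear and right-ear impulse responses. The HRIRs below are made-up toy values that only model interaural time and level differences; real measured HRIRs would also carry the spectral (pinna) cues discussed above.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source to binaural stereo by convolving it with a
    head-related impulse response (HRIR) pair for one source direction."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy example: a click source, and dummy HRIRs that encode only an
# interaural time difference (ITD) and level difference (ILD).
fs = 44100
mono = np.zeros(fs // 10)
mono[0] = 1.0                                   # unit click
itd_samples = 30                                # ~0.68 ms interaural delay
hrir_l = np.r_[1.0, np.zeros(itd_samples)]      # near ear: loud, early
hrir_r = np.r_[np.zeros(itd_samples), 0.5]      # far ear: quieter, late
out = binauralize(mono, hrir_l, hrir_r)
```

With real personal HRIRs in place of `hrir_l`/`hrir_r`, this is essentially what a virtualiser would do per virtual source direction.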
Quote:
soundstage whatsoever. Perception of soundstage, and sound in general, has as much to do with a physical effect as well as psychological.
Ah, here I agree completely.
It's a philosophical point, but acoustical filtering (physical) is just signal manipulation. Depth perception is a phenomenological instance, an inference of the mind, drawn from the acoustical cues fed to the human nervous system.
I'm sorry if I bore you with psychoacoustics. I just wanted to further clarify a few points, in order not to be misunderstood by my terse original remark.
friendly regards,
Halcyon
PS ObThreadTip: I think a decent hardware accelerated DirectSound3D compliant sound card is _audibly_ better for avid gamers than plain-old 2D sound cards.
Why?
Not all games have their own 3D headphone virtualisation algorithms (as HL2 does), and even when they do, they almost always do a worse job than the currently available gaming headphone virtualisation algorithms (QSound, Sensaura, classic CMSS or SS/DD->Dolby Headphone). Even HL2 is much better played with 5.1 output, letting the soundcard do the headphone virtualisation (if it has an even semi-decent HRTF approximation). If you don't believe me, try it out. I've done it and I'm not going back to playing in stereo.
Of course, you don't _need_ them, but they do provide at least some rudimentary level of front/rear sound localisation, something which stereo only sound sources do not provide.
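For the curious, the soundcard-side virtualisation described above boils down to convolving each discrete speaker feed with the HRIR pair for that speaker's direction and summing the results into one binaural stereo pair. A rough Python sketch, with channel names and 1-tap HRIRs of my own invention purely for illustration (no actual product's algorithm is being reproduced here):

```python
import numpy as np
from scipy.signal import fftconvolve

def virtualize_surround(channels, hrirs):
    """Headphone-virtualise a multichannel (e.g. 5.1) stream.

    channels: dict mapping channel name -> mono signal (1-D array)
    hrirs:    dict mapping channel name -> (left_hrir, right_hrir)
    Returns a (samples, 2) binaural stereo array.
    """
    n = max(len(sig) + len(hrirs[name][0]) - 1
            for name, sig in channels.items())
    out = np.zeros((n, 2))
    for name, sig in channels.items():
        hl, hr = hrirs[name]
        l = fftconvolve(sig, hl)                # route channel to left ear
        r = fftconvolve(sig, hr)                # route channel to right ear
        out[:len(l), 0] += l
        out[:len(r), 1] += r
    return out

# Toy usage: two channels, with HRIRs reduced to simple level differences.
click = np.zeros(100)
click[0] = 1.0
channels = {"FL": click, "RR": click}
hrirs = {
    "FL": (np.array([1.0]), np.array([0.6])),  # front-left: louder in left ear
    "RR": (np.array([0.4]), np.array([1.0])),  # rear-right: louder in right ear
}
mix = virtualize_surround(channels, hrirs)
```

Real virtualisers use full measured HRIR pairs per speaker position (plus room/reverb modelling), which is where the front/rear separation quality is won or lost.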
Also, as a further point, it must be noted that sometimes using stereo with lousy in-game virtualisation can be better _overall_, even if the depth perception (esp. front/rear separation) is almost completely destroyed.
Why?
Because some games' internal stereo headphone virtualisation output allows one to hear sounds at longer distances than using discrete 5.1 output with soundcard virtualisation does.
Sometimes it's more important to actually hear _something_ than not to hear anything (but have that nothing more accurately positioned).
Counter-Strike is a notable example: many people I know swear by its internal headphone algorithm, although it has practically no front/rear localisation at all.
It just allows you to hear game sounds happening at longer distances in the game world.
It's a small difference, but to some that counts.