cjl
500+ Head-Fier · Joined Dec 28, 2009 · 833 posts · 206 likes
Oh wait, I forgot, I gave up on this thread.
Carry on.
Nope.
A DSP can change the waveform however you want. It can change phase (frequency dependent or independent), it can mix in one channel into the other, it can add reverberation or distortion, or many more things. The trick is figuring out how to change the waveform to simulate sounds coming from in front of the listener (or to the sides, or above, etc...). That is nontrivial, but definitely not impossible.
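As an illustration of the channel-mixing idea described above, here is a minimal crossfeed sketch: each channel receives a delayed, attenuated copy of the other, crudely imitating the interaural time and level differences a listener gets from speakers. The delay and gain values are arbitrary illustration choices, not taken from any real product.

```python
import numpy as np

def crossfeed(left, right, sr, delay_ms=0.3, gain=0.5):
    """Mix a delayed, attenuated copy of each channel into the opposite
    one. delay_ms and gain are made-up values for demonstration; assumes
    the delay works out to at least one sample."""
    d = int(sr * delay_ms / 1000)
    delayed_r = np.concatenate([np.zeros(d), right[:-d]])
    delayed_l = np.concatenate([np.zeros(d), left[:-d]])
    return left + gain * delayed_r, right + gain * delayed_l
```

Real spatializers add frequency-dependent filtering on top of this, but the basic move is the same: reshape the waveform of each channel as a function of the other.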
That's what I said: changing a waveform means altering the signal with respect to time (x) or amplitude (y). That simple statement covers every waveform change you describe. But that isn't the issue anyway.
There is NO trick (trivial or otherwise) that can change soundstage depth with only two drivers facing each other. The only thing a DSP can do in this configuration is widen the soundstage, since every waveform change (x, y) will be reproduced along the same axis. Hopefully this distinction will become apparent to those of you who think DSPs have physical superpowers.
You may perceive a front-and-back soundstage (who hasn't at one time or another?). But drivers facing each other on the same axis cannot be induced to produce such front/back localization from any content (stereo, binaural, 5.1, 7.1 or whatever) unless a) reflections in the headphone cavity itself are calibrated and subsequently leveraged by some form of DSP, or b) there exists some mechanism to control the deflection of the driver cone/surface itself. Neither method is in common use outside of marketing literature.
People, listen for yourselves: all headphone surround processing shares the same "helium" sound-quality artifact, because it relies on the same acoustic phenomenon, out-of-phase mixing.
A DSP can only mix the surround/stereo channels together with some out-of-phase amount relative to each other to create a widened, out-of-the-head effect (virtual surround).
There is no real front and back placement. For example, mixing the mono/center channel at some out-of-phase percentage into both L/R channels can only create a placement toward the top of the head that merely suggests "front", and the helium artifact ruins the experience because it sounds out of place and unnatural.
Out-of-phase audio is easy to recognize because it can't reproduce the full bandwidth, only thin, exaggerated "helium" mids; the higher the surround/width setting, the more the overall sound collapses into a wide mono.
Listen for yourself honestly: DSP headphone surround cannot possibly reproduce the same natural, in-phase sound quality of normal stereo/5.1/7.1 recordings:
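The "thin mids" complaint above is consistent with what happens when a delayed, phase-inverted copy is mixed into a signal: the result is a comb filter that cuts bass and boosts the midrange. A small numeric sketch (the delay and mix amount are made-up illustration values, not any product's settings):

```python
import numpy as np

# Magnitude response of y[n] = x[n] - g * x[n-d], i.e. the signal minus
# a delayed, phase-inverted copy of itself.
sr = 48000
d = 24      # 0.5 ms delay (hypothetical)
g = 0.7     # out-of-phase mix amount (hypothetical)

freqs = np.array([0.0, 100.0, 1000.0])
mag = np.sqrt(1 + g**2 - 2 * g * np.cos(2 * np.pi * freqs * d / sr))
# With these values, DC/bass is attenuated to about 0.3x-0.4x while
# 1 kHz lands on a comb peak at 1.7x: a thin, mid-forward balance.
```

Whether this fully explains the "helium" perception is debatable, but the bass-cut/mid-boost shape of such processing is straightforward to compute.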
You dismiss the entire idea based on how some general-purpose surround systems sound. OK, but what a few of us are talking about is that you would need to target a specific headphone and a specific pair of ears, so obviously those generic systems can't always work, or even remotely sound good.
Of course I don't feel like I'm in another dimension when I turn on the Dolby junk on my computer. Even less so when the headphones I use may well differ from one another by 20+ dB in places. It would be crazy to expect such an uncontrolled environment to do great in any circumstances with any ears.
Please look into the Smyth Realiser, as it's pretty much the only device doing what we're talking about right now (with the Oculus rigs, I expect a lot of software to work that way in the future with any form of head-tracking tool). And notice how you do need to first measure sounds with microphones inside your own ears.
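The measure-with-mics-in-your-own-ears step amounts to capturing an impulse response from each speaker to each eardrum, then convolving content through those responses. A minimal sketch of the rendering side (the impulse-response arrays here are placeholders, not real measurements, and this is the general technique rather than the Realiser's actual implementation):

```python
import numpy as np

def render_virtual_speaker(src, hrir_l, hrir_r):
    """Convolve one source channel with a measured pair of impulse
    responses (speaker -> left eardrum, speaker -> right eardrum).
    hrir_l/hrir_r stand in for personal in-ear measurements."""
    return np.convolve(src, hrir_l), np.convolve(src, hrir_r)
```

Summing the outputs of this over all 5.1/7.1 channels, each with its own measured pair, yields the binaural headphone feed.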
I'm not talking about surround sound that simulates a real speaker setup with a room-specific DSP algorithm.
There is already "cheaper" software that can simulate many different measured rooms:
http://www.head-fi.org/t/689299/out-of-your-head-new-virtual-surround-simulator
It's not difficult to pass the 5.1 audio through those algorithmic filters to simulate a room's characteristics.
Even going through your ear/headphone characteristics or a fancy head-tracking algorithm won't change the fact that out-of-phase manipulation/interpolation always comes with those squeaky-voice (helium) artifacts, which prevent it from sounding as realistic and natural as in-phase recordings!
https://fongaudio.com/demo/
Because the virtual surround signal processing is based on an out-of-phase subtraction/cancellation algorithm between the L and R audio, some sound detail will be missing.
It doesn't matter to me, as I'm already enjoying my headphone surround processing with the (cheap) Creative SBX processing at 14%; any higher and the helium artifact gets too annoying.
The good thing is that the SBX algorithm's surround intensity is adjustable from 0-100%, so it can be "calibrated" to the source/headphone/preference, whereas other consumer Dolby Headphone / DTS Headphone:X solutions are limited to a few fixed room presets.
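An adjustable intensity control like that can be understood as a simple wet/dry blend between the untouched signal and the fully processed one. A sketch of the general idea (this is not Creative's actual algorithm, just what a 0-100% effect knob typically does):

```python
import numpy as np

def blend(dry, wet, intensity):
    """Crossfade between the untouched (dry) and processed (wet)
    signal. intensity runs 0.0-1.0; e.g. 0.14 for a 14% setting."""
    return (1.0 - intensity) * dry + intensity * wet
```

At low settings, most of the in-phase original survives, which is presumably why the artifacts stay tolerable at 14%.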
We're obviously not concerned with the same things and/or not talking about the same things.
I've heard binaural tracks that sound like they are coming from the front. But the problem for me is, my brain knows something is wrong with the sound and it keeps snapping it from front to back over and over randomly.
It only sounded a foot or two in front of my head though. Nothing like speaker soundstage. It's probably easier to create that illusion when the sound object is closer to the head.