Virtual Stage
Nov 6, 2012 at 6:12 AM Post #61 of 69
Quote:
 
Maybe I'm wrong about what crossfeed is. Does it create a clear three dimensional space like the DSP simulator we've been talking about? Or does it just shift phase and mix channels?

Think of it like listening in an anechoic chamber with speakers. Sound from the right speaker arrives properly (level difference, time delay) also at the left ear and vice versa.
What's missing is room information and any change of the parameters if you move your head. But it's clearly more natural - I rarely listen to music without it. I just cannot stand instruments that are panned hard to one side giving you some kind of deaf-feeling in the other ear. It's just fatiguing which is the opposite of what I want when listening to music.
 
Without crossfeed, it's like listening in an anechoic chamber with speakers but with a wall that divides the left and right half of the chamber with you stuck centered in that wall.
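The "level difference, time delay" cues mentioned above can be put in rough numbers with the classic Woodworth spherical-head approximation. This is a hedged sketch, not anything from the thread: the head radius and speed of sound are typical textbook values, and `woodworth_itd` is a name chosen here for illustration.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (Woodworth spherical-head model).

    azimuth_deg: source angle from straight ahead (0 = front, 90 = full side).
    Returns the extra travel time to the far ear, in seconds.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A speaker at +/-30 degrees (standard stereo placement) reaches the far ear
# roughly a quarter of a millisecond after the near ear:
print(round(woodworth_itd(30) * 1000, 3), "ms")
```

That fraction-of-a-millisecond delay, plus the head-shadow level drop at high frequencies, is exactly what crossfeed adds back and what hard-panned headphone channels lack.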
 
Nov 6, 2012 at 12:40 PM Post #62 of 69
That's what I call phase effects, because it really doesn't sound like real space. It sounds like outer space. That would work better on synthetic ambiences in rock music than it would true soundstage in classical.
 
Nov 6, 2012 at 1:23 PM Post #63 of 69
How is "true soundstage" created in a recording?  All mixing and sound reproduction are synthetic.  Is it really natural to have only two (five? ten?) sources of sound?  How can that sound like "real space" at all?  If you can accept that it works reasonably well, then what's the reason behind it?
 
Real-life sounds out in space produce "phase effects", i.e. different sounds arriving at each ear.  Interaural differences are mainly how we localize sound.  What is soundstage, other than reproducing these differences?
 
In order to produce the correct results at the ears, you need a combination of the correct signals for the positioning of the speakers used.  Change the position of the speakers, and you would need to alter the signals sent to them to compensate, or the interaural differences would change as a result of the move.  Actually, the L->L and R->R responses change as a function of positioning as well.
 
Nov 6, 2012 at 2:17 PM Post #64 of 69
Of course it's still quite different from real soundstage, but it's more natural than to have two (totally) isolated channels.
 
Depending on the implementation there's more to it than just mixing a delayed signal into the other channel. bs2b mixes in a delayed low-pass filtered signal; foo_dsp_xfeed mixes in a delayed signal that decreases in level with higher frequencies (head shadow). Roughly speaking, the hearing analyzes the phase shift at low frequencies and the level difference at high frequencies to determine the direction the sound is coming from.
 
So all that crossfeed tries to achieve is that an instrument on the right side of the stage appears exactly there, and not next to my right ear as if someone's whispering into the ear. Also, for me it moves the sound out of my head.
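The scheme described above (opposite channel, delayed, low-pass filtered, attenuated) can be sketched in a few lines. This is a naive illustration in the spirit of bs2b/foo_dsp_xfeed, not their actual code; the delay, cutoff, and feed-level values are assumed-plausible guesses, not the real defaults of either plugin.

```python
import math

def crossfeed(left, right, sample_rate=44100, delay_ms=0.3,
              cutoff_hz=700.0, feed_db=-6.0):
    """Mix a delayed, low-pass filtered copy of each channel into the other.

    left, right: lists of samples in [-1, 1]. Returns (out_left, out_right).
    Parameter values are illustrative, not actual bs2b/foo_dsp_xfeed settings.
    """
    delay = max(1, round(sample_rate * delay_ms / 1000.0))
    gain = 10 ** (feed_db / 20.0)
    # One-pole low-pass coefficient approximating the head-shadow rolloff.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

    def feed(src):
        out, state = [], 0.0
        for i in range(len(src)):
            x = src[i - delay] if i >= delay else 0.0
            state += alpha * (x - state)   # low-pass the delayed sample
            out.append(gain * state)
        return out

    feed_l, feed_r = feed(left), feed(right)
    out_l = [l + fr for l, fr in zip(left, feed_r)]
    out_r = [r + fl for r, fl in zip(right, feed_l)]
    return out_l, out_r
```

With a hard-left-panned sound, the right output is no longer silent: it gets a slightly later, duller copy, which is what pulls the image from "next to my ear" toward "on the right side of the stage".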
 
Here's an example. Maybe a bit extreme but you should get the idea if you listen with headphones:
http://www.mediafire.com/?rk47cydtiey2sxw
http://www.mediafire.com/?5h0ebs0f80u2nbs
 
Nov 6, 2012 at 3:02 PM Post #65 of 69
How is "true soundstage" created in a recording?  All mixing and sound reproduction are synthetic.


The best sounding classical recordings (like RCA's Living Stereo and Mercury Living Presence) are recorded in a concert hall from a prime point in the auditorium using two or three mikes covering the spread of the orchestra. This gives a totally natural soundstage. Occasionally they'll drop a mike overhead to catch a detail, then mix it into the natural spread in the right place. The basic soundstage is almost binaural, and includes natural hall ambience and depth cues captured by the mikes.

Jazz combos are often miked as a group with a natural spread in a real room too. This is completely different than the synthetic instrument placement in multitracked rock mixes. Those are blends of close miked, distant miked and direct patch in. Ambiences and reverbs are digitally synthesized on an instrument by instrument basis. There's no soundstage at all. The instrument placement can change with every instrumental solo.

If you take a natural classical soundstage and play it on speakers, the depth cues and realistic spread are projected three dimensionally into the listening room. The listening room's own natural reverberation and depth cues are added to it, making the effect as if the orchestra is right there in front of you. You can close your eyes if the placement is good and "see" the aural imaging in your head.

I haven't heard it, but I'm guessing that if you take a naturally recorded soundstage and run it through crossfeed, it won't sound as natural.
 
Nov 6, 2012 at 3:19 PM Post #66 of 69
My point was that all sound reproduction is synthetic, not like the real thing, though your perspective on the back end is appreciated.
 
With different mic placements, you can capture inputs which can be mixed (even a 1:1 mapping of mics to output channels with no mixing counts as a mix, as far as I'm concerned in this context) to produce some synthetic reproduction that sounds closer to the original, maybe has a realistic soundstage when played back on a certain system.
 
So once you have a recording, why does playback work the way it does?  What is different if you move the positions of the speakers used for playback?  That's what I'm driving at.
 
Nov 6, 2012 at 4:44 PM Post #67 of 69
Quote:
The basic soundstage is almost binaural, and includes natural hall ambience and depth cues captured by the mikes.

 
Quote:
I haven't heard it, but I'm guessing that if you take a naturally recorded soundstage and run it through crossfeed, it won't sound as natural.

 
I agree with most of what you wrote but binaural is the wrong term. Binaural means that playback over headphones will sound natural and is kinda mutually exclusive with stereophony for loudspeakers, which is normally used for those classical recordings you mentioned. Even ORTF, which combines level and time differences, isn't close to artificial head recordings. And others like XY, MS etc. are also microphone systems for loudspeakers which are even further away from binaural.
 
Such natural recordings may sound a lot better on headphones than synthetic tracks (in German we call it "Knüppelstereofonie" ... "panning stereophony", where the mix contains mono tracks panned to a side => level differences between channels), but they still don't sound as natural as they do on speakers.
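The level-only "panning stereophony" described here is typically done with a constant-power pan law. A minimal sketch follows; the -3 dB sine/cosine law is a common mixing convention assumed for illustration, not something stated in the thread.

```python
import math

def constant_power_pan(sample, pan):
    """Place a mono sample in the stereo field using level differences only.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    The sine/cosine law keeps total power (l^2 + r^2) constant across the sweep.
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Hard-right pan: everything in one channel, nothing in the other --
# on headphones, exactly the one-ear "whispering" case crossfeed softens.
l, r = constant_power_pan(1.0, 1.0)
```

Note there is no time delay and no frequency-dependent shadowing anywhere in this, which is why such mixes carry none of the interaural cues a real source would produce.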
 
Nov 6, 2012 at 6:03 PM Post #68 of 69
So once you have a recording, why does playback work the way it does?  What is different if you move the positions of the speakers used for playback?  That's what I'm driving at.


There's a standard for speaker placement too.
 
Nov 6, 2012 at 6:24 PM Post #69 of 69
Quote:
There's a standard for speaker placement too

 
Yes, for obvious reasons, yet this is of course grossly violated if they are strapped to your head.  Anyway, we seem to be having different discussions, which is why this has dragged on, though not in a bad way.
 
On a side note, I believe I have been hanging around academia too long, picking up too many bad habits.  Instead of making things clear, I use confusing language and then try to answer questions by asking questions of my own.  (Those last few were not meant to be answered!)
 
