Virtual Stage
Nov 5, 2012 at 3:28 PM Post #46 of 69
I don't think a lot of people here even know what soundstage is. Almost no rock albums have it. It's mostly in classical and older jazz recordings.
 
Nov 5, 2012 at 3:30 PM Post #47 of 69
I do not know a single guy who prefers a high-end headphone rig to a mid to high-end speaker system. Soundstage, or lack thereof, being one of the main reasons. It just doesn't work on headphones without DSP, regardless of cables.
 
Nov 5, 2012 at 4:19 PM Post #48 of 69
Some soundstage, or a virtual stage, can be mimicked through headphones using binaural recordings or some crossfeed (and of course other DSP tricks). Understandably, it is probably not going to be the same as 5.1 speakers or an actual performance.
 
The point, however, is that binaural recordings, crossfeed and other such techniques can deliver an improved soundstage through headphones, and there are well-understood reasons why. But I fail to see how or why cables alone can deliver these soundstage effects from a mono or stereo signal into headphones. The only reason I can think this could work with cables under stereo conditions might have to do with crosstalk, but that is not necessarily a good thing and seems difficult to control or adjust.
 
Nov 5, 2012 at 5:17 PM Post #49 of 69
I don't see how crossfeed could improve soundstage. It might create a more open sound, but soundstage involves spatial placement of instruments created in the mix. Crossfeed would just muddle that.
 
Nov 5, 2012 at 5:25 PM Post #50 of 69
not exactly - high R,L channel isolation with headphones is the "unnatural" situation
 
crossfeed is limited, but you are already listening to R,L channels "muddled" in a loudspeaker-and-room listening situation, with both speakers' sound audible at each ear - crossfeed attempts to approximate that condition with headphones
 
Nov 5, 2012 at 5:38 PM Post #51 of 69
ultrabike, cables obviously can't manipulate the signal to produce soundstage effects. What they can do is reduce hash and splash types of noise in the upper frequencies, and let a bit more low-level detail through. All of which makes the sound images stand out more from the mix and gives our brains more spatial cues, more information on reverb etc., which results in our brains more easily forming an impression of soundstage. Works with speakers and headphones.
 
bigshot, I think you're wrong about most rock albums not having any soundstage. It's often just a different type of soundstage than, say, a live classical performance, but that doesn't mean the rock version has no value.
 
xnor, I'm not saying I prefer the headstage to the soundstage of a good loudspeaker rig. Of course a loudspeaker rig will win purely on soundstage terms. I'm saying that the headstage can become good enough for me not to want to go back to the hassle of a speaker setup. Apart from the ones I've already mentioned, the biggest hassle is not being able to play what I like, when I like, at whatever volume I like - without annoying my wife and neighbours.
 
Nov 5, 2012 at 5:41 PM Post #52 of 69

Quote:
I don't see how crossfeed could improve soundstage. It might create a more open sound, but soundstage involves spatial placement of instruments created in the mix. Crossfeed would just muddle that.

Right, as you say (emphasis added):
 
Quote:
I haven't found any of that to be a factor. The main things I've found that affect soundstage are 1) the miking and mix of the recording and 2) speaker placement. I've never heard headphones have any sort of real soundstage at all, but this simulation technology is very interesting. I'd be interested in it if I wasn't so fine with my speakers.

 
The mix is mostly created on speakers spaced a certain distance apart, in a certain room configuration. In this configuration, sound from the left speaker is heard by the left ear, but also by the right ear with a slight time delay, smaller amplitude, etc. (all of which depend on your head and ear shape, and on the frequency of the sound). Vice versa for R->L.
 
When listening on headphones, the left "speaker" is placed right next to the left ear. The right ear is not getting any sound from the left speaker (same for the reverse case), so the spatial placement of a sound in the mix is played back in the wrong place.
 
Any correction hopefully accounts for these differences to some degree. You can compensate for the incorrect positioning of the speakers (i.e. right next to your head) by applying the right transfer functions. Or at the very least, you can make it better with some kind of approximation.
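To illustrate the general idea, here is a minimal sketch of such an approximation (my own, not any specific product's algorithm; the delay and attenuation values are plausible ballpark guesses, not measured HRTF data):

Code:
import numpy as np

def simple_crossfeed(left, right, fs, delay_us=300.0, atten_db=-6.0):
    # Feed a delayed, attenuated copy of each channel into the other,
    # mimicking the contralateral path from a speaker to the far ear.
    # delay_us and atten_db are illustrative guesses, not measured values.
    d = int(round(fs * delay_us / 1e6))      # interaural-style delay in samples
    g = 10.0 ** (atten_db / 20.0)            # attenuation as a linear gain
    pad = np.zeros(d)
    l_delayed = np.concatenate([pad, left])[:len(left)]
    r_delayed = np.concatenate([pad, right])[:len(right)]
    out_l = left + g * r_delayed             # right channel leaks into left ear
    out_r = right + g * l_delayed            # left channel leaks into right ear
    peak = max(np.abs(out_l).max(), np.abs(out_r).max(), 1.0)
    return out_l / peak, out_r / peak        # normalize so the sum can't clip

A real correction would also make the leaked copy frequency-dependent, since the head shadows high frequencies much more than low ones.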
 
Nov 5, 2012 at 5:51 PM Post #53 of 69
Quote:
I don't see how crossfeed could improve soundstage. It might create a more open sound, but soundstage involves spatial placement of instruments created in the mix. Crossfeed would just muddle that.

It adds the interaural level and time differences that we're used to from listening with speakers / real sounds. If you look at the 30° HRTF of a human, you will see that sound from the right speaker also arrives at the left ear (with higher frequencies reduced in level) after a small time delay. That's what our hearing needs to localize stuff.
 
Without that, as jcx wrote, you have two "totally" isolated channels. Even if a sound source is at 90° (to the right) there's still lots of crosstalk to the left ear in reality. That's not the case with headphones.
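For a rough feel of the numbers involved, Woodworth's classic spherical-head formula approximates that time delay from the source angle (the head radius here is a textbook average, not anyone's actual head):

Code:
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    # Interaural time difference for a distant source, modeling the head
    # as a rigid sphere: ITD = (r/c) * (theta + sin(theta)).
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

print(woodworth_itd(30.0) * 1e6)   # ~261 microseconds for a 30-degree speaker
print(woodworth_itd(90.0) * 1e6)   # ~656 microseconds for a source at the side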
 
Quote:
ultrabike, cables obviously can't manipulate the signal to produce soundstage effects. What they can do is reduce hash and splash types of noise in the upper frequencies, and let a bit more low-level detail through. All of which makes the sound images stand out more from the mix and gives our brains more spatial cues, more information on reverb etc., which results in our brains more easily forming an impression of soundstage. Works with speakers and headphones.

Even if cables improved "low level detail" or let 5000% more of it through, which they clearly don't, it wouldn't change a thing about what I wrote above.
 
 
Quote:
xnor, I'm not saying I prefer the headstage to the soundstage of a good loudspeaker rig. Of course a loudspeaker rig will win purely on soundstage terms. I'm saying that the headstage can become good enough for me not to want to go back to the hassle of a speaker setup. Apart from the ones I've already mentioned, the biggest hassle is not being able to play what I like, when I like, at whatever volume I like - without annoying my wife and neighbours.

Sure, but since this thread is about soundstage I don't wanna talk about the wife acceptance factor. Also, mixing is usually done on speakers too. Not only because it's hard to get the bass right on headphones, but also because of the soundstage.
 
Nov 5, 2012 at 6:06 PM Post #54 of 69
Quote:
It adds the interaural level and time differences that we're used to from listening with speakers / real sounds. If you look at the 30° HRTF of a human, you will see that sound from the right speaker also arrives at the left ear (with higher frequencies reduced in level) after a small time delay. That's what our hearing needs to localize stuff.

 
Agreed. As with equalization, though, one has to be careful and spend some time learning about it. I can see how mis-equalizing a headphone would lead to sorry results. Mis-crossfeeding (say, 100% across the frequency range, which effectively collapses everything to mono or worse) can also result in weirdness... Then there is clipping if gains are not properly considered, and so on... More knobs to tweak! Me likes :D ...Things may also depend somewhat on how the material was recorded...
 
Nov 5, 2012 at 7:31 PM Post #55 of 69
Quote:
crossfeed is limited, but you are already listening to R,L channels "muddled" in a loudspeaker-and-room listening situation, with both speakers' sound audible at each ear - crossfeed attempts to approximate that condition with headphones

 
The difference is the added dimension. With speakers you get a dimensional triangulation that places things in a clear field in front of you from left to right. With crossfeed it's not dimensional. No triangulation, just mixing of left and right. That isn't soundstage.
 
Nov 5, 2012 at 7:38 PM Post #56 of 69
Quote:
bigshot, I think you're wrong about most rock albums not having any soundstage. It's often just a different type of soundstage than, say, a live classical performance, but that doesn't mean the rock version has no value.

I really wasn't making a value judgement. It's just that everyone points to Pink Floyd as having good soundstage. Pink Floyd has no soundstage; it has stereo and phase effects. Soundstage is usually pretty static, because it takes a moment for you to figure out the spatial placement without a visual idea of where things are. In rock music, things are placed semi-arbitrarily in the mix without trying to create an image of where each instrument is. George's guitar solo is on the left side one time and the right the other. When something is overdubbed, it's fit into an open space, not placed on the stage with the other musicians.
 
When music is tracked one instrument at a time (as a lot of pop music is), the soundstage never existed in the first place. The mixer creates a non-specific sound placement that doesn't necessarily relate to the actual positions of the musicians.
 
Classical music and jazz, which are often captured live from a fixed-position stereo mic and not multitracked, are different.
 
Nov 5, 2012 at 7:42 PM Post #57 of 69
Quote:
It adds the interaural level and time differences that we're used to from listening with speakers / real sounds. If you look at the 30° HRTF of a human, you will see that sound from the right speaker also arrives at the left ear (with higher frequencies reduced in level) after a small time delay. That's what our hearing needs to localize stuff.

 
Maybe I'm wrong about what crossfeed is. Does it create a clear three-dimensional space like the DSP simulator we've been talking about? Or does it just shift phase and mix channels?
 
Nov 5, 2012 at 7:55 PM Post #58 of 69
This is my understanding: to triangulate you need two points (i.e., your ears), the angle of arrival of the wave at those two points (conveyed by magnitude and frequency response, shaped by the asymmetry of the ears and head, which our brain may have calibrated to), and the distance between the ears relative to our head's center point (which crossfeed may supplement, along with what our brain has learned about the shape of our head). Which filter should be used for crossfeed may be individual- and recording-dependent (especially in cases where instruments are present in one ear and not the other). With a binaural recording, all these considerations may have already been taken care of by the dummy head, and no crossfeed should be used.
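As a toy illustration of the "two points" part, you can estimate the time-of-arrival difference between two ear signals with a plain cross-correlation (a sketch that ignores all the magnitude and frequency-response subtleties above):

Code:
import numpy as np

def estimate_itd(ear_l, ear_r, fs):
    # Cross-correlate the two ear signals; the lag of the peak is the
    # estimated arrival-time difference in seconds. A positive value
    # means the left signal lags, i.e. the source is off to the right.
    corr = np.correlate(ear_l, ear_r, mode="full")
    lag = np.argmax(corr) - (len(ear_r) - 1)
    return lag / fs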
 
I could be wrong of course.
 
Nov 5, 2012 at 8:23 PM Post #59 of 69
Quote:
Maybe I'm wrong about what crossfeed is. Does it create a clear three-dimensional space like the DSP simulator we've been talking about? Or does it just shift phase and mix channels?

 
If so, I don't think you're the only one wrong about crossfeed. Or rather, a range of implementations are labeled as crossfeed, even though they do different things, with different levels of sophistication. I don't know if there's an agreement on the definition, so confusion is to be expected. I think the hardware crossfeed implementations tend to just lowpass-filter the L channel, attenuate it, and add it to what's coming out of the R channel, and vice versa.
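In code, that recipe would look something like this (a sketch under those assumptions; the 700 Hz cutoff and -4.5 dB level are typical knob settings on such boxes as far as I know, not authoritative values):

Code:
import numpy as np
from scipy.signal import lfilter

def hardware_style_crossfeed(left, right, fs, cutoff_hz=700.0, atten_db=-4.5):
    # Lowpass-filter the opposite channel, attenuate it, and sum it in,
    # as the simple analog crossfeed circuits are said to do.
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)   # one-pole lowpass coefficient
    b = [1.0 - a]
    g = 10.0 ** (atten_db / 20.0)
    out_l = left + g * lfilter(b, [1.0, -a], right)
    out_r = right + g * lfilter(b, [1.0, -a], left)
    peak = max(np.abs(out_l).max(), np.abs(out_r).max(), 1.0)
    return out_l / peak, out_r / peak           # normalize to avoid clipping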
 
I'm guessing that xnor is thinking in terms of this (I wonder why ;) ), which is literally a DSP component, so to say:
http://www.hydrogenaudio.org/forums/index.php?showtopic=90764
 
Actually, the picture may be... illustrative.
 
Regardless, even the crudest crossfeed will at least have some kind of effect, which may or may not make the soundstage of the playback more realistic.
 
In terms of what sounds your ears get, it doesn't really matter where the sound comes from (an object out in space, a speaker across the room, a speaker several feet in front, or a headphone strapped to your head) so long as you get the same X and Y sounds into the left and right ears, right? For each speaker position, you just need to find a way to manipulate the input such that the resulting sounds that reach your ears are X and Y.

The issue that then comes up is that sound localization and the sense of soundstage are also influenced by feeling vibrations and compressions in parts of the body other than the ears, and you can't exactly create that effect with headphones. Or so I've been told, or maybe I just made that up. Furthermore, you get positional cues from sampling the listening space in multiple positions when you move your head slightly (intentionally or not).
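For the first part, here's a bare-bones sketch of what "manipulate the input" could mean in practice: convolve each channel with the impulse responses from that virtual speaker position to each ear. The four HRIR arrays are placeholders; real ones would come from a measured HRTF set, which I'm obviously not bundling in a forum post.

Code:
import numpy as np
from scipy.signal import fftconvolve

def virtualize_speakers(left, right, h_l_near, h_l_far, h_r_near, h_r_far):
    # Each ear hears both virtual speakers: the near (same-side) path and
    # the far (cross) path, each shaped by its own impulse response.
    ear_l = fftconvolve(left, h_l_near) + fftconvolve(right, h_r_far)
    ear_r = fftconvolve(right, h_r_near) + fftconvolve(left, h_l_far)
    n = max(len(ear_l), len(ear_r))
    ear_l = np.pad(ear_l, (0, n - len(ear_l)))
    ear_r = np.pad(ear_r, (0, n - len(ear_r)))
    peak = max(np.abs(ear_l).max(), np.abs(ear_r).max(), 1.0)
    return ear_l / peak, ear_r / peak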
 
I'm going to end this with "I could be wrong" too.  Seems to be a trend.
 
Nov 5, 2012 at 10:52 PM Post #60 of 69
Rock has soundstage, though not like a real stage, and precisely for that reason the sounds are placed in new, other-worldly locations. The only reason I posted in this thread is that I felt I needed to give some folks a clue. Even when mixed down on monitors, the resulting headphone experience has always given us a bastardized reproduction. The folks who truly understand the limits and reality of that bastardization can then move on and enjoy the experience for what it's worth.
 
I have always thought, though, that we as a user group could (through the same DSP that gave us surround sound) slowly move toward the sonic illusion of a real soundstage with headphones. I have quested for it, though in the end it has given me much more.
 
 
I just did not see the basics listed.
 
 
No doubt an incredible headphone system could benefit from the stage-expanding processes of DSP or crossfeed, but an inherent amount is going to come from the system itself, for the reasons listed above by my fellows.
 
No snake oil or magic here, only the pure reproduction of music and its inherent quality. The same process occurs when you hear the difference between the soundstage of a mobile phone with earbuds and a full-on home unit, just accentuated. I look at it like seeing more detail through the refractive properties of a lens: there is a basic reality to start with, and whoever can regenerate the illusion wins.
 
I have always loved the idea of electronic filters creating a new and special sonic dimension. I started listening to rock just as Jimi Hendrix was adding increased ambience to the electric guitar on stage (a delay). The idea has always been about some exotic and romantic device adding magic to the experience.
 
