bigshot
Headphoneus Supremus
I don't think a lot of people here even know what soundstage is. Almost no rock albums have it. It's mostly in classical and older jazz recordings.
I don't see how crossfeed could improve soundstage. It might create a more open sound, but soundstage involves spatial placement of instruments created in the mix. Crossfeed would just muddle that.
I haven't found any of that to be a factor. The main things I've found that affect soundstage are 1) the miking and mix of the recording and 2) speaker placement. I've never heard headphones have any sort of real soundstage at all, but this simulation technology is very interesting. I'd be interested in it if I wasn't so fine with my speakers.
ultrabike, cables obviously can't manipulate the signal to produce soundstage effects. What they can do is reduce hash and splash types of noise in the upper frequencies, and allow a bit more low-level detail through. All of which makes the sound images stand out more from the mix and gives our brains more spatial cues, more information on reverb etc., which results in our brains more easily forming an impression of soundstage. Works with speakers and headphones.
xnor, I'm not saying I prefer the headstage to the soundstage of a good loudspeaker rig. Of course a loudspeaker rig will win purely on soundstage terms. I'm saying that the headstage can become good enough for me not to want to go back to the hassle of a speaker setup. Apart from the ones I've already mentioned, the biggest hassle is of not being able to play what I like, when I like, at whatever volume I like - without annoying my wife and neighbours.
It adds the interaural level and time differences that we're used to from listening with speakers / real sounds. If you look at the 30° HRTF of a human, you will see that sound from the right speaker also arrives at the left ear (with the higher frequencies lower in level) after a small time delay. That's what our hearing needs to localize stuff.
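To make that concrete, here's a rough sketch of what a basic crossfeed filter does in code. This is not any particular plugin's implementation; the delay, gain, and cutoff values are just plausible ballpark figures for ~30° speakers (roughly 0.3 ms interaural delay, a few dB of attenuation, highs rolled off in the fed-across signal):

```python
# Minimal crossfeed sketch (hypothetical parameters, not a specific product):
# mix an attenuated, delayed, low-pass-filtered copy of the opposite channel
# into each ear, mimicking the interaural time/level differences of speakers.
import numpy as np

def crossfeed(left, right, fs=44100, delay_us=300, gain_db=-6.0, cutoff_hz=700):
    """Return (out_left, out_right) with simple crossfeed applied."""
    delay = int(round(fs * delay_us / 1e6))   # ITD: ~0.3 ms at ~30 degrees
    g = 10 ** (gain_db / 20)                  # ILD: opposite ear hears it quieter
    a = np.exp(-2 * np.pi * cutoff_hz / fs)   # one-pole low-pass coefficient

    def feed(x):
        # Delay the channel, then roll off its highs with a first-order low-pass
        d = np.concatenate([np.zeros(delay), x])[:len(x)]
        y = np.empty_like(d)
        acc = 0.0
        for i, s in enumerate(d):
            acc = (1 - a) * s + a * acc
            y[i] = acc
        return g * y

    return left + feed(right), right + feed(left)
```

So each ear still gets its own channel untouched, plus a quieter, darker, slightly late copy of the other channel, which is roughly what happens acoustically when you sit in front of two speakers.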
Crossfeed is limited, but you are already listening to the R and L channels "muddled" in a loudspeaker-and-room listening situation, with both speakers' sound audible at each ear. Crossfeed attempts to approximate that condition with headphones.
bigshot, I think you're wrong about most rock albums not having any soundstage. It's often just a different type of soundstage than, say, a live classical performance, but that doesn't mean that the rock version has no value.
Maybe I'm wrong about what crossfeed is. Does it create a clear three dimensional space like the DSP simulator we've been talking about? Or does it just shift phase and mix channels?