Originally Posted by dvw
I don't get it. What is the soundstage people are talking about? I have been a long-time speaker listener and I do enjoy a good soundstage. However, modern recordings have a severe lack of soundstage. In some music, like hip hop, there is barely a hint of a soundstage.
With headphones, I have never had any experience of any soundstage. As I read here in this forum, the soundstage is defined as listening beyond the ears, or headstage. I don't understand the importance or enjoyment of material that has no soundstage at all. In some cases, for critical listening, some headphones with a so-called large soundstage are really irritating. For example, a drum could span all the way from the right to the left. The band is sitting on top of one another. I would say a large soundstage is a disadvantage instead of an advantage. Yet in most reviews here, soundstage is always claimed as a major differentiator (especially with cables), and nobody has ever tried to correct or clarify what they are talking about. Could it be people are reading too many Six Moons reviews?
Originally Posted by dvw
However, modern recordings have a severe lack of soundstage. In some music, like hip hop, there is barely a hint of a soundstage.
You might also notice that most people splurging on audiophile gear (particularly the older ones, on speakers) aren't listening to hip hop or most other modern music, so it's all the harder for them to relate soundstage to it. Also, when it comes to hip hop, people will either recommend Sony's enhanced-bass headphones or start bashing Beats.
Originally Posted by dvw
I don't get it. What is the soundstage people are talking about? I have been a long time speaker listener and I do enjoy a good sound stage.
...I don't understand the importance or enjoyment of material that has no soundstage at all.
In any case, that might also be why I react with a "huh?" to new songs on the radio. I really don't listen to much new music, though I have to admit soundstage hints are audible with both speakers and headphones even on more mainstream material - by which I don't mean synthesized electronic music, but more acoustic recordings.
Originally Posted by dvw
With headphone, I have never ever had any experience of any sound stage. As I read here in this forum, the sound stage is defined as listening beyond the ears or headstage.
Not necessarily - it just means those headphones get that much closer to speakers.
Originally Posted by dvw
In some cases for critical listening, some headphones with so-called large soundstage are really irritating. For example, a drum could span all the way from the right to the left. The band is sitting on top of one another.
I had a similar experience when I used a K701 with my DAC-AH and Little Dot MkII. That system basically put the orchestra in Epica's The Classical Conspiracy in front of and below the band, as if they were performing in the Kodak Theater with the band in the pit ahead of and below the stage. To date I have not heard the K701 do anything as weird as that on other gear.
On the other hand, yes, headphones do have a lot of inherent weaknesses compared to speakers. That's exacerbated when younger people who get into (serious) headphones before (serious) speakers for one reason or another (technically a lot of us started with head/earphones, given we grow up in our parents' house first, etc.) read what people listening to music with real soundstage say, then expect to hear the same thing in badly recorded modern music (whether they merely think they hear it or actually do is up for debate).
But that's also why the size of the soundstage is less critical than all the other characteristics. For one, even with speakers you are not reproducing the soundstage the best recording engineers hope to simulate at full size - you are reproducing it to scale. A band recorded to simulate their space in a small bar, for example, isn't going to be reproduced at the same size unless your room is around the same size, because even if you can control reflections, you're still restricted by how far apart the speakers are. For a headphone, the scale is more like one of those tabletop band toys that start moving and "performing" when they hear music, viewed with your face up close. The important thing is how accurate the image is within that space, not how far the soundstage extends out of one's head.
Which leads to another set of disadvantages with headphones. Speakers are usually set at a toe-in angle, especially if you aren't far from them. Also, with speakers, your left ear can hear the right speaker and vice versa, which helps simulate spatial cues. By default a very basic headphone system gets neither, though technology tries to improve on things. As for the angle, some headphones use the chassis (i.e. the cups) or the earpads to set the drivers at an angle and slightly forward of the ear canal, simulating a speaker's toe-in and avoiding the more "in your face (head)"* sound that comes naturally when the sound source is right outside one's ears. My HD600 has neither, but I wear it in such a way that the earpads develop a wear pattern similar to the default shape of K701 pads - more wear on the side of the pads forward of the face - simulating the same toe-in. I go further and wear it with the rear of the pads pushing my earlobes slightly forward.
As for hearing the other side, there's crossfeed, which is roughly the headphone equivalent of time alignment in a car, where more serious systems can set customized delays for each tweeter, midrange, midwoofer, and subwoofer so the driver hears a more normal soundstage on the dashboard per IASCA and EMMA rules (i.e. vocalist dead center and as high up as possible, all other instruments at the same height, spread out with distinct sources along that plane, etc.). The difference is that crossfeed doesn't just delay the sound but deliberately feeds some frequencies across the channels. Depending on how purist one is about not fiddling with the sound, that may be a problem. On my end, though, the built-in crossfeed on my amplifier helps keep the span of a drum roll realistic. What some people hear as congested is, again, to scale: guitars on left and right, bass guitar and vocals in the middle, all percussion somewhere between the guitars and preferably behind the vocals.
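To make the idea concrete, here's a minimal crossfeed sketch in Python - not my amp's actual circuit, and the attenuation and delay values are just illustrative assumptions. Each output channel gets an attenuated, slightly delayed copy of the opposite channel, mimicking how each ear also hears the far speaker:

```python
import numpy as np

def crossfeed(left, right, sr=44100, atten_db=-6.0, delay_ms=0.3):
    """Mix an attenuated, slightly delayed copy of each channel
    into the opposite channel (basic crossfeed)."""
    delay = int(sr * delay_ms / 1000)      # interaural-style delay in samples
    gain = 10 ** (atten_db / 20)           # dB -> linear amplitude
    # delayed copies of each channel, padded with silence at the start
    l_d = np.concatenate([np.zeros(delay), left])[:len(left)]
    r_d = np.concatenate([np.zeros(delay), right])[:len(right)]
    out_l = left + gain * r_d
    out_r = right + gain * l_d
    # normalize only if the mix would clip
    peak = max(np.max(np.abs(out_l)), np.max(np.abs(out_r)), 1.0)
    return out_l / peak, out_r / peak
```

Real implementations (hardware or software) additionally low-pass the crossfed signal, since at higher frequencies the head shadows the far speaker; this sketch skips that to keep the core idea visible.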
Some sources, however, are just bad at this with headphones. With a NAD C545, drum rolls don't only span from one side to the other - they actually take on a 'crescent' shape. As much as that may resemble how a real drum set wraps around the drummer, the problem is it happens above and around my head on some of the CDs I tried. The Marantz CD5003 does this worse. The Cambridge 340C puts all percussion forward of the entire band. Obviously, with more affordable CDPs a realistic soundstage isn't just low on the list of priorities; these players very likely also aren't tested with headphones (or not extensively) by the engineers.
*Like how cymbals always seem to sit at the extreme left and right - EQ can only tame their fatiguing qualities, not reposition them.
Originally Posted by dvw
But yet, on most reviews here soundstage is always claimed as a major differentiator (especially with cable). And yet nobody ever tried to correct or clarify what they are talking about. Could it be people are reading too many Six Moon review?
Well, here's one problem - try to correct a strong belief and see what reactions you get. Like trying to tell someone that a $4,000 cable isn't a good investment for the $4,000 (or even just $1,000) components it bridges, or telling Kansas, Texas, and England that the T. rex died 65 million years ago and Noah didn't save the Stegosaurus.
On the other hand, frequency peaks (or the lack thereof) do affect soundstage. Bias the response toward the vocal range and voices seem more forward, for example. What some people describe as "weak bass" but others find adequate is often perceived by the latter as a deeper soundstage on some headphones, because it puts all the percussion - especially the bass drum - and the bass guitar behind the vocals. Vocals are normally recorded straight from the singer into the mic, with no 'wall of sound' or other phase effects involved, whereas the mics sit a bit away from the drums (though not necessarily in the "front row"), the guitar amps are placed to one side of the room (or the engineer just biases them in the mix), and so on.
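As a rough illustration of biasing the response toward the vocal range, here's a standard peaking EQ from the well-known RBJ Audio EQ Cookbook in Python. The 2 kHz center and +6 dB gain in the test are just example values for the vocal presence region, not anything a particular headphone actually does:

```python
import math

def peaking_biquad(f0, gain_db, q, sr=44100):
    """RBJ cookbook peaking-EQ coefficients: boost (or cut)
    a band around f0 by gain_db decibels."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a
    a0 = 1 + alpha / a
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad_filter(b, a, x):
    """Apply the biquad in direct form I."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y
```

A +6 dB peak roughly doubles the amplitude at the center frequency while leaving distant frequencies alone, which is exactly the kind of bias that pulls voices forward in the image.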
In any case, if you can get your hands on an EMMA or IASCA test/demo/competition disc, they have recordings of someone talking about the competition or about what soundstage means while walking around a room relative to a mic (or a pair of them), so you can hear how he pans from one side to the other and from front to back, demonstrating how it's possible with speakers. Since this is recorded specifically to show this (unlike, again, most current music), it's easy to pick up soundstage cues from it even on headphones.
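A crude sketch of that demo's left-to-right movement is a constant-power pan of a mono source across the stereo image - just an illustration of the panning cue, not how the discs are actually produced (they use real mics and a real moving talker):

```python
import numpy as np

def pan_sweep(mono):
    """Constant-power pan of a mono signal from hard left to
    hard right over the length of the clip."""
    n = len(mono)
    theta = np.linspace(0, np.pi / 2, n)   # pan angle: 0 = left, pi/2 = right
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return left, right
```

The cos/sin pair keeps the total power constant at every point of the sweep, so the source seems to glide across the stage instead of dipping in loudness at the center.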
In any case, I'm not a fan of Six Moons either. Some info there serves as a good guide, but I'd be one of the first not to buy it lock, stock, and barrel. (I approach every review like that generally - of course a bit of guidance is never a bad thing; it's how much faith some people put in reviews, regardless of who wrote them, that I find problematic.)
Originally Posted by joshwalnut
Yeah, I also don't understand soundstage when it comes to music; I don't hear much soundstage unless I'm hearing a 3D sound of some kind. BUT when it comes to movies and games, especially online multiplayer video games (first-person shooters), I understand the concept of soundstage very well. My Turtle Beach X1s gave the best soundstage, but they were only meant for that - sound quality listening to music is HORRIBLE. I now have a Sennheiser HD 558 that is supposed to have outstanding soundstage because of the open-back design... about the same if not worse than my X1s when it comes to FPS soundstage. Listening to music or movies, though, the 558s are amazing... while the "gaming" headset excels at FPS soundstage.
Well, in those cases they're a surround format, so off the bat, on a home theater system you have speakers all around you to simulate that. By contrast, in music not even 5-channel concert videos or SACDs put the individual instruments in all five channels. In movies and games you have speakers around you to show how the sparks off Darth Sidious' lightning and Mace Windu's lightsaber are flying all over the place, or an Uruk-hai exhaling around you when the camera is right among them outside the Hornburg, or some Nazi shooting at you from his foxhole as you make your way through Normandy's hedgerows in Call of Duty. In music you don't get a center channel in front for vocals, another behind that for bass guitar, monitors for guitars, then another stereo pair for the drums behind those; or an array of speakers spread out to simulate the location of every section of an orchestra. Recording music like that is too anal-retentively specific to be commercially viable, although I must admit I have considered borrowing five CDPs and the CDs of the individual tracks (instruments, not songs) before they're mixed down to the normal 2-channel recording, then hooking up each source to active monitors. Needless to say, even if I get around to setting this up, the question is whether all five CDPs will play simultaneously off one universal remote when I hit play.
Going back to the IASCA and EMMA demo disc I mentioned above, though, it is possible to simulate this if the recording was done in such a way as to use the recording's mics to convey distance. But of course that's not exactly near the top of the list for engineers nowadays, more so when your clients have one or a handful of guys rapping at the same time, the drums don't roll from one side to the other along with a guitar riff, and the bass comes from a synth extending the rumble off a bass drum hit or just generating the windshield-shattering bass outright. Yep, they probably record with a Hummer on Dubbz sporting Audiobahns in mind a lot more than a speaker system at home. (This is an extreme example, and yes, even rock bands nowadays tend to absolutely suck at soundstage recording too.)