
What creates soundstage in headphones??

Discussion in 'Sound Science' started by noobiiee, Apr 29, 2010.
  1. Noobiiee
    Hey guys, just out of curiosity: how can one headphone have a soundstage while another doesn't? And how can amping or better sources improve the soundstage of some headphones?

  2. Head Injury
    This should be mandatory reading.

    This regards imaging and headstage, which are big parts of soundstage. Beyond that there's air, the perceived distance between instruments. In general, the farther from the head a driver is held, and the more it projects back towards the ear instead of directly into it, the larger the soundstage will feel.
  3. Noobiiee
    Thank you for the link. It will take me a few days to properly assimilate and understand the ideas, but what I get from flicking through a few paragraphs is that training is necessary in order to imagine the soundstage right. So to improve the soundstage, could I simply change the ear pads to widen the gap between my ears and the drivers?
  4. Head Injury

    Originally Posted by Noobiiee
    Thank you for the link. It will take me a few days to properly assimilate and understand the ideas, but what I get from flicking through a few paragraphs is that training is necessary in order to imagine the soundstage right. So to improve the soundstage, could I simply change the ear pads to widen the gap between my ears and the drivers?

    You'll make it bigger. You won't necessarily make it "better", because I think imaging is what does that. You'll also lose bass, gain treble, and maybe hollow out the mids.

    Training isn't really necessary. The headphones do all the hard work. Proper imaging and a sense of distance come from the detail they send your way. Your ears imagine soundstages all the time: when someone talks to you from ten feet behind and to your left, you know exactly where they are before you turn around. The challenge is on the headphones to portray those micro-detail cues properly to give that sensation; then you just need time to get used to the headphones and find them.
  5. Young Spade
    Phase shifts, right? By changing the phase of the "sound" you can change "where" the sound is coming from. I'm under the impression that this applies mostly to near and far, and not so much to left and right.

    Someone please correct me if I'm wrong.

    EDIT: Regarding what Head Injury stated, I do agree with the whole distance thing, but IMO it's not the only thing that affects SQ. With UM3Xs the soundstage is somewhat close (which could be a point in his favor, as they are IEMs and have small, long tips), but once plugged into an ALO Rx, the soundstage backs up and you get some very real, very accurate instrument separation, as well as a great sense of space.
  6. Noobiiee
    I think it's mainly about how accurately the driver reproduces the sound signal. In plain English: if the instrument is placed far away from the mic, the headphones are meant to reproduce the sound plus the distance, not just the sound with no distance (as with narrow-soundstage headphones).
  7. Chef
    Don't worry about soundstage unless you're listening to classical music, I think, or something else where they're not playing the sound directly into the mic....

    I really think the recording is what makes a soundstage 'realistic'... quieter sounds will seem farther away, and louder ones will seem closer. Of course, you have to know for yourself what is actually normal in order to judge what is far and what is close... If you've never heard an instrument live before, you might not be capable of that.

    The idea that headphones have anything to do with that seems really silly. Unless they're really awful and over- or under-emphasizing certain frequencies, there's no reason for them not to sound like the recording.
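    The quieter-equals-farther intuition above is, at its simplest, inverse-square spreading: each doubling of distance drops the level by about 6 dB. A minimal sketch (Python; the free-field point-source assumption is mine, not from the thread):

```python
import math

def spl_drop_db(d_near: float, d_far: float) -> float:
    """Level drop in dB when a point source moves from d_near to d_far,
    assuming free-field inverse-square spreading (no room reflections)."""
    return 20 * math.log10(d_far / d_near)

# Doubling the distance costs about 6 dB; 10x the distance costs 20 dB:
print(round(spl_drop_db(1.0, 2.0), 1))   # -> 6.0
print(round(spl_drop_db(1.0, 10.0), 1))  # -> 20.0
```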
  8. Shike
    Dispersion characteristics
    Frequency response
    Maybe phase

    Just some quick ones that come to mind.
  9. G.Trenchev
    Channel phase shifting; it's in the recording.
  10. Happy Camper
    I know this is a complex question. I don't think it is always the headphone that has the most impact on the soundstage; I've heard differences in tubes, amps, and DACs.
    To the recording engineers here: could you explain the practices that create the soundstage? I think there are a lot of timing issues involving the secondary sounds that are part of an instrument, the location of the instrument relative to anything that would create a reflection, venue acoustics, etc.
    I feel tube amps have a large impact on the soundstage, but I don't have enough experience with SS amps to make that a general statement.
  11. MrGreen
    Channel separation in amps and DACs plays a role in how soundstage is perceived; how much of it we hear is a little debatable, I guess.

    With speakers, it is typically the delay (and the volume difference that results from inverse-square falloff over distance) between the reverb and the original signal that creates the soundstage effect, or rather gives an impression of the size of the room. With cabinet speakers, the size of the box will also affect the impression of size.

    With headphones, the 'room' ultimately becomes the headphone cup. In closed headphones it is simply a matter of reflections within the cup, which produces the typical "boxed-in" sound (and obviously a larger sound when the drivers are angled, since it takes more reflections for the room reflections to reach the ear, and when the cups are made larger). With open headphones, the room is opened up. I'm afraid I can't explain the physics behind the larger soundstage in more depth than this, because I am not an acoustical engineer; at a guess, it is perhaps the reduced number of reflections, which makes the reflection seem further away because it is quieter (or maybe it's just the same as a large room; I don't know, but it doesn't sound like speakers anyway).

    Interestingly, you also have other factors, like treble response, influencing how we perceive the soundstage. This is because the frequency response of our ear (particularly in the treble) changes as a sound rotates around our head and the pinnae are struck differently. The difference in treble between left and right gives us a stereo-positioning effect, which lets the brain work out where sound is coming from. Sound going straight into the ear has a reduced treble response, which is why many headphones have a boosted treble response: to mimic the sound of a speaker.

    A headphone with a boosted treble response is more likely to seem to come from "all around" (diffuse-field response), or indeed from in front of the face (free-field response), than one with reduced treble, which typically sounds very two-dimensional (my experience agrees). At this point, you may notice that it is usually headphones some consider bright (and certainly not those people consider dark), as well as those that are open and have angled drivers (in some cases), that have a reputation for a large soundstage. Of course, this is a simplification, and the angle at which reflections enter the ear (previously mentioned) also creates some soundstage phenomena.
    I'm afraid I can't comment on soundstage (or lack of) in balanced armature canalphones. I suspect distance from the ear also plays a role in soundstage effects but I can't articulate how/why.
    As for why lo-fi headphones have a poor soundstage, well, I can't explain that one.
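    The delay-and-level-difference idea in the post above can be put into rough numbers: a reflection arrives later in proportion to its extra path length, and quieter per the inverse-square law. A minimal sketch (Python; the 343 m/s speed of sound and the example path lengths are illustrative assumptions, not figures from the thread):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def reflection_cues(direct_m: float, reflected_m: float):
    """Delay (ms) and level difference (dB) of a single reflection relative
    to the direct sound, from path length and inverse-square falloff."""
    delay_ms = (reflected_m - direct_m) / SPEED_OF_SOUND * 1e3
    level_db = 20 * math.log10(direct_m / reflected_m)
    return delay_ms, level_db

# A room-scale reflection vs. one inside a headphone cup:
print(reflection_cues(10.0, 25.0))   # roughly 43.7 ms later and about 8 dB down
print(reflection_cues(0.02, 0.06))   # ~0.12 ms later: fused with the direct sound
```

    The second call illustrates why cup reflections read as "boxed in" rather than as a room: the delays are far too short to be heard as separate arrivals.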
  12. JerryLove
    The location of a sound is determined by the volume and time differences as it hits our right and left ears. We hear reverberations/echoes that arrive less than about 1 ms after the original sound as an extension of that sound; we hear those arriving more than 1 ms later as reflections.
    The brain's integration of sub-1 ms reflections into the original sound is why it's so important that speakers have good off-axis performance.
    Localizing sound is about controlling that first arrival: making sure it stops early and can be readily placed. This is why it's often easier with a single-driver boombox than with a multi-driver speaker to get that "in the middle of my head" localization headphones often give.
    To create the sense of a room, you need to hear those late-order reflections; it's rather like sonar. Adding more of them is why some speakers are bipole or omnipole designs (to introduce real room reflections).
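    The time-difference half of the localization cue described above can be approximated with the classic Woodworth spherical-head formula, ITD = (r/c)(theta + sin theta). A rough sketch (Python; the head radius is a textbook-style average I'm assuming, not a figure from this thread):

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s

def itd_us(azimuth_deg: float) -> float:
    """Interaural time difference in microseconds for a source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using the
    Woodworth spherical-head approximation: ITD = r/c * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + math.sin(theta)) * 1e6

print(round(itd_us(90)))  # -> 656 (about 0.66 ms between the ears)
print(itd_us(0))          # -> 0.0 (dead center: no time difference)
```

    Sub-millisecond differences of this size are exactly what the brain uses for left/right placement, which is why the first-arrival timing matters so much.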
  13. MrGreen


    Oh, I forgot to mention that. Thanks, nice post.
  14. Dynobot
    Here is a question...
    What creates soundstage on regular CDs?
    Almost all recordings are made in studios using separate tracks that are blended together into a complete song. Artists may record on separate days, some together, some not, but most often not in the positions we 'seem' to hear them in via our speakers. That is to say, we hear a sound-STAGE, but the CD was recorded in a recording booth.
  15. MrGreen

    Track separation: namely, the difference in volume between left and right. The brain hears "a stronger sound from the left", for example, and places the image on that side. Theoretically, the higher the bit depth (and thus the higher the dynamic range), the greater the potential soundstage; of course, audibility hugely comes into question at that point. Rarely will a sound from the left actually be a 100% pan; a 100% pan is actually painful to listen to on headphones IMO (try an old Coltrane album to hear this for yourself). There are also factors like reverb and treble response, but I'd say they're less responsible than the headphones. You can test the track-separation idea by using a software DSP for total crossfeed (mono) and turning it down slowly.
    Binaural recording uses the same principle, but I think it also takes locational HRTF into account. I'd actually really like to try recording some of my music binaurally.
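    The level-difference panning and the mono-crossfeed test described above can be sketched in code. Constant-power panning is just one common pan law, an assumption on my part rather than something stated in the thread:

```python
import math

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power pan law: pan in [-1, 1], -1 = hard left, +1 = hard right.
    Returns (left_gain, right_gain); total power stays constant across the arc."""
    angle = (pan + 1) * math.pi / 4   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

def crossfeed_to_mono(left, right, amount: float):
    """Blend each channel toward the mono sum: amount=0 leaves the stereo
    image intact, amount=1 collapses it entirely. Collapsing to mono removes
    the left/right level cue, flattening the perceived stage."""
    mono = [(l + r) / 2 for l, r in zip(left, right)]
    out_l = [(1 - amount) * l + amount * m for l, m in zip(left, mono)]
    out_r = [(1 - amount) * r + amount * m for r, m in zip(right, mono)]
    return out_l, out_r

# Hard-panned samples, fully crossfed, end up identical in both channels:
print(crossfeed_to_mono([1.0, 0.0], [0.0, 1.0], 1.0))  # -> ([0.5, 0.5], [0.5, 0.5])
```

    Sweeping `amount` from 1 back down to 0 is the listening test suggested above: as the level differences between channels return, the image spreads back out.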
