What creates soundstage in headphones?
Oct 19, 2014 at 5:28 PM Post #61 of 288
  I am assuming that you are talking about this track? http://www.youtube.com/watch?v=IUDTlvagjJA
 
I dunno. For the test track above, I can distinctly hear the exact spot the sound is coming from with both my open HE-560s and closed M50x, so sound stage definitely exists if it is present in the recording. I have never personally experienced the sound moving or flickering between locations in a weird way; the sound moves closer, farther away, and around my head very precisely and smoothly for me. I do not think the idea that cues in sound intensity and arrival time contained within the source create the perception of sound stage makes it a "trick." It really is just how our brains process and interpret distance cues. When you watch movies or game with headphones, you can often get positional cues from the sound. I have also noticed a perception of distance and sound stage in non-binaural recordings, where you can hear exactly where an instrument sits in space rather than in the 'center of your head'. With live orchestral pieces or vocal performances, I feel like I can accurately place where the instruments are in space.
 
I am unable to tell accurately whether there are large differences in sound stage between these two headphones, because I cannot do exact volume matching or a blind test; they feel very different on your head. However, both headphones present the illusion of space very effectively.
 
I am curious whether you hear the center vs. twisted positions the same or opposite with your headphones, and whether you can hear the distance of the knock in the binaural test track located here: http://www.audiocheck.net/soundtests_headphones.php


Yes, the youtube track is the one I had in mind. The door knocking works quite well. In the barber shop track, it works fairly well only when the sound is directly right or left, like the door knocking. Move the slightest bit from full right or left and it jumps inside my head. The scissors sound just outside at times and jump inside most of the time. Quite yucky sounding in that regard. Same with the electric razor.
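The intensity and arrival-time cues discussed above can be sketched in code. This is a minimal illustration of my own (not from anyone in the thread): it pans a mono click to the right by making the left channel quieter (ILD) and slightly later (ITD). The delay and gain values are assumptions, roughly in line with published human interaural ranges.

```python
SAMPLE_RATE = 44100          # samples per second
ITD_SECONDS = 0.0006         # ~0.6 ms, near the maximum human ITD (assumed)
ILD_GAIN = 0.5               # left channel at half amplitude (assumed)

def pan_right(mono, sample_rate=SAMPLE_RATE):
    """Return (left, right) sample lists with ITD/ILD cues for a
    source on the listener's right."""
    delay = int(round(ITD_SECONDS * sample_rate))        # ITD in samples
    right = list(mono)
    left = [0.0] * delay + [s * ILD_GAIN for s in mono]  # later and quieter
    right += [0.0] * delay                               # pad to equal length
    return left, right

click = [1.0, 0.5, 0.25, 0.125]   # a tiny mono "knock"
left, right = pan_right(click)
```

Played back over headphones, the brain fuses those two channel differences into a single source off to the right, which is the mechanism the post describes.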
 
Oct 19, 2014 at 5:49 PM Post #62 of 288
 


with my IE80 I hear the guy circling me in a 2D layout that is tilted, so what should be in front of me sits up at 30 or 40 degrees.
don't ask me why.
 
Oct 19, 2014 at 7:40 PM Post #63 of 288
I dunno. For the test track above, I can distinctly hear the exact spot where the sound is coming from with both my open HE-560s and closed M50x, so sound stage definitely exists if present in the recording.


I was referring to front/back, not right/left. The sound that was supposed to be in front of me (as soundstage always is) was right in front of my face, or right behind my head, and kept snapping back and forth. Soundstage involves placement of the musicians in front of you, as if they were on a stage. You can close your eyes and envision not just the left-right spread, but the distance and angle of each player's location too.

In my experience, headphones can perfectly reproduce left/right, but can only barely indicate distance and angle. Stereo speakers can perfectly indicate left/right, distance, and angle, but can only barely indicate front/back. Multichannel can do all three, but can only barely indicate up/down. And Dolby Atmos adds up/down to the equation, creating a fully dimensional sound field.

I think that most headphone folks use the term soundstage in a totally different way than speaker people do. For them it's just "open" sounding; the spread still goes straight through the middle of your noggin. Speaker people mean sitting in front of the performers with the musicians spread out from left to right in front of them. When the depth cues combine with the sound of the room, it creates a more dimensional stage in front of you than headphones can, which anchor the sound right down the middle of your head.
 
Oct 19, 2014 at 7:47 PM Post #64 of 288
IMO, soundstage is related to the sense of 'air', or some type of interaction between the treble frequencies and the acoustic resonances in the cups and pads. 'Air' is the sound seeming to travel over a distance, with the various frequencies decaying at different yet natural rates. Coupled with frequency response, it creates coherent soundstage and imaging.

I realized how important the natural sense of decay or 'air' was, when doing a side-by-side comparison of the Hifiman HE-400 vs. HE-400i.
Same manufacturer, same open-back design, both using orthodynamic drivers, and supposedly the same sensitivity. Same amp, source, and tracks.

The lack of soundstage and 'air' with the 400i meant the distinct impression that the entire band is in your face. The lows and mids come through fine, but the higher frequencies sound like they are behind a wall of water or glass, so the natural decay of the various frequencies seems off. You get the odd feeling of being in an aquarium (with just the higher frequencies being hindered by water). This is all relative to the HE-400, of course.

So now 'air' makes sense. 

Note: This is with maybe 12 hours of head time, so there is the benefit of the doubt that it was nowhere near the recommended 150-hour burn-in time.
 
Oct 19, 2014 at 8:04 PM Post #65 of 288
Air is a pretty vague term. It sounds like it means something and it is poetic, but it doesn't refer to any aspect of sound. The trick is to understand how we hear and apply that to what we hear... frequency response, phase, distortion, etc.

Soundstage is directly related to three things... left/right phase, psychoacoustic depth cues, and the effect of distance and the room to heighten the impact and focus those two things. Headphones can do the first two, but they can't do the third. People who use the term "headstage" for headphones make more sense than those who use the term "soundstage". But headstage is almost entirely built into the recording itself. Soundstage is largely room acoustics, placement of the speakers and seating position. Soundstage can be adjusted by tweaking the listening room. In fact, most people with speakers aren't experiencing good soundstage because their system isn't set up for that. Headstage can only be slightly tweaked by using open or sealed headphones. Most headphones are able to reproduce phase and channel separation perfectly.
 
Oct 19, 2014 at 8:28 PM Post #66 of 288
I was referring to front/back, not right/left. The sound that was supposed to be in front of me (as soundstage always is) was right in front of my face, or right behind my head, and kept snapping back and forth. Soundstage involves placement of the musicians in front of you, as if they were on a stage. You can close your eyes and envision not just the left-right spread, but the distance and angle of each player's location too.

In my experience, headphones can perfectly reproduce left/right, but can only barely indicate distance and angle. Stereo speakers can perfectly indicate left/right, distance, and angle, but can only barely indicate front/back. Multichannel can do all three, but can only barely indicate up/down. And Dolby Atmos adds up/down to the equation, creating a fully dimensional sound field.

I think that most headphone folks use the term soundstage in a totally different way than speaker people do. For them it's just "open" sounding; the spread still goes straight through the middle of your noggin. Speaker people mean sitting in front of the performers with the musicians spread out from left to right in front of them. When the depth cues combine with the sound of the room, it creates a more dimensional stage in front of you than headphones can, which anchor the sound right down the middle of your head.

 
ahh. i see. it seems like we are thinking of vaguely different things when we talk about sound stage. I admit I am thinking more of imaging, positioning, and instrument separation rather than the physical "in-front-of-your-eyes" type of sound stage you are referring to.
 
I think you are probably right in that comparison. The front-back depth cues from headphones will never match speakers in a room. The actual "in front of you" depth cues from headphones are not as strong in my experience, since the drivers point at you from directly beside your ears. With speakers, there is actual distance between the source of the sound and your ears, which gives you that heightened sense of depth. However, there have been certain musical tracks where I felt I had a good sense of distance and angle with headphones, though I am sure it is sub-par compared to a speaker set-up. I will have to take note of which track if I hear it again.
 
Based on my listening experience, I do still feel there are some variations in sound stage ability between headphones on the same track, but I will need to do a more objective comparison to confirm.
 
that is a very enlightening perspective from the speaker world. thank you.
 
Oct 20, 2014 at 1:55 AM Post #67 of 288
Air is a pretty vague term. It sounds like it means something and it is poetic, but it doesn't refer to any aspect of sound. The trick is to understand how we hear and apply that to what we hear... frequency response, phase, distortion, etc.

Soundstage is directly related to three things... left/right phase, psychoacoustic depth cues, and the effect of distance and the room to heighten the impact and focus those two things. Headphones can do the first two, but they can't do the third. People who use the term "headstage" for headphones make more sense than those who use the term "soundstage". But headstage is almost entirely built into the recording itself. Soundstage is largely room acoustics, placement of the speakers and seating position. Soundstage can be adjusted by tweaking the listening room. In fact, most people with speakers aren't experiencing good soundstage because their system isn't set up for that. Headstage can only be slightly tweaked by using open or sealed headphones. Most headphones are able to reproduce phase and channel separation perfectly.

 
Let's call it psychoacoustic depth cues, then. The perception of depth in a recording.
 
If headstage is built into the recording itself, there should be some mechanism that allows some headphones to render it better than others, everything else, including the recording itself, being equal.
 
Whether it's frequency response, spectral decay, phase, or a combination of all those and other things...that I'm not quite sure. 
 
Oct 20, 2014 at 1:56 AM Post #68 of 288
"Imaging" is another imprecise term, so is "instrument separation". Most likely imaging involves response imbalaances and instrument separation involves distortion (or noise I suppose).
 
Oct 20, 2014 at 1:58 AM Post #69 of 288
Let's call it psychoacoustic depth cues, then. The perception of depth in a recording.

If headstage is built into the recording itself, there should be some mechanism that allows some headphones to render it better than others, everything else, including the recording itself, being equal.

Whether it's frequency response, spectral decay, phase, or a combination of all those and other things...that I'm not quite sure. 


It's frequency response and distortion. Those are the two things that all transducers have the most problems with. It's not timing errors like phase or decay, because headphone timing errors don't come within a country mile of the audible range.
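Since frequency response and distortion are measurable quantities, here is a rough sketch of how distortion can be quantified: total harmonic distortion (THD) estimated from DFT bin magnitudes. This is my own illustration using a synthetic signal with distortion added deliberately, not a measurement of any actual headphone.

```python
import math, cmath

N = 1024
F0 = 8  # fundamental: 8 cycles per window, so every harmonic lands on a bin

def dft_mag(signal, k):
    """Magnitude of DFT bin k, normalized by the window length."""
    n = len(signal)
    return abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))) / n

# A sine with 10% second-harmonic distortion mixed in on purpose.
signal = [math.sin(2 * math.pi * F0 * t / N)
          + 0.1 * math.sin(2 * math.pi * 2 * F0 * t / N)
          for t in range(N)]

fund = dft_mag(signal, F0)                                  # fundamental level
harmonics = [dft_mag(signal, k * F0) for k in range(2, 6)]  # 2nd..5th harmonics
thd = math.sqrt(sum(h * h for h in harmonics)) / fund       # recovers ~0.10
```

The point is that "distortion" names something you can put a number on, which is what separates it from terms like 'air'.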
 
Oct 20, 2014 at 2:03 AM Post #70 of 288
"Imaging" is another imprecise term, so is "instrument separation". Most likely imaging involves response imbalances and instrument separation involves distortion (or noise I suppose).

I disagree. When you listen to an orchestral piece, you can clearly hear how the different instruments sit in different sections. I feel like instrument separation is a pretty clear-cut term... unless there is another definition or usage I am unaware of. Imaging, as I understand it, is simply being able to pinpoint the location the sound is coming from. Similar to sound stage, except no stage required? lol! I always thought imaging & sound stage were the same term, but you use sound stage in a different context, hence I went for imaging :)
 
edit: I guess when you are listening to a movie on a pair of headphones and hear the helicopter going over your head, or when you are gaming and hear a footstep behind you to the right, that would be imaging as I understand it. I don't think that phenomenon necessarily needs to be a frequency response imbalance or distortion. It can be intended in the source.
 
Oct 20, 2014 at 2:37 AM Post #71 of 288
 
Let's call it psychoacoustic depth cues, then. The perception of depth in a recording.

If headstage is built into the recording itself, there should be some mechanism that allows some headphones to render it better than others, everything else, including the recording itself, being equal.

Whether it's frequency response, spectral decay, phase, or a combination of all those and other things...that I'm not quite sure. 


It's frequency response and distortion. Those are the two things that all transducers have the most problems with. It's not timing errors like phase or decay, because headphone timing errors don't come within a country mile of the audible range.


while I agree with the principle and the reality of things, we headphone dudes are in need of words like soundstage, imaging or instrument separation. because even though those are crap terms and we can't even seem to agree on what they mean, they "talk" to us.
I think we all know that soundstage doesn't exist in a headphone, but as it was used everywhere, I use it too. to me, when I say soundstage while talking headphones, I usually mean how wide to the sides the sound can go (because the main parameter is that axis between our ears for headphones and IEMs). with my IEMs there are clear positioning differences when listening to a song, and again it may very well be only FR and distortion, but I don't have a way to describe what I'm hearing using only FR and disto, whereas everybody can get an idea when I say that the ER4 puts the instruments on the axis of the ears and has very very little depth (in front of me) and height (above me), but can go fairly far (for an IEM) on both sides. one of its defining traits is that most instruments are small points in space; they never surround you or sound like the source is large, and it never feels like the instruments mix together into one homogeneous thing that would be the music. listening to the ER4 you often lose the whole and hear several instruments each doing its own thing alone in space.
with my JH13 (first gen), I would say the soundstage is a lot smaller; most of the instruments seem to be placed inside my skull, but can be found a little bit anywhere inside that space: up, down, left, right, front, center, and even sometimes at the back of my head. the bass surrounds me a lot more (there is also a lot more of it) than with the ER4.
then with my IE80 (very V-shaped and bassy signature), a lot of those same instruments on the same song feel like they are free in a zone from 0 to 20 cm outside my head. in fact I can hardly find sounds that seem to come from inside my skull; most sounds feel big and seem to come from a larger physical source (harder to pinpoint exactly).
listening to music with those 3 is a very different experience when it comes to placing sounds in space. how can I explain those things to someone looking to buy an IEM if I only talk FR, disto, crosstalk and nozzle diameter or number of bores? there is a need for those words; it sure would be even better if we could agree on what they mean ^_^.
 
Oct 20, 2014 at 3:13 AM Post #72 of 288
For me, soundstage does exist in a headphone. A headphone should reproduce the soundstage that is in the recording, particularly for on-location recordings. However, all sounds in nature are mono in origin; we hear things in stereo because of our two ears, not because of the source. A headphone does not have the natural crossfeed that happens with speakers. A properly made recording should have good instrument placement, and you should also hear the "space", the room's sound characteristics, in it. EDM-type music is usually not recorded "live", meaning that the soundstage is created in the mixing.
 
Oct 20, 2014 at 1:06 PM Post #73 of 288
I disagree. When you listen to an orchestra piece, you can clearly hear how the different instruments are in different sections. I feel like instrument separation is a pretty clear-cut term...


It's a lot easier to refer to exactly what is holding back or enhancing the imaging... and that could be either frequency response or distortion. Imaging involves all of the variables at once; it's basically the same as saying good headphones vs. bad ones. It's a description of the *perception* of the sound being produced, not the sound itself.

Again, soundstage is a term created to describe precise three-dimensional placement of sound in a room. On mixing stages, the speakers are calibrated and positioned. The mixer creates the illusion of depth and width, placing each instrument in the space out in front, just as if you were sitting in an audience listening to a band play on stage. Headphones can't do this because there is no room for the sound to be precisely placed in; instead, it's crammed into the space between your ears. That isn't soundstage.

I really think the reason headphone users apply the term incorrectly is that they have never actually heard true soundstage. It's a pretty uncanny effect, but it isn't very common in rock recordings; it's most often used in jazz and chamber music. It's the natural-sounding illusion of three-dimensional space in sound. Headphone users generally think of soundstage as that amorphous phase stuff you hear in Pink Floyd, but that style of presentation has nothing to do with natural soundstage.
 
Oct 20, 2014 at 1:40 PM Post #74 of 288
It's a lot easier to refer to exactly what is holding back or enhancing the imaging... and that could be either frequency response or distortion. Imaging involves all of the variables at once; it's basically the same as saying good headphones vs. bad ones. It's a description of the *perception* of the sound being produced, not the sound itself.

 
I guess I am a bit confused about the difference in your mind between the perception of the sound produced & the sound itself.
 
In terms of speaker set-ups, the sound stage produced by speakers depends on the acoustic properties of the room. The final effect of being able to picture the musicians on the stage in front of you would technically be the *perception* of the sound produced rather than the sound itself.
 
In real life, we experience the Doppler effect. The sound wave itself has the same frequency at the source, but we hear different frequencies due to the source's relative movement toward or away from us. In that case, I think that the *perception* of sound is more useful to us than the objective properties of the sound wave, as our brains accurately process the changes in frequency to gauge the motion of the source. Sound has to be perceived, and our brains process that information. Looking at a sound wave in a vacuum... lol nvm, sound waves actually wouldn't exist in a vacuum hahaha.
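For reference, the Doppler shift mentioned above has a simple closed form for a stationary listener: f' = f * c / (c - v), where c is the speed of sound and v is the source's approach speed. A quick sketch (my own, purely illustrative numbers):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def doppler(f_source, v_source):
    """Perceived frequency for a source approaching a stationary listener
    at v_source m/s (negative v_source means receding)."""
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_source)

approaching = doppler(440.0, 30.0)    # pitch rises above 440 Hz
receding = doppler(440.0, -30.0)      # pitch falls below 440 Hz
```

The asymmetry (the rise on approach is larger than the fall when receding) is exactly the kind of cue the brain exploits to judge motion.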
 
I guess I don't really understand how you draw the distinction between the *perception* of the sound produced and the sound itself, or how such a distinction would be useful: sound is just information that our brains process, so it seems to me that the perceived sound is always the most valuable information. I do think that perceived sound is definitely NOT a touchy-feely subject; it can be accurately measured, like the Doppler effect.
 
I always thought that 'imaging' (or we can go with 'positional cues' instead) was due to differences in volume level between channels, differences in arrival time between our ears, phase/group delays, and changes in frequency that mimic the Doppler shift. It makes sense that the information required for positional cues is contained within the source file. I do think there may be differences in how different headphones present this information due to differences in earcup housing (similar to the room effect of speakers, but on a much smaller scale), damping, and channel matching. I am sure that frequency response and distortion can also play a role, though I am not sure how distortion relates to this subject.
 
Basically, I think that stereo recordings via headphones should be able to give us the ability to localize sound, since the small interaural differences between our two ears are what is primarily responsible for sound localization in real life. Headphones are definitely capable of reproducing that to a degree.
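The interaural time difference behind these localization cues can be approximated with the classic Woodworth spherical-head model, ITD = (r / c) * (theta + sin(theta)). A small sketch of my own; the head radius is an assumed population average, not a measured value:

```python
import math

HEAD_RADIUS = 0.0875    # m, assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s

def itd(azimuth_deg):
    """Woodworth approximation of the interaural time difference, in
    seconds, for a source at the given azimuth (0 = straight ahead,
    90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

front = itd(0)    # no time difference: the sound images dead center
side = itd(90)    # maximum ITD, on the order of 0.65 ms
```

Because these differences are baked into a stereo or binaural recording, a headphone that reproduces both channels cleanly delivers the same cues, which is why localization over headphones works at all.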
 
Oct 20, 2014 at 1:45 PM Post #75 of 288
Again, soundstage is a term created to describe precise three-dimensional placement of sound in a room. On mixing stages, the speakers are calibrated and positioned. The mixer creates the illusion of depth and width, placing each instrument in the space out in front, just as if you were sitting in an audience listening to a band play on stage. Headphones can't do this because there is no room for the sound to be precisely placed in; instead, it's crammed into the space between your ears. That isn't soundstage.

I really think the reason headphone users apply the term incorrectly is that they have never actually heard true soundstage. It's a pretty uncanny effect, but it isn't very common in rock recordings; it's most often used in jazz and chamber music. It's the natural-sounding illusion of three-dimensional space in sound. Headphone users generally think of soundstage as that amorphous phase stuff you hear in Pink Floyd, but that style of presentation has nothing to do with natural soundstage.

Okay, I just saw your edit. That was very helpful for understanding your point of view, and I can see your point with your sound stage definition. I agree that the true speaker-sense sound stage would be more powerful and realistic than "headphone sound stage."
 
I do think that headphones can do sound localization to a certain degree, and it is not 100% incorrect to use terms like sound stage, imaging, instrument separation, and positional cues when describing headphone performance.
 
