Head-Fi.org › Forums › Equipment Forums › Sound Science › Speaker vs. Headphone Soundstage / Positional cues / Imaging

Speaker vs. Headphone Soundstage / Positional cues / Imaging

post #1 of 41
Thread Starter 

I had this theory (later proved wrong) that headphone audio's width and depth were inverted compared to speakers'; that's how this thread started, at least.  Below is a drawing of what I thought was going on.

 

This thread has gone on with much more informed and intelligent contributions so I just wanted to amend the original post to keep as little of my silly theory as possible.  Go on to the next few pages which have some pretty interesting discussions going on.

 

Quote:
Originally Posted by sphinxvc View Post

[Attachment: Screen shot 2011-10-12 at 12.09.32 PM.png]

 

[Attachment: Screen shot 2011-10-12 at 12.13.27 PM.png]

 


Edited by sphinxvc - 11/6/11 at 12:44pm
post #2 of 41
Thread Starter 

What is it that I'm missing?

post #3 of 41
Thread Starter 

Depth in the first picture should be vertical.  That would properly show how things are getting inverted.

post #4 of 41

sphinxvc, good luck getting an answer in this forum unless you have graph layouts and FACTS and test charts, etc., for your questions or answers!  I was told straight out in another post in the "Sound Science" forum that I could not include IMO in my answer.  I'll just listen and enjoy my music, be it on my main system or my headphone system...  Not to be a smart ass, but I've been listening to music through many different systems longer than most of the dudes on this site have been alive!  I'm not bragging, but the old audiophiles I learned from would just laugh at most of these topics.  So sad...  So if you like how something sounds, that's all that really matters, isn't it?


Edited by 9pintube - 10/12/11 at 1:11pm
post #5 of 41

 

I too don't really understand...
But I think you're overrating what human ears are actually able to hear. Think of it like this: an ear is just a single point in space that "hears" the air vibrations at exactly that point. This point happens to be positioned a few centimeters inside our heads, with free air access on only one side (which is why we hear sounds coming from the left louder in our left ear than in our right). We only have two ears, so only a two-channel input. This is all the brain has to work with, so all the "width" and "depth" we hear is the brain computing and guessing, using the loudness of the sound in each ear and the difference in the time it takes a particular sound to reach one ear versus the other (pretty impressive when you think about it). The only thing that separates binaural recordings from regular stereo is that the microphones are positioned in a plastic replica of a head, where the ears would be, and thus produce a more accurate image of the sound as a human would hear it. In reality there is no width and depth in stereo recordings; it's two channels, nothing more. The rest is the brain.

I hope that at least has something to do with what you were asking. And sorry if I just explained something you already knew :-)
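To put rough numbers on that arrival-time cue, here's a minimal sketch using the classic Woodworth spherical-head approximation (the head radius and speed of sound are just typical assumed values, not measurements):

```python
import math

HEAD_RADIUS = 0.0875    # m, a typical head radius (assumed)
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source.

    Woodworth spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta)), theta measured from straight ahead.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source straight ahead produces no time difference; a source at
# 90 degrees to one side arrives roughly 0.65 ms earlier at the near ear.
print(round(itd_seconds(0) * 1e6), "us")
print(round(itd_seconds(90) * 1e6), "us")
```

The brain resolves interaural differences down to a few tens of microseconds, which is why such tiny delays carry so much positional information.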

 

post #6 of 41

I just love linking this post.

 

It's darth nut's review of the O2, where he goes into detail on how we perceive soundstage with headphones on normal recordings.

post #7 of 41

All soundstaging, imaging, etc. is delicious artifice.  Our ears and brains interpret the sounds coming out of speakers as a representation of actual performers, based on the cues we have learned from everyday experience.

 

I record orchestras and chamber ensembles.  Let's consider a simple microphone setup recording an orchestra: two mics, 15' in the air, a foot apart, cardioid capsules, the left mic pointing far left and the right far right.

 

Stereo positioning/imaging is a result of differences in timing and sound pressure.  A violin to the immediate left will be louder in the left mic, and its sound will arrive there sooner than at the right.  A timpani's sound on the far left will arrive even later.  The sound of the flutes in the middle arrives at the same time and at the same intensity.  When speakers reproduce these recorded sounds, the positional cues allow our brains to place the images in the soundstage from left to right.
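A rough sketch of how those timing and level differences arise at a spaced pair (the positions are hypothetical, and mic directivity is ignored, which the cardioid setup above would change):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def mic_pair_cues(source, mic_left, mic_right):
    """Arrival-time and level differences at a spaced mic pair.

    Positions are (x, y) in metres.  The level difference uses only the
    inverse-square law, ignoring mic directivity and the room.
    """
    d_left = math.dist(source, mic_left)
    d_right = math.dist(source, mic_right)
    dt_ms = (d_right - d_left) / SPEED_OF_SOUND * 1000.0  # >0: left mic first
    dl_db = 20.0 * math.log10(d_right / d_left)           # >0: louder in left
    return dt_ms, dl_db

# A violin 3 m to the left of centre, 5 m in front of a pair 0.3 m apart:
dt, dl = mic_pair_cues((-3.0, 5.0), (-0.15, 0.0), (0.15, 0.0))
print(f"{dt:.2f} ms earlier and {dl:.2f} dB louder in the left mic")
```

Even these sub-millisecond, fraction-of-a-decibel differences are enough for the brain to place the violin left of centre on playback.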

 

Depth is slightly more complicated.  Sounds that are further away are less loud.  They also contain less treble energy (consider how a band at a distance sounds very bass-heavy, thumping).  Further-away sounds are also accompanied by more sound of the room (close sounds contain little echo/reverberation; far sounds, much more).  The sound of a violin in the front row is brighter, louder, and has less room ambiance than a violin in the back.  We also rely on our experience: we know a trumpet will drown out a flute, so our brains will place the trumpet further back if we can still hear the flute.
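The loudness part of that depth cue follows directly from the inverse-square law; a small free-field illustration (ignoring the reverberation and treble-loss cues, which matter just as much indoors):

```python
import math

def level_drop_db(near_m, far_m):
    """Free-field level difference between two source distances
    (inverse-square law: 6 dB per doubling of distance)."""
    return 20.0 * math.log10(far_m / near_m)

# A back-desk violin at 20 m sits about 12 dB below a front-row violin
# at 5 m -- one of the cues the brain reads as "further back".
print(f"{level_drop_db(5.0, 20.0):.1f} dB")
```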

 

Headphones have difficulty reproducing imaging and depth.  For example, our brains rely on each ear hearing both speakers.  As another example, the best imaging typically occurs when the speakers are a given distance apart and the listener is that same distance from each speaker.  This timing relationship is not maintained with headphones.
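That each-ear-hears-both-speakers effect is what crossfeed DSPs try to restore for headphones. A minimal sketch in plain Python (the gain and delay values are illustrative guesses, not tuned ones):

```python
def crossfeed(left, right, gain=0.3, delay=13):
    """Feed an attenuated, delayed copy of each channel into the opposite one.

    At a 44.1 kHz sample rate, 13 samples is ~0.3 ms -- roughly the extra
    time a speaker's sound needs to wrap around the head to the far ear.
    Real crossfeed filters also low-pass the crossfed copy; omitted here.
    """
    n = len(left)
    delayed_l = [0.0] * delay + list(left[: n - delay])
    delayed_r = [0.0] * delay + list(right[: n - delay])
    out_l = [l + gain * dr for l, dr in zip(left, delayed_r)]
    out_r = [r + gain * dl for r, dl in zip(right, delayed_l)]
    return out_l, out_r

# A click in the left channel now also appears, quieter and ~0.3 ms later,
# in the right channel, mimicking the far-ear path of a real speaker.
click_l = [1.0] + [0.0] * 99
click_r = [0.0] * 100
out_l, out_r = crossfeed(click_l, click_r)
```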

 

The relative positioning of "width" and "depth" does not change with headphones.  The cues are just harder to discern, particularly with multi-tracked studio recordings, as there is no real-life counterpart; we do not know what is real.

 

I hope this makes sense.

post #8 of 41
Thread Starter 

Edit: for some reason I missed a number of these posts.  Reading right now.  Response forthcoming.


Edited by sphinxvc - 10/12/11 at 2:24pm
post #9 of 41

A speaker's soundstage can be characterized with measurements: vertical and horizontal polar response graphs correspond to width, while delayed energy and decay correspond to depth.  This is what makes the soundstage sound different between different speakers.  I'm not sure what would apply to headphones, but the cup design would be of great importance to sound dispersion and would affect the width and depth of the soundstage.

post #10 of 41
Nice post, Wapiti.

I thought about taking a shot at replying, but your post is just neat...

To the thread starter: you should read up on human hearing / auditory cues. Your graph is nice-looking, but depth and width on an overhead chart do not equal depth and width as the ears interpret incoming sounds. It's all in the mix of direct and reflected sounds from the recording hall, as well as sound diffraction across the human head. Headphones actually have an advantage over speakers when fed some form of binaural signal (I was listening to The Final Cut by Pink Floyd for the first time yesterday; what a shock!)
post #11 of 41
Thread Starter 

Edit: Just saw all the new replies, reading...

post #12 of 41
Thread Starter 

Quote:

Originally Posted by Wapiti View Post

All soundstaging, imaging, etc. is delicious artifice.  Our ears and brains interpret the sounds coming out of speakers as a representation of actual performers, based on the cues we have learned from everyday experience.

 

I record orchestras and chamber ensembles.  Let's consider a simple microphone setup recording an orchestra: two mics, 15' in the air, a foot apart, cardioid capsules, the left mic pointing far left and the right far right.

 

Stereo positioning/imaging is a result of differences in timing and sound pressure.  A violin to the immediate left will be louder in the left mic, and its sound will arrive there sooner than at the right.  A timpani's sound on the far left will arrive even later.  The sound of the flutes in the middle arrives at the same time and at the same intensity.  When speakers reproduce these recorded sounds, the positional cues allow our brains to place the images in the soundstage from left to right.

 

Depth is slightly more complicated.  Sounds that are further away are less loud.  They also contain less treble energy (consider how a band at a distance sounds very bass-heavy, thumping).  Further-away sounds are also accompanied by more sound of the room (close sounds contain little echo/reverberation; far sounds, much more).  The sound of a violin in the front row is brighter, louder, and has less room ambiance than a violin in the back.  We also rely on our experience: we know a trumpet will drown out a flute, so our brains will place the trumpet further back if we can still hear the flute.

 

Headphones have difficulty reproducing imaging and depth.  For example, our brains rely on each ear hearing both speakers.  As another example, the best imaging typically occurs when the speakers are a given distance apart and the listener is that same distance from each speaker.  This timing relationship is not maintained with headphones.

 

The relative positioning of "width" and "depth" does not change with headphones.  The cues are just harder to discern, particularly with multi-tracked studio recordings, as there is no real-life counterpart; we do not know what is real.

 

I hope this makes sense.


Thanks for the insight.

 

That did make sense, and it's in line with how I understood depth/positioning to be perceived.  I think the crux of my question lies in the bolded portion above.

 

Why is that?  Wouldn't the relative positioning of width and depth change if you played the same recording on two loudspeakers facing each other?  (or am I wrong there?)

 

I'm basically equating the headphone experience to the experience of having two loudspeakers facing each other.

 

post #13 of 41
It's basic common sense: the drivers of headphones sit right at the ears, while speakers have no such distance limitation. There are certain headphones voiced to sound closer to a speaker-like soundstage, equalized in anechoic or diffuse-field chambers using a dummy head; in simple terms, binaural equalization. They tend to have a more realistic soundstage than headphones that rely on special software DSPs or simulations, which usually give an artificial take on soundstage and imaging.

The problem with these headphones is that everyone perceives soundstage and imaging differently. I think it also has a lot to do with crosstalk. Speaker outputs always have separate grounds for left and right, which eliminates electrical crosstalk. A lot of the professional headphones that experimented with free-field and diffuse-field equalization used a balanced connector where each side had its own negative/ground conductor. I think that plays a part too, because even expensive headphones that claim audiophile standards still use a shared ground connector, which I don't get.
post #14 of 41
Thread Starter 

Quote:

Originally Posted by arnaud View Post

Nice post wapiti,
I thought about giving a shot at replying but your post is just neat...
To the thread starter: you should read up on human hearing / auditory cues, your graph is nice looking, but depth and width on an overhead chart does not equal depth and width as how ears interpret incoming sounds. It's all in the mix of direct and reflected sounds from the recording hall as well as sound diffraction across human head. Headphones actually have an advantage over speakers when fed with some form of binaural signal ( was listening to The Final Cut from Pink Floyd for the first time yesterday, what a shock!)


Understood.  As for the question I asked Wapiti, please take a crack at it.

 

And I'm glad everyone liked my drawing!

post #15 of 41
Thread Starter 

Quote:

Originally Posted by RexAeterna View Post

It's basic common sense: the drivers of headphones sit right at the ears, while speakers have no such distance limitation. There are certain headphones voiced to sound closer to a speaker-like soundstage, equalized in anechoic or diffuse-field chambers using a dummy head; in simple terms, binaural equalization. They tend to have a more realistic soundstage than headphones that rely on special software DSPs or simulations, which usually give an artificial take on soundstage and imaging.
The problem with these headphones is that everyone perceives soundstage and imaging differently. I think it also has a lot to do with crosstalk. Speaker outputs always have separate grounds for left and right, which eliminates electrical crosstalk. A lot of the professional headphones that experimented with free-field and diffuse-field equalization used a balanced connector where each side had its own negative/ground conductor. I think that plays a part too, because even expensive headphones that claim audiophile standards still use a shared ground connector, which I don't get.


Not sure I understand your post.  I'm specifically talking about non-binaural recordings here.  

 
