
Improving sound stage of full sized cans - Page 3

post #31 of 45

Someone recently brought an essay to my attention in another thread.  Anyone interested in soundstage as it applies to headphones (or "headstage," as the poster calls it) should read it.  Don't let the fact that it's titled as a review of a specific headphone put you off - it goes far beyond that, and the review actually takes a back seat to the analysis of headstage.  It is also quite long, so you might end up bookmarking it and finishing it later.  It's worth a read in the context of this thread specifically.

post #32 of 45
Quote:
Originally Posted by maverickronin View Post


Head tracking is more about keeping the illusion that you're listening to something that's really in front of you.

<snip>

1) Properly set crossfeed and EQ do give the illusion that the sound source is a pair of properly set up, high-quality stereo speakers. Then you can use any spatializer that works on stereo speakers (though the headphone-specific ones should be superior).

2) Music is recorded with no obstructions - you shouldn't attempt to simulate any...

3) Who would turn their head away from the music? What's the reason to do that?

4) Minor head movements produce next to no change in the sound, assuming the source is far enough away. With music playback we can assume that's true all the time (e.g. speakers 3 m away in a free field). If you're trying to simulate something exotic, like speakers right in your face, look for another solution.

 

So either you're sitting extremely near the screen (which means an EQ + crossfeed soundstage will sound too far away for a good illusion), or you suffer from Ménière's disease and throw up every time a car passes you. Or possibly you're trying to sell your technology.

 

Typical free "matrix decoders" only do panning; the better ones add some phase manipulation. They're not that good - it's simply not enough processing.

The proper approach is to copy the signal to every "virtual speaker" before mixing, akin to crossfeed, with equalization on every channel, and then mix - see the sketch at the end of this post.

Indeed, the Realizer is a very good simulator for this purpose, but it's inferior for stereo speaker simulation - it colors the sound far too much.
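
A minimal sketch of that pipeline, assuming four per-path impulse responses are already available (the names are placeholders; in practice they'd come from a measured or modelled HRIR set, with the per-channel equalization baked into each response):

```python
# Sketch only: render stereo through two "virtual speakers" by convolving each
# channel with a per-path impulse response, then mixing the results at each ear.
import numpy as np
from scipy.signal import fftconvolve

def virtualize(left, right, h_ll, h_lr, h_rl, h_rr):
    """h_xy = impulse response from virtual speaker x to ear y; it encodes the
    inter-aural delay, level difference and per-path equalization."""
    ear_l = fftconvolve(left, h_ll) + fftconvolve(right, h_rl)
    ear_r = fftconvolve(left, h_lr) + fftconvolve(right, h_rr)
    out = np.stack([ear_l, ear_r])
    return out / np.max(np.abs(out))  # crude normalization to avoid clipping
```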

post #33 of 45
Quote:
Originally Posted by jax View Post

Someone recently brought an essay to my attention in another thread.
That guy sure is in love with his own words!

The depth cues he talks about are largely a result of decisions made in the mix, and just about any headphone will represent them as well as it can represent any sound.

Soundstage with speakers is immeasurably more dimensional. I have heard mono recordings that had all of the depth cues he is talking about (using these techniques was common in the 78 rpm era), but on speakers the mono sound unfolds, using the space of the room to create a volumetric soundstage. With stereo, that depth gains left-to-right placement, making a clear stage in front of the listener that the music emanates from. 5.1 sound extends the stage out into the room to envelop the listener.

Mono is simple to arrange a listening room for. Stereo is considerably more difficult, and 5.1 can be maddeningly frustrating to balance because there is no standard for mixing. Each mix requires a bit of tweaking by the listener to focus properly in his room.

But headphones can never match the immersive quality of speakers. Speakers are a MUCH more natural presentation for music than putting cups over your ears.
post #34 of 45
Quote:
Originally Posted by AstralStorm View Post

Minor head movements produce nigh zero sound change assuming the source is far enough.
That isn't true. Blindfolded, you can pinpoint location even at great distance by simply turning your head a little.
post #35 of 45

@AstralStorm

 

Maybe you misunderstood me.  I'm not saying that kind of system needs head tracking to work.  I'm just saying it's a nice extra and it's not useless.


Edited by maverickronin - 7/19/11 at 4:29pm
post #36 of 45
Quote:
Originally Posted by bigshot View Post


That guy sure is in love with his own words!

The depth cues he talks about are largely a result of decisions made in the mix, and just about any headphone will represent them as well as it can represent any sound.

Soundstage with speakers is immeasurably more dimensional. I have heard mono recordings that had all of the depth cues he is talking about (using these techniques was common in the 78 rpm era), but on speakers the mono sound unfolds, using the space of the room to create a volumetric soundstage. With stereo, that depth gains left-to-right placement, making a clear stage in front of the listener that the music emanates from. 5.1 sound extends the stage out into the room to envelop the listener.

Mono is simple to arrange a listening room for. Stereo is considerably more difficult, and 5.1 can be maddeningly frustrating to balance because there is no standard for mixing. Each mix requires a bit of tweaking by the listener to focus properly in his room.

But headphones can never match the immersive quality of speakers. Speakers are a MUCH more natural presentation for music than putting cups over your ears.


I completely agree with you on all counts here except perhaps the first... er, I mean the second.  Though the cues may be in the mix, they will be reproduced differently by different transducers/systems, which may enhance or detract from those cues - at least in my experience.  Regarding your points on speakers, I could not agree more.  There is no comparison.  The vast majority of recordings are mixed on speakers and for speakers.  Nonetheless, his is an interesting tome on the concept of soundstage as it applies to headphone listening.

 

 

post #37 of 45
I'm not sure if this is correct, but I suspect that the biggest difference between speakers and headphones is the presentation of dynamics. Speakers put soft and loud out into the room pretty much the way it's recorded, but headphones seem to either boost the quietest stuff or swallow the dynamic peaks. Whichever way it works, the end result is that small details like the intakes of breath between the lyrics of a song are accentuated. It could also boost the spatial cues in the ring-offs.

I know that volume level in headphones can be deceiving. With cans, it's easy to get too loud. With speakers, the sound pressure slams you and the room starts to vibrate if you get too loud.
post #38 of 45
Quote:
Originally Posted by bigshot View Post


That isn't true. Blindfolded, you can pinpoint location even at great distance by simply turning your head a little.

I'd say the angular resolution of human hearing is a few degrees, and it does not improve with distance. It's worse for nearby sounds. Source: http://www.dspguide.com/ch22/1.htm

Changing head position can indeed improve it a few times over, but that's still not very good for faraway sources.

I'd hazard a guess of one degree of resolution for a highly trained listener in perfect conditions. At 3 meters that'd give ~5 cm laterally (quick check below).
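
The check is just basic trigonometry, nothing headphone-specific - with the assumed one degree of resolution:

```python
# Lateral offset resolvable at distance d for an assumed angular resolution.
import math

d = 3.0      # distance to the source, meters
theta = 1.0  # assumed angular resolution, degrees
print(f"{d * math.tan(math.radians(theta)) * 100:.1f} cm")  # ~5.2 cm
```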

Try it in a double-blind trial (e.g. ABX) some day - I bet you won't fare nearly as well. I'd love to see some papers on this topic.

 

It might be easier to pinpoint really diffuse sound sources due to their diffraction pattern - but you won't gain anything by using speakers; in fact, the diffuse room reflections will mess up that information.

An anechoic room would score highly in this regard, though most listeners would find it unnatural.

--

Quote:
Originally Posted by bigshot View Post

I'm not sure if this is correct, but I suspect that the biggest difference between speakers and headphones is the presentation of dynamics. Speakers put soft and loud out into the room pretty much the way it's recorded, but headphones seem to either boost the quietest stuff or swallow the dynamic peaks. Whichever way it works, the end result is that small details like the intakes of breath between the lyrics of a song are accentuated. It could also boost the spatial cues in the ring-offs.

I know that volume level in headphones can be deceiving. With cans, it's easy to get too loud. With speakers, the sound pressure slams you and the room starts to vibrate if you get too loud.

Not nearly as much as you'd think - the main issue with almost all headphones is an uneven frequency response, typically one that amplifies certain bands of high frequencies. Most quiet sounds fall in those bands, especially the ones you've mentioned.

Conversely, louder sounds tend to be low frequency, and many headphones are at least somewhat deficient there, either in level or in decay.

Equalization can help tremendously with both of these, thus improving localization a lot - a rough sketch of the kind of correction involved is below.
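
For example, a single parametric band built from the standard "audio EQ cookbook" peaking filter looks like this - the centre frequency, Q and gain are arbitrary placeholders, not a recommendation for any particular headphone:

```python
# One parametric EQ band (RBJ "audio EQ cookbook" peaking filter). A negative
# gain cuts an exaggerated treble band; the parameters below are placeholders.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, q, gain_db):
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# e.g. cut 6 dB around 8 kHz on 44.1 kHz material:
# y = peaking_eq(x, 44100, 8000, 2.0, -6.0)
```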

Another thing contributing to this effect is the lack of reverberation - the quiet sounds are less obscured.

 

I prefer to calibrate the loudness to speech - if it sounds like normal talking, it's neither overly loud nor too quiet (roughly a 40-60 dB range).


Edited by AstralStorm - 7/19/11 at 5:29pm
post #39 of 45
I would call one degree pretty doggone accurate. I wouldn't fire a gun based on it though....
post #40 of 45
Thread Starter 

Good God! I just wanted a yes or no: do headphones with transducers larger than the outer ear typically have better performance where soundstage is concerned?

post #41 of 45

there are seldom such simple relations in real-world transducer engineering - finding a web post that "corroborates" your initial prejudgment hardly constitutes engineering research

 

the discussion has brought out a lot about soundstage - pointing out that personal eq and a binaural source can give many good quality headphones superior soundstage - the SVS Realizer is an "existence proof" that approaching room/loudspeaker performance is possible

 

since most source material is produced on/for loudspeakers in a room, most headphone listening will benefit from processing - crossfeed circuits, plugins, or DSP like Dolby Headphone, up to the SVS system (a rough software crossfeed sketch follows)
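
a rough sketch of what the simplest of those do - feed a low-passed, attenuated, slightly delayed copy of each channel into the opposite ear; the cutoff/level/delay numbers are illustrative only, not any particular product's tuning:

```python
# Rough sketch of a basic crossfeed (not Dolby Headphone or the SVS algorithm):
# each ear hears its own channel plus a low-passed, attenuated, slightly
# delayed copy of the opposite channel. Parameter values are illustrative.
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs, cutoff=700.0, level_db=-6.0, delay_us=300.0):
    b, a = butter(1, cutoff / (fs / 2))       # gentle low-pass for the crossfed path
    gain = 10 ** (level_db / 20)
    delay = int(round(fs * delay_us * 1e-6))  # a few hundred microseconds ~ head width
    def bleed(x):
        return np.concatenate([np.zeros(delay), lfilter(b, a, x) * gain])[:len(x)]
    return left + bleed(right), right + bleed(left)
```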

 

very little is controlled by the headphone driver/ear geometry compared to having a processed or originally binaural signal going into your amp and thence to the headphone

 

 

the biggest prediction I would make for larger-area diaphragms is simply that, by needing less excursion for the same volume displacement, the low frequency distortion can be superior given similar "motor" limitations - quick numbers below
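
back-of-the-envelope only: for a fixed volume displacement the required excursion scales as 1/area (x = V / S), so with made-up round numbers:

```python
# For a fixed volume displacement V, peak excursion x = V / S falls as the
# diaphragm area S grows. Diameters and V are illustrative round numbers,
# not measurements of any real headphone.
import math

V = 50e-9                  # required volume displacement, m^3 (arbitrary)
for d_mm in (40, 50, 70):  # nominal diaphragm diameters, mm
    S = math.pi * (d_mm * 1e-3 / 2) ** 2
    print(f"{d_mm} mm driver: {V / S * 1e6:.0f} um peak excursion")
```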

 

larger diaphragms will have more modes and lower breakup frequencies for similar material and geometry, so higher frequency performance could in general be expected to be rougher

 

orthodynamic and electrostatic headphones claim to have less severe modal problems due to the uniform drive over the diaphragm area vs a typical dynamic voice coil "edge driven" design

post #42 of 45

I really wouldn't want a software-based crossfeed because I use OS X, and there are hardly any good software audio filters for it, it seems.

post #43 of 45

 

Quote:
Originally Posted by ffdpmaggot View Post

Good God! I just wanted a yes or no: do headphones with transducers larger than the outer ear typically have better performance where soundstage is concerned?

 

In the case of headphones, no. What matters more is that the headphone sounds flat and that the driver faces your ear directly - the angle between the driver plane and the plane of the ear should be as close to 0 degrees as possible without compromising the acoustic seal.

post #44 of 45
Thread Starter 

Thank you

post #45 of 45
Quote:
Originally Posted by AstralStorm View Post

 

 

In the case of headphones, no. What matters more is that the headphone sounds flat and that the driver faces your ear directly - the angle between the driver plane and the plane of the ear should be as close to 0 degrees as possible without compromising the acoustic seal.


Where's your source for that?  Which AES papers cover the head-related transfer function for sound propagating from headphones to and past the pinna?

 

Equalization and crossfeed alone can never make up for the difference in where the sound propagates from and how the waves propagate around the room (or not) and the head, even if you hit your magical 0 degree number (show me evidence, please - it's not that I disagree, but I want to know why, from a trusted source - and some random forum member on the internet is not one).  Anyway, to complicate things, the HRTF is different for every headphone/ear combination...
