How soundstage depth works (or doesn't work) in headphones
Mar 18, 2010 at 6:01 PM Thread Starter Post #1 of 31

fjrabon

Headphoneus Supremus
Joined
Feb 1, 2009
Posts
3,996
Likes
1,119
I've seen a lot of posts where people state they don't really understand the concept of soundstage "depth". Here I will explain how the concept of soundstage depth works in general, why headphones have a problem with it, and their "imperfect" solutions to that problem.

The vast majority of my knowledge about these areas comes from Gary Davis and Ralph Jones' Sound Reinforcement Handbook, Second Edition, as produced for Yamaha. It's a FANTASTIC book for learning about how audio works. It's aimed at live sound reproduction, but it's so exhaustive that you can learn a TON about sound reproduction in general. I don't know of a comparable book for home audio, though I am sure there are some out there.

The fundamental stumbling block for headphones, with regard to all aspects of sound but especially soundstage depth, really isn't their ability or inability to reproduce sound in a quality manner. It comes from the fact that mastering is done for speakers, and headphones aren't speakers. This is a background fact to keep in mind when I talk about why things are the way they are. I will constantly refer to the "producer's intent". By that I mean what the producer/engineers/etc. intended the record to sound like; "producer" here just means the party in charge of producing the record, which isn't necessarily the credited Producer with a capital "P". In fact, a lot of people hate how producers want things to sound, but that's an entirely different topic; for the purposes of this discussion, the goal will be to get as close to the producer's intent as possible.

So, since producers generally master for speakers, let's first talk about how speakers create soundstage depth:

Soundstage depth, in general, arises from the fact that high-frequency sound waves are absorbed and dissipated more easily than low-frequency waves. This is why you can clearly and easily hear thunder from several miles away but couldn't hear a high-pitched siren clearly from even a quarter mile away, or why you can hear your neighbor's bass but not any of the words to the song. What your brain does is compare the amplitude of a sound's fundamental frequency against the amplitudes of that frequency's harmonics. Any tone is made up of a fundamental, which is the main tone, and harmonics, which are integer multiples of the fundamental's frequency. Because the higher frequencies are absorbed by the atmosphere faster than the lower ones, if there are a lot of high harmonics relative to the fundamental, your brain concludes that the sound is close by. These higher harmonics are often what is referred to as the presence frequencies. More presence frequencies = sounds closer to you.
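A toy calculation makes this cue concrete. Below is a deliberately crude model in which air absorption grows with the square of frequency and linearly with distance; the constant `k` is purely illustrative (real atmospheric absorption follows ISO 9613-1 and also depends on humidity and temperature):

```python
def air_loss_db(freq_hz, dist_m, k=0.05):
    """Very rough air absorption in dB: grows with frequency squared
    and with distance. k is an illustrative constant, not a measured
    atmospheric coefficient."""
    return k * (freq_hz / 1000.0) ** 2 * dist_m

def harmonic_rolloff(fundamental_hz, n_harmonics, dist_m):
    """Extra loss (dB) of each harmonic relative to the fundamental
    after travelling dist_m through air."""
    base = air_loss_db(fundamental_hz, dist_m)
    return [air_loss_db(fundamental_hz * n, dist_m) - base
            for n in range(1, n_harmonics + 1)]

# A 220 Hz tone heard near vs. far: the upper harmonics lose far more
# level over distance, which is the cue the brain reads as depth.
near = harmonic_rolloff(220, 8, dist_m=2)
far = harmonic_rolloff(220, 8, dist_m=50)
for n, (a, b) in enumerate(zip(near, far), start=1):
    print(f"harmonic {n}: -{a:.2f} dB extra at 2 m, -{b:.2f} dB extra at 50 m")
```

Even with made-up numbers, the pattern is the point: at 50 m the 8th harmonic falls much further behind the fundamental than it does at 2 m, so the far sound reads as "duller" and therefore more distant.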

So, speakers create soundstage depth from two sources. 1) What is in the recording. This has to do with microphone positioning (distant mic'ing sounds farther away) and EQ'ing (pulling back or boosting an instrument's presence frequencies can make it sound closer or farther away). Mic positioning is typically preferred to EQ'ing for creating soundstage depth: it's easier and doesn't have nearly as many complications. For instance, an instrument with a very large range, like a concert grand piano, is difficult to "move" with EQ, because the presence frequencies of its lowest registers are different from those of its highest. But if you record it with several mics at different distances, you can control the soundstage depth by mixing the relative levels of the closer and farther microphones. 2) The simple distance between the listener and the speaker. This is what gives speakers the feeling that sound isn't being pumped straight into your brain, that you are watching a performance. The distance between you and the speaker will knock some of the presence frequencies off just from the air.
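The multi-mic trick in 1) can be sketched as a simple crossfade. `blend_mics` is a hypothetical helper, not a real console feature, and it assumes the two mic signals are already time-aligned:

```python
import math

def blend_mics(close_sig, far_sig, depth):
    """Equal-power crossfade between a close-mic and a distant-mic
    recording of the same instrument. depth = 0.0 keeps only the close
    mic (instrument sounds near); depth = 1.0 keeps only the far mic
    (instrument recedes). Equal-power gains keep overall loudness
    roughly constant across the fade."""
    g_close = math.cos(depth * math.pi / 2)
    g_far = math.sin(depth * math.pi / 2)
    return [g_close * c + g_far * f for c, f in zip(close_sig, far_sig)]
```

Turning the `depth` knob re-weights the two mics, so the instrument's apparent distance is set at mix time rather than by filtering, which is exactly why this approach avoids the EQ complications described above.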

Now you can probably already see what the fundamental issue with headphones is going to be. Headphones, by definition, have the driver extremely close to your ear. There isn't much air to "knock down" those high frequencies, meaning a headphone that measured totally flat would sound insanely bright, like an icepick being shoved into your eardrum. While with speakers the goal is a totally flat frequency response, that simply is not possible with headphones; it wouldn't sound like what we are accustomed to hearing as flat. The problem is that we are trying to use a source mastered to be heard from a distance of several feet through headphones that are less than an inch from our ears.

Now this probably sounds grim, but headphones address this problem by rolling off high frequencies. Even a headphone that is considered VERY tilted towards the high frequencies, the AKG K701, has all frequencies above 8 kHz MASSIVELY rolled off.


However, this is still an imperfect solution, because it's hard, if not impossible, to perfectly replicate the natural atmospheric rolloff of air with a headphone's frequency response. It's imperfect in the same way that EQ'ing an instrument in the studio is an imperfect way to create soundstage depth: different instruments have different presence ranges, so a single transducer applying the same sort of "EQ setting" all the time can't make the correct adjustment for each of them. Some do a better job than others; some are quite outstanding, to the point of ALMOST sounding "speakerish" in this regard.
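To a first order, the atmospheric rolloff being approximated here is just a gentle low-pass filter. The sketch below applies a single one-pole low-pass to a signal; using one fixed cutoff for everything is precisely the "same EQ for every instrument" compromise described above (the cutoff value is illustrative):

```python
import math

def one_pole_lowpass(signal, cutoff_hz, fs=44100):
    """First-order low-pass filter: a crude stand-in for the treble
    rolloff that air produces over distance. Difference equation:
    y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    out, y = [], 0.0
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out
```

Low frequencies pass through almost untouched while content near the top of the band is strongly attenuated, mimicking what a few dozen feet of air does, but with none of air's humidity- and temperature-dependent character.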

So now, on to how some headphones create unrealistic soundstage depth. Some headphones aren't content to just approximate depth; they try to beat it in some way. They do this by boosting and pulling back certain presence frequencies. A notable example is the AD700: it pulls the vocal presence back a touch, letting vocals sit less forward, which creates a sense of depth. The AKG K701 goes in the opposite direction. It pushes the female vocal presence frequencies forward and pulls drum and bass presence backwards. This creates a deep-sounding soundstage for female vocal performances, because the vocals sound immediate, the drums and bass sound like they are in the back, and most other things sit in between. However, it can become discombobulating during a bass solo, because the bass refuses to move up front, or on a male/female duet, because the female vocal sounds much more forward than the male. It kind of sounds like Romeo and Juliet singing a duet during the balcony scene while you stand next to Juliet. This is what people mean when they say that the K701 has an artificially deep soundstage. It's not that the soundstage is "too deep", just that it's artificially, and at times oddly, deep.
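The boost/cut tuning described above can be illustrated with a standard RBJ-cookbook peaking EQ. The centre frequency, gain, and Q below are arbitrary examples, not the K701's actual measured response:

```python
import math
import cmath

def peaking_biquad(f0, gain_db, q, fs=44100):
    """RBJ Audio EQ Cookbook peaking-filter coefficients. A positive
    gain_db boosts a 'presence' band (pulling a sound forward); a
    negative gain_db cuts it (pushing the sound back)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def gain_at(f, b, a, fs=44100):
    """Magnitude response of the biquad, in dB, at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

b, a = peaking_biquad(3000, gain_db=4.0, q=1.0)  # push a presence band forward
print(gain_at(3000, b, a))  # full boost at the centre frequency
print(gain_at(100, b, a))   # roughly 0 dB far below the band
```

The trade-off the paragraph describes falls straight out of this: the boost only helps sounds whose presence range lands inside the filter's band, and everything else (a bass solo, a male vocal) is left where it was.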

It's my opinion that headphones just CAN'T accurately reproduce soundstage depth. The best a headphone can do is strive for a flat SOUND (as opposed to a flat reading on an FR chart); whatever soundstage depth that produces is the best it can do.

Now, please don't focus on the couple of headphones I mentioned here. They were used only because they are examples of headphones that I personally believe exhibit a certain quality, not because I like or dislike them. Hopefully this helps some of you understand what soundstage depth is and how it works.
 
Mar 18, 2010 at 6:16 PM Post #3 of 31
Quote:

Originally Posted by Acix
The studio engineer and the mastering engineer control the soundstage and the depth on the recording material by using pro studio tools. Here is more info: http://www.head-fi.org/forums/f4/k702-studio-393139/ Here is my music, on which you can test the soundstage and the depth on ANY headphones, or any speakers.



I'm well aware of those "pro studio tools"; they're all more or less very specialized EQ units. But talk to any of the great producers/engineers and they will tell you the best way to create realistic-sounding depth is still microphone placement.
 
Mar 18, 2010 at 6:18 PM Post #4 of 31
Thanks a lot man, this was very helpful.
I've been wondering what people meant by that for quite some time, so I'm glad someone took the time to explain it.
Another thing I learned from this forum
 
Mar 18, 2010 at 6:25 PM Post #5 of 31
Quote:

Originally Posted by Scrivs
Thanks a lot man, this was very helpful.
I've been wondering what people meant by that for quite some time, so I'm glad someone took the time to explain it.
Another thing I learned from this forum



Glad to help. It's very oversimplified at times, but I think it gets the basic idea and basic issues out there.
 
Mar 18, 2010 at 7:10 PM Post #7 of 31
Thanks for the overview on the topic. I wonder if my library has that book...
 
Mar 18, 2010 at 7:13 PM Post #8 of 31
Quote:

Originally Posted by KCChiefsfan
Thanks for the overview on the topic. I wonder if my library has that book...


That book doesn't talk much about headphones; it's about sound science in general. So if you want to learn about headphones in particular, it's not great, but if you want to learn about sound science and sound reproduction science, it's great.
 
Mar 18, 2010 at 7:37 PM Post #9 of 31
Quote:

Originally Posted by fjrabon
That book doesn't talk much about headphones; it's about sound science in general. So if you want to learn about headphones in particular, it's not great, but if you want to learn about sound science and sound reproduction science, it's great.


It would be nice to know a little more about sound science in general, although headphones are my main focus at the moment (since the neighbors wouldn't enjoy speakers blaring in the middle of the night).
 
Mar 18, 2010 at 7:59 PM Post #10 of 31
Quote:

Originally Posted by KCChiefsfan
It would be nice to know a little more about sound science in general, although headphones are my main focus at the moment (since the neighbors wouldn't enjoy speakers blaring in the middle of the night).



It's a great book, but I'd say 80% of it is going to be irrelevant to you then. The 20% that is relevant is very clear and concise, though. The entire half about getting the source sound through the microphones and properly mixed on the board will be only marginally relevant at best, and most of the other half is about setting up a live sound reproduction rig, which is obviously speaker-based.
 
Mar 18, 2010 at 11:21 PM Post #12 of 31
Quote:

Originally Posted by fjrabon
I'm well aware of those "pro studio tools"; they're all more or less very specialized EQ units. But talk to any of the great producers/engineers and they will tell you the best way to create realistic-sounding depth is still microphone placement.


No, the pro studio tools that help create the soundstage are reverb, pan, delay, echo, and stereo imaging, which can create dimensional layers of sound in the surrounding space. A stereo imager can pinpoint the location of a sound in the stereo image field. The specialized EQ units do not contribute to the soundstage; they're basically designed to sculpt the sound and to create better instrument separation. I appreciate what you're doing here, but if you don't have the right information to complete the picture, it can create confusion for others. I hope you understand.
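Two of the tools named here, panning and delay/echo, can be sketched in a few lines. `place_source` is a hypothetical toy, not how a real mix bus works; an actual mix would use proper reverb rather than a single echo tap:

```python
import math

def place_source(mono, pan, depth, fs=44100):
    """Toy placement of a mono signal in a stereo field: constant-power
    panning for left/right position, plus one attenuated echo (a crude
    early reflection) whose delay and level grow with depth.
    pan: -1.0 (hard left) .. +1.0 (hard right); depth: 0.0 .. 1.0."""
    theta = (pan + 1) * math.pi / 4            # map pan to 0..pi/2
    g_l, g_r = math.cos(theta), math.sin(theta)
    delay = int(fs * 0.005 * (1 + 4 * depth))  # 5-25 ms pre-delay
    wet = 0.5 * depth                          # deeper -> louder echo
    left, right = [], []
    for n, x in enumerate(mono):
        echo = mono[n - delay] if n >= delay else 0.0
        s = x + wet * echo
        left.append(g_l * s)
        right.append(g_r * s)
    return left, right
```

Panning sets where the sound sits left-to-right, while the delayed copy suggests a reflecting room boundary behind the source, which the ear reads as front-to-back distance.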
 
Mar 19, 2010 at 12:09 AM Post #13 of 31
Quote:

Originally Posted by Acix
No, the pro studio tools that help create the soundstage are reverb, pan, delay, echo, and stereo imaging, which can create dimensional layers of sound in the surrounding space. A stereo imager can pinpoint the location of a sound in the stereo image field. The specialized EQ units do not contribute to the soundstage; they're basically designed to sculpt the sound and to create better instrument separation. I appreciate what you're doing here, but if you don't have the right information to complete the picture, it can create confusion for others. I hope you understand.


Agreed 100%
 
Mar 19, 2010 at 12:09 AM Post #14 of 31
Quote:

Originally Posted by fjrabon
I'm well aware of those "pro studio tools"; they're all more or less very specialized EQ units. But talk to any of the great producers/engineers and they will tell you the best way to create realistic-sounding depth is still microphone placement.


fjrabon,
The above is absolutely correct. Your great short essay was also "right on". In fact, I liked your explanation so much that I would like to add it to my sig area, with your permission.
Microphone placement is becoming something of a lost art/lost skill. As I'm sure you know, much is produced non-acoustically today, which robs the listener of the natural soundstage even when played through speakers.
Some of the best microphone placement, and consequently soundstage creation, was done by the producers and engineers at Capitol Records in the mid-to-late 1950s and 1960s. A prime example is an album called "Only The Lonely", which featured Frank Sinatra with full orchestration. The microphone placement/soundstage on this album is superb.
 
Mar 19, 2010 at 12:19 AM Post #15 of 31
To me, the soundstage is better now than in the '50s and '60s. Even SQ in general is better; I cannot accept that we have gone backwards in music reproduction, as things sound so much better now than they did then. I agree with you, Acix: reverb, pan, echo, delay, etc.
 
