How soundstage depth works (or doesn't work) in headphones
Mar 19, 2010 at 12:53 AM Post #16 of 31
Quote:

Originally Posted by Guidostrunk
To me the soundstage is better now than in the 50's and 60's. Even SQ in general is better; I cannot accept that we have gone backwards in music reproduction, as things sound so much better now than in the 50's and 60's. I agree with you, Acix: reverb, pan, echo, delay, etc.


Well..., there are better, more sensitive microphones and recording equipment today than there were in the 50's and 60's. I believe microphone placement for the purpose of creating a sound stage was used more frequently back then.
I'm referring to natural depth and layering here, which is relatively rare in much of today's pop recordings. It is very difficult to achieve this depth and layering unless all instruments are recorded simultaneously, a method that is extremely rare in pop recording today, whereas recording all instruments and vocals at once was common practice before the mid 1960's.
If there has ever been a recording produced with the techniques Acix wrote about that accurately replicates the natural sound stage created by the art and skill of correct microphone placement, I have not heard it, and I seriously (make that very seriously) doubt that such a recording exists.
 
Mar 19, 2010 at 1:47 AM Post #17 of 31
Quote:

Originally Posted by Peter Pinna
Well..., there are better, more sensitive microphones and recording equipment today than there were in the 50's and 60's. I believe microphone placement for the purpose of creating a sound stage was used more frequently back then.
I'm referring to natural depth and layering here, which is relatively rare in much of today's pop recordings. It is very difficult to achieve this depth and layering unless all instruments are recorded simultaneously, a method that is extremely rare in pop recording today, whereas recording all instruments and vocals at once was common practice before the mid 1960's.
If there has ever been a recording produced with the techniques Acix wrote about that accurately replicates the natural sound stage created by the art and skill of correct microphone placement, I have not heard it, and I seriously (make that very seriously) doubt that such a recording exists.



Peter Pinna, you honestly believe that bands like The Beatles were recorded only by microphone placement?
 
Mar 19, 2010 at 2:28 AM Post #18 of 31
Quote:

Originally Posted by Acix
No, the pro studio tools that help create the sound stage are reverb, pan, delay, echo, and stereo imagers, which can create dimensional layers of sound in the surrounding space. The stereo imager can pinpoint the location of a sound in the sound image field. The specialized EQ units do not contribute to the sound stage; they're basically designed to sculpt the sound and to create better instrument separation. I appreciate what you're doing here, but if you don't have the right information to complete your picture, it can create confusion for others. I hope you understand.


Reverb and delay don't create depth as far as positioning goes; they create room depth, i.e. they make the space the performers are performing in sound bigger, but they don't move the performers around. Pan creates soundstage width, not depth.
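As a rough illustration of the width-versus-depth distinction (a minimal sketch assuming a standard constant-power pan law, not any particular console's implementation), panning only rebalances the left/right gains of a source; nothing in it pushes the source forward or back:

```python
import math

def constant_power_pan(sample, pan):
    """Place a mono sample in the stereo field.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    The sin/cos (constant-power) law keeps perceived loudness roughly
    even as the source slides across the stage.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Center pan: both channels at about 0.707 (-3 dB). This is purely
# left/right placement -- there is no front/back information in it.
left, right = constant_power_pan(1.0, 0.0)
```

Sweeping `pan` from -1 to +1 slides the source across the stage at roughly constant loudness, which is exactly width with no depth component.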
 
Mar 19, 2010 at 2:35 AM Post #19 of 31
Also, Acix, a stereo imager is a very specialized EQ unit. It takes a source signal and EQs it to sound like it's coming from a specific distance by cutting or boosting presence frequencies differently in the left and right channels. They're still not great for natural-sounding music, though they're quite acceptable for electronic music, which doesn't have the problem of sounding unnatural, because it doesn't sound natural to begin with.

I've worked with some pretty up-there producers (Don Dixon and Pete Anderson, briefly) and engineers, and they still prefer mic placement when they want to create soundstage depth. There is nothing that emulates distance from a microphone like actual distance from a microphone.
 
Mar 19, 2010 at 3:14 AM Post #20 of 31
Quote:

Originally Posted by Peter Pinna
If there has ever been a recording produced with the techniques Acix wrote about that accurately replicates the natural sound stage created by the art and skill of correct microphone placement, I have not heard it, and I seriously (make that very seriously) doubt that such a recording exists.


The units he's talking about are used a lot in electronic music. But electronic music doesn't have to sound natural. The places where soundstage is most used in a natural way in today's music are still classical and large-group jazz, and they still create soundstage with mic placement. It's also used in large-group bluegrass, again almost exclusively with mic placement.

Non-electronic pop and rock generally have a two-depth soundstage: vocals and lead instruments up front, and bass, drums, and other rhythm instruments in back. That's mostly done with EQ: you pull a little of the presence frequencies of those instruments out when you're mixing, so they sound a little bit behind the vocalist and lead instruments. But it's not a very deep soundstage at all; it's not supposed to be.
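The presence-pulling move described above can be sketched with a standard peaking-EQ biquad (this uses the well-known RBJ Audio EQ Cookbook formulas; the 3 kHz center, -3 dB depth, and Q of 1 are illustrative values, not anyone's actual mix settings):

```python
import cmath
import math

def peaking_eq(fs, f0, gain_db, q):
    """Normalized biquad coefficients (b, a) for a peaking EQ, per the
    RBJ Audio EQ Cookbook. Negative gain_db dips the band around f0."""
    big_a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * big_a, -2.0 * math.cos(w0), 1.0 - alpha * big_a]
    a = [1.0 + alpha / big_a, -2.0 * math.cos(w0), 1.0 - alpha / big_a]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def magnitude_db(b, a, fs, f):
    """Filter magnitude response in dB at frequency f."""
    z1 = cmath.exp(-2j * math.pi * f / fs)  # z^-1 on the unit circle
    num = b[0] + b[1] * z1 + b[2] * z1 ** 2
    den = a[0] + a[1] * z1 + a[2] * z1 ** 2
    return 20.0 * math.log10(abs(num / den))

# A gentle -3 dB dip around 3 kHz: the "pull a little presence out"
# move that seats a rhythm instrument slightly behind the vocal.
b, a = peaking_eq(44100, 3000, -3.0, 1.0)
```

At the center frequency the dip is exactly the requested gain, while the response stays near 0 dB well away from it; a couple of dB is usually plenty, since the goal is "slightly behind," not "across the room."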
 
Mar 19, 2010 at 3:26 AM Post #21 of 31
Quote:

Originally Posted by Peter Pinna
fjrabon,
The above is absolutely correct. Your great short essay was also "right on". In fact, I liked your explanation so much that I would like to add it to my sig area, with your permission.
Microphone placement is becoming something of a lost art/lost skill. As I'm sure you know, much is produced non-acoustically today, which robs the listener of the natural sound stage even when played through speakers.
Some of the best microphone placement, and consequently sound stage creation, was done by the producers and engineers at Capitol Records in the mid-to-late 1950's and 1960's. A very prime example of this is an album called "Only The Lonely," which featured Frank Sinatra with full orchestration. The microphone placement/sound stage on this album is superb.



Sure, you're more than welcome to link to it, though I wouldn't say it's in-depth or exhaustive enough to warrant something like that. And I'd also be wary of it being taken out of context. That's already started to happen to a certain extent, when people started mentioning things that have to do with soundstage width when I was just talking about soundstage depth. But sure, if you like it, feel free to link to it in your sig.
 
Mar 19, 2010 at 5:26 AM Post #22 of 31
Quote:

Originally Posted by Acix
Peter Pinna, you honestly believe that bands like The Beatles were recorded only by microphone placement?


No, I don't. Actually, it would depend on the age of the recording by the Beatles. I am no expert on the history of the Beatles but, to my understanding, some of their early recordings were made using the techniques I wrote about (based on microphone placement), while other recordings in later years introduced some techniques more similar to the ones you wrote about. Among these techniques was over-dubbing, which had previously been done by only one other musician, the guitarist Les Paul.
 
Mar 19, 2010 at 6:03 AM Post #23 of 31
Quote:

Originally Posted by fjrabon
The units he's talking about are used a lot in electronic music. But electronic music doesn't have to sound natural. The places where soundstage is most used in a natural way in today's music are still classical and large-group jazz, and they still create soundstage with mic placement. It's also used in large-group bluegrass, again almost exclusively with mic placement.

Non-electronic pop and rock generally have a two-depth soundstage: vocals and lead instruments up front, and bass, drums, and other rhythm instruments in back. That's mostly done with EQ: you pull a little of the presence frequencies of those instruments out when you're mixing, so they sound a little bit behind the vocalist and lead instruments. But it's not a very deep soundstage at all; it's not supposed to be.



I know about this because I have been involved in the recording and production of some Jazz recordings and to a lesser extent, Classical. It is a much more natural approach than that which is involved in the production of current poop recordings. (Ooops, did I make a typographical error or a Freudian "slip"?) Everything is recorded at once, and to a great extent, what you hear being recorded is what you will hear from the playback. And, the engineers really have to be "on top of their game".

Acix, imagine yourself recording an orchestra and a vocalist. The orchestra includes a string section. What is required of you is to record the vocalist and the orchestra simultaneously, with no over-dubbing or "punching in". Would you be able to handle it, and have you ever heard of this being done? This is what recording engineers had to do years ago. Many jazz and classical artists, to this day, insist on recording this way.

Quote:

Originally Posted by fjrabon
Sure, you're more than welcome to link to it, though I wouldn't say it's in-depth or exhaustive enough to warrant something like that. And I'd also be wary of it being taken out of context. That's already started to happen to a certain extent, when people started mentioning things that have to do with soundstage width when I was just talking about soundstage depth. But sure, if you like it, feel free to link to it in your sig.



I understand what you mean about your essay being taken out of context. I can't begin to tell you how many times, right here on Head-Fi, I've been misunderstood because something I wrote (or didn't write, as the case may be) was either misquoted or taken out of context. What would you think about my linking your essay along with the Head-wise article? I have attempted numerous times to explain about headphones sounding as if they have a "built in" neutral EQ versus the idea of headphones actually having a "built in" neutral (a.k.a. "flat") EQ (which isn't a particularly good thing). Your essay accomplished a succinct explanation of the difference between these two ideas quite well and that is one of the reasons I want to link to it and the Head-wise article.

By the way, if you are taken out of context, misquoted or accused of writing something you never wrote, it is from those types of experiences that I say to you, welcome to Head-Fi!
 
Mar 19, 2010 at 1:11 PM Post #24 of 31
Quote:

Originally Posted by Peter Pinna
I know about this because I have been involved in the recording and production of some Jazz recordings and to a lesser extent, Classical. It is a much more natural approach than that which is involved in the production of current poop recordings.

Your essay accomplished a succinct explanation of the difference between these two ideas quite well and that is one of the reasons I want to link to it and the Head-wise article.

By the way, if you are taken out of context, misquoted or accused of writing something you never wrote, it is from those types of experiences that I say to you, welcome to Head-Fi!



The thing with pop is that some producers take a rock based production approach, and some take an electronic music production approach, so it's hard to really say that pop is done a single way.

Thanks for the compliments, and feel free to link. And yeah, I know what you mean, being misquoted and taken out of context should be as much of a part of the "welcome to head-fi" as "sorry about your wallet" is.
 
Dec 14, 2011 at 10:41 AM Post #25 of 31
 
Thanks for the write-up.
 
The only part I don't quite follow is the bit about headphones sounding like an ice-pick if they had a flat FR; I'm not sure if this is theory or fact.
 
The Etymotic ER-4S, for instance, has a spike around 9 or 10 kHz, the Sony Z1000 has a huge spike at 10 kHz, and the latest $900 Shure flagship, the SRH1840, looks quite dead-flat until some random spikes in the treble. Ice-pick material?
 
 
Dec 14, 2011 at 11:14 AM Post #26 of 31
I think what the OP meant by flat FR is flat on paper, like a ruler-flat FR curve on a graph. I don't think any headphones have a ruler-flat FR on paper above 500 Hz.
 
Quote:
Originally Posted by kiteki
 
The only part I don't quite follow is the bit about headphones sounding like an ice-pick if they had a flat FR; I'm not sure if this is theory or fact.
 
The Etymotic ER-4S, for instance, has a spike around 9 or 10 kHz, the Sony Z1000 has a huge spike at 10 kHz, and the latest $900 Shure flagship, the SRH1840, looks quite dead-flat until some random spikes in the treble. Ice-pick material?
 


 
 
 
Dec 14, 2011 at 11:36 AM Post #27 of 31
Quote:
I think what the OP meant by flat FR is flat on paper, like a ruler-flat FR curve on a graph. I don't think any headphones have a ruler-flat FR on paper above 500 Hz.
 



Here is one of Shure's latest headphones; it will cost over $400, and it comes from a company with a long history of manufacturing microphones and recording equipment.
 
 
As you can see, the FR just keeps going up and up, so this is worse than dead-flat. The FR is from Shure, measured on an expensive dummy head.
 

 
 
 
Dec 18, 2011 at 7:03 AM Post #28 of 31


Quote:
Quote:
I think what the OP meant by flat FR is flat on paper, like a ruler-flat FR curve on a graph. I don't think any headphones have a ruler-flat FR on paper above 500 Hz.
 



Here is one of Shure's latest headphones; it will cost over $400, and it comes from a company with a long history of manufacturing microphones and recording equipment.
 
 
As you can see, the FR just keeps going up and up, so this is worse than dead-flat. The FR is from Shure, measured on an expensive dummy head.
 

 
 

 
That is certainly an uncompensated graph. The graphs you see on HeadRoom are compensated for an average (?) HRTF, that is, the average frequency response of a person's ears, which rolls off the treble considerably.
 
Interesting write-up about harmonics though. Something to note is that I believe that Tyll stated his measurements of headphones (which make up the HeadRoom graphs) aren't accurate above 10 kHz.
 
 
Dec 14, 2015 at 10:29 PM Post #29 of 31
Sorry to revive this ancient thread, but I have a relevant question and I think this is the best place to ask it.
 
The following points seem somewhat contradictory to me:
 
Soundstage depth, in general, is created by the fact that high frequency sound waves are absorbed and dissipated more easily than low frequency waves are.

 
Quote:
Typically mic positioning is preferred to EQ'ing for creating soundstage depth. It's easier and doesn't have nearly as many complications. For instance, an instrument with a very large range, like a concert grand piano, is difficult to "move" with EQ, because the presence frequencies of its lowest registers are different from those of its highest.
 


The first point seems to imply that the level of atmospheric attenuation is only frequency dependent. In other words, the fundamentals of higher registers will be naturally attenuated in relation to the fundamentals of lower registers. Wouldn't it also follow that the harmonics/presence frequencies of higher registers will be naturally attenuated more than lower-register presence frequencies?
 
I don't understand why you can't just apply some simple equalization to emulate depth. The EQ curve should be easy to figure out by sampling some white noise at whatever distance you want to emulate. How are the harmonics even relevant if the desired attenuation is simply frequency dependent?
 
I think I can understand why you'd have to record each instrument separately if you want to adjust the depth of the instruments in relation to each other, but I don't understand why simply EQing wouldn't work.
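The frequency-dependent attenuation the question assumes can be sketched with a toy air-absorption model (the coefficient below is a made-up round number for illustration, not an ISO 9613-1 value; real absorption also depends on temperature and humidity, and distance perception additionally involves the direct-to-reverberant balance, which a static EQ curve doesn't touch):

```python
# Toy model: classical air absorption grows roughly with frequency
# squared, so high frequencies lose more energy per metre travelled.
# ALPHA is a made-up round coefficient for illustration only.
ALPHA_DB_PER_M_AT_1KHZ = 0.005

def air_absorption_db(freq_hz, distance_m):
    """Extra loss (dB) a tone at freq_hz suffers over distance_m."""
    return ALPHA_DB_PER_M_AT_1KHZ * (freq_hz / 1000.0) ** 2 * distance_m

def distance_eq_curve(distance_m, freqs_hz):
    """EQ gains (dB) that would emulate moving a close-miked source back."""
    return [-air_absorption_db(f, distance_m) for f in freqs_hz]

# "Moving" a source 20 m back costs 10 kHz far more than 100 Hz,
# which is the frequency-dependent rolloff the question describes.
curve = distance_eq_curve(20.0, [100, 1000, 10000])
```

Under this model the white-noise-sampling idea in the question would indeed recover a single curve per distance, since the attenuation depends only on frequency and distance, not on which instrument produced the energy.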



 
Dec 15, 2015 at 12:53 AM Post #30 of 31
  Sorry to revive this ancient thread, but I have a relevant question and I think this is the best place to ask it.
 
The following points seem somewhat contradictory to me:
 
 
The first point seems to imply that the level of atmospheric attenuation is only frequency dependent. In other words, the fundamentals of higher registers will be naturally attenuated in relation to the fundamentals of lower registers. Wouldn't it also follow that the harmonics/presence frequencies of higher registers will be naturally attenuated more than lower-register presence frequencies?
 
I don't understand why you can't just apply some simple equalization to emulate depth. The EQ curve should be easy to figure out by sampling some white noise at whatever distance you want to emulate. How are the harmonics even relevant if the desired attenuation is simply frequency dependent?
 
I think I can understand why you'd have to record each instrument separately if you want to adjust the depth of the instruments in relation to each other, but I don't understand why simply EQing wouldn't work.


 
Maybe because you can't EQ one key on a piano differently from another when they're both playing at nearly the same time.
 
