Soundstage Width and Cross-feed: Some Observations
Jan 21, 2018 at 3:50 PM Post #151 of 241
2. There were few sound engineering courses before the 1990s. Most sound engineers had degrees in electronics or some other field, or no degree at all, and learned through the apprenticeship model of education: stereotypically starting as the "tea boy" and working their way up to recording or mix engineer over the course of several years (mastering engineers typically took longer).

2a. What happened? Stereo, EQ and compression in the 1940s; tape (tape effects and splicing, multi-track tape); synths, overdubbing and multi-tracking; echo chambers, plate and spring reverbs; digital effects in the late 1970s; multi-effects units, samplers and then software plugins in the 1990s, to name just a few off the top of my head.

3. Why? I've stated it a number of times!

2. So, am I accused of not taking courses that didn't really even exist? My university years happened in the 1990s. I always thought sound engineers learned their craft by doing the work, starting as an aide and getting more demanding tasks and more responsibility with experience.

2a. How do innovations in audio technology make the science of acoustics inapplicable to sound engineering? EQs and compressors don't remove the laws of physics, and software plugins don't change the way hearing works. Hi-res audio didn't extend human hearing beyond 20 kHz. No matter what effects you use, all of it is turned into physical soundwaves to be heard by someone. I am asking so I can learn things I apparently don't know.

3. Why? Because it's shocking to me. I'm done with the denial phase and the acceptance phase is kicking in hard! That's why.

1. Yes, we finally arrived at you not saying "it's scientific" after numerous pages of you arguing that it was scientific!
2. I didn't say all your conclusions contradict the facts, just some/many of the key ones. I don't know for sure how you do it; I assume you just make them up, and therefore there's a high probability they'll be wrong and be exactly opposite to the actual facts about 50% of the time.
2a. Natural acoustics can and frequently do create sharp patterns, standing waves causing very obvious sharp boosts or cancellations for example. And, what a plugin produces is highly customisable/alterable.
2aa. It is wrong because although panning is relatively simple, it's never used in isolation! There are very few exceptions to this statement, and virtually all of them are from before (or well before) the 1970s. Did you not read my post listing some of them?
2b. Then why make the incorrect statement of fact in the first place and then why argue that "I know Lt/Rt matrix thank you." when your "fact" was refuted?

G
1. Yes. I thought I knew better than others, and that's why others kept arguing with my claims, so I called others ignorant, until lately I realized it's actually me who lacks knowledge of the subject, that sound engineers have some strange knowledge unknown to me and that's why they disagree. Needless to say, I am very confused at this point and I need to think about things a lot. I have believed my science is correct. I need to study how it is wrong and why!

2. Okay. I don't feel like I'm making these things up out of thin air. I feel it's based on my education and working experience. That's why it's surprising to hear it's so often wrong.

2a. I think standing waves don't play a big role in encoded rear channels because of bandlimiting, but maybe I am wrong. Even if there are anomalies, natural acoustics tend to be pretty stable, while studio effects might be dynamic, changing in time, and that's one source of funny things to happen.
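
For reference, the standing waves gregorio mentioned are easy to locate: axial room modes sit at f(n) = n·c/(2L) per room dimension. A minimal sketch (Python); the room dimensions are made-up example values, not any measured room:

```python
# Axial room-mode frequencies: f(n) = n * c / (2 * L) for each dimension.
# The dimensions below are made-up example values, not any measured room.
C = 343.0  # speed of sound in air (m/s) at roughly 20 degrees C

def axial_modes(length_m, count=5):
    """Return the first `count` axial standing-wave frequencies (Hz)."""
    return [n * C / (2.0 * length_m) for n in range(1, count + 1)]

for name, dim_m in [("length", 6.0), ("width", 4.0), ("height", 2.5)]:
    freqs = ", ".join(f"{f:.0f}" for f in axial_modes(dim_m))
    print(f"{name} {dim_m} m: {freqs} Hz")
```

Each of those frequencies is a sharp, position-dependent boost or cancellation, and the lowest modes of ordinary rooms all sit well below 1 kHz.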

2aa. Never? How come we have tons of ping-pong stereo recordings from the late '50s and early '60s?

2b. It's possible I knew the details of Lt/Rt better years ago and have forgotten since. I knew something about it, so I was half-right, half-wrong I guess. The point is that testing between Lt/Rt and Lo/Ro made me prefer Lt/Rt. It sounds like it carries more spatial information, and I think it works better with crossfeed.
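
For anyone who wants to see what I'm comparing, a minimal sketch of the two downmixes: the 0.7071 coefficients are the commonly published ones, the 90° surround phase shift is approximated with a Hilbert transform, and the simple LCRS layout is assumed, so treat this as an illustration rather than any decoder's exact specification.

```python
import numpy as np
from scipy.signal import hilbert

G = 0.7071  # the commonly published -3 dB fold-down coefficient

def downmix_lo_ro(L, C, R, S):
    """Plain stereo fold-down: centre and surround mixed in at -3 dB."""
    return L + G * C + G * S, R + G * C + G * S

def downmix_lt_rt(L, C, R, S):
    """Matrix-encoded fold-down: the surround is phase shifted ~90 degrees
    and added to the two channels in opposite polarity, so it ends up in
    the L-R (difference) signal, where a decoder can recover it."""
    s90 = np.imag(hilbert(S))  # approximate 90-degree phase shift
    return L + G * C - G * s90, R + G * C + G * s90
```

The surround ending up entirely in the L-R difference signal is, I assume, why Lt/Rt sounds to me like it carries more spatial information under crossfeed.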
 
Jan 21, 2018 at 3:51 PM Post #152 of 241
I suggest a walk in the sunshine
 
Jan 21, 2018 at 3:59 PM Post #153 of 241
71dB, please try and talk about the subject without trying to focus the conversation back to you. Be part of a group.

I'll try, but it is difficult with my inapplicable knowledge. It's difficult to be part of a group of sound engineers when you are not one yourself.
 
Jan 21, 2018 at 4:08 PM Post #154 of 241
I suggest a walk in the sunshine

We have about 7 hours of "daylight" at the moment in Helsinki, Finland, and the Sun doesn't rise very high; it lingers near the horizon. People don't realize how dark a place Finland is in the winter before they come here and see it with their own eyes. Helsinki is in southern Finland; northern Finland, especially Lapland, is even darker! In the summer it's the other way around: it doesn't get dark even during the night.
 
Jan 21, 2018 at 4:55 PM Post #155 of 241
Star Wars was Dolby Stereo 4.0, like Lisztomania (which was limited release in that format).
I think we are splitting hairs, but as long as we still have hair to split...Correct on the 35mm optical prints, but the 70mm mag was 4.1 (correct, @gregorio)
I think the first wide release use of that was Streisand's A Star Is Born. I think you're right that Apocalypse Now was the first true 5.1, although they did a limited release test of it with Superman.
Apparently Apocalypse was first to start but Superman beat it to release. I heard one of the few split-surround 70mm prints during initial release and confirmed with an industry contact that it was experimental split surround.
There were earlier releases in various multichannel configurations going all the way back to Fantasia, but those weren't quite the same layout as standard 5.1
Five screen channels and mono surround was fairly common for 70mm mag releases dating back probably 15 years before Dolby made it to the cinema (thank you, Ioan Allen!).
 
Jan 22, 2018 at 3:56 AM Post #156 of 241
[1] How do innovations in audio technology make the science of acoustics inapplicable to sound engineering? [2] EQs and compressors don't remove the laws of physics, and [2a] software plugins don't change the way hearing works. [2b] Hi-res audio didn't extend human hearing beyond 20 kHz. [2c] No matter what effects you use, all of it is turned into physical soundwaves to be heard by someone. [3] I am asking so I can learn things I apparently don't know.

1. This has already been explained to you several times. For example, if we use several different mics, widely spaced, and mix them together, we get a mixture of early reflection times and directions (dependent on each mic's position relative to reflective surfaces) which is impossible for a human being to experience. Or, if we record one instrument in a large room, another in say a toilet, and then mix them together, we have a mixture of acoustics which cannot possibly exist in the real world. These two examples (and there are many more) cover nearly all recordings, acoustic/classical and studio/popular music, from around the 1960s onwards.

2. Of course they do. One of the earliest scientific principles we have is that of a fundamental frequency, a harmonic series and a mathematical relationship between them. With EQ we could, for example, remove the fundamental and keep just the rest of the harmonic series, a situation which cannot exist in the real world. A compressor could, for example, remove the transient from a struck instrument (or sound) which again is an impossibility in the real world. These are just two of many examples. One further example (which combines this and the previous point): we can, and very commonly do, EQ reverb (the reflections/acoustics), causing a significant difference to the acoustic information which actually existed (or could exist in a given acoustic space).
2a. So using a crossfeed plugin doesn't change what we hear?
2b. No, a higher sample frequency doesn't extend human hearing beyond 20kHz, but under certain circumstances it can (and does) change what we hear within the hearing spectrum (below 20kHz).
2c. That is not necessarily true: in the example above (of removing the fundamental with EQ), or that of employing a low or high pass filter, the effect itself results in the deliberate absence of a physical sound wave; the only physical soundwave remaining is what has not been processed (effected). And even if we remove the fundamental (or some other soundwaves/notes), it is still entirely possible to perceive it (a well-used principle/technique, dating back well over 400 years)! How we hear is not linear (either in terms of frequency/pitch or amplitude/loudness), and our perception of sound/music has even less correlation to the actual soundwaves (sometimes none at all) and can therefore be relatively easily fooled/deluded. All of these principles are heavily relied upon in commercial music (and sound) creation.
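
(If you want to hear point 2c for yourself, here is a minimal sketch that synthesises a harmonic series with no fundamental; the 220 Hz fundamental and harmonics 2 through 8 are arbitrary example choices:)

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
t = np.arange(SR * 2) / SR  # two seconds of audio

# Harmonic series of a 220 Hz fundamental, but WITHOUT the fundamental:
# only harmonics 2 through 8, at 1/n amplitudes.
tone = sum(np.sin(2 * np.pi * 220.0 * n * t) / n for n in range(2, 9))
tone /= np.max(np.abs(tone))  # normalise to full scale

wavfile.write("missing_fundamental.wav", SR, (tone * 32767).astype(np.int16))
# No physical energy exists at 220 Hz, yet most listeners still report
# the pitch of this tone as A3 (220 Hz): perception parting company
# with the actual soundwave.
```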

3. It's not entirely clear that you are. What you have asked could very easily be construed as sarcasm, especially as you've employed that tactic previously, in which case most of the above is a wasted effort.

1. Even if there are anomalies, natural acoustics tend to be pretty stable, while studio effects might be dynamic, changing in time, and that's one source of funny things to happen.
2. Never? How come we have tons of ping-pong stereo recordings from the late '50s and early '60s?

1. Agreed, but then this statement completely contradicts your previous conclusion/assertion that with acoustic classical music "the spatial information is complex and "random", while "studio music" has problems, because the spatial effects are largely simpler". Additionally, causing "funny things to happen" has been a fundamental tenet of art for 150 years or more!
2. I specifically mentioned the exception of some early stereo recordings; did you not read the post to which you are responding? The recordings of the late '50s and early '60s represent a minuscule amount of all the recordings made since then, and of that minuscule amount relatively few employed ONLY ping-pong panning and no other simultaneous spatial effect.

G
 
Jan 22, 2018 at 6:38 AM Post #157 of 241
1. This has already been explained to you several times. For example, if we use several different mics, widely spaced, and mix them together, we get a mixture of early reflection times and directions (dependent on each mic's position relative to reflective surfaces) which is impossible for a human being to experience. Or, if we record one instrument in a large room, another in say a toilet, and then mix them together, we have a mixture of acoustics which cannot possibly exist in the real world. These two examples (and there are many more) cover nearly all recordings, acoustic/classical and studio/popular music, from around the 1960s onwards.

2. Of course they do. One of the earliest scientific principles we have is that of a fundamental frequency, a harmonic series and a mathematical relationship between them. With EQ we could, for example, remove the fundamental and keep just the rest of the harmonic series, a situation which cannot exist in the real world. A compressor could, for example, remove the transient from a struck instrument (or sound) which again is an impossibility in the real world. These are just two of many examples. One further example (which combines this and the previous point): we can, and very commonly do, EQ reverb (the reflections/acoustics), causing a significant difference to the acoustic information which actually existed (or could exist in a given acoustic space).
2a. So using a crossfeed plugin doesn't change what we hear?
2b. No, a higher sample frequency doesn't extend human hearing beyond 20kHz, but under certain circumstances it can (and does) change what we hear within the hearing spectrum (below 20kHz).
2c. That is not necessarily true: in the example above (of removing the fundamental with EQ), or that of employing a low or high pass filter, the effect itself results in the deliberate absence of a physical sound wave; the only physical soundwave remaining is what has not been processed (effected). And even if we remove the fundamental (or some other soundwaves/notes), it is still entirely possible to perceive it (a well-used principle/technique, dating back well over 400 years)! How we hear is not linear (either in terms of frequency/pitch or amplitude/loudness), and our perception of sound/music has even less correlation to the actual soundwaves (sometimes none at all) and can therefore be relatively easily fooled/deluded. All of these principles are heavily relied upon in commercial music (and sound) creation.

3. It's not entirely clear that you are. What you have asked could very easily be construed as sarcasm, especially as you've employed that tactic previously, in which case most of the above is a wasted effort.

1. Perhaps, but I am slowly beginning to understand the nature of our disagreements and differences of view now (finally!). The difference is, I think we are still in the "acoustic world" when using widely spaced mics etc. Every single mic works under the laws of physics. Even if the result is impossible for humans to experience, it still follows the laws of acoustics. For example, the way the signals between widely spaced mics correlate is almost completely dictated by the acoustics of the room. Mixing large and small rooms together brings new elements to the table, but I don't think it means we drop the science of acoustics. There are limits to what kinds of sound waves are possible in the real world, but mics don't capture sound waves as such. They capture pressure changes at a fixed point in space, unless the mic is moving.
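
(That correlation is measurable, by the way; a minimal sketch, where "spaced_pair.wav" is a placeholder name for any stereo spaced-pair recording:)

```python
import numpy as np
from scipy.io import wavfile

# "spaced_pair.wav" is a placeholder for any stereo spaced-pair recording.
sr, x = wavfile.read("spaced_pair.wav")
left = x[:, 0].astype(float)
right = x[:, 1].astype(float)

# Normalised interchannel correlation: +1 is mono-like; values near 0 are
# what diffuse, widely spaced room pickup tends toward.
r = np.corrcoef(left, right)[0, 1]
print(f"interchannel correlation: {r:+.3f}")
```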

2.-2c. Ears hear pressure changes in air, and no matter how "crazy" the music you create digitally in a DAW, it all has to be "acoustified" for the ears, and the acoustic environment will immediately modify the sound under the laws of acoustics. Your "crazy" music gets coloured by pinna effects and the 1/4-wavelength resonance inside the ear canal even with over-ear headphones, not to mention listening with speakers. You can't avoid the laws of acoustics in the process, even if you generate "not from the real world" effects with a DAW.

Crossfeed, or any other plugin, changes what we hear, of course, but not how we hear.
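
For readers who haven't seen one, a basic crossfeed is only a few lines; this is a minimal sketch with the usual ingredients (a low-passed, attenuated, slightly delayed feed of each channel into the other), and all the parameter values are illustrative defaults rather than any published design:

```python
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, sr, cutoff_hz=700.0, atten_db=-8.0, delay_us=300.0):
    """Feed a low-passed, attenuated, slightly delayed copy of each channel
    into the other, mimicking the head-shadowed acoustic path that speakers
    provide and headphones lack."""
    b, a = butter(2, cutoff_hz / (sr / 2))       # low-pass for the feed
    g = 10.0 ** (atten_db / 20.0)                # linear feed gain
    d = max(1, int(sr * delay_us / 1e6))         # interaural-style delay
    feed_l = np.roll(g * lfilter(b, a, left), d)
    feed_r = np.roll(g * lfilter(b, a, right), d)
    feed_l[:d] = 0.0                             # clear wrapped samples
    feed_r[:d] = 0.0
    return left + feed_r, right + feed_l         # opposite-channel feeds
```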

A thing you might have missed is that acoustic engineers don't work ONLY with acoustic sound waves. I studied acoustics AND signal processing. Sure, what I learned had very little to do with music production, unfortunately, but manipulating signals isn't new to me. Acoustic work often means 10 minutes of acoustic measurements followed by 10 hours of analysing the data with a computer.

Our ears and brain need to interpret the sounds we hear. In my opinion it's good to keep that in mind when making effects. In what way are they "not possible in reality"? That dictates how we will experience the sounds and relate to them. I proposed limiting ILD, but the response was that it would limit artistic intent. Why is excessive ILD an artistic intent in the first place? It's a bad choice for artistic intent, because some people use speakers and some use headphones, so they will hear ILD differently anyway. Artistic intent should mainly concentrate on "artistic" things such as: do we use guitar or mandolin in the "bridge" part of this song? How loud do we mix it? How much reverb do we use? How much do we compress it? ILD is problematic, because the way you listen to music affects so much how you experience it, and also because spatial hearing is so sensitive to it. My opinion is that channel differences should not be clowned around with. Do whatever you want above 2 kHz, because that's a "safe" area, but below 1 kHz one should be careful. Thankfully, that seems to be the overall trend in music production, at least in pop music, which is consumed a lot on headphones.
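
What I mean by being careful below 1 kHz could be implemented as a band-split that narrows only the low band; a minimal sketch, with the 1 kHz split and the 0.5 width factor as illustrative values only:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def limit_low_band_width(left, right, sr, split_hz=1000.0, width=0.5):
    """Shrink interchannel differences only below split_hz by narrowing the
    Side component of the low band; everything above passes untouched."""
    sos = butter(4, split_hz / (sr / 2), output="sos")
    lo_l = sosfiltfilt(sos, left)                # zero-phase low band
    lo_r = sosfiltfilt(sos, right)
    hi_l, hi_r = left - lo_l, right - lo_r       # complementary high band
    mid = (lo_l + lo_r) / 2
    side = (lo_l - lo_r) / 2 * width             # 0 = mono lows, 1 = unchanged
    return hi_l + mid + side, hi_r + mid - side
```

The zero-phase filtering keeps the low/high split exactly complementary, so the highs really are untouched.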
 
Jan 22, 2018 at 8:26 AM Post #158 of 241
Maybe we can look at the subject from a slightly different perspective.
While sound engineers rightly point out that their recordings are not a snapshot of any real-world acoustic environment but a form of art, they nonetheless have to optimize the listening result (according to their choices and taste) for playback on a system that cannot but obey the laws of acoustics.
Now, as outlined in the blog post that I linked before, the situation is objectively very different between playback with a stereo speaker system vs. a pair of headphones... and as hinted (I believe) by 71 dB, some crossfeed can subjectively improve (with the limitations indicated by pinnahertz) the listening result of a recording that has been optimized for speakers and not headphones.
It would be interesting to read from bigshot and gregorio if their choices are based on listening with speakers, or headphones, or a compromise between both.

My traditional two cents,
Flavio
 
Jan 22, 2018 at 10:10 AM Post #159 of 241
1. Perhaps, but I am slowly beginning to understand the nature of our disagreements and differences of view now (finally!). The difference is, I think we are still in the "acoustic world" when using widely spaced mics etc. Every single mic works under the laws of physics. Even if the result is impossible for humans to experience, it still follows the laws of acoustics. For example, the way the signals between widely spaced mics correlate is almost completely dictated by the acoustics of the room. Mixing large and small rooms together brings new elements to the table, but I don't think it means we drop the science of acoustics. There are limits to what kinds of sound waves are possible in the real world, but mics don't capture sound waves as such. They capture pressure changes at a fixed point in space, unless the mic is moving.
Clearly, you have not learned anything about the nature of our differences. The above doesn't outline the differences at all, it only further highlights why they are present.
2.-2c. Ears hear pressure changes in air, and no matter how "crazy" the music you create digitally in a DAW, it all has to be "acoustified" for the ears, and the acoustic environment will immediately modify the sound under the laws of acoustics. 2d. Your "crazy" music gets coloured by pinna effects and the 1/4-wavelength resonance inside the ear canal even with over-ear headphones, not to mention listening with speakers. You can't avoid the laws of acoustics in the process, even if you generate "not from the real world" effects with a DAW.
Still ignoring a key aspect, focussing on the mechanism only. I've highlighted yet another entry in the "Lexicon of Made-Up Words", which, I swear, is published by the Ministry of Silly Walks.
A thing you might have missed is that acoustic engineers don't work ONLY with acoustic sound waves. I studied acoustics AND signal processing. Sure, what I learned had very little to do with music production, unfortunately, but manipulating signals isn't new to me. Acoustic work often means 10 minutes of acoustic measurements followed by 10 hours of analysing the data with a computer.
Nope, nobody missed that either.
1. Our ears and brain need to interpret the sounds we hear. In my opinion it's good to keep that in mind when making effects. 2. In what way are they "not possible in reality"? That dictates how we will experience the sounds and relate to them. 3. I proposed limiting ILD, but the response was that it would limit artistic intent. Why is excessive ILD an artistic intent in the first place? It's a bad choice for artistic intent, because some people use speakers and some use headphones, so they will hear ILD differently anyway. Artistic intent should mainly concentrate on "artistic" things such as: do we use guitar or mandolin in the "bridge" part of this song? How loud do we mix it? How much reverb do we use? How much do we compress it? 4. ILD is problematic, because the way you listen to music affects so much how you experience it, and also because spatial hearing is so sensitive to it. My opinion is that channel differences should not be clowned around with. 5. Do whatever you want above 2 kHz, because that's a "safe" area, but below 1 kHz one should be careful. 6. Thankfully, that seems to be the overall trend in music production, at least in pop music, which is consumed a lot on headphones.
1. Now you're getting closer...but still missing it.

2. Simple: there can be no natural occurrence of the effects gregorio used as examples. They only occur artificially.

3. Again, the answer here is simple: you're making a presumption about artistic content, but your assumption stands little chance of being correct.

4. Again, your opinion, clearly underscored with yet another derogatory jab. That clowning is done with sophisticated equipment in a controlled environment by trained experts, and is intentional, making it artistic content. The only "clowning" here is the application of cross-feed during listening without understanding that.

5. Above 2kHz is a key area of HRTF...but go ahead and ignore that.

6. A poll at Hydrogenaudio has 59% of respondents (members of that forum) listening primarily on headphones, IEMs and earbuds. To think engineers don't take that into account in creating a mix is naive.
 
Jan 22, 2018 at 10:19 AM Post #160 of 241
Maybe we can look at the subject from a slightly different perspective.
While sound engineers rightly point out that their recordings are not a snapshot of any real-world acoustic environment but a form of art, they nonetheless have to optimize the listening result (according to their choices and taste) for playback on a system that cannot but obey the laws of acoustics.
Now, as outlined in the blog post that I linked before, the situation is objectively very different between playback with a stereo speaker system vs. a pair of headphones... and as hinted (I believe) by 71 dB, some crossfeed can subjectively improve (with the limitations indicated by pinnahertz) the listening result of a recording that has been optimized for speakers and not headphones.
It would be interesting to read from bigshot and gregorio if their choices are based on listening with speakers, or headphones, or a compromise between both.

My traditional two cents,
Flavio
If I may... my mix decisions include considering how it sounds on headphones, but there's more to it. Mixing on headphones alone results in a mix that doesn't work well on speakers at all. However, the inverse is not true: mixing on speakers results in an acceptable headphone presentation that may require only slight modification to be fully compatible. So the answer is a qualified compromise. Oh yeah, I still check for mono compatibility. There are still a few mono listeners, but it's primarily a broadcast concern: an incompatible mono mix with excessive L-R aggravates FM multipath reception issues, so it's worth considering if the material is destined for broadcast.
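
(That mono check can be roughed out numerically; a minimal sketch, where "mix.wav" stands in for the stereo mix under test, and any flag threshold you pick is a matter of taste:)

```python
import numpy as np
from scipy.io import wavfile

sr, x = wavfile.read("mix.wav")  # placeholder name for the mix under test
L = x[:, 0].astype(float)
R = x[:, 1].astype(float)

mid = (L + R) / 2    # what a mono listener receives
side = (L - R) / 2   # what mono fold-down throws away
ratio_db = 10 * np.log10((np.mean(side**2) + 1e-12) / (np.mean(mid**2) + 1e-12))
print(f"S/M energy ratio: {ratio_db:+.1f} dB")
# Strongly negative is mono-safe; creeping toward 0 dB means a lot of L-R
# energy, the kind that aggravates FM multipath on broadcast.
```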
 
Jan 22, 2018 at 11:55 AM Post #161 of 241
It would be interesting to read from bigshot and gregorio if their choices are based on listening with speakers, or headphones, or a compromise between both.

Always speakers. I've never seen headphones used in the studio for anything other than isolation during tracking. There's a huge difference between pushing sound waves a couple of centimeters and pushing them across the room.
 
Jan 22, 2018 at 1:19 PM Post #162 of 241
Clearly, you have not learned anything about...

I don't think I can answer your posts, pinnahertz, without sounding like a "racket" scientist… :rocket: …no matter what I write there's no support from you, now is there? I know I don't know everything, but this is comical… :ghost: …your criticism and judgement are so over the top I can't take them seriously anymore. Sorry.
 
Jan 22, 2018 at 2:03 PM Post #163 of 241
I don't think I can answer your posts, pinnahertz, without sounding like a "racket" scientist… :rocket: …no matter what I write there's no support from you, now is there? I know I don't know everything, but this is comical… :ghost: …your criticism and judgement are so over the top I can't take them seriously anymore. Sorry.
You're just not reading and learning. I'm pointing out that you are still locked into the process of human hearing and the science of acoustics, while completely ignoring human perception, and the two are entirely different. Normal hearing pretty much always works the same way, but perception is highly influenced by other factors. Until you embrace that concept, several of us here will seem to you to be disagreeing and speaking nonsense. You are considering less than half of the human experience of listening to recorded music.
 
Jan 22, 2018 at 3:16 PM Post #164 of 241
Sometimes knowing what you don't know is better than knowing something.
 