bigshot
Headphoneus Supremus
I'd even wager that the speakers retrieve the same amount of detail and resolution as the HD 800
The wild card is the room. If you can tame that with EQ, any decent speakers can outperform headphones.
I'd even wager that the speakers retrieve the same amount of detail and resolution as the HD 800
Once again, you've placed your own opinion as to what should be done above that of all others. Yes, the facts are that in an acoustic space with a pair of speakers, at some frequency, bass becomes difficult to localize at all, and separation becomes progressively moot. But the transition to that point is gradual, frequency dependent, and the localization issue occurs with speakers in a closed acoustic space. The space does impact the results, often quite significantly. Remove the space and things change, as in an outdoor environment, for example.
But all that aside, when you say "3dB is optimal maximum separation" (oh, and ignore the transition frequency where that gradually becomes true!), and I assume you're referring to the mixing of music, you are, again, expressing opinion. You've made no allowance for creative decisions, because your opinion is right, final, immutable! Well, I view your means of expressing that opinion as quite arrogant. I have worked in that creative environment, and that's just not the way it is. If we were all limited to 3dB of separation, even with a clearly defined crossover frequency, I have no doubt that it wouldn't take long for someone to find a reason to object to the limitation of the creative palette, and the fidelity of a transmission system.
You can create spatial effects with delay alone at low frequencies where you need to limit separation. If you delay a bass tone by ~600 µs on the right channel and listen to it on speakers, you should hear the sound located near the left speaker, even when the volume of the right channel is the same! If you attenuate the right channel by 3 dB, the effect works even better. In real life, instrument sounds have high-frequency content, transients when a bass is plucked, etc., and those have greater separation, so localization is much easier. So it is about using spatial cues cleverly.

Now, let me be very clear: I'm not disputing your discussion of how human hearing localization works. But if, in the recording process, I reduce channel separation to, say, 3dB at 50Hz and 10dB at 1kHz, what do you think happens when that artificially reduced separation is played through an acoustic transducer system in an acoustic space which applies that reduction again? See, artificial separation reduction is never appropriate in recording and production because we are never absolutely certain of how it will be heard! We must, therefore, mix properly to convey the creative intent, whatever that may be, and that just might include hard-panning the string bass to the left channel because we may actually want it to be heard that way on headphones! Sure, the possibility is rare, but it exists...unless we let you have your way.
And this is key: Mixing is done on excellent speakers positioned optimally in a well designed, acoustically treated space. And, since we sit and listen to them, the HRTF is already being applied; the separation reduction of our own heads and ears is already factored in! We are mixing through that mask. Applying it again would be incorrect, and impair our creative space. Sure, that means some mixes will sound less than natural on headphones, but not every headphone listener has a problem with that like you do. I've tried several times to convey that, but you seem to think your hatred for highly separated mixes is universal. It's not! Oh, but we're all idiot engineers who know nothing about spatial hearing.
[A] Do you think your analysis of hearing at low frequencies is unknown to anyone but you?
[B] Do you not realize that has been known for a time spanning many, many decades?
[C] Do you think every engineer is completely ignorant of this?
[D] And do you think it's impossible that someone working with cross-feed longer than you might have a different opinion?
So far it seems you'd be answering "yes" to all of the above.
And yet, certain creative choices are still made anyway. Just because someone doesn't like the result doesn't make those decisions wrong in a way that must absolutely and universally be corrected! If you don't mix music for commercial release, I'm not sure how you can have such a low opinion of those who do.
Accuracy or not, nobody...and by that I mean no engineer working his craft for a living...would work with that kind of separation reduction built in as a mixing requirement. Your figures may not be far off (depending on which source you read), but that's what happens in a room with speakers during playback. If we did that in production, the entire process would get separation reduction twice! That's how we'd get fired.
Unfortunately, it does matter. Recordings are made to convey creative intent as universally as possible. Building in crossfeed would assume only headphone listening. Built-in crossfeed is not removable. However, it can be added during playback, and there are many methods. Those playing back the recording are aware of their playback methods and can apply crossfeed if they desire, affecting no one else but themselves.

It doesn't matter when the reduction of separation is done as long as it is done.
The implication here is that you are right and every engineer mixing for stereo on speakers without crossfeed is wrong. I'm glad you realize your opinions won't change how music is mixed.

If it isn't done while mixing music (it very rarely is), I do it myself using crossfeed. My allowance doesn't matter. My hopes, opinions and advice don't change anything.
Ever seen a pan pot? I assume you must have at some point. Now, ever seen a pan pot that varies ITD vs frequency and ILD vs frequency? Of course not, they don't exist, because that wouldn't work as universally as the simple ILD pan pot. I've advocated the idea above for years, actually proposed a design for this in the 1980s, and it's even entirely possible now that mixing is fully done in software, but the fact is, it's impractical because it's not a universal solution for every listener.

They mix the music the way they do. However, I can use these principles when I make my own music. I try to mix it so that it works on headphones and speakers as it is. I call it omnistereophonic sound. Yes, that's my own definition.
I don't think the limitation of separation limits much creativity, because there is so much more you can do. People are just used to doing things a certain way and are reluctant to change their habits. I question things, and I am one of those guys who steps into a room and says, "Why do you do thing X this way and not another way?" For me, "Well, it's always been done this way" is not a good answer. It is an excuse. Anyway, it's just frequencies below 1 kHz or so; above that you have a lot of freedom with separation.
Separation is overrated. In the early days of stereo it was considered a "virtue", but in real life we don't need much of it, at least below 1 kHz. People are just still stuck on that mentality. What we hear in everyday life around us is more mono than people think. Our hearing is good at detecting tiny spatial cues and that's why headphone listening with huge separation overloads our heads.
You can create spatial effects with delay alone at low frequencies where you need to limit separation. If you delay a bass tone by ~600 µs on the right channel and listen to it on speakers, you should hear the sound located near the left speaker, even when the volume of the right channel is the same! If you attenuate the right channel by 3 dB, the effect works even better. In real life, instrument sounds have high-frequency content, transients when a bass is plucked, etc., and those have greater separation, so localization is much easier. So it is about using spatial cues cleverly.
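The delay trick described above (ITD-style panning) is easy to try for yourself. Here is a minimal Python/NumPy sketch of it, not code from anyone in this thread; the function name, constants and 100 Hz test tone are my own illustration. It delays the right channel by ~600 µs (26 samples at 44.1 kHz) and optionally adds the 3 dB level difference mentioned:

```python
import numpy as np

FS = 44100  # sample rate in Hz (assumed)

def itd_ild_pan_left(tone, itd_us=600.0, ild_db=0.0, fs=FS):
    """Pan a mono signal toward the left by delaying (ITD) and
    optionally attenuating (ILD) the right channel."""
    delay = int(round(itd_us * 1e-6 * fs))   # ~600 us -> 26 samples at 44.1 kHz
    gain = 10.0 ** (-ild_db / 20.0)          # dB attenuation -> linear gain
    right = np.concatenate([np.zeros(delay), tone])[:len(tone)] * gain
    return np.stack([tone, right], axis=1)   # columns: left, right

# 1 second of a 100 Hz "bass" tone
t = np.arange(FS) / FS
bass = np.sin(2.0 * np.pi * 100.0 * t)

# delay right by ~600 us; adding 3 dB of ILD strengthens the effect
stereo = itd_ild_pan_left(bass, itd_us=600.0, ild_db=3.0)
```

Writing `stereo` out as a WAV file and playing it on speakers should pull the tone toward the left speaker, as the post describes.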
The problem with the above statements is they are based on your opinion of what's right. The industry is already mixing using a compromise it feels is adequate.

That's why omnistereophonic sound is difficult, but also possible, because such recordings exist (those "infamous" 2 %). It is about having a clever compromise between stereophony that works on speakers and stereophony that works with headphones. <snip> Mind you, I am at the beginning of studying this subject, and I don't claim mastery of this issue. I like a little separation in the bass more than totally mono bass with headphones. It gives a sensation of a room around me, that something is "happening" around me acoustically.
Nobody would know any of that from your posts. If that wasn't your intention, you may want to rethink how you say things.

[A] Of course not. This is acoustics 101 stuff.
[B] No, I am aware of that and I never said it wasn't. I was taught this stuff at university over 20 years ago, in 1992 maybe...
[C] Of course not, but how the knowledge is applied is another story. I was 41 when I discovered crossfeed! Before that I was "spatially ignorant."
[D] I believe there might be differing opinions in little details, but the large picture should be agreed upon.
No, I did not answer "yes" to all those questions. I'm not as arrogant as you think.
But you are saying that 92% of all mixes are "wrong", you've invented statistics and terminology....etc., etc....

I'm not a dictator telling how all music on Earth must be remixed and crossfed to my liking. I'm figuring out for myself what is the smartest way to do things, and I apply that understanding in my music making as a hobby. I'm for neutral audio, and this is about finding out how we get to "neutral spatiality." Omnistereophonic recordings would mean no need for crossfeeders. The spatiality would be much better controlled in all listening conditions because the spatial information in the recording is smart.
Nobody's arguing HRTF or how hearing works; you can stop explaining the obvious any time now. The argument is about your cross-feed mandate for 92% of all recordings ever made, and that your cross-feed should be employed in mixing. Wrong on both counts.

HRTFs are measured in an anechoic chamber, at least they were measured that way in the acoustics lab I worked in. My HRTFs have been measured in the past, but I don't have them because they are "intellectual property" of Nokia. What happens in a room at low frequencies is governed by room modes, and the result is that spatial information is mostly lost. You need higher frequencies for spatial cues. For that reason it doesn't matter much what the separation at bass is when we listen to speakers, but it matters A LOT with headphones, so it should be optimized for phones. At the lowest frequencies monophonic bass helps speakers, because the sound sources add up amplitude-wise, meaning a little more sensitivity (1-2 dB).
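For readers unfamiliar with the device being argued over: a crossfeed filter mixes a low-passed, attenuated copy of each channel into the opposite channel, shrinking separation at low frequencies while leaving highs alone. This is a generic sketch of that idea, not any specific commercial crossfeeder; the 700 Hz cutoff and -6 dB level are illustrative assumptions, and the one-pole filter is a deliberate simplification:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, fs):
    """Very simple one-pole IIR low-pass (illustrative, not an HRTF match)."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1.0 - a) * s + a * acc   # y[n] = (1-a)*x[n] + a*y[n-1]
        y[i] = acc
    return y

def crossfeed(stereo, cutoff_hz=700.0, level_db=-6.0, fs=44100):
    """Feed a low-passed, attenuated copy of each channel into the
    opposite channel. Parameters here are assumptions for the sketch."""
    g = 10.0 ** (level_db / 20.0)
    left, right = stereo[:, 0], stereo[:, 1]
    return np.stack([left + g * one_pole_lowpass(right, cutoff_hz, fs),
                     right + g * one_pole_lowpass(left, cutoff_hz, fs)],
                    axis=1)
```

Fed a hard-left 100 Hz tone, this bleeds roughly half-level bass into the right channel; fed a hard-left 5 kHz tone, the bleed is far smaller, which is exactly the frequency-dependent separation reduction being debated.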
Please don't continue.

Now I have to go. I continue later...
Yeah, blah blah... I know all of that, but what you're trying to do is to take what human hearing does and apply it as a pre-correction. Sorry, wrong. And it's never going to happen anyway. There's no correction needed or desired if the mix is played on a system similar to that on which it was created. And, that mix was checked and sometimes adjusted by listening on headphones. What you get is intentional. If you hate 98% of what you get, go fix it....for yourself.
If you want to use cross-feed, go for it. Again...(and again and again), it's NOT the ultimate in any way, it DOESN'T improve every single recording, not even your invented 98% figure, and all the rest. I found an example that not only doesn't benefit from cross-feed, it's harmed by it. Wasn't hard, and it's not uncommon, just not in YOUR world which, BTW, is inhabited by only one person.
Yes! And to the degree that your pseudo-definitives are imposed on everyone else as fact, making everyone else wrong!
Again, and I will just keep hammering this nail until it goes in: I don't object to your opinions, I object to you stating them as absolute fact that should apply to every recording and every headphone listener everywhere. State your preferences, even promote them, that's fine. But state opinion as absolute fact, and be prepared to back up your statements (like the 98%/2% statistic) with actual tested verifiable data. Be prepared to back up anything like "crossfeed improves everything" with a study showing preference over a statistically significant population segment.
Or we can just do this for a third time, if you like. I'm almost to the point of copy/paste already.
All those blue stripes sure do look pretty on the page!
If you can make an omnistereophonic mix then that's what I suggest you do. If you can't, then mix for speakers. That's what I think about this.

Unfortunately, it does matter. Recordings are made to convey creative intent as universally as possible. Building in crossfeed would assume only headphone listening. Built-in crossfeed is not removable. However, it can be added during playback, and there are many methods. Those playing back the recording are aware of their playback methods and can apply crossfeed if they desire, affecting no one else but themselves.
The implication here is that you are right and every engineer mixing for stereo on speakers without crossfeed is wrong. I'm glad you realize your opinions won't change how music is mixed.
Ever seen a pan pot? I assume you must have at some point. Now, ever seen a pan pot that varies ITD vs frequency and ILD vs frequency? Of course not, they don't exist. Because that wouldn't work as universally as the simple ILD pan pot. I've advocated the idea above for years, actually proposed a design for this in the 1980s, it's even entirely possible now that mixing is fully done in software, but the fact is, it's impractical because it's not a universal solution for every listener.
The problem with the above statements is they are based on your opinion of what's right. The industry is already mixing using a compromise it feels is adequate.
I'm going to quote you some other statistics. 80% of all existing recorded stereo music doesn't benefit from rudimentary cross-feed. 40% of that group is actually damaged by rudimentary cross-feed. 15% of all recorded music is at least somewhat benefitted by rudimentary cross-feed, and the remaining 5% is mono. Those figures are MY opinion with the same verification as yours (none).
Please don't continue.
Some people have more to say than just comment on blue stripes!
I guess if I read all those words, I might have more to say. I keep trying and I get one or two stripes in and I start thinking of better things I could be doing than wade through it all. It might be a good idea to put your best thoughts up front. I tend to pay attention to properly constructed paragraphs with a statement, supporting arguments and summation. I understand what people are trying to say better when they organize their thoughts for the convenience of the reader. If it's just for your own amusement, that's fine. I can always just admire the pretty blue stripes.
1. I completely disagree with your statistics. Please prove them.

1. What human hearing does is a good "pre-correction" target if the rest of the chain does next to nothing (headphones without cross-feed). A studio set-up doesn't reveal problems such as excessive stereo separation at low frequencies. Of the 98 %, a lot are "mild" cases which I don't "hate", but I still think they can be improved with crossfeed. Ping-pong recordings are in the "hate" category.
2. I believe that excessive stereo separation is one of the least understood aspects of audio and that a lot of improvements can be made on that front. I was also "deaf" to it, because speakers hide the problem with acoustic crossfeed, and I thought headphones sound unnatural/tiring simply because they are headphones. Headphone listening has become more popular thanks to "mobile music devices", so maybe the people who mix music are becoming wiser day by day and we are moving toward omnistereophonic sound.
3. Yes, crossfeed doesn't improve every recording. For the 2 % it is harmful, and for the mildest cases of the 98 % it is a matter of taste and even daily mood. However, in my opinion crossfeed benefits, without doubt, easily over 50 % of recordings, maybe 2/3 of all. I have said several times that an off-switch is needed on a crossfeeder, because every now and then you need it!
4. What if "everyone else" is wrong? That's only human. I was wrong/ignorant about this myself up until 2012.
5. I have a strong faith in what I say based on the rather intensive and impassioned study of the subject for half a decade.
6. Is there something I can learn? Absolutely! My understanding isn't complete at all (for example I haven't studied the "transitional octave" much), but I sense it is quite advanced compared to the general understanding of the subject.
Habits can be rational or "cultural". Most people do "cultural" headphone listening (you put your Beats Audio cans on your head and blast out bassy EDM or hip hop tracks without thinking much about it). I try to do rational headphone listening, which tries to take into account how our hearing works to make the "sonic flow" to my mind as rational as possible. The problem is that "cultural" habits are mistaken for rational ones when they rarely are.
The 2 % / 98 % is an estimate, which I have no reason to disagree with. Crossfeed doesn't improve everything, but most of the time, imo, it does.
1. If you can make an omnistereophonic mix then that's what I suggest you do. If you can't, then mix for speakers. That's what I think about this.
2. Of course pan pots should be ITD+ILD based in the 21st century. It's not 1958 anymore, and simple ILD panning is naive.

3. How is it universal even for speakers?

4. Many hard-panned ping-pong recordings sound plain silly on speakers, and remixing such recordings using the principles of ITD+ILD results in a much better sound image on speakers (and headphones too!)
5. I have processed Dave Brubeck's Jazz Impressions of Eurasia for speakers (crossfed for speakers) to give it a reasonable soundstage, and it was a clear improvement, if I say so myself. Hard-panned stereo makes the sound come from the speakers only (keyhole audio), but if both speakers participate in every sound, the soundstage gets spread over the area where the speakers are, a bit behind the loudspeakers. ILD-only panning is naive, and hard-panned ping-pong stereo is silly. Nothing "universal" about those. It's 2017 and we have powerful computers to do panning much better. Omnistereophonic recordings are universal.
6. Except recordings are mixed differently! Some have huge separation in the bass, others hardly any. Some recordings need brutal crossfeed, others hardly any. There is no standard. It's up to whoever is doing it.

7. The principles of omnistereophonic stereo would set limitations and in that way create some kind of standard, so that recordings would have more coherent stereophony.
8. Even your numbers suggest that people should have crossfeeders for the 15 % of their recordings.
Mono recordings are of course not included in the 2 % / 98 % estimation.
Um...I'm not the one stating your opinions as fact, dude.

My original posts are more in the form you suggest, but then pinnahertz reads them and interprets them as me arrogantly stating my opinions as facts, and it all escalates into pretty blue stripes for you to admire...
2. I believe you are wrong: stereo separation is good and well understood, and we make choices to use it as required for our end goal. I believe we have all the terms we need. Your new term "omnistereophonic" is misapplied if you consider that the Greek root behind "stereo" means "solid, three-dimensional". Your omnistereophonic concept is not that at all.
8. How's that for minimizing the blue lines?