Listener fatigue

Discussion in 'Sound Science' started by SilverEars, Nov 3, 2017.

  1. 71 dB
    The 2 % is not my estimate; I read it somewhere online. I don't disagree with it, because I estimate I have at least 20 and no more than 50 CDs that don't require crossfeed at all, so the 2 % estimate (in my case 30 CDs) is not far off. Even if the number were 10-fold, 80 % of recordings would still benefit from cross-feed.

    YouTube audio quality isn't top notch, at least not with 360p video. That's why I said that CD quality is needed for a final judgement.

    Wow, you really enjoy splitting hairs! I need to hear it in CD quality to make the final decision. I don't know what YouTube's low-bitrate coding does to stereo sound.

    I'm just too lazy to remind you every 3 seconds that what I say is my opinion. You should have learned that by now. I have been around for over a month and made over 200 posts; my writing style should be familiar to you, considering how much you follow what I write. The people around me are friends, work pals, etc.: people I connect with face to face in my life. All kinds of stuff, '70s rock etc. They listen to what they want, and if crossfeed helps, it helps. Isn't it great that it does?

    Made it up? I didn't know I was that creative!

    1. You're welcome! That's because large channel separation (at low frequencies) is possible only when the sound source is very near one ear. Cross-feed reduces channel separation, making it possible for the sound source to have some distance from both ears. The width of the soundstage is more complex than just channel separation: the maximum width is achieved with optimal channel separation, which is about 3 dB for bass and increases with frequency.

    2. Yes, I understand. Linkwitz's definition is extremely ambitious. I don't think our audio technology is there yet, at least not in an affordable form. I think his definition of spatial distortion used to be the same as mine in the 70's, but he has since "developed" it in a more ambitious, "3D" direction. That would explain where I got my definition: from his old writings.

    3. Pretty straightforward in my opinion, but thanks!
     
  2. 71 dB
    Then why is the "remixing" a sin when it's done with a cross-feeder, but not a sin when room acoustics, LP playback, speaker positioning, or the speakers themselves cause it? What cross-feed does to the sound is very mild and controlled compared to what a room does, even if you have spent a fortune on the acoustics. People whine about how cross-feed "messes" with the sound while tolerating 10 dB room modes when listening to speakers.
     
  3. bigshot
    Speakers don't have spatial distortion because they inhabit space. They aren't pressed up against your ears. The two big things that give them auditory space are the acoustics of the room, with a myriad of complex directional cues in the form of echoes and reflections, and the listener's ability to turn his head to localize sound direction. Cross-feed doesn't address either of those things; it just mixes together the primary directional sound (channel separation), as you say. Speakers are a whole different kettle of fish.
     
  4. pinnahertz
    You'll pardon me if "I read it somewhere online" doesn't wash.
    I understand that YouTube audio employs lossy compression, but in that example it's 256 kbps AAC @ 44.1 kHz. Since AAC is considered transparent at 256 kbps, a CD is unnecessary for a final quality judgement. There's no alteration of anything that would cause additional spatial distortion in that recording over the CD version.
    No, you don't. See the above.

    Make no mistake, I do know your writing style. However, lurkers don't, and new visitors don't. Worse, anyone with entry-level experience in audio would never be able to discern your ham-fisted opinion from fact. That makes your "style" irresponsible.
    I'm sure you know MY style by now as well: no proof, no statistical evidence of preference...again, just opinion, likely yours or theirs through a massive filter.

    You first claimed your definition was that of Linkwitz; it's not. Then you cited Cmoy and Meier, but offered no proof. I'm left with your definition being your own. And yes, quite creative.
    More creativity? "Maximum optimal channel separation of 3 dB for bass"? Seriously? Let's see how that flies in an equipment review. You'd be imposing significant creative limits with that (made-up) figure. That would be just plain poor engineering. I do recognize that most (not all) bass is mixed panned nearly dead center, but that doesn't mean that 3 dB is at all optimal.
    Technology doesn't need to provide a practical solution in order to provide a workable definition of a problem. In fact, defining the problem and inventing a solution hardly ever occur simultaneously.

    I can't help it if you're citing old work. But you're still not citing anything, so that still leaves us with your own creative definition.
    My comment was sarcastic. This is a Headphone Forum, and you're suggesting that people use speakers. See the irony? I realize sarcasm doesn't work well in print; I also fully expected you NOT to get it without explanation.
     
  5. pinnahertz
    I never said remixing with cross-feed is a sin. It's a tool, and as such, the better the tool the better the result when applied appropriately.

    Regardless, the result of your cross-feed method is just another unnatural perspective: a narrowing of separation, with material still imaged far too close and still mostly within the head. Yes, it's mild and can be controlled, but it's not the panacea, the mandatory and universally desirable solution you make it out to be.

    Speakers in a room at least roughly approximate the position of the speakers used when the mix was created. They are roughly localized the same way as the original. Headphones don't do that, and headphones + cross-feed don't either.

    By your own admission you have very strong negative feelings about highly separated mixes; cross-feed is a remedy for that, and is strongly your preference. Let's not stray from the issue at hand.
     
  6. 71 dB
    I don't write down where I read what; I absorb the information and move on if it doesn't contradict my prior understanding. I have no need to deny a claim that only 2 % of stereo recordings are free of excessive stereo separation. You must understand that a lot more than 2 % can be listened to without cross-feed because the violation is small enough, but it doesn't hurt to use very weak cross-feed either. The cross-fed version might cause less fatigue, but without cross-feed a mildly excessive stereo sound may seem more vibrant spatially. So it's up to my mood what I do. Your YouTube example is probably one of these "gray area" recordings.

    All I can say is that I listen to almost everything cross-fed, but not 100 % of everything.

    Really? Audio quality is known to vary with video resolution: HD videos have "good" sound, while the sound quality seems pretty low for low-resolution videos such as 480x360. So maybe 1920x1080 videos carry 256 kbps AAC @ 44.1 kHz? I don't know about AAC, but mp3 encoders have options for how stereophonic information is handled.

    mp4a.40.2(18) seems to be the audio codec, whatever that means.

    Irresponsible? Wow. I think what I say is most beneficial for entry-level audiophiles, because my philosophy is "bang for the buck" and I have the kind of education that helps in understanding these things. It's the golden-eared, "too far gone" high-end gurus with their Myth Realizers and snake-oiled power cables who are not the targets of my enlightenment.

    I don't mind polite corrections of mistakes, but you attack me as if the purpose of your life was to destroy my self-confidence. It makes me think that my opinions are dangerous to you, but I can't figure out why.

    Yes, I can recognize your style a mile away.

    Here: http://www.johncon.com/john/SSheadphoneAmp/

    How much ILD do you think you can get at low frequencies when you listen to loudspeakers? Audio gear such as power amps should have as much channel separation as possible, because the reduction of separation happens acoustically (speakers) or with cross-feed (headphones). Delay is involved, too: the contralateral signal is not only quieter but also delayed (up to ~640 µs).
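    Roughly, a cross-feed then looks like the sketch below: a quieter, slightly delayed copy of each channel is mixed into the other. This is only a minimal illustration, not any particular cross-feed circuit; the -6 dB feed level and 300 µs delay are example values I chose, and a real cross-feeder would also low-pass the contralateral path so the effect concentrates at low frequencies.

```python
import numpy as np

def crossfeed(left, right, fs, feed_db=-6.0, delay_us=300.0):
    """Minimal cross-feed sketch: mix an attenuated, delayed copy of each
    channel into the other. feed_db and delay_us are assumed example values;
    real cross-feeders also low-pass the contralateral (fed-across) path."""
    g = 10.0 ** (feed_db / 20.0)                 # feed gain as a linear factor
    d = int(round(delay_us * 1e-6 * fs))         # delay in samples (<= ~640 us)
    shift = lambda x: np.concatenate((np.zeros(d), x[:len(x) - d]))
    out_l = left + g * shift(right)              # contralateral feed into left
    out_r = right + g * shift(left)              # contralateral feed into right
    peak = max(np.max(np.abs(out_l)), np.max(np.abs(out_r)), 1e-9)
    return out_l / peak, out_r / peak            # normalize to avoid clipping
```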

    I have studied and thought about these things a lot during the last 5 years. Some of this stuff is counterintuitive and requires different thinking than with speakers. Below 800 Hz, ITD is important for the perceived angle and ILD acts as a supportive spatial cue (it affects perceived distance). 800-1600 Hz is a "transitional octave", and above 1600 Hz spatial hearing operates on ILD while ITD is pretty meaningless. A good spatial presentation comes from having a consistent combination of ILD(f) and ITD(f) at all frequencies f. It is actually much more complex than that (spectral effects + reverberation), but this is the main principle.
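    As a rough sketch of those bands (the cutoff frequencies are the approximate figures from the paragraph above, not hard limits):

```python
def dominant_cue(f_hz: float) -> str:
    """Rough map from frequency to the dominant localization cue,
    using the approximate bands described above (not hard limits)."""
    if f_hz < 800.0:
        return "ITD sets the perceived angle; ILD mostly supports distance"
    elif f_hz <= 1600.0:
        return "transitional octave: ITD and ILD both contribute"
    else:
        return "ILD dominates; ITD is largely ignored"
```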

    You can have 10 dB of ILD in the bass if you want, but it makes the bass sound very near and unnatural with headphones, and it doesn't sound nice without cross-feed. Sometimes large separation is your enemy, sometimes it's good.

    True, but it doesn't help me with the problem if the solution is outside my "budget", or becomes available/cheaper in the distant future. I need solutions that are available yesterday and also affordable. For me cross-feed is such a solution.

    Yeah, people who hate sound inside their heads love headphones, which are known for locating sound inside the head. My suggestion was almost equally ironic, I admit.
     
  7. 71 dB
    You can denigrate cross-feed all you want, but for many it is an important way to improve listening enjoyment. It doesn't create a totally accurate sound image (how many speaker systems do?), but it removes, or at least reduces, excessive stereophony and in doing so makes the sound natural, or at least more natural. A flutist may not appear to play as far away as he/she should, or at the correct angle, but at least his/her position is "possible" and natural: our hearing knows that such sounds are possible in real life, in a certain kind of environment, if someone plays the flute very near you. That's why cross-feed doesn't create an unnatural perspective but a miniature perspective. That is a compromise of course, because real-size perspective is the ultimate goal, but it is a pleasant compromise in my opinion. Ask yourself: do you listen to music mainly to hear musicians at the correct distance from yourself, or simply to enjoy their playing? What is relevant and what is not?

    I am not here to say you should not use anything better than cross-feed. If you can, then by all means do, but use AT LEAST cross-feed, because in my opinion it improves the sound in relevant ways: it makes it more enjoyable, more natural and less tiring to listen to. I believe this is a case of some people not knowing what's best for them, and that's when someone has to do the enlightening. I feel like someone telling people that tobacco is bad for them.
     
  8. pinnahertz
    Your stats are still unique to you, based on your preference, and you have no actual data to indicate they hold for everyone and all music.
    MP4 is the container; AAC is the audio codec. I saved and opened the file. The bitrate is better than 256k.
    Once again, you take this far too personally. I object to the method with which you communicate, not to you personally or your opinions: stating your personal opinions as if they are fact, definitive, and well researched. They are not; they are just your opinions. You may have put in some work over time, you may not have. We only have your word. State it all as opinion and we're good. State it all as fact, and you need to publish a paper with research for peer review.
    Yes, sure, I already get that. But when you say 3dB separation is optimal, you're just wrong. That would impose an artificial limit on what's going on acoustically, and that's bad engineering.
    Then suggest it as your favorite practical solution, without all the pseudo-definitives.
     
  9. ev13wt

    I do tend to be more on this side of the table. In reality, I enjoy headphone listening for what it is - "no room" - be it with or without crossfeed.

    I would wager that most sound engineers these days don't mix for "stereo". They mix down for earbuds, bass-overblown headphones, Bluetooth speakers and cell-phone speakers. Why is everything highly compressed? Because 80% (at least) of currently "deployed" audio reproduction systems cannot deal with more than 10 dB of dynamic range :p
     
    Last edited: Nov 6, 2017
  10. pinnahertz
    Mixed for stereo, checked on headphones/earbuds. DR limits in reproduction systems are acoustic (i.e., high-background-noise environments), not the quality of the gear. Compression/loudness is demanded by producers and artists for artistic and competitive reasons.
     
    71 dB likes this.
  11. 71 dB
    About 3 dB is the optimal maximum separation. We would have to have elephant heads to change that, because it's a wavelength-versus-head-size thing. It means the operative separation at low frequencies is 0-3 dB. If you do it in the mix, you have more or less created a recording belonging to the small 2 % of recordings not needing crossfeed (now you perhaps understand why only 2 % qualify: most of the time engineers think like you and don't limit separation enough, plus they mix for speakers). If the separation is not limited in the mix, you need cross-feed to limit it. Separation larger than 3 dB doesn't happen in real life at low frequencies unless the sound source is very near the listener, which is something engineers would like to avoid most of the time.

    For example: let's assume a sound source 1 foot (30 cm) away from the head on the left side, and a 2 dB acoustic shadow effect of the head at low frequencies *. The distance to the left ear is 1 foot / 30 cm, and to the right ear about 0.75 feet / 22 cm more for a normal-size head. The distance attenuation for the right ear is 20*log10(1.75/1) = ~5 dB. Together with the shadow effect this means the separation is about 7 dB **. If the sound source is moved to 3 feet, the separation is 2 + 20*log10(3.75/3) = ~4 dB, and if the distance is 10 feet, we get 2.6 dB. This example oversimplifies the situation (near field/far field + reverberation ignored), but it illustrates why a 3 dB limit at bass makes sense.

    At 1 kHz the separation limit is about 10 dB (I haven't studied this very accurately), and it rises with frequency to 25-30 dB.

    * These values can be seen in HRTF curves comparing lateral +90° and -90° measurements.
    ** In other words, if you have 7 dB of ILD at low frequencies, it's a spatial cue telling the brain that the sound source is about 1 foot away from your head, to the right or left. A large ITD is expected too, because a small ITD suggests the sound source is almost equally distant from both ears, which demands almost zero ILD. So there's a lot that can go wrong, and that's why 98 % of recordings benefit from cross-feed.
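    Here is the arithmetic of the example above as a quick Python check, with the same simplifications (point source, a fixed 2 dB head-shadow term, ~0.75 feet of extra path to the far ear, no near-field or room effects):

```python
import math

def low_freq_ild_db(dist_ft: float, shadow_db: float = 2.0,
                    extra_path_ft: float = 0.75) -> float:
    """Approximate low-frequency ILD for a source directly to one side:
    head shadow plus distance attenuation to the far ear (oversimplified,
    as in the example above)."""
    return shadow_db + 20.0 * math.log10((dist_ft + extra_path_ft) / dist_ft)

for d in (1.0, 3.0, 10.0):
    print(f"{d:4.0f} ft: ~{low_freq_ild_db(d):.1f} dB")   # ~6.9, ~3.9, ~2.6 dB
```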

    Okay, okay. The "pseudo definitives" arise from the fact that I believe in what I say.
     
  12. bigshot
    I've never done a mix on anything but speakers. The mix is usually done on the main speakers on the stage, and then, as a last check, it's played back through small near-field speakers. Very rarely is a change made at that step. I've never seen a sound mixer use headphones for anything but isolation during recording, and for editing when they don't want to bother other people. In fact, the headphones at the studios I've seen are inexpensive "beaters" that are stored in a pile in a cabinet or drawer. Never any good headphones.
     
  13. bigshot
    I've been working on my system for about 40 years. But even a midrange 5.1 system in a typical living room can outperform the best headphones when it comes to a vivid and lifelike soundstage. I have great headphones too, but I use them mostly for editing. When I listen to music, it's always on speakers.
     
  14. theveterans
    Agreed. Even the entry-level Yamaha HS7 speakers with the entry-level Yamaha HS8S subwoofer outperform all the headphones I've tried, including the HD 800, in imaging and soundstage. I'd even wager that the speakers retrieve the same amount of detail and resolution as the HD 800.
     
  15. pinnahertz
    Once again, you've placed your own opinion of what should be done above that of all others. Yes, the facts are that in an acoustic space with a pair of speakers, at some frequency bass becomes difficult to localize at all, and separation becomes progressively moot. But the transition to that point is gradual and frequency-dependent, and the localization issue occurs with speakers in a closed acoustic space. The space does impact the results, often quite significantly. Remove the space, and things change, as in an outdoor environment for example.

    But all that aside, when you say "3 dB is the optimal maximum separation" (oh, and ignore the transition frequency where that gradually becomes true!), and I assume you're referring to the mixing of music, you are, again, expressing opinion. You've made no allowance for creative decisions, because your opinion is right, final, immutable! Well, I view your means of expressing that opinion as quite arrogant. I have worked in that creative environment, and that's just not the way it is. If we were all limited to 3 dB of separation, even with a clearly defined crossover frequency, I have no doubt that it wouldn't take long for someone to find a reason to object to the limitation of the creative palette and of the fidelity of the transmission system.

    Now, let me be very clear: I'm not disputing your discussion of how human hearing localization works. But if, in the recording process, I reduce channel separation to, say, 3 dB at 50 Hz and 10 dB at 1 kHz, what do you think happens when that artificially reduced separation is played through an acoustic transducer system in an acoustic space which applies that reduction again? See, artificial separation reduction is never appropriate in recording and production, because we are never absolutely certain of how it will be heard! We must, therefore, mix properly to convey the creative intent, whatever that may be, and that just might include hard-panning the string bass to the left channel, because we may actually want it to be heard that way on headphones! Sure, the possibility is rare, but it exists...unless we let you have your way.

    And this is key: mixing is done on excellent speakers positioned optimally in a well designed, acoustically treated space. And since we sit and listen to them, the HRTF is already being applied; the separation reduction of our own heads and ears is already factored in! We are mixing through that mask. Applying it again would be incorrect, and would impair our creative space. Sure, that means some mixes will sound less than natural on headphones, but not every headphone listener has a problem with that like you do. I've tried several times to convey that; you seem to think your hatred of highly separated mixes is universal. It's not! Oh, but we're all idiot engineers who know nothing about spatial hearing.

    Do you think your analysis of hearing at low frequencies is unknown to anyone but you?
    Do you not realize that has been known for a time spanning many, many decades?
    Do you think every engineer is completely ignorant of this?
    And do you think it's impossible that someone working with cross-feed longer than you might have a different opinion?

    So far it seems you'd be answering "yes" to all of the above.

    And yet, certain creative choices are still made anyway. Just because someone doesn't like the result doesn't make those decisions wrong in a way that must absolutely and universally be corrected! If you don't mix music for commercial release, I'm not sure how you can have such a low opinion of those who do.
    Accuracy or not, nobody...and by that I mean no engineer working at his craft for a living...would work with that kind of separation reduction built in as a mixing requirement. Your figures may not be far off (depending on which source you read), but that's what happens in a room with speakers during playback. If we did that in production, the entire process would get separation reduction twice! That's how we'd get fired.
    Yeah, blah blah... I know all of that, but what you're trying to do is take what human hearing does and apply it as a pre-correction. Sorry, wrong. And it's never going to happen anyway. There's no correction needed or desired if the mix is played on a system similar to the one on which it was created. And that mix was checked, and sometimes adjusted, by listening on headphones. What you get is intentional. If you hate 98% of what you get, go fix it...for yourself.

    If you want to use cross-feed, go for it. Again (and again and again): it's NOT the ultimate in any way, it DOESN'T improve every single recording, not even your invented 98% figure, and all the rest. I found an example that not only doesn't benefit from cross-feed, it's harmed by it. It wasn't hard to find, and it's not uncommon, just not in YOUR world which, BTW, is inhabited by only one person.
    Yes! And to the degree that your pseudo-definitives are imposed on everyone else as fact, they make everyone else wrong!

    Again, and I will just keep hammering this nail until it goes in: I don't object to your opinions, I object to you stating them as absolute fact that should apply to every recording and every headphone listener everywhere. State your preferences, even promote them; that's fine. But state opinion as absolute fact, and be prepared to back up your statements (like the 98%/2% statistic) with actual, tested, verifiable data. Be prepared to back up anything like "crossfeed improves everything" with a study showing preference over a statistically significant population segment.

    Or we can just do this for a third time, if you like. I'm almost to the point of copy/paste already.
     
