To crossfeed or not to crossfeed? That is the question...

Sep 18, 2017 at 2:38 PM Post #46 of 2,192
My own personal stance: Technically, it's changing the sound and therefore lower fidelity. Therefore, I do not like the idea on a sheerly emotional / neurotic level.

If that's the case, you should sell your headphones and get a pair of studio monitors, because the difference between listening to music on headphones instead of the intended speakers is greater than any change signal processing makes. Start thinking about that and put your emotional, neurotic energy to work on that instead. You'll at least get better sound for your trouble.
 
Sep 18, 2017 at 3:18 PM Post #47 of 2,192
If that's the case, you should sell your headphones and get a pair of studio monitors, because the difference between listening to music on headphones instead of the intended speakers is greater than any kind of signal processing.

You might be joking, but actually for the past 10 years or so most of my personal listening has been done on studio monitors... :grimacing: ...

and, I totally recognize that DSP can be a valuable part of a good listening setup... I don't question it, just personally feel a non-specific discomfort about it.

However, I would argue that "intended speakers" is usually a very poorly defined category. Audio pros know that their audience doesn't typically have nearfield monitors. Engineers *usually* make as few assumptions as possible about the ultimate listening conditions, meaning they want it to sound good on headphones and speakers alike. In fact, it is seen as a major failure if your mix only sounds good on studio monitors. However, now we again run into the fact that "sounds good" is entirely subjective.
 
Sep 18, 2017 at 3:27 PM Post #48 of 2,192
I have a hard enough time figuring out my own intents, let alone the intents of a mastering engineer who lived 50 years ago, when decent headphones were a gleam in an engineer's eye and the reference monitors they were using were on a different technological level as well. Not to mention that intent and sacrifice are hard to discern. Was the treble or bass boosted to make up for lo-fi equipment? Did they drag the band in to do stereo mixes even though they loathed the idea of stereo? Intent is a psychological matter, and it can be a neurotic enterprise trying to discern it. You could end up on a slippery slope, only buying vintage vinyl and listening through vintage gear. And boy, would that suck. Intent and faithful reproduction are not the same thing.

Nevertheless, those who are all about intent, faith, and neutrality have to admit that many historic albums were never mastered with headphones in mind. For them, crossfeed is a more authentic way to listen over headphones.
 
Sep 18, 2017 at 3:46 PM Post #49 of 2,192
The thing I learned with multichannel sound is that in the modern world, the old concepts of signal purity are just plain wrong. Purists refuse to use EQ or even tone controls and end up with a response curve created by a dice roll. Others refuse to use a DSP to create a room ambience or re-channel stereo to 5.1 and they end up with 2 dimensional sound. Still others approach system calibration like it's an order from God on high, and they never properly hear the hundreds and thousands of discs mastered slightly off spec.

I believe that tools have their purpose. We should be able to correct mistakes made by engineers, and we should be able to tailor the sound to fit our room and personal tastes. If you don't know how to use a tool properly, it's probably best to leave it alone. But if you're interested in learning about these tools and using them to help sculpt your sound, that's a good thing.

+1 I'm big into DSP right now. I loaded Foobar up with DSPs and am going to town on them. I've never felt so empowered to customize my listening experience. Some people do tube rolling, I do plugin rolling. And things have never sounded so good. That Jimi Hendrix album I mentioned above: I was using dynamic EQ for treble spikes, parametric EQ for tone adjustment, slickEQ for saturation/tube sound, and Meier crossfeed to stop the ping pong. lol. The neutrality folks would probably freak out hearing that, but man did it sound gooood. If I was using speakers, I'd go for a stereo->5.1 DSP too. I'm not exactly sure what Jimi Hendrix intended, but I assume he wanted me to enjoy his music, and that I did.
 
Sep 18, 2017 at 3:54 PM Post #50 of 2,192
Audio pros know that their audience doesn't typically have nearfield monitors. Engineers *usually* make as few assumptions as possible about the ultimate listening conditions, meaning they want it to sound good on headphones and speakers alike.

The only headphones I've seen in the studios I've worked in were the beaters in the booth used for playback. I never saw an engineer put headphones on. We would usually mix on the big full range monitors, then do a check on small speakers to make sure it worked well with cheaper systems. We never checked with headphones.
 
Sep 18, 2017 at 4:05 PM Post #51 of 2,192
My own personal stance: Technically, it's changing the sound and therefore lower fidelity. Therefore, I do not like the idea on a sheerly emotional / neurotic level.

Cross-feed definitely changes the sound. It would be pointless if it didn't do anything, so of course it changes the sound. As for the lower fidelity, are you really thinking that, or is it an assumption that all changes to sound are for the worse? Is noise reduction always bad? Is filtering a bass bump away bad? Is removing spatial distortion bad? Did the producers of a recording intend spatial distortion to be part of the listening experience? If so, they fail miserably whenever the recording is listened to with loudspeakers, due to acoustic cross-feed, which by the way changes the sound significantly more than an average headphone cross-feeder does. Sometimes changes to sound are for the better, meaning higher fidelity, and cross-feed is IMO a good example of that.

Also, most mixing engineers (at least for pop / non-audiophile recordings) will try to produce a mix that sounds reasonable on headphones as well as speakers, so it's not as if headphones are a forgotten realm. Most mixing engineers are resigned to the fact that their recordings will most frequently be heard on iPod earbuds.

You are correct, modern pop is often produced to contain only mild levels of spatial distortion (it depends on the producers), but is that all you want to listen to? Weak cross-feed might improve even these recordings by taming occasional bursts of spatial distortion. Spatial-distortion-free recordings do exist, and I do listen to them cross-feed off. Knowing how much cross-feed is needed is important for best results (highest fidelity). Sometimes you don't need it at all, but most of the time spatial distortion does exist, even when it's modern pop intended for headphones.

I'm guessing here, but crossfeed should be minimally necessary, or not necessary at all, for recordings where the soundstage is recorded using a small stereo or binaural setup. Hopefully, a stereo image recorded this way should still sound somewhat natural in headphones, as the recording will still contain the intact stereo image / reverb of the space where the recording was made.

Correct. There are microphone setups that cause very little spatial distortion (such as OSS, ORTF and XY). Binaural recordings should be listened to cross-feed off, but they are pretty rare. Don't cross-feed when there is no spatial distortion to remove! Cross-feed as little as possible to get rid of spatial distortion. It's like adjusting the colors on a TV set: you don't want the colors pale or over-saturated, you want natural colors. Proper cross-feed means the sound contains just the right amount of spatial information (channel difference).

However, simple stereo setups (such as AB and Blumlein) can produce significant spatial distortion and strong cross-feed is needed to fix things.

When you record with more than 2-3 mics and you're placing instruments artificially and building the soundstage from scratch (this is really a lot more common), the listener may or may not want crossfeed. Hard panning is generally only used as a special effect now, but if you're listening to old Beatles records, you will not hear the recording as intended without crossfeed, because at the time listening equipment was a lot more limited. On the other hand, a fully synthesized electronic music track will not have a "natural" sound no matter what you listen with, as there never was one in the first place.

I pretty much agree with this. However, electronic music can have a very natural-sounding spatial image thanks to advanced plugins that simulate acoustics. Spatial distortion is spatial distortion no matter the nature of the music, so fully synthesized music needs cross-feed just as much as totally acoustic music recorded in a real room. If there is spatial distortion, you need to fix it with cross-feed, be it jazz, EDM, classical or rock.

At the end of the day, if it sounds good, it is good, right? It is usually impossible to know if there is a "more correct" way to play any given recording, but it's not hard to decide whether you like a given effect or not.

My take is that spatial-distortion-free sound is "correct", because it makes the most sense considering how human hearing works, and to me it sounds best (natural, realistic, fatigue-free, precise and detailed).
 
Sep 18, 2017 at 4:45 PM Post #52 of 2,192
As for the lower fidelity, are you really thinking that, or is it an assumption that all changes to sound are for the worse? Is noise reduction always bad?

I actually don't take a good/bad stance on this, for me the only ultimate truth in audio is "if it sounds good, it is good". Now, the definition of "good" is an exercise left to the reader, but... when I say 'lower fidelity' I only mean this in the most technical sense, as in, the signal has been altered somehow and is a less-exact copy of the original.

Anyway, I think that crossfeed is probably a very reasonable thing to do, to the extent that instruments in a mix are over-panned to create a wide or more spatialized image on loudspeakers, and therefore sound odd on headphones - which probably applies to a lot of recordings. Probably my personal discomfort comes from the fact that it is difficult to characterize a perfect loudspeaker listening setup, therefore it is equally difficult to characterize a perfect crossfeed implementation. For example, should you just do free-air filtering of high frequencies and 3 feet worth of delay? Or do you also add early reflections from a virtual room? If so, do you also go as far as adding actual reverb? Even 50ms of reverb can really change how things sound... is it for the better? It's a can of worms I'd prefer not to have to think about :)
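
For a rough sense of what the simplest version of that could look like, here is a minimal digital crossfeed sketch. The parameters (about -6 dB crossfeed level, a ~700 Hz one-pole low-pass standing in for head shadowing, ~0.3 ms of interaural delay) are illustrative assumptions, not the exact values of the Linkwitz, Cmoy or Meier designs discussed in this thread:

```python
# Minimal digital crossfeed sketch (illustrative parameters, not a specific
# published design): each channel is attenuated, low-pass filtered (head
# shadowing), delayed by a fraction of a millisecond (interaural time
# difference), and mixed into the opposite channel.
import numpy as np

def crossfeed(left, right, fs, level_db=-6.0, cutoff_hz=700.0, delay_ms=0.3):
    """Return (left_out, right_out) with simple crossfeed applied."""
    gain = 10.0 ** (level_db / 20.0)
    delay = int(round(delay_ms * 1e-3 * fs))          # ITD in whole samples

    # One-pole low-pass to mimic high-frequency head shadowing.
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    def lowpass(x):
        y = np.empty_like(x)
        state = 0.0
        for i, s in enumerate(x):
            state = (1.0 - a) * s + a * state
            y[i] = state
        return y

    def shadowed(x):
        x = gain * lowpass(x)
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    left_out = left + shadowed(right)
    right_out = right + shadowed(left)
    # Scale down a little so the summed signal does not clip.
    norm = 1.0 / (1.0 + gain)
    return left_out * norm, right_out * norm
```

A real implementation would normally use a proper shelving filter and fractional-sample delay, but the signal flow is the same: attenuate, low-pass, delay, and mix into the opposite channel.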

Again, this is all just my personal feeling and I don't mean to imply anything negative about using crossfeed.

The only headphones I've seen in the studios I've worked in were the beaters in the booth used for playback. I never saw an engineer put headphones on. We would usually mix on the big full range monitors, then do a check on small speakers to make sure it worked well with cheaper systems. We never checked with headphones.

I haven't worked in proper studios, but I have mixed a couple of albums... even in my limited experience, it's very true that 95% of mixing takes place on speakers, probably more. I made it a point to check on headphones (notably the crap Apple earbuds), but I will also admit that tweaking spatialization on headphones was not a priority at all. It was a cursory check just to make sure nothing sounded totally bizarre or got lost.
 
Sep 18, 2017 at 8:12 PM Post #53 of 2,192
I actually don't take a good/bad stance on this, for me the only ultimate truth in audio is "if it sounds good, it is good". Now, the definition of "good" is an exercise left to the reader, but... when I say 'lower fidelity' I only mean this in the most technical sense, as in, the signal has been altered somehow and is a less-exact copy of the original.

What is the "original" signal? Isn't the signal from the microphone the "original" one? That signal is altered in many ways in music production. Finally you buy a CD containing that guitar on a track. You can name the CD the original signal, but if you listen to it with loudspeakers, that signal is altered pretty heavily by the acoustics of your room including acoustic cross-feed. When you listen to the same CD with headphone and use cross-feed, the signal is altered less.

Anyway, I think that crossfeed is probably a very reasonable thing to do, to the extent that instruments in a mix are over-panned to create a wide or more spatialized image on loudspeakers, and therefore sound odd on headphones - which probably applies to a lot of recordings. Probably my personal discomfort comes from the fact that it is difficult to characterize a perfect loudspeaker listening setup, therefore it is equally difficult to characterize a perfect crossfeed implementation. For example, should you just do free-air filtering of high frequencies and 3 feet worth of delay? Or do you also add early reflections from a virtual room? If so, do you also go as far as adding actual reverb? Even 50ms of reverb can really change how things sound... is it for the better? It's a can of worms I'd prefer not to have to think about :)

Again, this is all just my personal feeling and I don't mean to imply anything negative about using crossfeed.

What I do is simple, straightforward cross-feed using passive circuits based on Linkwitz-Cmoy designs. That removes spatial distortion and ensures the sound is "natural". Extra tinkering might improve the sound even more, or it might mess things up. I don't feel the need to do extra things, because the sound is natural, detailed and pleasing. It just works for me.
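
For a sense of the arithmetic behind a passive stage like that, here is a simplified model: a series resistor feeds the opposite channel into a shunt resistor with a capacitor across it, which sets both the crossfeed level and the frequency above which it rolls off. The component values are purely illustrative assumptions, and the actual Linkwitz and Cmoy networks are more elaborate (they also shape the direct channel), so treat this strictly as a sketch of the idea:

```python
# Simplified passive crossfeed stage (illustrative values only, NOT the
# actual Linkwitz or Cmoy schematic): a series resistor feeds the opposite
# channel into a shunt R||C, giving an attenuated, low-passed crossfeed signal.
import math

R_SERIES = 1500.0   # ohms, series resistor from the opposite channel (assumed)
R_SHUNT = 680.0     # ohms, shunt resistor at the receiving side (assumed)
C_SHUNT = 0.47e-6   # farads, capacitor across the shunt resistor (assumed)

# Low-frequency crossfeed level set by the resistive divider.
level_db = 20.0 * math.log10(R_SHUNT / (R_SERIES + R_SHUNT))

# -3 dB corner above which the crossfed signal rolls off; the capacitor sees
# the two resistors in parallel (Thevenin source impedance).
r_parallel = R_SERIES * R_SHUNT / (R_SERIES + R_SHUNT)
corner_hz = 1.0 / (2.0 * math.pi * r_parallel * C_SHUNT)

print(f"crossfeed level ~ {level_db:.1f} dB, roll-off corner ~ {corner_hz:.0f} Hz")
# With these assumed values: roughly -10 dB crossfeed rolling off above ~700 Hz.
```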
 
Sep 18, 2017 at 8:26 PM Post #54 of 2,192
You can name the CD the original signal, but if you listen to it with loudspeakers, that signal is altered pretty heavily by the acoustics of your room including acoustic cross-feed.
Yes, I basically consider the recorded media (say a CD) to be the "original signal", which I realize is significantly distorted by basically all transducers and real-world listening scenarios. To be honest, I have a very loudspeaker-centric mentality. My view has been that if you can eliminate distortion everywhere from your source to your loudspeaker, and you have good acoustic treatment in your space, then you have something approximating an ideal listening setup. If you take that view further, headphones are REALLY ideal because they present ONLY direct signal to the ear with no acoustic crossfeed.

However, I had never considered headphone listening itself as inherently creating a form of distortion. Viewed that way, you almost need crossfeed. Either that, or you accept an unnatural presentation of the audio to each ear (i.e. each ear treated separately) as valid... problematic.

Good discussion!
 
Sep 18, 2017 at 8:47 PM Post #55 of 2,192
There really isn't any reason to do cross feed in a speaker setup. But you might want to use EQ, or various DSPs to improve the natural room ambience of either the recording or the listening room, or to re-channel stereo to multichannel sound. It's rare, but I occasionally run across recordings that require a little compression because the dynamics are too wide to listen to comfortably, or a little peak expansion if they are too compressed.

The "purity" theory only gets you so far. If you want music to really sound good, you might need to alter it to suit your room and equipment and your ears.
 
Sep 19, 2017 at 3:21 AM Post #56 of 2,192
The only headphones I've seen in the studios I've worked in were the beaters in the booth used for playback. I never saw an engineer put headphones on. We would usually mix on the big full range monitors, then do a check on small speakers to make sure it worked well with cheaper systems. We never checked with headphones.
+1. Never used headphones for a studio mix. No mix I'm aware of other than a binaural recording ever considered headphones.
 
Sep 19, 2017 at 7:28 AM Post #57 of 2,192
Yes, I basically consider the recorded media (say a CD) to be the "original signal", which I realize is significantly distorted by basically all transducers and real-world listening scenarios. To be honest, I have a very loudspeaker-centric mentality. My view has been that if you can eliminate distortion everywhere from your source to your loudspeaker, and you have good acoustic treatment in your space, then you have something approximating an ideal listening setup. If you take that view further, headphones are REALLY ideal because they present ONLY direct signal to the ear with no acoustic crossfeed.

The problem is that the "original signal" such as a CD isn't problem-free. It is flawed, and I don't mean because the music on it sucks. The problem is that an arbitrary 2-channel signal doesn't match human hearing. Audio formats allow "original signals" to exist in a larger signal space than the one human hearing expects. The correlation between the left and right channels can be anything between -1 and 1. In other words, you can have spatial information that doesn't exist for our hearing, because sounds heard in real environments can't take just any correlation value between -1 and 1. For low frequencies the correlation between the left and right ear is always very high, near 1 if not 1. It can't be negative, not even zero. I can write the date January 32, but such a day does not exist. Similarly, you can have crazy out-of-phase bass on a CD, signals that as such don't make sense to our hearing.
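
As a rough illustration of the correlation point, here is a small sketch (my own, not from the thread) that band-limits a stereo signal to the bass region and measures the left/right correlation; the 200 Hz cutoff and the synthetic test signals are arbitrary choices:

```python
# Measure left/right correlation in the bass region of a stereo signal.
# Signals reaching two real ears correlate strongly at low frequencies,
# while a studio mix can land anywhere in [-1, 1].
import numpy as np
from scipy.signal import butter, lfilter

def bass_correlation(left, right, fs, cutoff_hz=200.0):
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    lo_l, lo_r = lfilter(b, a, left), lfilter(b, a, right)
    return float(np.corrcoef(lo_l, lo_r)[0, 1])

# Example with synthetic signals: an out-of-phase 60 Hz bass line measures
# close to -1, a value the ears never receive from a real sound source.
fs = 44100
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 60.0 * t)
print(bass_correlation(bass, -bass, fs))   # ~ -1.0
print(bass_correlation(bass, bass, fs))    # ~ +1.0
```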

Luckily this problem of original signals is pretty easily fixed. Loudspeakers fix it using acoustic cross-feed. If you use headphones, you don't have acoustic cross-feed, so you need to do electric cross-feed, or if the CD happens to be produced for headphones (binaural/monophonic etc. recording), you don't need to do anything, because there is nothing to fix.

However, I had never considered headphone listening itself as inherently creating a form of distortion. Viewed that way, you almost need crossfeed. Either that, or you accept an unnatural presentation of the audio to each ear (i.e. each ear treated separately) as valid... problematic.

Good discussion!

It happened to me too. Before 2012 or so I didn't realize there was a fundamental problem in headphone listening. I can still remember the moment when I suddenly realized the problem, because it was like a child finding out Santa Claus doesn't exist. You have it correct, my friend: headphone listening requires proper cross-feed unless you accept spatial distortion. For me this isn't a huge problem, because I can design and construct cheap cross-feeders for myself. When I rip my CDs for my portable player, I pre-cross-feed the music in Audacity using a simple Nyquist plugin I wrote, before exporting to MP3 files for the portable player. Cross-feed has opened a completely new world for me, exposing how great headphone listening can be when done right.
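
For readers curious what such an offline pre-crossfeed step might look like outside Audacity, here is a hypothetical sketch (it is not the poster's Nyquist plugin): it reads a 16-bit stereo WAV, applies any crossfeed function you hand it, such as the sketch earlier in the thread, and writes a processed WAV ready to be encoded to MP3. The file paths and the 16-bit assumption are mine:

```python
# Hypothetical offline "pre-crossfeed" step, in the spirit of the Audacity
# workflow described above. `crossfeed` can be any function that takes
# (left, right, fs) and returns processed (left, right) arrays.
import numpy as np
from scipy.io import wavfile

def precrossfeed_wav(in_path, out_path, crossfeed):
    fs, data = wavfile.read(in_path)                  # data: (samples, 2) int16
    x = data.astype(np.float64) / 32768.0             # assume 16-bit PCM input
    left, right = crossfeed(x[:, 0], x[:, 1], fs)
    out = np.stack([left, right], axis=1)
    out = np.clip(out, -1.0, 1.0)                     # guard against clipping
    wavfile.write(out_path, fs, (out * 32767.0).astype(np.int16))

# Usage (paths are placeholders):
# precrossfeed_wav("album/track01.wav", "crossfed/track01.wav", crossfeed)
```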

I'm glad you find this interesting. :)
 
Sep 19, 2017 at 11:53 AM Post #58 of 2,192
The problem with the way a lot of people think about speakers is that they think of them like headphones: independent, isolated producers of sound for ears to hear. Speakers do a lot more than that. In fact, the sound of the room is just as important as the sound of the speakers. The goal of room treatment isn't to eliminate the sound of the room. That would be like building an anechoic chamber. The goal is to eliminate *unwanted* reflections... the kind that interfere with the sound. There are plenty of desirable things that rooms do that you don't want to eliminate. The room is what allows the sound of the music to bloom and fill the space. Headphones omit that part of the sound. Secondly, speakers don't just produce sound for ears to hear. There's a kinesthetic effect to the bass that you feel in your body. Without that, bass doesn't have the same impact, and headphones just can't do that. Thirdly, speakers exist in space. They provide an anchored soundstage and directional location to the sound. Headphones are one-dimensional: a straight line through the middle of your head. Albums are generally mixed to create a directional speaker soundstage that exists in front of you in space. Headphones can't reproduce that either.

Headphones are great for hearing tiny details. If you shove your ear right up against your speaker, you can get that too. Headphones are also great at allowing you to listen to music without disturbing the people around you. Speakers aren't good for that at all. But even with cross feed and the best headphones in the world, headphones don't sound as natural as speakers. Crossfeed only makes the sound less directional along that one dimensional line down the middle of the head. It doesn't do anything to add the bloom of the room, produce kinesthetic thump or create a dimensional soundstage in space.
 
Sep 19, 2017 at 2:53 PM Post #59 of 2,192
Similarly, you can have crazy out-of-phase bass on a CD, signals that as such don't make sense to our hearing.

True, but this is not necessarily a problem. Unnatural or "special effect" type acoustic reproductions can still be artistically useful. Ask Ryoji Ikeda, I am sure he's not concerned with what the human auditory system is equipped to process naturally. ;) I get what you're saying though, 2-channel reproduction has some real, inherent divergences from real life.

Luckily this problem of original signals is pretty easily fixed. Loudspeakers fix it using acoustic cross-feed. If you use headphones, you don't have acoustic cross-feed, so you need to do electric cross-feed, or if the CD happens to be produced for headphones (binaural/monophonic etc. recording), you don't need to do anything, because there is nothing to fix.


In fact, the sound of the room is just as important as the sound of the speakers. The goal of room treatment isn't to eliminate the sound of the room. That would be like building an anechoic chamber. The goal is to eliminate *unwanted* reflections... the kind that interfere with the sound. There are plenty of desirable things that rooms do that you don't want to eliminate.
I mostly agree. It's very true that room influence is huge: in many rooms, more sound will reach the ear via reflections than directly from the loudspeaker. I would argue that, at least in an ideal case, one should strive to reduce reflections to roughly the extent that a really well-treated mixing studio does. Having zero reflected sound is bad, but the amount of reverberation you get in a typical untreated, acoustically unfavorable room is arguably just as bad.

Headphones are great for hearing tiny details. If you shove your ear right up against your speaker, you can get that too. Headphones are also great at allowing you to listen to music without disturbing the people around you. Speakers aren't good for that at all. But even with cross feed and the best headphones in the world, headphones don't sound as natural as speakers. Crossfeed only makes the sound less directional along that one dimensional line down the middle of the head. It doesn't do anything to add the bloom of the room, produce kinesthetic thump or create a dimensional soundstage in space.
Can't really argue with any of that.
 
Sep 19, 2017 at 4:50 PM Post #60 of 2,192
True, but this is not necessarily a problem. Unnatural or "special effect" type acoustic reproductions can still be artistically useful. Ask Ryoji Ikeda, I am sure he's not concerned with what the human auditory system is equipped to process naturally. :wink: I get what you're saying though, 2-channel reproduction has some real, inherent divergences from real life.

I don't know Ryoji Ikeda's art, but if it is based on spatial distortion then one can listen to his music cross-feed off, just like binaural stuff etc. Most of the music in the world as far as I know is not based on spatial distortion. Mozart hardly had headphones in mind while writing his Requiem…
 
