To crossfeed or not to crossfeed? That is the question...
Jan 9, 2011 at 10:56 PM Post #31 of 2,146
I tried the Rockbox default crossfeed setting, and it rolled off the highs, and compressed the soundstage so that I didn't like the sound at all. I was listening to Electric Ladyland, which I've listened to countless times over the years, and I couldn't make it through the album with the xfeed enabled. I'll certainly try another implementation if I have the chance, but I don't know when that might be.
 
Jan 10, 2011 at 2:51 PM Post #33 of 2,146
Depends on what headphone you have also.
 
Crossfeed worked well for my K701. It took the edge off and provided a much smoother sound.
 
For my HD800, however, all it did was greatly reduce the soundstage. Some headphones don't need crossfeed.
 
Jan 10, 2011 at 6:43 PM Post #34 of 2,146


Quote:
I tried the Rockbox default crossfeed setting, and it rolled off the highs, and compressed the soundstage so that I didn't like the sound at all. I was listening to Electric Ladyland, which I've listened to countless times over the years, and I couldn't make it through the album with the xfeed enabled. I'll certainly try another implementation if I have the chance, but I don't know when that might be.


When I used RB crossfeed, I set the parameters so that the effect was at the minimum possible. Play with it and it gets much better.
 
Jan 10, 2011 at 8:38 PM Post #35 of 2,146
BBE ViVA on my S9 seems to have some crossfeed (tested with a few hard-panned songs). I think it works quite well; it makes the soundstage more holographic/3D.
 
Jan 10, 2011 at 10:42 PM Post #36 of 2,146
heck yes it does!!
 
Quote:
BBE ViVA on my S9 seems to have some crossfeed (tested with a few hard-panned songs). I think it works quite well; it makes the soundstage more holographic/3D.



 
Jan 13, 2011 at 6:45 PM Post #37 of 2,146
As the term crossfeed is used in audio, it can mean any of a number of fundamentally different things so it is not appropriate to lump them all together.
 
First you have simple "blend", i.e. mixing the left and right channels so that there is less stereo separation. The extreme setting of a blend control is single-channel monaural sound. I had a pre-amp that allowed this, and it was of some use on a few old stereo recordings that had extreme separation, e.g. voice in one channel, instruments in another. Generally, however, I found it rarely useful.
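The "blend" control described above can be sketched in a few lines. This is a generic illustration (the function name, the `amount` parameter, and the plain-list signal representation are mine, not taken from any product):

```python
def blend(left, right, amount):
    """Mix a fraction of the opposite channel into each side.

    amount = 0.0 -> unchanged stereo
    amount = 0.5 -> both outputs identical (full mono)
    """
    out_left = [(1 - amount) * l + amount * r for l, r in zip(left, right)]
    out_right = [(1 - amount) * r + amount * l for l, r in zip(left, right)]
    return out_left, out_right
```

At `amount = 0.5` a hard-panned signal collapses completely to the center, which is exactly the "extreme separation" fix described above.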
 
The other meanings of crossfeed are based on the proprietary techniques of the designers and do other kinds of black magic, including frequency response alteration as well as some blending. I suspect that much of the appeal of these systems has more to do with the black magic than the blending.
 
One common claim which I totally disagree with is that by making phones sound more speaker like they will be more realistic. No - they will be more speaker like and that is far from giving a realistic spatial image.
 
All conventional speakers suffer from inadvertent cross-feed which is simply an artifact of the speaker presentation and which causes each channel to feed both ears with the same signals. In effect your brain is getting hit from 4 signals rather than the 2 in the source. The 2 crossfeed signals are extraneous to the original 2 channel signals and simply degrade the sound.
 
Headphones by comparison give only one channel to each ear and produce a more accurate spatial image. What they don't do is produce a sense of externalization, rather you get the in-the-ear effect that some complain about. But in other respects the headphone image is much clearer. Accordingly efforts to give phones speaker-like cross-feed are simply wrong in principle, just a way of buggering up the sound.
 
However I doubt that many commercial crossfeed systems really do provide speaker-like cross-feed which also requires time delays of the cross-fed signal. Most are I guess simply blend, plus frequency tweaking, plus some other voodoo.
 
From time to time efforts to get rid of speaker cross-feed are tried. Polk made its SDA speakers some years ago. I bought them and still have them because they do a pretty good job of giving a much more precise stereo image. Unfortunately they are no longer made.
 
http://www.polkaudio.com/forums/showthread.php?t=45468
 
Here are some other discussions of this issue.
 
http://news.cnet.com/8301-13645_3-20022412-47.html?tag=mncol;title
 
http://www.princeton.edu/3D3A/
 
http://www.freepatentsonline.com/6009178.html
 
http://kom.aau.dk/group/02gr960/docs/lspkpos02.pdf
 
http://www.isvr.soton.ac.uk/fdag/vap/html/xtalk.html
 
Jan 13, 2011 at 8:02 PM Post #38 of 2,146
 
Quote:
All conventional speakers suffer from inadvertent cross-feed which is simply an artifact of the speaker presentation and which causes each channel to feed both ears with the same signals. In effect your brain is getting hit from 4 signals rather than the 2 in the source. The 2 crossfeed signals are extraneous to the original 2 channel signals and simply degrade the sound.
 


Meier Audio's StageDAC cross-feed switch comes with one option where a partial signal from one channel is subtracted from (instead of added to) the other channel. This option is meant for speakers, and it produces a much better soundstage and spatial image than normal stereo. With the cross-feed intensity switch set to minimum, the channel-separation effect does not affect the headphone soundstage too much. So whether cross-feed is pleasant or not also depends on how much of it has been applied. Too much of a good thing is bad for cross-feed.
 
I suppose this is a manifestation of what you have claimed.
 
Jan 14, 2011 at 1:21 PM Post #39 of 2,146


Quote:
As the term crossfeed is used in audio, it can mean any of a number of fundamentally different things so it is not appropriate to lump them all together.
 
First you have simple "blend", i.e. mixing the left and right channels so that there is less stereo separation. The extreme setting of a blend control is single-channel monaural sound. I had a pre-amp that allowed this, and it was of some use on a few old stereo recordings that had extreme separation, e.g. voice in one channel, instruments in another. Generally, however, I found it rarely useful.
 
The other meanings of crossfeed are based on the proprietary techniques of the designers and do other kinds of black magic, including frequency response alteration as well as some blending. I suspect that much of the appeal of these systems has more to do with the black magic than the blending.
 
One common claim which I totally disagree with is that by making phones sound more speaker like they will be more realistic. No - they will be more speaker like and that is far from giving a realistic spatial image.
 
All conventional speakers suffer from inadvertent cross-feed which is simply an artifact of the speaker presentation and which causes each channel to feed both ears with the same signals. In effect your brain is getting hit from 4 signals rather than the 2 in the source. The 2 crossfeed signals are extraneous to the original 2 channel signals and simply degrade the sound.
 
Headphones by comparison give only one channel to each ear and produce a more accurate spatial image. What they don't do is produce a sense of externalization, rather you get the in-the-ear effect that some complain about. But in other respects the headphone image is much clearer. Accordingly efforts to give phones speaker-like cross-feed are simply wrong in principle, just a way of buggering up the sound.
 
However I doubt that many commercial crossfeed systems really do provide speaker-like cross-feed which also requires time delays of the cross-fed signal. Most are I guess simply blend, plus frequency tweaking, plus some other voodoo.
 
From time to time efforts to get rid of speaker cross-feed are tried. Polk made its SDA speakers some years ago. I bought them and still have them because they do a pretty good job of giving a much more precise stereo image. Unfortunately they are no longer made.
 
http://www.polkaudio.com/forums/showthread.php?t=45468
 
Here are some other discussions of this issue.
 
http://news.cnet.com/8301-13645_3-20022412-47.html?tag=mncol;title
 
http://www.princeton.edu/3D3A/
 
http://www.freepatentsonline.com/6009178.html
 
http://kom.aau.dk/group/02gr960/docs/lspkpos02.pdf
 
http://www.isvr.soton.ac.uk/fdag/vap/html/xtalk.html



I think the problem is that music used to be entirely mixed for listening from speakers, and to this day that remains a primary objective.
 
Another attempt at nulling out the unwanted signal from the opposite speaker (in addition to the SDA speakers) is Bob Carver's Sonic Holography - Professor Choueiri's system looks like it does the exact same thing, except with software instead of circuitry, and probably with a whole lot more customizability.  Anyway, you can get a Sonic Holography processor or preamp pretty cheap these days - about $60 for the C-9 processor or $100-$200 for a C-1 or C-11 preamp - and they work with any speakers.
 
The basic concept behind all of these systems is to introduce a phase-inverted copy of the opposite channel's signal, delayed by about 0.2 ms (the difference in time it takes for the sound to go to your other ear instead).  The goal is to cancel out the unwanted signal from the other speaker.  Of course, because the other ear also hears the cancellation signal, it's not a perfect solution.  I'd be willing to bet that neither is Choueiri's.
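The delay-and-invert scheme described above can be sketched roughly as follows. The ~0.2 ms delay comes from the post; the 0.7 gain, the function name, and the one-shot (non-recursive) structure are illustrative assumptions, not Carver's or Choueiri's actual design:

```python
def cancel_crosstalk(left, right, sample_rate=44100, delay_ms=0.2, gain=0.7):
    """Subtract a delayed, attenuated copy of the opposite channel from each
    side, approximating the phase-inverted cancellation signal described
    above. At 44.1 kHz a 0.2 ms delay is about 9 samples."""
    delay = round(sample_rate * delay_ms / 1000)
    out_l = list(left)
    out_r = list(right)
    for i in range(len(left) - delay):
        # Each output gets an inverted, delayed copy of the *other* channel,
        # intended to cancel that channel's acoustic crosstalk at the ear.
        out_l[i + delay] -= gain * right[i]
        out_r[i + delay] -= gain * left[i]
    return out_l, out_r
```

A real canceller has to apply the correction recursively, because the cancellation signal itself crosses over to the other ear; this one-shot version only addresses first-order crosstalk, which is the imperfection the post alludes to.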
 
Oh, and for the record, this type of processing could be put in the recording, but it isn't because listening to it through headphones is entirely unnatural.
 
The effect is interesting, to say the least.
 
It isn't always an improvement - while the soundstage becomes absolutely huge in quite a few recordings, and it's crazy to hear sounds around and behind you (I tricked a friend into thinking I had a 5 channel SACD setup) - it's not always very natural sounding.  For example, the female backup vocalists on Clapton's Lay Down Sally from Slowhand sound like they're behind you...  It sounds cool, but it's a novelty that wears off quickly.
 
It also has a tendency to diffuse the soundstage quite a bit - vocalists and other precisely located instruments become slightly more diffuse in location.
 
 

But it remains that sound coming from headphones, a source so close to your ear, doesn't fully mimic the way our ears and brains are accustomed to hearing faraway sound sources in 3D space.  Of course, speakers don't either...  They both have their failings in this respect, and since they fail in different ways, it's impossible to account for both in the recording.
 
 
 
So yes, I do use crossfeed much of the time in Winamp.  The "HeadPlug MKII" plugin actually does a really good job, with adjustable amounts of crossfeed, delay, treble control, and more.
 
No, it's not perfect - some recordings don't sound nearly as dynamic with it on.  But for listening to early stereo stuff it's a godsend - I can't listen to Cream without it!  For more modern mixes, it does do a good job of providing a sense of distance and bringing the sounds more in front of you, like at a live show.
 
But like I said, sometimes I turn it off as it doesn't always sound better.
 
Sep 11, 2017 at 10:24 PM Post #40 of 2,146
Recently, I found a digital simulation of the Meier crossfeed for Foobar, and have been enjoying it.

http://www.foobar2000.org/components/view/foo_dsp_meiercf

I'm not much of a crossfeed fan, but this one is very good, subtle but effective when not overdone. I am listening to Jimi Hendrix experience now and it's making a big difference. I don't know if I could enjoy this album on headphones otherwise.
 
Sep 18, 2017 at 8:48 AM Post #41 of 2,146
I found cross-feed 5-6 years ago. It was a kind of awakening, a sudden realization of how unnatural headphone listening without proper cross-feed is. I felt stupid for not realizing it much sooner. I had the education and knowledge (acoustic engineering), but I had never questioned headphone listening on a fundamental level. It just shows how important it is to question things... everything. Better late than never. At that point I wasn't much of a headphone guy, but cross-feed changed it all for me.

So yes to proper cross-feed. At bass frequencies our ears don't expect more than about a 3 dB difference in level (ILD), and the expected time difference (ITD) is less than about 650 microseconds. Going outside these limits causes spatial distortion in our brain. In everyday life, all the bass we hear is almost mono; headphones are the only case where signals with larger stereo separation are fed straight into our ears. We learn to think the unnatural headphone sound is correct, but from a scientific point of view, considering how our hearing works, it is not. Cross-feed is not only about more pleasant and natural sound; it is about understanding what kind of biological creatures we are. It is about realizing that something has been done wrong and can be fixed. Cross-feed is a great topic for showcasing the complexity of issues related to audio and listening.
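A minimal bass-only crossfeed along these lines might low-pass the opposite channel before mixing it in, so that only the frequencies where our hearing expects near-mono sound are blended. The 700 Hz cutoff, the 0.5 cross gain, and the one-pole filter are illustrative guesses of mine, not values from any product:

```python
import math

def bass_crossfeed(left, right, sample_rate=44100, cutoff_hz=700, cross_gain=0.5):
    """Feed a low-passed copy of each channel into the other, so only bass
    (where the ILD our hearing expects is small) gets blended.
    cutoff_hz and cross_gain are illustrative, not a standard."""
    # One-pole low-pass smoothing coefficient for the cross path
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    lp_l = lp_r = 0.0
    out_l, out_r = [], []
    for l, r in zip(left, right):
        lp_l = (1 - a) * l + a * lp_l   # low-passed left channel
        lp_r = (1 - a) * r + a * lp_r   # low-passed right channel
        out_l.append(l + cross_gain * lp_r)
        out_r.append(r + cross_gain * lp_l)
    return out_l, out_r
```

High frequencies pass through mostly untouched, while bass that is hard-panned in the recording ends up shared between the ears, which is the correction the paragraph above argues for.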

After 5-6 years of cross-feed experimenting and thinking, I have learned things which I want to share here. If you disagree with me, please bring it up, because I am willing to learn more and correct mistakes of my thinking.


Spatial distortion

Spatial distortion means spatial information, in the form of channel differences in a recording, that is "outside" what our spatial hearing expects. Our brain doesn't know how to interpret such information, and as a result the sound feels unnatural and tiring and the spatial image is wrinkled/fragmented. How strongly people "suffer" from spatial distortion seems to vary. Personally, I don't want to experience spatial distortion at all. The purpose of cross-feed is to remove spatial distortion: to scale spatial information so that it falls within the expectation space of our spatial hearing, so that our brain can decode it with ease, like any sound in our sonic environment.


Proper cross-feed

Each recording has its own proper cross-feed level. Monophonic recordings have a negative proper cross-feed level, because we would like to create some channel separation to get stereophonic sound. Some (a few percent) of stereophonic recordings don't need cross-feed at all: they are simply recorded to have a "binaural" sound signature, using for example a Jecklin disk microphone setup. The majority of stereophonic recordings need cross-feed, some less, some more. Early stereophonic recordings typically had HUGE channel separation in order to demonstrate stereo sound, and these recordings require HUGE cross-feed. On the other hand, modern popular music is often mixed with headphone listening in mind and requires mild cross-feed, if any. So there is an optimal level of cross-feed for every recording, depending on how it is produced. Not enough cross-feed means spatial distortion; too much cross-feed means narrowed, mono-like sound.


Cross-feed level

In my opinion, the typical cross-feed levels of, for example, headphone amps with cross-feed are quite conservative. Many recordings require stronger cross-feed. In my tests, the proper cross-feed level varies between -1 dB (strong!) and -12 dB (weak!). Strong cross-feed works well with "ping-pong" stereo recordings and with multichannel movie soundtracks, which contain a lot of channel separation after a downmix to stereo due to (out-of-phase) surround-channel information.
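For reference, a cross-feed level quoted in dB maps to a linear amplitude multiplier on the cross-fed signal by the usual conversion:

```python
def db_to_gain(db):
    """Convert a level in dB to a linear amplitude multiplier (20*log10 convention)."""
    return 10 ** (db / 20)
```

So -12 dB means mixing in the opposite channel at roughly a quarter of its amplitude, while -1 dB is close to full strength, which shows how wide the range quoted above really is.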


Does cross-feed mess up the sound? Does it remove details?

People who don't like cross-feed often say it messes up the sound. In my opinion these people have it logically backwards, but it's not their fault. Cross-feed is considered exotic, something extra to tinker with the sound. It is not. Loudspeaker listening causes strong acoustic cross-feed, because both ears hear the sound of both loudspeakers. Why don't people think loudspeakers mess up the sound because of this? Because recordings are produced to take it into account: they have strong channel differences precisely because loudspeaker listening causes acoustic cross-feed. In other words, recordings are "anti-messed", and cross-feed (acoustic with loudspeakers or electric with headphones) messes with this anti-messed-up sound to create non-messed-up sound. It is true that cross-fed signals sound less detailed, but it's not because relevant information is lost. It's because spatial distortion is removed, information that never should have been there in the first place. Getting rid of spatial distortion makes it possible to notice real musical information better. Our hearing is very good at detecting details in correctly cross-fed sound, so proper cross-feed gives the best circumstances for noticing as many details as possible. So, if you want to hear details in your music, you should favor proper cross-feed. If you prefer tiny details cloaked under spatial distortion, then cross-feed is not for you.
 
Sep 18, 2017 at 10:50 AM Post #42 of 2,146
The idea that crossfeed is wrong because it changes the signature is indeed backward thinking. Pick a headphone that sounds more balanced with crossfeed, and now it's removing crossfeed that messes with the signature.
But crossfeed as it's mostly implemented is a very simplified approach to the stereo problem of headphones plus music made for speakers. That doesn't mean doing nothing is a great idea, but it can be tricky to find a crossfeed that really works well for us. Ultimately we'll need to move on to more customized solutions if headphones are to become an actual hi-fi tool someday.
 
Sep 18, 2017 at 11:36 AM Post #43 of 2,146
The thing I learned with multichannel sound is that in the modern world, the old concepts of signal purity are just plain wrong. Purists refuse to use EQ or even tone controls and end up with a response curve created by a dice roll. Others refuse to use a DSP to create a room ambience or re-channel stereo to 5.1 and they end up with 2 dimensional sound. Still others approach system calibration like it's an order from God on high, and they never properly hear the hundreds and thousands of discs mastered slightly off spec.

I believe that tools have their purpose. We should be able to correct mistakes made by engineers, and we should be able to tailor the sound to fit our room and personal tastes. If you don't know how to use a tool properly, it's probably best to leave it alone. But if you're interested in learning about them and using them to help sculpt your sound, that's a good thing.
 
Sep 18, 2017 at 12:00 PM Post #44 of 2,146
Quote:
The thing I learned with multichannel sound is that in the modern world, the old concepts of signal purity are just plain wrong. Purists refuse to use EQ or even tone controls and end up with a response curve created by a dice roll. Others refuse to use a DSP to create a room ambience or re-channel stereo to 5.1 and they end up with 2 dimensional sound. Still others approach system calibration like it's an order from God on high, and they never properly hear the hundreds and thousands of discs mastered slightly off spec.

I believe that tools have their purpose. We should be able to correct mistakes made by engineers, and we should be able to tailor the sound to fit our room and personal tastes. If you don't know how to use a tool properly, it's probably best to leave it alone. But if you're interested in learning about them and using them to help sculpt your sound, that's a good thing.

So much awesomeness in this post.
 
Sep 18, 2017 at 1:18 PM Post #45 of 2,146
My own personal stance: technically, it's changing the sound and is therefore lower fidelity. So I dislike the idea on a purely emotional/neurotic level.

Also, most mixing engineers (at least for pop / non-audiophile recordings) will try to produce a mix that sounds reasonable on headphones as well as speakers, so it's not as if headphones are a forgotten realm. Most mixing engineers are resigned to the fact that their recordings will most frequently be heard on iPod earbuds.

I'm guessing here, but crossfeed should be minimally necessary, or not necessary at all, for recordings where the soundstage is captured with a small stereo or binaural microphone setup. Hopefully a stereo image recorded this way should still sound somewhat natural on headphones, as the recording will still contain the intact stereo image and reverb of the space where it was made.

When you record with more than 2-3 mics and you're placing instruments artificially and building the soundstage from scratch (this is really a lot more common), the listener may or may not want crossfeed. Hard panning is generally only used as a special effect now, but if you're listening to old Beatles records, you will not hear the recording as intended without crossfeed, because at the time listening equipment was a lot more limited. On the other hand, a fully synthesized electronic music track will not have a "natural" sound no matter what you listen with, as there never was one in the first place.

At the end of the day, if it sounds good, it is good, right? It is usually impossible to know if there is a "more correct" way to play any given recording, but it's not hard to decide whether you like a given effect or not.
 