To crossfeed or not to crossfeed? That is the question...

Sep 19, 2017 at 5:00 PM Post #61 of 2,192
I don't know Ryoji Ikeda's art, but if it is based on spatial distortion then one can listen to his music with cross-feed off, just like binaural stuff etc. Most of the music in the world, as far as I know, is not based on spatial distortion. Mozart hardly had headphones in mind while writing his Requiem…

It's basically a bunch of hard-panned beeps and tones, I guess the most normal name for it is 'noise music': ... he uses little to no spatialization in his mixes, either, making any notion of naturalistic listening sort of beside the point... although I realize it's an extreme case :)
 
Sep 19, 2017 at 5:33 PM Post #62 of 2,192
I would argue that one should strive to eliminate reflections to the extent that a really well-treated mixing studio does, at least in an ideal case. Having zero reflected sound is bad, but the amount of reverberation you get in your typical untreated, acoustically unfavorable room is arguably just as bad.

The purpose of a recording studio is completely different from the purpose of a listening room in a home. A studio depends on calibration and precise control of every element to be able to consistently capture a performance and build a mix from the captured sound. They want to isolate the sound so they can balance it and finesse the mix and not get extraneous ambience added to the recording that isn't intended. So they record in a soundproofed booth and mix in a carefully calibrated and treated mixing stage.

A listening room in a home doesn't need that kind of isolation and precision because the purpose is different: to present recordings of all types in a pleasing manner. The most natural sound of all is the sound of the real room that you inhabit and are familiar with. If that sound environment is complementary to music, the added ambience adds a level of natural presence that isn't in the recording itself. It allows the sound of the recording to bloom and inhabit space. If you want to isolate your listening experience to just the recording and nothing but the recording, headphones are fine for that. But that isn't the intent that the engineers are trying to create. They want the music to interact with your room and fill it with sound.

That said, there are good rooms and bad rooms. Good ones add euphonic ambience and bad ones have acoustics that muddle the sound with primary reflections or cancellation. I've been in a lot of fantastic recording studios, and although their equipment was top notch and the acoustics of the mixing stage were just about perfect for recording, it isn't necessarily the ideal for what a home system should sound like. Try as engineers might to be consistent, there is still a huge variation in the sound of different types of recordings. The listener has to abandon calibration at some point and create a balance for his particular room and circumstances.

I know it's common among audiophiles to cite the old adage "I want the sound they heard in the studio when they created the recording." But that is easier to say than to achieve. And I'm not convinced that even if you achieve it you will be getting the absolute best sound that way. We could record symphony orchestras section by section in soundproofed booths... violins in one booth, brass in another, percussion in another... but the result would be a total gnocchi, a mulligan stew of sound. Take that same orchestra and put it in the Berlin Philharmonie and add a few microphones to capture the sound of the hall and it sounds great. Multichannel sound allows room ambiences to be altered. A living room can sound exactly like a philharmonic hall or a gothic cathedral. That is the "fourth dimension" of sound that goes beyond a flat stereo soundstage and begins to create a dimensional sound field. Pursuing that is a lot more effective than pursuing the ideal of creating a duplicate of a recording studio in your home.
 
Sep 20, 2017 at 8:26 AM Post #63 of 2,192
It's basically a bunch of hard-panned beeps and tones, I guess the most normal name for it is 'noise music': ... he uses little to no spatialization in his mixes, either, making any notion of naturalistic listening sort of beside the point... although I realize it's an extreme case :)


Thanks! I think this kind of music benefits from cross-feed the most. This stuff sounds pretty awful without cross-feed. With cross-feed it becomes pleasant, just quite boring imo. I listen to Autechre when I want to listen to something of this sort.
 
Sep 20, 2017 at 10:25 AM Post #64 of 2,192
The purpose of a recording studio is completely different from the purpose of a listening room in a home. A studio depends on calibration and precise control of every element to be able to consistently capture a performance and build a mix from the captured sound. They want to isolate the sound so they can balance it and finesse the mix and not get extraneous ambience added to the recording that isn't intended. So they record in a soundproofed booth and mix in a carefully calibrated and treated mixing stage.

I mean, yes, nobody should listen in a recording booth, much too dead, it sounds bad. But, a "just live enough" room with really flat loudspeakers is my idea of an 'ideal' listening setup. Something like what was heard when the recording was mixed (as opposed to when parts were recorded). Now, I concede that's not the most enjoyable setup possible for everyone, it's more like it's comforting for me to know the music has been minimally interfered with or altered.

Thanks! I think this kind of music benefits from cross-feed the most. This stuff sounds pretty awful without cross-feed. With cross-feed it becomes pleasant, just quite boring imo. I listen to Autechre when I want to listen to something of this sort.

I would take the other side of that. For example, one of his albums is actually called "headphonics". It features a good deal of hard-panned, simple tones. It's easy to imagine that he intends the listener to sit through a lot of totally unnatural and challenging tones, rather than turn it into a more natural listening experience - and the title definitely suggests headphones as the preferred listening equipment!
 
Sep 20, 2017 at 10:43 AM Post #65 of 2,192
I would take the other side of that. For example, one of his albums is actually called "headphonics". It features a good deal of hard-panned, simple tones. It's easy to imagine that he intends the listener to sit through a lot of totally unnatural and challenging tones, rather than turn it into a more natural listening experience - and the title definitely suggests headphones as the preferred listening equipment!

Well, these are opinions and your opinion is just as good as anyone else's. I don't know his art apart from a couple of tracks I listened to thanks to your youtube link. Maybe he is a genius or an expert on spatial hearing, but generally speaking people who limit their "spatial expressions" to hard amplitude-panning tricks are not that wise/advanced on the issue. Real panning is about a careful combination of amplitude, phase and spectral tweaks.

Personally, I choose not to sit through a lot of totally unnatural (but not so challenging from what I heard - short bursts of noise or sinusoids are hardly "challenging" in the 21st century) tones. There is too much great music in the world competing for my listening time to even consider wasting it on this. Sorry.
 
Sep 20, 2017 at 12:55 PM Post #66 of 2,192
Now, I concede that's not the most enjoyable setup possible for everyone, it's more like it's comforting for me to know the music has been minimally interfered with or altered.

Well I can't speak to your comfort level. All I can speak about is sound. I don't lie in bed at night worrying if my room is altering my sound. I just listen and it sounds good. If there's a problem with the sound, I try to fix it. When I do that, I'm trying to make *my* room sound as good as it can. I'm not trying to guess what the mixer's room was like and shoehorn my room into conforming to that guess. I don't think I'm uncommon. I think a lot of audiophiles talk about not interfering with sound, but they don't have the slightest idea how to achieve that. Neither do I, to be honest. Luckily, I don't even try. I just focus on getting great sound. I'm often listening to music that is over 50 years old. I don't want to hear it unaltered the way the original engineers heard it. I live in the 21st century with a lot of fabulous technology. I expect to hear it better than they did.
 
Last edited:
Sep 20, 2017 at 1:59 PM Post #67 of 2,192
Audio pros know that their audience doesn't typically have nearfield monitors. Engineers *usually* make as few assumptions as possible about the ultimate listening conditions, meaning they want it to sound good on headphones and speakers alike. In fact, it is seen as a major failure if your mix only sounds good on studio monitors.

I haven't worked in proper studios, but have mixed a couple albums... even in my limited experience, it's very true that 95% of mixing takes place on speakers, probably more. I made it a point to check on headphones, (notably the crap Apple earbuds) but I will also admit that tweaking spatialization on headphones was not a priority at all. It was a cursory check just to make sure nothing sounded totally bizarre or got lost.
It seems to me the two above quotes are somewhat at odds with each other. Which is it: do we mix for speakers and headphones alike, or do we mix primarily for speakers and then do a cursory check on headphones? (It's the latter, BTW).

I'm kinda wishing there were fewer statements and assumptions about what audio pros do made by people who don't seem to actually know.
 
Sep 20, 2017 at 2:23 PM Post #68 of 2,192
The problem is that the "original signal" such as a CD isn't problem-free. It is flawed, and I don't mean because the music on it sucks. The problem is that an arbitrary 2-channel signal doesn't match human hearing. Audio formats allow "original signals" to exist in a larger signal space than the one human hearing expects them to occupy. The correlation between the left and right channels can be anything between -1 and 1. In other words, you can have spatial information that doesn't exist for our hearing, because sounds heard in real environments can't take just any correlation between -1 and 1. For low frequencies the correlation between the left and right ear is always very high, near 1 if not 1. It can't be negative, not even zero. I can write the date January 32, but no such day exists. Similarly you can have crazy out-of-phase bass on a CD, signals that as such don't make sense to our hearing.

Luckily this problem of original signals is pretty easily fixed. Loudspeakers fix it using acoustic cross-feed. If you use headphones, you don't have acoustic cross-feed, so you need to do electric cross-feed, or if the CD happens to be produced for headphones (binaural/monophonic etc. recording), you don't need to do anything, because there is nothing to fix.
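(As an aside, the correlation claim above is easy to check numerically. Here is a minimal sketch, assuming Python with numpy/scipy and a stereo WAV file on disk; the 200 Hz cutoff and the file name are illustrative only.)

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# Load a stereo track (hypothetical file name) and split the channels.
rate, data = wavfile.read("some_track.wav")
left = data[:, 0].astype(np.float64)
right = data[:, 1].astype(np.float64)

# Keep only the band below ~200 Hz, where real acoustic environments
# deliver highly correlated signals to the two ears.
sos = butter(4, 200.0, btype="low", fs=rate, output="sos")
low_l = sosfilt(sos, left)
low_r = sosfilt(sos, right)

corr = np.corrcoef(low_l, low_r)[0, 1]
print(f"low-frequency inter-channel correlation: {corr:+.3f}")
# Values well below +1 (or negative) are the "out-of-phase bass" case the
# post describes; headphones reproduce it with no acoustic mixing at all.
```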
The idea that you can fix acoustic crosstalk by using crossfeed in loudspeakers, by design or pre-processing, has been a long-time study of mine; I've designed and tested several systems. I have to disagree that the problem is easily fixed, though.

Basically, what you can do is modify to some extent how sound from those speakers is localized by partially cancelling the direct signal from speakers that form localization cues we would normally use to localize the speakers themselves. However, you're not fixing acoustic crosstalk, it's still all over the place. But the cancellation only works in one very specific listening position, and is extremely fragile, being affected by room acoustics with reflective surfaces (pretty much a fact of life), and head position, especially when cancellation is the result of a signal processor and not speaker design. All the loudspeaker crossfeed in the world can't compensate for unknown reflections. But worse, what you get is something entirely new, not heard in the mixing environment, or the original acoustic environment either. It no more represents "reality" than any other perspective, though may to some be more pleasing.

As for headphone crossfeed, the Linkwitz circuit is just a somewhat frequency-selective reduction in separation, but that doesn't address the rather prominent headphone problem of mid-head localization, which is much harder to deal with. But again, you're creating something new that was neither heard nor planned for before. It might be pleasant, it might not, or anywhere between. I personally would be frustrated with a crossfeed on/off switch, I'd need it to be variable. In fact, one system I designed for this monitored the channel separation of the actual signals and changed crossfeed dynamically. Worked ok, a definite improvement over a switch.
It happened to me too. Before 2012 or so I didn't realize there's a fundamental problem in headphone listening. I can still remember the moment when I suddenly realized the problem, because it was like a child finding out Santa Claus doesn't exist. You have it correct my friend, headphone listening requires proper cross-feed unless you accept spatial distortion.
I'm not sure I'd agree, because headphones always have a unique spatial presentation, which is part of the experience. In fact, it used to be promoted! Prog rock radio in the 1970s used to have "headphone hour" programming where widely separated mixes, some with whip-panned sounds and lots of whacky effects, were desirable. We grew up with headphones being different, and for that reason the modification of the spatial perspective is more accepted. I don't like applying the term "spatial distortion" to the headphone perspective, because distortion implies a reference undistorted original, but in recorded music that never exists. Even the original mix in the original studio is synthetic. Yes, headphones present differently, but neither perspective is actually undistorted.
For me this isn't a huge problem, because I can design and construct cheap cross-feeders for myself. When I rip my CDs for my portable player, I pre-cross-feed the music in Audacity using a simple Nyquist plugin I wrote before exporting to mp3 files for the portable player. Cross-feed has opened a completely new world for me, exposing how great headphone listening can be when done right.

I'm glad you find this interesting. :)
I was disappointed when I looked up some of the crossfeed models. So basic, so crude.
 
Sep 21, 2017 at 4:40 AM Post #69 of 2,192
My amateur music is mixed for headphones as I don't have a decent monitoring system :D

Jokes aside, if I wanted to add crossfeed to my Spotify music, what should I use on Android, and on Windows?
Good read as usual.
 
Sep 21, 2017 at 8:00 AM Post #70 of 2,192
The idea that you can fix acoustic crosstalk by using crossfeed in loudspeakers, by design or pre-processing, has been a long-time study of mine; I've designed and tested several systems. I have to disagree that the problem is easily fixed, though.

Headphone cross-feed doesn't address any acoustic problem, so I am not sure what the relevance here is.

Basically, what you can do is modify to some extent how sound from those speakers is localized by partially cancelling the direct signal from speakers that form localization cues we would normally use to localize the speakers themselves. However, you're not fixing acoustic crosstalk, it's still all over the place. But the cancellation only works in one very specific listening position, and is extremely fragile, being affected by room acoustics with reflective surfaces (pretty much a fact of life), and head position, especially when cancellation is the result of a signal processor and not speaker design. All the loudspeaker crossfeed in the world can't compensate for unknown reflections. But worse, what you get is something entirely new, not heard in the mixing environment, or the original acoustic environment either. It no more represents "reality" than any other perspective, though may to some be more pleasing.

Cancellation of loudspeaker crosstalk as a concept is familiar to me. I studied acoustics at university and worked in an acoustics lab for almost a decade. However, I am not sure why you talk about loudspeaker cross-talk cancellation in a thread about cross-feed in headphone listening. Personally I am not that worried about loudspeaker cross-talk. It is a "natural" acoustic phenomenon that doesn't create unnatural signals to my ears. By making the listening room more absorbent and using more directional loudspeakers, one can reduce cross-talk, if that is an issue. It will make the loudspeaker sound more headphone-like, but isn't it easier to just use headphones if that's what you want?

I think you confuse cross-talk and cross-feed in some places.

As for headphone crossfeed, the Linkwitz circuit is just a somewhat frequency-selective reduction in separation, but that doesn't address the rather prominent headphone problem of mid-head localization, which is much harder to deal with.
The Linkwitz circuit, like pretty much all cross-feeders, is frequency selective because that is how our spatial hearing works. Our head is a frequency-selective barrier for sound. Cross-feeders also delay the cross-fed signal, typically by about 0.2-0.3 ms, to simulate the delay caused by loudspeakers at ~30° angles. The delay is conveniently created by the low-pass filter. Cross-feeders are simple circuits, but they miraculously fix the problem, spatial distortion. Mid-head localization is partially fixed and depends on the recording itself. Acoustic recordings done in real acoustics such as classical music can sound pretty amazing after proper cross-feed, but not as amazing as real binaural recordings.
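To make that concrete, here is a rough digital sketch of the same kind of cross-feed, assuming Python with numpy/scipy. It is not the actual Linkwitz or Cmoy circuit, just the same idea: the opposite channel is first-order low-pass filtered (which also contributes the fraction-of-a-millisecond delay at low frequencies), attenuated, and mixed in. The 700 Hz cutoff and -8 dB feed level are illustrative values, not the published component values.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def crossfeed(left, right, rate, cutoff_hz=700.0, feed_db=-8.0):
    """Mix a low-passed, attenuated copy of each channel into the other."""
    sos = butter(1, cutoff_hz, btype="low", fs=rate, output="sos")
    g = 10.0 ** (feed_db / 20.0)
    out_l = left + g * sosfilt(sos, right)   # right channel leaks into the left ear
    out_r = right + g * sosfilt(sos, left)   # left channel leaks into the right ear
    comp = 1.0 / (1.0 + g)                   # rough level compensation
    return comp * out_l, comp * out_r

rate, data = wavfile.read("some_track.wav")  # hypothetical 16-bit stereo input
left = data[:, 0].astype(np.float64)
right = data[:, 1].astype(np.float64)
out_l, out_r = crossfeed(left, right, rate)

out = np.stack([out_l, out_r], axis=1)
out = out / max(np.max(np.abs(out)), 1.0) * 32767.0
wavfile.write("some_track_crossfeed.wav", rate, out.astype(np.int16))
```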

But again, you're creating something new that was neither heard nor planned for before.
Cross-feed removes spatial distortion by scaling spatial information into the "value-space" our brain expects it to occupy. Not using cross-feed creates something that was not planned, spatial distortion.

It might be pleasant, it might not, or anywhere between.
It sounds natural, realistic and fatigue-free. Drums sound like real drums in a room, not like fake plastic toys. Short transient sounds are located in the sound image with pinpoint accuracy instead of spreading all over the place because the brain doesn't know how to interpret crazy spatial cues. Cross-feed doesn't remove details; it removes spatial distortion, revealing the tiny details of the music itself. If that's not desirable then I don't know what is.

I personally would be frustrated with a crossfeed on/off switch, I'd need it to be variable.
Same here. That's why my DIY cross-feed headphone adapter has got 6 cross-feed levels (+ off of course).

In fact, one system I designed for this monitored the channel separation of the actual signals and changed crossfeed dynamically. Worked ok, a definite improvement over a switch.

Interesting. Did you limit the speed at which the cross-feed level changes? What are the benefits of dynamic cross-feed compared to constant cross-feed, in your opinion? Any downsides?

I'm not sure I'd agree, because headphones always have a unique spatial presentation, which is part of the experience. In fact, it used to be promoted! Prog rock radio in the 1970s used to have "headphone hour" programming where widely separated mixes, some with whip-panned sounds and lots of whacky effects, were desirable.

Since when have marketing people understood anything about audio quality? I'm not going to suffer spatial distortion just because of some lunatic radio shows decades ago, when people didn't know what to do with stereo sound. Such whacky effects are childish.

We grew up with headphones being different, and for that reason the modification of the spatial perspective is more accepted. I don't like applying the term "spatial distortion" to the headphone perspective, because distortion implies a reference undistorted original, but in recorded music that never exists. Even the original mix in the original studio is synthetic. Yes, headphones present differently, but neither perspective is actually undistorted.

What you experienced in your childhood doesn't change scientific facts. We all have to admit sometimes that the way we have done things or thought about things has been wrong. That's how we learn, accepting new understanding. I listened to music wrong when young. I'm probably still doing something wrong, but hopefully just a tiny little bit. Cross-feed was a huge step for me. I don't believe spatial distortion was ever intended with headphones. It's an accident of stereo sound. In the late 50's and 60's people were so excited about stereo sound and the possibility of huge channel separation that they didn't think about the consequences. It's something people simply ignore, not realizing how it destroys the potential of headphone listening. People get used to things, and when somebody questions things they are in denial. Sad.

Spatial distortion happens in our brain, but it is just as real for the listener, just as pain is real for a person. You can listen to headphones the way you want, that's your business, but I feel responsible to educate people about spatial distortion and how to significantly enhance headphone listening using cross-feed. I have science on my side. Open-minded people do get what I say.

If the sound from a cowbell spreads all over the place instead of being in one position in the sound image, then I am going to call it spatial distortion. Spatial information gets distorted. A cowbell is not all around your head. It's on the left or right or in the center. It's in one place, and it sounds like a real cowbell if you hear it like that, without spatial distortion. The thing exists and people should be educated about it.

Also, don't blame cross-feed for some crappy prog rock sounding weird because some anarchistic sound engineers using drugs liked to play with the knobs in the studio. Cross-feed works miracles, but not miracles big enough to transform a badly produced rock album of the 70's into gold. Listen to some well recorded classical music (e.g. an SACD from the BIS label) with proper cross-feed and then you'll hear how good the result is.

I was disappointed when I looked up some of the crossfeed models. So basic, so crude.
The market for headphone amps with cross-feed is miserable. Headphone amps are expensive, only a few models have cross-feed, and even then only one or two levels are available. There's the SPL Phonitor, but that's very expensive. I recommend DIY cross-feed headphone adapters. Having a DIY cross-feeder between your source and headphone amp is another option.
 
Sep 21, 2017 at 11:39 AM Post #71 of 2,192
My amateur music is mixed for headphones as I don't have a decent monitoring system :D

Jokes aside, if I wanted to add crossfeed to my Spotify music, what should I use on Android, and on Windows?
Good read as usual.

I found a simulation of Meier "natural" crossfeed filter for foobar that sounds very good to me. That will work for Windows, not sure what to tell you about Android.

edit: just noticed you were specific to Spotify, so foobar plugins will be of no use, but I decided to keep the link here in case you give it a shot with Foobar.

Plugin:
http://www.foobar2000.org/components/view/foo_dsp_meiercf

Explanation of Meier Crossfeed:
http://www.meier-audio.homepage.t-online.de/crossfeed.htm

I personally would be frustrated with a crossfeed on/off switch, I'd need it to be variable. In fact, one system I designed for this monitored the channel separation of the actual signals and changed crossfeed dynamically. Worked ok, a definite improvement over a switch.

You are essentially describing Meier crossfeed.

The market for headphone amps with cross-feed is miserable. Headphone amps are expensive, only a few models have cross-feed, and even then only one or two levels are available. There's the SPL Phonitor, but that's very expensive. I recommend DIY cross-feed headphone adapters. Having a DIY cross-feeder between your source and headphone amp is another option.

Don't overlook DSP.
 
Last edited:
Sep 21, 2017 at 12:20 PM Post #72 of 2,192
You are essentially describing Meier cross feed.
Cross-feeders do spatialize sound depending on the incoming channel separation and Meier is no exception, but that's not dynamic cross feed. Even Meier has a constant cross feed level.

The difference between Meier (an "H-topology" cross-feeder) and Linkwitz-Cmoy (an "X-topology" cross-feeder) is that Meier distributes sound according to the channel difference, while Linkwitz-Cmoy emphasizes 30° angles, simulating loudspeaker listening. Meier gives a more vivid/aggressive/wide sound than Linkwitz-Cmoy, which is calmer and more relaxed.

Don't overlook DSP.
DSP is a great way to do cross-feed if you can. It's just that often you can't use one (Spotify?), so I do all my cross-feed with my DIY cross-feed headphone adapter, which is available at home no matter what the source is (CD, DVD, Blu-ray, Spotify, Youtube, TV,…). For portable music I pre-crossfeed the tracks before exporting them to mp3 files* for my portable player (I use a Nyquist plugin I wrote for Audacity); a rough sketch of that workflow follows below.

* In my opinion mp3s are "good enough" at bit rates of 192 kbps or more for outdoor listening in noisy environments.
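For what it's worth, the same workflow can be batched outside Audacity. A small sketch, assuming the crossfeed() helper sketched earlier in the thread, Python with numpy/scipy, and ffmpeg on the PATH for the 192 kbps mp3 step; the folder name is hypothetical.

```python
import subprocess
from pathlib import Path

import numpy as np
from scipy.io import wavfile

# Assumes the crossfeed() helper from the earlier sketch is defined above
# or imported from a local module.
for wav_path in Path("rips").glob("*.wav"):          # hypothetical rip folder
    rate, data = wavfile.read(str(wav_path))
    left = data[:, 0].astype(np.float64)
    right = data[:, 1].astype(np.float64)
    out_l, out_r = crossfeed(left, right, rate)

    out = np.stack([out_l, out_r], axis=1)
    out = out / max(np.max(np.abs(out)), 1.0) * 32767.0
    xfeed_path = wav_path.with_name(wav_path.stem + "_xfeed.wav")
    wavfile.write(str(xfeed_path), rate, out.astype(np.int16))

    # Encode at 192 kbps, per the footnote above.
    mp3_path = wav_path.with_suffix(".mp3")
    subprocess.run(["ffmpeg", "-y", "-i", str(xfeed_path),
                    "-b:a", "192k", str(mp3_path)], check=True)
```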
 
Sep 21, 2017 at 1:02 PM Post #73 of 2,192
Headphone cross-feed doesn't address any acoustic problem, so I am not sure what the relevance here is.
The task, as I see it, is to get the in-head, hard left-right perspective of headphones back into a more natural presentation, in essence, a more acceptable, if artificial, acoustic space.
Cancellation of loudspeaker crosstalk as a concept is familiar to me. I studied acoustics at university and worked in an acoustics lab for almost a decade. However, I am not sure why you talk about loudspeaker cross-talk cancellation in a thread about cross-feed in headphone listening.
You mentioned it, I quoted you. You said it was easy, it's not. You should know that.
Personally I am not that worried about loudspeaker cross-talk. It is a "natural" acoustic phenomenon that doesn't create unnatural signals to my ears. By making the listening room more absorbent and using more directional loudspeakers, one can reduce cross-talk, if that is an issue.
No, it can't. Both ears still hear both speakers, even in an anechoic chamber.
It will make the loudspeaker sound more headphone-like, but isn't it easier to just use headphones if that's what you want?
Even with speakers and as much crosstalk cancellation as you can manage, it's still a completely different perspective than headphones.
I think you confuse cross-talk and cross-feed in some places.
You brought it up and made misleading statements. I know the difference.
The Linkwitz circuit, like pretty much all cross-feeders, is frequency selective because that is how our spatial hearing works. Our head is a frequency-selective barrier for sound. Cross-feeders also delay the cross-fed signal, typically by about 0.2-0.3 ms, to simulate the delay caused by loudspeakers at ~30° angles. The delay is conveniently created by the low-pass filter.
This is one of the things that jumped out at me when I looked up the circuit. The "delay" caused by the filters is not actually time delay; it's phase shift, which looks like delay over one group of frequencies but is not time delay. That time delay could be simulated well enough with an all-pass network, but not with a single-pole filter. Sorry, I tried that 35 years ago. It sort of works, but not well. That's why I was disappointed. You need a real DSP to do that well.
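That distinction is easy to see numerically. A quick sketch, assuming Python with scipy: it prints the group delay of a single first-order low-pass (700 Hz cutoff, purely illustrative). A genuine ~0.25 ms interaural delay would be flat across frequency; the filter only approximates it below its corner, and the delay collapses above it.

```python
import numpy as np
from scipy.signal import butter, group_delay

fs = 44100
b, a = butter(1, 700.0, btype="low", fs=fs)     # illustrative cutoff
w, gd = group_delay((b, a), w=4096, fs=fs)      # gd is in samples
gd_ms = 1000.0 * gd / fs

for f in (100, 300, 700, 2000, 5000):
    i = int(np.argmin(np.abs(w - f)))
    print(f"{f:5d} Hz: group delay ~ {gd_ms[i]:.3f} ms")
# Roughly 0.22 ms at 100 Hz, about half that at the corner, and only a few
# hundredths of a millisecond by 2 kHz: phase shift that mimics a delay at
# low frequencies rather than a broadband time delay.
```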
Cross-feeders are simple circuits, but they miraculously fix the problem, spatial distortion. Mid-head localization is partially fixed and depends on the recording itself. Acoustic recordings done in real acoustics such as classical music can sound pretty amazing after proper cross-feed, but not as amazing as real binaural recordings.
At best that thing is an improvement, but it's not really doing what needs to be done. The fact that certain recordings work better than others should tell you that. Mid-head localization should not depend only on the recording, proper correction would place it outside the head all the time. That's not what you have there.
Cross-feed removes spatial distortion by scaling spatial information into the "value-space" our brain expects it to occupy.
Change "removes" to "reduces", and we're good. That circuit can't remove spacial distortion. I can't even minimize it.
Not using cross-feed creates something that was not planned, spatial distortion.
In some cases I would agree, but certainly not all. As I referred to earlier, there is material that while mixed on speakers was happily embraced on headphones as a new, if hyper-stereo, experience. Remember, mixes are checked on headphones, especially today in contemporary popular music, since that's the market, but mixed on speakers, because mixing on speakers translates to a pleasing headphone experience, but not the other way 'round.

It sounds natural, realistic and fatigue-free. Drums sound like real drums in a room, not like fake plastic toys. Short transient sounds are located in the sound image with pinpoint accuracy instead of spreading all over the place because the brain doesn't know how to interpret crazy spatial cues. Cross-feed doesn't remove details; it removes spatial distortion, revealing the tiny details of the music itself. If that's not desirable then I don't know what is.
To be completely fair, I appreciate your opinion, but do not share it.
Interesting. Did you limit the speed at which the cross-feed level changes? What are the benefits of dynamic cross-feed compared to constant cross-feed, in your opinion? Any downsides?
Speed and degree are program determined and variable. The benefit is more consistent results; the down side is more consistent results. It's just different. I didn't develop the idea any further because the problem was the algorithm that determined the required crossfeed. It turned out it's not just the amount, which was easy to quantify; to work well it needed delay, and the amount changes with program. What morphed out of this was abandoning the idea in favor of an idea akin to the Smyth Realizer, but I didn't have DSP in those days, so I moved on.
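A toy sketch of the general idea, not the actual system described above (its algorithm isn't given here): derive a program-dependent cross-feed amount from the short-term channel correlation and slew-limit it so it doesn't pump. Python/numpy assumed; every constant is illustrative.

```python
import numpy as np

def dynamic_feed_gains(left, right, rate, block_ms=50.0,
                       max_feed=0.4, smooth=0.9):
    """Per-block cross-feed gain: wide (low-correlation) program gets more
    feed, mono-ish program gets less, and the gain is smoothed over time."""
    n = int(rate * block_ms / 1000.0)
    gains = []
    g = 0.0
    for i in range(0, min(len(left), len(right)) - n + 1, n):
        l = left[i:i + n]
        r = right[i:i + n]
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r)) + 1e-12
        corr = float(np.sum(l * r) / denom)          # roughly -1 .. +1
        target = max_feed * (1.0 - max(corr, 0.0))   # wider mix -> more feed
        g = smooth * g + (1.0 - smooth) * target     # slew-limit the change
        gains.append(g)
    return np.asarray(gains)
```

Applying these gains naively, block by block, would click without interpolation, and the harder question raised above, how much delay the cross path needs and how it should track the program, isn't touched here at all.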
Since when have marketing people understood anything about audio quality? I'm not going to suffer spatial distortion just because of some lunatic radio shows decades ago, when people didn't know what to do with stereo sound. Such whacky effects are childish.
Actually, those efforts were very successful. Stereo headphones were fairly new, and a program featuring cool headphone mixes was a revenue source for broadcast. BTW, you're expressing opinion again. Thanks, but it's not fact, just opinion. That's why there's an off switch on crossfeed!
What you experienced in your childhood doesn't change scientific facts. We all have to admit sometimes that the way we have done things or thought about things has been wrong. That's how we learn, accepting new understanding. I listened to music wrong when young. I'm probably still doing something wrong, but hopefully just a tiny little bit. Cross-feed was a huge step for me. I don't believe spatial distortion was ever intended with headphones. It's an accident of stereo sound. In the late 50's and 60's people were so excited about stereo sound and the possibility of huge channel separation that they didn't think about the consequences. It's something people simply ignore, not realizing how it destroys the potential of headphone listening. People get used to things, and when somebody questions things they are in denial. Sad.
Well, it wasn't childhood, but...
Your view is very rigid, very black and white. If you want stereo done "right" then the only way you'll be satisfied is with binaural recordings made with mics in your own ears. That works very well, but just for you.

Recording and reproduction, especially in two-channel stereo, is very much a subjective art. As you grow older (ok, sorry, just a return jab), you may realize there are lots of "rights" and "grays" in...well, everything. And there are some absolute rights and wrongs. Experience helps us to understand the difference.

Your statements show an understanding gap. The generalizations are disturbing too, like the comment about the 50s and 60s, as if it were all huge ping-pong ball stereo. It wasn't; there are some very fine recordings from that time period, some even made with more than two recording channels so the phantom center could be brought under control. Most of the stereo mic techniques we still use were introduced then. And even earlier, Bell Labs research into stereophony (that doesn't mean two channels, BTW) showed that truly accurate spatial reproduction would require a grid of over 1000 microphones and recording channels, and a speaker grid to match. They reduced the channel count until it was practical, and landed at the lower limit of 3. That was the 1930s. Give history some credit!
Spatial distortion happens in our brain, but it is just as real for the listener, just as pain is real for a person.
No, spatial distortion results from the way signals are transduced.
You can listen to headphones the way you want, that's your business, but I feel responsible to educate people about spatial distortion and how to significantly enhance headphone listening using cross-feed. I have science on my side. Open-minded people do get what I say.
Yeah, right, except you are not educating people with the whole story. You've been rather definitive with your precepts, and I'm just pointing out that a few things are not so definitive. And your definition of what is "right" includes a half-baked attempt at crossfeed that doesn't take real time delay into consideration, nor the actual response curve of sound diffracting around a head, nor any thought of the angle of the phantom transducers. That's not definitive, don't portray it as final. It might not even be desirable!
If the sound from a cowbell spreads all over the place instead of being in one position in the sound image, then I am going to call it spatial distortion. Spatial information gets distorted. A cowbell is not all around your head. It's on the left or right or in the center. It's in one place, and it sounds like a real cowbell if you hear it like that, without spatial distortion. The thing exists and people should be educated about it.
What if the creator wanted it all over the place? How would you know? This is again another strong effort to categorize something that is far more subjective.
Also, don't blame cross-feed for some crappy prog rock sounding weird because some anarchistic sound engineers using drugs liked to play with the knobs in the studio.
Well, I didn't do that, but I think you just did. "Crappy" could be your opinion, and reproducing a whip-panned guitar in headphones actually was the intent of the creators some times. If you cross-feed that out, you've taken away their intention. Is that the right thing to do?
Cross-feed works miracles, but not miracles big enough to transform a badly produced rock album of the 70's into gold. Listen to some well recorded classical music (e.g. an SACD from the BIS label) with proper cross-feed and then you'll hear how good the result is.
Again...perhaps yes...perhaps no. There's no way I can agree that crossfeed of the type you've defined is universally miraculous. Just as I can't agree that all album rock in the 1970s is badly produced. Not that you'll care or be impressed, but my recording background is in classical music, not 70s rock.
The market for headphone amps with cross-feed is miserable. Headphone amps are expensive, only a few models have cross-feed, and even then only one or two levels are available. There's the SPL Phonitor, but that's very expensive. I recommend DIY cross-feed headphone adapters. Having a DIY cross-feeder between your source and headphone amp is another option.
I'd probably look for a crossfeed DSP plugin and use whatever headphone amp you have. At least then the crossfeed wouldn't be limited to a simple filter and whatever phase shift it creates. It could include head diffraction and time delay, and have all the variables that are actually required. That, in truth, is why I believe you don't find much crossfeed on commercial headphone amps. It's too complex to do well.

What we have here is a difference in opinion. I respect that you love the Linkwitz crossfeed circuit, please respect that I feel it to be inadequate. You feel crossfeed is essential and miraculous, I feel it is an occasional improvement in the implementation you've cited. I also know from experience that there is normal stereo material that crossfeed would ruin in terms of what the creators intended. I understand that from experiencing the culture of the era. Correcting that would be inauthentic, just as trying to massage stereo out of a mono recording would also be inauthentic.

Looks like we'll differ here. Perhaps we should let it go at that.
 
Sep 21, 2017 at 1:15 PM Post #74 of 2,192
I found a simulation of Meier "natural" crossfeed filter for foobar that sounds very good to me. That will work for Windows, not sure what to tell you about Android.

edit: just noticed you were specific to Spotify, so foobar plugins will be of no use, but I decided to keep the link here in case you give it a shot with Foobar.

Plugin:
http://www.foobar2000.org/components/view/foo_dsp_meiercf

Explanation of Meier Crossfeed:
http://www.meier-audio.homepage.t-online.de/crossfeed.htm
On Windows, I believe Equalizer APO can offer system-wide crossfeed. Otherwise there is always the option of a virtual cable and a VST host, where you can use all the VSTs that can be used in foobar, and more, as foobar is limited in some ways (the VST needs a GUI, 32-bit only). Once you have such an implementation, you can decide to route anything through it, or not.

On Android I still use Viper4Android, but I'm on some older version and haven't really looked into replacing it with something possibly better. It requires root, has a basic kind of crossfeed setting that's called something else, or can be used as a convolver. The issue with the convolution option is finding files to use it with, at the right sample rate. But at least some options exist for those who don't know how to make their own stuff. Our fellow member @Joe Bloggs made such impulses and shared them some time back, and a few other people did the same.
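For anyone going the convolver route, here is a hedged sketch of what such an impulse file could contain: a unit impulse for the same-side path and an attenuated, low-passed, slightly delayed impulse for the opposite-side path, written as a stereo WAV. Python with numpy/scipy assumed; the gain, cutoff and delay are illustrative, and convolver apps differ in the channel layout they expect, so treat this only as a starting point.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs = 44100                                   # match your playback sample rate
length = 1024
delay = int(round(0.00025 * fs))             # ~0.25 ms cross-path delay
feed_gain = 10.0 ** (-8.0 / 20.0)            # -8 dB cross path (illustrative)

direct = np.zeros(length)
direct[0] = 1.0                              # same-side path: pass straight through

cross = np.zeros(length)
cross[delay] = feed_gain                     # opposite-side path: delayed...
sos = butter(1, 700.0, btype="low", fs=fs, output="sos")
cross = sosfilt(sos, cross)                  # ...attenuated and low-passed

# Channel 0 = direct-path IR, channel 1 = cross-path IR.
ir = np.stack([direct, cross], axis=1).astype(np.float32)
wavfile.write("crossfeed_ir_44100.wav", fs, ir)
```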
 
Sep 21, 2017 at 2:57 PM Post #75 of 2,192
That was the longest line-by-line reply I've ever seen. I hope I live long enough to actually get around to reading it someday.
 
