To crossfeed or not to crossfeed? That is the question...

Oct 11, 2022 at 2:15 AM Post #2,011 of 2,192
That is disappointing. You helped me figure out one big thing about your particular preference for crossfeed, though. I appreciate it. I think I've figured out what is going on and why some people report more effect from crossfeed than others.
 
Oct 11, 2022 at 6:52 AM Post #2,012 of 2,192
You seem to be arguing against yourself here! Crossfeed IS a very coarse manipulation, but it's not only manipulating ILD, it crossfeeds everything, including all those factors HRTFs address. You can't just ignore/dismiss those factors because crossfeed fails to handle them correctly! And if we could simply dismiss those factors, then why did anyone bother to invent HRTFs in the first place?
What I am saying is that we can look at ILD only, even if crossfeed does other things too (e.g. ITD changes)

What exactly do you mean by saying "it crossfeeds everything"? A 100 Hz low pass filter filters everything, but 20 Hz content remains almost untouched while 20 kHz is filtered away massively. Similarly, crossfeed doesn't change a mono signal much (Jan Meier-type H-topology crossfeeders are even mono NEUTRAL!), but a stereo signal with large channel separation gets modified a lot (made more mono!)
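A minimal sketch of this point. This is a toy illustration only, not Jan Meier's actual circuit or any particular product; the 0.3 gain and 700 Hz cutoff are arbitrary illustrative values. With identical (dual-mono) channels, a symmetric crossfeed leaves the channels identical, so ILD stays at zero; with a hard-panned signal, low-frequency content appears in the formerly silent channel, i.e. ILD is reduced:

```python
import numpy as np

def simple_crossfeed(left, right, amount=0.3, cutoff=700.0, fs=44100.0):
    """Feed a low-passed copy of each channel into the other.

    Toy sketch only: real crossfeeders (Meier, Chu Moy, bs2b) also add
    delay and level compensation; amount/cutoff here are illustrative.
    """
    a = np.exp(-2.0 * np.pi * cutoff / fs)  # one-pole low-pass coefficient

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    return left + amount * lowpass(right), right + amount * lowpass(left)

fs = 44100.0
tone = np.sin(2 * np.pi * 100.0 * np.arange(2000) / fs)  # 100 Hz tone

# Dual-mono input: the channels stay identical, so ILD remains zero
# (only the overall level rises; a mono-neutral design compensates that).
l1, r1 = simple_crossfeed(tone, tone)

# Hard-panned input: low-frequency content now appears in the silent
# channel too, i.e. the ILD between the channels has been reduced.
l2, r2 = simple_crossfeed(tone, np.zeros_like(tone))
```

Note the dual-mono case does gain level, which is why mono-neutral designs attenuate the direct path to compensate.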

This is what this has been. I write things as I understand and know them, using my skills in English (my English is pretty good, but it is not my native language). The not-so-standardized terminology around audio/sound doesn't help. Then you read my post and interpret it (deliberately or not) in ways that make it look wrong and contradictory. Then I try to explain what I really mean, and again you read that explanation not trying to understand what I mean, but trying to twist it so that you can again say I am wrong!! So frustrating! The fact that there are objective and subjective elements to this makes things even worse.

If I say I measured the width of a book using a measuring stick, you'd probably say I can't use a measuring stick because books are made of atoms and the resolution of measuring sticks is 10^8 times too coarse to measure atoms. That's how this feels to me.

No it doesn’t, you can’t just keep repeating that.
Yes I can, because I am right. With speakers, the left channel "leaks" acoustically to the right ear and vice versa. Crossfeed does something similar electronically. The result in both cases is similar: reduced ILD at low frequencies and increased cross-correlation between the ears, favoring an ITD of about 250 µs. I have never said those things are identical, of course. Crossfeed is a coarse approximation of acoustic crossfeed that ignores the fine detail of HRTF and instead simulates the overall shape with a low pass filter. Would you PLEASE try to understand what I mean instead of twisting everything?

No, it’s simple proof spatiality is happening in the speakers/room and your brain is interpreting that spatiality to create its own perception.
If that were true, binaural spatiality would be impossible. No room, no spatiality. Spatial cues are generated in a room (or otherwise) and our spatial hearing creates spatiality from them, but I don't think we are even disagreeing here. It is more like arguing about how to express things with words.

Not particularly and even less so in the case of stereophonic sound.
Yes, but stereophonic sound is a small fraction of what we hear in our lives. It is a special case of sound that is made to fool our spatial hearing. I think spatial hearing is easy to fool, but there just isn't much fooling going on in the world. Again, you interpreted what I said in a funny way. TRY to understand what I say. Read between the lines. Give me the benefit of the doubt instead of always interpreting everything the worst way. Ask me to clarify things before jumping to your hostile conclusions declaring I am WRONG. Do you really think I am an idiot who knows nothing? That's how you treat me! It is INSANELY insulting. That's why I lose my temper from time to time!

It’s not a new dogma and it’s not my dogma. HRTFs demonstrate the deficiencies of simple crossfeed, HRTFs are not new or my idea/hypothesis/dogma. So, I’ve no idea why you’re not convinced.
BUT I don't deny deficiencies of simple crossfeed!!!
I don't think crossfeed is the be-all-end-all solution! I am saying it is surprisingly effective for its simplicity, and it is enough for me to make headphone sound enjoyable. What I do deny is your claim that simple crossfeed can improve headphone sound only in specific cases, such as hard-panned ping-pong recordings.

I am convinced HRTF solutions are better than simple crossfeed. Of course I am!
I am convinced even simple crossfeed can improve headphone spatiality + enjoyment a lot.

Default headphone sound => worst
Simple crossfeed => better
State of the art HRTF convolution solution => best

Hopefully this clarifies some things for you.

But spatial cues are NOT scaled into more natural form, the ILD spatial cue might be but the other spatial cues are NOT, which is why spatial hearing cannot make better sense of them, although a minority of people do seem to perceive that effect.
Of course fixing only one problem can improve the situation, especially if the fixed problem was the most harmful one. Maybe it is you who has a mental block preventing you from getting the most out of crossfeed? That's how this sounds. It is as if you try to come up with excuses for why crossfeed can't improve things. I listen to the sound with an open mind: does it improve the sound for me or not? Do I enjoy the sound more?

How wrong are the other spatial cues to begin with? If there were something nearly as bad as ILD, then perhaps crossfeed designers would have tried to fix it too. I am not aware of any spatial cue besides ILD that is a big problem with headphones. If you know of one, then please tell me about it and explain why it is a problem and how it should be fixed.

I believe simple crossfeed is surprisingly successful because what it does simulates what happens with the direct sound from speakers: the acoustic crossfeed.
 
Oct 11, 2022 at 7:33 AM Post #2,013 of 2,192
Crossfeed puts the miniature soundstage in order for me, and the instruments stay where they are. The instruments are also more point-like sound sources and not fractured all over. For me this stability is one of the great benefits of crossfeed.
I tried to "visualize" the miniature soundstage without crossfeed (upper) and with proper crossfeed (lower):

crossfeed.jpg
That’s interesting because in many respects, what I experience is almost the exact opposite of what you describe.

Using the example of say an orchestral recording, what I experience with speakers vs cans is analogous to listening to the orchestra from an ideal listening position, say 15-20m away from the orchestra, while putting on the cans is like suddenly jumping forward towards the orchestra, to a position roughly equivalent to the conductor but even further forward. In the real scenario, the orchestra appears much wider and has a lower ratio of reverb to direct sound, but of course we aren't stretching the width of everything, just the soundstage. The sound sources (instruments) aren't stretched, they're just more separated within a wider soundstage and appear even more distinct due to the lower reverb ratio, which also reduces depth. This is very similar to the effect of wearing cans and is why some/many engineers use cans when recording, because it's easier to notice details/faults that may be masked or partially concealed by reverb and it's easier to identify where (which instrument or mic) the detail/fault is happening. This perception seems to be almost the exact opposite of your perception. Instead of more separation and more distinct positioning, you seem to experience a "blurring" effect.

This analogy of cans and sitting in the orchestra is not ideal though, all the sound appears to be occurring inside my head with cans. There is some perception of depth but it’s more squashed and not as coherent as in the real life scenario. My perception of bass and bass balance is not the same either but it is quite a linear relationship and therefore usually fairly predictable. I do occasionally get anomalies with popular genres, say the lead vocal on a different horizontal plane, at the top of my head. Everything is always inside my head with cans though, the only exception is some binaural sound recordings accompanied by video (providing visual cues).

With crossfeed I perceive a narrower soundstage, so more like the width of an orchestra from the ideal listening position but without the distance or the higher ratio of reverb. The bass also appears different but not more similar to a real life scenario and not as linearly/predictably as without crossfeed. Sometimes I get an EQ notch type effect in the bass, sometimes the bass sounds artificially louder, sometimes I get the bass component of a sound/instrument within the mix in a slightly different location to the higher freq components, which I find particularly annoying and doesn’t appear correlated with the crossfeed freq. I also get unpredictable effects with the location and FR of ERs/reverb. In general it’s more blurred, less spatially coherent, less stable and more unpredictable. It’s also still always all inside my head.

Maybe it’s because I spent a lot of time actually sitting inside orchestras that I don’t mind that extreme width/separation and don’t find it unnatural. Without crossfeed is far from ideal, it would be good to get it outside my head, get more depth, have more representative bass and not to have those occasional anomalies but even with all these failings, it’s still acceptable for me. With crossfeed, the narrower width without the greater distance is a conflict, as is the same sound in different locations, in addition to the less coherent reflections and other issues, it appears far less natural to me and is typically unacceptable. I can’t just sit and enjoy it, because I’m constantly trying to figure out what’s going on. I should mention there are exceptions and it’s often not as obviously “black and white” depending on the mix (which can vary wildly). I have encountered recordings that I did prefer with crossfeed but such exceptions are so rare, it’s not worth the effort.

Even amongst those like me who are not fans of crossfeed, I don't assume they are going to experience the same as me. Some of what I described may be identical or similar for other non-fans, but they might not perceive or be consciously aware of the other things I've described, or even if they are aware, they might not be troubled by them, and it's very likely some non-fans experience yet other effects that I don't.

G
 
Oct 11, 2022 at 7:43 AM Post #2,014 of 2,192
Do you mind answering specific questions? I understand your general impression. I'm trying to figure out what is creating that impression.

1) Here is a song I'd like you to listen to without crossfeed and then again with it...



How much of a difference in distance do you perceive between without and with? Five feet? Ten feet? Is it like a speaker system with a soundstage at the other side of the room from you? Or is it just a couple of inches from your face?

This recording is devoid of almost any secondary spatial cues. It is very dry tracks, hard panned in LCR style. VERY unsuitable for headphones as it is.

Without crossfeed distance is 1-10 inches. A lot of the sound is annoyingly close to my ears. Nasty spatiality!
With crossfeed (-2 dB level seems optimal): 8-12 inches. Thanks to crossfeed the sounds stay at least 8 inches from my ears, but the lack of secondary spatial cues makes it impossible to have a headphone soundstage bigger than about a foot. That's okay, because such a miniature presentation can be cozy and intimate.

If each element is different, let me know which ones sound like they're inside your head and which ones sound like they're on the other side of the room (or whatever distance you perceive)... the vocals, the bongos, the piano, the woo woo's, the guitar solo. Are they all at the same distance from you, or are some things closer and some things further?
Without crossfeed the stuff mixed center is inside my head. With crossfeed it moves about 4 inches forward and is on my upper face. The stuff hard panned left and right is outside my head, but closer without crossfeed as described above.

2) Do you get any perception of distance at all without crossfeed? If so, what elements sound further away?
Yes, 1-10 inches. It is difficult to say what is further away, because the soundstage is so fractured and all over the place. The sounds are not point-like but long objects that extend from near to far and move/change shape over time. The center-mixed stuff (singing) is a point-like, steady sound inside my head.

3) Does the perception of distance pop in and out as you dial the crossfeed up and down? Is there a narrow window for the perception of distance to be apparent? Or is the effect pretty consistent regardless of how much or how little crossfeed you dial in?
It is a gradual change in this regard, but there is an optimal level for the crossfeed giving the largest distances overall. Above and below it the distances get smaller, but in very different ways. Too much crossfeed makes the sound mono-like and moves it toward the center of my head. Too little crossfeed moves the sound toward my left and right ears. As making the sound mono kills the out-of-phase content, it is the in-phase content that moves toward the center of my head, while too weak a crossfeed keeps the out-of-phase content amplified, and that moves toward my ears. So it is pretty complex, but the proper crossfeed level manages to balance these things nicely. That's how I recognize it: things just feel balanced and natural.
 
Oct 11, 2022 at 7:53 AM Post #2,015 of 2,192
I've said earlier that I listen only to classical music, and that my experience may not be so relevant for other kinds of music. I guess that the prevalence of 'field' recording of classical music is pretty important here. Even studio recordings of solo piano recitals, for example, are mic'd in a similar way. This means the recording - usually hi-res these days - is likely to present a performance space as part of the acoustic image. It's also important that the recording doesn't involve compression.
Well-recorded classical music is excellent for crossfeed. There are tons of naturally derived spatial cues in the recording, making a nice headphone soundstage possible. Organ music recorded in a cathedral can sound really impressive on properly crossfed headphones, almost binaural!
 
Oct 11, 2022 at 10:00 AM Post #2,016 of 2,192
What exactly do you mean by saying "it crossfeeds everything"? A 100 Hz low pass filter filters everything, but 20 Hz content remains almost untouched while 20 kHz is filtered away massively.
A 100Hz LPF filters everything above 100Hz, not just ILD or some other factor. Likewise, crossfeed is crossfeeding everything (by a determined amount) below a set threshold, not just ILD but all the sound which includes all the timing, spectral and other information used by our perception.
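One way to see this point: the crossfed signal carries timing information across too, not just level. A toy sketch (the 0.3 gain and the 13-sample delay, roughly 295 µs at 44.1 kHz, are illustrative assumptions, not any particular product's values, and the low-pass stage is omitted for brevity):

```python
import numpy as np

def crossfeed_with_delay(left, right, amount=0.3, delay=13):
    """Feed each channel into the other, attenuated and delayed.

    delay=13 samples is about 295 microseconds at 44.1 kHz, on the
    order of the largest interaural time difference. Toy values only;
    a real crossfeeder would also low-pass the crossfed path.
    """
    pad = np.zeros(delay)
    to_l = np.concatenate([pad, right])[: len(left)]
    to_r = np.concatenate([pad, left])[: len(right)]
    return left + amount * to_l, right + amount * to_r

# A single click hard-panned left...
click_l = np.zeros(100)
click_l[10] = 1.0
out_l, out_r = crossfeed_with_delay(click_l, np.zeros(100))
# ...now also appears in the right channel, delayed and attenuated:
# the whole waveform (level, timing, spectrum) is what gets crossfed,
# not an isolated "ILD" quantity.
```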
Similarly crossfeed doesn't change mono signal much …
What mono signal? We’re dealing with stereo, so a mono signal is a signal which only occurs in either the left or right channel and obviously crossfeed does change that. If you’re talking about the perception of a sound in the phantom centre, then that’s a dual mono signal and summing them together does change it to an extent (it increases at least the level). Furthermore, in most cases even a sound in the phantom centre is likely to have stereo reverb (artificial or acoustic), variations between the left and right channels and therefore the potential for spectral and timing/phase issues.
The fact that there are objective and subjective element to this makes things even worse.
There are always objective and subjective elements, the trick is understanding which is which and not making false assertions about the former based on the latter. This case is tricky because we’re talking about subjective responses which don’t have precise definitions/descriptions and which vary considerably between different individuals.
Yes I can because I am right. With speakers the left channel "leaks" acoustically to right ear and why versa.
No you can’t, because you are wrong. With speakers the left channel does not “leak” acoustically to the right ear! What actually happens is that the signal from the left speaker reflects off the left and right walls of the listening environment, so now we have a mixture of direct and reflected sound with different timing and spectral content. Some of the direct sound and sound reflected off the left wall reaches your right ear but is further affected by your skull and pinna (attenuated and spectrally altered); the reflections from the right wall do not have to pass through your skull to reach your right ear but are affected by your right pinna. What we actually get is very significantly different from just crossfeed, so you can’t just keep repeating “Same happens with speakers.”!
The result in both cases is similar: Reduced ILD at low frequencies and increased cross-correlation between the ears favoring ITD of about 250 µs.
No, the result is not similar it’s very different as explained above and as you already know but are ignoring!
Of course fixing only one problem can improve the situation,
But you’re not “fixing only one problem” because you are not only crossfeeding ILD, you’re crossfeeding all the signal below the threshold and by fixing one problem you’re making other factors/considerations worse. 4+4+4=8 if you ignore/dismiss that last “+4”, which I don’t really notice and doesn’t affect my enjoyment anyway!
especially if the fixed problem was the most harmful one.
But what if it’s not the most harmful one? What if all the other factors combined, which you’re damaging by fixing that one problem, are more harmful? What if you don’t find that problem you fixed to be that harmful a problem to start with?

You have a particular perception and you’ve invented an idea/theory that explains it by effectively dismissing/ignoring everything that your perception isn’t consciously aware of and you don’t believe is harmful. If your perception were the same as everyone else’s then maybe you’d be on to something but clearly it isn’t. If your theory of solving the most “harmful” problem and being closer to ideal were correct, then why, after being around for 50 years or more, don’t we see it as standard or at least as an option on every headphone device, especially as it has the potential to be a money earner? It’s never taken off and science knows why but you dismiss this too and instead falsely assert it’s due to training (or previously ignorance or idiocy).

It appears you’ve fallen into the same logical trap so many audiophiles do with other aspects of audio. They have a perception, find or invent explanations that support it and ignore or dismiss anything to the contrary. Typically you do not fall into that trap, unless it includes the letters “ILD”!!

G
 
Oct 11, 2022 at 10:07 AM Post #2,017 of 2,192
JamesJames was very helpful. I think I’ve figured out what’s going on here now. Crossfeed isn’t creating spatiality. In fact, the perception of spatiality depends more on how the person hears spatial cues than it does on the crossfeed itself. Crossfeed was designed for a specific purpose, and it does that well. But it has a beneficial unintended side effect for people who hear recorded music in a certain way. It reduces a kind of separation in music that some people hear as a contrast, and others hear as a distraction. By evening out the contrast, it allows people who are easily distracted to hear elements in the music that other people are able to easily parse even through the contrasts.

An analogy would be like this… Two people are standing side by side alternately calling out numbers. One of them is counting down from a hundred, a number at a time. The other one is calling out random numbers. One kind of listener can parse out the numbers that have a pattern from those that don’t. They can focus on the descending numbers and set aside the random ones. The other kind of listener hears the random numbers and his perception hits reset. He can’t hear the pattern clearly because he can’t focus his attention beyond the random numbers.

That is what’s happening here. Gregorio and I are used to hearing competing sounds in a mix and making sense of them so we can organize and balance them. JamesJames was unable to comment on the effect of crossfeed on different elements in the song because the contrast was so wide in some of them. Some elements, like the vocals and guitar solo, were straight mono: equal loudness from both speakers. And some elements, like the bongos and the woo woos, were hard panned left and right. Crossfeed would have absolutely no effect on the former, but a large effect on the latter. But JamesJames didn’t hear it like that. He heard it all as one thing and couldn't sort out any differences between them at all.

I suspect that some people are extraordinarily sensitive to being distracted by sound that is hard panned left or right. When sound comes at them from both sides like that, they’re unable to parse the sound in the middle. It becomes muddled and they are unable to hear it as a separate thing. This is irritating to them, and they describe it as listening fatigue. When the competing contrasts are evened out with crossfeed, their minds can suddenly perceive the stuff in the middle, and it’s as if that suddenly turned on like a light bulb.

What is generally in the middle of the two channels, but not hard panned to left and right? Secondary depth cues, like reverb and room ambience.

Crossfeed doesn’t enhance spatiality, it reveals the spatial cues to people who can be distracted by extreme contrasts. Crossfeed doesn't create the spatiality. The spatial cues are in the track all along; they just can’t be heard by people who are not good at parsing big contrasts. It’s purely a subjective thing, dependent on how an individual hears and organizes sound in his head.
 
Oct 11, 2022 at 10:33 AM Post #2,018 of 2,192
One other analogy I just thought of...

I once attended an organizational meeting of a science fiction group. They put on cons related to sci-fi tv shows and books. It was a very unusual group of people. They had more in common than just science fiction, but that wasn't apparent at first glance. There was a guy in the group who was quite odd, but brilliant. I heard him talking and asked someone about him. They told me that his IQ was off the charts. I chatted with him a little and realized that he always directed the conversation- you reacted to what he said, not the other way around. So I decided to try throwing a 90 degree turn into the conversation and see what he did. He was talking about how some invention was created, and I grabbed a little side detail of the story he was telling and asked him a question about it. He stopped dead and his eyes went wide open like his brain had reset. He quickly brushed aside the sidetrack and went back to his story again. I threw in another sidetrack. He stopped dead and stared into space again. He was incapable of flowing with a conversation. He had a pathway in his mind that was the direction he was used to going, and if you took him off that train of thought, he went blank and couldn't say anything at all.

Some people are photosensitive. If there is a blinking light, they zone out like a zombie. Other people are hyper-sensitive to certain kinds of textures or colors. It makes sense that excessive channel separation might be a similar kind of blind spot for some listeners. Reduce the sounds coming from competing directions, and they are able to hear again. It isn't a matter of physical sound so much as psychoacoustics. All crossfeed really does is reduce channel separation. But for some people, that can change the way they perceive other, unrelated aspects of the sound dramatically.
 
Oct 11, 2022 at 1:19 PM Post #2,019 of 2,192
A 100Hz LPF filters everything above 100Hz, not just ILD or some other factor. Likewise, crossfeed is crossfeeding everything (by a determined amount) below a set threshold, not just ILD but all the sound which includes all the timing, spectral and other information used by our perception.
Filters filter one channel without knowledge of the other channel. Since ILD is a property of how the channels differ from each other, it doesn't "exist" for the filter. Nor do any other spatial cues that are based on channel differences, such as ITD or ISD.
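A quick way to check this claim: apply the same linear filter to each channel independently and the inter-channel level ratio is untouched, because per-channel processing scales both channels alike. (The 8-tap moving-average filter and the 6 dB test signal below are just illustrative choices.)

```python
import numpy as np

def ild_db(left, right):
    """Inter-channel level difference as an RMS ratio in dB."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms(left) / rms(right))

def per_channel_lowpass(x):
    """A filter that sees one channel at a time (8-tap moving average)."""
    return np.convolve(x, np.ones(8) / 8.0, mode="same")

rng = np.random.default_rng(0)
left = rng.standard_normal(4096)
right = 0.5 * left  # a 6 dB inter-channel level difference

before = ild_db(left, right)
after = ild_db(per_channel_lowpass(left), per_channel_lowpass(right))
# 'after' equals 'before': a filter applied per channel cannot alter
# the ILD; only cross-channel processing (like crossfeed) can.
```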

What mono signal?
Stereo sound in L/R form can be transformed into M/S form (Mid/Side), where Mid is "centered mono" (left and right channels are the same) and Side is the difference of the left and right channels. Since Mid/Side processing is common in music production/mixing, I thought you'd be familiar with these concepts. No wonder my analysis of the binaural recording seemed to go over your head. If you are unfamiliar with this, you can study it for example here:

https://www.izotope.com/en/learn/what-is-midside-processing.html

We’re dealing with stereo, so a mono signal is a signal which only occurs in either the left or right channel and obviously crossfeed does change that. If you’re talking about the perception of a sound in the phantom centre, then that’s a dual mono signal and summing them together does change it to an extent (it increases at least the level).
You are talking about mono in a "mixing" context. Mixers work like that. There are pan laws and whatnot, but elsewhere mono is a simpler concept. In this (stereo consumer audio) context it is just all the stuff that is the same in the left and right channels, the "M" channel. In a mixing context we can have a mono track panned hard (100 %) left, but in a crossfeed context this is not at all a mono sound, because you can't use a mono playback system to indicate that the left channel has sound while the right channel is silent. You need a stereo playback system for that. It is stereo sound that was created by hard panning a mono sound. The whole point of panning mono tracks in mixing is to create stereo from mono!
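The M/S decomposition being discussed is just a sum/difference rotation of the L/R pair; a minimal sketch:

```python
import numpy as np

def lr_to_ms(left, right):
    """Rotate L/R into Mid/Side: Mid is what the channels share,
    Side is how they differ (where the inter-channel cues live)."""
    return 0.5 * (left + right), 0.5 * (left - right)

def ms_to_lr(mid, side):
    """Invert the rotation: L = M + S, R = M - S."""
    return mid + side, mid - side

l = np.array([1.0, 0.5, -0.25])
r = np.array([0.2, -0.5, 0.75])
m, s = lr_to_ms(l, r)
l2, r2 = ms_to_lr(m, s)  # round-trips back to the original L/R

# A "centered mono" signal (L == R) has Side == 0, while a mono track
# hard panned fully to one side has |Mid| == |Side| at every sample.
```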

Furthermore, in most cases even a sound in the phantom centre is likely to have stereo reverb (artificial or acoustic), variations between the left and right channels and therefore the potential for spectral and timing/phase issues.
Yes, you are right.

There are always objective and subjective elements, the trick is understanding which is which and not making false assertions about the former based on the latter. This case is tricky because we’re talking about subjective responses which don’t have precise definitions/descriptions and which vary considerably between different individuals.
Yes, but we can't ignore the fundamental problem of spatiality created for speakers in headphone listening. Back in the day, TV series were produced for 4:3 screens. Then came 16:9 TVs, and people watched 4:3 shows stretched on their 16:9 screens. To keep the original aspect ratio and picture shape, you need to add "black bars" on the left and right. Luckily TVs had that option, even if many didn't "like" it because of the black bars. Crossfeed is a similar thing, just for sound: I want to experience the spatiality at the "scale" it was created for with speakers, not at a scale of excessive spatiality.

No you can’t because you are wrong. With speakers the left channel does not “leak” acoustically to the right ear! What actually happens is that the signal from the left speaker reflects off the left and right walls of the listening environment, so now we have a mixture of direct and reflected sound with different timing and spectral content.

The sound of course reflects off many other surfaces too: the floor, the ceiling, the back wall, etc. These reflections can ruin speaker spatiality if the room is acoustically bad.

Anyway, NONE of this happens with headphones, BUT if you use crossfeed, you simulate part of it: the direct sound from the speakers. It is like having the speakers in an anechoic chamber. No reflections! Only direct sound! I am not wrong, because I don't make the claim you say is wrong! I am making a claim that is right.

Some of the direct sound and sound reflected off the left wall reaches your right ear but is further affected by your skull and pinna (attenuated and spectrally altered); the reflections from the right wall do not have to pass through your skull to reach your right ear but are affected by your right pinna. What we actually get is very significantly different from just crossfeed, so you can’t just keep repeating “Same happens with speakers.”!
That's complete semantic nitpicking!! I have said crossfeed simulates ONLY the acoustic crossfeed of the direct sound, and indeed that DOES happen. Of course the lack of the reflections AFFECTS the sound, but the same happens without crossfeed! The lack of room acoustics is not a crossfeed problem. It is a headphone problem! Crossfeed solves the lack of direct-sound acoustic crossfeed.

No, the result is not similar it’s very different as explained above and as you already know but are ignoring!
Still less different than not using crossfeed at all. You think you can only do things if you can do them 100 % perfectly. I think a 1 % improvement is a 1 % improvement. That is our fundamental philosophical difference.

But you’re not “fixing only one problem” because you are not only crossfeeding ILD, you’re crossfeeding all the signal below the threshold and by fixing one problem you’re making other factors/considerations worse. 4+4+4=8 if you ignore/dismiss that last “+4”, which I don’t really notice and doesn’t affect my enjoyment anyway!
The pros easily outweigh the cons for me. If it were the other way around, obviously I wouldn't like crossfeed. To me, headphone sound as it is is so wrong that sound which is still wrong, but less wrong, is a huge improvement. Maybe you can explain to me what exactly these bad things are (that aren't bad without crossfeed) that I do not notice. Again, I have listened to crossfeed for thousands of hours. If there is something to notice, it must be something really hard to notice!

But what if it’s not the most harmful one? What if all the other factors combined, which you’re damaging by fixing that one problem, are more harmful? What if you don’t find that problem you fixed to be that harmful a problem to start with?
Such questions could be asked about everything in life. I have thought a lot about what crossfeed does to the music, and I just don't believe in the harmful things. If there is something, it must be very minor. Frankly, I think you fear "damage" too much. You don't trust crossfeed. To me, listening to music mixed for speakers on headphones is the damage, and using crossfeed makes that damage less harmful to enjoyment.

You have a particular perception and you’ve invented an idea/theory that explains it by effectively dismissing/ignoring everything that your perception isn’t consciously aware of and you don’t believe is harmful.
So people should not invent ideas/theories? Again, I am NOT ignoring anything. I have just concluded those things are insignificant. You keep touting these ignored things, but you have zero theories about how they ruin things in crossfeed. It is like saying mankind can't go to Mars without taking into account the mating habits of unicorns, but not explaining how the mating habits of unicorns affect space travel to Mars.

If your perception were the same as everyone else’s then maybe you’d be on to something but clearly it isn’t. If your theory of solving the most “harmful” problem and being closer to ideal were correct, then why, after being around for 50 years or more, don’t we see it as standard or at least as an option on every headphone device, especially as it has the potential to be a money earner? It’s never taken off and science knows why but you dismiss this too and instead falsely assert it’s due to training (or previously ignorance or idiocy).
Crossfeed is not a "standard" in every device, but it hasn't gone away either. In my experience crossfeed is insanely difficult to sell to people, because it requires an understanding of human spatial hearing which most people don't have, and for a novice the benefits of crossfeed can be difficult to figure out. As it kills superstereo, many people think it makes the sound duller and more mono and removes detail. I don't blame those people, because it takes time to learn to appreciate crossfeed. I can't "sell" crossfeed even to you, so how could I sell it to someone who understands nothing about spatiality and audio?

It appears you’ve fallen into the same logical trap so many audiophiles do with other aspects of audio. They have a perception, find or invent explanations that support it and ignore or dismiss anything to the contrary. Typically you do not fall into that trap, unless it includes the letters “ILD”!!

G
I am happy in this trap...
 
Oct 11, 2022 at 1:51 PM Post #2,020 of 2,192
That’s interesting because in many respects, what I experience is almost the exact opposite of what you describe.

Using the example of say an orchestral recording, what I experience with speakers vs cans is analogous to listening to the orchestra from an ideal listening position, say 15-20m away from the orchestra, while putting on the cans is like suddenly jumping forward towards the orchestra, to a position roughly equivalent to the conductor but even further forward.
It is the same for me.

In the real scenario, the orchestra appears much wider and has a lower ratio of reverb to direct sound, but of course we aren't stretching the width of everything, just the soundstage. The sound sources (instruments) aren't stretched, they're just more separated within a wider soundstage and appear even more distinct due to the lower reverb ratio, which also reduces depth. This is very similar to the effect of wearing cans and is why some/many engineers use cans when recording, because it's easier to notice details/faults that may be masked or partially concealed by reverb, and it's easier to identify where (which instrument or mic) the detail/fault is happening. This perception seems to be almost the exact opposite of yours. Instead of more separation and more distinct positioning, you seem to experience a "blurring" effect.
I was unable to paint the picture the way I wanted. "Blurring" is not right. "Fractured" and "pointy" are correct. Sharpness.
 
Oct 11, 2022 at 3:35 PM Post #2,021 of 2,192
filters filter one channel without the knowledge of other channels.
Even a stereo filter?
Stereo sound in L/R form can be transformed into M/S form (Mid/Side), where Mid is "centered mono" (left and right channels are the same) and Side is the difference of the left and right channels. Since Mid/Side processing is common in music production/mixing, I thought you'd be familiar with these concepts.
Does a consumer stereo setup have 2 speakers, a left and a right, or does it have 3, a mid and 2 sides out of phase with each other? What about headphones? I thought you’d be familiar with the concepts of a consumer stereo setup.
In this (stereo consumer audio) context it is just all the stuff that is the same for left and right channels, the "M" channel.
There is no “M” channel in stereo consumer audio, just a left and a right.
Yes, but we can't ignore the fundamental problem of listening to spatiality created for speakers on headphones.
Yet hundreds of millions of consumers have for decades.
Luckily TV had that property, even if many didn't "like" it because of the black bars; crossfeed is a similar thing, just for sound.
Of course it’s not a similar thing. A TV does not crossfeed the left side of the image to the right side and vice versa. The TV analogy would be simply reducing the panning width.
BUT if you use crossfeed, you simulate part of it, the part of direct sound from speakers. It is like having the speakers in an anechoic chamber.
No it is not. A HRTF is like having the speakers in an anechoic chamber, crossfeed isn’t.
I am not wrong because I don't make the claim you say is wrong!
But you just have!!
I am making a claim that is right.
No, crossfeed is not a HRTF!
That's complete semantic nitpicking!!
So HRTFs are just “complete semantic nitpicking” and the development and ongoing research is a waste of time. And you wonder why I think you are wrong?
I have said crossfeed simulates ONLY acoustic crossfeed of direct sound and indeed that DOES happen.
Which is false because crossfeed does not “simulate ONLY acoustic crossfeed”, it ALSO crossfeeds all the other factors, which you are dismissing!
Crossfeed solves the lack of direct sound acoustic crossfeed problem.
At the expense of causing other problems!
I think 1 % improvement is an 1 % improvement. That is our fundamental philosophical difference.
Exactly! I think a 1% improvement is only a 1% improvement if there isn’t at the same time a 1% or greater degradation. Simple math, 1-1=0 or 1-2=-1. For you, 1-1=1 because the “-1” part is nitpicking!
Again, I am NOT ignoring anything. I have just concluded those things insignificant.
That’s a contradiction! You have (falsely) concluded those things are insignificant and therefore you ignore them! But “those things” are all the things that crossfeed doesn’t account for and HRTFs (+ reverb) do. Those things define the difference between crossfeed and HRTFs!
You keep touting these ignored things, but you have zero theories how they ruin things in crossfeed.
I’ve iterated them countless times but you ignore it! Just closing your eyes and sticking your fingers in your ears does not mean something ceases to exist, at least not in science!
To my experience crossfeed is insanely difficult to sell to people, because it requires understanding of human spatial hearing which most people don't have and for a novice the benefits of crossfeed can be difficult to figure out.
Thanks for proving my point! You’ve effectively just claimed that those who don’t experience crossfeed as you do, are ignorant. We’re back where we started and you’re just as wrong now as you were then!
I am happy in this trap...
As are most audiophiles, which is why they get upset if you try to explain there is no audible difference between Ethernet cables, why they come out with nonsense explanations which ignore/dismiss/omit facts, why they accuse others of ignorance and why they ban mention of science in those forums!!!

G
 
Last edited:
Oct 11, 2022 at 6:27 PM Post #2,022 of 2,192
Even a stereo filter?
Yes, as long as the filters operate independently.
Does a consumer stereo setup have 2 speakers, a left and a right, or does it have 3, a mid and 2 sides out of phase with each other? What about headphones? I thought you’d be familiar with the concepts of a consumer stereo setup.

There is no “M” channel in stereo consumer audio, just a left and a right.
This is so ridiculous! Your knowledge of this basic signal processing method is amazingly lacking! M/S is the L/R information in another form! Consumer audio is in L/R form, but you use a simple matrix to turn it into M/S:

M = k * (L + R)
S = k * (L - R)

where k = 1/SQRT(2) = 0.707106781... You go back to L/R form using a similar matrix:

L = k * (M + S)
R = k * (M - S)

So, there are "M" and "S" channels encoded in consumer audio. On vinyl, "M" corresponds to horizontal movement of the needle and "S" to vertical movement. Didn't you even check out the link I gave you? You are too busy telling me I am wrong, but at this point you are embarrassing yourself badly.
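The matrix above is easy to verify in a few lines. A minimal sketch (plain Python, names are my own):

```python
import math

K = 1 / math.sqrt(2)  # normalization: K**2 * 2 == 1, so the round trip is unity gain

def lr_to_ms(l, r):
    """Encode an L/R sample pair into Mid/Side."""
    return K * (l + r), K * (l - r)

def ms_to_lr(m, s):
    """Decode a Mid/Side pair back to L/R."""
    return K * (m + s), K * (m - s)

# A mono signal (L == R) has zero Side energy...
m_mono, s_mono = lr_to_ms(0.25, 0.25)

# ...and an arbitrary stereo pair survives the round trip exactly.
l2, r2 = ms_to_lr(*lr_to_ms(0.8, -0.3))
```

The round trip recovers L and R because the two matrices are each other's inverse with this choice of k; no information is lost, it is just a different coordinate system for the same two channels.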

Yet hundreds of millions of consumers have for decades.
Yes. Most consumers have crappy audio systems anyway.

Of course it’s not a similar thing. A TV does not crossfeed the left side of the image to the right side and vice versa. The TV analogy would be simply reducing the panning width.
My analogy was about the "presentation format". Listening to music mixed for speakers with headphones is like watching shows made in 4:3 format on a 16:9 TV.

You are working hard to find excuses to say I am wrong. It is pathetic at this point, even sad. What a pathetic man you are for not being able to admit I might be right about something. I thought I was weak-minded, but now I am really starting to see who you are. Your knowledge has its limits, and in some areas I can easily surpass them with my background.

No it is not. A HRTF is like having the speakers in an anechoic chamber, crossfeed isn’t.
Crossfeed simulates it coarsely. HRTF simulates it accurately. The former is easy to implement; the latter is hard to implement. That is the difference.

But you just have!!
Apparently you can interpret anything I say a way that makes it wrong.

No, crossfeed is not a HRTF!
When have I said (simple) crossfeed is a HRTF? That would be a very silly claim to make! You just invent things I have said! What is wrong with you? I am losing all the respect I have had toward you.

So HRTFs are just “complete semantic nitpicking” and the development and ongoing research is a waste of time. And you wonder why I think you are wrong?
Huh? When have I made such claims? Of course research into HRTFs is not a waste of time! All I am saying is that I don't personally need methods as advanced as HRTF, because simple crossfeed is good enough. So, if it is good enough, why would I make my life harder and go HRTF?

Which is false because crossfeed does not “simulate ONLY acoustic crossfeed”, it ALSO crossfeeds all the other factors, which you are dismissing!
All the other factors, which you never list or explain. You find a silly excuse for everything I say. The difference between acoustic crossfeed and electronic crossfeed is in the detail of how the crossfeeding happens. With speakers there are aspects such as the radiation pattern of the speakers and the listener's HRTF. An electronic crossfeeder is a straightforward, simple circuit that does the crossfeeding, coarsely simulating HRTF with a low-pass filter on the crossfed signal plus a treble boost on the ipsilateral side.
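To illustrate that "simple circuit" idea, here is a minimal digital crossfeed sketch: each output channel is the direct signal plus a low-passed, attenuated copy of the opposite channel. The cutoff (700 Hz) and level (-6 dB) are illustrative assumptions, not the values of any particular commercial design, and the ipsilateral treble boost is omitted for brevity:

```python
import math

def crossfeed(left, right, fs=44100, cutoff_hz=700.0, gain_db=-6.0):
    """Mix a low-passed, attenuated copy of the opposite channel into each side.

    A one-pole low-pass stands in for the head-shadow filtering;
    all coefficients here are chosen for illustration only.
    """
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)  # one-pole low-pass coefficient
    g = 10.0 ** (gain_db / 20.0)                   # crossfeed level as linear gain
    out_l, out_r = [], []
    lp_l = lp_r = 0.0  # low-pass filter states
    for l, r in zip(left, right):
        lp_l = (1.0 - a) * l + a * lp_l  # low-passed left, fed to the right ear
        lp_r = (1.0 - a) * r + a * lp_r  # low-passed right, fed to the left ear
        out_l.append(l + g * lp_r)
        out_r.append(r + g * lp_l)
    return out_l, out_r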

At the expense of causing other problems!
Problems you never explain or list!

Exactly! I think a 1% improvement is only a 1% improvement if there isn’t at the same time a 1% or greater degradation. Simple math, 1-1=0 or 1-2=-1. For you, 1-1=1 because the “-1” part is nitpicking!
I have said that, to me, the pros outweigh the cons. Since you are finally doing some simple math, you could at least come up with examples that mathematically illustrate what I actually say!

That’s a contradiction! You have (falsely) concluded those things are insignificant and therefore you ignore them! But “those things” are all the things that crossfeed doesn’t account for and HRTFs (+ reverb) do. Those things define the difference between crossfeed and HRTFs!
Yes, there are clear differences between crossfeed and HRTF. It doesn't take much to know that!

I’ve iterated them countless times but you ignore it! Just closing your eyes and sticking your fingers in your ears does not mean something ceases to exist, at least not in science!
Have you? Then I must have missed them, or I have seen them and concluded they are insignificant.

My choices are no crossfeed and crossfeed. I don't have the choice of HRTF. Crossfeed is hands down the better of my options. So, even if it had 1000 problems, I would have to live with them! Not using crossfeed is JUST SO MUCH WORSE!!

Thanks for proving my point! You’ve effectively just claimed that those who don’t experience crossfeed as you do, are ignorant. We’re back where we started and you’re just as wrong now as you were then!
No, I am saying crossfeed is not easy to sell. I am not calling anyone ignorant. It is a fact that most people don't know about spatial hearing, because it is specialist knowledge important only in very specific jobs. Why would construction workers know about spatial hearing? They don't need such knowledge!

As are most audiophiles, which is why they get upset if you try to explain there is no audible difference between Ethernet cables, why they come out with nonsense explanations which ignore/dismiss/omit facts, why they accuse others of ignorance and why they ban mention of science in those forums!!!

G
Except differences between Ethernet cables aren't real, only placebo, while crossfeed changes the sound audibly. There is even a reason to do something to headphone sound, because most music is mixed for speakers.
 
Last edited:
Oct 12, 2022 at 3:26 AM Post #2,023 of 2,192
I think for a lot of amps these days crossfeed is irrelevant because the crosstalk is so high. Probably 3/4 of the amps I have owned had significantly less channel separation than a professional SS amp. Some of the tube ones I jokingly call pseudo mono, seeming about as wide as a vinyl record. It's great if you want your whole mix squished together between 10:00 and 2:00 and don't mind the masking or phase canceling.

The industry seems to have this flavor covered to the degree that it is hard to find true stereo output.
 
Oct 13, 2022 at 8:20 AM Post #2,024 of 2,192
This analogy of cans and sitting in the orchestra is not ideal though, all the sound appears to be occurring inside my head with cans. There is some perception of depth but it’s more squashed and not as coherent as in the real life scenario. My perception of bass and bass balance is not the same either but it is quite a linear relationship and therefore usually fairly predictable. I do occasionally get anomalies with popular genres, say the lead vocal on a different horizontal plane, at the top of my head. Everything is always inside my head with cans though, the only exception is some binaural sound recordings accompanied by video (providing visual cues).
I didn't address this post of yours completely. I need time to avoid burning out from this. The faster I comment on what you say, the lousier my answers are. So, I will take this in small steps.

For me binaural recordings are very effective even without visual help. In some ways it is almost scary. To have the sound completely inside my head, the sound must be almost mono, or it "leaks" outside my head (mostly to the sides). To simplify, comparing the Mid (M) and Side (S) channels:

M >> S ===> inside my head
M > S ===> outside my head, natural feel
M <= S ===> outside my head or at my ears, unnatural annoying feel
M << S ===> sound all over the place (diffuse as hell), unnatural and very annoying feel.

This is for the whole bandwidth, which is dominated by low frequencies because that's where most of the energy is in music. If we concentrate on treble, for example, things change. Above 1600 Hz, M and S being equal in strength means natural diffuse spatiality, because the phase differences get "randomized" due to the dimensions of the head. At higher frequencies it is more convenient to use the L and R channels instead:

L >> R ===> Sound source 90° left and close
L > R ===> Sound source between left and center
L = R ===> inside my head (mono)
L < R ===> Sound source between right and center
L << R ===> Sound source 90° right and close

The difficulty of having forward depth with headphones comes from the fact that L = R for any sound in the center, regardless of distance. Floor and ceiling reflections are helpful, because the time difference between the direct sound and the floor/ceiling reflection grows larger the closer the sound source is. Loud reverberation relative to the direct sound is another trick to indicate more distance.
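The M-vs-S heuristic above can be sketched as a simple energy-ratio check. The dB thresholds below are my own illustrative guesses for where the category boundaries might sit, not values from the post:

```python
import math

def ms_balance(left, right):
    """Return the Mid/Side energy ratio of a stereo signal in dB."""
    mid_e = sum(((l + r) / 2.0) ** 2 for l, r in zip(left, right))
    side_e = sum(((l - r) / 2.0) ** 2 for l, r in zip(left, right))
    if side_e == 0.0:
        return float("inf")   # pure mono: all energy in Mid
    if mid_e == 0.0:
        return float("-inf")  # pure anti-phase: all energy in Side
    return 10.0 * math.log10(mid_e / side_e)

def spatial_feel(ms_db):
    """Map the M/S ratio onto the rough categories above (thresholds illustrative)."""
    if ms_db > 12.0:
        return "inside head (M >> S)"
    if ms_db > 0.0:
        return "outside head, natural (M > S)"
    if ms_db > -12.0:
        return "at the ears, unnatural (M <= S)"
    return "diffuse, very unnatural (M << S)"
```

A mono signal lands in the "inside head" bucket and an anti-phase signal in the "diffuse" bucket, matching the two extremes of the table; everything in between is a judgment call, which is exactly why the thresholds are only guesses.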
 
Oct 13, 2022 at 12:15 PM Post #2,025 of 2,192
It's a subjective preference based on the way you as an individual interpret the sound you hear. It may be that way for you, but other people subjectively interpret sound differently.
 
