To crossfeed or not to crossfeed? That is the question...
Oct 17, 2022 at 5:38 AM Post #2,086 of 2,146
Correlation isn’t necessarily causation.

Spatiality is created by the effect of space on sound. That’s reflections and delays.
 
Oct 17, 2022 at 5:39 AM Post #2,087 of 2,146
When is it in your opinion natural to have sound close to one side of our head?
When someone whispers in your ear, when you're driving, looking ahead, and the passenger talks to you, when an insect flies close to one ear. When you're a musician sitting right next to another musician, when you play an instrument that's to one side of your head (such as a flute, violin or tuba, for example); in fact, many long-term professional musicians of such instruments have serious noise-induced hearing loss/damage in just that one ear.

I'm sure there are a number of other scenarios which occur IRL and are therefore natural. I'm sure most people experience such a scenario at least once, and quite a large number experience it several times or even fairly commonly. Unlike, for example, the bizarre acoustic experience of being in an anechoic chamber, though even then individual responses vary dramatically.
I’m very surprised you couldn’t come up with any IRL scenarios to answer your own question. This further indicates you have a “blind spot” for anything which may falsify your theory or contradict your personal perception!
I guess crossfeed does nasty things and was invented to ruin headphone sound.
Crossfeed was invented to improve HP sound but also "does nasty things", which is why it works better than no crossfeed for some people and worse for others. This is why HRTFs (and then added reverb and head tracking) were invented, to remove/avoid those "nasty things". But then you already know this!
I am satanic for trying to give scientific justification to crossfeed.
Of course not, because there is a scientific justification for crossfeed. However, there is also scientific justification for why it does not work (for many/most people), and it is "satanic" to simply ignore/dismiss this science for no reason other than to defend a nonsense theory. We see this sort of thing commonly in audiophile marketing and you rightly challenge it, but not in this case, when it's your own theory. Take for example perceived sonic differences in cables and the common scientific justification of skin effect, all of which is true/real, providing we ignore/dismiss the science that dictates skin effect doesn't affect audible freqs. Or, I've seen articles and even white papers (by audiophile manufacturers) that were several/many pages long, explaining everything to do with jitter and the problems/distortion it creates, where every single stated measurement and fact was correct but the whole thing was nevertheless invalidated for the intended reader by the omission of just one single fact: that it's all well below audibility. You know all this but somehow can't apply it to your own theory.

G
 
Oct 17, 2022 at 5:51 AM Post #2,088 of 2,146
Your statement is false because IRL we never hear only the direct sound and, even considering only the direct sound, there are several important aspects that crossfeed does not simulate at all, roughly or otherwise. Your answer to this is that these "important aspects" not only are not important, they're irrelevant and should be dismissed. That's nonsense because it contradicts established science. As one single example, if we have a direct sound centrally in front of us and then the exact same sound centrally behind us, the ILD and ITD are zero in both cases. We can tell the difference due to spectral differences caused by the different absorption of the front of the pinnae compared to the back of the pinnae, and other absorption characteristics of the front of the body/skull versus the back. Crossfeed does not simulate any of these differences in any way at all, not even roughly, and you cannot claim they are irrelevant, because your perception relies on them as does everyone else's. Unless you're claiming that in this experiment you wouldn't be able to perceive the different location of the sound in front or behind?
YES, crossfeed doesn't simulate many things!! I admit that! So don't say I don't. Since I admit it, what I say is not false!! Crossfeed can still improve the sound a lot for some people despite not simulating everything and only doing some very simple things! I know because I am one of those people! I have also learned and admitted that what I hear is not what everyone hears. I ADMIT IT!!!! You can't admit that! Instead you base your claims on how you yourself hear crossfeed.

You should admit that crossfeed is BASED on the science of spatial hearing. Very roughly and simply perhaps, but that is the origin. Crossfeed wasn't invented by trying something totally random out of the hat! It was invented by simulating acoustic crossfeed. This is the FACT and you are wrong if you claim otherwise. I also discovered crossfeed by thinking about headphone spatiality the way I was taught in university, realising how problematic excessive ILD can be with headphones. So, it is useless to take the science away from crossfeed. This science isn't taken away by the fact that nowadays we can do things MUCH BETTER with HRTF/head tracking etc. I have always admitted those methods are even better.

Headphones don't have front/behind separation. Crossfeed doesn't "take it away" nor "give it". Also, I know this stuff of course. It was taught to me at university.
 
Oct 17, 2022 at 6:05 AM Post #2,089 of 2,146
Crossfeed doesn’t simulate the things that are responsible for giving sound a feeling of space. Reverbs and digital delays do that. Crossfeed reduces channel separation, which may be desirable if you don’t like ping pong stereo.
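For anyone who hasn't looked at what a typical crossfeed filter actually does, here is a minimal sketch; the cutoff, attenuation and delay values below are illustrative assumptions rather than the settings of any particular implementation:

```python
# Minimal crossfeed sketch: feed an attenuated, low-passed, slightly delayed copy
# of each channel into the opposite channel. Parameter values are illustrative
# assumptions only (real implementations also compensate the direct-path level).
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs, cutoff_hz=700.0, atten_db=-8.0, delay_us=250.0):
    """Return (L, R) with a simple crossfeed applied."""
    b, a = butter(1, cutoff_hz / (fs / 2))       # 1st-order low-pass for the crossfeed path
    gain = 10.0 ** (atten_db / 20.0)             # level of the crossfed signal
    delay = int(round(delay_us * 1e-6 * fs))     # ~250 µs expressed in samples

    def bleed(x):
        y = lfilter(b, a, x) * gain                            # filter + attenuate
        return np.concatenate([np.zeros(delay), y])[:len(x)]   # delay

    return left + bleed(right), right + bleed(left)

# Example: a hard-panned click ends up (weakly, low-passed and ~250 µs late) in the other ear too.
fs = 44100
left = np.zeros(fs); left[1000] = 1.0
right = np.zeros(fs)
L, R = crossfeed(left, right, fs)
```

That really is all it does: no new spatial information is generated, the two existing channels are just partially shared.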
 
Oct 17, 2022 at 6:16 AM Post #2,090 of 2,146
When someone whispers in your ear, when you're driving, looking ahead, and the passenger talks to you, when an insect flies close to one ear. When you're a musician sitting right next to another musician, when you play an instrument that's to one side of your head (such as a flute, violin or tuba, for example); in fact, many long-term professional musicians of such instruments have serious noise-induced hearing loss/damage in just that one ear.
How common are those in music listening? When I go to a music concert, I don't have people whispering in my ear, at least not as part of the music. I don't want insects either when enjoying music. So, in music reproduction, sounds near one ear aren't very relevant or even desired! Also, with speakers it is practically impossible to generate this effect (inside an anechoic chamber it can be done using cross-talk cancellation), and since music is mostly mixed for speakers...

I'm sure there are a number of other scenarios which occur IRL and are therefore natural. I'm sure most people experience such a scenario at least once, and quite a large number experience it several times or even fairly commonly. Unlike, for example, the bizarre acoustic experience of being in an anechoic chamber, though even then individual responses vary dramatically.
I’m very surprised you couldn’t come up with any IRL scenarios to answer your own question. This further indicates you have a “blind spot” for anything which may falsify your theory or contradict your personal perception!
Sure, millions perhaps, but hardly any of them are related to music listening. I can't come up with any related to music listening, but of course I can come up with many non-music-related ones (e.g. when I touch my ear I get a rubbing noise at one ear ==> huge ILD).

Crossfeed doesn’t simulate the things that are responsible for giving sound a feeling of space. Reverbs and digital delays do that. Crossfeed reduces channel separation, which may be desirable if you don’t like ping pong stereo.
Crossfeed helps me to interpret the cues in the recording that give a feeling of space. I have explained the process many many times, but nobody wants to understand.

Crossfeed tells me the sound was intended for speakers and makes sense in that context. Without crossfeed, the implied intent is binaural sound.
 
Oct 17, 2022 at 6:31 AM Post #2,091 of 2,146
I linked an example of a song that had elements hard panned left and right that was clearly mixed for speakers. You said that crossfeed didn’t improve it or add spatiality, I believe.
 
Oct 17, 2022 at 7:11 AM Post #2,092 of 2,146
I linked an example of a song that had elements hard panned left and right that was clearly mixed for speakers. You said that crossfeed didn’t improve it or add spatiality, I believe.
I don't think I said crossfeed didn't improve anything. That song contained very few spatial cues in the mix, so obviously crossfeed can't do much about it. Crossfeed helps me to interpret spatial cues in the recording, but if there aren't many to begin with, there isn't much to interpret. However, even if crossfeed didn't help much with the spatiality, it did make the sound less annoying and fatiguing for me, so there were still benefits.
 
Oct 17, 2022 at 9:37 AM Post #2,093 of 2,146
Crossfeed doesn’t add spatial cues. It simply blends channels. All the spatiality is in the recording itself.
 
Oct 17, 2022 at 10:02 AM Post #2,094 of 2,146
Acoustic crossfeed of direct sound creates ITD of about 250 µs.
No it doesn't. Acoustic crossfeed of a direct sound can result in an ITD of anything from about 0µs to around 800µs. It depends on the horizontal position of the direct sound AND the morphology of the individual's skull and body. Furthermore, ITD is not a single number; it varies non-linearly with frequency, due to diffraction around the skull, by up to about 150µs.
Crossfeed mimics this 250 µs. So there is that similar aspect about ITD.
Exactly, crossfeed mimics 250µs, which is not at all similar to actual ITD. It’s like saying a stopped clock mimics a functioning clock because it’s right twice a day!
I am tired of you twisting things and terms in ways that are always as unfavorable to crossfeed as possible.
No, I am presenting the facts which falsify your theory and explanation of why crossfeed supposedly works.
I think I am much more honest: I admit what crossfeed can't do. I admit its limitation,
Yes, you do admit what crossfeed can't do and its limitations BUT, you then spend innumerable pages trying to explain why those limitations are either just irrelevant to start with or how crossfeed overcomes them, using false/made-up assertions that it mimics or simulates what happens IRL (or with speakers). That is NOT "much more honest", it is far less honest!!
I see the positive and the negative. You want to see only the negative.
That’s not true. I’ve stated that crossfeed works very well for a few people, acceptably well for a group of people and even that I prefer crossfeed in a very limited number of cases.
Crossfeed helps me to interpret the cues in the recording that give a feeling of space.
Generally crossfeed makes it far more difficult for me to interpret the cues in the recording; it gives me a more mono presentation and therefore a lesser feeling of space.
I have explained the process many many times, but nobody wants to understand.
That’s because my perception, the perception of many/most others and loads of scientific evidence (such as HRTFs for example) falsifies your explanation of the process, regardless of how many times you repeat it!!
How common are those in music listening?
Now who's changing the goal posts? We do quite commonly experience large ILD in real life. And, I've already given you examples where we do even with music; anyone who's ever played a flute, violin, tuba, some other instruments or in a closely spaced ensemble. You could add: children being sung to by their mother with one side of their head near her mouth, anyone who's ever listened to a radio on their shoulder or a mobile with the speaker close to one ear, and there are probably other scenarios too. There are various potential real life scenarios (that are not incredible rarities) which falsify your assertion that high ILDs are "unnatural". Again, you're just making up false assertions to justify your "theory"/explanation.

G
 
Oct 17, 2022 at 10:57 AM Post #2,095 of 2,146
The problem isn’t you not acknowledging the limitations of crossfeed, it’s you attributing things to it that are completely unrelated. Spatial cues with headphones are 100% recorded in the mix. Crossfeed doesn’t change that. And speakers create a spatial soundstage that is completely different than narrowing the stereo spread with crossfeed. Yet you keep talking about crossfeed “enhancing” or creating spatiality, and comparing aspects of headphone listening with crossfeed to speakers.
 
Oct 17, 2022 at 11:12 AM Post #2,096 of 2,146
Crossfeed doesn’t add spatial cues. It simply blends channels. All the spatiality is in the recording itself.
That's not correct. The perception of space is a complex system and our HRTF (or whatever parts of it) is always involved in our interpretation. Mixing filtered channels with a delay is very likely to alter that interpretation, be it in a positive or negative way. I get what you're trying to say about space and the room defining it, but even that is hard to keep alive when considering the mono mics all over said room that get mixed together into an album.

About the angle thing before, I'm also not convinced you're right. With walls further away, the angle wouldn't change for the reverb only if both the speaker and the listener were at the same distance from the wall. That's not the case. It's not significant IMO, as just about all other variables of the signal bouncing off the wall will be changed in ways more significant for the brain (we're only talking about a few degrees for the direct sound, so it's unlikely to be a big deal for secondary cues), but I thought I shouldn't let 71 dB get called out only when wrong and never when correct.
 
Oct 17, 2022 at 11:51 AM Post #2,097 of 2,146
No it doesn't. Acoustic crossfeed of a direct sound can result in an ITD of anything from about 0µs to around 800µs. It depends on the horizontal position of the direct sound AND the morphology of the individual's skull and body. Furthermore, ITD is not a single number; it varies non-linearly with frequency, due to diffraction around the skull, by up to about 150µs.
You know I mean the situation where the speakers are at a ±30° angle and the listener doesn't turn their head, but your style is to create a scenario where what I said doesn't apply. If the speakers move around you or the listener turns his/her head then yes, but I wasn't talking about such situations. Crossfeed generates the ITD at frequencies up to about 800 Hz. Below that frequency ITD is quite constant. Above 800 Hz the importance of ITD goes away with frequency and ILD becomes more important. 800-1600 Hz is the transition band.

Exactly, crossfeed mimics 250µs, which is not at all similar to actual ITD. It’s like saying a stopped clock mimics a functioning clock because it’s right twice a day!
At all?? Don't be so difficult. You know 250 µs is a proper approximation of the ITD. My wide-crossfeed uses 640 µs, but that's another story. Not using crossfeed doesn't give ANY ITD because it doesn't crossfeed anything in any way! Crossfeed gives the 250 µs, which is close to acoustic crossfeed.
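As a rough sanity check of that figure, the simple spherical-head (Woodworth) approximation can be evaluated directly; the head radius and speed of sound below are assumed round numbers, not measurements:

```python
# Woodworth spherical-head approximation of the ITD for a far-field source:
#   ITD ~ (a / c) * (theta + sin(theta)),  theta = azimuth from straight ahead.
# a = 8.75 cm head radius and c = 343 m/s are assumed round figures.
import math

def itd_us(azimuth_deg, a=0.0875, c=343.0):
    theta = math.radians(azimuth_deg)
    return (a / c) * (theta + math.sin(theta)) * 1e6

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_us(az):4.0f} us")
# ~0 µs straight ahead, ~260 µs at ±30° (the usual stereo speaker angle), ~650 µs at 90°;
# measured ITDs for sources at the side can run somewhat higher still.
```

So ~250 µs is in the right ballpark for ±30° speakers specifically, while sources further to the side correspond to considerably larger values.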

No, I am presenting the facts which falsify your theory and explanation of why crossfeed supposedly works.
You are nitpicking. Your explanations would make sense if I claimed crossfeed does everything perfectly, but I don't claim that. To some of us crossfeed is able to improve headphone sound a lot DESPITE being VERY imperfect, AND there is actually a scientific explanation (the one that led to the original idea of crossfeed: simulating the acoustic crossfeed of speakers) as to why this is the case. I have tried to explain this for 5 years now and I am fed up with you and the other crossfeed-skeptics here.

Yes, you do admit what crossfeed can't do and its limitations BUT, you then spend innumerable pages trying to explain why those limitations are either just irrelevant to start with or how crossfeed overcomes them, using false/made-up assertions that it mimics or simulates what happens IRL (or with speakers). That is NOT "much more honest", it is far less honest!!
They are irrelevant for me to enjoy music! Did it ever occur to you that people might enjoy headphone sound without using state of the art HRTF processing? To me crossfeed is good enough, but headphone sound as it is is NOT good enough.

Also, crossfeed does simulate certain things. A simulation can be very coarse. There is no threshold for how accurate a simulation has to be in order to be called a simulation. So, you are using semantics to discredit me and I don't like that AT ALL!! I am very honest here.

That’s not true
You say that to everything I say. Maybe I am a machine that generates untrue claims? So funny. Who takes you seriously at this point? Some fools maybe...

I've stated that crossfeed works very well for a few people, acceptably well for a group of people and even that I prefer crossfeed in a very limited number of cases.
Yes you have, but the next thing you say is that my scientific explanations are false, and they are not.

Generally crossfeed makes it far more difficult for me to interpret the cues in the recording; it gives me a more mono presentation and therefore a lesser feeling of space.
That is interesting. Thanks to you I now know that people like you exist and that my way of hearing crossfeed is not common to all people.

That’s because my perception, the perception of many/most others and loads of scientific evidence (such as HRTFs for example) falsifies your explanation of the process, regardless of how many times you repeat it!!
No. Scientific evidence tells us we can do things even better than crossfeed. We both agree HRTF is better than crossfeed, but my claim is crossfeed is better than nothing (and good enough for me to enjoy headphone sound).

Now who's changing the goal posts? We do quite commonly experience large ILD in real life. And, I've already given you examples where we do even with music; anyone who's ever played a flute, violin, tuba, some other instruments or in a closely spaced ensemble. You could add: children being sung to by their mother with one side of their head near her mouth, anyone who's ever listened to a radio on their shoulder or a mobile with the speaker close to one ear, and there are probably other scenarios too. There are various potential real life scenarios (that are not incredible rarities) which falsify your assertion that high ILDs are "unnatural". Again, you're just making up false assertions to justify your "theory"/explanation.

G
How on earth should I hear a flute or tuba at my ear when I listen to a recording of Elgar's 2nd Symphony? I am not supposed to play in the orchestra! I am supposed to sit in the audience 15 meters from the orchestra! Large ILD is not unnatural in all contexts, but it is unnatural in the context of music listening. There is also the matter of spectrum. When we hear large ILD in real life, it tends to be at mid/high frequencies (insects flying by, a mother singing a lullaby). Low frequencies generally require large vibrating surfaces, and if such objects are near the head, it is near field*, meaning the ILD isn't that large (a rough calculation below illustrates this). In fact, (closed) headphones are the best way to generate large ILD at low frequencies, and that's also the danger and the motivation to use crossfeed.

* The vibrating surface is so big that even the nearer ear isn't that near to it on average; only a small part of the surface is very near. That limits the ILD.
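To put a rough number on why a large ILD basically requires the source to be right at one ear, ignore head shadow (which is small at low frequencies anyway) and just compare the inverse-distance levels at the two ears for an ideal point source; the ~23 cm extra path to the far ear is an assumed round figure:

```python
# Rough ILD from distance alone (inverse-distance law, head shadow ignored):
# a point source at distance d from the near ear, with an assumed ~23 cm longer
# path around the head to the far ear. Illustrative numbers only.
import math

EXTRA_PATH_M = 0.23  # assumed extra path length to the far ear

def distance_ild_db(d_near_m):
    return 20.0 * math.log10((d_near_m + EXTRA_PATH_M) / d_near_m)

for d in (0.05, 0.2, 1.0, 3.0):
    print(f"source {d:4.2f} m from the near ear -> ~{distance_ild_db(d):4.1f} dB ILD from distance alone")
# ~15 dB for a source 5 cm away, but under 2 dB already at 1 m: large low-frequency
# ILDs essentially require the source to be at one ear (or headphones).
```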
 
Oct 17, 2022 at 1:40 PM Post #2,098 of 2,146
Crossfeed generates the ITD at frequencies up to about 800 Hz. Below that frequency ITD is quite constant.
Is this what you call “quite constant”?
[Attached image: ITD plotted as a function of frequency, from the paper cited below]

Taken from “On the variation of interaural time differences with frequency” - Victor Benichoux, Marc Rebillat, Romain Brette, JASA, 2016.
Maybe I am a machine that generates untrue claims? So funny. Who takes you seriously at this point? Some fools maybe...
Clearly, from the data in the peer reviewed paper above, ITD does vary by frequency below 800Hz, by as much as 200μs. Your claim of it being constant is untrue, a fixed 250μs delay does NOT simulate what actually occurs so you are apparently an untrue claim generating machine and your insult applies to yourself!!
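For a rough idea of the size of that variation, the classic rigid-sphere result (a low-frequency ITD of roughly 3(a/c)·sin(theta) versus roughly 2(a/c)·sin(theta) well above the transition region, which is how I recall Kuhn's sphere model) can be evaluated directly; the head radius and speed of sound below are assumed round figures:

```python
# Rough rigid-sphere account of why ITD is larger at low frequencies than at high
# frequencies (diffraction around the head). Low-frequency limit ~ 3*(a/c)*sin(theta),
# high-frequency limit ~ 2*(a/c)*sin(theta); a = 8.75 cm, c = 343 m/s are round figures.
import math

a, c = 0.0875, 343.0
for az in (30, 60, 90):
    s = math.sin(math.radians(az))
    lo = 3 * (a / c) * s * 1e6   # ITD well below ~800 Hz
    hi = 2 * (a / c) * s * 1e6   # ITD well above the transition region
    print(f"{az:2d} deg: low-freq {lo:3.0f} us, high-freq {hi:3.0f} us, difference {lo - hi:3.0f} us")
# At 90 degrees the low/high difference is on the order of 250 µs, i.e. comparable to
# the entire fixed 250 µs delay that crossfeed applies.
```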

I can’t be bothered with the rest of it, it’s just more of the same.

G
 
Oct 17, 2022 at 2:14 PM Post #2,099 of 2,146
Is this what you call “quite constant”?

Taken from “On the variation of interaural time differences with frequency” - Victor Benichoux, Marc Rebillat, Romain Brette, JASA, 2016.

Clearly, from the data in the peer reviewed paper above, ITD does vary by frequency below 800Hz, by as much as 200μs. Your claim of it being constant is untrue, a fixed 250μs delay does NOT simulate what actually occurs so you are apparently an untrue claim generating machine and your insult applies to yourself!!

I can’t be bothered with the rest of it, it’s just more of the same.

G
It is less constant than what I have understood it to be. I need to study this paper. Thanks for the link!
 
Oct 17, 2022 at 2:22 PM Post #2,100 of 2,146
That's not correct.

About the angle thing before, I'm also not convinced you're right. With walls further away, the angle wouldn't change for the reverb
If you notice, I said "Crossfeed does not add SPATIAL CUES. It adds nothing that will create spatiality." If it triggers something in someone's brain and makes them hear something they were tuning out before, that is part of their perception, not the actual sound being produced, nor is it necessarily a universal reaction to crossfeed.

You are correct though that delays and reverbs do add synthetic spatial cues. But simply reducing channel separation doesn't introduce a delay. Creating a coherent artificial ambient space is very complex. I bet Gregorio could speak for many pages talking about all the elements that go into creating a realistic sense of space in a mix with delays and reverbs.
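To make the contrast concrete, here is a minimal sketch of the kind of thing delay-based processing does that plain channel blending doesn't: a short inter-channel delay pulls a mono source to one side (precedence effect), and a few crude early reflections start to suggest a room. All delay times and gains below are illustrative assumptions, not a mixing recipe:

```python
# Sketch of delay-based "synthetic space": a mono source panned by a short
# inter-channel delay (precedence effect), plus a few crude early reflections.
import numpy as np

def delayed(x, seconds, fs):
    n = int(round(seconds * fs))
    return np.concatenate([np.zeros(n), x])[:len(x)]

fs = 44100
mono = np.random.randn(fs) * np.hanning(fs)   # one second of shaped noise as a stand-in source

left = mono.copy()                            # direct sound reaches the left ear first...
right = delayed(mono, 0.0007, fs)             # ...and ~0.7 ms later at the right: the image pulls left

# a few early reflections at different delays and levels hint at room boundaries
for t, g in [(0.012, 0.35), (0.019, 0.25), (0.027, 0.18)]:
    left += delayed(mono, t, fs) * g
    right += delayed(mono, t * 1.15, fs) * g  # slightly different arrival time per ear

# Crossfeed, by contrast, only mixes the two channels that already exist, adding no
# pattern of new arrivals beyond its single fixed inter-channel delay.
```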

As for angles, we weren't talking about reflections because headphones don't have reflections. The comment I was replying to said that as the room size got larger, the angle of the direct sound from the speakers changes, but crossfeed keeps that angle consistent. That isn't true because the triangle of the speakers and listening position scales up to maintain the same angles. And crossfeed does not change the angle. It's still 90 degrees off the sides of the head.

The problem here was that I answered one comment and then it was replied to with alterations to the context of the original post I was replying to. (A "yes but" was added after I answered...) When the conversation wiggles and morphs like that, it's difficult to follow. It would be easier if there was an acknowledgment of my point and then the next point was introduced as a new argument, but that isn't how things work around here sometimes. No acknowledgments... one point morphs into another slightly different one when it's proven wrong. Set context on "blend".
 
