To crossfeed or not to crossfeed? That is the question...
Oct 7, 2019 at 7:24 PM Post #1,186 of 2,146
crossfed signals do NOT mimic HRTFs because they do not account for ITD or the colouration of the crossfed signal that is required by an HRTF.

Of course, they do! Crossfeeds mimic ITD and some of them offer an option of colorizing (equalizing) the crossfed signal. Spend some time not arguing against crossfeed plugins but actually learning how they are made and how they work.
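For reference, the core of a typical crossfeed plugin really is this small: each channel receives a delayed, low-pass-filtered copy of the opposite channel, the delay standing in for ITD and the filter for head shadowing (ILD). A minimal sketch, where the cutoff, delay and gain defaults are illustrative values, not taken from any particular plugin:

```python
import numpy as np

def crossfeed(left, right, sr=44100, cutoff_hz=700.0, delay_ms=0.3, gain=0.5):
    """Feed a delayed, low-pass-filtered copy of each channel into the
    opposite one: the delay mimics ITD, the filter mimics head shadowing."""
    delay = int(sr * delay_ms / 1000)            # ITD in samples
    alpha = np.exp(-2 * np.pi * cutoff_hz / sr)  # one-pole low-pass coefficient

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = alpha * acc + (1 - alpha) * s
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_l = left + gain * delayed(lowpass(right))
    out_r = right + gain * delayed(lowpass(left))
    return out_l, out_r
```

Feeding a hard-panned-left signal through this produces an attenuated, darkened copy in the right channel, which is exactly the "acoustic leakage" a listener gets from stereo speakers.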
 
Oct 7, 2019 at 7:28 PM Post #1,187 of 2,146
Ironmine, crossfeed with reverb doesn't come much closer to the effect of a room on speakers. To get spatiality, you need physical space. At least until the technology for stuff like the Smyth Realizer gets better.

I don't like reverbs, I don't use them. I want to hear the original reverberation in the recording. Why intentionally mix it with the reverberation of a listening room?

But I guess, as computational power grows in the future, any physical space and any processes happening in it can eventually be simulated with a great degree of precision and detail.
 
Oct 7, 2019 at 9:05 PM Post #1,188 of 2,146
There are times when reverb and other kinds of delay are useful. In my multichannel speaker system, I use a hall ambience DSP based on the Vienna Sofiensaal when I play orchestral music recorded in dry studio conditions. For instance the recordings of Toscanini and the NBC Symphony in Studio 8H. The records are dry as dust and in aggressive mono. But when I wrap the ambience around it, it almost sounds like stereo recorded in a good concert hall.

I think the sound around the sound is as important as the sound itself. But I can understand how people who are used to headphones might not believe that.
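A hall-ambience DSP of this kind is usually built around convolution with an impulse response measured in the hall. A minimal sketch, using a synthetic decaying-noise IR as a stand-in (a real processor would load a measured hall response and use a far more refined wet/dry architecture):

```python
import numpy as np

def convolution_reverb(dry, ir, wet=0.35):
    """Convolve the dry signal with a room impulse response and
    blend the result with the untouched dry signal."""
    wet_sig = np.convolve(dry, ir)[:len(dry)]
    return (1.0 - wet) * dry + wet * wet_sig

sr = 44100
# Synthetic stand-in IR: exponentially decaying noise (a real ambience
# processor would load a measured hall response here instead)
t = np.arange(int(0.05 * sr)) / sr
rng = np.random.default_rng(0)
ir = rng.standard_normal(len(t)) * np.exp(-60.0 * t)
ir /= np.sqrt(np.sum(ir ** 2))   # normalise IR energy

dry = np.sin(2 * np.pi * 440 * np.arange(int(0.2 * sr)) / sr)
wet = convolution_reverb(dry, ir)
```

The convolution smears each dry sample across the IR's decay, which is how a dry mono recording picks up a hall's sense of space.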
 
Oct 11, 2019 at 9:12 AM Post #1,189 of 2,146
1. Yes, reverberation is for the most part diffuse spatial information without direction.
[1a] When I create the spatial information for the instruments of my own music, I mix the direct sound which has the position information (angle of sound) with reverberation which doesn't really have left-right information, but gives distance information ...
[1b] I think our spatial hearing is actually quite good at dealing with multiple spatial scenarios simultaneously. Imagine listening to your friend speaking during a thunderstorm. The sounds of the thunderstorm have totally different spatiality than the speech of your friend, but there is nothing unnatural about the situation.
[1c] Spatial hearing can easily deal with the situation and the fact that your friend's voice doesn't echo all over the neighborhood.
[1d] As long as each spatiality itself makes sense, mixing multiple different ones isn't a problem in my opinion.
[1e] The trick is to take the spatiality of each individual track as far as possible, but not too far: to have as "wide" a spatiality as possible while avoiding excessive spatiality. I believe this is the way to create "omnistereophonic" recordings, recordings that work for both speakers and headphones without crossfeed.
2. Crossfeed is kind of what you get when you approximate HRTF for ~30° angled sounds with a first order low pass filter. ...
[3] I don't ignore the fact of simultaneous acoustics.
[3a] I'm just confident human hearing is surprisingly good at dealing with them because it happens in real life (thunderstorms etc.)

1. No, it is NOT! When we have a sound in an acoustic environment we get a set of initial reflections, which are NOT diffuse and DO have considerable directional information. The properties of these initial reflections (often referred to as Early Reflections, ERs); their timing, freq response, direction and level relative to the direct sound are VITAL to our perception of "spatiality", in fact they largely define it! These ERs then hit other surfaces, are reflected again and the system becomes semi-random/chaotic and this portion of the reverb is diffuse with little direction.
1a. Then you're doing it wrong, because reverb DOES have left/right information! Why do you think stereo (and surround) reverb units/plugins exist in the first place? If there were no left/right positional information then all reverb units would be mono!

1b. This is nonsense! The sounds of the thunderstorm would NOT have "totally different spatiality"! Obviously you, your friend and the thunderstorm would be in the same acoustic environment and all of you would therefore have somewhat similar "spatiality" (as defined by that acoustic environment). However, the individual parameters of the spatial information would be somewhat different due to the different relative positions of you, your friend and the thunderstorm within that acoustic environment.
1c. Of course it can, because our hearing deals constantly with sounds in different relative positions within an acoustic environment.
1d. With the vast majority of non-acoustic music genres we're not talking about different sound sources in different positions within a single acoustic environment but about different, independent acoustic environments at the same time. And of course our hearing never has to deal with that because it's a physical impossibility!
1e. You admit to knowing pretty much nothing about sound/music engineering, have stated that you are "not saying much what engineers are and [should] do", yet here you are, yet again (!), doing exactly that?!

2. Crossfeed obviously changes the time, level and direction of the ERs which are so vital to spatial hearing.

3. But that's exactly what you've just done! Your analogy does exactly that, your analogy is in ONE acoustic environment, so ignores different simultaneous environments (which is the case with non-acoustic genres) and it also ignores different simultaneous acoustic perspectives within that single environment (which is the case with acoustic genres such as classical). So, that covers all the acoustic and non-acoustic music genres, please tell me what other genres exist besides these two (to which your analogy would be applicable)?!!
3a. Which again is nonsense, unless you have a totally bizarre "real life". I.e. two sets of ears which you can simultaneously place in two different rooms (acoustic spaces) or which you can simultaneously place, say, both 1m and 30m away from your friend speaking (in the same acoustic environment).

You state you studied at university and have "thought about this a lot" but what you actually write indicates little/no education and that you've thought about this for no more than a few seconds!

G
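As an aside, the timing side of the ER argument above is easy to quantify: the image-source method mirrors the source across a reflecting boundary, and the extra path length of the mirrored source gives the reflection's arrival delay relative to the direct sound. A sketch for a single wall in 2D (the geometry values in the example are purely illustrative):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def reflection_delay_ms(src, listener, wall_x):
    """Arrival delay (ms) of the first-order reflection off a wall at
    x = wall_x, relative to the direct sound (image-source method)."""
    image = (2.0 * wall_x - src[0], src[1])  # source mirrored across the wall
    direct = math.dist(src, listener)
    reflected = math.dist(image, listener)
    return (reflected - direct) / SPEED_OF_SOUND * 1000.0

# Source 1 m from a side wall, listener 2 m further along the same line:
# the reflected path is 2 m longer, arriving ~5.8 ms after the direct sound.
print(round(reflection_delay_ms((1.0, 0.0), (3.0, 0.0), 0.0), 1))  # 5.8
```

Delays in this few-millisecond range, together with the reflection's incoming direction, are exactly the ER parameters being argued over here.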
 
Oct 11, 2019 at 9:53 AM Post #1,190 of 2,146
we have to consider several playback models, first to try and understand them properly, and then to see if what applies to one model can apply to the others:
we have the headphone's usual playback.
we have the sound field from speakers in a room as perceived by a dude in that room.
and for crossfeed, the model consists, in the best case scenario, of 2 speakers, no room, no floor, no listener's body, a head with a shape that acts like a one-band EQ, and when the head turns, the speakers follow. you know, a more natural spatiality. :alien:

deciding that crossfeed is an improvement over default headphone playback is a subjective opinion. and one I share. even more so when there are indeed VSTs that bring more than just basic crossfeed to the table and can feel even better to some listeners with the right settings. but subjective it is. the entire notion of perceived sound location is fundamentally a subjective thing. it involves psychoacoustics, and varies with the listener's head, oh and it refers to an acoustic model that does not actually exist. if that's not subjective, what is?
 
Oct 11, 2019 at 11:27 AM Post #1,191 of 2,146
1. When we have a sound in an acoustic environment we get a set of initial reflections, which are NOT diffuse and DO have considerable directional information. The properties of these initial reflections (often referred to as Early Reflections, ERs); their timing, freq response, direction and level relative to the direct sound are VITAL to our perception of "spatiality", in fact they largely define it! These ERs then hit other surfaces, are reflected again and the system becomes semi-random/chaotic and this portion of the reverb is diffuse with little direction.
1a. Then you're doing it wrong, because reverb DOES have left/right information! Why do you think stereo (and surround) reverb units/plugins exist in the first place? If there were no left/right positional information then all reverb units would be mono!

1b. This is nonsense! The sounds of the thunderstorm would NOT have "totally different spatiality"! Obviously you, your friend and the thunderstorm would be in the same acoustic environment and all of you would therefore have somewhat similar "spatiality" (as defined by that acoustic environment). However, the individual parameters of the spatial information would be somewhat different due to the different relative positions of you, your friend and the thunderstorm within that acoustic environment.
1c. Of course it can, because our hearing deals constantly with sounds in different relative positions within an acoustic environment.
1d. With the vast majority of non-acoustic music genres we're not talking about different sound sources in different positions within a single acoustic environment but about different, independent acoustic environments at the same time. And of course our hearing never has to deal with that because it's a physical impossibility!
1e. You admit to knowing pretty much nothing about sound/music engineering, have stated that you are "not saying much what engineers are and [should] do", yet here you are, yet again (!), doing exactly that?!

2. Crossfeed obviously changes the time, level and direction of the ERs which are so vital to spatial hearing.

3. But that's exactly what you've just done! Your analogy does exactly that, your analogy is in ONE acoustic environment, so ignores different simultaneous environments (which is the case with non-acoustic genres) and it also ignores different simultaneous acoustic perspectives within that single environment (which is the case with acoustic genres such as classical). So, that covers all the acoustic and non-acoustic music genres, please tell me what other genres exist besides these two (to which your analogy would be applicable)?!!
3a. Which again is nonsense, unless you have a totally bizarre "real life". I.e. two sets of ears which you can simultaneously place in two different rooms (acoustic spaces) or which you can simultaneously place, say, both 1m and 30m away from your friend speaking (in the same acoustic environment).

You state you studied at university and have "thought about this a lot" but what you actually write indicates little/no education and that you've thought about this for no more than a few seconds!

G

1. Correct. I agree. Early reflections are not diffuse and they contain crucial spatial information. I don't think I have said otherwise. I said the reverberation after early reflections is diffuse. As I make computer music and have to create spatiality from scratch, I am very familiar with this concept, not to mention university studies where this stuff was taught almost on the first day! It continues to amaze me how you refuse to believe I really know this stuff.

1a. Reverberation has left/right difference (so it's not mono), but the difference is very random, practically noise, and doesn't really contain spatial information about where the sound originated left-right-wise. That information is in the direct sound and early reflections. Reverberation contains information about the acoustic space and how far away the sound source is (when compared to direct sound/ER). Again this is stuff we both know. Since you keep claiming I don't, I have no choice but to call you nefarious.

1b. The thunderstorm contains echoes of significant time delay. The speech of a friend doesn't. Sure, the speech does technically echo from a distant building and come back half a second later, but the sound is so quiet nobody can hear it. The speech is dominated by the acoustics near you; the thunderstorm is dominated by the massive echoes from the hills and buildings within a radius of a mile. The result is totally different spatiality. In my opinion you lack the ability to discern what things mean in practice, and we disagree a lot about what matters and what doesn't matter.

1d. Except when we are listening to such music our hearing needs to deal with it… …so what do you mean?

1e. I haven't done it for work, but I know something and I learn more. I have watched countless hours of Youtube videos about how to mix music. I also make my own music and have learned a thing or two while doing it. However, I acknowledge the limits of my knowledge. In general, consumers have power. If consumers don't like product A, they may prefer product B even if they know nothing about how A and B are produced. I'm not saying this is always a good thing. I am saying it's what happens in capitalism.

2. What ERs? There is no room, so there are no ERs either. Maybe you mean the ERs of the recording itself? According to yourself that's something like 30 mics worth of multispatiality, which in your opinion doesn't suffer at all when you use speakers and have acoustic crossfeed (doing "time change" etc.), early reflections and diffuse reverberation. Also, if you use only headphones everything is fine and dandy according to you, BUT if you dare to simulate the acoustic crossfeed of speaker listening with crossfeed you suddenly have devastating problems!! As I said, you have difficulties discerning what matters and what doesn't matter.

3. This leads nowhere… …your attempts are pathetic at this point.
3a. Same as 3.

I have the qualification certificate on my shelf and I am not intimidated anymore by your nasty words. You could start practicing putting things in proper perspective and becoming better at discerning what matters and what doesn't matter instead of calling others ignorant.
 
Oct 11, 2019 at 12:14 PM Post #1,192 of 2,146
In order to have true spatiality, you need physical space. It may be possible someday to synthesize that effect, but it's going to require computer processing. It won't happen with a simple crossfeed.
 
Oct 11, 2019 at 3:14 PM Post #1,193 of 2,146
1. I said the reverberation after early reflections is diffuse. As I make computer music and have to create spatiality from scratch, I am very familiar with this concept, not to mention university studies where this stuff was taught almost on the first day! It continues to amaze me how you refuse to believe I really know this stuff.
1a. Reverberation contains information about the acoustic space and how far away the sound source is (when compared to direct sound/ER). Again this is stuff we both know. Since you keep claiming I don't, I have no choice but to call you nefarious.
1b. The speech is dominated by the acoustics near you; the thunderstorm is dominated by the massive echoes from the hills and buildings within a radius of a mile. The result is totally different spatiality.
[1b1] In my opinion you lack the ability to discern what things mean in practice ...
1d. Except when we are listening to such music our hearing needs to deal with it… …so what do you mean?
2. What ERs? There is no room, so there are no ERs either. Maybe you mean the ERs of the recording itself?
2a. According to yourself that's something like 30 mics worth of multispatiality, which in your opinion doesn't suffer at all when you use speakers and have acoustic crossfeed (doing "time change" etc.), early reflections and diffuse reverberation.
[2b] Also, if you use only headphones everything is fine and dandy according to you, BUT if you dare to simulate the acoustic crossfeed of speaker listening with crossfeed you suddenly have devastating problems!! As I said, you have difficulties discerning what matters and what doesn't matter.
3. This leads nowhere… …your attempts are pathetic at this point.
4. I have the qualification certificate on my shelf and I am not intimidated anymore by your nasty words. You could start practicing putting things in proper perspective and becoming better at discerning what matters and what doesn't matter instead of calling others ignorant.

1. No you didn't, why don't you read what you wrote? You said "I mix the direct sound which has the position information (angle of sound) with reverberation which doesn't really have left-right information, but gives distance information", no mention of ERs at all, just the direct sound and the reverb!! If by "reverberation" you are including the ERs, which of course you should because reverberation is only the subsequent further reflections of the early reflections, then your statement is false and reverb does contain very significant directional information (+ timing and freq content). If you're not including ERs in "reverberation", then as I said, you're doing it wrong and ignoring some of the most vital spatial information (which is damaged by crossfeed)! How many times?

1a. The main spatial information is in the left/right information of the direct sound and the parameters of the ERs, including distance! The arrival time and direction of the ERs tells us how far the direct sound is from the various different reflecting boundaries. However, you want to destroy that by changing (crossfeeding) that direction and timing information! If you do know this stuff, why do you keep omitting it and/or ignoring it? How many times?

1b. And how exactly are you going to hear the "massive echoes from the hills and buildings" independently from the acoustics of the environment you and your friend are in? Do you have a set of ears in the hills, another set of ears in the buildings and another set near your friend? Of course you don't, you have one set of ears near your friend and everything you hear is through that single acoustic environment and therefore the result CANNOT be totally different spatiality, just somewhat different, as I've already stated. This is NOT analogous to most commercial music recordings, where we have different, simultaneous acoustic environments.
1b1. How ridiculous is that? You've admitted you've got no practical experience yourself but even so, you have the opinion that someone who has 25 years of practical professional experience lacks the ability to "discern what things mean in practice" and you know better because what, you've seen some youtube vids and know that your knowledge is limited? How does that make any sense to you unless you're delusional?
1d. Huh, that's the whole point! Our hearing has to deal with something that we can never experience and therefore each individual's perception interprets the spatial information presented with headphones according to their individual biases, experiences and preferences. How many times!

2. Of course I mean the ERs on the recordings, what is it you're listening to on your headphones?
2a. You're joking, right? 30 mics worth of different spatiality would be a complete mess when using speakers, which is why we need music engineers to adjust that spatial information, while using speakers, before it's distributed to consumers! You're just getting more and more ridiculous!
2b. And clearly you have severe difficulties discerning the actual facts from the lies that you yourself have made up! I never said only using headphones is "fine and dandy", in fact I said the opposite and I never said that crossfeed gives you "devastating problems", you just made those lies up!!

3. Absolutely. In terms of getting you to stop posting falsehoods, my attempts are clearly pathetic. However, as this is the sound science forum then your falsehoods will be refuted. The obvious solution to this "leading nowhere" is to stop posting your falsehood repeatedly. How many times?

4. What qualification certificate, you said your university course didn't even mention music production! And how can I START "practicing putting things in proper perspective" when I've been doing that professionally for over 25 years and started practicing that nearly 30 years ago? Just how ridiculous are you going to get?

Again, you're just following the path you always take, which inevitably leads you to more and more ridiculous assertions, which eventually even you realise are ridiculous and then you get all depressed, go on about your self-esteem, drop the subject for a while and then start all over again, do exactly the same thing and expect a different result. Don't you know the famous cliche attributed to Einstein about that?

G
 
Oct 11, 2019 at 4:13 PM Post #1,194 of 2,146
Nefarious!

Pathetic!

 
Oct 11, 2019 at 10:34 PM Post #1,195 of 2,146
In order to have true spatiality, you need physical space. It may be possible someday to synthesize that effect, but it's going to require computer processing. It won't happen with a simple crossfeed.

I think HRTF is also much more complicated with headphones, as they sit close to our ears, which are all individualized by how our outer ears are shaped, the angles of our ear canals, the current state of the middle ear, and the state of the inner ear (and tolerances that are more individualized/finite)... which leads to the bickering over which headphone brand is most "natural" sounding. Speakers are far enough away to have a sound field that mimics the source venue. I've also never thought of "crossfeed" as a term for virtual spatiality with headphones. To date, the best spatiality I've heard with headphones is from a few virtual surround settings. The first time was when I got a Sennheiser Pro Dolby surround processor, which relies on a parametric setting you alter to your taste. When I dialed it in to mine, I could hear effects around me. True Dolby Atmos headphone sources also sound good... but I can't say I hear as much front depth cue as with my actual speaker system.
 
Oct 12, 2019 at 5:57 AM Post #1,196 of 2,146
In order to have true spatiality, you need physical space. It may be possible someday to synthesize that effect, but it's going to require computer processing. It won't happen with a simple crossfeed.

You can't use a hammer as a screwdriver. Does it mean hammers are useless? I use crossfeed to make headphone sound have natural levels of ILD and in some ways (not all ways unfortunately) sound similar to speakers. Someday I may have better ways to improve headphone sound, but until then this is what I have got and I think it's much better than nothing so I am definitely using it! Sometimes I even listen to my speakers and have your beloved "true spatiality."
 
Oct 12, 2019 at 6:17 AM Post #1,197 of 2,146
1. No you didn't, why don't you read what you wrote? You said "I mix the direct sound which has the position information (angle of sound) with reverberation which doesn't really have left-right information, but gives distance information", no mention of ERs at all, just the direct sound and the reverb!! If by "reverberation" you are including the ERs, which of course you should because reverberation is only the subsequent further reflections of the early reflections, then your statement is false and reverb does contain very significant directional information (+ timing and freq content). If you're not including ERs in "reverberation", then as I said, you're doing it wrong and ignoring some of the most vital spatial information (which is damaged by crossfeed)! How many times?

Sorry about not mentioning ERs every time. I don't mention dinosaurs either in my posts, but I still know dinosaurs existed… you'd make a good lawyer…
If crossfeed damages ERs, then acoustic crossfeed damages them also. Are you advocating crosstalk canceling for speakers? No, because you sound engineers mix taking acoustic crossfeed into account, which means you are in trouble with headphones if you don't use similar processing of sound.

What does damaged ER even mean? Crossfeed IMPROVES the sound for me, so I don't get what is damaged. I really don't get how room acoustics is nothing, but simple crossfeed ruins things. I have calculated these things, what crossfeed does, how it alters phases and things, and it's NOTHING compared to what a room does. I am totally fed up with your criticism. Nobody would use crossfeed if it didn't improve things. You are out of your mind if you think I consider the ER of some ping pong recordings intact and the crossfed version damaged! If I listen to speakers the player is 30° left. With headphones without crossfeed it's maybe 60°. Which is correct? I suppose speakers, so I crossfeed and the sound moves to maybe 40°, which is closer to speakers, and everything sounds better and more natural, and you tell me the ER is damaged? What the ****?
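The ILD side of this argument can be checked numerically: mixing a fraction of each channel into the other shrinks the inter-channel level difference, which is the level half of what crossfeed does. A sketch (the 0.3 gain and the 20 dB starting difference are arbitrary illustration values, and the delay/filtering parts of crossfeed are deliberately left out):

```python
import numpy as np

def ild_db(left, right):
    """Inter-channel level difference in dB (RMS of left over right)."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms(left) / rms(right))

def level_only_crossfeed(left, right, gain=0.3):
    """Mix a fraction of each channel into the other: the level (ILD)
    component of crossfeed, with the delay and filtering omitted."""
    return left + gain * right, right + gain * left

t = np.arange(4410) / 44100.0
sig = np.sin(2 * np.pi * 440.0 * t)
left, right = sig, 0.1 * sig                  # hard-ish left pan
print(round(ild_db(left, right), 1))          # 20.0 dB before
l2, r2 = level_only_crossfeed(left, right)
print(round(ild_db(l2, r2), 1))               # 8.2 dB after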
 
Oct 12, 2019 at 6:44 AM Post #1,198 of 2,146
2a. You're joking, right? 30 mics worth of different spatiality would be a complete mess when using speakers, which is why we need music engineers to adjust that spatial information, while using speakers, before it's distributed to consumers! You're just getting more and more ridiculous!

Yes, USING SPEAKERS! Spatial information adjusted for speakers is not automatically optimal for headphones. Newer recordings are often somewhat good for headphones thanks to sophisticated tools (and many producers have learned to mix bass mono etc.), but older recordings are what they are, often ping pong or something else crazy. Even newer recordings often benefit from weak crossfeed, but clearly headphone users have been thought about. You are the one who brought the concept of multiple spatiality into this discussion.

2b. And clearly you have severe difficulties discerning the actual facts from the lies that you yourself have made up! I never said only using headphones is "fine and dandy", in fact I said the opposite and I never said that crossfeed gives you "devastating problems", you just made those lies up!!

Ok, sorry but why do you then oppose crossfeed so fiercely?? Your posts indicate you are very worried about what crossfeed does to ER.

What qualification certificate, you said your university course didn't even mention music production! And how can I START "practicing putting things in proper perspective" when I've been doing that professionally for over 25 years and started practicing that nearly 30 years ago? Just how ridiculous are you going to get?

Music production is not the thing of the university I went to. It was about electrical engineering, and I chose acoustics and signal processing as my speciality. The whole faculty was very much into telecommunication. I believe that's why Finland is strong in mobile technology (Nokia): because we have this faculty producing tons of telecommunication engineers. Acoustics was just a small fraction of the faculty, a small group of more or less eccentric people into sound, many musically talented. Until I heard it from you I didn't even know music production is taught in universities. I thought you learn the art by doing, working in the business and learning from the gurus while working with them.
 
Oct 12, 2019 at 9:27 AM Post #1,199 of 2,146
I use crossfeed to make headphone sound have natural levels of ILD and in some ways (not all ways unfortunately) sound similar to speakers.

That's just a function of YOUR perception, to me crossfeed does NOT make it sound anything like speakers. It doesn't even sound similar to speakers for some/many of the people who use crossfeed regularly, just preferable to not using crossfeed! You are talking about your personal perception, NOT an objective fact that's applicable to everyone (except jerks, idiots, etc.)! How many times?

[1] Sorry about not mentioning ERs every time. I don't mention dinosaurs either in my posts, but I still know dinosaurs existed… …
[2] If crossfeed damages ERs, then acoustic crossfeed damages them also.
[2a] Are you advocating crosstalk canceling for speakers? No, because you sound engineers mix taking acoustic crossfeed into account, which means you are in trouble with headphones if you don't use similar processing of sound.
[3] What does damaged ER even mean?
[3a] Crossfeed IMPROVES the sound for me so I don't get what is damaged.
[3b] I really don't get how room acoustics is nothing, but simple crossfeed ruins things.
[4] I have calculated these things, what crossfeed does, how it alters phases and things and it's NOTHING compared to what room does.
[4a] I am totally fed up with your criticism.
[4b] Nobody would use crossfeed if it didn't improve things.
[4c] You are out of your mind ...

1. What do you mean, sorry for not mentioning ERs every time? You omitted them completely from a long post all about spatial information! In nature/the real world, how do you have spatial information without ERs (probably the most important aspect of spatial information)? You think maybe ERs went extinct 98 million years ago?

2. Oh god, how many times? Except in an anechoic chamber, you never get acoustic crossfeed from speakers without the ERs and reverb (spatial information) of the listening environment, spatial information which is vital to our perception! Have you ever listened to music on stereo speakers in an anechoic chamber? It sounds terrible, except possibly to you?
2a. It's just ridiculous! You state you don't make assertions about sound engineers because you don't know anything/much about sound engineers/engineering but here you are, yet again making an assertion about what sound engineers "take into account". If that's not ridiculous enough, you're making the exact same false assertion about what sound engineers "take into account" that I, an actual sound engineer, refuted just a few posts ago! Round and round we go.

3. Asked and answered numerous times (but I've done it again briefly in 2a below).
3a. Fallacy, false correlation! Another common example: For some people, using a tube in the playback chain IMPROVES the sound for them and they too usually "don't get what is damaged"!
3b. Clearly you don't "get it", despite it being explained to you numerous times. The fault in your approach seems to be that you assume that because you don't "get it", then it must be false but what you should be doing is questioning your ability/willingness to "get it"! So, round and round we go.
4. Obviously it's not "NOTHING", that's nonsense, but certainly it's a great deal less compared to speakers in a normal room. What you seem utterly unwilling "to get" is that that's precisely what's wrong with crossfeed; it's why we've moved on to HRTFs and even that's still not enough (on its own) for many!
4a. Then stop writing nonsense that needs to be criticised/refuted! How many times?
4b. Which is why "nobody" uses tubes, vinyl records or expensive audiophile cables, right?
4c. Pot, kettle, black!

[1] Yes, USING SPEAKERS!
[2] Ok, sorry but why do you then oppose crossfeed so fiercely?? [2a] Your posts indicate you are very worried about what crossfeed does to ER.
[3] Music production is not the thing of the university I went to. It was about electrical engineering, and I chose acoustics and signal processing as my speciality. The whole faculty was very much into telecommunication. ...

1. What do you mean "yes, USING SPEAKERS", I was responding to your point about using speakers: "According to yourself that's something like 30 mics worth of multispatiality, which in your opinion doesn't suffer at all when you use speakers ...". Again, it's just ridiculous, don't you know what you've written? Why do I have to keep quoting what you've written back to you? Why don't you stop writing nonsense/falsehoods in the first place and then we don't have to keep going round in circles? How many times?

2. You're joking, right? I accuse you of lying, of making up an assertion and falsely attributing it to me, and how do you respond? You say you're sorry and then make up another assertion which you falsely attribute to me! It's just more and more ridiculous!
2a. I am somewhat worried about what crossfeed does to the signal/ERs, as obviously it crossfeeds the ERs and therefore alters their direction and relative timing. Also obviously, that's not what is intended! Engineers and artists mix/master recordings for speakers in a consumer room/listening environment, for headphones or, most commonly, for speakers but with some consideration of headphone playback, but we NEVER mix/master for speakers in an anechoic chamber!! However, as we're talking about crossfeed messing up spatial information (on the recording) which cannot exist in the real/natural world anyway, the end result, what a particular listener will perceive, varies according to that particular listener's perception and preferences. How many times? It's astonishing that after two or more years, you don't even seem to know what it is that I (and others) are "opposing so fiercely"!

3. Yes, we know that you don't know anything about music production, what we don't know is why you keep making false assertions on a subject you admit you don't know anything about and then defend them endlessly?!! How many times?

G
 
Oct 12, 2019 at 1:44 PM Post #1,200 of 2,146
That's just a function of YOUR perception, to me crossfeed does NOT make it sound anything like speakers. It doesn't even sound similar to speakers for some/many of the people who use crossfeed regularly, just preferable to not using crossfeed! You are talking about your personal perception, NOT an objective fact that's applicable to everyone (except jerks, idiots, etc.)! How many times?

When I say similar to speakers I mean some aspects, not all aspects. Speakers don't give excessive ILD, same with crossfeed => SIMILAR

I am done now. I don't care what you or other people think. Wasting my life here is pointless.
 
