Crosstalk/Crossfeed Questions
Apr 16, 2006 at 6:54 AM Post #17 of 42
Quote:

Originally Posted by TheSloth
The idea that hearing a stereo recording as dual-mono reproduction through headphones is anywhere near what some 'engineer' ever intended is not true, and is simply a perpetuated misunderstanding.

Dual mono listening is just as inaccurate as crossfeed is, from a different sonic standpoint. By not blending the channels in any way, dual mono is an inaccurate reproduction of anything except binaural recordings.



Dual monaural? There is no such thing.

You really need to analyze the psychoacoustics at work here. I am always surprised at how few people can actually figure this out although it has been discussed for decades.

You have 2 ears. A sound source, sometimes called the "distal stimulus" in perceptual psychology, creates 2 "proximal stimuli," i.e. one signal at each ear, slightly different from each other. These differences, primarily interaural differences in the time and amplitude of what would otherwise be almost the same signals, are what are responsible for the spatial perception of sound, especially left-right discrimination.
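
Just to put a rough number on those interaural time differences, here is my own back-of-the-envelope sketch, not taken from any particular paper; the head radius and source angle are assumed values chosen only for illustration, using the classic spherical-head approximation:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound_mps=343.0):
    """Approximate interaural time difference (Woodworth spherical-head model).

    azimuth_deg: source angle off dead centre, toward one ear (0-90 degrees).
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_mps) * (theta + math.sin(theta))

# A source 30 degrees off centre (roughly where a stereo speaker sits)
# reaches the far ear about a quarter of a millisecond after the near ear.
print(f"{itd_seconds(30) * 1e6:.0f} us")  # ~260 us
```

The interaural amplitude difference works the same way in principle: the head shadows the far ear, and it does so more strongly at high frequencies than at low ones.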

2 microphones create 2 signals. When 2-channel sound is directed to headphones, you get some approximation of the 2 proximal stimuli you would receive if your ears were in the original recording set-up instead of the 2 microphones. These signals are more or less a match to what the ears would hear in the real situation. The exactness varies with microphone positioning. Binaural recordings are generally even closer to what the ears would hear.

However, no matter how badly the 2-channel recording screws up the spatial cues, loudspeakers compound the problem by creating 4 proximal stimuli from 1 distal stimulus. Why? Because each speaker is heard by both ears: the left channel reaches the right ear and vice versa. These extra stimuli are termed phantom channels, and they have no correspondence to anything which would be heard in real life. They are simply added by the physics of loudspeaker reproduction. These phantom channel signals are also slightly delayed in time due to the need to travel farther to the opposite ear. So when, for example, the left speaker signal makes it to the right ear, there is going to be a complex interference of addition and subtraction at various frequencies. Basically then, speakers inevitably produce major distortion for this reason. Headphones don't, and that, I think, is one of the reasons why people like them just as they are, without attempts to recreate the screwed-up sound fields of speakers.
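
To see roughly where that addition and subtraction lands in frequency, here is a small illustrative sketch of my own; the delay value is an assumption for the example, not a measurement. Summing a signal with a copy of itself delayed by a typical far-ear path difference puts cancellation notches at odd multiples of 1/(2 x delay):

```python
def comb_notch_frequencies(delay_s, max_hz=20000.0):
    """Notch frequencies produced when a signal is summed with a copy of
    itself delayed by delay_s seconds (cancellation at odd multiples of
    1/(2*delay))."""
    notches = []
    k = 0
    while (f := (2 * k + 1) / (2.0 * delay_s)) <= max_hz:
        notches.append(f)
        k += 1
    return notches

# For an assumed 0.26 ms extra path to the far ear, the first notches fall
# near 1.9 kHz, 5.8 kHz and 9.6 kHz.
print([round(f) for f in comb_notch_frequencies(0.00026)[:3]])
```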

With simple blending, you are probably not doing as much damage to the signals as when time delays are added to the mix. Blending merely mixes the left and right signals; if fully blended, there would simply be monaural sound. If you add time delays, you end up with additional complex addition and subtraction as well.
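
For anyone trying to picture the difference being described here, this is a minimal sketch of the two cases. It is my own toy code, not any shipping crossfeed circuit; the blend amount and delay are assumed values, and a real crossfeed network would also low-pass the crossfed signal, which this omits.

```python
import numpy as np

def blend(left, right, amount=0.3):
    """Plain blending: mix in a fraction of the opposite channel, no delay.
    amount=1.0 collapses everything to mono."""
    new_left = (left + amount * right) / (1.0 + amount)
    new_right = (right + amount * left) / (1.0 + amount)
    return new_left, new_right

def blend_with_delay(left, right, amount=0.3, delay_samples=13):
    """Crossfeed-style blending: the opposite channel is attenuated AND
    delayed (13 samples at 44.1 kHz is about 0.3 ms, on the order of an
    interaural time difference), which is what produces the frequency-
    dependent addition and subtraction described above."""
    pad = np.zeros(delay_samples)
    delayed_right = np.concatenate([pad, right])[: len(right)]
    delayed_left = np.concatenate([pad, left])[: len(left)]
    new_left = (left + amount * delayed_right) / (1.0 + amount)
    new_right = (right + amount * delayed_left) / (1.0 + amount)
    return new_left, new_right
```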

If you like your phones to sound like speakers, then enjoy that.

I suspect that what many people find most helpful about crossfeed is that the addition and subtraction effects get rid of some frequency problems in their equipment or the source material.

If it works for you, then so be it. But recognize it for what it is: a complex screwing-up of the sound in an attempt to recreate the artificial nature of speaker listening, rather than an attempt to provide higher fidelity.
 
Apr 16, 2006 at 7:17 AM Post #18 of 42
You are ignoring the fact that all CDs, except binaural ones, are pre-screwed-up (or pre-compensated) to sound real when played back through speakers. The point of crossfeed is to reproduce speaker acoustics, NOT because speakers are fundamentally more accurate, but because the material has been pre-screwed-up in such a way as to sound closer to the intention on speakers than it does on headphones.

Your argument relies on the idea that what is on the CD is somehow clean and unadulterated. It is nothing of the sort. It has had speaker equalisation curves applied to it, and the perceptual cues have been placed on the basis of speaker listening. It is therefore pre-biased to sound realistic only when played back on equipment that will provide the 4 spatial cues that you refer to (ignoring all of the room's spatial cues as well). By playing it back on headphones, you are playing it back without 2 spatial cues that are assumed in the production of the recording. You can't really disagree that when listening to such recordings on headphones, you are not actually hearing the sounds that the engineer intended you to hear. I have made recordings and I can assure you that the final product is NEVER mixed or evaluated using headphones. Headphones are strictly a studio-only tool.

All you have really done is argue for the merits of binaural recording, and why you wouldn't want to screw up the beauty of that with a loudspeaker system or an attempt to simulate a loudspeaker system, and on that count I wholeheartedly agree with you. But what you say has little relevance to anything that is not recorded that way.

Crossfeed is screwed up. So is the complete left-right separation of headphone reproduction of loudspeaker-mixed material. Which screw-up is closer to the original depends on your ears and your brain. The part of your argument that I think is absolutely incorrect is that crossfeed is by its nature lower fidelity than normal headphone listening. Crossfeed does introduce distortions and is not accurate, for all the obvious reasons; however, due to the lack of correct spatial information, neither is a non-blended, speaker-mixed sound unless the material is binaural. Whether you like it or not, headphone listening is no more accurate without crossfeed than it is with it. To say that pure headphone listening is true to the source is just plain wrong.

(I see no issue with the term dual mono to describe headphone listening, as a way of expressing the fact that the sound of each channel remains discrete from the other at the ear.)
 
Apr 16, 2006 at 11:42 AM Post #19 of 42
Quote:

Originally Posted by edstrelow
I am more than a little bothered that a Headroom spokesperson would write such material without one iota of factual evidence to back this up. Headroom sells crossfeed as a feature, I would say gimmick, and this quote is nothing more than a sales pitch. Every time I have seen recordings taking place, I see lots of headphones.


The headphones that you see in recording studios are for the performers. Every studio that I've ever been in has many headphones and headphone distribution amps in each room used for recording. However, I can't think of a single time when I've seen headphones in the control room. Every recording engineer that I've ever known records, mixes and masters using a set of quality nearfield monitors.

When you're doing a multitrack recording, you try to isolate instruments as much as possible so that they don't bleed onto other tracks. So, for example, you might put the drummer in an isolation booth so that the drums don't bleed onto the guitar tracks. However, the other musicians need to be able to hear the drummer, so they will monitor using headphones.
 
Apr 16, 2006 at 11:56 AM Post #20 of 42
I think the moral may be that it is objective - there is science behind it that tries to solve a problem - but in the end it's also subjective, because it boils down to personal taste. Whether it is an issue with source, music, amplification, cans, or even psychology is hard to say. I've rarely seen a double-blind test performed here, and I think that's great. What sounds good to you, and why? I think it's been made obvious why it sounds good to some, and why it seems like nothing has changed to others.

Without trying to play thread police too much, I do still have a huge interest in the questions posted about cross*TALK*. At what point does crosstalk become noticeable for people? Still looking at the PIMETA's graph, and technically speaking, is this something that the average ear could pick up? I've searched laboriously and I haven't found any detail on this topic. The general consensus is "less is better", but there isn't an explanation for why, or at what threshold it would really start to matter.

The PIMETA lowered crosstalk considerably versus the META42 design, but in reviews people have said they can't hear a bit of difference in soundstage. Also, I won't claim to be able to read these graphs correctly, but it would appear that the proper channel's data is about 5-8 times louder than the crosstalk data. Would a proper analogy be someone trying to have a conversation with you at normal volume and normal range (60 dB) in the middle of a loud concert (110 dB+)?
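
One thing I'm trying to keep in mind when reading those plots is that crosstalk is specified in dB, a logarithmic scale, so a gap that looks modest on the graph can be a large linear ratio. Here is a quick sketch of the conversion; the -60 dB figure is just an assumed example, not an actual PIMETA or META42 measurement:

```python
def crosstalk_amplitude_ratio(db):
    """Convert a crosstalk figure in dB (negative means quieter than the
    main channel) into a linear amplitude ratio."""
    return 10 ** (db / 20.0)

# An assumed -60 dB of crosstalk corresponds to an amplitude ratio of
# 1/1000. The 60 dB conversation vs. 110 dB concert analogy above is a
# 50 dB gap, i.e. the concert is about 316 times the amplitude of the voice.
print(crosstalk_amplitude_ratio(-60))      # 0.001
print(1 / crosstalk_amplitude_ratio(-50))  # ~316
```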

Thanks again for all the responses!
 
Apr 16, 2006 at 12:41 PM Post #21 of 42
IMO it is completely wrong to say that because a recording has been made/monitored with speakers it will only work with speakers. Speakers and headphones each provide a different sonic presentation, and usually both are used during the recording process precisely because of these different presentations.

Your brain is the best crossfeed ever made, guys, and I myself have never felt the need for crossfeed, even with closed headphones and cheap amplification. At worst I hear only one small 'blob' of sound between the ears, and as I upgrade to nice open phones like the top Sennheisers, the soundstage expands beyond my left and right ears.

I have never felt a separation, as if there were a gap between my left and right ear. In fact, open designs like the HD6XX provide all the 'natural crossfeed' necessary, just as with speakers. They leak so much that my left ear picks up information from the right driver and vice versa. This is why open designs are so good at imaging and realistic soundstage. Try snapping your fingers left and right around your head while you listen (even at very loud levels) and you'll see that open headphones are very transparent and that you can recognise very easily where the sound comes from.

Seriously, you will hardly sell crossfeed to experienced audiophiles. This kind of processing belongs to the realm of bass boost, loudness and all that cr... sorry, all those lo-fi devices.

Don't mess with my precious audio signal please!
 
Apr 16, 2006 at 12:59 PM Post #22 of 42
I'm an hour away from driving out of here on my way to the National Meet or I'd write a much longer answer.

Quote:

You really need to analyze the psychoacoustics at work here. I am always surprised at how few people can actually figure this out although it has been discussed for decades.


I think this may be a case where you know just enough psychoacoustics to get yourself into trouble. Sure, speakers aren't perfect, nothing in audio is, but two channels to two speakers is better than one channel to one speaker. Bob Stuart over at Meridian will tell you that from a stereo signal you can synthesize a set of signals for center, near-left, near-right, far-left, and far-right speakers that does a much better job of creating a stable acoustic image, because it does a much better job of replicating a wave front approaching the listener. Experiments have been done with huge planar arrays of mics played back on a huge planar array of speakers that do an even better job of replicating the wave front of the sound.

My point is that two channel speaker playback is a far cry from perfect, BUT that's what we've got. And basically any music you get your hands on was designed for playback on two speakers. Headphone monitoring for the purposes of mixdown to two channels is almost never done because it is common knowledge in the pro audio world that headphones don't image the same as speakers. Headphones are commonly used in mastering applications where people are listening for small flaws, but not for imaging issues.

I'm sorry, Xerophase; there have been lots of studies on maximum allowable THD, but I'm not aware of any commonly accepted standards for maximum allowable crosstalk. If you really want to develop an answer for yourself, go to the Stereophile web site and start looking at all the crosstalk measurements for preamps and power amps; after a while you should have a good sense of what's good and what's not.

Quote:

Originally Posted by edstrelow
I am more than a little bothered that a Headroom spokesperson would write such material without one iota of factual evidence to back this up.


What makes you think I don't have factual evidence? You can start by looking up Ben Bauer in back issues of the Journal of the Audio Engineering Society. Then continue your search for all issues related to the psychoacoustics of headphones in the same journal. You should find maybe 30-40 articles referring to acoustic localization and headphones. At that point you'll have maybe 20% of the factual information on the subject that I've had a look at.

Quote:

Headroom sells crossfeed as a feature, I would say gimmick, and this quote is nothing more than a sales pitch.


No. It's a discussion relevant to those seriously interested in headphone listening. To say it's a gimmick points out how much information on this topic you don't have at your disposal, because if you look into it you will find valid arguments for such an approach. What I'm not allowed to do is be promotional about our product offering. And I haven't said anything about our crossfeed implementation being better than Xin's or Meyer's.
 
Apr 16, 2006 at 3:49 PM Post #23 of 42
Quote:

Originally Posted by Mastergill
IMO it is completely wrong to say that because a recording has been made/monitored with speakers it will only work with speakers. ...SNIP/SNAP...
Don't mess with my precious audio signal please!



You clearly don't know what you are talking about, and are enjoying muddling up science with 'what I like'.

Your precious audio signal has already been messed with by the original designer to work optimally with speaker parameters. How could you outright say it's destructive to bring your headphones more in line with the playback medium it was designed for? Your brain does do a good job of adjusting to the presentation given to you by your headphones, and in that you are correct. But, as Tyll said, there has been extensive scientific research into the human perception of sound from stereo loudspeakers which you have clearly never been party to. Our brain does NOT supply crossfeed, and cannot psychologically make up for the lack of acoustic cues provided by headphones. We can simply become accustomed to it and enjoy it for the (artificial) clarity and (artificial) separation, but to say it is accurate to the source because it hasn't been adjusted in any way proves you don't understand the point or the scientific basis.

I love headphone listening, pure headphone listening, but none of the crossfeed detractors seem to understand that such a thing cannot possibly be described as accurate; it is simply a scientific fact that it is technically impossible for such a system to reproduce the source correctly. It is NOT accurate and, unless there is a paradigm shift in the recording industry, will NEVER be accurate. Don't quote audiophile bull until you truly understand the subject matter. Headphones are NEVER used in the mixing process with regard to instrument placement, and are NEVER used in the final evaluation process, which tests all factors from instrument placement to overall timbre. Headphones are, as I have said before and can back up with real-life experience from engineers, studio-only tools. Stop saying they are used for more than that, or indeed that you could possibly optimise a recording for both. They are fundamentally, technically, psychoacoustically different, and it's one or the other. And it's ALWAYS speakers.

Why are you arguing that something is lo-fi and pointless just because you personally find it less enjoyable than something else? Since when were those the sole criteria that defined whether the motivation behind something was valid or not? That's quite something, to declare years of painstaking work by a very large number of audio engineers and scientific researchers lo-fi rubbish. I'm sure Tyll especially appreciates the sentiment; although he is here as president of HeadRoom, he is also a human, an audiophile, and, dare I say it, a geek who loves great sound.

Regarding your magical crossfeed on the 650, that shows even further that you've never done 2 seconds of research on what crossfeed actually is. The frequency response curve of the signal reaching the left ear from the right headphone isn't even remotely related to that of a speaker, and the amplitude is so low as to make it completely irrelevant. To think that that is why open headphones have such good imaging and soundstage! Yes, and that is why all closed headphones have NO soundstage. Yes, ALL of them. Goddammit, they don't have any natural crossfeed! To think that such leakage is actually 'crossfeed' shows that you need to do some research into what we actually mean by 'crossfeed'. You are just plain wrong, and should think twice before spreading your misunderstanding.
 
Apr 16, 2006 at 6:21 PM Post #24 of 42
As I previously said, I have (and have had) amps equipped with a crossfeed feature. But when I switch back and forth to compare crossfeed with no crossfeed, I hear no (or almost no) difference. Why all of this discussion about a feature that is hardly, if at all, perceptible?
 
Apr 16, 2006 at 6:36 PM Post #25 of 42
If crossfeed sounded really different from no crossfeed, that would be a sure sign that it's a gimmick. In my understanding of the concept, it should sound very close to what it was before. If you really want to notice a difference, try an old recording where an instrument is playing in only one channel.
 
Apr 16, 2006 at 7:33 PM Post #26 of 42
Quote:

Originally Posted by TheSloth
You clearly don't know what you are talking about, and are enjoying muddling up science with 'what I like'. ...SNIP/SNAP...
You are just plain wrong, and should think twice before spreading your misunderstanding.



Wow, chill out, man! I thought that when it comes to audio I knew my stuff, but if you tell me there are scientific studies that know better than I do how I hear with my own poor two ears, well, I'll try to read those papers next time before I listen to my music...


Man, seriously, the soundstage perception with headphones is different; depth is somewhat lacking compared to speakers because of the psychoacoustics Tyll talked about, but otherwise I'm 'immersed' in a very wide, almost holophonic soundstage where I can pinpoint sources with millimetre precision.

Why are you so dogmatic? Do you know how all sound engineers work?
Have you some particular interest to protect?

YOU obviously don't know what you're talking about. Do your homework on the signal path, and don't forget to listen to some tube gear for a true 3D soundstage... even with headphones, yes man.
 
Apr 16, 2006 at 7:49 PM Post #27 of 42
Quote:

Originally Posted by Mastergill
YOU obviously don't know what you're talking about. Do your homework on the signal path, and don't forget to listen to some tube gear for a true 3D soundstage... even with headphones, yes man.



You should indeed read scientific papers if you are going to try to argue a point in this context. You obviously don't need to know a thing to listen, enjoy, and have personal preferences, but your comment about scientific papers telling you better than your ears do how your ears work is ridiculous. That is akin to saying, 'I know how to walk, therefore I know exactly how my legs function and how my nervous system allows this to happen'. You don't know how your ears work, and you do not know how your ears and brain perceive sound just because you hear. The fact that you hear a soundstage does not mean you are hearing the actual, or correct, or intended, or fill-in-the-blank-with-a-better-word soundstage. That is of no relevance to your enjoyment of it; however, you should be aware when describing it that it is not 'the truth' just because it sounds good.

The relevance of crossfeed is NOT a matter of taste. It is a scientifically proven principle. It becomes a matter of taste only as a result of the issues with actually implementing it, as it cannot possibly be perfectly achieved in the analogue domain. The principle is not up for discussion - it is only the result that is in question, a result that is refined and honed with every revision of each of the crossfeed circuits.

Regarding signal path, I have on multiple occasions in this thread mentioned the potential DISadvantages of crossfeed implementations, which of course is a reference to signal path. READ before you accuse. I am trying to balance that out, however, by mentioning the often forgotten and misunderstood disadvantages of NO crossfeed.

But to argue the theoretical merit of something whose theory you don't quite understand is irksome to me. You are not in a position to declare its pointlessness, nor are you in a position to say that it is inaccurate. You are in a position to say that you don't find it enjoyable and prefer the sound without it, but that's all, and you should stop there.

And I do actually know how engineers work. Of course, each to their own; however, there IS an overwhelming standard in the industry which you can't argue with. There are standards in the way (at least classical) recordings are mixed that are common to all the engineers I have worked with; those working for Deutsche Grammophon and Harmonia Mundi in particular (the same Harmonia Mundi that unfortunately refuses to teach its engineers how to set the recording level properly so that the original master doesn't clip) used the same principles. Yes, they sat in the recording sessions with headphones on; however, the mixdown and quality evaluation is always done with speakers. So on that count, yes, I am right, and telling me otherwise seems a little silly.

And your comment about 'real' soundstage with tubes doesn't belong in this thread. You can't possibly argue that your tubes are magically producing the spatial cues that would normally be missing from headphone reproduction. It's technically impossible. You might hear it a certain way, and that's great for you - go and enjoy it, but that doesn't mean that is actually what is happening. To argue so is just plain confusing to those who are trying to understand what crossfeed is and why people have bothered to spend years trying to implement it well.
 
Apr 16, 2006 at 8:18 PM Post #28 of 42
Quote:

Originally Posted by TheSloth
And your comment about 'real' soundstage with tubes doesn't belong in this thread.


You're right, and I don't want to derail the thread; that was a slightly ironic comment. But haven't you noticed that it's only (or mostly) solid-state/op-amp headphone amps which carry this kind of device?
 
Apr 16, 2006 at 10:12 PM Post #29 of 42
Quote:

Originally Posted by Tyll Hertsens

I think this may be a case where you know just enough psychoacoustics to get yourself into trouble...
basically any music you get your hands on was designed for playback on two speakers. Headphone monitoring for the purposes of mixdown to two channels is almost never done because it is common knowledge in the pro audio world that headphones don't image the same as speakers. Headphones are commonly used in mastering applications where people are listening for small flaws, but not for imaging issues.



You just keep repeating this gross generalization. How can you possibly make such a claim? You have not got, and cannot get, evidence as to the purposes and standards of ALL recordings!

Everyone is free to listen to what they want, but don't mislead people with claims that there is something "more right" about crossfeed. This is just a commercial sales pitch.
 
Apr 17, 2006 at 2:41 AM Post #30 of 42
Quote:

Originally Posted by edstrelow
... I have a Beach Boys re-issue that points out that it was mixed for monaural AM car radios.


Which I assume you only listen to in your car, having disabled all but the center dash speaker whose input has been filtered to pass only 150 Hz to 8 kHz.

Life ain't easy when one's a signal purist...

BTW, the question in my previous post wasn't rhetorical. How is it that Stax - and, it would appear, only Stax - has the ability to disabuse you of your deeply held beliefs about crossfeed?
 
