Improving the soundstage of full-sized cans
Jun 23, 2011 at 10:40 AM Post #16 of 45
The short answer is that a different driver size isn't going to change anything with respect to what I described above.
 
It's never gonna sound like live sound, regardless of whether the stereo separation sounds fine to you or not.
 
Jun 23, 2011 at 6:22 PM Post #17 of 45
Then the problem is that recordings are made to be heard on speakers? Because with headphones you can easily make the sound reach one ear and then the other, thus giving the spatial information. How would crossfeed help? Only if, like I said, the sound was somehow made to be heard on headphones, but how would this be?
 
Quote:
It's quite simple, actually. In nature you always hear a sound with both ears, no matter what direction it came from. The sound usually reaches one ear a little later and at a lower level than the other, and we humans use this information to locate where the sound came from.
 
With headphones, the left channel can be heard in the left ear only, and vice-versa of course. This unnatural stereo separation usually causes fatigue (unless you're used to it) and also creates an unnatural/artificial soundstage*. By adding some crosstalk with time delay to the stereo signal (= crossfeed) you reduce this unnatural experience and get closer to what your hearing/brain expects.
 
*) feels like sounds are coming from the inside of your head
 
If you have ever sat in front of a good stereo speaker setup and closed your eyes, you know what a real soundstage sounds like.
Depending on the DSP you're using you can get close to that experience with headphones, but it requires some time and tweaking.



 
 
Jun 23, 2011 at 7:13 PM Post #18 of 45
TheUbiquitous, yes: speaking very generally, recordings are usually mixed and mastered on speakers and meant to be played back on speakers.
 
Crossfeed helps headphone users by post-processing stereo tracks (mixing the left and right channels) to reduce the unnatural effects I've described in my previous post.
 
Quote:
Because with headphones you can easily make the sound reach one ear and then the other, thus giving the spatial information.

If a track contains a signal only in the left channel, then with headphones you hear it only in the left ear, but with speakers you hear it with both ears.
With headphones you locate the sound as if someone were whispering into your left ear and you were deaf in the right ear. With speakers the sound comes from front-left (the left area of the stage).
 
Quote:
How would crossfeed help? Only if, like I said, the sound was somehow made to be heard on headphones, but how would this be?

This doesn't make sense to me. If tracks were made specifically for headphones then we wouldn't need any post-processors like crossfeed.
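
To make the "mixing the left and right channels" part concrete, here is a minimal sketch of the idea in Python/NumPy. The delay, attenuation and lowpass cutoff are purely illustrative numbers, not taken from any particular plugin or circuit:

```python
import numpy as np
from scipy.signal import butter, lfilter

def simple_crossfeed(left, right, fs=44100, delay_us=300, atten_db=-6.0, cutoff_hz=700):
    """Mix a delayed, attenuated, lowpass-filtered copy of each channel into the other.
    All parameter values here are illustrative, not taken from any real product."""
    delay = int(round(delay_us * 1e-6 * fs))      # interaural-style time difference, in samples
    gain = 10 ** (atten_db / 20.0)                # level drop of the crossfed signal
    b, a = butter(1, cutoff_hz / (fs / 2))        # first-order lowpass = crude head shadowing

    def feed(src):
        shadowed = gain * lfilter(b, a, src)
        return np.concatenate([np.zeros(delay), shadowed])[:len(src)]

    return left + feed(right), right + feed(left)
```

A real implementation would also trim the direct path a little so the overall level doesn't rise, but the structure is the same: each ear gets its own channel plus a delayed, quieter, duller copy of the other one.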

 
 
 
Jun 24, 2011 at 9:26 AM Post #20 of 45
OK, I see what you mean: sound could be made to sound correct on headphones, but it isn't; it's engineered for speakers.
It doesn't bother me anyway.

 
Jun 29, 2011 at 3:10 AM Post #21 of 45
Binaural recordings are designed to sound correct with headphones. They sound great with speakers too.

But people don't take into account the part the room plays in the listening experience with speakers. The specific way that sounds bounce around in a room can add to the realism, because the room ambience added to the sound coming from the speakers is the exact same ambience you hear when someone speaks in the room. The recording itself has nothing to do with this, but the room can make a big difference.

Because of this, I find that really well-balanced mono recordings can sound better than a lot of stereo or 5.1 recordings. Speakers are definitely the best way to listen to music if you have the money and room for them.
 
Jul 11, 2011 at 7:29 PM Post #22 of 45
you can get really good reproduction of a great speaker/room setup with Smyth SVS Realizer processing and head tracking - it gives an external, stable sound "image" in your headphones, as good as the speaker/room it was calibrated against
 
for me the SVS Realizer system was as far beyond crossfeed or Dolby as stereo is beyond mono
 
http://smyth-research.com/technology.html
 
Jul 11, 2011 at 10:16 PM Post #23 of 45
I did some research: yes, covering the ear with a flat driver so as to produce planar waves improves soundstage, at least in the opinion of the authors of several electrostatic DIY articles.
 
Jul 13, 2011 at 10:19 PM Post #24 of 45
Crossfeed is one simple tool. Room ambience can be added afterwards using a convolution reverb, or even a simple algorithmic one; simple reverb is pretty easy to build in analog form too.
The best possible effect would be dual mono-to-stereo convolution, with the impulse responses recorded using the same technique binaural recordings are made with. This would of course require a great-sounding room, very flat-sounding speakers and a great recording setup, otherwise it will introduce excess noise or other artifacts. Typically, recording-equipment limitations mean it will not be as good at low frequencies. However, nobody says you can't combine it with crossfeed to get the best of both worlds.
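
A rough sketch of what that dual mono-to-stereo convolution could look like, assuming you have already captured four impulse responses binaurally (each speaker to each ear). The function and argument names are made up for illustration:

```python
import numpy as np
from scipy.signal import fftconvolve

def speakers_to_binaural(left_ch, right_ch, ir_ll, ir_lr, ir_rl, ir_rr):
    """Render a stereo track through binaurally measured speaker/room impulse responses.

    ir_ll = left speaker to left ear, ir_lr = left speaker to right ear, and so on.
    The impulse responses are assumed to come from your own measurement."""
    ear_l = fftconvolve(left_ch, ir_ll) + fftconvolve(right_ch, ir_rl)
    ear_r = fftconvolve(left_ch, ir_lr) + fftconvolve(right_ch, ir_rr)
    peak = max(np.max(np.abs(ear_l)), np.max(np.abs(ear_r)), 1e-12)   # avoid clipping
    return ear_l / peak, ear_r / peak
```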
 
If you're going down the DSP route, note that even relatively minor differences (e.g. 2 dB) in frequency response at certain high frequencies (e.g. 3.5 kHz, 7 kHz) can affect localization of sound a lot and have to be corrected for the best results.
This is easy to do with DSP, but far harder with analog equalizers, as you might need quite a few bands to do it reasonably accurately. (I found 4 to be the real minimum.)
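
In software, a couple of peaking filters are all it takes. Here is a sketch using the standard audio-EQ-cookbook peaking biquad, with the 2 dB gain and the Q chosen purely as examples:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """Audio EQ cookbook peaking filter; returns (b, a) coefficients."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Example: lift 3.5 kHz and 7 kHz by 2 dB each (illustrative values only)
fs = 44100
audio = np.random.randn(fs)               # stand-in for one channel of real audio
for f0 in (3500, 7000):
    b, a = peaking_biquad(fs, f0, gain_db=2.0)
    audio = lfilter(b, a, audio)
```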
 
Jul 19, 2011 at 12:22 PM Post #27 of 45
The crossfeed center frequency is different from person to person - best left tunable.
Try to find a good cheap housing...
 
Also, to keep the size small, you need to do it with an active topology, that is, with opamps.
A typical crossfeed is a (very short) delay + lowpass, so 4 opamps are necessary - that already hits $20 just for decent opamps.
Edit: Six if you want to change the crossfeed delay time, since the delay is set by capacitance - you want a capacitance multiplier.
Probably two extra if you want to change the lowpass slope, and two more for highboost, which is a high-shelf filter.
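
To get a feel for the passive values behind the delay + lowpass, here is a back-of-the-envelope calculation; the 700 Hz corner and the capacitor values are only examples, not a tested design:

```python
import math

def rc_lowpass(cutoff_hz, cap_farads):
    """First-order RC lowpass: corner f_c = 1/(2*pi*R*C).
    Its low-frequency group delay is roughly R*C, which is why the crossfeed delay
    is 'set by capacitance' - it depends on the corner frequency, not on the
    particular R/C split."""
    r = 1.0 / (2 * math.pi * cutoff_hz * cap_farads)
    return r, r * cap_farads * 1e6   # resistance in ohms, delay in microseconds

# Example: a 700 Hz corner with a few common capacitor values (illustrative only)
for c_nf in (10, 22, 47, 100):
    r, delay_us = rc_lowpass(700, c_nf * 1e-9)
    print(f"C = {c_nf:>3} nF  ->  R ≈ {r / 1000:.1f} kΩ, group delay ≈ {delay_us:.0f} µs")
```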

 
Jul 19, 2011 at 1:10 PM Post #28 of 45
you can try crossfeed with plugins for your PC media player - no need for hardware
 
you can also use RockBox on many DAPs
 
but the effect is small - nowhere near as significant as binaural recordings - such as http://www.head-fi.org/forum/thread/550220/chesky-records-makes-a-high-rez-album-for-head-fi-ers-in-binaural
 
and as good as binaural can be, the soundstage still moves with your head, which hugely interferes with your mental reconstruction of an external event - really try the SVS head-tracking DSP. Having heard it at the Ft Lauderdale CanJam, I find it hard to sit around reading such exaggerated/overblown discussion of the tiny "between your ears" soundstage and imaging improvements from passive headphone design features, simple crossfeed, or even binaural recordings without personalization and head tracking, when the "real deal" is so much better
 
just listen to a decent loudspeaker/room setup's soundstage and imaging to get an idea of how weak headphones generally are in this dimension of listening - even my bedroom's Minimus 7 speakers and US$14 "manager's special" Accurian amp are way ahead of any of my $$$ headphones for soundstage (just don't turn it up over ~85 dB, where the tweeter bottoms out)
 
the SVS Realizer can be "calibrated" right at the mixing desk/mastering engineer's chair for up to 7.1 surround - and then your headphones sound just like those loudspeakers and that room
 
Jul 19, 2011 at 4:38 PM Post #29 of 45
Head tracking is completely unnecessary, unless you happen to listen with your head tilted. Head tracking attempts to add information that's not stored in the sound at all, which is in general impossible. The result is that the sound is colored (unless equalized, say hi to this thread) and has phase artifacts.
 
Note that the equalization and the adjustment of their head-shadowing model make up most of the personalization they're doing.
In other words, headphones equalized to sound flat to your ears + crossfeed set by your ears = >90% of their system sans head position sensitivity and phase artifacts.
Their head-shadowing model is only slightly more advanced than a spherical head - they use the position of the earlobes on the head, which changes the front-to-back frequency-response difference. That's irrelevant for stereo, but relevant for surround sound and their positioning trick.
 
The crossfeed effect is not small - the reason it's small in certain implementations is that the center frequency is set too low (I need 1800 Hz, not 300-900 Hz, for proper positioning), and/or the amount is too low (I need 15 dB), and/or highboost is used (for me, highboost moves the sound inside or behind the head).
BS2B can barely match these settings, especially the crossfeed amount.
Crossfeed synergizes greatly with equalization. I'd call it an exponential improvement in soundstaging when the two are combined (crossfeed to the EQ power).
 
Rockbox's crossfeed is known for being subpar, and so is Foobar2000's old crossfeed. Head-fit is pretty decent - the nice part is that you can equalize in the same plugin - but its crossfeed colors the sound. For me, BS2B and plugins derived from it are the winners. My own passive crossfeed circuit, based on Linkwitz's, comes pretty close to them.
 
Jul 19, 2011 at 5:10 PM Post #30 of 45
Quote:
Head tracking is completely unnecessary, unless you happen to listen with your head tilted. Head tracking attempts to add information that's not stored in the sound at all, which is in general impossible. The result is that the sound is colored (unless equalized, say hi to this thread) and has phase artifacts.


Head tracking is more about keeping the illusion that you're listening to something that's really in front of you.
 
This has to do with the fact that pretty much anything you hear could actually be in any location, depending on what may or may not be in the way and what the sound actually is. Your brain has to figure out where to localize the sound based on your memory of how it should sound as well as what your other senses tell you. It's like one of those optical illusions where an image shifts between two different things. If the same auditory data is consistent with two different things, your other senses will be the tiebreaker. That's why imaging on headphones is better if you close your eyes. If you try to "follow" where the sound could be coming from with your eyes and don't see anything, the illusion collapses and it sounds like it's coming from the headphones again.
 
The head tracking on the Realiser exists to keep the illusion going. If you turn your head and the sound stays the same, the whole illusion is likely to collapse, because it tells your brain that the sound can't be coming from in front of you.
 
I experience this sort of thing when watching movies with headphones via a surround DSP. I have to keep my head almost perfectly still or the illusion of a proper soundstage collapses until I sit still for a while. I normally watch movies in bed, so that's not a big deal to me, but I could see it being more important with different seating, or if you were working a mixing console and your head and upper body kept moving back and forth to work the equipment. With head tracking you wouldn't have to close your eyes or sit still to "reacquire" the illusion.
 
At first I thought head tracking was just a gimmick, but then I noticed what I was doing when I watched movies and thought about why.
 
