Headphone spatiality is difficult, because every error is delivered directly into our ears. There is nothing in between to mitigate errors or to create real spatial information, as there is with speakers. The most convincing soundstage with headphones comes from binaural sound. Unfortunately, binaural sound is not common, because almost all stereophonic music is produced primarily for speakers, and binaural recordings don't translate well to speakers. The secret of binaural recordings is the correct, or near correct, combination of many spatial cues.
EQ may help with soundstage (if it can attenuate spatial problems), but only to a limited degree. Often the width of headphone sound is ruined by excessive spatiality targeted for speakers. When the level difference between the ears (ILD) is too large compared to natural levels, spatial hearing is likely to deduce that the sound is very near one ear (the side with the higher level), because that would explain the level difference. Reducing ILD to natural levels helps spatial hearing believe the sound is farther away, especially if other spatial cues support it (for example, a high reverberation level compared to the direct sound). So more channel separation can actually make the headphone soundstage narrower! On speakers the effect is the opposite, because the listening room regulates spatial cues toward natural levels. Instead of increasing channel separation, the target should be natural levels. That is where we get the "widest" headphone sound; going over or under that sweet spot makes the sound narrower, just in different ways (the sound ends up outside or inside the head).
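To make the idea of reducing channel separation concrete, here is a minimal mid/side sketch in Python (NumPy assumed). The function name and the `width` parameter are my own illustrative choices, not anything from the post; real tools would also band-limit or otherwise refine this.

```python
import numpy as np

def set_stereo_width(left, right, width=0.7):
    """Scale the side (L-R) component of a stereo signal.
    width=1.0 leaves the signal unchanged, width=0.0 collapses
    it to mono; values in between reduce the inter-channel
    level difference (ILD) toward more natural levels."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid + width * side, mid - width * side

# Hard-panned test signal: a 1 kHz tone only in the left channel.
t = np.linspace(0, 0.01, 441)
left = np.sin(2 * np.pi * 1000 * t)
right = np.zeros_like(left)

new_left, new_right = set_stereo_width(left, right, width=0.5)
# The right channel now carries part of the tone, so the level
# difference between the ears is smaller than in the original.
```

With `width=0.5` the hard-panned tone ends up split 0.75/0.25 between the channels instead of 1.0/0.0, which is the kind of ILD reduction described above.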
Spatial hearing looks at many spatial cues and tries to come up with an interpretation that makes sense. If a recording has a huge ILD (indicating the sound is near the listener) and also a strong reverberation level compared to the direct sound (indicating the sound is far away), spatial hearing may struggle to make heads or tails of it. That struggle even causes listening fatigue in some listeners, including me.
Tonal accuracy is partly a myth. Sounds don't need to have tonal accuracy; instead, sound reproduction needs to be transparent. In real life sound gets spatially coloured, and that colouration is the information about the kind of existence the sound has. Does the sound happen in a church? Or in a forest? Or in your living room? Our hearing expects the sound to be coloured. That colour equals the physical existence of the sound. Too much colour is what we call bad acoustics.
Crossfeed with a time delay might help increase the impression of width in the soundstage. In my own tests with closed headphones, though, where I simply mixed some of the left and right channels together without a delay, it actually seemed to make the soundstage a bit narrower, though it probably did push things a bit farther away on the Z-axis.
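A delayed crossfeed of the kind mentioned might be sketched like this (a hedged illustration, not any specific product's algorithm: the ~0.3 ms delay roughly matches an interaural time difference, and the gain value is an arbitrary assumption):

```python
import numpy as np

def crossfeed(left, right, fs=44100, delay_ms=0.3, gain=0.5):
    """Feed an attenuated, delayed copy of each channel into the
    opposite one, loosely mimicking how a speaker's sound reaches
    the far ear slightly later and quieter than the near ear."""
    d = int(round(fs * delay_ms / 1000.0))
    # Delay each channel by d samples (zero-padded, same length out).
    delayed_l = np.concatenate([np.zeros(d), left])[:len(left)]
    delayed_r = np.concatenate([np.zeros(d), right])[:len(right)]
    out_l = left + gain * delayed_r
    out_r = right + gain * delayed_l
    return out_l, out_r
```

Setting `delay_ms=0` reproduces the undelayed mixing described above; with the delay in place, each ear hears the opposite channel later as well as quieter, which is closer to what happens with speakers.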
I'm sure you're correct, though, that spatial cues are very important. And if you are listening on open headphones in a fairly reflective or reverberant room, you may get a more realistic, more time-delayed crossfeed from the headphones themselves at some of the higher frequencies, as well as the ambient sounds of the room, which will also differ between the ears. All of this probably contributes in some way to the perception of greater width and spaciousness in the sound.
I'm not an expert on any of this, though, so I may be completely off base.