Originally Posted by oaklandrkg
My response is based entirely on Headroom's crossfeed used with the HD-650s. Perhaps other crossfeeds and/or different headphones would change my perspective, perhaps not.
There is no question that the crossfeed shrinks the soundstage. What it does, more than 'even out' hard-panned sounds, is pull all the sound forward, so that the music comes more from the front instead of from the far sides. This is why the sound is described as less fatiguing: it isn't pulling on your ears from such distant polar directions (far left vs. far right). Unfortunately, this, by definition, means the soundstage is significantly shrunk.
Also, it adds a noticeable amount of coloration, mostly by emphasizing the low end, allowing (IMO) the bass to commandeer the rest of the music. Sometimes, though not very often, it will 'color' to the point of misrepresenting timbre, too.
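For the curious, both effects follow from how a typical crossfeed (in the Linkwitz/Meier mold) works: each channel receives an attenuated, slightly delayed, low-pass-filtered copy of the opposite channel. Because the crossfed copy is low-passed, the bass largely sums to mono and gets a relative boost, which is exactly the low-end emphasis described above. Here's a rough Python sketch; the parameter values (level, cutoff, delay) are illustrative assumptions, not HeadRoom's actual circuit:

```python
import numpy as np

def crossfeed(left, right, sr=44100, level_db=-6.0, cutoff_hz=700.0, delay_ms=0.3):
    """Mix an attenuated, delayed, low-pass-filtered copy of each channel
    into the opposite channel (generic crossfeed sketch)."""
    gain = 10 ** (level_db / 20)               # crossfeed level as linear gain
    delay = max(1, int(sr * delay_ms / 1000))  # interaural delay in samples

    # One-pole low-pass: only the lows cross over, so summed (mono) bass
    # gets a relative boost while highs stay hard-panned.
    alpha = 1 - np.exp(-2 * np.pi * cutoff_hz / sr)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc += alpha * (s - acc)
            y[i] = acc
        return y

    def bleed(x):
        # Delay the opposite channel, then filter and attenuate it.
        d = np.concatenate([np.zeros(delay), x[:-delay]])
        return gain * lowpass(d)

    return left + bleed(right), right + bleed(left)
```

Feeding a hard-panned signal through this makes the effect obvious: a sound that was only in the left ear now appears (quieter, duller, slightly later) in the right ear as well, which is precisely the "pulled forward, smaller soundstage" result.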
If you doubt this, I suggest locating some sort of decent 'test' CD that has tracks for verifying soundstage and frequency response. I've found that the differences are so distinct that I have a hard time not vomiting in my mouth when I hear them described as "subtle." OK, a little overly dramatic, but you get my point. HeadRoom uses the word "subtle," and the fact is, they know better.
The only time I use the crossfeed is with hip-hop. I feel that the altered sound helps against hip-hop's immense struggle to sound 'at home' on headphones. This genre is unique in that soundstaging is relatively unimportant, and more bass, as long as it's well defined, is almost always a plus with hip-hop and headphones. The crossfeed collapses the vocals into the beat and adds bass, both of which help hip-hop sound more alive (IMO) when listened to on headphones.
Other than that, I have no reason to use the crossfeed - the coloration is obvious, along with the shrunken soundstage, making it too distracting for my tastes. Bravo to everyone else enjoying it, but for me, it's a step in the wrong direction.
Since crossfeed gets so much attention, my question is, why do headphones need to imitate speakers to sound their best? Why does the sound of one necessarily have to be correct over the other? I find it hard to believe that sound engineers have ignored headphones entirely when mastering music. And even if one is correct, why not enjoy and appreciate the differences? Isn't that the biggest part of why we use headphones: that they're different from speakers? I mean, I like apples, but I sure do like oranges, too.
Too much fuss, IMO, as crossfeed sacrifices what it tries to save along the way. Here I was, enjoying headphones for their intrinsic sound and appreciating those distinctions, and I never once considered this passion 'a grand quest for stereo speaker sound' - I have a stereo for that.