Head-Fi.org › Forums › Equipment Forums › Sound Science › Virtual Stage

Virtual Stage

post #1 of 69
Thread Starter 

Quick background: I currently have AKG K81 'phones and MEE IEMs. Both sound similar once I EQ out canal resonance (and all that other stuff, yippee-ki-ay). I need to get a proper pair if I want to enjoy them: the AKGs have developed a channel imbalance (and frequency-response mismatch), while the IEMs require drastic EQ and pop out any time my ears move. I laugh or smile easily, so I can't keep them in and have quit using them altogether.

 

My problem comes down to the virtual stage locations and depths in the soundstage with all music. Maybe some HRTF processor or something similar could remedy this and bring the virtual stage into proper balance. Maybe I didn't search hard enough, but I believe a solution exists that undoes the collapsed-stage effect. Any suggestions on what can be done, or links to the science behind this kind of thing, would be helpful.

 

A song whose stage suffers more than average in the transition from speakers to headphones (in my case) is "The Four of Us Are Dying" by NIN.

 

 

No comparison to the 24/96, but the collapsed-stage effect is the same.

post #2 of 69

Have you tried crossfeed (software or hardware)?

post #3 of 69

The effect of soundstage with speakers doesn't really translate to headphones. Instead of having the aural image laid out in front of you, it's drilled straight through the middle of your skull. If soundstage is important to you, go back to speakers.

post #4 of 69
Thread Starter 

@xnor: Have tried a multitude of things, yes.

 

@bigshot: I should've mentioned (I made a bunch of assumptions in my first post) that I only use headphones when the enjoyment I get from them rises above a certain threshold (which changes with mood etc.) AND no appropriate speaker setup is available. Much of the time I'm busy and away from home, where my speakers are. The enjoyment I get from my headphones has fallen below that threshold except at home under controlled conditions.

Also, I specifically asked in my post about processing to move the sound image out of the head based on a personal HRTF and psychoacoustics, to emulate left/right speakers or a more exotic wavefront simulation. I know you repeatedly mention that headphone sound is a through-the-skull thing. Surely you've looked thoroughly into the possibilities of processing, perhaps dismissing it based on the experience you did have; I have not dismissed it. I'd rather lose some refinement for an overall greater improvement in sound imaging while I'm not around speakers. Obviously things like bone conduction would be missing. HRTF-like processing would also work well with object-based audio in the future.

 

Links to any writings on recent/significant study of psychoacoustics (that could be related to playback systems) would be useful if you know of any.

 

Thanks.

post #5 of 69

I've tried a few headphone processing things... My Yamaha amp has a synthetic 5.1 processor that is supposed to be good. They all made the music sound distant, not focused into a coherent soundstage. I really think when it comes to soundstage, headphones are what they are.

post #6 of 69

The Smyth Realiser is designed to exactly replicate a 3D loudspeaker soundstage, even to the level of replicating a particular loudspeaker setup in a particular room.

According to reports, it achieves this with mind-blowing accuracy. There are a number of threads about it in the High-End forum.

 

I haven't heard it myself, but would be interested to try if the opportunity arose.

 

At around $4k though, it's a bit pricey for some.

post #7 of 69
Sorry, I really don't think this is something a DSP can fix.
post #8 of 69

Why not? We only have two ears. If you apply HRTF impulse responses to your music, it'll sound like the place the IRs were recorded in.

I've tried this myself (not with the Smyth Realiser, though) and it works. At first it feels weird because you suddenly hear the room with all its reflections and reverb, but you get used to it.
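For anyone wondering what "apply HRTF impulse responses" means in practice, here's a minimal sketch (assuming NumPy; the toy ILD/ITD-only "HRIRs" below are illustrative stand-ins for real measured ones, not data from any actual product):

```python
# Minimal sketch of binaural rendering via HRTF impulse responses.
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with left/right HRIRs to place it
    at the direction the HRIRs were measured from."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    out = np.stack([left, right], axis=-1)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out  # normalize to avoid clipping

# Toy example: an impulse source and a crude "HRIR" pair that only
# models interaural level and time differences (real HRIRs also
# encode pinna/head filtering, which is what gives elevation cues).
fs = 48000
src = np.zeros(256); src[0] = 1.0
itd = int(0.0006 * fs)              # ~0.6 ms interaural delay
hl = np.zeros(64); hl[0] = 1.0      # near ear: full level, no delay
hr = np.zeros(64); hr[itd] = 0.5    # far ear: attenuated, delayed
stereo = render_binaural(src, hl, hr)
```

With real measured HRIRs (and ideally the room response folded in) the convolution is what makes the source seem to sit outside the head rather than between the ears.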

 

But I prefer the dry headphone sound which is why I use a crossfeed plugin instead.

post #9 of 69

The ears can accurately sense directionality. Sounds can be localized by moving your head slightly; people do this all the time without even knowing it. Reflected sound arriving from different directions is a big part of a speaker setup. No sound processor can simulate that if the transducers making the sound are clamped tightly to your noggin.

 

All of the synthetic ambiences for headphones I've heard, just make the music distant sounding, not localized into a coherent soundstage.

 

There was a line of transfers of acoustic recordings from the early days of the 20th century (Caruso, Galli-Curci, etc.) that had an interesting idea. They took a top-of-the-line, perfectly restored acoustic gramophone with a beautiful oak horn and put it on the stage of a theater. Then they carefully miked it for 5.1 sound. Played back in surround, it was *exactly* like what a good Victrola sounds like... present, alive, loud... Unfortunately, the CD format they chose for the release was one of the early surround formats that quickly fell by the wayside. Very few people ever heard it in 5.1. I got a copy of one of the discs, and in two-channel the blend of direct sound and hall reverberation that sounded so good in 5.1 was a muddled mess.

 

I am very excited about 5.1. I think it's a huge breakthrough in sound reproduction, particularly in the area of soundstage. But I'm afraid that headphones, by definition, will never see any of that improvement.


Edited by bigshot - 10/19/12 at 12:09pm
post #10 of 69

That's why the Realiser includes head trackers that monitor movement of the head and update the filters accordingly.

Directivity/reflections are included in the HRTF filters.
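One hypothetical way such tracking could work (a sketch only; the Realiser's actual internals aren't described in this thread) is to keep HRIR pairs measured at several azimuths and switch pairs as the tracker reports a new head yaw:

```python
# Hypothetical head-tracked HRIR selection: store HRIR pairs measured
# at several azimuths and pick the pair closest to the source's angle
# relative to the head, so the virtual speaker stays fixed in the room.
def pick_hrir(hrir_bank, source_az_deg, head_yaw_deg):
    """hrir_bank maps measurement azimuth (degrees) -> (hrir_L, hrir_R).
    The source's direction relative to the listener is source - yaw."""
    rel = (source_az_deg - head_yaw_deg) % 360

    def angular_dist(az):
        d = abs(az - rel) % 360
        return min(d, 360 - d)  # shortest way around the circle

    nearest = min(hrir_bank, key=angular_dist)
    return hrir_bank[nearest]
```

A real system would interpolate between measured directions and crossfade filters to avoid clicks, but the nearest-pair idea is the core of it.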

 

Crossfeed has neither, but the job of crossfeed is to move the sound "out of the head" and reduce unnatural stereo separation, which works nicely. If you rotate your head, the band will always play in front of your eyes. At least that's what it does for me.
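For illustration, a bare-bones crossfeed along those lines (not any particular plugin's algorithm; the delay, gain, and cutoff values are illustrative) feeds an attenuated, delayed, low-passed copy of each channel into the other, mimicking the acoustic crosstalk you get from speakers:

```python
# Minimal crossfeed sketch: each ear also hears the opposite speaker,
# slightly later, quieter, and with highs rolled off by head shadowing.
import numpy as np

def one_pole_lowpass(x, fs, cutoff):
    """Simple one-pole IIR low-pass (models high-frequency head shadow)."""
    a = np.exp(-2.0 * np.pi * cutoff / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1.0 - a) * s + a * acc
        y[i] = acc
    return y

def crossfeed(left, right, fs=48000, delay_s=0.0003,
              gain=0.3, cutoff=700.0):
    """Blend each channel with a processed copy of the opposite one.
    delay_s approximates the interaural time difference; gain and
    cutoff set the amount and darkness of the bleed (values are
    illustrative, not from any specific plugin)."""
    d = int(delay_s * fs)

    def bleed(x):
        y = one_pole_lowpass(x, fs, cutoff)
        return np.concatenate([np.zeros(d), y[:len(y) - d]]) * gain

    out_l = left + bleed(right)
    out_r = right + bleed(left)
    norm = max(np.max(np.abs(out_l)), np.max(np.abs(out_r)), 1.0)
    return out_l / norm, out_r / norm
```

The effect is that a hard-panned instrument still leaks softly into the far ear, which is what pulls the image forward out of the skull.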
 


Edited by xnor - 10/19/12 at 12:13pm
post #11 of 69
Wild! That must take some getting used to. It would probably be great for video games.
post #12 of 69
Quote:
Originally Posted by bigshot View Post

Wild! That must take some getting used to. It would probably be great for video games.

 

I can only imagine. I have a Dolby Headphone solution which imitates 5.1 well enough, but the Smyth is another ball game entirely.

post #13 of 69

My Yamaha does the Silent Cinema thing, which is OK, I guess. The Realiser personalizes the HRTF, tracks head movement, and takes the concept to a new level. It seems it accepts HDMI but has no decoders, so the multichannel stuff has to come in as PCM or from the pre-outs (which some cheapo Yamahas like mine have). I've heard great things about it, but it's currently outside my budget, and before putting that kind of money down I would have to try it first.

 

There is also the Beyer Headzone, which is a bit cheaper, but probably not on par with the Realiser.

post #14 of 69
My amp has that Yamaha thing too. It didn't do much for me.
post #15 of 69

Yeah, I have it off most of the time. Proper implementation may be the key. The Realiser may get all of it right with the right sound formats and/or recordings. But we are talking kilobucks.
