Head-Fi.org › Forums › Equipment Forums › Computer Audio › The Holy Grail of True Sound Stage (Cross-Feed: The Next-Generation)

The Holy Grail of True Sound Stage (Cross-Feed: The Next-Generation)

post #1 of 80
Thread Starter 
Hello everyone!

As this is going to be my very first post in this forum, I felt I might as well make it a good one with some substance. I’m afraid it will be somewhat long, but I think it’s going to be worth it. I have collected a lot of information that can also be found elsewhere, but I felt it a good idea to present it all in one place. To make it all more bearable (and fun), I’ll use easily digestible episodes. Please bear with me and forgive my language, as English is not my first language.

Part 1: Introduction

First, the Appetizer: Virtual Barbershop

If you know it, fine, if you don’t, listen to it NOW and be amazed! Now THAT is what I would call Sound Stage!

I’m going to share with you my quest for the Holy Grail of effortless, realistic, natural, life-like, transparent, and therefore “true” sound stage through headphones. I can say that I succeeded in this quest to my own full satisfaction. I’d like to share this with you all, because I’m almost sure this is a subject every head-fier is interested in. I’m going to report on the valuable work already done by others, add my own theories, findings, experiments and results, and present my case step by step, so at the end of this pseudo-scientific ranting you’ll know as much as I know, and I dare say that for many of you it could improve your listening experience. Do I sound presumptuous or downright crazy? Read on and find out.

I thought to myself: wouldn’t it be absolutely great if I could listen to my favourite music just like the Virtual Barbershop? That’s what started me on my quest, and although in the end I have not managed to place my favourite musicians in a virtual space as clear-cut as the barbershop, I have come very close!

A little bit of personal background first to show you my credentials. Traditional hi-fi has always been an essential part of my life, from the moment I got my first transistor radio at age 7 and gramophone at age 10. As a teenager I built my own speakers from the drivers in broken-down TV sets, and all my pocket money went into sound systems. As an adult I’ve worked as a high-end hi-fi salesman, sometimes selling equipment costing 6 figures (think Mark Levinson, Wadia, etc.), and I was and still am a hi-fi purist, always looking for ways to keep the signal path as “clean” as possible, but only if that works and makes the sound better, not merely for its own sake. I have owned good cans in the past as well, although I rarely used them, and if I did, it was through the headphone output of a main or pre-amp, not through a dedicated headphone amp. Other than that, I felt, and still feel, that most of the time listening through speakers is a much more realistic and relaxing experience than listening through headphones, although I must add that headphones are not to blame for that; music recordings are!

Part 2: All the World’s a Sound Stage?

I guess this chapter is not going to be news to most of you, but I’ll include it anyway to paint a complete picture and as preparation for what comes after. From the very beginning, music recording has been focused on reproduction through loudspeakers rather than headphones. That’s why almost all recordings to date are “stereophonic” recordings. In modern sound studios stereo mixes are created from multiple mono tracks, but in the old days a stereo recording was made by placing two microphones a certain distance apart, and realistic playback of a 3-dimensional “stereo image” was possible through two speakers similarly placed a certain distance apart, a phenomenon all of us know very well. This type of recording was and still is meant to be heard through a pair of loudspeakers in order to unfold and re-create its inherent 3D image or sound stage. If heard through headphones, however, each channel that’s supposed to be heard by both the left and the right ear is instead heard by only one ear, causing the stereo image to collapse into a flat line between the ears.

This is the (not much of a) “sound stage” that we perceive through headphones while listening to stereophonic music recordings (99.9% of all recordings, or more) in our natural state of hearing. Note the two issues I have highlighted, which I will get into separately.
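To make the collapse concrete, here is a minimal numpy sketch (my own illustration, not anything from the thread) of the acoustic cross-feed that speakers provide for free: each ear also hears the opposite speaker, attenuated and delayed by roughly the interaural time difference. Headphones omit exactly this term. The function name and constants are my own assumptions.

```python
import numpy as np

def speaker_sim(left, right, fs=44100, atten_db=-3.0, itd_ms=0.27):
    """Model, very roughly, what speakers do acoustically: each ear also
    hears the opposite channel, attenuated and delayed by the interaural
    time difference (~0.27 ms for a typical head width)."""
    delay = int(round(itd_ms * 1e-3 * fs))
    g = 10 ** (atten_db / 20)
    cross_l = np.concatenate([np.zeros(delay), right])[: len(left)] * g
    cross_r = np.concatenate([np.zeros(delay), left])[: len(right)] * g
    return left + cross_l, right + cross_r

# A signal hard-panned left: over headphones the right ear hears nothing;
# over (simulated) speakers, both ears hear it.
fs = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
left, right = tone, np.zeros_like(tone)
ear_l, ear_r = speaker_sim(left, right, fs)
```

With headphones the right ear gets exactly zero signal from a hard-left pan; with the simulated speakers it still receives an attenuated, delayed copy, which is the cue the brain expects.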

Part 3: Binaural Minority Report

The Virtual Barbershop (VB) is NOT a stereophonic recording. It’s a binaural recording, tweaked digitally by way of a proprietary algorithm. The binaural recording technique is one specifically designed for playback through headphones. The two microphones are placed in a dummy head where our eardrums would be located. If the dummy were an exact plaster copy of our own head and ears, we would have no need to digitally enhance the recording. Anything would then sound just like the VB. I’m sure you can understand why. In order for it to create its 3D realism to such an extent in each pair of human ears, the digital algorithm that’s whispered into your ear at the end of the clip is used. It enhances the so-called head-related transfer functions (HRTF) of the recorded sounds. This is what creates the main difference between the perception of front and rear. Of the very few binaural recordings that are made, only some will give you that exact front/rear positioning like the VB, because some binaural recordings are recorded with a Jecklin Disc, or a dummy head without ears, so typically the perceived space is placed 180 degrees in the rear OR 180 degrees at the front, rather than the full 360 degrees like the VB. It’s our ears, and in this case I mean those funny pieces of meat sticking out of the sides of our heads, that allow us to discern between a sound coming from the front or the rear. They screen the sounds coming from the rear more than they do the sounds coming from the front. The way sound is altered by our outer ear is described by these HRTF. So the first clue I followed was the mysterious algorithm that was whispered in my left ear. But first, as I promised above, I‘d like to share my experience with natural hearing and the lack thereof!
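For the curious, the two lateral cues that sit underneath all this (interaural time and level differences) can be sketched in a few lines. The function below is my own toy illustration using Woodworth's ITD approximation, not the Cetera algorithm; note that it deliberately cannot distinguish front from rear, which is exactly the job of the pinna-dependent HRTF spectral cues described above.

```python
import numpy as np

def pan_binaural(mono, azimuth_deg, fs=44100, head_radius=0.0875, c=343.0):
    """Place a mono source at an azimuth using only the two strongest
    lateral cues: interaural time difference (ITD, via Woodworth's
    approximation r/c * (az + sin az)) and a crude interaural level
    difference (ILD, up to ~6 dB softer at the far ear). Front and rear
    remain indistinguishable; resolving them needs HRTF spectral cues."""
    az = np.deg2rad(abs(azimuth_deg))
    itd = head_radius / c * (az + np.sin(az))   # seconds of interaural delay
    lag = int(round(itd * fs))
    ild = 10 ** (-np.sin(az) * 6 / 20)          # far-ear attenuation
    near = mono
    far = np.concatenate([np.zeros(lag), mono])[: len(mono)] * ild
    # positive azimuth = source to the right, so the left ear is the far ear
    return (far, near) if azimuth_deg > 0 else (near, far)
```

At 0 degrees both ears get identical signals; at 90 degrees the far ear gets a later, quieter copy, which the brain decodes as "hard right".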

Part 4: The Red or the Blue Pill

I’ve noticed that when the subject comes up, almost all long-term head-fiers state that although they have tried cross-feed and even preferred its sound at the start of their head-fi “career”, after a while they more and more started to prefer listening without it. Why is that? I think it’s all between the ears!

Our brain is an amazing thing capable of performing awe-inspiring feats. As we come into the world, our ears (that is, our brain) don’t yet have the capacity to locate sounds. We have to slowly start learning to interpret those slight phase shifts in sound, those reflections and diffractions that are caused by the unique shape of our ears and our head (HRTF). We would lose that capability if we were to lose our ears or be outfitted with differently shaped ears, at least at first, but as we got used to those new ears, we would slowly gain that capability again. This shows that we are able to “re-program” our brain in order to preserve our capacity for 3-dimensional hearing. In the case of the new set of ears we merely have to continue our inherent capability for “natural” hearing, based on those subtle HRTF cues, so although the transitional adaptation period might be slightly confusing and tiring for our brain, once re-programmed, we are again able to listen effortlessly to the sounds in the world around us. Now, what does this have to do with anything?

Let’s talk about headphone fatigue. This is the reason why I said that I personally always preferred listening to speakers rather than headphones. While we listen to stereophonically recorded music through headphones, our brain is receiving auditory information that is in some way distorted and unnatural. Some aspects of it, like the frequency spectrum and the timing, are OK, but the directional cues are plainly NOT there in the way our brain is used to receiving them. So rather than give up and leave us with the narrow between-the-ears stereo image we actually perceive in that natural state, our brain starts the process of re-programming itself in order to re-instate the illusion of natural positional hearing. This takes time and effort. It does cause fatigue, but after some time our wonderful brain IS actually able to make us believe that we are listening to a speaker-like sound stage. And the more we get used to it, the less fatiguing it gets, and we are happy. This is the situation almost all of you find yourselves in. It’s not exactly natural, and it still does take some small effort for the brain to maintain the illusion, but it kind of works, and at least there are absolutely no changes made to the frequency spectrum, the timing or resonant harmonics of the source.

This has been the one and only choice available, but now I’m going to offer you a pill of a different colour. I’m not claiming it’s the better choice. It’s the one I chose, and I’m absolutely happy with it. What if we could spare the brain the initial time and effort to re-program itself for headphone listening, and the continuous effort it takes to uphold an illusion? I believe that it still creates fatigue in even the most experienced of head-fiers, because it takes so much more effort to translate those invalid auditory cues into a coherent sound stage, at least compared to the natural HRTF phase-based cues. As most of you know, I’m not the first one to get the idea of some sort of pre-processing to make the sound more natural. The thread title does mention cross-feed, so that’s one of the things I started experimenting with.

Part 5: Cross-Feed, just a newbie toy?

Up until a year ago I hadn’t owned a set of cans for about 15 years, for the reasons mentioned above. There were three reasons why I gave headphones another chance. First, because my family situation didn’t allow me to listen to music through my speakers at the proper sound level anymore. Second, because I stumbled upon this forum and started reading all kinds of amazing stuff about headphones and relatively cheap dedicated head amps. And third, the cross-feed circuits built into them, because at that time that seemed like the solution to the problem I had with headphone listening. I bought myself a Grado SR-60 and a Headstage Lyrix amp, connected it to my CD player, and... I was enchanted!

Little did I know how many more revelations were yet to come. I’ll make a long story short at this point, because my subsequent attack of upgraditis is not relevant to the subject at hand. I used the cross-feed setting on the Lyrix almost all the time, although I started noticing that the perceptual difference between with and without was getting smaller, AND I did notice that without it the sound was more accurate as far as frequency and timing were concerned, and more transparent. While making the change from using my CD player as a source to my computer, I upgraded to a Derek Shek NOS DAC, a DIY-built OTL tube amp (NOS-rolled) and headphones alternating between an AKG K340 and K240 Sextett. All in all a set-up I am absolutely happy with at the moment. I play mostly lossless files through Foobar, with bit-perfect output obviously.

As I had left the cross-feed of the Lyrix behind, I listened without it through my new system for a few weeks and although I did indeed learn to experience a certain measure of depth in the music, as before, it was way too little and too unnatural for my taste and in some recordings I didn’t like it at all. So I started looking for an alternative to the hardware cross-feed circuit I had experienced already and found out that there were actually a few Foobar plugins offering software cross-feed. The head-fi sticky thread only mentions 4Front and naive software, but these fall way short in terms of realism. Browsing the forum I stumbled upon Boris Mikhaylov’s Bauer stereophonic-to-binaural DSP (BS2B). The name was already very promising and having played around with it and its settings, I liked it quite a bit. It emulates various hardware based cross-feed circuits and makes subtle changes to the sound. In a way it offers an option in-between the two extremes mentioned above. It helps the brain a little bit more with deciphering spatial cues and building a small-scale sound stage, but still leaves something for the brain to do: expanding the soundstage outward; all in all, a good compromise, and definitely an option.
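For anyone wondering what a BS2B-style cross-feed actually does to the signal, here is a rough sketch of the general idea. This is my own heavy simplification, not Boris Mikhaylov's actual filter or its parameter values: each ear also receives the opposite channel, low-passed (the head shadows high frequencies), attenuated and slightly delayed.

```python
import numpy as np

def crossfeed(left, right, fs=44100, fcut=700.0, feed_db=-9.5, delay_ms=0.3):
    """BS2B-style crossfeed, heavily simplified: mix into each ear an
    attenuated, slightly delayed, low-passed copy of the opposite
    channel. Parameter values are illustrative guesses."""
    a = 1.0 - np.exp(-2.0 * np.pi * fcut / fs)  # one-pole low-pass coefficient
    g = 10 ** (feed_db / 20)
    d = int(round(delay_ms * 1e-3 * fs))

    def feed(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):               # y[n] = y[n-1] + a*(x[n] - y[n-1])
            acc += a * (s - acc)
            y[i] = acc
        return np.concatenate([np.zeros(d), y * g])[: len(x)]

    return left + feed(right), right + feed(left)
```

A hard-panned source thus leaks into the opposite ear at reduced level with the highs rolled off, which is what pulls the image off the interaural axis without smearing the whole spectrum.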

At this point in time I found the virtual barbershop demo, clearly demonstrating that even more might be possible, so I started digging deeper.

Part 6: Professional Positional Audio

I’ll continue where I left off at the end of part 3 with my report on the search for that mysterious Cetera algorithm responsible for the WOW factor in the VB. I found that the demo was created by a manufacturer of hearing aids called Starkey. The Cetera algorithm was the “software” part of a hearing aid developed in the late nineties, in cooperation with another company called QSound Labs. This company, then as well as now, specializes in a wide spectrum of 3D audio solutions. Their technology is implemented in various ways, software as well as hardware based. In fact, I discovered that since the nineties various companies had started research into 3D audio, both for studio purposes, like music recording and movie surround tracks, and for positional audio on the PC (think first-person shooters). SRS Labs, for instance, has worked in the same field as QSound. Both these companies have developed software packages for the PC, able to process and enhance sound and music in a variety of ways, including headphone surround. I’ll not go into details here, as their products usually ship with certain hardware, like PC sound cards, or, if sold separately, are only useable as part of the operating system (upsampling, downsampling, etc.), so they don’t really serve the purpose of audiophile music listening.

There’s one company however that I didn’t as yet mention, Lake Technology. This Australian company developed digital audio algorithms for recording studios. One of their algorithms allowed movie studio technicians to use headphones to work with and monitor 5.1 AC3 tracks. After Dolby Laboratories licensed the technology and then even bought the whole company, it became known as Dolby Headphone. Being a company with a slightly different focus, compared to the others mentioned before, Dolby Headphone was licensed to manufacturers of DVD-players and other home-theatre equipment, where its algorithms were hardwired so-to-speak into the signal path.

Now, all these technologies were a long way from being easily useable for my purpose: to create high-quality, PC-based 3D sound through headphones. The technology exists, but we audiophile headphone listeners don’t present a large market to begin with, and are not exactly an obvious market for such companies, especially considering that most of you are convinced purists like myself, for whom any kind of sound processing represents evil. So why would any of these companies bother to create something like a Foobar plugin, when they can make money licensing their technology to manufacturers of products that are bought in huge quantities, or to professional recording studios?

That’s another taboo we audiophiles tend to suffer from. Most of recorded music is processed and down-mixed by exactly these kinds of technologies, and we are ready to defend our purist ways by preserving the “original” sound, when that sound is not original to begin with. So what would be wrong with using those technologies to merely extrapolate the sound engineer’s purpose and change a stereophonic track into a binaural one, suitable for headphone listening?

Investigating Dolby Headphone I stumbled upon THIS thread at the Hydrogenaudio forum and discovered that someone had had similar thoughts already, and that turned out to be the single most significant discovery I made during my research.

Part 7: Chungalin’s Dolby Headphone Wrapper

This guy (or girl, I’m not sure; you never know with an avatar like Calimero) goes by the name of Chungalin in various forums, head-fi included, and already developed a great piece of software around 3 years ago, called the Dolby Headphone Wrapper (DHW). It’s an official 3rd party Foobar plugin (foo_dsp_dolbyhp), and using it correctly, in combination with certain other plugins, improves on regular cross-feed processing by several orders of magnitude (imho). Reading the thread I mention above gives one a good idea of the various developments and improvements that have been made in connection with the wrapper. I won’t repeat everything that’s said there, but in case you don’t feel like reading that whole thread, I’ll summarize it for you.

The Dolby Headphone algorithm is not only built into stand-alone DVD players, but is also part of a number of commercial software DVD players for the PC. One little file in particular takes care of it: dolbyhph.dll, and Chungalin wrote a Foobar plugin that utilizes that file. There are demo versions of software DVD players available for download that include that dll file. So the wrapper converts a 5.1-channel input into a 2-channel output for headphones. Now, the big question and challenge mainly featured in the Hydrogenaudio thread is: what can we put before DHW to change a 2-channel stereo track into a 5.1 surround track? Hancoque’s large post in the thread is an excellent summary of the process of finding answers to that question.

Until now, not much of what I’ve shared with you has been my own unique effort. I’ve just gathered relevant information in one place, merely adding some conclusions and opinions. Now I’d like to share with you my review of the different methods of using DHW and the results of my own experimenting and tweaking. These days, except for the one binaural album in my possession and the VB, all the rest of my music, be it rock, classical, pop or folk, I listen to through my customized DSP chain, based on Dolby Headphone. The slight alterations in the frequency spectrum and resonant harmonics are negligible in my perception, so with most music I don’t feel I lose anything. But the gains are immense. I have something better than a speaker-like soundstage. I have a live soundstage. Words cannot begin to describe what these DSP’s do to any source of music, no matter how it’s recorded.

So what is the missing link? The last chapter will be my actual contribution. I’m going to tell you step-by-step what I’ve done and the results of my testing.

Part 8: The Icing on the Cake

The options that Hancoque presented and shared in his foo_dsp_upmix are the final developments of the “straightforward” approach to the upmixing issue. It basically takes the two stereo channels and sort of spreads them around the 6 channels. Of the DSP’s he created, I personally found the double center option slightly better than the full front option and a lot better than the full rear option. I could have happily lived with his upmix DSP’s combined with the DHW, as the results are already absolutely great, but … I did happen to have some experience with home-recording software like Cubase and Cakewalk and their use of so-called VST plugins. I was aware of the amazing pieces of software that are available for changing the number of channels in a multi-track recording, so I wondered if there wouldn’t be a more intelligent way to spread the stereo signal around the available 6 channels, and of course there is!
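To illustrate what the "straightforward" approach amounts to, here is a naive 2.0-to-5.1 spread in a few lines. The channel maths are my own guess at the general idea, not Hancoque's actual foo_dsp_upmix code.

```python
import numpy as np

def upmix_stereo_to_51(left, right):
    """A naive 'spread it around' 2.0 -> 5.1 upmix: the centre gets the
    mono sum, the rears get the out-of-phase difference ('ambience')
    signal, and the LFE stays silent.
    Channel order: FL, FR, C, LFE, RL, RR."""
    center = (left + right) * 0.5
    side = (left - right) * 0.5
    lfe = np.zeros_like(left)
    return left, right, center, lfe, side, -side
```

Feeding the six outputs into a Dolby Headphone-style renderer then wraps that spread around the listener; anything common to both channels lands front centre, while out-of-phase content drifts to the rears.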

A guy named Steve Thomson created a free piece of software called V.I. Stereo to 5.1 Converter VST Plugin Suite (VI) that incorporates a number of algorithms (e.g. ambisonics) to place sounds in the proper place in the 3-dimensional sound stage. It creates a living, breathing atmosphere out of the slightest auditory cues available in the original signal. No matter how the recording is made, as long as it’s stereophonic and not binaural, VI will create a 360-degree image that is absolutely believable, and, miraculously, hardly takes anything away from or adds anything to the frequency spectrum. In my perception it is VI that is actually responsible for placing echoes, resonances and other subtle or not so subtle cues at the proper place in the virtual sound stage, without ever overdoing it in such a way that it’s perceived as unnatural. A singer, for instance, is typically placed front center, but the acoustic reverberation of the voice that is part of the original recording is placed all around the listener, just as it should be if the singer were standing before you in a real room. And this applies to all instruments and sounds. The result is mind-blowing, and you’ll have to hear it to believe it. Of course some recordings work better with it than others, but all in all it’s pretty amazing how intelligently VI and Dolby Headphone work together to create such a realistic sound space. I have spent considerable time finding the perfect setting for VI where the focus and front/rear division are optimal and most realistic. I suggest you start with this, and only if you are the experimenting type, change the settings and see if your taste differs from mine.

So, what do you need? I’ve already explained above how to get the DHW. Place this last in the DSP chain (well, not much of a chain, as there are only two DSP’s needed). Configure it by pointing it to the dolbyhph.dll on your PC, set the room to the DH1 reference room (the other rooms are too large), set amplification at 100% (changing this slightly affects sound quality, similar to the use of ReplayGain) and make sure to leave Dynamic Compression off. Now download the VI Suite (Google it) and the old VST host Foobar plugin (foo_dsp_vst), so NOT George Yohng’s VST wrapper, as this plugin doesn’t support multiple channels. Place the VST host (bridge) first in the chain, above the DHW, and point it to the VI.dll file that came with the VI Suite. Now press “Show Editor” and you’ll be able to adjust the VI settings, even while music is playing. I strongly recommend you use the following settings:

Of the three lights on the right side of the panel the first (on/off) should obviously be on, movie mode should be off and LFE on. Of the 4 main sliders the two middle ones should be left where they are, in the middle at the default setting. I call this position 0. The rest of the settings range from -5 to 5. The two remaining sliders (top and bottom) should be slightly left of the -4 position in such a way that the top slider just covers the little white marker, and the bottom slider just barely shows the marker. Sliding these only a tiny bit to the left or right already creates a major shift in sound, so make sure they are placed like this:

[Attachment 7875: screenshot of the recommended VI slider positions]

UPDATE: Go to the end of page 4 for revised and more transparent settings!

There are a few things that might spoil it for you. I guess these DSP’s take up quite a bit of processing power, so I’m not sure which computers will be up to the task and which not. I use a Core Duo. The old VST bridge unfortunately is not a very stable program. It tends to crash. In my case it always displays an error message AFTER I have stopped listening and close Foobar, so that’s not much of a problem, but I DID have problems playing FLACs. I have now converted all my music to TAK, an alternative lossless format, which works just fine; MP3s and OGGs work as well. Certain loud pieces of music might clip after passing through the chain. Only in that case would I either use ReplayGain or set the amplification of DHW a bit lower (at 70%), because to my ears at least, there is a barely noticeable loss in sound quality when lowering the volume of the music digitally: not a lot, but enough.
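The clipping issue can be checked numerically: after the chain, count the samples at or above full scale and compute the attenuation needed to avoid clipping. This is my own sketch of the idea behind lowering the DHW amplification or using ReplayGain, not code from either tool.

```python
import numpy as np

def headroom_check(samples, threshold=1.0):
    """Return (number of clipped samples, gain in dB that would bring
    the peak exactly to full scale)."""
    peak = np.max(np.abs(samples))
    clipped = int(np.sum(np.abs(samples) >= threshold))
    gain_db = min(0.0, 20 * np.log10(1.0 / peak)) if peak > 0 else 0.0
    return clipped, gain_db

def apply_gain_db(samples, gain_db):
    """Digital attenuation: the same basic operation a ReplayGain-style
    volume adjustment performs on the sample values."""
    return samples * 10 ** (gain_db / 20)
```

Applying the reported gain once, before the chain's output stage, keeps the peaks just under full scale instead of letting them fold over into audible clipping.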

Well, that’s it. Thanks for your patience. I hope it has been worth it.
post #2 of 80
Thanks a lot for the very nice guide

I have it set up now. Some thoughts:

* I do feel that it changes the sound just a little bit. There seems to be a bit of bass boost as well. The HD600 seems to get a slight veil which I do not hear otherwise. It seems to give bass and lower mids a small boost.

* It will take some time to fully get used to the soundstage changes.

* After listening for a while with these settings, I tried switching back to standard. The standard feels VERY compressed now.

* I will definitely listen to it a lot more before I make my final conclusion.

* At this moment, I REALLY like the enhanced and much more neutral soundstage.

* I will try to use Foobar's EQ to reduce the slight bass boost.

* Vocals sound very real using this effect. The illusion that the singer is in the same room as me is great.

I will definitely listen to this much more. I do really like the much bigger soundstage, but I've yet to decide if the change in sound otherwise is worth it.

EDIT: "Porcupine Tree - Trains". ****, Wilson is in my room!
post #3 of 80
Pretty good guide... head and shoulders above BS2B at least
post #4 of 80
Another props here for the guide. It's quite excellent and informative.
post #5 of 80
I was using the HD25-1 when listening to the virtual barber shop and I was amazed.. How a good recording can sound SO real!
..WOW! From the first 30 seconds to the end.. with a grin from ear to ear!
post #6 of 80
Two questions:

1 Can I get this program for my Mac?

2 Does the program produce the desired effect just as well with in-ear headphones which bypass the outer ear as it does with circum- or supra-aural headphones which don't?
post #7 of 80
tim, there's Canz 3D. You need a plug-in host like Audio Hijack Pro. From there, it can tap into any application's audio stream.

It should work equally well for in-ear or open phones. It doesn't do 5.1 mix-downs, just simple two channel cross-feed.
post #8 of 80
Interesting, I'll have to try this. I generally use WinAmp on Windows and Amarok or mpd on Linux - I'll have to see if I can find a way to make it work for those scenarios.

On a side note, I was using bs2b on multiple machines with WinAmp until recently, when my virus scanners started reporting it as containing a Trojan downloader. I suspect it's a false positive, but you can't be too careful...
post #9 of 80

Thank you for the guide. I am trying it right now and concur with many of henmyr's observations. Bass seems a bit heavy.

Alison Krauss on "Forget About It" sounds like she is in a cave, sorta. I'll give it some more time.

Soundstage is definitely wider front to back, but seems like the music is suffering, maybe less detail. My system probably couldn't get much more resolving though.

I really want to try the Smyth system. Have you seen this thread:

Anyway, thanks. Truth-Peace

post #10 of 80
Originally Posted by blubliss View Post
I really want to try the Smyth system. Have you seen this thread:
I was going to look up that thread and link to it, but you saved me the effort. It was pretty impressive with surround sound material. I don't recall the demo running off a stereo source though, but presumably it could...
post #11 of 80
Originally Posted by Mazz View Post
... don't recall the demo running off a stereo source though - but presumably it could...
It did at the CanJam '08 demo ... we heard some 2-channel redbook CD's. Headphones sounded like speakers. Exactly.

Because Smyth measures your very own HRTF it will beat all others; you get a payback for the time you invest in getting measured. The beyer HeadZone too (using average HRTF's) will make you believe the vocalist is in the room with you, and the piano behind her. No doubt.

Both of these -- and other high end units -- use head tracking. How can the methods discussed in the OP get the spatial placement right without that?
post #12 of 80
Interesting, to say the least.
Oversampling uses more resources than both plugins combined, so it isn't that resource-intensive.

Initial impressions:
It feels as if I've stuck my head in a small room, with miniature people, and they're playing to me... it sounds pretty live, but I can't shake the feeling that they're playing in an invisible cube suspended in midair next to my head.

I guess the best way to put it is that they're playing to my forehead, not to me.

This makes live recordings sound like... live. You could probably determine where they put the mics for recording the live event, it is so live-like.

This is an initial impression, so keep that in mind.
post #13 of 80
Thread Starter 
Just came back from work. Thank you all for your compliments and feedback. I don't have much time right now, so just some brief reactions:

Originally Posted by Planar_head View Post
It feels as if I've stuck my head in a small room, with miniature people, and they're playing to me... it sounds pretty live, but I can't shake the feeling that they're playing in an invisible cube suspended in midair next to my head.
First, Dolby Headphone works with generalized HRTF, so it will work better for some and worse for others. Second, general room acoustics are used, and the reference room is indeed a small one. Purely based on the "natural hearing" state I mentioned in my OP, and depending on the quality of the headphones used, the reference room could, I guess, kind of shrink to that "small cube" you mention. Try DH3, the movie theatre setting, and see how you like that. The room will get bigger, but the sound will get more diffuse, I'm afraid. I guess the brain will still have to do some work to create the illusion of a large sound stage. I have noticed that with my set-up, too, there is a bit of brain re-programming involved to get used to it.

Originally Posted by blubliss View Post
Bass seems a bit heavy.
Try this: move the bottom slider ONE (and only one!) tick to the right, to the same position the top slider's in. This will lighten up the bass and sharpen the focus of the sound stage. BTW, glad you dug my name, dude.

Originally Posted by wavoman View Post
Because Smyth measures your very own HRTF it will beat all others; The beyer HeadZone too (using average HRTF's) will make you believe the vocalist is in the room with you, and the piano behind her. Both of these -- and other high end units -- use head tracking. How can the methods discussed in the OP get the spatial placement right without that?
As I live across the Atlantic, I didn't attend CanJam and have never heard either of the two systems that are mentioned in the Smyth thread. I can imagine however that at least the Smyth system sounds again several orders of magnitude better than my set-up. Of course there's a price tag involved, as opposed to Foobar DSP's, which are absolutely free.

Anyway, I think it's a great initiative. And here I was saying that we purist audiophile head-fiers don't present much of a market for companies that are into this kind of technology. It seems I was wrong there. It's actually the same as what Starkey, the owner of the VB's Cetera algorithm, used to offer their hearing aid customers: a customized DSP based on their own ears. The difference here is that it's not just the HRTF that is measured, but room acoustics and speaker characteristics as well, in one package deal, which in one way is an advantage and in another a disadvantage.

As for the head tracking, isn't that only relevant if you move your head while listening? Of course, it adds to the illusion of simulating a speaker system, and is an essential aspect for audio-visual applications, like computer gaming and home theatre, but.. in my opinion, it's not that important for plain stereo music listening. When I listen to music I might bob my head occasionally (up and down that is (YES), left and right (NOT)) but most of the time I sit like my avatar with my eyes closed in a state of peaceful bliss.

Anyway, I don't have the cash available for the Smyth system when it hits the market, and I think that's the case for the majority of us, so... until this technology becomes as widespread and cheap as USB sticks and MP3 players, the Dolby Headphone-based plugins will have to do, and for me they do it quite well, at least compared to the sound straight-up.

Well, that's my few (Euro) cents' worth. I love these smilies!
post #14 of 80
Thank you for the excellent write up, and the very clear instructions - certainly an interesting effect.

Assuming I got the process right, however, audibly the effect is rather lo-fi. Soundstage and image placement are pretty good, really good in terms of instruments and ambience. Detail, however, is absolutely throttled, and the vocal becomes as dry as a bone.

There's also a functional glitch - I found that 'fast forwarding' using the slide control caused the track to 'stick' (caused by the VST side of things it seems).

I'll continue testing the set-up, but results so far suggest that I actually prefer each plugin on its own, with the VST being the better of the two.
post #15 of 80
Originally Posted by Killahertz View Post
audibly the effect is rather lo-fi. Soundstage and image placement are pretty good, really good in terms of instruments and ambience. Detail, however, is absolutely throttled, and the vocal becomes as dry as a bone.
My exact thoughts when I went back to my original setup. My system is soooo detailed that I lose a lot with these DSPs.

Vocals sound like they do when you implement that SoundBlaster feature (SDS?) but not quite that bad. Just seems like a cave or chamber, not a room.

But this stuff is fun to play around with.
System used: Foobar 9.5.4>rme 9632>APL dac>ES-2>HE90