The Holy Grail of True Sound Stage (Cross-Feed: The Next-Generation)
Jan 25, 2009 at 7:36 PM Post #46 of 80
Quote:

Originally Posted by milkweg
You can thank us Brits for dominating most of the world in the last century for that.



British from the 19th century to ~1960 (see the Queen Victoria era), the USSR and the United States from the post-WWII end of isolationism to ~1980 (the USSR was really rotting from the inside from 1980 onwards), and the United States from 1980 to the present, to be historically correct


The Dutch were the major superpower of the world in the 17th century.
Anyway, very nice first post in Head-fi!
I personally use the Dolby Headphone 7.1 speaker setting included in the Asus Essence STX software.
Am I right in that any surround sound emulation software uses lossy algorithms?

e.g. Dolby Headphone with AC3 or am I off track here?
 
Jan 25, 2009 at 7:50 PM Post #47 of 80
Quote:

Originally Posted by chinesekiwi
Am I right in that any surround sound emulation software uses lossy algorithms?


That's why I ask for DVD-A. At least, it would be less lossy, if not totally lossless...
 
Jan 25, 2009 at 9:23 PM Post #48 of 80
Quote:

Originally Posted by chinesekiwi
British from the 19th century to ~1960 (see the Queen Victoria era), the USSR and the United States from the post-WWII end of isolationism to ~1980 (the USSR was really rotting from the inside from 1980 onwards), and the United States from 1980 to the present, to be historically correct


The Dutch were the major superpower of the world in the 17th century.
Anyway, very nice first post in Head-fi!



Seems to me you left out the 1800s: the Battle of Waterloo (Sunday 18 June 1815) and the Battle of Trafalgar (21 October 1805), a sea battle fought between the British Royal Navy and the combined fleets of the French and Spanish navies during the War of the Third Coalition (August-December 1805) of the Napoleonic Wars (1803-1815). Trafalgar was the most decisive British victory of the war and a pivotal naval battle of the 19th century.
 
Jan 25, 2009 at 10:38 PM Post #50 of 80
I know that. Just wanted to make sure you knew we owned you all that time. In the 1600s we pwnd the Spanish galleons with our fleet of privateers.

And why did you think that was my first post to headfi anyway?
 
Jun 1, 2009 at 8:34 PM Post #51 of 80
Quote:

Originally Posted by satshanti
Part 2: All the World’s a Sound Stage?

I guess this chapter is not going to be news to most of you, but I’ll include it anyway to paint a complete picture and as preparation for what comes after. From the very beginning, music recording has been focused on reproduction through loudspeakers rather than headphones. That’s why almost all recordings to date are “stereophonic” recordings. In modern sound studios stereo mixes are created from multiple mono tracks, but in the old days a stereo recording was made by placing two microphones a certain distance apart, and realistic playback of a 3-dimensional “stereo-image” was possible through two speakers similarly placed a certain distance apart, a phenomenon all of us know very well. This type of recording was and still is meant to be heard through a pair of loudspeakers in order to unfold and re-create its inherent 3D image or sound stage. If heard through headphones, however, each channel that’s supposed to be heard by both the left and the right ear is instead heard only by one ear, causing the stereo image to collapse into a flat line between both ears.



I'd like to make a few observations and add my 2 cents worth. I'm afraid that quite a bit of the paragraph I've quoted is not accurate.

Stereophonic recording was invented in the 1940s; before that, all recordings were monophonic. However, stereophonic recordings didn't start to become really popular with the public until the late 1960s. In the 1970s surround started becoming common in film sound, although experiments were carried out as early as the late 1930s by Bell Labs and Walt Disney (Fantasound). The system used in the '70s was Dolby Stereo, which used 4 channels (LCRS) matrixed down to stereo (Lt + Rt); this was eventually modified, re-badged in the 1980s and marketed to the public as Dolby Pro-Logic. It worked well, as it was backwards compatible with stereo equipment, while the addition of a decoder allowed for the full 4 channels. Unfortunately, though, Pro-Logic has quite a few limitations, and besides, cinema sound had already moved on to discrete multi-channel surround.
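The 4:2 matrixing described above can be sketched numerically. This is a deliberately simplified illustration, not Dolby's actual encoder: the real system band-limits the surround channel and applies ±90° phase shifts before matrixing, which is reduced here to plain anti-phase.

```python
import numpy as np

def matrix_encode(L, C, R, S):
    """Toy LCRS -> Lt/Rt matrix encode, loosely in the spirit of Dolby
    Stereo. Centre and surround are mixed in 3 dB down; the real encoder's
    band-limiting and +/-90 degree phase shifts on the surround channel
    are simplified here to plain anti-phase."""
    g = 1 / np.sqrt(2)  # -3 dB
    Lt = L + g * C - g * S
    Rt = R + g * C + g * S
    return Lt, Rt

def matrix_decode(Lt, Rt):
    """Passive sum-and-difference decode: centre is the sum of the two
    transmitted channels, surround the difference (the starting point of
    a basic Pro-Logic style decoder)."""
    C = (Lt + Rt) / np.sqrt(2)
    S = (Rt - Lt) / np.sqrt(2)
    return C, S
```

With this toy matrix a centre-only signal lands identically in both transmitted channels, while a surround-only signal lands out of phase, which is how the decoder can steer them apart again.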

You also mentioned, "In modern sound studios stereo mixes are created from multiple mono tracks, but in the old days a stereo recording was made by placing two microphones a certain distance apart, and realistic playback of a 3-dimensional “stereo-image” was possible through two speakers." You are referring to stereo mic'ing techniques, MS pairs, spaced (AB) pairs and coincident (XY) pairs being the main ones. These stereo mic'ing techniques are still widely and commonly employed today, so your statement was NOT true. There is some modern music which uses purely mono sources to create the stereo soundfield, but most recordings use a combination of mic'ing techniques, which generally include stereo mic'ing techniques. This is true of the vast majority of stereo recordings from the late 1960s to the present day. A disadvantage of all stereo mic'ing techniques (except an MS pair) is phase cancellation due to timing differences between the two mics.
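The point about the MS pair being the exception follows from how it is decoded: both capsules are coincident, so there is no timing difference between channels, and left/right are recovered by sum and difference. A minimal sketch (the 1/√2 scaling is just one common normalisation choice, not a fixed convention):

```python
import numpy as np

def ms_decode(mid, side):
    """Sum-and-difference decode of a coincident MS pair into L/R.
    Because both capsules occupy (nearly) the same point in space, there
    is no inter-channel time difference, and folding the result back to
    mono cancels the side signal exactly instead of comb-filtering."""
    left = (mid + side) / np.sqrt(2)
    right = (mid - side) / np.sqrt(2)
    return left, right
```

Summing the decoded left and right channels returns a pure (scaled) mid signal, which is why MS recordings stay mono-compatible.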

With this in mind, it's therefore possible that a stereo recording listened to through cans may actually sound better than if listened to through speakers. In general though it's probably just as likely that listening through cans will make it sound worse.

I would add that downmixing a 5.1 mix to stereo and then simulating surround is never going to be as effective as listening to a 5.1 mix on a surround speaker system. Those algorithms which upmix from stereo to 5.1 are quite poor and the stereo mix cannot be completely deconstructed so that the individual elements can be accurately placed within a surround soundfield. This upmixing is always going to be a poor solution compared to a mix specifically created in 5.1. This isn't to say that some consumers won't actually like or even prefer the sound of upmixed playback.

For the true purist, by far the best listening experience is going to be listening to a stereo mix in stereo and listening to a 5.1 mix on a 5.1 surround speaker system. If you are not so bothered about purism or neutrality then it's possible you may prefer the effect of surround emulators.

G
 
Jun 2, 2009 at 8:59 AM Post #52 of 80
Gregorio, I always enjoy your posts, because you're factual and write from your own professional experience. Thanks for clearing up those issues from the perspective of studio recording and engineering. I'm just a layman trying to simplify complicated things, and some of my statements have obviously not been 100% accurate. I also agree with this statement to a certain extent:
Quote:

Originally Posted by gregorio
I would add that downmixing a 5.1 mix to stereo and then simulating surround is never going to be as effective as listening to a 5.1 mix on a surround speaker system. Those algorithms which upmix from stereo to 5.1 are quite poor and the stereo mix cannot be completely deconstructed so that the individual elements can be accurately placed within a surround soundfield. This upmixing is always going to be a poor solution compared to a mix specifically created in 5.1. This isn't to say that some consumers won't actually like or even prefer the sound of upmixed playback.

For the true purist, by far the best listening experience is going to be listening to a stereo mix in stereo and listening to a 5.1 mix on a 5.1 surround speaker system. If you are not so bothered about purism or neutrality then it's possible you may prefer the effect of surround emulators.
G



I agree that stereo recordings are made for the purpose of listening through a stereo speaker system, but my claim is that a set of cans is not a stereo speaker system. Headphones should be used for (almost non-existent) binaural recordings. My efforts have been to somehow re-create binaural recordings out of a stereo mix, and this just happens to be possible via 5.1 surround.

It is my claim that by using the VI settings of my latest post, hardly any information present in the stereo mix is lost and hardly any additional distortion is introduced; instead, through the smart use of a complex mixture of cross-feed and phase shifts, the spatial cues that are available in the stereo mix are re-created binaurally. VI uses ambisonics for that, and Dolby Headphone is a really good and precise algorithm. It works with a generic HRTF only, though, so I assume the algorithm might have a different effect on different people with different ear sizes.

I'll give you an example of how stereo spatial information is transferred into a binaural space. Imagine a vocal track. Now, I'm not a sound engineer, so I don't know what all these effects are called, but I imagine that a recording studio is not a bathroom, so any reverb, delay and/or echo effects one hears on a vocal track are added "artificially" via a DSP afterwards. This effect is then merely "placed" somewhere in the stereo mix. VI recognizes this and places it in the virtual space where it should be, namely all around you, as if you were in a room with the singer in the middle. And this happens for all instruments. With most recordings this whole process creates a very realistic and natural-sounding sound stage, similar to a real binaural recording, meaning a few things are placed even behind you, which is something to get used to. The process does not add delay AFAIK (although through VI one can enhance certain effects).

Anyway, I personally prefer to listen to all of my music like this, and I do try without it once in a while, but quickly put it back on.
 
Jun 2, 2009 at 12:57 PM Post #53 of 80
Quote:

Originally Posted by satshanti
It is my claim that by using the VI settings of my latest post, hardly any information present in the stereo mix is lost and hardly any additional distortion is introduced; instead, through the smart use of a complex mixture of cross-feed and phase shifts, the spatial cues that are available in the stereo mix are re-created binaurally. VI uses ambisonics for that, and Dolby Headphone is a really good and precise algorithm. It works with a generic HRTF only, though, so I assume the algorithm might have a different effect on different people with different ear sizes.

I'll give you an example of how stereo spatial information is transferred into a binaural space. Imagine a vocal track. Now, I'm not a sound engineer, so I don't know what all these effects are called, but I imagine that a recording studio is not a bathroom, so any reverb, delay and/or echo effects one hears on a vocal track are added "artificially" via a DSP afterwards. This effect is then merely "placed" somewhere in the stereo mix. VI recognizes this and places it in the virtual space where it should be, namely all around you, as if you were in a room with the singer in the middle. And this happens for all instruments. With most recordings this whole process creates a very realistic and natural-sounding sound stage, similar to a real binaural recording, meaning a few things are placed even behind you, which is something to get used to. The process does not add delay AFAIK (although through VI one can enhance certain effects).

Anyway, I personally prefer to listen to all of my music like this, and I do try without it once in a while, but quickly put it back on.



Mmmm, it's not quite that simple (never is!). Using your example of a vocal track: depending on the type of vocal, it may be recorded with the real reverb of a venue or studio, as good quality commercial studios will have well designed acoustics in the live room. In this case an array of mics (including stereo mic'ing) may be used to capture both the direct and reflected sound. Positioning is critical though, as it's next to impossible to totally eliminate phase artifacts.

It's just as likely, though, that the vocal in popular music has been recorded in a dry acoustic and the reverb then added artificially. Most reverb processors are quite sophisticated. Let's say you pan the vocal 75% to the right speaker; you can also send the vocal to a reverb unit panned 75% to the right. The stereo reverb unit will then calculate reverb within this stereo space: in our example the early reflections (one of the many parameters of digital reverb) would be generated first in the right output of the reverb unit, before the left output. This maintains the perception that the relative positioning of the vocal track in the stereo soundfield is reinforced by the acoustic information produced by the reverb unit.

Reverb units (outboard or plugin software) are complicated bits of kit and very DSP heavy, as they ideally need to be able to process the audio sent to them as many as 3,000 times a second, and each one of these 3,000 reflections needs processing variations to introduce a certain amount of randomisation and avoid correlation (which causes nasty artifacts, like ring modulation). The quality and realism of reverb units vary enormously and you can tell this from the price: reverbs vary in price from free downloads up to about US$14,000 for a top of the line model.
Although a top of the line model will be more complex still, as it will calculate the timings and positions of the reflections for a full surround soundfield (as well as a stereo soundfield when set to stereo mode), these units tend to have a whole array of DSP chips to handle the complex algorithms. What is more, it's not unusual to use two or even three different reverb units in a single stereo mix (to create varying depths).
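The "75% to the right" routing described above can be illustrated with a constant-power pan law. This is a generic textbook sketch, not the law of any particular console or reverb unit; real desks offer several selectable pan laws, and the reverb-send logic is far more elaborate than this.

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Constant-power pan of a mono signal: pan in [-1, 1], where -1 is
    hard left and +1 is hard right. Total power (l**2 + r**2) stays the
    same wherever the source is placed, so a panned vocal and its
    matching reverb send keep a consistent level across the image."""
    theta = (pan + 1.0) * np.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono
```

Panning both the dry vocal and its reverb send with the same value (e.g. `pan=0.75`) is what keeps the reflections anchored to the same spot as the voice.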

As a general rule though, naturally recorded reverb usually (though not always) gives the best results, although it requires a lot more time, effort and skill, and an expensively designed live room. It's not uncommon to mix both live acoustic reverb and digital reverb.

It can get even more complicated than I have explained, but I hope I've demonstrated that the process is far more complex than just adding an effect and then placing that effect somewhere. The interaction of reflections from digital and natural reverb in your average mix can be incredibly complex and very unpredictable, and it often requires considerable skill to create depth while not losing clarity or separation. There is no way to calculate the effect of all these interacting reflections; it just requires experience, taste and a good set of ears (and obviously a good monitoring environment). Although reverb is based on science, its use and application are an art. Also, although a stereo mix can be analysed, it can't be separated out into its original constituents and re-mixed by a DSP process, taking into account all the reverb interactions.

All the above leads me to feel very dubious that a stereo mix can be re-processed adequately with the added timing (phase) information which will be introduced through the use of crossfeeding and compensations for HRTF. There is just too much unpredictability and complexity for there to be any way to guarantee that this added phase information is not going to interact in some way with the original mix (phase cancellations and/or inaccurate positioning). This unpredictability is why I stated that for the purist it's probably not a good idea. Although, depending on the mix, the effect of different playback equipment and each individual's subjectivity and physical attributes, some people may prefer the effect and some may not. Some people may prefer it only for certain mixes or genres and on certain equipment.
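For readers wondering where that added timing information comes from, here is crossfeed in its most minimal form: a delay-and-attenuate sketch with illustrative numbers, not the algorithm of any product mentioned in this thread (real implementations also low-pass the crossfed signal to mimic head shadowing). The opposite-channel feed arrives a few hundred microseconds late, and that delay is precisely the new phase content that can interact with timing cues already baked into the mix.

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, delay_us=300, atten_db=-6.0):
    """Minimal delay-and-attenuate crossfeed sketch. Each channel is fed
    to the opposite ear slightly later and quieter, roughly mimicking the
    head's interaural time and level differences. Values are illustrative;
    a realistic crossfeed would also low-pass the crossfed signal."""
    d = max(1, int(round(fs * delay_us / 1e6)))  # interaural delay in samples
    g = 10.0 ** (atten_db / 20.0)                # attenuation as linear gain

    def delayed(x):
        # Shift x later by d samples, zero-padding the front (same length out)
        return np.concatenate([np.zeros(d), x[:-d]])

    out_left = left + g * delayed(right)
    out_right = right + g * delayed(left)
    return out_left, out_right
```

Feeding in a hard-panned impulse shows the effect directly: the opposite output channel gets a quieter copy a handful of samples later, which is exactly the sort of extra inter-channel timing a purist might object to.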

I am used to critically analysing mixes in my head and listening for the use of time-based effects (delay, echo, phase and reverb), so I personally have never heard a consumer crossfeed or DSP processing system which I liked or felt comfortable with. I think in general most audio engineers and producers would feel the same way. It does concern me that a consumer is adding processing to a mix which may change or even nullify some of the artistic endeavour and hard work I've put into creating it. I like to think, though, that I've come to terms with the fact that once a consumer has bought my product it's theirs to do with as they please (except ripping it off, of course) and that it's no longer "my baby". I also realise that listening to music with cans or IEMs is already a compromise, so I can appreciate why efforts are being made to modify the output of cans to emulate a more realistic listening experience.

Sorry this is such a long post, it's not easy to explain without being able to actually demonstrate with a mix and production system in person. Hopefully though it's all useful background information for those of you interested in the technicalities of how music and audio products are created.

G
 
Jun 2, 2009 at 5:28 PM Post #54 of 80
I have been using the setup posted by OP for past year, though I use the plugins as VST in Plogue Bidule.

I used to like the setup when I used the onboard audio. But after switching to the Xonar Essence, I have stopped using this, since I feel that there is some loss of detail and I prefer the unaltered sound now.

I really like the VI stereo to 5.1 VST and IMHO this is the best stereo upmixer (software or hardware) that I have seen. I still use it on my 5.1 HT setup for listening to stereo material.
 
Jun 3, 2009 at 1:40 PM Post #55 of 80
Gregorio, your explanation does make sense, even to a layperson. As I said in my OP, I'm somewhat of a purist myself and a perfectionist to boot, so I know where you're coming from and how you must perceive sound recordings.

My problem has always been that I just couldn't really get used to the headphone sound, which with most recordings just sounds like a very limited and flat line between my ears, and after hearing the Virtual Barbershop, I just knew what kind of sound stage was possible out of my cans. Have you ever heard some really well binaurally recorded music? I agree that listening to well-recorded/engineered stereophonic music through a good stereo speaker system is the best way to go, but since neither my wife, nor my neighbours agree with me on that, I'm "stuck" with headphones and the next best thing for me is the setup I use, better than listening directly to the "naked" stereo output.

And to respond to Gurubhai, it may happen that as my system goes through its upgrades, I might also find that at some point I can do without. Who knows? My system is not bad as it is, and I have gone through some upgrades already without changing back. And indeed Steve's VI Suite is excellent. On some recordings he just hits the nail smack on the head with his algorithms. He says he uses second order ambisonics, which is purportedly able to extract depth in a very natural way. I have no idea how he does it, but it sounds great to me.
 
Jul 15, 2009 at 9:18 AM Post #56 of 80
This is a very informative and interesting thread, and the effect of the plugins was very intriguing.

The soundstage really widened and opened up, but I felt that there was a significant trade-off in sound quality - particularly that the sound became bloated and muffled in such a way that detail and clarity were lost in the process.

I've been trying this (as well as several crossfeed plugins) over the course of the past several days or so and eventually came back to my default setup that doesn't use any of them.
 
Jul 15, 2009 at 12:05 PM Post #57 of 80
I do most of my home headphone listening from my Xonar D2 with Dolby Headphone in 2-channel mode. I tried Chungalin's Dolby Headphone Wrapper (using PowerDVD's Dolby Headphone version with the V.I. Stereo to 5.1 Converter VST Plugin Suite (VI)) and there's no contest. There is a substantial loss of detail and realism using the wrapper compared to the Xonar version.

Additionally, IMO, Dolby Headphone should be set to 2-channel mode for the vast majority of music listening. It's in this mode that it most closely simulates my ideal listening setup: stereo speakers. I wouldn't choose to listen to most music on a 5.1 speaker system over stereo, and it's the same when using Dolby Headphone. With well-recorded music this already creates a 3D soundstage.

Finally, the headphones used must be well controlled across the tonal range for Dolby Headphone to sound realistic. I've found most headphones can cope with the mids, but treble needs a bit of sparkle to give the sound a bit of air. It's bass where I've most often found headphones to be lacking. A lot of headphones just aren't controlled enough to really lift the bass out of your head. My Goldring NS1000s manage it, as long as they're amped adequately. I'm sometimes amazed at what they pull off and could sometimes swear I can feel the bass in front of me. No idea what sort of psychoacoustic trickery Dolby Headphone does to achieve this. HD580s and HD600s also managed it with aplomb. My Goldring DR150s needed to have the tape removal mod done before they could get close. Even then, the bass still isn't quite clean enough to really shine.

I now find it very difficult to go back to headphone music listening without Dolby Headphone though. I've taken to pre-recording some mp3s with it using the Xonar. Unfortunately, that sometimes introduces jitter. I'd love a PMP to include Dolby Headphone in 2-channel mode.

Edit: another quick observation. For recordings that capture something of the environmental acoustics, DH1 works very well. On studio recordings it can place you strangely close to the lead vocalist or instruments though. Where a 3D soundstage is lacking in the original recording, I find DH3 (which simulates listening room acoustics) often introduces a pleasant ambience. The additional simulated space of DH3 gives a listening experience akin to sitting in front of a stage on which the performers are playing, rather than being very near the lead vocalist / instrumentalists.

Edit: If the Foobar Dolby Headphone wrapper was adequate for music listening, I would probably be using an Auzentech Prelude. I bought one to replace my D2, as I was missing having EAX 3, 4 and 5 in a number of games. After A/Bing the 2 cards, including pitting the Prelude with DH wrapper against the Xonar with native DH, I decided that I couldn't part with the latter.
 
Jul 15, 2009 at 2:59 PM Post #58 of 80
ear8dmg, thanks for sharing your experience. I have gone through a lot of upgrades to my system and all the time I never bothered to re-test the suitability of my set-up, which I will do now. I did notice that "native" DH, like with watching a movie and enjoying the end credits with a good soundtrack, sounds much more natural than my VI-based setup. It must be possible to just take VI out of the chain and feed the wrapper with the stereo-signal to the two front speakers and an empty signal to the rest. I'm wondering if the difference you noticed is caused by VI or by the wrapper, because the DH algorithm is basically the same. I'll start experimenting a bit myself, when I have some time.

For some reason I never liked DH2, as I have the feeling the timings are off. This may be caused by VI though, and not by DH. I like both DH1 and DH3. Setting 1 is clearer, more direct, and most true to the original signal, but setting 3 is wonderfully spacious. I love this for watching movies, but it does "smear" the sound a bit, which I don't like for all music.

Edit: It's been a few days now, and I have tried removing VI out of the DSP chain and I've now kept that setting. After a whole series of upgrades, my system has obviously moved to a stage where detail and tonal accuracy are contributing more to the overall sense of realism than spatiality.

With VI the 3D image is more defined, more "binaural", with some great spatial cues and subtle echoes and reverbs that add to the spatial realism and soundstage of a recording. Without VI in the chain, the Dolby wrapper seems to just take the stereo signal, accept it as the two front speakers and ignore the additional 3.1 channels. This means that the sound stage indeed moves a bit to the front, as with regular speakers, and is a bit smeared and diffuse compared to the one with VI, but in return the level of detail and tonal accuracy is higher across the whole frequency spectrum - not by a whole lot, but substantially enough for me to warrant the permanent removal of VI from the chain.

By the way, I have also retried listening to the "naked" signal, which I still abhorred, and while I was at it, I've compared my new "simplified" DH setup with some of the more "orthodox" cross-feed DSPs, like Bauer's and the HDPHX VST plugin, which were still just a bit too subtle for my taste. So thanks again, ear8dmg, for your post. It has improved my listening experience.
 
Aug 4, 2009 at 7:03 AM Post #60 of 80
Quote:

Originally Posted by cyberspyder
How would you use the Dolby Headphone plugin with a movie player?


Well, in the latest versions of most software DVD players, like Power DVD and WinDVD and some others, Dolby Headphone is integrated, so it's just a matter of switching it on and off.

The plugin is just for Foobar, although I know a similar one exists on VST basis, so if you use a movie player that doesn't have Dolby Headphone integrated, I don't know of a way to use it.
 
