A layman multimedia guide to Immersive Sound for the technically minded (Immersive Audio and Holophony)
Feb 1, 2019 at 12:47 AM Post #196 of 220
hehe. I remember having a blast seeing "JMJ" live as a kid, and looking at the lasers and other light stuff the way Hillary Clinton looks at balloons. but I never really spent much time listening to his albums. I liked the showman, the musician, not so much.
 
Feb 1, 2019 at 2:15 AM Post #197 of 220
It's a heck of a lot of fun if you listen to it as a cartoon. A lot of that 70s goofy synth stuff and prog rock is like that. It takes itself so seriously, you have to laugh when it sounds more like a clown show than the "composition of cultural import" it thinks it is. It's like Margaret Dumont in a Marx Bros movie.
 
Mar 12, 2019 at 3:20 AM Post #198 of 220
I have a question for those who know something about matrixed Dolby Stereo surround... I'm reading about albums that were mixed in Dolby Surround losing their rear channels when the album is remastered. The relationship of the center and the mains remains the same, but the rear info is rolled into the mains. Is there a technical reason behind that?
 
Mar 13, 2019 at 3:00 AM Post #199 of 220
Dolby Surround was the consumer version of (the professional/theatrical) Dolby Stereo. It was essentially a 4 channel format (LCRS) matrixed down to 2 channels. The centre channel was simply an extraction of the signal which existed equally in the Left and Right channels, while the band-limited (mono) Surround channel was encoded (equally) into the Left and Right channels but 90deg out of phase. The great advantage of Dolby Stereo/Surround was that the matrixed Left and Right channels would play as normal stereo if the consumer didn't have a Dolby Surround decoder. For this reason it was a very long-lived format: in cinema it continued to be used until the death of 35mm film, and it is still one of the required deliverables for some TV networks today, even though it's not really used any more.
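If it helps to see the idea in code, here's a rough, idealised sketch of that 4:2 matrix and a passive decode. To be clear, this is not Dolby's actual implementation - the real chain adds Dolby B noise reduction on the surround, band-limiting, a surround delay and active steering - it just shows how a centre and a phase-shifted surround can ride along inside two ordinary stereo channels:

```python
import numpy as np
from scipy.signal import hilbert

def encode_ltrt(L, R, C, S):
    """Idealised LCRS -> LtRt matrix encode: C and S are attenuated 3 dB,
    and S is given a broadband ~90 degree phase shift (via a Hilbert transform)
    with opposite sign in each of the two matrixed channels."""
    g = 1 / np.sqrt(2)                 # -3 dB
    S90 = np.imag(hilbert(S))          # ~90 deg phase-shifted copy of the surround
    Lt = L + g * C + g * S90
    Rt = R + g * C - g * S90
    return Lt, Rt

def decode_passive(Lt, Rt):
    """Passive (non-steered) decode: the sum recovers the centre, the difference
    recovers the surround. Real decoders also band-limit and delay the surround."""
    return 0.5 * (Lt + Rt), 0.5 * (Lt - Rt)
```

Play LtRt on a plain stereo system and it's just stereo; feed it to a decoder and the in-phase and out-of-phase components fall back out - which is also why anything that disturbs the phase relationship disturbs the decode.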

I'm not quite sure what you mean when you say that remastered albums lost their Surround channel, that it was rolled into the mains. The distribution format of Dolby Surround (later Dolby Pro-Logic) is just 2 channels (LtRt), so the Surround channel is always rolled into the Left and Right channels. Do you mean that a Dolby Surround album was remastered and distributed in 5.1 and there was no surround channel?

G
 
Mar 13, 2019 at 3:14 AM Post #200 of 220
When the remastered stereo is decoded, the center, left and right are well separated, but the rears are attenuated drastically. Would applying a digital reverb during sweetening obliterate the 90-degree phase relationship?

I think what happened is that they pulled a matrixed Dolby Surround encoded 2-channel master and then remastered it, and some of the matrixed channels survived the remastering process but the rears didn't. I've heard the same with some Tomita albums that were originally mixed in Dolby Surround but were subsequently remastered and re-released with the same problem.
 
Mar 13, 2019 at 2:37 PM Post #201 of 220
I discovered something today that I’ve been looking for for years... Back about ten or fifteen years ago, Nimbus records put out a line of matrixed surround CDs in an odd format called Ambisonics. One of them was a disc of Caruso. They got a perfectly maintained holy grail phonograph with a huge horn and put it on the stage of an acoustically perfect theater and recorded it from an ideal spot in the audience.

78 collectors who heard it said it was the best reproduction of Caruso’s voice they had ever heard. But Ambisonics as a format was a flop, and the CDs were discontinued. I just discovered that the Nimbus Caruso was later released on discrete 4.0 DVD-A. (Another extinct format, but I can play that one.) I just ordered a copy and I’m excited to hear it. I’ve never heard an electrical transcription of Caruso that holds a candle to the way they sound on an acoustic phonograph. And with real concert hall ambience, it should be wonderful.
 
Mar 15, 2019 at 8:27 AM Post #202 of 220
When the remastered stereo is decoded, the center, left and right are well separated, but the rears are attenuated drastically. Would applying a digital reverb during sweetening obliterate the 90-degree phase relationship?

I think what happened is that they pulled a matrixed Dolby Surround encoded 2-channel master and then remastered it, and some of the matrixed channels survived the remastering process but the rears didn't. I've heard the same with some Tomita albums that were originally mixed in Dolby Surround but were subsequently remastered and re-released with the same problem.

One of the annoyances of creating a Dolby Stereo/Pro-Logic mix is the difficulty of getting things to go where you want them. Sounds tend to "snap" to a particular speaker, so for example a sound panned to a position between, say, the centre and left speakers will tend to snap exclusively to either the left or the centre speaker rather than stay between them. And, as the surround channel depends on relative phase and as multi-channel/tracked recordings are never perfectly in phase, sometimes you could pan a sound to the surround but it would actually come out of a front channel. Or you could get a sound nicely positioned, then apply (or change) an EQ and have it jump somewhere else, because EQ (and most processing) also affects phase and can push the signal over/under the phase threshold for routing to the surround channel.

Reverb is of course a series of closely spaced echoes/reflections which by definition are later in time than the original dry signal. So depending on the frequency and phase coherency of the original signal and the timing of the reflections in the reverb, it could result in material which is interpreted by the decoder as 90deg out of phase and therefore repositioned. The result might work well, or the original signal (or parts of it), or the reverb itself (or parts of it), might end up being panned/routed inappropriately. There's really no way of predicting what will happen on any particular mix without trying it. For this reason, the only way of creating a Dolby Stereo/Pro-Logic mix was to have an encoder and decoder in the b-chain (monitoring chain): one would mix to 4 channels (LCRS), output those 4 channels to the Dolby Encoder, take the two (LtRt) channels output from the encoder, feed them into the Dolby Decoder and then drive the monitors/speakers from the 4 outputs of the Decoder.
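To make the "phase threshold" point a bit more concrete, here's a toy dominance metric in the spirit of (but emphatically not identical to) Pro-Logic's steering logic: it simply compares the energy of the sum (front/centre) against the difference (surround) of the two matrixed channels. The little demo shows how mangling the phase of one leg - which EQ, reverb or other processing can effectively do - turns an unambiguous centre image into something the decoder can no longer confidently keep out of the surround:

```python
import numpy as np
from scipy.signal import hilbert

def front_back_dominance(Lt, Rt, frame=1024):
    """Per frame, compare 'sum' (front/centre) energy against 'difference'
    (surround) energy, in dB. Strongly positive = front, around 0 dB = ambiguous,
    strongly negative = surround. A crude stand-in for a steering decoder."""
    n = (len(Lt) // frame) * frame
    Lt, Rt = Lt[:n].reshape(-1, frame), Rt[:n].reshape(-1, frame)
    e_sum = np.sum((Lt + Rt) ** 2, axis=1)
    e_diff = np.sum((Lt - Rt) ** 2, axis=1)
    return 10 * np.log10((e_sum + 1e-12) / (e_diff + 1e-12))

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)                 # a tone panned dead centre (Lt == Rt)
print(front_back_dominance(tone, tone).mean())     # hugely positive -> stays front/centre
shifted = np.imag(hilbert(tone))                   # ~90 deg phase shift on one leg only
print(front_back_dominance(shifted, tone).mean())  # ~0 dB -> energy now leaks to the surround
```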

There are a number of reasons why the rears might appear drastically attenuated. It could be some processing in your AVR (EQ or something) affecting the phase and moving some of the surround signal to the fronts. It could be a routing issue, with the mono surround signal being split between your two rear speakers, or something else entirely. It could also be an error in the mixing/mastering, although as mixing and mastering virtually always occur in two different studios, it's unlikely (though not impossible) that such an error would have slipped through both without being noticed.

G
 
Mar 15, 2019 at 12:34 PM Post #203 of 220
I think the reason the rears aren't working is the processing done in mastering. Reviews say that the Dolby output is messed up, and they even took the Dolby logo off the cover of the album. The strange thing is that the album was mixed for Dolby originally and never had a non-surround release - the master is a four-channel mix, and there never was a two-channel mix that wasn't Dolby encoded. That means they had to have deliberately done something that messed up the Dolby encoding. There are two different albums I've heard of where this has happened. Perhaps they didn't want to pay the license for the Dolby logo on the cover any more. It's strange that the center channel would continue to work perfectly while the rears disappear entirely.

There are dozens of examples I know of where movies were originally released in Dolby Stereo and the matrixed surround is still decodable off the blu-ray... even when the disc is in DTS stereo. Matrixed surround seems to be a hit or miss thing on home video.
 
Jun 28, 2019 at 8:33 AM Post #204 of 220
Aural ID Is Now Available
https://auralid.genelec.com/
How does it work?
After you upload a 360 degree video of your head and shoulder region from your mobile phone camera, Aural ID builds an accurate and detailed 3D model scaled to exactly the correct dimensions of your head and upper torso. From this, your personal HRTF is formed and delivered to you as an internationally recognised SOFA file format, which supports 44.1, 48 and 96 kHz sample rates and contains data for both ears in 836 different orientations.

For more in-depth information, view the Aural ID User Manual.

The price of your personal Aural ID is 500 € + VAT.
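For anyone wondering what you'd actually do with the file once it arrives: a SOFA file is essentially a table of HRIR pairs indexed by direction, so personalised binaural rendering boils down to picking the pair nearest the direction you want and convolving. A minimal sketch, assuming the usual SimpleFreeFieldHRIR layout and Python's netCDF4/scipy packages (no interpolation, no headphone compensation, and the signal's sample rate has to match the file's):

```python
import numpy as np
from netCDF4 import Dataset
from scipy.signal import fftconvolve

def render_binaural(mono, sofa_path, az_deg, el_deg):
    """Convolve a mono signal with the HRIR pair nearest the requested direction,
    read from a SimpleFreeFieldHRIR-convention SOFA file."""
    sofa = Dataset(sofa_path)
    hrirs = np.asarray(sofa.variables['Data.IR'][:])       # [measurement, ear, sample]
    pos = np.asarray(sofa.variables['SourcePosition'][:])  # [measurement, (azimuth, elevation, distance)]
    sofa.close()
    # crude nearest-neighbour direction lookup (ignores azimuth wrap-around)
    idx = np.argmin((pos[:, 0] - az_deg) ** 2 + (pos[:, 1] - el_deg) ** 2)
    left = fftconvolve(mono, hrirs[idx, 0])
    right = fftconvolve(mono, hrirs[idx, 1])
    return np.stack([left, right], axis=1)
```

Whether the 836 orientations Genelec computes actually fit your ears as well as an acoustic measurement would is, of course, the open question.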

Are you planning to order the processing of your HRTF? If yes, please start an impressions thread!

The requirements for the video and measurement photographs seem very specific.

I would like to know an indicator of accuracy. For instance, the deviation between an HRTF measured in an anechoic chamber and the HRTF calculated by Genelec's algorithm.

Thank you for the heads up!
 
Jun 28, 2019 at 8:46 AM Post #205 of 220
Are you planning to order the processing of your HRTF? If yes, please start an impressions thread!

The requirements for the video and measurement photographs seem very specific.

I would like to know an indicator of accuracy. For instance, the deviation between an HRTF measured in an anechoic chamber and the HRTF calculated by Genelec's algorithm.

Thank you for the heads up!
saw that post too and can't hide that it's intriguing. now 500 bucks is a lot for:
-something that may or may not come close to an acoustic measurement (theoretically it could even be better, but in practice we're not there yet, based on AES garden parties and other Atmos conventions for 3D goggles)
-no obvious way to calibrate the headphone's frequency response (I've done that a few times using a speaker right in front of me and trying to match the amplitude of tones, it's a real PITA!!)
-we're basically ending up with a fancy crossfeed unless we find a headtracking solution that will accept those files instead of using a factory HRTF simulation.

if the last 2 can be addressed easily and I don't have to purchase a $2000 headphone because it was their reference, then I would probably go for it.
 
Jun 28, 2019 at 8:53 AM Post #206 of 220
saw that post too and can't hide that it's intriguing. now 500 bucks is a lot for:
-something that may or may not come close to an acoustic measurement (theoretically it could even be better, but in practice we're not there yet, based on AES garden parties and other Atmos conventions for 3D goggles)
-no obvious way to calibrate the headphone's frequency response (I've done that a few times using a speaker right in front of me and trying to match the amplitude of tones, it's a real PITA!!)
-we're basically ending up with a fancy crossfeed unless we find a headtracking solution that will accept those files instead of using a factory HRTF simulation.

if the last 2 can be addressed easily and I don't have to purchase a $2000 headphone because it was their reference, then I would probably go for it.

Are you familiar with the plugins Genelec has tested? Maybe one or more of them has headphone calibration or a head-tracking solution?

Perhaps it's time for Smyth Research to start thinking about adopting the SOFA file format...
 
Jun 28, 2019 at 2:59 PM Post #207 of 220
Are you planning to order the processing of your HRTF? If yes, please start an impressions thread!
No, I am not planning to order that because I share the following sentiment with castleofargh:
now 500 bucks is a lot for:
-something that may or may not come close to an acoustic measurement
 
Jun 29, 2019 at 1:51 AM Post #208 of 220
Are you familiar with the plugins Genelec has tested? Maybe one or more of them has headphone calibration or a head-tracking solution?

Perhaps it's time for Smyth Research to start thinking about adopting the SOFA file format...
not really. some stuff rings a bell, but that's about it.
but on a side note, I spent an hour or so with the nephew of the friend of a friend of my mother while on a short trip..., so serious street cred. he seems to work on various immersive stuff and he showed me SPAT Revolution, which lets you play around with a virtual room and many sources, some virtual, some set as the real ones we're using. not new but pretty cool. it had by default some of the IRCAM HRIRs available for free online, and I happen to use a slightly modified version of one for my own "DIY crossfeed", so of course I asked him to load that and play around with the placement of the virtual sources on some of his stuff while I listened on headphones. fun! and with an HRIR fairly close to mine (at least subjectively, for the whole frontal area), it made me dream of an audio future even cooler than the one I was already wishing for.
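(for context, the "DIY crossfeed" I'm talking about is nothing more exotic than rendering two fixed virtual speakers at +/-30 degrees with an HRIR pair - roughly the sketch below, where hrir_l30/hrir_r30 stand in for whichever measured pairs you happen to use; no room simulation, no head tracking)

```python
import numpy as np
from scipy.signal import fftconvolve

def hrir_crossfeed(left, right, hrir_l30, hrir_r30):
    """Static HRIR-based 'crossfeed': simulate speakers at -30 and +30 degrees.
    hrir_l30 / hrir_r30 are (left_ear, right_ear) impulse-response pairs for
    those two directions, e.g. taken from a free IRCAM Listen set or a SOFA file."""
    out_l = fftconvolve(left, hrir_l30[0]) + fftconvolve(right, hrir_r30[0])
    out_r = fftconvolve(left, hrir_l30[1]) + fftconvolve(right, hrir_r30[1])
    peak = max(np.max(np.abs(out_l)), np.max(np.abs(out_r)), 1e-9)
    return out_l / peak, out_r / peak   # crude normalisation so it doesn't clip
```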

I'm not sure this really has any application even for rich audiophiles; it's clearly aimed at content creators and isn't even specific to headphones. but when I see stuff like that, and then I look at the playback world and how technologically outdated we are, it makes me both sad and angry. sad watching consumers who seem fine with it and run after the promise that noise shaping at -300dB or DSD40000 is what they need for a "realistic" soundstage. and angry at manufacturers who are surely taking their sweet time getting in the game, and would probably never have bothered if not for 3D goggles urgently needing a sound that won't ruin the experience too much. as elitist as this hobby likes pretending to be, those 3D goggles have plenty of cool audio tricks and content made for them, while we're still here with our "wrong" stereo playback.
so right now I'm pretty much like this: [reaction gif]
 
Feb 22, 2020 at 9:55 AM Post #209 of 220
I have a BACCH4Mac system.
I just tried playing REM 'Automatic For The People' two ways:
- a DVD-A 6-channel recording played via my AV system (Anthem and Lyngdorf amps, 7.2.4 speaker configuration)
- a 2-ch stereo recording played through BACCH4Mac by the Lyngdorf amp to a 2+2 stereo speaker and subs configuration

I vastly prefer the BACCH4Mac rendering of the track. It sounds like the vocals are in specific positions in the room. Because it's so easy to separate the vocals out from the other instruments, it's much more engaging to me, pleasurable and less fatiguing. By contrast, the 6-ch recording rendered vocals that sounded flatter.

I accept that sound engineers layer and mix sounds using studio monitors without IXTC, which means they simply do not intend the music to sound the way it does through systems that deploy IXTC. I also accept that rendering the soundfield using this technology can place sounds in weird positions. But for about 85-90% of the music I listen to, it sounds vastly better using this technology.

I believe that rendering the soundfield using stereo speakers without IXTC induces listening fatigue - this may be because the brain is trying to correlate that sonic input with the normal 3D sound field it hears the world through. If that's the case, it is a fundamentally flawed approach given how our brains work.
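For anyone unfamiliar with what IXTC (crosstalk cancellation, or XTC) actually does under the hood: each speaker's signal also reaches the opposite ear, and a crosstalk canceller pre-distorts the two speaker feeds so that this leakage (ideally) cancels at the ears, which is what lets binaural-style cues survive over speakers. Here is a toy sketch of the textbook recursive idea - emphatically not BACCH's implementation, which uses individualised, measured filters rather than a bare delay-and-gain model:

```python
import numpy as np

def naive_xtc(in_l, in_r, delay_samples=3, gain=0.85):
    """Textbook recursive crosstalk canceller, heavily simplified: the path from
    each speaker to the far ear is modelled as just a delay and a gain, and each
    speaker output subtracts an inverted, delayed copy of the other to cancel it."""
    n = len(in_l)
    out_l = np.zeros(n)
    out_r = np.zeros(n)
    for i in range(n):
        leak_into_left = out_r[i - delay_samples] if i >= delay_samples else 0.0
        leak_into_right = out_l[i - delay_samples] if i >= delay_samples else 0.0
        out_l[i] = in_l[i] - gain * leak_into_left    # cancel the right speaker at the left ear
        out_r[i] = in_r[i] - gain * leak_into_right   # cancel the left speaker at the right ear
    return out_l, out_r
```

The recursion is also why XTC is so sensitive to head position: move your head and the real delays and gains no longer match the assumed ones, and the cancellation collapses.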

I have yet to listen to Dolby Atmos Music mixes - must give that a go soon.
 
Mar 25, 2021 at 10:40 AM Post #210 of 220
Apparently Sony is also on a quest for the holy grail:
They've reached it. I've been beta testing a different Sony product, designed for professional sound-to-picture use, since early last summer. I've listened extensively to a Realiser, and the Sony system is just as convincing. That makes sense, as they both use similar concepts: you need mics in your ears and to be at a dub stage or in a control room to capture a personalized reference. Sony's loads as a Pro Tools plugin for the time being. I think they'll have standalone software when (if) it's released, but even though it'll be much less expensive than the A16, the high barrier to entry (personalized measurements) may mean this is a product oriented to professionals. Dunno how Sony will market it. BTW, this product comes with purpose-built open-back headphones along with the software. It works from 2 through 32 channels.
 
