A layman multimedia guide to Immersive Sound for the technically minded (Immersive Audio and Holophony)
Jan 3, 2018 at 12:10 PM Post #61 of 220
From what I'm told, the biggest problem with sound quality in 5.1 systems isn't the equipment. It's the baby sleeping upstairs or the wife who isn't interested in what you're screening or the neighbor who shares a common wall with your apartment. I'm lucky. My listening room is in an attached guest house in the back yard. I can listen to whatever I want whenever I want. The speakers and amps are easy. The room is the trick.

I'd call it a tie between your comment and people who run 5.1 (or stereo) but refuse to use an automated EQ system (Audyssey, Dirac, Trinnov, etc.) or manual EQ with a measurement system to address room issues. Or who plop a subwoofer down where it's convenient rather than where it performs best in that particular room/listening position(s).

Completely agree that the room is the challenge, not the actual gear. It's amazing how much cash people will throw at hardware for a problem that isn't solvable with money unless you're hiring someone to help with room acoustics. Why people will spend thousands of dollars on gear but not a few hundred hiring someone to help with EQ/room correction remains a mystery to me.
 
Jan 3, 2018 at 12:28 PM Post #62 of 220
I have a bit of an off-topic question: when I get my A16, can I use the in-ear microphones to experiment and make binaural recordings?

Sure you can try, why not? Use your own head (wearing the in-ear microphones, of course) to record a live event, then play it back with HPEQ only (no PRIR or BRIR). It should be perfect (for yourself), unless the microphones have limited fidelity (picking up only the A16 test signals is a less demanding job than a full-range dynamic recording, so they could be less suited for the latter). But as I said, just try.
But you would have to sit/stand perfectly still during recording, or you might get a bit seasick during playback! Or remember exactly how you moved during the recording and repeat the same movements during playback.

Now maybe someone can figure out how to add head tracking as well. The trick of playing back through a crosstalk-free PRIR, sometimes used with generic binaural recordings not made with your own head, is less suitable here. It would be a pity to "add" an additional HRTF, because this recording already has your own perfect HRTF applied to it, by your own head and ears. I am thinking of "constructing" a crosstalk-free PRIR that does not apply the full HRTF for each look angle, but only the difference between your HRTF from straight in front and your HRTF for the other look angle (and does not contain any reflections or reverberation). (So looking straight forward it would do nothing, and looking 30 degrees left, for example, it would only apply the change in the HRTF.) I am just brainstorming here; I will think about it more.
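That "difference only" idea can be sketched as a regularised frequency-domain division: divide the look-angle HRTF by the straight-ahead HRTF and use the result as a correction filter. The sketch below is purely illustrative (the function names, the toy impulse responses and the naive O(n²) DFT are my own, not anything from the A16); a real implementation would use an FFT, work per ear, and need careful phase handling:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (illustrative only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spec):
    """Inverse DFT, returning the real part."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def difference_filter(h_front, h_angle, eps=1e-3):
    """A filter D such that (front HRTF) convolved with D is roughly
    the look-angle HRTF, i.e. D applies only the *change* in HRTF
    between looking straight ahead and looking at the other angle.
    Regularised frequency-domain division:
    D(f) = H_angle(f) * conj(H_front(f)) / (|H_front(f)|^2 + eps)."""
    hf, ha = dft(h_front), dft(h_angle)
    spec = [ha[k] * hf[k].conjugate() / (abs(hf[k]) ** 2 + eps)
            for k in range(len(hf))]
    return idft(spec)

# Toy check: if the straight-ahead response is a pure impulse, the
# difference filter is (approximately) the look-angle response itself.
h_front = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
h_angle = [0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]  # delayed, attenuated
d = difference_filter(h_front, h_angle)
```

In the toy check the straight-ahead response is a pure impulse, so the difference filter comes out as (almost exactly) the look-angle response itself; with real measured HRTFs, the regularisation term eps keeps the division from blowing up at frequencies where the straight-ahead response is weak.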

Yes.

If your recorder has plug-in power and you want to record unamplified events, you will be fine. If you want to record loud music, then you will need a battery module with higher voltage and attenuation.

See taperssection.com for more details.

In that case, a crosstalk-free PRIR is mandatory.

Or the dual-user crossfeed reduction approach, but then not using the head tracker will make recordings less lifelike, depending on how intolerant you are to head movements.

Edited: sorry, @sander99, I didn’t read post #1696. That is more or less what I was trying to say...

I record with this:

(image attachment)

This is my feeling about binaural recordings:

I don't remember my grandfather. He died when I was too little. But there is a tape recording that he made at some Christmas that I like to hear. My mother also died and I wish I had recorded her with my binaural microphones. I would love to hear her voice again using an externalization device.

But be careful when recording music.

Firstly, you must respect the artists' copyright (scores, lyrics and performances).

Secondly, you and the artists may receive harsh criticism, such as:

(...) There is almost NO emulation going on here! You've posted a recording of a real drumkit and that's obviously NOT what we're emulating. Do you want to hear an emulation of a pathetic string "twang" captured with high fidelity 3-axis spatial information or do you want to hear that pathetic twang distorted completely beyond recognition, so it better matches what you think an electric guitar should sound like? A number of famous artists would struggle to sing "Twinkle, Twinkle Little Star" decently, why would we want to emulate that? Etc., etc.!
(...)
Me too but unfortunately, that's the reality here. You continue to miss the point that bigshot and I are trying to explain to you, that it's ALL an illusion. You seem determined to interpret this as meaning that it's actually all real, except for the illusion of stereo. That the musicians are creating real performances on real instruments which we're accurately recording and then creating a stereo illusion from those recordings. The reality is: The instruments in real life sound little or nothing like we want them to, there is no real performance and therefore, how can we accurately record something which never existed? When we say it's ALL an illusion, we don't just mean an illusion of stereo, we mean the performance and the music itself is an illusion and we CANNOT create that illusion if we attempted to record and "preserve" 3-axis spatial information!

I'm not sure how to break you out of the myth you appear trapped in. (...)

G

(...)
2a. Hang on, you're talking about something rather different now. Before, you were talking about "preserving the 3-axis spatial information" and I explained that it is impossible to record and preserve that spatial information because we don't have one coherent acoustic space to start with (but a number of different ones) and because the different processing required to all the instruments/sounds in every popular music genre would not be possible if we did try to record and preserve the spatial information. However, that's a significantly different proposition from saying (for example): Let's make a bunch of multi-tracked mono recordings, with relatively little spatial information, process those tracks individually how we want and then place them in a 3-axis soundfield. If we did this, we would obviously be recording and preserving little/nothing of the spatial information, we would be creating/manufacturing new and entirely different spatial information and, we are certainly not talking about emulating any sort of real 3-axis soundfield here but of creating a hopefully aesthetically pleasing soundfield (from a combination of mono, stereo and multi-channel spatial effects). Additionally, all this applies to the majority of music products (the various popular music genres) not to niche music genres such as say classical music, which is typically entirely acoustic, where we would have a single coherent acoustic space to start with and where relatively little processing of the instruments is required/desired. However, we still have some issues even in these circumstances which preclude (or rather, restrict us from) simply recording/preserving the 3-axis spatial information.
2b. No, I am not saying acoustic virtual reality is a myth! I'm not sure where you've got that from? I am saying that because with popular music genres there is no "reality" to start with, then logically it's obviously impossible to emulate a reality which never existed. So, we cannot have a virtual reality of popular music, although we could in theory have a sort of "virtual non-reality" or "virtual surreality" but it's not clear how we could achieve even that in practice without musical compromises and avoiding it being no more than just a cheesy gimmick (as with some early stereo popular music mixes).

3. To be honest, your questions, conclusions and statements indicate that you have relatively little understanding of our work. We do not "add value" ... putting a chassis, wheels and suspension on a car does not "add value" to a car because without a chassis, wheels and suspension you don't have a car in the first place, just an incomplete pile of car parts! Engineering is an intrinsic part of the creation of all popular music genres, not an added value. For example ...
(...)
5. Clearly you are wrong and driven by myth as far as music is concerned, even acoustic music genres, although to a lesser degree. You are also somewhat wrong and driven by myth as far as most commercial sound in general is concerned. What you've presented here is not "a layman's guide to immersive sound" but a hypothesis of what theoretically might occur in the future, and it's a distant "might" because, apparently without realising it, you're not just talking about technicalities of sound reproduction but about a huge change in the art underlying music; a change to something new, as yet undiscovered, at the cost of abandoning the art we currently have and have had. If we look back in history, we see that the change from mono to stereo occurred gradually, but once there was a decent installed user base of stereo, the popular music genres evolved to take advantage of it, even to the point of becoming reliant on it. Then we got 5.1 about 25 years ago and have had a decent installed user base for about 15 years, but beyond a relatively few experimental albums, we've seen none of the huge music genre evolution to take advantage of 5.1 which we saw with the change from mono to stereo. Now you're talking about another big evolutionary step beyond 5.1, while the music itself hasn't even evolved beyond stereo yet and shows no signs of doing so!

G

@gregorio, I know you didn't mean to say that commercial music is the only form of art allowed, but since you keep saying I am driven by a myth, I thought why not show you an alternative view...
 
Last edited:
Jan 3, 2018 at 12:30 PM Post #63 of 220
The Bose Lifestyle V3 works just wonderfully for me.
 
Jan 3, 2018 at 10:42 PM Post #64 of 220
1. Obviously, the more people you wish to accommodate, the bigger the room you'll need and the bigger the room, the more expensive it becomes. There's a larger surface area to treat, which is more time and a bit more money but you'll also need bigger amp/speakers to handle the increased room volume.

2. I'll slightly disagree with bigshot here: while it's entirely possible to find a finished room of the right general dimensions, you're not going to find a room with the right acoustic properties. Depending on how far you're willing/able to go, you'll probably be covering all those finished walls and ceilings anyway, or even possibly removing that "finishing" depending on what it is, so it's a waste paying extra for a finished room. BTW, by "covering" I'm not talking about sticking some acoustic panels on the existing walls but building stud walls in front of the existing walls. This lets you get rid of the main acoustic issue of a cuboid room, the parallel surfaces, and in the process improve isolation, deal with some of the bass build-up issues, achieve better isolation between the speakers, etc. This all sounds like an onerous, expensive task and while it can take time, it's not particularly expensive: some metal frame, a bunch of standard construction plywood sheets and/or gypsum boards, several rolls of rockwool, some cheap softwood to build panels, etc. So we're talking hundreds or low four figures if you're DIY'ing. On the forum I mentioned previously you'll find detailed step-by-step instructions posted by numerous home theatre self-builders. Taking this route would obviously result in a dedicated room, and it really wouldn't matter much if it were just bare concrete to start with, which is why my advice is slightly different from bigshot's, who I believe is thinking more along the lines of a multi-function room and minimal treatment. However, I do agree entirely with bigshot about the typically low ceilings found in basements, which is a serious problem with no practical solution. Bear in mind that as you increase the desired square footage to accommodate more people, you really need to maintain the ratio with height. A 10' ceiling might be OK in a small room for just a couple of people but be a significant problem in a bigger room.

G

Yeah, I realize my question may have come off as "can you just listen in a basement", when I really meant what you're addressing: "is it cheaper to just build from scratch". Now you make me wonder if I should try to build a "wife prover" single-person setup in my current basement to justify extra destruction when/if we get a bigger place. I'll take myself off to the home theater web now; thanks to you both!
 
Jan 3, 2018 at 11:43 PM Post #65 of 220
I'd say if the ceiling is high enough and you are willing to put in your own walls, the basement can work great. It's also an ideal place for projection video. (I was answering thinking you were thinking about leaving the unfinished basement unfinished.)
 
Last edited:
Jan 4, 2018 at 4:57 AM Post #66 of 220
[1] As I see it, that procedure could make your mixes compatible with current and future listening environments, because it synthesizes coherent ITD according to the azimuth you choose at your artistic will. ... But you are clearly and expressly stating/asserting/maintaining that such a type of mixing is impossible!
[2] Remember, this is the science forum. You must test the hypothesis before you can rule it out. Given your assertiveness, apparently you did test such a hypothesis.
[3] I just don't understand why Professor Choueiri then insists that it is a possible path... He is a professor of applied physics at Princeton University and he is the one who is lecturing that new technologies will allow people to be truly fooled by audio... Apparently he is also driven by the same myth...

1. Not impossible, just impractical and artistically undesirable!
2. Effectively, every mix I've ever done is a similar sort of test.
3. Yes, apparently he is also driven by the same myth! Being a professor of applied physics at Princeton means he is an expert on applied physics, not an expert on recording, mixing and music production! I'm sure he knows way more about applied physics than I will ever know even exists but the example screenshot you posted of a mixing session indicates he's got no idea how music is performed, recorded and mixed. Or, maybe he does and he's either just talking "theoretically" or relying on the ignorance/myths of how audiophiles think music is produced in order to market a product? The example screenshot you posted would work as intended given ALL these conditions: A. If each of the instruments were recorded individually, B. With little/no natural reverb/room acoustics and C. If we were trying to create a single, cohesive acoustic space in the finished product.
Typically, NOT EVEN ONE of these conditions is practical or musically desirable, let alone all of them! For example, in most popular/rock based genres we need to individually process each of the instruments in a drumkit but ultimately we want it to sound like a drumkit (albeit not a drumkit which actually exists in the real world), not like a bunch of individual, unrelated percussion instruments. In practice then, we record the whole drumkit in one go (not each instrument in the drumkit individually), we spot mic each of the instruments AND we record the whole drumkit in stereo, plus we typically also have a mic dedicated to recording the room ambience (respectively called "instrument", "overheads" and "room" mics). We then process each of the instruments in the kit individually from the spot mics (EQ, compression, reverb, etc), mix with the overheads and room mic, create a stem for the drumkit and apply more processing to the kit as a whole. In practice, we end up with the relative timing of the instruments within the drumkit all over the place, severely compromised in preference to a subjectively aesthetically pleasing result. We would most likely do something similar with the strings/brass (although with less individual processing), then again with the singers (other than the lead vocal) and the guitars probably recorded individually. Each of these groups/stems would most likely have different reverbs applied. The lead singer maybe a small plate, the strings/brass probably a much bigger chamber or hall type reverb, the backing vocals maybe a bigger plate, the lead guitar maybe a stereo delay, the bass guitar probably very little reverb. There is no one room or coherent acoustic space that we are either recording, mixing or trying to create at the end of the process, which I've stated before! The software screenshot you posted clearly operates on the basis of a single reverb and single coherent acoustic space. 
So, Choueiri is correct, "it is a possible path" but going back to point #1, "possible" and "practical"/ "artistically desirable" are not at all the same thing!
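As a side note on what "coherent ITD according to the azimuth" means in concrete terms: the classic first-order approximation is Woodworth's spherical-head formula. A minimal sketch (the head radius is an assumed average value, and this ignores elevation and near-field effects):

```python
import math

HEAD_RADIUS = 0.0875    # metres, a commonly assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def itd_woodworth(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural time
    difference (in seconds) for a far-field source at the given azimuth
    (0 deg = straight ahead, positive = towards the right ear),
    valid for azimuths up to +/-90 deg."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
```

A source at 90 degrees gives roughly 0.66 ms, consistent with the commonly quoted maximum ITD of around 0.6 to 0.7 ms.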

[1] I see why my language may lead to a misunderstanding. Instead of "realistic ITD", I should have used "synthetic ITD, coherent with our spatial expectations".
[2] And why would a mastering engineer want to make current mixes compatible with future listening environments?

1. Yes, there is a misunderstanding going on here. You are misunderstanding both the idea of a "realistic ITD" AND of a "synthetic ITD, coherent with our spatial expectation". Music mixes are NOT coherent with our spatial expectation, they are created with acoustic information which couldn't possibly exist in the real world and which should sound like a bizarre, ridiculous, spatially incoherent mess but of course that's not how they appear. They don't appear like that because mixes are created by human beings whose brains (process of perception) work in roughly the same way as consumers' brains. In other words, when we mix we have little regard for what should be coherent, incoherent or expected spatial information, just for what sounds good and what sounds good is typically nothing like real (actual or synthesised) timing delays. It's bizarre listening to audiophiles go on about natural, transparent, realistic soundstages or "it's like being there", because there is no natural, realistic or "there". It's like having a Photoshopped image of a unicorn and then people discussing/arguing about how natural and realistic it is, about how one video monitor makes the unicorn look more real than a different monitor.

2. They wouldn't; there's not the time or money to do that and typically, making the master sound good for one format compromises the sound for another. For example, despite virtually all network TV being made and broadcast in 5.1, most consumers still listen in stereo. This is why Dolby has historically dominated the film sound world: even its first surround format was backwards compatible. All HDTVs contain licensed Dolby software which automatically down-mixes the broadcast 5.1 to stereo if necessary, but that down-mix is compromised. It's a simple algorithm which works fairly well sometimes, not so well at other times. We have to check a down-mix when delivering 5.1 to make sure nothing really strange is going on, and change the 5.1 mix if there is, which obviously compromises the 5.1 mix. In general, the stereo down-mix is acceptable but it's not as good as a dedicated stereo mix. Dolby Atmos also has this feature: a Dolby Atmos processor will mix/down-mix according to your system, up to 64 speakers in your Atmos installation, or to 7.1, 5.1 or stereo if you don't have an Atmos system. Despite this feature, most theatrical films have separate mixes (say an Atmos mix and a separate 7.1 mix) rather than relying on the down-mixing algorithm.

G
 
Jan 4, 2018 at 10:25 AM Post #67 of 220
I am sad that I won't easily find modern stereo content with coherent ITD/ILD, more suitable for crosstalk avoidance/cancellation technologies.

So I have been searching for compatible recordings, and I have found very few done by recording/mixing engineers with a different school of thought, but even those are coincident-microphone recordings without ITD.

One example would be the Cowboy Junkies album "The Trinity Session", recorded by Peter Moore. See https://www.soundonsound.com/people/cowboy-junkies-sweet-jane.

A Calrec Soundfield microphone was used, but the vocalist sang every song (except one) through an amplifier and a Klipsch Heresy loudspeaker! Only one song was recorded a cappella, in which she sang directly into the Soundfield microphone:


Interestingly enough, the recent film "Trinity Revisited" was recorded with spot microphones and mixed in Dolby 5.1:


@bigshot, I know you like classical, but just in case, have you compared those two recordings? I would love to hear your impressions.

I have also found the "one mic" recordings made by John Cuniberti (http://www.johncuniberti.com/onemic/), with an AEA R88 (stereo ribbon microphone):





So those with Comhear Yarra, Smyth Research Realiser (if you are fond of crosstalk-free emulations...) and Theoretica BACCH products will need to start a campaign similar to the motto "bring back dynamic range", which is: bring back coherent ITD/ILD!

"Stereo mastered for immersive audio" is another good motto!

Cheers!
 
Last edited:
Jan 4, 2018 at 12:08 PM Post #68 of 220
Gregorio, I'm curious what sort of compromises you need to make to have 5.1 fold down well to matrixed 2-channel Dolby... I've been watching Blu-rays of TV series lately, and I've discovered that a lot of them aren't just 2-channel like it says on the box, but actually matrixed 5.1 mixes, either in Dolby or DTS. However, I've noticed that these multichannel mixes are very basic: dialogue in the center and occasional ambience or sound effects in the rear. There is never an attempt to pan dialogue across the screen when a character crosses, or to push a sound object out into the center of the room. No sense of immersive ambience either. Is this a limitation of the matrixing, or is it just a bare-bones mix? Can matrixed Dolby or DTS handle subtle gradated handoffs from front to rear or from mains to center? I'm not hearing that in any of these mixes.
 
Jan 4, 2018 at 2:57 PM Post #69 of 220
(...)
1. Yes, there is a misunderstanding going on here. You are misunderstanding both the idea of a "realistic ITD" AND of a "synthetic ITD, coherent with our spatial expectation". Music mixes are NOT coherent with our spatial expectation, they are created with acoustic information which couldn't possibly exist in the real world and which should sound like a bizarre, ridiculous, spatially incoherent mess but of course that's not how they appear. They don't appear like that because mixes are created by human beings whose brains (process of perception) work in roughly the same way as consumers' brains. In other words, when we mix we have little regard for what should be coherent, incoherent or expected spatial information, just for what sounds good and what sounds good is typically nothing like real (actual or synthesised) timing delays. It's bizarre listening to audiophiles go on about natural, transparent, realistic soundstages or "it's like being there", because there is no natural, realistic or "there". It's like having a Photoshopped image of a unicorn and then people discussing/arguing about how natural and realistic it is, about how one video monitor makes the unicorn look more real than a different monitor.

(...)

G

Gregorio, I'm curious what sort of compromises you need to make to have 5.1 fold down well to matrixed 2-channel Dolby... I've been watching Blu-rays of TV series lately, and I've discovered that a lot of them aren't just 2-channel like it says on the box, but actually matrixed 5.1 mixes, either in Dolby or DTS. However, I've noticed that these multichannel mixes are very basic: dialogue in the center and occasional ambience or sound effects in the rear. There is never an attempt to pan dialogue across the screen when a character crosses, or to push a sound object out into the center of the room. No sense of immersive ambience either. Is this a limitation of the matrixing, or is it just a bare-bones mix? Can matrixed Dolby or DTS handle subtle gradated handoffs from front to rear or from mains to center? I'm not hearing that in any of these mixes.

So why the need for panning in stereo at all?

Why not just use mono?

Why not deliver everything through the center channel with a low-directivity (omnidirectional) loudspeaker, so that all seats get all the good sound?

Actually those new technologies also allow the engineer to be incoherent with the listener’s spatial expectation.

I agree that, like dissonant intervals in music, spatial incoherence may help to alter the mood of your listener.

Going from flat to elevated sources in the frontal soundstage and being able to detach from the region between the frontal speakers gives you more creative freedom.

It is all in your hands.
 
Last edited:
Jan 4, 2018 at 3:19 PM Post #70 of 220
There are different ways to mix movies. Some put all the dialogue in the center channel, because in a traditional movie theater, that is directly behind the screen. It focuses the voices on the screen. The mains are mostly music and effects in this kind of approach. Another way of handling it is to consider the center channel as a replacement for the phantom center between the mains and all three channels are considered equal and related. I see more matrixed mixes using the former rather than the latter, so I'm curious if that is because of the limitations of the matrixing or if it is just a creative choice.
 
Jan 4, 2018 at 3:38 PM Post #71 of 220
Thank you @bigshot.

I saw the video from Dr. Toole that was linked in the thread about acoustic panels, and he mentions how important the center channel is in movies, but he also says that it is rare to have stereophonic effects between the left and right front channels, which is understandable since the sweet spot is unique and very narrow.

So this brings me back to the fact that those new technologies allow multiple sweet spots.

The Realiser allows two users simultaneously, and it lets you measure a PRIR with the center channel exactly where your TV or screen is supposed to be. Once the measurement is done, you return the center speaker to its compromised spot, and the emulated center channel stays coincident with the visual cues.

If you use binaural through loudspeakers with crosstalk cancellation (XTC) via a beamforming phased array of transducers, or just the latter on its own, you have multiple sweet spots and you don't need a center channel at all.

I wish you could tell me those technologies could work in favor of music mixes and not only for movies and VR. I just wish to make stereo mixes more suitable for them, even if it means releasing a new, dedicated master.
 
Last edited:
Jan 4, 2018 at 5:14 PM Post #72 of 220
There are films that use the center channel as a bridge for the two mains. A lot of "roadshow" movies from the 50s and 60s had multitrack sound on first release and that is how they mixed. For instance, the opening of Billy Rose's Jumbo was shot widescreen and they had audio channels behind the screen. In an opening scene a character crosses the screen from right to left and the voice is focused right on his position on the screen as he crosses. Most modern films have a combination of dialogue center and mix center. They lean towards the dialogue, but include a bit of music so it doesn't drop out in the middle. My projection screen is pretty much acoustically transparent, so I have my center channel right in the center behind the screen.

It seems like it wouldn't be very practical to try to co-ordinate the pattern of recording with the pattern of the playback. It would require more calibration and stricter speaker placement, and it's hard enough to get people to do that right as it is.
 
Jan 5, 2018 at 7:16 AM Post #74 of 220
Gregorio, I'm curious what sort of compromises you need to make to have 5.1 fold down to matrixed 2 channel Dolby well... I've been watching blu-rays of TV series lately, and I've discovered that a lot of them aren't just 2 channel like it says on the box, but actually matrixed 5.1 mixes, either in Dolby or DTS.

That's actually two different questions. The auto down-mix of 5.1 to stereo is NOT matrixed; what you end up with is standard stereo (LoRo). Matrixed is different; what you end up with is Lt/Rt. With matrixed (LCRS for example), the centre channel is down-mixed to the phantom centre and the surround channel is also down-mixed to the phantom centre but 90deg out of phase, resulting in an LR stereo mix (LtRt). Using phase recognition circuitry, the phantom centre can be extracted from the LtRt and the surround channel separated from the centre channel, thereby allowing the LCRS to be reconstructed from the LtRt. I worked quite extensively with LCRS matrixed mixes for a number of years in the late '90s and to be honest it was a PITA. It suffers from an effect named "snapping", a tendency for sounds to snap to a particular speaker caused by the absolute nature of the phase detection. For example, a sound panned between the centre and left channels is likely to snap either entirely to the left speaker or entirely to the centre speaker when decoded, depending on which side of the phase threshold detector it falls. Additionally, with stereo material or anything else likely to have some phase incoherency, it might be difficult to pan where you want, as the phase inconsistency might trigger a threshold and pan a surround-positioned sound to a different speaker, or vice versa. It was always absolutely essential when mixing in Pro Logic (LCRS or a variant) to have a monitor chain which included encoding (to LtRt) and decoding, so you knew where elements of the mix would end up and could then adjust the phase and/or panning when it didn't go where you expected. You don't have that problem with standard down-mixing to LoRo: there's no phase added, and you can't reconstruct the LoRo back into the original multi-channel surround. Potential problems with a LoRo down-mix depend on how you mix the 5.1.
The left channel of the stereo down-mix (Lo) contains the left channel of the 5.1 mix + Ls at -3dB + the centre at -3dB; the right channel (Ro) contains the right channel of the 5.1 + Rs at -3dB + C at -3dB; the LFE channel is ignored. Obviously there's the possibility of overloading the LoRo, there's also the danger of losing something if it's primarily placed in the LFE, and there are various other potential dangers such as a large 5.1 reverb which can sound just right in 5.1 but like too much in a down-mixed stereo.
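For the technically minded, the LoRo fold-down just described is simple enough to write out per sample. A minimal sketch (the -3dB coefficients and the discarded LFE follow the description above; a real down-mixer would also apply limiting or normalisation to handle the overload case, which is omitted here):

```python
ATT = 10 ** (-3 / 20)  # -3 dB, roughly 0.708

def loro_downmix(L, R, C, LFE, Ls, Rs):
    """Standard LoRo stereo down-mix of one 5.1 sample frame:
    the centre and surrounds fold in at -3 dB, the LFE is discarded."""
    Lo = L + ATT * C + ATT * Ls
    Ro = R + ATT * C + ATT * Rs
    return Lo, Ro

# Overload risk: full-scale content in L, C and Ls sums well past 1.0.
peak = loro_downmix(1.0, 0.0, 1.0, 0.0, 1.0, 0.0)[0]
```

Note how content placed only in the LFE simply disappears from the stereo result, which is exactly the "losing something in the LFE" danger mentioned above.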

Some put all the dialogue in the center channel, because in a traditional movie theater, that is directly behind the screen. It focuses the voices on the screen. The mains are mostly music and effects in this kind of approach. Another way of handling it is to consider the center channel as a replacement for the phantom center between the mains and all three channels are considered equal and related.

It's not really a case of two different approaches but of the practicalities of what you're mixing for. In a traditional movie theatre you have the issue that the front left and front right channels are partially fed into the foremost surround diffuser speakers, resulting in a sound panned to say the hard left appearing to come from well beyond the far left of the screen. Generally of course the dialogue is coming from characters on screen and therefore it's rare to hard left or right pan dialogue and most of what is occurring off-screen is background SFX. Because of the large physical distance between the front left and right speakers, it's often not desirable to hard pan stereo effects or stereo music mixes, so they are often panned slightly inside hard left/right and maybe also fed into the centre speaker, depending on the music mix and what's going on in the rest of the sound mix. Additionally, there's the problem in large cinemas where the physical distance between the left and centre speaker (and obviously centre and right speakers) is very large, so we can have effectively the same stereo image problem we would get without a centre channel and the obvious solution is the same, to fill-in the stereo phantom centre with a physical centre speaker. This is why SDDS was invented, a 7.1 format with two surround channels, an LFE channel and 5 front channels: Left, Left Centre, Centre, Right Centre and Right. This is only applicable to large cinemas though, not to consumer environments.

[1] Actually those new technologies also allow the engineer to be incoherent with the listener’s spatial expectation. I agree that, like dissonant intervals in music, that spatial incoherence may help to alter the mood of your listener.
[2] ... bring back coherent ITD/ILD!

1. No, it's nothing like dissonant intervals in music. Dissonant intervals are used specifically because they are perceived as dissonant: they create a sense of unpleasantness/tension and an expectation that the dissonance will be resolved (to consonance). In fact, western music composition is largely based on this tension and expected resolution, and the entire history of western classical music can be analysed in purely these terms. Spatial incoherence is nothing like this: we don't use it because it's dissonant/incoherent, whether it's coherent or not is irrelevant, and even audiophiles are usually unaware that what they're listening to is incoherent, so there's no expectation of a resolution.

2. Or to put it another way: bring back the music recording, mixing, performance and genres of the 1950s (and do away with pretty much everything since then)!
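For readers wondering what a "coherent ITD" amounts to numerically, Woodworth's classic spherical-head approximation gives the interaural time difference for a far-field source at a given azimuth; the head radius and speed of sound below are typical textbook values, not measurements from this thread:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference (ITD), in seconds, for a far-field source.
    azimuth_deg: 0 = straight ahead, 90 = hard to one side.
    head_radius: metres (about 8.75 cm is a common average).
    c: speed of sound in m/s."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))
```

A source hard to one side (90 degrees) works out to roughly 0.65 ms, which is the commonly quoted maximum ITD for an average adult head; a binaural or speaker rendering whose delays stray far from this curve is what one would call ITD-incoherent.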

G
 
Jan 5, 2018 at 10:55 AM Post #75 of 220
Biometrics will be used by Creative's Super X-Fi for headphones. It will be demonstrated at CES next week.

https://us.creative.com/sxfi/

@Erik Garci, thank you very much for that!

It is not yet capable of third-order ambisonics, but a dedicated low-power chip that skips time-consuming field programming of DSP chips is certainly a step in the right direction for price affordability and, consequently, mass consumption.

I am sure you are much better prepared and more experienced than me to ask them (the Creative engineers at CES) the right questions, but just in case, please consider the following suggestions:
  1. Does it use the phone camera for head-tracking, or do the “FREE SX-FI Holography-Enabled Headphones” have tracking of their own?
  2. Are the “FREE SX-FI Holography-Enabled Headphones” connected via USB or Bluetooth? Do those interfaces allow real-time head tracking without the cellphone camera?
  3. Does it track both vertically and horizontally?
  4. Can the user enable an optional crossfeed-free playback mode?
  5. Is it compatible with Atmos/DTS:X/Auro and with the first-order Ambisonics format adopted by Google?
  6. Are they going to sell their chip to mobile phone manufacturers?
  7. Can the listener use the mobile output to feed a beamforming phased array of transducers that avoids crosstalk?
Regarding the fifth question, please mention that, for instance, “one of the longest running social networks for virtual reality (VR) head-mounted displays (HMDs), vTime, has partnered with audio specialist DTS to bring enhanced audio to the service and improve users’ immersion” (probably licensed from Smyth Research?).

You are the right man, in the right time, in the right place.

If many of those questions receive positive answers, then recording and mastering engineers should be prepared for a massive base of mobile users ready for enhanced stereo environments!

@gregorio

What if you still mix the drum kit the way you like into two tracks and use the Bacch-3dm to render them as two virtual loudspeakers in the standard stereo triangle, and then mix only the objects with more directional frequencies and harmonics (voices, for instance) on separate buses?

Please ask Professor Choueiri if you can skip his “reverb calculations based on user-controlled room geometry and a wide range of wall materials” for the former and only add your preferred reverberation for the latter.

Please listen to the beginning of Michael Jackson's Thriller and tell me: do you prefer him walking across the flat region from the left speaker to the right speaker, or walking from your left to your right, unhooked from the speakers, right next to you?

Note that, AFAIK, if you want, you could still keep the music in the flat region from the left virtual speaker to the right virtual speaker.

Please give it a chance and tell us if it works in favor of your artistic conception. I am not in favor of reducing your creative freedom, quite the contrary.
 
