A layman multimedia guide to Immersive Sound for the technically minded (Immersive Audio and Holophony)

Discussion in 'Sound Science' started by jgazal, Nov 25, 2017.
Page 5 of 13
  1. bfreedma
    I'd call it a tie between your comment and people who run 5.1 (or stereo) but refuse to use an automated EQ system (Audyssey, Dirac, Trinnov, etc.) or manual EQ with a measurement system to address room issues. Or who plop a subwoofer down where it's convenient rather than where it performs best in that particular room/listening position(s).

    Completely agree that the room is the challenge, not the actual gear. It's amazing how much cash people will throw at hardware for a problem that isn't solvable with money unless you're hiring someone to help with room acoustics. Why people will spend thousands of dollars on gear but not a few hundred hiring someone to help with EQ/room correction remains a mystery to me.
     
  2. jgazal
    I record with this:

    This is my feeling about binaural recordings:

    But be careful when recording music.

    Firstly, you must respect the artists’ copyrights (scores, lyrics and performances).

    Secondly, you and the artists may receive severe criticism, such as:

    @gregorio, I know you didn’t mean to say that commercial music is the only form of art allowed, but since you keep saying I am driven by a myth, I thought: why not show you an alternative view...
     
    Last edited: Jan 3, 2018
  3. Whazzzup
    Bose V3 Lifestyle works for me just wonderfully
     
  4. RRod
    Yeah I realize my question may have come off as "can you just listen in a basement", when more I meant what you're addressing "is it cheaper to just build from scratch." Now you make me wonder if I should try to build a "wife prover" single-person setup in my current basement to justify extra destruction when/if we get a bigger place. I'll pass myself off to the home theater web now; thanks to you both!
     
  5. bigshot
    I'd say if the ceiling is high enough and you are willing to put in your own walls, the basement can work great. It's also an ideal place for projection video. (I was answering thinking you were thinking about leaving the unfinished basement unfinished.)
     
    Last edited: Jan 3, 2018
  6. gregorio
    1. Not impossible, just impractical and artistically undesirable!
    2. Effectively, every mix I've ever done is a similar sort of test.
    3. Yes, apparently he is also driven by the same myth! Being a professor of applied physics at Princeton means he is an expert on applied physics, not an expert on recording, mixing and music production! I'm sure he knows way more about applied physics than I will ever know even exists but the example screenshot you posted of a mixing session indicates he's got no idea how music is performed, recorded and mixed. Or, maybe he does and he's either just talking "theoretically" or relying on the ignorance/myths of how audiophiles think music is produced in order to market a product? The example screenshot you posted would work as intended given ALL these conditions: A. If each of the instruments were recorded individually, B. With little/no natural reverb/room acoustics and C. If we were trying to create a single, cohesive acoustic space in the finished product.
    Typically, NOT EVEN ONE of these conditions is practical or musically desirable, let alone all of them! For example, in most popular/rock based genres we need to individually process each of the instruments in a drumkit but ultimately we want it to sound like a drumkit (albeit not a drumkit which actually exists in the real world), not like a bunch of individual, unrelated percussion instruments. In practice then, we record the whole drumkit in one go (not each instrument in the drumkit individually), we spot mic each of the instruments AND we record the whole drumkit in stereo, plus we typically also have a mic dedicated to recording the room ambience (respectively called "instrument", "overheads" and "room" mics). We then process each of the instruments in the kit individually from the spot mics (EQ, compression, reverb, etc), mix with the overheads and room mic, create a stem for the drumkit and apply more processing to the kit as a whole. In practice, we end up with the relative timing of the instruments within the drumkit all over the place, severely compromised in preference to a subjectively aesthetically pleasing result. We would most likely do something similar with the strings/brass (although with less individual processing), then again with the singers (other than the lead vocal) and the guitars probably recorded individually. Each of these groups/stems would most likely have different reverbs applied. The lead singer maybe a small plate, the strings/brass probably a much bigger chamber or hall type reverb, the backing vocals maybe a bigger plate, the lead guitar maybe a stereo delay, the bass guitar probably very little reverb. There is no one room or coherent acoustic space that we are either recording, mixing or trying to create at the end of the process, which I've stated before! The software screenshot you posted clearly operates on the basis of a single reverb and single coherent acoustic space. 
So, Choueiri is correct, "it is a possible path" but going back to point #1, "possible" and "practical"/ "artistically desirable" are not at all the same thing!

    1. Yes, there is a misunderstanding going on here. You are misunderstanding both the idea of a "realistic ITD" AND of a "synthetic ITD, coherent with our spatial expectation". Music mixes are NOT coherent with our spatial expectation, they are created with acoustic information which couldn't possibly exist in the real world and which should sound like a bizarre, ridiculous, spatially incoherent mess but of course that's not how they appear. They don't appear like that because mixes are created by human beings whose brains (process of perception) work in roughly the same way as consumers' brains. In other words, when we mix we have little regard for what should be coherent, incoherent or expected spatial information, just for what sounds good and what sounds good is typically nothing like real (actual or synthesised) timing delays. It's bizarre listening to audiophiles go on about natural, transparent, realistic soundstages or "it's like being there", because there is no natural, realistic or "there". It's like having a photoshop'ed image of a unicorn and then people discussing/arguing about how natural and realistic it is, about how one video monitor makes the unicorn look more real than a different monitor.
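For reference on what a "realistic ITD" for a single real-world source would even look like: it can be approximated with Woodworth's spherical-head formula. The point above is that a level-panned mix element carries nothing like these delays. A minimal sketch, assuming a typical head radius (the function name is mine):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a far-field
    source, using Woodworth's spherical-head model:
    ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source at 90 degrees off-centre yields roughly 0.65 ms of ITD; a
# purely level-panned stereo element carries none of this timing cue.
```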

    2. They wouldn't; there's not the time or money to do that and, typically, making the master sound good for one format compromises the sound for another. For example, despite virtually all network TV being made and broadcast in 5.1, most consumers still listen in stereo. This is why Dolby has historically dominated the film sound world: even its first surround format was backwards compatible. All HDTVs contain licensed Dolby software which automatically down-mixes the broadcast 5.1 to stereo if necessary, but that down-mix is compromised. It's a simple algorithm which works fairly well sometimes, not so well at other times. We have to check a down-mix when delivering 5.1 to make sure nothing really strange is going on and change the 5.1 mix if there is, which obviously compromises the 5.1 mix. In general, the stereo down-mix is acceptable but it's not as good as if we made a dedicated stereo mix. Dolby Atmos also has this feature: a Dolby Atmos processor will mix/down-mix according to your system, up to 64 speakers in your Atmos installation, or to 7.1, 5.1 or stereo if you don't have an Atmos system. Despite this feature, most theatrical films have separate mixes (say an Atmos mix and a separate 7.1 mix) rather than relying on the down-mixing algorithm.

    G
     
  7. jgazal
    I am sad that I won’t easily find modern stereo content with coherent ITD/ILD, more suitable for crosstalk avoidance/cancellation technologies.

    So I have been searching for compatible recordings, and I have found very few done by recording/mixing engineers with a different school of thought, but even those are coincident-microphone recordings without ITD.

    One example would be the Cowboy Junkies album “The Trinity Session”, recorded by Peter Moore. See https://www.soundonsound.com/people/cowboy-junkies-sweet-jane.

    A Calrec Soundfield microphone was used. But the vocalist sang every song (except one) through an amplifier and a Klipsch Heresy loudspeaker! Only one song was recorded a cappella, with her singing directly into the Soundfield microphone:

    Interestingly enough, the recent film “Trinity Revisited” was recorded with spot microphones and mixed in Dolby 5.1:

    @bigshot, I know you like classical, but just in case have you compared those two recordings? I would love to hear your impressions.

    I have also found the “one mic” recordings made by John Cuniberti (http://www.johncuniberti.com/onemic/), with an AEA R88 (stereo ribbon microphone):

    So those with Comhear Yarra, Smyth Research Realiser (if you are fond of crossfeed-free emulations...) and Theoretica BACCH products will need to start a campaign similar to the motto “bring back dynamic range”, which would be: bring back coherent ITD/ILD! :grin:

    Stereo mastered for immersive audio is also another good motto! :joy:

    Cheers!
     
    Last edited: Jan 4, 2018
  8. bigshot
    Gregorio, I'm curious what sort of compromises you need to make to have 5.1 fold down well to matrixed 2-channel Dolby... I've been watching Blu-rays of TV series lately, and I've discovered that a lot of them aren't just 2-channel like it says on the box, but actually matrixed 5.1 mixes, either in Dolby or DTS. However, I've noticed that these multichannel mixes are very basic: dialogue in the center and occasional ambience or sound effects in the rear. There is never an attempt to pan dialogue across the screen when a character crosses, or to push a sound object out into the center of the room. No sense of immersive ambience either. Is this a limitation of the matrixing, or is it just a bare-bones mix? Can matrixed Dolby or DTS handle subtle, gradated handoffs from front to rear or from mains to center? I'm not hearing that in any of these mixes.
     
  9. jgazal
    So why the need for panning in stereo at all?

    Why not just use mono?

    Why not deliver everything in the center channel with a low-directivity (omnidirectional) loudspeaker, so all seats get all the good sound?

    Actually those new technologies also allow the engineer to be incoherent with the listener’s spatial expectation.

    I agree that, like dissonant intervals in music, spatial incoherence may help to alter the mood of your listener.

    Going from flat to elevated sources in the frontal soundstage and being able to detach from the region between the frontal speakers gives you more creative freedom.

    It is all in your hands.
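For context on what stereo panning actually does: a standard constant-power pan law only changes the relative level between the speakers (an ILD-like cue) and encodes no ITD at all. A minimal sketch; the function and its position mapping are illustrative, not any particular console's law:

```python
import math

def constant_power_pan(position):
    """Constant-power pan law. position in [-1, 1]
    (-1 = hard left, +1 = hard right). Returns (left_gain, right_gain)
    with L^2 + R^2 = 1, so the centre sits at -3 dB per side."""
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Both speakers emit the same waveform at different levels -- there is
# no interaural time difference encoded anywhere in the pan.
```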
     
    Last edited: Jan 5, 2018
  10. bigshot
    There are different ways to mix movies. Some put all the dialogue in the center channel, because in a traditional movie theater, that is directly behind the screen. It focuses the voices on the screen. The mains are mostly music and effects in this kind of approach. Another way of handling it is to consider the center channel as a replacement for the phantom center between the mains and all three channels are considered equal and related. I see more matrixed mixes using the former rather than the latter, so I'm curious if that is because of the limitations of the matrixing or if it is just a creative choice.
     
  11. jgazal
    Thank you @bigshot.

    I saw the video from Dr. Toole that was linked in the thread about acoustic panels, and he mentions how important the center channel is in movies, but he also says that it is rare to have stereophonic effects between the left and right front channels, which is understandable since the sweet spot is unique and very narrow.

    So this brings me back to the fact that those new technologies allow multiple sweet spots.

    The Realiser allows two users simultaneously and lets you measure a PRIR with the center channel exactly where your TV or screen is supposed to be. Once the measurement is done, you return the center speaker to its compromised spot and emulate the central speaker coincident with the visual cues, just like that, “out of the faa”.

    If you use binaural through loudspeakers with XTC, via a beamforming phased array of transducers, or just the latter, you have multiple sweet spots and you don’t need a central channel at all.
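For readers wondering what the crosstalk cancellation itself involves: at its simplest, for a symmetric speaker/listener geometry, the XTC filters are the inverse of the 2x2 acoustic transfer matrix at each frequency. A toy single-bin sketch, purely illustrative; practical systems such as BACCH regularise this inversion and operate on full HRTF responses:

```python
def xtc_filters(h_ipsi, h_contra):
    """Crosstalk-cancellation filter matrix for one frequency bin,
    assuming a left/right symmetric speaker-listener geometry.
    h_ipsi: speaker-to-same-side-ear response; h_contra:
    speaker-to-opposite-ear response (complex phasors).
    Returns C = H^-1 so the ear signals equal the binaural feed."""
    det = h_ipsi * h_ipsi - h_contra * h_contra
    return [[h_ipsi / det, -h_contra / det],
            [-h_contra / det, h_ipsi / det]]
```

Note that inverting H exactly like this blows up at frequencies where the determinant approaches zero, which is why real XTC designs trade some cancellation depth for robustness.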

    I wish you could tell me those technologies could work in favor of music mixes and not only for movies and VR. I just wish stereo mixes were more suitable for them, even if it means releasing a new, specific mastering.
     
    Last edited: Jan 5, 2018
  12. bigshot
    There are films that use the center channel as a bridge for the two mains. A lot of "roadshow" movies from the 50s and 60s had multitrack sound on first release and that is how they mixed. For instance, the opening of Billy Rose's Jumbo was shot widescreen and they had audio channels behind the screen. In an opening scene a character crosses the screen from right to left and the voice is focused right on his position on the screen as he crosses. Most modern films have a combination of dialogue center and mix center. They lean towards the dialogue, but include a bit of music so it doesn't drop out in the middle. My projection screen is pretty much acoustically transparent, so I have my center channel right in the center behind the screen.

    It seems like it wouldn't be very practical to try to co-ordinate the pattern of recording with the pattern of the playback. It would require more calibration and stricter speaker placement, and it's hard enough to get people to do that right as it is.
     
    jgazal likes this.
  13. jgazal
  14. gregorio
    That's actually two different questions. The auto down-mix of 5.1 to stereo is NOT matrixed; what you end up with is standard stereo (LoRo). Matrixed is different; what you end up with is Lt/Rt. With matrixed (LCRS for example), the centre channel is down-mixed to the phantom centre and the surround channel is also down-mixed to the phantom centre but 90deg out of phase, resulting in a LR stereo mix (LtRt). Using phase recognition circuitry, the phantom centre can be extracted from the LtRt and the surround channel separated from the centre channel, thereby allowing the LCRS to be reconstructed from the LtRt.

    I worked quite extensively with LCRS matrixed mixes for a number of years in the late '90s and, to be honest, it was a PITA. It suffers from an effect named "snapping": a tendency for sounds to snap to a particular speaker, caused by the absolute nature of the phase detection. For example, a sound panned between the centre and left channel is likely to snap either entirely to the left speaker or entirely to the centre speaker when decoded, depending on which side of the phase threshold detector it falls. Additionally, with stereo material or anything else likely to have some phase incoherency, it might be difficult to pan where you want, as the phase inconsistency might trigger a threshold and pan a surround-positioned sound to a different speaker, or vice versa. It was always absolutely essential when mixing in Pro Logic (LCRS or a variant) to have a monitor chain which included encoding (to LtRt) and decoding, so you knew where elements of the mix would end up and could adjust the phase and/or panning when something didn't go where you expected.

    You don't have that problem with standard down-mixing to LoRo: no phase is added and you can't reconstruct the LoRo back into the original multi-channel surround. Potential problems with a LoRo down-mix depend on how you mix the 5.1.
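As a rough illustration of the matrixing described above: at a single frequency bin, the ±90° phase shifts on the surround channel can be represented as multiplication by ±j. In a real encoder these are wideband all-pass/Hilbert-style filters; the function below is a sketch of the principle, not Dolby's actual implementation:

```python
SQRT_HALF = 0.7071067811865476  # -3 dB as an amplitude gain

def ltrt_encode(left, centre, right, surround):
    """Matrix-encode LCRS into Lt/Rt at a single frequency bin, with
    signals as complex phasors. Centre folds into both channels at
    -3 dB in phase; surround folds in at -3 dB with +90/-90 degree
    shifts (multiplication by +/-1j). A decoder then separates centre
    (Lt and Rt in phase) from surround (Lt and Rt 180 degrees apart)."""
    lt = left + SQRT_HALF * centre + SQRT_HALF * (1j * surround)
    rt = right + SQRT_HALF * centre + SQRT_HALF * (-1j * surround)
    return lt, rt
```

The "snapping" described above comes from the decoder having to make a hard in-phase vs. out-of-phase decision on this encoded signal.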
The Left channel of the stereo down-mix (Lo) contains the left channel of the 5.1 mix + Ls at -3dB + the Centre at -3dB; the right channel (Ro) contains the right channel of the 5.1 + Rs at -3dB + C at -3dB; the LFE channel is ignored. Obviously there's the possibility of overload in the LoRo, there's also the danger of losing something if it's primarily placed in the LFE, and there are various other potential dangers, such as a large 5.1 reverb which can sound just right in 5.1 but excessive in the down-mixed stereo.
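The down-mix described above is simple enough to sketch. Here a Python function applies those coefficients to one sample frame; the -3 dB gains and the discarded LFE follow the description above, while real down-mixers work on whole buffers and the exact gains can vary by deliverable:

```python
def loro_downmix(l, r, c, lfe, ls, rs):
    """LoRo stereo down-mix of one 5.1 sample frame, per the
    coefficients described above: centre and surrounds folded in
    at -3 dB, LFE discarded."""
    g = 10.0 ** (-3.0 / 20.0)  # -3 dB as an amplitude gain (~0.708)
    lo = l + g * c + g * ls
    ro = r + g * c + g * rs
    return lo, ro
```

Note how easily the sum `l + g*c + g*ls` can exceed full scale, which is the overload risk mentioned above.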

    It's not really a case of two different approaches but of the practicalities of what you're mixing for. In a traditional movie theatre you have the issue that the front left and front right channels are partially fed into the foremost surround diffuser speakers, resulting in a sound panned to say the hard left appearing to come from well beyond the far left of the screen. Generally of course the dialogue is coming from characters on screen and therefore it's rare to hard left or right pan dialogue and most of what is occurring off-screen is background SFX. Because of the large physical distance between the front left and right speakers, it's often not desirable to hard pan stereo effects or stereo music mixes, so they are often panned slightly inside hard left/right and maybe also fed into the centre speaker, depending on the music mix and what's going on in the rest of the sound mix. Additionally, there's the problem in large cinemas where the physical distance between the left and centre speaker (and obviously centre and right speakers) is very large, so we can have effectively the same stereo image problem we would get without a centre channel and the obvious solution is the same, to fill-in the stereo phantom centre with a physical centre speaker. This is why SDDS was invented, a 7.1 format with two surround channels, an LFE channel and 5 front channels: Left, Left Centre, Centre, Right Centre and Right. This is only applicable to large cinemas though, not to consumer environments.

    1. No, it's nothing like dissonant intervals in music. Dissonant intervals in music are used specifically because they are perceived to be dissonant, they create the perception of unpleasantness/tension and an expectation that dissonance will be resolved (to consonance). In fact, western music composition is largely based on this tension and expectation of a resolution and therefore the entire history of western classical music can be analysed in purely these terms. This is nothing like the use of spatial incoherence, we don't use it because it's dissonant/incoherent, whether it's coherent or not is irrelevant and even audiophiles are usually unaware that what they're listening to is incoherent, so there's no expectation of a resolution.

    2. Or to put it another way: Bring back the music recording, mixing, performance and genres of the 1950's (and do away with pretty much everything since then)!

    G
     
    Last edited: Jan 5, 2018
    jgazal likes this.
  15. jgazal
    @Erik Garci, thank you very much for that!

    It is not yet capable of third-order Ambisonics, but a dedicated low-power-consumption chip that skips time-consuming field programming of DSP chips is certainly a step in the right direction for price affordability and consequent mass consumption.
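As background on what first-order Ambisonics actually carries: a mono source is encoded into four signals (W, Y, Z, X in the AmbiX convention that Google adopted for VR audio). A minimal per-sample encoder sketch; the function name is mine:

```python
import math

def foa_encode(sample, azimuth_deg, elevation_deg=0.0):
    """Encode one mono sample into first-order Ambisonics B-format,
    AmbiX convention (ACN ordering W, Y, Z, X; SN3D normalisation)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample                                # omnidirectional
    y = sample * math.sin(az) * math.cos(el)  # left-right axis
    z = sample * math.sin(el)                 # up-down axis
    x = sample * math.cos(az) * math.cos(el)  # front-back axis
    return w, y, z, x
```

Third-order Ambisonics extends the same idea to 16 channels, which is why it demands substantially more DSP.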

    I am sure you are much better prepared and experienced than I am to ask them (Creative engineers at CES) the right questions, but just in case, please consider the following suggestions:
    1. Does it use the headphone camera for head tracking, or do the “FREE SX-FI Holography-Enabled Headphones” have tracking of their own?
    2. Are the “FREE SX-FI Holography-Enabled Headphones” connected via USB or Bluetooth? Do those interfaces allow real-time head tracking without the cellphone camera?
    3. Does it track vertically and horizontally?
    4. Can the user set an optional crossfeed-free playback function?
    5. Is it compatible with Atmos/DTS:X/Auro and the first-order Ambisonics adopted by Google?
    6. Are they going to sell their chip to mobile phone manufacturers?
    7. Can the listener use the mobile output to feed a beamforming phased array of transducers that avoids crosstalk?
    Regarding the fifth question, please note that, for instance, “one of the longest running social networks for virtual reality (VR) head-mounted displays (HMDs), vTime, has partnered with audio specialist DTS to bring enhanced audio to the service and improve users’ immersion” (probably licensed from Smyth Research?).

    You are the right man, at the right time, in the right place.

    If many of those questions receive positive answers: recording and mastering engineers, be prepared to have a massive base of mobile users ready for enhanced stereo environments!

    @gregorio

    What if you still mix the drum kit the way you like into two tracks, use the Bacch-3dm to render them as two virtual loudspeakers in the standard stereo triangle, and then mix only the objects with more directional frequencies and harmonics (voices, for instance) in separate bus lines?

    Please ask Professor Choueiri if you can skip his “reverb calculations based on user-controlled room geometry and a wide range of wall materials” for the former and only add your preferred reverberation for the latter.

    Please listen to the beginning of Michael Jackson’s “Thriller” and tell me: do you prefer him walking in the flat region from the left speaker to the right speaker, or walking from your left to your right, unhooked from the speakers, right next to you?

    Note that, AFAIK, if you want, you could still keep the music in the flat region from the left virtual speaker to the right virtual speaker.

    Please give it a chance and tell us if it works in favor of your artistic conception. I am not in favor of reducing your creative freedom, quite the contrary.
     
    Last edited: Jan 5, 2018