An Eigenmike or a soundfield mic would not capture higher distortion, quite the opposite in fact: they would capture much higher fidelity than spot mics, and that's PRECISELY why they are useless!! Didn't you read my post, particularly 4b? Listen to the drum kit at 7:40 on the virtual tour video you posted; listen carefully to the instruments in the drum kit, the kick drum and the snare drum for example. Now listen to the album you mentioned, Thriller. Do the kick and snare in the video sound even remotely like the kick and snare on Thriller? Pick something else: Motorhead, Prince, Seal, Sade, The Prodigy, Eminem, Coldplay or in fact pretty much anyone from the last 40 or so years; do the drum kits sound remotely like the accurately captured drum kit in your video? I find it unbelievable that you can appreciate the relatively subtle, immersive qualities of 3-axis sound reproduction while being completely oblivious to the massive difference between how a drum kit actually sounds in real life and how it ends up sounding in commercial music.
How can a desire to reproduce something which never existed be legitimate? Your "legitimate desire" is a desire which necessitates effectively killing, or at least massively damaging, pretty much all modern popular music genres. To me, that's about as far from "legitimate" as I can imagine!
What has that got to do with anything? There is almost NO emulation going on here! You've posted a recording of a real drumkit and that's obviously NOT what we're emulating. Do you want to hear an emulation of a pathetic string "twang" captured with high fidelity 3-axis spatial information or do you want to hear that pathetic twang distorted completely beyond recognition, so it better matches what you think an electric guitar should sound like? A number of famous artists would struggle to sing "Twinkle, Twinkle Little Star" decently, why would we want to emulate that? Etc., etc.!
You "wouldn't mind" something which doesn't exist? The ability to synthesise spatial information even in just 5.1 is pretty basic and the technology for what you "wouldn't mind" doesn't yet exist.
Me too, but unfortunately that's the reality here. You continue to miss the point that bigshot and I are trying to explain to you: that it's ALL an illusion. You seem determined to interpret this as meaning that it's actually all real, except for the illusion of stereo; that the musicians are creating real performances on real instruments which we're accurately recording, and then creating a stereo illusion from those recordings. The reality is: the instruments in real life sound little or nothing like we want them to, there is no real performance, and therefore, how can we accurately record something which never existed? When we say it's ALL an illusion, we don't just mean an illusion of stereo, we mean the performance and the music itself is an illusion, and we CANNOT create that illusion if we attempt to record and "preserve" 3-axis spatial information!
I'm not sure how to break you out of the myth you appear trapped in. I'll try one more way; have you seen
this short video?
Click the link, watch it all the way through and then answer this question: How could we record/preserve and reproduce the 3-axis spatial information of the "Faa"??
The “Fa” does not come “out of the blue”.
Although information coming from the visual cortex can override information from the auditory cortex when everything gets processed, perhaps in Broca’s area, this cannot be a loose or arbitrary “illusion”, as if our brains were suffering from some “bizarre” disorder.
Speech is evolutionarily essential. If you are trying to work together with other humans, with only incipient language, you must get the information right. That’s why our vision overrides the auditory ambiguity in the particular example you mentioned.
When you see someone else’s lips pronouncing “Fa”, what is the chance he is actually trying to pronounce “Ba”? So that “illusion” is in fact highly correlated with reality.
I wouldn’t use the word “illusion” in the surrealist sense of “more perfect than reality”, nor extrapolate the specific McGurk effect to the way our brain resolves all possible ambiguities between vision and audition.
A precise perception of sound source location is also evolutionarily essential.
So in that court, when the jury heard two loudspeakers in front of them, their eyes may have been wide open, and nevertheless their brains probably processed Michael Jackson’s voice as if he were right in the middle between the speakers. It does not matter that your visual cortex delivers the contradictory information that you are in a courtroom with nobody standing in that virtual spot.
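As a side note, that phantom-centre image is exactly what simple amplitude panning produces. A minimal sketch (the pan law and names here are my own illustration, not anything from the court demo itself):

```python
import numpy as np

def constant_power_pan(mono, position):
    """Pan a mono signal between two loudspeakers.

    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    A constant-power (sin/cos) law keeps the perceived loudness steady
    as the source moves across the stereo field.
    """
    angle = (position + 1.0) * np.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return left, right

# A "centred" voice: the identical signal at ~0.707 gain in both speakers.
voice = np.ones(4)
left, right = constant_power_pan(voice, 0.0)
# Both channels carry the same signal, so the brain localises a phantom
# source midway between the speakers, whatever the eyes report.
```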
IMHO, you must know what ambiguity your brain is trying to resolve and which cue will prevail in each case.
Why did this experiment by the BBC engineering team, quantifying elevation errors with higher-order Ambisonics, use speech as a test signal?
So I also wouldn’t say the way our brain processes sound is uncorrelated with reality. IMHO it is actually highly correlated.
You must have heard reflections in large arenas. You know there is no sound source at the reflecting wall, but you still perceive the sound as coming from it.
You must have also watched Professor Choueiri’s videos above.
He also describes the evolutionary aspects of the way our brain resolves the head-movement ambiguity when playing back sounds over headphones. And here we have ambiguities between sound cues themselves.
In another instance, Professor Stephen Smyth also describes the ambiguity between a PRIR from a large room and the listener’s actual room size. It does not collapse the externalisation, because the sound cues are still altered dynamically with head tracking, but interestingly enough, some users have described a sensation that the speakers sound nearer than where they were actually measured. I have asked whether we could use a Gear 360 and a Gear VR to retrain our brains, but I have received no answer yet.
Some say that our hearing is more precise in the horizontal plane. When looking straight ahead, we may perceive the elevation of sound sources only loosely, but as soon as a sound catches our attention we tilt our heads; the transverse plane through the head is then no longer coincident with the horizontal plane, and we perceive that elevated sound source with more precision. The Realiser now allows elevation head tracking.
That said, I now want to describe two of my most esteemed musical memories.
The first was a rehearsal of my cousin’s band. He is a drummer, and the drum kit was not amplified since, obviously, it was loud enough on its own. I heard them playing Hotel California. Interestingly enough, the Eagles have one of the best-selling albums of all time. And you are right: I had never heard drums the way I heard them that day (but that might be just my feeling).
So even though it was a real drum kit, I felt emotionally connected with that bass line and that music. Perhaps as emotionally connected as I am when hearing “The Way You Make Me Feel”.
The second was a wedding at which there was a band with all instruments amplified. There was also a saxophonist with a tenor saxophone (and a wireless spot microphone) playing around the tables.
I had never heard a saxophone moving around me in recorded music until recently. I hope the Realiser A16 and the Chesky record above can emulate that in a similar way.
Nevertheless, I do understand and respect your work and particularly the creative value added by recording and mixing engineers. I am sure certain bass lines sound better after mixing than when they were recorded.
But I have been reading your post many times, and I still feel odd when I read the part about the vocals. It sounded as if the creative value added by recording/mixing engineers were somehow intrinsically and qualitatively better than the creative value of musicians and performing artists.
People used to say that Rod Stewart had the “wrong” type of voice, and nevertheless he is very successful, even when he sings on MTV Unplugged shows.
What would someone do who is gifted with musical sensibility but not proficient at performing with acoustic instruments or with his own voice?
Maybe electronic music with synthesizers?
Would such genre be compatible with 3-axis mixing?
Believe it or not, when you search for mixing Dolby Atmos for music, this is one video you will find:
You can find more on development of Atmos mixing with this particular genre here:
https://www.dolby.com/us/en/technologies/music/dolby-atmos.html#3
Well, I don’t feel such a system can convey proximity for so many people in such a large listening area (the same challenge as with movie theatres), but the concept of mixing synthetic sounds in 3-axis remains the same.
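To make concrete what mixing a synthetic sound "in 3-axis" means at its very simplest, here is a minimal sketch of first-order Ambisonic (traditional B-format, FuMa-style) encoding of a mono synth signal at a given azimuth and elevation. The function and signal names are mine, purely for illustration; real Atmos or higher-order workflows are far more elaborate:

```python
import numpy as np

def encode_b_format(mono, azimuth_deg, elevation_deg):
    """First-order Ambisonic (traditional B-format) encode of a mono signal.

    Returns the four channels W, X, Y, Z for a point source at the given
    azimuth (0 = front, 90 = left) and elevation (90 = straight above).
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)             # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)  # front-back axis
    y = mono * np.sin(az) * np.cos(el)  # left-right axis
    z = mono * np.sin(el)               # up-down: the third axis
    return w, x, y, z

# A synthetic source placed 45 degrees up and to the front-left:
signal = np.ones(4)
w, x, y, z = encode_b_format(signal, azimuth_deg=45.0, elevation_deg=45.0)
# The height information lives entirely in the Z channel, which is
# exactly what a purely horizontal (2-axis) mix throws away.
```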
So if I understood right, you are saying that (a) 3-axis mixing is a bad, or even prohibited, choice for any music genre or any type of musical event.
And if, again, I understood right, you may also be saying that (b) acoustic virtual reality is likewise a myth or a utopia, given the complexity involved in rendering 3D sound fields.
I naively thought that the creative value added by recording/mixing engineers and 3-axis mixing could be harmonized.
You are an experienced audio professional and I am, well, just a regular guy.
So I will trust, in good faith, that your assertions (a) and (b) always hold true, in any circumstances.
But tonight, when you lie down in your bed and put your head on your pillow, please pay attention to your feelings.
And since this is the science forum, please come back tomorrow, because I would like to know, respectfully, if you still feel okay advocating that everybody dismiss, a priori, music mixing in 3-axis, in any circumstances.
If you then still tell me I am utterly wrong and definitely driven by a myth, I will delete all my posts in this thread, out of respect for your work and knowledge, and because I don’t want people embarking on this supposed dead-end line of research driven by the same myth or utopia.
And since I mentioned “The Way You Make Me Feel”, I will confess that I felt deeply sad about your post. It is really shaking when someone puts your beliefs at stake, isn’t it?