In-ear microphone cable measurements for planar magnetic headphones - FRs are indeed identical
Nov 2, 2023 at 7:29 AM Post #61 of 174
This is getting quite off-topic unless we are now talking about objective alternatives to DAC, amp, and cable magic for achieving soundstaging or imaging improvements. I don't know much about Apple's implementation of Spatial Audio, but if it is true that there is only one official Dolby Atmos mix for a given track, then I would consider it feasible that errors in the computer-vision-based HRTF calculation, as well as the inherent tonal differences between the headphone and speaker responses in the absence of in-ear calibration, could contribute to the perception of alterations to the mix. Mix panning alters channel balance within different frequency bands, if not across the entire spectrum, after all, doesn't it? Though at that point we are arguing semantics of intent. I would expect that Spatial Audio inherently has to virtualize the standardized angular positions of each channel, which may or may not match those of your own physical Dolby Atmos system due to errors in the former or the latter. The presence of reflections in a room, or their absence from the headphone experience, could also alter tonality and perhaps some aspects of imaging. But I certainly know nothing about how your listening room is currently set up and whether there could be sufficient comb filtering to cause vocals to be panned incorrectly.

Regardless, of course the head-tracking solution will be altering the frequencies instead of keeping them "unchanged in real world space", but don't forget that all this is doing is trying to mimic the same alterations and filtering your own ears apply when they are reoriented within "real world space". Maybe it is true that Apple's Spatial Audio implementation is inadequate, at least for some. Maybe the Smyth Realiser A16 will provide a better match in sound and panning for your own system. Otherwise, "alteration to sound different" certainly depends on what frequencies finally reach your eardrums within both systems. In my case below, I can demonstrate how, at least for certain directions, I can measurably calibrate the combination of transfer functions from SPARTA Binauraliser NF (PC loopback, monitored through the Reaper DAW) and my Meze Elite with hybrid pads to present a very similar frequency response at my ears to that from my outdoor Genelec measurements:

(2024-03-19: See "Calibration using threshold of hearing curves" in https://www.head-fi.org/threads/rec...-virtualization.890719/page-121#post-18027627 (post #1,812). Everything I have said about neutral speakers actually having a lot more ear gain than neutral headphones was wrong.)

[Image: 2023-10-27 - R 30 L - compensated.jpg]

Figure 1: Left mic-compensated in-ear measurement with body and head rotated 30 degrees to the right relative to the Genelec 8341A ("R 30 L"), positioned in my large backyard with ground-reflection absorption, 1.5 m away and 1.187 m above the ground
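As a rough sanity check on that geometry, here is a sketch with assumed values (mic at the same 1.187 m height as the speaker, speed of sound 343 m/s) showing when the ground bounce would arrive relative to the direct sound:

```python
# Sketch: how late does the ground reflection arrive? Assumes the mic
# sits at the same 1.187 m height as the speaker, 1.5 m away
# (values from the Figure 1 caption; mic height is my assumption).
import math

c = 343.0   # speed of sound in m/s (assumed ~20 C outdoor air)
d = 1.5     # direct speaker-to-ear distance in m
h = 1.187   # height of speaker and ear above the ground in m

direct_path = d
bounce_path = 2 * math.sqrt((d / 2) ** 2 + h ** 2)  # mirror-image source
extra_ms = (bounce_path - direct_path) / c * 1000

print(f"Ground bounce trails the direct sound by {extra_ms:.1f} ms")
# ~3.8 ms here, comfortably outside the 1.2 ms window used in Figure 3,
# even before the absorption panel does its job.
```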

[Image: 2023-11-02 - Meze Elite hybrid R 30 L EQ final.jpg]

Figure 2: Final EQed left in-ear response of the Meze Elite with hybrid pads, placed here for quick comparison

[Image: 2023-10-27 - R 30 L - 1.2 ms window.jpg]

Figure 3: Left uncompensated "R 30 L" measurement with a 1.2 ms impulse window, used as the target for the response from 1 kHz up
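For anyone curious what that windowing amounts to, here is a minimal sketch with a synthetic impulse response and an assumed 48 kHz sample rate (REW does the real version of this):

```python
# Sketch: truncate an impulse response to its first 1.2 ms so that
# later reflections don't color the magnitude response. The IR here
# is synthetic: a direct spike plus a fake ground bounce at ~3.8 ms.
import numpy as np

fs = 48000                            # assumed sample rate, Hz
ir = np.zeros(4800)
ir[100] = 1.0                         # direct sound
ir[100 + int(0.0038 * fs)] = 0.5      # simulated ground reflection

peak = int(np.argmax(np.abs(ir)))     # locate the direct arrival
n = int(0.0012 * fs)                  # 1.2 ms worth of samples
window = np.zeros_like(ir)
window[peak:peak + n] = np.hanning(2 * n)[n:]  # fade-out half of a Hann
windowed = ir * window                # the fake reflection is now gone

mag_db = 20 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-12)
# The trade-off: 1.2 ms of data resolves nothing much below
# 1 / 0.0012 s ~ 833 Hz, which is why the windowed measurement is
# only used as a target from 1 kHz up.
```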

[Image: 2023-11-02_06-36-07 - Meze Elite hybrid R 30 L transfer function.png]

Figure 4: Plotted in Excel. Grey trace: left in-ear measurement of the Meze Elite with hybrid pads, un-EQed. Orange trace: the same measurement after applying SPARTA Binauraliser NF's transfer function simulating turning the head 30 degrees to the right of the sound source. Blue trace: the difference between the two, i.e., the transfer function that was applied. It appears that ear-gain EQ is minimized for sound sources panned 30 degrees; the transfer function mainly applies the measured directional nulls and top-octave changes.

[Image: 2023-10-28_15-58-30 - EQing.png]

Figure 5: Orange trace: the compensation curve from subtracting the target "real world space" "R 30 L" measurement from the uncalibrated virtualized Meze Elite hybrid pads "R 30 L" measurement. In practice, I have found only small variations, within 1 dB, between different seatings of the in-ear microphones, with greater variation coming from the measurement setup and procedures themselves. Here, I first manually apply a low-shelf filter, let REW generate the rest of the filters, do some fine-tuning of my own, export the filters into Equalizer APO, then spend another 30 minutes to an hour fine-tuning against subsequent in-ear measurements. It is certainly not fun having these mics stuck in your ears for hours while being tethered to your seat, but the results were certainly worth it for some recordings.
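Conceptually, that compensation curve is nothing more than a dB subtraction on a shared frequency grid. A minimal sketch, with placeholder arrays standing in for my actual smoothed measurements and example clamp limits I made up for illustration:

```python
# Sketch: derive an EQ correction as target minus actual, in dB,
# on a shared log-frequency grid. The two responses are placeholders
# for the speaker ("R 30 L") and virtualized-headphone in-ear
# measurements.
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 240)  # 20 Hz - 20 kHz
target_db = np.zeros_like(freqs)   # stand-in: speaker in-ear response
actual_db = np.zeros_like(freqs)   # stand-in: headphone chain response

raw_correction = target_db - actual_db
# Cap the boost so the PEQ chain doesn't try to fill deep nulls with
# absurd gain (example limits, not my actual ones; headroom and
# seating variation make chasing nulls futile anyway).
correction_db = np.clip(raw_correction, -15.0, 6.0)
```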

[Image: 2023-11-02_06-32-46 - Equalizer APO left channel R 30 L EQ.png]

Figure 6: Left-ear final PEQ. Yes, it's a lot of filters, but it does the job. The only noticeable artifacts appeared when listening to isolated transients, and even then only through the SPARTA Binauraliser NF or AmbiRoomSim chains, as extra "zip" and decay, perhaps due to the convolution filters used.
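For reference, each of those bands is just a standard parametric (peaking) biquad; a minimal RBJ-cookbook sketch of the filter type Equalizer APO applies per band (the 3 kHz example values are made up):

```python
# Sketch: RBJ audio-EQ-cookbook peaking filter, the building block
# behind every parametric band in Figure 6.
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Return normalized (b, a) coefficients for one peaking band."""
    big_a = 10.0 ** (gain_db / 40.0)       # sqrt of the linear gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * big_a, -2.0 * math.cos(w0), 1.0 - alpha * big_a]
    a = [1.0 + alpha / big_a, -2.0 * math.cos(w0), 1.0 - alpha / big_a]
    return [x / a[0] for x in b], [x / a[0] for x in a]

# Hypothetical band: +4 dB at 3 kHz, Q = 2, 48 kHz sample rate.
b, a = peaking_biquad(48000, 3000.0, 4.0, 2.0)
# Run a couple dozen of these in series (e.g. scipy.signal.lfilter
# per band) and you have the whole EQ chain.
```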

[Image: 2023-10-29 - Audible transparency.jpg]

Figure 7: Fortunately, the boosting from this EQ only adds second-order harmonics (plotted at the harmonic frequency) in the ear-gain region and treble, likely still barely audible.

[Image: 2023-11-02 - Meze Elite hybrid R 30 L - orchestral distance - different day.jpg]

Figure 8: The Meze Elite hybrid pads "R 30 L" measurement taken on a different day with a completely redone in-ear microphone and headphone seating. Not too far off. Understandably, a main limitation is variation in the treble peaks and in the filling of the 9 kHz null between headphone seatings, but otherwise I have found that this barely detracts from directional virtualization.

[Image: 2023-11-02 - Meze Elite hybrid R 90 L.jpg]

Figure 9: My grail headphone for binaural head-tracking would have no nulls, but as is evident from this measurement, which applies the virtualized direct 90-degree-incidence sound source to my left ear, the 6 kHz peak and the 8.7 kHz and 14 kHz nulls I tend to measure through my headphones are accentuated, suggesting that they are actually part of my 90-degree free-field response.
If this demonstrates anything, it is that it can take a lot of work to match headphones to speakers' frequency response, let alone to that of the speakers in your actual listening room. After that, we are arguing semantics regarding how Apple Spatial Audio and Dolby Atmos are "not the same". I suppose we agree that a completely different Dolby Atmos system with completely different speakers, room acoustics, and hence frequency response is still playing "Dolby Atmos"; hence the question is how well a given binaural head-tracking system emulates Dolby Atmos with "different speakers" and perhaps "no room acoustics" (anechoic).

Anyway, one basic test would be to find a binaural panning tool for both the Dolby Atmos and Apple Spatial Audio systems and compare the perceived panning of individual sound sources. As for my in-ear measurements, aside from this HRTF measurement method only accounting for an upright torso rotating relative to a horizontally positioned sound source (head-tracking doesn't currently account for relative torso positioning anyway), I can say that dragging a sound source around in SPARTA Binauraliser NF quite convincingly positions it where indicated throughout the entire sphere, and quite responsively so. There are still some slight tonal differences, but I attribute these to my Genelecs being positioned and GLM-calibrated (within limitations) within the right half of a wide and untreated living room. For me, an absence of extraneous reflections is the key to absolute clarity and transparency into a recording, though yes, certain distance cues may be lost; then again, my living room isn't exactly large enough to fit the physical stereo width of an entire orchestra.
 
Nov 2, 2023 at 7:33 AM Post #62 of 174
Someone must have both a 5.1 speaker system and AirPods Max. If you can hear it, you don’t have to talk about it in theory. Apple doesn’t document what Spatial Audio does. You have to listen to it and figure it out. It definitely isn’t just a simple stereo fold down of the Atmos track. It’s doing processing somewhere, whether on the server side, in the iPhone, or in the AirPods. Perhaps all three.

I’m betting that when Atmos tracks are mixed, you can’t export them and listen to them in Spatial Audio on AirPods. You can only monitor the Atmos mix on them as a two channel stereo fold down. That’s because there is more to Spatial Audio than just Atmos.

My first question is, where does the fold down occur? At the Apple server or on the iPhone?

My second question is, is the signal processing being applied at the Apple server and/or on the iPhone? The HRTF based on the photos of the ear is certainly applied at the iPhone. But Spatial Audio works without any HRTF calibration, so that is more fine tuning than it is heavy lifting.

My third question is, what kind of signal processing is going on in the AirPods themselves?

The processing power of AirPods is pretty limited, and the DAC in them can only decode a few formats. It isn’t a full function DAC. My guess is that the gyro in the AirPods is being used to accomplish the panning of head tracking and the DAC decodes the two channel AAC file for playback, but that is about it. Any spatial cues separate from head tracking are added upstream from there.

I think the Atmos track is being folded down either at the server or on the iPhone, and there is some encoding for Spatial Audio direction and depth cues at one or both of those places too. It would be more efficient to do that at the server and simply put a sniffer on the download to detect that it is being played back with AirPods and not an AppleTV with speakers. That way you're streaming a two channel AAC file and you aren't depleting the phone's battery with a lot of complex audio calisthenics. Then the phone would fine tune the track according to the ear photos. To be honest, that really doesn't make much difference; it sounds pretty much the same with it as without it. Definitely not a night and day thing. Then the only decoding and processing in the AirPods themselves would be the head tracking. But that is just a guess. Apple doesn't talk about how Spatial Audio actually works. We have to deduce that by listening and comparing. There's more than one layer of signal processing going on for sure.
 
Nov 2, 2023 at 8:16 AM Post #63 of 174
This is getting quite off-topic unless we are now talking about objective alternatives to DAC, amp, and cable magic for achieving soundstaging or imaging improvements.
Yep, it’s quite off topic now. 😁
I don't know much about Apple's implementation of Spatial Audio, but if it is true that there is only one official Dolby Atmos mix for a given track …
Dolby Atmos is quite sophisticated and allows for various use cases. With consumer music distribution, it does technically allow for two mixes: a virtual mix comprised of beds and objects, and a stereo binaural mix with control data, although that binaural mix just uses Dolby's generic HRTF. Unfortunately, bigshot has no idea how Dolby Atmos works and has simply made up his own incorrect explanation, which he pushes at every opportunity despite the fact it's been explained to him, along with supporting documentation from Dolby. He's convinced Apple has several different mixes and supplies the appropriate one depending on what device (speakers/headphones) you're using. In fact, Apple Music does not support Dolby's binaural mix channels or metadata (even on those rare occasions when the Atmos file may contain one); Apple just applies its own HRTF, calculated from user ear photos, to the only Dolby Atmos mix it has available. There is not a different/separate mix as bigshot falsely claims.

G
 
Nov 2, 2023 at 8:54 AM Post #64 of 174
My first question is, where does the fold down occur?
Your first question and your second, as well as your BS assertions, are off topic. And what's the point of asking such questions when you've already invented false answers and steadfastly refuse to even acknowledge, let alone comprehend, the actual facts?
I’m betting that when Atmos tracks are mixed, you can’t export them and listen to them in Spatial Audio on AirPods. You can only monitor the Atmos mix on them as a two channel stereo fold down. That’s because there is more to Spatial Audio than just Atmos.
Not that it will make any difference to you, as you consistently argue for what you've just made up ("I'm betting") regardless of the actual proven/demonstrated facts, but for anyone else interested: bigshot has no idea what he's talking about; the assertion I've quoted is false. This Apple Support Page proves it's false, as it lists the options available (in Logic Pro) for monitoring an Atmos mix binaurally using either the Dolby Renderer or Apple's Spatial Audio Renderer.

G
 
Nov 2, 2023 at 8:56 AM Post #65 of 174
Please get help.
 
Nov 2, 2023 at 9:10 AM Post #66 of 174
I suppose bigshot there is simply validly speculating on where the supposed "error" in the Apple Spatial Audio rendering chain may be located. I've already stated where I think the error could be. Otherwise, I've already suggested a test for comparing perceived sound source locations in isolation.
 
Nov 2, 2023 at 9:26 AM Post #67 of 174
Please get help.
I did get help: I backed up my claim that your assertion is BS with the linked Apple support page that clearly proves it's BS. Where's your help? There obviously isn't any, because you made up that BS yourself, so the help you need is of the psychiatric kind. Please take your own advice before dishing it out to others!
Someone must have both a 5.1 speaker system and AirPods Max. If you can hear it, you don’t have to talk about it in theory.
I just noticed this beauty. Atmos does not directly support 5.1; the minimum for Atmos is 7.1.2, so you yourself can't hear it! So not only are YOU talking about it only in theory, but it's a nonsense/false theory you've invented that contradicts the actual facts as published by Apple and Dolby themselves! So again, gross hypocrisy on your part, and even more so because I've already told you I've completed two professional mixes in Atmos, so I'm not talking only in theory!
I suppose bigshot there is simply validly speculating on where the supposed "error" in the Apple Spatial Audio rendering chain may be located.
Unfortunately, bigshot has decided to take this thread off-topic and continue with his little BS crusade he started in another thread, which was comprehensively refuted with info published by both Apple and Dolby.
I would expect that Spatial Audio does inherently have to virtualize the standardized angular positions of each channel, which may or not match that of your own physical Dolby Atmos system due to errors in the former or the latter.
He doesn’t even have his “own physical Dolby Atmos system”, he’s previously admitted to only having a 5.1 system. Lol.

There's no reason I can see to dispute your observations. There's still quite some way to go, both with Spatial Audio and Dolby's own Atmos binaural renderer.

G
 
Nov 2, 2023 at 12:49 PM Post #68 of 174
Please get help.
I can't imagine your gaslighting complies with the rules. It certainly is an ad hominem attack. Please stop gaslighting!!

But I haven't posted here in a very long time... perhaps one of the "regulars" (very, very regular!) might have more pertinent advice for you:
You really shouldn't focus so much on trying to "correct" Gregorio. Channel that energy into understanding him, and if you can't do that, skip over to stuff you can interface with. There is a hell of a lot of foolishness around here at times, and some of us find it refreshing when fools aren't suffered gladly.
That was quick and dirty... Perhaps you prefer a more in-depth response:
Gregorio is a very knowledgeable and experienced person. He is doing us a favor by participating here. If you give him respect, you'll get it back. He isn't being pompous or obnoxious. He actually *does* know things we aren't aware of. If you want to learn those things, you have to maintain respect yourself. People argue with Gregorio when they flat out don't know what they are talking about or who they are trying to argue with. They should lurk a bit more and get the lay of the land before they try to "assault the Matterhorn" so to speak.

I've been around Sound Science for a long time. The same stuff happens over and over. People come in and say something that is totally wrong. One of the regulars points out their errors and lets them know why. The person doesn't understand what he is being told, and doesn't want to know, so he starts arguing. It escalates. Reams of information are shared, but it falls on deaf ears because these argumentative people just want self validation, not the facts. They get madder and madder and eventually resort to ad hominem attacks. Eventually either they disappear or they are banned. Someone else comes in with an incorrect statement and starts the cycle all over again. It's a revolving door and we deal with it here all the time while we struggle to fit our own conversations in the cracks between all of the BS.

It isn't Gregorio's fault that people don't recognize that he knows what he's talking about. And it isn't Gregorio's fault that they refuse to listen. Patience is great, but it is a limited resource when it's assaulted over and over on a daily basis.

People should listen and lurk more.

Just so you know, Gregorio is a professional sound engineer and educator on this subject. You could learn a lot from him. He isn't trying to shove opinions down your throat. He is giving you a clue. If you have a question, ask it and he will answer it completely. If you don't care to hear what he has to say because you want to go on believing whatever you already believe, I would recommend not engaging with him at all.

This is Sound Science. It's different than the rest of Head-Fi and we have different rules. We can challenge your opinions and ask for the proof that you used to arrive at them. If you aren't prepared to support your arguments, it's best not to push it here. The rest of Head Fi will gladly accept unsupported claims. This isn't the place for that.
 
Nov 2, 2023 at 1:09 PM Post #69 of 174
Nov 2, 2023 at 2:14 PM Post #70 of 174
Normal people don't engage with other people like that. I believe he is unaware of how he presents himself. It isn't my job to fix him. I normally ignore him, but when he keeps dogging my posts and trying to shut down my conversation, I am going to tersely say what I honestly believe. There is something wrong with him that is going unaddressed. I won't do it in paragraphs any more. I'll do it in a sentence or two and move on.

Moving on...

I would like to speak with someone who has access to AirPods Max, the Apple Music Store, and either an AppleTV hooked up to a multichannel speaker system or the Blu-ray audio copy of one of the mixes on the Apple Music Store. I'd like to see if someone knowledgeable can listen, compare, and tell me what is causing what I hear.

MrHaelscheir, I don't think that the effect on the vocals is an error. I think it is re-conforming the placement of the frequencies that are most affected by head movement to allow the head tracking to work dimensionally, rather than just as a flat, even overall pan from left to right. When we turn our heads, the upper mids turn with them, while lower frequencies remain in place, filling the room. Higher frequencies are easier to sound-locate than lower ones. That is what this sounds like.

In addition, Spatial Audio adds a timing echo element to this "head turn" band of frequencies, indicating a distance in front of the listener that isn't in the original mix. When you switch from a normal stereo fold down to Spatial Audio, the vocal frequency band sounds more distant, as if it's 15 or so feet in front of you. I think this is done to anchor the soundstage at a distance in front of you. Again, this distance cue does not exist in a normal fold down to two channels.

These seem to be intentional sound processing techniques applied to the Atmos track after the fold down to add distance and directional information to the sound that does not exist in the original Atmos mix. This is Spatial Audio processing and it is totally separate from Atmos.

 
Nov 2, 2023 at 3:22 PM Post #71 of 174
@bigshot Your description of the higher vocal frequency bands seemingly being "intentionally" panned while the lower frequencies stay in place is to me just a folk description of cycling and interpolating through the direction nodes of an HRTF file, or of what the ear does on its own as sounds come in from different directions. I also wasn't sure whether your perception of differing vocal placement concerned its panning relative to the recording's "center" as opposed to differences in its perceived distance. I don't know if Apple Spatial Audio simulates room reflections. If by "timing echo element" you rather mean interaural time differences (ITD), those are an essential part of the HRTF and of course exist in "real world space", but they depend on the distance or radius of the sound sources from the listener (see the quick sketch at the end of this post for ballpark magnitudes), so maybe Apple Spatial Audio is simulating the ITD for a bigger room, which to me isn't semantically the same as altering the mix any more than playing the same mix in different listening rooms and systems would be. On the other hand, the Apple AirPods Max themselves, when you look up their frequency response, have a relaxed ear-gain region, so if Apple Spatial Audio were not EQing that up toward diffuse-field or similar, then the vocals could be expected to sound more distant than on your speaker system. In that case, the "error" is in the playback transducer's frequency response and the lack of compensation to match it to a speaker's response.

[Image: 1698952777281.png, AirPods Max frequency response]


Were there other differences you were hearing besides expected tonal ones and perhaps the subjective effects of a lack of room reflections? Is what we are trying to do here merely to decide whether Apple Spatial Audio is doing a good job? From now on, I'd like us to separate assessments of the Spatial Audio rendering from assessments of the transducer.
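Since ITDs came up above: for a ballpark of the numbers involved, here is a sketch using Woodworth's classic spherical-head, far-field approximation, with an assumed average head radius. Real HRTF sets capture this per person, and near-field ITDs additionally grow slightly as a source approaches the head, which a far-field model misses; presumably that is part of why Binauraliser NF has a near-field variant.

```python
# Sketch: Woodworth's far-field ITD approximation for a rigid
# spherical head. Head radius is an assumed 8.75 cm average.
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference in seconds for a far-field source
    at the given azimuth (valid for 0-90 degrees)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg azimuth: {itd_woodworth(az) * 1e6:5.0f} us")
# 0 -> 0 us, 30 -> ~261 us, 60 -> ~488 us, 90 -> ~656 us
```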
 
Nov 2, 2023 at 4:24 PM Post #72 of 174
Vocals that were not perfectly centered in the multichannel mix (in other words, vocals that are shared across multiple channels) seem to be refocused mostly to the center, and a depth cue is added to them. It's as if a certain frequency band is being assigned to the center channel, even if it doesn't exist on the center channel in the multichannel mix. Also, there is a distance cue (i.e., room reflection) applied to the center that isn't applied to any other direction. No other direction has that kind of "room reflection" sound.

Does this explain it better? Basically Spatial Audio sounds "spatial", but it's not a simulation of a room per se. It's simply separating things and applying direction cues to place them in a sort of stock arbitrary space. The rear channels for me appear down at my waist, not behind me, and I hear more to my right than to my left. That seems to definitely be HRTF error. In front, left and right is enveloping and mostly in and immediately around my head, except for the upper mids which are placed at a distance and respond to head tracking as I described above. Nothing else really responds to head tracking.

Just about every track sounds like the same layout, with directions all in the same spot every time, even when the mix has objects spanning multiple channels and placed in the center part of the room. There is no gradation to the placement between channels, or bleed between them. Every channel sounds completely separate. In my speaker installation, it meshes into a sound field where objects can be placed within the room. With this, the sound comes only from the four corners and center, with distance only on the center channel.

I don't know if you are familiar with Boom 3D, but I've heard much more convincing sound from the rear using that with Netflix than with Apple Music's Atmos tracks. That might be because movies have built in secondary depth cues and room reflections, while music generally doesn't. It's still hit or miss though. A sound can be well placed behind me and then suddenly snap to being down at my waist to the right instead.

I'm trying to describe what I hear specifically, let me know if the way I'm describing something is vague and I'll try to focus in on it.
 
Nov 2, 2023 at 7:37 PM Post #73 of 174
I don't know if you are familiar with Boom 3D, but I've heard much more convincing sound from the rear using that with Netflix than with Apple Music's Atmos tracks. That might be because movies have built in secondary depth cues and room reflections, while music generally doesn't. It's still hit or miss though. A sound can be well placed behind me and then suddenly snap to being down at my waist to the right instead.
It's more likely that you were listening to music/movies streamed in stereo instead of surround. Boom 3D advertises itself as applying a surround effect to any content. As I have linked previously, Apple Spatial Audio requires a surround sound track (preferably Atmos). Dolby Atmos for music is mixed the same as movies when it comes to positional objects. If you're subscribed to Apple Music, one Dolby Atmos album I've found that does have clear instruments in the rear is Kacey Musgraves' Golden Hour.

Not sure why you're still insisting someone should be able to hear identical sound with AirPods and an Atmos speaker layout. As indicated, Spatial Audio is processed after the Atmos track is decoded: it's Apple's own method for rendering a binaural stereo signal (creating a virtual 3D image and refreshing it from head tracking cues). The handling of the Atmos track and the Spatial Audio processing is not happening on a server: it's happening on the iOS device (the headphones are sending head tracking info and getting back a binaural audio signal). An iOS device that's certified for Dolby Atmos can handle the processing. If the Spatial Audio doesn't sound convincing (with actual surround sources), then it could also be issues with setup (getting properly aligned images can be finicky), or their HRTF algorithms aren't a match for your ears.
 
Nov 2, 2023 at 7:59 PM Post #74 of 174
I just noticed this beauty. Atmos does not directly support 5.1, the minimum for Atmos is 7.1.2, so you Yourself can’t hear it!

G
Just one minor correction: the minimum Atmos speaker config can be 5.1.2. For height speakers, you can either have speakers reflecting off your ceiling (less direct and convincing) or direct speakers (usually in-ceiling, though some are angled at the top of the wall).
 
Nov 3, 2023 at 12:16 AM Post #75 of 174
Vocals that were not perfectly centered in the multichannel mix (in other words, vocals that are shared across multiple channels) seem to be refocused mostly to the center, and a depth cue is added to them. It's as if a certain frequency band is being assigned to the center channel, even if it doesn't exist on the center channel in the multichannel mix. Also, there is a distance cue (i.e., room reflection) applied to the center that isn't applied to any other direction. No other direction has that kind of "room reflection" sound.

Does this explain it better? Basically Spatial Audio sounds "spatial", but it's not a simulation of a room per se. It's simply separating things and applying direction cues to place them in a sort of stock arbitrary space. The rear channels for me appear down at my waist, not behind me, and I hear more to my right than to my left. That seems to definitely be HRTF error. In front, left and right is enveloping and mostly in and immediately around my head, except for the upper mids which are placed at a distance and respond to head tracking as I described above. Nothing else really responds to head tracking.

Just about every track sounds like the same layout, with directions all in the same spot every time, even when the mix has objects spanning multiple channels and placed in the center part of the room. There is no gradation to the placement between channels, or bleed between them. Every channel sounds completely separate. In my speaker installation, it meshes into a sound field where objects can be placed within the room. With this, the sound comes only from the four corners and center, with distance only on the center channel.

I don't know if you are familiar with Boom 3D, but I've heard much more convincing sound from the rear using that with Netflix than with Apple Music's Atmos tracks. That might be because movies have built in secondary depth cues and room reflections, while music generally doesn't. It's still hit or miss though. A sound can be well placed behind me and then suddenly snap to being down at my waist to the right instead.

I'm trying to describe what I hear specifically, let me know if the way I'm describing something is vague and I'll try to focus in on it.
Do you mean some sound sources, at least within certain frequency bands, aren't following the head-tracking at all? The lack of imaging integration between channels (hearing only distinct sources) is interesting. I for sure get excellent stereo imaging through these SPARTA plug-ins; as far as I can tell, SPARTA's virtualized stereo width remains quite coherent and can be rotated to virtually anywhere around my head. With its rendering and my measured HRTF, I can convincingly hear things behind, above, or below me without any reflection cues (Binauraliser NF is anechoic, as is AmbiRoomSim with reflections disabled), albeit with my manually applying said panning within Reaper and those SPARTA plug-ins.
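For anyone wondering what such a binauraliser boils down to, here is a bare-bones sketch with placeholder HRIRs; the real plug-in interpolates a measured HRTF set (e.g. from a SOFA file) and re-selects filters as the head-tracker reports rotation:

```python
# Sketch: render N loudspeaker feeds to binaural stereo by convolving
# each channel with the HRIR pair for its virtual speaker position
# and summing at each ear.
import numpy as np
from scipy.signal import fftconvolve

def binauralise(channels, hrirs):
    """channels: list of mono float arrays.
    hrirs: matching list of (left_ear_ir, right_ear_ir) pairs."""
    n_out = max(len(ch) + max(len(hl), len(hr)) - 1
                for ch, (hl, hr) in zip(channels, hrirs))
    out = np.zeros((2, n_out))
    for ch, (h_l, h_r) in zip(channels, hrirs):
        out[0, :len(ch) + len(h_l) - 1] += fftconvolve(ch, h_l)
        out[1, :len(ch) + len(h_r) - 1] += fftconvolve(ch, h_r)
    return out  # (left, right) ear signals

# e.g. a 5.1 feed: six channels paired with HRIRs for the L/R/C/LFE/
# Ls/Rs virtual speaker angles (LFE is often just split equally
# between the ears).
```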

Later update:

I managed to set up my Reaper and SPARTA chain to take in 5.1 and 7.1 surround audio. VB-Cable supports up to 8 channels, but per https://forum.vb-audio.com/viewtopic.php?t=1754, the VB-Cable Output, being set up as a Windows recording device, only supports stereo. Fortunately, Voicemeeter Potato was rather easy to hook up to ReaRoute, from which I can then feed the channels to SPARTA Binauraliser NF. I was struggling to find effective (free) surround sound test files, YouTube 5.1 surround not being viable on my desktop PC, but fortunately came across https://surroundmusic.one/streaming/, which plays from Google Chrome without the need for any special settings, with https://www2.iis.fraunhofer.de/AAC/multichannel.html showing the correct channels lighting up in Reaper. The main tracks of interest are those by Mike Vieira. Both tracks feature multiple vocals playing from different channels simultaneously, albeit probably intentionally directly from just those channels. "Disturbing the Universe" at 6:41 features some guitars imaged between the rear channels behind my head; I am able to turn my head toward said sources to center them, and the whole soundfield stays put. "Break You" has some places within the first minute where an echoing voice is imaged behind the head between the rear speakers. For some other tracks, I can definitely hear things panning between the side channels and so on, and I'll probably soon find many more examples of things being properly imaged between channels. As I write, "1991" from Alternate Timelines sounds damned great, with pleasant directional bass impact and some moving sound sources, properly having sound coming from all around you. "Incitation" from Cinematic has some movement of sound sources behind the head. Likewise, some tonal discontents were resolved simply by turning up the volume.

The sound probably isn't always the most engaging given the lack of room reflections (though I value the clarity attained from that lack) and the sound not hitting your entire body (and my being more of a classical listener). Otherwise, though this doesn't say anything about how Apple Spatial Audio performs (to be honest, one might as well look at "reputable" reviews, if they exist), it shows that proper surround imaging is possible with the right measurements and software.
 
