AirPods Max
Jun 27, 2021 at 12:58 PM Post #4,456 of 5,654
That's right if the "unofficial" set of headphones supports Dolby Atmos; if not, the Automatic mode makes sense. For example, "Tom Sawyer" by Rush sounds very nice in Dolby Atmos on the APM. With the B&O H95 it sounds very bad (as if the membrane were damaged, at the 2:35 mark for example).
From what I understand, any headphone "supports" Atmos, just not head tracking. How a given song sounds in Atmos, or how well a given headphone reproduces that sound, is another matter entirely. Thanks for the H95 insight. I'm still struggling to understand why there needs to be an "Automatic" setting. If someone wants to listen to Atmos, why wouldn't they choose Always, so that Atmos works (to varying degrees) on every headphone the listener owns?

Perhaps head tracking on supported Apple/Beats products doesn't work when Always is selected? That's the only reason I can think of at this time. Apple makes a *point* of saying in their spatial audio demos that *any* headphones can be used to hear the difference. Two selections seem unnecessary; I see no compelling distinction.

Apple certainly is prone to head-scratching moves.
 
Jun 27, 2021 at 1:23 PM Post #4,457 of 5,654
Let's see if this clears it up for iOS 14.x only:
  • Dolby Atmos is a format and decoder built into the operating system. Its output goes to the system mixer, which is then encoded as AAC and sent to your Bluetooth headphones, including AirPods, Bose, Sony, etc. Example: Apple Music.
  • Spatial Audio is an output format that replaces the AAC output encoding with Apple special sauce that works only on Apple gear, and that allows head tracking and probably other psychoacoustic technology we know little about. Example: Bruce Springsteen's documentary on Apple TV.
Atmos Automatic in Music turns on Spatial Audio (i.e., the Apple codec/head tracking) when connected to an Apple headphone that supports Spatial Audio over Bluetooth AND the track is Atmos-encoded.

Atmos On in Music is the first bullet: if the track is Atmos, it is decoded into the mixer along with other system sounds, then sent over Bluetooth as AAC.
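
Here's a minimal sketch of that selection logic as I read it. The setting names match the Music app, but the function and flags are hypothetical, just to make the two paths explicit:

```python
# Hypothetical sketch of the Dolby Atmos setting logic described above.
# Setting names match the Music app; everything else is illustrative.

def playback_path(setting: str, track_is_atmos: bool, apple_spatial_headphone: bool) -> str:
    """Return which pipeline a track takes for a given Atmos setting."""
    if not track_is_atmos:
        return "stereo AAC"  # no Atmos master, nothing to decode
    if setting == "Automatic":
        if apple_spatial_headphone:
            return "Spatial Audio (Apple codec + head tracking)"
        return "stereo AAC"  # Automatic falls back to stereo on non-Apple gear
    if setting == "Always On":
        return "Atmos decode -> system mixer -> AAC over Bluetooth"
    return "stereo AAC"  # setting == "Off"

# Example: AirPods Max on Automatic playing an Atmos track
print(playback_path("Automatic", True, True))
```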

Does this make more sense? Does it align with what you see?
 
Jun 27, 2021 at 1:27 PM Post #4,458 of 5,654
Currently no Apple devices will reproduce lossless audio at full hi-res quality.

You have to use a third-party converter and non-Apple headphones to hear it.

The next batch of their headphones might be able to do it.

Get your wallets ready.
 
Jun 27, 2021 at 1:57 PM Post #4,459 of 5,654
Currently no Apple devices will reproduce lossless audio at full hi-res quality.

You have to use a third-party converter and non-Apple headphones to hear it.

The next batch of their headphones might be able to do it.

Get your wallets ready.
Jules, I do not believe this to be accurate. I believe the APM can reproduce full lossless (24/192) via wired output/input if the MIDI settings on iOS are set to 24/192 and you are playing a 24/192 lossless AM track. This is what I believe I'm experiencing.
 
Jun 27, 2021 at 2:11 PM Post #4,460 of 5,654
Am I understanding this correctly?

The only way to hear Dolby Atmos on an "unofficial" set of headphones is to use the Always selection.

If that’s correct:

Does head tracking work on "official" headphones when Always is selected? If the answer is yes, why would I ever choose Automatic? Choosing Always allows me to switch between headphone brands and get the Atmos experience regardless. (Without head tracking on the off-brand headphone, of course.)
Dolby Atmos is essentially a song mastered to a 5.1 system with an HRTF function applied to it. It will work on any pair of headphones, but without any head tracking. In iOS 15 head tracking is added to the mix to create a more immersive feeling, but you must have a supported headphone (APP or APM) to do head tracking. I assume the head-tracking DSP is applied on the fly on the headphone.

If you set Dolby Atmos to Always On, you'll always be playing the Dolby Atmos master of the song. I don't know for sure whether the HRTF is already applied, but it kind of sounds like it is. Your mileage will vary by song and even by headphone; some do very well while others don't. I have found the quieter Atmos masters to have more dynamic range than the original masters on Apple Music, though some people don't seem to like them due to lower volume (higher dynamic range requires music that is mixed less loud) or some other oddities (recessed vocals). If you listen on a headphone that is very close to DF neutral, many of the Atmos masters actually sound great, almost as if that's what the pros used to mix the music. Unfortunately, the APM is nowhere near DF neutral; the APP is closer in that regard.
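
For anyone curious what "an HRTF function applied to it" means in practice, here's a toy binaural-downmix sketch: each virtual speaker channel is convolved with a left/right head-related impulse response (HRIR) and summed per ear. The signals and HRIR arrays here are hypothetical placeholders; real renderers use measured HRIR sets.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy static binaural render: one HRIR pair per virtual speaker position.
# The HRIRs here are random placeholders, not measured data.
fs = 48_000
channels = {name: np.random.randn(fs) * 0.01
            for name in ("L", "R", "C", "Ls", "Rs")}   # stand-in 5.0 speaker bed
hrirs = {name: (np.random.randn(256) * 0.05,           # hypothetical left-ear IR
                np.random.randn(256) * 0.05)           # hypothetical right-ear IR
         for name in channels}

# Each virtual speaker is convolved with its ear IRs, then summed per ear.
left = sum(fftconvolve(sig, hrirs[name][0]) for name, sig in channels.items())
right = sum(fftconvolve(sig, hrirs[name][1]) for name, sig in channels.items())
binaural = np.stack([left, right])                     # 2-channel headphone feed
```

Head tracking would then just re-select or interpolate the HRIRs as head orientation changes, which fits the idea that the DSP runs on the fly on the APP/APM.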
 
Jun 27, 2021 at 2:13 PM Post #4,461 of 5,654
Jules, I do not believe this to be accurate. I believe the APM can reproduce full lossless (24/192) via wired output/input if the MIDI settings on iOS are set to 24/192 and you are playing a 24/192 lossless AM track. This is what I believe I'm experiencing.
The DAC in the APM can only go up to 24/48. The Apple Lightning (male) to 3.5mm (male) cable has the same limitation: it takes the incoming analog signal and re-digitizes it at 24/48 before sending it to the APM. So say you have a DAC/amp capable of 24/192; the Lightning cable will take that analog signal, convert it to 24/48, and send that to the APM to play. You also run into the issue of double amping.
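
A quick way to see what that 24/48 re-digitization costs, assuming the cable really does capture at 48 kHz: anything above the 24 kHz Nyquist limit of the capture stage simply can't survive. A toy illustration (the 30 kHz tone is just a stand-in for the ultrasonic content in a 24/192 file):

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi, fs_lo = 192_000, 48_000
t = np.arange(fs_hi) / fs_hi
hires = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)

# Emulate the cable's 48 kHz capture stage (anti-alias filter + decimate).
redigitized = resample_poly(hires, up=1, down=4)

# The 1 kHz tone survives; the 30 kHz component is gone (above 24 kHz Nyquist).
spectrum = np.abs(np.fft.rfft(redigitized))
freqs = np.fft.rfftfreq(len(redigitized), 1 / fs_lo)
print(freqs[spectrum.argmax()])   # ~1000.0 Hz; no peak anywhere near 30 kHz
```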
 
Jun 27, 2021 at 2:22 PM Post #4,462 of 5,654
The DAC in the APM can only go up to 24/48. The Apple Lightning (male) to 3.5mm (male) cable has the same limitation: it takes the incoming analog signal and re-digitizes it at 24/48 before sending it to the APM. So say you have a DAC/amp capable of 24/192; the Lightning cable will take that analog signal, convert it to 24/48, and send that to the APM to play. You also run into the issue of double amping.
Thanks for the clarification. Jules, thank you also.
 
Jun 27, 2021 at 2:46 PM Post #4,463 of 5,654
The upshot is that execs at Apple have gotten us all used to dumbed-down MP3/MP4 sound. A CEO defaults to "I can't hear a difference," so that's it. No studio-quality sound for Apple customers.

Wheel out the stars saying how great Spatial and Atmos are, have Zane Lowe talk (all the way through) an audio comparison sampler.

But studio-quality sound? No, we don't get it, because an Apple exec in an alligator shirt "can't tell the difference."

Shameful IMHO.

If you want to hear it on an iPhone, you have to buy a third-party DAC dongle and use wired headphones.

 
Jun 27, 2021 at 3:17 PM Post #4,464 of 5,654
Dolby Atmos is essentially a song mastered to a 5.1 system with an HRTF function applied to it.
Not wishing to be picky, but... Dolby Atmos mastering can emulate 5.1, but if mastered correctly it really should be done using objects rather than channels. It is the Dolby Atmos renderer that does the HRTF, and Apple appears to have two different renderers: one for Spatial Audio and the other for conversion for standard headphones.

Dolby Atmos in general scales from the same master source to provide different experiences via the renderer, hence why someone earlier in the thread mentioned that it was significantly better in a car with many speakers than over headphones from the same source.
 
Jun 27, 2021 at 3:38 PM Post #4,465 of 5,654
Not wishing to be picky, but... Dolby Atmos mastering can emulate 5.1, but if mastered correctly it really should be done using objects rather than channels. It is the Dolby Atmos renderer that does the HRTF, and Apple appears to have two different renderers: one for Spatial Audio and the other for conversion for standard headphones.

Dolby Atmos in general scales from the same master source to provide different experiences via the renderer, hence why someone earlier in the thread mentioned that it was significantly better in a car with many speakers than over headphones from the same source.
Yes, that was me. I notice an improvement in the experience when listening in a good vehicle system, even though vehicle systems are inherently not good :)
 
Jun 27, 2021 at 4:30 PM Post #4,466 of 5,654
Dolby Atmos is essentially a song mastered to a 5.1 system with an HRTF function applied to it. It will work on any pair of headphones, but without any head tracking. In iOS 15 head tracking is added to the mix to create a more immersive feeling, but you must have a supported headphone (APP or APM) to do head tracking. I assume the head-tracking DSP is applied on the fly on the headphone.

If you set Dolby Atmos to Always On, you'll always be playing the Dolby Atmos master of the song. I don't know for sure whether the HRTF is already applied, but it kind of sounds like it is. Your mileage will vary by song and even by headphone; some do very well while others don't. I have found the quieter Atmos masters to have more dynamic range than the original masters on Apple Music, though some people don't seem to like them due to lower volume (higher dynamic range requires music that is mixed less loud) or some other oddities (recessed vocals). If you listen on a headphone that is very close to DF neutral, many of the Atmos masters actually sound great, almost as if that's what the pros used to mix the music. Unfortunately, the APM is nowhere near DF neutral; the APP is closer in that regard.
I do understand and agree with your assessment. I still have no idea why there are both an Automatic setting and an Always setting.
 
Jun 27, 2021 at 4:45 PM Post #4,467 of 5,654
I think it's clear that Apple wants to take Spotify out of the market. For me they've already killed Tidal; $10 vs. $20 is a pretty easy call, and besides, Apple has better playlists compared to other services.

I really like how Dolby Atmos sounds with the APM. But as others have said, tracks need a particular tuning for it; some are mastered better than others. I was wondering if they could automatically master each file with AI, though? That would save time in updating their entire library to Atmos.
No. I think most people don't quite "get" how AI works, and that's OK. It's nowhere near human intelligence in terms of "taste" or "creativity" for topics like art. Generally, sound-oriented AI is going to fit into classification and understanding tasks.

For Atmos, you still need a human listening to the placement during the remastering process and making decisions on how to structure the remaster.

While you might be tempted to think that "judgment and discernment" are part of the AI world because of the looming breakthroughs in autonomy, you have to remember that there are very clearly defined rules of the road there, and a significant economic incentive to create a system that can operate in the ambiguity within those rules. There isn't a clearly defined "put the cellos over there and the pianos go here" mixing philosophy an AI can be trained on.

While there could theoretically be a rough version of an AI system that could do some really weak version of a remix, its results wouldn't justify the development investment with our current technology.
 
Jun 27, 2021 at 5:08 PM Post #4,468 of 5,654
Not wishing to be picky, but... Dolby Atmos mastering can emulate 5.1, but if mastered correctly it really should be done using objects rather than channels. It is the Dolby Atmos renderer that does the HRTF, and Apple appears to have two different renderers: one for Spatial Audio and the other for conversion for standard headphones.

Dolby Atmos in general scales from the same master source to provide different experiences via the renderer, hence why someone earlier in the thread mentioned that it was significantly better in a car with many speakers than over headphones from the same source.
Question: is Atmos already in an X.Y surround format (i.e., higher than stereo input)? If so, there aren't any cars (that I know of) that support more than stereo, so you'd end up with the stereo output from the source anyway. I can see the soundstage sounding bigger if the brain interprets the HRTF through the speaker system as being "bigger." To be honest, I've never played any audio mixed in a 3D-esque way through a speaker system; the closest I've done is a binaural track, which never really sounds any bigger.

I was unsure whether the source does the HRTF function or whether the master has it baked in already. I don't think there are multiple renders... That would make downloading music very odd (unless the render were done on the fly by the source), since when you download music you only get one version of the song (standard iTunes Plus, Lossless, Dolby Atmos), and if you want to play other versions you have to delete and re-download the song. Which render would you get when downloading Atmos: the Spatial Audio version or the standard-headphone one? (I personally don't feel there are two renders, though.)

The APP and APM will apply head tracking once that feature is available (iOS 15), which is really the only thing special about the APP/APM.
 
Jun 27, 2021 at 5:37 PM Post #4,469 of 5,654
Q - How many classic recordings do you want to hear altered for Spatial or Atmos?

A - Me? Not any. You?

Q - How many artists do you think have the budget to make custom mixes for these new formats?

A - Not many.

Result - mangled classic recordings, with only a few people actually mixing for the new gimmick formats. Meanwhile, Apple device owners still remain unable to play back hi-res masters the way they were intended to be heard.
 
Jun 27, 2021 at 5:40 PM Post #4,470 of 5,654
No. I think most people don't quite "get" how AI works, and that's OK. It's nowhere near human intelligence in terms of "taste" or "creativity" for topics like art. Generally, sound-oriented AI is going to fit into classification and understanding tasks.

For Atmos, you still need a human listening to the placement during the remastering process and making decisions on how to structure the remaster.

While you might be tempted to think that "judgment and discernment" are part of the AI world because of the looming breakthroughs in autonomy, you have to remember that there are very clearly defined rules of the road there, and a significant economic incentive to create a system that can operate in the ambiguity within those rules. There isn't a clearly defined "put the cellos over there and the pianos go here" mixing philosophy an AI can be trained on.

While there could theoretically be a rough version of an AI system that could do some really weak version of a remix, its results wouldn't justify the development investment with our current technology.
Don't take machine learning for mastering tracks out of the equation just yet. It would be very similar to applying a "style" to an image (look up deep style transfer), granted we're talking about a 1D signal vs. a 2D image. You can have ML learn the "style" in which music is mastered. The input to such a network would be the raw input tracks and the output would be a single mixed track. The training data would essentially be the raw tracks used to generate a final master, with the target output being the final master itself. Granted, we don't have access to this data, but I'm sure the record companies do (assuming they keep it all); if so, they'd have a wealth of data to train these models with.

In terms of "put the cells over there" and "pianos over there," the ML could learn that certain "types" of sounds would be placed and mixed a certain way. Given enough examples, it would definitely learn this. You have to remember that neural nets start out as a blank slate (unless you're doing transfer learning, then it doesn't), and it needs to learn what different features exist and patterns within the features and how they connect. We can train an auto encoder to generate realistic looking faces. There isn't a clearly defined "put the eyes next to each other" and "a nose below and between the eyes" inherently in the network, but it learns it as it sees more examples of this. These networks learn a general pattern and apply said pattern. The auto encoder will learn the that eyes have eyeballs and eye balls have dark circles and irises surrounding them.

One could try, for example, to make a model trained on recordings from just one genre, or from one specific producer; I feel finding patterns in a smaller set of related things would be easier than trying to make one model to rule them all. But that would be the end goal: if you can get one model to work, you could apply transfer learning to get other genres/producers replicated by the model.

The problem? Well, there is inherent risk involved. It's likely that a very complex pattern exists (the things humans do tend to have certain patterns to them, even if there is a little variation from one work to the next). The question is, what is the cost of learning this model? Time? These models can take years to train, assuming hardware exists that is capable of doing it. And it's not just the training time, but also the time it takes to tune the model, which is itself still more of an art than a science (though there are approaches to it, like a genetic algorithm or some other form of optimization such as Bayesian optimization). Hardware costs are another huge factor. You can rent, but that will cost you a lot in the long term; you can buy, but that's not cheap either. Manpower and expertise matter too: who's going to do the work? They need to get paid, and it likely takes a team.

I feel the two biggest factors right now are hardware and time. Getting a machine that can train a NN on this sort of data is going to be very costly if not impossible. Without good hardware, your training time per model will take forever, and when optimizing and tuning a model you end up training many models, with that number growing combinatorially. I'll also admit there is a lot of luck involved. Note that as of this writing there isn't much hardware out there that can train a NN on raw genomic DNA; even a bacterial genome is too large (either the space is too big or the time is too long). Basically, I don't think the hardware has caught up enough to do music yet.

Edit: building a model that mixes an individual track/instrument/etc. would require a lot less hardware than one that mixes multiple tracks together. You'd send each track in, it would "mix" it individually, and you'd just add all the tracks together. Getting training data for this might be more difficult, though.

Edit 2: on second thought, making the model blind to the other tracks would likely be more of a hindrance than a help, since knowing what other tracks exist and what they sound like would be beneficial for the model to learn from and exploit.
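
To make the shape of the idea concrete, here's a minimal, untrained PyTorch sketch of the stems-in, master-out setup described above. Everything in it is hypothetical (the layer sizes, stem count, and loss are placeholders); it only illustrates the input/output structure, not a working mastering model:

```python
import torch
import torch.nn as nn

# Hypothetical stems-to-master network: N raw stems in, one mixed track out.
# Purely illustrative; sizes, depth, and loss are placeholders, not a recipe.
class StemMixer(nn.Module):
    def __init__(self, n_stems: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_stems, 32, kernel_size=64, padding=32),  # learn per-stem filters
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=64, padding=32),       # combine across stems
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=64, padding=32),        # collapse to one channel
        )

    def forward(self, stems: torch.Tensor) -> torch.Tensor:
        # stems: (batch, n_stems, samples) -> master: (batch, 1, samples + padding)
        return self.net(stems)

model = StemMixer()
stems = torch.randn(1, 8, 48_000)           # one second of 8 fake stems at 48 kHz
master = model(stems)
# Training pairs would be real (stems, final master); random target shown here.
loss = nn.functional.l1_loss(master[..., :48_000], torch.randn(1, 1, 48_000))
loss.backward()
```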

Q - How many classic recordings do you want to hear altered for Spatial or Atmos?

A - Me? Not any. You?

Q - How many artists do you think have the budget to make custom mixes for these new formats?

A - Not many.

To the first one: Warner actually has a playlist of these (so does Apple). The Atmos recordings don't sound too different from the originals; they're the least altered ones, IMO. And artists wouldn't be doing the custom mixes, the record companies that own the songs would, and they definitely have the budget for it, especially with backing from Amazon, Apple, Spotify, etc.

Edit: I honestly couldn't care less about 3D effects and such. They're cool and all... but I'd rather the record labels start mastering things quiet again instead of loud. The Dolby Atmos tracks mostly accomplish that particular goal, IMO; it's really the only reason I like them. I hear more dynamics with the Atmos tracks vs. the original masters (on AAC). So yeah, forget the Atmos stuff, just do the mastering in a way that doesn't completely destroy the dynamics of the music. Though I guess some people prefer their music loud.
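
If you want to sanity-check the "quieter master, more dynamics" impression, a crude crest-factor comparison works; this is a rough sketch on toy signals, not how the DR databases measure it:

```python
import numpy as np

def crest_factor_db(x: np.ndarray) -> float:
    """Peak-to-RMS ratio in dB; a crude stand-in for 'dynamic range'."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

# Toy signals: a dynamic master vs. the same material heavily limited.
t = np.linspace(0, 1, 48_000)
dynamic = np.sin(2 * np.pi * 220 * t) * np.linspace(0.1, 1.0, t.size)  # swells
loud = np.clip(dynamic * 4, -1, 1)                                     # slammed into a limiter

print(crest_factor_db(dynamic))  # higher: quieter, more dynamic master
print(crest_factor_db(loud))     # lower: louder, flattened master
```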
 
