Audeze Mobius review / impressions
Apr 8, 2019 at 2:18 AM Post #3,406 of 5,780
So you mean Brainwavz, not the new Audeze gel pads? There are several versions out there; which exact ones did you use?
Could you give an Amazon link, please?

These ones?
https://www.amazon.de/Ohrpolster-Brainwavz-HM5-Kunstleder-angewinkelt-schwarz-black/dp/B0148TR6QE

Those are the ones, the angled pleather.

If you remember any of my posts about them, they don't seal completely (I noticed I had to push the headphones against my head to get them to seal) because the Mobius' baffle around the driver opening is uneven without the stock pads' plastic mounting ring attached. A piece of felt and an extra plastic ring provides an excellent seal: https://www.head-fi.org/threads/audeze-mobius-review-impressions.887808/page-214#post-14866353
 
Apr 8, 2019 at 4:48 AM Post #3,407 of 5,780
I find the 7.1 implementation very impressive. I have tried a few different solutions (I still need to try the GSX 1000) and the Audeze Mobius is right at the top. My priority is single-player gaming and movies.

I watched Glass last night on the Mobius, and whilst the film was rubbish, the audio was very impressive. There were a number of occasions where sound was clearly heard behind me, which was very impressive. The 3D manual mode is also fantastic for movies, as the headphones do a marvellous job of fixing the center dialogue channel in place. That then fools you into thinking there are speakers around you.

One thing I was a bit concerned with is that there is still a lack of bass compared to my Sennheiser HD 599. I do not understand that, however, as the Audeze extends lower than the HD 599. The bass on the HD 599 just sounds better.

Has anyone compared the Mobius to the GSX 1000 amp?


https://www.amazon.co.uk/Brainwavz-...vz&qid=1554713378&s=gateway&sr=8-1-spons&th=1

Are these the angled pads people are using, and what are the pros and cons of these new pads?
 
Apr 8, 2019 at 5:36 AM Post #3,408 of 5,780
I am quite satisfied with the sound color of the HRTF; its relatively low coloration in the treble range while still maintaining the full 3-D effect is exceptional.
I have done a lot of practical and theoretical research and development over the years in the area of dummy-head recordings and the related reproduction side, and the Mobius is really convincing here.


The virtual speaker arrangement that I audibly perceive from the Mobius fed with discrete 7.1 signals in my studio environment is what I painted in figure 3.
This might be personal; so be it.


What we agreed on is that the head tracking effect is too strong.

The fact is, virtual sound sources are located pinpoint-precisely with no smear for me; they just end up at the wrong position.
For me this means the HRTF and the interaural time delay ("phase") are working together well enough for the 3-D effect.
It's just that they need to be coupled to the head tracking with different scaling, which is why I suggest this approach.

I even think there are not many other options, because, as you mentioned, Waves NX is proprietary and Audeze does not have access to the full internal parameter set.
In particular, they cannot change the HRTF, which causes the "shadowing" effect you complain about.
I already asked (for another reason) and the simple answer was: "no".
Anyway, this effect is a natural part of every HRTF, and its basic tendency does not differ much between individuals (in detail it does).

What would be your suggestion to correct the problem?
I fully agree that the HRTF is exceptionally good when it comes to minimising sound coloration while retaining an excellent and convincing 3D image. I'm not arguing against that.

Regarding the virtual speaker arrangement: do you mean that's what you perceive when you're facing forward, or do you mean the speakers are actually in those locations, staying in place when you turn your head directly towards them? If it's the latter, something must be wrong with your Mobius; that simply doesn't happen with mine. If it's the former, that's pretty much what I perceive as well, though maybe not as extreme.

An easy way to determine the actual (not just perceived) virtual speaker locations is to play audio from one channel only and turn your head until you hear it directly in front of you. You then estimate the angle you turn your head to go from one speaker to the next, and that's the angle between the speakers. If I do that with the front left and right speakers, they end up about 60 degrees apart, exactly as the standard dictates. If you do that and find that the front left and right speakers are actually located directly to your sides (a 90-degree turn in either direction, 180 degrees between the speakers), as your diagram shows, then one of our headphones must be faulty, and I would have to think it's yours.
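The measurement procedure above boils down to simple angle arithmetic. As a quick sketch (hypothetical helper name, not anything from the Mobius software): note the head yaw at which each solo channel sounds dead ahead; that yaw is the speaker's actual position, and the pairwise separations follow.

```python
def separation(yaw_a_deg, yaw_b_deg):
    """Angular separation between two speaker directions, in degrees,
    taking the shorter way around the circle."""
    d = abs(yaw_a_deg - yaw_b_deg) % 360
    return min(d, 360 - d)

# Standard 7.1 placement puts the front pair at +/-30 degrees:
assert separation(-30, 30) == 60    # what a correct Mobius should measure
# The faulty case from the diagram, fronts heard directly to the sides:
assert separation(-90, 90) == 180
```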

I don't believe it's possible to give different scaling to ITD/phase and the rest of the HRTF in terms of head tracking, since they're both part of Waves NX. Audeze cannot modify Waves NX themselves, and if Waves were willing to cooperate, they might as well tackle the problem at its core. All Audeze could do is scale the head position that is fed into the Waves NX algorithms directly, but doing so would mess with the ITD/phase part of the HRTF. The best they could really do is compromise between the shadowing effect (which the brain uses primarily for noise-like sounds with a wide frequency spectrum and long duration) and the ITD/phase component (used primarily for impulse-like sounds of very short duration, as well as pure tones). Right now, for me at least, the ITD/phase part is working pretty much flawlessly (after entering the correct head measurements in the Audeze HQ app), while the shadowing effect is in fact too strong.

Another problem with such a solution is that it wouldn't actually solve the problem completely. Because of the nature of the core issue, sounds directly in front of or behind you seem too strongly affected by the head tracking, while sounds directly to either side of you seem much too weakly affected by it. Another way to think about this is that sounds are 'magnetized' to your left and right ears: they prefer to be directly to the side of you rather than in front or behind. Due to this, any scaling applied to the head tracking that would 'fix' its perceived strength for sounds in front or behind would adversely affect its perceived strength for sounds to either side. I hope I was able to explain this clearly enough.
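The ITD/phase cue discussed here can be put in numbers. Waves' actual model is proprietary, but a standard textbook approximation (an illustration, not the NX implementation) is Woodworth's spherical-head formula:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's approximation of the interaural time difference for a
    spherical head: ITD = (a/c) * (theta + sin(theta)), for azimuths up
    to 90 degrees off the median plane. Returns seconds."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

# A source 30 degrees off-center arrives roughly 0.26 ms earlier at the
# near ear; at 90 degrees the ITD maxes out around 0.66 ms.
itd_30 = woodworth_itd(30.0)
```

This also illustrates why entering the correct head measurements in the HQ app matters: the head radius scales the whole ITD curve.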

If you read the above, you should hopefully understand the various issues that come with a scaling-based approach, and why it's not an acceptable solution to me. It simply has too many side effects and doesn't actually fix the problem at its core. By the way, just so we're clear, it's not the 'head shadowing effect' itself I'm complaining about, but rather its strength. I added a Wikipedia link in my first response; perhaps you missed it?

Anyway, since the alternative is not an acceptable solution for me, my suggestion would be to tackle the issue at its core. Yes, I mean that Audeze should kindly ask Waves to implement changes in their code that would either reduce the strength of the head shadow effect to a reasonable level or, preferably, allow the user to customise it, much like they already do with the head measurements. In a conversation with @KMann, I was told that while Audeze cannot make changes to Waves NX on their own, they could reasonably request changes and improvements from Waves. Additionally, Audeze is currently working on adding the ability to update the Waves NX part of the Mobius firmware through the HQ app, further reinforcing the idea that Waves is willing to at least make some improvements to NX. That means it is a possibility, and certainly not a plain 'no'. Whether Waves would be willing to cooperate, I cannot say, but since there are no other acceptable solutions, I believe Audeze should pursue this possibility to its fullest extent.

Lastly, I want to state again that I do not intend to offend anyone with this. While my responses may seem harsh or critical, I simply want to clear up as much confusion as quickly as possible and make sure we're on the same page. I hope you understand.
 
Apr 8, 2019 at 8:57 AM Post #3,411 of 5,780
Thank you for your comprehensive answer!

To get back to your question:
Practically, what matters for me is what happens when I listen to music while sitting relatively still.
I'm not circling around all the time trying to pinpoint the virtual speakers' positions.

I determine the virtual speaker positions by automatically switching a mono sound, step by step, through all 7.1 channels while looking straight ahead, and I get this:
[Attached image: IMG_0990.jpg]
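The step-through test described above is easy to reproduce. A minimal stdlib-only sketch (hypothetical function name; the 8-channel interleaved order is an assumption and depends on your 7.1 driver setup) that plays a sine burst in each channel in turn:

```python
import math
import struct

def channel_stepper(rate=48000, burst_s=1.0, gap_s=0.5, freq=440.0, channels=8):
    """Interleaved 16-bit PCM that plays a sine burst in each of the 8
    channels (7.1) in turn, with silence in the others, so each virtual
    speaker can be located while looking straight ahead."""
    burst_n, gap_n = int(rate * burst_s), int(rate * gap_s)
    frames = []
    for active in range(channels):
        for n in range(burst_n):
            s = int(20000 * math.sin(2 * math.pi * freq * n / rate))
            frames.append(tuple(s if ch == active else 0 for ch in range(channels)))
        frames.extend([(0,) * channels] * gap_n)  # gap between channels
    return b"".join(struct.pack("<%dh" % channels, *f) for f in frames)

# Tiny parameters for a quick sanity check; use the defaults for real listening.
pcm = channel_stepper(rate=8000, burst_s=0.01, gap_s=0.005)
```

Write the result into an 8-channel WAV (e.g. with the stdlib `wave` module) and play it through the Mobius in 7.1 mode.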



I tried the other way, too, turning my head towards each virtual speaker.
But those virtual speakers do not stay in place when I turn my head.
In fact, they move to roughly the intended positions of a standard 7.1 setup!
I take this as evidence that the head tracking does not give a correct angular response, so I consider those results worthless for determining the speaker positions.


This means we completely agree:
The 3-D engine is not placing the speakers in the correct positions, partly contributing to the "too much shadow effect" you mentioned.
The incorrectly working head tracking then seems to fix that, but not in the normal listening position, only when you turn your head to "face" one virtual speaker, while moving the other virtual speakers to even more wrong positions (Front Left moves to the back when you turn right).


Changing the virtual speaker positions was my first request to Audeze, but I got a firm and definitive "no, impossible".

At that time I thought moving the virtual front speakers closer to the center would fix the head tracking too.
Now that changes in the 3-D engine do not seem to be an option, working on the coupling between the head tracker and the 3-D engine is the only thing left.


Just one thing I want to make clear, which I didn't in my earlier postings:
Scaling the head tracking's angular response needs to be nonlinear, i.e. angle-dependent:
the angles that already work correctly are not changed; mainly the angular response in the roughly 110° frontal area would be different.
The HRTF's 3-D processing itself would not be changed at all (it more or less works).
Only the angular mapping from the physical head tracker unit to the 3-D engine's simulated angle needs to be changed to the perceived correct values.

It might even be that Waves finds an error in their own mapping and corrects it on their side, but my proposal could maybe be done by Audeze themselves.
The physical head tracker unit used in the Mobius might not fit the Waves NX software perfectly, as Waves may have adapted it only to their own hardware unit and not to the one from Audeze.


@KMann
These are all questions that only Audeze / Waves cooperation can resolve.

I can just state that the angular response of the Mobius' 3-D head tracking effect is not correct.
I would like to ask for a fix, as it largely reduces the fun I have with the Mobius.

I'm very happy that I am not the only one hearing it this way; thank you, GalaxyMaster!
 
Apr 8, 2019 at 9:57 AM Post #3,412 of 5,780
Is there a way to use the Mobius with iOS in Hi-Res mode, wired via a USB-C to Lightning adapter?
Hi-Res mode is a gimmick, and I don't know if iOS allows more than 48 kHz via USB anyway. Wired use will also drain the device's battery unless you switch off charging on the Mobius. Alternatively, you can buy the Lightning to USB 3 adapter, which has an additional Lightning input for charging.

The only real use of a wired connection is playing games with low-latency audio. Video playback is already lip-synced with AAC over Bluetooth.
 
Apr 8, 2019 at 10:02 AM Post #3,413 of 5,780
I don't want to sound condescending, but you seem to have a fundamental misunderstanding of the issue at hand. The head shadow effect isn't a symptom caused by a wonky virtual speaker arrangement; it's the other way around. The increased strength of the head shadow effect is what causes the speaker arrangement to sound wrong. The head shadow effect is one of the fundamental mechanisms the brain uses to localise sound. As such, it is simulated in all HRTFs to achieve a sound projection as convincing as real life. In the case of Waves NX it happens to be too strong. This is what causes all the weirdness with speaker placement seeming wrong when facing forward, as well as the speakers seemingly drifting to their 'proper' positions when you turn to face them.

I'm not sure if I'm right here, so please correct me if I'm wrong, but you seem to be under the impression that the tracking itself (or the coupling of the tracking to the Waves NX algorithms) is wrong, rather than the algorithm used to render the binaural mix. That, combined with wrong speaker placement, would result in a phenomenon similar to, but still different from (I'll explain later), what I believe is happening here.

The reason I believe my explanation of our findings to be correct is that I consider the alternative (which you seem to believe) vastly more unlikely. Tracking an object's orientation using a gyroscope/accelerometer is pretty much a solved problem. It's easy to do, and something would have to be seriously wrong for it to give wonky readings. The second thing your explanation requires is intentionally wrong speaker placement within Waves NX. That would make no sense at all, considering the goal of Waves NX is to emulate a standard speaker arrangement as best it can. Because those two things are very unlikely to occur on their own, it would be exceedingly unlikely for them to occur at the same time.

Now let's look at my explanation: an error in the HRTF. Psychoacoustics is still a very young area of research, and there are many things we don't yet know or understand. Because of this, it's still very difficult to accurately emulate the way humans hear through an HRTF, as Waves and many other companies have attempted. It's not unlikely, in my opinion, that some things aren't exactly perfect at this stage; I would say errors are to be expected. It would probably take years upon years of refining to get things sounding perfect, which makes it all the more impressive how well Waves NX does in most regards.

If you don't understand how or why a strengthened head shadow effect in the HRTF can cause the observed behaviour, here is a hopefully clear explanation:

Consider the Front Left channel. Let's say you're facing that virtual speaker, and everything sounds as it should. Now you begin turning your head to the right until you're facing the center channel (a 30-degree change in total). As you move your head, two things (mainly) happen: the ITD gets shifted such that your left ear receives audio slightly before your right, and the head shadow effect causes audio entering the right ear to become occluded (volume is lowered and higher frequencies are attenuated). With a perfect HRTF, those effects would combine to sound exactly like they would in real life: sound would be delayed in the right ear by just the right amount, and frequencies would be attenuated exactly as your brain expects. This would give you perfect localisation of the speaker as exactly 30 degrees to your left.

Now consider what would happen if the head shadow effect were too strong. Your right ear would receive a more attenuated sound than your brain would expect given the angles. That attenuation would correspond to a sound source further to your left (say 60 degrees instead of the actual 30), so your brain concludes the sound must be coming from further left, and that is how you perceive it. Note that this only applies to sounds whose perceived location is mostly determined by the head shadow effect rather than the ITD. Examples are pink noise, human voices, and (to an extent) long-lasting pure tones. Sounds of very short duration, such as an impulse, are mostly localised by the brain using the ITD.
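The mechanism in this explanation can be captured in a toy model (an illustration only; the sin-based interaural level difference law and the gain parameter are assumptions, not Waves NX internals): if the rendered level difference grows faster with azimuth than the brain's internal model expects, inverting that model pushes the perceived source outward.

```python
import math

def perceived_azimuth(true_az_deg, shadow_gain=1.0):
    """Toy model: the renderer produces an interaural level cue of
    shadow_gain * sin(azimuth), while the brain expects 1.0 * sin(azimuth)
    and inverts its own model. An overdone shadow (shadow_gain > 1) pushes
    the percept further to the side, saturating at 90 degrees."""
    cue = shadow_gain * math.sin(math.radians(true_az_deg))
    return math.degrees(math.asin(max(-1.0, min(1.0, cue))))

# With a correct shadow the percept matches reality; with it doubled,
# a speaker 30 degrees off-center is heard all the way out to the side.
```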

One way to verify whether the ITD part of the HRTF is functioning correctly is to repeatedly play an impulse (a spike in amplitude lasting only one sample, followed by complete silence) in one channel while slowly turning your head. You should, in theory, perceive the location of the speaker to remain the same regardless of head orientation, provided the ITD part of the HRTF is indeed correct for your particular head and ears. This is in contrast to a noise-like sound (such as pink noise), whose perceived location would become warped due to the increased head shadow effect.
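A minimal sketch (hypothetical helper, plain Python) of that impulse test signal: one-sample spikes in a single channel, silence elsewhere.

```python
def impulse_train(rate=48000, period_s=0.5, repeats=4, channel=0, channels=2):
    """One-sample spikes followed by silence, placed in a single channel,
    returned as per-frame tuples of float samples in [-1, 1]. Played
    through the virtualizer while slowly turning the head, the click's
    apparent position should stay put if the ITD part of the HRTF is
    right for your head."""
    period_n = int(rate * period_s)
    frames = []
    for _ in range(repeats):
        for n in range(period_n):
            amp = 1.0 if n == 0 else 0.0  # the single-sample spike
            frames.append(tuple(amp if ch == channel else 0.0
                                for ch in range(channels)))
    return frames

# Tiny parameters for a quick sanity check; use the defaults for listening.
clicks = impulse_train(rate=1000, period_s=0.01, repeats=3)
```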

I hope I was able to explain clearly enough. If you have any questions or would like more explanation, please don't hesitate to ask. The more we're on the same page, the better.
 
Apr 8, 2019 at 10:21 AM Post #3,414 of 5,780

Thanks for the reply. I read that Audeze considers Bluetooth a supported connection, but recommends listening to the Mobius wired via USB to express its full potential (Hi-Res mode, 7.1), and I want to know if that is possible with an iPhone.
 
Apr 8, 2019 at 2:06 PM Post #3,417 of 5,780
You might be completely right about the cause of the negative effect, as it would result in what I hear too.
What we cannot know is whether anything was done intentionally or not.
But we have this flaw we are talking about, so they are not quite there yet.
I wouldn't invest a second of my time here if the Mobius weren't so close to something really great.


The whole business of dummy-head recordings, 3-D audio and sound virtualization is about fooling the auditory system and the brain.
I use this and other effects on a daily basis in my job as an audio engineer to draw a convincing picture of an artist performing.

Anyway, a recording is not the real thing; in the case of the Mobius it's a double abstraction:
1. You start with a recording of a real event, made to be played on speakers.
2. Then you simulate the speakers in the room.

My bet is that during Waves NX development a huge amount of tuning by listening, and technical compromises working around theoretical approaches, went into achieving a convincing result.
A lot of generalization must be built in too, because you cannot serve each individual's HRTF, as e.g. the Smyth Realiser does.
Every other system I have heard trying to do the same (and there were a lot) had such big flaws that it was unusable.

It might be complex and costly to move several steps back to find a solution, and Waves and Audeze might be reluctant to do that.
So my idea of adapting the head tracking could give us a faster workaround for the most obvious part of the problem.
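Purely as an illustration of that idea (this is not the Waves Nx API; the function name and the scale factor are hypothetical), the workaround amounts to compressing the tracked rotation before it reaches the virtualizer:

```python
def scaled_yaw(raw_yaw_deg, k=0.8):
    """Hypothetical head-tracking workaround: feed the renderer a
    compressed yaw angle so virtual sources drift less when the head
    turns. k < 1 shrinks the tracked rotation; k = 1 leaves tracking
    unmodified. Result is clamped to the [-180, 180] degree range."""
    return max(-180.0, min(180.0, raw_yaw_deg * k))
```

For example, with k = 0.8 a real 90° head turn would be rendered as a 72° turn. Whether any such factor actually compensates for the perceived drift would have to be found by listening.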


Finally - kudos to Audeze for building something astonishing; it just needs some more refinement to become really great.
 
Apr 8, 2019 at 2:44 PM Post #3,418 of 5,780
I agree we can never know for certain, at least not until someone from Waves or Audeze tells us directly. We can definitely make educated guesses, however. I don't think Waves intended for these issues to be present, nor do I think they are perfectly happy with the current state of NX. I do believe they will be looking to continue to make improvements.

I don't know how difficult it would be to implement the changes I suggested, but since they're relatively simple in nature I don't imagine it would be too costly. Depending on how their code is structured it could literally be a single number tweak, but even if it is a little more complex, I do believe it would be worth their effort. Making such improvements would not only benefit Audeze and the Mobius, but it would also directly benefit Waves themselves, since they sell NX as a product.

Regarding the head-tracking scaling solution: while that could alleviate some of the issues (namely the audio location drift for the front and rear speakers when moving your head), it would also cause others, as I explained earlier. Additionally, it wouldn't change the already-present wider-than-normal stereo separation when facing forward. Therefore I don't think it's a viable or acceptable solution, even as a temporary band-aid fix.

I fully agree with your last point. Audeze certainly has made something incredible with the Mobius. It's the best sounding Bluetooth headphone by a mile, it's the first wireless planar headphone, it's the first headphone to implement head tracking and sophisticated 3D audio, and it has arguably the best audio quality for the money. To me it's already the best headphone on the market all things considered, and it has the potential to become significantly better still with just a bit more work. Well done Audeze!
 
Apr 8, 2019 at 3:13 PM Post #3,419 of 5,780
Apr 9, 2019 at 4:11 AM Post #3,420 of 5,780
When I try to go into the device properties through Windows, the screen opens but then automatically closes straight away. I can't get it to stay open to make any changes.

Also, I can't seem to update the firmware. I assume my device requires an upgrade, because even though the Audeze HQ app has the warm setting, selecting it does nothing on the headphones, and when I cycle the EQ on the headphones themselves there is no warm option.

My firmware version is 1.30.

I followed the instructions, knocked the mic off and laid the headphones flat (inside of the earcups towards the floor), but got a message that the update had failed. I am running Audeze HQ in admin mode.

Also, does anyone have an idiot's guide to using Equalizer APO for a bass boost? People refer to APO as a system-wide EQ, but with APO you have to pick which device to install it to. I have used it on my usual output, which is my Denon AV amplifier (connected by HDMI to my GPU), but how do I install it to the headphones? I am also not sure how to actually boost the bass using APO. The only thing I have done with APO previously is select a filter to flatten the EQ of a headphone for use with HeSuVi.
 