Like I said, the source audio device that Waves NX piggybacks onto needs to be multichannel capable. It must be able to receive and recognise a discrete multichannel surround signal. The Syba is just a stereo dac/amp, so Waves was only being given a stereo source feed to virtualise; it's no wonder it wasn't that great for you. You were getting a virtualised stereo speaker setup rather than a virtual 7.1 surround setup. I repeat, you need a 5.1 or 7.1 capable audio device as the source in the first instance. That could be your onboard sound chip if it's capable, or an external dac/amp with those capabilities like a Sound Blaster G6.
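If you want to double-check what Windows is actually exposing before you point Waves at anything, a quick script can list which output devices report 6+ channels. This is just a minimal sketch using the third-party sounddevice Python library (my choice for illustration, nothing Waves requires or ships with):

```python
# Minimal sketch: list output devices that report 6 or more channels,
# i.e. candidates for a 5.1/7.1 source device for Waves NX to attach to.
# Assumes the third-party "sounddevice" package: pip install sounddevice
import sounddevice as sd

for idx, dev in enumerate(sd.query_devices()):
    if dev['max_output_channels'] >= 6:  # 6 = 5.1, 8 = 7.1
        print(f"{idx}: {dev['name']} ({dev['max_output_channels']} output channels)")
```

One caveat: with the default Windows host APIs, the channel count a device reports tends to follow whatever speaker configuration is currently set in the Windows sound manager, so a 7.1-capable device left on stereo may well show up as 2 channels here. That's another reason to set the speaker layout first, as below.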
Choose the audio source from the Waves Central app and then ensure that it's set to 5.1 or 7.1. I can't remember if you can do this straight from the Waves app, hence why I recommended doing it through the Windows sound manager just to be sure; see the screenshot below. I can't find my headtracker at present, and Waves NX isn't installed on my Surface anyway (it's on my gaming rig, which isn't here), so I can't illustrate the exact process, but the screenshot does show audio device configuration via the Windows OS, including the access path.
Imagine that I have Waves NX installed and have chosen the 7.1 capable SXFI AMP (with SXFI processing turned off in this hypothetical example; I am simply using it as a dac/amp capable of decoding multichannel audio) as the audio source device for Waves to attach itself to and output from.
IIRC, the only difference between this screenshot and an actual Waves NX config would be that the virtual audio device "Waves NX" would be set as the Windows default audio device rather than the SXFI shown here. If you are able to set the Waves NX audio device to 7.1 as well, either in the app or in the sound manager, then do so.
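For what it's worth, if you'd rather script the housekeeping than click through it every time, something along these lines gets you to the same place. The nircmd call is an assumption on my part (NirSoft's nircmd is a separate free download, and the "Waves NX" name has to match whatever your system actually calls the virtual device); the speaker layout itself still has to be picked by hand in the dialog that opens:

```python
# Minimal sketch: open the Windows Sound control panel and (optionally) set the
# default playback device. The speaker layout (Configure > 5.1/7.1 Surround)
# still has to be chosen by hand in the dialog this opens.
import subprocess

# Opens the classic Sound control panel (the Playback tab is the default view).
subprocess.run(["control", "mmsys.cpl"], check=True)

# Optional: make the Waves NX virtual device the Windows default output.
# Assumes NirSoft's nircmd.exe is on PATH and that "Waves NX" matches the exact
# device name your system shows in the Sound control panel.
subprocess.run(["nircmd", "setdefaultsounddevice", "Waves NX", "1"], check=True)
```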
Nonsense. Sadly, it's nothing more than arrogantly dismissive and closed-minded ignorance from a bunch of people (including pro reviewers, many of whom know far more about audio than I ever will and thus should know better) who don't understand the purpose of the headtracking implementation in the Mobius, Orbit S and Waves NX headtracker VSS solutions.
If Waves NX or another headtracking VSS solution doesn't work for an individual's HRTF and accordingly they don't rate it, that's absolutely fine; universally effective VSS is difficult if not impossible. But when they dismiss the principle of leveraging headtracking for VSS with "Meh, head tracking is a gimmick, it's only useful for VR!", it displays a fundamental ignorance of how 3D audio is currently implemented in VR, not to mention a complete lack of understanding of how the ears receive audio cues and how the brain processes them. Waves NX may not have worked for them, but that'll be down to their own personal HRTF's compatibility with the Waves algorithm, not because headtracking is a pointless gimmick. If anything, unbeknownst to them, the headtracking will have prevented their opinion/experience of Waves NX VSS from being even worse.
To be clear, there are two distinct reasons for, and implementations of, headtracking. One is VR and the other is VSS enhancement.
VR
VR revolves at least in part around physical movement: if not the entire body, then the upper body for motion control, or at least the head for camera control, as the camera view tends to be tied to the physical movements of the user's head mounted display unit (HMU), i.e. the directions the user is facing at a given time and the directions they turn their head towards, instead of the manipulations of a mouse or a gamepad's analogue stick. VR audio tends to be pre-programmed into the game and handled by the CPU of the host system, either by itself or in conjunction with a dedicated dac/amp and audio processing chip in the HMU package. It is optimised to follow these head movements and keep the audio cues relative to the game environment and the user's head position. One of the major VR VSS engines used by game devs is actually a specialist VR version of Waves.

In any case, whatever the VR audio engine and whoever it's from, such games don't typically require separate hardware to implement this experience. In fact, many VR headsets allow you to remove or bypass the included stock headphones and use your own stereo cans plugged into the headset or the PC instead if you like. Some VR headsets even forgo headphones altogether to keep costs down. Even in that latter instance, a separate audio device with its own headtracking would be unnecessary, because a gyroscopic headtracker is already in the HMU itself; without it, it wouldn't be able to track the VR world camera view to the user's head movements. The Mobius and Orbit S official documentation and online FAQs actually tell you to turn Waves VSS off for VR because Waves, Audeze and Kingston are fully aware of the aforementioned.
(As a side note, the Mobius control software beta has trialled mapping gyroscopic head motion to quick-snapping the game camera to the direction of the user's gaze, in a kind of pseudo-VR hybrid implementation. However, this beta came some time after launch and was never the original raison d'être for headtracking in the Mobius.)
Head tracking for Non-VR VSS
The core purpose of the Mobius's 3D audio is the same as that of any other VSS solution (and I don't just mean the nu-wave 3D audio VSS solutions that have taken off over the past couple of years): to mimic a true physical surround speaker setup as far as is possible using various acoustic trickery. The thing is, though, headphones are stuck to your head/ears. Any head movements, however small, naturally result in the actual physical source of the audio, the headphone drivers, moving with you, as they are clamped to the sides of your head.
On the one hand, this can be a source of immersion due to the focus and isolation, particularly in the case of closed-backs, but on the other hand, it's not realistic... It's not representative of how we perceive and process audio cues from our surrounding environment.
You will no doubt have listened to a true surround speaker setup at some point in your life, if not in the home AV space then at the very least in the cinema. In such a setup, do the speakers adjust themselves in angle or position if the listener moves their head, let alone their body position? No. When you set up a surround system, you fix your channels and speakers, perhaps do some calibration for distances, time delay and other environmental characteristics to optimise the experience for an ideal listening position, a sweet spot, and then you leave it like that until, if and when, you decide it needs to be recalibrated, moved or modified. But in the here and now of your listening session, while playing a game or watching a film, the setup is fixed. If you shift your position at all, whether just tilting your head to the side or a more substantial repositioning of your body (leaning over onto the left armrest when you were previously leaning on the right, tilting a reclining chair back, etc.), the speaker setup remains fixed in the same position, with time delay and everything else set as per the last calibration.
So even if a given VSS is otherwise amazing, even if it's an otherwise perfect replication of a true surround setup, the moment you introduce movement that physically moves the source of the sound, that is the headphone drivers, that simple fact in and of itself prevents true replication of a multichannel speaker system. That's not to say that a multichannel speaker system is a perfect replication of the way audio behaves in real life either. But when the positional cues of the audio environment come from a fixed sphere or soundstage that only changes per the instructions of the source content / audio mixing, while the listening ears are left free to regularly shift in position, however minimally or otherwise imperceptibly, due to the head's micro-movements resulting from respiration, chewing / swallowing, miscellaneous small shifts in head position for comfort etc., that is much closer and more realistic, not only to a true multichannel speaker setup but also to how we perceive and process sounds in real life. Those micro-movements help our ears and brains better pinpoint location, direction, depth and so on.

A crude way of demonstrating this would be to try it out in a non-VR game with any VSS solution (it doesn't need to be Waves), without headtracking. Find a fixed-point audio cue (preferably a constant or repeating one) that's faint or slightly vague in terms of positioning. Cues made quiet because they are far away are ideal for this. Alternatively, you could pick a louder cue from a busy audio environment where other cues and general background noise are competing for your attention, obscuring the soundscape and hampering precise imaging of the cue in question. Cues that are continuously sounding (e.g. a waterfall) or providing a repeated but vague cycle of samples somewhere off camera, out of sight, are best suited for our purposes. The cue should be offscreen, and if possible one that you couldn't see in the first place.
Once you have chosen your sample cue and you can vaguely image it, put the controller down or disengage from the keyboard so that the game engine camera is still and stationary. Now try listening to it through headphones; this can be in stereo or even through a non-headtracking VSS solution. Move your head from side to side (if necessary, hold your cans to your head if they are loose or in danger of falling off), back and forth, and mix it up with a combination of both. For the most part, the cue in question won't image any better. It won't become any clearer, more solid or distinct, because the real-life source of the sound, the drivers, moves in tandem with your head. And here's the rub: if it does improve at all, that will be due to the headphones' movements not exactly matching those of your head.
For the next part, keep the headphones on but this time keep your head still and use the mouse or analogue stick to give the game camera a good shake. The audio cue will become a bit clearer and more solid, partly because your attention is focused on following that source cue in motion even though your head and ears are relatively stationary, and partly because the audio processing will subtly change the cue's characteristics in line with its change in position relative to your in-game avatar's stationary position within the game's audio environment.
It's the same with VSS + headtracking, only in reverse. The virtual environment and cues are anchored in that instance, and instead of the game camera, it's your head that's moving and changing your position relative to the cue. When the virtualised speaker setup has an anchor point versus a relatively stationary listening position, moving your head, however little, will facilitate easier recognition and conscious and subconscious imaging of the cue in question. That's down to the natural audio 'decoding' capabilities of our ears and brains.
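To put some very rough numbers on that 'decoding', here's a toy sketch of the underlying geometry, nothing to do with how Waves actually implements its algorithm: a virtual speaker is anchored at a fixed azimuth, the head yaws a few degrees, and the interaural time difference the brain uses for localisation shifts with it. It uses the classic Woodworth spherical-head approximation with an assumed average head radius:

```python
# Toy sketch of why headtracking helps VSS imaging: a virtual speaker stays
# anchored while the head yaws, so the interaural time difference (ITD) the
# brain uses for localisation shifts with every small head movement.
# Uses the Woodworth spherical-head approximation; the head radius is an
# assumed average. This is my own illustration, not Waves' actual algorithm.
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius in metres
SPEED_OF_SOUND = 343.0   # metres per second

def woodworth_itd(azimuth_deg: float) -> float:
    """ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

SPEAKER_AZIMUTH_DEG = 30.0   # e.g. the front-left speaker of a standard layout

for head_yaw in (-10, -5, 0, 5, 10):            # small, natural head movements
    relative = SPEAKER_AZIMUTH_DEG - head_yaw   # angle the tracker hands to the renderer
    itd_us = woodworth_itd(relative) * 1e6
    print(f"head yaw {head_yaw:+3d} deg -> speaker at {relative:5.1f} deg relative, "
          f"ITD ~ {itd_us:5.1f} microseconds")
```

Run it and you'll see the ITD swing by well over a hundred microseconds across a mere ±10 degree head yaw, which is exactly the kind of natural cue variation a fixed, non-tracked headphone soundstage can never give you.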
All that is why the Mobius / Orbit and Waves NX have headtracking. It's why Redscape offers their own USB headtracker and VSS software as a competitor to Waves. It's why the Smyth Realiser A16 comes bundled with a head tracker, as did its years-old predecessor the A8, and why the Beyerdynamic Headzone Pro from over a decade ago (one version of which was reviewed by MLE), back when VR had no presence in the home entertainment space, was marketed as the ultimate luxury headphone VSS dac/amp, complete with ridiculous-looking antenna emitter and headphone-mounted aerial. None of these products were intended for VR; they were all intended for movies, music and conventional gaming in VSS.