Smyth Research Realiser A16 Speaker Edition
Nov 30, 2021 at 9:07 AM Thread Starter Post #1 of 139


The Smyth Research Realiser A16 SE (Speaker Edition) is a special version of the Realiser A16 for listening to Dolby Atmos and DTS:X movies, music, games or series over two Hi-Fi speakers, using patented AudioXD technology.

The appeal would be a Realiser A16 unit with two operating modes, headphones and speakers: in other words, a virtual multi-speaker configuration all around you with only two real speakers, plus one or two headphones for late-night listening.

Operation in headphone mode is identical to the Realiser A16. In "Speaker" mode, the Realiser A16 SE must be connected to two active (self-amplified) speakers or to a stereo amplifier driving two passive speakers. The connection to the amplifier is made via two RCA/CINCH outputs or a variable-level optical/coaxial output.


The operating principle of the Realiser A16 SE is not quite clear to me. AudioXD, the French company behind the Realiser A16 Speaker Edition, has disclosed only limited information about the technical solution involved. It has been suggested that it's not transaural but rather neural rendering, whatever that means.

The upgrade to the Speaker Edition requires sending the Realiser A16 unit to AV-in in Paris. The French distributor recently organised demonstration sessions. Any listening impressions from French users?

Later edit: I've found an English-translated review of the Realiser A16 SE that references the Realiser A8 and A16.
Nov 30, 2021 at 10:33 AM Post #2 of 139
It is an interesting idea, but the price is way too high, IMO. Even if virtual surround over two speakers were worth over 8K USD (more than most spend on an entire home theater speaker system), I would not advise anyone to spend that much on the A16. The long-term future of the company is in some question and the device itself is unreliable. A few people on Head-Fi, including me, have had to send theirs back for repairs. The device sometimes hangs, sometimes boots to a white screen, has loud pops, and many other issues and odd behaviors. While the most important features work, the Smyths have failed to deliver on many of the promised features and probably never will. Yes, when it is working properly it is fantastic, and I have no regrets about getting mine (I paid around 1K). However, unless you are wealthy, spending 8K on it would be very risky, especially when the ability to get support a few years down the line is in question.
Nov 30, 2021 at 10:53 AM Post #3 of 139
It’s still debatable whether upgrading to the Realiser A16 SE is worth it for those who got their Realiser A16 units through the Kickstarter campaign. The asking price is a hard pill to swallow. However, if one already has two speakers, a stereo amp and a subwoofer, the cost of the upgrade might be a bit easier to manage.
Nov 30, 2021 at 11:09 AM Post #4 of 139
Also this is NOT being done by the Smyths even though they are licensing it. Seems to me very similar to the BACCH software. What is involved in modifying an existing Realiser and how convincing the effect will be is something only time will tell. Me, I already have 24 channels of bespoke PRIRs for D&D8c's, LS 50 Metas, and OG LS 50's, and a 4.1 channel LS 50 Meta with LS 50 Surrounds, so I'm in no hurry to plunk down another $4k.
Nov 30, 2021 at 12:15 PM Post #5 of 139
I was surprised that there was no mention of Stephen Smyth at all in the French podcast:

Which is amazing considering that Jean-Luc Haurais from AudioXD is also involved in Audio3D where the Smyth(s) are consultants.

See here on page 8 about Audio3D: (pdf)
Jean-Michel Jarre : Musician & President of CISAC (the International Confederation of Societies of Authors and Composers)
Mike and Stephen Smyth : Former CEO and CTO of DTS

A bit confusing and all that
Nov 30, 2021 at 12:37 PM Post #6 of 139
Merci beaucoup Yves for the additional information. I’ve tried to watch the podcast, but my French is very limited and I didn’t understand much. I was disappointed that they talked so much about AudioXD’s previous achievements and provided so little information about the practical aspects of this speaker version.
Nov 30, 2021 at 12:45 PM Post #7 of 139
Also this is NOT being done by the Smyths even though they are licensing it. Seems to me very similar to the BACCH software.

Yes, but it requires their hardware, so buyers will likely need to depend on Smyth for repairs and firmware upgrades.

As you know, I too have a bespoke PRIR that I'm extremely happy with. OTOH, I have a full-range two channel system and it would be cool to hear it simulate surround, but not 8K cool.
Nov 30, 2021 at 1:23 PM Post #8 of 139
Besides the podcast mentioned in @You Gene’s post #5, there are two more videos, also in French:

Audio-XD Smyth A16 SE

J'ai écouté du 9.1.6 canaux DOLBY ATMOS sur DEUX Enceintes, et ça marche !
I listened to 9.1.6 channel DOLBY ATMOS on TWO Speakers, and it works!

Jean-Luc Haurais said that no calibration was carried out at all, which I find a bit confusing. As owners of Realiser A16 units, most of us carried out PRIR measurements at home using two speakers and an amp. In the second available video, Gilles Gerin gave a demonstration in a hotel room. One could notice on the display of the Realiser A16 SE that the user was Wissam (most probably Mr. Haurais’s colleague, who was also seen in the first video), the active preset was #5 and the listening room was Dolby Atmos 9.1.6. However, it wasn’t clear which PRIRs were used to build that listening room. Were they Wissam’s own PRIR measurements or the BBC and Surrey University PRIRs? This question leads to another one: how does the Realiser A16 SE sound when the listener hasn’t made personalized measurements beforehand? Does such an approach make sense?
Nov 30, 2021 at 3:35 PM Post #9 of 139
A guy on HCFR posted comments about his short experience listening to the A16 AudioXD SE; here is a translation of his comments:
...I was able to listen to the demo of the Smyth Realiser with the programming by AudioXD and talk with the person who designed the DSP programming (Jean-Luc Haurais).
The source virtualization is very successful, with only 2 speakers, the sounds come from the center, from the sides, from the top, from the bottom, I don't know how many channels, but it's more than 220 degrees of spatialisation.
The principle is based on CTC (crosstalk cancellation), using only phase and magnitude: a recursive cancellation circuit (an endless cancellation loop) in which the sound from the right speaker is cancelled by the left one via a phase shift, and off we go.
The catch is that the listening spot only tolerates about 5 cm of movement around the center. Very impressive technically, but it only works a little below the critical distance.
Obviously, the principle is destructive: the native signal is degraded by the cancellations, so it’s no match for a real physical 5.1 or higher setup. The interest is therefore quite limited given the placement constraints, but the technical feat is remarkable.
I have also tried CTC, non-recursive for the moment, on my near-field system; the effect is subtle but it widens the soundstage, and the sound seems to detach from the speakers.
The effect is kept light so as not to deteriorate the signal. It is a compromise between effect and purity, but at a low level the impact on the signal is inaudible: we just hear the widening without any loss of purity, which is very interesting.
It works with FIR filters and generic HRTFs (an HRTF profile averaged over 5,000 samples, with a 5 cm listening spot, so the same as in stereo for the perception of a perfect center)...
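For readers unfamiliar with the "endless cancellation loop" the HCFR poster describes, here is a toy sketch of a recursive crosstalk canceller. This is purely illustrative and certainly not AudioXD's actual algorithm: each output channel subtracts a delayed, attenuated copy of the opposite channel's previous output, so the path from each speaker to the far ear is cancelled by a feedback loop. The `delay` and `gain` parameters are hypothetical stand-ins for the interaural time difference and head shadowing that real systems derive from HRTFs.

```python
import numpy as np

def recursive_ctc(ear_l, ear_r, delay, gain):
    """Toy recursive crosstalk canceller.

    ear_l/ear_r: desired signals at the left/right ears.
    delay: crosstalk path delay in samples (interaural time difference).
    gain: crosstalk path attenuation (head shadowing), 0 < gain < 1.

    Each speaker output subtracts the delayed, attenuated output of the
    OPPOSITE speaker, forming the "endless cancellation loop": the
    correction itself leaks to the far ear and is cancelled in turn.
    """
    n = len(ear_l)
    out_l = np.zeros(n)
    out_r = np.zeros(n)
    for i in range(n):
        # Crosstalk arriving at each ear from the opposite speaker's past output
        xl = out_r[i - delay] if i >= delay else 0.0
        xr = out_l[i - delay] if i >= delay else 0.0
        out_l[i] = ear_l[i] - gain * xl
        out_r[i] = ear_r[i] - gain * xr
    return out_l, out_r
```

Feeding an impulse to the left ear only produces an alternating, decaying tap sequence across both channels (taps shrinking by `gain` each bounce), which is exactly the recursive cancellation described above; it also hints at why the signal is "damaged" relative to plain stereo.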
Nov 30, 2021 at 3:39 PM Post #10 of 139
Yes, but it requires their hardware, so buyers will likely need to depend on Smyth for repairs and firmware upgrades.

As you know, I too have a bespoke PRIR that I'm extremely happy with. OTOH, I have a full-range two channel system and it would be cool to hear it simulate surround, but not 8K cool.
Would really have to knock my socks off. Currently it's only 9.1.6; I have to wonder if they'll hit customers up for the upgrade to 15.1.8 as well. Could get very expensive very fast, and the way I hear it, it's another board that sits atop the main board in the A16, a "Pi hat" as it were. I don't know if the Smyths are committed to servicing this functionality if something goes wrong. For that matter, if something goes wrong with the speaker outputs, could it become a problem determining whether the unit needs to be sent to France or Northern Ireland? Just a whole lot of questions.

In reading the review, it apparently works with most normal speaker layouts. So if you have your speakers in an equilateral-triangle arrangement, positioned 60 degrees apart as seen from your listening position, that will work with this setup. That's a point in its favor.

So it is a very seductive concept, particularly if you've built an exceptional 2-channel system, to suddenly find you can enjoy 16-24 channels with only one modification to an existing component.

A lot of people also use things like DSP and electronic crossovers for their subs. Would this system work and play well with a system that, for example, uses Dirac Live or Audiolense XO as the heart of a 2.1 or 2.2 stereo system?

Lot of questions to be answered.

So not a hard pass, but still a hard sell.
Nov 30, 2021 at 11:40 PM Post #11 of 139
A report from the main realiser thread, color me curious also!

I’ve listened to the French forum podcast where they discuss the implementation and, while the descriptions are still vague, I realized (pun intended) that they weren’t merely replacing the headphones/HPEQ with loudspeakers/crosstalk cancellation but were actually virtualizing the surround and ceiling channels from mathematical models derived by machine-learning processes.

In essence, they appear to be making both magnitude and phase compensations to the surround channels, not from personalized HRTFs but from heading-specific models that represent a « common denominator across a population of HRTFs », extracted by means of machine learning.

There’s a bit of marketing gimmickry here, and I am dubious on principle, since it’s well known that human hearing relies on the idiosyncratic shape of the head and ears to localize sources above and to avoid the cone of confusion, etc. However, everyone who has tried it was apparently looking for the speakers behind them…

What is fascinating to me here is that AI/machine learning opens the door to extracting patterns from data as complex as an HRTF and distilling generic features that appear to require little brain adaptation for us to believe in the validity of the virtual source location.

That is so far ahead of the mere convolution work the Realiser does from individual data that this speaker edition almost feels like a Trojan horse, with the future being in these math models (we’re probably not too far away from software-based realtime convolution rather than relying on DSP)…

Dec 1, 2021 at 6:25 PM Post #12 of 139
I’ve watched the 2nd YouTube video linked by @GeorgeA and it’s now very clear that the speaker-edition portion just doesn’t rely on personalization at all (and, given the principle of the filters they use, I think it can’t).

I have to admit I find some of the claims hard to believe, as it’s all a bit too 22nd-century to my eye. But who knows, maybe it just shows that this new generation of computer-science engineers (it seems like every other one is studying or working on big-data processing using machine learning / AI methodologies…) is putting us old farts one foot in the grave, along with advances in perceptual-acoustics research :wink:.

Given the second-hand discussion above regarding the use of crosstalk cancellation, it still feels like a kind of binaural feed through speakers (where crosstalk cancellation is then mandatory…). I can only guess that they found a way to not just average but combine measured head responses from a bunch of individuals and come up with correction filters for each loudspeaker of the pair, of the type hrtf(theta)/hrtf(speaker@+/-30 deg).
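A literal reading of that hrtf(theta)/hrtf(speaker@+/-30 deg) ratio could be sketched as below. To be clear, this is my own guess at the structure, not AudioXD's implementation; the real filter design, HRTF data and regularization are undisclosed, and the tiny `eps` here is only a toy guard against division by zero.

```python
import numpy as np

def correction_filter(hrtf_virtual, hrtf_speaker, n_taps=256):
    """Frequency-domain ratio HRTF(theta) / HRTF(speaker at +/-30 deg),
    returned as an FIR filter: it reshapes what the real speaker
    delivers to the ear into what a source at angle theta would deliver.
    A small eps crudely regularizes the division."""
    eps = 1e-6
    ratio = np.fft.rfft(hrtf_virtual, n_taps) / (np.fft.rfft(hrtf_speaker, n_taps) + eps)
    return np.fft.irfft(ratio, n_taps)

def stereo_downmix(channels, filters_l, filters_r):
    """Filter every virtual channel with its left/right correction
    filters and sum everything into one stereo feed for the real pair."""
    left = sum(np.convolve(sig, f)[:len(sig)] for sig, f in zip(channels, filters_l))
    right = sum(np.convolve(sig, f)[:len(sig)] for sig, f in zip(channels, filters_r))
    return left, right
```

If such filters really are listener-independent, the whole chain reduces to a fixed multichannel-to-stereo matrix of FIRs, which is what makes the "pre-rendered downmix track" observation in the next post so pointed.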

In the comments of the YouTube video, several people actually caught the Trojan horse here, and it’s even “worse” than I had imagined :wink:. In particular, since the correction filters don’t depend on the individual, the room or the speakers used (besides their angle relative to the listener), the stereo signal output by the video player could just be an additional “binaural stereo downmix” track generated during the Atmos mix itself… There wouldn’t even be a need for the realtime multi-channel filtering that currently requires AudioXD to use the expensive Realiser for their processing…

As for the binaural recording in the YouTube video, it was immersive, but I absolutely did not get a sense of sounds behind or above me (just fuzzy stuff surrounding me), which several others commented on as well. We’d need to get the stereo feed to the amplifier and play it back on loudspeakers to experience this but, for some reason that’s beyond me, they didn’t make that choice for the YouTube demo…

Dec 2, 2021 at 2:19 AM Post #13 of 139
Thanks @arnaud for your observations. Regarding the personalization when using Realiser A16 SE, I'm somewhat confused. In the 2nd youtube video, at 0:32, the reporter seems to have binaural microphones in his ears. Unfortunately, I don’t understand his comments.

This isn’t the first time that a device designed and manufactured by Smyth Research has been used as the basis for another device. Previously, Dr. AIX, aka Dr. Mark Waldrep, demoed the Yarra 3DX at an audio show using his Realiser A8 as the binauralizer. The sound bar took the analogue binaural output from the Realiser A8 and delivered discrete left and right audio signals (with minimal crosstalk) to listeners. To minimize crosstalk, the Yarra 3DX relied on beamforming technology.

Also, in the 2nd YouTube video, one could see that HP-A (the alternative headphone output of the Realiser A16) was linked to the amp via an RCA cable. That definitely means a binaural feed to the amp and then through the speakers.
Dec 2, 2021 at 4:43 AM Post #14 of 139
Thanks @arnaud for your observations. Regarding the personalization when using Realiser A16 SE, I'm somewhat confused. In the 2nd youtube video, at 0:32, the reporter seems to have binaural microphones in his ears. Unfortunately, I don’t understand his comments.

Actually, it's just that he used the Realiser measurement system to capture the binaural signal that's played back in the video from 8 minutes onward (I forget exactly how you do this, but I recall you can use the Realiser as a measurement front end and output the binaural capture from the digital output; here the ADC apparently ran at 96 kHz).

I still don't understand why they did not just record the binaural feed to the speaker amp instead, as listening to a binaural recording from someone else's head mostly defeats the original purpose... From the videos I've seen, AudioXD is sandboxing their algorithm so that the individual filters for each channel can't be hacked (e.g. it reads the multi-channel stream, converts it to a proprietary binary format, does the processing and outputs a binaural signal).

Along the same lines, even though it's just the processed output, I suppose they're a bit paranoid about this and did not want it included as-is in a YouTube video (having it pass through someone's head definitely scrambles it alright :) ).
Dec 2, 2021 at 3:40 PM Post #15 of 139
So based on what I'm reading here, it's ambiguous whether a PRIR measurement should be taken for the system to work at its optimum. What about head tracking? The BACCH system is very similar with its crosstalk cancellation, and it calls for head tracking for best performance; does anyone know if it's the same for this device?
