Smyth Research Realiser A16
Feb 28, 2018 at 9:33 AM Post #2,072 of 16,011
The TrueDepth sensor on the Apple iPhone X sounds like a good way to model ear shape because it can create a real 3D model of it, not an extrapolation based on 2D pictures. The TrueDepth sensor is very similar to the Xbox Kinect (I think it was developed by the same team).
I have to confess I have no idea how much progress has been made in 3D scanning. My last contact with professionals in the field was a good 10 years ago, so I probably have very outdated ideas on the subject. At the time, even with the super-duper big rig turning around the object to scan it at massive resolution, they still had a guy working full time just to clean up the buggy mesh of the rendered object "manually". So if a handheld smartphone camera can do it properly on its own, we really have come a long way.
 
Mar 1, 2018 at 4:14 PM Post #2,079 of 16,011
That would be nice, but at this rate it won't be before May, and more likely the middle of May if not June at the earliest.
Just like the song:
January, February, March, April, May, I'll be gone till November.
Joking aside, everyone will be very happy with the A16 once they start shipping, so it's well worth the wait. Looking forward to reading, once CanJam LA starts, what exactly has changed/improved in this final commercial version.
 
Mar 3, 2018 at 12:59 PM Post #2,080 of 16,011
First I took photos of both my ears and face with an app to get my custom sound map. Then I sat down in a home theatre equipped with expensive up-firing speakers for Dolby Atmos effects. Finally I had to take another measurement of my ears by inserting two microphones while a test track played.

The additional calibration wasn't exactly needed, but Creative wanted another profile to show how close it could come to mimicking an actual room. The default sound profile for Super X-Fi is taken in a smaller room, which sounds slightly different as well.

From there, Creative started playing a Dolby Atmos demo video, with sounds coming from the left, right and above. I was then told to put on a headset, and Creative repeated the video. I assumed I would have been able to tell the difference, but the audio coming out of the headphones sounded exactly the same as what I'd previously heard.

I thought it was a trick -- the headphones weren't playing anything, but the speakers were still blasting away. So when I took the cans off to find myself listening to nothing but silence, I think I swore out loud. I was completely blown away.

During the subsequent demos, I switched between the calibrated profile and the one the AI picked for my ears, and found that there was a difference between the two.

Wow. This is huge! Smyth had better push a bit to bring the A16 to market or it might be too late.
 
Mar 3, 2018 at 1:35 PM Post #2,081 of 16,011
Definitely not, unless Smyth has a huge database of ear morphologies and measured HRTFs to train with machine learning. This is what THX has (a database), and it will only get better as people submit their ear morphologies and are matched to a good HRTF.

Machine learning is only as good as its database.

I think it’s important to keep in mind that Smyth and the others (Creative, THX) use fundamentally different techniques.

Smyth takes a rather mechanical, raw approach, using individualized measurements of speakers in a room. They will always be recreating a home theater. The advantage is that it can recreate that home theater perfectly (with head tracking), but it’s limited in that it can never surpass a home theater or be flexible enough to recreate a movie set or game environment. I suppose the Realiser’s technique COULD be used to measure and recreate an orchestral hall... if you want to rent out the hall for yourself for 15 minutes and set up speakers in there ^_^'

The Creative Super Mario-Fi... no wait, the Star Wars X-Fi Hologram... no, Sound Blaster Super X-Fi Holodeck (close as I’m gonna get, guys), is trying to take a more abstract approach to understanding binaural audio. This is what Creative (and Auro-3D) have always tried to do, making small improvements over the years. If they can simulate ray-traced audio, the advantage is that the recreation could be as transparent as the recording environment or game environment, not limited by the quantity or placement of speakers. This is also called object-oriented audio. The disadvantage, however, is that personalizing binaural audio has always been a guess, and simply not as accurate as a measured response from a predictable environment. Also, few sources will actually allow object-oriented audio output (game devs or Dolby Atmos would have to work directly with Creative); most of the time we get something already compressed into 7.1 theater surround. Lastly, there’s still no head tracking, and I doubt Creative will get many other headphone manufacturers on board to get their products measured for EQ.
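To make the channel-based vs. object-oriented distinction above concrete, here is a toy sketch: an audio "object" keeps its position as metadata, and the renderer decides speaker gains only at playback time (here, a simple constant-power stereo pan by azimuth). This is purely illustrative and not Creative's or Dolby's actual renderer.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power left/right gains for an object at azimuth_deg,
    where -90 is hard left, 0 is center, +90 is hard right."""
    theta = math.radians((azimuth_deg + 90) / 2)  # map [-90, 90] -> [0, 90] deg
    return math.cos(theta), math.sin(theta)       # (left gain, right gain)

# An object dead center splits power equally between the speakers.
left, right = pan_gains(0.0)
print(round(left, 3), round(right, 3))  # 0.707 0.707
```

The point of the sketch is that the same object metadata could just as easily be rendered to 7.1, to a ray-traced room model, or to binaural HRTFs; the position is never baked into fixed channels.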

Super X-Fi will indeed be an advancement, because it’s inherently cheaper than a Realiser and yet offers a certain level of personalization. Using photos to model the head and ears? Cool! Since we are still wearing our ears and all their nuanced creases and such, I wonder if the photos are just being used to measure the size and angle of the ears, the head width, and maybe the height from the shoulders if Creative is feeling frisky? Measuring those parameters should easily be within the reach of any smartphone with a few pictures and software to interpolate general measurements. It’s pretty doable. I applaud them for continuing to push and advance a holographic sense of sound.

That said, I still have my A16 Kickstarter pre-ordered, and I’m not canceling. Partially because we’ve already been waiting, but also because 7.1 and maybe 9.1 will continue to be a thing and the most available input sources for years to come. Recreating a great home theater experience is the best we’re going to get, and the Realizer will make an extremely convincing recreation of that. I may/probably will pick up Creative’s new product too, because I love surround audio.
 
Mar 3, 2018 at 2:51 PM Post #2,082 of 16,011
At first when I read this I agreed that Smyth need to get their act together.

Then on closer inspection of the article... "The tech is a result of $100 million invested over 20 years of R&D"... bollocks.

When I read more closely I appreciated (I was about to say realised) that this review is no different from all the reviews of other 3D-sound wannabes. I'll assume that the reviewer was impressed in the same way that a random try of the "Out of your head" demos is impressive unless you've heard the Realiser demo. I suspect the reviewer has never heard the Realiser.

The Smyth Realiser is so far ahead of anything close to market. You have to hear it to believe it.

That said, I really think that investors deserve far better than the half-hearted updates we have been given.

As it stands, the early investors have shown incredible patience considering the Kickstarter exceeded the initial ask by far, and they are approaching a year behind the promised schedule. People investigating the product without experiencing it themselves are bound to have doubts, and justifiably so, considering these reviews of much cheaper alternatives that keep popping up.
 
Mar 3, 2018 at 3:17 PM Post #2,083 of 16,011
Smyth takes a rather mechanical, raw approach, and uses individualized measurements of speakers in a room. They will always be recreating a home theater.

I agree with your overall point, but you are selling Smyth short. If the promised online exchange for room measurements delivers, A16 users will be able to build a library of virtual listening rooms. I believe that Smyth sees the A16 as a platform on which they can deliver a lot of additional functionality in the future. It is powerful hardware with four DSPs, multiple DACs, ADCs, HDMI inputs and switching, etc. You get a lot for your money. It is hard for me to believe that a mass-market device from Creative or a typical THX licensee will come anywhere close to the overall capabilities of the A16. Admittedly, a basic feature set may be good enough for most users.

I also have some doubts about how well using a photo will work for a lot of people. Just as the exteriors of people's ears vary, so do ear canals and interior physiology (bones, cartilage, etc.). I understand that Creative plans to have a database of personalized measurements paired with photos. They will analyze the user's photo, generate a dataset of key points/vectors, and then search the personalized-measurements database for the closest match. A lot of factors will affect how well this works: how big a library of measurements they have, how large a dataset they can generate from the photo, how accurate that dataset is, the algorithm for matching the user's dataset to the models, and how "typical" the user's ears are.
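The matching step described above can be sketched minimally: assume (hypothetically) that each photo is reduced to a small feature vector and compared by Euclidean distance against a database of measured HRTF profiles. The feature names, values, and distance metric here are all illustrative assumptions, not Creative's actual pipeline.

```python
import math

# Hypothetical database: each entry pairs a feature vector extracted from
# ear/head photos (say, ear length, concha width, head width, in mm) with
# the ID of a measured HRTF profile.
hrtf_db = [
    ([63.0, 18.5, 152.0], "hrtf_001"),
    ([58.2, 17.0, 148.5], "hrtf_002"),
    ([66.4, 20.1, 156.0], "hrtf_003"),
]

def closest_hrtf(user_features, database):
    """Return the HRTF ID whose feature vector is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda entry: dist(entry[0], user_features))[1]

print(closest_hrtf([64.0, 19.0, 153.0], hrtf_db))  # prints hrtf_001
```

Every concern listed above maps onto this sketch: a sparse database means no near neighbor exists, a noisy photo means `user_features` is wrong, and a naive metric means the "closest" profile may still sound wrong.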

I used to work on software for image-guided surgery. The technology has come a long way since I left the field a decade-plus ago, but I'm pretty sure that accurately finding more than a few data points in the photos will be hugely challenging. Matching the user's dataset to the database of models may also be a pretty big challenge (it is not just a mathematical problem). There may be a reason Creative was not demoing that feature at CES!

In other words, I'll believe it when I hear it. I've heard the A-16 and I believe. :)
 
Mar 3, 2018 at 3:58 PM Post #2,084 of 16,011
Does the X-Fi track your head orientation at all? It doesn't seem like it. If not, the room would turn with your head, which is a crucial difference. It also makes the product much easier to pull off, since tracking accuracy and latency aren't an issue.
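The "room turns with your head" point can be shown in a few lines: to keep a virtual speaker fixed in the room, a head-tracked renderer must subtract the listener's head yaw from the speaker's room azimuth before choosing an HRTF. Without tracking (yaw effectively always 0), the whole soundstage rotates with the head. A minimal sketch, with illustrative angle conventions:

```python
def render_azimuth(speaker_azimuth_deg, head_yaw_deg):
    """Speaker azimuth relative to the head, normalized to [-180, 180).
    Negative angles are to the listener's left."""
    return (speaker_azimuth_deg - head_yaw_deg + 180) % 360 - 180

# Front-left speaker fixed at -30 degrees in the room.
# Listener turns 30 degrees to the left: the speaker is now dead ahead.
print(render_azimuth(-30.0, -30.0))  # 0.0

# Without tracking, head yaw is never reported, so the speaker stays
# at -30 degrees relative to the head no matter where you look.
print(render_azimuth(-30.0, 0.0))    # -30.0
```

This is also why tracking accuracy and latency matter: any error or lag in `head_yaw_deg` makes the virtual room visibly (audibly) swim.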
 
Mar 3, 2018 at 4:15 PM Post #2,085 of 16,011
I'm sure that a £150 dongle is not going to approach the sophistication of the A16, but as a portable option it sounds interesting.
 
