SergioPOE
100+ Head-Fier
I don't think they're prioritizing either one. The internals are exactly the same. Whether you choose the 2U or headstand enclosure, the only difference is cosmetic.
I have to confess I have no idea how much progress we've made in 3D scanning. My last contact with professionals in 3D scanning was a good 10 years ago, so I probably have very outdated ideas on the subject. At the time, even with the super duper big rig rotating around the object it needed to scan for massive resolution, they still had a guy working full time just to fix the buggy mesh of the rendered object "manually". So if a handheld smartphone camera can do it properly on its own, we really have come a long way.

The TrueDepth sensor on the Apple iPhone X sounds like a good way to model ear shape because it can create a real 3D model of it, not an extrapolation based on 2D pictures. The TrueDepth sensor is very similar to the Xbox Kinect (I think it was developed by the same team).
Maybe one takes longer to assemble than the other one?

I don't think they're prioritizing either one. The internals are exactly the same. Whether you choose the 2U or headstand enclosure, the only difference is cosmetic.
I can feel that this time there won't be significant delays and we should get our A16 before May - I hope
Just like the song:

That would be nice, but at this rate it won't be before May, and more likely the middle of May if not June at the earliest.
First I took photos of both my ears and face with an app to get my custom sound map. Then I sat down in a home theatre equipped with expensive up-firing speakers for Dolby Atmos effects. Finally I had to take another measurement of my ears by inserting two microphones while a test track played.
The additional calibration wasn't exactly needed, but Creative wanted another profile to show how close it could come to mimicking an actual room. The default sound profile for Super X-Fi is taken in a smaller room, which sounds slightly different as well.
From there, Creative started playing a Dolby Atmos demo video, with sounds coming from the left, right and above. I was then told to put on a headset, and Creative repeated the video. I assumed I would have been able to tell the difference, but the audio coming out of the headphones sounded exactly the same as what I'd previously heard.
I thought it was a trick -- the headphones weren't playing anything, but the speakers were still blasting away. So when I took the cans off to find myself listening to nothing but silence, I think I swore out loud. I was completely blown away.
During the subsequent demos, I switched between the calibrated profile and the one the AI picked for my ears, and found that there was a difference between the two.
Definitely not, unless Smyth has a huge database of ear morphologies and measured HRTFs to train with machine learning. This is what THX has (a database), and it will only get better as people submit their ear morphologies and are matched to a good HRTF.
Machine learning is only as good as the database.
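For what it's worth, the database-matching idea boils down to something like a nearest-neighbour lookup: measure a new user's ear features, then pick the stored HRTF whose donor's ears are most similar. Here's a toy sketch of that concept in Python - every name and number is invented for illustration, and real systems (THX's included) would use far richer features and models:

```python
import math

# Hypothetical toy database: ear-shape feature vectors (say, pinna height,
# concha depth, and ear-canal offset in mm) mapped to a measured HRTF label.
# All values here are made up for illustration.
HRTF_DATABASE = {
    "hrtf_A": [62.0, 18.5, 9.1],
    "hrtf_B": [58.3, 16.2, 8.4],
    "hrtf_C": [65.7, 20.1, 10.3],
}

def match_hrtf(ear_features, database=HRTF_DATABASE):
    """Return the label of the stored HRTF whose donor ear features are
    closest (Euclidean distance) to the measured ones."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda label: dist(database[label], ear_features))

# A new user's features, e.g. extracted from photos or a TrueDepth scan:
print(match_hrtf([59.0, 16.0, 8.0]))  # prints "hrtf_B", the closest match
```

The point of the "only as good as the database" remark falls out of this directly: the match can never be better than the nearest entry, so more submitted ear morphologies mean closer neighbours and better-fitting HRTFs.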
Smyth takes a rather mechanical, raw approach, and uses individualized measurements of speakers in a room. They will always be recreating a home theater.