Smyth Research Realiser A16
Nov 6, 2017 at 6:35 PM Post #1,306 of 16,011
I see it. Their flagship was approximately 50,000. It remains at the same price, but since the Yarra entered the market, they started selling the BACCH-dSP at approximately 1/10 of that price.

So let me rephrase: after they capture the maximum profit from early adopters' high valuation at a monopoly/oligopoly price, they are going to further segment their product portfolio and start practicing competitive pricing with their basic product.
Unfortunately Smyth did not do this with the A8, so I highly doubt they will do it with the A16, which is unfortunate because a mass-market penetration strategy could probably make them a lot more money than skimming the market with an expensive box.
 
Nov 6, 2017 at 7:19 PM Post #1,307 of 16,011
Unfortunately Smyth did not do this with the A8, so I highly doubt they will do it with the A16, which is unfortunate because a mass-market penetration strategy could probably make them a lot more money than skimming the market with an expensive box.

IMHO, consumers willing to acquire a PRIR are a very small niche.

So I don't know if selling the A8 at a cheaper price would increase the scale, because the great majority simply don't care, are not aware of what personalization improves, or are not willing to go through the hassle of calibrating a gadget.

If I were Smyth Research or Theoretica, I would sell a product priced for mass consumption only if the personalization were applied seamlessly, with no user action or a very simple and smooth one: for instance, automatically scanning the user's head, or acquiring their biometrics from a photograph and comparing them against an HRTF database to use a close-enough HRTF.
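As a toy illustration of that database-matching idea, the selection step is just a nearest-neighbor lookup. Everything below is invented for illustration: the feature set, the numbers, and the subject names are hypothetical, and a real system would extract the features from a head scan or photograph.

```python
import numpy as np

# Hypothetical anthropometric features per database subject:
# (head width, head depth, pinna height, pinna width) in cm.
database = {
    "subject_A": np.array([15.2, 19.1, 6.4, 3.1]),
    "subject_B": np.array([14.1, 18.0, 5.9, 2.8]),
    "subject_C": np.array([16.0, 20.2, 6.8, 3.4]),
}

def closest_profile(user_features, database):
    """Pick the subject whose features are nearest in Euclidean distance;
    that subject's measured HRTF would then be used for the listener."""
    return min(database, key=lambda s: np.linalg.norm(database[s] - user_features))

user = np.array([14.3, 18.2, 6.0, 2.9])
match = closest_profile(user, database)  # -> "subject_B"
```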

And believe me, a lot of researchers and companies are trying to do that. Here are two examples:

3D audio is the secret to HoloLens' convincing holograms

(...)

The HoloLens audio system replicates the way the human brain processes sounds. "[Spatial sound] is what we experience on a daily basis," says Johnston. "We're always listening and locating sounds around us; our brains are constantly interpreting and processing sounds through our ears and positioning those sounds in the world around us."

The brain relies on a set of aural cues to locate a sound source with precision. If you're standing on the street, for instance, you would spot an oncoming bus on your right based on the way its sound reaches your ears. It would enter the ear closest to the vehicle a little quicker than the one farther from it, on the left. It would also be louder in one ear than the other based on proximity. These cues help you pinpoint the object's location. But there's another physical factor that impacts the way sounds are perceived.

Before a sound wave enters a person's ear canals, it interacts with the outer ears, the head and even the neck. The shape, size and position of the human anatomy add a unique imprint to each sound. The effect, called Head-Related Transfer Function (HRTF), makes everyone hear sounds a little differently.

These subtle differences make up the most crucial part of a spatial-sound experience. For the aural illusion to work, all the cues need to be generated with precision. "A one-size-fits-all [solution] or some kind of generic filter does not satisfy around one-half of the population of the Earth," says Tashev. "For the [mixed reality experience to work], we had to find a way to generate your personal hearing."

His team started by collecting reams of data in the Microsoft Research lab. They captured the HRTFs of hundreds of people to build their aural profiles. The acoustic measurements, coupled with precise 3D scans of the subjects' heads, collectively built a wide range of options for HoloLens. A quick and discreet calibration matches the spatial hearing of the device user to the profile that comes closest to his or hers.

(...)
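The timing cue described in the excerpt can be put in numbers with the classic Woodworth spherical-head approximation; a minimal sketch, where the head radius and speed of sound are conventional textbook values, not anything from the article:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (seconds) for a distant source, using the
    classic Woodworth spherical-head approximation.
    azimuth_deg: 0 = straight ahead, 90 = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

itd_front = woodworth_itd(0.0)   # 0: sound reaches both ears together
itd_side = woodworth_itd(90.0)   # roughly 0.65 ms head start for the near ear
```

A real HRTF folds this delay cue together with the level and spectral cues the article describes; the model above isolates only the timing part.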

A method for efficiently calculating head-related transfer functions directly from head scan point clouds
Authors: Sridhar, R., Choueiri, E. Y.
Publication: 143rd Convention of the Audio Engineering Society (AES 143)
Date: October 8, 2017

A method is developed for efficiently calculating head-related transfer functions (HRTFs) directly from head scan point clouds of a subject using a database of HRTFs, and corresponding head scans, of many subjects. Consumer applications require HRTFs be estimated accurately and efficiently, but existing methods do not simultaneously meet these requirements. The presented method uses efficient matrix multiplications to compute HRTFs from spherical harmonic representations of head scan point clouds that may be obtained from consumer-grade cameras. The method was applied to a database of only 23 subjects, and while calculated interaural time difference errors are found to be above estimated perceptual thresholds for some spatial directions, HRTF spectral distortions up to 6 kHz fall below perceptual thresholds for most directions.

Errata:
  1. In section 3.2 on page 4, the last sentence of the first paragraph should read “…and simple geometrical models of the head…”.
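In the spirit of the abstract (not the paper's actual matrices), the "efficient matrix multiplications" idea reduces to a single linear map from spherical-harmonic shape coefficients to an HRTF estimate. The dimensions are placeholders and the weights are random, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions: 36 spherical-harmonic shape coefficients
# (order 5) describing the head scan, 128 frequency bins for one
# HRTF direction. W stands in for a mapping that, in the paper's
# setting, would be derived from a database of scans and measured
# HRTFs; here it is random, for illustration only.
N_SH, N_FREQ = 36, 128
W = rng.standard_normal((N_FREQ, N_SH))

def hrtf_from_shape(sh_coeffs, W):
    """One matrix multiplication turns shape coefficients into an
    HRTF estimate for a given direction."""
    return W @ sh_coeffs

sh_coeffs = rng.standard_normal(N_SH)  # would come from fitting the point cloud
hrtf_estimate = hrtf_from_shape(sh_coeffs, W)
```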

Instead of comparing with HRTF samples from a database, a more hardcore method would be calculating the HRTF directly from the head scan, but as you can deduce from those examples, it seems intractable right now.

Anyway, thank God they don't need you and me to help them run their business, otherwise we could drive them nuts. :L3000:
 
Nov 6, 2017 at 8:04 PM Post #1,308 of 16,011
IMHO, consumers willing to acquire a PRIR are a very small niche.

So I don't know if selling the A8 at a cheaper price would increase the scale, because the great majority simply don't care, are not aware of what personalization improves, or are not willing to go through the hassle of calibrating a gadget.

If I were Smyth Research or Theoretica, I would sell a product priced for mass consumption only if the personalization were applied seamlessly, with no user action or a very simple and smooth one: for instance, automatically scanning the user's head, or acquiring their biometrics from a photograph and comparing them against an HRTF database to use a close-enough HRTF.

And believe me, a lot of researchers and companies are trying to do that. Here are two examples:

Instead of comparing with HRTF samples from a database, a more hardcore method would be calculating the HRTF directly from the head scan, but as you can deduce from those examples, it seems intractable right now.

Anyway, thank God they don't need you and me to help them run their business, otherwise we could drive them nuts. :L3000:
All of which indicates to me that the Smyth is but the first stab at the creation of personalized binaural speaker-room simulations. Future offerings from other companies, or even possibly Smyth itself, will indeed use biometrics scanned from a phone or similar device, marry that data to a room/speaker transfer function, and deliver a binaural multichannel data stream to a PC media player and out to a DAC/amp. It may take 5-10 years, but it will happen.
 
Nov 7, 2017 at 8:34 AM Post #1,309 of 16,011
All of which indicates to me that the Smyth is but the first stab at the creation of personalized binaural speaker-room simulations. Future offerings from other companies, or even possibly Smyth itself, will indeed use biometrics scanned from a phone or similar device, marry that data to a room/speaker transfer function, and deliver a binaural multichannel data stream to a PC media player and out to a DAC/amp. It may take 5-10 years, but it will happen.
And instead of simulating a room-speaker combination, what about doing direct binaural rendering (using personal HRTFs) of 3D audio objects from object-based audio formats...
I wonder how to deal with the acoustics of the virtual space (in the movie or whatever) in such an approach; would reflections be separate objects?
 
Nov 7, 2017 at 8:52 AM Post #1,310 of 16,011
well there is clearly a case to be made that people aren't interested because they don't know about it, what it does, or simply how wrong headphone listening can be with typical albums. cheaper products would probably drag in a few more curious people, but ultimately it's education that will bring a turn in the headphone hobby. it is my firm belief that this turn is unavoidable; someday we'll all have a custom compensation in our cellphone or in our BT headset. any non-apocalyptic scenario of the future should lead there.
it is also my belief that the audiophile hobby is advocating ignorance with such strength and self-righteousness on any relevant technical matter that it could still take 20 years for what should be happening now to become reality.:rage:

maybe chaining us to a physical box is also seen as a form of copyright and crack protection? we know that no software is safe if someone really wishes to get it for free.
 
Nov 7, 2017 at 10:26 AM Post #1,311 of 16,011
maybe chaining us to a physical box is also seen as a form of copyright and crack protection? we know that no software is safe if someone really wishes to get it for free.

Programming for general-purpose hardware raises a whole bunch of complications and performance issues: drivers, latency, resource contention, etc. Not to say it can't be done, but it's a very different beast from running on custom/dedicated silicon; see OOYH, which performs either OK or terribly depending on these issues. Microsoft killed positional audio with DirectX when sound-card DSPs became unusable for gaming; otherwise we would still have audio DSPs and positional audio like we did 15 years ago.
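For a sense of where the algorithmic part of that latency comes from, here is a minimal overlap-add convolver of the kind a software binaural renderer runs. The block size and signals below are arbitrary, and real engines add partitioning plus driver/OS buffers on top; even so, with a 256-sample block at 48 kHz the convolver cannot emit anything until ~5.3 ms of input has arrived.

```python
import numpy as np

def overlap_add_convolve(x, h, block=256):
    """Block-based FFT convolution (overlap-add). Output is produced one
    block at a time, so the algorithm alone adds block-size latency
    before any driver or OS buffering."""
    n_fft = 1
    while n_fft < block + len(h) - 1:
        n_fft *= 2
    H = np.fft.rfft(h, n_fft)
    out = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        y = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)
        out[start:start + len(seg) + len(h) - 1] += y[:len(seg) + len(h) - 1]
    return out

rng = np.random.default_rng(1)
x, h = rng.standard_normal(2048), rng.standard_normal(128)
ref = np.convolve(x, h)           # exact reference
ola = overlap_add_convolve(x, h)  # matches ref up to float error
```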

the audiophile hobby is advocating ignorance with such strength and self-righteousness on any relevant technical matter that it could still take 20 years for what should be happening now to become reality.:rage:

Perhaps. I think the main issue is whether the porn industry will find a use for it, which is the golden ticket for innovative technology becoming mainstream. VR needs positional audio and porn needs VR, so perhaps the stars are finally aligning!
 
Nov 7, 2017 at 10:33 AM Post #1,312 of 16,011
And instead of simulating a room-speaker combination, what about doing direct binaural rendering (using personal HRTFs) of 3D audio objects from object-based audio formats...
One way is to interpolate between HRTFs to put a virtual speaker at each object's exact location and move it as the object moves. This would improve localization and eliminate artifacts such as crosstalk and comb-filtering, instead of putting phantom objects between fixed virtual speakers. The Realiser already interpolates for head-tracking, and maybe it can also interpolate for objects.
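A minimal sketch of that interpolation idea; the two "measured" impulse responses here are toy pure delays, not real HRIRs:

```python
import numpy as np

# Toy "measured" head-related impulse responses (left ear) at
# azimuths 30 and 40 degrees: pure delays standing in for real HRIRs.
hrir_30 = np.zeros(64); hrir_30[10] = 1.0
hrir_40 = np.zeros(64); hrir_40[12] = 0.9

def interpolate_hrir(az, az_lo, h_lo, az_hi, h_hi):
    """Linear cross-fade between the two nearest measured directions."""
    w = (az - az_lo) / (az_hi - az_lo)
    return (1.0 - w) * h_lo + w * h_hi

def render_object(mono, az):
    """Place a mono object at azimuth az by convolving it with the
    interpolated impulse response (one ear shown)."""
    return np.convolve(mono, interpolate_hrir(az, 30, hrir_30, 40, hrir_40))

out = render_object(np.ones(256), 35.0)  # object halfway between measurements
```

One caveat: with real HRIRs, naively cross-fading in the time domain can itself comb-filter, because the two responses carry different delays; practical interpolators usually split off the interaural delay and interpolate a minimum-phase part separately.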
 
Nov 7, 2017 at 11:17 AM Post #1,313 of 16,011
And instead of simulating a room-speaker combination, what about doing direct binaural rendering (using personal HRTFs) of 3D audio objects from object-based audio formats...
I wonder how to deal with the acoustics of the virtual space (in the movie or whatever) in such an approach; would reflections be separate objects?
Don't know. But I'm certain an ideal space could be modeled.
 
Nov 7, 2017 at 2:21 PM Post #1,314 of 16,011
Programming for general-purpose hardware raises a whole bunch of complications and performance issues: drivers, latency, resource contention, etc. Not to say it can't be done, but it's a very different beast from running on custom/dedicated silicon; see OOYH, which performs either OK or terribly depending on these issues. Microsoft killed positional audio with DirectX when sound-card DSPs became unusable for gaming; otherwise we would still have audio DSPs and positional audio like we did 15 years ago.

Perhaps. I think the main issue is whether the porn industry will find a use for it, which is the golden ticket for innovative technology becoming mainstream. VR needs positional audio and porn needs VR, so perhaps the stars are finally aligning!
you might have just explained why audio isn't evolving as fast as other techs. porn doesn't care much about audio. ^_^
thanks for making me laugh.
 
Nov 7, 2017 at 3:01 PM Post #1,315 of 16,011
Not to derail the conversation... but what are everyone's must have accessories for the A16? How have you prepared for its arrival?

I'd like to have the HD800 or the S version by then, and at least the Xbox One S, since my Mac doesn't have any options for 4K or Atmos playback without issues. I also ordered the Yarra 3DX soundbar. Should be a pretty incredible experience.
 
Nov 7, 2017 at 3:23 PM Post #1,316 of 16,011
I think we're looking at which headphones can reasonably mimic the loudspeakers they are simulating. Smyth Research uses the base-model Stax earspeaker system for their demos. The function of the headphone has changed from what we previously looked for: rather than the individual flavour of the headphone, the Realiser simply needs an accurate transport to fool the ears into believing they're listening to a top-flight speaker system. The HD800 will do that for sure.
I have a set and many more, so I've no axe to grind on the subject. There will probably be plenty of low-distortion headphones that'll provide a similar experience.
 
Nov 7, 2017 at 5:12 PM Post #1,317 of 16,011
And instead of simulating a room-speaker combination, what about doing direct binaural rendering (using personal HRTFs) of 3D audio objects from object-based audio formats...
I wonder how to deal with the acoustics of the virtual space (in the movie or whatever) in such an approach; would reflections be separate objects?

Don't know. But I'm certain an ideal space could be modeled.

BACCH-3dm
A central feature of BACCH-dSP is its powerful and intuitive 3D mixer: the BACCH-3dm.

BACCH-3dm produces vivid and realistic binaural mixes (equivalent to binaural recordings produced through dummy heads or humans with in-ear microphones) from a collection of multiple audio sources such as the tracks of a multi-track recording.

BACCH-3dm’s visually striking graphical interface allows positioning multiple sound sources (each corresponding to a bus line from your DAW, a microphone signal, an audio file or a track from a multi-track file) precisely in 3D space. BACCH-3dm has the following additional features:

  1. A large library of human and dummy head HRTFs.
  2. Accurate real-time early reflections and reverb calculations based on user-controlled room geometry and a wide range of wall materials.
  3. Real-time six-degrees-of-freedom (X, Y, Z, Pitch, Yaw and Roll) navigation of the 3D sound field using the keyboard or a Playstation controller.
  4. Synchronized output writing with multi-track file playback for producing final binaural mixes with or without BACCH and BACCH-HP filters.
https://www.theoretica.us/bacch-dsp/
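Feature 2 in that list (early reflections from user-controlled room geometry) is typically done with an image-source model. A first-order-only sketch, with a single made-up uniform absorption coefficient instead of per-wall materials:

```python
import math

C = 343.0  # speed of sound, m/s

def first_order_reflections(src, lis, room, absorption=0.3):
    """First-order image-source model in a rectangular room.
    src, lis: (x, y, z) positions in metres; room: (Lx, Ly, Lz) dimensions.
    Returns (delay_s, gain) for the direct path plus six one-bounce
    reflections, with 1/distance spreading and one absorption hit."""
    def path(point, loss=1.0):
        d = math.dist(point, lis)
        return d / C, loss / d
    paths = [path(src)]  # direct sound
    for axis in range(3):
        for wall in (0.0, room[axis]):
            image = list(src)
            image[axis] = 2.0 * wall - image[axis]  # mirror source across wall
            paths.append(path(image, loss=1.0 - absorption))
    return paths

paths = first_order_reflections((2, 3, 1.5), (4, 3, 1.5), (6, 5, 3))
```

Higher-order reflections just mirror the images again, and a reverb tail takes over where discrete images become too dense to matter.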
 
Nov 7, 2017 at 5:23 PM Post #1,318 of 16,011
Not to derail the conversation... but what are everyone's must have accessories for the A16? How have you prepared for its arrival?
Maybe something for feeling the bass, like a SubPac and/or Taction Technology Kannon headphones. But I don't know about the overall sound quality of the Kannons. I would love to see the tactile transducer from the Kannons integrated into a higher-quality headphone.
 
Nov 7, 2017 at 5:32 PM Post #1,319 of 16,011
well there is clearly a case to be made that people aren't interested because they don't know about it, what it does, or simply how wrong headphone listening can be with typical albums. cheaper products would probably drag in a few more curious people, but ultimately it's education that will bring a turn in the headphone hobby.

That education is perhaps the main cost.

Ask Smyth and they will tell you that demos at meets translate well into sales.

But people usually don’t take such cost into consideration when they see the box or the software.

So in order to create curiosity your cheaper product needs to be personalized, and paradoxically your user won't undertake a personalization unless he/she already knows the magnitude of the improvement, or the personalization is automatic (or a breeze to do).

Seamless personalization and high-order ambisonics or binaural masters may allow listeners to be fooled by 3D reproduction. That was in fact the subject of a recent keynote by Dr. Choueiri:

Keynote Speaker
This year’s Keynote Speaker is Prof. Edgar Choueiri of Princeton University. The title of his address is, "Fooled by Audio." How far are we from having reproduced or synthesized sound that is truly indistinguishable from reality? Is this laudable goal still the receding mirage it has been since the birth of audio, or are we on the cusp of a technical revolution - the VR/AR audio revolution? I will report on recent advances in virtual and augmented reality audio research from around the world, and focus on critical areas in spatial audio, synthesized acoustics, and sound field navigation in which recent breakthroughs are bringing us quicker and closer to being truly fooled by audio.
http://www.aes.org/events/143/specialevents/?ID=5522

I guess people underestimate the knowledge of the creators and the potential of the products described. And I am talking about mathematics, acoustics, electronic engineering, digital signal processing, programming and, last but not least, psychoacoustics.
 
Nov 7, 2017 at 6:19 PM Post #1,320 of 16,011
That education is perhaps the main cost.

Ask Smyth and they will tell you that demos at meets translate well into sales.

But people usually don’t take such cost into consideration when they see the box or the software.

So in order to create curiosity your cheaper product needs to be personalized, and paradoxically your user won't undertake a personalization unless he/she already knows the magnitude of the improvement, or the personalization is automatic (or a breeze to do).

Seamless personalization and high-order ambisonics may allow listeners to be fooled by 3D reproduction. That was in fact the subject of a recent keynote by Dr. Choueiri:



I guess people underestimate the knowledge of the creators and the potential of the products described. And I am talking about mathematics, acoustics, electrical engineering, digital signal processing, programming and, last but not least, psychoacoustics.
sure, my point/guess was more about how, in a hobby where people go crazy over ultrasonic content and jitter down at -120dB, the flaws in headphone stereo should by comparison stop audiophiles from sleeping at night. yet people aren't educated about that issue, and even less about possible methods to fix it. instead you read reviews about a headphone with great soundstage...:sweat: talk about great brain plasticity.
there cannot be a market if people don't know they have a problem in the first place, because then there is no demand. not that I want to diminish the work involved in making such gear (I've been fooling around with impulses and convolution for a while now, admittedly only in stereo), but I'm of the opinion that when there is high demand for a product, someone will come and find a way to make it at a price all those people can pay. how capitalist of me ^_^.
 