Smyth Research Realiser A16
Dec 25, 2019 at 6:07 AM Post #7,666 of 15,986
Merry christmas to all.

@dsperber: That's good news. When I tried magnetic stabilisation I was not satisfied: the soundstage constantly shifted to the left or to the right, and I had to recenter it every 10 minutes or so by pressing the button on the head top.
It could be that the head top (or both of the ones I have) had already lost its gyro calibration and I didn't know it back then, but I don't think so. When calibration data is lost the soundstage starts to rotate a full 360° all the time, and that was not the case.

I'm a bit confused about the new HT firmware. I thought it would eliminate the need for the recalibration in the fridge, but no: you still need to do it, and now the fridge isn't enough, it must be the freezer?
When it comes to liquid nitrogen, I'm out...

Both my head tops constantly lose this calibration data; I've already put them in the fridge several times. Now I've updated the HT FW on both, but one of them has again lost its calibration data. I'm a bit worried about putting it in the freezer... I will try the fridge first.

On headphones: for over-ear headphones where you can use the mics for auto HPEQ, the frequency response gets EQed, so basically all headphones should sound similar. But only above 500 Hz! Below 500 Hz the original frequency response of the headphone is used, so especially the bass capabilities of the headphone will be important. Since there is now a bass (and treble) shelving filter, one can increase or decrease the bass below a variable frequency, so that is a workaround.
I'm pretty sure there will still be slight differences in tonality even after HPEQ, though. Therefore one could use the manSPKR process to compare directly to the speakers and EQ manually.
Localisation of the virtual speakers will be similar and very good with all auto-HPEQed over-ear headphones, I think. A perceived "natural" soundstage of a headphone won't play a role anymore, I'd say (for me personally no headphone ever had any kind of soundstage; it's always all in my head and I don't like it).
If you perceive some kind of natural soundstage with a bit of externalisation with a headphone, that is because it coincidentally matches your HRTF better than another one.
For 100% matching of the HRTF we use the Realiser though...
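The bass (and treble) shelving filter mentioned above is a standard DSP building block. As an illustration only (the A16's actual filter implementation is not public, and the function name here is my own), this is a low-shelf biquad built from the well-known RBJ Audio EQ Cookbook formulas:

```python
import math

def low_shelf(fs, f0, gain_db, slope=1.0):
    """Low-shelf biquad coefficients (RBJ Audio EQ Cookbook).
    Boosts or cuts by gain_db below roughly f0; returns (b, a) with a0 == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    cosw = math.cos(w0)
    shelf = 2 * math.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * cosw + shelf)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - shelf)
    a0 = (A + 1) + (A - 1) * cosw + shelf
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - shelf
    return [b / a0 for b in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

# +6 dB of bass below ~200 Hz at a 48 kHz sample rate:
b, a = low_shelf(48_000, 200, 6.0)
dc_gain = sum(b) / sum(a)  # amplitude gain at DC, ~2x (about +6 dB)
```

The key property is that the gain settles at the requested value at DC and returns to unity well above the corner frequency, which matches the "boost or cut below a variable frequency" behaviour described above.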
 
Dec 25, 2019 at 8:53 AM Post #7,667 of 15,986
@dsperber: That's good news. When I tried magnetic stabilisation I was not satisfied. Soundstage constantly shifted to the left or to the right, I had to recenter it every 10 minutes or so by pressing the button on the head top.
It could be that the head top (or both of the ones I have) had already lost its gyro calibration and I didn't know it back then, but I don't think so. When calibration data is lost the soundstage starts to rotate a full 360° all the time, and that was not the case.

I'm a bit confused about the new HT firmware. I thought it would eliminate the need for the recalibration in the fridge, but no: you still need to do it, and now the fridge isn't enough, it must be the freezer?
When it comes to liquid nitrogen, I'm out...

Both my head tops constantly lose this calibration data; I've already put them in the fridge several times. Now I've updated the HT FW on both, but one of them has again lost its calibration data. I'm a bit worried about putting it in the freezer... I will try the fridge first.
From the very last section of the 1.80 Release Notes it seems important to get the temperature of the HT down below 5 °C, so it really depends on the temperature of your refrigerator. Mine is at 36 °F (about 2 °C), so that would have worked as well. No harm done using the freezer, which might simply get the HT colder faster, and would thus also take a bit longer to come back up to the 8 °C at which the calibration actually begins:

"If the LED atop your head tracker no longer illuminates (RED or GREEN) when plugged into the A16, or
after installing the new 1.20 Nov 08 2019 HT firmware, it flashes red 10 times after power up, then it
requires thermal re-calibration as described below. If you have not already done so, please ensure you
have updated the HT firmware to rev: 1.20 Nov 08 2019 or later before proceeding.

Step 1) Unplug the head tracker and place in a freezer (-18 deg C) for 20-30 minutes to ensure the starting
temperature of the head tracker is below 5 deg C. etc., etc."

In fact my HT flashed red 10 times after power-up, not green as it should. I decided that's because I had NEVER calibrated it as I should have. Since I had also updated the HT firmware to 1.20, I felt I might as well do the recommended calibration now, as it couldn't hurt. Plus, I should have done it long ago and never did.

As far as choosing magnetic stabilization and possible drift over time (requiring resetting +0 by pressing the button every so often): I haven't actually had a multi-hour viewing/listening session since installing 1.80/1.20 and using HT with magnetic in effect. So, to be fair, I'm not yet really justified in saying "it's a winner" and has no negatives. I will be in a better position to comment intelligently after a few days of usage.

Before (when I was using the ST sitting on top of my Panny TV) I had selected "optical" stabilization, not really knowing what I was doing, and in fact I confess I wasn't even aware of the +0 reset button on top of the HT! And again, I had never done the initial thermal calibration of the HT as I should have.

So, I will reserve further comment on "magnetic" being wonderful until I've had a day or two to actually use it and see how the new 1.20 HT firmware may have improved things. For sure I think it should be better than "none", and I can no longer use "optical" or the new "legacy optical A8", since my LG C9 is simply too thin to securely tape the ST to its top edge.
 
Dec 25, 2019 at 11:52 AM Post #7,668 of 15,986
Hello everyone,

I tried the autoEQ a couple of days ago, and I did the manLOUD this afternoon; I really dreaded that part! (I haven't upgraded my A16 to 1.80 yet, waiting for more information from other users about possible bugs.)

About the manLOUD:
- I understood the 1st part from 500 Hz onwards and proceeded by comparison as explained. When coming to the highest subbands (green colored), I tend not to hear the signal very much, or not at all except by boosting the volume with ADJ+ to the maximum, resulting in unpleasant noises. My manLOUD was below average accordingly.
The tutorial says: ..."the lower subbands will have the greatest effect on the final result... the method assumes average hearing. Older subjects will tend to over-boost the highest subbands, making the final filter sound odd"...

- You might as well know that I am an older subject (68...). I went to the doctor for an audio check-up 2 weeks ago and he says my hearing is good for my age...

- Back to the highest subbands: is anyone having a similar problem? And if yes, is there a way to circumvent it?

Thanks guys.
 
Dec 25, 2019 at 12:47 PM Post #7,669 of 15,986
Hello everyone,
- Back to the highest subbands: is anyone having a similar problem? And if yes, is there a way to circumvent it?

I'd just work my way up the bands, and once you get to the ones that seem hardest to adjust, I'd let those stay on the line or just bump them up by one or two points. There are many ways to work through the bands; for me it was easier to start by taking each band way down by clicking the remote, then bringing the volume of each band back up while rocking back and forth between the previous band and the one I'm adjusting. I can hear the changes better that way. I work through all of the bands, then go back and fine-tune them one more time. Finally I examine the graph at the upper end and adjust those bands down by a couple of clicks for insurance. You might try it and let us know if that works for you. :)
 
Dec 25, 2019 at 3:30 PM Post #7,670 of 15,986
I'd just work my way up the bands, and once you get to the ones that seem hardest to adjust, I'd let those stay on the line or just bump them up by one or two points. There are many ways to work through the bands; for me it was easier to start by taking each band way down by clicking the remote, then bringing the volume of each band back up while rocking back and forth between the previous band and the one I'm adjusting. I can hear the changes better that way. I work through all of the bands, then go back and fine-tune them one more time. Finally I examine the graph at the upper end and adjust those bands down by a couple of clicks for insurance. You might try it and let us know if that works for you. :)

Thanks a lot. It does make sense, and it avoids boosting too high and reaching a level that was never meant to be reached, which keeps the results safe.

Thank you, Gene.

PS: I believe I will have to make several attempts. I will first try 2 points up and see what comes out as a result. Then it will be either on the line, like you said, or 3 points up... and I will keep you posted, of course.
 
Dec 25, 2019 at 5:15 PM Post #7,671 of 15,986
No, that doesn't matter. A constant extra delay, added equally to all (both, in this case) channels, doesn't change the perceived distance, and the Realiser will also not "calculate" the distance to the speakers from this delay (it couldn't, because there can always be extra delays in a measured system, for example individual speaker delays set in a surround receiver).
(If, for example, reflections were delayed relative to the direct sound, that could change the perceived distance, but that is not the case here.)
Just to clarify, even a differential delay added to some speakers won't change the perceived distance, it'll just mess up the illusion if an object pans from one speaker to another one with a different delay.

Since this issue was discussed here before, try a little Gedankenexperiment: imagine you're listening to a complete CD and, without your knowledge, someone else presses pause, and then very briefly later play, in one of the gaps between two tracks. Is there any reason to assume that the speakers would suddenly appear further away after this has been done? For differential delay, imagine playing two different mono audio tracks through two speakers, and only one of them is paused very briefly. Why would you assume that the perceived distance of the delayed speaker would change?

Edit: I hope that the A16 already has or in a future firmware update will get a function that removes any delay from the PRIR and HPEQ, so that one doesn't have to muck about with the delay settings to synchronize image and sound whenever one changes from one room to another.
 
Dec 25, 2019 at 6:52 PM Post #7,672 of 15,986
I already asked Stephen about delays. As far as I understood, we don't need to worry about delays: all speakers will have the same delay, so they will be at the same "distance". Your brain doesn't recognize speaker distance by delay (how could it, it doesn't know when the sound was generated in the player); what we recognize is the room acoustics, the ratio between direct and indirect sound.

I just quote Stephen on this:
Presently the A16 does not render speaker delays. I have not got round to that yet. So what that means is, all speakers you measure are placed at exactly the same distance. In other words if your head is in the centre of a sphere, then the speakers are all on the surface of that sphere. This applies even if the speakers come from different PRIRs, the timing between them is always matched.

The PRIR measurement does calculate the distance from speaker to head, but as I said, presently I do not use that information.
And:
You do not need to worry about delays; they will all be the same even if the speaker is physically closer. The only reason you would perceive a difference in distance is the ratio of direct to reverberant energy in the PRIR. The greater the reverberant energy, the further away the speaker appears, even if its distance does not actually change. In the A8 I had a proximity control that changed this ratio. The effect is easy to hear.
The background of my question was our planned measuring session, where we will create a 9.1.4 room by measuring many 2-channel PRIRs, because if we want to use the Dirac Live and bass management of the AVR we can only use the 2 front channels.
As there are real ceiling speakers, we will use 2 of them and measure them 2 times (connected as stereo Front L and R). Those speakers would need about 3 ms of delay because they are closer to the listener, and I was not sure if the AVR would apply this delay in stereo mode. But as it seems now, we don't have to worry about delaying those speakers.

Thanks again, Stephen.
 
Dec 25, 2019 at 9:05 PM Post #7,673 of 15,986
I already asked Stephen about delays. As far as I understood, we don't need to worry about delays: all speakers will have the same delay, so they will be at the same "distance". Your brain doesn't recognize speaker distance by delay (how could it, it doesn't know when the sound was generated in the player); what we recognize is the room acoustics, the ratio between direct and indirect sound.

I just quote Stephen on this:

And:

The background of my question was our planned measuring session, where we will create a 9.1.4 room by measuring many 2-channel PRIRs, because if we want to use the Dirac Live and bass management of the AVR we can only use the 2 front channels.
As there are real ceiling speakers, we will use 2 of them and measure them 2 times (connected as stereo Front L and R). Those speakers would need about 3 ms of delay because they are closer to the listener, and I was not sure if the AVR would apply this delay in stereo mode. But as it seems now, we don't have to worry about delaying those speakers.

Thanks again, Stephen.
This info is gold...
I hope that the A16 will take the distance of the measured speakers into consideration in the future, to add even more realism to our PRIRs...
 
Dec 25, 2019 at 9:13 PM Post #7,674 of 15,986
"Presently the A16 does not render speaker delays. I have not got round to that yet. So what that means is, all speakers you measure are placed at exactly the same distance. In other words if your head is in the centre of a sphere, then the speakers are all on the surface of that sphere. This applies even if the speakers come from different PRIRs, the timing between them is always matched.
The PRIR measurement does calculate the distance from speaker to head, but as I said, presently I do not use that information."

So from what I understood so far, the way the A16 renders our PRIR speaker field at the moment is more like a DTS:X setup than a Dolby Atmos type... The DTS:X sound field is based on a spherical speaker setup with the sweet spot in a central position; for Atmos, the L, C and R are a bit farther away from the other speakers...
 
Dec 25, 2019 at 10:35 PM Post #7,675 of 15,986
"Presently the A16 does not render speaker delays. I have not got round to that yet. So what that means is, all speakers you measure are placed at exactly the same distance. In other words if your head is in the centre of a sphere, then the speakers are all on the surface of that sphere. This applies even if the speakers come from different PRIRs, the timing between them is always matched.
The PRIR measurement does calculate the distance from speaker to head, but as I said, presently I do not use that information."

So from what I understood so far, the way the A16 renders our PRIR speaker field at the moment is more like a DTS:X setup than a Dolby Atmos type... The DTS:X sound field is based on a spherical speaker setup with the sweet spot in a central position; for Atmos, the L, C and R are a bit farther away from the other speakers...
The sentence "So what that means is, all speakers you measure are placed at exactly the same distance." was probably a simplification by Stephen, thinking it would be easier to understand.
I am quite sure what he meant is simply that, for each speaker, the delay between the signal at the input and the moment it arrives at your ears is constant. That would be achieved if they simply stripped all the initial delays from all the impulses in the PRIR. (Maybe minus a small constant amount, equal for all speakers; I will not explain why now because it would only complicate this post.)
Then effectively the total delay for each speaker becomes the latency of the SVS processing (plus the small constant amount I mentioned, if applicable), and that would be the same for all speakers.

Normally, in a real surround system, delays would be set in the receiver to accomplish that if not all speakers are at an equal distance from the listener.
The net result is the same: signals that occur simultaneously in different channels arrive simultaneously at the listener (in the sweet spot).
It does not really change the perceived distance of the speakers. If you make a PRIR of surround speakers that are much closer to you than the front speakers, they will sound much closer than the front speakers. Maybe that won't be obvious when you listen to a surround recording normally, because you hear the total sound field and not so much the individual speakers. But when you solo them, they will sound just like when you solo the real speakers; then you will notice the surrounds are closer.
Like Stephen said, things like the ratio between direct sound and reverberation are what determines the perceived distance. Not the delays. And he didn't implement anything yet in the A16 to change that ratio.
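The delay-stripping idea described above can be sketched in a few lines. This is purely my illustration of the concept, not Smyth's actual code; the function names and the 10%-of-peak onset threshold are my own assumptions:

```python
# Illustration only (assumed names and threshold; not Smyth's implementation):
# strip each impulse response's initial air-travel delay so every virtual
# speaker ends up with the same latency, i.e. on the same virtual sphere.

def onset_index(ir, threshold=0.1):
    """Index of the first sample whose magnitude reaches threshold * peak,
    used as a crude proxy for the direct-sound arrival."""
    peak = max(abs(s) for s in ir)
    return next(i for i, s in enumerate(ir) if abs(s) >= threshold * peak)

def align_onsets(irs):
    """Trim each impulse response so its direct sound starts at sample 0."""
    return [ir[onset_index(ir):] for ir in irs]

# Toy IRs: a near speaker (3-sample delay) and a far speaker (9-sample delay).
near = [0.0] * 3 + [1.0, 0.5, 0.2]
far = [0.0] * 9 + [1.0, 0.5, 0.2]
aligned = align_onsets([near, far])
# Both now begin with the direct-sound peak: the differential delay is gone,
# but the reverberant tail (which drives perceived distance) is untouched.
```

After this trimming, the only remaining delay per channel is the processing latency, which is identical for all speakers, exactly the "same sphere" behaviour Stephen describes.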
 
Dec 26, 2019 at 3:53 AM Post #7,677 of 15,986
Hello everyone,

I tried the autoEQ a couple of days ago, and I did the manLOUD this afternoon; I really dreaded that part! (I haven't upgraded my A16 to 1.80 yet, waiting for more information from other users about possible bugs.)

About the manLOUD:
- I understood the 1st part from 500 Hz onwards and proceeded by comparison as explained. When coming to the highest subbands (green colored), I tend not to hear the signal very much, or not at all except by boosting the volume with ADJ+ to the maximum, resulting in unpleasant noises. My manLOUD was below average accordingly.
The tutorial says: ..."the lower subbands will have the greatest effect on the final result... the method assumes average hearing. Older subjects will tend to over-boost the highest subbands, making the final filter sound odd"...

- You might as well know that I am an older subject (68...). I went to the doctor for an audio check-up 2 weeks ago and he says my hearing is good for my age...

- Back to the highest subbands: is anyone having a similar problem? And if yes, is there a way to circumvent it?

Thanks guys.
"Good for your age" (68) means that you still hear frequencies well up to your natural hearing limit of approx. 6 - 7 kHz. I recommend not boosting any frequencies higher than 6 - 7 kHz (this corresponds to all sub-bands on the right half of the screen); just leave them all at 0 and your manLOUD filter will most likely sound better afterwards.

Typical hearing limits are:
  • 10 years: approx. 18’000 Hz
  • 20 years: approx. 16’000 Hz
  • 30 years: approx. 14’000 Hz
  • 40 years: approx. 12’000 Hz
  • 50 years: approx. 10’000 Hz
  • 60 years: approx. 8’000 Hz
  • 70 years: approx. 6’000 Hz
Those limits are for an average person, based on several studies. For me at least, they are a near-perfect match, and I do not increase any sub-bands above my natural hearing limit since I hardly hear them anyway.

It is not a problem, it is just nature for all of us.
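The list above falls almost exactly on a straight line, about 200 Hz lost per year starting from 18 kHz at age 10, so it can be interpolated with a tiny function. `hearing_limit_hz` is just an illustrative name of mine:

```python
def hearing_limit_hz(age):
    """Average upper hearing limit in Hz, linearly interpolated from the
    table above (clamped to the 10-70 year range the table covers)."""
    return 18_000 - (max(10, min(70, age)) - 10) * 200

# For the 68-year-old poster: 6,400 Hz, which matches the advice not to
# boost sub-bands above roughly 6 - 7 kHz.
print(hearing_limit_hz(68))
```

This is of course only the population average the table describes; an individual's actual limit can differ considerably.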
 
Dec 26, 2019 at 7:37 AM Post #7,679 of 15,986
Well, I'm 42 and can still hear 16 kHz, although I regularly listen to my movies at the original cinema volume level. (That is not really very loud; usually only the bass scenes are, and bass is not as damaging to hearing as mid frequencies at high levels. A club/disco is usually much louder, and working 8 h daily in a loud factory etc. is much more damaging to hearing.)

We've been doing that for two years
Ha ha...

"Presently the A16 does not render speaker delays. I have not got round to that yet. So what that means is, all speakers you measure are placed at exactly the same distance. In other words if your head is in the centre of a sphere, then the speakers are all on the surface of that sphere. This applies even if the speakers come from different PRIRs, the timing between them is always matched.
The PRIR measurement does calculate the distance from speaker to head, but as I said, presently I do not use that information."

So from what I understood so far, the way the A16 renders our PRIR speaker field at the moment is more like a DTS:X setup than a Dolby Atmos type... The DTS:X sound field is based on a spherical speaker setup with the sweet spot in a central position; for Atmos, the L, C and R are a bit farther away from the other speakers...
No, you're getting this wrong. The Realiser only does automatically what you do with every real surround system. Let's say the nearest speaker is only 1 m from your head and the farthest speaker is 3 m from your head. Travelling through the air from speaker to head, the sound needs approx. 9 ms from the farthest speaker, but only 3 ms from the nearest speaker. In a surround system it is mandatory that all sounds from the speakers arrive at your head at the same time (if you, for example, send an impulse simultaneously through all speakers); otherwise, if a sound is panned over several speakers (e.g. a flyover panned from the front over the heights/tops to the rears, or the bird in the Atmos trailer flying in a circle around you), the timing wouldn't be right.
So you (the AVR) have to add a delay to all the speakers that are closer to the listener. In the above example, a 6 ms electronic delay would be added to the speaker that is only 1 m from your head, so that the total travel time of the sound from the source through the electronics and the air is the same for all speakers. In most AVRs you just dial in the real distances of the speakers to the listener's head; the AVR then adds no delay to the farthest speaker, and adds to each closer speaker a delay equal to its distance relative to the farthest one. In the above example, the relative distance would be 3 m - 1 m = 2 m which, at a sound speed of 340 m/s, equals roughly 6 ms.
So all speakers have a total delay of 9 ms (electronics plus air) and all sounds arrive at your head at the same time. Your brain doesn't know about these 9 ms; how could it? Your ears hear the sound when it arrives at them, whether it was sent out 1 ms or 100 ms before. You would probably only notice when the sound from a speaker arrives too early or too late, and only because a pan wouldn't sound right (and you still wouldn't know why).
So, from the point of view of sound travel time, ALL surround systems, no matter the format, have to put all the speakers on a sphere around your head, real or virtual (by adding delays).
This has nothing to do with the perceived distance of the speakers, because that is driven by room acoustics: as Stephen said, the ratio between direct and reverberant sound.
So in the above example you would still perceive that one of the speakers is only 1 m from your head, even though the total travel time of the sound from source to head is now 9 ms.
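The arithmetic in this example fits in a tiny function (my own sketch; the function name is made up, and the 340 m/s figure follows the post):

```python
SPEED_OF_SOUND = 340.0  # m/s, the rounded figure used in the example above

def avr_delays_ms(distances_m):
    """Electronic delay to add per speaker so its total (electronic + air)
    travel time matches that of the farthest speaker."""
    farthest = max(distances_m)
    return [(farthest - d) / SPEED_OF_SOUND * 1000.0 for d in distances_m]

# Near speaker at 1 m, far speaker at 3 m: the near one gets ~5.9 ms of
# delay, the far one gets none, so both total ~8.8 ms through air + delay.
delays = avr_delays_ms([1.0, 3.0])
```

This reproduces the "roughly 6 ms" in the example (2 m / 340 m/s ≈ 5.9 ms); AVRs do exactly this when you dial in your speaker distances.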

In all the standards (ITU etc.) the ideal surround system has all speakers in a circle around you at the same distance; with 3D sound, the circle now becomes a sphere.

I'm pretty sure the ideal setup for Dolby would also be a sphere; they just show normal rectangular rooms in their white papers because most people don't have cube-shaped rooms with edge lengths of 6 m or so...
 
