Mar 10, 2017 at 4:10 AM Post #646 of 16,944
Alternatively, you could skip the HPEQ process entirely if you have something like a Sonarworks custom headphone equalization curve and your source is your PC running JRiver or Foobar with the SW plugin feeding the SVS Realiser.

Suuuuuuuuuure, you could spend an extra $100 (or however much Sonarworks costs, does it have in-app purchases too?) over what the A16 costs, and get a result that is not customized for YOUR head, and limit yourself to only using a PC as your input source...

But I'll spend the 10-15 seconds it takes to do a personalized HPEQ with any headphone (including ones not available for Sonarworks) and with any sound source in my house. Sonarworks is great – if you don't have an A16. The A16 does everything Sonarworks does, and better because it's personalized to your head and your headphone sample.
 
Mar 10, 2017 at 9:08 PM Post #647 of 16,944
You're right. What the Realiser will compensate for that SW cannot is the shape of the pinnae and the effect it has on frequency response. However, it appears that neither system compensates for the effect the ear canal has with regard to the reinforcement of certain upper-midrange frequencies.
 
Mar 11, 2017 at 8:22 AM Post #648 of 16,944
(...) it appears that neither system compensates for the effect the ear canal has with regard to the reinforcement of certain upper-midrange frequencies.


Would you please explain how the A16 measures the impulse response of in-ear monitors to process its HPEQ?

Do you have any news regarding the Bacch-DSP?

Do you know how the Bacch-DSP measures the impulse response of in ear monitors to process its equalization?

How much do the Bacch binaural microphones cost?

How does the Realiser achieve such user-acclaimed spatial accuracy* with apparently much more affordable capsules?

Do you believe the accuracy/matching/equalization of microphone capsules can change the spatial accuracy of the convolution?

How much can the accuracy/matching/equalization of microphone capsules change the tone/timbre accuracy?

Does Smyth match or equalize the microphone capsules?

In which way could the blocked-ear-canal measurement affect horizontal spatial perception? And elevation perception?

*spatial accuracy of virtual speakers, not necessarily related to the spatial accuracy of audio content, particularly elevation.
 
Mar 11, 2017 at 10:42 AM Post #649 of 16,944
Both measurements (room and headphone) are done with the same couplers, so I don't see why they would need to be EQed. It's the variations between measurements that matter the most. I would expect the couplers to have small variations from pair to pair, and maybe Smyth bothers to include a calibration for the sake of exchanging data? (I wouldn't bother TBH, but then again I'm lazy ^_^).
For the ear canal, we have the quarter-wavelength thingy that lets us estimate the resonance. I wonder how Smyth decides to go at it? The easy way would be to use the expected average resonance at 2.7 kHz, as used for dummy heads and compensation curves.

A more specific approach could be to use the delays measured from the speakers, together with the head tracker, to estimate the size of the head and then use a more likely model for the ear canal length (still an average, but an average for people with a given head size). Just a hypothesis of mine; I have no idea how Smyth deals with that.
 
But in any case the resonance should be somewhere around 2.5 to 3.5 kHz for most people (a longer ear canal means a lower resonance frequency), and probably in the range of a 10 to 12 dB boost. It could be less depending on the diameter of the ear canal and where along the canal it narrows, I guess.
And there is the odd-order harmonic at a theoretical 3 times whatever frequency we previously got, but somehow real measurements always seem to come out higher than 3 times the resonance frequency (I don't really know why, I just noticed it when it was mentioned in papers).
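For anyone who wants to play with the numbers, here is a minimal sketch of that quarter-wavelength estimate. The canal length and speed of sound are assumed values, and the simple closed-tube model puts the next odd mode at exactly 3x the fundamental, which real ears apparently don't quite follow:

```python
# A rough sketch of the quarter-wavelength estimate discussed above.
# The canal length and speed of sound are assumed values; a typical adult
# canal of roughly 25-30 mm lands the resonance near 2.7-3.4 kHz.

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at ~20 °C
CANAL_LENGTH_M = 0.028       # assumed ear canal length (28 mm); varies per person

def quarter_wave_resonances(length_m, speed=SPEED_OF_SOUND_M_S, n_modes=2):
    """First n odd-order resonances of a tube open at one end, closed at the other."""
    # Such a tube resonates at f_n = (2n - 1) * c / (4 * L) for n = 1, 2, ...
    return [(2 * n - 1) * speed / (4.0 * length_m) for n in range(1, n_modes + 1)]

f1, f3 = quarter_wave_resonances(CANAL_LENGTH_M)
print(f"fundamental ≈ {f1:.0f} Hz, next odd mode ≈ {f3:.0f} Hz")
# 28 mm gives roughly 3.1 kHz and 9.2 kHz; in this simple model the second
# mode sits at exactly 3x the fundamental, which measured ears don't quite follow.
```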
 
 
So if we want to go full paranoid, something like an ear canal scan could be nice to really finish the customization job. But then ear wax could also make a measurable difference. In practice, I would imagine that placing the couplers correctly in the ears at an even depth would have more impact than most other concerns. But again I'm guessing, as I have never played with such little toys. Can't wait to try, answer 3 questions and get 30 new ones.
 
Mar 11, 2017 at 11:14 AM Post #650 of 16,944
Is there actually a strong enough coupling effect between circumaural headphones and the ear canal such that compensation is required? As far as I've read, the effect of the canal does not change as you change the azimuth/elevation of a speaker, so I don't see why you need any compensation if the addition of headphones does not alter the effect of the canal either…
 
Mar 11, 2017 at 11:19 AM Post #651 of 16,944
Quoting Hugo: quite a lot of my recent questions are answered directly by Stephen Smyth in his interview with HCFR or on the Kickstarter comments page.

AFAIK, left and right microphone capsules are just level matched.

For convenience, I have pulled together Smyth Research's responses from the Kickstarter comments that provide additional insight into the operation of the A16 or the extra features they will be adding.

IN EAR HEADPHONES

Q. Any suggestions on how to use in-ear headphones (how to measure the HPEQ with them, e.g. using a small tube where I put the mic in one end and the headphone in the other end)?

A. Your ‘ear canal’ tube idea is probably a good one. My only concern is that IEM EQ is all about attenuating the closed ear canal resonance, which depends very much on the individual’s ear canal dimensions. Perhaps such an arrangement could be used as an initial measurement which could then be fine-tuned using some other manual method. I would need to do some experiments. In previous comments I have come round to the idea of providing a minimum threshold measurement, or subjective EQ as I call it. I suspect this technique will also work for in-ear EQ. Imagine you are using the manual EQ method of the A8, but instead of comparing the headphone to the loudspeaker, you are adjusting the subband gain until you can just hear the subband signal. Normalising these subband gains with the standardised sensitivity curve will result in a flat in-ear response. Another method, which I have used in the lab, is to compare subband signal levels between an IEM and an already equalised over-ear headphone. For example, you put the IEM in the left ear only and then place the headphones over both ears. The listener switches between playing the subband signal to only the headphone right ear or only the IEM left ear, and adjusts the left-ear subband gain until the volume appears similar in both ears. This is repeated for each frequency subband. Then the IEM is placed in the right ear and the whole process repeated again. It’s tedious but it does work.
So the possible methods are: increase the frequency resolution of the new subjective EQ measurement and/or implement this IEM-headphone comparison method. There are pros and cons to both.
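As an aside, the "normalise against a standardised sensitivity curve" step can be pictured with a minimal sketch like the one below. All the numbers and names are illustrative assumptions, not Smyth's actual data or algorithm:

```python
# A minimal sketch of the "subjective EQ" idea quoted above: find the gain at
# which each subband signal is just audible, then normalise those thresholds
# against a standardised sensitivity curve. All numbers are made up for
# illustration; this is not Smyth's actual algorithm or data.

subband_hz = [125, 250, 500, 1000, 2000, 4000, 8000, 16000]

# Gain (dB) the listener needed before each subband became just audible
measured_threshold_db = [12.0, 8.0, 5.0, 4.0, 9.0, 14.0, 11.0, 18.0]

# Assumed standardised hearing-sensitivity curve at the same frequencies (dB)
standard_sensitivity_db = [10.0, 7.0, 4.0, 4.0, 6.0, 9.0, 12.0, 17.0]

def subjective_eq_gains(measured, standard):
    """EQ gain per subband = how far the measured threshold sits above the standard curve."""
    # Needing MORE gain than the standard curve predicts means the headphone
    # (on this ear) is weaker in that band, so the EQ boosts it by the difference.
    return [m - s for m, s in zip(measured, standard)]

for f, g in zip(subband_hz, subjective_eq_gains(measured_threshold_db, standard_sensitivity_db)):
    print(f"{f:>6} Hz: {g:+.1f} dB")
```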


It's the variations between measurements that matter the most.


Agree.

If the idiosyncratic part of the HRTF is confined to a narrower frequency band, it makes sense to use capsules that, besides their relative affordability, perform well enough in that critical range and are tiny enough to sit at the entrance of the ear canal.
 
Mar 11, 2017 at 5:28 PM Post #652 of 16,944
Guys, I am confused. I am very sold on the concept of the Realiser A16 and I am very close to pre-ordering one,

but to be honest, your discussion here makes it seem very scary to use.

I understand that it needs calibration: PRIR and HPEQ (I don't know what that is).

Please, can anybody explain what I need to use the Realiser?

I know for sure that what I need is:

- Source with USB, HDMI or RCA output
- Headphones (good ones)
- Realiser A16 unit
- Calibration in a 5.1 system to get the PRIR (for my ears)
- HPEQ (I don't know what it is or how it is done)
- EQ for the headphones (not sure if it is done automatically)

Did I miss anything?
Some clarification is highly appreciated, in small simple words.
 
Mar 11, 2017 at 8:10 PM Post #653 of 16,944
- HPEQ (I don't know what it is or how it is done)
- EQ for the headphones (not sure if it is done automatically)


A16 Headphone Equalisation Procedures 

A personalisation measurement actually consists of two parts. The first is the PRIR measurement (loudspeakers measured in a sound room), while the second part measures how the headphones deviate from flat when placed on the head and driving into the ears of the listener.

The A16 uses this ‘unflatness’ information to generate a headphone equalisation filter (we call this HPEQ) that then flattens the binaural signals just before they are output to the headphones. 

The HPEQ measurement is a very simple procedure that does not require any equipment other than the listener's own headphones, and is conducted as follows.

1) Place the supplied in‐ear microphones in each ear and plug these into the Realiser. 

2) Place your headphones on your head without dislodging the microphones. 

3) Plug the headphones into the User‐A headphone output. 

4) Activate the HPEQ measurement in the menu. 

The entire HPEQ measurement takes about 20 seconds to complete, and this HPEQ measurement file would typically be loaded into the system each time you use those same headphones with the A16.
 
The A16 expands on the HPEQ procedures that were available on the A8. We have introduced a low-latency procedure to be used for low-latency gaming and live applications.

We have added a second HPEQ filter option that uses a causal filter structure that has the potential to generate a cleaner headphone impulse response than our traditional symmetrical FIR approach. 

Finally, we have increased the sub-band resolution of the manual HPEQ method and made the procedure less clunky.

We also have a third, parametric HPEQ option in development, to be released in firmware updates following the initial launch of the product.

http://smyth-research.com/downloads/additional_KS_info.pdf
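To make the "unflatness" idea and the symmetrical-FIR versus causal-filter remark above a bit more concrete, here is a minimal sketch of one way such an inverse EQ could be derived. It is not Smyth's implementation; the sample rate, measured deviations and the ±12 dB limit are all assumptions:

```python
# A minimal sketch (not Smyth's implementation) of turning a measured headphone
# deviation-from-flat into an inverse "flattening" EQ filter, contrasting the
# two structures mentioned above: a symmetric linear-phase FIR and a causal
# minimum-phase version. Sample rate, deviations and the +/-12 dB limit are
# assumptions for illustration only.
import numpy as np
from scipy import signal

FS = 48_000  # assumed sample rate (Hz)

# Assumed measured deviation from flat at a handful of frequencies (dB)
freqs_hz     = np.array([0, 100, 500, 1_000, 3_000, 6_000, 10_000, 16_000, 24_000])
deviation_db = np.array([0, 2.0, 1.0,   0.0,   5.0,  -4.0,    3.0,   -6.0,    0.0])

# Inverse (correction) magnitude, clamped to avoid extreme boosts or cuts
correction_db  = np.clip(-deviation_db, -12.0, 12.0)
correction_lin = 10.0 ** (correction_db / 20.0)

# 1) Symmetric (linear-phase) FIR matching the correction magnitude directly
n_taps = 1025  # odd length keeps the impulse response exactly symmetric
h_linear_phase = signal.firwin2(n_taps, freqs_hz, correction_lin, fs=FS)

# 2) Causal minimum-phase version: design a prototype whose magnitude is the
# SQUARE of the correction, then spectrally factor it; the factorisation takes
# the square root, so the result approximates the desired correction with
# roughly half the taps and no pre-ringing.
h_proto = signal.firwin2(n_taps, freqs_hz, correction_lin ** 2, fs=FS)
h_min_phase = signal.minimum_phase(h_proto, method="homomorphic")

print(len(h_linear_phase), len(h_min_phase))  # e.g. 1025 vs 513 taps
```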
  
 
Mar 12, 2017 at 4:12 AM Post #654 of 16,944
  Is there actually a strong enough coupling effect between circumaural headphones and the ear canal such that compensation is required? As far as I've read, the effect of the canal does not change as you change the azimuth/elevation of a speaker, so I don't see why you need any compensation if the addition of headphones does not alter the effect of the canal either…


And once more I lose myself in the idea of getting some objective neutral when that's not the matter here. You're right, and I concern myself with meaningless stuff (at least mostly meaningless for the Realiser's purpose). It's just like the microphone: if the ear canal applies about the same resonance with headphones and with speakers, then we just don't care.
 
Now I still wonder whether the direction of the incoming sound could result in signature differences (less bouncing off the canal walls for a headphone source?).
 
Mar 12, 2017 at 3:48 PM Post #655 of 16,944
 
And once more I lose myself in the idea of getting some objective neutral when that's not the matter here. You're right, and I concern myself with meaningless stuff (at least mostly meaningless for the Realiser's purpose). It's just like the microphone: if the ear canal applies about the same resonance with headphones and with speakers, then we just don't care.
 
Now I still wonder whether the direction of the incoming sound could result in signature differences (less bouncing off the canal walls for a headphone source?).

 
So you're being… canal retentive? If you were talking IEMs then I take it all back, of course. But it seems like the question of the canal comes up often even in the context of regular cans, and I haven't read much to indicate that, even if a coupling exists, it is worth the extra hassle of dealing with.
 
Mar 12, 2017 at 4:36 PM Post #656 of 16,944
A16 Headphone Equalisation Procedures 
 
A personalisation measurement actually consists of two parts. The first is the PRIR measurement (loudspeakers measured in a sound room), while the second part measures how the headphones deviate from flat when placed on the head and driving into the ears of the listener.
 
The A16 uses this ‘unflatness’ information to generate a headphone equalisation filter (we call this HPEQ) that then flattens the binaural signals just before they are output to the headphones. 
 
The HPEQ measurement is a very simple procedure that does not require any equipment other than the listener's own headphones, and is conducted as follows.
 
1) Place the supplied in‐ear microphones in each ear and plug these into the Realiser.
 
2) Place your headphones on your head without dislodging the microphones. 
 
3) Plug the headphones into the User‐A headphone output. 
 
4) Activate the HPEQ measurement in the menu. 
 
The entire HPEQ measurement takes about 20 seconds to complete, and this HPEQ measurement file would typically be loaded into the system each time you use those same headphones with the A16.

  


So how does that work with User B then, or is it the same procedure, just using the B input? Also, the system somehow detects the actual headphone model being plugged in after a measurement, and doesn't just load the last HPEQ you used? Does it just send out a test tone to the headphones after they are plugged in and then match them? I just assumed that you would take any given HPEQ, the measurement would be saved, and you would just select whichever one you needed for any given pair of headphones and person (having to label each, of course...), and that the last HPEQ used on either input is what would remain for that input until you manually selected another.
 
Mar 12, 2017 at 5:53 PM Post #657 of 16,944
I would rather say that HPEQ is an acronym for headphone personalised equalisation.

Each HPEQ file is a combination of a headphone and a given person at a given age (i.e. ear flap size). :-)

That's why each HPEQ file has such data imprinted on it:

Headphone measurement

(...)

As before, enter the listener’s name after ID, and the headphone model number after HP. In this example, the screen will say:
ENTER HPEQ DETAILS
ID: john doe
HP: stax sr-202

(...)

It is not necessary, but if you wish to confirm that the file has been copied, you can press OK and the screen will say:
HPEQ File 01
by: john doe
hp: stax sr-202
on: 17:20 15-SEP-08
Press EXIT to return through each menu level.
The “john doe stax sr-202” HPEQ file is now in locations 64 and 01. The file in location 64 will be overwritten upon the next HPEQ measurement, but the copy in location 01 is safe.

http://www.smyth-research.com/downloads/A8manual.pdf


You have to assign a combination of PRIR and HPEQ, or save them in a preset.

PRESET BASICS step by step

A preset consists of a PRIR, usually an HPEQ (but an HPEQ is not required), and various user-entered settings, if any. The Realiser has four presets, which are accessed by the buttons P1, P2, P3 and P4 on the remote control. It is the preset which determines what personalisation files are used for playback. So to listen to a personalised file set, it must be loaded into a preset. Presets can be selected while listening, making it easy to have instantaneous comparisons of equipment, rooms, room treatments, etc.

http://www.smyth-research.com/downloads/A8manual.pdf


That is the way the A8 works.

The new interface is said to be more user-friendly and intuitive.
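To picture how those pieces fit together, here is a minimal sketch of a preset as a data structure. It is purely illustrative; the class and field names are assumptions, not Smyth's actual file format:

```python
# A minimal sketch of the preset idea quoted from the manual above: a preset
# pairs a PRIR with an (optional) HPEQ plus user settings, and you recall one
# of P1-P4 at playback. Class and field names are illustrative assumptions,
# not Smyth's actual file format.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HPEQ:
    listener: str      # the "ID" entered at measurement time, e.g. "john doe"
    headphone: str     # the "HP" field, e.g. "stax sr-202"
    measured_on: str   # timestamp stamped into the file

@dataclass
class PRIR:
    listener: str
    room: str          # description of the measured loudspeaker room

@dataclass
class Preset:
    prir: PRIR
    hpeq: Optional[HPEQ] = None                    # an HPEQ is not required
    settings: dict = field(default_factory=dict)   # volume and other user settings

presets = {
    "P1": Preset(
        prir=PRIR("john doe", "home 5.1 room"),
        hpeq=HPEQ("john doe", "stax sr-202", "17:20 15-SEP-08"),
    ),
    # P2-P4 would hold other person/headphone/room combinations
}
print(presets["P1"].hpeq.headphone)  # -> "stax sr-202"
```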
 
Mar 12, 2017 at 8:01 PM Post #658 of 16,944
 
So how does that work with User B then, or is it the same procedure, just using the B input? Also, the system somehow detects the actual headphone model being plugged in after a measurement, and doesn't just load the last HPEQ you used? Does it just send out a test tone to the headphones after they are plugged in and then match them? I just assumed that you would take any given HPEQ, the measurement would be saved, and you would just select whichever one you needed for any given pair of headphones and person (having to label each, of course...), and that the last HPEQ used on either input is what would remain for that input until you manually selected another.


The discussion about how to CREATE an HPEQ, as measured for a specific person with a particular headphone/amp, is simply saying that to go through that process you use the user-A connection on the Realiser. Whether that person ends up being user A or B at PLAYBACK TIME is up to whatever you do at playback time, since the Realiser supports playback for either one user or two users simultaneously. It's only during the measurement process, when the HPEQ is first created for a person, that the user-A connection is specifically mandated, as it is part of the circuitry and software involved with this particular function.
 
The digital HPEQ file is then saved for future use whenever that specific person is going to be an A or B listener. At PLAYBACK time, the HPEQ for that particular person and the particular headphone/amp being used is selected, perhaps from several available if that person has access to several headphone/amp setups and has created a unique HPEQ file for each. That user also selects one of however many PRIRs have been created and saved for that same person (each describing a specific listening room environment you've been fortunate enough to measure). The combination of HPEQ and PRIR is then used for PLAYBACK: that particular user hears the new input source program as if listening in the original room environment represented by that uniquely measured PRIR, through the particular headphone/amp represented by the selected HPEQ. This listener can be designated at PLAYBACK time as either the sole user A, or as user A or B in a dual-user playback session. Obviously, if two users are listening simultaneously as A and B to a single source program, they would EACH have selected their own person-specific pair of HPEQ/PRIR files and set that up in the Realiser's playback configuration.
 
Note that paired combinations of HPEQ/PRIR can be stored in "presets" (including a power-on DEFAULT preset) so that you don't always have to actively and manually choose the HPEQ and PRIR at playback time. Preset P1 is always the default HPEQ/PRIR combination for a one-user listening session, so you can simply power on the Realiser and headphone/amp and you will automatically be listening through the P1 preset. If you want to use a different HPEQ/PRIR setup, you would then manually choose another previously created HPEQ/PRIR preset (P2, P3 or P4), or manually select any arbitrary combination of HPEQ and PRIR individually.
 
I don't know exactly how dual-user mode will work in the A16, but on the A8 you would store user B's HPEQ/PRIR choice in preset P4 and set the listening volume for that preset, and user A's HPEQ/PRIR choice in preset P1 along with its listening volume. Dual-user PLAYBACK mode thus made use of P1 and P4 simultaneously. I don't know exactly what the A16's approach is yet, but something similar, I'm sure. You will still somehow have complete independence of HPEQ/PRIR and volume settings for each user, feeding two separate headphone output paths (either analog directly from the Realiser's internal DAC, or 2-channel digital to an external DAC).
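In other words, the playback-time selection boils down to something like the minimal sketch below. The file names, preset assignments and volumes are made-up examples, and the A16's actual dual-user scheme may differ:

```python
# A minimal sketch of the A8-style dual-user setup described above: user A's
# HPEQ/PRIR/volume live in preset P1 and user B's in P4, and each headphone
# output is rendered independently. File names and volumes are made-up
# examples; the A16's actual dual-user scheme may differ.
from typing import NamedTuple

class UserPlayback(NamedTuple):
    preset: str       # which stored preset this user loads
    hpeq_file: str    # HPEQ for this listener + headphone/amp
    prir_file: str    # PRIR for this listener + measured room
    volume_db: float  # independent listening level

dual_user_session = {
    "user_A": UserPlayback("P1", "hpeq_john_stax.bin", "prir_john_5.1_room.bin", -20.0),
    "user_B": UserPlayback("P4", "hpeq_jane_hd800.bin", "prir_jane_5.1_room.bin", -14.0),
}

for user, cfg in dual_user_session.items():
    # Each output path applies its own PRIR convolution, HPEQ flattening and
    # volume, so the two listeners never share settings.
    print(f"{user}: {cfg.preset} -> {cfg.prir_file} + {cfg.hpeq_file} @ {cfg.volume_db} dB")
```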
 
Mar 12, 2017 at 9:03 PM Post #659 of 16,944
Just a quick query for someone such as yourself experienced with the SVS etc.
 
Is there ever any strange feeling or disconnect when using the system in a room that is smaller or oddly angled compared to the preset you're listening to?
 
E.g. a small room where the SVS places the sounds as if the speakers were coming through a wall, or from outside, etc.? Or the right-hand side of the room is open but you're sitting against the left-hand wall, with the "speakers" suggesting otherwise?
 
Mar 12, 2017 at 9:33 PM Post #660 of 16,944
Is there ever any strange feeling or disconnect when using the system in a room that is smaller or oddly angled compared to the preset you're listening to?


VIRTUALISATION PROBLEMS

Conflicting aural and visual cues: Even if headphone virtualisation is acoustically accurate, it can still cause confusion if the aural and visual impressions conflict. [8] If the likely source of a sound cannot be identified visually, it may be perceived as originating from behind the listener, irrespective of auditory cues to the contrary. Dynamic head-tracking strengthens the auditory cues considerably, but may not fully resolve the confusion, particularly if sounds appear to originate in free space. Simple visible markers, such as paper speakers placed at the apparent source positions, can help to resolve the remaining audio-visual perceptual conflicts. Generally the problems associated with conflicting cues become less important as users learn to trust their ears.

http://www.smyth-research.com/articles_files/SVSAES.pdf


Interesting how visual stimuli can prevail over auditory cues.

If listening to music, I would close my eyes and imagine the measured room.

With television, the scene you are watching may excite your visual cortex in agreement with the auditory cues.

Pretty interesting stuff, don't you think?
 
