Correcting for soundstage?
Nov 16, 2017 at 6:05 PM Post #17 of 37
My definition of “spatial distortion” is the same one Linkwitz used in the 70's [1]. Later he seems to have developed the definition into something I would call "soundstage distortion" [2]. Avoiding "soundstage distortion" is, in my opinion, almost hopeless and extremely demanding. Avoiding spatial distortion is very easy in comparison; simple crossfeed is enough.

So, crossfeed limits the excessive stereo separation of headphones, which in my opinion is a much bigger problem than "soundstage distortion". Soundstage distortion means, for example, that a guitarist plays a few feet too close or too far away, and maybe 8 degrees too far left, compared to what was intended. How would I even know where the guitarist should be?


[1] http://www.johncon.com/john/SSheadphoneAmp/
[2] http://www.linkwitzlab.com/frontiers_6.htm
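A minimal crossfeed along these lines can be sketched in a few lines (a hypothetical illustration only: the gain and delay values are ballpark figures, and a real crossfeed filter such as Linkwitz's also low-passes the cross-fed signal):

```python
import numpy as np

def simple_crossfeed(left, right, fs, gain_db=-8.0, itd_us=300):
    """Feed an attenuated, slightly delayed copy of each channel into
    the opposite channel, limiting headphone stereo separation.
    gain_db and itd_us are illustrative values, not a specific design."""
    gain = 10 ** (gain_db / 20)                # linear crossfeed gain
    delay = int(round(fs * itd_us * 1e-6))     # interaural delay in samples

    def delayed(x):
        return np.concatenate([np.zeros(delay), x])[:len(x)]

    out_l = left + gain * delayed(right)
    out_r = right + gain * delayed(left)
    return out_l, out_r
```

Raising gain_db toward 0 dB pulls the image toward mono; lowering it leaves the original separation in place.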

Glad that you clarified.

Are you sure oblongdarts and the company he mentions had in their minds the same definition of spatial distortion you've just mentioned when they wrote the following assertion?

The company claims their new amp "corrects the fundamental spatial distortion in recordings" and "increases the width of the soundstage beyond that of the speaker placement." At the push of a button, an extra 30 degrees of soundstage can be recreated.
 
Nov 17, 2017 at 7:33 AM Post #18 of 37
Glad that you clarified.

Are you sure oblongdarts and the company he mentions had in their minds the same definition of spatial distortion you've just mentioned when they wrote the following assertion?

I don't know their definition, never heard of them. So, not sure at all.
 
Nov 23, 2017 at 9:41 PM Post #19 of 37
And what can you do with DSPs like the Smyth Research Realiser A16? This processor circumvents the difficulty of acquiring an HRTF by measuring the user's binaural room impulse responses - BRIR (also called head room impulse response - HRIR, or personal room impulse response - PRIR, when one wants to stress that it is the unique BRIR of the listener) - which also include the playback room's acoustic signature. It then convolves the inputs with this PRIR, applies a filter to take out the effects of wearing the chosen headphones, adds electronic crossfeed and dynamically adjusts cues during headphone playback to emulate/mimic virtual speakers, as you would hear them in the measured room, with the measured speaker(s), at the measured coordinates. Bad room acoustics will result in bad acoustics in the emulation. But you can skip the addition of electronic crossfeed to emulate what beamforming or a crosstalk-cancellation algorithm would do with real speakers, and equalize in the time domain!

What do you mean by 'electronic crossfeed'? Convolving the PRIR for a given (mono) channel should already yield a stereo signal including the crossfeed.
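The convolution the question refers to can be sketched numerically: each mono source channel is convolved with a pair of ear impulse responses, so the contralateral (crosstalk) component is already part of the result. This is an illustrative sketch, not the Realiser's actual processing (which also handles headphone EQ and head tracking):

```python
import numpy as np

def render_virtual_speakers(channels, prir):
    """channels: list of mono signals, one per virtual speaker.
    prir: list of (ir_to_left_ear, ir_to_right_ear) pairs.
    Returns the binaural (left, right) headphone feed."""
    n = max(len(ch) + len(ir_l) - 1
            for ch, (ir_l, ir_r) in zip(channels, prir))
    out_l = np.zeros(n)
    out_r = np.zeros(n)
    for ch, (ir_l, ir_r) in zip(channels, prir):
        # for a left-side speaker, the ir_r term IS the crosstalk path
        out_l[:len(ch) + len(ir_l) - 1] += np.convolve(ch, ir_l)
        out_r[:len(ch) + len(ir_r) - 1] += np.convolve(ch, ir_r)
    return out_l, out_r
```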
 
Nov 23, 2017 at 9:54 PM Post #20 of 37
What do you mean by 'electronic crossfeed'? Convolving the PRIR for a given (mono) channel should already yield a stereo signal including the crossfeed.

I was imagining the following feature in the Realiser processor:

Jose Luis Gazal on September 4
@Smyth Research: Would it be possible to implement an optional function that allows the user to experiment a playback mode in which the signals assigned to left side speakers are not played back at the right headphone driver and vice versa?

Stephen Smyth Collaborator on September 4
@Jose Luis Gazal - yes we could do that for you.

https://www.kickstarter.com/project...16-real-3d-audio-headphone-processor/comments
 
Nov 23, 2017 at 10:24 PM Post #21 of 37
I was imagining the following feature in the Realiser processor:

Ah I see, so it's optional crosstalk cancellation for playback scenarios (e.g. binaural recordings) that benefit from it?
 
Nov 24, 2017 at 5:41 AM Post #22 of 37
Ah I see, so it's optional crosstalk cancellation for playback scenarios (e.g. binaural recordings) that benefit from it?

It is very hard to be precise with terminology. I wouldn't say it is crosstalk cancellation, since "electronic crossfeed" - or, if you prefer, "digital-domain crossfeed" - needs to be accounted for if your goal is to emulate with headphones the acoustic crosstalk one would have when playing more than one channel through more than one loudspeaker. In other words, acoustic crosstalk is not inherent to headphone playback, so crossfeed is added electronically/digitally.

I think the Smyth system could simulate the effect on phones
Submitted by Timothy Link on September 6, 2015 - 9:50am
Headphones are by their nature close to perfect crosstalk elimination devices. The Smyth Realiser solves the problem of creating an external sound field effect for headphones, but in the process also simulates the crosstalk, which isn't necessary for an external effect (although I know some people add crosstalk to their headphone systems on purpose.) I emailed Smyth years ago inquiring about the possibility of using their system to simulate a crosstalk-eliminated external speaker listening experience. They said there was no reason it wouldn't work, and to my thinking it would work extremely well, better than any in-room cancellation effort. I suspect it could be done very simply during the calibration phase for the Smyth system. First set the speakers and listening position in a stereo dipole configuration, with speakers fairly close to each other. When calibrating for the right channel, block any sound from getting to the left ear microphone. When calibrating for the left channel, block any sound from getting to the right ear microphone.
By doing that you will hear through the headphones what seems to be an externalized sound source that sounds like it's coming from a stereo dipole configuration with near perfect crosstalk elimination and no colorations from crosstalk cancellation software. Ideally you'd take it a step further and compare the coloration of the speaker and room to the source signal and cancel all that out as well, but the Smyth Realiser wasn't made with that in mind. It was made specifically to simulate listening to speakers in a room, with room and speaker effects all simulated.
From personal experience, I have found that a dipole stereo arrangement with a physical barrier can be equalized to counter any coloration induced by the barrier. The result is incredible on some recordings, virtually unnoticeable on others. I never heard it make anything sound worse. It is a pain though to have to straddle the barrier and only enjoy the experience from that one location. I also tried digital recursive crosstalk elimination but found the sound quality unacceptable no matter how I adjusted it. I'm sure BACCH is a big improvement with its customized HRTF and head tracking, and will be reasonably priced soon enough. Combine that with Oculus goggles and you could really feel like you are at the concert hall! Having visual cues that synchronize with the audio cues will make the effect that much better.
Read more at https://www.stereophile.com/content/bacch-sp-3d-sound-experience

Binaural recordings benefit from perfect crosstalk cancellation. Stereo recordings with natural ITD and ILD also.

But there are stereo recordings with unnatural ITD and ILD, in which tracks are assigned purely to one channel. When listening to such recordings you may benefit not from avoiding acoustic crosstalk altogether (i.e. adding no digital crossfeed at all), but from adding fewer dB of digital crossfeed than the dB of crosstalk one would have with more than one loudspeaker in a room.

By the way, I recently created a PRIR for stereo sources that simulates perfect crosstalk cancellation. To create it, I measured just the center speaker and fed both the left and right channels to that speaker, but the left ear only hears the left channel because I muted the mic for the right ear when it played the sweep tones for the left channel, and the right ear only hears the right channel because I muted the mic for the left ear when it played the sweep tones for the right channel. The result is a 180-degree sound field, and sounds in the center come from the simulated center speaker directly in front of you, not from a phantom center between two speakers, so they do not have comb-filtering artifacts as they would from a phantom center.

Binaural recordings sound amazing with this PRIR and head tracking.

How do you mute the opposite microphone?

To mute it I unplug the left or right microphone from the Y-junction between sweeps. I set the "post silence" to 8 seconds beforehand to give me enough time. To make it easier I plan to hook up an A/B switch.

I actually got the idea from a comment by Timothy Link in this Stereophile article about Dr. Choueiri's BACCH.
http://www.stereophile.com/content/bacch-sp-3d-sound-experience

You can also add a rear speaker to the PRIR for the left and right surround channels to achieve a full 360-degree circle like PanAmbiophonics, and additional speakers for hall ambience.

Can you describe the improvement in envelopment you heard between the first PRIR and the "Hafler" PRIR?

Does the front soundstage remain believable in both PRIRs as you turn your head?

Using the first PRIR, central sounds seem to be in front of you, and they move properly as you turn your head. However, far-left and far-right sounds stay about where they were. That is, they sound about the same as they did without a PRIR, and they don't move as you turn your head. In other words, far-left sounds stay stuck to your left ear, and far-right sounds stay stuck to your right ear. It's possible to shift the far-left and far-right sounds towards the front by using the Realiser's mix block, which can add a bit of the left signal to the front speaker for the right ear, and a bit of the right signal to the front speaker for the left ear.

Using the Hafler PRIR, there seems to be a greater sense of space and ambience for all sounds. If the recording was matrix-encoded, some sounds extend beyond the far-left and far-right and wrap around you. Initially I noticed that far-left and far-right sounds moved too much when I turned my head, but after I increased the front speaker level to be 3 dB higher than the rear speaker level, they moved properly.

Does it make sense?

Anyway:

C) Record with Ambisonics. (...). Although the spherical harmonics seem more mathematically elegant, I still have not figured out how acoustic crosstalk in listening rooms - or, instead, auralization with headphones without adding electronic crosstalk - affects the possibility of conveying sound fields and proximity, nor whether crosstalk cancellation in high-order-Ambisonics-ready listening rooms with high-directivity loudspeakers is feasible.

(...) I am curious to know what would happen if you did not add electronic crossfeed*** when convolving an HRTF for headphone playback of Atmos and higher-order Ambisonics content.

*** Or deliberately and carefully controlled its level, at different and lower intensities than we would find in acoustic crosstalk with speakers. When dealing with acoustic crosstalk I am referring to speakers located in the two hemispheres cut by the median plane. The speakers in the same hemisphere would be summed with the HRTF filter anyway, and the speakers placed within the median plane would also be filtered by the HRTF without any electronic crosstalk setting.

Any idea?
 
Nov 26, 2017 at 9:21 PM Post #25 of 37
That would be much easier than manually muting the microphones during measurements, and just about any PRIR could be used.

Allowing fractional values would be even better, such as 0.5 (-6 dB) or 0.1 (-20 dB).
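For reference, the dB values in parentheses are just the standard amplitude-to-dB conversion of the linear mix factor:

```python
import math

def mix_to_db(mix):
    """Convert a linear mix factor (e.g. the Realiser's 0.1..1.0
    mix-block steps) to the corresponding level in dB."""
    return 20 * math.log10(mix)
```

For example, mix_to_db(0.5) is about -6 dB and mix_to_db(0.1) is exactly -20 dB, matching the values above.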

Using the first PRIR, central sounds seem to be in front of you, and they move properly as you turn your head. However, far-left and far-right sounds stay about where they were. That is, they sound about the same as they did without a PRIR, and they don't move as you turn your head. In other words, far-left sounds stay stuck to your left ear, and far-right sounds stay stuck to your right ear. It's possible to shift the far-left and far-right sounds towards the front by using the Realiser's mix block, which can add a bit of the left signal to the front speaker for the right ear, and a bit of the right signal to the front speaker for the left ear.

Using the Hafler PRIR, there seems to be a greater sense of space and ambience for all sounds. If the recording was matrix-encoded, some sounds extend beyond the far-left and far-right and wrap around you. Initially I noticed that far-left and far-right sounds moved too much when I turned my head, but after I increased the front speaker level to be 3 dB higher than the rear speaker level, they moved properly.

I don’t know if my question really addresses the issue, but let’s wait for their answer:

@Mike Smyth, @Stephen Smyth, once I asked if it would be possible to implement an optional function that allows the user to experiment with a playback mode in which the signals assigned to left-side speakers are not played back at the right headphone driver and vice versa, and the answer was yes.

I am sorry to bother once again, but I have just noticed that sometimes an instrument track is fully assigned to one channel and that could sound odd.

I’ve read in the Realiser A8 manual that one can blend channels in the mix block in 0.1 increments, up to a full 1.0 mix.

But I just can’t figure out if such a function is equivalent to adding less crossfeed than the crosstalk measured in the room where the PRIR was acquired.

So in the end my new question is: would it be possible to mix individual channels, or to add fewer dB of crossfeed into the ipsilateral channels than one would find in the real PRIR, all at once, but with finer increments than 0.1?
 
Dec 1, 2017 at 2:11 AM Post #26 of 37
I see a new product being advertised and I was wondering about your take on it. The company claims their new amp "corrects the fundamental spatial distortion in recordings" and "increases the width of the soundstage beyond that of the speaker placement." At the push of a button, an extra 30 degrees of soundstage can be recreated.

It's easy to achieve this effect (with speakers, not headphones) with the help of a mid-side equalizer (I tried it with FabFilter Pro-Q 2, one of the best and most transparent VST equalizers):

Create two points in Pro-Q:
1) Mode - "Mid", type - "Low Shelf", frequency - 400 Hz, Q - 0.3, Gain - -4 dB.
2) Mode - "Side", type - "Low Shelf", frequency - 400 Hz, Q - 0.3, Gain - +4 dB.

By doing so, we exaggerate the differences between the left and right channels at low frequencies.
You will perceive this effect as a widening of the stereo image and a clearing of the middle.

If your speakers are placed too narrowly (not enough distance between them), this trick can help you achieve a better stereo image and a wider soundstage.
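For experimentation outside a DAW, the two shelf points can be approximated with the well-known RBJ Audio EQ Cookbook low-shelf biquad (a sketch under the assumption that Pro-Q's shelves behave roughly like the cookbook filter; the frequency, Q and gain values follow the settings above):

```python
import numpy as np

def low_shelf(x, fs, f0=400.0, q=0.3, gain_db=4.0):
    """RBJ Audio EQ Cookbook low-shelf biquad, applied sample by sample."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    cw, sa = np.cos(w0), 2 * np.sqrt(a) * alpha
    b0 = a * ((a + 1) - (a - 1) * cw + sa)
    b1 = 2 * a * ((a - 1) - (a + 1) * cw)
    b2 = a * ((a + 1) - (a - 1) * cw - sa)
    a0 = (a + 1) + (a - 1) * cw + sa
    a1 = -2 * ((a - 1) + (a + 1) * cw)
    a2 = (a + 1) + (a - 1) * cw - sa
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for i, xi in enumerate(x):            # direct-form I difference equation
        yi = b0 * xi + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, xi, y1, yi
        y[i] = yi
    return y

def widen(left, right, fs):
    """Cut lows on the Mid signal by 4 dB, boost lows on the Side signal
    by 4 dB (the two Pro-Q points above), then decode back to L/R."""
    mid, side = (left + right) / 2.0, (left - right) / 2.0
    mid = low_shelf(mid, fs, gain_db=-4.0)
    side = low_shelf(side, fs, gain_db=+4.0)
    return mid + side, mid - side
```

The M/S encode/decode pair is exact (L = M + S, R = M - S), so with both shelf gains at 0 dB the function passes the signal through unchanged.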
 
Dec 10, 2017 at 11:19 PM Post #27 of 37
But I wish the Realiser allowed just a little more freedom in the mix block (or ILD) and in the ITD of the PRIRs (both two-speaker PRIRs and Ambisonics speaker-arrangement PRIRs).

But I haven't received any answer from them, so I am losing hope.

[Edit: better to skip the rest of this post and go to my next post, as I discovered a serious flaw. I leave this for whoever is interested in my line of thought]

If, or as long as, Smyth doesn't implement your requested crosstalk control, maybe this is helpful for your experimentation plans:
I have an idea for creating a PRIR (or a preset combining speakers from 2 special PRIRs) with which you could "regulate" the amount of crosstalk. This PRIR or preset would use 2 x N channels of the Realiser for N speakers; each channel must be input twice (using the analog multi-channel inputs). By changing the input levels you can regulate the amount of crosstalk.

For simplicity I will first describe the idea for one normal pair of stereo speakers.

Suppose input channel 1 is the left input channel and input channel 2 is the right input channel; then a normal PRIR measurement results in a PRIR that makes the Realiser do the following calculation:

LeftOut = TransferFunctionSpeaker1ToLeftEar(InputChannel1)
+ TransferFunctionSpeaker2ToLeftEar(InputChannel2)

RightOut = TransferFunctionSpeaker1ToRightEar(InputChannel1)
+ TransferFunctionSpeaker2ToRightEar(InputChannel2)


When you make a crosstalk-free PRIR you effectively take half of the summation parts (in this case one of the 2) out of each channel, and you get:

LeftOut = TransferFunctionSpeaker1ToLeftEar(InputChannel1)

RightOut = TransferFunctionSpeaker2ToRightEar(InputChannel2)


With a similar method (unplug the opposite mic every time) you could create a PRIR that takes out the other half (in this case the other of the 2) of the summation parts; this would be a crosstalk-ONLY PRIR:

LeftOut = TransferFunctionSpeaker2ToLeftEar(InputChannel2)

RightOut = TransferFunctionSpeaker1ToRightEar(InputChannel1)


Now make a preset in which you select:
speaker 1 of the cross-talk free PRIR to be used for input channel 1
speaker 2 of the cross-talk free PRIR to be used for input channel 2
speaker 1 of the cross-talk only PRIR to be used for input channel 3
speaker 2 of the cross-talk only PRIR to be used for input channel 4
(Alternatively you can make one 4-channel PRIR in one go using 2 real speakers, each fed with the sum of 2 output channels of the Realiser: left with output channels 1 and 3, right with output channels 2 and 4.)

Now feed the left source channel to input channels 1 and 3, and the right source channel to input channels 2 and 4.
By attenuating the signals into inputs 3 and 4 you can reduce the amount of crosstalk to whatever fraction of the real-world crosstalk you like
(or you could even increase the crosstalk if you wanted).
I hope the Realiser allows independent internal attenuation of individual virtual speakers; if not, you would have to do this attenuation externally before inputs 3 and 4.

If you generalise this idea you can use 14 channels (if you have an A16) to do something similar for a 7-channel speaker set-up. I guess you would then not reduce crosstalk for the center speaker, and for the other speakers reduce the crosstalk by an amount depending on each speaker's position.

(You could maybe even do something similar for a 16-speaker set-up, using both the user A and user B parts of the A16 (now possible because we use only 16 input channels and send each input channel to 2 different speakers!): put all crosstalk-free virtual speakers under user A and all crosstalk-only virtual speakers under user B, and add the headphone-output signals together (use a mixer). The only remaining question is: can you attenuate individual virtual speakers independently of the other speakers, for one user? If not, you can only reduce crosstalk equally for all channels by attenuating the user B output - although you could skip, for example, the center channel by using a normal, not crosstalk-free, measurement for it under user A and a dummy muted measurement under user B, or something like that - I don't know exactly what possibilities there are with the A16.)

You could also increase the ITD by somehow delaying the crosstalk-only virtual speakers, or decrease the ITD by somehow delaying the crosstalk-free speakers. Let's hope the A16 can delay individual virtual speakers, and with enough precision. Otherwise it is only possible in the "double-input" scenario for 8 speakers maximum, and the delay must be done externally on the proper input signals.
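Numerically, the preset described above boils down to the following mix, where g is the desired fraction of real-world crosstalk (a hypothetical sketch with the four speaker-to-ear transfer functions reduced to plain convolutions; names like h11 are made up for illustration):

```python
import numpy as np

def mix_with_crosstalk_fraction(in1, in2, h11, h12, h21, h22, g=1.0):
    """h11: speaker 1 -> left ear, h12: speaker 1 -> right ear,
    h21: speaker 2 -> left ear, h22: speaker 2 -> right ear.
    g = 1.0 reproduces the normal PRIR, g = 0.0 is crosstalk-free,
    and g > 1.0 would even increase the crosstalk."""
    direct_l = np.convolve(in1, h11)   # crosstalk-free part, left ear
    direct_r = np.convolve(in2, h22)   # crosstalk-free part, right ear
    cross_l = np.convolve(in2, h21)    # crosstalk-only part, left ear
    cross_r = np.convolve(in1, h12)    # crosstalk-only part, right ear
    return direct_l + g * cross_l, direct_r + g * cross_r
```

Attenuating inputs 3 and 4 in the preset corresponds exactly to scaling the two cross terms by g here.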
 
Dec 11, 2017 at 12:53 AM Post #28 of 37
On second thought (ahem), there is one big problem with my proposal:
the correct relative timing between the crosstalk-free part and the crosstalk-only part of each speaker will be lost if they are measured separately.
Sorry, I will have to think again...

But first I have to get some sleep.
 
Dec 11, 2017 at 1:29 AM Post #29 of 37
But I think I already have a solution - though only if the A16 indeed allows independent internal attenuation of individual virtual speakers, independently for each user (for controlling the amount of crosstalk); and, for influencing ITD, the A16 should allow independent delay of individual virtual speakers, per user. And the bonus of my new idea: you don't have to do a special measurement muting a certain mic at a certain moment; it can be done with any PRIR, up to 16 channels!
How? Like this:

Load the same PRIR for both user A and B.
Use the left channel of user A headphone output for the left channel of your headphone.
Use the right channel of user B headphone output for the right channel of your headphone.
(This can be done by making a special cable, but it is probably simpler and better - safer, to avoid a strange left-right asymmetric output load on the amplifier? - to simply use the analog outputs and an external amplifier: connect a stereo RCA/cinch cable, one side with left into user A out left and right into user B out right, and the other side normally into your stereo headphone amp or another amp with a headphone output.)

Now if for example you want to reduce the cross-talk of the left front speaker, you just have to attenuate the left front speaker for user B (that supplies the right ear).
If you want to reduce the cross-talk of the right front speaker, you just have to attenuate the right front speaker for user A (that supplies the left ear).
If you want to increase ITD for the front left speaker, you delay the front left speaker for user B (that supplies the right ear).
etc. etc.

[Edit: all this provided the user A and B parts work perfectly in sync]
 
Dec 11, 2017 at 10:05 AM Post #30 of 37
Load the same PRIR for both user A and B.
Use the left channel of user A headphone output for the left channel of your headphone.
Use the right channel of user B headphone output for the right channel of your headphone.
Great idea. However, you would give up head-tracking unless you wear two head-tops.
Now if for example you want to reduce the cross-talk of the left front speaker, you just have to attenuate the left front speaker for user B (that supplies the right ear).
If you want to reduce the cross-talk of the right front speaker, you just have to attenuate the right front speaker for user A (that supplies the left ear).
The A8 lets you adjust levels in increments of 1 dB, per speaker and per user (but it cannot mute a speaker per user). To minimize crosstalk, you would set one speaker to +12 dB and the other speaker to -12 dB.
If you want to increase ITD for the front left speaker, you delay the front left speaker for user B (that supplies the right ear).
The A8 lets you adjust delays in increments of 1 ms, per speaker and per user, which is too coarse for ITD purposes.
[Edit: all this provided the user A and B parts work perfectly in sync]
I have not tested sync on the A8.
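For context on why 1 ms steps are too coarse: the largest natural ITD is only about 0.6-0.7 ms. Woodworth's spherical-head approximation (a textbook formula; the 8.75 cm head radius is an assumed average) makes this concrete:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's ITD approximation for a rigid spherical head:
    ITD = (r / c) * (theta + sin(theta)), azimuth in [0, 90] degrees."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))
```

A source at 90 degrees azimuth gives roughly 0.66 ms, so a 1 ms delay step overshoots even the most extreme natural ITD.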
 
