You can only know where the positioning is supposed to be if you are one of the engineers who recorded and edited the track.
My definition of “spatial distortion” is the same one Linkwitz used in the 70's [1]. Later he seems to have developed the definition into something I would call "soundstage distortion" [2]. Avoiding "soundstage distortion" is, in my opinion, almost hopeless and extremely demanding. Avoiding spatial distortion is very easy in comparison; simple crossfeed is enough.
So crossfeed limits excessive stereo separation with headphones, which is imo a much, much bigger problem than "soundstage distortion". Soundstage distortion means, for example, that a guitarist sounds a few feet too close or too far, and maybe 8 degrees too far to the left compared to what was intended. How would I even know where the guitarist should be?
[1] http://www.johncon.com/john/SSheadphoneAmp/
[2] http://www.linkwitzlab.com/frontiers_6.htm
The company claims their new amp "corrects the fundamental spatial distortion in recordings" and "increases the width of the soundstage beyond that of the speaker placement." At the push of a button, an extra 30 degrees of soundstage can be recreated.
Glad that you clarified.
Are you sure oblongdarts and the company he mentions had in mind the same definition of spatial distortion you've just given when they wrote the following assertion?
And what can you do with DSPs like the Smyth Research Realiser A16? This processor circumvents the difficulty of acquiring an HRTF by measuring the user's binaural room impulse responses, or BRIR (also called head room impulse response, HRIR, or personal room impulse response, PRIR, when referring to the unique BRIR of the listener), which also capture the playback room's acoustic signature. It then convolves the inputs with this PRIR, applies a filter to remove the effects of wearing the chosen headphones, adds electronic crossfeed, and dynamically adjusts cues during headphone playback to emulate virtual speakers as you would hear them in the measured room, with the measured speaker(s), at the measured coordinates. Bad room acoustics will result in bad acoustics in the emulation. But you can skip the addition of electronic crossfeed to emulate what beamforming or a crosstalk-cancellation algorithm would do with real speakers, and equalize in the time domain!
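The convolution step described above can be sketched in a few lines. This is a minimal toy model, not the Realiser's actual processing: the four impulse responses here are hypothetical stand-ins for a measured two-speaker PRIR, where the contralateral (speaker-to-opposite-ear) paths carry the room's acoustic crossfeed.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(left_in, right_in, prir):
    """Render a stereo track through a two-speaker PRIR.

    prir maps each (speaker, ear) pair to an impulse response; the
    contralateral paths ("L","right") and ("R","left") are the
    acoustic crossfeed captured in the measured room.
    """
    out_left = (fftconvolve(left_in,  prir[("L", "left")]) +
                fftconvolve(right_in, prir[("R", "left")]))    # crossfeed path
    out_right = (fftconvolve(right_in, prir[("R", "right")]) +
                 fftconvolve(left_in,  prir[("L", "right")]))  # crossfeed path
    return out_left, out_right

# Toy impulse responses: the ipsilateral path is louder and earlier
# than the contralateral one, as it would be in a real room.
ipsi = np.zeros(64);   ipsi[0] = 1.0
contra = np.zeros(64); contra[8] = 0.3   # delayed, attenuated crossfeed
prir = {("L", "left"): ipsi, ("L", "right"): contra,
        ("R", "right"): ipsi, ("R", "left"): contra}

# A hard-left input still reaches the right ear, via the crossfeed path.
left, right = render_binaural(np.ones(16), np.zeros(16), prir)
```

This also illustrates the question below: the crossfeed is already inside the measured PRIR, so any "electronic crossfeed" would be an addition on top of it.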
What do you mean by 'electronic crossfeed'? Convolving the PRIR for a given (mono) channel should already yield a stereo signal including the crossfeed.
Jose Luis Gazal on September 4
@Smyth Research: Would it be possible to implement an optional function that allows the user to experiment with a playback mode in which the signals assigned to left-side speakers are not played back at the right headphone driver, and vice versa?
Stephen Smyth Collaborator on September 4
@Jose Luis Gazal - yes we could do that for you.
https://www.kickstarter.com/project...16-real-3d-audio-headphone-processor/comments
I was imagining the following feature in the Realiser processor:
Ah I see, so it's optional crosstalk cancellation for playback scenarios (e.g. binaural recordings) that benefit from it?
I think the Smyth system could simulate the effect on phones
Submitted by Timothy Link on September 6, 2015 - 9:50am
Headphones are by their nature close to perfect crosstalk-elimination devices. The Smyth Realiser solves the problem of creating an external sound field effect for headphones, but in the process also simulates the crosstalk, which isn't necessary for an external effect (although I know some people add crosstalk to their headphone systems on purpose). I emailed Smyth years ago inquiring about the possibility of using their system to simulate a crosstalk-eliminated external speaker listening experience. They said there was no reason it wouldn't work, and to my thinking it would work extremely well, better than any in-room cancellation effort. I suspect it could be done very simply during the calibration phase of the Smyth system. First set the speakers and listening position in a stereo dipole configuration, with the speakers fairly close to each other. When calibrating for the right channel, block any sound from getting to the left ear microphone. When calibrating for the left channel, block any sound from getting to the right ear microphone.
By doing that you will hear through the headphones what seems to be an externalized sound source that sounds like it's coming from a stereo dipole configuration with near perfect crosstalk elimination and no colorations from crosstalk cancellation software. Ideally you'd take it a step further and compare the coloration of the speaker and room to the source signal and cancel all that out as well, but the Smyth Realizer wasn't made with that in mind. It was made specifically to simulate listening to speakers in a room, with room and speaker effects all simulated.
From personal experience, I have found that a dipole stereo arrangement with a physical barrier can be equalized to counter any coloration induced by the barrier. The result is incredible on some recordings, virtually unnoticeable on others. I never heard it make anything sound worse. It is a pain, though, to have to straddle the barrier and only enjoy the experience from that one location. I also tried digital recursive crosstalk elimination but found the sound quality unacceptable no matter how I adjusted it. I'm sure BACCH is a big improvement with its customized HRTF and head tracking, and will be reasonably priced soon enough. Combine that with Oculus goggles and you could really feel like you are at the concert hall! Having visual cues that synchronize with the audio cues will make the effect that much better.
Read more at https://www.stereophile.com/content/bacch-sp-3d-sound-experience
By the way, I recently created a PRIR for stereo sources that simulates perfect crosstalk cancellation. To create it, I measured just the center speaker and fed both the left and right channels to that speaker, but the left ear only hears the left channel because I muted the mic for the right ear when it played the sweep tones for the left channel, and the right ear only hears the right channel because I muted the mic for the left ear when it played the sweep tones for the right channel. The result is a 180-degree sound field, and sounds in the center come from the simulated center speaker directly in front of you, not from a phantom center between two speakers, so they do not have comb-filtering artifacts as they would from a phantom center.
Binaural recordings sound amazing with this PRIR and head tracking.
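In signal terms, the mic-muting trick above amounts to forcing the contralateral impulse responses to zero, so each ear hears only its own channel convolved with the ipsilateral path. A minimal sketch with a hypothetical toy impulse response (not the Realiser's internal processing):

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy ipsilateral impulse response; in a real PRIR this would be the
# measured sweep response of the center speaker at each ear.
ipsi = np.zeros(64)
ipsi[0] = 1.0

def render_no_crosstalk(left_in, right_in, ir_left_ear, ir_right_ear):
    # Muting the opposite microphone during measurement zeroes the
    # contralateral paths, so there is no crossfeed term to add:
    # each ear gets only its own channel through its own ear's IR.
    return fftconvolve(left_in, ir_left_ear), fftconvolve(right_in, ir_right_ear)

# A hard-left input now reaches only the left ear.
left, right = render_no_crosstalk(np.ones(16), np.zeros(16), ipsi, ipsi)
```

Since both channels were measured through the same physical center speaker, hard-panned sounds are localized at that frontal speaker position rather than stuck at the ears.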
How do you mute the opposite microphone?
To mute it I unplug the left or right microphone from the Y-junction between sweeps. I set the "post silence" to 8 seconds beforehand to give me enough time. To make it easier I plan to hook up an A/B switch.
I actually got the idea from a comment by Timothy Link in this Stereophile article about Dr. Choueiri's BACCH.
http://www.stereophile.com/content/bacch-sp-3d-sound-experience
You can also add a rear speaker to the PRIR for the left and right surround channels to achieve a full 360-degree circle like PanAmbiophonics, and additional speakers for hall ambience.
Can you describe the improvement in envelopment you heard between the first PRIR and the "Hafler" PRIR?
Does the front soundstage remain believable in both PRIRs as you turn your head?
Using the first PRIR, central sounds seem to be in front of you, and they move properly as you turn your head. However, far-left and far-right sounds stay about where they were. That is, they sound about the same as they did without a PRIR, and they don't move as you turn your head. In other words, far-left sounds stay stuck to your left ear, and far-right sounds stay stuck to your right ear. It's possible to shift the far-left and far-right sounds towards the front by using the Realiser's mix block, which can add a bit of the left signal to the front speaker for the right ear, and a bit of the right signal to the front speaker for the left ear.
Using the Hafler PRIR, there seems to be a greater sense of space and ambience for all sounds. If the recording was matrix-encoded, some sounds extend beyond the far-left and far-right and wrap around you. Initially I noticed that far-left and far-right sounds moved too much when I turned my head, but after I increased the front speaker level to be 3 dB higher than the rear speaker level, they moved properly.
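The mix-block adjustment described above can be viewed as a simple 2x2 gain matrix applied before convolution: a fraction of each channel is blended into the opposite side's front speaker. A sketch with a hypothetical crossfeed gain `g` (the A8's mix block works in 0.1 steps, per the manual quoted later in this thread):

```python
import numpy as np

def mix_block(left, right, g):
    """Blend a fraction g (0 <= g <= 1) of each channel into the
    opposite side, pulling hard-panned sounds toward the front."""
    mixed_left = left + g * right
    mixed_right = right + g * left
    return mixed_left, mixed_right

# A hard-left sound (right channel silent)...
l, r = mix_block(np.array([1.0]), np.array([0.0]), g=0.3)
# ...now also appears in the right channel at level g, so it is no
# longer stuck to the left ear and moves with head tracking.
```

At g = 1.0 both ears receive the full sum, collapsing the image to mono; small values of g only soften the extreme panning.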
C) Record with Ambisonics. (...). Although the spherical harmonics seem more mathematically elegant, I still have not figured out how acoustic crosstalk in listening rooms, or instead auralization over headphones without adding electronic crosstalk, affects the possibility of conveying sound fields and proximity, nor whether crosstalk cancellation is feasible in listening rooms set up for high-order Ambisonics with high-directivity loudspeakers.
(...) I am curious to know what would happen if you did not add electronic crossfeed*** when convolving an HRTF for headphone playback of Atmos and higher-order Ambisonics content.
*** Or deliberately and carefully controlled its level, at different and lower intensities than we would find in acoustical crosstalk with speakers. When dealing with acoustical crosstalk, I am referring to speakers located in the two hemispheres cut by the median plane. The speakers in the same hemisphere would be summed with the HRTF filter anyway, and the speakers placed within the median plane would also be filtered by the HRTF without any electronic crosstalk setting.
Any idea?
The "Immersive Sound" book is available at SafariBooksOnline. You can read it after starting a free 10-day trial; no credit card required. To read more about the topics:
That would be much easier than manually muting the microphones during measurements, and just about any PRIR could be used.
Allowing fractional values would be even better, such as 0.5 (-6 dB) or 0.1 (-20 dB).
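The dB figures above follow directly from taking 20·log10 of the linear mix value, which is how an amplitude ratio converts to decibels:

```python
import math

def mix_to_db(g):
    # Convert a linear mix-block value (amplitude ratio) to dB.
    return 20 * math.log10(g)

print(round(mix_to_db(0.5), 1))  # -6.0
print(round(mix_to_db(0.1), 1))  # -20.0
```

So the A8's coarsest nonzero mix step, 0.1, already corresponds to -20 dB of crossfeed; finer control would require values below 0.1.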
@Mike Smyth, @Stephen Smyth, I once asked whether it would be possible to implement an optional function that allows the user to experiment with a playback mode in which the signals assigned to left-side speakers are not played back at the right headphone driver, and vice versa, and the answer was yes.
I am sorry to bother once again, but I have just noticed that sometimes an instrument track is fully assigned to one channel and that could sound odd.
I’ve read in the Realiser A8 manual that one can blend channels in the mix block with 0.1 increments until full 1.0 mix.
But I just can’t figure out whether that function is equivalent to adding less crossfeed than the crosstalk measured in the room in which the PRIR was acquired.
So in the end my new question is: would it be possible to mix individual channels, or to add crossfeed into the ipsilateral channels all at once at a lower level (in dB) than one would find in the real PRIR, but with finer increments than 0.1?
I see a new product being advertised and I was wondering about your take on it. The company claims their new amp "corrects the fundamental spatial distortion in recordings" and "increases the width of the soundstage beyond that of the speaker placement." At the push of a button, an extra 30 degrees of soundstage can be recreated.
But I wish the Realiser allowed just a little more freedom in the mix block (or ILD) and in the ITD of the PRIRs (both two-speaker PRIRs and Ambisonics speaker-arrangement PRIRs).
But I haven’t received any answer from them, so I am losing hope.
Great idea. However, you would give up head-tracking unless you wear two head-tops.

Load the same PRIR for both user A and user B.
Use the left channel of user A headphone output for the left channel of your headphone.
Use the right channel of user B headphone output for the right channel of your headphone.
The A8 lets you adjust levels in increments of 1 dB, per speaker and per user (but it cannot mute a speaker per user). To minimize crosstalk, you would set one speaker to +12 dB and the other speaker to -12 dB.

Now, if for example you want to reduce the crosstalk of the left front speaker, you just attenuate the left front speaker for user B (which supplies the right ear).
If you want to reduce the crosstalk of the right front speaker, you just attenuate the right front speaker for user A (which supplies the left ear).
The A8 lets you adjust delays in increments of 1 ms, per speaker and per user, which is too coarse for ITD purposes. If you want to increase the ITD for the front left speaker, you delay the front left speaker for user B (which supplies the right ear).
I have not tested sync on the A8. [Edit: all this provided the user A and user B parts work perfectly in sync.]
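A back-of-envelope check on the ±12 dB trick: boosting the wanted speaker +12 dB while cutting the unwanted one -12 dB yields 24 dB of added separation, so the residual contralateral leakage is still around 6% in amplitude relative to the wanted path (before any acoustic attenuation already in the PRIR):

```python
import math

def db_to_linear(db):
    # Convert a level in dB to a linear amplitude ratio.
    return 10 ** (db / 20)

separation_db = 12 - (-12)       # +12 dB boost plus -12 dB cut = 24 dB
residual = db_to_linear(-24)     # leakage relative to the wanted path
print(f"{residual:.3f}")         # ~0.063, i.e. about 6% amplitude
```

Useful suppression, but far from the near-total isolation the muted-microphone PRIR measurement gives.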
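On the delay granularity: 1 ms steps are indeed too coarse, since the largest interaural time difference for a typical human head is only about 0.6-0.7 ms. Woodworth's classic spherical-head approximation makes this concrete (the 8.75 cm head radius is a conventional assumed value, not an A8 parameter):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head ITD approximation, in seconds:
    ITD = (r / c) * (theta + sin(theta)), theta in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source directly to one side (90 degrees azimuth):
print(round(woodworth_itd(90) * 1000, 2))  # ~0.66 ms
```

So the entire usable ITD range fits inside a single 1 ms step; meaningful ITD adjustment would need resolution on the order of tens of microseconds.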