To crossfeed or not to crossfeed? That is the question...
Jan 3, 2018 at 2:35 AM Post #526 of 2,146
You could stop shooting down everything I say.
An exaggeration, but I can see how you might feel that way.
You could provide Lisp code for a Hilbert transform so I could try the 90° phase shift myself. I'm not sure I can figure it out on my own. Perhaps I could approximate it with all-pass filters? After giving it some thought, I admit it's an interesting idea for sure to apply a 90° phase shift prior to downmixing stereo to mono.
90-degree phase-shift networks have been used in audio matrix systems for 50 years (4-2-4 quad, Dolby Stereo, etc.). I'm sure you can figure out how to do it.
No, I don't say I am right. It's you telling me I am wrong.
Well, this is interesting! I went back to look for examples where you have declared yourself right and anyone else who doesn't agree is an idiot (or words to that effect), claimed that 98% of all recorded stereo music benefits from cross-feed, and said that if anyone doesn't agree they're spatially deaf (or worse). It looks like the thread has been "cleaned up" just a tad: large groups of posts are now gone, others edited. But I do have all the originals in email notifications. I guess citing all those many occurrences would not benefit the thread and would just get deleted again. I can forward all of them to you privately if you like.
We never debate as equals because you put yourself above me.
I'm not the one proclaiming superiority of concept, intelligence or hearing ability.
Maybe it's the sound engineers with their "artistic intents" who are wrong? At some point artistic intent goes outside what is reasonable to accept.
Oh look! There's an example now!
I myself made music for years with excessive stereo separation because I was spatially ignorant, and now I can see how wrong I was. I simply do not believe that most excessive stereo recordings reflect artistic intent. In my opinion excessive stereo recordings exist because:

- mixed for speakers, headphones ignored (less true nowadays)
- more appealing to spatially ignorant people (almost everybody)
- lack of sophisticated mixing tools (not true anymore)
- lack of understanding of the psychoacoustic problems of excessive separation.
...and you don't see what I mean from the above? I highlighted a few things to help you out. The red one, in particular, keeps coming up again and again. People's preferences make them ignorant, spatially or otherwise.
I sense that you almost fear stereo sound with natural ILD and ITD,
You've sensed pretty much everything incorrectly about me, so why not just add that one to the growing list?
but that's not a limitation really, because there's so much more you can do in music, so many other possibilities for artistic intent.
And THAT from the same one who denies that choices in stereo perspective in headphones could possibly be artistic intent!

Back to the deleted posts (about a month's worth) and edits for a second. I really have no issue with that, but it does indicate something. It has become quite clear that what little educational benefit this thread might once have had has been obliterated by intense propaganda promoting a polarized but scientifically unproven viewpoint. I recall a time in the not-too-distant past on this forum when threads would have been locked for less. If education is at all important, perhaps a return to the actual scientific method in the Sound Science forum should be considered, rather than foot-stamping and the synthesis of terminology, statistics, and pseudofacts.
 
Jan 3, 2018 at 8:18 AM Post #527 of 2,146
90-degree phase-shift networks have been used in audio matrix systems for 50 years (4-2-4 quad, Dolby Stereo, etc.). I'm sure you can figure out how to do it.

Yes, audio matrix systems use 90° phase shift, but it's not "trivial." The networks are an approximation of 90° phase shift within a "narrow" frequency band. The question is whether this can give better results than my vivid mono algorithm.

I think this method could work:

(1) Break the original signal into narrow-band partial signals (say 10 octave bands).
(2) All-pass filter each partial signal so that the 90° phase shift hits the octave band's middle frequency.
(3) Reconstruct the phase-shifted signal by summing all these partial signals together.

Since the partial signals overlap a bit, and in every partial signal the frequencies below the middle frequency are phase-shifted less than 90° and the frequencies above it more than 90°, summing these overlapping octave bands will give a pretty "flat" ~90° phase shift over the whole audio band.
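
A rough sketch of steps (1)-(3) in Python/scipy rather than Lisp (the band centres are illustrative, not a tuned design; each all-pass coefficient is chosen so its phase is exactly -90° at the band centre, and the reconstruction won't be perfectly flat because the band-pass filters add phase of their own):

import numpy as np
from scipy.signal import butter, sosfilt, lfilter

FS = 44100
CENTERS = [20.0 * 2 ** k for k in range(10)]  # 10 octave-band centres, 20 Hz .. ~10 kHz

def octave_band(x, fc, fs=FS):
    # 4th-order Butterworth band-pass, one octave wide around fc
    sos = butter(4, [fc / np.sqrt(2), fc * np.sqrt(2)],
                 btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def allpass_90_at(x, fc, fs=FS):
    # first-order digital all-pass with exactly -90 degrees of phase at fc
    t = np.tan(np.pi * fc / fs)
    a = (t - 1.0) / (t + 1.0)
    return lfilter([a, 1.0], [1.0, a], x)

def shift90(x):
    # steps (1)-(3): split into bands, phase-shift each band, sum the partials
    return sum(allpass_90_at(octave_band(x, fc), fc) for fc in CENTERS)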
 
Jan 3, 2018 at 8:31 AM Post #528 of 2,146
Yes, audio matrix systems use 90° phase shift, but it's not "trivial." The networks are an approximation of 90° phase shift within a "narrow" frequency band. The question is whether this can give better results than my vivid mono algorithm.

I think this method could work:

(1) Break the original signal into narrow-band partial signals (say 10 octave bands).
(2) All-pass filter each partial signal so that the 90° phase shift hits the octave band's middle frequency.
(3) Reconstruct the phase-shifted signal by summing all these partial signals together.

Since the partial signals overlap a bit, and in every partial signal the frequencies below the middle frequency are phase-shifted less than 90° and the frequencies above it more than 90°, summing these overlapping octave bands will give a pretty "flat" ~90° phase shift over the whole audio band.
A quick question: is your DIY crossfeed adapter made specifically for the HD598? What year is your HD598? Any modifications? Cheers.
 
Jan 3, 2018 at 9:21 AM Post #529 of 2,146
Yes, audio matrix systems use 90° phase shift, but it's not "trivial." The networks are an approximation of 90° phase shift within a "narrow" frequency band. The question is whether this can give better results than my vivid mono algorithm.
No, that's incorrect. They are relatively trivial, were realized with basic analog circuitry, and were pretty dead-on across the audio band.
I think this method could work:

(1) Break the original signal into narrow-band partial signals (say 10 octave bands).
Not necessary!
(2) All-pass filter each partial signal so that the 90° phase shift hits the octave band's middle frequency.
Getting warm....
(3) Reconstruct the phase-shifted signal by summing all these partial signals together.

Since the partial signals overlap a bit, and in every partial signal the frequencies below the middle frequency are phase-shifted less than 90° and the frequencies above it more than 90°, summing these overlapping octave bands will give a pretty "flat" ~90° phase shift over the whole audio band.
You're overthinking it. Think about doing that in 1968. And think not global spectral phase shift, but relative inter-channel phase.
 
Jan 3, 2018 at 5:02 PM Post #530 of 2,146
A quick question: is your DIY crossfeed adapter made specifically for the HD598? What year is your HD598? Any modifications? Cheers.
It's not really made specifically for the HD598; it should work with any headphones, I believe. I bought my HD 598 in 2011, serial number 0500001929. The earpads and headband padding were renewed a few months ago. That's it.
 
Jan 4, 2018 at 5:02 AM Post #531 of 2,146
No, that's incorrect. They are relatively trivial, were realized with basic analog circuitry, and were pretty dead-on across the audio band.
Not necessary!
Getting warm....
You're overthinking it. Think about doing that in 1968. And think not global spectral phase shift, but relative inter-channel phase.

I found this example of a 90° phase shifter:

[Image: Hilbert transformer.gif — schematic of the phase-shift network]

Source: http://www.microwave.gr/content/view/48/67/

This works over the 50-5000 Hz audio band with a theoretical phase error of ±0.0607° (accurate as hell, but real-life component tolerances make the error much bigger).
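
The principle is easy to check numerically: each first-order all-pass section H(s) = (ω0 − s)/(ω0 + s) contributes −2·arctan(ω/ω0) of phase, and two chains with staggered pole frequencies keep a nearly constant ~90° difference between them. A quick numpy sketch (the pole frequencies here are illustrative, not the actual circuit values from the image):

import numpy as np

f = np.logspace(np.log10(50), np.log10(5000), 200)  # Hz, the band of interest
w = 2 * np.pi * f

def chain_phase(w, pole_freqs_hz):
    # total phase (radians) of a cascade of first-order analog all-passes
    # H(s) = (w0 - s)/(w0 + s); each section contributes -2*atan(w/w0)
    return sum(-2 * np.arctan(w / (2 * np.pi * f0)) for f0 in pole_freqs_hz)

chain_a = [51, 201, 677, 2354, 16442]   # illustrative staggered poles, Hz
chain_b = [15, 106, 369, 1246, 4881]

diff_deg = np.degrees(chain_phase(w, chain_a) - chain_phase(w, chain_b))
print(diff_deg.min(), diff_deg.max())   # stays close to +90 degrees over the band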
 
Jan 4, 2018 at 12:34 PM Post #532 of 2,146
Okay, today I wrote a Nyquist plug-in that simulates the 90° phase shifter above. The phase shift looks very accurate, but there is high-frequency attenuation: 20 kHz is down about 4 dB. All I can think of causing this is accuracy problems with the low-pass filters. The Nyquist code looks like this:

;nyquist plug-in
;version 2
;type process
;name "Hilbert Transformer"
;action "Hilberting..."
;info "90 degrees phase shifter.\nWritten Jan. 4, 2018."

(setf sigl (aref s 0))
(setf sigr (aref s 1))

;; Each stage below is half of a first-order all-pass section:
;; (lp x f) - 0.5*x = 0.5*(2*(lp x f) - x), and 2*LP(f) - 1 is a
;; first-order all-pass (phase -90 degrees at f in the analog prototype).
;; Five cascaded stages scale the signal by (0.5)^5 = 1/32, which the
;; final (mult 32 ...) restores.

;; Left channel all-pass chain

(setf sigl (sim (lp sigl 51) (mult -0.5 sigl)))
(setf sigl (sim (lp sigl 201) (mult -0.5 sigl)))
(setf sigl (sim (lp sigl 677) (mult -0.5 sigl)))
(setf sigl (sim (lp sigl 2354) (mult -0.5 sigl)))
(setf sigl (mult 32 (sim (lp sigl 16442) (mult -0.5 sigl))))

;; Right channel all-pass chain (staggered pole frequencies, so the
;; inter-channel phase difference stays near 90 degrees)

(setf sigr (sim (lp sigr 15) (mult -0.5 sigr)))
(setf sigr (sim (lp sigr 106) (mult -0.5 sigr)))
(setf sigr (sim (lp sigr 369) (mult -0.5 sigr)))
(setf sigr (sim (lp sigr 1246) (mult -0.5 sigr)))
(setf sigr (mult 32 (sim (lp sigr 4881) (mult -0.5 sigr))))

;; Return the processed stereo pair (the plug-in expects a stereo track)
(if (arrayp s)
    (vector (abs-env sigl)
            (abs-env sigr)))
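
One way to pin down the droop is to measure the plug-in directly: run a unit impulse through it (generate a click in Audacity, apply the effect, export the stereo result) and FFT it. A minimal numpy sketch of the analysis step, with the WAV loading left out:

import numpy as np

def analyze(yl, yr, fs=44100):
    # magnitude and inter-channel phase of a processed stereo impulse;
    # yl, yr are the two channels after the plug-in has been applied
    n = len(yl)
    fl, fr = np.fft.rfft(yl), np.fft.rfft(yr)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    mag_db = 20 * np.log10(np.maximum(np.abs(fl), 1e-12))
    phase_deg = np.degrees(np.angle(fl * np.conj(fr)))  # left-minus-right phase
    return freqs, mag_db, phase_deg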
 
Jan 4, 2018 at 2:04 PM Post #533 of 2,146
[QUOTE="jasonb, post: 7007848, member: 159987"

"A delayed, lowpass-filtered version of the opposite channel is added to the current channel. The delay is achieved bs2b-style using a single high shelve filter giving about 0.5 ms delay. After that, the signal is mixed without phase delay with 12 dB attenuation. In addition, there is a small reverb based on Haas stereo widening effect of 30 ms ping-pong buffers."

I have also used one of the crossfeed plugins that are available for winamp with good results as well.
[/QUOTE]

You blew my mind with this. Sounds like it's a common deal, but I've never heard of it.
You listen to your music through an effect that, in addition to playing the standard L and R, also sends each channel into the other, at a lower level, with a tiny delay, a filter and reverb?
 
Jan 5, 2018 at 8:45 AM Post #534 of 2,146
You blew my mind with this. Sounds like it's a common deal, but I've never heard of it.
You listen to your music through an effect that, in addition to playing the standard L and R, also sends each channel into the other, at a lower level, with a tiny delay, a filter and reverb?

Think about what happens when you listen to loudspeakers.

Your left ear hears the (direct) sound from left speaker.
Your right ear hears the (direct) sound from right speaker.

But it doesn't end here.

Your left ear hears the sound from the right speaker, delayed because of an additional distance of about 10 cm (4 inches) and filtered because it travels around your head.
Your right ear hears the sound from the left speaker, delayed because of an additional distance of about 10 cm (4 inches) and filtered because it travels around your head.
Your ears also receive early reflections from surfaces, your upper body, furniture, etc.
Your ears also hear the reverberation of the room.


Nobody thinks there's anything funny about these things, and most recordings are mixed in studios on speakers for exactly this situation (the acoustic environment in a studio is much "better" and more controlled than a typical living room, but anyway…)

When you listen to headphones:

Your left ear hears the left channel.
Your right ear hears the right channel.

That's pretty much it. Open headphones leak some sound and there is very minor acoustic crosstalk happening, but unless you do something to the signal entering your headphones, none of the crosstalk, reflections or reverberation described above happens. That's why some people, including me, find headphone listening without crossfeed unnatural, annoying, spatially broken and tiring.
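
In code, the simplest form of that missing crosstalk is just: feed each channel into the other, delayed and low-pass filtered. A crude Python sketch with illustrative values (a plain delay for the extra path length, a one-pole low-pass standing in for head shadow, attenuation for the level difference; this is not my actual crossfeeder):

import numpy as np
from scipy.signal import lfilter

def crossfeed(left, right, fs=44100, delay_ms=0.3, cutoff_hz=700.0, atten_db=-9.0):
    d = int(round(delay_ms * 1e-3 * fs))   # contralateral delay in samples
    g = 10.0 ** (atten_db / 20.0)          # linear gain of the cross-fed signal
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    shadow = lambda x: lfilter([1.0 - a], [1.0, -a], x)  # one-pole low-pass
    # each ear gets its own channel plus the delayed, filtered opposite one
    feed_l = g * shadow(np.concatenate([np.zeros(d), right]))[:len(right)]
    feed_r = g * shadow(np.concatenate([np.zeros(d), left]))[:len(left)]
    return left + feed_l, right + feed_r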
 
Jan 5, 2018 at 1:44 PM Post #536 of 2,146
Think about what happens when you listen to loudspeakers.

Your left ear hears the (direct) sound from left speaker.
Your right ear hears the (direct) sound from right speaker.

But it doesn't end here.

Your left ear hears the sound from the right speaker, delayed because of an additional distance of about 10 cm (4 inches) and filtered because it travels around your head.
Your right ear hears the sound from the left speaker, delayed because of an additional distance of about 10 cm (4 inches) and filtered because it travels around your head.
Your ears also receive early reflections from surfaces, your upper body, furniture, etc.
Your ears also hear the reverberation of the room.


Nobody thinks there's anything funny about these things, and most recordings are mixed in studios on speakers for exactly this situation (the acoustic environment in a studio is much "better" and more controlled than a typical living room, but anyway…)

When you listen to headphones:

Your left ear hears the left channel.
Your right ear hears the right channel.

That's pretty much it. Open headphones leak some sound and there is very minor acoustic crosstalk happening, but unless you do something to the signal entering your headphones, none of the crosstalk, reflections or reverberation described above happens. That's why some people, including me, find headphone listening without crossfeed unnatural, annoying, spatially broken and tiring.

But aren't the mics already adding in acoustic crosstalk? If I record using binaural mics in my ears, for instance, I would want no contralateral content out of the playback speakers. It would seem the argument is that more typical miking and mixing schemes are judged based on normal speaker playback.
 
Jan 5, 2018 at 3:59 PM Post #537 of 2,146
But aren't the mics already adding in acoustic crosstalk? If I record using binaural mics in my ears, for instance, I would want no contralateral content out of the playback speakers. It would seem the argument is that more typical miking and mixing schemes are judged based on normal speaker playback.
More or less, yes. It depends on how the recording is mixed. You can record instruments with separate mono mics and hard-pan them, for example. Binaural recordings by definition have the correct amount of channel separation (correct spatial information) and should of course be listened to without crossfeed (that's why crossfeeders have an off/bypass switch). The thing is, binaural recordings are VERY rare. I think I have two CDs (out of my ~1500 discs) with binaural sound. Most recordings are recorded and mixed so that the ILD and ITD content is too much for headphones without crossfeed. Stereo mics that are more than about 10 inches apart produce too much ITD, and the directivity + setup of the mics easily produce too much ILD.
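
The arithmetic behind that 10-inch figure, as a quick Python sketch (far-field approximation; a real head gives up to ~640 µs, a bit more than the bare 17 cm ear spacing predicts, because sound diffracts around it):

C = 343.0  # speed of sound in air, m/s

def max_itd_us(spacing_m):
    # far-field: a source fully to one side of a spaced pair produces
    # a path difference equal to the spacing itself
    return spacing_m / C * 1e6

print(max_itd_us(0.25))  # 10 in ~ 0.25 m -> ~730 us, already past the head's ~640 us
print(max_itd_us(1.0))   # a 1 m A-B pair -> ~2900 us, far more than any head produces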
 
Jan 5, 2018 at 4:22 PM Post #538 of 2,146
Most recordings are recorded and mixed so that the ILD and ITD content is too much for headphones without crossfeed. Stereo mics that are more than about 10 inches apart produce too much ITD, and the directivity + setup of the mics easily produce too much ILD.

Besides close spot microphones on each instrument, which only capture the mono direct field and are mixed downstream, what commonly used spaced-pair stereo microphone arrangements are more than 10 inches apart?

I have been told that usually ITD is not encoded:

I'm not really sure of the context of the statements you've quoted. But on the face of it, some/many appear to be nonsense.

The percentage of popular music recordings deliberately mixed with both ILD and ITD is tiny. At a guess, less than 1% and probably a lot less!

Popular music is always recorded as a collection of mono sound sources or of one or two stereo sources mixed with mono sound sources. The stereo-image is therefore constructed artificially and even some of those stereo sources are commonly artificial (stereo synth pads for example).

So, virtually without exception, popular music has a stereo image which is an artificial construct and then the question becomes, how do we construct it?

Well, it's a combination of tools, one of which is reverb. Some reverbs are mono in, stereo out; others are stereo in, stereo out. The latter can potentially provide a reasonably natural/realistic ITD relative to the mixed L/R position of the source channel/s feeding the reverb; the former cannot.

If we're talking about the L/R position of the individual source channels themselves though (rather than the reverb applied to those channels), then almost without exception that is accomplished purely with ILD (panning) and in fact, ITD is usually deliberately avoided, let alone realistic values calculated and applied!

The deliberate use of ITD for panning (more commonly called "psycho-acoustic panning" by the audio engineering community) is generally avoided for a few reasons:

1. It's far more time- and resource-consuming to set up initially and adjust later.


2. The mix is unlikely to have decent mono compatibility and

3. The resultant L/R position achieved by psycho-acoustic panning on a channel is very fragile/unreliable:

A. Any subsequent application of any delay-based effects (chorusing, doubling, DD or reverb for example) to that channel will almost certainly change or completely destroy the L/R position.

B. It's far more sensitive (than ILD panning) to small changes/inaccuracies in speaker positioning, room acoustics and listener position.

C. I can't even imagine trying to create a mix where all the L/R positioning is achieved by psycho-acoustically panning individual channels; I don't know how you'd avoid a complete mess.

The only exception I'm aware of is an old, rather obscure trick on those rare occasions where it's desired that the kick and/or bass guitar be positioned some place other than the centre or near centre and psycho-acoustic panning maybe employed to more evenly distribute the high energy levels between channels/speakers.

All the above relates to popular music recordings, as quoted, it's not necessarily true of classical recordings.

G
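
For reference, the ILD-only "panning" described above is typically implemented as a constant-power pan law; a minimal sketch:

import numpy as np

def constant_power_pan(mono, pos):
    # pos runs from -1 (hard left) to +1 (hard right); the constant-power
    # law keeps L^2 + R^2 independent of pos
    theta = (pos + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono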

1. All the spatial information there is to have?

Even modest familiarity with stereo microphone arrays should reveal the complete nonsense of that statement.

ORTF/XY: less than a hemisphere.

Coincident pair: no ITD.

M/S: no ITD.

The Decca Tree: scrambled ITD, and unique ILD, but capturing less than a hemisphere.

Spaced omnis: fully scrambled ITD and ILD.

Spot mic: mono, no 3D spatial information.

And those are the commonly used ones.

They all fall far short of capturing “all the spatial information there is to have”, but each is usable as an element for creating a believable mix.

I see what you mean about choosing different spacings with A-B stereo pairs.


From your ~1500 discs, how many were recorded with stereo A-B pairs spaced more than 17 cm?

And how do you know exactly how much ITD you need for each type of recording or mixing?
 
Jan 5, 2018 at 5:09 PM Post #539 of 2,146
Besides close spot microphones on each instrument, which only capture the mono direct field and are mixed downstream, what commonly used spaced-pair stereo microphone arrangements are more than 10 inches apart?

An A-B pair can be close together, but it can also be a few meters (10 feet!) apart.

I have been told that usually ITD is not encoded:
I see what you mean about choosing different spacings with A-B stereo pairs.
From your ~1500 discs, how many were recorded with stereo A-B pairs spaced more than 17 cm?
And how do you know exactly how much ITD you need for each type of recording or mixing?

Maybe 500? The "needed" amount is 0-640 µs.
 
Jan 5, 2018 at 5:19 PM Post #540 of 2,146
An A-B pair can be close together, but it can also be a few meters (10 feet!) apart.

Maybe 500? The "needed" amount is 0-640 µs.

I see it.

So do you digitally analyze the ILD and ITD parameters in the recording using some software that allows you to do that, and then you are able to tell how the recording was made and how much ILD and ITD it has?

And then do you adjust your algorithm accordingly?

Do you mind telling me what software I can use to discover the ILD and ITD of a given recording?

I believe the Realiser with a crossfeed-free PRIR would allow one to spatially perceive those ITD differences, but it would be nice to have software that allows such perceptions to be confirmed numerically/quantitatively...
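
Meanwhile, a rough way to get at both numbers with numpy/scipy (a sketch, not a validated tool; "recording.wav" is a placeholder and any WAV reader will do):

import numpy as np
from scipy.signal import correlate
import soundfile as sf

x, fs = sf.read("recording.wav")   # placeholder file name
left, right = x[:, 0], x[:, 1]

# broadband ILD: level difference between the channels
rms = lambda s: np.sqrt(np.mean(s ** 2))
ild_db = 20 * np.log10(rms(left) / rms(right))

# ITD: lag of the cross-correlation peak, searched within +/- 1 ms
corr = correlate(left, right, mode="full", method="fft")
lags = np.arange(-(len(right) - 1), len(left))   # lag of each corr sample
keep = np.abs(lags) <= int(0.001 * fs)
itd_us = lags[keep][np.argmax(corr[keep])] / fs * 1e6

print("ILD ~ %.1f dB, ITD ~ %.0f us" % (ild_db, itd_us))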
 
