Soundstage Width and Cross-feed: Some Observations
Status
Not open for further replies.
Jan 27, 2018 at 7:47 PM Post #211 of 241
(...)I'm talking about soundstage width and depth.

In my room, the sound has a horizontal and vertical dimension. It also has a sense of scale. The depth cues baked into the mix combined with the distance I sit from the speakers give it a sense of depth. There is a dimensional plane of sound spread out in front of me. I can play recordings with defined soundstage, wild soundscapes with sounds flying around the front of the room, or ping pong stereo from the late 50s, and it all sounds good. (...)

(...)

Do you perceive vertical dimension systematically in all recordings, or just in some random recordings?

I fail to hear vertical dimension.

I wish I could fly to your house and hear your system.
 
Jan 27, 2018 at 7:59 PM Post #212 of 241
The vertical size is more pronounced with 5.1 than it is with 2-channel. I experimented a lot with speaker placement because I wanted the soundstage to fill the area covered by the projection screen. I use two sets of mains with different dispersion patterns and an elevated center channel. With the Yamaha stereo-to-7.1 DSP, the size of the soundstage fills the whole front end of the room.

The vertical dimension obviously doesn't have sound objects moving up and down like Atmos. It's more like the way a stereo system can have a soundstage that extends a bit beyond the width of the speakers... the same can be true for up and down.
 
Jan 27, 2018 at 8:06 PM Post #213 of 241
The vertical size is more pronounced with 5.1 than it is with 2-channel. I experimented a lot with speaker placement because I wanted the soundstage to fill the area covered by the projection screen. I use two sets of mains with different dispersion patterns and an elevated center channel. With the Yamaha stereo-to-7.1 DSP, the size of the soundstage fills the whole front end of the room.

That seems plausible to me.

Firstly, because my room is crappy and I only tried stereo.

Secondly, because with your room, multichannel rig and the way @gregorio described using surround channels for reverberation, your elevated center channel matching the projection screen position may do the trick for voices and center-stage instruments.

Very interesting.
 
Jan 27, 2018 at 9:00 PM Post #214 of 241
The rears are elevated too, so they don't fire into the back of the couch. That creates a triangle above, formed by the two rears and the center, with the spread of the two sets of mains below: one dispersing directionally nearer, the other dispersing wide a little further back. It's definitely not a standard setup, but I experimented until I found something that worked. I was lucky because my system was installed before I furnished the room, so I could move things around easily and add furniture to help the acoustics where needed.
 
Jan 28, 2018 at 5:46 AM Post #215 of 241
That seems plausible to me.

Firstly, because my room is crappy and I only tried stereo.

Secondly, because with your room, multichannel rig and the way @gregorio described using surround channels for reverberation, your elevated center channel matching the projection screen position may do the trick for voices and center-stage instruments.

Very interesting.

Even if the results are often as good as possible, stereo has objective limitations, and we think there is room for significant improvement in both the playback and recording technologies/techniques.
I hope that you'll find this blog post on the subject interesting...
:) Flavio

https://www.dirac.com/dirac-blog/perfect-sound-system-with-3d-sound-reproduction
 
Jan 28, 2018 at 6:03 AM Post #216 of 241
Sometimes 42 isn't the answer to everything. Sometimes it's just a dodge because someone doesn't know the answer. However, the questions were valid, while the answer was not.

Well, asking politely instead of demanding with bold letters would help.

You just made a pretty good case for not using cross-feed on everything.

I don't follow your logic here, but mind you I don't use crossfeed on everything, only about 98 % of the time.
 
Jan 28, 2018 at 6:30 AM Post #217 of 241
1. Yes, we listen!
1a. No we don't! How would measuring the equal loudness curves help? You realise that it's loudness curves (plural); which of the curves would we apply?
1b. And what does the science of acoustics say about how you will perceive an electric guitar, a vocal and a drumkit mixed together and therefore how each of them should or should not be EQ'ed? And, what does the science of acoustics say about mixing together the acoustics of an electric guitar with artificial echoes of a large arena, a vocal with the natural or artificial ambience of a plate reverb and a drumkit with the natural acoustics of a small room plus a large hall reverb on the snare? Answer these two questions! The science of acoustics CANNOT of course answer these two or numerous similar questions. In fact the science of acoustics tells us that we can't have all those completely different acoustic spaces at the same time (I doubt these inconvenient facts will get in the way of you making up some nonsense and passing it off as the science of acoustics though)!

Firstly, you haven't even got the right science here because this has relatively little to do with the science of acoustics, the applicable science here would be the science of psycho-acoustics, NOT acoustics! Secondly, even the science of psycho-acoustics cannot answer those two (or numerous other related) questions! This is why your assertions that we (artists/engineers) must know and abide by the "laws of acoustics" are clearly absolute nonsense. If we actually did abide by the "laws of acoustics" then almost no commercial audio of the last 50+ years or so could exist.
1c. Those arguments are NOT weak, they're not even arguments, they are the WHOLE POINT, there is nothing else! How on earth do you manage to miss this whole point? Apparently you just take a science which tells us little/nothing about music, music creation/production, art or its perception, prioritise that science above everything else and then use nonsense circular arguments that artistic intent and perception are "weak" or "excuses" because they contradict the (inapplicable!) science of acoustics!
1a. Well, if you want to filter resonances flat you need to know what the resonances are. YOU are the one talking about filtering them, not me. Equal loudness contours are pretty similar in shape around the frequencies in question (actually above about 500 Hz). You can apply the one corresponding to the intended listening level, such as 80 phons.

1b. The science of acoustics does not answer every question. How you EQ is artistic intent, but knowing human hearing helps in getting good results. We can build an "artificial" sound world WITH the guidance of science. We can have natural ILD in our "impossible" acoustics. I really don't understand why you insist on having unnatural ILD on everything. Why? Chipmunking vocals up 4 octaves is unnatural too. I wonder why all music doesn't do that...

1c. Circular arguments? Where? Please. I have not said all music production is against science. My point is that some aspects are. I haven't complained about how you EQ guitars. That's sound design, part of artistic intent.
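The 80-phon point in 1a can be made concrete. If the equal-loudness contours at two levels are known, the compensation for listening quieter than the intended level is simply the difference between the two contour shapes, referenced to 1 kHz. A minimal sketch of that subtraction; note the contour values below are rough placeholders for illustration only, NOT real ISO 226 data:

```python
import numpy as np

# Placeholder contour values for illustration only, NOT ISO 226 data.
# Each row: SPL (dB) needed per band to sound as loud as the 1 kHz tone.
freqs = np.array([31.5, 63.0, 125.0, 250.0, 500.0, 1000.0, 4000.0, 8000.0])
spl_80_phon = np.array([104.0, 96.0, 90.0, 86.0, 83.0, 80.0, 76.0, 84.0])
spl_60_phon = np.array([90.0, 82.0, 74.0, 68.0, 63.0, 60.0, 56.0, 66.0])

def loudness_compensation_db(freqs, contour_loud, contour_quiet):
    """EQ (dB per band, 0 dB at 1 kHz) to approximate the tonal balance of
    the louder (intended) level when actually listening at the quieter one."""
    ref = freqs == 1000.0
    rel_loud = contour_loud - contour_loud[ref]    # contour shape re 1 kHz
    rel_quiet = contour_quiet - contour_quiet[ref]
    # Boost the bands where hearing loses relative sensitivity when quiet.
    return rel_quiet - rel_loud
```

With genuine ISO 226 data the same subtraction yields the familiar result that quiet listening needs a low-frequency (and slight high-frequency) boost, which is what "apply the one corresponding to the intended listening level" amounts to in practice.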
 
Jan 28, 2018 at 6:47 AM Post #218 of 241
I'm not really talking about musical styles. I'm talking about soundstage width and depth.

In my room, the sound has a horizontal and vertical dimension. It also has a sense of scale. The depth cues baked into the mix combined with the distance I sit from the speakers give it a sense of depth. There is a dimensional plane of sound spread out in front of me. I can play recordings with defined soundstage, wild soundscapes with sounds flying around the front of the room, or ping pong stereo from the late 50s, and it all sounds good. Totally different ways of organizing sound that use the space between the speakers, the space between the listener and the speakers, and the space above and around the speakers differently. They all have a different dimensional feel.

When I listen with headphones, I'm sacrificing a lot of the definition of the soundstage I get with my speakers. No vertical dimension, no depth, just a straight line through my skull. I could reduce the channel separation with cross feed, but that isn't getting me any closer to the dimensionality of true soundstage. It's just blending the channels together, which to me is like reducing everything to the same lowest common denominator. Applying cross feed to all of the music I listen to is like saying that some of my food is soft and some is crunchy and some is chewy. I'm going to blend it all together so it's all the same consistency.

I can see using cross feed with ping pong stereo if the extreme separation bothers you. But personally, I wouldn't mess with something that attempts some sort of soundstage or soundscape. To me, that would be taking it even further from speakers because a quasi-coherent soundstage running through the middle of my skull is better than one that has been muddled up into the middle.

I think it's better to focus on ways of making sound better, not ways to stick band aids on sacrifices we've already made. Just let headphones be headphones.

I have speakers too. Sometimes I listen to them. Before finding crossfeed I listened to them a lot. You don't understand that speakers + room ALSO make everything the same consistency, and that's partly a good thing, because it protects you from excessive ILD and makes the spatiality natural. I don't think you understand crossfeed at all. It doesn't give the same soundstage as speakers, but it's not a straight line through the skull. It's something in between, what I call a miniature soundstage. It's a cloud, and the center part of it is inside my head while the rest is outside, because the cloud is larger than my head, a few feet in diameter. Recordings with excellent spatial information render a bigger cloud than recordings with worse spatiality.

I don't let headphones be headphones, because using crossfeed makes most recordings sound much more natural and better in several ways (no bees, no fake bass, no broken/dispersed spatiality, no fatigue). You can call it a band aid, but I love it. If you don't get it, then you don't. You have your speakers, so you are good.
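For readers following the back-and-forth, the mechanism being argued about is simple: crossfeed feeds a low-passed, attenuated, slightly delayed copy of each channel into the opposite ear, roughly as acoustic speaker crosstalk would. A minimal sketch, not any particular commercial implementation; the cutoff, attenuation and delay values are illustrative defaults:

```python
import numpy as np

def crossfeed(left, right, sr=44100, cutoff=700.0, atten_db=-6.0, delay_ms=0.3):
    """Minimal crossfeed sketch: mix a low-passed, attenuated, slightly
    delayed copy of each channel into the opposite output channel.
    Parameter values are illustrative, not a published filter design."""
    a = np.exp(-2.0 * np.pi * cutoff / sr)   # one-pole low-pass coefficient
    gain = 10.0 ** (atten_db / 20.0)
    delay = int(round(sr * delay_ms / 1000.0))

    def lowpass(x):
        # Simple one-pole low-pass on the crossfeed path.
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - a) * s + a * acc
            y[i] = acc
        return y

    def delayed(x):
        # Interaural-style delay: pad the start, keep the length.
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_l = left + gain * delayed(lowpass(right))
    out_r = right + gain * delayed(lowpass(left))
    return out_l, out_r
```

Run on a hard-panned signal, the silent channel picks up a quieter, bass-weighted copy of the loud one, which is exactly the ILD reduction being debated above.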
 
Jan 28, 2018 at 7:23 AM Post #219 of 241
Even if the results are often as good as possible, stereo has objective limitations, and we think there is room for significant improvement in both the playback and recording technologies/techniques.
I hope that you'll find this blog post on the subject interesting...
:) Flavio

https://www.dirac.com/dirac-blog/perfect-sound-system-with-3d-sound-reproduction

Sure, I find Dirac Research's work interesting.

I have been following “Dynamic 3D audio” and “Panorama Sound” algorithms: This crazy audio software can make your smartphone sound like a Hi-Fi system.

I am sure you know Choueiri (Bacch) and Smyth (SVS) work and I bet you also know why Kyle Wiggers found Dirac’s “Panorama Sound” more impressive than “Dynamic 3D audio”.

As Professor Choueiri says, calculating personalized HRTFs from anthropometric data has already been done, and the challenge now is to find less computationally demanding methods to do so.

Dirac Research certainly has as much expertise to solve that problem as Qualcomm, Genelec/IDA, Princeton 3D3A lab team etc.

But the problem I believe will be critical after solving the personalization challenge is creating mixing engines that allow artists to mix for crosstalk-free listening environments (headphone externalization devices, beamforming phased arrays of transducers and crosstalk cancellation algorithms) with the same artistic freedom they are used to with currently standard mixing (i.e. applying different reverberation to each stream).

Ultimately, I would like recording and mixing engineers to feel assured they can record with spot microphones and apply different types of reverberation to each stream, and at the same time expand the soundstage beyond the boundaries of two stereo loudspeakers and also allow a vertical dimension with only two loudspeakers.
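The crosstalk cancellation mentioned above reduces, at its core, to inverting a 2x2 matrix of speaker-to-ear transfer functions per frequency bin. A sketch of that linear-algebra step with Tikhonov regularization; the array layout and the beta value are assumptions for illustration, and real systems such as BACCH add head tracking and frequency-dependent regularization on top of this:

```python
import numpy as np

def crosstalk_canceller(H, beta=0.005):
    """Frequency-domain crosstalk cancellation sketch. H is a (nfreq, 2, 2)
    array of speaker-to-ear transfer functions per frequency bin. Returns
    per-bin filters C = (H^H H + beta*I)^-1 H^H, the regularized inverse,
    so that H @ C is approximately the identity: each ear then receives
    (mostly) only its intended binaural signal."""
    Hh = np.conj(np.swapaxes(H, -1, -2))           # Hermitian transpose per bin
    I = np.eye(2)
    # Batched solve over all frequency bins at once.
    return np.linalg.solve(Hh @ H + beta * I, Hh)
```

With a well-conditioned H and small beta, H @ C is close to the identity per bin; the regularization trades cancellation depth against the huge filter gains an exact inverse would demand at ill-conditioned frequencies.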

Finally, I would like to understand how Ambisonics downmixed to binaural for headphones differs from Ambisonics decoded for loudspeakers in a room. The crosstalk in the second environment makes my understanding fuzzy.

I have never listened to an Ambisonics environment. When Professor Smyth describes his test of a 16-channel Ambisonics decoder in a 4.8.4 arrangement (bottom, central and top layers), he does not mention how acoustic crosstalk is managed in the binauralization algorithm. I bet the beta algorithm did not simulate acoustic crosstalk from contralateral channels.

And I would like to know how much room reflections and acoustic crosstalk in real rooms affect the performance of Ambisonics. I believe that in anechoic rooms spherical harmonics do not introduce crosstalk; the problem to me is reflections in a reverberant room.

In the end, any format relying on spherical harmonics and binauralization in the playback stage (for headphones and phased arrays of transducers) may be the most practical chain to create a universal rig that allows virtual/augmented reality as well as playback of both acoustic and popular genres. Professor Choueiri prefers to introduce binauralization before distribution, with binaural synthesis...
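The headphone-versus-loudspeaker question above can be seen in a minimal first-order decode: for headphones, each virtual loudspeaker feed is convolved with an HRIR pair and summed, and no acoustic crosstalk or room reflection ever enters the chain, whereas over real speakers those same feeds would additionally leak to the opposite ear through the air. A sketch; the encoding convention, layout, sign of azimuth and the HRIRs themselves are placeholder assumptions:

```python
import numpy as np

def foa_to_binaural(W, X, Y, hrirs_left, hrirs_right, azimuths):
    """First-order (horizontal) Ambisonics to binaural via virtual
    loudspeakers. Each virtual speaker i receives the basic decode
    g_i = W + X*cos(az_i) + Y*sin(az_i) (assumed convention: az positive to
    the left), then is convolved with the HRIR pair for its direction.
    hrirs_left/hrirs_right are (n_speakers, n_taps) placeholder impulse
    responses; no room or acoustic crosstalk is simulated."""
    out_l = np.zeros(len(W) + hrirs_left.shape[1] - 1)
    out_r = np.zeros_like(out_l)
    for i, az in enumerate(azimuths):
        feed = W + X * np.cos(az) + Y * np.sin(az)   # virtual-speaker signal
        out_l += np.convolve(feed, hrirs_left[i])
        out_r += np.convolve(feed, hrirs_right[i])
    return out_l, out_r
```

For a frontal plane wave and a left/right-symmetric layout with mirrored HRIRs, both ears receive identical signals, as expected; panning the source off-center introduces the interaural differences directly, with no room in the loop.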

But recording and mixing engineers still find that realistic rendering, such as that from sound field microphones (or binaural dummy head microphones), detracts from creativity in popular genres.

I encourage people with strong mathematics and DSP skills to help them achieve, with spherical harmonic formats (or binaural synthesis), the same artistic freedom they currently have when mixing standard stereo. And of course mono compatibility for radio broadcast.

Cheers!
 
Jan 28, 2018 at 8:51 AM Post #220 of 241
[1]Well, if you want to filter resonances flat you need to know how the resonances are.
[1a] YOU are the one talking about filtering them, not me.
[1b] Equal loudness contours are pretty similar in shape around the frequencies in question (actually above about 500 Hz). You can apply the one corresponding to the intended listening level, such as 80 phons.
[2] The science of acoustics does not answer every question.
[2a] How you EQ is artistic intent, but knowing human hearing helps getting good results.
[2b] We can build "artificial" soundworld WITH the guidance of science.
[2c] I really don't understand why you insist on having unnatural ILD on everything. Why?
[3] Circular arguments? Where?

1. And I can do that by listening!
1a. No I'm not! I am saying I could remove the resonance if I so chose, thereby avoiding the "laws of acoustics". Whether I do in practice or not is PURELY a decision based on my perception and artistic intent and has nothing whatsoever to do with the science of acoustics!
1b. No, they are significantly different, in the critical band and particularly in the lower frequencies and I've no idea how loud the consumer is going to listen. Again, I use my hearing perception, I listen to the mix quietly, listen to it loudly and make a decision/compromise based on artistic intent. Equal loudness contours tell us absolutely nothing about artistic intent.

2. The science of acoustics answers hardly any questions! The science of psycho-acoustics answers far more questions but even then, only a small fraction of the required questions.
2a. No it doesn't! "Good results" is determined entirely by perception, certainly not the science of acoustics and even the science of psycho-acoustics tells us relatively little.
2b. Yes we could, but the guidance of science only allows us to create artificial worlds with the same properties as the natural/real world. That eliminates any composer who used a C19th violin, any composer who used any tuning system other than Pythagorean tuning. Effectively it eliminates virtually all western music of the last 400-500 years! It also eliminates virtually all recordings of the last 50 years or so and also most of the visual art of the last 200 years or so.
2c. Why don't you understand it? I don't know, maybe you just can't read properly or maybe you have a belief which is so unquestionable that you have to lie about what others are "insisting" in order to defend it?

3. Almost everywhere and in every single post! You define musical art/recordings by the science of acoustics and your perception of "natural" and then, when it's pointed out that's patently nonsense because it would eliminate most art, you respond by saying we know nothing about the science of acoustics and art is "bad" if it's not "natural". How much more circular of an argument can you think of, and how many times have you employed it?

G
 
Jan 28, 2018 at 9:56 AM Post #221 of 241
I have not said all music production is against science.
What you have said is that music production played on headphones is against science, to the degree that 98% of everything is wrong.
Well, asking politely instead of demanding with bold letters would help.
I doubt that. Bold came after plain text was ignored.
I don't follow your logic here, but mind you I don't use crossfeed on everything, only about 98 % of the time.
I know you don't follow my logic. That's been eminently apparent through several threads and innumerable posts. 98% is pretty much everything, and I believe you said for the 2% you just back it off, not turn it off, but no matter, 98% is good enough to be pretty much everything. Several of us strongly disagree with that number which you can't substantiate with any data other than your own preference, and you can't follow that logic either. So, good observation!
 
Jan 28, 2018 at 3:02 PM Post #222 of 241
What you have said is that music production played on headphones is against science, to the degree that 98% of everything is wrong.

Channel difference is just a small fraction of music production and you should know that much better than I do.
 
Jan 28, 2018 at 4:01 PM Post #223 of 241
Channel separation is a big part of stereo, isn’t it?
 
Jan 29, 2018 at 5:31 AM Post #224 of 241
Channel separation is a big part of stereo, isn’t it?

Yeah, of course, but it doesn't mean you do ping pong. One should use it wisely.
 
Jan 29, 2018 at 5:33 AM Post #225 of 241
Channel difference is just a small fraction of music production and you should know that much better than I do.

Another beautiful example of a circular argument. We could only "know that much better than" you, if we accept that what you're saying in the first place is true. Nice, a perfectly circular argument, impressive!!

G

EDIT: Oh, yet another beauty:
Yeah, of course, but it doesn't mean you do ping pong. One should use it wisely.
True ONLY IF we agree with what you determine to be "wise"!!
 