What creates soundstage in headphones??
May 17, 2022 at 7:05 AM Post #271 of 288
No problem. I was surprised to see how much negativity your ideas got for no apparent reason.
It isn't the ideas that received the negativity, it was the dogmatic way in which the ideas were presented as established fact, which was then offered as proof of the veracity of the proposed technical solution. I actually would love nothing more than for this idea to be proven correct. I love headphones and would really enjoy such a simple fix to solve an incredibly complex problem.
 
Last edited:
May 17, 2022 at 7:08 AM Post #272 of 288
Again, this could be solved with enough evidence. If several hundred people in blind listening tests all heard the theorized effect as predicted, the theory would have some legs. Simply citing a handful of people who did not listen blind against a control headphone as a comparator, and who certainly already expected or hoped to hear a difference (why else would they have ordered the mesh?), is not evidence of anything but the power of expectation bias.
 
May 17, 2022 at 7:11 AM Post #273 of 288
Yes, soundstage is an illusion created in our brain, but it is based on spatial cues. The quality of spatial cues affects the quality of the soundstage illusion, and if fractals can make better spatial cues then what's the problem? John Massaria should perhaps talk about spatial cues rather than soundstage …
Sound stage in headphones is a perceptual effect based on a combination of “spatial cues”, which cover a wide range of things: stereo time-based echoes/reflections/reverb, the panning of sounds within the mix, crossfeed, frequency content, the individual HRTF, etc.

So what John Massaria should therefore talk about is a rational explanation of how fractal patterns on fibreglass could affect all these things and present some reliable evidence to support that explanation.
Agreed 100%: illusory spatial cues to simulate a larger space is probably better wording and a more accurate expression of what I think it does.
The spatial cues which “simulate a larger space” are; wider panning of the sound elements within the sound mix, a bunch of reverb parameters (such as initial reflection times, balance, spectral content, etc.) relative to the direct sound, cross feed and timing of the signals between the left and right earphones and of course is dependent on an individual’s HRTF. Please explain how a pattern on fibreglass performs all this signal processing.
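To show how concrete this signal processing is, here is a minimal, illustrative sketch of just two of those cues, interaural time difference (ITD) and interaural level difference (ILD). The head model (Woodworth's spherical-head formula) and the gain numbers are textbook-style approximations of my own choosing, not anything proposed in this thread, and real HRTF rendering is far more complex:

```python
import numpy as np

def pan_with_itd_ild(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Place a mono signal using two basic spatial cues:
    interaural time difference (ITD, Woodworth's spherical-head
    formula) and a crude interaural level difference (ILD)."""
    az = np.radians(azimuth_deg)
    itd = (head_radius / c) * (abs(az) + np.sin(abs(az)))  # seconds
    delay = int(round(itd * fs))                           # samples
    gain_near = 10 ** (3 * abs(np.sin(az)) / 20)  # up to ~+3 dB at the near ear
    gain_far = 1.0 / gain_near
    direct = np.concatenate([mono, np.zeros(delay)])
    delayed = np.concatenate([np.zeros(delay), mono])
    if azimuth_deg >= 0:   # source on the right: left ear lags and is quieter
        left, right = gain_far * delayed, gain_near * direct
    else:
        left, right = gain_near * direct, gain_far * delayed
    return np.stack([left, right], axis=1)

fs = 48000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s, 440 Hz
stereo = pan_with_itd_ild(tone, fs, azimuth_deg=60)  # 60 degrees to the right
```

Even this toy version needs an explicit per-channel delay and gain computed from a source position; the point is that these cues are operations on the signal, not properties of a material.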
Science doesn't happen by itself. People figuring things out create new scientific knowledge/understanding. Somebody has to do the figuring out.
That somebody doesn’t have to be ourselves and we don’t have to create new scientific knowledge if it already exists.

G
 
May 17, 2022 at 7:11 AM Post #274 of 288
Spatial cues involve reflected sound and sound that is delayed in time. I don’t see how this could possibly do that.
 
Last edited:
May 17, 2022 at 1:10 PM Post #275 of 288
Sound stage in headphones is a perceptual effect based on a combination of “Spatial cues”, which are a wide range of things, from stereo time based echoes/reflections/reverb, the panning of sounds within the mix, cross feed, freq content, individual HRTF, etc.
Yep.

So what John Massaria should therefore talk about is a rational explanation of how fractal patterns on fibreglass could affect all these things and present some reliable evidence to support that explanation.
Headphones are transducers VERY near the ears. This is likely to generate spatial cues of very near sound sources. These spatial cues get mixed with all the spatial cues in the recording itself, whatever those are, so we end up in a situation where we have contradictory spatial cues (for example, spatial cues of distant sounds in the recording together with spatial cues generated by sound sources very near the ears). This is likely to be spatially compromising, if not plain confusing, to the listener!

How I see John Massaria's idea is that the spatial cues generated by close sound sources are "messed up" by making the sound more diffuse with the fractal structures. This mimics the situation where the sounds have been reflecting in the listening environment before entering the ears. It is kind of "destroying" clear spatial cues of close sound sources so that the spatial cues of the recording itself dominate the processing of spatial hearing, resulting in less contradictory spatiality.

I have not heard this idea "in action" so I can't testify for its performance, but from the point of view of spatial hearing it makes sense (to me at least). I can't see why it could not work. My only doubts are about how much the spatiality can be improved this way.

The spatial cues which “simulate a larger space” are; wider panning of the sound elements within the sound mix, a bunch of reverb parameters (such as initial reflection times, balance, spectral content, etc.) relative to the direct sound, cross feed and timing of the signals between the left and right earphones and of course is dependent on an individual’s HRTF. Please explain how a pattern on fibreglass performs all this signal processing.
I don't think John Massaria claims his fibreglass performs all this signal processing, nor should it. What the fibreglass is supposed to do is explained above, if I have understood this correctly.

That somebody doesn’t have to be ourselves and we don’t have to create new scientific knowledge if it already exists.

G
Well, a lot of science is yet to be discovered. Nobody claims that headphones give perfect spatiality; far from it, most of the time. To me it is important that people have ideas and test them out; that is how progress happens.
 
May 17, 2022 at 4:38 PM Post #276 of 288
How I see John Massaria's idea is that the spatial cues generated by close sound sources are "messed up" by making the sound more diffuse with the fractal structures. This mimics the situation where the sounds have been reflecting in the listening environment before entering the ears.
That does NOT mimic the situation where sounds are reflected in the listening environment before entering the ears, not even close! How do the fractal structures mimic the reflections from reflective surfaces meters away and how does diffusing the signal in each channel/ear cup individually mimic the spatial effects of the listening environment going to both ears? Is it applying some crossfeed or HRTF to the room reflections it’s magically generating?
Well, a lot of science is yet to be discovered.
Sure, we don’t know what happens inside black holes, but we’re not listening with HPs inside black holes. There probably still are some things yet to be discovered about the perception of spatial cues, but a considerable amount has already been discovered, because if it hadn’t we wouldn’t have had digitally generated spatial cues for over 40 years. And …
Nobody claims that headphones give perfect spatiality.
Yes they do. There are quite a few using the latest technology along with personalised HRTFs who feel they can’t tell the difference with a real room/speakers. I’m not one of them but it is starting to get close.

G
 
Last edited:
May 18, 2022 at 2:35 AM Post #277 of 288
Sometimes I think we do this stuff to ourselves. It's blatantly obvious that fabric mesh won't affect soundstage to any audible degree. Soundstage is governed by HRTF and reflections and delays created by sound reacting to actual physical space. To simulate that with headphones, you need HRTF calibration and complex signal processing, not stuff jammed inside the ear cups. Gregorio is explaining all that clearly. It's absurd to think that the fabric over the transducers in your cans is responsible for it sounding like Carnegie Hall. We all know what creates soundstage, so why are we arguing this? I think this stuff keeps going off the rails because people want it to go off the rails for personal reasons that have nothing to do with the subject being discussed. It's silly and just diverts the discussion away from anything meaningful and makes all of us look dumb.
 
Last edited:
May 18, 2022 at 5:12 AM Post #278 of 288
I'm done with defending John Massaria's ideas, because nobody pays me to do this and people here clearly don't even try to understand each other.
I have already said what I think and that's it.
 
May 18, 2022 at 12:15 PM Post #279 of 288
Clearly if his mod achieves something, it’s almost entirely at the frequency response level in a battle between reflection and absorption. Because for the rest, the distances are really small in a cup.
I have zero doubt that such a system can impact imaging in some ways, at least in theory (no idea for the mod shown), because almost any audible change in sound has some effect on imaging. But I'm with the HRTF crew when it comes to turning stereo into something not "lateralized" by headphones.
 
Jun 24, 2022 at 12:01 PM Post #280 of 288
I’m only up to page 4 of this but these videos are very interesting. You don’t have to read all of this but if you’re interested in the subject of this thread watch the videos, especially the first.
He made binaural recordings of 3 speakers with different horizontal radiation patterns in his listening room, one very narrow, two quite wide. The first video is an explanation of what's going on followed by an ABC test (you HAVE to use headphones to listen). The 2nd is a more in-depth explanation of what he's trying to accomplish. It's amazing how different the soundstage is between the recordings of the 2 types. Not sure what exactly this says regarding headphones/IEMs, but it does illustrate how much of soundstage is in the recording, and I'd guess the radiation patterns of headphones/IEMs would have a lot to do with their mechanical(?) soundstage.
 
Jul 7, 2022 at 12:10 AM Post #281 of 288
It took me getting a 2-channel loudspeaker setup to even know what a soundstage was in music listening. And that was after over a decade of high-end headphone listening.

Getting a high-end 2-channel stereo basically made all my desktop headphone gear irrelevant. No matter how good headphones may be, the sound still sounds like it's emanating just outside your ears or from your forehead. I would encourage any audiophile to invest in a 2-channel system. Even $1000 nowadays can get you a decent pair of bookshelf speakers and an integrated amp.
 
Jul 7, 2022 at 12:41 AM Post #282 of 288
It took me getting a 2-channel loudspeaker setup to even know what a soundstage was in music listening. And that was after over a decade of high-end headphone listening.

Getting a high-end 2-channel stereo basically made all my desktop headphone gear irrelevant. No matter how good headphones may be, the sound still sounds like it's emanating just outside your ears or from your forehead. I would encourage any audiophile to invest in a 2-channel system. Even $1000 nowadays can get you a decent pair of bookshelf speakers and an integrated amp.
Yes, this. I have been saying that no matter how much you throw at headphones, they will always be mid-fi due to their lack of real-life soundstage.
 
Oct 26, 2022 at 11:18 PM Post #284 of 288
Step 1: Most FR curves boost lower treble to hell and back.
Step 2: This masks most of the rest of the FR.
Step 3: The frequencies carrying more energy, i.e. 20–100 Hz or 500 Hz, are masked, so your brain can't properly place stuff around you.
Step 4: Boom, your brain guesses where stuff is and now you have an "around you" effect that's inaccurate.

At least that's been my working theory. I do plan on eventually getting a proper binaural recording setup (rather than a Yeti) to test it, though.
I try to move the discussion here where it fits better.

For playback of binaural recordings maybe take a look at this EQ method:
https://www.head-fi.org/threads/fro...y-response-with-eq-only.853443/#post-13561451

For playback of conventional stereo recordings (and multichannel recordings) binaural simulation of loudspeakers in a room can be used.
With free software Impulcifer and a pair of in-ear-mics and possibly an audio interface you can measure real speakers in a room and your headphones (for headphone compensation). These measurements can be used by other software (for example also free HeSuVi) to perform a binaural simulation (of up to 7.1 channels).
https://www.head-fi.org/threads/recording-impulse-responses-for-speaker-virtualization.890719/
Add headtracking and better results are possible: Smyth Realiser A16 (unfortunately very expensive).
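The core of what these tools do is, at heart, convolution of each source channel with a measured binaural room impulse response (BRIR) per ear. Here is a minimal sketch of that step with made-up toy BRIRs; the real tools use measured responses, plus headphone compensation and (on the Realiser) head tracking on top:

```python
import numpy as np

def speaker_virtualization(stereo, brirs):
    """Binaural simulation of two speakers by convolution.
    brirs[src][ear] is a 1-D impulse response for source channel
    `src` reaching ear `ear` (0 = left, 1 = right). Each channel is
    convolved with its left-ear and right-ear BRIR, then the
    contributions are summed per ear."""
    max_len = max(len(brirs[s][e]) for s in range(2) for e in range(2))
    out = np.zeros((stereo.shape[0] + max_len - 1, 2))
    for src in range(2):          # left/right speaker channel
        for ear in range(2):      # left/right ear
            y = np.convolve(stereo[:, src], brirs[src][ear])
            out[:len(y), ear] += y
    return out

# Toy BRIRs: a direct impulse plus one delayed "room reflection"
def toy_brir(reflection_delay, reflection_gain, length=256):
    h = np.zeros(length)
    h[0] = 1.0                        # direct sound
    h[reflection_delay] = reflection_gain  # single early reflection
    return h

brirs = [[toy_brir(40, 0.5), toy_brir(60, 0.3)],   # left speaker -> L/R ear
         [toy_brir(60, 0.3), toy_brir(40, 0.5)]]   # right speaker -> L/R ear
rng = np.random.default_rng(0)
stereo = rng.standard_normal((1000, 2))            # 1000-sample test signal
virtual = speaker_virtualization(stereo, brirs)
```

Note that every speaker channel reaches both ears with its own delays, levels and reflection pattern; that built-in crossfeed is why this has to happen in the signal path rather than independently inside each ear cup.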
 
Oct 26, 2022 at 11:55 PM Post #285 of 288
1) A frequency response curve can be balanced or imbalanced in any frequency range. I don't know where the idea that most of them boost lower treble comes from. I suspect it's a misunderstanding of Fletcher Munson.

2) Masking primarily extends about one octave above the imbalance. A single imbalance couldn't mask content three octaves away, and 500 Hz is three octaves below lower treble.

3) Boosted treble wouldn't mask the low end.

4) Guessing?

For binaural playback, you need a binaural recording that corresponds with your particular HRTF. One size of HRTF doesn't fit all. So making your own recordings to your own measurements is a good idea.
 