Can poor soundstage and imaging be fixed digitally?
Dec 15, 2020 at 7:12 PM Thread Starter Post #1 of 22

iemhater
New Head-Fier · Joined Dec 5, 2020 · Posts: 45 · Likes: 31 · Location: US
I was just thinking that maybe poor soundstage could be fixed with artificial reverb. Do you agree or disagree?

What about imaging? I have no idea what the cause of poor imaging could be.
 
Dec 16, 2020 at 9:48 AM Post #2 of 22
Adding reverb to a track is generally done to give the feeling of a different, bigger room, but it might not do much to move the sound sources away from us or place them more accurately in our imagination. We do make some use of reverb to locate things, like sensing that we're getting close to a wall based on how sounds bounce off it, or feeling that something is really far away (along with the loss of all the high frequencies), but it's not the main way our brain locates sounds.
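To illustrate what that kind of added reverb actually does, here is a toy sketch (Python with NumPy, purely an illustrative assumption, not any product's algorithm): a single feedback delay line smeared over a mono signal. It adds a sense of room, but no left/right differences, so nothing gets relocated.

```python
import numpy as np

def simple_reverb(x, sr=44100, delay_ms=50.0, decay=0.4, wet=0.3):
    """Very basic feedback-delay 'room' effect on a mono signal.

    It only smears energy in time; it adds no interaural (left/right)
    differences, so it widens the sense of space without moving sources.
    """
    d = int(sr * delay_ms / 1000.0)      # delay length in samples
    y = x.astype(np.float64).copy()
    for n in range(d, len(y)):
        y[n] += decay * y[n - d]         # feed the delayed output back in
    return (1.0 - wet) * x + wet * y     # blend dry and reverberated signal

# Example: a one-second decaying noise burst played in the fake 'room'
sr = 44100
dry = np.random.randn(sr) * np.linspace(1.0, 0.0, sr)
processed = simple_reverb(dry, sr)
```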

For imaging, and for placing instruments or any sound source at a given position in space around us, we mostly rely on delay, loudness, and overall frequency response variations between the two ears. Those things can clearly be modified with digital processing, so yes, digital processing can alter how and where we perceive a sound source. The idea of "fixing" is trickier, because it implies that we know where something should be in the first place, which is rarely the case, and that the human brain relies exclusively on sound to form those impressions (which it does not).
We locate things mostly based on frequency and delay cues shaped by the size and shape of our head and ears, and by where we're looking. Intuitively, then, to improve our impression of spatial localization for sound sources, the best method should involve measurements at our own ears and, ideally, head tracking, so the whole mental image doesn't break down any time we move a little. The more of those elements are involved, and the more they are customized to your own body, the more convincing the digital processing can be.
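As a toy illustration of those delay and loudness cues (rough textbook numbers, not anyone's measured HRTF; the function name and values are just assumptions for the sketch):

```python
import numpy as np

def place_source(mono, sr=44100, azimuth_deg=45.0, head_radius_m=0.0875, c=343.0):
    """Toy ITD/ILD positioning: delay and attenuate the signal at the far ear.

    Real localization also involves frequency-dependent HRTF filtering;
    this only models the two simplest cues (interaural time and level).
    """
    az = np.radians(azimuth_deg)
    itd = head_radius_m / c * (az + np.sin(az))   # Woodworth-style ITD estimate (seconds)
    delay = int(round(abs(itd) * sr))             # far-ear delay in samples
    ild_db = 6.0 * abs(np.sin(az))                # crude interaural level difference
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono]) * 10 ** (-ild_db / 20.0)
    left, right = (near, far) if azimuth_deg < 0 else (far, near)
    return np.stack([left, right], axis=1)        # (samples, 2) stereo output

# Pan a noise burst roughly 60 degrees to the right
stereo = place_source(np.random.randn(44100), azimuth_deg=60.0)
```

A real HRTF also filters each ear differently across frequency, which is what the measurement-based solutions below try to capture.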

The basis behind sound localization:
https://en.wikipedia.org/wiki/Head-related_transfer_function
And here is something I stole from a PDF (sorry to the author, I couldn't find the reference) to show the leading concepts of how the sound is altered:
[Attached image: diagram of the main ways the head and ears alter the sound reaching each ear]
If you plan to procure some binaural microphones, or make some from cheap capsules, here is what a fellow Head-Fier has made and is sharing for free to try to simulate the sound of your speakers through your headphones:
https://www.head-fi.org/threads/recording-impulse-responses-for-speaker-virtualization.890719/
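The core operation behind that kind of speaker virtualization is just convolving each channel with the impulse responses captured at your ears. Here is a bare-bones sketch with made-up data (in practice you'd use the IRs you recorded with the binaural mics; names and shapes here are only assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

def virtualize(stereo, ir_ll, ir_lr, ir_rl, ir_rr):
    """Render a stereo track 'through' speaker-to-ear impulse responses.

    ir_ll = left speaker -> left ear, ir_lr = left speaker -> right ear,
    ir_rl = right speaker -> left ear, ir_rr = right speaker -> right ear.
    """
    L, R = stereo[:, 0], stereo[:, 1]
    out_l = fftconvolve(L, ir_ll) + fftconvolve(R, ir_rl)
    out_r = fftconvolve(L, ir_lr) + fftconvolve(R, ir_rr)
    out = np.stack([out_l, out_r], axis=1)
    return out / np.max(np.abs(out))              # normalize to avoid clipping

# Toy usage with synthetic, exponentially decaying impulse responses
sr = 44100
track = np.random.randn(sr, 2) * 0.1
fake_ir = lambda: np.exp(-np.arange(2048) / 300.0) * np.random.randn(2048)
rendered = virtualize(track, fake_ir(), fake_ir(), fake_ir(), fake_ir())
```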

And the more advanced solution, with head tracking and all the bells and whistles, which costs an arm and a leg:
https://www.head-fi.org/threads/smyth-research-realiser-a16.807459/


In terms of non-customized HRTFs, but with head tracking, you can find a few products like the Audeze Mobius, or even the latest AirPods thingy. The latter tries to track your head movement relative to your cellphone or tablet (if I understood correctly, so it wouldn't get confused if you were on a train or just walking outside). And there are several other products doing more of one thing and less of another. The basic concepts obviously remain the same, because it's all made for humans.

So again, "fixing" might be a big ask, but altering, and probably improving, the perceived imaging with DSP? Yes we can! Some products do it pretty well. I'm a huge fan of the A16 myself and use it for hours almost every day.
 
Dec 16, 2020 at 4:28 PM Post #3 of 22
I suppose it's possible with DSP in a multichannel setup, but it's a lot easier to deal with through speaker placement and room acoustics.
 
Jan 5, 2021 at 6:18 AM Post #4 of 22
I was just thinking that maybe poor soundstage could be fixed with artificial reverb. Do you agree or disagree?

What about imaging? I have no idea what the cause of poor imaging could be.
Our god, Amir, has said that a wider soundstage is the result of distortion. It's best when everything sounds like it comes from one point. This is why mono is the best. Anyway, stereo only exists so the companies can sell everything twice.
 
Jan 5, 2021 at 1:22 PM Post #5 of 22
Poor soundstage is due to improper speaker placement and bad room acoustics.
 
Jan 5, 2021 at 8:58 PM Post #7 of 22
Poor soundstage is due to improper speaker placement and bad room acoustics.

I got the VE Monk Plus as my first taste of chi-fi, since Western manufacturers stopped making good earbuds.

There is something wrong with the soundstage/imaging on it. I can't tell whether an instrument is being played in front, behind, above, or below, only left or right, and there's very little variation.

I was using a Sennheiser MX365 for many years, and the soundstage and imaging were far better.
 
Jan 5, 2021 at 9:02 PM Post #8 of 22
Since you put your buds in your left and right ears, you shouldn't be surprised that you only hear separation along that plane. The near/distant cues are secondary distance cues baked into the mix. If you don't hear that, try listening to a recording with a more spacious mix. If you want an up/down axis, get an Atmos speaker system.
 
Jan 5, 2021 at 9:09 PM Post #9 of 22
Since you put your buds in your left and right ears, you shouldn't be surprised that you only hear separation along that plane. The near/distant cues are secondary distance cues baked into the mix. If you don't hear that, try listening to a recording with a more spacious mix. If you want an up/down axis, get an Atmos speaker system.

That's what I assumed, but I really feel like there's a clear difference between the two buds. The VE Monk somehow made the separation worse. How? I have no idea, but I swear it does.
 
Jan 5, 2021 at 9:11 PM Post #10 of 22
Could you have some sort of crossfeed engaged? The shape of the earbud as it relates to your own particular ear canal might make a difference too.
 
Jan 5, 2021 at 9:15 PM Post #11 of 22
Could you have some sort of crossfeed engaged? The shape of the earbud as it relates to your own particular ear canal might make a difference too.

I don't think there was any crossfeed. To test the sound I just played music and surround-sound test videos from YouTube on my laptop. I don't have the volume up so loud that one earbud would leak sound into the other.

Maybe it could be the earbud shape? I have no idea.
 
Jan 5, 2021 at 9:18 PM Post #12 of 22
Did you use the exact same recording both times, and compare within a very short period of time?

It's most likely a frequency response difference that you are interpreting as "soundstage". Most people who use the term soundstage in relation to headphones are using it to describe placebo or differences that have nothing to do with sound placement.
 
Jan 5, 2021 at 9:36 PM Post #13 of 22
Did you use the exact same recording both times, and compare within a very short period of time?

It's most likely a frequency response difference that you are interpreting as "soundstage". Most people who use the term soundstage in relation to headphones are using it to describe placebo or differences that have nothing to do with sound placement.

Yes, I listened to the same tracks. I don't think I confused it with frequency response. I tried using an equalizer to make the frequency responses similar, but it did not seem to help.

I'm also playing an FPS game where sound positioning is crucial. If you can't tell from their footsteps whether someone is in front of or behind you, you lose. I often found myself unable to tell whether someone was in front of me or behind me.

Of course "soundstage" and "imaging" is a hard thing to measure objectively so I have no way of knowing whether I'm just imagining it. I just got some new earbuds today to compare it so I can see if there's any improvement.
 
Jan 5, 2021 at 9:43 PM Post #14 of 22
I have heard a lot of headphones over the years, and every one of them sounded different. Some were "open" and some were "closed". But none of them reproduced things in any way other than left/right. You can synthesize depth cues with mic placement and mixing techniques, but that is a factor of the recording itself, not the transducers. Stereo can only present left and right, not front/back. You need more than two transducers for that.

From the sound of it, you didn't do a very controlled test. It's a given that the responses of the two IEMs are different. They also might fit your ears differently. That surely accounts for the difference you are hearing. The other option is some sort of bias affecting your comparison.
 
Jan 5, 2021 at 10:14 PM Post #15 of 22
That's what I assumed, but I really feel like there's a clear difference between the two buds. The VE Monk somehow made the separation worse. How? I have no idea, but I swear it does.
As our subjective impression comes from a pudding of audio cues (and non-audio ones), it's hard to come up with one sure answer. I'd always point at the frequency response as the main suspect, simply because we encounter pretty wild variations from one bud to the next (or one headphone to the next). As most of our localization cues make use of frequency response, it's fair to assume some amount of impact in that respect when the signature changes significantly.
But other things could be at play sometimes. Some excessive distortion, maybe? Or if your DAP already has a poor crosstalk spec unloaded and the earbud has a very low impedance, then the effective amount of crosstalk might end up being really high and change the perceived presentation, or how distinct the instruments feel to you.
I mention this as an example, but don't become paranoid about crosstalk; it's usually not even worth looking that spec up. I'm just brainstorming and presenting possibilities I can think of.
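For anyone curious, here's a back-of-the-envelope way to see why a low-impedance earbud makes shared-ground crosstalk worse (a rough model with made-up resistance values, not a measurement of any particular DAP):

```python
import math

def crosstalk_db(ground_resistance_ohm, load_impedance_ohm):
    """Rough shared-ground crosstalk estimate.

    The signal leaking into the other channel is approximated by the voltage
    divider formed by the common ground resistance and the load impedance.
    """
    leak = ground_resistance_ohm / (ground_resistance_ohm + load_impedance_ohm)
    return 20.0 * math.log10(leak)

# Same hypothetical 0.5 ohm shared ground, two different loads:
print(crosstalk_db(0.5, 300))   # ~ -55.6 dB with a 300 ohm headphone
print(crosstalk_db(0.5, 16))    # ~ -30.4 dB with a 16 ohm earbud
```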
 
