Reality can. Reproduction can't. But why?
Nov 19, 2020 at 1:30 AM Thread Starter Post #1 of 10

Solan · 500+ Head-Fier · Joined Mar 7, 2006 · Posts: 831 · Likes: 144
I've been pondering this thing about us having to "balance" between bass extension, non-recessed mids, and clear highs. Or between "warmth" and "precision".

The real world itself doesn't have this problem! A voice (mid) doesn't get recessed just because you strike a deep note on the cello (bass) while striking a cymbal (high).

So one should expect it to be perfectly possible to get the same combo of bass extension, non-recessed mids, and clear highs in a true reproduction as was there in the original. So why is that not happening? How are we fumbling it up midway? Is there some kind of sound law that applies to reproductions but doesn't apply to the real world?

Or is the embarrassing answer that it's not really possible in the real, non-reproduced world either, but that we're demanding that our reproductions be better than reality, and just have to balance where the improvement should be?
 
Nov 19, 2020 at 1:41 AM Post #2 of 10
Solan said: [quoting post #1]
In the real world, the issues are similar. Voices can be masked to some extent by other noise, and musical instruments can mask one another too. If someone plucks a guitar louder than is warranted for that musical piece, it will overshadow the vocals and other instruments. One of the jobs of a producer/engineer is to strike the right balance in the recording/mix/master. Likewise, a conductor of an orchestra in a concert.
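To make the "striking the right balance" point concrete, here's a minimal sketch of the kind of level rebalancing an engineer does. The "stems" are synthetic sine tones standing in for real tracks, and the 3 dB margin is an arbitrary illustrative choice, not a mixing rule:

```python
import math

SR = 44100  # sample rate, Hz

# Synthetic one-second "stems" (hypothetical stand-ins for real tracks):
# a quiet 220 Hz "vocal" and a much louder 110 Hz "guitar".
vocal = [0.1 * math.sin(2 * math.pi * 220 * n / SR) for n in range(SR)]
guitar = [0.5 * math.sin(2 * math.pi * 110 * n / SR) for n in range(SR)]

def rms_db(x):
    """RMS level in dB (relative to full scale)."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in x) / len(x)))

# Rebalance: turn the guitar down so the vocal sits 3 dB above it,
# the kind of judgment call a mix engineer makes by ear.
target_margin_db = 3.0
gain_db = (rms_db(vocal) - target_margin_db) - rms_db(guitar)
guitar_balanced = [s * 10 ** (gain_db / 20) for s in guitar]

print(round(rms_db(vocal) - rms_db(guitar_balanced), 2))  # 3.0
```

The point is just that in a mix the balance is imposed by gain decisions; in a live room it's imposed by how hard people play.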
 
Nov 19, 2020 at 2:57 AM Post #4 of 10
Solan said: [quoting post #1]
Yes, it should be possible to get a reproduction very close to the original. But I'm guessing you imagine that people are trying to do just that when they make an album, which is almost never the case:
-Music is not usually recorded at a listener's position, so the sound is different (not that you'd be able to compare with the original unless you were there at the recording).
-Mics aren't necessarily flat; the ones used for singers are rarely measurement microphones (by choice).
-Music is mixed and mastered on speakers to give a certain subjective experience, and usually the original sound is not the target for sound engineers.
-Even on speakers, several methods like panning are subjective tricks, not how sound from a given direction would actually behave if it really came from that direction instead of coming out of 2 speakers.
-The playback, in our case, is done on headphones when the mix was intended to sound a certain way on speakers in a room. So again, various differences will creep in (objective and subjective).
-Most headphones have a frequency response that's vastly inadequate for the person using them. So maybe some of the impressions of imbalance or masking you're thinking about are caused by that? IDK
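On the panning point above: stereo panning is only a gain ratio between two speakers, not a real directional cue. A minimal sketch of the common constant-power (sin/cos) pan law, just to show how simple the trick is:

```python
import math

def constant_power_pan(pan):
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Returns (left_gain, right_gain) using the common constant-power
    (sin/cos) pan law; total power stays constant across positions."""
    theta = (pan + 1) * math.pi / 4  # map [-1, 1] to [0, pi/2]
    return math.cos(theta), math.sin(theta)

# Center: both channels at ~0.707 (-3 dB each), powers summing to 1.
left, right = constant_power_pan(0.0)
print(round(left, 3), round(right, 3))  # 0.707 0.707
```

Note there's no timing or spectral difference between the channels here, which is exactly why a panned source doesn't behave like a sound actually arriving from that direction.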

My point is, I'm not sure what phenomenon you're talking about. But if you record something from inside your ears and play it back on a headphone EQed to give the correct response in your ears, then you can get a playback that is very close to "the real thing". That is certainly possible. But that doesn't settle your question, as it's only the sound entering your ears. You might still not interpret it the way you would a real band in front of you, because you know where the sound is coming from and you feel the headphone on your head. If you move your head, the mental placement might collapse (I've become a walking ad for the Realiser A16). The bass won't shake your body if only the headphone's drivers are shaking.
And of course you're not seeing the guys playing in front of you! In this section we keep pointing out how much of a sound impression can actually be sight. It's foolish to be in this hobby and dismiss that aspect of human senses; they do blend together. So even with basically the same sound entering our ears, some people will never get the same experience. And once the brain starts to juggle conflicting information or assumptions about how the sound will be on headphones, it's hard to accurately predict what happens to the final interpretation. Maybe what you mention would still be there for you? Maybe not. I have no idea.
 
Nov 19, 2020 at 3:08 PM Post #5 of 10
It's about the behavior of one "sound source" versus several different ones.
Take a look at a well-made 3-way speaker.
There are different drivers for different frequency bands, the opposite of most headphones, where one driver has to cover all frequencies (multi-driver headphones excepted, of course).

That's how I understand this question. :wink:

Well, the visual aspect also plays a big role in sound perception, as @castleofargh mentioned above. :)
 
Nov 19, 2020 at 7:21 PM Post #6 of 10
The real world isn't the correct model to compare to. Commercially recorded music isn't engineered to sound "real". It's designed to be more optimized for clarity than reality. A sound mix might have all kinds of EQ and special miking going on in the various voices that don't sound natural at all, but they allow that voice to meld well with other voices in the music. Listen to the way drums sound in a dozen different songs and you're likely to hear a dozen different sound signatures all coming from the same instrument.

But assuming the goal was perfect realism, it would still be impossible to accomplish. The primary difference between the real world and recordings is directionality. In the real world, sounds come at us from all directions, bounce off things, and the reflections come back and reach our ears again and again. That is a very complex thing, and it's not possible to recreate precisely with just two transducers.
 
Nov 19, 2020 at 10:49 PM Post #7 of 10
Another issue that hasn't been raised is the variation in everyone's ear anatomy. There are subtle differences in how a headphone sounds on any given person's head. We each have differently shaped outer ears and different inner-ear hearing. This is one reason why one person may feel a given headphone is more "natural" or has more "detail" while another person won't perceive it that way. Headphone brands study HRTFs across many listeners and statistically find the response that is most widely preferred. I think this is one reason why "realism" is much harder to model with headphones than with speakers (which operate in a real sound field and don't interact with the outer ear as directly as headphones do).
 
Nov 19, 2020 at 11:59 PM Post #8 of 10
We hear reality with the same anatomy we use for recorded music, don't we? I guess the torso and head turning have a significant effect.
 
Nov 20, 2020 at 12:50 AM Post #9 of 10
You cannot get reality. You can only get better than reality.
 
Nov 23, 2020 at 8:30 PM Post #10 of 10
Solan said: [quoting post #1]

Try listening to binaural recordings on a diffuse-field equalized headphone (you can take the SRH1840's frequency response as a guide for full-size headphones, and the ER4B's for IEMs). Most of what you describe happens because mixed stereo audio carries a spatial map that isn't compatible with the one our brain is trained on.

Binaural on DF-tuned headphones gets pretty close to realism, IMO. Note: it's best to close your eyes when listening to binaural for decent front-back perception. It can be made better still with head tracking plus compensation for head panning.
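For anyone curious what a binaural recording captures that panned stereo doesn't: one of the main lateral cues is the interaural time difference (ITD). A rough sketch using the classic Woodworth approximation (the head radius here is a typical textbook average, not anyone's measured value):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at ~20 C
HEAD_RADIUS = 0.0875    # m, a typical average head radius

def itd_seconds(azimuth_deg):
    """Interaural time difference via the Woodworth approximation:
    ITD = (r / c) * (theta + sin(theta)), azimuth theta in radians,
    0 = straight ahead, 90 = directly to one side."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta))

# A source 90 degrees to one side arrives roughly 0.66 ms earlier
# at the near ear; a dummy-head recording bakes this delay in,
# while amplitude panning between two speakers does not.
print(round(itd_seconds(90) * 1000, 2))  # 0.66 (ms)
```

Sub-millisecond delays like this (plus the pinna's spectral filtering) are what the brain reads as direction, which is why binaural playback needs headphones with a known target response to work.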

 
