AES 2012 paper: "Relationship between Perception and Measurement of Headphone Sound Quality"
Jul 15, 2013 at 6:09 PM Post #106 of 135
Which link do you mean?
 
In terms of the effect of channel balance on the sound, I - and I'm sure others - can spot a difference in quality in regular music between a headphone with slight imbalances - the sort you'd expect on any given pair - and the same headphone with those imbalances removed. You can try this yourself: measure both channels of your chosen phones, convolve one channel by the difference, then have a listen before and after. (Though I'll admit this depends on the quality of your convolution, of course.)
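To make the procedure concrete, here's a minimal numpy sketch of the "convolve one channel by the difference" step. Everything here (the toy impulse responses, the regularization constant) is illustrative and not from the post; a real job would smooth the magnitude responses and build a minimum-phase correction filter.

```python
import numpy as np

def match_channels(h_left, h_right, eps=1e-9):
    """Regularized difference filter C = H_L / H_R, so that
    convolving the right channel with c makes it match the left.
    Toy sketch only - real use wants smoothed, minimum-phase EQ."""
    n = len(h_left)
    H_L = np.fft.rfft(h_left, n)
    H_R = np.fft.rfft(h_right, n)
    C = H_L * np.conj(H_R) / (np.abs(H_R) ** 2 + eps)
    return np.fft.irfft(C, n)

# Toy "measurement": the right driver is simply ~3 dB quieter.
rng = np.random.default_rng(0)
h_left = rng.standard_normal(64) * np.exp(-np.arange(64) / 8.0)
h_right = 0.7 * h_left
c = match_channels(h_left, h_right)
corrected = np.convolve(h_right, c)[:64]  # now matches h_left
```

The division is done with a small epsilon so bins where one channel has almost no energy don't blow up the filter.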
 
I'm sure it would be somewhat impractical to intentionally skew the channel balance for a beneficial effect even if there was a benefit to be had. There is, though, the AKG K 340, which are curious in that every pair I've seen measured - two by Tyll and one by me - has had a clear channel imbalance between 2.5 and 4 kHz. I've wondered whether it's by design, consistent wear, or some technical necessity.
 
Jul 15, 2013 at 6:46 PM Post #108 of 135
Thanks.
 
I was under the impression, though, that the non-subject-specific measurements provided in the slides were done with a dummy ear - and presumably just that one ear? Also, when I say the ears perceive differently, I'm referring to what was discussed in the S & W article, i.e. a difference arising not from the ears themselves but from lateralization in the brain (though that's something I'm not very familiar with).
 
Jul 15, 2013 at 6:49 PM Post #109 of 135
If you did do it for some sort of 'benefit', you'd have to remove the 'benefit' from every other instrument/noise on the recording. Any of that sort of thing should be done on the recording, IMO.

Since you can hear these imbalances, you might have greater insight than I would into whether they're perceived as beneficial...
 
Jul 15, 2013 at 7:18 PM Post #112 of 135
Think of it this way: someone standing in front of you saying a word will produce sound that your two ears will receive differently (different frequency response, time delay, even SPL).
Your hearing is used to that.
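Those interaural differences can be put in rough numbers. A hedged sketch using Woodworth's classic spherical-head approximation for the time delay - the formula and the 8.75 cm average head radius are textbook values, not from this thread:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head ITD: (a/c) * (theta + sin theta),
    with theta the source azimuth in radians, a the head radius,
    and c the speed of sound. Zero dead ahead, maximal at 90 deg."""
    th = math.radians(azimuth_deg)
    return (head_radius_m / c) * (th + math.sin(th))

itd_center = itd_seconds(0)   # speaker dead ahead: no delay
itd_side = itd_seconds(90)    # hard to one side: ~0.65 ms
```

Even these sub-millisecond delays, together with the level and spectral differences, are what the brain uses to place the source - which is exactly the machinery headphones largely bypass.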
 
The problem with headphones is that they disable most components of your HRTF, so for example a completely different left outer ear would be "bypassed" with an in-ear headphone.
 
With a full-size headphone for example you could equalize for that, but it's completely personal, so again the ideal is both headphone channels behaving identically.
 
Jul 15, 2013 at 7:39 PM Post #113 of 135
I'm not sure that we'd need to bother with the HRTF in this case, at least not right now. The difference between the two ears I'm talking about doesn't seem to be so much about where in the world the brain thinks the signal is coming from. Rather, it's about which ear gets the signal, and hence which portion of the brain the signal is sent to for processing - fundamentally influencing whether the signal is processed in this way (left ear) or in that way (right ear).
 
Jul 15, 2013 at 7:53 PM Post #114 of 135
If the sound source is dead center, and your ears, head ... are completely symmetrical, but sounds your left/right ear receives are processed differently then why worry about channel imbalance? Doesn't seem to be a "physical" issue we're dealing with.
 
An imbalance, let's say a dip at a certain frequency, may be worse in the right channel than in the left one, but channel imbalance is bad either way.
 
Jul 15, 2013 at 8:44 PM Post #115 of 135
The S & W paper I mentioned suggested that the reason why the majority of mothers instinctively hold their babies on the left was to exploit this difference between the left and right ear, whereby the left ear was more sensitive to melody in speech (I paraphrase from memory).
 
Whether this can be exploited in headphone design was one question I raised, and indeed it wouldn't be simple in any case (since we're on a scientific forum, I won't say impossible). The other question I raised was how a channel imbalance might affect the test conducted in the Harman paper which this thread is about. This I'm sure we could address more tangibly, even if Ronion for some reason doesn't find it important.
 
That is, if - as suggested in the S & W paper via their references - one ear is more sensitive to grammar and the other more sensitive to melody, I don't think it would be unfair to suggest that a singer's voice might be processed differently by the listener's brain depending on which ear it's projected into. That could subconsciously lead to a differing assessment of the quality of the sound produced by the headphones being evaluated, since it's music and not a news broadcast on which the evaluation rests. So if a test track had a singer right in the middle, but channel imbalances in the headphones being tested - imbalances which the phones in the Harman test seem to have had - made the voice stronger in one or the other ear depending on the headphone, there is the potential that you have a confounding variable.
 
Jul 15, 2013 at 9:09 PM Post #116 of 135
@Vid: The reason I think I hold my kids on my left side is that I'm right-handed (I use my right hand to support the weight and my left hand to secure them, which means the kid ends up at my left ear)... Could be that there is something else to it.
 
A channel imbalance may shift the whole image toward the louder side in some strange way, given the lack of crossfeed. However, the left side will still receive quite a bit of information. I don't know how louder-in-left vs. louder-in-right correlates with melodic processing in the brain, but I think it will mess with position perception.
 
Jul 15, 2013 at 9:45 PM Post #117 of 135
If their hypothesis is correct, surely the sounds of the baby would reach both of the mother's ears regardless of which side the baby was held on; I guess it could be about which side of the brain gets priority - but I've no idea about that.
 
Handedness was thought in the paper an unlikely explanation. Other possibilities were discussed and dismissed, among them the suggestion that the left breast was more sensitive than the right. Make of that what you will.
 
Jul 16, 2013 at 1:11 AM Post #120 of 135
The purpose of audio transducers is to reproduce the signal that drives them. Ideally, speakers will have a perfectly flat response; otherwise they fail to do what they're intended to do.

In the case of headphones, the purpose of the transducers plus the cups they're mounted in is to reproduce the signal that drives them at the eardrum. Here the headphones need a response such that headphone response + head transfer function = perfectly flat; otherwise, they are failing to do what they're intended to do. In both cases, the voicing of the left and right speaker/earphone must be identical. (Obviously, I've briefly brushed over the topic in haste; the discussion of free field vs diffuse field, as well as the head transfer function and how it relates to headphones and audio imaging, can be found elsewhere.)
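In dB terms that condition is just an additive complement: the headphone's response must cancel the head transfer function's deviation at every frequency. A toy numpy illustration - the HRTF numbers here are invented for the example; real diffuse-field curves come from measurements:

```python
import numpy as np

# Invented ear-canal gain curve, purely for illustration.
freqs_hz = np.array([100.0, 1000.0, 3000.0, 10000.0])
hrtf_db = np.array([0.0, 2.0, 10.0, 4.0])

# headphone response + head transfer function = flat (0 dB everywhere),
# so the required headphone target is simply the dB complement.
target_headphone_db = -hrtf_db
at_eardrum_db = target_headphone_db + hrtf_db   # flat, as required
```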
 
Differences in the left/right channels are the responsibility of the recording artist/mixer/producer. Most music is mixed in stereo to be listened to by a stereo system. Binaural music is mixed so that when played on ideal headphones, it sounds the same as the stereo mix being played on a stereo system.
 
The differences in perception in the brain of audio heard by the left ear and right ear (if they exist) are "postprocessing" done after the ears have detected the sound field. If you colored the left and right speakers/earphones differently, then the sounds you hear would be weird, or at the very least, an inaccurate reproduction of the sound the recording engineer intended for you to hear. If the left and right channels are intended to (for example) emphasize vocals in the left and melody in the right, then it should be encoded in the audio when it is being mastered in the studio.
 
Could somebody potentially enjoy a different coloring in the left and right channels? Of course! People's personal preferences often extend to deliberate inaccuracies in their sound reproduction equipment (see tube amps and bass boost for common examples). This personal choice in audio reproduction should be implemented by the appropriate application of equalization, balance, and DSP to color the audio in the desired manner; the transducers are then responsible for reproducing that sound as completely and precisely as possible, i.e. without introducing further coloration. Hope this clears things up!
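As one concrete reading of "equalization, balance, and DSP" doing the coloring upstream of neutral transducers, here is a hedged sketch of a constant-power balance control; the unity-at-center normalization is my choice for the example, not something from the post:

```python
import numpy as np

def apply_balance(stereo, balance=0.0):
    """Constant-power balance for an (n, 2) float array.
    balance in [-1, 1]: -1 = full left, 0 = untouched, +1 = full right.
    Gains are scaled so the center position passes audio at unity."""
    theta = (balance + 1.0) * np.pi / 4.0      # maps [-1, 1] -> [0, pi/2]
    g_left = np.cos(theta) * np.sqrt(2.0)
    g_right = np.sin(theta) * np.sqrt(2.0)
    out = stereo.copy()
    out[:, 0] *= g_left
    out[:, 1] *= g_right
    return out
```

The point is the division of labor: preference-shaping lives in code like this, while the headphones themselves stay as close to neutral as possible.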

 
Cheers!
 
