Quote:
Originally Posted by jgazal
I thought that totally open headphones (open not only behind the diaphragm, as in circumaural “open” headphones, but also in front of the diaphragm, firing straight at the outer ear) were better, because there is no enclosure to alter the sound waves (AKG K1000, Sony MDR-F1, etc.).
I thought that the AKG K1000, Sony MDR-F1, etc. sound similar to open-baffle designs, but that type of design tends to cancel bass waves (the wave in phase from the front meets the opposite-phase wave coming from the back). I don’t know how AKG dealt with this problem, but for the Sony MDR-F1 Sony made “acoustic lenses”, a kind of extension intended to delay the back wave and avoid the cancellation (more info here: SONY MDR-F1/CD2000/G74LS NATURAL SOUND HEADPHONES FOR SESSION MONITORING/MASTERING/REMOTE LOCATION SOUND CHECK REVIEW).
On the one hand, when I see the Stax SR-007A, I wonder if it sounds like a bass-reflex design. The front of the diaphragm behaves like a ported/vented enclosure, which might increase bass at its resonance frequency. But that’s only half the story, because the port seems to have immediate access to the open back of the diaphragm, so it’s like a bass-reflex design where, instead of listening right in front of the speaker, you are inside the ported chamber... I don’t know how that should behave acoustically.
On the other hand, when I see the SR-007 or SR-007A with the Blu-Tack mod, I wonder if they sound like a sealed/infinite-baffle design. That type of design tends to demand more power from the amplifier because the diaphragm has to compress sealed air. But again that’s only half the story: instead of listening as if you were right in front of a sealed speaker, you are inside the sealed chamber, which aggravates the ear occlusion. Again, I don’t know how that should behave acoustically.
Altering the ear-pad distance might affect the resonance frequency of the sealed chamber or bass-reflex system (depending on the model and mod) formed between the front of the diaphragm and your ear.
All the enclosures mentioned above have mathematical acoustic models, encompassing room interactions, verified with real-time analyzers, etc. So here is my question: can someone propose a mathematical model for headphone acoustics? That is no simple task, because each outer ear alters the frequency response in its own individual way.
I see only one solution, and that is the Smyth Realiser. Not only do you get the right frequency response, but you also keep your individual Head-Related Transfer Function, which means a real stage.
Am I thinking about this correctly?
Best regards,
Jose Luis
007 Mods
I think you are correct that plugging the SR-007 port makes it a true infinite baffle. However, I didn't notice an obvious reduction in sound level, but I did hear a large bass increase, and in my estimation an excessively large one.
The open port of the 007A presumably does allow some of the back wave into the cup, thus affecting bass response.
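To give a feel for where a ported front chamber might resonate, here is a rough sketch using the standard Helmholtz resonator formula. The port and cavity dimensions below are made-up illustrative numbers, not measured SR-007 values:

```python
import math

def helmholtz_frequency(port_area_m2, port_length_m, cavity_volume_m3,
                        speed_of_sound=343.0):
    """Resonance frequency (Hz) of a Helmholtz resonator.

    Adds a textbook end correction of ~1.7 * port radius
    (both ends flanged) to the physical port length.
    """
    port_radius = math.sqrt(port_area_m2 / math.pi)
    effective_length = port_length_m + 1.7 * port_radius
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        port_area_m2 / (cavity_volume_m3 * effective_length))

# Hypothetical dimensions: a 3 mm diameter port, 2 mm long,
# venting a 20 cm^3 front cavity.
area = math.pi * 0.0015 ** 2          # port cross-section, m^2
print(f"{helmholtz_frequency(area, 0.002, 20e-6):.0f} Hz")
```

With these invented numbers the resonance lands well above the bass region, which mostly shows how sensitive the result is to the (unknown) real dimensions.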
Modelling
I am not certain that any modelling tells you much about what makes a good headphone. Certainly a phone that measures a wide and flat frequency response on the ear should sound good, but any such measurements I have seen were more roller coaster than flat. So I think the issue is mostly: what type of badness sounds best, or least bad?
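As a trivial example of putting a number on "roller coaster versus flat", here is a sketch that scores a response curve by its RMS deviation from its own average level. The readings are invented, not real on-ear measurements:

```python
import math
import statistics

def flatness_db(response_db):
    """RMS deviation (dB) of a response curve from its own mean level.

    0 means perfectly flat; bigger numbers mean more roller coaster.
    """
    mean = statistics.fmean(response_db)
    return math.sqrt(statistics.fmean((x - mean) ** 2 for x in response_db))

# Invented magnitude readings (dB) at log-spaced frequencies:
measured = [0.0, 6.0, -4.0, 3.0, -5.0]
print(round(flatness_db(measured), 1))   # prints 4.1
```

A single number like this of course says nothing about *which* deviations are audible or objectionable, which is exactly the "what type of badness" question.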
HRTFs
In my understanding of transfer functions as applied to audio, the main interest is to see how the physical structures of the outer ear modify sound. There are complex reflections from the pinna which presumably change with the lateral and vertical direction of the sound source. So potentially you could establish the stimulus cues which give or assist vertical and lateral sound perception.
In spite of some claims that vertical perception is important in audio, from my time working around acoustics I don't recall any experimental evidence that vertical sound direction could be perceived with any accuracy.
Even lateral localisation is not wonderful, but it is still good enough to give us the pleasures of stereo. I recall reports of front-back reversals, i.e. listeners sometimes erroneously identified midline sources as being ahead when they were behind, or vice versa.
Auditory Directional Cues
The prime sources of lateral location information are interaural amplitude differences (i.e. loudness differences between the ears) and interaural time differences. In audio recording, the amplitude differences are the main cues; all your balance control does is differentially adjust amplitude.
The time differences are mostly messed up by the vagaries of microphone placement, i.e. the mikes are not placed in positions that correspond to the ears. Once you add in multi-miking and studio mixing, the time cues, if they survive at all, are certainly not what you would hear naturally.
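For what it's worth, the time cue itself is easy to put a number on. A common textbook approximation is the Woodworth spherical-head model; the head radius below is a conventional average, not anyone's actual head:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Interaural time difference (s) from the Woodworth model:
    ITD = (a / c) * (theta + sin(theta)), theta = source azimuth in radians.

    The 8.75 cm head radius is a conventional average, not an individual fit.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 90 degrees to the side gives the maximum ITD:
print(f"{woodworth_itd(90) * 1e6:.0f} us")   # prints 656 us
```

That maximum of roughly 0.66 ms is the entire budget the time cue has to work with, which is some indication of how easily sloppy mike placement can swamp it.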
Of course there is "binaural recording", which places the mikes in the correct ear positions using either a real or a dummy head. These recordings sound much more realistic than most conventional recordings. Probably this is because the two mikes record more correctly matched interaural time and amplitude differences. Some aspects of the HRTF time differences may be more accurately recorded, but unless you had microphones exactly at the position of the ear canal they would not capture the correct HRTF, and the playback driver would also need to be in the same location.
This could not be done with regular phones, but you could do something with a pair of mikes in the ear canals and IEM drivers in the same location. While I have made a number of binaural recordings over the years, I have yet to hear an IEM binaural recording. Of course there would be different HRTFs for different persons, but I would be more than happy to hear such a recording, in effect listening through another person's ears.
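Lacking real in-canal recordings, one can at least sketch the two cues discussed above in code. This is a crude pan using only an interaural delay and a level difference; there is no pinna filtering, so no true HRTF and no vertical information, and the maximum ITD and ILD values are rough assumptions:

```python
import math

def pan_binaural(mono, azimuth_deg, sample_rate=44100,
                 max_itd_s=0.00066, max_ild_db=6.0):
    """Crude binaural placement of a mono signal using only an
    interaural time difference (delay) and an interaural level
    difference (gain). Returns (left, right) sample lists.

    The ~660 us maximum ITD and 6 dB maximum ILD are rough assumptions.
    """
    frac = math.sin(math.radians(azimuth_deg))   # -1 (left) .. +1 (right)
    delay = int(abs(frac) * max_itd_s * sample_rate)
    gain_near = 10 ** (+abs(frac) * max_ild_db / 40)  # split ILD between ears
    gain_far = 10 ** (-abs(frac) * max_ild_db / 40)
    near = [s * gain_near for s in mono]              # louder, earlier ear
    far = [0.0] * delay + [s * gain_far for s in mono]  # quieter, delayed ear
    near += [0.0] * delay                             # pad to equal length
    return (far, near) if frac >= 0 else (near, far)

# Place a short click 90 degrees to the right:
left, right = pan_binaural([1.0, 0.5], 90.0)
```

Played over headphones this gives a clear lateral image but, as with real stereo, no reliable front/back or vertical placement, consistent with the front-back reversals mentioned earlier.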