What is resolution and how is it measured?
Aug 9, 2015 at 1:14 PM Post #31 of 45
Who usually fails first in regard to speed: the amp, or the driver having trouble moving air fast enough at full amplitude? Or is there some kind of self-inertia, with the membrane being "helped" by the pressure changes of the previous waves at the same frequency? My imagination is going a little wild there, and anyway that would only concern high frequencies, and high frequencies don't concern me much ^_^.
 
Aug 9, 2015 at 1:32 PM Post #32 of 45
Good point there ... the transducer, whether speaker or headphone, has far more THD and is far slower than every other component in the chain. ;)
 
What always puzzles me in general:
How can a single driver in a headphone reproduce the music of an orchestra all at once?
I.e. simultaneously reproduce the sound of violins, upright bass, piano and human voice, each in a different position on stage?
 
Aug 9, 2015 at 2:15 PM Post #33 of 45
Linearity is a good start. Superposition is a property of linear systems: an ideally linear transducer has no problem with many or few frequencies, notes, or instruments playing at once.
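A minimal sketch of that superposition point. The "system" here is a toy 3-tap FIR filter standing in for an ideal linear transducer; the tap values, frequencies, and levels are illustrative assumptions, not anything from this thread:

```python
import numpy as np

fs = 48_000                      # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of time

violin = 0.3 * np.sin(2 * np.pi * 440 * t)   # stand-ins for two sources
bass   = 0.5 * np.sin(2 * np.pi * 55 * t)

h = np.array([0.25, 0.5, 0.25])  # linear system: a 3-tap FIR filter

def system(x):
    # Convolution is linear: scale and add inputs, outputs scale and add too.
    return np.convolve(x, h, mode="same")

# Superposition: processing the mix equals mixing the processed parts,
# so "many instruments at once" costs a linear system nothing extra.
mixed_then_filtered = system(violin + bass)
filtered_then_mixed = system(violin) + system(bass)

print(np.allclose(mixed_then_filtered, filtered_then_mixed))  # True
```

Any genuinely linear stage in the chain passes this test; nonlinearity (driver excursion limits, amp clipping) is exactly what breaks it.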
 
 
For audio reproduction we don't (yet) have full 3D sound field recording or playback; individual microphones capture sound very locally, picking up just scalar pressure or roughly 1D velocity.
 
For home playback of commercially recorded music, we mostly have the signals from many recording microphones in various positions in the studio or performance being mixed down to a 2-channel stereo source, or to the few extra channels of the various "surround" formats.
 
Stereo is a standard because the auditory illusion of a spatial spread of recorded sound sources "in between" the 2 speakers in a room can be pretty good. Although a center channel was known to make for an even more stable "image", it wasn't easy to add 3 channels to records or radio signals before modern cheap digital.
 
With headphones it's even simpler: we only have 2 ears and 2 signals. So some of the illusions in room+speaker mastered/produced commercial recordings don't work the same way in headphone listening, and multichannel surround has to be mixed down to the 2 channels.
 
And commercial recording practice today is not about exactly reproducing the 3D sound field reaching any particular audience member at a real live performance; most music today is heavily processed and "enhanced" in the recording and mastering process.
 
Aug 9, 2015 at 5:00 PM Post #34 of 45
Is something like this a realistic take on the job? https://www.propellerheads.se/blog/tools-for-mixing-levels-panning
I've read several similar things on the web, and they all tend to show some kind of more or less standard recipe for panning music that has no relation to where the band was positioned (do they even play together nowadays?).
 
Aug 9, 2015 at 5:20 PM Post #35 of 45
Looks plausible, and probably reflects common practice. Today interchannel delay and room reflections can be modeled in software. The sound mastering/mixing/production range likely runs from "Mickey Mouse" cartoon to "Avatar" advanced CGI, but it's still more about "artistic choice" than seriously trying to be literally realistic.
 
Pan-pot "painting on" a soundstage from close-miked feeds has been pretty standard for the half century plus of stereo mastering; a DAW today can also add an L/R delta delay to improve the sensation of a musical instrument's "position" in the mix.
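Here's a rough sketch of such a pan pot: the common constant-power sin/cos law plus an optional interchannel delay. The function name, pan range, and default values are my own illustrative choices, not any specific DAW's API:

```python
import numpy as np

def pan(mono, pan_pos, fs=48_000, delay_ms=0.0):
    """Split a mono signal into (left, right). Illustrative sketch only.

    pan_pos runs from -1 (hard left) to +1 (hard right); sin/cos gains
    keep total power constant across the sweep. delay_ms optionally
    delays the left channel to nudge the image, the "delta delay" idea.
    """
    theta = (pan_pos + 1) * np.pi / 4        # map [-1, 1] -> [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    d = int(round(delay_ms * fs / 1000))     # delay in whole samples
    if d > 0:
        left = np.concatenate([np.zeros(d), left])[: len(mono)]
    return left, right

tone = np.sin(2 * np.pi * 1000 * np.arange(480) / 48_000)
l, r = pan(tone, 0.0)            # centre: equal energy in both channels
print(np.allclose(np.sum(l**2), np.sum(r**2)))  # True
```

With `pan_pos=-1` the right channel is exactly zero, i.e. the source sits hard in one speaker, which matches the level-panning recipes in articles like the one linked above.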
 
comments from Moulton's site:
...Then, a little later still, I got into some loudspeaker research, and found myself called upon one day to make a research recording, wherein I recorded a batch of clicks with very carefully documented changes in level between the stereo channels.
This was one of those cases where I figured I knew what was going to happen before I started.
Given my golden ears, there just wasn’t much doubt that I could hear the image move as soon as I tweaked the pan-pot even a little, so I decided to calibrate the changes to 1/10th of a decibel, so that I’d be able to really pick out the subtle differences in localization that were going to happen when the levels between channels changed.
 
However, I was very startled to discover that the phantom image didn’t seem to move at all even when the levels between channels changed a whole decibel! I was so startled that I became positive I had made a mistake when preparing the tape!
A little investigation (well, about three hours, including chasing down all the wiring in the monitoring system!) showed me that I hadn’t made a mistake, and when the dust finally settled I had found out something quite interesting: that as long as the difference between channels is less than 3 decibels, the phantom image hovers pretty much in the middle point between the two speakers.
 
I promptly ran this down to my buddies at the local loudspeaker factory and we tried it in the anechoic chamber with blindfolds and people pointing at the imaginary phantom, and it still remained true: with up to 3 dB difference between channels (that’s half-power, remember!) the image didn’t move much, maybe five degrees.
With between 3 and 6 decibels difference in levels, the phantom quickly and without much stability migrated to the louder speaker, hovering just inboard of that speaker, and once the difference was greater than 7 decibels, the phantom was for all intents and purposes coming from the louder speaker.
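Moulton's observation can be summarized as a crude piecewise curve. The 3 dB, 7 dB, and "maybe five degrees" numbers come from his account above; the specific 0.15 scaling and the linear interpolation in between are my own made-up illustration:

```python
def phantom_position(level_diff_db):
    """Rough model of phantom image position vs interchannel level difference.

    Returns 0.0 for dead centre, 1.0 for fully at the louder speaker.
    Breakpoints follow Moulton's informal findings; shapes are guesses.
    """
    d = abs(level_diff_db)
    if d < 3:
        # Under ~3 dB the image barely moves ("maybe five degrees").
        return d / 3 * 0.15
    if d < 7:
        # Between 3 and ~7 dB it migrates quickly toward the louder side.
        return 0.15 + (d - 3) / 4 * 0.85
    # Beyond ~7 dB it is, for all intents, at the louder speaker.
    return 1.0

print(phantom_position(1))   # ~0.05: still effectively centred
print(phantom_position(10))  # 1.0: fully at the louder speaker
```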
 

I think Moulton elsewhere comments on dialing in delay between channels for more phantom control.
 
For even more fun with creating "soundscapes", you could read about "Foley" in motion picture sound.
 
Aug 9, 2015 at 6:13 PM Post #36 of 45
  How can a single driver in a headphone at the same time reproduce the music of an orchestra?
I.e. at the same time reproduce the sound of violins, upright bass, piano and human voice and all in a different position on stage?

 
Same way the air in your ear canal carries the music of an entire orchestra and symphony hall to your ear drum.
 
Angular perception is based on how the shape of your head interacts with the sound falling on it, partially obstructing and partially redirecting it to your ears, with a lot of help from the brain.
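One concrete cue behind that angular perception is the interaural time difference. A classic back-of-the-envelope model is Woodworth's spherical-head approximation; the 8.75 cm head radius below is a conventional assumed value, not a measurement:

```python
import math

def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
    """Extra travel time to the far ear for a source at a given azimuth.

    Woodworth's spherical-head approximation: ITD = (r / c) * (theta + sin theta),
    with theta the azimuth in radians, r an assumed head radius in metres,
    and c the speed of sound in m/s.
    """
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))

print(round(itd_seconds(0) * 1e6))    # 0 us: source dead ahead
print(round(itd_seconds(90) * 1e6))   # ~656 us: source hard to one side
```

Sub-millisecond differences like these, together with level and spectral shading from the head, are what the brain decodes into direction.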
 
Aug 9, 2015 at 7:25 PM Post #37 of 45
   
Same way the air in your ear canal carries the music of an entire orchestra and symphony hall to your ear drum.
 
Angular perception is based on how the shape of your head interacts with the sound falling on it, partially obstructing and partially redirecting it to your ears, with a lot of help from the brain.

 
The ear has different receptors (hair cells on the basilar membrane) that are sensitive to certain frequency bands, and these share the task of analyzing the entire incoming signal.
 
That doesn't really explain how a single driver is able to transmit different frequencies at the same time, e.g. bass and violins.
When you have a 3- or 4-way speaker system, each driver specializes in the section that the crossover sends its way.
A headphone usually uses only one single driver for everything ... how is this feat possible?
 
Aug 9, 2015 at 8:21 PM Post #38 of 45
   
The ear has different receptors (hair cells on the basilar membrane) that are sensitive to certain frequency bands, and these share the task of analyzing the entire incoming signal.
 
That doesn't really explain how a single driver is able to transmit different frequencies at the same time, e.g. bass and violins.
When you have a 3- or 4-way speaker system, each driver specializes in the section that the crossover sends its way.
A headphone usually uses only one single driver for everything ... how is this feat possible?

 
The speaker is playing an audio signal that is the sum of all of the sounds being reproduced, whether that signal is a single person singing a cappella, a 4-person rock band, or a full orchestral ensemble. You could have 100 people talking at normal conversational volume on the deck of an aircraft carrier, and if a jet is in the process of launching, all you will hear is the jet engine's roar. The same idea applies to the audio signal, but it's much more complex, since the masking is not typically a single loud event.
 
I've read that intermodulation distortion can be a problem with a larger driver: it moves a considerable distance when producing low frequencies, and that motion can create a Doppler effect on the higher frequencies, which causes the distortion. This is why many home speakers use multiple drivers, to minimize the potential impact of IMD. Headphones get away with a single driver because it is very small compared to a room speaker, and its movement is too small to normally cause IMD in this manner.
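That Doppler mechanism can be sketched numerically: low-frequency cone travel phase-modulates a high tone riding on it, putting sidebands around the high tone that were never in the input. The 5 mm excursion and the two frequencies below are made-up illustrative values:

```python
import numpy as np

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)          # 1 second of signal

f_lo, f_hi = 50.0, 5_000.0             # bass tone and treble tone, Hz
excursion = 0.005                      # 5 mm peak cone travel (illustrative)
c = 343.0                              # speed of sound, m/s

# The cone's low-frequency position changes the path length to the
# listener, i.e. the arrival phase of the high tone: Doppler distortion.
cone = excursion * np.sin(2 * np.pi * f_lo * t)
doppler = np.sin(2 * np.pi * f_hi * (t - cone / c))

spectrum = np.abs(np.fft.rfft(doppler * np.hanning(len(t))))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Sidebands show up at f_hi +/- f_lo: energy that was not in the input.
side = spectrum[np.argmin(np.abs(freqs - (f_hi + f_lo)))]
peak = spectrum[np.argmin(np.abs(freqs - f_hi))]
print(side / peak > 0.01)   # True: a clearly non-negligible sideband
```

Shrink `excursion` toward headphone-scale micrometre travel and the sideband ratio collapses, which is the point made above about why small drivers largely dodge this.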
 
Aug 19, 2015 at 10:00 AM Post #39 of 45
The ear has different receptors (hair cells on the basilar membrane) that are sensitive to certain frequency bands, and these share the task of analyzing the entire incoming signal.

That doesn't really explain how a single driver is able to transmit different frequencies at the same time, e.g. bass and violins.
When you have a 3- or 4-way speaker system, each driver specializes in the section that the crossover sends its way.
A headphone usually uses only one single driver for everything ... how is this feat possible?


Hard to put into words..

Imagine the driver diaphragm swinging in and out to reproduce a constant low-frequency tone. Then a high-frequency signal comes along. The diaphragm makes a faster swing of shorter displacement to reproduce the high frequency while still making the original longer swing that produces the low frequency. The diaphragm may make that shorter high-frequency swing at any point along the low-frequency swing's displacement, not necessarily originating from the center.
 
Aug 19, 2015 at 10:03 AM Post #40 of 45
Hard to put into words..

Imagine the driver diaphragm swinging in and out to reproduce a constant low-frequency tone. Then a high-frequency signal comes along. The diaphragm makes a faster swing of shorter displacement to reproduce the high frequency while still making the original longer swing that produces the low frequency. The diaphragm may make that shorter high-frequency swing at any point along the low-frequency swing's displacement, not necessarily originating from the center.

 
Another way to think about it: wiggle your finger up and down quickly while you move your arm up and down slowly; that's two frequencies at once.
 
Aug 19, 2015 at 1:01 PM Post #41 of 45
   
The ear has different receptors (hair cells on the basilar membrane) that are sensitive to certain frequency bands, and these share the task of analyzing the entire incoming signal.
 
That doesn't really explain how a single driver is able to transmit different frequencies at the same time, e.g. bass and violins.
When you have a 3- or 4-way speaker system, each driver specializes in the section that the crossover sends its way.
A headphone usually uses only one single driver for everything ... how is this feat possible?

The signal is simply air pressure as a function of time, though: at any given instant, the pressure has a single, distinct value, no matter how many separate sounds went into the overall experience. The total sound of anything, whether a full symphony orchestra, a solo flute, or a death metal band, when it arrives at your ear is just air pressure varying with time, in a single waveform. The speaker just has to replicate that waveform.
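The "one value per instant" point can be shown in a couple of lines. The two tones below are arbitrary stand-ins for a whole mix:

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.02, 1 / fs)         # 20 ms of time

flute   = 0.2 * np.sin(2 * np.pi * 880 * t)   # illustrative sources
upright = 0.6 * np.sin(2 * np.pi * 41 * t)

# However many sources you add, the result stays one number per sample:
# a single pressure (or voltage) trace for the driver to follow.
orchestra = flute + upright

print(orchestra.shape == t.shape)      # True: still a single 1-D waveform
```

The ear's own frequency analysis then takes the two tones back apart, which is why nothing in the playback chain has to keep them separate.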
 
Aug 19, 2015 at 2:33 PM Post #42 of 45
I guess exactly these kinds of thoughts led to the MP3 format ....
 
Still, if you imagine one speaker membrane generating all these different frequencies at the same time, they come across clearly separated: you can focus on certain sections of the orchestra in a concert hall, and you can still do the same at home. The difference is that live you have, e.g., 60 to 80 sound sources and at home you have 2, yet the impression is amazingly close to the real thing.
 
Aug 19, 2015 at 3:07 PM Post #43 of 45
I guess exactly these kinds of thoughts led to the MP3 format ....
 
Still, if you imagine one speaker membrane generating all these different frequencies at the same time, they come across clearly separated: you can focus on certain sections of the orchestra in a concert hall, and you can still do the same at home. The difference is that live you have, e.g., 60 to 80 sound sources and at home you have 2, yet the impression is amazingly close to the real thing.


But that's the point: all the speaker does is move from one position to the next, trying to follow the shape of the amplitude-versus-time electrical signal. Just as with sample rate, if the speaker can move fast enough to correctly reproduce a 20 kHz signal, then everything else is fine too, because everything else requires slower movement.
 
Well, not exactly true, because low frequencies have problems of their own inside the speaker, but the "how to do so many frequencies at once" part really is as simple as my explanation.
 
Aug 19, 2015 at 6:56 PM Post #44 of 45
I guess exactly these kinds of thoughts led to the MP3 format ....
 

Not at all.
 
To understand how MP3 works, you have to understand something far, far more complex: perceptual masking.
 
People were pretty sure that something like MP3 would be needed as soon as digital audio appeared. But something like MP3 seemed like mission impossible until some very critical research into human perception was completed and proven.
 
Lossy compression was in general use for digital telephony, as implemented in long-distance service and digital network switching, for at least 20 years before MP3.
 
Unfortunately, it sounded like a telephone.
 
Aug 19, 2015 at 6:58 PM Post #45 of 45
   
The ear has different receptors (basilar hairs) that are sensitive to certain frequency bands and these share the task to basically analyze the entire signal that is coming in.
 
That doesn't really explain how a single driver is able to transmit different frequencies at the same time e. g. bass and violins.
When you have a 3 or 4 way system speaker each driver specializes in the section that the crossover sends it's way.
A head phone is usually only using one single driver for everything ... how is this feat possible?

 
This feat is possible because effective loudspeakers are sufficiently linear that the various frequencies don't interfere with or intermodulate each other.
 
