Can we hear above 20 kHz? Might be asking the wrong question
Jul 9, 2016 at 1:05 AM Post #16 of 30
Well, firstly, you're talking about Auditory Scene Analysis, a non-trivial feat in and of itself. You're talking about taking two waveforms (one from each ear) and decomposing them into any number of original sound sources, then making out which sounds each of those sources is producing. This is not a closed-form problem; for any given input waveform there are infinite possibilities for the original sound sources and their combinations. (You may have had the experience of being in a somewhat noisy environment and suddenly imagining that you're hearing a sound within that noise, which you later find by other means to be simply a figment of your "imagination". That could actually be an example of mistaken scene analysis.)

Now, different sound sources can add to and subtract from each other in unpredictable ways (e.g. total silence could in fact be composed of a world of noise together with a 100% effective pair of active noise-cancelling earphones on your head). The imperative of the brain is to make out the sounds that matter to you in this jumble, and in this jumble it doesn't matter if it misses some small sounds, particularly because your attention is limited and can only handle so many things at once. If a lion suddenly roars behind you, it's important that you hear that roar and unimportant that you hear some leaves rustling in front of you. On the other hand, if all that's to be heard is leaves rustling, it's important that you hear that, as you may be quietly stalked by some predator or other. In any event, the sound of leaves rustling would be overwhelmed by the sound of the lion roaring in the first case, both temporally and in the frequency spectrum. We do not know in advance what the lion's roar sounds like: the input waveform could be a lion roaring plus some leaves rustling, or it could just as well be a single lion whose voice differs by a tiny bit. Either way, what's important is that your brain registers that there's a lion roaring.
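To make the non-uniqueness concrete, here is a minimal numerical sketch (the "sources" are made-up sinusoids I chose for illustration; any audio arrays would do). Two entirely different sets of sources sum to the exact same waveform, so no algorithm can recover "the" original sources from the mixture alone:

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)  # one second at 44.1 kHz

# Decomposition 1: a 440 Hz "roar" plus a quiet 3 kHz "rustle"
roar = np.sin(2 * np.pi * 440 * t)
rustle = 0.05 * np.sin(2 * np.pi * 3000 * t)
mixture_1 = roar + rustle

# Decomposition 2: the same waveform built from two totally different "sources",
# one of which is a perfect anti-phase copy of the other's extra tone
# (the idealized noise-cancellation case mentioned above)
extra = np.sin(2 * np.pi * 1000 * t)
source_a = mixture_1 + extra
source_b = -extra
mixture_2 = source_a + source_b

print(np.allclose(mixture_1, mixture_2))  # True: same waveform, different sources
```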

The above are the reasons why I think it is reasonable to expect small signals that one can hear in isolation to be drowned out by a big simultaneous signal. Of course, these expectations have been borne out in masking experiments. And if that can happen to small signals one *can* hear in isolation, it seems far easier for it to happen to small signals that one *can't* even hear in isolation.

If the above sounds quite ad hoc, it is: my university education in cognitive science covered audition only as a side subject, although I did my final-year thesis on getting a computer to recognize musical notes from an audio recording. All of this is, in any case, more than a decade in my past. If you want to read up on the subject seriously, I suggest the book Auditory Scene Analysis by Albert S. Bregman. It was reference reading for my final-year thesis, although I wish I had had time to finish the whole of it.
 
Jul 9, 2016 at 1:36 AM Post #17 of 30
  Okay, good.
 
I know I sound pedantic, but the problem is that on this forum I get replies that try to pull me off my central question. The only thing that seems to work is being methodical.
 
Of course you can say anything you want. You can answer or not answer. I'm just trying to make my central question clear.
 
I know you are asking what the heck my question is. I'm taking a big-picture view of audio theory, design, and testing. Notice that audiophiles and "objectivists" have two different paradigms: they start from different places and end up at different conclusions. So I'm going back to basic questions, like: how do we even get started answering whether A and B are different? How do we test a DAC to see how well it performs?
 
You have made a claim that, should it be true, does indeed help us with these tasks: that there is little likelihood of any inverse masking effect.
 
That does indeed go to the heart of audio design and testing.
 
But when you say "no experiments have revealed inverse masking," you are expressing a confidence that the area has been explored well. 
 
And that may be the case.
 
But there is a pretty big world of signals, if we consider all possible signals A and B. And there is a big world of listening contexts if we consider all possible listening protocols. 
 
So there is some reason you are confident that this territory has been explored well. Right?
 
I can think of one answer. Linear systems theory tells us that a signal can be transformed into the frequency domain, and we know that the ear operates largely on the frequency-domain side of things. Therefore we can be systematic in constructing test signals by dividing the spectrum into bands and varying the amplitude within each band.
 
In other words, if we have some theory about the ear and brain, it helps us to be systematic in exploring the likely territory. Agree?
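As a minimal sketch of what I mean, assuming nothing beyond the band idea itself (the band edges and gains below are arbitrary, chosen only to show the construction):

```python
import numpy as np

fs = 48000                # sample rate in Hz
n = fs                    # one second of signal
rng = np.random.default_rng(0)

# Start from flat white noise, then shape it band by band in the frequency domain.
spectrum = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, d=1 / fs)

# A few coarse bands with per-band linear gains (all values arbitrary).
bands = [(20, 200, 1.0), (200, 2000, 0.5), (2000, 8000, 2.0), (8000, 20000, 0.1)]
for lo, hi, gain in bands:
    spectrum[(freqs >= lo) & (freqs < hi)] *= gain
spectrum[(freqs < 20) | (freqs >= 20000)] = 0  # keep it inside the audio band

test_signal = np.fft.irfft(spectrum, n)
test_signal /= np.max(np.abs(test_signal))     # normalize for playback
```

Sweeping the per-band gains systematically would then cover the space of test signals band by band, which is the kind of systematic exploration I have in mind.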


If you are looking for the equivalent of a unified field theory for audio, spelled out point by point, it doesn't exist to my knowledge. For that matter, it doesn't exist in the physical sciences either.
 
In the other digital thread I listed a couple of books which are a good place for you to start.
 
As another poster has already said, there is a lot of data reduction going on, to the point that the auditory information that makes it to the brain is like a medium-bit-rate MP3. You do then have the huge pattern-matching computing power of the brain to process that as it sees fit (and evolution decided what was fit), but plenty of the work was throwing away information not needed. Given that, it isn't surprising that auditory discrimination is at its highest when the signal is simple. This is exactly the opposite of the audiophile idea that long-term listening to complex music gives one the best chance to catch small differences. Once you see how the general hierarchy of hearing works, the audiophile idea is ridiculous, and long testing bears that out. With simple test tones, humans can pick up distortion at around 0.1% THD; go lower and you can't tell whether it's there or not. Using music, you are doing well if you pick it up at 1% THD. The list of such effects is long: loudness, frequency, and so on. I mean, really, which is easier to hear: one tone of a pair getting louder, or one tone of 25 getting louder? Yet audiophile myths would lead you to think the latter is likely.
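If you want to test this on yourself, here is a rough sketch of how one might generate that kind of test tone with the distortion level as a parameter. It assumes a single second harmonic, in which case THD is simply the harmonic-to-fundamental amplitude ratio (0.1% is -60 dB, 1% is -40 dB); real device distortion profiles are of course more complicated than this:

```python
import numpy as np

def tone_with_thd(f0=1000.0, thd=0.001, fs=48000, seconds=2.0):
    """Sine test tone plus one second harmonic sized to the requested THD.

    With a single harmonic, THD equals the harmonic-to-fundamental amplitude
    ratio: thd=0.001 is 0.1% THD (-60 dB), thd=0.01 is 1% THD (-40 dB).
    """
    t = np.arange(int(fs * seconds)) / fs
    signal = np.sin(2 * np.pi * f0 * t) + thd * np.sin(2 * np.pi * 2 * f0 * t)
    return signal / np.max(np.abs(signal))  # normalize for playback

near_threshold = tone_with_thd(thd=0.001)  # around the ~0.1% figure for tones
music_level = tone_with_thd(thd=0.01)      # around the ~1% figure for music
```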

The basic parameters of hearing are in. There are plenty of interesting, more complex places to look at hearing, but I bet ten times as much has been done already as you realize.
 
So the total package in hearing is the mechanism itself and its physical performance envelope, the response of the nerves to that, and how all of that is processed in the brain: the complete psychophysical picture. Thankfully, in most parameters our electronics push the physically capable limits and are way beyond hearing itself. Mics and speakers aren't there yet.
 
So go read those texts I posted or go read some online material on psychophysics. 
 
Jul 9, 2016 at 6:06 AM Post #18 of 30
Well, firstly, you're talking about Auditory Scene Analysis, a non-trivial feat in and of itself. You're talking about taking two waveforms (one from each ear) and decomposing them into any number of original sound sources, then making out which sounds each of those sources is producing. This is not a closed-form problem; for any given input waveform there are infinite possibilities for the original sound sources and their combinations. (You may have had the experience of being in a somewhat noisy environment and suddenly imagining that you're hearing a sound within that noise, which you later find by other means to be simply a figment of your "imagination". That could actually be an example of mistaken scene analysis.)

Now, different sound sources can add to and subtract from each other in unpredictable ways (e.g. total silence could in fact be composed of a world of noise together with a 100% effective pair of active noise-cancelling earphones on your head). The imperative of the brain is to make out the sounds that matter to you in this jumble, and in this jumble it doesn't matter if it misses some small sounds, particularly because your attention is limited and can only handle so many things at once. If a lion suddenly roars behind you, it's important that you hear that roar and unimportant that you hear some leaves rustling in front of you. On the other hand, if all that's to be heard is leaves rustling, it's important that you hear that, as you may be quietly stalked by some predator or other. In any event, the sound of leaves rustling would be overwhelmed by the sound of the lion roaring in the first case, both temporally and in the frequency spectrum. We do not know in advance what the lion's roar sounds like: the input waveform could be a lion roaring plus some leaves rustling, or it could just as well be a single lion whose voice differs by a tiny bit. Either way, what's important is that your brain registers that there's a lion roaring.

The above are the reasons why I think it is reasonable to expect small signals that one can hear in isolation to be drowned out by a big simultaneous signal. Of course, these expectations have been borne out in masking experiments. And if that can happen to small signals one *can* hear in isolation, it seems far easier for it to happen to small signals that one *can't* even hear in isolation.

If the above sounds quite ad hoc, it is: my university education in cognitive science covered audition only as a side subject, although I did my final-year thesis on getting a computer to recognize musical notes from an audio recording. All of this is, in any case, more than a decade in my past. If you want to read up on the subject seriously, I suggest the book Auditory Scene Analysis by Albert S. Bregman. It was reference reading for my final-year thesis, although I wish I had had time to finish the whole of it.

 
I will try to find the texts at the local university library.
 
I have two concerns at this point.
 
(1) Let's say we have signals A and B, and let's say the difference A-B is very small. Then we ask whether we can hear the difference between A and B. You are framing this problem by saying the signal (A-B) is added to B, and then asking whether B masks it (see the sketch after these two points). So this is only useful if masking theory applies in this context, that is, if its results can be extended to the signals (A-B) and B. Clearly you have some confidence that they can. That leads me to wonder what the test signals and protocols were in developing masking theory, and whether they can be generalized.
 
(2) You speak of the functioning of the ear in a survival context, or a basic human-functioning context (communication, finding food, staying safe, etc.). But are those results extendable to music? Music is an activity of the brain that seems to emerge from more basic survival functions, but it is not a survival function itself.
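For concreteness, here is that framing from point (1) in a few lines of Python. The signals are placeholders I made up (a tone plus a tiny added component), standing in for, say, two time-aligned and level-matched DAC captures:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 1000 * t)             # signal A
b = a + 1e-4 * np.sin(2 * np.pi * 3000 * t)  # signal B: A plus a tiny component

difference = b - a  # the residual; B is exactly A + difference

# The masking framing: hearing "A vs B" is hearing `difference` in the
# presence of A, which is what masking theory claims to predict.
residual_db = 20 * np.log10(np.max(np.abs(difference)) / np.max(np.abs(a)))
print(f"residual peak is {residual_db:.0f} dB relative to the signal peak")
```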
 
Jul 9, 2016 at 6:10 AM Post #19 of 30
 
If you are looking for the equivalent of a unified field theory for audio, spelled out point by point, it doesn't exist to my knowledge. For that matter, it doesn't exist in the physical sciences either.
 
In the other digital thread I listed a couple of books which are a good place for you to start.
 
As another poster has already said, there is a lot of data reduction going on, to the point that the auditory information that makes it to the brain is like a medium-bit-rate MP3. You do then have the huge pattern-matching computing power of the brain to process that as it sees fit (and evolution decided what was fit), but plenty of the work was throwing away information not needed. Given that, it isn't surprising that auditory discrimination is at its highest when the signal is simple. This is exactly the opposite of the audiophile idea that long-term listening to complex music gives one the best chance to catch small differences. Once you see how the general hierarchy of hearing works, the audiophile idea is ridiculous, and long testing bears that out. With simple test tones, humans can pick up distortion at around 0.1% THD; go lower and you can't tell whether it's there or not. Using music, you are doing well if you pick it up at 1% THD. The list of such effects is long: loudness, frequency, and so on. I mean, really, which is easier to hear: one tone of a pair getting louder, or one tone of 25 getting louder? Yet audiophile myths would lead you to think the latter is likely.

The basic parameters of hearing are in. There are plenty of interesting, more complex places to look at hearing, but I bet ten times as much has been done already as you realize.
 
So the total package in hearing is the mechanism itself and its physical performance envelope, the response of the nerves to that, and how all of that is processed in the brain: the complete psychophysical picture. Thankfully, in most parameters our electronics push the physically capable limits and are way beyond hearing itself. Mics and speakers aren't there yet.
 
So go read those texts I posted or go read some online material on psychophysics. 

Yes, I will look for the books. 
 
It's kind of funny, though, that first you give me no general answer at all, and then, when I press you for any kind of general answer, even the most basic outline of one, you complain that I'm demanding a "unified theory". There's a lot of ground in between those two poles, you know.
 
I realize a lot of work has been done to model the functioning of the ear, but my concern is that it's a crude model that can't capture the subtleties of music. I will look to see what contexts and test signals were used to develop masking theory. I'm concerned that those contexts may not generalize to a situation such as a highly trained conductor listening to an orchestra; maybe they are more applicable to a non-musician listening to pop music.
 
Jul 9, 2016 at 2:30 PM Post #20 of 30
  Yes, I will look for the books. 
 
It's kind of funny, though, that first you give me no general answer at all, and then, when I press you for any kind of general answer, even the most basic outline of one, you complain that I'm demanding a "unified theory". There's a lot of ground in between those two poles, you know.
 
I realize a lot of work has been done to model the functioning of the ear, but my concern is that it's a crude model that can't capture the subtleties of music. I will look to see what contexts and test signals were used to develop masking theory. I'm concerned that those contexts may not generalize to a situation such as a highly trained conductor listening to an orchestra; maybe they are more applicable to a non-musician listening to pop music.


Your three thread titles sound like a typical high-end audiophile looking for what is wrong with conventional engineering of gear and with our knowledge of hearing.
 
As for general vs. specific, it is as if I gave you Newton's three laws of motion and answered a question about gravity by saying that on Earth gravity accelerates objects at 32 ft/sec². A nice general answer, the kind you were looking to find. Then I point out that a leaf or a parachute won't drop the way that principle might make you think, and you say I am providing nothing except special cases.
 
In regard to masking: louder tones mask quieter tones. What does that mean? A 1 kHz tone at a 90 dB sound level will mask a 1.1 kHz tone at a 60 dB sound level, meaning I could present the 1 kHz tone to you and you could not tell when I added or removed the 1.1 kHz tone, as it was masked. More general principles: the closer in frequency two tones are, the more masking occurs. I present a 1 kHz tone at 90 dB, and perhaps it even masks an 85 dB tone at 1.1 kHz, while a 1 kHz tone at 90 dB will not mask 1.5 kHz at 85 dB; you will hear when the second tone is added. Another general principle is that lower-frequency tones mask higher frequencies more than they mask lower ones, so 1 kHz at 90 dB may mask 85 dB at 1.1 kHz and yet not mask 900 Hz at 85 dB. There is also masking in time: a tone may mask another tone for a period after it stops. A 1 kHz tone at 90 dB, when stopped, might prevent you from hearing 1.1 kHz for a couple hundred milliseconds. There is even an unusual situation called backward masking, where a loud tone occurring after another tone can mask the tone that has already happened; that kind of masking lasts only briefly and only over very similar frequencies.
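Just to caricature those rules of thumb in code (the slopes here are invented purely for illustration; real masking curves are measured, and the link below has the actual shapes):

```python
def masked(masker_hz, masker_db, probe_hz, probe_db):
    """Crude caricature of simultaneous masking using the rules of thumb above.

    Masking falls off with relative frequency distance, and falls off faster
    below the masker than above it (the upward spread of masking). The slope
    numbers are invented for illustration, not measured data.
    """
    rel_distance = abs(probe_hz - masker_hz) / masker_hz
    slope = 60 if probe_hz > masker_hz else 120   # dB per unit of relative distance
    return probe_db < masker_db - rel_distance * slope

print(masked(1000, 90, 1100, 60))  # True: quiet nearby tone is masked
print(masked(1000, 90, 1500, 85))  # False: too far away in frequency
print(masked(1000, 90, 900, 85))   # False: downward masking is weaker
```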
 
Here is some info on masking if you want to read it.
 
https://ccrma.stanford.edu/~bosse/proj/node9.html
 
Jul 9, 2016 at 5:12 PM Post #21 of 30
 
Your three thread titles sound like a typical high-end audiophile looking for what is wrong with conventional engineering of gear and with our knowledge of hearing.
 
As for general vs. specific, it is as if I gave you Newton's three laws of motion and answered a question about gravity by saying that on Earth gravity accelerates objects at 32 ft/sec². A nice general answer, the kind you were looking to find. Then I point out that a leaf or a parachute won't drop the way that principle might make you think, and you say I am providing nothing except special cases.
 
In regard to masking: louder tones mask quieter tones. What does that mean? A 1 kHz tone at a 90 dB sound level will mask a 1.1 kHz tone at a 60 dB sound level, meaning I could present the 1 kHz tone to you and you could not tell when I added or removed the 1.1 kHz tone, as it was masked. More general principles: the closer in frequency two tones are, the more masking occurs. I present a 1 kHz tone at 90 dB, and perhaps it even masks an 85 dB tone at 1.1 kHz, while a 1 kHz tone at 90 dB will not mask 1.5 kHz at 85 dB; you will hear when the second tone is added. Another general principle is that lower-frequency tones mask higher frequencies more than they mask lower ones, so 1 kHz at 90 dB may mask 85 dB at 1.1 kHz and yet not mask 900 Hz at 85 dB. There is also masking in time: a tone may mask another tone for a period after it stops. A 1 kHz tone at 90 dB, when stopped, might prevent you from hearing 1.1 kHz for a couple hundred milliseconds. There is even an unusual situation called backward masking, where a loud tone occurring after another tone can mask the tone that has already happened; that kind of masking lasts only briefly and only over very similar frequencies.
 
Here is some info on masking if you want to read it.
 
https://ccrma.stanford.edu/~bosse/proj/node9.html

 
It appears that you know more about the question (are A and B different?) than you were posting initially. Maybe you organize knowledge in a different way than I do.
 
Was masking theory developed with test signals or real music? What is the evidence that the masking model of a non-musician's ear generalizes to, say, a professional orchestra conductor?
 
Jul 9, 2016 at 5:31 PM Post #22 of 30
   
It appears that you know more about the question (are A and B different?) than you were posting initially. Maybe you organize knowledge in a different way than I do.
 
Was masking theory developed with test signals or real music? What is the evidence that the masking model of a non-musician's ear generalizes to, say, a professional orchestra conductor?


Sorry, I didn't realize my posting about one topic was to be representative of my sum total knowledge of things audio. Masking data came mostly from test tones, then speech, then other things like natural sounds and music. As for the latter, I don't know that it has been specifically investigated. Some investigation into the general hearing of different language groups, and of humans in general, would make me think there are some places where the training and experience of a conductor matter a little bit, but not enough to be wildly different from the human norm. There are engine mechanics with a finely tuned ear for engine sounds, too, who haven't been investigated as a group.
 
Jul 9, 2016 at 6:23 PM Post #23 of 30
 
Sorry, I didn't realize my posting about one topic was to be representative of my sum total knowledge of things audio. Masking data came mostly from test tones, then speech, then other things like natural sounds and music. As for the latter, I don't know that it has been specifically investigated. Some investigation into the general hearing of different language groups, and of humans in general, would make me think there are some places where the training and experience of a conductor matter a little bit, but not enough to be wildly different from the human norm. There are engine mechanics with a finely tuned ear for engine sounds, too, who haven't been investigated as a group.


If we are interested in audio fidelity, it seems to me that it's critical to investigate it for listeners who are highly trained in perceiving music, in particular musicians. I believe that the phenomenon of music, in its coarser aspects (as represented by notes and rhythms, or whatever features are obvious even in low-fidelity recordings), is considered a valid topic for scientific study, so why not study what happens when a musician develops their perception?
 
Jul 9, 2016 at 7:36 PM Post #24 of 30
If we are interested in audio fidelity, it seems to me that it's critical to investigate it for listeners who are highly trained in perceiving music, in particular musicians. I believe that the phenomenon of music, in its coarser aspects (as represented by notes and rhythms, or whatever features are obvious even in low-fidelity recordings), is considered a valid topic for scientific study, so why not study what happens when a musician develops their perception?


The only bit of that I have read about is by Harman. Audiophiles and the general public were about even; audiophile journalists were slightly worse; musicians and conductors were somewhat better; specially trained listeners were quite a bit better in discernment.
 
Jul 9, 2016 at 7:52 PM Post #25 of 30
The only bit of that I have read about is by Harman. Audiophiles and the general public were about even; audiophile journalists were slightly worse; musicians and conductors were somewhat better; specially trained listeners were quite a bit better in discernment.

Okay, that's useful. What changes were they discerning?
 
By the way, I think I have misinterpreted your writing style. For instance, you say above that they were better at "discernment" while leaving out the object of the discernment, which seems to me like dangling a little detail while skipping the main point. I thought you were doing that deliberately to annoy me, but I suppose you have some other reason for leaving it out; maybe you are in a hurry, or maybe you don't want to look it up right now. In any case, I would like to know what the object of discernment was.
 
"Training" can be a lot of things. One thing musicians are trained to do, that may be less prevalent in the general public, is pick out patterns that are formed from a lot of details spread over time as well as over the spectrum at any instant.
 
I have a friend who can concentrate like a hawk while watching movies. She pulls in every little detail and coordinates details to make guesses about where the movie is headed. Very often, I will watch a movie for the second time with her, when she is seeing it for the first time. It doesn't bother me if she talks about her observations out loud. And so I can observe her pulling in significant details and making accurate guesses about what they mean (because I know the ending of the movie). She usually notices more about the movie on the FIRST watching than I notice on the SECOND watching.
 
I can't match her with movies. But that's something like what I do with music, and what conductors do even more.
 
I think it's an important question whether this mode of using one's hearing affects discernment ability. I also think it's important to ask whether some listening protocols would disrupt this ability. If you are going to experiment with it, you would need, first and foremost, a test signal that actually contains the relevant patterns, and second, a context that allows the patterns to be perceived (say, a long enough listening time).
 
Jul 9, 2016 at 8:31 PM Post #26 of 30
Not dangling anything. It's more a lack of time and a question of how deep one should go with this. Here are a few links I could grab quickly, more or less on topic.
 
http://www.pearl-hifi.com/06_Lit_Archive/15_Mfrs_Publications/Harman_Int'l/AES-Other_Publications/12206.pdf
 
http://seanolive.blogspot.com/2008/12/loudspeaker-preferences-of-trained.html
 
Here is a Harman "How to Listen" piece of computer software, similar to what they use to train their paid listeners for evaluating their speakers.
 
http://harmanhowtolisten.blogspot.com/2011/01/welcome-to-how-to-listen.html
 
Philips had a Golden Ears program for online training, though it was recently taken down. You could earn bronze, silver, or golden ear certification there if you passed the tests.
 
Jul 12, 2016 at 12:55 PM Post #27 of 30
I was examined by an audiologist back in 2012 and talked to him about high-frequency hearing. He said that, in general, only children can hear 20 kHz; for adults, 18 kHz is considered high. Also, in general, women can hear a wider frequency range than men.
 
Jul 12, 2016 at 7:26 PM Post #28 of 30
I was examined by an audiologist back in 2012 and talked to him about high-frequency hearing. He said that, in general, only children can hear 20 kHz; for adults, 18 kHz is considered high. Also, in general, women can hear a wider frequency range than men.


That is more or less right. A small number of young adults under 30, mostly closer to 20 and mostly women, have been shown to have some perception up to 23 kHz. I seem to remember that one fellow even had it to 25 kHz. The number among young adults is around 1% of those who were tested at those frequencies, and the threshold for them to hear it was at or a bit above a 100 dB sound level. So: very high thresholds, and very few people can hear that.
 
Jul 12, 2016 at 8:23 PM Post #29 of 30
 
That is more or less right. A small number of young adults under 30, mostly closer to 20 and mostly women, have been shown to have some perception up to 23 kHz. I seem to remember that one fellow even had it to 25 kHz. The number among young adults is around 1% of those who were tested at those frequencies, and the threshold for them to hear it was at or a bit above a 100 dB sound level. So: very high thresholds, and very few people can hear that.

 
Any data on the lowest SPL at which someone can hear 20 kHz?
 
Jul 12, 2016 at 8:44 PM Post #30 of 30
   
Any data on the lowest SPL at which someone can hear 20 kHz?


Well, your basic Fletcher-Munson curves show about +10 to +15 dB above the standard 0 dB (0 phon) threshold at around 15 kHz. Limited testing shows sharp upward slopes above that. This would be for young adults.
 
Generally, no hair cells in the cochlea are centered on frequencies above 15 kHz. The filtering of the entire mechanism isn't super sharp, though, and that last group, which appears built to respond at 15 kHz, responds weakly to even higher frequencies.
 
It is also interesting that Fletcher-Munson was the result of testing in 1933 with only 11 young listeners, yet it is very close to the very latest curves from much more extensive research over the years. The latest standard uses people 18 to 25 years old. They are played tones, and a level counts as the audible threshold when the listener hits 50% correct over repeated trials. Fletcher-Munson and most research on equal-loudness contours don't test above 16 kHz, the reason being that thresholds jump upward sharply and are very variable above 15 kHz. One wouldn't be remiss in saying human hearing is not very effective above 15 kHz.
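For what it's worth, that 50%-correct criterion is typically reached with an adaptive procedure. Here is a sketch of a simple 1-up/1-down staircase, which converges near the 50% point; the listener here is simulated, and all the numbers (start level, step size, simulated threshold) are made up for illustration:

```python
import random

def staircase_threshold(can_hear, start_db=60.0, step_db=5.0, reversals_needed=8):
    """1-up/1-down adaptive staircase: go quieter after a 'heard', louder after
    a 'missed'. The average of the reversal levels estimates the 50% point."""
    level, direction, reversal_levels = start_db, -1, []
    while len(reversal_levels) < reversals_needed:
        new_direction = -1 if can_hear(level) else +1
        if new_direction != direction:          # the track changed direction
            reversal_levels.append(level)
        direction = new_direction
        level += direction * step_db
    return sum(reversal_levels) / len(reversal_levels)

# Simulated listener with a true threshold of 42 dB and some response noise.
simulated = lambda db: db + random.gauss(0, 2) > 42.0
print(staircase_threshold(simulated))  # lands in the neighborhood of 42
```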
 
The physical construction of the hearing mechanism would indicate that thresholds at 20 kHz are around 40 dB higher than at 15 kHz, or something over 50 dB SPL.
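Putting the numbers from this post together (all values approximate):

```python
# Back-of-envelope from the figures above, not measured data:
threshold_15k = 12.5         # dB SPL: midpoint of the +10 to +15 dB range at 15 kHz
rise_15k_to_20k = 40         # dB: rise suggested by the mechanism's construction
print(threshold_15k + rise_15k_to_20k)  # ~52.5 dB SPL: "something over 50 dB SPL"
```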
 
