Sound and Music Perception
Jan 11, 2019 at 1:08 PM Post #31 of 115
Hi Fahqfasse,

I have to admit that all my years of grad school were a long time ago, but we didn't learn the stuff you mention. Do you have specific references? I have shelves full of neuroscience textbooks I've read, so I would appreciate it if you can be specific.
Cheers, SAM


I've found no one in sound science who combines the various disciplines and studies the entirety of sound as air pressure changes. They either focus on the ear, the gear, or the digital realm. They often use tones and/or headphones, not music in a space. Only musicians and producers/mix/master engineers focus on how a mix of music makes people feel. Since they must convey emotion through music to make a living, I usually trust their practical experience over other expert opinions.

I am here as a music lover, producer, musician, and human being with senses, not as one claiming scientific credentials. I am in no way trying to invalidate or insult scientists in this field; I just think they are forced to be an ant instead of an eagle. A tree looks very different to an ant than to an eagle. Science must study a small subset and make sure variables are accounted for or removed. I understand the scientific method, but I also understand why it fails to explain the larger picture here.

So, disclaimers out of the way, let's get to it!

1 - Deaf people feel music throughout their body, just like the rest of us - https://www.lifeprint.com/asl101/topics/music02.htm

2 - Deaf people's auditory cortex lights up when music is playing, even if their ears aren't deciphering the sound. This is because our vibration sensors and hair follicles all report to the auditory cortex. Auditory cortex = more than just the ear-to-brain pipeline. https://medium.com/@rachelelainemonica/how-deaf-people-experience-music-a313c3fa4bfd

3 - Beethoven wrote amazing symphonies while deaf. Sitting at the piano is sitting at a vibration machine whether you hear or not. No link necessary :wink:

4 - Sound and vibration move things, including our hair -

5 - Human hair and skin can detect minuscule changes in air pressure among other things. Every hair is a mechanosensory organ. https://www.sciencedaily.com/releases/2012/01/120111103354.htm

6 - Even hairless skin detects vibration at remarkably fine amplitudes - https://en.wikipedia.org/wiki/Lamellar_corpuscle

7 - Human hearing is more accurate at detecting timing than frequency. This is partly because we move our heads and have two ears working binaurally. We inherently understand echo and delay. Location awareness is all about timing. Directionality = Survival. (See the quick numerical sketch after this list.) https://phys.org/news/2013-02-human-fourier-uncertainty-principle.html

8 - The science of the human senses is constantly updated and debated because we know so little: https://www.biorxiv.org/content/early/2015/07/06/022103
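A quick aside on point 7, as promised: that study is about the Gabor/Fourier uncertainty limit, which says any linear time-frequency analysis can never pin down timing and pitch with a combined precision better than dt*df = 1/(4*pi). Here's a toy numpy sketch (my own illustration, not from the article) showing that a Gaussian tone burst sits exactly at that limit; the study's claim is that trained listeners beat the product, which only nonlinear processing can do:

```python
# Toy sketch (my own, not from the linked article): compute the
# time-frequency uncertainty product for a Gaussian tone burst.
# Linear analysis can never get dt*df below 1/(4*pi) ~ 0.0796.
import numpy as np

fs = 48_000                          # sample rate (Hz)
t = np.arange(-0.5, 0.5, 1 / fs)     # 1 s of time, centred on zero
sigma = 0.005                        # 5 ms Gaussian envelope
x = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * 1000 * t)

# energy-weighted RMS spread in time
p_t = x**2 / np.sum(x**2)
dt = np.sqrt(np.sum(p_t * t**2) - np.sum(p_t * t)**2)

# energy-weighted RMS spread in frequency (one-sided spectrum)
P = np.abs(np.fft.rfft(x))**2
f = np.fft.rfftfreq(len(x), 1 / fs)
p_f = P / np.sum(P)
f0 = np.sum(p_f * f)
df = np.sqrt(np.sum(p_f * (f - f0)**2))

print(f"dt*df = {dt*df:.4f}, Gabor limit 1/(4*pi) = {1/(4*np.pi):.4f}")
```

It prints dt*df right at the limit (about 0.0796); the paper reports listeners making combined timing/pitch judgments well below it.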


Putting these together, hopefully you see that there is more going on when you play and enjoy music than solely your ears reporting to your brain. Your skin and hair follicles are also reporting. Your body can detect vibrations below 20 Hz and air pressure changes caused by frequencies above 20 kHz, even if your ears can't.

Note - I know headphones remove much of this from the story. Headphones are fine, but they are very, very different from listening in a real space. If a sound study doesn't acknowledge that right at the top... red flag.


Well gotta get back to work. I hope this was helpful in starting a conversation. I leave you with this quote:

“The whole point of science is that most of it is uncertain. That’s why science is exciting–because we don’t know. Science is all about things we don’t understand. The public, of course, imagines science is just a set of facts. But it’s not. Science is a process of exploring, which is always partial. We explore, and we find out things that we understand. We find out things we thought we understood were wrong. That’s how it makes progress.” – Freeman Dyson, 90, Mathematical Physicist
 
Jan 11, 2019 at 1:45 PM Post #32 of 115
I've found no one in sound science who combines the various disciplines and studies the entirety of sound as air pressure changes. [...]

all you need is ears.
 
Jan 11, 2019 at 1:55 PM Post #34 of 115
Ah but everybody who is alive has a brain. It's the ears that matter more than anything.
Of course the brain allows for us to interpret auditory information, but it's not the organ (pardon the pun) that actually hears what's going on around us.
Beethoven could compose music, but he couldn't hear it. He was a very clever man who made use of the vibrations his body could sense, but I'm sure he would've preferred it if he could actually hear music.
 
Jan 11, 2019 at 2:20 PM Post #35 of 115
awesome thread so far, but you can't discuss how we perceive music without discussing the skin and the hair follicles sticking out from it... each attached to its own nerve pathway that reports to the same part of the brain the ear reports to.

our entire dermal layer is a giant "ear" that can detect changes in air pressure far more precisely than the ear.

sound is vibration, and deep within our large joints (knees, ankles, shoulders, elbows, etc.) we have dense clusters of nerves built to detect micro vibrations.

this is about survival. without our vibration sensing bodies we wouldn't have made it out of the jungle, or perhaps even the water.

all of the sound science that i can find (so far) ignores the skin and joint vibration sensors and focuses only on sound entering the ear, which is why even the smartest among us end up with false results and bad science.

IMHO the fatal flaw of sound science is ignoring what sound actually is & how we actually detect it.

secondly, most ignore how test sounds (basic) are different from music (extremely complex sound).
this is a forum about headphones. so the area of the body that might be involved with sensory nerves other than the ear is going to be rather small. I'd say that's a pretty good reason to neglect other possible impacts of vibrations on the body.

but sure, we (humans in general) tend to study sound waves as the things we can perceive with our ears when there exist other ways to perceive sound waves. that much is true. it is even more true for the audio hobby, for obvious reasons.



we know that high frequencies are unlikely to go deep into the body unless they are at crazy high amplitudes (because physics says so). we also know that ultrasound needs to be above something like 100 dB SPL, from a source right next to the skin, for people to barely notice anything. so those are typically dismissed for lack of evidence that they matter. now nobody is dismissing low frequencies shaking the entire body. it's the main reason why I vastly prefer speakers to headphones, so I'm not going to contest that part ^_^.
beyond that, not a lot to be said. we don't seem to be very good at interpreting the vibrations and we seem to rely mostly on hearing for that interpreting task, the same way we will rely more on vision than hearing (when both work fine) to make decisions about what's going on around us. our brain has its own idea about what to trust most and what senses to use for a given task.
for music, between hearing music and feeling vibrations in the rest of the body, it's not hard to tell which one provides the most information we can interpret and make sense of. again, not to say that vibrations are inconsequential, but low freqs seem to be the most impactful in music, and even for them, we're not very good at identifying/interpreting them. basically we feel something, we feel it strongly sometimes, but that's about it for the information we can get out of the low freqs shaking us. the Harman guys seem confident that low frequency vibrations, even at the wrong frequency, can massively improve the impression of realism in headphone listening. so we expect, and sort of require, something like tactile bass to go with low frequency sounds. but even if it's wrong in amplitude and frequency (and up to a point, even in where it's shaking the body, apparently), the brain seems eager to validate tactile bass on its checklist for realistic sound. absence feels fake, and almost anything else drastically improves our belief that all is good. that's how bad we seem to be at analyzing those sound waves with the rest of the body.
and I don't think it's all that surprising. out of all the species of animals, humans are high on the list of morons failing to notice small shaking before an earthquake while many animals go crazy and try to run away.

I have some intuition that physical vibrations might play a role in how we perceive IEMs and headphones, but besides trying to correlate a few things together, I can't say that I have any actual evidence. like how, with IEMs, I need more bass than with a good full size headphone, and a pair of speakers with horribly rolled off bass will still feel to me like it has more low end energy than a headphone with dead flat FR down to 10 Hz. and I don't seem to be alone in feeling that way. so I'm guessing that maybe the amount of vibration the body gets might be what I'm trying to compensate for with extra low end on IEMs, and sometimes on headphones too, but never as much.
 
Jan 11, 2019 at 2:32 PM Post #36 of 115
@FFBookman I can tell by your posts that you are definitely a speaker guy. Am I right or am I wrong?! :relaxed:
Nothing wrong with that, I used to be one too. I'm still very much into speakers, I own several sets of them.
 
Jan 11, 2019 at 2:56 PM Post #37 of 115
I'm starting this thread for those interested in discussing sound and music perception, starting at the ears, transducing sound waves into nerve signals, the brain responding to those nerve signals, and perceptions of sound and music somehow forming in the 'mind'. Relevant science includes anatomy, physiology, neuroscience, and psychology. Some venturing into philosophical aspects probably can't be avoided.
I missed this post (and apologies for falling silent, it has been a bit hectic with various things on my mind and a brain that is not always cooperative).

One of the things I always find incredibly important to note is that contrary to many of the analogies often used, the brain is nothing like a computer. Biology is not as abstract as physics and so forming an understanding of perception of sound and music requires a somewhat different mindset. Certain aspects can be explained reasonably well through a mechanical analogy, but it will always fall short of a comprehensive explanation. To me that is the beauty of biology and at the same time an incredibly frustrating aspect because you have to accept that things are inherently contingent, which we don't like when we are looking for answers. Yet therein lies the key to the plasticity of the brain that is so central to our relationship with music.

So getting into the right mindset as you set off exploring this topic can be incredibly helpful. I'll be happy to explain more about it, but I did not want to start off with a lengthy post and risk veering off in a direction different from where you were intending to go.
 
Jan 11, 2019 at 3:05 PM Post #38 of 115
@FFBookman I can tell by your posts that you are definitely a speaker guy. Am I right or am I wrong?! :relaxed:
Nothing wrong with that, I used to be one too. I'm still very much into speakers, I own several sets of them.

I don't think I'm either. I listen to both during a normal week. I guess I do prefer speakers slightly but it's really about acknowledging the difference. Headphones are private speakers that take the body out of listening. They simulate a room. I'm not a fan of simulated real if real real is available.

That said, wearing headphones right now, so it's philosophical not practical.
 
Jan 11, 2019 at 3:14 PM Post #39 of 115
I missed this post (and apologies for falling silent, it has been a bit hectic with various things on my mind and a brain that is not always cooperative).

One of the things I always find incredibly important to note is that contrary to many of the analogies often used, the brain is nothing like a computer. Biology is not as abstract as physics and so forming an understanding of perception of sound and music requires a somewhat different mindset. Certain aspects can be explained reasonably well through a mechanical analogy, but it will always fall short of a comprehensive explanation. To me that is the beauty of biology and at the same time an incredibly frustrating aspect because you have to accept that things are inherently contingent, which we don't like when we are looking for answers. Yet therein lies the key to the plasticity of the brain that is so central to our relationship with music.

So getting into the right mindset as you set off exploring this topic can be incredibly helpful. I'll be happy to explain more about it, but I did not want to start off with a lengthy post and risk veering off in a direction different from where you were intending to go.


Amen. We are not machines, nor are we digital. Our senses function nothing like a computerized sensor. We are far more capable of parallel-processing than any machine we've yet invented, at a scale and with the durability and regenerative abilities that no machine can touch (ba-dum). I think we are still hundreds of years away from building Cmdr. Data of Star Trek.

We couldn't come close to building a finger, or an ear, with the complexity of a natural body part, even in 2019. To give a machine all the sensing and physical capabilities of the real thing, the mechanical finger would have to be far larger and clumsier, and it would still operate at a much lower resolution.

Anyway, that's a bit off topic even for me. Thanks for the post though.
 
Jan 12, 2019 at 7:26 AM Post #40 of 115
[1] our entire dermal layer is a giant "ear" that can detect changes in air pressure far more precisely than the ear.

[2] all of the sound science that i can find (so far) ignores the skin and joint vibration sensors and focuses only on sound entering the ear, which is why even the smartest among us end up with false results and bad science. IMHO the fatal flaw of sound science is ignoring what sound actually is & how we actually detect it.

[3] secondly, most ignore how test sounds (basic) are different from music (extremely complex sound).

1. While I would agree that our skin can certainly detect changes in air pressure, I would dispute that it can do so far more precisely than the ear. In a sense, hearing is a specialised form of touch and, interestingly, in the Italian language "to hear" and "to feel" are the same verb (although "feeling" uses the reflexive form). Most of the time, our hearing is far more sensitive to sound than our sense of touch, and therefore our brains use the information from our ears (along with information from our other senses, such as sight, plus "expectation") to generate a perception of hearing which largely ignores the sense of touch.

Where this situation changes most notably is in the low freqs, where our hearing becomes progressively more insensitive and, as it does so, our brains rely more heavily on our sense of touch. So, our sense of touch is not more sensitive to air pressure variations than our ears; it has relatively low sensitivity. It's more a case of that low sensitivity (of touch) still being greater than our hearing sensitivity at extremely low freqs, but exciting/triggering that low-level touch sensitivity requires relatively high amplitude/volume levels. For example, cinema sound systems will typically have a total power output of around 30,000 watts (or more), of which probably at least 20,000 W is dedicated purely to freqs below 120 Hz, and broadly the same is true of music PA systems.

Where all this gets interesting is in the case of those who are deaf/hearing impaired, many of whom have so much less hearing sensitivity throughout the normal hearing spectrum that they *can* learn to utilise their sense of touch to (somewhat) compensate. The proof of this is that there are a few successful, hearing-impaired professional musicians, but it takes a great deal of training and is still more vague in many respects than hearing (with our ears).
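To give a rough sense of scale for that last point, here's a small sketch (the threshold values are approximate ISO 226 threshold-of-hearing figures added here for illustration, not something from the post): the acoustic power needed just to become audible rises steeply as frequency falls, which is why so much of a cinema system's power budget sits below 120 Hz.

```python
# Hedged numeric illustration: approximate ISO 226 threshold-of-hearing
# values (rounded, my own addition) and the extra acoustic power needed
# at low frequencies, relative to 1 kHz, just to be heard at all.
APPROX_THRESHOLD_DB_SPL = {  # frequency (Hz) -> rough threshold (dB SPL)
    20: 78, 31.5: 60, 63: 38, 125: 22, 250: 11, 1000: 2,
}

ref = APPROX_THRESHOLD_DB_SPL[1000]
for freq, thr in APPROX_THRESHOLD_DB_SPL.items():
    power_ratio = 10 ** ((thr - ref) / 10)  # dB difference -> power ratio
    print(f"{freq:>6g} Hz: ~{thr} dB SPL to be audible "
          f"(~{power_ratio:,.0f}x the power needed at 1 kHz)")
```

At 20 Hz that works out to tens of millions of times the acoustic power needed at 1 kHz, before the signal is even audible, let alone felt.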

2. I'm not sure all of science ignores the sense of touch for sensing sound, but the science is certainly relatively sparse as far as I'm aware, one of the problems being the extreme difficulty of isolation and the precision of testing, especially with very low freqs. However, in the field of sound engineering, particularly sound for film, there is a great deal of practical experience and knowledge, going back many decades, in how we respond to the sensation of feeling/hearing very low freqs (and how to manipulate that response), although relatively little is actually published on the subject and it's not in the form of peer-reviewed scientific papers or highly controlled listening tests.

3. I can't really agree with this. It certainly seems to be true that some audiophiles "ignore how test sounds are different than music" but not really scientists or sound engineers. Firstly, scientists are typically testing the limits of an extremely specific aspect of hearing: the utmost limits under any circumstances (IE. the most optimal/perfect circumstances). This is useful information because it tells us that everything audible must fall within those limits. The problem is that some/many (who are not scientists) misinterpret the meaning of the results and assume those limits apply to other, non-optimal circumstances, say in the case of more complex sound such as music.

One example of many is that of jitter detection: using specifically designed test signals, random jitter has been demonstrated to be identifiable down to about 2 or 3 nanoseconds. However, there have also been some studies of jitter detection with actual music, and typically the subjects were insensitive to jitter of less than 500 nanoseconds, although there have been some isolated cases of people who could reliably differentiate down to 200 nanoseconds. The obvious problem is that we cannot test every piece of music ever recorded, and therefore cannot be absolutely certain that a recording might not exist which provides more optimal circumstances, allowing detection below 200 nanoseconds. However, we can be extremely confident that the threshold is never less than 2-3 nanoseconds, and quite confident that if it is lower than 200 nanoseconds, it's not much lower.

Secondly, test sounds/signals are not necessarily less complex than music; they are sometimes more complex, depending on what is being tested and the test signals being employed. Obviously a single sine wave is less "complex sound" than music, because music is comprised of many simultaneous sine waves. However, music is less complex than test noise (white and pink noise being the most common test noise signals), because test noise isn't just MANY simultaneous sine waves (frequencies) but ALL the sine waves/frequencies simultaneously.
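To put those jitter figures in perspective, here's a back-of-the-envelope sketch (my own illustration, using the standard slope-error approximation, not data from any specific study): the worst-case error that random clock jitter produces on a full-scale sine scales with frequency, which is why a purpose-built high-frequency test signal exposes jitter that music, with very little near-full-scale content at 20 kHz, tends to mask.

```python
# Back-of-envelope sketch (my own illustration): worst-case error level
# from random sampling jitter on a full-scale sine. For a sine at
# frequency f and RMS jitter tj, the slope-induced error is roughly
# 2*pi*f*tj of full scale, i.e. an error floor of 20*log10(2*pi*f*tj) dBFS.
import math

def jitter_error_db(freq_hz: float, jitter_s: float) -> float:
    """Approximate jitter-induced error level (dBFS) on a full-scale sine."""
    return 20 * math.log10(2 * math.pi * freq_hz * jitter_s)

for tj in (2e-9, 200e-9, 500e-9):  # the thresholds quoted above
    print(f"{tj * 1e9:>5.0f} ns jitter on a 20 kHz sine -> "
          f"{jitter_error_db(20_000, tj):6.1f} dBFS error floor")
```

Under these assumptions, 2-3 ns of jitter sits around -72 dBFS, while the music-based thresholds of 200-500 ns correspond to error floors of roughly -32 to -24 dBFS, consistent with test signals being far more revealing than typical music.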

G
 
Jan 12, 2019 at 7:59 AM Post #41 of 115
Amen. We are not machines, nor are we digital. Our senses function nothing like a computerized sensor. We are far more capable of parallel-processing than any machine we've yet invented, at a scale and with the durability and regenerative abilities that no machine can touch (ba-dum). I think we are still hundreds of years away from building Cmdr. Data of Star Trek.

We couldn't come close to building a finger, or an ear, with the complexity of a natural body part, even in 2019. To give a machine all the sensing and physical capabilities of the real thing, the mechanical finger would have to be far larger and clumsier, and it would still operate at a much lower resolution.

Anyway, that's a bit off topic even for me. Thanks for the post though.
It is not just that biology is extremely complicated, but that it works on different principles. It is inherently contingent and so a specific input is not always going to produce the same output. The reason I consider it so important is that this is part of perceiving music. The same music can resonate differently with us at different times and there does not always need to be a clear reason for that.

Music is also evolutionarily ingrained in humans and predates spoken language, so it works at a very primitive level and is at the core of our being. That is why music can be such a powerful therapeutic tool (my area of interest).
 
Jan 12, 2019 at 8:15 AM Post #42 of 115
Guys, don’t worry about being off topic, the thread topic is intentionally broad. We’re just talking about stuff that interests us, not trying to solve a specific problem. :)
 
Jan 12, 2019 at 8:34 AM Post #43 of 115
Ah but everybody who is alive has a brain. It's the ears that matter more than anything.
Of course the brain allows for us to interpret auditory information, but it's not the organ (pardon the pun) that actually hears what's going on around us.
Beethoven could compose music, but he couldn't hear it. He was a very clever man who made use of the vibrations his body could sense, but I'm sure he would've preferred it if he could actually hear music.

Seems to me that ears transduce, and the brain interprets the signal in order to ‘hear’. I’m tempted to make a distinction between brain and mind here also, but we don’t really know what mind is.
 
Jan 12, 2019 at 9:09 AM Post #44 of 115
Seems to me that ears transduce, and the brain interprets the signal in order to ‘hear’. I’m tempted to make a distinction between brain and mind here also, but we don’t really know what mind is.

If I were to draw a Venn diagram, the mind would be one circle, the brain would be a largely intersecting but not totally overlapping circle, and the body would be a larger circle that encompasses both. FWIW. Which is very little. Conceptually, I think the problem is more of an extremely difficult definitional and semantic exercise than a matter of not knowing.

Also, as far as sensing sound pressure, it certainly seems to me that my ears sense changes in pressure first, whether in an airplane, driving through the mountains, or on an elevator. So I am a little skeptical of the idea that the skin senses changes in sound pressure levels before the ears, though I could be conflating sound pressure with air pressure. Certainly we've all felt the body sensing very low frequencies, and it becomes more difficult to sense pitch at those frequencies, so it would seem to be more of a whole-body experience. Deep frequencies on a top-notch pair of noise cancelling headphones are viscerally pretty pleasurable to me too, though.

Also I think the brain does try really hard to recreate what goes missing in our hearing for us, whether over time or due to disease or as a result of trauma. I believe that’s one of the theories behind why tinnitus occurs. But at some point the effects of actual physical deterioration overcome the brain’s ability to compensate.

I listen to both speakers and headphones. I prefer speakers, but when I was younger and could afford less, and affordable speakers were not as good, a $100 or $200 pair of headphones could be revelatory as far as what was in the recording.

I will never get over what humans can do with math and music and language. To me it seems so far beyond what was needed in terms of evolution that I consider it miraculous. I am a firm believer in evolution, but I think it's so improbable as to be miraculous, not in a spiritual sense, just in terms of probability and wondrousness. It can certainly give rise to a feeling of spirituality, though.
 
Jan 12, 2019 at 10:13 AM Post #45 of 115
Seems to me that ears transduce, and the brain interprets the signal in order to ‘hear’. I’m tempted to make a distinction between brain and mind here also, but we don’t really know what mind is.

Well, we should also not forget the emotional/psychological impact that music has on us. That has less to do with how we hear and more to do with how we react to what we hear. Unfortunately, the mind is something that can't be measured/tested objectively using science. The mind is "us"; it's who we are, not a part of the body.
 
