mike1127
Member of the Trade: Brilliant Zen Audio
Joined: Oct 16, 2005
Posts: 1,114
Likes: 25
Quote:
Cool, let's call that your hypothesis. How would you go about testing it? What background info might be useful from prior studies?
Well, the question of how I would test it raises several issues. There are more fundamental questions that would need to be settled first.
Here are my observations from experience. All of this would be worth formulating more precisely and testing.
Professional musicians train their perception of sound until they arrive at the ability to perceive what could be called "abstractions."
Consider the realm of vision: the human brain is good at identifying a face under many lighting conditions, angles of view, emotional expressions, and so on. We could call this the perception of an abstraction. The concept of "Bob's face" is an abstraction that can be recognized in many different concrete instances.
Musicians perceive the same kinds of abstracted concepts, but in the domain of hearing. A simple example is the ability to recognize a particular player's sound across different acoustic environments, different musical compositions, different distances, different volume levels, and so on.
Scientists who study vision take an interest in things like facial recognition. They not only acknowledge that the brain can do it, they have developed algorithms to imitate it.
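To make that concrete, here is a toy sketch in Python of the kind of invariance those algorithms aim for. The embedding vectors are invented numbers, not the output of any real face model; the point is only that "Bob's face" becomes a region of feature space that many different concrete images map into.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two feature vectors, ignoring overall scale."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings a face model might produce for the same person
# photographed under different lighting, angle, and expression.
bob_frontal   = np.array([0.9, 0.1, 0.4, 0.2])
bob_dim_light = np.array([0.8, 0.2, 0.5, 0.1])   # same identity, new conditions
alice_frontal = np.array([0.1, 0.9, 0.2, 0.7])   # a different identity

print(cosine_similarity(bob_frontal, bob_dim_light))  # high: same abstraction
print(cosine_similarity(bob_frontal, alice_frontal))  # low: different abstraction
```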
But in the domain of audio, in a paradigm such as Ethan Winer's, there seems to be a total lack of interest in (or acknowledgement of) abstracted concepts. It's a rather primitive science by comparison.
Okay, finally arriving at my original point: it is the direct experience of musicians that perceiving something like the sound of a particular player sometimes means taking in details over time, over an entire musical phrase, for example.
So first there needs to be some work on how abstracted perceptions operate, and from that point we can proceed to studying which of their features play out over time.
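As a rough illustration of what "features that play out over time" could mean operationally, here is a Python sketch. I'm assuming the librosa library here (any feature extractor would do), and a synthesized gliding tone stands in for a real recorded phrase. The contrast is between collapsing the phrase into one averaged snapshot and keeping the whole trajectory.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
# A stand-in for a recorded phrase: a two-second tone whose pitch glides upward.
phrase = 0.5 * np.sin(2 * np.pi * (220 + 60 * t) * t)

# MFCCs: one 13-dimensional timbre vector per short analysis frame.
mfcc = librosa.feature.mfcc(y=phrase, sr=sr, n_mfcc=13)

snapshot   = mfcc.mean(axis=1)  # time collapsed: one static "tone color"
trajectory = mfcc               # time kept: how the sound unfolds across the phrase

print(snapshot.shape)    # (13,)          the usual short-window summary
print(trajectory.shape)  # (13, n_frames) the phrase-length view
```

A study of abstracted perception would presumably need something like the trajectory view, not just the snapshot.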