comparing live and recorded music
Jul 11, 2016 at 7:10 AM Post #61 of 135
Read your own first points 1-3 and say that again with a straight face.
The fact that they can turn themselves into picky audiophiles doesn't negate their ability to retrieve all the nuances of, say, a guitar player from a bad recording played back over bad speakers.

Well, except that being a picky audiophile distracts one from hearing the performance behind the recording. I imagine that's counterproductive for a musician trying to learn from their own or others' performances, which is why musicians learn to turn the "picky audiophile" part of their brains off, as your sound-engineer friend says.

 
I see you haven't provided any evidence for your sweeping claim that all non-linear distortions create "un-musical" artifacts which can be "separated" by an "intelligent" listener. First define these terms, then provide some evidence.
 
Point 1:
 
If you analyze the spectral content of a single note over time, you can put an envelope on each harmonic. You will notice that these envelopes are different, and together they produce the overall musical impression of something changing over time. A frequency-response (FR) distortion (totally linear!) changes different frequencies by different amounts. What reason is there to suppose that this *doesn't* change the impression of how the note changes over time?
 
You're basically skewing the input to the system, and you know that the system depends on the relationship between these parts, so I would like to see some evidence that this *doesn't* throw the music out of whack.
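
To make this concrete, here's a minimal numpy sketch (a hypothetical three-harmonic note with arbitrary decay rates, not any real recording): a purely linear frequency-response tilt changes not just the level but the shape of the note's overall decay, because the harmonics it attenuates are the ones that die away fastest.

```python
# Hypothetical note: three harmonics with different decay rates, then a purely
# linear FR "tilt" (a fixed gain per harmonic). Compare the decay shapes.
import numpy as np

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)
f0 = 220.0
# (harmonic number, decay rate in 1/s) -- illustrative values only
parts = [(1, 2.0), (2, 6.0), (3, 12.0)]

def note_rms_envelope(gains, win=0.05):
    """Synthesize the note with per-harmonic gains; return RMS level in 50 ms windows."""
    note = sum(g * np.exp(-d * t) * np.sin(2 * np.pi * h * f0 * t)
               for (h, d), g in zip(parts, gains))
    n = int(win * fs)
    return [float(np.sqrt(np.mean(note[i:i + n] ** 2)))
            for i in range(0, len(note) - n + 1, n)]

flat = note_rms_envelope([1.0, 1.0, 1.0])    # no FR error
tilt = note_rms_envelope([1.0, 0.5, 0.25])   # linear treble roll-off

# Normalised to the first window: same starting level, different decay *shapes*.
print([round(x / flat[0], 3) for x in flat[:6]])
print([round(x / tilt[0], 3) for x in tilt[:6]])
```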
 
Regarding the disappearance of the tail of the ebb, there's no magical force involved. The worst case would be a microphone that picks up too much unwanted noise. Another case would be too much reverb.
 
Point 2:
 
Dynamic resolution is the question of how small a dynamic change can be perceived reliably. The system is not changing the dynamics, but rather making them less clear.
 
Point 3:
 
The tempos are not altered. The question is whether the tempos "work."
 
One concept fundamental to music, and I don't know if this has been studied in cognitive science, is "clarity." There is much more to art than the question of whether a pattern exists. The question, rather, is how clear it is. And making a pattern either more or less clear can make the art "work" or "fail."
 
So caring about the detail of art is "picky." Wow, tell that to any great artist.
 
Jul 11, 2016 at 11:33 AM Post #62 of 135
 
Honestly, I might be the last person here to have finally reached the point where I don't care what you think I should be thinking about, or what you think constitutes a good playback system.


You don't have to care about my opinion. But anyone who wants to do a systematic study of music should care about the patterns that constitute music. And anyone who wants to create high-fidelity recordings should care about how recording/playback equipment distorts those patterns.

ok, 10 year old analogy time:
sure, we study what constitutes music, that's called a sine wave. everything in the sounds that reach your ears can be physically decomposed into a given number of basic sine waves! always!  all this time you've been pretending that each word in a sentence isn't made of letters of the alphabet, because once a word or a sentence is interpreted in your head, the result is much more information than just a series of letters. so you mistakenly decide that words and sentences must be made of more than letters. you keep going in all directions at once on all topics, waiting for us to give you the science to justify that erroneous concept of what sound is made of. so of course we fail to answer "properly", because what you seek isn't about sound at all.
 
it was already the same problem with blind tests. in such a test we ask "can you notice any difference between those 2 files?" in the text analogy, I say measure the text, find the letters that changed the most, and look only at them: if a "B" in one text is now distorted into an "I", I can see it! I've seen a difference and the test is conclusive. but you believe I must speak the language and read the entire sentence, or else I will miss some of the differences. that's just not true. sure, it might make the job easier if I could just read the text and notice when a word is misspelled because I already know it. that would be faster, thanks to pattern training. but it's not my only way to find a difference. letters are what will really be different, so letters are all I really need to be able to identify. at least that's the case when the question really is "are those 2 texts noticeably different?"
 
in the same way, you can decompose sound into the simplest elements a human can notice, and that's what our brain puts together when it does all that amazing interpretation and pattern analysis that would make NSA software jealous. but that's the brain, not the music. you need to stop mixing them together and trying to force mind constructs onto the science of sound.
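
for what it's worth, the "letters" idea takes only a few lines of numpy to show (a made-up two-tone signal, nothing from any real recording): the FFT recovers exactly which sine waves are present.

```python
# Build a signal from two sine waves, then let the FFT identify them.
import numpy as np

fs = 8_000
t = np.arange(0, 1.0, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]   # two strongest components
print(sorted(peaks))   # -> [440.0, 1000.0]
```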
 
Jul 11, 2016 at 3:18 PM Post #63 of 135
  ok 10year old analogy time:
sure, we study what constitute music, that's called a sine wave. everything in the sounds you get to your ears can be physically decomposed into a given number of basic sine waves! always!  all this time you're pretending like each word in a sentence isn't made of the alphabet because once a word or a sentence is interpreted in your head, the result is much more information than just a series of letters. so you mistakenly decide that words and sentences must be made of more than letters. you kept going in all directions at once on all topics, waiting for us to give you the science to justify that erroneous concept about what sound is made of. so of course we fail to answer "properly", for what you seek isn't about sound at all.
 
it was already the same problem with blind tests. we ask in such a test "can you notice any difference between those 2 files?" I say measure the text, find the letters that changed the most, and look only at them, if a "B" in one text is now distorted into a "I", I can see it! I've seen a difference and the test is conclusive. but you believe I must speak the language and read the entire sentence, else I will miss some of the differences. that's just not not true. it might make the job easier for sure if I could just read the text and notice when a word is misspelled because I already know it.  that would be faster, thanks to pattern training. but it's not like it's my only way to find a difference. letters are what will really be different so letters are all I really need to be able to identify. at least it is when the question really is "are those 2 texts noticeably different?"
 
the same way, you can decompose sound into the most simple elements a human can notice, and that's what our brain puts together when it does all that amazing interpretation job and pattern analysis that would make a NSA software jealous. but that's the brain, not the music. you need to stop mixing them together and try to force mind constructs onto the science of sound.

 
If you put I1 into a linear system, you get O1. If you put in I2, you get O2. If you put in (I1+I2), you get (O1+O2). Therefore, as you say, it is useful to decompose the input into parts, if that helps, because the output is the sum of the outputs to each part.
 
Let's say a sound is composed of two notes: N1 and N2. The brain is not a linear system. Therefore it is not correct to say that the response of the brain is the sum of its response to N1 and its response to N2.
 
The same goes for blind tests, and quick switching versus long listening. If a three-second sample is divided into three one-second portions, say P1, P2, and P3, and the brain's response to each of those by itself is R1, R2, and R3, it is not correct to say that the brain's response to the whole sample is R1 + R2 + R3, or (R1, R2, R3) in sequence.
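
As a concrete illustration, here is a small numpy sketch with made-up signals: a simple FIR filter stands in for a linear system, and a soft clipper stands in for any non-linear stage (it is not a model of the brain, just an arbitrary non-linearity). Superposition holds for the first and fails for the second.

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = rng.normal(size=1000), rng.normal(size=1000)

def linear(x):                       # a linear system: 3-tap FIR filter
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def nonlinear(x):                    # a non-linear system: soft clipping
    return np.tanh(x)

for name, f in [("linear", linear), ("nonlinear", nonlinear)]:
    lhs = f(N1 + N2)                 # response to the combined input
    rhs = f(N1) + f(N2)              # sum of the individual responses
    print(name, "superposition holds:", np.allclose(lhs, rhs))
# -> linear True, nonlinear False
```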
 
Jul 12, 2016 at 4:21 AM Post #64 of 135
As seems usual with johncarm, the thread rapidly went off topic. Typically the problem is based on some incorrect assumption or interpretation about some aspect of sound (confusing sound with perception being just one example) or demanding simple answers to questions which only appear to be simple and to which there are no simple answers. johncarm then makes some absolutist and often inflammatory statement based on that incorrect assumption, which therefore almost demands a response and voilà we're off topic! For example:
 
Quote:
  [1] As a matter of fact, the guy from Sheffield Lab I worked with in the 80's was both a concert pianist and recording engineer. He said that musicians can become fabulous engineers in moments. All you have to do is ask them to listen to the recording as if it were a musician playing right in front of them. 
 
[2] There is a myth that "sound quality" is a specialization and that "music" is a separate specialization, as if they were fundamentally different activities. Many musicians are told by the engineers not to concern themselves with the recorded sound because "that's the engineer's job." Total baloney.

 
1. Obviously you have taken what "the guy" told you out of context and misinterpreted it. I say "obviously" because the only alternatives are that he knew little about audio engineering himself or that he was deliberately trying to mislead you. In practice a musician can't even become a competent audio engineer in weeks, let alone "fabulous" or "in moments"! Common sense alone should tell you this!
 
2. And here we go! Making inflammatory statements about what is "myth" and "total baloney" based entirely on false assumption and misinterpretation. That's insulting to those of us who know differently, and as you appear to only want (or are only willing) to accept answers which conform to your existing understanding/assumptions, the result is that threads are likely to remain off topic and slowly degenerate towards personal attacks.
 
My question to you johncarm, is do you really want the scientific/practical/truthful answers to your question/s? You state that you do but constantly contradict that statement by demanding answers which only conform to your preconceptions and therefore argue with (and sometimes even insult) the answers provided rather than trying to understand/contextualise them. If you want answers which pander to your preconceptions, you're in the wrong forum! Many/Most of the other forums here on Head-Fi are specifically for audiophiles who want their preconceptions pandered to and there are plenty of audiophile companies more than willing to support or advance those preconceptions and provide the appropriate pandering, for a price of course and of course at the expense of the actual science (which is routinely banned in some/most other forums here). If you want answers to your questions you need to make a choice, there's little/no logical middle ground between current audiophilia and current science! I can address many/most of your points/questions in this thread (both the on and off topic ones) but on past experience you're not going to be happy with some of my responses. Are you only going to defend your preconceptions or are we going to discuss the issues? If it's the former then I'd ultimately be wasting my time, which is why I haven't responded to this thread so far!
 
G
 
Jul 12, 2016 at 7:01 AM Post #65 of 135
 
ok, 10 year old analogy time:
sure, we study what constitutes music, that's called a sine wave. everything in the sounds that reach your ears can be physically decomposed into a given number of basic sine waves! always!  all this time you've been pretending that each word in a sentence isn't made of letters of the alphabet, because once a word or a sentence is interpreted in your head, the result is much more information than just a series of letters. so you mistakenly decide that words and sentences must be made of more than letters. you keep going in all directions at once on all topics, waiting for us to give you the science to justify that erroneous concept of what sound is made of. so of course we fail to answer "properly", because what you seek isn't about sound at all.
 
it was already the same problem with blind tests. in such a test we ask "can you notice any difference between those 2 files?" in the text analogy, I say measure the text, find the letters that changed the most, and look only at them: if a "B" in one text is now distorted into an "I", I can see it! I've seen a difference and the test is conclusive. but you believe I must speak the language and read the entire sentence, or else I will miss some of the differences. that's just not true. sure, it might make the job easier if I could just read the text and notice when a word is misspelled because I already know it. that would be faster, thanks to pattern training. but it's not my only way to find a difference. letters are what will really be different, so letters are all I really need to be able to identify. at least that's the case when the question really is "are those 2 texts noticeably different?"
 
in the same way, you can decompose sound into the simplest elements a human can notice, and that's what our brain puts together when it does all that amazing interpretation and pattern analysis that would make NSA software jealous. but that's the brain, not the music. you need to stop mixing them together and trying to force mind constructs onto the science of sound.

 
If you put I1 into a linear system, you get O1. If you put in I2, you get O2. If you put in (I1+I2), you get (O1+O2). Therefore, as you say, it is useful to decompose the input into parts, if that helps, because the output is the sum of the outputs to each part.
 
Let's say a sound is composed of two notes: N1 and N2. The brain is not a linear system. Therefore it is not correct to say that the response of the brain is the sum of its response to N1 and its response to N2.
 
The same goes for blind tests, and quick switching versus long listening. If a three-second sample is divided into three one-second portions, say P1, P2, and P3, and the brain's response to each of those by itself is R1, R2, and R3, it is not correct to say that the brain's response to the whole sample is R1 + R2 + R3, or (R1, R2, R3) in sequence.


ok I agree, the sound heard just before may influence how I'll perceive the sound right after, so listening to only the one after may result in a different experience. I'd say psychoacoustics is with you on that one. where we disagree is that you seem to imagine some kind of cascading effect where plenty of little differences end up taking on a meaning of their own that would generate a bigger, more noticeable difference. I don't doubt that a series of events can become something significant to us; after all, 1 second of a 1kHz tone is already the same event repeated a thousand times, and that's what gives us what we perceive as a tone. my problem lies with how you expect a series of differences to be noticeable when the biggest of those differences alone wouldn't be. and I don't think that can happen often in actual tests.
my opinion is that most of the time I would actually do better testing that one most significant difference in a 2 or 3s sample than testing the whole track in one go. and my personal experiences with ABX tend to confirm that opinion. no, in fact it would be better to say that I have that opinion because of the results I get in ABX.
so we're back to your assumption and how you should try to test it objectively, to find at least one situation where existing differences result in you scoring better in a blind test with the long sample vs a short sample containing the most significant difference. if you can make some, I'll be genuinely interested in trying them myself. I'm an idiot full of preconceptions, but I have no trouble changing my mind when presented with some convincing evidence.
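
for anyone who wants to score such a test, the usual binomial math fits in a few lines of python. the trial counts below are just examples and the 0.5-per-trial guessing model is the standard assumption, nothing more.

```python
# p-value for an ABX run: the chance of getting at least this many trials
# correct by pure guessing (probability 0.5 per trial).
from math import comb

def abx_p_value(correct, trials):
    """P(at least `correct` right out of `trials` by guessing alone)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))   # ~0.038 -- conventionally treated as significant
print(abx_p_value(9, 16))    # ~0.40  -- indistinguishable from guessing
```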
 
 
about the musician playing a piece and judging the music: from an objective point of view, that would be like thinking that a family member would make a good judge in a trial, or a good psychologist for a family member. it may seem good because the guy actually knows a lot about the subject at hand, maybe more than anyone else in the world. but being personally linked creates its own list of problems and judgment biases that we certainly wouldn't want in anything trying to make a fair assessment of a situation. I believe that's one of the points the others have tried to make.
 
Jul 12, 2016 at 2:09 PM Post #66 of 135
 
ok I agree, the sound heard just before may influence how I'll perceive the sound right after, so listening to only the one after may result in a different experience. I'd say psychoacoustics is with you on that one. where we disagree is that you seem to imagine some kind of cascading effect where plenty of little differences end up taking on a meaning of their own that would generate a bigger, more noticeable difference. I don't doubt that a series of events can become something significant to us; after all, 1 second of a 1kHz tone is already the same event repeated a thousand times, and that's what gives us what we perceive as a tone. my problem lies with how you expect a series of differences to be noticeable when the biggest of those differences alone wouldn't be. and I don't think that can happen often in actual tests.
my opinion is that most of the time I would actually do better testing that one most significant difference in a 2 or 3s sample than testing the whole track in one go. and my personal experiences with ABX tend to confirm that opinion. no, in fact it would be better to say that I have that opinion because of the results I get in ABX.
so we're back to your assumption and how you should try to test it objectively, to find at least one situation where existing differences result in you scoring better in a blind test with the long sample vs a short sample containing the most significant difference. if you can make some, I'll be genuinely interested in trying them myself. I'm an idiot full of preconceptions, but I have no trouble changing my mind when presented with some convincing evidence.
 
 
about the musician playing a piece and judging the music: from an objective point of view, that would be like thinking that a family member would make a good judge in a trial, or a good psychologist for a family member. it may seem good because the guy actually knows a lot about the subject at hand, maybe more than anyone else in the world. but being personally linked creates its own list of problems and judgment biases that we certainly wouldn't want in anything trying to make a fair assessment of a situation. I believe that's one of the points the others have tried to make.

 
I'm not "imagining" or "assuming" anything. I'm observing that sound scientists seem to trust that short samples are the way to go, so they must have some evidence for this, right? One person's opinion is not that important. You feel you do better in short samples, I feel I do better in longer tests.
 
It's not a mystery that humans find it easier to remember something they've been repeatedly exposed to. This suggests an area of research; I would like to know if that research has been done.
 
Is there a model in psychoacoustics that predicts this result? Anything?
 
It's very, very difficult for me to do a meaningful ABX in Foobar, as the kinds of differences I hear in test files are not the same kinds of differences I hear in comparing components. My bias could mean I fail the short-sample test, but that would be a meaningless result!
 
This brings up a point. Objectivists like to mention that sighted bias can cause us to hear a difference where there is none, but does a blind test resolve that? Has anyone gathered evidence about the question "Can a bias/expectation cause someone to hear no differences?"
 
I never said it was the same musician who performed who should be evaluating the recording. You are right that being personally close to the performance can skew perception. Primarily I'm saying that to evaluate fidelity involves perceiving musical patterns, and that means in practice someone who has a good musical ear. If the choice is between (1) the musician who performed and (2) an engineer who is only trained to hear sound fields, then I'll take (1).
 
Jul 12, 2016 at 3:02 PM Post #67 of 135
Jul 13, 2016 at 12:22 AM Post #68 of 135
Okay, I did get a little rough there, so I'll try to focus on what I see as the point.
 
I think my main observation--and keep in mind this is an "observation" for which I am open to multiple explanations--is that live music has clarity that is not present in recorded music. Recorded music can have decent clarity, although not much of it does.
 
What do I mean by clarity? First let me say that I am avoiding the term "beauty," even though I might sometimes use that word to describe live music, and using "clarity" instead because it's a framing of the problem that might be easier to investigate. So what do I mean?
 
"Good" art has clarity.  Okay, I realize that the concept of "good art" is subjective, but don't get distracted by that facet of it. Because I don't think that anybody who is interested in investigating perception--in an objective way--can avoid art, and anybody that wants to investigate art cannot avoid dealing with the fact that some art grabs the attention of a LOT of people in a BIG way and holds their attention for centuries, and that's something worth investigating.
 
So I think a way to frame the question of "good" art that's a little more objective is to say that it has "clarity." The paper that jcx quotes below uses the term "intelligibility" which I think must be similar.
 
An example from composition is easier, although please don't get distracted and try to say "composition is all macro phenomena"--this is only an analogy to make clear what I mean. In the common practice period of classical music, running from something like 1650 to 1850, there were stereotyped patterns of voice leading found in many composers. But we only remember the way the great composers used them. So what was different about the great composers? One thing (not the only thing) was that they chose their counterpoint, harmony, voice leading, orchestration, etc. to make the patterns clear. Whereas the minor composers that play all night on KUSC here in Los Angeles sound rather muddy by comparison. The idea is that a clear pattern is one that you "can't miss"--whereas an unclear pattern takes a little more time and attention to decipher, or can't be deciphered at all.
 
Art could be described as a set of patterns found in many artists--but maybe a lot clearer in the great artists. And there are often technical explanations for clarity or lack of it. No subjectivity needed.
 
So there is something similar in performance. Performers are using nuances of expression in order to make a pattern clear. And what I observe is that there is a loss of clarity when you move from the concert hall to the control room. But things like the mike technique have a major effect on that loss---it can be a total loss, or it can be a minor loss.
 
If the recording engineer is going to be judging clarity, then he should be familiar with the patterns that need to be clear. As those are musical patterns, there is not much of a distinction between an "engineer's ear" and a "musician's ear."
 

 
   
1. Obviously you have taken what "the guy" told you out of context and misinterpreted it. I say "obviously" because the only alternatives are that he knew little about audio engineering himself or that he was deliberately trying to mislead you. In practice a musician can't even become a competent audio engineer in weeks, let alone "fabulous" or "in moments"! Common sense alone should tell you this!
 
2. And here we go! Making inflammatory statements about what is "myth" and "total baloney" based entirely on false assumption and misinterpretation. That's insulting to those of us who know differently, and as you appear to only want (or are only willing) to accept answers which conform to your existing understanding/assumptions, the result is that threads are likely to remain off topic and slowly degenerate towards personal attacks.
 
My question to you johncarm, is do you really want the scientific/practical/truthful answers to your question/s? You state that you do but constantly contradict that statement by demanding answers which only conform to your preconceptions and therefore argue with (and sometimes even insult) the answers provided rather than trying to understand/contextualise them. If you want answers which pander to your preconceptions, you're in the wrong forum! Many/Most of the other forums here on Head-Fi are specifically for audiophiles who want their preconceptions pandered to and there are plenty of audiophile companies more than willing to support or advance those preconceptions and provide the appropriate pandering, for a price of course and of course at the expense of the actual science (which is routinely banned in some/most other forums here). If you want answers to your questions you need to make a choice, there's little/no logical middle ground between current audiophilia and current science! I can address many/most of your points/questions in this thread (both the on and off topic ones) but on past experience you're not going to be happy with some of my responses. Are you only going to defend your preconceptions or are we going to discuss the issues? If it's the former then I'd ultimately be wasting my time, which is why I haven't responded to this thread so far!
 
G

 
I wasn't distinguishing between the engineer's "ear" and their "technique." Obviously a musician can't learn the technique and the technical concepts involved without a lot of study and practice. However, I do tend to regard the ear as primary. So I was thinking that musicians can make very competent evaluations of the clarity in a recording. I don't really know what fraction of musicians have awful stereos, but it seems that in many cases they just turn off their perception of nuance when they listen to recordings, and they can easily turn it back on with a shift of attention. I.e., in moments.
 
I guess you will never convince me that recorded music has clarity equal to live, or that an engineer's job doesn't affect that clarity. You will never convince me that clarity in performance doesn't demand precision, nuance, etc., or that a bad recording doesn't have the effect of obscuring the performer's precision. If you mean to convince me of those things, then you are wasting your time. But maybe I can be convinced that the reasons for those things are other than I thought.
 
Jul 13, 2016 at 12:35 AM Post #69 of 135
  didn't read the reference yet did you?
 

 
Okay, I just read it.
 
Let me go to my primary observation... that live music has clarity that is not present in recorded music. The paper referred to a number of experiments that used test tones, or used instruments, but it appears these were sustained tones. But since I'm interested in what needs to be preserved in order to preserve clarity, I would be interested in using test signals that have the patterns I want to be clear. And test tones just don't have them.
 
So if the question is whether we can hear above 20 kHz, first I think we should reframe it by asking "Can we tell the difference between the original and the same signal passed through a system that band-limits it?" Then we should ask that for every relevant system, not answer it for just one kind of system (say, digital), and finally we should use test signals that represent the category of phenomena we want to replicate in the first place. We should also use test gear that is up to the job.
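
For example, a band-limited comparison file could be generated along the lines of the sketch below. The file names, the 20 kHz cutoff and the use of the soundfile/scipy libraries are just illustrative assumptions, not a description of any published test.

```python
# Make a band-limited copy of a (hypothetical) high-sample-rate file so
# listeners can compare original vs. band-limited in an ABX tool, rather than
# listening for isolated ultrasonic tones.
import soundfile as sf                          # pip install soundfile
from scipy.signal import butter, sosfiltfilt

audio, fs = sf.read("original_96k.wav")         # e.g. a 96 kHz source file
sos = butter(8, 20_000, btype="low", fs=fs, output="sos")
limited = sosfiltfilt(sos, audio, axis=0)       # zero-phase 20 kHz low-pass
sf.write("bandlimited_96k.wav", limited, fs)    # same format, content >20 kHz removed
```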
 
So in the paper you cite, there were some issues. The biggest issue is the reliance on tones. It seems that sound scientists want to build up a model of the ear by doing experiments on it, and then make some declarations about the limits of the ear's resolution, but we audiophiles think it's pretty weird if these experiments are done using signals that don't represent the very phenomenon that we want investigated, the whole reason that anyone is interested in "fine" audio reproduction.
 
Jul 13, 2016 at 12:41 AM Post #70 of 135
The methodology of western science usually involves breaking a phenomenon down into its component parts before making observations and conducting experiments, because there are observations that simply cannot be made out of a massed jumble of interactive factors. Once observations have been made on these atomic phenomena, further experiments can be conducted on how these phenomena interact, but not before. If you think taking a holistic approach right from the start, when you have no idea how each component functions, is a good idea, then Chinese medicine might be your thing. (And I say this as a Chinese person.)
 
Jul 13, 2016 at 12:45 AM Post #71 of 135
   
The idea that musicians have no idea what they sound like is a myth.
 
Think about it this way.
 
Let's assume you go to a concert hall, and this particular hall has no acoustic problems: the sound is balanced and the direct/reverb ratio is not problematic anywhere.
 
Say you sat in the left part of the front row while Michael Tilson Thomas conducts Mahler's Fifth, and you think it's a great performance. Then, the next night, you sit in the back right, MTT conducts a pretty much similar performance, and you think it's terrible.
 
Do you think that could happen? Or is that unlikely? Why or why not?
 
I think it's extremely unlikely. The reason why gets at this myth.

 
What does this have to do with musicians knowing how they sound? Neither location is anywhere near the musicians' point of view. If I move during the performance, do I still think it's good or terrible? A great performance is a great performance even when the sound quality is bad. A great performance is more important than sound quality (within limits).
I know and have recorded hundreds of musicians; they have no possible way of knowing what they sound like in the room until they have heard themselves many times in recordings.
 
Jul 13, 2016 at 1:05 AM Post #72 of 135
   
 
I wasn't distinguishing between the engineer's "ear" and their "technique." Obviously a musician can't learn the technique and the technical concepts involved without a lot of study and practice. However, I do tend to regard the ear as primary. So I was thinking that musicians can make very competent evaluations of the clarity in a recording. I don't really know what fraction of musicians have awful stereos, but it seems that in many cases they just turn off their perception of nuance when they listen to recordings, and they can easily turn it back on with a shift of attention. I.e., in moments.
 
I guess you will never convince me that recorded music has clarity equal to live, or that an engineer's job doesn't affect that clarity. You will never convince me that clarity in performance doesn't demand precision, nuance, etc., or that a bad recording doesn't have the effect of obscuring the performer's precision. If you mean to convince me of those things, then you are wasting your time. But maybe I can be convinced that the reasons for those things are other than I thought.


It works the opposite way from what you are explaining. The recording doesn't affect the player at all; it is just a document of a performance. A performance can drastically affect the clarity of the recording. You can see how good a group's timing is on a phase meter or an oscilloscope. The more precise their timing, the better the phase will be, the better the clarity and the better the dynamics. Poor precision can actually flip the dynamics of the music, which, if you don't catch it, can be very difficult to fix. It happens way too often: a member is having a bad day, is nervous or excited, and throws everything off.
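
A rough numerical stand-in for what a phase/correlation meter shows might look like the sketch below. The stereo file name and the 100 ms window are hypothetical, and real meters use ballistics and weighting, so this is only illustrative: values near +1 mean the two channels largely agree, while values near zero or negative suggest timing/phase differences pulling them apart.

```python
# Short-term left/right correlation, a crude software "correlation meter".
import numpy as np
import soundfile as sf                 # hypothetical stereo recording

audio, fs = sf.read("take_01.wav")     # expected shape: (samples, 2)
win = int(0.1 * fs)                    # 100 ms analysis windows
for i in range(0, len(audio) - win, win):
    left, right = audio[i:i + win, 0], audio[i:i + win, 1]
    corr = np.corrcoef(left, right)[0, 1]
    print(f"{i / fs:6.2f} s  correlation {corr:+.2f}")
```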
 
Jul 13, 2016 at 10:44 AM Post #73 of 135
  [1] I guess you will never convince me that recorded music has clarity equal to live, or that [2] an engineer's job doesn't affect that clarity.

 
2. Certainly an aspect of the engineer's job is clarity, over which the engineer can exercise enormous control, far more control than can the musician. Therefore ...
 
1. And here we have it, the central issue with all your threads! You have made observations and then created a logical theory (or set of theories) to explain those observations. To you, those observations are reality and therefore sacrosanct; you "will never" be convinced of anything which doesn't obviously conform to your observations or explanations of them. This puts you firmly in the camp of the hardcore audiophiles, who are forced into espousing more and more ludicrous theories (relative to the known science) in order to defend the sanctity of their observations. The whole point of science is to try and separate the truth of reality from the everyday observations/assumptions of reality. Despite appearances/assumptions, the earth is not flat, it's not the centre of the universe, the fundamental elements of the universe are not fire, water, air and earth, etc. In fact, the truth of reality (at least as science currently understands it) so utterly contradicts everyday observations/assumptions that the two main theories which describe the universe are almost impossible to even imagine! I'm not saying that the reality of sound science is as utterly bizarre and contradictory to everyday observation/assumption as, say, Quantum Mechanics, but you have to be open to the possibility of at least some differences. If, before you even know all the factors, you state that you will never be convinced, then by definition you are eschewing science. You are not going to get the answers you want here and you'd be much better off asking your questions in one of the extremist audiophile forums!
 
You stated "I guess", which potentially leaves the door open a crack that you can be convinced, on that basis I'll respond to your point and see if we can get anywhere. The concept of "clarity" may appear on the surface to be a simple one but in reality it isn't. In fact, your apparent assertions of clarity can be challenged even in terms of your own internal logic, as well as in terms of the actual reality/science which includes factors beyond your observations/assumptions! Even just sticking to music observation (without considering sound science), clarity is not so simple, it has a number of different levels, some of which require a deliberate lack of clarity! For example, typically in a symphony orchestra we do not want to hear 18 clearly defined individual 1st violinists, we typically want a lack of clarity which results in those 18 violinists being perceived as essentially a single musical entity (the first violin section). However, we would typically want clarity between the first violin section and the other string sections. Even in the case of divisi 1st violins we're still not after clarity, just maybe an additional level of clarity; two clear musical entities (of 9 violinists each) rather than one of 18. Sometimes of course we do want an individual 1st violinist to have particular clarity (EG. The leader). Clarity is therefore superficially easy to define musically, it's the level of detail and separation of the individual musical entities. However, look beyond the superficial, even just in musical terms and clarity is not so easy because the individual musical entities are not static, they combine, divide and sub-divide, from the level of the entire orchestra all the way down, on occasion, to single musicians within the orchestra. Logically, even from a purely musical perspective, one wouldn't want perfect clarity of every individual musician within the orchestra all the time. Clarity is therefore referenced against what "one would want", which is effectively entirely subjective. From an audio engineering perspective, we've not only got these same musical issues of clarity but also a whole bunch of additional issues caused by equipment practicalities, sound science and psycho-acoustics/perception. Psycho-acoustics is a big one because not only in practise is it usually the most, or one of the most profound factors at play but because it's typically completely ignored/eliminated by audiophiles (on the grounds that observation is reality and therefore that psycho-acoustics in effect does not exist)! Let's look at reality though, if we're sitting in a concert hall, say 20m from a violinist we can hear incredibly subtle nuances in the fiction of a horse's tail being dragged against a string. At the same time, we're completely unaware of the (relatively) massive sound of a powerful muscle thumping and blood being forced around the body just a few centimetres or millimetres from our ears. It doesn't take a pHd to realise there must be some autonomous (sub-conscious) process/es at work which results in a perception of reality which differs significantly from actual reality. Beyond the obvious heartbeat, perception is altering reality in many other respects. For example, if we are concentrating on our violinist 20m away, depending on the hall acoustics, we are in reality hearing relatively little of the direct sound the violinist is creating, mostly we are hearing reflections of the violinist's sound. 
Stick a mic in that position and we'll pick up more of that reality, a recording which lacks clarity due to too much reflections (reverb) relative to the direct sound. A problem which is significantly less obvious if we're actually there in a live situation because if we concentrate on the violinist our brain will filter out some of that reverb and manufacture a greater clarity than exists in reality. The obvious riposte to this is; why, when listening to the recording with too much reverb, doesn't our brain do the same as the live situation and filter some of it out? The answer is that we're listening to a recording, not in the live situation. What we might wish to believe (that we're in the concert hall) is contradicted by our other senses, plus other biases arising from the live situation and even by the audio reality itself. With regards to the latter: We have two point sources of sound production (speakers) which are trying to represent the acoustic information which is arriving from all directions (in the live situation) and those two point sources are also creating significant reflections in your listening environment (say a living room), reflections which conflict with the desired reproduced reality. Even with perfect transducers (mics and speakers), the reproduced acoustic reality would be a concert hall inside a living room, which of course doesn't and can't exist and is a fundamental conflict.
 
Add to this the other weaknesses: weaknesses in the stereo illusion, weaknesses in transducers, perceptual differences induced by the other senses, conflicts with the other senses and different biases (expectation bias, etc.) which affect perception, and it's obvious that an audio recording can only ever be an approximation of a live gig, an approximation which at best may fool some. Typically, as with audiophiles, you are making incorrect assumptions and concentrating on the wrong areas. You've mentioned the dynamic range/resolution of digital audio but that's a complete red herring! Although it might not appear intuitive, digital audio has infinite resolution (even at 16 bit) and is capable of a dynamic range which not only far exceeds the ear but far exceeds the capabilities of transducers to record or reproduce, so even if your ears were in theory capable of hearing it, you still wouldn't be able to hear what your sound system is not producing!
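
The resolution point can be illustrated with a short sketch (illustrative parameters only; TPDF dither; a 1 kHz tone deliberately set to a quarter of one 16 bit step): quantised without dither the tone simply vanishes, whereas with dither it is still measurably present, just sitting under a noise floor.

```python
# Quantise a tone far below the 16-bit step size, with and without TPDF dither.
import numpy as np

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)
lsb = 1 / 2**15                                     # one 16-bit step (full scale = +/-1)
tone = 0.25 * lsb * np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone, 1/4 LSB peak

def quantize(x, dither=False):
    d = (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb if dither else 0.0
    return np.round((x + d) / lsb) * lsb

def tone_level(x):
    """Rough level of the 1 kHz component, in dB (arbitrary but consistent reference)."""
    spectrum = np.fft.rfft(x * np.hanning(len(x)))
    return 20 * np.log10(np.abs(spectrum[1000]) / len(x) + 1e-30)

print("undithered:", round(tone_level(quantize(tone)), 1), "dB")   # tone is gone
print("dithered:  ", round(tone_level(quantize(tone, dither=True)), 1), "dB")
```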
 
I don't have time just now to deal with what musicians hear, how that differs from what an engineer hears, or some of the factors completely missing from your theory/ies, although there are a few hints above if you care to read carefully. Are we getting anywhere or am I wasting my time?
 
G
 
Jul 13, 2016 at 3:50 PM Post #74 of 135
I hoped you would find it more useful - at the very least it should show the methods needed to make repeatable, usable hearing discriminations that can inform psychoacoustic models at the level serious professional researchers are currently engaged with
 
Quote:
 So in the paper you cite, there were some issues. The biggest issue is the reliance on tones. It seems that sound scientists want to build up a model of the ear by doing experiments on it, and then make some declarations about the limits of the ear's resolution, but we audiophiles think it's pretty weird if these experiments are done using signals that don't represent the very phenomenon that we want investigated, the whole reason that anyone is interested in "fine" audio reproduction.

 
again you do seem to keep going fuzzy/meta when there are relevant and accessible "baby steps" that have been/have to be tested, with the known results incorporated into any more "advanced" considerations
 
and what little I know of musical practice does involve teacher and student exchanging short examples, immediately attempting to copy, or even just hearing the distinction in a note or short phrase - not "now listen to how I modulate attack/decay at 10min:27sec of the 1/2 hour piece I will now play..., OK, now you play the whole 1/2 hr from the beginning..."
 
if you use terms like dynamic range, then human hearing with test tones, masking theory, physical acoustics, and the noise limits of rooms, mics, electronics, storage media and playback conditions all apply
 
when audiophiles talk about the greater "realism" of, say, analog tape or vinyl in terms of measurable system metrics like noise floor, channel separation and timing accuracy vs today's better digital, we do think they're deluded
 
Jul 13, 2016 at 4:13 PM Post #75 of 135
The methodology of western science usually involves breaking a phenomenon down into its component parts before making observations and conducting experiments, because there are observations that simply cannot be made out of a massed jumble of interactive factors. Once observations have been made on these atomic phenomena, further experiments can be conducted on how these phenomena interact, but not before. If you think taking a holistic approach right from the start, when you have no idea how each component functions, is a good idea, then Chinese medicine might be your thing. (And I say this as a Chinese person.)

 
I understand the idea of breaking down a phenomenon and investigating the interactions of the parts. In the end, however, you must have some way of checking the whole against the predictions of your theories. If you didn't break down the parts in the right way, then investigating their interaction will be of limited use. The problem is that you will never know that if you don't check the whole.
 
For example, physics has explained some individual "parts" with theories such as quantum mechanics and general relativity. Someday these theories will have to be combined and the result checked. There has to be some observation that can verify the combined theory. Perhaps observing gravitational waves can help. Or maybe it will be something else, but in the end it must be checked.
 
In the case of audio, the phenomenon that interests me is the perception of live music and whether what I perceive in the control room matches that perception.
 
Here are some individual theories that are useful in predicting that phenomenon:
 
  1. Theories about what component parts music contains
  2. The psychology of hearing
  3. The effect of sound fields such as produced by speakers
  4. Distortions in microphones and speakers
  5. The predicted distortions or lack thereof in electronics
 
In the end we must step into the control room and check what we perceive. Otherwise there is no way of knowing whether we have broken down this phenomenon correctly and investigated the interaction of the parts correctly.
 
I am not arguing here that current theories are failing. I am just making a case for the need to check the phenomenon as a whole.
 
But I will also say that what I've read in sound science papers is very, very far from testing the "whole." You say that the phenomenon has to be broken down and the interactions studied, but I haven't seen evidence that this has progressed very far. For instance, the paper that jcx linked investigates very limited signal types. Then it suggests the results say something about big phenomena, like the question of whether systems with bandlimited impulse responses produce meaningful distortions. The problem is how very far from making that conclusion we seem to be.
 
However, I still need to read the Brian Moore book, which I ordered.
 
