Confused about all the subjectivity involved in audio
Mar 20, 2016 at 8:21 AM Post #61 of 106
Does anyone here know the details of this "highly complex" relationship calculation? Yes? Please explain it.


Simply explained, the signal is put through a K-weighted filter and the RMS of the result is measured. The important part is the K-weighted filter, which was designed by the ITU after compiling and averaging many decades of studies. The thing to remember is that it's a relative rather than an absolute measurement of loudness and even then, as it's based on a broad average, it may or may not align with any particular individual's perception. Furthermore, it only works under certain conditions, in relatively small rooms.
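For anyone who wants to see roughly what that looks like in practice, here is a minimal, ungated sketch of the measurement (assuming a mono signal at 48 kHz and that numpy/scipy are available; the biquad coefficients are the published BS.1770 values for 48 kHz, quoted from memory and worth checking against the spec, and a real meter adds 400 ms gating blocks on top of this):

```python
import numpy as np
from scipy.signal import lfilter

def k_weighted_loudness(x, fs=48000):
    """Ungated BS.1770-style loudness (LKFS) of a mono signal at 48 kHz."""
    # Stage 1: high-shelf "pre" filter (models the acoustic effect of the head)
    b1 = [1.53512485958697, -2.69169618940638, 1.19839281085285]
    a1 = [1.0, -1.69065929318241, 0.73248077421585]
    # Stage 2: RLB high-pass (revised low-frequency B-weighting)
    b2 = [1.0, -2.0, 1.0]
    a2 = [1.0, -1.99004745483398, 0.99007225036621]
    y = lfilter(b2, a2, lfilter(b1, a1, x))
    return -0.691 + 10.0 * np.log10(np.mean(y ** 2))  # relative, not absolute, loudness

# A full-scale 997 Hz sine should read close to -3 LKFS (the BS.1770 calibration point)
fs = 48000
t = np.arange(5 * fs) / fs
print(round(k_weighted_loudness(np.sin(2 * np.pi * 997 * t), fs), 2))
```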
OK, thanks. I thought it would have something to do with the energy in each of the ERB frequency bands?

Are you saying that soundstage in recordings is ALWAYS manufactured? I doubt you mean this.


Why do you doubt this? Soundstage is always manufactured.
If you are just repeating your previous point that the configuration of miking in a room is a manufacturing process, then we are talking at cross purposes. If these mikes faithfully capture the room's volume & ambience without any further manipulation in the studio, I don't consider this "manufactured".

By "not an audio characteristic" do you mean that it's a figment of our imagination


Effectively yes, it's a figment of our imagination and not based on a property or characteristic contained in the sound waves.
In my example of the room miking used to capture the room's ambience - what do you think the room miking is capturing?

OK. Can you show a set of such measurements that define a real (not studio manufactured) soundstage?


No, you're not getting it, there is no such thing as a "real" soundstage. Soundstage is a human concept, an artificial construct of the brain, a perception. It's not a physical property of the sound waves! What you are asking for is akin to asking for a set of measurements which define a real dragon. The only way we could create a set of measurements for a dragon would be to get a bunch of people to draw a dragon next to something of a known size, say a car, then work out the dimensions of each of the drawn dragons and finally calculate an average. Then we would have a set of measurements. This is, in effect, how we measure loudness.

Soundstage is a more tricky problem though, because there's not even consensus on what soundstage actually is, and until we have that, we can't even start to think about a way of creating an averaged model to measure it. That's why I used loudness as an example: there's general consensus over what loudness is, although there isn't general consensus over exactly what "loud" is, and that's why we can only model relative loudness and not absolute loudness.
But loudness is not in one's imagination - it is based on characteristics within the signal.
That's what I'm getting at with regard to soundstage - it is founded on certain characteristics within the signal - it isn't imagination or "dragons", it has a relationship to the signal stream. How it is then constructed within the auditory processing system is another ball of wax but let's not use the incorrect & misleading term "imagination" - this is auditory processing.

You are going to struggle to grasp these concepts until you understand that perception and reality are two different things which are either unrelated or related in a complex way which varies between individuals. Our brains create complex perceptual structures or models, the functional design of which is to allow us to make sense of the world around us. The mistake made by many is a failure to understand this principal design goal and instead believe that the design goal of perception is to represent reality as accurately as possible, and there are some/many who don't even believe there is any difference between perception and reality. The difficulty facing these latter groups of people is that perception is all they have ever experienced; reality cannot be experienced and therefore we can't compare reality and perception, except indirectly, through knowledge and understanding of the concepts, rather than through physical experience.

At its heart this is what science is; in an attempt to model reality, science attempts to separate reality from perception. Perception is easy, so easy we barely have to think about it and so common we invent labels to describe shared perceptions, but if we want to really understand what's going on, we only have two choices; accept that reality and perception are two different things or invent some "magic" in an attempt to justify that they're the same thing!

Science dictates that sound waves only have two intrinsic properties, amplitude and frequency. What we perceive when we listen is only loosely related to these two intrinsic properties, sometimes not related at all or related so complexly it's near impossible to work out if there's any relationship, and because it's a perception rather than reality it can vary significantly from person to person.
I completely agree & have been a fan of Donald Hoffman for a while, who states exactly this in his TED talk. So you have no argument from me on this score - I just don't understand where we differ if we share this same viewpoint? The central point is that our brain doesn't REconstruct reality, it interprets the signals from the limited senses into an interpretation of the world - an interpretation that has been successful for our survival as a species. It doesn't mean our senses are accurate or that a more accurate perception wins in the survival stakes - it doesn't win!

Much of what is attributed to properties of sound waves does not exist in reality, regardless of how trivially easy to accomplish or how commonly shared those perceptions are. We've already mentioned loudness and soundstage; to these we can add a whole host of other commonly shared and labelled perceptions such as pitch, musicality and even "music" itself, to name just a few, none of which exist in reality! Audiophiles are commonly unable to understand this difference, relying solely on personal experience rather than on knowledge/understanding, and that's why the response when trying to justify personal experience against the science must ultimately come down to "magic". Of course the response to that accusation is always along the lines of: it's only magic because science isn't yet able to measure properties beyond just amplitude and frequency but when it does, it will cease to be magic and it will become science. Unfortunately for audiophiles, we are not talking about the type of theoretical scientific model which we know is flawed but is just the best we currently have, we are talking about a precise, proven mathematical understanding of sound waves which has been around for nearly 200 years. So in this case, for magic to become science, this mathematical proof would have to be proved incorrect, a feat which nearly 200 years' worth of the world's top mathematicians have failed to achieve.

There's additionally a simple logical proof, borne out of the practical application of the mathematical proof: as amplitude and frequency are the only properties of sound waves we know about, they are the only properties of sound waves which we are able to measure and therefore record. In other words, if there is some other "magic" in there, we can't record (or reproduce) it! So, as this "magic" cannot and does not exist in the recordings these audiophiles are listening to, the only logical conclusion is that if it exists at all, it must exist somewhere other than the recording and the only logical place that could be is in their perception!

G
Wow, you've talked yourself down a rabbit hole based on some strange logic that I can't really fathom - after you introduce "magic" it all becomes surreal logic to me:
- Time is a hugely important factor in soundwaves, not just amplitude & frequency. It's a stream of point to point moments of amplitude & frequency that we are processing in our auditory perception & we are constantly analysing this stream into auditory objects & deciding what freq/amplitude belongs to what auditory object & hence building an auditory scene - much the same way as we build a visual scene of the signals that come through our eyes
- the perception of vision has largely been mapped - much more so than auditory perception. So we understand better the way this sense works i.e we understand the interpretation process of the sense not that we understand "reality"
- I'm suggesting the same understanding be applied to auditory perception - understanding the interpretation processes of this sense. I don't see what the problem here is?
- yes, we are in the middle of trying to understand the workings of auditory processing - we don't understand it all & in fact there are many issues to be resolved. You call them "magic" - I call them "mysteries" i.e issues yet to be resolved
- these "mysteries" are arising from our lack of understanding of how this processing of the signals - but these signals arise form the vibration of air molecules on the tympanic membrane, not from "imagination"
- now what we also have is an added complexity in this hobby - that of the recreation of an illusion by our audio playback systems. This illusion has to tick our perceptual boxes, otherwise it becomes less & less of an illusion we buy into - the more it satisfies our auditory perception & matches its model of how the auditory world works in our everyday experience
 
Mar 20, 2016 at 9:35 AM Post #62 of 106
In my example of the room miking used to capture the room's ambience - what do you think the room miking is capturing?
But loudness is not in one's imagination - it is based on characteristics within the signal.
That's what I'm getting at with regard to soundstage - it is founded on certain characteristics within the signal - it isn't imagination or "dragons", it has a relationship to the signal stream. How it is then constructed within the auditory processing system is another ball of wax but let's not use the incorrect & misleading term "imagination" - this is auditory processing.
I completely agree & have been a fan of Donald Hoffman for a while, who states exactly this in his TED talk. So you have no argument from me on this score - I just don't understand where we differ if we share this same viewpoint? The central point is that our brain doesn't REconstruct reality, it interprets the signals from the limited senses into an interpretation of the world - an interpretation that has been successful for our survival as a species. It doesn't mean our senses are accurate or that a more accurate perception wins in the survival stakes - it doesn't win!
Wow, you've talked yourself down a rabbit hole based on some strange logic that I can't really fathom - after you introduce "magic" it all becomes surreal logic to me:
- Time is a hugely important factor in soundwaves, not just amplitude & frequency. It's a stream of point to point moments of amplitude & frequency that we are processing in our auditory perception & we are constantly analysing this stream into auditory objects & deciding what freq/amplitude belongs to what auditory object & hence building an auditory scene - much the same way as we build a visual scene of the signals that come through our eyes
- the perception of vision has largely been mapped - much more so than auditory perception. So we understand better the way this sense works i.e we understand the interpretation process of the sense not that we understand "reality"
- I'm suggesting the same understanding be applied to auditory perception - understanding the interpretation processes of this sense. I don't see what the problem here is?
- yes, we are in the middle of trying to understand the workings of auditory processing - we don't understand it all & in fact there are many issues to be resolved. You call them "magic" - I call them "mysteries" i.e issues yet to be resolved
- these "mysteries" are arising from our lack of understanding of how this processing of the signals - but these signals arise form the vibration of air molecules on the tympanic membrane, not from "imagination"
- now what we also have is an added complexity in this hobby - that of the recreation of an illusion by our audio playback systems. This illusion has to tick our perceptual boxes, otherwise it becomes less & less of an illusion we buy into - the more it satisfies our auditory perception & matches its model of how the auditory world works in our everyday experience

 
"what do you think the room miking is capturing?"
Let's see. The vibrations in the air move the sensor in the microphone, which in turn creates an electrical signal that is analogous to the pattern of the air movement. So in the case of ambience the mike is capturing the original sound and all the reflected sounds within the room. No magic.
 
"But loudness is not in one's imagination - it is based on characteristics within the signal." So one records a live performance by a heavy metal rock group and during the live performance the group was playing at a high volume. Now one plays back the recording but at a really low volume. OMG where did all the loudness go!?!? Ah this time it must be magic!!!
 
Soundstage is simply the relationship of all the various direct and indirect sounds (reflected sounds) that the microphone(s) capture. Move a microphone to a different position and the relationships of the various direct and indirect sounds all change as well. No magic.
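To put a rough number on "the relationships all change", here's a toy sketch (purely illustrative: hypothetical 2-D positions, first-order image-source reflections only, crude 1/r level falloff) showing how the arrival times and levels of the direct sound and two wall reflections shift when the mic is moved:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def arrivals(source, mic, room_width):
    """Direct path plus first reflections off the x=0 and x=room_width walls."""
    images = [source,
              (-source[0], source[1]),                  # image source in the x=0 wall
              (2 * room_width - source[0], source[1])]  # image source in the far wall
    results = []
    for img in images:
        d = math.dist(img, mic)
        results.append((round(d / SPEED_OF_SOUND * 1000, 2),  # arrival time, ms
                        round(1.0 / d, 3)))                   # crude 1/r level
    return results

source = (1.0, 3.0)                    # hypothetical source position (m)
for mic in [(4.0, 3.0), (2.5, 1.0)]:   # two hypothetical mic positions (m)
    print(mic, arrivals(source, mic, room_width=6.0))
```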
 
"Time is a hugely important factor in soundwaves, not just amplitude & frequency" Wow and here I was thinking that "frequency" had something do with time. Silly me.
 
All joking aside, if the above statement doesn't show that all you are doing is being deliberately argumentative and disruptive, I don't know what would. Please stop it.
 
Mar 20, 2016 at 11:47 AM Post #63 of 106
Yes, with the proper headphones/earphones the iPhone should produce excellent results. And it is more likely that the amp/DAC is producing less distortion with your head/earphones than the iPhone alone does. And the distortion when using just the iPhone is being caused by a poor match between the iPhone and your head/earphones, and not because the measurements of the iPhone are somehow "wrong".



Yes, could well be. Can you detail the distortion characteristics that give rise to this flattened sound? What exactly is the flattening, non-musical sound that Krismusic is describing - can you put some meat on this bone?



Is this "flattening" that Krismusic mentions, a flattening of soundstage? Maybe Kris could tell us?

Sorry guys. I was on a plane. Just got into Tokyo!!!
Feel almost as out of my depth on this thread but it seems to have come alive. :)
I have been very happily using iPhones with my Noble K10 CIEM's for a couple of years now. Periodically on the Noble site someone mentions that although the K10 sounds "good" out of an iPhone, it "scales well" with "better" equipment.
Having been quite happy listening to the iPhone and having been told several times by people who seem very knowledgeable in Sound Science that the iPhone measures very well, I have been quite vocal in refuting the idea that anything better is needed.
Then I realised that I have never used anything other than Apple sources and on a bit of an impulse I bought a secondhand Onkyo Amp/DAC.
It has made me reassess things.
The Onkyo has more depth and detail in the bass and the treble sounds smoother and more "relaxed"(?).
By comparison the iPhone on its own sounds a bit thin. The top end sounds a bit "scratchy"(?)
I never worry too much about soundstage. It seems to be asking too much for headphones to recreate the experience of listening to speakers, let alone performers, in a room.
Having said that I think the Onkyo also widens things out. I often hear stuff well outside my head. Which is a neat trick from CIEM's that are fitted directly into the ear canal. IMO.
All these improvements from the Onkyo could well be placebo. I have been caught by that many times!
I have been listening back and forth for a few weeks now and so far it seems worth carting the Onkyo around.
So I don't know. I was very comfortable with the iPhone until I heard the Onkyo.
I was perfectly prepared to believe that all well designed amps and DAC's should sound the same.
I heard the Mojo a couple of times at meets and didn't discern any improvement over the iPhone HO.
I suspect that the Onkyo, being a consumer product rather than a product from an audiophile "boutique" company has some internal DSP going on which results in a pleasing sound.
I'd better go. Tokyo and the missus are calling!
 
Mar 20, 2016 at 3:14 PM Post #64 of 106
In my example of the room miking used to capture the room's ambience - what do you think the room miking is capturing?

But loudness is not in one's imagination - it is based on characteristics within the signal.

That's what I'm getting at with regard to soundstage - it is founded on certain characteristics within the signal - it isn't imagination or "dragons", it has a relationship to the signal stream. How it is then constructed within the auditory processing system is another ball of wax but let's not use the incorrect

"what do you think the room miking is capturing?"
Let's see. The vibrations in the air move the sensor in the microphone, which in turn creates an electrical signal that is analogous to the pattern of the air movement. So in the case of ambience the mike is capturing the original sound and all the reflected sounds within the room. No magic.
Yes, correct & the playback of the captured direct vs reflected sounds should produce a perception of soundstage. It's not a manipulation, it's not manufactured - it's a natural recording process. Agreed that there are also studio-created soundstage & manipulation. See, we can agree sometimes :)

"But loudness is not in one's imagination - it is based on characteristics within the signal." So one records a live performance by a heavy metal rock group and during the live performance the group was playing at a high volume. Now one plays back the recording but at a really low volume. OMG where did all the loudness go!?!? Ah this time it must be magic!!!
What are you trying to achieve with this text?

Soundstage is simply the relationship of all the various direct and indirect sounds (reflected sounds) that the microphone(s) capture. Move a microphone to a different position and the relationships of the various direct and indirect sounds all change as well. No magic.
Correct & just to remind you as you keep using "magic" - I wasn't the one who introduced the word in this discussion so I think you are addressing the wrong person.

"Time is a hugely important factor in soundwaves, not just amplitude & frequency" Wow and here I was thinking that "frequency" had something do with time. Silly me.
It's trivial to show that playing a song backwards will sound very different to playing it forward even though all amplitudes & frequencies are exactly the same?
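For what it's worth, the narrow technical point is easy to check numerically: time-reversing a signal leaves the magnitude spectrum essentially unchanged and only changes the phase, i.e. the ordering of events in time, which is exactly what the "frequency already implies time" replies are referring to. A quick numpy check on an arbitrary test signal (any mono block would do):

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(4096)  # stand-in for a block of mono audio
x_rev = x[::-1]                                      # the same block played backwards

fwd, rev = np.fft.rfft(x), np.fft.rfft(x_rev)
print(np.allclose(np.abs(fwd), np.abs(rev)))      # True: identical magnitude spectra
print(np.allclose(np.angle(fwd), np.angle(rev)))  # False: the phase (time structure) differs
```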

All joking aside, if the above statement doesn't show that all you are doing is being deliberately argumentative and disruptive, I don't know what would. Please stop it.
If you can show examples of me being argumentative & disruptive, please do. If not, please retract that accusation.
 
Mar 20, 2016 at 3:29 PM Post #65 of 106
If you can show examples of me being argumentative & disruptive, please do. If not, please retract that accusation.

Absolutely:
 
"It's trivial to show that playing a song backwards will sound very different to playing it forward even though all amplitudes & frequencies are exactly the same?" - which was your response to my pointing out that the term "amplitude & frequency" contains a time component, i.e. FREQUENCY.
 
If that's not being argumentative & disruptive, I really don't know what would be, other than very direct name calling and slander.
 
Mar 20, 2016 at 3:46 PM Post #66 of 106
Yes, with the proper headphones/earphones the iPhone should produce excellent results. And it is more likely that the amp/DAC is producing less distortion with your head/earphones than the iPhone alone does. And the distortion when using just the iPhone is being caused by a poor match between the iPhone and your head/earphones, and not because the measurements of the iPhone are somehow "wrong".



Yes, could well be. Can you detail the distortion characteristics that give rise to this flattened sound? What exactly is the flattening, non-musical sound that Krismusic is describing - can you put some meat on this bone?



Is this "flattening" that Krismusic mentions, a flattening of soundstage? Maybe Kris could tell us?

Sorry guys. I was on a plane. Just got into Tokyo!!!
Feel almost as out of my depth on this thread but it seems to have come alive. :)
I have been very happily using iPhones with my Noble K10 CIEM's for a couple of years now. Periodically on the Noble site someone mentions that although the K10 sounds "good" out of an iPhone, it "scales well" with "better" equipment.
Having been quite happy listening to the iPhone and having been told several times by people who seem very knowledgeable in Sound Science that the iPhone measures very well, I have been quite vocal in refuting the idea that anything better is needed.
I find it's always best to do your own checks rather than taking what people say at face value
Then I realised that I have never used anything other than Apple sources and on a bit of an impulse I bought a secondhand Onkyo Amp/DAC.
It has made me reassess things.
The Onkyo has more depth and detail in the bass and the treble sounds smoother and more "relaxed"(?).
By comparison the iPhone on its own sounds a bit thin. The top end sounds a bit "scratchy"(?)
Right so this is what you mean by more musical - I didn't really think you meant a flattened sound stage but rather a richer, more dynamic & more musical sound
I never worry too much about soundstage. It seems to be asking too much for headphones to recreate the experience of listening to speakers, let alone performers, in a room.
Having said that I think the Onkyo also widens things out. I often hear stuff well outside my head. Which is a neat trick from CIEM's that are fitted directly into the ear canal. IMO.
All these improvements from the Onkyo could well be placebo. I have been caught by that many times!
I have been listening back and forth for a few weeks now and so far it seems worth carting the Onkyo around.
So I don't know. I was very comfortable with the iPhone until I heard the Onkyo.
I was perfectly prepared to believe that all well designed amps and DAC's should sound the same.
I heard the Mojo a couple of times at meets and didn't discern any improvement over the iPhone HO.
I suspect that the Onkyo, being a consumer product rather than a product from an audiophile "boutique" company has some internal DSP going on which results in a pleasing sound.
I'd better go. Tokyo and the missus are calling!
You think there is some DSP going on - because you are perceiving a richer sound?
 
Mar 20, 2016 at 4:10 PM Post #67 of 106
If you can show examples of me being argumentative

Absolutely:

"It's trivial to show that playing a song backwards will sound very different to playing it forward even though all amplitudes & frequencies are exactly the same?" - which was your response to my pointing out that the term "amplitude & frequency" contains a time component, i.e. FREQUENCY.

If that's not being argumentative & disruptive, I really don't know what would be, other than very direct name calling and slander.

Please point out the argumentative & disruptive aspects in the sequence of posts that resulted in this particular line of the discussion

- Gregorio stated "Science dictates that sound waves only have two intrinsic properties, amplitude and frequency." & "It's only magic because science isn't yet able to measure properties beyond just amplitude and frequency" & "As amplitude and frequency are the only properties of sound waves we know about, they are the only properties of sound waves which we are able to measure and therefore record."

- I replied to this stating "Time is a hugely important factor in soundwaves, not just amplitude & frequency. It's a stream of point to point moments of amplitude & frequency that we are processing in our auditory perception & we are constantly analysing this stream into auditory objects & deciding what freq/amplitude belongs to what auditory object & hence building an auditory scene - much the same way as we build a visual scene of the signals that come through our eyes"

- You replied ""Time is a hugely important factor in soundwaves, not just amplitude & frequency" Wow and here I was thinking that "frequency" had something to do with time. Silly me." (which many would consider argumentative but anyway)

I responded "It's trivial to show that playing a song backwards will sound very different to playing it forward even though all amplitudes & frequencies are exactly the same?"

I was pointing out a simple example of how it's not just amplitude & frequency which is of importance to auditory perception. Can you point out what is argumentative or disruptive in these exchanges?
 
Mar 20, 2016 at 4:45 PM Post #68 of 106
Please point out the argumentative & disruptive aspects in the sequence of posts that resulted in this particular line of the discussion

- Gregorio stated "Science dictates that sound waves only have two intrinsic properties, amplitude and frequency." & "It's only magic because science isn't yet able to measure properties beyond just amplitude and frequency" & "As amplitude and frequency are the only properties of sound waves we know about, they are the only properties of sound waves which we are able to measure and therefore record."

- I replied to this stating "Time is a hugely important factor in soundwaves, not just amplitude & frequency. It's a stream of point to point moments of amplitude & frequency that we are processing in our auditory perception & we are constantly analysing this stream into auditory objects & deciding what freq/amplitude belongs to what auditory object & hence building an auditory scene - much the same way as we build a visual scene of the signals that come through our eyes"

- You replied ""Time is a hugely important factor in soundwaves, not just amplitude & frequency" Wow and here I was thinking that "frequency" had something to do with time. Silly me." (which many would consider argumentative but anyway)

I responded "It's trivial to show that playing a song backwards will sound very different to playing it forward even though all amplitudes & frequencies are exactly the same?"

I was pointing out a simple example of how it's not just amplitude & frequency which is of importance to auditory perception. Can you point out what is argumentative or disruptive in these exchanges?


Let's just let the moderators decide. Okay?
 
Mar 20, 2016 at 6:02 PM Post #69 of 106
[Moderator comment]
I am getting utterly sick of being called in to have a look at the endless circular debates - so I will make this clear.
 
@ralphp@optonline - answer the points, avoid direct attacks, and if the other side in the debate refuses to actually answer the questions (avoidance), either politely point this out to them - or ignore them.
 
@mmerrill99 - the biggest problem I'm seeing in the threads I am asked to moderate is your constant avoidance of actually answering the questions being posed to you - and the constant changing of topic when you get asked to stand behind your claims.  In the Sound Science section, this is the one area where you can be asked to provide proof. If you do not want to - then I would suggest that just maybe you are in the wrong section of the forum.
 
Also - you asked above about obfuscation - I will quote:
 It's trivial to show that playing a song backwards will sound very different to playing it forward even though all amplitudes & frequencies are exactly the same?

 
This is not very smart, and IMO deliberate obfuscation and goading.  The amplitudes and frequencies may be the same, but the order in which those frequencies occur within the time domain essentially dictates how the sound is played/perceived.  You know that.  I know that.  Everyone here knows that.  Yet you chose an example which was not relevant, deliberately (IMO) calculated to elicit a response, and then you sit back to watch the fallout.  It had zero relevance to what was being discussed.
 
So all of you have choices at this stage.
  • Carry on the way you are going - and there will be evictions from this and other threads
  • Change your ways - and actually use this section for the reason it is here - to actually delve into the true science (known and unknown) about audio.  By doing this everyone might learn something
  • Disengage - and go find something else to do.
 
The choice is yours gentlemen.  
 
Mar 20, 2016 at 7:50 PM Post #71 of 106
  [Moderator comment]
I am getting utterly sick of being called in to have a look at the endless circular debates - so I will make this clear.
 
@ralphp@optonline - answer the points, avoid direct attacks, and if the other side in the debate refuses to actually answer the questions (avoidance), either politely point this out to them - or ignore them.
 
@mmerrill99 - the biggest problem I'm seeing in the threads I am asked to moderate is your constant avoidance of actually answering the questions being posed to you - and the constant changing of topic when you get asked to stand behind your claims.  In the Sound Science section, this is the one area where you can be asked to provide proof. If you do not want to - then I would suggest that just maybe you are in the wrong section of the forum.
 
Also - you asked above about obfuscation - I will quote:
 
This is not very smart, and IMO deliberate obfuscation and goading.  The amplitudes and frequencies may be the same, but the order in which those frequencies occur within the time domain essentially dictates how the sound is played/perceived.  You know that.  I know that.  Everyone here knows that.  Yet you chose an example which was not relevant, deliberately (IMO) calculated to elicit a response, and then you sit back to watch the fallout.  It had zero relevance to what was being discussed.
 
So all of you have choices at this stage.
  • Carry on the way you are going - and there will be evictions from this and other threads
  • Change your ways - and actually use this section for the reason it is here - to actually delve into the true science (known and unknown) about audio.  By doing this everyone might learn something
  • Disengage - and go find something else to do.
 
The choice is yours gentlemen.  


Let me say first that I apologize to everyone involved that it had to come to this point. I understand what is being asked of me and I will do my best to comply going forward.
 
Thank you Brooko for clearing the air.
 
Mar 20, 2016 at 10:16 PM Post #72 of 106
 
1/ outside of binaural and maybe a few special cases using particular techniques, the very very vast majority of instruments are recorded in mono. even when several microphones are used they are usually used for purposes unrelated to making a stereo recording. not that we couldn't do so, just that most sound engineers decide not to focus on that.

Really? I believe you are overstating things way too much here. I'm pretty sure that there are a vast number of recordings which use room miking. Maybe someone with professional recording experience can chime in here?

Correct me if I'm wrong but what you seem to be maintaining here is that room ambience & soundstage are solely manufactured in the recording studio & not part of the original recording?
2/ binaural is the best chance we have at actual positioning with headphones. there can still be problems with the headphone's signature, distortions, and individual HRTF of the listeners, but yes that corrects the biggest problem of headphones when it comes to reproducing a real band in a real space.

So your original point - that "isn't it mostly our brain trying to create a soundstage out of conflicting cues that shouldn't make a soundstage?" - seems incorrect to me. Surely some of the techniques used in binaural are equally applicable to standard recordings, thus producing a more realistic soundstage? It's therefore not just the brain being confused by conflicting cues?

If what you meant was that in using headphones, the soundstage stays fixed when we move our head & this is unnatural & somewhat confusing, then I would agree but it's not the reason for hearing a soundstage in the first place. I would also agree that headphone soundstage can be less natural than speaker soundstage but again I don't believe that it's the confusion of the brain that is creating the soundstage in the first place.
3/ yes. as we're talking about perceived soundstage. and it will be an individual interpretation, we need to measure the human and see how much variation can occur from one dude to the other. and that's not sound anymore. we're on the subjective side of things where everything is complicated because people are different and senses aren't used independently by the brain, and past experiences impact everything. IDK how much variation we can get from person to person. stuff on the left will still feel on the left, reverb like we're in the toilet will still feel like a small room, and other stuff like that, but as far as I know not everybody is as good at making 3D visualizations, maybe that has an influence too for all I know. the human element just adds too many variables that have nothing to do with sound.

Well, here's an interactive demonstration of soundstage construction & the tradeoff between ITD & ILD. Maybe you can test yourself to see if you hear what you are supposed to hear? I doubt any subjective imagination will add complexity to an individual's perception of left/right up/down?
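Without vouching for the linked demo, the ITD/ILD trade-off itself is simple to sketch: a similar lateral impression can be pushed with either a small inter-channel delay or an inter-channel level difference. A rough illustration (ballpark values, no HRTF, not a calibrated model):

```python
import numpy as np

FS = 48000

def place_right(mono, itd_us=0.0, ild_db=0.0):
    """Push a mono source to the right by delaying and/or attenuating the left channel."""
    delay = int(round(itd_us * 1e-6 * FS))
    left = np.concatenate([np.zeros(delay), mono]) * 10 ** (-ild_db / 20)
    right = np.concatenate([mono, np.zeros(delay)])
    return np.stack([left, right], axis=1)

burst = np.sin(2 * np.pi * 500 * np.arange(int(0.05 * FS)) / FS)  # 50 ms, 500 Hz burst
itd_only = place_right(burst, itd_us=400)  # ~400 µs interaural-style delay, equal levels
ild_only = place_right(burst, ild_db=6)    # 6 dB level difference, no delay
print(itd_only.shape, ild_only.shape)      # both versions should image off to the right
```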
4/both. of course it depends on the sound stimuli, but it also depends on what the person will actually hear, plus like any subjective notion, the final result can be altered with external factors. imagining myself in front of the guys, instead of just waiting to feel like they're here in my room, closing my eyes, or looking at the DVD of the live. all those IMO have the potential to alter my perception of the soundstage, in conjunction with the actual sound. so do I measure that? no, this is one more stuff we leave to subjectivity and it goes beyond the sound itself.

OK, I'm finding it hard to keep up with what I perceive as your changing viewpoint on this. First you said "usually people decide that sound is very complex and full of unknowns, because they project human constructs like pleasure or even more ludicrous like quantifying a perceived soundstage, and hope for a machine to tell them all of it in a simple graph... except that has very little to do with the audio signal and a lot to do with the person listening." Then "it's not a sonic characteristic, but our interpretation of sonic characteristics. and as soon as interpretation is involved, the result may be subjective and hard to quantify." And now the above " it depends on the sound stimuli, but it also depends on what the person will actually hear,"

So, again, correct me if I'm wrong but what you seem to be saying in this & your points 1-3 above is that soundstage has largely nothing to do with any signals on the recording - it's mostly a construct of the listener? Again, let me be clear here & get this straight - all our auditory perceptions are a construct of the brain - what I believe you mean here is that perceived soundstage is very loosely based on weak characteristics within the signal stream & mostly enhanced by our imagination to arrive at a perceived soundstage which bears little resemblance to what the signal stream would justify? Am I right in this restatement of your position?
5/ I don't get what you're asking for? I'm talking about audible variations that can be measured. if a guy listens to 2 amps or whatever and gets a change in soundstage(even in a blind test ^_^), then I'm confident that we can measure a difference between both outputs.

Well, I mostly see stated that soundstage differences are not a measurable entity & yours was one of the few statements that bucked this trend so I was interested in some examples of these measurements, if you have any. I have not seen any such examples ever produced by anybody up to now so it piqued my interest that you have such confidence in there being a measurement that will show this.

1/ look that up, capturing the band position in space is rarely on top of the priority list. in fact I've seen many guys saying how they dismissed it because it was too limiting in terms of what they could then do with the mics or post prod.
 
2//3/  I believe you're wrong and wrong.  I would follow your points if we had almost perfect reproduction of spatial cues, but we don't. even taking binaural as a case, do you believe your head is the same size as the dummy head used to record? that the mic ended up with the same HRTF as you do? and that the headphone is reproducing it all good enough and flat enough? nope nope nope. some may be big differences, others might be small, but even if we were all calibrated perfectly for our bodies, and not recalibrating as time passes to adapt, the sound we would get with a headphone would still fail with standard use of headphones. for moving the head, for frequency response, for ITD, it might be all the values moved up or down by a fixed delay on a smaller/bigger head, but I'm sure you agree that it will change the angle of instruments a little, maybe make something be higher than it is because of a particular change in the frequency response or whatever.  and that's me making an effort to remove all the human brain side of things where other variables, biases, preconceptions, will tend to add more diversity. or on the contrary oppose some cues to still sound like you think it should when it really doesn't.  I'm trying to talk about it at an objective level and say that already it's a bust to expect measurements for soundstage on headphones.
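as a rough illustration of the head-size point (a Woodworth-style spherical-head ITD estimate with made-up radii, nothing like a real HRTF): a cue recorded on a bigger dummy head maps to a different angle on a smaller head.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def woodworth_itd_us(azimuth_deg, head_radius_m):
    """Rough spherical-head (Woodworth) ITD estimate, in microseconds."""
    theta = np.radians(azimuth_deg)
    return head_radius_m / C * (theta + np.sin(theta)) * 1e6

dummy_radius, listener_radius = 0.0875, 0.075           # hypothetical head radii (m)
itd_on_recording = woodworth_itd_us(30, dummy_radius)   # cue baked in for a source at 30 deg

# which azimuth gives that same ITD on the smaller head?
angles = np.linspace(0, 90, 9001)
perceived = angles[np.argmin(np.abs(woodworth_itd_us(angles, listener_radius) - itd_on_recording))]
print(round(itd_on_recording, 1), "us ->", round(perceived, 1), "deg instead of 30 deg")
```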
and then enters the human individual as the subject, with all his senses linked and cross-referenced by the brain, all the preconceptions, and in the end senses that do not give reality, but an interpretation of it. our senses are great tools for animals, as in "help me not die". they're not feeding us identical reproductions of reality though. limited precision, limited range of frequencies, stereo system, a life of self calibration from mixing what I hear with what I see to make ideas of what is and find patterns to ease on the CPU load ^_^. just like an object doesn't have a color, the sun blasts a bunch of frequencies on it, some are reflected and we decide that it is the object and its color... only interpretations of the actual reality. that's what we live with. and what's worse, we can change those. wear colored glasses for a few hours, check the temperature of a room when getting out of bed vs when coming back in from running in the cold outside, 2 different perceptions of the same objective temperature. only the objective data has actual meaning to try and describe reality as it is.
for all those reasons, soundstages formed in our heads are subjective stuff. not objective ones, at least until we can measure all of the human body, electrical signals, and thoughts. then if such a day was to come, we would be able to make a clear objective prediction of the soundstage from a given song on a given headphone. but I don't imagine I'll live to see it. so I say, it's foolish to expect measurements to describe a soundstage, and even more foolish to pretend that there is more to sound than we can measure when it's the human that we fail to measure, not the sound. and that was my initial point in my first post.
 
4/ I could say that I don't know, and that would make my point. as soon as we're in the brain dealing with thoughts, what is real and what isn't? how much of the data is kept and how much is just a pattern game? what is the impact of me knowing something or thinking I know? how much changed since I got used to listening to a given headphone? if you feel like you got it from your senses you decide it's real? well not really, if you see a guy levitating, no sense is telling you it's fake, yet you believe it is. so our senses don't even have to dominate our idea of what is happening. subjectivity is a mix of senses and ideas and memories and whatever. who can tell what part of the perceived soundstage comes from me looking at the band playing on the TV while playing the DVD? if I never had the DVD would my brain have placed them all the same way at the same distance same angle etc? it's amazing what we can feel, but also a huge mess with very inaccurate limits between senses and thoughts. we can and do make use of perceived data from our senses, but how much at a given time? IDK. what's for sure is that our brain isn't in the fidelity business.
 
 
5/ ok so I talk about our ability to objectively measure a difference between 2 audibly different sounds, and you ask for a soundstage measurement... that's how I understood it the first time but as it made no sense, I thought maybe there was something else.
 
 
 
about the actual topic http://www.bbc.com/news/science-environment-23717228   should judges go blind for a fair piano competition?
 
Mar 21, 2016 at 4:59 AM Post #73 of 106
If you are just repeating your previous point that the configuration of miking in a room is a manufacturing process, then we are talking at cross purposes. If these mikes faithfully capture the room's volume & ambience without any further manipulation in the studio, I don't consider this "manufactured" ...

 
1. Mics are not "faithful". Compared to most other items/points in the recording/playback chain, mics are highly inaccurate and coloured, often deliberately so.
 
 
2. Mics do not capture volume & ambience of a room, neither do they capture music! The ONLY thing mics do is respond to the amplitude and frequency of sound pressure waves travelling through air which hit the mic's capsule and transduce those physical movements into analogue voltages.
 
3. What determines if those analogue voltages can eventually be easily interpreted as say the volume & ambience of a room is the recording engineer's choice, positioning and settings of the mic/s. This choice is driven by the engineer's judgements of their own perception, together with their knowledge of acoustics and different mics' response characteristics. Furthermore, once the output from this mic has been recorded, it needs, at the very least, to be balanced and positioned in the mix relative to the outputs from the other mics, which again, is a subjective decision based on the engineer's own perception. Without these subjective decisions and choices based on human perception, it would be largely impossible for listeners to perceive any soundstage and indeed, there would be a fair probability that they would barely even be able to perceive music!
 
4. If all this does not constitute "manufactured", I don't know what would. It appears that you are trying to redefine "manufactured" to support your flawed understanding, rather than actually addressing your flawed understanding. Although you haven't specifically used the word "magic", what you are doing is entirely indicative of the type of "magic" thinking I am talking about. With no recourse to logic, you are forced to turn to something else; irrationality, magic, call it what you will. In this case, a redefinition of the word "manufactured".
 
But loudness is not in one's imagination - it is based on characteristics within the signal.

 
Again, repeating the same flawed understanding does not eventually make it true! To arrive at a measurement of loudness we have to dramatically change the characteristics of that signal, i.e. change it into a substantially different signal. The equation is effectively: Characteristics of the signal + human perception = Loudness. Without the "human perception" part of the equation, the "characteristics of the signal" do NOT equal loudness ... loudness is not a property of the signal.
 
Wow, you've talked yourself down a rabbit hole based on some strange logic that I can't really fathom - after you introduce "magic" it all becomes surreal logic to me.

 
1. Yes, the rabbit hole I've talked myself into is the rabbit hole of facts and science.
 
2. Yes, I can appreciate that a logic based on facts and science would appear "strange" and "unfathomable" to someone who doesn't know, understand or refuses to accept those facts/science.
 
3. A logic based on something other than the facts, is obviously a logic based on something else. For want of a better term and because I've seen it used by those who have based their logic on this "something else", I've called it "magic". Based on this nomenclature, it's you who have introduced "magic" and again, for those who base their logic on "magic" I'm sure any other basis for logic would appear "surreal". In other words, you have completely reversed the situation because you have arrived at a belief which is in effect that "magic" is real and reality is "surreal".
 
Time is a hugely important factor in soundwaves, not just amplitude & frequency. It's a stream of point to point moments of amplitude & frequency that we are processing in our auditory perception

 
As has been pointed out already, frequency is, by definition, time. Also, soundwaves are not a stream of point to point moments of amplitude & frequency, soundwaves are continuously varying.
 
I'm suggesting the same understanding be applied to auditory perception - understanding the interpretation processes of this sense. I don't see what the problem here is?

 
The problem here is, that your suggestion is the opposite of your actions/statements!
 
... we are in the middle of trying to understand the workings of auditory processing - we don't understand it all & in fact there are many issues to be resolved. You call them "magic" - I call them "mysteries" i.e issues yet to be resolved ...

 
No, I do NOT call them "magic", I call them human perceptions. You appear to be calling only some auditory perceptions "mysteries", others you are calling reality. What I am calling "magic" is the logic you are using to arrive at calling these other perceptions reality.
 
.. these "mysteries" are arising from our lack of understanding of how this processing of the signals works - but these signals arise from the vibration of air molecules on the tympanic membrane, not from "imagination" ...

 
No they do not! These auditory perceptions arise from the brain and usually have relatively little to do with the vibration of the air molecules on the ear drum, sometimes absolutely nothing to do with those vibrations. Tinnitus is a good example of this, as is the previously posted McGurk Effect, where what we hear is unrelated to the actual soundwaves, and there are numerous other examples of us perceiving differences where there are none or of perceiving identifiable sounds where there is none.
 
G
 
Mar 21, 2016 at 7:25 AM Post #74 of 106
Castle, we mostly agree but our interpretations/inferences are perhaps different
I was thinking that classical recordings try to capture the collective sound as well as individual sections of the orchestra - is this incorrect?
I'm not suggesting for one minute that our perceptions are accurate but in the example I gave & in many experiments of lateral positioning, people tend to be fairly close to one another in location identification. That's my point - we react to the soundstage cues contained in the signal in much the same way. Now, "are these cues an accurate transcription of the soundstage we would hear if listening to the live event" is a different question. I agree that HRTF is a significant issue with regard to binaural sound & the only way to address this accurate transcription is to use in-ear microphones to record the event, as David Griesinger has done & reported that the playback was uncannily realistic.

So we don't differ here it seems - I thought you were denying that soundstage had anything to do with cues in the signals & that was my point.

The final point that we seem to differ on is that you seem to maintain that once the signal is in the brain it's impossible to know what's a result of input signal & what's a fabrication of the mind. I don't share this throwing in of the towel - I believe that auditory perception & processing will be mostly teased out just as visual perception is mostly understood.
 
Mar 21, 2016 at 8:11 AM Post #75 of 106
Gregorio,
We seem to be closer in understanding than either of us are willing to admit :)
Can I restate a central point again "I'm suggesting the same understanding be applied to auditory perception - understanding the interpretation processes of this sense."
If I have led anybody to believe otherwise, then I have failed in communicating my intent clearly.
The problem with discussing something like auditory perception, the workings of which are not fully known, is that we are speculating about the unknown aspects of this. If this speculation sounds like an invocation of "magic" on my side then I apologise, as it is far from my intent.

Just to your specific points:
I think you are mixing up the "art" of the recording engineer with the actual recording. If I can go back to the example I gave castle, David Griesinger has done recordings using in-ear microphones & without any manipulation of these recordings has proclaimed that they are scarily realistic when played back with IEMs. So, no, I don't accept that "manufacturing" is needed to create a set of cues in the signal that will be interpreted by our auditory perception as something that is reminiscent of a soundstage that we would hear in a live event (my usual shortcut for saying this is "real" soundstage so can we use this "real" to mean just that?)

I can't fathom this - you quote me: "But loudness is not in one's imagination - it is based on characteristics within the signal.", then proceed to express it in a formula: "The equation is effectively: Characteristics of the signal + human perception = Loudness.", and then you seem to want to make anything I say into a contradiction by grossly misquoting me as follows: "characteristics of the signal" do NOT equal loudness. Please look at what I said: "it is based on characteristics within the signal". Nowhere did I ever state that "characteristics of the signal" equal loudness.

Can you please stop the personal attacks & trying to paint me as someone who doesn't understand science or logic?

You stated that amplitude & frequency were the only two elements in sound - I stated that time was the third element. If you don't disagree with this can we just move on, as this bit about frequency incorporating time has nothing to do with what I said - it's a completely different point that you & Ralph seem to keep stating.

Finally "magic" Vs "mysteries" I have never used the word "magic" but I do find auditory perception both fascinating & mysterious. I do understand that there are lots of unanswered questions & aspects yetto be worked out about it's processing. Given this, I am of the opinion that our measurements are not sophisticated enough to map to our perceptions. Now this is maybe where I differ from the body of posters here & where the friction comes from - if every time I raise an issue someone say well show me the measurements to prove it, then this becomes fractious. Broko stated "actually use this section for the reason it is here - to actually delve into the true science (known and unknown) about audio. By doing this everyone might learn something" If in this section we can't discuss what's unknown about sound science (which to me includes auditory perception) then I obviously misinterpreted the section title.

Tinnitus & where it comes from is not fully worked out, so I'm not sure that stating it just arises in the brain is fully correct - although I could be wrong.
Stating with the McGurk effect that "what we hear is unrelated to the actual soundwaves" is incorrect - we only hear variations of Ba or Fa or Ga - we don't hear Hi or Fi - so the actual soundwave characteristic is instrumental in what we perceive. If the sound was Hi or Fi & the same video was played, mouthing Ba or Fa, what do you think we would hear?
 
