comparing live and recorded music
Jul 13, 2016 at 6:17 PM Post #77 of 135
originally posted by gregorio
 You have made observations and then created a logical theory (or set of theories) to explain those observations. To you, those observations are reality and therefore sacrosanct, you "will never" be convinced of anything which doesn't obviously conform to your observations or explanations of them. This puts you firmly in the camp of the hardcore audiophiles, who are forced into espousing more and more ludicrous theories (relative to the known science) in order to defend the sanctity of their observations.

 
This is not really true.
 
First of all note that I wrote that I can be convinced there is some unexpected reason for my observation.
 
There is one key observation I make---recorded music lacks clarity as compared to live music. (Note below, you don't quite understand what I mean by clarity.)
 
It is sound science that has made many observations and constructed elaborate theories. But to explain what phenomenon?
 
It seems that many sound science theories don't come remotely close to explaining the phenomenon I mention--or perhaps they are trying to explain some other phenomenon completely. If you would like to explain what theories explain this phenomenon, I'm interested.
 
 Even just sticking to music observation (without considering sound science), clarity is not so simple, it has a number of different levels, some of which require a deliberate lack of clarity! For example, typically in a symphony orchestra we do not want to hear 18 clearly defined individual 1st violinists, we typically want a lack of clarity which results in those 18 violinists being perceived as essentially a single musical entity (the first violin section).

 
 
I think you are interpreting clarity as "separation" or "the ability to hear individual instruments." I mean something else.
 
The musician intends to convey a pattern. First let's talk about one moment in time, then talk about patterns that are spread over time.
 
At one moment in time, there is a set of instruments playing. Usually the musicians (and composer) intend for some of those to be foreground, and some background. It may be that the presentation is supposed to have at most one foreground sound, or two. There may be a kind of hierarchical structure, in which sounds go progressively into the background. The structure may be about note attacks and timing of attacks as well as the phenomena of sustained sounds.
 
If you hear every single violin, the result is not clear. That will not make the hierarchical structure of foreground and background clear. It will sound muddy as a whole. I have some chamber music recordings in which the individual instruments do not blend at all.
 
I also mean clarity of patterns over time. For instance, the musicians may shape the dynamics of a phrase. And that shape relates to the shape of previous and successive phrases. The question is how clear this shape, and the relationship of shapes, is.
 
 Clarity is therefore referenced against what "one would want", which is effectively entirely subjective.

 
No, I think clarity should be investigated by psychology. I think it's an important and universal phenomenon. The visual arts, for example, talk about the composition of a painting. Or graphic design talks about the arrangement of elements into a hierarchical structure and how the eye moves around the page at the same time the brain comprehends the meaning of the words and images.
 
You could argue that science should not be investigating art because "art is subjective" but I disagree. I'm sure we agree that science should be investigating perception, but if you express an interest in perception, I don't see how you can fail to acknowledge that there's a very important and universal phenomenon called art.
 
 
 
 Let's look at reality though, if we're sitting in a concert hall, say 20m from a violinist we can hear incredibly subtle nuances in the friction of a horse's tail being dragged against a string. At the same time, we're completely unaware of the (relatively) massive sound of a powerful muscle thumping and blood being forced around the body just a few centimetres or millimetres from our ears. It doesn't take a PhD to realise there must be some autonomous (sub-conscious) process/es at work which results in a perception of reality which differs significantly from actual reality.

 
You say "perception is different from reality" but that's not the right way to look at the problem. There is the reality of how perception works. That's a reality. And it affects how we perceive music and recordings.
 
I never said that our brains work like spectrum analyzers. Obviously they don't---if they did, if they were linear devices that produced an internal signal in the way that scientists do measurements, then psychoacoustics wouldn't be necessary.
 
 
 We have two point sources of sound production (speakers) which are trying to represent the acoustic information which is arriving from all directions (in the live situation) and those two point sources are also creating significant reflections in your listening environment (say a living room), reflections which conflict with the desired reproduced reality. Even with perfect transducers (mics and speakers), the reproduced acoustic reality would be a concert hall inside a living room, which of course doesn't and can't exist and is a fundamental conflict.

 
My point is that when we put the sound through those two speakers or through headphones, we degrade the clarity. But also, depending on which we choose, we degrade it more or less.
 
 Stick a mic in that position and we'll pick up more of that reality, a recording which lacks clarity due to too many reflections (reverb) relative to the direct sound. A problem which is significantly less obvious if we're actually there in a live situation because if we concentrate on the violinist our brain will filter out some of that reverb and manufacture a greater clarity than exists in reality. The obvious riposte to this is: why, when listening to the recording with too much reverb, doesn't our brain do the same as in the live situation and filter some of it out? The answer is that we're listening to a recording, not in the live situation.

 
I think my central question is--why choose one mic position over another? I think the best reason for choosing one mic position over another is how it affects the musical patterns as heard in the control room, and that means the engineer needs to be hearing those patterns in the first place (and care about those patterns). Judging which of two mic positions gets closer to the original musical pattern is at least 50% a musical decision.
 
Jul 13, 2016 at 7:38 PM Post #79 of 135
In the end we must step into the control room and check what we perceive. Otherwise there is no way of checking if we have broken down this phenomenon correctly and investigated the interaction of the parts correctly.

I am not arguing here that current theories are failing. I am just making a case for the need to check the phenomenon as a whole.


Is anybody advocating NOT checking the final mix against the original performance (leaving aside the question of how this can be done reliably across the necessary time gap)? :confused:

Does your theorizing bring anything more to the table than "the performer should listen to the recorded mix and their opinion on its fidelity to the performance should be considered"? :confused:

But I will also say that what I've read in sound science papers is very, very far from testing the "whole." You say that the phenomenon has to be broken down and the interactions studied, but I haven't seen evidence that this has progressed very far. For instance, the paper that jcx linked investigates very limited signal types. Then it suggests the results say something about big phenomena, like the question of whether systems with bandlimited impulse responses produce meaningful distortions. The problem is how very far from making that conclusion we seem to be.

However, I still need to read the Brian Moore book, which I ordered.


If your question is simply whether frequencies not recorded by CD make a difference, that's testable easily enough with complex music. Take a piece of hi-res music, downsample it to a 44.1kHz sample rate, resample back up to the original sample rate and file format, pit the original and resampled files against each other using the foobar ABX plugin, and check your results for statistical significance.
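For anyone wanting to try this, here's a minimal sketch of the resampling round-trip, assuming a 96kHz WAV source; the file names are placeholders and the soundfile/scipy libraries are my choices, not anything specified in the post:

```python
# A minimal sketch of the round-trip described above. File names are
# placeholders; soundfile and scipy are third-party (pip install them).
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("hires_original.wav")                    # e.g. 96000 Hz
down = resample_poly(data, up=44100, down=rate, axis=0)       # down to 44.1kHz
restored = resample_poly(down, up=rate, down=44100, axis=0)   # back up
sf.write("roundtripped.wav", restored, rate)
# Load the original and "roundtripped.wav" into the foobar2000 ABX
# comparator and check the reported p-value for significance.
```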
 
Jul 13, 2016 at 7:45 PM Post #80 of 135
There is one key observation I make---recorded music lacks clarity as compared to live music. (Note below, you don't quite understand what I mean by clarity.)


Since when did this become your key observation? :D

What playback system are you listening to your music on when you say this--and what do you know about tuning your system to improve "clarity"?

The latter is something I investigate on a daily basis--and I believe my tuned system can rival any live venue in "clarity"--though admittedly I probably have much less live listening experience than you.
 
Jul 13, 2016 at 8:37 PM Post #81 of 135
Since when did this become your key observation? :D


What playback system are you listening to your music on when you say this--and what do you know about tuning your system to improve "clarity"?

The latter is something I investigate on a daily basis--and I believe my tuned system can rival any live venue in "clarity"--though admittedly I probably have much less live listening experience than you.

 
And further, why would one think that the musicians on-stage would be in the position to decide how much "clarity" there was out in the audience? The last word I would associate with my experience playing viola right in front of the trombones in Mahler 1 would be "clarity"…
 
Jul 13, 2016 at 9:49 PM Post #82 of 135
Clarity seems to be the problem, not the solution. It's an artifact and a type of audio distortion--I have submitted my opinion in another head-fi post:
 
http://www.head-fi.org/t/805727/clearly-a-problem
 
The issue I have with much of the high-end gear, headphones in particular, is excessive detail--an "on stage" audio perspective, instead of "at the venue" or concert-hall type of sound. It depends greatly on the recording too; some distantly miked recordings benefit from enhanced detail (wider dynamics, more focus on high frequencies), while closely miked ones sound a bit smoother (more natural) with a more laid-back amp and headphones.
 
Audiophiles were debating the exact same question 30 years ago. I'd say we are 90% there, and with great recordings and the best equipment, 95%, which is close enough for my critical, jaded ears! I've had plenty of headphone experiences where I've mistaken a sound in a recording for something live in my listening space.
 
Jul 14, 2016 at 12:03 AM Post #83 of 135
Is anybody advocating NOT checking the final mix against the original performance (leaving aside the question of how this can be done reliably across the necessary time gap)? :confused:


 

 
Well, we've had some people (including you I think, not sure) say that it's pointless to talk about "fidelity" when comparing a live venue with a recording because they are so different. I disagree with that.
 
I also am suggesting that fidelity is a musical observation. To give an extreme example to make a point, an engineer trained only in sound fields will not be in a position to evaluate fidelity.
 
 
 Does your theorizing bring anything more to the table than "the performer should listen to the recorded mix and their opinion on its fidelity to the performance should be considered"? :confused:

If your question is simply whether frequencies not recorded by CD make a difference, that's testable easily enough with complex music. Take a piece of hi-res music, downsample it to a 44.1kHz sample rate, resample back up to the original sample rate and file format, pit the original and resampled files against each other using the foobar ABX plugin, and check your results for statistical significance.

 
I'm not "theorizing" as much as (1) asking some questions about the theories in Sound Science, (2) suggesting a "paradigm" for investigating audio knowledge in the first place.
 
If you think it's trivial to suggest that a performer listen to the recorded mix, note how much disagreement that got on this thread. See RRod's reply below.
 
That's a very interesting test which I will definitely try.
 
But talking about the "big picture" paradigm, note that it doesn't investigate the effect of your DAC or the original recording. If there is a problem in those, it will be present in both A and B. The closest thing to an audio signal that is not reprocessed is a live microphone feed, so I think it would be more appropriate to compare a live feed with an ADC/DAC chain inserted.
 
 
   
And further, why would one think that the musicians on-stage would be in the position to decide how much "clarity" there was out in the audience? The last word I would associate with my experience playing viola right in front of the trombones in Mahler 1 would be "clarity"…

 
I never said it was the musicians on-stage who should be evaluating the clarity. I'm suggesting that fidelity is a musical judgment that should be a comparison between the live venue and the recording. The person playing viola might happen to be more trained in music than the recording engineer. He may play viola and nothing else, but musicians often have generally trained ears, useful for judging other instruments or the orchestra as a whole. So the viola player might have something useful to say. But ultimately it's about someone who can make a proper comparison.
 
Jul 14, 2016 at 12:19 AM Post #84 of 135
  I never said it was the musicians on-stage who should be evaluating the clarity. I'm suggesting that fidelity is a musical judgment that should be a comparison between the live venue and the recording. The person playing viola might happen to be more trained in music than the recording engineer. He may play viola and nothing else, but musicians often have generally trained ears, useful for judging other instruments or the orchestra as a whole. So the viola player might have something useful to say. But ultimately it's about someone who can make a proper comparison.

 
Are recording teams often in the habit of not being at all familiar with the sound of the venue they are recording? At least in classical I can think of plenty of conductor/orchestra/engineer "teams" that produced consistently good sound, and today it seems we're going even a step further, with the big orchestras starting up their own labels. Perhaps you don't like the sound of SFS+MTT, but there are plenty of other examples to be had. This of course brings up another point, in that classical is one genre where they pay a guy to stand in front of all the people actually playing and make judgements about sound, and I can't imagine he completely ignores the work in the studio (not that the conductor's musical point-of-view is exactly like the audience's, of course).
 
Jul 14, 2016 at 12:24 AM Post #85 of 135
But talking about the "big picture" paradigm, note that it doesn't investigate the effect of your DAC or the original recording. If there is a problem in those, it will be present in both A and B. The closest thing to an audio signal that is not reprocessed is a live microphone feed, so I think it would be more appropriate to compare a live feed with an ADC/DAC chain inserted.


To listen to a live microphone feed, you must use loudspeakers / headphones. I suppose you can do without an ADC and DAC (by amplifying the mic signal directly) but the mic and loudspeakers will always be introducing orders of magnitude more coloration to the result than the ADC and DAC anyway.

To make the amplified microphone feed sound as close to the original sound as possible, the sound engineer who "only knows about sound fields" (as you put it) knows to calibrate the microphone and speakers, account for the differences in acoustics between the recording venue and the playback venue, account for whether the mic is close or far from the performer, account for the last two items together and apply any additional reverb as necessary. If instead it is a live sound reinforcement application, there would be multiple speakers set throughout the venue, and the time, phase and frequency interactions between these speakers become so complicated that it is beyond my knowledge to even accurately describe what the general factors to consider are, let alone how they may each be specifically technically addressed so that as much of the audience as possible gets a reasonable acoustic result.
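To make two of those considerations concrete, here is a toy sketch (my illustration, not from the post itself) of just the arrival-time and level differences between speakers at different distances from a single listening position; the distances are invented, and free-field 6dB loss per doubling of distance is assumed:

```python
# Toy sketch: per-speaker delay and level trim so every arrival matches
# the farthest speaker at one listener position. Real venues are far
# messier (reflections, coverage patterns, many listener positions).
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def align(distances_m):
    """Return per-speaker (delay_ms, trim_db) relative to the farthest speaker."""
    far = max(distances_m)
    return [((far - d) / SPEED_OF_SOUND * 1000.0,  # delay nearer speakers
             20.0 * math.log10(d / far))           # and trim them down
            for d in distances_m]

for delay_ms, trim_db in align([4.0, 11.0, 22.0]):
    print(f"delay {delay_ms:5.1f} ms, trim {trim_db:6.1f} dB")
```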

What would you be able to bring to the table from your perspective that "the sound engineer needs to know about the music performance"?

How much more would the engineer need to know above the level that he would most probably have incidentally learned already in the course of his studies or out of casual interest?

How would such knowledge contribute to improving the answers to considerations such as those listed above?
 
Jul 14, 2016 at 12:58 AM Post #86 of 135
   
I think my central question is--why choose one mic position over another? I think the best reason for choosing one mic position over another is how it affects the musical patterns as heard in the control room, and that means the engineer needs to be hearing those patterns in the first place (and care about those patterns). Judging which of two mic positions gets closer to the original musical pattern is at least 50% a musical decision.

There are many reasons to pick one mic position over another; much of it has to do with capturing the timbre of an instrument, and microphone selection also affects this. In a multi-mic recording, typical of most current recordings, phase also plays a large role. With digital mixers you have greater control over the phase between the microphones. With analog you more often had to compromise between the best sound of a single instrument and its phase relative to the rest of the instruments.

For example, recording a trap kit, I will have the stereo room mics maybe 15-20 feet from the drums, then stereo overheads, then the close mics, sometimes on both the top and bottom drum heads. In the analog days you would move the mics around for your best phase response. In digital you can delay the other mics into time with the room mics. For a punchy pop song I might compress the dynamics of the close mics and leave the natural dynamics of the room microphones.

The room microphones and overheads are where I spend the most time on placement and selection. Those capture the sound of the instrument in the room, and you build from there. In a pop recording the engineer and producer are as much creating the musical textures as the musicians. In classical music you are attempting to capture the sound of the performance from the best seat in the house. Jazz can be either one, and anything in between.
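A minimal sketch of that "delay the other mics into time with the room mics" step, assuming mono WAV stems at a common sample rate; the file names, the 5-second analysis window and the use of cross-correlation are my illustrative choices, not a description of the poster's actual workflow:

```python
# Find how far a close mic leads the room mics by cross-correlation,
# then pad the close mic so it lines up. Mono stems are assumed.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

room, rate = sf.read("room_left.wav")
close, _ = sf.read("snare_top.wav")

n = int(5 * rate)  # correlate a few seconds containing some strong hits
xc = correlate(room[:n], close[:n], mode="full", method="fft")
lag = int(np.argmax(xc)) - (n - 1)  # >0: close mic leads the room mics

if lag > 0:
    aligned = np.concatenate([np.zeros(lag), close])  # delay the close mic
    sf.write("snare_top_aligned.wav", aligned, rate)
```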
 
Today recording has little to no audible effect on the reproduction of the performance. In touring live sound it is common to record every show and soundcheck. You can switch between the band playing live and the recording chain, and it sounds the same.
 
Almost all the distortions are from your transducers. Transducer distortions are orders of magnitude greater than those from any other part of the chain. We are getting closer every day. I have had a few times where I have looked for a person or instrument I know is not there. It is actually kind of stressful.
 
Jul 14, 2016 at 3:52 AM Post #87 of 135
Although I would argue that the practical layout of the transducers and listening environment is more limiting than their actual performance. Ideally we want a loudspeaker layout that corresponds to the mic setup somehow in both position and pickup / radiation patterns (of the mic and speakers respectively) for each recording. And the mic position, performance venue acoustics and playback venue acoustics should complement each other in a way such that the three add up to reproduce the amount and nature of the reverberance of the original performance upon playback, no more, no less. Satisfy all these conditions, and I believe we can produce a music reproduction that would leave a dozen johncarms slack-jawed in awe :D even with middle-of-the-road transducer technology (and suitable corrective measures). The obstacles seem to be no less enormous for being practical rather than technical in nature.
 
Jul 14, 2016 at 7:47 AM Post #88 of 135
For everybody interested in what influence the choice of microphone and its positioning during the recording has, here's a link to a post by sorrodje in the HD800 thread:
 
http://www.head-fi.org/t/650510/the-new-hd800-impressions-thread/22365#post_12511596
 
Even listening online in mp3 quality :eek: it gets pretty obvious that the decisions made by the recording engineer on mics and mic placement are an order of magnitude greater in importance than any type of DAC choice, chip choice, PCM vs DXD choice, cable choice etc.
 
Jul 14, 2016 at 9:24 AM Post #89 of 135
For everybody interested in what influence the choice of microphone and its positioning during the recording has, here's a link to a post by sorrodje in the HD800 thread:

http://www.head-fi.org/t/650510/the-new-hd800-impressions-thread/22365#post_12511596

Even listening online in mp3 quality :eek: it gets pretty obvious that the decisions made by the recording engineer on mics and mic placement are an order of magnitude greater in importance than any type of DAC choice, chip choice, PCM vs DXD choice, cable choice etc.

Actually I disagree with you. I think it is more than one order of magnitude more important. :joy:
 
Jul 14, 2016 at 1:24 PM Post #90 of 135
  [1] I think you are interpreting clarity as "separation" or "the ability to hear individual instruments." I mean something else.
 
[2] At one moment in time, there is a set of instruments playing. Usually the musicians (and composer) intend for some of those to be foreground, and some background. It may be that the presentation is supposed to have at most one foreground sound, or two. There may be a kind of hierarchical structure, in which sounds go progressively into the background. The structure may be about note attacks and timing of attacks as well as the phenomena of sustained sounds.
 
[3] If you hear every single violin, the result is not clear. That will not make the hierarchical structure of foreground and background clear. It will sound muddy as a whole. I have some chamber music recordings in which the individual instruments do not blend at all.
 
[4] I also mean clarity of patterns over time. For instance, the musicians may shape the dynamics of a phrase. And that shape relates to the shape of previous and successive phrases. The question is how clear this shape, and the relationship of shapes, is.

 
1. I think you must be explaining poorly what you mean by clarity, or have misunderstood what I stated, if you really "mean something else". Because ...
 
2. How is separating (groups, sub-groups or individual) musicians into "foreground" and "background" not "separation"? Would you generally not want/expect greater clarity of those musical entities which are supposed to be in the foreground, or, put differently, a level of clarity which allows for a differentiation/separation between foreground and background?
 
3. Notice I've used the invented term "musical entities": a musical entity might be a single musician or might be a group/sub-group of musicians, depending on the composition (orchestration) at any particular point. Regardless of who actually comprises a musical entity at any one point in time, in the case of one or more musical entities in the foreground, we would want/expect greater clarity of that foreground entity (relative to other entities in the background) but not of the individual musicians who comprise that entity, unless of course the entity is a single musician. Your statement appears to entirely agree with what I'm saying! So I'm not sure why you appear to be arguing?
 
4. Again, this is effectively the same thing I'm saying!
 
BTW, it's dangerous/potentially misleading to use the terms "foreground" and "background". I know that you are talking from a musical perspective and essentially mean "to the fore", but we need to be careful: "foreground" and "background" also have specific meanings in terms of physical geography, which we can also represent in audio recording/reproduction. We do this almost constantly in TV/film audio, commonly in popular music genres but less so in orchestral music, except maybe in the case of those few pieces which employ, say, off-stage brass, in which case we may be looking at the off-stage brass as being background geographically but foreground musically.
 
  [1] You could argue that science should not be investigating art because "art is subjective" but I disagree. I'm sure we agree that science should be investigating perception, but if you express an interest in perception, [2] I don't see how you can fail to acknowledge that there's a very important and universal phenomenon called art.

 
1. I'm not arguing that science shouldn't investigate art. However, science hasn't got particularly far in this regard; it's still trying to work out many of the basic individual aspects of perception, let alone how all the aspects of perception combine/integrate into an appreciation/evaluation of art.
 
2. This statement appears deliberately disingenuous just to prove a point. The term "art" exists universally (AFAIK) in all languages but there is no universal phenomenon called art. Certainly there are some individual works which would most likely be universally accepted as art, the Mona Lisa or Beethoven's 5th for example, but there is no universal definition of what art is. While it's an interesting philosophical question, to investigate it scientifically we have to know what "it" is, and not only is there no universal agreement on what it is, we're not even close to one; opinions can vary diametrically and occasionally even to the point of violence!
 
  You say "perception is different from reality" but that's not the right way to look at the problem. There is the reality of how perception works. That's a reality. And it affects how we perceive music and recordings.

 

I'm not sure I understand what you're saying, what is the right way to look at the problem? If I see a pig (A) and imagine that pig flying (B), I can accept that my imagination itself exists in reality but not that what I'm imagining does, I don't accept that flying pigs actually exist. If I want to understand something about the process of imagination, isn't comparing the difference between A and B a good place to start and then coming up with theories/experiments to explain the difference? What's the alternative, coming up with theories/experiments to explain how pigs fly?
 
[2] I think my central question is--why choose one mic position over another? [1] I think the best reason for choosing one mic position over another is how it affects the musical patterns as heard in the control room, and that means the engineer needs to be hearing those patterns in the first place (and care about those patterns). [3] Judging which of two mic positions gets closer to the original musical pattern is at least 50% a musical decision.

 
1. You would think that, you are a musician! A musician is defined by their musicality and trains their hearing/perception to be sensitive to the evaluation of musicality. A recording engineer's job is to record an acoustic signal as well as possible, and musicality is largely irrelevant. If I perfectly record a terrible musical performance, I've done my job fabulously well! As a recording engineer I'm primarily listening for: mic frequency and amplitude response, phase artefacts between mics/input signals, signal to noise ratios of the inputs, various other potential interference/unwanted artefacts, the relative amplitude and frequency response (balance) of direct vs indirect acoustic signals being picked up by the individual mics and combinations of mics and, by extension of these factors, stereo imaging. These aspects of recorded sound are in my control and are my responsibility as a recording engineer; musicality isn't! Obviously, a great recording of a terrible performance or a terrible recording of a great performance are both undesirable results, and that is why the role of Producer exists. This is one of the missing "holes" in your assumption which I mentioned previously. The producer is listening for musicality and how that musicality translates (through the speakers) and in practice is a sort of bridge/arbiter between the musician and the engineer. Ignoring some of the practicalities/technicalities of the role, a musician could potentially take on the role of producer relatively quickly, not so much the role of engineer though!
 
2. To best satisfy those things I'm listening for, mentioned above. It was discussed previously whether a musician has any idea of what the audience hears during a performance. I would say they do, although only a very generalised, vague idea, or at least very generalised and vague compared to the recording engineer. A musician will tend to think in terms of "the audience" and of how an acoustic affects the audience perception, because the musician only has some degree of control at that level. The recording engineer has a great deal more control and therefore thinks in terms of individual audience members or sections of the audience and the different acoustics in different parts of the venue. The goal of the engineer (and producer) would commonly be some sort of mean or biased mean of these different acoustical positions. That commonly results in an illusion of an acoustic space rather than a reality, in the same way that no family actually has 2.4 children, even though that may be the average family size.
 
3. Here we enter murky ground. I've painted a very black and white picture above of the roles of engineer, producer and musician in order to more succinctly explain them, but in reality it's usually rather more grey. In practice, the engineer will design a mic'ing scheme, depending on the acoustics of the venue, to fulfil both the fundamental engineering requirements and the desire/s of the producer. Mic choices/positions therefore do in practice include musical perception considerations, and a good, experienced recording engineer will have picked up a considerable amount of that over the years. Typically IME, particularly when working with a new producer, there would be a meeting well before the first recording session, so the engineer can gain an insight into the desires of the producer and therefore design a mic'ing scheme likely to facilitate those desires.
 
  I also am suggesting that fidelity is a musical observation. To give an extreme example to make a point, an engineer trained only in sound fields will not be in a position to evaluate fidelity.

 
Doesn't this bring us back to what I mentioned above? What is fidelity in relation to? Is it the relationship/similarity between two real/actual signals (say, in the case of digital audio data input into a DAC and the resultant electrical signal it outputs) or is it the relationship/similarity between a perceived experience of a live performance and the audio recording/reproduction of it? In the case of the former, we can turn around your argument, because a musician (or any human being) is not in a position to evaluate fidelity: no human can hear digital data or an electrical signal or compare them, humans can only hear acoustic signals. The case of the latter introduces interesting factors: of course we cannot reproduce a perceived experience of a live performance with an audio recording, because an audio recording only contains audio, not any of the other factors which contribute to a perceived experience.

Having said this, there are some generalised aspects of perception which we can compensate for; by "compensate for" I mean change the reality to help create a better representation of the perception. For example, in the mid/late '80s, a few of the more cutting edge labels were taking advantage of digital audio and, in addition to single position mic arrays, they added spot mics (mics positioned to pick up individual musicians or a small group of musicians within the orchestra). This allowed the spot mic output to be added to the mic array output in appropriate places, to emphasize that instrument to a similar subjective level as would occur in a live situation when a visual cue would cause a perceptual emphasis. The use of this perceptual effect is far more prolific in the film sound world, where it's known by the term "hyper-reality" and has been used as an audience manipulation tool for over 6 decades.
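As a gloss on the mechanics (mine, not gregorio's): a spot mic sits much closer to its instrument than the main array, so its signal arrives early, and it is commonly delayed by roughly its time of flight to the array before being blended in. A sketch with invented numbers; the 9m offset, the -12dB blend level and the file names are all illustrative assumptions, and mono WAVs at a common sample rate are assumed:

```python
# Delay a spot mic by its time-of-flight head start, then blend it into
# one main-array channel at a modest level.
import numpy as np
import soundfile as sf

main, rate = sf.read("main_array_L.wav")
spot, _ = sf.read("oboe_spot.wav")

delay = int(9.0 / 343.0 * rate)        # ~26 ms head start over 9 m
gain = 10.0 ** (-12.0 / 20.0)          # -12 dB blend level

spot_delayed = np.zeros(len(main))
n = min(len(main) - delay, len(spot))
spot_delayed[delay:delay + n] = spot[:n]

mix = main + gain * spot_delayed       # subtle emphasis of the spot-miked instrument
sf.write("main_plus_spot_L.wav", mix, rate)
```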
 
All the above relates mainly to orchestral music; from the 1960s, popular genres of music evolved to take advantage of recording technology, and the concept of realism is therefore a largely abstract one.
 
G
 
