Understanding the parameters in the dynamic range database
Feb 6, 2017 at 10:02 AM Post #16 of 27
   
1. I have a couple of problems with this: Firstly, I don't know Audition well but it has rather basic controllability/functionality compared to what I'm used to (and what I need).

Like you say, you may not know Audition well. There is quite a depth there covering much more than basic functions and control. It won't be what you're used to, but it is actually quite complete.
Secondly, even if I could somehow get around its limitations with a "stack" of processors, this is hardly a consumer-friendly solution. I'm guessing a fraction of one percent of consumers would even attempt such a stack and only a small fraction of those would get decent results.  

Perhaps, but it depends on the goal, doesn't it? Basic DR reduction isn't that complicated if what you're trying to do is process for low volume or background use of two channel music.
2. That's strange, I've found the exact opposite! Much greater capabilities and far more complex controls which are relatively difficult to comprehend. At this stage it's probably worth covering a bit about compression, as that's mainly what we're talking about here as far as dynamic range and processing is concerned: This 7min vid is a basic primer on compression; what compression is, the basic controls, how they're used and what they sound like. I recommend this vid for anyone who doesn't really know what compression is, only has a vague idea, just wants a refresher or wants to be sure they've understood the basics.
 
In response to pinnahertz, here is a brief (just under 7mins) tutorial of a compressor which I commonly use. I strongly recommend watching this vid, especially for those who think they already have a decent understanding of compression basics. A couple of notes: It's quite advanced but all the controls this vid covers are constantly being demonstrated, so you can hear what's happening even if you don't fully understand the controls themselves. Secondly, this is just part one of the tutorial and even both parts together only cover some of the controls available in this compressor.

As I said, all the functions are there in Audition, most within two "effects": Dynamics Processing and Multiband Compressor. The plugin in the video has a much better control configuration, and some things like control side-chain EQ are confusing in Audition, but they are all there. The plugin in the video does do exactly what I said: present a better interface. There may be a tweak or so that would be a bit difficult to match, but I didn't see anything there that wasn't covered by Audition's included plugins. For example, the Audition Dynamics Processing plug has the usual attack and release time controls, but also lets you draw a curve. That, combined with other available parameter tweaks, accomplishes everything in the plugin's "threshold" panel. It lets you grab a control input from other than the audio input (side chain EQ and more), choose detector type, etc. The video plugin's interface is a lot easier to deal with though. It goes on from there, but I've made the point. 
 
Some further observations/points: A. The videos demonstrate compression on individual channels or on a sub-group/submix (the drums submix in the second vid), rather than master-buss compression. Obviously the same techniques apply to the entire mix (master-buss) as to a submix, although with different settings. It should be noted that in the vast majority of cases most of the compression applied to a mix is applied during mixing rather than during mastering, i.e. to individual channels and submixes rather than on the master-buss. B. Both vids were demonstrating different popular music genres. In classical music, compression is still quite commonly employed on individual tracks, sub-groups and on the master-buss, but not as ubiquitously and far more subtly than in the popular genres. C. There are some "high-end" compressor processors which only have the basic compressor controls; these tend to be the modelling compressors: software plugins with algorithms modelled to emulate a vintage analogue compressor. These software compressors are non-linear; they introduce various distortions (commonly including IMD) which provide the "character" of the original unit. Generally there is no control over any of the attack or release curves, other parameters or even the amount of distortion, beyond how hard it is driven (as with the original units). So it's a case of using one of these compressors for its particular character, or using a completely different compressor if that "character" is not appropriate for a particular channel/submix/mix.
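As a toy illustration of that baked-in "character" (to be clear: this is not any real unit's algorithm, just a generic static tanh waveshaper), the curve is fixed and the only handle is how hard you drive into it, exactly as with the vintage originals:

```python
import math

def vintage_drive(samples, drive=2.0):
    """Generic static waveshaper sketch: a fixed nonlinearity whose
    distortion (harmonics, and IMD on multi-tone input) grows with
    drive level. 'drive' is the only control, as on the originals."""
    # Normalise so full-scale input maps to full-scale output.
    return [math.tanh(drive * x) / math.tanh(drive) for x in samples]
```

Note how mid-level samples come out louder relative to peaks: the nonlinearity itself is doing a crude kind of compression, which is part of why these units sound "compressed" even before any gain control.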
 
 

Good observations; I do not disagree (apart from the classical music comments...not correct for serious classical music recording).
3. Yes, a simple set of presets to cover some basic situations could be useful but what you suggest already exists. Dolby Digital already includes 6 presets (DRC profiles) which are set in the DD metadata and many AV Receivers allow that setting to be overridden.

This comment is the result of perspective. Dolby Digital processing is not available to the consumer for processing his own material for use, say, in a car. The adjustments the consumer has are there only for decoding pre-encoded material, and unfortunately, even those are buried to the point that the typical guy won't even know they exist.
Also, the "Loudness" control on some amps (particularly car sound systems) is effectively a simple compression preset. How these presets interact with the content is variable though, depending on the content. Sometimes it's relatively benign, other times it's very annoyingly not so. "Pumping" is a common problem for example, due to an inappropriate release time/curve of the compressor, especially if it fights with pumping already deliberately/artistically applied, or with content whose dynamics change quite rapidly. And this is just one of several quite common issues with simple consumer compression and presets.
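To make the pumping mechanics concrete, here's a toy feed-forward compressor sketch in Python (parameter names and values are illustrative, not any real product's): with a long release, the gain stays pulled down well after a loud hit, audibly ducking whatever follows.

```python
import math

def compress(samples, sr, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=500.0):
    """Minimal feed-forward compressor sketch.

    An envelope follower tracks level with a fast attack and slow
    release; gain reduction follows the envelope. A long release means
    the gain stays reduced well after a loud hit -- the 'pumping'
    effect a fixed consumer preset can produce on the wrong material."""
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        # Envelope follower: rise quickly, fall slowly.
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out
```

Feeding a loud burst followed by quiet material through this with a 500 ms release leaves the quiet tail attenuated for a long stretch after the burst, which is exactly the audible "hole" people complain about.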

The "loudness" control you refer to (not actually called that) typically comes from two processes: Audyssey Dynamic Volume and Dolby Volume. Both assume the mix was done at theatrical reference and played at some lower reference. Audyssey "knows" the specific SPL moment by moment; Dolby Volume does not. Both do make errors, but Audyssey does nail it pretty often, being quite sophisticated and working with a better model of what's going on in the listening room. You'll also find "night mode" or similar, usually a horrible compression function that nobody likes much. Again, none of this is available to the consumer for re-processing material for other applications.
3a. This is simply impossible and impractical, for several reasons.....

Yeah...I know. Not possible to completely undo per-track processing in mixing. The best it could do is track mastering processing (where I still think a lot goes wrong), and broadcast processing.
3b. How could it be "easily encoded" into the files themselves? Not only would you need metadata fields to cover every possible parameter of every existing (and future) compressor but you'd need to somehow embed data to change any/all of those parameters in time with the music, as automation of compressor parameters during mixing/mastering is not uncommon. This doesn't sound to me like something which could be "easily encoded"!

If you think of parameters as vectors, not a continual data stream, there could be full encoding of hundreds of parameters. Think object-oriented parameter encoding. Sound familiar?
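As a sketch of what that vector idea might look like (purely hypothetical — no such standard exists, and every name below is invented for illustration), parameters could be stored as sparse timestamped breakpoints rather than a continuous data stream, much like object-based audio metadata:

```python
from dataclasses import dataclass

@dataclass
class ParamEvent:
    """One automation breakpoint (hypothetical schema)."""
    t: float      # seconds from track start
    unit: str     # processor instance id, e.g. "vox_comp"
    param: str    # parameter name, e.g. "ratio"
    value: float  # new value taking effect at time t

def params_at(events, t):
    """Reconstruct the active parameter set at time t:
    apply breakpoints in time order, last value wins."""
    state = {}
    for e in sorted(events, key=lambda e: e.t):
        if e.t <= t:
            state[(e.unit, e.param)] = e.value
    return state
```

Because values only appear when they change, even heavily automated parameters stay compact compared with streaming every value every sample — which is the point being made about vectors versus a continual stream.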
And of course, that's assuming the impossibilities/impracticalities in 3a have all been overcome and some chip developed (for inclusion in DACs/AVRs) upon which all this metadata and embedded data can act.  

No, those aren't limitations. Once the meta stream is figured out, the quantity required to serve the consumer would make the chips cheap. It would start with the usual generic DSP and programming, and trickle down from there. The killer of this idea is simply stated, "Multi-industry standardization". You need to get everyone in music, TV, film, and broadcast on board. Never going to happen.
3c. I've seen similar requests several times here on head-fi. I can only assume those making the request simply don't understand how/why compression is applied.

You've covered your world well. You've entirely missed what happens downstream, say, in broadcast processing.
If you've watched both videos above, you'd realise that compression changes the volume, tonality and "presence", and as separate instances (with different settings) and/or different compressors are used on different individual channels and sub-groups, this in turn changes how each of those channels is balanced and positioned against the others and how one applies other processing such as EQ, reverb, etc., to all the channels. Removing all the compression does NOT result in the same mix just with a bigger DR; it results in a complete dog's dinner of a mix, where the elements in the mix no longer balance with each other (dramatically so!) and virtually all the other processing is also now wrong/inappropriate. In the majority of cases, the result would be an un-listenable mix! And just to reiterate what I mentioned above, while there may be a few exceptions, it's largely a myth that a highly dynamic mix is delivered to the mastering engineer who then applies massive amounts of additional compression to kill the track in the name of making it louder. I've been handed mixes (even entire albums) to master where I couldn't have added a single dB more compression even if I'd wanted to. While this is rare, it's also rare that I could apply an additional level of compression equal to or greater than that already applied (during mixing).

Yes, but you could do it for mastering processing and broadcast processing. The latter doesn't even require anyone but broadcasters to all get on board (still not going to happen... we missed our chance with HD Radio). We really wouldn't want to mess with your carefully chosen per-track processing anyway. The most annoying over-processing happens on either the master bus or in mastering, in my opinion; you've already disagreed. And this is never going to happen anyway.
 
(And let me just say, I'd KILL for a good WYSIWYG editor that lets you quote in context in this forum!)
 
Feb 6, 2017 at 12:36 PM Post #17 of 27
When it comes to possibilities, it shouldn't be that hard to store raw tracks on one side and all the data defining an effect in another file (it would need all the mixing/mastering software to be perfectly known, and some standards defined, like for picture RAW processing), and then have software apply the effects for the end user with the possibility to alter some settings. The obvious trouble with compressors is that set routines wouldn't work on all tracks anyway and couldn't be as selective as a pro doing his job. Plus the average audiophile has a prehistoric mentality about anything DSP related. It would most likely lead to some trends, with audiophiles not touching anything because "I want the real sound", and the rest overusing everything like newbies do in any domain. I imagine very few people getting out of such a massive adventure with actual audio benefits. Don't get me wrong, I often wish I had a multitrack recording so I could deal with just one instrument, or just the voice, with a little more compression, de-hisser, EQ... or simply turn the right gain setting to tell the cymbals to ****. But I don't see a practical way to get there; I will most certainly not become a sound engineer and master my own albums song by song. I still have a life I would like to live ^_^.
 
IMO an easier solution would be to sell albums with a master including very minimalist compression work, and another with a more advanced job (loudness war needn't apply). I for one would be more interested in that than in all the high-res lies, and would find it more rational to pay for that extra version.
Or I'd love to have sound-engineer battles: same band playing, plenty of recordings, plenty of mixing/mastering. I would also give extra money for the ability to pick the sound engineers I prefer and get my favorite stuff mastered by them. Oh! All the majestic bands with garbage albums that could be saved like that. The loudness war would be but one of all the positive impacts resulting from this.
*heavy breathing*
 
Feb 6, 2017 at 1:48 PM Post #18 of 27
^^^
 
None of that is likely to happen either.  Think "loss of control" on the part of anyone creating...well, anything.  The finished work is their art, we'd be asking them to let us do the finishing.  
 
I'm not always this pessimistic, but I really think we're stuck with the loudness war.  The only way to "win" is to vote with our wallets, but content is king, and music buyers will always buy for content first, quality way down the list.  Yes, there are tiny market segments that pick quality over content, or try for both, but they don't drive anything financially.  I admire Ian Shepherd's attempts, and the supporters of the whole Dynamic Range Day thing (is there one this year???), but it's really not hitting the target, which has to be the bottom line.  
 
Feb 6, 2017 at 4:10 PM Post #19 of 27
   
1. I have a couple of problems with this: Firstly, I don't know Audition well but it has rather basic controllability/functionality compared to what I'm used to (and what I need). Secondly, even if I could somehow get around its limitations with a "stack" of processors, this is hardly a consumer-friendly solution. I'm guessing a fraction of one percent of consumers would even attempt such a stack and only a small fraction of those would get decent results.
 
2. That's strange, I've found the exact opposite! Much greater capabilities and far more complex controls which are relatively difficult to comprehend. At this stage it's probably worth covering a bit about compression, as that's mainly what we're talking about here as far as dynamic range and processing is concerned: This 7min vid is a basic primer on compression; what compression is, the basic controls, how they're used and what they sound like. I recommend this vid for anyone who doesn't really know what compression is, only has a vague idea, just wants a refresher or wants to be sure they've understood the basics.
 

 
In response to pinnahertz, here is a brief (just under 7mins) tutorial of a compressor which I commonly use. I strongly recommend watching this vid, especially for those who think they already have a decent understanding of compression basics. A couple of notes: It's quite advanced but all the controls this vid covers are constantly being demonstrated, so you can hear what's happening even if you don't fully understand the controls themselves. Secondly, this is just part one of the tutorial and even both parts together only cover some of the controls available in this compressor.
 

 
Some further observations/points: A. The videos demonstrate compression on individual channels or on a sub-group/submix (the drums submix in the second vid), rather than master-buss compression. Obviously the same techniques apply to the entire mix (master-buss) as to a submix, although with different settings. It should be noted that in the vast majority of cases most of the compression applied to a mix is applied during mixing rather than during mastering, i.e. to individual channels and submixes rather than on the master-buss. B. Both vids were demonstrating different popular music genres. In classical music, compression is still quite commonly employed on individual tracks, sub-groups and on the master-buss, but not as ubiquitously and far more subtly than in the popular genres. C. There are some "high-end" compressor processors which only have the basic compressor controls; these tend to be the modelling compressors: software plugins with algorithms modelled to emulate a vintage analogue compressor. These software compressors are non-linear; they introduce various distortions (commonly including IMD) which provide the "character" of the original unit. Generally there is no control over any of the attack or release curves, other parameters or even the amount of distortion, beyond how hard it is driven (as with the original units). So it's a case of using one of these compressors for its particular character, or using a completely different compressor if that "character" is not appropriate for a particular channel/submix/mix.
 
3. Yes, a simple set of presets to cover some basic situations could be useful but what you suggest already exists. Dolby Digital already includes 6 presets (DRC profiles) which are set in the DD metadata and many AV Receivers allow that setting to be overridden. Also, the "Loudness" control on some amps (particularly car sound systems) is effectively a simple compression preset. How these presets interact with the content is variable though, depending on the content. Sometimes it's relatively benign, other times it's very annoyingly not so. "Pumping" is a common problem for example, due to an inappropriate release time/curve of the compressor, especially if it fights with pumping already deliberately/artistically applied, or with content whose dynamics change quite rapidly. And this is just one of several quite common issues with simple consumer compression and presets.
 
3a. This is simply impossible and impractical, for several reasons, the main two being: 1. Typically several different compressors are used in a mix, on individual channels and sub-groups, and then another compressor is used on the master-buss (in mastering). It's simply impossible to un-pick (un-mix) a mix, let alone then identify which compressors have been used, where and with what settings, and then apply inverse compression algorithms to each channel, sub-group and the entire mix. 2. Even if this were possible in theory, it would still be impossible in practice because although basic compression algorithms are freely available (and therefore an inverse algo could be freely designed), this is definitely not true of the higher end compressors, whose algorithms are trade secrets (or covered by exclusive copyright licenses in the case of some modelling compressors). I can't see how you would get ALL of the companies/software developers to effectively donate the algos upon which their company relies!
 
3b. How could it be "easily encoded" into the files themselves? Not only would you need metadata fields to cover every possible parameter of every existing (and future) compressor but you'd need to somehow embed data to change any/all of those parameters in time with the music, as automation of compressor parameters during mixing/mastering is not uncommon. This doesn't sound to me like something which could be "easily encoded"! And of course, that's assuming the impossibilities/impracticalities in 3a have all been overcome and some chip developed (for inclusion in DACs/AVRs) upon which all this metadata and embedded data can act.
 
3c. I've seen similar requests several times here on head-fi. I can only assume those making the request simply don't understand how/why compression is applied. If you've watched both videos above, you'd realise that compression changes the volume, tonality and "presence", and as separate instances (with different settings) and/or different compressors are used on different individual channels and sub-groups, this in turn changes how each of those channels is balanced and positioned against the others and how one applies other processing such as EQ, reverb, etc., to all the channels. Removing all the compression does NOT result in the same mix just with a bigger DR; it results in a complete dog's dinner of a mix, where the elements in the mix no longer balance with each other (dramatically so!) and virtually all the other processing is also now wrong/inappropriate. In the majority of cases, the result would be an un-listenable mix! And just to reiterate what I mentioned above, while there may be a few exceptions, it's largely a myth that a highly dynamic mix is delivered to the mastering engineer who then applies massive amounts of additional compression to kill the track in the name of making it louder. I've been handed mixes (even entire albums) to master where I couldn't have added a single dB more compression even if I'd wanted to. While this is rare, it's also rare that I could apply an additional level of compression equal to or greater than that already applied (during mixing).
 
G


 
Excellent post and some good links for anyone not up with how compression gets used.
 
Now I hate to make this a you're wrong and I'm right type of post.  BUT....you are wrong.
 
Even though everything in your post is correct, well considered and laid out you are still wrong.
 
Why are you wrong?  And please, I am addressing this in general, not specifically as an attack on you.  Your posts are regularly some of the best on this forum.  We aren't supposed to do that here and there's no need for it.
 
A good example of how this is all wrong.  Take an artist with a long career.  I listen to the earlier recordings and despite any flaws can listen for an hour and hear music to my ears.  Then I pick up the latest recording knowing more or less what I am getting.  I like the new music as expected, but after 15 minutes I have to shut it down and wonder who it is that should be charged with aural assault and battery.  It is squashed to death.  Flatlined in the worst sense of the word for music.
 
Sure, everyone used the latest methods and did the right thing every step of the way.  Nevertheless, if the end result is a squashed recording I can't enjoy for more than short bits at a time, something went off track somewhere along the way.  Somebody, somewhere, somehow needs to put a limit on the limiting and compression on these recordings that are so squashed.  Would using less sound different?  Oh yeah.  Maybe even in a short this-vs-that comparison the squashed version seems to have some good attributes.  And still, someone took their eyes off the prize and you get something hard to listen to for long.  Maybe I am just getting old and this driven, loud-all-the-time sound would seem energized if I was 17 again.  As it is now it just wears you out to listen to it. 
 
I know there is no turning back the clock either.  You are never going to get production done like days of yore.  I don't know how to direct a better path in the now.  It is what it is.  But sometimes these explanations of why the recordings are squashed sound like telling us "you guys just don't understand this is really better".  Ah, NO, it is not better.
 
Feb 6, 2017 at 11:26 PM Post #20 of 27
 
Take an artist with a long career.  I listen to the earlier recordings and despite any flaws can listen for an hour and hear music to my ears.  Then I pick up the latest recording knowing more or less what I am getting.  I like the new music as expected, but after 15 minutes I have to shut it down and wonder who it is that should be charged with aural assault and battery.  It is squashed to death.  Flatlined in the worst sense of the word for music.

I strongly suspect that this is the result of processing at the mastering stage; that would not be per-track processing of a multitrack master, but processing of the 2-channel final mix. I too have heard this. 
 
The other tell-tale is looking at the waveform in an editor.  If it's flat-topped, that pretty much has to be done to the final stereo mix (or in mastering); you can't get that to happen with per-track processing. 
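That waveform check can even be automated. A crude sketch (the threshold and run length here are arbitrary choices, not a standard metric) that counts runs of consecutive samples pinned near full scale:

```python
def flat_top_runs(samples, clip=0.999, min_run=3):
    """Count runs of >= min_run consecutive samples at/above the clip
    level -- a crude tell-tale of brick-wall limiting or clipping.
    An unprocessed waveform touches full scale only momentarily;
    a flat-topped one sits there for several samples at a time."""
    runs, n = 0, 0
    for x in samples:
        if abs(x) >= clip:
            n += 1
        else:
            if n >= min_run:
                runs += 1
            n = 0
    if n >= min_run:
        runs += 1
    return runs
```

A clean sine wave reports zero runs (it only grazes its peak for single samples), while a clipped-and-renormalised copy of the same wave reports many.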
 
Feb 7, 2017 at 9:04 AM Post #21 of 27
Seems like much talking past one another going on. I've played only minimally with a compressor to affect the sounds of isolated tracks (percussion, mainly), but it's obvious that using one can lend much to the sound of a given track in a positive way. It would seem non-controversial to say that this use of compression is different than the global hard-limiting that many of us hate. Am I missing something?
 
Feb 10, 2017 at 11:21 AM Post #22 of 27
  [1] Perhaps, but it depends on the goal, doesn't it? Basic DR reduction isn't that complicated if what you're trying to do is process for low volume or background use of two channel music.
[2] There may be a tweak or so that would be a bit difficult to match, but I didn't see anything there that wasn't covered with Auditions included plugins.
[2a]The video plugin's interface is a lot easier to deal with though. It goes on from there, but I've made the point.
[3] Good observations; I do not disagree (apart from the classical music comments...not correct for serious classical music recording).
[4] Dolby Digital processing is not available to the consumer for processing his own material for use, say, in a car.
[4a] The adjustments the consumer has are there only for decoding pre-encoded material, and unfortunately, even those are buried to the point that the typical guy won't even know they exist.
[5] Yeah...I know. Not possible to completely undo per-track processing in mixing.
[5a]The best it could do is track mastering processing (where I still think a lot goes wrong), and broadcast processing.
[5a] If you think of parameters as vectors, not a continual data stream, there could be full encoding of hundreds of parameters. Think object-oriented parameter encoding. Sound familiar?
[5b] The killer of this idea is simply stated, "Multi-industry standardization". You need to get everyone in music, TV, film, and broadcast on board. Never going to happen.
[6] You've covered your world well. You've entirely missed what happens downstream, say, in broadcast processing.
[7] Yes, but you could do it for mastering processing and broadcast processing.
[8] The most annoying over-processing happens on either the master bus or in mastering, my opinion, you've already disagreed.

 
1. Agreed, for a simple setting such as background/low volume, but as you also say, "it depends on the goal", and the "goal", as you suggested, is not only this type of simple setting but to provide "a completely unprocessed version to become available to the listener" and "a pseudo 'expert' mode that allowed for the complete application of an inverse algorithm", which the listener can re-process to their own taste. This is what I'm disagreeing with!
2. I use Compassion to apply a number of "tweaks" which are effectively impossible to achieve any other way, which is why I bought this processor! While I don't know Audition well, I do not believe that it can accomplish what I could not with the pro industry standard DAW + many thousands of dollars of other third party processors (including about 7 other compressors, a multi-band compressor and two dynamic EQs).
2a. The plugin's interface IS a lot easier to deal with than a complex routing and stack of other plugins which still wouldn't give me exactly what I want! So no, you have not made the point.
3. Oh no, you're not going to argue about what is a "serious" classical recording are you?! I've worked on recordings with the LSO and Abbey Road Studios, would that not qualify as "serious"?
4. I was responding to your statement: "Home receivers could have simplified activity-targeted presets (low-background, party background, concert, etc.)" and making the point that this is in fact already available to a limited extent on some home receivers, and that some car radios have had a very basic "loudness" control for at least a couple of decades.
4a. You seem to be arguing my point for me! It's already there (for some encoded material) but, as you say, the average guy doesn't even know it's there, let alone how to use it, and yet now you want that average guy to have access not just to a compressor with no configurable parameters beyond 5 presets but to way, way more complex tools?
5. So you agree then that your suggestion of "a completely unprocessed version" is impossible even in theory?
5a. I presume you're talking about something like Dolby Atmos? Atmos is relatively simple by comparison, as it only deals with positional info. I'm not saying it would be completely impossible to achieve (in theory) but it would be very difficult rather than "easily encoded".
5b. So you agree it's impossible in practice?
6. Again, you seem to be arguing my point. What happens downstream (in broadcast chains) is even more processing/complexity than what I've mentioned previously, which is already impossible!
7. Then you wouldn't be providing "a completely unprocessed version" and even just with mastering we've still got insurmountable problems. Some mastering engineers work with stems rather than just on a completed stereo mix; others still swear by vintage analogue units which haven't yet been modelled (or modelled satisfactorily).
8. Just to be clear, I've disagreed that your statement is always true. Sometimes it is true, the annoying over-processing is purely a result of poor mastering but not always, not uncommonly the problem already exists in the mix before the mastering engineer even starts.
 
Quote:
  Why are you wrong? ...
A good example of how this is all wrong.  Take an artist with a long career.  I listen to the earlier recordings and despite any flaws can listen for an hour and hear music to my ears. ...

 
You haven't actually said why you think I'm wrong and your example does NOT demonstrate I'm wrong! All I've done is explain some basic methodology of how compression is generally used in mixing for the last 40 years or so and one of the more modern compression tools. Why can't your example be an example of me being right but of some people (for whatever reason) misjudging or deliberately abusing the tools?
 
  when it comes to possibilities, it shouldn't be that hard to store raw tracks on one side, and all the data defining an effect in another file(would need to have all the mixing/mastering software perfectly known, and define some standards like for picture RAW processing), and then have a software apply effects for the end user with the possibility to alter some settings.

 
The suggestion of just splitting off all the compression/compressors is impossible; now you're suggesting all the other thousands of processors in addition to the compressors, and that "it shouldn't be that hard"! Even if it weren't impossible, instead of 2-channel stereo you'd have to download about 24 to say 100 channels, plus who knows how much code and other data (such as impulse responses), and have a chip inside your AVR which could apply all that processing to all those channels in real time. Dealing with all audio content, which would include films, means downloading say 1,000 or so channels of audio instead of 6 or 8, plus many gigabytes of non-audio data (inc. IRs), and having a chip in your AVR with the processing power of at least 3 or so max-spec'ed Mac Pros, plus some way of getting those 1,000 channels of audio from your playback device (server farm?!) to your AVR.
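The bandwidth side of that is easy to sanity-check with a back-of-envelope calculation (the figures below are rough assumptions: raw 48 kHz / 24-bit PCM and a two-hour film, ignoring compression of the audio data itself):

```python
def stream_gib(channels, sr=48000, bits=24, seconds=2 * 3600):
    """Back-of-envelope size of raw multichannel PCM in GiB:
    channels * sample rate * bytes per sample * duration."""
    return channels * sr * (bits // 8) * seconds / 2**30
```

Eight channels come to under 8 GiB; a thousand channels push toward a terabyte, before any plugin code or impulse responses are counted.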
 
 
The other tell-tail is looking at the waveform in an editor.  If it's flat-topped, that pretty much has to be done to the final stereo mix (or in mastering), you can't get that to happen with per-track processing. 

 
Of course you can get that to happen with per-track processing. There's no difference (as far as flat-topping is concerned) between adding a huge amount of compression on each track individually and then summing them all together, and just adding that same amount of compression on the master-buss. In practice that's not how we tend to mix; instead some channels will be compressed individually (lead vox, lead guit and keyboard for example) and others will be compressed as part of a subgroup (backing vox, rhythm guitars and most commonly, as in the Compassion vid, the drums). I can't remember ever seeing a (popular genre) mix which didn't already have some degree of flat-topping before mastering (or a master-buss compressor) and, as I've mentioned, I've seen some which were so mashed before mastering it would have been impossible to add any more. For Spuce's benefit; I'm not at all suggesting that compression during mixing should be abused to this level, just pointing out that it can be done and that some do.
 
  It would seem non-controversial to say that this use of compression is different than the global hard-limiting that many of us hate. Am I missing something?

 
Yes, you are missing something: what you are saying IS in fact controversial! Have another look at the Compassion vid, particularly from around 5:50; at this point in the vid a (-0.1dB ceiling) brick-wall limiter function is added, post compression/transient shaping. You'll notice that he takes the hard-limiting all the way up to the meter's limit of 11dB reduction but settles on about 4dB (in addition to about 4 or 5dB of compression). There is intrinsically no difference between what is done in this vid to the drum submix and what is done to the full mix during mastering!
 
Whether you prefer the end result in the vid to how the drums sounded before the Compassion processing is a matter of style/taste; maybe you or some others preferred the sound before this compressor was applied? What the vid demonstrates, though, is that it's not just me making it all up: there is often a very substantial amount of compression added during mixing, before the master-buss and before mastering. In fact, the make-up and transient gain, plus the peak reduction applied in the vid, is considerably more than I would typically want to apply during mastering! And one further point, which wasn't mentioned in the vid: the individual sounds/channels which comprise the original drum submix had already been heavily processed (including compression) before the Compassion processing was applied! Probably to the samples themselves rather than in the mix session, BTW.
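Just to make "intrinsically no difference" concrete, here's a toy sketch (Python/NumPy; the naming is mine, and this is only the core gain stage — real limiters add look-ahead and release envelopes). The operation is identical whether the buffer handed to it is a drum submix or the final stereo mix:

```python
import numpy as np

def brick_wall_limit(x, ceiling_db=-0.1):
    """Core gain stage of a brick-wall limiter: nothing passes the ceiling.
    (Real limiters add look-ahead and release envelopes; this is only the
    clamping operation itself.)"""
    ceiling = 10 ** (ceiling_db / 20)  # dBFS ceiling -> linear amplitude
    return np.clip(x, -ceiling, ceiling)

t = np.arange(48000) / 48000
drum_submix = 1.5 * np.sin(2 * np.pi * 100 * t)  # a 'hot' buss, peaking over 0dBFS

# Same function, same result, whether fed a submix or the full mix:
limited = brick_wall_limit(drum_submix)
print(limited.max())  # peaks now sit at the -0.1dB ceiling
```

The same `brick_wall_limit` call on the full 2-track sum is exactly the operation a mastering limiter performs; only the input buffer differs.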
 
G
 
Feb 10, 2017 at 1:22 PM Post #23 of 27
About my part: I was really talking about a method that could possibly exist. I never thought it would be practical or make sense in any way. Just compression alone is a brainstorm, and as most operations need to be done in sequence to get the same result, there would also be the need to store data for gating, EQ, and all the hundreds or thousands of funky VSTs that the guys like to use.

 
Feb 10, 2017 at 5:08 PM Post #24 of 27
   
1. Agreed, for a simple setting such as background/low volume, but as you also say, "it depends on the goal", and the "goal", as you suggested, is not only this type of simple setting but to provide "a completely unprocessed version to become available to the listener" and "a pseudo 'expert' mode that allowed for the complete application of an inverse algorithm", which the listener can re-process to their own taste. This is what I'm disagreeing with!

How about if I restate the goal: to provide a version without the final loudness processing applied? Even if the final 2-track processing is complex, there's no technical reason it couldn't have an inverse algorithm applied; you'd just need a processor that can "publish" its algorithm. For example, Dolby A and SR are actually fairly complex multi-band processes with complex compression ratios in each band applied during encoding. Yet they can apply an exact inverse during decoding because they know what's going on. By today's standards, A and SR are fairly simple.
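The companding principle is easy to sketch. The toy static curve below is nothing like the real multi-band dynamics of A or SR (it's purely illustrative), but it shows why a published law can be undone exactly at the decoder:

```python
import numpy as np

def encode(x, ratio=2.0):
    """Toy single-band compander: compress amplitudes with a known, published law."""
    return np.sign(x) * np.abs(x) ** (1.0 / ratio)

def decode(y, ratio=2.0):
    """Exact inverse of encode(): expand using the same published law."""
    return np.sign(y) * np.abs(y) ** ratio

x = np.random.default_rng(0).uniform(-1, 1, 1000)  # stand-in programme material
roundtrip = decode(encode(x))
print(np.max(np.abs(roundtrip - x)))  # tiny: exact to floating-point precision
```

Because the decoder knows the exact law the encoder used, the round trip is transparent; that's the whole point of a "published" algorithm.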
2. I use Compassion to apply a number of "tweaks" which are effectively impossible to achieve any other way, which is why I bought this processor! While I don't know Audition well, I do not believe that it can accomplish what I could not with the pro industry standard DAW plus many thousands of dollars of other third-party processors (including about 7 other compressors, a multi-band compressor and two dynamic EQs).
2a. The plugin's interface IS a lot easier to deal with than a complex routing and stack of other plugins which still wouldn't give me exactly what I want! So no, you have not made the point.

But…that IS my point. I acknowledge that you don't believe the same results could be achieved. You also admit, you don't know Audition well. The one tool missing when using Audition is knowledge, which is why there are those expensive tools in the first place. Don't get me wrong, I'm not saying that those using the expensive tools don't have deep knowledge and experience, but it's really not about the functions, it's about the UI. It just makes life easier and work go faster.
3. Oh no, you're not going to argue about what is a "serious" classical recording are you?! I've worked on recordings with the LSO and Abbey Road Studios, would that not qualify as "serious"?

LSO at Abbey Road...sounds like scoring, right? I'm talking orchestras in actual halls. The "greats" in that field would disagree with you, some vehemently. People like Jack Renner and, earlier, C. Robert Fine, etc. Much of their work went straight to stereo! Renner has stated, in an interview, that even in mastering he "don't do nuttin'". And that's a guy who made his living doing it, and founded one of the premier classical labels of all time, Telarc. I've attended Decca sessions in Chicago with the CSO, and worked with many producers of orchestral recordings for decades. I won't say that per-track compression is never used, especially today when we actually have real digital multitrack in post, but I don't think you'll find it being done as much as you think outside of the scoring world. In fact, it would generally not even be tolerated.
 
Once again, this is a "your world vs my world" perspective thing. Let's not both get tunnel vision, ok?
4. I was responding to your statement: "Home receivers could have simplified activity-targeted presets (low-background, party background, concert, etc.)" and making the point that this is in fact already available to a limited extent on some home receivers, and that some car radios have had a very basic "loudness" control for at least a couple of decades.

Again, not quite true. Today's loudness compensation (really, not even the right term now) is completely different from that of a couple of decades ago. The difference? Just take Audyssey Dynamic Volume: an algorithm developed from researching how actual listeners respond to changes in volume, and one that applies changes dynamically, spectrally, and relatively based on front vs surround ratio. That's nothing, and I mean NOTHING, like the "Loudness Compensation" function of previous decades: a fixed inverse Fletcher-Munson curve applied without regard for actual volume setting because, even if it was modified by the volume control knob, the specific in-room SPL was never known or considered. And none of the above was what I was referring to anyway.
4a. You seem to be arguing my point for me! It's already there (for some encoded material) but, as you say, the average guy doesn't even know it's there, let alone how to use it; yet now you want that average guy to have access to not just a compressor with no configurable parameters beyond 5 presets but way, way more complex tools?

Let me give you a real-world example of a Denon AVR. There are no less than 4 possible dynamics modifiers in the box: Audyssey Dynamic Volume, Audyssey Dynamic EQ, Dynamic Compression and Dialog Normalization (the latter two under the heading of Loudness Management, and I do realize Dialnorm is not dynamic per se). What's he got turned on? If (and that's a big IF) he's gone through Audyssey calibration during setup (easily skipped, BTW), he may have Dynamic Volume and Dynamic EQ on, possibly, if those choices were made and he knew what he was picking. He'll have Loudness Management/Dynamic Compression off, and Dialnorm on, looking for the metatag. If he has not done calibration, Dynamic Volume and Dynamic EQ will be off, and Loudness Management with Dynamic Compression will be on, set to the default, Auto. None of this is anywhere a user could find it without drilling through a menu. What there is now is pretty much inaccessible and incomprehensible. Much could be improved, using my harebrained idea or not. And I'm not suggesting the adjustment be buried.
5. So you agree then that your suggestion of "a completely unprocessed version" is impossible even in theory?

That's not at all what I said.
5a. I presume you're talking about something like Dolby Atmos? Atmos is relatively simple by comparison, as it only deals with positional info. I'm not saying it would be completely impossible to achieve (in theory) but it would be very difficult rather than "easily encoded".

The object control data deals with vectors and positional changes over time. All processor parameters can be reduced to vectors over time. Same problem, same solution.
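A toy illustration of "parameters as vectors over time" (my own simplified scheme, not any real codec's format): the encoder stores the gain it applied at every sample as metadata, and the decoder divides it back out, giving an essentially exact inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
mix = rng.uniform(-1, 1, 1000)          # stand-in for a finished 2-track mix

# Simple downward compressor: 4:1 above a 0.5 threshold (static, no attack/release).
threshold, ratio = 0.5, 4.0
level = np.abs(mix)
gain = np.ones_like(mix)
over = level > threshold
gain[over] = (threshold + (level[over] - threshold) / ratio) / level[over]

processed = mix * gain                  # what gets distributed to the listener
metadata = gain                         # the published 'vector over time'

restored = processed / metadata         # decoder: exact inverse of the processing
print(np.max(np.abs(restored - mix)))   # tiny: the loudness processing is undone
```

In a real system the gain vector would be decimated and compressed rather than carried per-sample, but the principle — publish the time-varying parameters, invert them downstream — is the same.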
5b. So you agree it's impossible in practice?

Yes, but not for technical reasons.
6. Again, you seem to be arguing my point. What happens downstream (in broadcast chains) is even more processing/complexity than what I've mentioned previously, which is already impossible!

No, not at all. Every single bit of broadcast processing could be reduced to metadata. It has the advantage over what you do in that it operates on two fully mixed channels. Broadcast processing could be un-done with my harebrained idea. The technical hurdles could be solved; the non-technical ones, not a chance.
7. Then you wouldn't be providing "a completely unprocessed version", and even just with mastering we've still got insurmountable problems. Some mastering engineers work with stems rather than just a completed stereo mix; others still swear by vintage analogue units which haven't yet been modelled (or modelled satisfactorily).

You're locking onto the technical problems. They could be solved. They won't be, because industries won't ever cooperate on that level, and frankly, reversing processing or providing an unprocessed version would be outside most producers' world and desire. The ultimate in "loss of control".
 
Look, we can continue to knock heads. I've outlined a reasonable solution to part of the problem. My solution has technical merit, and is feasible. It also is not feasible for sooo many non-technical reasons.
 
I promise you, I'll never bring it up again. OK?
8. Just to be clear, I've disagreed that your statement is always true. Sometimes it is true, the annoying over-processing is purely a result of poor mastering but not always, not uncommonly the problem already exists in the mix before the mastering engineer even starts.

Whatever.
Of course you can get that to happen with per-track processing. There's no difference (as far as flat-topping is concerned) between adding a huge amount of compression on each track individually and then summing them all together or just adding that same amount of compression on the master-buss.

Huh. Interesting. So you think that flat-topping per track will end up with the same result as doing it at the final mix? I don't think so. Summing dissimilar channels may not end up that tightly controlled at all; things don't flat-top at the same moment in time. I've included some screenshots: two mono tracks, flat-top limited, then summed. The result is no more flat-topping, just as summing should work. However, this doesn't mean your observation below is completely wrong either. (more)
In practice that's not how we tend to mix; instead some channels will be compressed individually (lead vox, lead guit and keyboard for example) and others will be compressed as part of a subgroup (backing vox, rhythm guitars and most commonly, as in the Compassion vid, the drums). I can't remember ever seeing a (popular genre) mix which didn't already have some degree of flat-topping before mastering (or a master-buss compressor) and, as I've mentioned, I've seen some which were so mashed before mastering it would have been impossible to add any more. For Spuce's benefit: I'm not at all suggesting that compression during mixing should be abused to this level, just pointing out that it can be done and that some do.  
 

In a complex mix with flat-top processing on several dominant channels, the observable flat-topping in the final mix is related to how dominant a flat-topped channel is at any given moment. If everything were mixed equally, you wouldn't end up with much observable flat-topping, but that's obviously not how you mix, so yeah, some will be visible. There's just no way several channels, all with the same flat-top processing (and threshold), could be in a mix together and maintain that characteristic all the time.
 
And that's why I can correctly state that per-track processing, especially the hard limiting we're discussing, is not the equivalent of a final mastering-level processor acting on the final composite mix.
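The screenshot experiment is easy to reproduce numerically. In the rough sketch below (NumPy; the "plateau fraction" measure is my own crude stand-in for eyeballing a waveform in an editor), each clipped track sits on its flat-top a large fraction of the time, but their sum sits at its own peak far less often:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# Two dissimilar mono tracks, each driven into a flat-top by a hard clip:
a = np.clip(0.6 * np.sin(2 * np.pi * 100 * t), -0.5, 0.5)
b = np.clip(0.6 * np.sin(2 * np.pi * 137 * t), -0.5, 0.5)

def plateau_fraction(x, tol=1e-6):
    """Fraction of samples sitting at (within tol of) the signal's own peak --
    a crude measure of how flat-topped the waveform looks in an editor."""
    return np.mean(np.abs(x) > np.abs(x).max() - tol)

print(plateau_fraction(a))      # each track alone: heavily flat-topped
print(plateau_fraction(a + b))  # the sum: the flat-tops mostly wash out
```

The plateaus of the two tracks only coincide occasionally, so the sum shows far less flat-topping than either component, which is exactly why observed flat-topping tracks the dominance of a flat-topped channel at each moment.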
 

 
Feb 11, 2017 at 5:11 AM Post #25 of 27
  [1] How about if I restate the goal: to provide a version without the final loudness processing applied?
[2] For example, Dolby A and SR are actually fairly complex multi-band processes with complex compression ratios in each band applied during encoding. Yet they can apply an exact inverse during decoding because they know what's going on. ... The object control data deals with vectors and positional changes over time. All processor parameters can be reduced to vectors over time. [2a] Same problem, same solution.
[2b] I've outlined a reasonable solution to part of the problem.
[3] I acknowledge that you don't believe the same results could be achieved. You also admit, you don't know Audition well. The one tool missing when using Audition is knowledge, which is why there are those expensive tools in the first place.
[4] LSO at Abbey Road...sounds like scoring, right? I'm talking orchestras in actual halls.
[5] Today's loudness compensation (really, not even the right term now), is completely different from that of a couple of decades.
[6] In a complex mix with flat-top processing on several dominant channels, the observable flat-topping in the final mix is related to how dominant a flat-topped channel is at any given moment.

 
1. Sure, providing all the following: A. You can model all the compressors, EQs, multi-band compressors and dynamic EQs every mastering engineer employs. B. You figure out a way to un-mix mastering which was done with stems. C. You can find mastering engineers, producers and record labels who don't mind having their knowledge/skills effectively published in metadata/coding. D. You and every other user realise that this version probably still contains a very significant amount of loudness processing. And E. You can solve all the intellectual property issues of "A".
 
2. Sure, you can have a process applied and a hardware unit (or chip) then apply an inverse decoding. And yes, that's effectively what DD and Atmos do, you encode during print-mastering and then every HD TV and compatible AVR contains a chip licensed from Dolby to decode.
2a. So when you say "same solution", are you suggesting an AVR which contains a few hundred different licensed chips, one from each of the companies who produce all the tools that mastering engineers employ for loudness processing, or are you suggesting just one chip which effectively contains all the algorithms which would otherwise have to be in hundreds of proprietary chips? Presuming the latter, and if it makes you happy, I'll concede: if you had, say, Apple's resources to design such a chip, model every single tool ever used by mastering engineers for loudness processing and were willing to invest such massive resources for the benefit of the tiny number of audiophiles who seem to desperately want it, then sure, it's theoretically possible. Though not even remotely feasible in practice: economically, legally/contractually and because engineers/producers/labels wouldn't agree to it.
2b. This seems to be our point of contention, how we define "reasonable solution". If a suggested solution is completely impossible/impractical at a number of different levels, then I personally wouldn't call it "reasonable" but that's just me.
 
3. Sure, all us pros use more expensive, less capable tools because we lack the knowledge to use Audition.
 
4. So in fact you are basically saying that LSO and Abbey Road are not "serious" classical recordings. I'm not going to argue, you're entitled to your opinion of what "serious" classical recordings means. BTW, I'm not just talking about scoring, Abbey Road have a very fine mobile recording unit.
 
5. I didn't say that the loudness button on some car radios of a couple of decades ago was anything like the loudness controls in modern AVRs! I was merely pointing out that some form of loudness control has been available to the driving public and that the basic idea of metadata controlled loudness/compression with some user configurability has also been around for more than a decade.
 
6. So are you saying that flat-topping is in fact observable in mixes before master-buss compression, and that therefore observing flat-topping in an editor does not necessarily prove that the flat-topping must have been done in mastering (master-buss compression/limiting)? Or are you saying that it's only possible in theory, but in practice we'd never see flat-topping without master-buss compression/limiting?
 
G
 
Feb 11, 2017 at 1:39 PM Post #26 of 27
   
   
You haven't actually said why you think I'm wrong, and your example does NOT demonstrate I'm wrong! All I've done is explain some of the basic methodology of how compression has been used in mixing for the last 40 years or so, and one of the more modern compression tools. Why can't your example be an example of me being right but of some people (for whatever reason) misjudging or deliberately abusing the tools?
 
 
  G

Well, I said I wasn't really attacking you. I don't know which recordings you have done, nor do I care to make this a personal comment on your work. I said you were wrong in a generalist sense because you seem to be defending what is common practice in the industry, and common practice in the industry makes for flat-topped tracks that, in my opinion, are too compressed. I gave one example of an artist with recordings over many years; that is a typical result. The results of "more modern compression tools" over the last 40 years have moved in one direction: more total compression with less dynamic range. I already acknowledged your descriptions as being correct in every instance. The reason I don't agree with the idea of some people misjudging or abusing the tools is that the typical result is exactly that. It is the common result, not an occasional result of someone abusing the tools.
 
Feb 11, 2017 at 3:26 PM Post #27 of 27
   
1. Sure, providing all the following: A. You can model all the compressors, EQs, multi-band compressors and dynamic EQs every mastering engineer employs. B. You figure out a way to un-mix mastering which was done with stems. C. You can find mastering engineers, producers and record labels who don't mind having their knowledge/skills effectively published in metadata/coding. D. You and every other user realise that this version probably still contains a very significant amount of loudness processing. And E. You can solve all the intellectual property issues of "A".
 
2. Sure, you can have a process applied and a hardware unit (or chip) then apply an inverse decoding. And yes, that's effectively what DD and Atmos do, you encode during print-mastering and then every HD TV and compatible AVR contains a chip licensed from Dolby to decode.
2a. So when you say "same solution", are you suggesting an AVR which contains a few hundred different licensed chips, one from each of the companies who produce all the tools that mastering engineers employ for loudness processing, or are you suggesting just one chip which effectively contains all the algorithms which would otherwise have to be in hundreds of proprietary chips? Presuming the latter, and if it makes you happy, I'll concede: if you had, say, Apple's resources to design such a chip, model every single tool ever used by mastering engineers for loudness processing and were willing to invest such massive resources for the benefit of the tiny number of audiophiles who seem to desperately want it, then sure, it's theoretically possible. Though not even remotely feasible in practice: economically, legally/contractually and because engineers/producers/labels wouldn't agree to it.
2b. This seems to be our point of contention, how we define "reasonable solution". If a suggested solution is completely impossible/impractical at a number of different levels, then I personally wouldn't call it "reasonable" but that's just me.

I've already addressed all of the above with what followed your snippet of my post: "My solution has technical merit, and is feasible. It also is not feasible for sooo many non-technical reasons." That means it's possible to do technically, impossible to do for non-tech (administrative, political, artistic...want more?) reasons. 
3. Sure, all us pros use more expensive, less capable tools because we lack the knowledge to use Audition.

That is, again, absolutely NOT what I said.
4. So in fact you are basically saying that LSO and Abbey Road are not "serious" classical recordings. I'm not going to argue, you're entitled to your opinion of what "serious" classical recordings means. BTW, I'm not just talking about scoring, Abbey Road have a very fine mobile recording unit.

That is, again, absolutely NOT what I said.
5. I didn't say that the loudness button on some car radios of a couple of decades ago was anything like the loudness controls in modern AVRs! I was merely pointing out that some form of loudness control has been available to the driving public and that the basic idea of metadata controlled loudness/compression with some user configurability has also been around for more than a decade.

The "loudness" control provided to the driving public is the volume control, that's it. No loudness comp on the typical car radio at all.
 
Having a form of loudness control in a home system that is not accessible to the common user is not at all like my suggestion.
6. So are you saying that flat-topping is in fact observable in mixes before master-buss compression, and that therefore observing flat-topping in an editor does not necessarily prove that the flat-topping must have been done in mastering (master-buss compression/limiting)? Or are you saying that it's only possible in theory, but in practice we'd never see flat-topping without master-buss compression/limiting?

That is, again, absolutely NOT what I said.  
 
You have an amazing ability to misconstrue my posts.  I think I was clear, even posted actual examples.  
 
As tempting as it is to continue, it doesn't seem to be productive, and I doubt it is informative to any other readers. 
 
