how do dynamic range meters work?
Jul 8, 2018 at 10:32 AM Post #31 of 103
I believe that the way individual inputs are mixed can create the effect of density (everything at the same apparent loudness, regardless of whether 8 or 28 tracks are being mixed), or in mastering, how much compression is applied to the final overall mix.

Hey Truth,
I understand what you’re getting at. My name for that is “someone who’s still learning to mix”! :wink: Seriously though, some genres require that sort of “throw everything at the wall (of sound) and see if we can create a hit” approach, death metal being one example. Most mixers try to put every player in their own sonic pocket, through judicious use of panning/placement, individual ’verb/delay, EQ (always cutting, never adding gain) and dynamic range control of individual elements and stems. I promote that approach as it lets the listener hear into the mix rather than being assaulted by it. However, if all you have is a hammer (or a Mackie and free plug-ins, in our case), then everything looks and sounds like a nail. An advantage of the, let’s call it, “pocket” approach is that mixes like that suffer less from lossy codec data compression, but again, for some genres, who gives a rat’s culo about fidelity!

BTW, we have strayed a bit from the thread’s initial subject!
 
Jul 8, 2018 at 2:37 PM Post #32 of 103
The introduction of home studios has led to some really bad DIY engineering by people who really should have hired a professional to do it. The same thing happened in the graphic arts when Quark was introduced on the Mac. All of a sudden typography was being misused and the fundamentals of design were ignored out of ignorance. After a while, it finds its level and starts self-correcting as people who bit off more than they could chew start gaining some experience. I think this is already happening in sound engineering. I hear some sophisticated stuff coming out of home studios now. It's not as bad as it was 5 years ago.
 
Jul 8, 2018 at 3:12 PM Post #33 of 103
Hey Truth,
I understand what you’re getting at.

Well it's about frickn time, lol! The way some on here are, you'd think I was speaking RUSSIAN! he he heh!

My name for that is “someone who’s still learning to mix”! :wink: Seriously though, some genres require that sort of “throw everything at the wall (of sound) and see if we can create a hit” approach, death metal being one example. Most mixers try to put every player in their own sonic pocket, through judicious use of panning/placement, individual ’verb/delay, EQ (always cutting, never adding gain) and dynamic range control of individual elements and stems. I promote that approach as it lets the listener hear into the mix rather than being assaulted by it. However, if all you have is a hammer (or a Mackie and free plug-ins, in our case), then everything looks and sounds like a nail. An advantage of the, let’s call it, “pocket” approach is that mixes like that suffer less from lossy codec data compression, but again, for some genres, who gives a rat’s culo about fidelity!

Regarding your last point, "mixes like that suffer less from lossy codec data compression..."

That's something I brought up as one theory, aside from loudness, for hot mastering/brickwall limiting: I called it "maximizing use of the bits". Of course, the engineers on GearSlutz, Hoffman, and I'm sure on here, will stalwartly deny that any of what we are talking about is taking place. The old Rumsfeldian "things we know we know but don't know" in front of others scenario! Been dealing with such double-talk for almost a decade now; what can one do?
 
Jul 8, 2018 at 3:26 PM Post #34 of 103
By the way, I haven't found that any particular genre of music is easier to compress than any other. It's not like a symphony orchestra is harder to encode than a flutophone. Lossy works on the way the ears hear, not the way the signal is constructed.

Gregorio, have you ever heard of anyone creating music recordings specifically to favor lossy artifacting? I suppose you could roll off the stuff above 15 kHz, but the codec will do that itself when you encode.

The purpose of lossy is to be audibly transparent with a sufficient data rate. It doesn't make sense that an engineer would change the way he recorded or mixed to suit compression artifacting. Unless you're talking about voice transmission where fidelity isn't required to do the job.
 
Jul 9, 2018 at 7:46 AM Post #35 of 103
Gregorio, have you ever heard of anyone creating music recordings specifically to favor lossy artifacting?

On a couple of occasions I have myself. I remember on one occasion noticing an artefact in a cymbal tail/decay, I changed the mix slightly so it wasn't as affected by the perceptual coding and made no intrinsic difference to the raw/wav file. That was a long time ago though, probably nearly 20 years or so, when the quality of perceptual coding was significantly poorer than it became a few years later. The only sense in which I "create recordings specifically to favour lossy artefacting" today is, depending on customer requirements, a master which observes true peak rather than sample peak, in order to avoid potential clipping during the conversion process to a lossy format.

[1] That's something I brought up as one theory, aside from loudness, for hot mastering/brickwall limiting: I called it "maximizing use of the bits". Of course, the engineers on GearSlutz, Hoffman, and I'm sure on here, will stalwartly deny that any of what we are talking about is taking place.
[2] The old Rumsfeldian "things we know we know but don't know" in front of others scenario! [2a] Been dealing with such double-talk for almost a decade now, [2b] what can one do?

1. So you made up some theory based on no professional experience or knowledge (just intuition/assumption), but actual professional engineers, across a range of forums, who make commercial mixes for a living and refute your made-up theory are all wrong and you are right? It's amazing you would even attempt to make such a claim, it's even more amazing that you would make such a claim here in the sound science forum without any supporting evidence and even more amazing still that you would keep repeating such claims even after it's been explained how delusional it makes you appear!!

2. Are you talking about the engineers or you? Either way, it's wrong. In the case of professional engineers, they know what professional engineers actually do because they are those professional engineers! In your case, you think "you know what you know" but clearly "you don't know what you don't know" and what you don't know is how professional engineers actually create mixes and masters and that renders as incorrect much of what you think you do know!
2a. You have no idea if it's "double-talk" or not, you're calling it "double-talk" purely on the basis that it doesn't agree with your made-up theory/agenda! And BTW, I've been dealing with ignorant and delusional audiophiles for over two decades now.
2b. Either provide some reliable evidence to back up your claims/assertions or, if you "don't know what you don't know", then ask, or at least phrase your assertions conditionally. Do NOT just make up nonsense and pass it off as fact, do NOT claim that you know what professional engineers are doing but the professional engineers themselves don't (because that's just delusional!), and lastly, absolutely do NOT pepper your made-up nonsense with insults; that leads the thread nowhere, is against forum rules and, in addition to seeming delusional, it also makes you appear ignorant and foolish! Why are you even asking this question? You've had this same answer explained to you several times already in the recent past. What part of it are you finding so difficult to comprehend?

G
 
Jul 9, 2018 at 8:56 AM Post #36 of 103
On a couple of occasions I have myself. I remember on one occasion noticing an artefact in a cymbal tail/decay, I changed the mix slightly so it wasn't as affected by the perceptual coding and made no intrinsic difference to the raw/wav file. That was a long time ago though, probably nearly 20 years or so, when the quality of perceptual coding was significantly poorer than it became a few years later. The only sense in which I "create recordings specifically to favour lossy artefacting" today is, depending on customer requirements, a master which observes true peak rather than sample peak, in order to avoid potential clipping during the conversion process to a lossy format.



1. So you made up some theory based on no professional experience or knowledge (just intuition/assumption), but actual professional engineers, across a range of forums, who make commercial mixes for a living and refute your made-up theory are all wrong and you are right? It's amazing you would even attempt to make such a claim, it's even more amazing that you would make such a claim here in the sound science forum without any supporting evidence and even more amazing still that you would keep repeating such claims even after it's been explained how delusional it makes you appear!!

2. Are you talking about the engineers or you? Either way, it's wrong. In the case of professional engineers, they know what professional engineers actually do because they are those professional engineers! In your case, you think "you know what you know" but clearly "you don't know what you don't know" and what you don't know is how professional engineers actually create mixes and masters and that renders as incorrect much of what you think you do know!
2a. You have no idea if it's "double-talk" or not, you're calling it "double-talk" purely on the basis that it doesn't agree with your made-up theory/agenda! And BTW, I've been dealing with ignorant and delusional audiophiles for over two decades now.
2b. Either provide some reliable evidence to back up your claims/assertions or, if you "don't know what you don't know", then ask, or at least phrase your assertions conditionally. Do NOT just make up nonsense and pass it off as fact, do NOT claim that you know what professional engineers are doing but the professional engineers themselves don't (because that's just delusional!), and lastly, absolutely do NOT pepper your made-up nonsense with insults; that leads the thread nowhere, is against forum rules and, in addition to seeming delusional, it also makes you appear ignorant and foolish! Why are you even asking this question? You've had this same answer explained to you several times already in the recent past. What part of it are you finding so difficult to comprehend?

G


Just answer this question: Even if YOU don't engage in the practice, is it remotely possible that some engineers, at the mixing and mastering stages, max things to full scale with the goal of 'maximizing bit use' or 'using all the bits' at whatever bit depth they are operating in?
 
Jul 9, 2018 at 9:44 AM Post #37 of 103
Just answer this question: Even if YOU don't engage in the practice, is it remotely possible that some engineers, at the mixing and mastering stages, max things to full scale with the goal of 'maximizing bit use' or 'using all the bits' at whatever bit depth they are operating in?

With 24bit, that's a practical impossibility in the first place. With 16bit, then typically no, "using all the bits" is undesirable as it's simply way too much dynamic range for consumers, by as much as 1,000 times, and nothing in real life achieves that amount of dynamic range without being hazardous to hearing anyway, hence one of the reasons why compression is employed.
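As a back-of-the-envelope check on those numbers, here's a small sketch using the standard 6.02N + 1.76 dB rule of thumb for an ideal N-bit quantizer (real converters fall short of the theoretical 24-bit figure):

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit quantizer
    (full-scale sine vs. quantization noise floor)."""
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))  # 98.1 (dB)
print(round(dynamic_range_db(24), 1))  # 146.2 (dB)

# A 60 dB excess corresponds to a factor of 1,000 in amplitude:
print(10 ** (60 / 20))  # 1000.0
```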

G
 
Jul 9, 2018 at 9:54 AM Post #38 of 103
With 24bit, that's a practical impossibility in the first place. With 16bit, then typically no, "using all the bits" is undesirable as it's simply way too much dynamic range for consumers, by as much as 1,000 times, and nothing in real life achieves that amount of dynamic range without being hazardous to hearing anyway, hence one of the reasons why compression is employed.

G

My idea of 'using all the bits' implies keeping all or most of the audio signal as hot as possible, with peaks at 0 dBFS and average levels just below. Typical of modern pop releases. In some cases, is that done to maximize bit usage, or just loudness?
 
Jul 9, 2018 at 9:56 AM Post #39 of 103
I haven't found that any particular genre of music is easier to compress than any other. It's not like a symphony orchestra is harder to encode than a flutophone.

Wellllll, it is, depending on the content, codec and its settings. A harmonic series of highly periodic (and thus highly predictable), sine-like signals is far “easier” (smaller resulting payload) to parametrically encode than highly chaotic audio (the other extreme; not very predictable), when given a low, fixed bit rate/budget.
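Real lossy codecs are more sophisticated, but the predictability point can be illustrated with a plain lossless compressor (zlib) as a stand-in: one second of a perfectly periodic sine shrinks to a fraction of the size of one second of noise at the same length and bit depth.

```python
import zlib
import numpy as np

fs = 44100
n = np.arange(fs)  # one second of 16-bit samples

# Highly periodic/predictable: a 441 Hz sine repeats exactly every 100 samples
sine = (0.5 * np.sin(2 * np.pi * 441 * n / fs) * 32767).astype(np.int16)

# Highly chaotic/unpredictable: white noise at a comparable level
noise = (np.random.default_rng(0).uniform(-0.5, 0.5, fs) * 32767).astype(np.int16)

sine_bytes = len(zlib.compress(sine.tobytes()))
noise_bytes = len(zlib.compress(noise.tobytes()))
print(sine_bytes < noise_bytes)  # True: the predictable signal needs far fewer bytes
```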

…have you ever heard of anyone creating music recordings specifically to favor lossy artifacting?

I know a rather well-known mastering engineer who prefers truncation to redithering as he likes the “edge” (a.k.a. artifacts) it creates, allowing his product to stand out during distribution. I’m sure he’s not alone. Dither is not lossy encoding, but his method is to create interesting-sounding trash where none existed. When you starve/overload a lossy codec, all you end up with is one or more bands being additionally suppressed. So, not a very good outcome, but we all need our Gladwellian 10,000 hours to know it’s not the most prudent choice.
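The trade-off behind that choice can be sketched in numbers: quantizing without dither leaves a smaller error that is correlated with the signal (distortion, the "edge"), while TPDF dither trades it for a slightly higher but benign, decorrelated noise floor. A rough numpy sketch at a hypothetical 8-bit word length for visibility:

```python
import numpy as np

fs, bits = 48000, 8
q = 1.0 / 2 ** (bits - 1)  # one LSB, with full scale at +/-1.0
x = 0.3 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)

# No dither: round straight to the nearest quantization step
no_dither = np.round(x / q) * q

# TPDF dither: sum of two independent +/-0.5 LSB uniform noises, added before rounding
rng = np.random.default_rng(1)
tpdf = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * q
dithered = np.round((x + tpdf) / q) * q

rms_nd = np.sqrt(np.mean((no_dither - x) ** 2))  # roughly q/sqrt(12): small, but signal-correlated
rms_d = np.sqrt(np.mean((dithered - x) ** 2))    # roughly q/2: louder, but plain noise
print(rms_nd < rms_d)  # True: dither buys linearity at the cost of a higher noise floor
```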

The purpose of lossy is to be audibly transparent with a sufficient data rate.

Yes, that was the design goal, but audibly transparent? I have yet to hear a current lossy codec of any flavor that’s actually transparent. I do think highest-rate LAME VBR sounds decent, though not transparent by any means. As with all reproductions of a performance, it’s the knowledge and expectations of the artist and engineering team behind a track that creates the facsimile, and the consumer’s suspension of disbelief when listening to the result. :wink:

O.K., since we are waaaay off the original subject, I’ll toss this in…another lossy codec we all sometimes overlook is the data compression occurring during consumption, namely Bluetooth. Even the current and still rare HD version is an improvement but isn’t transparent. Serial encode/decode cycles through two different codecs every time you play back a track make for less than perfect reproduction. Of course, that’s why we all use lossless!
 
Jul 9, 2018 at 10:01 AM Post #40 of 103
My idea of 'using all the bits' implies keeping all or most of the audio signal as hot as possible, with peaks at 0 dBFS and average levels just below.

Hey Truth,
Savvy engineers never do 0 dBFS anymore thanks to the almost universal adoption of (slightly) more conservative True Peak measurement standards, which prevent DAC overload on playback.
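For anyone wondering what True Peak catches that plain sample-peak metering misses: a sine whose samples all read about -3 dBFS can still reconstruct to 0 dBFS between the samples. A small illustrative sketch, with FFT zero-padding standing in for a DAC's reconstruction filter:

```python
import numpy as np

def oversample(x, factor):
    """Approximate the reconstructed waveform by zero-padding the
    spectrum (ideal-lowpass upsampling)."""
    X = np.fft.rfft(x)
    padded = np.zeros(len(x) * factor // 2 + 1, dtype=complex)
    padded[: len(X)] = X
    return np.fft.irfft(padded, n=len(x) * factor) * factor

# Full-scale sine at fs/4, phased so every sample falls between waveform peaks
n = np.arange(64)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

sample_peak = np.max(np.abs(x))               # ~0.707, i.e. about -3 dBFS
true_peak = np.max(np.abs(oversample(x, 8)))  # ~1.0, i.e. 0 dBFS
print(round(sample_peak, 3), round(true_peak, 2))  # 0.707 1.0
```

Normalize that signal to "0 dBFS" by sample peak and the reconstructed waveform clips by about 3 dB, which is exactly the overload a True Peak meter is designed to flag.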
 
Jul 9, 2018 at 10:03 AM Post #41 of 103
Hey Truth,
Savvy engineers never do 0 dBFS anymore thanks to the almost universal adoption of (slightly) more conservative True Peak measurement standards, which prevent DAC overload on playback.

True musical peak, yes, I saw the YouTube for that plug-in! :)

But try selling that over on GearSlutz to that 'ISPs don't matter' crowd! lol
 
Jul 9, 2018 at 10:33 AM Post #42 of 103
Just answer this question: Even if YOU don't engage in the practice, is it remotely possible that some engineers, at the mixing and mastering stages, max things to full scale with the goal of 'maximizing bit use' or 'using all the bits' at whatever bit depth they are operating in?


I'll answer this: we call those people mixing in the "dumb-ass-zone" (-4 to 0 dBFS). Mastering engineers bring it up to full scale and into a limiter for digital delivery that has a poor signal-to-noise ratio. The DACs are so bad at reproducing the signal, we have to have the mastering engineer make it loud. Is that where we mix at? No! -12 to -10 dBFS for most. I like to break the new ones in by making them do a mix that is -22 dBFS nominal with peaks at -20 dBFS. Then I tell them to normalize it to -0.5 dBFS. Then I play it and let them hear what headroom is. Then I tell them, "You will not ever get to do a great mix like this anymore, for people are into quantity and not the quality, depth, and realism you get in a mix like this; they will want the opposite (unfortunately)."
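The normalize step in that exercise is a single static gain change, nothing more; the mix's internal dynamics are untouched, which is the point of the lesson. A minimal sketch, with a hypothetical 440 Hz tone standing in for a mix:

```python
import numpy as np

def db_to_lin(db):
    return 10 ** (db / 20)

def normalize_peak(x, target_dbfs=-0.5):
    """Apply one static gain so the highest sample peak lands at target_dbfs."""
    return x * (db_to_lin(target_dbfs) / np.max(np.abs(x)))

# Stand-in "mix" with peaks at -20 dBFS
t = np.arange(48000)
mix = db_to_lin(-20.0) * np.sin(2 * np.pi * 440 * t / 48000)

out = normalize_peak(mix)  # peaks now at -0.5 dBFS; same waveform, just louder
gain_db = 20 * np.log10(np.max(np.abs(out)) / np.max(np.abs(mix)))
print(round(gain_db, 1))  # 19.5
```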
 
Jul 9, 2018 at 10:37 AM Post #43 of 103
Hey Truth,
Savvy engineers never do 0 dBFS anymore thanks to the almost universal adoption of (slightly) more conservative True Peak measurement standards, which prevent DAC overload on playback.
Never changed the way we did it; the fancy meters just change scale readouts, and most of us don't look at meters at all when we mix.
 
Jul 9, 2018 at 11:02 AM Post #44 of 103
I'll answer this: we call those people mixing in the "dumb-ass-zone" (-4 to 0 dBFS). Mastering engineers bring it up to full scale and into a limiter for digital delivery that has a poor signal-to-noise ratio. The DACs are so bad at reproducing the signal, we have to have the mastering engineer make it loud. Is that where we mix at? No! -12 to -10 dBFS for most. I like to break the new ones in by making them do a mix that is -22 dBFS nominal with peaks at -20 dBFS. Then I tell them to normalize it to -0.5 dBFS. Then I play it and let them hear what headroom is. Then I tell them, "You will not ever get to do a great mix like this anymore, for people are into quantity and not the quality, depth, and realism you get in a mix like this; they will want the opposite (unfortunately)."

But that brings in the client side: is the artist, producer, or label demanding this level of loudness? Where does the "dumba$$" factor really lie?
 
Jul 9, 2018 at 11:46 AM Post #45 of 103
It's the labels competing on amplitude. The big labels that are owned by the CD player manufacturers released a whole bunch of CDs that were louder than the others. Then other people's sales dropped, and they thought it was because of this loudness, not because of the album having been in circulation for the past few decades. So others followed suit to be competitive. Thousands of CDs were pulled out of circulation, even overnight, so that new versions replaced them with 6-10 dB of gain against a 20:1 limiter.
 
