What are the benefits of Balanced?
Mar 30, 2022 at 8:08 AM Post #16 of 32
Isn’t EQ usually applied subtractively in the digital domain to avoid clipping? It would have to stay under zero if it’s digital. For analog headroom to be effective you’d need analog signal processing pushing it over zero. Anything digital would clip, wouldn’t it? Do professional sound studios use analog sweetening past the digital stage? That wouldn’t be EQ; it would be wire reverbs or slapbacks. That’s pretty specialized, and I’d bet most studios would just use a DSP to do those effects digitally.
No, EQ is not applied subtractively when live. Headroom is added. Internally, inside the DSP, that is easy mathematically. But once it goes back to analogue, headroom is needed, just like at the ADC.
 
Mar 30, 2022 at 8:09 AM Post #17 of 32
The analogue signal out of a mixer usually corresponds to 0dBFS = 20-24dBu, making the reference of 0dBV or +4dBu (depending on the standard/manufacturer) equivalent to around -20dBFS. This is still headroom.
What analogue out of a mixer? A digital mixer has a digital input, a digital output and processes in the digital domain; there is no analogue out. An equivalence between dBFS and 0VU (+4dBu) only occurs when there is an analogue output, the output of the DAC for example. 0VU represents the optimal level for an analogue signal, i.e. a level above that will likely incur analogue distortion. This is not the case with digital: there is no distortion until 0dBFS, at which point there is total distortion.
Digital compressors need to take a signal larger than their output (assuming simple downward compression) if they are to do their job. Take the example of 4:1 compression with the threshold set at 0dBu: an input of 8dBu will output 2dBu, an input of 16dBu will output 4dBu, etc. If there is not enough headroom it will just hard limit.
Firstly, a digital compressor does not take an input of 8dBu (which is a measurement of an analogue signal), it obviously takes a digital input. Even using your figures and considering an analogue compressor: if the input is 8dBu and the output is 2dBu, why do we need any headroom? The output signal is 6dB lower than the input signal, not higher. Likewise with the 16dBu input signal, where the output is 12dB lower.
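To make the arithmetic concrete, here's a minimal sketch (Python, plain dB levels only, not a real-time implementation) of the static curve of a downward compressor, using the 0dBu threshold and 4:1 ratio from your own example. Note the output level is never higher than the input level:

```python
# Static curve of a simple downward compressor, using the 4:1 ratio and
# 0dBu threshold from the quoted example. Levels are plain dB numbers.

def compressed_level(level_db: float, threshold_db: float = 0.0,
                     ratio: float = 4.0) -> float:
    """Return the output level (dB) for a given input level (dB)."""
    if level_db <= threshold_db:
        return level_db                      # below threshold: unchanged
    overshoot = level_db - threshold_db      # dB above the threshold
    return threshold_db + overshoot / ratio  # reduced by the ratio

print(compressed_level(8.0))   # 2.0 -> 8dBu in, 2dBu out, 6dB *lower*
print(compressed_level(16.0))  # 4.0 -> 16dBu in, 4dBu out, 12dB lower
```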
There are many other examples of compressors and limiters, but we need to allow the user some freedom.
Yes, but in every case the output of a compressor or limiter is lower than the input, so you don't need any headroom. This is also true of a digital compressor.
Simple answer: EQ. The user adds a bass boost and it will go above the original signal, so you have to have headroom above that. If you are working live, you cannot have the output signal go down to accommodate that.
Of course you can, that's what the input gain/trim is for.
There are plenty of other, more complex DSP effects which add signal.
A bass boost or any other DSP effect that adds to the amplitude of the signal will only clip the signal if you add so much that the output hits 0dBFS and obviously you can avoid that by reducing the input signal where/if necessary. But this is in the digital domain, what's this got to do with a calibration to an analogue line level of 0VU?
0dBFS in the digital domain needs to include the headroom.
0dBFS does because that's digital clipping but anything below that does not.

G
 
Mar 30, 2022 at 8:24 AM Post #18 of 32
No, EQ is not applied subtractively when live.
Yes it is, although sometimes it is applied additively.
Headroom is added.
No, it's not, or it doesn't need to be. Let's say we have an input signal that peaks at -0.1dBFS. We want to add 6dB of bass boost, so we reduce the amplitude of the EQ's input by 6dB and apply our 6dB boost; our headroom is pretty much zero and is the same for both the input and the output. In practice though, we probably wouldn't have an input signal peaking at -0.1dBFS unless we'd already compressed or limited it and added gain, because the output from the mic-pre to the recorder (or mixer) would already have significant headroom, as explained in my first response.
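In other words (a toy sketch of that gain staging in Python; the numbers are the made-up ones from above, and the worst case assumes the peak falls within the boosted band):

```python
# Gain staging for a 6dB bass boost on a signal peaking at -0.1dBFS:
# trim the EQ's input down by the boost amount first, so the worst-case
# output peak lands back where the input peak was, still below 0dBFS.

input_peak_dbfs = -0.1
boost_db = 6.0

eq_input_peak = input_peak_dbfs - boost_db  # -6.1dBFS into the EQ
worst_case_out = eq_input_peak + boost_db   # back to -0.1dBFS at worst

assert worst_case_out <= 0.0                # no digital clipping
print(eq_input_peak, worst_case_out)        # -6.1 -0.1 (approximately)
```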
Internally, inside the DSP, that is easy mathematically. But once it goes back to analogue, headroom is needed, just like at the ADC.
No, the analogue output doesn't need any headroom and neither does the ADC.

G
 
Mar 30, 2022 at 9:32 AM Post #19 of 32
I guess I don’t understand how an EQ boost in the digital domain could go beyond zero without clipping. And if the audio isn’t clipping in digital, I would assume that the output to analog would be below zero too. My experience is that headroom beyond peak level is a thing for analog recording and mixing, not digital. I haven’t seen it used since the 80s when we were working with 24 track tape. With digital, the noise floor is so low, you just build in a buffer at the top by working well below zero so you never need to go beyond that. At least that’s the way I’ve always seen it done.
 
Mar 30, 2022 at 10:08 AM Post #20 of 32
I guess I don’t understand how an EQ boost in the digital domain could go beyond zero without clipping.
Theoretically it can, because floating point math is typically used, although at some stage in the chain you'll have to reduce the level below 0dBFS to avoid clipping the DAC. In practice it's best to consider 0dBFS as max and not to exceed it.
And if the audio isn’t clipping in digital, I would assume that the output to analog would be below zero too.
Careful here, 0 in digital audio is a different thing than 0 in analogue audio. 0(VU) in analogue represents the optimal level, 0(dBFS) in digital represents the absolute peak level. There is effectively nothing above 0dBFS but there is above 0VU, the signal plus an increasing amount of analogue distortion, increasing until total saturation is reached.
My experience is that headroom beyond peak level is a thing for analog recording and mixing, not digital. I haven’t seen it used since the 80s when we were working with 24 track tape.
Headroom is the amount of dB between the peak level of the signal and the peak level of the system. Typically when recording we have to allow significant headroom, in digital or analogue, because we don't know what the peak level of the signal (output from the mic pre-amp) is going to be. Usually in digital we'll aim for around 12-20dB of headroom to account for the musicians playing louder during the take than during the sound check. Of course, once we've recorded the signal we know what the peak level is and we no longer need any headroom; we can add gain or any other DSP until we get to 0dBFS, at which point we have to reduce the input level (to the digital desk or DSP).

Analogue is a bit different because there's quite a big gap between 0VU and absolute system peak level, however that "headroom" isn't headroom if we think in terms of distortion. If we want a clean signal then 0VU is effectively our peak system level; in these terms, that "headroom" above 0VU is an increasing amount of added distortion. In practice we frequently used at least some of this headroom, because a bit of tape saturation or other analogue distortion was typically euphonic and used almost ubiquitously in popular music genres. This is the main reason why analogue mixing/mastering remained the default standard for most popular music genres well into the 1990s and early 2000s.
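Put as arithmetic (a trivial Python sketch; the specific levels are illustrative only, not from any standard):

```python
# Headroom is simply the gap in dB between the signal's peak level and
# the system's peak level (0dBFS in digital). Illustrative figures only.

def headroom_db(signal_peak_dbfs: float, system_peak_dbfs: float = 0.0) -> float:
    return system_peak_dbfs - signal_peak_dbfs

# While recording we don't know the peaks yet, so we leave a margin:
print(headroom_db(-18.0))  # 18.0dB of headroom during the take

# Once recorded, the true peak is known and gain can be added freely:
print(headroom_db(-0.3))   # 0.3dB left after maximising the level
```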

G
 
Mar 30, 2022 at 10:23 AM Post #21 of 32
This discussion is about balanced connections, which are analogue. Your discussion of digital mixers is only relevant if you ignore analogue connections, which you cannot in this discussion or in real life. At some point it starts with analogue and ends with analogue.

So when we interface with analogue in pro audio, 0dBFS is chosen to be a few dB above your signal level, to ensure that EQ, compression, expansion, reverb, chorus, bass synthesis are all handled correctly. EBU recommends 18dB of headroom, and SMPTE 20dB. These are just for line level; for mic inputs it varies a lot. However, it seems many cheap manufacturers skimp on this because they will not pay for the higher dynamic range ADCs that allow decent headroom.
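For reference, the calibration arithmetic behind those figures (a sketch assuming the common alignments: EBU aligns 0dBu to -18dBFS, i.e. 0dBFS = +18dBu, and SMPTE aligns +4dBu to -20dBFS, i.e. 0dBFS = +24dBu):

```python
# Sketch of the line-level calibrations behind the 18dB/20dB figures,
# assuming the common alignments: EBU, 0dBFS = +18dBu; SMPTE, 0dBFS = +24dBu.

def dbu_to_dbfs(level_dbu: float, full_scale_dbu: float) -> float:
    """Convert an analogue level in dBu to dBFS for a given calibration."""
    return level_dbu - full_scale_dbu

print(dbu_to_dbfs(0.0, full_scale_dbu=18.0))  # EBU:   0dBu  -> -18dBFS
print(dbu_to_dbfs(4.0, full_scale_dbu=24.0))  # SMPTE: +4dBu -> -20dBFS
```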

But then you know this. Why do you have to argue a corner case that is unrelated to the OP question just to have the last word?

And before you say it: yes, AES-EBU digital connections are balanced, but again irrelevant here.
 
Mar 30, 2022 at 11:03 AM Post #22 of 32
This discussion is about balanced connections, which are analogue. And before you say it: yes, AES-EBU digital connections are balanced, but again irrelevant here.
It's not irrelevant, AES digital connections are balanced for exactly the same reason as analogue connections, SNR.
Your discussion of digital mixers is only relevant if you ignore analogue connections, which you cannot in this discussion or in real life.
Of course you can. We can output from a digital mixer to a digital audio file. How do you think we make a digital audio mix/master? Now have a look at some of your commercial digital audio music files and tell me how much headroom they have. Also, you're the one who brought up DSP limiters, how is a digital signal processor going to work unless it's given a digital signal and what do you think it outputs?
At some point it starts with analogue and ends with analogue.
Yes, we obviously have to calibrate the digital signal level to some analogue signal level.
So when we interface with analogue in pro audio, 0dBFS is chosen to be a few dB above your signal level, to ensure that EQ, compression, expansion, reverb, chorus, bass synthesis are all handled correctly.
No, once we calibrate 0dBFS it's fixed, we don't choose it to be anything else. What we choose is a mic pre-amp level so the signal is some 12-20dB below 0dBFS and that is because we don't know exactly how loud the musicians are going to play. Once we've recorded them, obviously we do know the peak levels of that recording and we don't need any headroom for DSP or for final output, so long as we stay below 0dBFS.
EBU recommends 18dB of headroom, and SMPTE 20dB.
No, you are confusing calibration level with headroom. The EBU (R128) specifies -1dBTP max level, therefore just 1dB headroom, SMPTE doesn't specify any at all, ATSC specifies -2dBTP max.

G
 
Mar 30, 2022 at 11:18 AM Post #23 of 32
Your first point? See my last. AES-EBU doesn't need a lot of SNR, because it's digital. Bandwidth helps jitter performance if you don't have a studio-wide master clock, but noise doesn't mess it up.

We are saying the same thing from different points of view. You say the reference point is several dB below 0dBFS; so do I. Both represent headroom above the reference. When everything is done you can eliminate the headroom on the digital master. But that isn't the OP's question. Balanced signals are often larger than the 0.3V to 2Vrms consumer levels because headroom is wanted.

Then you say I'm wrong. So how do you get to accuse others of being trolls in other threads?
 
Mar 30, 2022 at 12:11 PM Post #24 of 32
AES-EBU doesn't need a lot of SNR, because it's digital.
It does if you're running AES/EBU over a long distance, up to its specified 100m.
We are saying the same thing from different points of view. You say the reference point is several dB below 0dBFS; so do I. Both represent headroom above the reference.
We agree, from a different point of view, up to here.
When everything is done you can eliminate the headroom on the digital master.
No, you can eliminate headroom when the recording is done, before even the mixing is done and long before the master is completed.
Balanced signals are often larger than the 0.3V to 2Vrms consumer levels because headroom is wanted.
But that's my point, headroom isn't needed after that initial recording, dynamic range is needed but that's about SNR, not headroom. If, as you state, we use balanced pro-audio line level for headroom and headroom can be eliminated on the digital master, why do we still use balanced pro-audio line level after mastering?

G
 
Mar 30, 2022 at 12:49 PM Post #25 of 32
For the other reasons I stated in my first post in this thread.
 
Mar 30, 2022 at 3:41 PM Post #26 of 32
I'm going to try and use your words and see if I can get my concept across better.

Theoretically it can, because floating point math is typically used, although at some stage in the chain you'll have to reduce the level below 0dBFS to avoid clipping the DAC. In practice it's best to consider 0dBFS as max and not to exceed it.

You say floating point math makes it possible for EQ to boost above the normal clipping point within the digital domain, but when you output it, it has to be reduced to absolute peak level to avoid clipping in conversion from digital to analog. Since the output would have to be below digital absolute peak level, any additional headroom in the analog stage after the DAC wouldn't be used. That last sentence was what I was trying to say. Is that correct?

Careful here, 0 in digital audio is a different thing than 0 in analogue audio. 0(VU) in analogue represents the optimal level, 0(dBFS) in digital represents the absolute peak level. There is effectively nothing above 0dBFS but there is above 0VU, the signal plus an increasing amount of analogue distortion, increasing until total saturation is reached.

I was only talking about digital zero in my post, the absolute peak level, when I was talking about the signal within the digital domain. When I said analog headroom, I meant the ability to go beyond analog zero and "burn in" peaks (that was the term the engineers I worked with back in the 80s used).

My point was, if the output of the DAC at the end of the digital chain is below clipping, that is your absolute peak level. So any analog headroom beyond that wouldn't be used unless you were doing some sort of analog processing or sweetening after the export of the mix. Is that correct? I can't imagine that analog processing after the digital bounce down would be common at all.
 
Mar 30, 2022 at 3:55 PM Post #27 of 32
We are saying the same thing from different points of view.

I've found that a lot of the misunderstandings with Gregorio come about because of the way he defines words. He has very specific definitions, which only makes sense because he's a professional and needs precision to communicate with other engineers. I tend to think conceptually, so I have to bounce around until I figure out the words he's using to describe specific ideas and then use those words. If I focus on my concept and use my own words for the ideas, he reads what I say literally according to his own definitions, and that gives an impression that I'm saying something I'm not really saying. That sends him off down a digression answering an argument I'm not even making. That seems to be the source of a lot of the frustration. It gets worse if you dig in your heels and get argumentative. I'd rather learn from him the correct way to precisely express a concept than argue with him about semantic definitions when we're both on the same page conceptually.
 
Mar 31, 2022 at 4:07 AM Post #28 of 32
You say floating point math makes it possible for EQ to boost above the normal clipping point within the digital domain, but when you output it, it has to be reduced to absolute peak level to avoid clipping in conversion from digital to analog.
Yes. In fixed point the maximum possible value is all the bits set to “1” (0dBFS), so any part of a signal that tries to exceed that value is set to the same all “1s” state and is clipped/lost. In floating point, the decimal point is movable because a number of bits are used to encode the exponent, so the maximum value is theoretically something like +1500dBFS. Of course, at some stage we’ve got to write a 24 or 16bit (fixed) audio file or convert the signal in a DAC and again cannot exceed 0dBFS. The practical difference between fixed and floating point in mixing/mastering is that floating point allows us to recover a signal that has exceeded 0dBFS, fixed point doesn’t, it’s gone forever.
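A toy illustration of the difference (Python floats standing in for the DSP's floating point samples, with full scale at ±1.0):

```python
# Two samples exceed full scale (1.0). Clipping to the fixed-point range
# destroys them; in floating point they survive and can simply be
# attenuated back below full scale before the DAC or the file write.

samples = [0.5, 1.4, -1.2, 0.9]

fixed = [max(-1.0, min(1.0, s)) for s in samples]  # fixed point: clipped
floating = [s * 0.7 for s in samples]              # float: just turn it down

print(fixed)     # [0.5, 1.0, -1.0, 0.9] -> peaks gone forever
print(floating)  # approx [0.35, 0.98, -0.84, 0.63] -> intact, below 1.0
```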
Since the output would have to be below digital absolute peak level, any additional headroom in the analog stage after the DAC wouldn't be used. That last sentence was what I was trying to say. Is that correct?
Correct.
When I said analog headroom, I meant the ability to go beyond analog zero and "burn in" peaks (that was the term the engineers I worked with back in the 80s used).
Yes, I vaguely remember that term. If I remember correctly, “burn in” was a metaphor for overloading/saturating analogue tape.
My point was, if the output of the DAC at the end of the digital chain is below clipping, that is your absolute peak level. So any analog headroom beyond that wouldn't be used unless you were doing some sort of analog processing or sweetening after the export of the mix. Is that correct?
That is correct.
I can't imagine that analog processing after the digital bounce down would be common at all.
Actually it’s very common, not in the digital bounce down of course but in the B-Chain (monitoring chain), where EQ (room) correction is typically applied. This EQ must either be applied subtractively or the signal must be attenuated prior to the EQ.
He has very specific definitions, which only makes sense because he's a professional and needs precision to communicate with other engineers.
Yep, time is money and miscommunication can cost a great deal of it. So we need precise common terminology, not just between engineers but also between producer and engineers. Same is true for pro musicians, conductors and producers. It’s possible that’s the case here (with @jagwap). For example, “headroom” is that portion at the top of the dynamic range which is not used. Once you do use it (have some signal in that portion), by definition it is no longer headroom.
For the other reasons I stated in my first post in this thread.
The only reason you gave in your first post for why pro-audio line level is so much higher was to avoid compression of unpredictable live signals by typically allowing 16-20dB of headroom. This is not really correct, as I explained in my first response. You are ignoring, amongst other things (such as when in the chain the signal is no longer unpredictable), that pro-audio line level predates digital and 16-20dB headroom is only applicable to 24bit digital recording, not to the original 16bit. Pro-audio line level is about SNR and therefore dynamic range, not headroom.

G
 
Mar 31, 2022 at 4:14 AM Post #29 of 32
Actually it’s very common, not in the digital bounce down of course but in the B-Chain (monitoring chain), where EQ (room) correction is typically applied. This EQ must either be applied subtractively or the signal must be attenuated prior to the EQ.

Yeah I meant after the digital bounce down... some sort of analog audio processing done at the very end that is then laid back over the original digital track. I can't see that being done much at all.
 
Mar 31, 2022 at 5:15 AM Post #30 of 32
Yeah I meant after the digital bounce down... some sort of analog audio processing done at the very end that is then laid back over the original digital track. I can't see that being done much at all.
It can still happen and for many years it was standard practice. For example the final mix is bounced down, given to the mastering engineer who then applies some additional compression or other effect using a vintage (analogue) compressor. Obviously the digital mix has to be converted to analogue first and then back to digital after processing. Often these days the mastering is all done “in the box” using digital compressors/effects but it’s not uncommon to find a piece of vintage analogue gear being used in some genres.

G
 
