Testing audiophile claims and myths
Jun 29, 2019 at 5:11 PM Post #13,021 of 17,336
If the one on the right is digital, I don't understand the chart. With digital, you've got nothing at all above zero, so if you want to include headroom, you would have to push the limit of the headroom down to zero... which would result in a chart that looks exactly like the one on the left, just 20dB lower.

I'm not clear on what "headroom" would be in digital. But I'm not a tech head, so I may be misunderstanding it.

The caption says that the right is analog, left is digital. Wikipedia says that digital still has headroom because the "0" (nominal level) is not the loudest peak hit during a song. There are also various standards with different nominal levels:

https://en.wikipedia.org/wiki/Headroom_(audio_signal_processing)
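If it helps, the way I read that definition, digital "headroom" is just the distance between whatever nominal (alignment) level a standard picks and 0 dBFS. A quick sketch using two commonly cited alignment levels (my numbers from the standards as I understand them, not anything from Katz's charts):

```python
# Sketch: digital "headroom" as the gap between a nominal level and 0 dBFS.
# The alignment levels below are the commonly quoted EBU/SMPTE figures;
# treat them as illustrative rather than authoritative.
NOMINAL_LEVELS_DBFS = {
    "EBU R68": -18.0,
    "SMPTE RP 155": -20.0,
}

for standard, nominal_dbfs in NOMINAL_LEVELS_DBFS.items():
    headroom_db = 0.0 - nominal_dbfs  # distance up to digital full scale
    print(f"{standard}: nominal {nominal_dbfs} dBFS -> {headroom_db:.0f} dB headroom")
```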

I would assume that audio is similar to photography, in that a digital medium still relies on an analog-to-digital converter on capture and, ultimately, a digital-to-analog converter for playback. With photography, a RAW file records all of the sensor data, from the noise floor up to the saturation point. Digital cameras can record a higher dynamic range than displays and prints have been able to reproduce, so you can optimize contrast by adjusting contrast curves (or tone mapping). To create a rendered image, they also pick the black point and white point on the tonal scale (and each camera brand uses different values: some have cleaner ADCs and can use a lower black point, while others aim for higher saturation). I can imagine how much more complicated things get when the recording system also depends on time, and there can be swings in dynamic range.
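To make the photo analogy concrete, here's a rough sketch of the kind of black-point/white-point remap I mean (the numbers are invented for illustration, not any real camera's values, and real raw converters use curves rather than a straight line):

```python
import numpy as np

# Hypothetical linear tone map: clip raw sensor values to a chosen black/white
# point, then rescale that range onto an 8-bit output scale.
def tone_map_linear(raw, black_point=512, white_point=15800):
    clipped = np.clip(raw.astype(np.float64), black_point, white_point)
    normalized = (clipped - black_point) / (white_point - black_point)
    return np.round(normalized * 255).astype(np.uint8)
```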
 
Jun 29, 2019 at 8:09 PM Post #13,022 of 17,336
I guess I don't understand what normalization does under the hood. I always thought 100% was raising the peaks to the edge of clipping. I assumed that was zero.
 
Jun 29, 2019 at 9:30 PM Post #13,023 of 17,336
You are correct....
(But you must remember that "normalizing" is a very general term which really just means "adjusting a bunch of stuff to all be at some standard level".)
Normalization is usually applied to audio signals such that the loudest peaks are raised to 0 dB (or some level near it).
Looked at one way, normalization is the process of very carefully eliminating unnecessary headroom.

The purpose of headroom is to act as a safety margin.
The reason you leave headroom in a recording is so that, if a few peaks are a little louder than you thought they would be, your recording won't clip.
However, because the noise floor occurs at some fixed level, when you reduce the overall level to leave headroom, you reduce the S/N ratio.
(Not to mention the fact that, if you're talking about a radio station, your broadcast becomes a tiny bit less loud compared to your competitors.)
Therefore, once your content is finalized, the final step is often to raise the overall level such that the loudest peaks are "as high as they can be without clipping".
(At this point, since you know there will never be higher peaks, there is no reason to leave headroom... so it is removed in order to optimize the S/N.)
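To put some made-up numbers on that trade-off, measured against a fixed noise floor:

```python
# Made-up figures illustrating the headroom vs. S/N trade-off described above.
noise_floor_dbfs = -90.0          # fixed noise floor of the medium
peak_while_tracking = -12.0       # loudest peak, leaving 12 dB of safety margin
peak_after_normalizing = -0.3     # same peak once the final level is raised

snr_with_headroom = peak_while_tracking - noise_floor_dbfs        # 78.0 dB
snr_after_normalizing = peak_after_normalizing - noise_floor_dbfs  # 89.7 dB
print(snr_with_headroom, snr_after_normalizing)
```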

In the old days, you would carefully adjust the levels manually until the loudest peaks sat just under the maximum safe level you could use.
However, with an old-style analog system, that actual maximum point might be somewhat vague - for example, with tape or vinyl, due to record EQ, the maximum level varies with frequency.
Therefore, you would have to run through the entire recording, carefully note the highest peaks, then leave a little extra headroom to allow for any slight errors you might have made.
With a digital system, since you actually have the numerical value of every sample, there is a very well defined maximum level - "0 dB".

In most digital editors, when you select "normalize", you actually get to enter a number.
The program then scans the entire file, finds the highest peak level in it, and raises the overall level such that the peak is adjusted to become the level you selected.
(So, if you "normalize to 0 dB", the program scans the file, finds the highest peak level, then raises the level of the entire file equally such that the loudest point is at a level of 0 dB.)
However, in some cases, you might decide to normalize to a different value.
For example, some digital systems actually do have issues at exactly 0 dB, so many editors prefer to avoid actually going to 0 dB, and normalize to -2 dB or -3 dB.
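If it helps to see that operation spelled out, here is a minimal sketch, assuming floating-point samples in the -1.0 to +1.0 range (a real editor would also worry about dither and inter-sample peaks):

```python
import numpy as np

def normalize_peak(samples, target_dbfs=-2.0):
    """Scan the whole file for its highest absolute sample, then apply one
    uniform gain so that peak lands exactly at target_dbfs (0.0 = full scale)."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples                           # silent file, nothing to scale
    target_linear = 10.0 ** (target_dbfs / 20.0)
    return samples * (target_linear / peak)
```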

The idea of normalizing can also be viewed at different levels.
For example, you could normalize the levels of all the songs on a CD such that the highest peak on each song is at -2 dB.
Or, instead, you could normalize the level of the entire CD so that the loudest peak on the loudest song is at -2 dB.
In that case, you would say that the entire CD was normalized to -2 dB, but the individual songs were not normalized, because you prefer to preserve the level differences between them.
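The album-level version is the same idea with one shared gain (again, just a sketch):

```python
import numpy as np

def normalize_album(tracks, target_dbfs=-2.0):
    """Find the loudest peak across every track, then apply the SAME gain to
    all of them, so the level differences between songs are preserved."""
    album_peak = max(np.max(np.abs(t)) for t in tracks)
    gain = (10.0 ** (target_dbfs / 20.0)) / album_peak
    return [t * gain for t in tracks]
```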

But, yes, saying that "you had normalized the level to 100%" is normally equivalent to saying that you "normalized to maximum level" or "normalized to 0 dB"....
I suppose someone might use the expression to mean that they had "completely" applied the process of normalizing the levels...
(As compared to "partially normalizing the levels" - which could mean "moving the levels closer to being fully normalized while still leaving some variation".)

 
Jun 29, 2019 at 9:43 PM Post #13,024 of 17,336
I just don't see what this is pointing at. If the recording has been mixed and mastered, and you have a digital file of the song, you normalize it up to just under 100%. You don't need headroom because the peak has been established. You don't have to guess what it is. It's right there in the track established and fixed. Headroom has nothing to do with it.

I can see maintaining headroom when you record, because you can't totally predict how high the peaks will go. But that isn't what our friend SonicTruth is referring to. He's trying to shoehorn in his pet subject again - hot mastering. With mastering, the peak level is established in the mix. They compressed the old Stones singles to make them sound loud and opaque the same way they do with current pop songs. It doesn't matter if it's digital or analog. Compressed and loud is the same in either case. I don't see what he's talking about... and I don't understand those charts.

The way I always understood it was that with digital, you build in headroom BELOW zero, not above it. With tape, you can burn in peaks above zero because the distortion doesn't sound as bad. I don't see how this is reflected in those charts.
 
Jun 29, 2019 at 10:01 PM Post #13,025 of 17,336
I've had those same thoughts after staring at the chart for a while. Also, the first chart he put up was quite a bit different from the second chart, which confuses me even more. My gut feeling is we'd have to read the actual pages in the books and not just look at the pictures to get it, as unfortunate as that may sound. Or have a recording engineer explain it to us.
 
Jun 29, 2019 at 11:02 PM Post #13,026 of 17,336
I just don't see what this is pointing at. If the recording has been mixed and mastered, and you have a digital file of the song, you normalize it up to just under 100%. You don't need headroom because the peak has been established. You don't have to guess what it is. It's right there in the track established and fixed. Headroom has nothing to do with it.

At least the sources I've seen show that there's still headroom with digital systems (+24 dB with 24-bit masters, for example). There might be performance and standardization reasons why you don't set the nominal level to your highest peak. I see that Wikipedia's headroom chart starts above 100 dB for live/mic levels, with master files at +24 dB and CDs slightly less than +20. Loudspeakers can apparently have very little headroom, so perhaps if you didn't leave headroom in the source files, there could be loudness issues with particular reproduction systems (which is where Keith is coming from with audio reproduction systems). There are also apparently quite a few different standards for the nominal level. I'll look forward to Gregorio's explanations.

I'm also wondering if my more technical understanding of digital photography is a good analogy for audio. Video files now have dynamic ranges that exceed normal consumer displays. 8 bits per channel (256 tones of color) was the standard with analog displays and most photos on paper. Now high dynamic range is all the rage for consumer 4K video. The best consumer displays are starting to approach 10 bpc color (1024 tones), while regular digital intermediate video files have been 12 bpc (4096 tones). The best video RAW now comes from some RED cameras that can reach 16 bit (65,536 tones, fully realized by their sensors). There are now standards that try to automatically normalize higher dynamic ranges down to lower ones. Dolby has pretty much defined itself in the home 4K market: Atmos is becoming the standard 3D audio format, and Dolby Vision is the first 12-bit format (it can take the same DR as an intermediate file and tone-map it to the display's calibrated DR).
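(For anyone checking my tone counts, they are just two raised to the bit depth:)

```python
# The tone counts quoted above are 2 to the power of the bit depth.
for bits in (8, 10, 12, 16):
    print(f"{bits} bits per channel -> {2 ** bits} tones")
# 8 -> 256, 10 -> 1024, 12 -> 4096, 16 -> 65536
```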

Just as an aside, since I'm doing more music listening with headphones and loudspeaker listening with my home theater setup, I am finding new remasters in Atmos/DTS:X to have some pretty interesting immersion. Even though my previous 7.1 system had a good surround field in the X/Y plane, I find the new mixes for 3D don't just emphasize heights. I've also noticed older movies can have more ambient sounds going to the sides and in back of me. Case in point, Apollo 13's DTS:X remix: the music had some added depth, with instruments and reverb going on in my surrounds (with 7.1.4). There were also scenes in offices where they had typewriter sounds all in back of me (more immersive and pronounced than what I've heard with the previous lossless surround).
 
Jun 30, 2019 at 12:05 AM Post #13,027 of 17,336
I just don't see what this is pointing at. If the recording has been mixed and mastered, and you have a digital file of the song, you normalize it up to just under 100%. You don't need headroom because the peak has been established. You don't have to guess what it is. It's right there in the track established and fixed. Headroom has nothing to do with it.

I can see maintaining headroom when you record, because you can't totally predict how high the peaks will go. But that isn't what our friend SonicTruth is referring to. He's trying to shoehorn in his pet subject again - hot mastering. With mastering, the peak level is established in the mix. They compressed the old Stones singles to make them sound loud and opaque the same way they do with current pop songs. It doesn't matter if it's digital or analog. Compressed and loud is the same in either case. I don't see what he's talking about... and I don't understand those charts.

The way I always understood it was that with digital, you build in headroom BELOW zero, not above it. With tape, you can burn in peaks above zero because the distortion doesn't sound as bad. I don't see how this is reflected in those charts.

I understand Bob's charts because I learn better via visualization than via acres of text. You are probably the opposite of me.

The left chart simply represents various sources/tracks, of differing dynamic range (or PLR), normalized so that they all peak at, or a fraction of a dB below, 0 dB full digital scale. Instead of being 'commercials' or 'movie soundtracks' or 'news broadcast' or whatever, imagine, for the purpose of the discussion, that all of those items in the left-hand graph are SONGS - of widely varying dynamic ranges - on a CD. Since they are peak-normalized, whoever is listening to that album will have to adjust the volume up or down at the beginning of each song, depending on their actual sequence.

Now, imagine the right-hand graph representing that SAME album, but all songs on it LOUDNESS normalized - either with the help of a loudness meter plugin, or even by the engineer just using their ears, playing each one at a time, or several simultaneously, until no one song seems to stick out in the ensuing cacophony.

Mastered that way, such a CD could be played with the listener setting their volume once, during the first track, and perhaps never having to adjust it again - except to turn it down to take a phone call, or perhaps because their concentration is needed while driving through a construction zone.
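If it helps to see the two approaches side by side in code, here's a rough sketch - plain RMS stands in for "loudness" here; a real loudness-normalization pass would use an ITU-R BS.1770 / LUFS meter instead:

```python
import numpy as np

def rms_dbfs(samples):
    """Average level of a track, with RMS standing in for a proper loudness measure."""
    return 20.0 * np.log10(np.sqrt(np.mean(samples ** 2)))

def loudness_normalize(tracks, target_dbfs=-20.0):
    """Give every track its own gain so they all measure the same average level.
    The peaks land wherever they land, but the listener sets the volume once."""
    out = []
    for track in tracks:
        gain_db = target_dbfs - rms_dbfs(track)
        out.append(track * 10.0 ** (gain_db / 20.0))
    return out
```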

Make sense now, Biggie? :wink:
 
Jun 30, 2019 at 12:19 AM Post #13,028 of 17,336
Also, the first chart he put up was quite a bit different from the second chart,

Yes, they were of different colors, and one did not identify each source.

A HUUUGE difference - had me on the floor gasping for air!

 
Jun 30, 2019 at 12:56 AM Post #13,029 of 17,336
The left chart simply represents various sources/tracks, of differing dynamic range (or PLR), normalized so that they all peak at, or a fraction of a dB below, 0 dB full digital scale. Instead of being 'commercials' or 'movie soundtracks' or 'news broadcast' or whatever, imagine, for the purpose of the discussion, that all of those items in the left-hand graph are SONGS - of widely varying dynamic ranges - on a CD. Since they are peak-normalized, whoever is listening to that album will have to adjust the volume up or down at the beginning of each song, depending on their actual sequence.
How do you infer this? The caption says this is from given albums (certainly not every SONG). That would mean that, with this normalization, there won't be such dramatic swings in volume to adjust for from song to song.
 
Jun 30, 2019 at 1:07 AM Post #13,030 of 17,336
How do you infer this? The caption says this is from given albums (certainly not every SONG). That would mean that, with this normalization, there won't be such dramatic swings in volume to adjust for from song to song.

So you're saying that in the left-hand example, if, as I suggested, one imagines each bar in the graph to represent a group of songs, such as an album, the listener would NOT have to adjust the volume between songs??
 
Jun 30, 2019 at 1:24 AM Post #13,031 of 17,336
So you're saying that in the left-hand example, if, as I suggested, one imagines each bar in the graph to represent a group of songs, such as an album, the listener would NOT have to adjust the volume between songs??

I'm thinking you're the only one to infer the left hand chart is representing individual songs.
 
Jun 30, 2019 at 1:39 AM Post #13,032 of 17,336
I'm thinking you're the only one to infer the left hand chart is representing individual songs.

Nowhere in that caption does Katz suggest or imply what those bars represent.

But in the left example, whether the bars represent individual songs or entire albums, from left to right within that graph those songs or albums get progressively louder and louder.
 
Jun 30, 2019 at 1:47 AM Post #13,033 of 17,336
Nowhere in that caption does Katz suggest or imply what those bars represent.

But in the left example, whether the bars represent individual songs or entire albums, from left to right within that graph those songs or albums get progressively louder and louder.

Did you miss the sentence in the caption that says "A standardized average level yielded fairly consistent loudness from album to album."?
 
Jun 30, 2019 at 1:54 AM Post #13,034 of 17,336
The caption says that the right is analog, left is digital. Wikipedia says that digital still has headroom because the "0" (nominal level) is not the loudest peak hit during a song. There are also various standards with different nominal levels:

https://en.wikipedia.org/wiki/Headroom_(audio_signal_processing)

That page is horrible. The simple fact that they list different "headroom" for different bit depths is flawed: digital has the same headroom no matter the bit depth. It has been said that "footroom" would be a better term. In music recording, headroom is something you leave yourself for peaks, so if they are trying to explain that with 24 bit you can record at a lower level to leave yourself more headroom, that would be correct. In things like film, the levels are more standardized to keep dialog at around -18 dBFS, which leaves 18 dB for effects like car chases, explosions and so on. In the chart, is the "speaker" a person speaking or a loudspeaker? It looks more like the dynamic range of a person than a loudspeaker. Then they throw in a few EBU level standards and don't really explain any of it.
 
Jun 30, 2019 at 2:35 AM Post #13,035 of 17,336
That page is horrible. The simple fact that they list different "headroom" for different bit depths is flawed: digital has the same headroom no matter the bit depth. It has been said that "footroom" would be a better term. In music recording, headroom is something you leave yourself for peaks, so if they are trying to explain that with 24 bit you can record at a lower level to leave yourself more headroom, that would be correct. In things like film, the levels are more standardized to keep dialog at around -18 dBFS, which leaves 18 dB for effects like car chases, explosions and so on. In the chart, is the "speaker" a person speaking or a loudspeaker? It looks more like the dynamic range of a person than a loudspeaker. Then they throw in a few EBU level standards and don't really explain any of it.

Where is your source that all digital systems have the same headroom? I think the link I quoted is good enough to show how different the standards are. Given that "footroom" isn't a recognized term, I'm not really sure what standard you would recognize.
 
