Testing audiophile claims and myths
Jun 23, 2019 at 9:13 AM Post #13,006 of 17,336
[1] I thought YT, Spotify, and other streaming services already had their own loudness standards, like -12 or -16 LUFS or something. Automatic levelling would occur when viewers played stuff off them. ... I for one hate it when I'm listening to a YT playlist and constantly have to adjust the volume after every 1-2 songs, or get blasted out by a commercial (commercials were the worst thing ever to happen to YT!)
[2] And how would standardized loudness "kill" streamers?
1. You are contradicting yourself: If YT had implemented a loudness standard then you would not "constantly have to adjust volume after every 1-2 songs", you would never have to adjust the volume because it would be the same between all the songs and commercials! YT does reportedly have their own loudness standards (which equate to about -13 LUFS) but obviously they haven't applied this to everything, exactly how and when they apply it isn't known. Also, when loudness normalisation is applied, it would have to be applied during ingest, not playback.
2. Most home/amateur videos are made with the mics built into the recording device (digital camera, mobile phone, etc.) and then posted with little/no audio processing, which almost invariably results in a relatively high noise floor. Applying loudness normalisation automatically on ingest would require adding a considerable amount of compression and make-up gain, which would considerably raise that noise floor and render the dialogue (and/or other wanted sound) of most home videos unintelligible. The other option would be for the public to get the necessary audio tools, learn how to use them and apply their own loudness normalisation before posting, but that's a lot of time/effort; they just want to post their home videos/vlogs, not learn audio post. Neither option would be acceptable, and therefore if loudness normalisation were imposed, video distribution platforms which rely on content supplied by consumers would be "killed"!
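To put rough numbers on that (the figures below are purely illustrative assumptions, not measurements from any real platform):

```python
# Illustrative only: how loudness-normalising a quiet, noisy home video on
# ingest drags its noise floor up with it. All figures are assumptions.
target_lufs = -13.0          # hypothetical platform loudness target
dialogue_lufs = -30.0        # quiet phone/camera recording
noise_floor_dbfs = -55.0     # hiss and room noise in that recording

makeup_gain_db = target_lufs - dialogue_lufs               # +17 dB of gain needed
new_noise_floor_dbfs = noise_floor_dbfs + makeup_gain_db   # now only -38 dBFS

print(makeup_gain_db, new_noise_floor_dbfs)
# Any compression used to keep the peaks legal before that gain is applied
# narrows the dialogue-to-noise distance even further.
```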
[1] To borrow this diagram, from the book of a mastering engineer whose hand I have shaken and whom I hold in higher regard than most (sorry Greg, sorry Ludwig, and Lord-Alge)
[2] - the right-hand diagram represents (in principle) the loudness situation during the pre-digital VU analog meter era (basically from the birth of metering up to the early 1980s)
[2a] ... and also the ideal we must aim for and return to.
[3] Imagine that the '-24' figure and associated red line in the right-hand version represent the zero on any typical VU meter manufactured between WW2 and the '80s.
[3a] That zero, and the 2 dB above and below it on a typical VU, is the sweet spot where both producers and deliverers of broadcast content kept the needles on their recorders, mixers, and master output faders, effectively guaranteeing loudness consistency 90% of the time...

[4] The left diagram, of course, represents the path digital has taken since its advent ...
1. Clearly, just shaking someone's hand does not mean you have any understanding of what they're saying!!

2. No it does NOT, you actually have this backwards! The left-hand diagram represents the loudness situation of both the VU (analogue) meter era and the digital era before loudness normalisation. The right-hand diagram represents the era of loudness normalisation, which cannot be achieved with a VU or any other analogue meter and can only be done in the digital era!
2a. So the peak (or "quasi-peak" in the case of a VU and other QPPMs) level specifications, as opposed to the loudness specifications, are the "ideal" that we must aim to return to? The "ideal" that encouraged huge amounts of compression and other processing, which allow the audio to sound louder without increasing the peak (or quasi-peak) levels. This is the exact opposite of what you've previously incessantly argued: that the "ideal" is less compression and more dynamic range!

3. I have great difficulty in "imagining that" because a typical VU, or any other type of meter manufactured between WW2 and the '80s, cannot measure loudness, ONLY quasi-peak levels!
3a. Firstly, if producers kept the needles within plus or minus 2 dB of the VU zero point, then the end result would have no more than 4 dB of dynamic range, which would in most cases require pretty heavy compression to maintain. Secondly, it would absolutely NOT "guarantee loudness consistency 90% of the time"; a VU meter does not measure loudness, it measures quasi-peak levels, and it's mainly due to this fact that TV commercials were so much louder than the programmes!!
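A quick toy illustration of that last point (not from the book being discussed, just made-up signals): two signals can produce exactly the same peak reading while having very different average levels, which is roughly the gap a loudness measurement captures and a peak/quasi-peak meter does not.

```python
import numpy as np

fs = 48000
t = np.arange(fs * 5) / fs

# Two signals with an identical sample peak of 0.5 (about -6 dBFS)...
dynamic = 0.5 * np.sin(2 * np.pi * 440 * t) * (t / 5)    # quiet most of the time
squashed = 0.5 * np.sign(np.sin(2 * np.pi * 440 * t))    # heavily "compressed"

for name, x in [("dynamic", dynamic), ("squashed", squashed)]:
    peak_db = 20 * np.log10(np.max(np.abs(x)))
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    print(f"{name}: peak {peak_db:.1f} dBFS, rms {rms_db:.1f} dBFS")
# Both read the same on a peak meter; the squashed one is ~8 dB "louder".
```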

Additionally, I presume you're talking about Bob Katz? Bob is an extremely knowledgeable, experienced and highly respected professional music mastering engineer, however, he's not a film or TV re-recording mixer. The charts you posted are not correct (or you are posting them out of context). Cinema movies do not have any loudness normalisation, but if you take theatrical mixes and measure their LUFS value, they should on average come out at around -32 LUFS, not -24 LUFS. However, this is a rough average and varies significantly; a big action/war blockbuster will obviously have a higher LUFS value than, say, a gentle period drama (which is likely to be more around -35 LUFS). This is one of the most compelling current arguments against applying loudness normalisation to cinema movies: filmmakers do not want a drama film to be the same loudness as, say, the loudest action-packed war films.

4. and finally, NO, it doesn't!
Meters with a loudness-ballistic (vs. peak-based) algorithm should be mandatory on pocket digital recorders, digital mixing consoles, and digital processors. A peak indicator could be provided, which remains green when peaks are below a specific threshold, turns yellow, then orange, and then red at the onset of clipping. 0 itself already has several suggested standards, ranging from -16 LUFS down to -24.
With this post and the post I've responded to above, you are demonstrating that you have absolutely no idea what loudness normalisation actually is, or how it works! Loudness normalisation/specification is based on a series of measurements of the signal, which has had a filter (that roughly corresponds to a loudness contour) and gating applied, and then all these short-term measurements are averaged over the duration of the programme to arrive at the LUFS (or in the case of the ATSC, LKFS) value.
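For anyone curious what that actually looks like, here's a minimal, simplified sketch of the idea (the K-weighting filter is omitted and only the commonly quoted gate values are used, so treat it as illustrative rather than a compliant BS.1770/R128 meter):

```python
import numpy as np

def integrated_loudness(x, fs, block_s=0.4):
    """Rough sketch: short overlapping blocks, mean-square per block,
    an absolute gate to drop silence, a relative gate to drop very quiet
    blocks, then an average over the whole programme."""
    n = int(fs * block_s)                        # 400 ms blocks
    hop = n // 4                                 # 75% overlap
    z = np.array([np.mean(x[i:i + n] ** 2)
                  for i in range(0, len(x) - n, hop)])
    l = -0.691 + 10 * np.log10(z + 1e-12)        # per-block "loudness"
    z, l = z[l > -70.0], l[l > -70.0]            # absolute gate at -70 LUFS
    rel = -0.691 + 10 * np.log10(np.mean(z)) - 10.0   # relative gate, -10 LU
    return -0.691 + 10 * np.log10(np.mean(z[l > rel]))
```

Once that single programme value exists, normalisation itself is just a static gain of (target minus measured) dB, which is why the measurement has to cover the whole programme (or at least a running estimate of it) before the correct gain is known.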

Your suggestion is therefore clearly nonsense! What "loudness-ballistic algorithm" could there possibly be that accurately measures what hasn't happened and hasn't been recorded yet and then creates an average which includes those measurements? There are only two possible ways of achieving a particular LUFS/LKFS value: A. To do so in audio post, after all the recording has been completed and can therefore be measured/analysed, or B. To provide a running total of what the LUFS is so far, highly restrict the peak levels and dynamic range of what will be recorded and then, with relatively minor real-time adjustments, the LUFS value can be maintained. It should be obvious that "B" is the only option for the live broadcast of sports events, but it requires "setting the rack" (as you put it) just so, and "easing off" the compression would make it more difficult or impossible to comply with the LUFS spec/legal requirement.

[1] I can’t believe all of the different sound options in all of my devices. I never appreciated it until I started digging through menus the last few days. It seems so complex for a normal person to get their arms around. Not easily understood or well documented.
[2] Someone just tell me what button to press already. :confused: Right now I have most everything on every device set to “auto” and I just hope it’s all playing nice together.

1. It is complicated for the consumer and under the hood, it's far more complicated.

2. What button to press depends on what you want. "Auto" will typically give you what the average consumer wants: the average consumer with a decent/average sound system in a decent/average listening environment. If it's late at night and you don't want to disturb the neighbours/children, choosing a higher compression scheme, often called "night mode" or something similar (and a lower volume), will probably give you what you want. If on the other hand you've got a better than decent/average consumer system and listening environment, don't have to worry about disturbing anyone else and want the full dynamic range actually contained in the Dolby datastream, then you should turn off the dialogue normalisation and compression and turn up your volume to suit. Incidentally, "off" for dialogue normalisation is normally represented as "-31", and you should check this after you start the film/programme, as some AVRs will override your settings with the settings in the Dolby metadata. Lastly, you'll have to experiment: even if this is what you want, depending on the individual mix it might not always give better results, plus exactly what your AVR is doing, and what it is calling these parameters, is often not easy to fathom!
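As a rough illustration of why "-31" reads as "off" (this assumes the usual AC-3/Dolby Digital convention; check your own AVR's documentation):

```python
def dialnorm_attenuation_db(dialnorm_db):
    """Sketch of AC-3-style dialogue normalisation: the decoder turns the
    whole programme down so that the dialogue level indicated by the
    metadata lands at -31 dBFS. A dialnorm of -31 therefore means no
    level change, which is why it effectively behaves as 'off'."""
    return dialnorm_db - (-31)       # e.g. -27 -> 4 dB of attenuation

print(dialnorm_attenuation_db(-31))  # 0 dB: no change
print(dialnorm_attenuation_db(-24))  # 7 dB: programme turned down by 7 dB
```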

[1] I have never seen it in frequency response curves (always exceptions of course, but I mean in the general sense) of speakers/playback equipment/recorded material, and why would it be necessary in the first place - [1a] i.e. why would studios include that emphasis and then the consumer industry design around it?
[2] Is there a specific standard around it for studios and consumer manufacturers to follow?
[2a] What about mastering destined for vinyl, where the format inherently emphasises the mid-bass due to the inaccuracies of that medium? Wouldn't it make it worse?

1. Virtually no consumer speakers are full range (20Hz - 20kHz), even very good ones tend to roll-off quite severely by about 40Hz and start rolling-off around 50Hz - 60Hz, and of course, most consumers don't own "very good speakers". Many consumer speaker manufacturers slightly boost the bass freqs to try to compensate for bass freqs their speakers can't reproduce at all or can't reproduce powerfully enough. Also, many consumers add more bass with their tone controls. I'm sure you probably know consumers who do this, but how many consumers do you know who reduce the bass on their 2-speaker stereo systems?
1a. No, it's the other way around. Mastering is the process of taking the studio mix and adjusting it to sound as intended when played back by the consumer. If the consumer (or their equipment) is adding bass, then the mastering process needs to take this into account, by, for example, adding a similar amount of bass to the monitoring system/environment.

2. No, there are no standards for music studios, recording or mastering studios. There are standards for film sound mixing studios, but they are only applicable to cinemas, not home consumers, and home consumers typically don't get the theatrical mix anyway. Many mastering studios are flat, but then some/much of the mastering process occurs at very high playback levels, which increases the perception of the amount of bass anyway. However, some do have a "house curve", and many recording/mix studios do.
2a. No, it would make it better. If the mastering studio had a "house curve" with a raised mid bass, then the master would contain less mid-bass and compensate for the "emphasised mid-bass" on vinyl. I've never mastered specifically for vinyl though, so I can't say exactly what they did/how they did it. Mastering specifically for vinyl pretty much died out many years ago, most vinyl today (and for the last couple of decades or so) is pressed from the same master as the digital releases and if the RIAA curve is applied at all, it's typically done by the pressing plant rather than the mastering engineer.
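Since the RIAA curve gets mentioned: for reference, a small sketch of the standard RIAA playback (de-emphasis) response built from its published time constants, normalised to 0 dB at 1 kHz. This is the cut/playback equalisation, a separate issue from any "house curve".

```python
import numpy as np

def riaa_playback_db(f):
    """Magnitude of the RIAA playback (de-emphasis) curve from the
    standard time constants: 3180 us, 318 us and 75 us."""
    t1, t2, t3 = 3180e-6, 318e-6, 75e-6
    w = 2j * np.pi * np.asarray(f, dtype=float)
    h = (1 + w * t2) / ((1 + w * t1) * (1 + w * t3))
    return 20 * np.log10(np.abs(h))

f = np.array([20.0, 100.0, 1000.0, 10000.0, 20000.0])
rel_db = riaa_playback_db(f) - riaa_playback_db(np.array([1000.0]))
# Roughly +19 dB at 20 Hz and about -19.6 dB at 20 kHz, relative to 1 kHz.
```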

Since upgrading my home theater receiver to a new Denon, I've also noticed it has settings labelled "cinema" that can slightly roll off the treble: with "Cinema EQ" their reasoning is that there can be higher treble in a home system vs a theater's center channel that's behind a screen.

Yes, the treble can effectively be too high in a theatrical mix ... due to the use of the x-curve. Your system would therefore in theory also have to roll-off the treble (as per the x-curve) in order to get a perceptually flat response. Of course though, this assumes you are reproducing the theatrical mix, which typically isn't the case, typically you're reproducing a BluRay or TV mix (which are not made with the x-curve in the monitoring chain). Unfortunately, it can be difficult to know, as even films/versions listed as "Original Theatrical Cut" or "Mix" quite commonly aren't. Personally, I'd leave that setting "off", the x-curve is actually quite a complex thing, in that it depends on the HF perception of reproduced sound in a large room (such as a cinema) and therefore even if you are listening to a theatrical mix, trying to compensate for the x-curve might not be appropriate in your relatively small room, hence why I said "in theory" above. If you're interested in more than my oversimplified assertion/explanation, try this article, it's short, easy to understand and accurate.

G
 
Jun 23, 2019 at 11:37 AM Post #13,007 of 17,336
I think you misunderstand the situation at a very basic level.

1) It is the advertisers, and not the viewers, who are the "paying customers" for a TV or radio show.
1a) The paying customers (the advertisers) want their commercials to reach the most viewers overall.
1b) The significant metrics are how many people watch the show and how many people watch the show to the end (called a "stickiness" rating)... and NOT how happy they are.
(That only matters if they become so unhappy that they change the channel.)
(In the case of NASCAR - most people will watch to the end because they want to see who wins.)
2) The advertisers want their commercials to stand out and grab your attention... whether that means being funnier, being more pleasing, showing sexier spokespersons, or simply playing louder.
(And they want to be especially sure you hear their commercial when you wander into the next room to grab a beer during that commercial break.)

Obviously, if those annoying commercials were annoying enough to actually get people to stop watching the show, then there would be sufficient motivation to "fix" the situation.
Or, if you could actually convince the advertisers that you would avoid purchasing their product as a protest against their annoying commercials, that would also convince them.
However, the reality is that they already know, from long experience, that this rarely if ever happens.
In fact, historically, the most effective commercials have been either the ones everyone likes, or the ones that are "so annoying everybody remembers them"...

In most major brand advertising the goal is usually something called "brand recognition".
This means that they want you to remember the name of their product - so it seems familiar the next time you go shopping.
In this context, a commercial that you remember because you hate it is almost as effective as one that you remember for some other reason.
And, apparently, in reality, people rarely avoid buying a product because they find the commercials annoying (more often they grumble about the ads - then buy the product anyway).
(Are you really going to stop watching NASCAR because the level of the commercials annoys you? Really?)
Unfortunately, this means that, to the advertiser, annoying the audience isn't necessarily a bad thing - unless you can convince them that it will actually cost them viewers.

I suspect you'll find that the few public service announcements on NPR stations are played at relatively reasonable levels.
That's because their listeners really are their paying customers (because you presumably have some influence over their funding).

I should point out something else....
For most people I know the volume level of commercials is NOT their most objectionable characteristic.
Here's what most people I know find most annoying about commercials:
1) The sheer number of them (between 14 and 21 minutes out of every hour)
2) "Bottom thirds" (those annoying commercials that play during the show - that you can't avoid because they play at the same time as the parts you want to see).

Don't even, Keith!

I've been jarred by commercials coming out of a NASCAR race in progress - green flag conditions, not caution, mind you.

And what I was illustrating with those graphs is that such nonsense can be fixed - if enough viewers complain, and if TV stations want to.
 
Jun 23, 2019 at 12:39 PM Post #13,008 of 17,336
1. You are contradicting yourself: If YT had implemented a loudness standard then you would not "constantly have to adjust volume after every 1-2 songs", you would never have to adjust the volume because it would be the same between all the songs and commercials!

Not all content in YT playlists(self-created or ones I have favorited) has been loudness normalized. In some playlists, I have to rush to turn down the volume for an outlier song or commercial.

does NOT, you actually have this backwards!

You say that all the time to me! It's a political tool, used to deflect attention from the truth. You're good Greg! You should run for office.

The charts you posted are not correct

Tell Mr. Katz that - they're from his book.


That's why, at the mastering engineers resort retreat, there are two pools: One containing Katz & Diament, and another containing you and all the rest. :wink:
 
Jun 23, 2019 at 2:25 PM Post #13,009 of 17,336
Not all content in YT playlists(self-created or ones I have favorited) has been loudness normalized. In some playlists, I have to rush to turn down the volume for an outlier song or commercial.

I don't think YouTube normalizes anything. They just take what you upload. If it works in a playlist, that's because the recording was mastered to work well in shuffle mode.

Based on the diagram I just posted, it is possible for both heavily-DRC'd material and highly dynamic content to coexist. Meters with a loudness-ballistic(vs. peak based) algorithm should be mandatory, on pocket digital recorders, digital mixing consoles, and digital processors.

I'm not talking about how sound is represented graphically. I'm talking about mastering for a specific purpose. Sound in movies in the 40s was compressed to optimize it for the expected venue and playback system. It's no different today. If most people are listening to music with phones and earbuds on the street, you don't want to master it the same as for high-end home stereos in silent listening rooms.
 
Jun 23, 2019 at 6:24 PM Post #13,010 of 17,336
When you right-click on a video and select "Stats for nerds", an overlay box shows up, and one of the things displayed there is how much the volume is reduced:
[Screenshot: "Stats for nerds" overlay on the Adele - Skyfall video]

In the attachment https://cdn.head-fi.org/a/10310926.zip there is a 30-second clip of the downloaded stream (Opus converted to FLAC, in case your player doesn't support Opus), so you can compare the volume from the YouTube player to the actual volume "baked" into the file. For me the difference is clear. Here's the YouTube link:


BTW isn't it strange that an audio forum doesn't allow uploading audio files?
 

Jun 23, 2019 at 7:52 PM Post #13,011 of 17,336
Interesting. That's a neat trick. Thanks!
 
Jun 23, 2019 at 10:50 PM Post #13,012 of 17,336
Yes, the treble can effectively be too high in a theatrical mix ... due to the use of the x-curve. Your system would therefore in theory also have to roll-off the treble (as per the x-curve) in order to get a perceptually flat response. Of course though, this assumes you are reproducing the theatrical mix, which typically isn't the case, typically you're reproducing a BluRay or TV mix (which are not made with the x-curve in the monitoring chain). Unfortunately, it can be difficult to know, as even films/versions listed as "Original Theatrical Cut" or "Mix" quite commonly aren't. Personally, I'd leave that setting "off", the x-curve is actually quite a complex thing, in that it depends on the HF perception of reproduced sound in a large room (such as a cinema) and therefore even if you are listening to a theatrical mix, trying to compensate for the x-curve might not be appropriate in your relatively small room, hence why I said "in theory" above. If you're interested in more than my oversimplified assertion/explanation, try this article, it's short, easy to understand and accurate.

G

Yeah, I've left "Cinema EQ" off as I notice it doesn't make a whole lot of difference with most movie tracks, and I could be using the same format for streaming TV series and music. Interestingly, there's also the room calibration software from Audyssey that, apart from adjusting levels and crossovers for all speakers, also has "Multi-EQ" (which has its own frequency curves). It defaults to "Reference", which it says has another slight roll-off in the high frequencies to be optimized for movies. It says "Flat" is better for small rooms, but I notice it more noticeably has more upper mids (and probably a flatter response that's better for all uses). It might also be that I notice more and more that thunderous bass is an ideal for action movie fans. It is kind of funny how some people say just to use the automatic calibration with the included mic and Audyssey setup, while others say you should do some of your own calibration. I have bought an SPL meter and a laser measuring device to manually set speaker distance and levels for each speaker. When I've switched between my measured levels and Audyssey's default distance/levels, the overall difference in volume has seemed slight (though I think my measurements have helped keep the center channel clear and equalize my height speakers).

Interesting to read in your article about the stark differences with cinema and room treatments. You bring up another topic over theater masters vs home media. I would have thought some of the new standards in 4K/HDR/3D audio would mean home releases would need less attention to re-authoring. My background is more video, and I know cinema standards have had intermediates that are true 2K or 4K cinema resolutions (along with 10-12 bit color). With 4K video, it's been reliant on H.265 to be able to have more file compression. I'm sure there's also more compression with streaming services vs UHD discs. But since more consumer products are UHD and support Dolby Vision, I would have assumed there might be less effort for re-scaling and adjusting contrast for 8-bit HD. I had also assumed the audio chain might be easier with movies first being mixed for cinema and then home systems having the same surround processor (i.e. Atmos supporting 128 tracks): especially since even streaming now supports Atmos (albeit on a DD+ core instead of a disc's TrueHD core). There are still many movies made with 2K or 3K intermediates (since 4K requires a lot of computing power: especially scenes with 3D VFX)... so scaling to UHD resolution is a given. I would think color grading now is similar with cinema vs home media, so a lot of the re-authoring is setting resolution and optimal compression. I assume an Atmos mix for cinema would have the same object information for the home master, but that certain EQ curves could be applied for the frequency differences with cinema vs home theater. I've also noticed Netflix is having more and more of their series and movies in Atmos: I assume since it's direct to home, they don't ever need to apply these cinema standards.
 
Jun 24, 2019 at 3:26 AM Post #13,013 of 17,336
1. Yes, that is somewhat naive. Each has somewhat different requirements for hi-fi reproduction. For example, movies have what's called the "x-curve", which is applied to the monitoring (B-chain) during mixing and reproduction (although this isn't applicable to home reproduction); TV is essentially flat; and the common/usual trend for several decades with music is for a house curve with a raised bass, but as there are no mandated specifications/requirements (unlike TV and film), this house curve can essentially be whatever the individual studio wants. In general, a slightly raised bass in the consumer reproduction chain would therefore give a higher-fidelity reproduction, and many/most consumer transducers have this built in (for example, it's one of the typical differences between a "speaker" and a "monitor"). A flat response will certainly give a more hi-fi reproduction than whatever is the natural response of speakers + room acoustics in a consumer listening environment but ideally, most of the time, for the highest-fidelity music reproduction, you should achieve a flat response and then raise the bass a little. A flat response is therefore somewhat of an audiophile myth.


G


The X-curve (see SMPTE ST 202) is applied to the measurement window. It is not really an EQ curve. It is based on the average theater size of 500 seats; the X-curve is what was found to be the typical response in testing many 500-seat theaters (it was also developed a long time ago). What it does is attempt to give you the same response in a 100-seat theater or a 2,000-seat theater as it does in the 500-seat one. Dub stages are smaller than 500 seats, so the response is set to mimic a 500-seat theater. So if you are in a small 20-seat room, the air and room are not absorbing the highs and lows as much as the 500-seat room. A 2,000-seat room will be absorbing more than the 500-seat room.
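For illustration, here's a rough sketch of the nominal wide-band X-curve target as it's commonly summarised (flat to 2 kHz, then roughly -3 dB/octave to 10 kHz and -6 dB/octave above); the real SMPTE ST 202 spec includes tolerances, low-frequency limits and the room-size adjustments described above, so this is only a caricature of it:

```python
import numpy as np

def x_curve_target_db(f):
    """Simplified X-curve target: 0 dB up to 2 kHz, about -3 dB/octave
    from 2 kHz to 10 kHz, about -6 dB/octave above 10 kHz."""
    f = np.asarray(f, dtype=float)
    db = np.zeros_like(f)
    mid = (f > 2000) & (f <= 10000)
    db[mid] = -3.0 * np.log2(f[mid] / 2000.0)
    hi = f > 10000
    db[hi] = -3.0 * np.log2(10000.0 / 2000.0) - 6.0 * np.log2(f[hi] / 10000.0)
    return db

print(x_curve_target_db([1000, 4000, 8000, 16000]))  # ~[0, -3, -6, -11] dB
```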
 
Jun 24, 2019 at 6:08 AM Post #13,014 of 17,336
[1] Not all content in YT playlists(self-created or ones I have favorited) has been loudness normalized. In some playlists, I have to rush to turn down the volume for an outlier song or commercial.
[2] You say that all the time to me! It's a political tool, used to deflect attention from the truth. You're good Greg! You should run for office.
[2a] That's why, at the mastering engineers resort retreat, there are two pools: One containing Katz & Diament, and another containing you and all the rest. :wink:

1. If not all content on YT has been loudness normalised then by definition it is not a "standard".

2. You quoted 2 graphs which are clearly labelled "Peak Level Normalization" and "Loudness Normalization", then you effectively relabel them by FALSELY stating they are "Digital Era" and "Analogue Era" respectively, PURELY to suit your own agenda and then you accuse me of deflecting from the truth and playing politics! How ironic and hypocritical is that? Impressive!! The solution is glaringly simple, if you don't want me to "say that to you all the time" then don't "all the time" make assertions that are so false/incorrect that they're actually completely backwards! Your solution of making up falsehoods and then defending them with insults isn't acceptable here and doesn't work anyway, as you've demonstrated a number of times, so why do you persist with such a pointless tactic?
2a. What "retreat" and "pools" would they be, ones you've just falsely made up? "Mastering" and mastering engineers only exist in the music industry, not in the film industry and while Bob is a leading expert in mastering, he's not a leading expert in film sound, in fact he's relatively inexperienced. Finally, there really aren't two pools, that's a gross over-simplification but if there were, I'd actually be in the same one as Bob. So again, you've made-up a falsehood that's actually backwards, the exact opposite of the truth. Thanks for demonstrating my point, again!

[1] It is kind of funny how some people say just to use the automatic calibration with included mic and Audyssey setup, while others say you should do some of your own calibrations.
[2] I would think color grading now is similar with cinema vs home media, so a lot of the re-authoring is setting resolution and optimal compression.
[3] I assume an Atmos mix for cinema would have the same object information for the home master, but that certain EQ curves could be applied for the frequency differences with cinema vs home theater. I've also noticed Netflix is having more and more of their series and movies in Atmos: I assume since it's direct to home, they don't ever need to apply these cinema standards.

1. To be honest, that makes perfect sense to me. Using the automatic calibration is going to provide superior results to just plugging your speakers in and not doing any calibration, as many consumers do. It's also likely to provide better results than someone who tries to do their own calibration but doesn't really know what they're doing. But, if you have the tools, know how to use them and know the facts which are pertinent to home cinema (as opposed to theatrical standards) then you'll generally get a better result than an auto calibration.

2. I'm certainly no expert on colour grading but as far as I'm aware, there are still differences. Colour grading for reflected light (projection) is somewhat different to colour grading for emitted light (displays).

3. Not exactly, there are other differences. For example, differences in peak levels, overall loudness, balance between front and surround speakers and potentially a few others. It is possible to play a theatrical mix on a home system without it sounding terrible, if you've got a better than average home Atmos system but for the best results a separate home Atmos mix would be created. Netflix actually require this as a delivery requirement/specification: "If a 85 db reference theatrical mix is created, two complete sets of deliverables are required. One for theatrical, one for nearfield." - In the "Nearfield Atmos Mix" section. (Netflix Originals Delivery Specifications, v3.2.1)

A 2,000-seat room will be absorbing more than the 500-seat room.

One would intuitively expect to boost the mid/high freqs in a larger room to compensate for the greater air absorption but, counter-intuitively, the x-curve applied to cinemas' B-chain effectively does the opposite. This is mentioned and somewhat explained in the article I referenced in my previous post.

G
 
Jun 24, 2019 at 12:03 PM Post #13,015 of 17,336
Audyssey is good for getting a starting place, but fine tuning might be necessary. I have a Yamaha amp with their in-house room calibration circuit. It did well with some things and terrible with others. I did a lot of measuring and experimenting beyond its settings.
 
Jun 25, 2019 at 4:04 PM Post #13,016 of 17,336
You quoted 2 graphs which are clearly labelled "Peak Level Normalization" and "Loudness Normalization", then you effectively relabel them by FALSELY stating they are "Digital Era" and "Analogue Era" respectively, PURELY to suit your own agenda

Like I said: Tell Katz he's "wrong" - to his face!



There's Bob Katz and Barry Diament - and then there's you and all the others. I think I know who to trust.
 
Jun 26, 2019 at 7:39 AM Post #13,017 of 17,336
[1] Like I said: Tell Katz he's "wrong" - to his face!
[2] There's Bob Katz and Barry Diament - and then there's you and all the others. I think I know who to trust.

1. I would tell Bob he's wrong to his face about cinema sound. There is not now, nor has there ever been, loudness normalisation in film. Have you never been to the cinema to watch a film? Are dramas the same loudness as blockbuster action films? In film we have relative loudness, due to the fact that monitoring levels are standardized, but there is no loudness normalisation. Now what about YOU, are you going to tell Bob "he's wrong to his face"? You stated: "As for TV sound, in general, I find it to be theee most dynamically compressed, not necessarily loudness-processed, but more compressed than even the most recent pop or rock CD release." - However, both Bob (in the graphs YOU quoted) and I stated it's the other way around!

2. There's me, Bob Katz and numerous other experienced professionals - and then there's you. Anyone with even the slightest of rational minds would "know who to trust"! You've demonstrated your typical nonsense, you've interpreted what Bob has written, according to your personal agenda and then posted that interpretation out of context as fact.
Once again, because you don't seem to be getting it: Unlike the music industry, TV has always had standardised levels based on analogue Quasi Peak Programme Meters (QPPMs, like a VU meter). This remained the case long into the digital era, even into the era of DAWs in the late 1990s, with plugin QPPMs (which were programmed with identical scales and ballistics to the previous analogue versions). Although the exact specified peak level varied between different networks, nevertheless peak level normalisation was the ONLY paradigm used up until only about 7 years ago (with the legal requirement to change to loudness normalisation). It was these old peak/quasi-peak normalisation standards which caused the era of much louder TV commercials, and the loudness normalisation standards which cured that issue can ONLY be performed in the digital domain/era. So your whole analogue/digital thing is nonsense as far as TV sound is concerned, as the issue is dramatically improved with digital technology (loudness normalisation). It's amazing that you can misinterpret/apply what's inside Bob's book but that you can't even understand its title! It's called "Mastering Audio: The Art and the Science.", it's NOT called "Re-recording Audio: The Art and the Science"!!

I asked you "Your solution of making up falsehoods and then defending them with insults isn't acceptable here and doesn't work anyway, as you've demonstrated a number of times, so why do you persist with such a pointless tactic?" - And your response is simply to blindly carry on doing exactly what I accused you of (trying to defend your falsehoods), which just makes you look even more foolish/ignorant. Why?

G
 
Jun 29, 2019 at 12:15 PM Post #13,018 of 17,336
Enjoy:

 
Jun 29, 2019 at 2:53 PM Post #13,019 of 17,336
If the one on the right is digital, I don't understand the chart. With digital, you've got nothing at all above zero, so if you want to include headroom, you would have to push the limit of the headroom down to zero... which would result in a chart that looks exactly like the one on the left, just 20dB lower.

I'm not clear on what "headroom" would be in digital. But I'm not a tech head, so I may be misunderstanding it.
 
Jun 29, 2019 at 5:00 PM Post #13,020 of 17,336
In one context headroom is merely a way of defining or describing the characteristics of a circuit.
However, looking at it a different way, it's simply a fiction that you get to define however you like.
Headroom is essentially "the difference between something's rated output and the maximum output it can actually deliver".
And, obviously, it is going to depend on how you choose to call out your ratings.

Let's say you have an amplifier which is perfectly clean up to a level of 2V - and clips at 2V.
You could rate that amplifier at 2V - in which case it would have "no headroom".
You could rate it as having "1V of output" and say it had "1V of extra headroom" (6 dB).
Or, if you were aiming for a huge headroom rating, you could instead choose to rate it as having "0.1V output", in which case it would have LOTS of "headroom".
Those would all be equally reasonable ways of describing the same amplifier.

Now, let's say you have an amplifier that can deliver 100 watts RMS continuously forever...
And can deliver 200 watts RMS for five seconds before it starts to distort (when its power supply runs out of stored power)...
Is it "a 100 watt amplifier with a lot of headroom" or "a 200 watt amplifier with a weak power supply"?
In fact, both answers describe that amplifier equally well.
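For the arithmetic behind those two examples (just the standard dB conversions, nothing specific to any particular amplifier):

```python
import math

def db_from_voltage_ratio(r):
    return 20 * math.log10(r)   # dB from a voltage (or current) ratio

def db_from_power_ratio(r):
    return 10 * math.log10(r)   # dB from a power ratio

print(db_from_voltage_ratio(2 / 1))    # ~6.0 dB: a 2 V ceiling over a 1 V rating
print(db_from_power_ratio(200 / 100))  # ~3.0 dB: 200 W short-term over 100 W
```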

The tricky part is the relationship between modern measurement standards and actual music.
We currently measure amplifiers based on how much power they can deliver continuously.
However, in reality, music is very dynamic, so this is not very representative of what we generally expect an amplifier to do when we use it.
(With typical music, the peak power often exceeds the average power by a factor of 10x or even 20x.)
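That peak-to-average ratio is usually called the crest factor; a quick sketch of how you'd measure it (a 10x-20x power ratio works out to roughly 10-13 dB):

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB for a block of samples."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

fs = 48000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 1000 * t)
print(crest_factor_db(sine))   # ~3 dB for a sine; typical music is far higher
```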

Therefore, arguably, an amplifier that can deliver 200 watts for five seconds at a time really is "a 200 watt amplifier" FOR PURPOSES OF PLAYING MUSIC.
However, designing that 100 watt amplifier with enough headroom to deliver 200 watts short term costs more...
(Note that, by our current measurement standards, we could only describe it or sell it as "a 100 watt amplifier".)
Not many people will be willing to pay as much for it as they would for a 200 watt amplifier because "it has 3 dB of headroom".

In the case of recording technology, headroom is useful as a safety margin.
You record at -20 dB instead of at 0 dB for two main reasons....
1) Because your equipment actually has higher distortion very close to 0 dB - and you wish to avoid it.
2) Because you can't be sure that there won't be occasional peaks above what you expect.

While there is still truth to both of those considerations when making a live recording, or a digital copy of an analog source...
neither is really true in the context of digital audio playback these days.

Unlike tape devices, the THD on most DACs is no higher at -3 dB than at -20 dB...
Therefore, your recording is NOT "going to sound better on most playback equipment if you record it at -20".
And, likewise, most modern solid state amplifiers are quite clean right up to the point where they clip...

Modern digital recordings have a perfectly well known 0 dB point.
I no longer have to worry about "recording a cassette at -20 dB so I avoid distortion and don't miss any clips that might go over"...
Nowadays, I can record a CD so that the loudest peaks reach PRECISELY -3 dB (or whatever number I choose).
Most sane folks prefer to stay at least a few dB below 0 for various reasons - but not all.
There is no uncertainty, no possibility of a short loud peak "getting past the meter", and almost no likelihood that the CD player will distort more at -2 dB than at -20 dB.
(In fact, when I look at that clip, I can hit one button and see precisely and immediately how loud the loudest point is.
And another button adjusts it so the loudest spot in that clip is exactly as loud as it can possibly be - without clipping.)
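A minimal sketch of what that kind of "normalise" button is doing under the hood (sample peaks only; real tools may also look at inter-sample/true peaks):

```python
import numpy as np

def peak_normalise(x, target_dbfs=-3.0):
    """Scale a float signal so its highest sample peak sits exactly at
    target_dbfs, and report where the peak was before the change."""
    peak_dbfs = 20 * np.log10(np.max(np.abs(x)))
    gain_db = target_dbfs - peak_dbfs
    return x * 10 ** (gain_db / 20), peak_dbfs
```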

Therefore, in many situations, there is simply no longer a PURPOSE for leaving lots of headroom.

Feel free to argue that, by leaving such decisions to automatic gadgets, we often end up with results that are artistically somewhat poor...
However, from a practical perspective, for most of us these days headroom is a commodity of limited value.

Many of us still agree that power amplifiers sound best when they have "power to spare" and "plenty of headroom".
HOWEVER, if you want "a 100 watt amplifier with lots of headroom", the solution is simple.
You don't need to find and purchase an expensive "100 watt amplifier with phenomenal amounts of headroom".
Buy a 200 watt amplifier "with no headroom at all" - and just CALL IT "a 100 watt amplifier with lots of headroom".
It really is the same thing.

 
