Testing audiophile claims and myths

Discussion in 'Sound Science' started by prog rock man, May 3, 2010.
  1. dprimary

    I missed your post before. I think we are saying the same thing from opposite directions. It really depends on the room: you can have a large bright room or a dead one. Unlike music studios, where the acoustics and monitors are referenced to flatness, the X-curve is a reference derived from measurements of many theaters (hundreds? thousands? I don't think I've ever seen how many). The theory is that if the mix theater sounds like the average theater (I would say "average response", but it's not really what we consider response today, since it's frequency only), then playback should sound very close to what the engineers mixed. So I design mix theaters to support the X-curve; the other control rooms, which tend to be much smaller, I design to be flat. You could apply an X-curve to one of those, but you would be applying it in a room that falls under small-room acoustics, which the X-curve was not designed for; you get into the same problem as using the X-curve in a home theater. I'm probably about as clear as mud in this post.
     
  2. dprimary
    0dBFS is the same on every piece of equipment and software I have used. I can record the same signal at 8, 16, 24, and 32 bits, and they will all play back at exactly the same level; the only difference will be the noise floor. Bit depth only pushes the noise floor down. There are only two standards on the page, the EBU levels for FM broadcast and for digital broadcast. For FM you compress the signal to 9dB peak-to-nominal; for digital that range can be 18dB. I suppose you could claim 24 bit has 24dB of headroom, since the best analog gear struggles to deliver even 20 bits of resolution. In practice you raise the record level, leaving that 24dB of room down in the noise floor. The page should be labelled "Headroom in EBU broadcast standards" and the chart at the bottom deleted.
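    To put numbers on that, a quick sketch in Python (using the usual full-scale-sine approximation for an ideal converter, so real-world figures with dither will differ slightly):

    Code:
    # Theoretical dynamic range of an ideal N-bit quantizer, using the
    # standard full-scale sine approximation: DR ~ 6.02 * N + 1.76 dB.
    # 0 dBFS is the same at every bit depth; only the noise floor moves.
    for bits in (8, 16, 24):
        dr = 6.02 * bits + 1.76
        print(f"{bits:2d} bit: peak = 0 dBFS, noise floor ~ {-dr:.1f} dBFS")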
     
  3. dprimary
    One would hope the artist listened to the album's song levels as it was mastered, so that the album flows without the listener adjusting the levels. We often lowered individual song levels a few dB to give consistent levels from one song to the next.
     
  4. KeithEmo
    I agree entirely.
    It actually provides what might amount to a few interesting details - if you already understand the context quite thoroughly.

    I think I know a decent non-technical way to explain it......

    Headroom sort of serves the purpose of a shoulder on a mountain road.
    You try really hard to avoid getting onto the shoulder... but it's reassuring to know it's there... and you may occasionally end up there.
    (And it would be really bad to go off the road if the shoulder wasn't there.)

    When you're recording a live performance, or an analog source, you don't know what the exact levels of the loudest spots will be.
    But, in order to optimize the S/N of your recording, you would want the levels set just below the point where the loudest sounds will clip.
    So, what you do is compromise.
    You set the level to where you think the loudest level will be (you can clap your hands or wait for the band to warm up to help you guess).
    Then you set the level down an extra 10 dB (allowing yourself that much of a margin for error about the highest level).
    That safety margin you leave yourself is the headroom.
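    A trivial worked example of that arithmetic, as a sketch in Python (the numbers are hypothetical):

    Code:
    # Hypothetical live-recording gain setting with a safety margin.
    estimated_peak_dbfs = -2.0   # your best guess at the loudest level, from soundcheck
    headroom_db = 10.0           # extra margin in case the band gets louder
    record_level_dbfs = estimated_peak_dbfs - headroom_db
    print(f"Aim for peaks around {record_level_dbfs:.0f} dBFS")   # -12 dBFS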

    However, the situation is very different when you're playing a CD, or a commercially produced audio file.
    You already have all the data to look at.
    You don't need to leave any headroom as a safety margin because you know exactly how loud the loudest spot is...
    And you can pretty well trust that, with anything you read from a CD, the loudest peaks will be somewhere between -2 dB and 0 dB.
    There's no reason to leave a safety margin, and accept the sacrifices that go with doing so, because you absolutely know you won't need it.

    What makes things like TV commercials sound so loud is that they're very often compressed...
    When you apply compression to an audio signal you narrow the difference in loudness between the quietest and the loudest spots.
    By bringing the level of the quietest things closer to the level of the loudest things...
    And then bringing the level of the loudest things to their maximum level...
    You have also increased the level of the quiet things - a lot...
    Therefore, the average level will have gone way up.
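    If anyone wants to see that effect numerically, here's a minimal sketch in Python (a crude instantaneous hard-knee compressor with make-up gain; real compressors use attack/release envelopes, but the level arithmetic is the same):

    Code:
    import numpy as np

    def compress(x, threshold_db=-20.0, ratio=4.0):
        """Crude sample-by-sample hard-knee compressor, then make-up gain
        that brings the loudest samples back to full scale."""
        level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
        over = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)   # reduce only what exceeds the threshold
        y = x * 10 ** (gain_db / 20)
        return y / np.max(np.abs(y))            # make-up gain back to full scale

    t = np.linspace(0, 1, 48000)
    quiet = 0.05 * np.sin(2 * np.pi * 440 * t)   # quiet passage
    loud = 1.00 * np.sin(2 * np.pi * 440 * t)    # loud passage
    x = np.concatenate([quiet, loud])
    y = compress(x)

    def rms_db(s):
        return 20 * np.log10(np.sqrt(np.mean(s ** 2)))

    print(f"before: peak 0 dBFS, average {rms_db(x):.1f} dB")
    print(f"after : peak 0 dBFS, average {rms_db(y):.1f} dB")  # average comes up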

    The whole meme of "loudness wars" is a total misnomer...
    The peak level you can store on a CD hasn't changed...
    Therefore modern CDs aren't any louder than older ones...
    What they are is more dynamically compressed...
    The quiet parts have been made less quiet - which raises the average level - but the loudest spots aren't any louder.

    The catch, however, is that this is NOT some sort of equipment flaw or operator error....
    It is a conscious artistic decision....
    Modern recordings don't sound like that "because the guy who mixed them couldn't get them right"...
    They sound exactly like they're intended to sound...
    (Which, sadly, seems to be what a lot of people these days prefer, or at least accept.)

     
  5. TheSonicTruth
    The above statement neglects, completely, to take into consideration how humans, and presumably other species that hear, actually HEAR.

    Sure, peaks/transients are louder than other sounds. But in the case of music they typically last less than 1/10 of a second, sometimes as little as 10-30 milliseconds. Their presence, relative to the average level of a song, makes folks want to get up and dance to it, or at least nod their head or tap their foot. A recording with a good arrangement plus a higher PLR actually makes one want to crank the volume UP, not turn it down.

    "Therefore modern CDs aren't any louder than older ones."

    Then come to my place, Keith, put on my Run-DMC CD from 30 years ago, and set the volume to your preference. Then switch in Black Eyed Peas' 'Elephunk' (similar rap genre) from last decade, play it back with the volume left where you set it for the older DMC, and see if your ass doesn't get blasted right back into my dining room!

    "What they are is more dynamically compressed....
    The quiet parts have been made less quiet - which raises the average level -"

    That final part is correct:

    The "AVERAGE LEVEL" - which is 90% of what we judge the loudness of something by!! Not the peaks!

    Which totally invalidates your statement about post-2000 albums (CD or download) not being "any louder" than pre-2000 CDs.

    What a load a...
     
  6. Steve999
    Look where the 0dB reference level is on the two sets of charts. And that's just the beginning of it. The two charts are very different. If you can't figure that out, or be more articulate about it, those are problems that are beyond my control.

     
  7. TheSonicTruth
    KeithEmo wrote: "
    The catch, however, is that this is NOT some sort of equipment flaw or operator error....
    It is a conscious artistic decision....
    Modern recordings don't sound like that "because the guy who mixed them couldn't get them right"...
    They sound exactly like they're intended to sound...
    (Which, sadly, seems to be what a lot of people these days prefer, or at least accept.)"

    Of which I am perfectly aware!

    A "conscious artistic decision" to produce a single or an album that sounds like an overloaded, clipped input stage on a cheap portable radio.

    "Intended to sound" exactly like an overloaded, clipped input stage on a cheap portable radio.

    "A lot of people these days prefer, or at least accept" - because they assume the experts know better and that it should be left up to them. A real audio expert would draw the line, apply the principles of gain-staging, and allow the various sections of the amplifier to get the thing as loud as the consumer wants to play it.

    Then, when they hear something that is well-produced, or at least adequately produced, adhering to basic audio engineering principles, it sounds somehow 'wrong' or 'off' to them.
     
  8. KeithEmo
    I think we're in perfect agreement.

     
  9. KeithEmo
    Forget "my preferences"...... it's simply a matter of which definition you use for "loudness"....

    If you use the definition of "perceived loudness" then the new CDs are indeed louder...
    This is the definition someone listening to music, or mastering it, would use.

    However, if you use the definition of "the actual signal levels we need to handle successfully", then it has not...
    This is the definition someone measuring a signal, or determining how much amplifier power or DAC voltage will be required to reproduce it, would use.

    If you look at the actual levels on that vintage Run DMC CD and the latest Black Eyed Peas album....
    You will almost certainly find that the voltage level of the loudest peaks on both is somewhere within two or three dB of 0 dB....
    Therefore the peak loudness on both is the same...
    (And the same amount of amplifier power would be required to reproduce each at the same peak level without clipping.)
    And, of course, the noise floor is also the same on both...
    So, technically both recordings also have the same dynamic range...
    However, the newer recordings have less long term dynamic variation in level...

    In the context of "what we HEAR", the perceived loudness of new CDs is usually much higher...
    However, in the context of how much voltage and how much power do you need to reproduce the signal without clipping....
    The highest peak voltage hasn't gone up....
    And neither has the highest peak power necessary to play it....
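    That difference is easy to put numbers on: what changed is the crest factor (peak level minus average level). A sketch in Python, with synthetic signals standing in for the two masters (the actual albums would have to be measured):

    Code:
    import numpy as np

    rng = np.random.default_rng(0)

    def levels(x):
        """Peak and RMS level of a signal, in dB relative to full scale."""
        peak = 20 * np.log10(np.max(np.abs(x)))
        rms = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
        return peak, rms

    # Hypothetical stand-ins for the two masters: both are peak-normalized,
    # so the loudest sample on each hits 0 dBFS...
    vintage = rng.normal(0.0, 1.0, 48000)
    vintage /= np.max(np.abs(vintage))                  # dynamic master
    modern = np.tanh(3.0 * rng.normal(0.0, 1.0, 48000))
    modern /= np.max(np.abs(modern))                    # heavily limited master

    for name, x in (("vintage", vintage), ("modern", modern)):
        peak, rms = levels(x)
        print(f"{name}: peak {peak:+.1f} dBFS, RMS {rms:+.1f} dBFS, "
              f"crest factor {peak - rms:.1f} dB")

    # Same peak (same amplifier power needed for the loudest spot),
    # very different average level (perceived loudness).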

    Here's a really excellent in-depth article that explains the difference...
    And provides LOTS of statistics from various studies about the subject...
    I HIGHLY recommend that everyone read the whole thing.

    https://www.soundonsound.com/sound-advice/dynamic-range-loudness-war

    According to their analysis:
    "the loudness war actually didn't result in any reduction in the closest well-defined descriptor there is to 'dynamic range', which is loudness range as defined by the EBU 3342 technical document. Neither is it possible to ascertain any decrease of dynamic variability at any scale."

    They describe it this way:
    "In the end, it's all about style. Reduced crest factor values bring a 'compact' aspect to the sound; Waves describe it as a 'heavily in-your-face signal that rocks the house' on their MaxxBCL page. It may be suited to your kind of music, or it may not."

    However, my point was that, in the context of "what signal levels you need to be able to handle cleanly".....

    For a preamp, or a DAC, there has been no difference... because the peak levels, the noise floor, and the distance between them have NOT changed.
    The new CDs may sound louder - but they are no more likely to cause your preamp to distort at the same volume setting.

    However, for a power amplifier, which may or may not be able to deliver the same output power dynamically and long term...
    - having an amplifier that can deliver full power continuously is more important than before
    - while having an amplifier with plenty of headroom to handle dynamic peaks has become less important

     
  10. TheSonicTruth
    But something else HAS gone up, over time, within that 96dB container we call Red Book: the AVERAGE level, which my repeated mention of here seems so far to have had zero effect on your understanding of how we hear. You continue to fixate on peaks which, again, while louder than the average level, are mostly fleeting in their existence during playback of a recording.

    Higher average levels = higher average voltages, which can indeed cause longer-term strain on both the input and output stages of even a moderately expensive stereo receiver or amplifier.
     
  11. gregorio
    Part of the confusion which appears to be occurring is due to a misunderstanding of the term "normalization". Normalisation just means raising or lowering different audio files/tracks to the same specified amount of some specified property. Firstly then, you are saying just "normalization" and haven't specified the property! This is entirely understandable though, because by far the most common specified property was peak level, and therefore, when the term "normalisation" was used on its own, it was assumed to mean "peak normalisation". Furthermore, as there is only one recognised standard in music, the physical limit of 0dB, it is assumed this is the "specified amount". However, that's not the case in other areas of commercial audio, TV for example. Before the new/current paradigm, the "specified property" of normalisation was again peak levels (or quasi-peak levels) but the "specified amount" varied. Most commonly in Europe it was -9dB, i.e. in most European TV stations the audio was normalised to -9dBFS.

    All this might seem like semantics but it's vital to understand the above, otherwise there's not the slightest chance that you'll understand the new paradigm of "loudness normalization". The "specified property" of loudness normalisation is the human perception of loudness, which is both frequency and time dependent but unrelated to peak level!
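    For anyone curious what loudness normalisation looks like in practice, here is a minimal sketch using the pyloudnorm Python library, which implements the ITU-R BS.1770 loudness model underlying EBU R128 (the file name and the -14 LUFS target are just example values):

    Code:
    import soundfile as sf
    import pyloudnorm as pyln

    data, rate = sf.read("song.wav")            # hypothetical input file
    meter = pyln.Meter(rate)                    # BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)  # perceived loudness, in LUFS

    # Gain the whole track so its loudness (not its peak!) hits the target.
    normalized = pyln.normalize.loudness(data, loudness, -14.0)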

    They are a bit different. The second one is basically demonstrating that with digital we can normalise to the absolute peak (0dBFS) and that over time most pop/rock music has become more compressed, with higher average levels and typically no headroom, or only a few tenths of a dB below absolute peak. You couldn't do that with analogue media because, unlike digital media, which stays perfectly flat/linear until you hit 0dBFS, analogue media distorts gradually more and more before hitting the absolute physical limit. Some of that distortion, in limited amounts, was often desirable artistically (tape saturation being a good example) but even then, there was a significant gap between the peak levels of the recording and the peak physical limit of the media; this gap is the "headroom". The first graph makes more sense, and it's designed to illustrate a point simply, which it does, but it's not entirely accurate. To be honest, it's many years since I read/studied Bob's book and the charts could be more accurate than they appear, depending on exactly how he's defining "headroom".
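    To make that analogue/digital contrast concrete, a quick sketch (using tanh as a generic stand-in for tape-style saturation; real tape transfer curves differ):

    Code:
    import numpy as np

    x = np.linspace(0, 2.0, 9)          # input level; 1.0 = the medium's limit
    digital = np.clip(x, -1.0, 1.0)     # perfectly linear, then hard clipping
    analogue = np.tanh(x)               # bends gradually approaching the limit
    for xi, d, a in zip(x, digital, analogue):
        print(f"in {xi:4.2f} -> digital {d:4.2f}, analogue {a:4.2f}")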

    1. Ironically, you've both confirmed what I posted in my last message and explained why you DON'T understand Bob's charts!! Instead of "visualizing" an understanding from Bob's charts, read the text, understand the context of the charts and then you'll have a much higher likelihood of arriving at a true understanding instead of a complete misunderstanding!
    1a. Instead of "imagine, for the purpose of the discussion", why don't you stick to the actual facts rather than changing the "purpose of the discussion" to your personal agenda?
    1b. Again, this is the "sound science" forum, not the "what thesonictruth wants to imagine" forum! Also, your last suggestion clearly wouldn't work. A song "sticking out in the ensuing cacophony" could just indicate that it's at a particularly loud point relative to the other songs, not that it's louder overall. For example, it might be sticking out because it's in its chorus while the other songs are in their verses. Again, you don't seem to understand what loudness normalisation is, but rather than go and learn what it is, you simply carry on regardless.
    1c. Yes, you could and there would certainly be some great advantages to loudness normalisation being applied to music creation but there would also be some disadvantages. Rather than only picking facts (or misunderstandings) which support your agenda, why don't you include all the facts and arrive at a true understanding?

    2. True!
    2a. Oh dear, completely false! You said you don't want me to state that your assertions are completely wrong/backwards but seem oblivious to the blatantly obvious solution: don't post assertions that are completely wrong/backwards to start with! Surely that's not a difficult concept to grasp? To your point: an acoustic Mahler symphony obviously has no compression and very high peaks relative to its average level. How many people have you seen "get up and dance or at least nod their head or tap their foot" at a Mahler symphony? On the other hand, DJs in night clubs typically apply huge amounts of compression and reduce the peaks/transients to almost no higher than the average level of a song. How many people have you seen "get up and dance, or at least nod their head or tap their foot" in a night club? There are several factors at play here of course, but high peaks/transients relative to average song level isn't really one of them!

    1. While I agree with the basic message/principles of your post, this part of it is incorrect. You are confusing loudness with level!
    1a. True.
    1b. False, modern CDs are louder than older ones but their peak level hasn't changed.
    1c. Partly true. The average level is higher but the loudest spots are also louder (although their peak level is the same). Loudness is a human perception, while levels are not. The perception of loudness depends on several factors: average levels over the short term, differences/contrasts in average levels over the longer term, the audio frequency/pitch of those levels, and a few other factors (the size/distance of an acoustic environment, for example).

    The loudness wars is not just about more compression, it's also about all the other factors that affect the perception of loudness and therefore "loudness wars" is not a misnomer, it's accurate.

    If you're going to correct someone, please do so with the actual facts. Average level is not 90% of what we judge loudness by; there are other factors as important or more important. For example, which is louder: A, a sound with an average level of -10dB (RMS), or B, a sound with an average level of -16dB (RMS)? The answer according to you must be "A", but the correct answer is that it's impossible to know from the information given; it could be either. For example, if A is centred around 80Hz and B is centred around 2kHz, B will sound several times louder, even though its average level is half of A's!
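    To put rough numbers on that example, here's a sketch using the standard A-weighting curve as a crude stand-in for the ear's frequency sensitivity (the two tones are the ones above; loudness perception involves more than a weighting curve, so treat the exact figures as illustrative):

    Code:
    import math

    def a_weight_db(f):
        """IEC 61672 A-weighting gain, in dB, at frequency f (Hz)."""
        ra = (12194.0 ** 2 * f ** 4) / (
            (f ** 2 + 20.6 ** 2)
            * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
            * (f ** 2 + 12194.0 ** 2)
        )
        return 20.0 * math.log10(ra) + 2.0

    # A: -10 dB RMS centred at 80 Hz; B: -16 dB RMS centred at 2 kHz
    for label, level, freq in (("A", -10.0, 80.0), ("B", -16.0, 2000.0)):
        weighted = level + a_weight_db(freq)
        print(f"{label}: {level:+.1f} dB RMS -> roughly {weighted:+.1f} dB(A)")
    # B comes out ~18 dB "louder" than A once the ear's sensitivity is factored in.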

    G

    Edit: I wrote this before I saw Keith's last post. I have posted that document several times and also highly recommend it. I would say that there is only one definition of loudness - "perceived loudness", the actual signal levels are the actual signal levels and not "loudness".
     
  12. bigshot
    I may be dense, but in the digital chart, what does the stuff above 0dB labelled "headroom" represent? Wouldn't that headroom be in the range of clipping? When I record, I set my peak level a bit below 0dB to prevent a stray peak from clipping. I call that headroom, but it is always below 0dB, not above it like that.

    Also you let me know that my understanding of normalization was peak normalization. That is what I always run across in digital audio. But he has peak normalization labelled analog. I'm totally confused by this graphic.
     
  13. TheSonicTruth
    In that Sound On Sound loudness article?

    I pay that thing no mind. This goes against my handle, below, but I can HEAR if something sounds louder, given the same volume setting. A remastered CD of '1984' sounds louder than an original CD of '1984', and the waveform on the DAW is just visual confirmation of it.
     
  14. bigshot
    I think paying it no mind is good advice.
     
  15. TheSonicTruth
    I don't know what that would make you want to do, but it would make me want to EXIT THE F|_|KING CLUB IMMEDIATELY.
     