Testing audiophile claims and myths

Discussion in 'Sound Science' started by prog rock man, May 3, 2010.
  1. bigshot
    The operative phrase was "older", not necessarily the number of channels. Older movies, like ones from the 40s, sound more controlled dynamically because they are compressed and mixed for clarity with a more limited dynamic range. They are normalized up to peak level, which means their overall level is likely to sound louder, so you turn the volume down a bit to compensate. When a commercial comes up at peak level, it isn't a great contrast.

    A modern dynamic soundtrack might go to commercial at a very low level in a quiet passage, and the contrast in volume compared to the normalized-up commercial would be more jarring. For low-level late-night listening, more compressed dynamics are preferable. If you are giving the program your full attention and the volume is higher, a dynamic program might be better.
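    If it helps to make that concrete, here is a minimal Python sketch (the signal and the compressor settings are invented for illustration): a compressed program, once peak-normalized like everything else, ends up with a higher average level, i.e. it simply sounds louder.

    ```python
    import numpy as np

    fs = 48000
    t = np.arange(fs * 5) / fs  # 5 seconds

    # Synthetic "dynamic" program: a quiet bed with one loud peak.
    rng = np.random.default_rng(0)
    dynamic = rng.normal(0, 0.05, t.size)
    dynamic[fs:fs + 2000] += 0.9 * np.sin(2 * np.pi * 440 * t[fs:fs + 2000])

    def compress(x, threshold=0.1, ratio=4.0):
        """Crude instantaneous compressor: above the threshold,
        the signal's excess is divided by the ratio."""
        mag = np.abs(x)
        over = mag > threshold
        out = x.copy()
        out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
        return out

    def peak_normalize(x, peak=1.0):
        return x * (peak / np.max(np.abs(x)))

    def rms_db(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    a = peak_normalize(dynamic)
    b = peak_normalize(compress(dynamic))

    print(f"uncompressed program RMS: {rms_db(a):6.1f} dBFS")
    print(f"compressed program RMS:   {rms_db(b):6.1f} dBFS")  # higher = sounds louder
    ```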

    Of course commercials are always a pain.
     
    Last edited: Jun 22, 2019
  2. TheSonicTruth
    And ^that^ right there is one of the biggest contributors to this loudness issue in the digital and broadcast realms.

    To borrow this diagram, from the book of a mastering engineer whose hand I have shaken and whom I hold in higher regard than most (sorry Greg, sorry Ludwig, sorry Lord-Alge), the right-hand diagram represents (in principle) the loudness situation during the pre-digital, analog-VU-meter era (basically from the birth of metering up to the early 1980s), and also the ideal we must aim for and return to.

    Imagine that '-24' figure and associated red line in that right-hand version to represent the zero on any typical VU meter manufactured between WW2 and the '80s. That zero, and the 2 dB above and below it on a typical VU, is the sweet spot where both producers and deliverers of broadcast content kept the needles on their recorders, mixers, and master output faders, effectively guaranteeing loudness consistency 90% of the time...

    [Attached image: image.jpeg - diagram comparing average levels under peak normalization (left) and loudness normalization (right)]

    The left diagram, of course, represents the path digital has taken us down since its advent - literally into the sub-basement - with its early peak-based, maximize-level-while-avoiding-clipping mantra: "aim for 0 dB Full Scale." Effective bit-depth utilization is guaranteed, while average levels - how we perceive loudness - are given zero regard and end up all over the frickin' place. Notice the loudness difference between columns #4 ("avg broadcast") and #7 ("bcast commercial") in that peak-normalized diagram - no WONDER viewers are scrambling for their TV remotes (or ripping their earbuds out of their skulls!) to check their volume during commercial sets!
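    For anyone who wants to play with the arithmetic, here is a tiny Python sketch of the two schemes. The peak and average figures are hypothetical stand-ins for those columns, and a plain average level stands in for true K-weighted LUFS:

    ```python
    def peak_norm_gain(peak_dbfs, target=-1.0):
        """Gain (dB) that brings the program's peak to the target (peak normalization)."""
        return target - peak_dbfs

    def loudness_norm_gain(avg_dbfs, target=-24.0):
        """Gain (dB) that brings the program's average level to the target
        (a stand-in for true K-weighted LUFS loudness normalization)."""
        return target - avg_dbfs

    # Hypothetical programs: (name, peak dBFS, average dBFS)
    programs = [("avg broadcast", -3.0, -27.0),
                ("bcast commercial", -1.0, -9.0)]

    for name, peak, avg in programs:
        print(f"{name:16s}  peak-norm avg: {avg + peak_norm_gain(peak):6.1f} dB"
              f"  |  loudness-norm avg: {avg + loudness_norm_gain(avg):6.1f} dB")
    # Under peak normalization the commercial averages 16 dB louder than the
    # broadcast; under loudness normalization both land at -24.
    ```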
     
    Last edited: Jun 22, 2019
  3. Steve999
    My Apple TV has a “reduce loud sounds” option in the audio settings. It says it helps you reduce loud sounds so you can listen to movies and music without disturbing others, but still hear all of the details. I am wondering if it does some combination of reducing peaks, applying compression, and applying EQ to enhance listening at low levels.

    I can’t believe all of the different sound options in all of my devices. I never appreciated it until I started digging through menus the last few days. It seems so complex for a normal person to get their arms around. Not easily understood or well documented.

    Just this 13-page FAQ from Dolby gives me a headache:

    https://www.dolby.com/us/en/technologies/dolby-digital.pdf

    Someone just tell me what button to press already. :confused: Right now I have most everything on every device set to “auto” and I just hope it’s all playing nice together.
     
    Last edited: Jun 22, 2019
  4. bigshot
    You cut out the context of what I was talking about... I was talking about movies from the 40s. Those movies had optical soundtracks and were intended to be viewed in theaters. They were compressed for a very good reason. If movies back in the 40s had massively wide dynamic ranges, half the soundtrack would be buried in noise and audiences would be straining to hear what was going on. The focus was on BALANCE and CLARITY. That is much more important than just wide dynamics. Different applications require different compression levels. More isn't better. The right amount mixed for balance and clarity is.

    Steve999, your document there has the answer to the original problem we were discussing. Dolby has a feature called Dialogue Normalization, which keeps dialogue clear over the music at different volume levels. See number 23.
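    As I read that document, the dialnorm arithmetic itself is simple: the encoder stores the program's average dialogue level as metadata, and the decoder attenuates the whole program so dialogue lands at a fixed -31 dB reference. A rough Python sketch of my understanding (this is my reading of the Dolby documentation, not code from it):

    ```python
    DIALNORM_REFERENCE_DB = -31  # decoder's target dialogue level

    def dialnorm_gain(dialnorm_db):
        """Gain (dB) a decoder applies for a given dialnorm value.
        dialnorm_db is the program's measured dialogue level (-31 to -1 dBFS)."""
        if not -31 <= dialnorm_db <= -1:
            raise ValueError("dialnorm must be between -31 and -1 dB")
        return DIALNORM_REFERENCE_DB - dialnorm_db  # always <= 0 (attenuation)

    # A commercial with dialogue mixed at -12 dB gets turned down 19 dB,
    # while a film mixed at -27 dB is turned down only 4 dB,
    # so dialogue ends up at the same level in both.
    print(dialnorm_gain(-12))  # -19
    print(dialnorm_gain(-27))  # -4
    ```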
     
    Last edited: Jun 22, 2019
  5. TheSonicTruth

    Based on the diagram I just posted, it is possible for heavily-DRC'd material and highly dynamic content to coexist. Meters with a loudness ballistic (vs. peak-based) algorithm should be mandatory on pocket digital recorders, digital mixing consoles, and digital processors. A peak indicator could be provided which stays green while peaks are below a specific threshold, then turns yellow, orange, and finally red at the onset of clipping. The zero reference itself already has several suggested standards, ranging from -16 LUFS down to -24 LUFS.
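    A rough Python sketch of what such a meter could do - the color thresholds are my own invented values, not anything from a standard, and plain RMS again stands in for a true K-weighted loudness measurement:

    ```python
    import numpy as np

    def peak_indicator_color(peak_dbfs):
        """Map an instantaneous peak reading to an indicator color.
        Thresholds are hypothetical, for illustration only."""
        if peak_dbfs < -12.0:
            return "green"
        elif peak_dbfs < -6.0:
            return "yellow"
        elif peak_dbfs < -0.1:
            return "orange"
        return "red"  # at the onset of clipping

    def loudness_reading_db(block):
        """Slow, loudness-ballistic reading: RMS over the whole block."""
        return 20 * np.log10(np.sqrt(np.mean(block ** 2)) + 1e-12)

    block = 0.25 * np.random.default_rng(1).normal(size=4800)  # 100 ms at 48 kHz
    peak_db = 20 * np.log10(np.max(np.abs(block)))
    print(f"loudness ~ {loudness_reading_db(block):.1f} dB, "
          f"peak LED: {peak_indicator_color(peak_db)}")
    ```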
     
  6. old tech
    I have to say I'm a bit confused with this. Is what you are saying that mastering for music playback includes a bump in the lower frequencies?

    I have never heard that transducers (speakers) or other equipment in the playback chain have a sort of reverse bump built in. I have never seen it in the frequency response curves (always exceptions of course, but I mean in the general sense) of speakers/playback equipment/recorded material, and why would it be necessary in the first place - i.e. why would studios include that emphasis and then the consumer industry design around it? Is there a specific standard for studios and consumer manufacturers to follow? What about mastering destined for vinyl, where the format inherently emphasises the mid-bass due to the inaccuracies of that medium - wouldn't it make it worse?

    I admit I am no expert in audio engineering but I did spend quite a bit of time in a well known Melbourne studio through a friend that worked there. I never noticed an intentional bump in the lower frequencies in the mastering process, except when it was part of a bag of tricks to improve the sound.

    Lastly, while it is true that professional studio monitors are flat and on their own do not go very deep, many studios augment their monitors with sub(s) to reach right down into the lower registers. Ian Shepherd had a podcast on this, and other mastering engineers who contributed to that episode said they do the same after spending time "tuning" the sub to integrate seamlessly with the monitors. How would this play out if the typical home hi-fi adds another 3 dB to the lower frequencies?

    G, I'm not being critical or necessarily doubting what you say, it is just that this is the first time I've heard of it, and I haven't seen specs of recorded material or home hi-fi speakers that are tuned to reverse a 3 dB gain.
     
    Last edited: Jun 22, 2019
    TheSonicTruth likes this.
  7. KeithEmo
    What you suggest makes sense but, unfortunately, the reality is somewhat more complex.

    The real world has a huge dynamic range. Our ears, and our brains, routinely apply what amounts to a massive amount of very long-term dynamic range compression. At night, when it's quiet, your ears become so sensitive you can hear crickets and the kitchen faucet dripping. During the day, at the office, they reduce their sensitivity to accommodate normal conversation while not being deafened by the copier. And, when you go to a concert, they adjust again.

    The catch is that these adjustments occur over a variety of time frames, from a few seconds to many minutes - often far longer than the duration of a commercial or a scene in a TV show - and over a truly impressive range of loudness levels. More importantly, since your brain is controlling them, they remain for the most part unnoticed. For example, when you start your car, your ears do in fact physically adjust to reduce their sensitivity. However, your brain, realizing that you're starting the car, or that a car is approaching down the street, also adjusts itself to be perceptually less sensitive... and it interacts with your ears to proactively prepare them for the loud noise to follow. So, when you hear the car start, your brain is already prepared to compare that sound to what it expects a car engine to sound like.

    However, you're not expecting that loud commercial, which is part of why it's so jarring. When a TV show jumps from a bedroom scene, to an office scene, to a scene at the race track, your ears don't have the same time to adjust. And, since your brain isn't involved in the process, it doesn't receive the same cues about what levels to expect. This is why there is both art and science in determining the best compromise.

    The result of all this is that how things fit together gets very complicated.
    Take three clips, a commercial, a quiet woodland scene, and a scene in a crowded bar.
    All three may be "within reasonable loudness and dynamic range limits".
    Yet the commercial may seem terribly loud if it pops into the middle of the woodland scene...
    But that same commercial may be almost inaudible if dropped into the middle of the barroom scene...
    Figuring out what levels would make the commercial seem "equally loud and annoying" in both situations is a very complex process.
    There is no "meter" that can recognize whether your commercial is going to be played in the middle of a televised concert or a quiet scene in the woods (a leats not yet).

    And, to make matters worse, all of this also depends on the absolute listening level, because our hearing sensitivity is far from linear.
    So, if two sounds are carefully adjusted so that one sounds a certain amount louder than another... that relationship will only be true at a certain absolute level.
    Turn the volume up or down, and the apparent difference in loudness will itself change because of the nonlinearity of our hearing.

    The upshot is that you not only need to use compression to reduce the overall dynamic range...
    But the amount you use must itself vary depending on the absolute level at which you're listening...

    There are a variety of "midnight modes" and "automatic volume controls" that attempt to do this - with varying success.
    Some attempt to "figure things out and level them" automatically.
    Others, like Dolby Volume, operate based on settings that are programmed into the disc itself by the sound engineer.
    Dolby Volume is a standard.
    The producer of the disc programs its behavior into the disc.
    The equipment you play it on then applies those settings when you enable it (and you have choices about how it does so.)

    Most mixing consoles and mastering programs do support a variety of these sorts of options... and many support the latest "perceptual loudness control standards".
    Many pocket recorders also offer limiting and compression options.
    (However, in general, you're better off leaving them off, and applying adjustments more carefully when you create the mix.)
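    To make the "amount of compression must track the listening level" idea concrete, here is a toy Python sketch. The mapping from volume setting to compression ratio is invented purely for illustration - it is not how any real midnight mode is tuned:

    ```python
    import numpy as np

    def midnight_mode(x, listening_level_db):
        """Toy 'midnight mode': quieter listening levels get more compression.
        listening_level_db is the user's volume setting, e.g. -40 (late night) to 0.
        The level-to-ratio mapping is invented for illustration."""
        ratio = 1.0 + max(0.0, -listening_level_db) / 10.0
        threshold = 0.1
        mag = np.abs(x)
        over = mag > threshold
        out = x.copy()
        out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
        return out

    sig = np.concatenate([0.02 * np.ones(100), 0.9 * np.ones(100)])  # quiet, then loud
    late_night = midnight_mode(sig, listening_level_db=-40)  # 5:1 ratio
    daytime = midnight_mode(sig, listening_level_db=-5)      # 1.5:1 ratio
    print(late_night.max(), daytime.max())  # late-night peaks are tamed much harder
    ```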

     
  8. Davesrose
    I would never speak for Gregorio, but I do know that frequency curves differ between environments. One setting I know of with a natural roll-off in the higher frequencies is an auditorium hosting a live symphony. I'm in Atlanta, where the main Telarc symphony recordings were made. I've heard that when they do a recording, they place sheets of wood over all the seats to help control the decay.

    Since upgrading my home theater receiver to a new Denon, I've also noticed it has settings labeled "cinema" that can slightly roll off the treble: with "Cinema EQ", their reasoning is that a home system can have more treble than a theater's center channel, which sits behind a screen.

    These are just a couple examples of frequency curves...I can see that there would be many others.
     
    analogsurviver likes this.
  9. Davesrose
    I have a medical background, so I'll have to pick this apart. There's no set time at which your ears become more sensitive. If anything, I'd say many people's auditory systems become duller at night because of the other stimuli they've been exposed to:) When you say your ears adjust during a concert, it greatly depends on what kind of concert. An acoustic performance is much different from a concert with loudspeakers at high volume. At a loud concert, at "damaging" levels, your ears adjust as a defense: the middle-ear muscles clamp down and you have less sensitivity across the frequency range. These adjustments happen over a period of time. I drive a Prius, so there's no effect from my engine start... but even the loudest engine is too short an exposure to affect your autonomic system. Instead of your hypothetical of driving down the road in a car, another factor for hearing can be your diet (if you don't have the proper nutrients). Finally, when it comes to loud ads, it shouldn't be an art: it's measurable.

    I know my complaint about Vudu's free-with-ads content has to do with it not being part of the TV broadcast standard: but it is what more and more audiences are encountering (as we cut the broadcast TV ties).
     
    Last edited: Jun 22, 2019
    sonitus mirus likes this.
  10. old tech
    I can understand that with the production side, there are all sorts of recording, mixing and mastering tricks to achieve the wanted sound. I can also understand mastering for the intended audience, e.g. compromises in order to sound good on a variety of home, car and other environments, that sort of thing. But why would there be an assumption that most home speakers need a specific 3 dB bump in the lower register? And more to your point, wouldn't the assumption be that most listeners of symphonies would have a decent stereo and a quiet environment? Why alter the mastering for a specific case of listeners that use certain AV settings - unless of course the CD, SACD (for 5.1) or whatever specifically states that the listener should use a particular setting on their gear?
     
  11. TheSonicTruth
    Don't even, Keith!

    I've been jarred by commercials coming out of a Nascar race in progress - green flag conditions, not caution, mind you.

    And what I was illustrating with those graphs is that such nonsense can be fixed - if enough viewers complain, and if TV stations want to.
     
  12. analogsurviver
    You are definitely on the right track here. I would never record in a studio and infinitely prefer live recording in real venues - but one cannot deny that there WILL be mistakes in live concert performances... and musicians just don't like them landing on the finished product.

    So the logical approach would be to record in the same venue - but without the audience, so that re-recording the difficult passages is possible. Right?

    Erm... not quite. Very few venues have the same acoustics with an audience present and without one - and recording without an audience would simply not sound right. The audience represents a considerable (in some cases close to 100%) portion of the absorption, and thus has a great effect on decay. Even IF one could obtain a "real fake audience" to just sit quietly through the recording session (most unlike what a concert goer experiences live: stopping for mistakes, repeating a portion of the score until satisfied, pauses, "I think we should redo from bar x to bar y, because I did not sing/play at my best" from any number of players or singers, interruptions by traffic - mostly aircraft taking off and landing, audible for miles around an airfield - etc.), in the real world only the performers and the recording crew are really interested in getting the recording right. No "real fake audience" would sit in exactly the same spots - for the better part of the day, sometimes into the evening or the wee hours, interrupted by necessary breaks - through all of the recording. But that is indispensable for keeping the acoustics consistent across the entire recording session(s).

    It has been a long journey from when I first started recording to where things stand today. And the biggest obstacle has been treating the acoustics of venues to compensate for the lack of an audience. It can be done in many ways - the Telarc approach mentioned above being just one of them. And, yes, this alone creates not only the desired decay but also some kind of "curve".

    The same can be said for any recording studio. No studio is "neutral" - each has its own sonic stamp, dependent both on the equipment used and, much more, on the taste and decisions of the people who work there. There are cases that explicitly mention the equipment used for monitoring during recording and/or mastering - which, ideally, should also be used by the end user. It is up to the recording engineer/producer to decide which segment of the audience/market to adjust the finished product for: having it accurately flat down to (insert whatever number below 100 Hz you feel is appropriate) to sound right on big full-range speakers in a large room (expensive...), or bumped a few dB in the bass to sound appealing on the smaller speakers far more people can afford. The latter intentional deviation from a flat curve, although technically/scientifically not nice, can mean a world of difference in recording sales.
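    For concreteness, that "bumped a few dB in bass" is typically just a low-shelf filter. A minimal Python sketch using the well-known RBJ Audio EQ Cookbook low-shelf formulas - the 100 Hz corner and +3 dB gain are example values, not anyone's actual mastering chain:

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def low_shelf(fs, f0=100.0, gain_db=3.0, S=1.0):
        """Biquad low-shelf coefficients per the RBJ Audio EQ Cookbook."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
        cosw = np.cos(w0)
        b = [A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
             2 * A * ((A - 1) - (A + 1) * cosw),
             A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)]
        a = [(A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
             -2 * ((A - 1) + (A + 1) * cosw),
             (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha]
        return np.array(b) / a[0], np.array(a) / a[0]

    fs = 48000
    b, a = low_shelf(fs, f0=100.0, gain_db=3.0)  # +3 dB below ~100 Hz
    noise = np.random.default_rng(2).normal(size=fs)  # one second of test noise
    bumped = lfilter(b, a, noise)  # the bass-bumped version
    ```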
     
  13. sander99
    I guess you missed my question and G.'s answer. There is no inverse bump in the playback chain, but a bump. There is no bump in the audio track resulting from the mix, but there is a bump in the monitor system that they listen to during mixing. Because of this bump there will be an inverse bump in the resulting audio track (see bold part below).

    Nevertheless, I do think this is a somewhat strange situation, maybe just a case of a "de facto situation", a situation that was not originally planned or aimed for but somehow resulted from history.
     
  14. TheSonicTruth
    Yeah, Calbi needs to clarify this in terms most ordinary folk can understand. Julian Hirsch: where are you when we need you! RIP good man.
     
  15. old tech
    Yes, I understood it is an inverse bump in the audio track, hence the point that an ideal home hi-fi should raise that frequency (whatever that is) by 3 dB. But why is there a bump in the monitor system in the first place, necessitating that consumer hi-fi be engineered to counteract it? It seems entirely unnecessary and somewhat surprising that a 21st century studio cannot have a perfectly flat frequency response from 20 Hz to 20 kHz when a reasonably flat response is achievable in a home listening environment.
     
    TheSonicTruth likes this.
