The future of mastering: Loudness in the age of music streaming
Feb 15, 2020 at 4:58 AM Post #2 of 5
It is interesting and thanks for posting it. Do bear in mind that it's somewhat of an over-simplification and it effectively represents "a desire" for the "future of mastering" rather than the actual reality of what's happening.

The reality is that different services have different loudness normalisation levels, so unless you're going to create a different master for each service, the mastering engineer is likely to be required to master to the highest (and most popular) of them, which is YouTube and its roughly -13 LUFS level. Achieving that level with acoustic music genres (classical and most jazz, for example) would generally require very extreme compression/limiting, which would be a bad thing, particularly as classical music has never really been a victim of the loudness war anyway. At the same time, it potentially harms some popular genres where very extreme compression/limiting is not only "not a bad thing" but an entirely desirable and actually required thing! For most other genres though, loudness normalisation is certainly a step in the right direction and, if we can get some of the issues ironed out, could hopefully be the "future of mastering".

G
 
Feb 16, 2020 at 3:51 AM Post #3 of 5
It is interesting and thanks for posting it. Do bear in mind that it's somewhat of an over-simplification and it effectively represents "a desire" for the "future of mastering" rather than the actual reality of what's happening.

The reality is that different services have different loudness normalisation levels, so unless you're going to create a different master for each service, the mastering engineer is likely to be required to master to the highest (and most popular) of them, which is YouTube and its roughly -13 LUFS level. Achieving that level with acoustic music genres (classical and most jazz, for example) would generally require very extreme compression/limiting, which would be a bad thing, particularly as classical music has never really been a victim of the loudness war anyway. At the same time, it potentially harms some popular genres where very extreme compression/limiting is not only "not a bad thing" but an entirely desirable and actually required thing! For most other genres though, loudness normalisation is certainly a step in the right direction and, if we can get some of the issues ironed out, could hopefully be the "future of mastering".

G
Yep, I agree with all of the above.
As I understand it, Tidal is the only streaming service that actually uses LUFS for loudness normalisation. Spotify uses ReplayGain, Apple some proprietary software, and the rest are a mystery. All streaming services seem to be pretty consistent, with Tidal at the top. Comparing recordings that are validated to have high or low DR across streaming services can yield interesting results. Things might have changed since I last checked though.
Reminds me a bit of the computer industry before USB became a standard.
 
Feb 17, 2020 at 7:18 AM Post #4 of 5
Yep, I agree with all of the above.
[1] As I understand it, Tidal is the only streaming service that actually uses LUFS for loudness normalisation. Spotify uses ReplayGain, Apple some proprietary software, and the rest are a mystery. All streaming services seem to be pretty consistent, with Tidal at the top.
[2] Comparing recordings that are validated to have high or low DR across streaming services can yield interesting results. Things might have changed since I last checked though.
[3] Reminds me a bit of the computer industry before USB became a standard.

1. LUFS is the EBU's (Europe) implementation of the ITU's BS.1770 recommendation, while LKFS (North America and others) is the ATSC's implementation of it. Essentially it's the same underlying loudness algorithm, just with slightly different fixed levels: for example, -23 LUFS and -1 dBTP (true peak limit) for the EBU versus -24 LKFS and -2 dBTP for the ATSC. Although no one outside Apple (and Google for YouTube, etc.) knows exactly what their loudness normalisation software is doing, I can't see that it would make much sense to "try and reinvent the wheel", so most probably they're either using ITU BS.1770 or a tweaked version of it.
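
For anyone curious what that algorithm actually does, here's a minimal sketch of a BS.1770-style integrated loudness measurement. The two biquad coefficient sets are the spec's K-weighting filters (valid at 48 kHz only); the mono handling, gating and block maths are simplified from the recommendation, so treat it as an illustration rather than a compliant meter:

```python
import numpy as np

# K-weighting biquads from ITU-R BS.1770 (48 kHz coefficients):
# a high-shelf "head effect" pre-filter followed by the RLB high-pass.
SHELF_B = [1.53512485958697, -2.69169618940638, 1.19839281085285]
SHELF_A = [1.0, -1.69065929318241, 0.73248077421585]
RLB_B = [1.0, -2.0, 1.0]
RLB_A = [1.0, -1.99004745483398, 0.99007225036621]

def biquad(x, b, a):
    """Direct-form I biquad filter."""
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for n in range(len(x)):
        y[n] = b[0] * x[n] + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x[n]
        y2, y1 = y1, y[n]
    return y

def integrated_lufs(samples, fs=48000):
    """Gated integrated loudness of a mono float signal (sketch)."""
    k = biquad(biquad(samples, SHELF_B, SHELF_A), RLB_B, RLB_A)
    block, step = int(0.400 * fs), int(0.100 * fs)   # 400 ms blocks, 75% overlap
    ms = np.array([np.mean(k[i:i + block] ** 2)
                   for i in range(0, len(k) - block + 1, step)])
    lk = -0.691 + 10 * np.log10(ms)                  # per-block loudness
    ms = ms[lk > -70.0]                              # absolute gate at -70 LUFS
    rel = -0.691 + 10 * np.log10(ms.mean()) - 10.0   # relative gate, -10 LU
    ms = ms[(-0.691 + 10 * np.log10(ms)) > rel]
    return -0.691 + 10 * np.log10(ms.mean())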

2. Where large amounts of compression/limiting have been applied purely for the sake of greater loudness, loudness normalisation would generally be detrimental, making the master sound weaker and/or thinner. In this way loudness normalisation discourages the over-application of compression/limiting but doesn't mandate it. The problem arises when large amounts of compression/limiting aren't applied purely for the sake of loudness: with some more modern genres that have been specifically composed/arranged around heavy compression/limiting, and with much classical/acoustic music, which would often require more compression/limiting than is desirable to achieve even -16.5 LUFS (Apple), let alone -13 LUFS (YouTube).
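
The squeeze on dynamic material is just arithmetic: raising a quiet master to a louder target pushes its peaks above the true-peak ceiling, and limiting has to absorb the difference. A quick sketch (the track figures below are made-up examples, not measurements of any real release):

```python
def required_limiting_db(track_lufs, track_peak_dbtp,
                         target_lufs, ceiling_dbtp=-1.0):
    """How many dB of peak reduction a master needs in order to hit a
    louder loudness target without true peaks exceeding the ceiling."""
    gain = target_lufs - track_lufs           # dB of make-up gain needed
    new_peak = track_peak_dbtp + gain         # where the peaks would land
    return max(0.0, new_peak - ceiling_dbtp)  # excess that limiting absorbs

# A hypothetical dynamic classical master at -23 LUFS, peaks at -1 dBTP:
print(required_limiting_db(-23.0, -1.0, -16.5))  # Apple-style target -> 6.5
print(required_limiting_db(-23.0, -1.0, -13.0))  # YouTube-style target -> 10.0
```

Ten dB of limiting on a classical recording is exactly the "very extreme compression/limiting" problem above, whereas a master already at or above the target needs none (normalisation just turns it down).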

3. Indeed. However, in this case there's good justification for there not being a single standard. As mentioned, EBU R128 specifies -23 LUFS (long-term loudness) for TV and radio broadcast, but this is impractical in some cases. For example, on most mobile phones and tablets -23 LUFS would simply be too quiet in noisy environments with HPs/IEMs, and way too quiet (pretty much inaudible) on the device's internal speakers. This is why Apple uses about -16.5 LUFS (as opposed to -23 LUFS), and I presume YouTube have judged that their consumers more frequently use their mobile devices' internal speakers (than do Apple's) and therefore have an even higher normalisation level (-13 LUFS). And as putting significantly bigger, more powerful speakers in mobile devices is typically low on the list of cost and space-utilisation priorities (below, say, bigger batteries and more camera lenses), we're currently a bit stuck.

G
 
