Huge Controversy Within the Hi-Rez Community
Jan 14, 2016 at 10:24 PM Post #136 of 146
First time I've seen the video, and for the most part I have to agree with what was said.
What did bother me was that, many times in the first half of the video, the consumers who
were genuinely interested in the provenance of the HDA files they were paying for were referred
to as "nuts" for doing things like running a spectrograph on the files they had paid for.
Since we as consumers have little other way of looking into a file and seeing whether what we're getting
is something close to the "real deal", I was somewhat offended that we were referred to as "nuts".
We all know of the cases of files sold early on as HDA when they were simply up-sampled CD rips.
We as consumers have a right to protect ourselves without being called nuts.
 
Jan 14, 2016 at 10:32 PM Post #137 of 146
Originally Posted by Sal1950
(...) Since we as consumers have little other way of looking into a file and seeing whether what we're getting is something close to the "real deal", I was somewhat offended that we were referred to as "nuts". (...) We as consumers have a right to protect ourselves without being called nuts.

 
I think anyone should have the right to call someone nuts without repercussion.  I'm nutty that way.
 
Jan 14, 2016 at 10:58 PM Post #138 of 146
Jan 14, 2016 at 11:05 PM Post #139 of 146
Maybe you have the right to call someone a fraudster, when they are one. Good enough for me. Selling higher-sample-rate, greater-bit-depth recordings that were upsampled from Red Book, at a premium price, is fraud in my book.
 
Then again, maybe consumers are nuts to think they are getting something better.
 
Jan 15, 2016 at 10:28 AM Post #140 of 146
  Dithering when reducing bit depth isn't always such a good idea, especially when dealing with 'natural' sounds. Often it only amounts to adding noise to noise.

 
Dithering when reducing bit depth is always a good idea, regardless of what type of sounds the recording contains! Quantisation error caused by truncation is correlated with the signal and will always be of significantly higher amplitude than dither, even noise-shaped dither. There are three caveats to this statement (see the sketch after the list):
 
1. Noise-shaped dither should only be applied once, as the final step of the mastering process. Successive applications of noise-shaped dither will sum the dither noise, which is already concentrated into narrower frequency bands to start with. Noise-shaped dither is not a tool the consumer should be playing with, unless they are making their own recordings which they then bit-reduce.
 
2. When reducing to high bit depths, say from a 64bit mix environment to a 24bit file, dither is commonly not applied, as even the higher amplitude of truncation error is insignificant at that level. Dither can theoretically benefit some workflows which require multiple bit reductions to 24bit, in which case the dither applied should be TPDF rather than noise-shaped dither, following the rule of only one application of noise-shaped dither.
 
3. While the effects of truncation error can be shown, through measurements, to be more severe than the effects of dither, I am not aware of any studies which have demonstrated that even truncation error is audible. In the famous Boston Audio Society study, SACDs were ABX'ed against 16/44.1 equivalents derived from the SACDs themselves (IE. the same master); the 16/44.1 versions were created by converting the SACDs to 24/96 and then simply truncating (no dither applied) to 16/44.1, and still no one could identify any difference.
 
A caveat to these caveats: Of course we can artificially manufacture scenarios to magnify all these (and pretty much any other) inaudible digital artefacts to make them audible!
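 
To make the difference in error character concrete, here is a minimal numpy sketch (my own illustration, not anything from the post above; all names and values are assumptions) that quantises a very quiet sine to whole LSBs by plain truncation and by TPDF dithering, then compares the error spectra. Truncation concentrates the error into distortion tones locked to the signal; TPDF dither turns it into a flat, signal-independent noise floor:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# A very quiet 1 kHz tone, a few LSBs in amplitude, where quantisation
# behaviour is easiest to see. Amplitude is expressed in LSB units.
signal = 3.5 * np.sin(2 * np.pi * 1000 * t)

def truncate(x):
    """Quantise to whole LSBs by plain truncation (no dither)."""
    return np.floor(x)

def tpdf_dither(x, rng=np.random.default_rng(0)):
    """Quantise with TPDF dither: the sum of two uniform +/-0.5 LSB sources."""
    d = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.floor(x + d)

for name, q in [("truncated", truncate(signal)),
                ("TPDF dithered", tpdf_dither(signal))]:
    err = q - signal
    spec = np.abs(np.fft.rfft(err - err.mean()))
    # Tonal error (distortion) gives a large peak-to-median ratio in the
    # spectrum; a flat noise floor gives a small one.
    print(f"{name:13s}: error RMS = {err.std():.2f} LSB, "
          f"spectral peak/median = {spec.max() / np.median(spec):.0f}")
```

The truncated version's peak/median ratio comes out vastly higher than the dithered one's, which is the "correlated with the signal" point in practice: the dithered error is slightly larger in raw RMS terms but benign in character.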
 
Originally Posted by Sal1950
 
I was somewhat offended that we were referred to as "nuts".

 
I do see a certain justification to it. If running the file through a spectrograph is the only way you can tell whether or not you're getting what you paid for, then why did you buy it? Isn't it "nuts" to pay a premium for a product with claimed additional sonic quality if you can't actually hear any of that additional sonic quality? Or are you saying that paying the premium has nothing to do with what you can hear but is worth it purely because of some additional visual quality (IE. what the waveform looks like in a spectrograph)? And if so, don't you think that too is a little "nuts"?
 
 
I've tried to explain that this "huge controversy" is itself "nuts" and indeed, what some consumers seem to want is also "nuts". Some consumers seem to want the "original" recordings and they want them at 24/96 or 24/192. This is "nuts" because it's not possible to record 24bits at any sample rate; about 14bits is the absolute maximum in practice. Of course we can write our 14bits to a 24bit file format, or to say a 1,024bit file format (if someone invents such a format), but you're still only going to get a maximum of 14bits no matter what file format you write it to! The situation isn't much better with sample rates. The initial "resolution" of a commercial recording is probably somewhere around 4bit/15MHz, but we can't even mix/master in that format, let alone distribute it; it has to be converted (decimated)!

Maybe consumers don't want the original recordings, maybe they want the original masters? Sorry, but you can't have those either; they only exist virtually, in a mix/mastering environment, and to turn them into actual audio files we have to lose at least half the bits. Plus, processing commonly up-samples and down-samples; a lot of the time we the engineers don't even know what sample rate/s are occurring within our mixes/masters. Also notice that I haven't even mentioned what is or isn't audible!
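 
As a quick sanity check on that "about 14bits maximum" claim, the standard dynamic-range formula for an ideal N-bit quantiser, DR ≈ 6.02·N + 1.76 dB, can be inverted to ask how many bits a given real-world signal-to-noise ratio actually supports (a back-of-the-envelope sketch; the SNR figures below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope: effective bits supported by a given SNR,
# inverting DR(dB) ~= 6.02*N + 1.76 for an ideal N-bit quantiser.
def bits_for_snr(snr_db):
    return (snr_db - 1.76) / 6.02

print(f"{bits_for_snr(146):.1f}")  # ideal 24-bit converter: 24.0
print(f"{bits_for_snr(120):.1f}")  # excellent analogue chain / ADC: 19.6
print(f"{bits_for_snr(86):.1f}")   # mic self-noise + quiet room (assumed): 14.0
```

In other words, once microphone self-noise and the recording space's ambient noise are in the chain, the usable dynamic range caps out well below what a 24bit container can represent.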
 
The whole thing is a nonsense to start with and now it seems we've got a "huge controversy" about how to define that nonsense?!
 
G
 
Jan 15, 2016 at 10:43 AM Post #141 of 146
Some other wrinkles:
 
When streaming at home over something like AirPlay, Apple TV, or the new Dynaudio Xeo 2 systems, all the high-resolution files get downsampled, usually to something like 24/48.
 
Or on my desktop: after the signal is converted to analog, my active monitor speakers do their own A/D conversion (24/192) because they use a DSP-based crossover.
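 
For reference, the downsampling step such devices perform looks roughly like this in software (a sketch with assumed filenames, using scipy's polyphase resampler; real devices do this in firmware):

```python
# Sketch: downsample a 24/96 file to 24/48 with a polyphase filter.
# Filenames are placeholders; assumes the 'soundfile' and 'scipy' packages.
import soundfile as sf
from scipy.signal import resample_poly

audio, fs = sf.read("hires_24_96.wav", dtype="float64")  # fs assumed to be 96000
downsampled = resample_poly(audio, up=1, down=2, axis=0)  # 96 kHz -> 48 kHz
sf.write("stream_24_48.wav", downsampled, fs // 2, subtype="PCM_24")
```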
 
Jan 15, 2016 at 11:02 AM Post #142 of 146
Originally Posted by gregorio
...
 
I do see a certain justification to it. If running the file through a spectrograph is the only way you can tell whether or not you're getting what you paid for, then why did you buy it? Isn't it "nuts" to pay a premium for a product with claimed additional sonic quality, if you can't actually hear any of that additional sonic quality? Or, are you saying that paying the premium has nothing to do with what you can hear but is worth it purely because of some additional visual quality, (IE. What the waveform looks like in a spectrograph) and if so, don't you think that too is a little "nuts"?
 
....  
The whole thing is a nonsense to start with and now it seems we've got a "huge controversy" about how to define that nonsense?!
 
G

+1
... positively nuts!
 
Jan 15, 2016 at 11:26 AM Post #143 of 146
   
Originally Posted by gregorio
(...) Some consumers seem to want the "original" recordings and they want them at 24/96 or 24/192. This is "nuts" because it's not possible to record 24bits at any sample rate; about 14bits is the absolute maximum in practice. (...)
The whole thing is a nonsense to start with and now it seems we've got a "huge controversy" about how to define that nonsense?!
 
G

 
For someone 'knowing', this ends in a cyclic redundancy loop; for someone else, a BSOD; for another, a RESET; for yet another, a REFRESH... etc. Hopefully the BIG MONEY hunters of the 'music business' don't read this thread... or you (as an insider?) will need to hide your identity...
If we're going down the 'philosophy and definitions' route, there is a nice and serious book: 'Music and Manipulation: On the Social Uses and Social Control of Music'.
 
Jan 15, 2016 at 2:17 PM Post #144 of 146
   
Originally Posted by gregorio
Dithering when reducing bit depth is always a good idea, regardless of what type of sounds the recording contains! Quantisation error caused by truncation is correlated with the signal and will always be of significantly higher amplitude than dither, even noise-shaped dither. (...)
A caveat to these caveats: Of course we can artificially manufacture scenarios to magnify all these (and pretty much any other) inaudible digital artefacts to make them audible!

 
Given what you state, you'd expect a difference file made from a 24bit original and its 16bit truncated copy to show clear signs of quantization errors, right?
When using a high DR real world recording as the original, those errors don't seem too eager to reveal themselves. However, if I set up an intentionally bad scenario, they do.
 
 
 
That's why I say that, when dealing with 'natural' sound sources, the use of dither should be the exception, not the rule.
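 
For anyone wanting to reproduce this kind of difference-file test, here is a minimal sketch (the filenames and the soundfile library are my assumptions, not anything from the posts above); it subtracts the 16bit truncated copy from the 24bit original and reports the residual level:

```python
# Difference-file test: subtract the 16-bit truncated copy from the
# 24-bit original and measure what's left. Files must be sample-aligned
# and the same length; filenames are placeholders.
import numpy as np
import soundfile as sf  # assumes the common 'soundfile' package

orig, fs = sf.read("original_24bit.wav", dtype="float64")
trunc, _ = sf.read("truncated_16bit.wav", dtype="float64")

diff = orig - trunc
rms_db = 20 * np.log10(np.sqrt(np.mean(diff ** 2)) + 1e-30)
print(f"difference RMS: {rms_db:.1f} dBFS")

# Write the residual with +30 dB of gain so it can actually be auditioned;
# plain 16-bit truncation error sits around -95 dBFS.
sf.write("difference_plus30dB.wav", diff * 10 ** (30 / 20), fs, subtype="PCM_24")
```

The residual sitting some 95dB below full scale is why it "doesn't seem too eager to reveal itself" without substantial amplification.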
 
Jan 15, 2016 at 2:41 PM Post #145 of 146
   
Given what you state, you'd expect a difference file made from a 24bit original and its 16bit truncated copy to show clear signs of quantization errors, right?
When using a high DR real world recording as the original, those errors don't seem too eager to reveal themselves. However, if I set up an intentionally bad scenario, they do.
 
 
 
That's why I say that, when dealing with 'natural' sound sources, the use of dither should be the exception, not the rule.

 
Even with your pathological example, could you hear the quantization artifacts below the actual signal? More and more, my view of dither seems to match up with the ubiquitous Lenna dither examples: a means to avoid missing detail in extreme low-bit situations; avoiding seas of black and white, if you will. 16-bit really isn't a low-bit situation, especially if we're talking about people in normal listening conditions (I assume most of you don't do your ABXing inside Skywalker). I have yet to find a track where non-dithered truncation to even 14 bits makes one whit of audible difference (though perhaps a couple of my really dynamic classical tracks might go over the edge in a slightly quieter room). It all just seems really oversold, especially if we're only talking about doing it once, down to 16-bit, at the end of a project.
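 
A small sketch for anyone wanting to run this truncation test themselves (the filename and helper are hypothetical; "truncation" here means zeroing everything below the top N bits of float samples in [-1, 1)):

```python
# Make non-dithered truncated copies of a track at several bit depths,
# for blind comparison against the original. Filename is a placeholder.
import numpy as np
import soundfile as sf

def truncate_to_bits(x, bits):
    """Truncate float samples in [-1, 1) to `bits` bits of resolution."""
    q = 2.0 ** (bits - 1)
    return np.floor(x * q) / q

audio, fs = sf.read("track.wav", dtype="float64")
for bits in (16, 14, 12, 10):
    out = truncate_to_bits(audio, bits)
    # Writing as 24-bit PCM preserves the truncated values exactly.
    sf.write(f"track_{bits}bit.wav", out, fs, subtype="PCM_24")
```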
 
Jan 15, 2016 at 2:56 PM Post #146 of 146
   
Even with your pathological example, could you hear the quantization artifacts below the actual signal? (...) It all just seems really oversold, especially if we're only talking about doing it once, down to 16-bit, at the end of a project.

 
Yes, it's fairly apparent*, but as you say, it's also quite an extreme example. It might apply to someone like Ryoji Ikeda, but not to many others.
 
*If you amplify by 20–30dB
 
