Benchmark talked about headroom for intersample peaks in DAC, does it really matter?
Aug 4, 2017 at 10:26 AM Post #77 of 90
...the original design is flawed ... that includes millions, possibly most of the DACs out in the field. So your definition of "broken" is also the definition of the norm. We need to be aware of the average and norm. That means, "broken" or not, we must work with it if we don't want our audio clipped at the DAC.

Disagree. Any properly designed DAC will properly manage all valid Nyquist information (information that is unclipped in reconstructable Nyquist space). Yes, that's a tautology, but it also happens to be true. Proper DAC design is not flawed. Garbage in, garbage out. In other words, if we have digital program that shows no clipping in a properly designed (reconstructive) digital meter, then a properly designed DAC will deliver a non-clipped analog reconstruction. I believe there may be a caveat here for "synthetic data" created to purposely confuse an anti-imaging filter, but still researching that.
 
Last edited:
Aug 4, 2017 at 10:33 AM Post #78 of 90
Disagree. Any properly designed DAC will properly manage all valid Nyquist information (information that is unclipped in reconstructable Nyquist space). Yes, that's a tautology, but it also happens to be true. Proper DAC design is not flawed. Garbage in, garbage out. In other words, if we have digital program that shows no clipping in a properly designed (reconstructive) digital meter, then a properly designed DAC will deliver a non-clipped analog reconstruction. I believe there may be a caveat here for "synthetic data" created to purposely confuse an anti-imaging filter, but still researching that.

Improperly designed but working as designed is not "broken". Broken is not working as designed. And now we're arguing about the meaning of the word "Broken". Really????

You are disagreeing with the premier English dictionary, not me. Go for it, dude!
 
Aug 4, 2017 at 10:34 AM Post #79 of 90
I see no point in developing your own word definitions. "Broken" does not mean "design deficiency".

I don't care what you call it, inter-sample clipping (better to just call it clipping) is a result of poor product design or operator misuse, or both. In the engineering world, a poorly designed product that fails to provide an operator with sufficient information is as good as broken. But, yes, that's my personal bias. If you don't like the word "broken" then use "insufficient to the task."
 
Last edited:
Aug 4, 2017 at 11:53 AM Post #80 of 90
What's fun is that many old albums, where the noise floor was a real issue (even more so as time passed and tape copies multiplied), were recorded with peaks several dB below 0. Yet for most modern material, where even 16 bit has more room than needed, we get stuff clipped or parked at some very "safe" level like -0.1 dB (so room, much impressed). And everybody is paranoid about losing a bit of dynamic range where no relevant signal is recorded anyway. It's like the vinyl story all over again: we get far better tools and resolution with digital, so let's push everything to the limit until people start thinking of vinyl as the format with the best dynamics!
 
Aug 4, 2017 at 12:13 PM Post #81 of 90
My car's speedometer goes up to 120 mph, but I've never felt the need to actually go that fast. Why push it? In a studio situation, especially at 24 bit, I don't see why you wouldn't work with healthy headroom and just normalize at your bounce-down.

cloggins, try the word "lousy". That might work. This whole conversation reminds me of the two kings in Gulliver's Travels arguing over which side of an egg to break.
 
Last edited:
Aug 4, 2017 at 12:26 PM Post #82 of 90
[1] A properly designed digital mixing engine, or any digital processing system, will maintain unclipped signal integrity unless the operator pushes the program level beyond full scale. In a properly designed digital metering system, all peaks are identified. In a poorly designed digital metering system, some peaks (the inter-sample information) are missed. Poorly designed tools, nothing more.
[2] We also know that most mixing engines are based on a 32-bit IEEE float topology, which assures that all 24 audio bits are maintained perfectly (the actual mixing algorithm is a different story, which is why different DAWs sound different after a mix), hence any "missed clipping" after mixing results from either insufficient tools or operator error.

1. IRRELEVANT! True peak meters and true peak limiters only became available a few years ago. By your definition, all music is broken, all music producers are broken/incompetent, so are all mix engineers and mastering engineers and they all have been for more than 30 years. The only people who aren't producing broken audio content are TV re-recording mixers. So what relevance is there to your statements? None whatsoever, we have to deal with the situation as it exists, not with some engineering utopia of perfect tools and equipment.

2. Nope, most are 64bit these days and have been for a number of years. The summing engine is pretty much identical in all DAWs and they all sound identical. And not only among DAWs, but also between DAWs and hardware mixers. Of course, there is a great deal of difference between the functionality, the plugin processors and how different producers/engineers employ those processors, but the summing engines are effectively identical.

G
 
Aug 4, 2017 at 12:57 PM Post #83 of 90
So, let me see if I have this right: Benchmark is wrong, gregorio is wrong, I'm wrong, the rest of us here are wrong, Merriam Webster is wrong, (or perhaps you'd say "broken"?), and that leaves only one guy who is right? And that would be…


Here's a suggestion for the word of the day.
 
Last edited:
Aug 4, 2017 at 1:21 PM Post #84 of 90
On a positive note: I'm currently listening to the BBC Radio 3 lossless FLAC MPEG-DASH stream.
And I am really surprised and glad to find that, on average:
  • TPL is around -7 dBTP
  • RMS is in the -35 dBFS area
  • LRA (Loudness Range) is in the 24 LU area
Luckily not everything is broken.

Edit: typo correction
 
Last edited:
Aug 4, 2017 at 1:22 PM Post #85 of 90
1. IRRELEVANT! True peak meters and true peak limiters only became available a few years ago. By your definition, all music is broken, all music producers are broken/incompetent, so are all mix engineers and mastering engineers and they all have been for more than 30 years. The only people who aren't producing broken audio content are TV re-recording mixers. So what relevance is there to your statements? None whatsoever, we have to deal with the situation as it exists, not with some engineering utopia of perfect tools and equipment.

Maybe so. I don't know. I'm trying to address objective technical issues as they are now, today, not "relevancy" (whatever that is) over the last 30 years. This isn't about engineering utopia, it's about good engineering tools and practices. The tools to identify and manage all PCM peak data are available to us, now, today, and have been for years (as you say). Those who aren't using such tools should be extra careful, or give their project to an engineer who has the tools to do the job right.
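To make "the tools" concrete: a true-peak reading is just a sample-peak reading taken after oversampling and interpolation. Below is a minimal sketch (a toy illustration under my own choices of tap count and test tone, not a compliant ITU-R BS.1770 meter):

```python
import numpy as np

def true_peak_db(x, L=4, taps=97):
    """Toy true-peak estimate: zero-stuff by L, interpolate with a
    windowed-sinc FIR, and report the largest absolute sample in dB."""
    up = np.zeros(len(x) * L)
    up[::L] = x                                # zero-stuffed signal
    n = np.arange(taps) - taps // 2
    h = np.sinc(n / L) * np.hamming(taps)      # truncated ideal interpolator
    y = np.convolve(up, h, mode="same")
    return 20 * np.log10(np.max(np.abs(y)))

fs = 48_000
n = np.arange(4800)
# A 12 kHz sine sampled at its 45-degree points: every sample sits at
# ~0.707 FS, but the reconstructed waveform reaches 1.0 FS between samples.
x = np.sin(2 * np.pi * 12_000 * n / fs + np.pi / 4)

sample_peak = 20 * np.log10(np.max(np.abs(x)))   # about -3.0 dBFS
inter_peak = true_peak_db(x)                     # about  0.0 dBTP
```

A plain sample-peak meter reads this tone about 3 dB low; the oversampled reading recovers the inter-sample maximum, which is the whole point of true-peak metering.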

2. Nope, most are 64bit these days and have been for a number of years. The summing engine is pretty much identical in all DAWs and they all sound identical. And not only among DAWs, but also between DAWs and hardware mixers. Of course, there is a great deal of difference between the functionality, the plugin processors and how different producers/engineers employ those processors, but the summing engines are effectively identical.

You're confusing micro-processing (hardware) with the audio engine (software). Yes, most DAWs today are written for a 64-bit processing environment (Intel, etc.), but almost all DAWs process audio using a 32-bit IEEE float engine (Avid, Logic, Sonar, etc.), and as such are limited to 24-bit audio. And if you think all DAW summing engines sound the same, you've probably not done any ABX comparisons, as we have. But I'll let you believe whatever you wish, as "what sounds best" gets into the realm of personal opinion.
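For what it's worth, the factual kernel of the 32-bit float claim can be checked directly: an IEEE 754 single-precision float carries a 24-bit significand (23 stored bits plus one implicit bit), so it represents 24-bit PCM sample values exactly but cannot hold a 25th bit. A quick check:

```python
import numpy as np

# 24-bit integers fit exactly in a float32 significand...
assert np.float32(2**23) + np.float32(1) == np.float32(2**23 + 1)

# ...but a 25th bit does not: 2^24 + 1 rounds back to 2^24.
assert np.float32(2**24) + np.float32(1) == np.float32(2**24)
```

Whether any given DAW's summing engine actually runs 32-bit or 64-bit float arithmetic is a separate, product-specific question, as the rest of this exchange shows.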
 
Last edited:
Aug 4, 2017 at 1:35 PM Post #86 of 90
This whole conversation reminds me of the two kings in Gulliver's Travels arguing over which side of an egg to break.

Actually, the level of bad information in this thread is breathtaking. Go back and look at the waveform image in post #20. Or simply look at gregorio's reply in comment 82. I mentioned that "most mixing engines are based on a 32-bit IEEE float topology" to which he/she replied "Nope, most are 64bit these days and have been for a number of years." He/she didn't even know the difference between DAW hardware (64-bit) and DAW software (32-bit). Sigh.......
 
Last edited:
Aug 4, 2017 at 1:44 PM Post #87 of 90
You're fighting an uphill battle that won't likely add up to much, but it's plenty entertaining. Kind of like watching film of elk butting heads in the wild.
 
Aug 4, 2017 at 3:43 PM Post #88 of 90
And if you think all DAW summing engines sound the same, you've probably not done any ABX comparisons, as we have. But I'll let you believe whatever you wish, as "what sounds best" gets into the realm of personal opinion.

That sounds interesting! Has the study been published? And who is "we"?

This could be your chance to change personal opinion!
 
Aug 4, 2017 at 4:14 PM Post #89 of 90
[1] The tools to identify and manage all PCM peak data are available to us ... [1a] Those who aren't using such tools should be extra careful, or give their project to an engineer who has the tools to do the job right.

[2] I mentioned that "most mixing engines are based on a 32-bit IEEE float topology" to which he/she replied "Nope, most are 64bit these days and have been for a number of years." He/she didn't even know the difference between DAW hardware (64-bit) and DAW software (32-bit). Sigh.......
Yes, most DAWs today are written for a 64-bit processing environment (Intel, etc.), but virtually all DAWs process audio using a 32-bit IEEE float engine (Avid, Logic, Sonar, etc.), and as such are limited to 24-bit audio.

1. Not entirely but quite close!
1a. Go tell that to all the music labels and all the world's great producers and mastering engineers.

2. Oh dear, here we go again, you're talking nonsense, why? What do you get out of it, do you like making a fool of yourself? Avid's Pro Tools has a 64bit summing engine, it's had a 64 bit summing engine for 6 years. Before that, going back at least 15 years, it had a 56bit summing engine (48 bit + an 8 bit accumulator). Here is a technical paper going back more than a decade explaining the Pro Tools 48/56bit summing engine! Come on, even just a quick look on wikipedia will tell you: "The Pro Tools mix engine has traditionally employed 48-bit fixed point arithmetic, but floating point is also used in some cases, such as with Pro Tools HD Native. The new HDX hardware uses 64-bit floating point summing." Jeez, what is it with you?

G
 
Aug 5, 2017 at 4:41 PM Post #90 of 90
Well, that was written by someone in marketing.



Well it can go a lot higher than 3 dB if you try. If you use a signal like [... +1 -1 +1 +1 -1 +1, ...] you can repeat the pattern to make the intersample peak go arbitrarily high.





This is highly contrived, though. I think it unlikely that you'd see more than a few dB with real music.



The reconstructed analog signal exists between the samples, so if the samples are already at FS, the reconstructed signal can exceed this in level.



Has nothing to do with Nyquist. Nyquist limit for 48 kHz is 24 kHz.
  • A sine wave at 12 kHz with samples [+1, +1, -1, -1, +1, +1] will have intersample peaks of +3 dBFS top and bottom.
  • A DC-shifted sine wave at 16 kHz of [+1, +1, -1, +1, +1, -1] will have intersample peaks of +4.5 dB on top side only.
Both are under Nyquist limit.



Or worse. I found this thread because I'm designing a product with a DAC that doesn't clip for intersample peaks, but instead becomes "uncontrolled" and produces random signals in that region instead. So if you're playing at 96 kHz, and play a 32 kHz signal as described above [+1, +1, -1, +1, +1, -1], it should be completely inaudible ultrasound, but the intersample peaks produce very audible white noise/distortion instead. So I'm trying to figure out how much I need to attenuate the digital signal to make sure this never happens in realistic situations. -3.5 dB is probably good enough?

This experiment found 3% of songs with >4 dB intersample peaks, though they couldn't say which songs they were, or whether those samples were already at FS due to clipping.

I did some homework with your patterns.
Please, would you mind providing more details about:
  1. The pattern with the ISP vs. sample-length curve:
    • unless I'm misinterpreting it, with 1 million samples I am getting TPL = +3.6 dBTP @ 48 kSps / 24 bits
    • it seems to be a phase-modulated sampled signal (Fc = 24 kHz, Fm = 2.4 kHz), or even a kind of bi-phase coding with wrongly interpolated phase, plus some DC residual
    • whatever it is, sorry, but where is the audio?
  2. The DC-shifted sine wave at 16 kHz, [+1, +1, -1, +1, +1, -1]:
    • agreed. I am getting TPL = +4.6 dBTP @ 48 kSps / 24 bits
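Those readings can be cross-checked numerically. FFT zero-padding gives the exact bandlimited interpolation of a repeating sample pattern, so the intersample peak of the two quoted examples can be computed directly (a sketch; the function name and oversampling factor are my own choices):

```python
import numpy as np

def periodic_true_peak_db(pattern, reps=1200, L=16):
    """Bandlimited true peak of a repeating sample pattern, found by
    upsampling L times via FFT zero-padding (exact for periodic signals
    with no energy in the Nyquist bin)."""
    x = np.tile(pattern, reps)
    N = len(x)
    X = np.fft.rfft(x)
    Y = np.zeros(N * L // 2 + 1, dtype=complex)
    Y[: len(X)] = X                   # keep the original spectrum
    y = np.fft.irfft(Y, n=N * L) * L  # zero-padded bins -> interpolation
    return 20 * np.log10(np.max(np.abs(y)))

# 12 kHz at 48 kHz: reconstructs to a sqrt(2)-amplitude sine, ~ +3.0 dBTP
peak_12k = periodic_true_peak_db([+1, +1, -1, -1])

# DC-shifted 16 kHz at 48 kHz: 1/3 + (4/3)cos(...), so ~ +4.4 dBTP on top
peak_16k = periodic_true_peak_db([+1, +1, -1])
```

The +4.4 dBTP result is consistent with the +4.6 dBTP meter reading above; different meters use different interpolation filters, so a couple of tenths of a dB of spread is expected.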
You requested advice on how much to attenuate the digital signal to avoid such situations in your project. Well, if your DAC is dealing with audio, and provided that you are feeding it proper input, I think you have everything at hand for verifying the 3.5 dB attenuation value.
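One caveat worth making explicit (my own arithmetic, not a measurement): a pad only helps if it is at least as large as the measured overshoot, and the synthetic 16 kHz pattern quoted above overshoots by roughly +4.4 dBTP, which a fixed -3.5 dB pad would not cover. A trivial helper makes the relationship concrete:

```python
def required_pad_db(measured_true_peak_dbtp, margin_db=0.1):
    """Attenuation (dB) needed so the reconstructed waveform stays at or
    below 0 dBTP, plus a small safety margin."""
    return max(0.0, measured_true_peak_dbtp + margin_db)

print(required_pad_db(4.4))    # about 4.5 dB for the synthetic pattern
print(required_pad_db(-7.0))   # 0.0: material with headroom needs no pad
```

For real program material the survey cited earlier in the thread suggests overshoots above +4 dBTP are rare (~3% of songs), which is the judgment call behind a fixed value like -3.5 dB.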
 
