NOS dacs and upsampling

Mar 28, 2025 at 5:43 AM Post #31 of 74
Is this statement based on that old Lipshitz & Vanderkooy paper? There's a reason most dacs convert to DSD internally, with any multi-bit solution you either get settling time issues or ultrasonic aliasing around the MHz range. All their paper proved is you can't naively dither DSD or overloading will occur.
It’s mainly based on the whole discussion around 20-25 years ago, including both the Vanderkooy & Lipshitz papers on the subject. Their papers demonstrated that 1bit DSD/SACD could not be fully dithered. By around the mid to late 1990s many DACs were oversampling to DSD rates but using several bits rather than one, so the issue was largely irrelevant, except for SACD.
Yes, Benchmark uses 0dB SPL as the threshold of absolute inaudibility and most music when played back does not have peaks above 110dB SPL. Of course you can contrive a situation where these effects become audible, but my point was that generally speaking these effects should not be audible.
Technically the theoretical threshold of human hearing is about -9dB SPL, but I believe only around -7dB SPL to -8dB SPL has ever actually been achieved in practice. That’s not achievable by consumers though; few could achieve 0dB SPL, and 10-15dB SPL would be more realistic for the vast majority of audiophiles, although 0dB SPL is a pretty safe bet, which is of course why it was chosen in the first place. As I mentioned though, a lot of the measured artefacts in digital conversion can only exist in the digital or analogue domains but not in the acoustic domain at reasonable listening levels, so any discussion of audibility is moot.
Discussing science isn't allowed?
Not really. The TOS specifically prohibits the mention of biases, controlled listening tests, etc. But posts are often moved or deleted on the basis of containing science because “that’s what the Sound Science subforum is for”.

G
 
Mar 28, 2025 at 7:40 AM Post #32 of 74
Discussing science isn't allowed? The only thing that isn't allowed is telling cable buyers etc. that everything they hear differently is imagined, to protect threads against pointless rehashed debates and mud slinging.
A thread where people constructively discuss how their objective and subjective experiences diverge and might possibly be reconciled is the exact opposite, unless sound science forum users start leaking out and proceed to tell everyone they're wrong.
Agreed. BTW, this thread is the most informative I've seen on HF in a very long time.
 
Mar 28, 2025 at 11:29 AM Post #33 of 74
Yep, bizarre isn’t it? They buy a Non-OverSampling DAC and then oversample it.
To be able to control what filters to use etc. It's amazing to be able to try out different things and have full control of the chain. May + HQP has taught me a lot.
1. When choosing bit depth it doesn’t matter. Each bit represents about 6dB of dynamic range (6.02dB to be precise), so 16bit has about 96dB of dynamic range. Music recordings typically have less than 50dB dynamic range: only about 30dB with the highly compressed recordings, up to about 60dB with some uncompressed orchestral recordings, and a very few classical recordings that go to about 70dB. Studio/music microphones typically have only 70dB or less dynamic range and the most dynamic go up to just over 80dB. In all cases, this is a lot (or a massive amount) less than the dynamic range offered by 16bit; even the highly dynamic (60dB dynamic range) orchestral recordings are effectively using just 10bits of the 16bits, the remaining 6bits are just random values (noise). So, what does 20bit or 24bit get you except ever quieter levels of noise? 24bit is useful when recording because it allows a huge amount of headroom but for playback there is literally nothing (other than noise) to be gained.
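To make the arithmetic in the quoted paragraph concrete, here is a minimal sketch (Python; the 6.02 dB-per-bit relationship is the standard one, and the ~60 dB orchestral figure is simply the number quoted above):

```python
# Rough dynamic-range arithmetic for linear PCM: each bit buys ~6.02 dB.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB available")

# A very dynamic orchestral recording (~60 dB of real dynamic range, per the
# quoted post) only exercises about 60 / 6.02 ~= 10 of a 16-bit file's bits;
# the remaining ~6 bits below that just encode noise.
recording_dr_db = 60.0
print(f"effective bits used: ~{recording_dr_db / 6.02:.1f}")
```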
What about low signal accuracy:

Fig.13 HoloAudio May, waveform of undithered 1kHz sinewave at –90.31dBFS, 16-bit data (left channel blue, right red).

Fig.14 HoloAudio May, waveform of undithered 1kHz sinewave at –90.31dBFS, 24-bit data (left channel blue, right red).

Even if we didn't go to -90dBFS, this still applies. The more levels we have available, the more accurately we can represent the wave. Note that there won't be any further reconstruction so these bits represent the final samples as is.
I suggest moving this discussion over to the Sound Science forum if the OP or anyone else wants the actual facts, as discussing science isn’t allowed in this or any other forum on Head-Fi except the sound science forum. If the answer you’re looking for is just marketing misinformation or audiophile myths based on marketing misinformation, then it’s fine to just leave it here.
The problem with sound science forum is that the attitude there is arrogant and passive aggressive. No-one here has any "agenda", but we are merely trying to understand what we hear and go further. My experience with sound science forum has been very negative as the general attitude isn't helpful. The ending of your comment signals the very attitude I'm talking about. You are not trying to help, but you are attacking.

Your discussion about bit depth also isn't starting with a good grounding when you say things like:

"So, what does 20bit or 24bit get you except ever quieter levels of noise? 24bit is useful when recording because it allows a huge amount of headroom but for playback there is literally nothing (other than noise) to be gained.".

I have just stated earlier that I can hear a clear difference between the two. It's trivial to test those. There are also others saying the same thing. IMO it's arrogant to just sweep that under the rug and just talk about how in theory it's not audible. Every theory has its assumptions, scope and limits. To me the right scientific attitude would be to either study why these people hear these things or else just leave it for others who are interested. I bet that if we made all the assumptions explicit, we would find out that we have been comparing apples and oranges. As an example: an R2R dac converts the PCM directly into analog, while a delta-sigma dac will further process it into 1 bit. We can't directly compare the situation to official or unofficial studies where a delta-sigma dac has been used.

Yes, it's not possible to study and prove everything that people claim to hear, but assuming that everyone else is delusional, stupid or selling something isn't constructive. We all know the argument about a fly next to a jet plane, yet we can still hear a difference between, for example, 20 and 24 bits in these specific circumstances. I started this thread because I couldn't explain that with my knowledge of our current theories. People discussing in these threads are well aware of the brain's ability to imagine things. We are not splitting hairs, but talking about clear differences.
 
Mar 29, 2025 at 3:53 PM Post #34 of 74
To be able to control what filters to use etc.
Sure but then why would you want to use a suboptimal filter to start with, rather than the standard (optimal) filters we’ve been using for decades? It’s only really in the audiophile world we see optional filters. Professional ADCs/DACs don’t have those filter options because they only want a standard (optimal) filter resulting in high fidelity.
What about low signal accuracy:
What about it? Dither converts quantisation error into noise and therefore the low-level signal accuracy is effectively perfect.
The more levels we have available, the more accurately we can represent the wave.
No, bit depth is irrelevant to the output wave accuracy because again, dither removes all the error. That’s how DSD/SACD works, SACD uses only one bit and yet isn’t 16 times more inaccurate than CD.
Note that there won't be any further reconstruction so these bits represent the final samples as is.
Yes, that’s the major problem with NOS; the samples are not waveforms, they’re just discrete instances in time that need to be reconstructed into a waveform (with an anti-image/reconstruction filter) as per the requirements of digital audio.
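As a toy illustration of that point (a minimal sketch using scipy's resample_poly as a stand-in reconstruction filter, not a model of the May or any other DAC): holding each sample, as a NOS DAC effectively does, leaves strong spectral images above the original Nyquist frequency, whereas oversampling through an anti-image filter suppresses them.

```python
import numpy as np
from scipy.signal import resample_poly  # polyphase (windowed-sinc) anti-image filter

fs, n, up = 44100, 4096, 8
x = np.sin(2 * np.pi * 5000 * np.arange(n) / fs)   # 5 kHz tone sampled at 44.1 kHz

zoh = np.repeat(x, up)                 # NOS-style zero-order hold: each sample held 8x ("staircase")
flt = resample_poly(x, up, 1)          # 8x oversampling through a reconstruction filter

def worst_image_db(sig):
    """How far below the tone the strongest image above 22.05 kHz sits (dB)."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / (fs * up))
    return 20 * np.log10(spec.max() / spec[freqs > fs / 2].max())

print(f"zero-order hold: strongest image ~{worst_image_db(zoh):.0f} dB below the tone")
print(f"filtered (8x)  : strongest image ~{worst_image_db(flt):.0f} dB below the tone")
```

How audible those images end up being after a given analogue stage is a separate question; the sketch only shows what the hold versus the filter does to the spectrum.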
My experience with sound science forum has been very negative as the general attitude isn't helpful.
That all depends. If someone posts to the Sound Science forum claiming marketing falsehoods or audiophile myths are actual facts, obviously that won’t be well received, but if someone goes there asking a genuine question, then they’re usually helpful.
Your discussion about bit depth also isn't starting with a good grounding when you say things like:
"So, what does 20bit or 24bit get you except ever quieter levels of noise? 24bit is useful when recording because it allows a huge amount of headroom but for playback there is literally nothing (other than noise) to be gained.".
That quoted assertion is entirely in line with the proven facts, so I can’t see how it is apparently not “a good grounding”.
I have just stated earlier that I can hear a clear difference between the two. It's trivial to test those.
It is indeed trivial to test them; Sampling Theory is proven, having been tested both in theory and of course in practice. I have no idea what you can hear or under what conditions you perceived a difference; I was just stating the facts.
There are also others saying the same thing. IMO it's arrogant to just sweep that under the rug and just talk about how in theory it's not audible.
There are many people “saying the same thing”, in fact I’m one of them! It’s trivially easy to tell 16bit apart from 24bit, provided you don’t noise-shape dither the 16bit and providing you whack the levels up so high it would damage your equipment and/or hearing during the loudest sections.
Yes, it's not possible to study and prove everything that people claim to hear …
No one is claiming to have studied and proved everything that people claim to hear, only certain things, such as in/audible levels of digital artefacts.

G
 
Mar 30, 2025 at 8:06 AM Post #35 of 74
What about it? Dither converts quantisation error and therefore the low level signal accuracy is effectively perfect.
Dither helps decorrelate quantization error, but does nothing to improve low-level signal accuracy; the signal accuracy is still a function of bit-depth, unless you are talking about noise shaping + dither.

No, bit depth is irrelevant to the output wave accuracy because again, dither removes all the error. That’s how DSD/SACD works, SACD uses only one bit and yet isn’t 16 times more inaccurate than CD.
Here too you are referring to noise shaping and not just dither; a 1-bit signal cannot be dithered, nor can it by itself match the accuracy of 16bit CD even at 64x rates.

If you are saying bit-depth in the strict sense is irrelevant because noise shaping can be used to increase the effective bit depth in the audible band, then yes, I agree with that. However, noise shaping also requires a higher sampling rate, at a minimum 4x rates but ideally 8x or higher. The lower the bit depth, the higher the sampling rate you will need to push out the quantization noise. This is the reason why SACD/1-bit DSD needs 64x CD rates.
 
Mar 30, 2025 at 8:36 AM Post #36 of 74
Sure but then why would you want to use a suboptimal filter to start with, rather than the standard (optimal) filters we’ve been using for decades? It’s only really in the audiophile world we see optional filters. Professional ADCs/DACs don’t have those filter options because they only want a standard (optimal) filter resulting in high fidelity.
What exactly is meant by "standard (optimal) filters," and why should any other filter be considered suboptimal? ADCs and DACs must operate within certain hardware constraints, and the filters they use are often the best available given those limitations. However, it’s neither fair nor scientific to categorically dismiss other filters as suboptimal. With the virtually unlimited processing power available today, it’s possible to design oversampling filters with superior attenuation and better rejection of out-of-band images - objectively better in many cases. What we’re really discussing here are the audible and subjective differences, and sharing experiences based on those preferences. Some might prefer NOS for its particular sonic qualities, while others may favor external oversampling or the oversampling filters built into the DAC.
 
Mar 30, 2025 at 11:58 AM Post #37 of 74
Dither helps decorrelate quantization error, but does nothing to improve low level signal accuracy, the signal accuracy is still a function of bit-depth unless you are talking about noise shaping + dither.
If dither decorrelates quantisation error then by definition there is no longer quantisation error, so the signal must be more accurate. Then it’s a question of how much of that effectively perfect signal one can hear below the dither noise. If it’s noise-shaped dither then we can theoretically hear more of that effectively perfect signal below the dither noise because that dither noise has been moved.
If you are saying bit-depth in the strict sense is irrelevant because noise shaping can be used to increase the effective bit depth in the audible band, then yes, I agree with that.
Yes, noise shaped dither effectively converts quantisation error into noise and then moves that noise to less audible or inaudible frequency regions. Noise shaped dither is a requirement of SACD and has been standard practice in CD mastering for 25-30 years.
However, noise shaping also requires a higher sampling rate and at the minimum 4x rates, but ideally 8x or higher rates.
So you’re saying we can’t use noise-shaped dither for CD mastering (44.1kFs/s), despite the fact it’s been standard practice for decades?
The lower the bit depth, the higher the sampling rate you will need to push out the quantization noise. This is the reason why SACD/1-bit DSD needs 64x CD rates.
Sure, with say just 1bit there is a great deal of dither noise and therefore a far greater frequency range is required to redistribute all that noise. With say 16bit though, there is a great deal less dither noise and therefore it can be redistributed to areas of the audible band at levels that are inaudible.
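As a rough sketch of what that redistribution looks like (a toy first-order error-feedback shaper with TPDF dither at 44.1 kHz, written in Python; real mastering-grade noise shapers use higher-order, psychoacoustically weighted filters): the shaped version ends up with less noise in the low/mid band at the cost of more noise near Nyquist.

```python
import numpy as np

fs, n = 44100, 1 << 15
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs)      # 1 kHz tone at -6 dBFS
q = 2.0 ** -15                                               # 16-bit quantisation step
tpdf = (np.random.rand(n) - np.random.rand(n)) * q           # +-1 LSB triangular dither

def requantise(sig, shaped):
    out, err = np.empty_like(sig), 0.0
    for i, s in enumerate(sig):
        v = s - err if shaped else s              # first-order error feedback when shaped
        out[i] = np.round((v + tpdf[i]) / q) * q  # dithered 16-bit quantiser
        err = out[i] - v
    return out

for name, shaped in (("flat TPDF dither", False), ("1st-order shaped ", True)):
    noise = requantise(x, shaped) - x
    spec = np.abs(np.fft.rfft(noise * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    lo = 20 * np.log10(np.sqrt(np.mean(spec[freqs < 4e3] ** 2)))
    hi = 20 * np.log10(np.sqrt(np.mean(spec[freqs > 18e3] ** 2)))
    print(f"{name}: noise below 4 kHz {lo:6.1f}, above 18 kHz {hi:6.1f} (relative dB)")
```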
What exactly is meant by "standard (optimal) filters," and why should any other filter be considered suboptimal?
Standard (optimal) filters are filters that have a relatively fast roll-off (starting say around 19-20kHz) with typically linear or near linear phase, thereby “optimal” because they cause no audible artefacts in the hearing band and “standard” because this was the only type of filter employed for many years and is still the only type used in professional converters. A filter that does cause audible artefacts would therefore be suboptimal, for example a slow roll-off filter starting at say 10kHz.

G
 
Mar 30, 2025 at 12:41 PM Post #38 of 74
To be able to control what filters to use etc. It's amazing to be able to try out different things and have full control of the chain. May + HQP has taught me a lot.

What about low signal accuracy:
Fig.13 HoloAudio May, waveform of undithered 1kHz sinewave at –90.31dBFS, 16-bit data (left channel blue, right red).
Fig.14 HoloAudio May, waveform of undithered 1kHz sinewave at –90.31dBFS, 24-bit data (left channel blue, right red).

Even if we didn't go to -90dBFS, this still applies. The more levels we have available, the more accurately we can represent the wave. Note that there won't be any further reconstruction so these bits represent the final samples as is.

The problem with sound science forum is that the attitude there is arrogant and passive aggressive. No-one here has any "agenda", but we are merely trying to understand what we hear and go further. My experience with sound science forum has been very negative as the general attitude isn't helpful. The ending of your comment signals the very attitude I'm talking about. You are not trying to help, but you are attacking.

Your discussion about bit depth also isn't starting with a good grounding when you say things like:

"So, what does 20bit or 24bit get you except ever quieter levels of noise? 24bit is useful when recording because it allows a huge amount of headroom but for playback there is literally nothing (other than noise) to be gained.".

I have just stated earlier that I can hear a clear difference between the two. It's trivial to test those. There are also others saying the same thing. IMO it's arrogant to just sweep that under the rug and just talk about how in theory it's not audible. Every theory has its assumptions, scope and limits. To me the right scientific attitude would be to either study why these people hear these things or else just leave it for others who are interested. I bet that if we made all the assumptions explicit, we would find out that we have been comparing apples and oranges. As an example: an R2R dac converts the PCM directly into analog, while a delta-sigma dac will further process it into 1 bit. We can't directly compare the situation to official or unofficial studies where a delta-sigma dac has been used.

Yes, it's not possible to study and prove everything that people claim to hear, but assuming that everyone else is delusional, stupid or selling something isn't constructive. We all know the argument about a fly next to a jet plane, yet we can still hear a difference between, for example, 20 and 24 bits in these specific circumstances. I started this thread because I couldn't explain that with my knowledge of our current theories. People discussing in these threads are well aware of the brain's ability to imagine things. We are not splitting hairs, but talking about clear differences.
Hearing differences between 20 and 24bit contradicts the established facts in the same way as Rob Watts and his claim of a forever audible impact from -250dB noise shaping.
The instantaneous dynamic range of the ear is something like 60 or 70dB (maybe someone with a failing acoustic reflex could get over that, but it would also have caused serious damage to their hearing over the years, so that's not our answer).
We sure can hear sounds of lower intensity than some louder ones, and even within ambient noise, we can perceive quieter sound cues up to a point. But human hearing, even with its bag of tricks, still behaves in a way following the well accepted model of auditory masking, where a loud 500Hz tone might not completely mask a nearly as loud 550Hz tone, but it will mask (as in, we don't notice it's there) a quiet 550Hz tone (and the same goes for temporal masking). We encounter a fairly broad range of noises from our environment, even in subjectively silent rooms in our house. A quiet recording studio is usually said to have noise in the 20 to 30dB SPL range; a house is usually worse.
As gregorio said, in an anechoic chamber with kids, we'll sometimes get perception of sound below 0dB (not much below, but a handful is expected). On the other hand, in a normal listening setting at home, and not being a kid anymore, what are the odds of us picking up stuff at or below 0dB SPL?

We have all the well established, many times verified data for what to expect on average; those are the facts we know and trust, because they've been tested rigorously and replicated (it's literally what made us know that 0dB was not in fact the lower limit for humans, even though that's the definition it was initially given). Now imagine what conditions you'd need to have a chance of hearing changes in the music below 20 bits (so about 20*6 = 120dB below the peaks). Will those be above 0dB SPL? Let's say we want those audible differences at 10dB SPL, in a most ideal scenario and with a fair amount of optimism for someone still young. That means listening to music with peaks reaching 10 + 120 + however many extra dB are needed to reach the signal that is imagined to change in your hypothesis (somewhere between 1 and 4 bits under 20). So we need peaks at least above 130dB SPL. Do you listen to music that way? Doesn't seem like a safe habit.
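Written out, the back-of-the-envelope sum in that paragraph is simply (using the optimistic 10 dB SPL floor assumed above):

```python
# The sum from the paragraph above: an (optimistic) 10 dB SPL audibility floor
# plus ~6.02 dB per bit for content sitting 20 bits below the peaks.
audibility_floor_spl = 10.0
bits_below_peak = 20
required_peak_spl = audibility_floor_spl + 6.02 * bits_below_peak
print(f"peaks of at least ~{required_peak_spl:.0f} dB SPL")   # ~130 dB SPL, before any "extra" bits
```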

Let's say you have that situation at home. We still have a few facts that challenge the possibility of hearing anything that actually is below 20bits:
The instantaneous dynamic range of your ears is the most obvious. You'd need an extremely quiet passage, a long enough one to even stand a chance. Do you feel the difference you're noticing, only at the beginning and end of tracks during silent passages? If not, we seem to have a direct conflict with the idea that you have normal human ears. It's more likely that the hypothesis is wrong.
Another obvious issue is the recorded material. What kind of signal are we going to find 20bit down and below? Noises, noises, noises, so very many noises. Even if your hypothesis is correct, it would be a matter of hearing changes in the background noises. Hard to get motivated about that, even if you're entirely right.
Then there is the simple notion that you need a playback system that can handle that kind of dynamic range and resolution. No DAC actually manages 24 bits, and then the amp needs to not create any distortion loud enough to mask content below 20 bits. Same for the headphones or speakers. One more necessary condition that doesn't seem realistic.

All in all, the facts disagree with your idea that what you're noticing really are differences below 20bits in the music/signal.
And I have the same conclusion for the same reasons about all the weird ideas Watts brought up about hearing below what humans are expected to hear. Because the facts, and some very well researched ones, disagree with the possibility, meaning the idea is wrong.

Now, of course, this does not mean Rob never heard anything, or that you never heard anything changing. It's one clear possibility that you didn't, and you probably should try to test for it (to prove to yourself that you're indeed hearing something instead of feeling like you do for reason X or Y, or some visual cues tricking the brain). But there is another possibility that's just as likely: your correlation between the change you are hearing and the signal changing below 20bits is the wrong one. Pretty much everything we know about human hearing, music albums, ambient noises, and playback gear is screaming at us that if you're hearing something, it must be a good deal louder than 120dB below peak.
So I think a good start would be to measure the two scenarios allegedly making audible changes, and confirm how loud the differences actually are in your house with your gear. Obviously you're not going to measure anything near or below 0dB SPL out of a headphone in your room (other than more noises unrelated to our concerns), but the output of the DAC and that of the amp might already give us the information we're looking for.
And if nothing is found to differ in the signal louder than 20 bits down, then you'd start to have a more serious case for saying you're actually hearing something that low (if you also test for biases).

Hearing thresholds, as commonly given, tend to already be fairly optimistic. Some aren't even average values, but the very best someone got under conditions made to produce the best possible result (ideal test signal, ideal delays...). So it's important to know where they come from and not to underestimate how rare it is for someone to do better (except kids: kids have undamaged young ears, and if they're ever going to break some hearing record, that's when they'll do it, not when they're 45).
 
Mar 30, 2025 at 12:49 PM Post #39 of 74
If dither decorrelates quantisation error then by definition of no longer having quantisation error the signal must be more accurate.
Decorrelation is not the same as removing quantization noise. It prevents the quantization noise from being modulated by the music signal, but it does not eliminate the quantization error itself. Unless you apply noise shaping, dither is not a substitute for bit-depth. Dither is effective when there's a reasonable usable bit-depth, such as 16 bits, 24 bits, or something in between. The audibility and listener preference between these options are also part of the discussion.

For example, 8-bit audio with dither is not the same as 24-bit audio with dither, and 8-bit audio will have an increased noise floor due to quantization error. While dither may reduce harmonic distortion caused by quantization error, 8-bit audio will still sound less detailed and more smoothed out compared to 16-bit or 24-bit audio.
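A quick sketch of that point (Python, flat TPDF dither only, no noise shaping): the dithered error is noise-like rather than signal-correlated distortion, but its level still tracks the bit depth.

```python
import numpy as np

fs, n = 44100, 1 << 16
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs)       # 1 kHz tone at -6 dBFS

for bits in (8, 16, 24):
    q = 2.0 ** (1 - bits)                                     # step size for full scale = +-1.0
    tpdf = (np.random.rand(n) - np.random.rand(n)) * q        # +-1 LSB triangular dither
    y = np.round((x + tpdf) / q) * q
    print(f"{bits:2d}-bit + TPDF dither: noise ~{20 * np.log10(np.std(y - x)):.0f} dBFS")
```

At 8 bits the residual noise sits only around 48 dB below full scale, which lines up with the ~48 dB figure mentioned further down the thread.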
So you’re saying we can’t use noise-shaped dither for CD mastering (44.1kFs/s), despite the fact it’s been standard practice for decades?
At a 44.1 kHz sampling rate, you're limited by the Nyquist frequency of 22.05 kHz. Yes, you can apply psychoacoustic noise shaping to gently push some of the noise above 2–3 kHz, and then more steeply between 20 kHz and 22 kHz. But there’s very little headroom to work with, and some of that shaped noise still remains within the audible band. You might gain the equivalent of 1 or 2 extra bits, but that’s not enough to accommodate a significant reduction in bit depth, certainly not down to 1, 5, or even 8 bits.

A simpler and more effective approach is to use a higher sampling rate, which allows you to push quantization noise further into the inaudible range with greater flexibility and less impact on perceived audio quality.

Standard (optimal) filters are filters that have a relatively fast roll-off (starting say around 19-20kHz) with typically linear or near linear phase, thereby “optimal” because they cause no audible artefacts in the hearing band and “standard” because this was the only type of filter employed for many years and is still the only type used in professional converters. A filter that does cause audible artefacts would therefore be suboptimal, for example a slow roll-off filter starting at say 10kHz.
Most software-based upsampling filters fit this definition of relatively fast roll-off starting around 19–20 kHz, with the exception of a few designed specifically for lower-bitrate formats. These software filters often offer selectable characteristics, such as linear-phase, minimum-phase, or mixed-phase. While not all may be optimal, most are objectively superior to the filters implemented within typical DAC hardware.
 
Mar 30, 2025 at 1:08 PM Post #40 of 74
Hearing differences between 20 and 24bit contradicts the established facts in the same way as Rob Watts and his claim of a forever audible impact from -250dB noise shaping.
The instantaneous dynamic range of the ear is something like 60 or 70dB (maybe someone with a failing acoustic reflex could get over that, but it would also have caused serious damage to their hearing over the years, so that's not our answer).
We sure can hear sounds of lower intensity than some louder ones, and even within ambient noise, we can perceive quieter sound cues up to a point. But human hearing, even with its bag of tricks, still behaves in a way following the well accepted model of auditory masking, where a loud 500Hz tone might not completely mask a nearly as loud 550Hz tone, but it will mask (as in, we don't notice it's there) a quiet 550Hz tone (and the same goes for temporal masking). We encounter a fairly broad range of noises from our environment, even in subjectively silent rooms in our house. A quiet recording studio is usually said to have noise in the 20 to 30dB SPL range; a house is usually worse.
As gregorio said, in an anechoic chamber with kids, we'll sometimes get perception of sound below 0dB (not much below, but a handful is expected). On the other hand, in a normal listening setting at home, and not being a kid anymore, what are the odds of us picking up stuff at or below 0dB SPL?
Without getting into an argument about what one is supposed to hear and not hear, which many have beaten to death on multiple forums and threads, there is a difference between the smallest signal one can hear and sensitivity to the discrete levels available in the reproduced signal; these are not the same thing.

Even if the effective dynamic range for a specific genre of music is, say, 60dB, the music signal varies continuously (i.e., it is infinitely granular) within this range. If 10 bits were used to accommodate 60dB of dynamic range, then there are exactly 1024 levels available to represent it; with one of the bits being the sign bit, it is really about 512 levels in each of the positive and negative directions. The question is whether 512 levels are enough, given we are used to living in an analog world where the resolution is infinite.
 
Mar 30, 2025 at 2:28 PM Post #41 of 74
All this talk is tempting me to take 10 seconds of a nice audiophile track and PGGB it to 8-bit and 20-bit at x32 rate with noise shaping and try the FooBar ABX plugin.
I'm quite confident these two will be clearly distinguishable despite both having a noise floor below audibility.
 
Mar 30, 2025 at 3:51 PM Post #42 of 74
I just want to point out that if you properly dither to 8bit, the noise floor will be around 48dB below full scale, which is very much audible under normal circumstances, like a reasonably quiet room with music that's not brickwalled. I imagine this is one of the reasons why virtually no one uses 8bit either for playback or for recording.
 
Mar 30, 2025 at 4:00 PM Post #43 of 74
I just want to point out that if you properly dither to 8bit, the noise floor will be around 48dB below full scale, which is very much audible under normal circumstances, like a reasonably quiet room with music that's not brickwalled. I imagine this is one of the reasons why virtually no one uses 8bit either for playback or for recording.
I don't know if you read the whole thread, but when I tested Gaussian dither there was indeed quite loud hissing present with 8-bit PCM.
LNS15 has no audible hiss as far as I can discern, but I'd love to see a measurement.
The only reason I'm testing 8-bit here is that I want to amplify the effects @Rayon hears between 20, 21 and 24 bits.
If the effect is there, surely throwing away two thirds of your bits will make it more pronounced.
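For anyone who wants to roll their own comparison files, here is a minimal sketch (Python standard library plus numpy, plain flat TPDF dither rather than LNS15, a 16-bit WAV in and out, and hypothetical file names). The 16-bit container limits the target depth to 16 bits or fewer, but it amplifies the same effect being discussed:

```python
import wave
import numpy as np

def requantise_wav(src, dst, bits):
    """Requantise a 16-bit PCM WAV to `bits` of resolution (bits <= 16) with flat
    TPDF dither, keeping the result in an ordinary 16-bit container."""
    with wave.open(src, "rb") as f:
        params = f.getparams()
        assert params.sampwidth == 2, "sketch assumes 16-bit input"
        pcm = np.frombuffer(f.readframes(params.nframes), dtype=np.int16)

    x = pcm.astype(np.float64) / 32768.0                       # to +-1.0 float
    q = 2.0 ** (1 - bits)                                       # target quantisation step
    tpdf = (np.random.rand(x.size) - np.random.rand(x.size)) * q
    y = np.clip(np.round((x + tpdf) / q) * q, -1.0, 1.0 - q)    # dithered requantisation

    with wave.open(dst, "wb") as f:
        f.setparams(params)
        f.writeframes(np.round(y * 32768.0).astype(np.int16).tobytes())

# e.g. make an 8-bit and a 14-bit version of the same clip for an ABX comparison
requantise_wav("clip.wav", "clip_08bit.wav", 8)
requantise_wav("clip.wav", "clip_14bit.wav", 14)
```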
 
Mar 30, 2025 at 4:26 PM Post #44 of 74
Without getting into an argument about what one is supposed to hear and not hear, which many have beaten to death on multiple forums and threads, there is a difference between the smallest signal one can hear and sensitivity to the discrete levels available in the reproduced signal; these are not the same thing.

Even if the effective dynamic range for a specific genre of music is, say, 60dB, the music signal varies continuously (i.e., it is infinitely granular) within this range. If 10 bits were used to accommodate 60dB of dynamic range, then there are exactly 1024 levels available to represent it; with one of the bits being the sign bit, it is really about 512 levels in each of the positive and negative directions. The question is whether 512 levels are enough, given we are used to living in an analog world where the resolution is infinite.
More bits just mean lower noise (all else being equal).
Dither and noise shaping are the most obvious ways to demonstrate that subjectively, and @plumpudding2, I do encourage you and everybody who has never tried it to indeed go fool around with files at bit depths where the background noise is clearly audible. Then you can apply various noise shaping, or progressively increase the bit depth, and find out where the noise stops being noticeable. Because obviously, just because something super loud is noticed doesn't mean the same thing at a very quiet level will also be noticed. The entire concept of a hearing threshold exists because our sensitivity to various things is always limited somewhere.
Spoiler: things start sounding the same well before 20bit. At a purely digital level, the question of what we do with the bits below 20 is irrelevant to a human ear and a modern playback system. That has been answered for a long time, and nobody here is exploring new territory. Nothing new in testing or research has given any hint that we might need to reconsider. Even the hi-res industry has stopped trying to push that idea and now only timidly pushes the notion of ultrasounds having some unconscious impact. Bit depth is done, many times over.
Now, because a DAC isn't a perfect mathematical tool, and because this thread talks about R2R, there will be issues: non-linearity, correlated noise and whatever else I can't even think of but might exist anyway. That's why instead of just going "nuh uh!", I explained all I did. It doesn't change the conclusion about hearing that low, but there might be louder impacts, maybe directly related, maybe not? I don't know. And of course, it could still also just be a brain trick that has nothing to do with sound. Considering all the possibilities means considering that one too. That's how real diagnosis works.


As for digital/analog, I think it's a false debate. Vinyl has the amplitude accuracy and noise floor of a bad digital solution, but it's analog (continuous stuff). It definitely does not have infinite resolution and, strictly speaking, nothing has. Certainly not our ears, which are made of discrete elements that either trigger neurons or don't, and neurons act in a binary way: the action potential is reached, or it isn't.
That this somehow results in us feeling a fluid, realistic experience, to the point that we constantly convince ourselves it's objective reality, is the impressive and most mysterious part of it all.
 
Mar 30, 2025 at 4:56 PM Post #45 of 74
To be able to control what filters to use etc. It's amazing to be able to try out different things and have full control of the chain. May + HQP has taught me a lot.

What about low signal accuracy:

Fig.13 HoloAudio May, waveform of undithered 1kHz sinewave at –90.31dBFS, 16-bit data (left channel blue, right red).

Fig.14 HoloAudio May, waveform of undithered 1kHz sinewave at –90.31dBFS, 24-bit data (left channel blue, right red).

Even if we didn't go to -90dBFS, this still applies. The more levels we have available, the more accurately we can represent the wave. Note that there won't be any further reconstruction so these bits represent the final samples as is.

The problem with sound science forum is that the attitude there is arrogant and passive aggressive. No-one here has any "agenda", but we are merely trying to understand what we hear and go further. My experience with sound science forum has been very negative as the general attitude isn't helpful. The ending of your comment signals the very attitude I'm talking about. You are not trying to help, but you are attacking.

Your discussion about bit depth also isn't starting with a good grounding when you say things like:

"So, what does 20bit or 24bit get you except ever quieter levels of noise? 24bit is useful when recording because it allows a huge amount of headroom but for playback there is literally nothing (other than noise) to be gained.".

I have just stated earlier that I can hear a clear difference between the two. It's trivial to test those. There are also others saying the same thing. IMO it's arrogant to just sweep that under the rug and just talk about how in theory it's not audible. Every theory has its assumptions, scope and limits. To me the right scientific attitude would be to either study why these people hear these things or else just leave it for others who are interested. I bet that if we made all the assumptions explicit, we would find out that we have been comparing apples and oranges. As an example: an R2R dac converts the PCM directly into analog, while a delta-sigma dac will further process it into 1 bit. We can't directly compare the situation to official or unofficial studies where a delta-sigma dac has been used.

Yes, it's not possible to study and prove everything that people claim to hear, but assuming that everyone else is delusional, stupid or selling something isn't constructive. We all know the argument about a fly next to a jet plane, yet we can still hear a difference between, for example, 20 and 24 bits in these specific circumstances. I started this thread because I couldn't explain that with my knowledge of our current theories. People discussing in these threads are well aware of the brain's ability to imagine things. We are not splitting hairs, but talking about clear differences.
You're wasting your time trying to have a conversation with these "Science" types. They are true Zealots.
They forget that Real Science pays deference to observation, not the other way around.
To them theories are Sacrosanct.
Every thread they comment on, turns into a clusterf**k.

Differences between a 24 bit file and a 16 bit file, especially 24/96 and up, are so easy to spot that even a 2 year old can hear them.
It's like going from 1080p to 4K. Clarity, dynamics, instrument separation, all get an audible boost. I would describe it as a general lack of fuzz/noise.
 
