This assumes that the human limit of hearing is exactly 20kHz.
No, it does not assume that at all. Adult human hearing typically extends to significantly less than 20kHz, on average about 16kHz, although in my personal experience of testing roughly 1,800 students over a 6 year period, the vast majority of whom were 18-21, the average was around 17kHz. The most exceptional managed 19kHz (at 80dB SPL). I have seen peer-reviewed, published evidence of young adults of the same age hearing up to 24kHz, but only with single-tone tests and only at very high (potentially damaging) SPLs; in other words, far more favourable conditions than listening to music at reasonable levels. I have seen no reliable evidence that adults can hear above 20kHz with music at reasonable listening levels, while I have seen reliable evidence demonstrating the opposite. So, it’s based on evidence, NOT assumption!
Which even if we go based off that, many DAC reconstruction filters do roll off treble to some degree below that.
Some do, but only by a fraction of a dB, except for pathological (effectively faulty) filter options.
And there are examples of both affordable and very expensive DACs rolling off early.
Yes there are, but only for the last decade or so, as an audiophile marketing gimmick. With the exception of the effectively broken NOS/filterless DACs, there weren’t examples before then, because DAC chip manufacturers only provided optimal filters until it was suggested that non-optimal filters could provide marketing opportunities.
Sure, but then with the limited compute power available in a typical DAC chip, you can't keep things flat under 20kHz and actually achieve a correct Nyquist reconstruction.
That is false, on two counts! Firstly, it’s easy to achieve a correct Nyquist reconstruction that’s flat under 20kHz and has been for decades. You simply allow for a transition band above the required band (up to 20kHz) by having a higher sampling frequency/Nyquist point, which is exactly what the CD and higher standards do. We want a reconstruction that’s flat to 20kHz, so we have a Nyquist point higher than that, at 22.05kHz, to allow correct Nyquist reconstruction up to 20kHz. Secondly, DAC chips had enough “compute power available” to produce optimal filters over 25 years ago; you’re surely not claiming that today’s chips have less available compute power than chips from 2 or more decades ago?
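That transition-band arithmetic is easy to sanity-check numerically. Here is a toy sketch (Python/NumPy; the tap count and Kaiser beta are my own illustrative choices, not any particular chip’s filter): a plain windowed-sinc lowpass with its transition band placed entirely between 20kHz and the 22.05kHz Nyquist point, flat to a tiny fraction of a dB below 20kHz and fully attenuated by Nyquist.

```python
import numpy as np

fs = 44100.0
f_pass, f_stop = 20000.0, 22050.0   # flat to 20 kHz, fully attenuated by Nyquist
n_taps = 301                        # odd length -> symmetric, linear-phase FIR
fc = 0.5 * (f_pass + f_stop)        # cutoff midway through the transition band

# Kaiser-windowed sinc lowpass (beta ~12 gives well over 100 dB stopband attenuation)
n = np.arange(n_taps) - (n_taps - 1) / 2
h = 2 * fc / fs * np.sinc(2 * fc / fs * n) * np.kaiser(n_taps, 12.0)

# Inspect the magnitude response on a dense zero-padded FFT grid
H = np.fft.rfft(h, 1 << 16)
freqs = np.fft.rfftfreq(1 << 16, d=1 / fs)
passband = np.abs(H[freqs <= f_pass])
stopband = np.abs(H[freqs >= f_stop])
print("passband ripple (dB):", 20 * np.log10(passband.max() / passband.min()))
print("stopband peak   (dB):", 20 * np.log10(stopband.max()))
```

Real oversampling DACs implement this as cascaded half-band/polyphase stages at a higher rate rather than one long FIR, but the point is the same: meeting “flat to 20kHz, gone by Nyquist” is routine, not compute-limited.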
If you do not attenuate fully by the Nyquist frequency you will have aliased products remaining.
No, not in the audible band you will not! If you do not attenuate fully by Nyquist you will indeed have alias products remaining, reflecting down from Nyquist. So if you only attenuated fully by say 24kHz, then you would have alias products from the Nyquist frequency down to just above 20kHz (content at 24kHz reflects to 44.1kHz − 24kHz = 20.1kHz) and therefore the requirement of fidelity up to 20kHz is fulfilled. However, this is only the case in the ADC process; as
@danadam stated, in the DAC process there is no aliasing, there are just “images” above Nyquist.
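That reflection is trivial to demonstrate. A toy sketch (Python/NumPy, my own example, not any real converter): sampling a 24kHz tone at 44.1kHz with no anti-alias filtering at all makes it fold down to 44.1 − 24 = 20.1kHz, i.e. still just above the audible band.

```python
import numpy as np

fs = 44100
n = np.arange(1 << 14)                    # 16384 samples
x = np.sin(2 * np.pi * 24000 * n / fs)    # a 24 kHz tone, above Nyquist (22.05 kHz)

# Windowed FFT: the only peak is the alias, reflected down from Nyquist
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)   # lands within a bin or so of fs - 24000 = 20100 Hz
```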
Hence why DACs don't just all use crazy slow filters, which would be much simpler to implement in the first place.
But all (or nearly all) DACs do use a crazy slow analogue (reconstruction) filter; that’s pretty much the whole point of oversampling during the DAC process to start with. They don’t use crazy slow anti-image filters because that would be seriously sub-optimal: you would either have to impinge on the <20kHz band or suffer even more imaging (which in some cases could cause IMD).
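For anyone unsure why oversampling creates images that need filtering in the first place, here’s a toy sketch (Python/NumPy, my own illustration): zero-stuffing a 1kHz tone by 4x leaves the baseband untouched but creates copies of the tone around multiples of the original 44.1kHz rate, and it is those copies the digital, linear-phase anti-image filter removes.

```python
import numpy as np

fs = 44100
L = 4                                     # 4x oversampling
n = np.arange(4096)
x = np.sin(2 * np.pi * 1000 * n / fs)     # a 1 kHz tone

# Zero-stuffing: insert L-1 zeros between samples (the first step of oversampling)
up = np.zeros(len(x) * L)
up[::L] = x

# At the new 176.4 kHz rate, the spectrum shows the tone at 1 kHz plus images
# at 44.1 kHz ± 1 kHz and 88.2 kHz − 1 kHz, which the anti-image filter removes
spectrum = np.abs(np.fft.rfft(up * np.hanning(len(up))))
freqs = np.fft.rfftfreq(len(up), d=1 / (fs * L))
amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
print(amp(43100) / amp(1000), amp(45100) / amp(1000))  # images ~ same level as the tone
```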
And you're assuming it to be inaudible despite existing evidence contradicting that.
(See my video for instance, or have a read through this:
https://www.aes.org/tmpFiles/elib/20241226/18296.pdf )
Why is it that those trying to support audiophile claims with actual science always cite the same handful of unreliable papers (this Reiss one, the Oohashi and Kunchur papers and the one by Stuart, although this Reiss paper was funded by Stuart/Meridian so is really just a continuation), yet somehow completely miss all the papers/reliable evidence to the contrary? Cherry-picking this Reiss paper, which ironically is itself guilty of cherry-picking, is a fallacy, not supporting evidence.
If it truly did not matter, then again, DACs (and ADCs) would all just use an extremely basic and slow filter that is flat under 20kHz and be done with it.
Again, they do use an extremely basic analogue reconstruction filter in DACs and an extremely basic analogue anti-alias filter in ADCs. In addition, they then use a relatively basic linear-phase decimation filter in ADCs and a relatively basic linear-phase anti-image filter in DACs, going back to at least the early/mid 1990s.

And one more point: there are no commercial/professional ADCs with switchable filter options, because professionally we only need the one, optimal filter, never sub-optimal ones. There are some specialist DACs marketed as professional DACs, and very rarely/occasionally used by professionals, that do have switchable filters, a filter option that emulates a filterless/NOS DAC for example, but that would really be a circular supporting argument, as the reason for its inclusion is so that engineers (who have a mind to cater to wacky audiophile myths) can check what a mix would sound like on an effectively broken filterless NOS DAC!
well it's a belief question, imo it's better to have a bit of roll-off instead of "distortion-like" byproducts, so i wouldn't call these DACs "broken"
No, it’s a question of fidelity within the audible range (up to 20kHz) and therefore: is it better to have a roll-off in the audible range, or inaudible distortion above the audible range? Although in practice this is a false dichotomy, because we can have both no roll-off in the audible range and virtually no distortion or “distortion-like byproducts” above the audible range at the same time, and we have been able to achieve that for 25 years or so.
G