Originally Posted by jcx
so I do call "bit perfect digital filter" hype or puffery - perhaps strictly true description but of no practical relevance
In other words, you don't like it so it must be useless? Ok then.
it's not a matter of like or dislike
it's asking what "bit perfect digital filter" means to the user – is there any audibly meaningful, detectable difference in the V out of the DAC, any audibly detectable significance when listening to commercially recorded music?
there's nothing sacred about the 23rd, 24th LSBs in a music recording – put 100 "24 bit" studio ADCs on the same mic feed and you will never get 100 matching 24 bit PCM files – you won't even get 2
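to put rough numbers on that: even a lone 1 kOhm resistor's thermal noise over the audio band exceeds one 24-bit LSB at typical line levels. a back-of-envelope sketch in Python (the 2 Vrms full-scale and 1 kOhm source resistance are my assumed figures, not anyone's measured DAC):

```python
import math

# Back-of-envelope check: compare one 24-bit LSB against the thermal
# noise of a very quiet analog stage.
V_RMS_FULL_SCALE = 2.0                        # assumed 2 Vrms full-scale sine
v_pp = 2 * math.sqrt(2) * V_RMS_FULL_SCALE    # ~5.66 V peak-to-peak
lsb_24 = v_pp / 2**24                         # one 24-bit step, ~0.34 uV

# Johnson-Nyquist noise of a 1 kOhm source resistance over a 20 kHz band:
k_B, T, R, bw = 1.380649e-23, 300.0, 1e3, 20e3
v_noise = math.sqrt(4 * k_B * T * R * bw)     # ~0.57 uV RMS

print(f"24-bit LSB: {lsb_24*1e9:.0f} nV, 1k resistor noise: {v_noise*1e9:.0f} nV")
```

the 24th bit is already below the noise of a single quiet resistor, before the mic, preamp, or ADC front end even enter the picture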
a "bit perfect digital filter" is pointless in practice – engineering analysis that includes the analog noise floor of current technology already shows that
even more so once human hearing limits and commercial music recording and processing practice are included
anyone want to set up, or show up for, a David Clark style listening challenge – "bit perfect digital filter" vs a "conventional" FIR filter with coefficients generated from Mike's "bit perfect" frequency response template fed to remez() (now firpm()) in Matlab's Signal Processing Toolbox?
http://tom-morrow-land.com/tests/ampchall/
http://www.mathworks.com/help/signal/ref/firpm.html
(I'd be willing to spot you, say, 0.01 dB of passband response ripple)
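for anyone curious what the "conventional" side of that challenge looks like, here's a sketch using scipy.signal.remez, which implements the same Parks-McClellan equiripple algorithm as Matlab's firpm(); the band edges and tap count below are placeholders of my choosing, not a real DAC filter spec:

```python
import numpy as np
from scipy.signal import remez, freqz

# Placeholder lowpass spec (normalized to fs = 1): passband 0-0.2,
# stopband 0.28-0.5, 61 taps.  A real reconstruction filter would use
# the actual template's band edges and a weighting to taste.
taps = remez(61, [0.0, 0.2, 0.28, 0.5], [1.0, 0.0], fs=1.0)

# Measure passband flatness on a dense frequency grid.
w, h = freqz(taps, worN=8192, fs=1.0)
passband = np.abs(h[w <= 0.2])
ripple_db = 20 * np.log10(passband.max() / passband.min())
print(f"passband ripple: {ripple_db:.5f} dB")
```

even this modest 61-tap design lands comfortably under the 0.01 dB handicap above – equiripple designs buy flatness very cheaply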
"bit perfect digital filter" is as useless and irrelevant to listening as a 10 ppb THD amplifier spec (10 ppb is -160 dB)
Schiit makes a big deal of not playing the spec game in one case – why give themselves a pass in the other?
and as other recent posters mentioned, with a 20 bit DAC you should be more concerned with how they get those 20 bits from the 24 bit input stream coming from the source – the output of that "bit perfect digital filter"
with the mismatch between high-rez audio format wordlength and the AD5791's 20 bit wordlength, dither really is required on technical grounds – adding noise that shows up in the 19th and 20th bits – how is that "bit perfect"?
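a minimal sketch of what that 24-to-20 bit requantization with TPDF dither looks like on integer samples (my toy framing, not Schiit's actual implementation):

```python
import random

def requantize_24_to_20(sample, dither=True):
    """Reduce a 24-bit integer sample to a 20-bit grid (result still
    expressed in 24-bit counts).  TPDF dither = sum of two uniform
    randoms spanning roughly +/- one 20-bit LSB; it decorrelates the
    truncation error from the signal at the cost of a noise floor
    that lands squarely in the 19th/20th bits."""
    step = 1 << 4                  # one 20-bit LSB = 16 counts of 24-bit
    if dither:
        sample += (random.randint(0, step - 1)
                   + random.randint(0, step - 1) - (step - 1))
    return ((sample + step // 2) >> 4) << 4   # round to nearest 20-bit step
```

without the dither you get signal-correlated truncation distortion; with it you get benign noise – either way the last 4 bits of the "bit perfect" stream are gone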