castleofargh
Sound Science Forum Moderator
Joined: Jul 2, 2011 · Posts: 10,384 · Likes: 5,970
Quote:
"Right (IM here being two-tone stimulus, I take it).
This is an interesting avenue of study. The filters in S-D DACs are in general built down to the smallest silicon area possible, which means adopting a half-band filter for the first upsampling stage (1x -> 2x). Half-band filters are a compromise to save on MACs: half the coefficients are zero. The downside is that the frequency response violates Nyquist, because instead of the Nyquist frequency being firmly in the stop band, it's in the transition band and only single-digit dB down (I forget if that digit is 3 or 6).
Summing up so far, then, we look to be in broad agreement that today's measurements aren't telling the whole story about a DAC's performance, as we have the 'Sabre brightness' not showing up in any measurements so far. To the 'brightness' issue I would add the biggest thing lacking for me in an ES9023, and that's subjective dynamics. I also haven't much clue what measurement is needed to quantify subjective dynamics, but my first stab at it would be noise modulation."

we're all similar people presented with the same options. we feel something and get some idea as to why. then from time to time, measurements seem to conflict with those impressions and ideas. and that's where we're not so similar anymore:
-when I'm presented with objective data contradicting my ideas and memories, if I can test my idea with some control, I'll do it and see what happens. when I can't, I consider the conflict too big a problem to treat my feelings and ideas as conclusive. and if I absolutely have to pick a side, I'll side with the objective data (but that's really something I'd rather avoid).
-based on the last few posts, it's pretty obvious that not only do you take your feelings and ideas very seriously, but when they conflict with measurements, you decide to suspect the measurements of being flawed or incomplete instead of second-guessing yourself and the quality of your experiences.
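side note on the half-band filter point in the quote at the top: for a textbook half-band FIR the digit is 6, since the symmetry that makes half the taps zero also forces the amplitude to be exactly 0.5 (about -6.02 dB) at a quarter of the (upsampled) sample rate, i.e. at the old Nyquist frequency. a quick numpy sketch; the tap count and Kaiser window here are my own arbitrary picks, not any real DAC's filter:

```python
import numpy as np

def halfband_fir(num_taps=31, beta=8.0):
    # Windowed-sinc half-band lowpass: cutoff at a quarter of the
    # (upsampled) rate. Tap count and Kaiser beta are arbitrary
    # illustration choices, not any real DAC's filter.
    m = np.arange(num_taps) - (num_taps - 1) // 2
    h = 0.5 * np.sinc(m / 2)                  # ideal half-band impulse response
    h[(m % 2 == 0) & (m != 0)] = 0.0          # force the structural zeros exactly
    return h * np.kaiser(num_taps, beta)

def gain_db(h, omega):
    # Direct DTFT evaluation at a single normalized frequency (rad/sample).
    n = np.arange(len(h))
    return 20 * np.log10(np.abs(np.sum(h * np.exp(-1j * omega * n))))

h = halfband_fir()
n = np.arange(len(h)) - (len(h) - 1) // 2
print(np.count_nonzero(h[(n % 2 == 0) & (n != 0)]))  # -> 0 (half the taps are zero)
print(round(gain_db(h, np.pi / 2), 2))               # -> -6.02 (dB at the old Nyquist)
```

the zero taps are what save the MACs; the -6 dB at the old Nyquist is the price, and it's baked into the half-band symmetry rather than being a design slip.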
you bring up some fairly reasonable questions about measurements. we obviously don't get enough specs by default, and with manufacturers' natural tendency to "forget" the variables showing bad specs for their gear, apparently good specs are even less of a proof that we'll get transparency. but focusing on that alone is pure fallacy. we obviously also have to deal with our testing conditions and all the possible issues we can just call "human error": sighted anecdotal experiences, the accuracy of our memories, the quality of interpretation, jumping to conclusions, etc. all very real, very relevant concerns. trying to find which set of measurements effectively reflects what we feel should obviously come after making sure that what we feel was induced by sound in the first place. something the average audiophile is not going to do, because of ignorance, laziness, or the usual idea that preconceptions, placebo, logical fallacies, etc, all somehow belong in the box labelled "it only happens to others".
I've followed discussions about the sound of delta sigma vs R2R, the specific sound of some chipsets, and related stuff. I'm personally very curious about this and I've tried to test my fair share of gear and conditions, but right now I have no confidence claiming that anything is real or more than circumstantial. I've very clearly failed to pass blind tests using DACs with different chipsets. I can blame my low listening skills and my ears not growing younger, but whatever the reason, those DACs were still good enough to fool me.
I also have been able, on rare occasions, to identify some DACs despite them having the same chipset.
the usual idea is that R2R will have worse linearity and more aliasing or treble roll-off, while delta sigma has more noise. but even such generalities based on the designs can be contested if we go pick the right DACs: some R2R gear has impressive linearity, and some delta sigma DACs have impressive SNR.
so all in all, I'm confident that at a statistical level we can find patterns and correlations, but who has gone through enough gear in a rigorous enough way to call their results statistically significant? I sure didn't.
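and since noise modulation came up in the quote as a candidate measurement, here's a toy illustration of the effect itself: with plain undithered quantization, the error power tracks the signal level at very low amplitudes, while TPDF dither pins it near a constant 0.25 LSB² regardless of level. purely illustrative numbers, nothing measured from any actual DAC:

```python
import numpy as np

rng = np.random.default_rng(0)

def error_power(amplitude_lsb, dither=False, n=200_000):
    # Quantize a sine of the given amplitude (in LSBs, step = 1.0) and
    # return the power of the total error (quantization + dither noise).
    # Frequency and sample count are arbitrary illustration choices.
    t = np.arange(n)
    x = amplitude_lsb * np.sin(2 * np.pi * 997 / 44100 * t)
    d = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n) if dither else 0.0
    e = np.round(x + d) - x           # non-subtractive TPDF dither
    return np.mean(e ** 2)

levels = [0.3, 0.6, 1.2, 2.4]         # very low-level signals, in LSBs
plain = [error_power(a) for a in levels]
tpdf = [error_power(a, dither=True) for a in levels]
# Undithered: error power varies strongly with level (noise modulation).
# TPDF-dithered: error power stays near a constant 0.25 LSB^2.
print(max(plain) / min(plain) > 1.5, max(tpdf) / min(tpdf) < 1.1)  # -> True True
```

a real measurement would of course look at the analog output of a DAC, not a numerical quantizer, but the quantity being chased is the same: does the noise floor move with the signal or not.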