Mr. Watts,
Firstly, thank you for taking the time to answer my questions. Secondly, as a general statement to various other responders to my post: I'm not suggesting that measurements alone will tell you much about how you will perceive soundstage or any other attribute of a DAC's performance. In fact, it seems to me that Mr. Watts is the one advancing that argument when he says "any distortion, however small, has audible consequences on depth perception". Based on a lot of personal experience with 2-channel systems over the years, rooms and speakers generally have more to do with soundstage dimensions and layering than DACs do. Rooms aren't a factor in headphone listening, obviously (although individual ear topology is), but soundstaging clearly varies rather dramatically between different headphones - again, more so than between the different DACs I've owned or auditioned over the years.
Regarding the specific answers to my questions:
1) You state "Then this signal is passed through the module under test, simulated, and results captured". I'm slightly confused here. What is the "module under test" exactly, and why is simulation necessary?
2) I wasn't suggesting that you should necessarily publish your results in AES or elsewhere, given intellectual property concerns, the time involved, etc. I was pushing back on the notion that people weren't being "rational" in questioning your findings when they have no actual documentation on exactly what those findings are and how you arrived at them. You also state "the tests I do are extremely carefully done and objective", but we don't really have insight into your exact methods or testing setup. It's pretty easy to convince yourself that you're being completely objective (I've done it myself), but you're developing commercial products, not working for a research institution, so without external verification of results, people might reasonably have doubts about that, IMO.
Thanks for clarifying the -301 dB audibility claim. What you're saying seems at least plausible now, although it's still not clear to me how much it may actually affect the perceived performance of a DAC. The DAVE is clearly a very fine and unique product, but there are a lot of high-end DACs now on the market which have addressed noise floor modulation and reduced distortion artifacts well below audibility. The differences in sound between them may or may not hinge on the performance metrics you are choosing to highlight. It's an interesting academic question, but it may remain academic given that you're unlikely to publish your findings anytime soon, and almost everyone selects equipment based on personal auditions coupled with ergonomic and pricing factors anyway.
Your first question was:
"1) You state "Then this signal is passed through the module under test, simulated, and results captured". I'm slightly confused here. What is the "module under test" exactly, and why is simulation necessary?"
Digital design starts by defining an overall functional spec, then breaking the design down into various modules. Each module has its own spec; you then write the HDL (hardware description language - I use Verilog) code (in the good old days it would be done schematically with macros and gates) and then you have to test that code. You could jump straight to an FPGA and test it in real life, but that's a dangerous and reckless way to proceed. So you test it by creating a simulation, where the module under test is fed some input data, and the output of the module is captured both on a visual display and as data written to a file. The tests verify that the module works as intended, and you run some what-if scenarios to catch unexpected behaviour. The more you test, the less likely you are to be left with bugs.
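To make the pattern concrete for readers unfamiliar with HDL verification, here is a minimal sketch of the testbench idea in Python rather than Verilog. Everything in it is illustrative: the "module", the golden reference model, the stimulus values, and the capture file name are all hypothetical stand-ins for what a real Verilog simulation would do.

```python
# Illustrative testbench pattern (the real flow uses Verilog simulation;
# the module, stimulus, and file name here are purely hypothetical).

def module_under_test(sample: int) -> int:
    """Stand-in for the HDL module under test: a 1-bit right shift."""
    return sample >> 1

def golden_model(sample: int) -> int:
    """Independent reference behaviour that the module's spec demands."""
    return sample // 2

def run_testbench(stimulus, log_path="capture.txt") -> int:
    """Drive the module with input data, capture outputs to a file,
    and count mismatches against the golden model."""
    failures = 0
    with open(log_path, "w") as log:
        for sample in stimulus:
            got = module_under_test(sample)
            if got != golden_model(sample):
                failures += 1
            log.write(f"{sample} -> {got}\n")   # captured results
    return failures

# Normal cases plus a few what-if corner cases (zero, large values).
stimulus = [0, 1, 2, 1023, 2**15 - 1]
print(run_testbench(stimulus))  # → 0
```

The division of labour mirrors the text: stimulus in, outputs captured to a file, and a check that behaviour matches the spec, with extra corner cases added to flush out unexpected behaviour.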
But the really interesting thing about simulation is that you can do really powerful measurements - and these measurements are 100% accurate given the input data - and you can explore the audible performance using these real measurements. That's how I can observe -301 dB performance (and stuff below that as well). Once your suite of tests is complete, you can listen to the various modules and then see if a particular distortion makes a difference to the sound. And this is the really strange thing - ultra-small effects are audible.

When designing Dave I started with my usual 200 dB noise shapers, and progressively improved the performance of the noise shaper, and noticed that every improvement gave an improvement in depth perception - I ended up with 350 dB noise shaping (the best I can do currently), and to test these noise shapers I used the -301 dB test. Now every module in the digital audio path has to pass the -301 dB test - that is, perfect amplitude accuracy and perfect phase accuracy too - if you want it to be transparent. I do this now as a matter of course - it's just my standard test. And I re-evaluate it with listening tests - that's why I am confident that this is a good metric to define transparency in the digital domain, as I have repeated these listening tests on many different occasions and still hear the same thing.
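To illustrate the kind of measurement being described (not Rob's actual method), here is a sketch of how an error level in dB can be computed from simulation data: the residual between a processed signal and an exact reference, expressed relative to the reference level. The signal, the injected offset, and the function names are all assumptions for illustration; note that standard double precision bottoms out around -300 dB, so observing -301 dB in practice requires higher-precision simulation arithmetic than plain Python floats.

```python
import math

def rms(xs):
    """Root-mean-square level of a sequence of samples."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def error_db(reference, processed):
    """Level of the residual (processed - reference) relative to the
    reference, in dB. In simulation both signals are exact bit patterns,
    so the measurement itself adds no analogue noise floor."""
    residual = [p - r for p, r in zip(processed, reference)]
    return 20.0 * math.log10(rms(residual) / rms(reference))

# Hypothetical example: a full-scale sine plus a tiny constant offset.
n = 1024
reference = [math.sin(2 * math.pi * 3 * i / n) for i in range(n)]
processed = [x + 1e-6 for x in reference]   # offset at roughly -117 dB

print(round(error_db(reference, processed), 1))  # → -117.0
```

The same arithmetic applied to smaller residuals is what statements like "-301 dB" quantify: the error is some 10^15 times smaller than the signal itself.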
Your second question was:
"2) I wasn't suggesting that you should necessarily publish your results in AES or elsewhere, given intellectual property concerns, the time involved, etc. I was pushing back on the notion that people weren't being "rational" in questioning your findings when they have no actual documentation on exactly what those findings are and how you arrived at them. You also state "the tests I do are extremely carefully done and objective", but we don't really have insight into your exact methods or testing setup. It's pretty easy to convince oneself that you're being completely objective (I've done it myself), but you're developing commercial products, not working for a research institution, so without external verification of results, people might reasonably have doubts about that, IMO."
Apologies if I gave the impression that people were not being rational in questioning my findings. It is entirely rational to be sceptical - and even rational to be sceptical about one's scepticism. My observations are just that - things I have evaluated and concluded to be important, and they should be treated as just my opinion. I always treat my listening tests as tentative and subject to re-evaluation, no matter how carefully you approach things. In particular, one has to be very careful about whether a sound-quality change is actually good or bad - for example, it's extremely easy to hear an increase in brightness as better transparency when it's actually worse, due to more noise floor modulation. That said, when you do a listening test and it clearly sounds better in a defined way (like better depth - there is no question of interpretation here), and you repeat that test many times over on many different occasions and it still does the same, then it's sensible to conclude that something real is happening - even if it is due to something that appears ultra-small or insignificant.
My annoyance comes when people instantly dismiss listening tests and state that it's impossible for something to make a difference, without doing any kind of listening test themselves. Science is about discovering and understanding new things - that means being both highly sceptical and very open-minded at the same time, as reality is much more complex than our very limited understanding.