CHORD ELECTRONICS DAVE
Apr 14, 2022 at 3:56 AM Post #19,756 of 26,005
Hi all, after a bit of advice on an end-game set of headphones that match well with the DAVE.

For my portable setup I have HiFiMan Anandas, which I really like, and some new Focal Clear MG Pro which I haven't properly run in yet.

Enjoying the different experience each headphone brings, but curious as to what partners well with a DAVE.

Thx

Mat
 
Apr 14, 2022 at 7:16 AM Post #19,757 of 26,005
Meze Elite are the best I've found, for my tastes. They add a bit of warmth, but not overly so.
 
Apr 14, 2022 at 9:57 AM Post #19,758 of 26,005
Your first question was:

"1) You state "Then this signal is passed through the module under test, simulated, and results captured". I'm slightly confused here. What is the "module under test" exactly, and why is simulation necessary?"

Digital design starts off by defining an overall functional spec, then breaking the design down into various modules. Each module will have its own spec; you then write the HDL (hardware description language - I use Verilog) code (in the good old days it would be done with macros and gates schematically) and then you have to test that code. You could jump straight to an FPGA and test it in real life, but that's a dangerous and reckless way to proceed. So you test it by creating a simulation, where the module under test is fed some input data, and the output of the module is captured both on a visual display and as data written out to a file. The tests are there to verify that it works as intended, and you run some what-if scenarios to catch unexpected behaviour. The more you test, the more likely it is that you won't have bugs.
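The simulate-and-capture flow described above can be sketched in plain Python rather than Verilog; the `moving_average` filter here is a made-up stand-in for a real module under test, not anything from Chord's designs:

```python
# Toy "module under test": a 3-tap moving-average filter,
# standing in for a real HDL module being simulated.
def moving_average(samples):
    out = []
    prev2, prev1 = 0, 0
    for s in samples:
        out.append((s + prev1 + prev2) / 3)
        prev2, prev1 = prev1, s
    return out

def run_testbench(module, stimulus, reference):
    """Feed stimulus through the module, capture the output,
    and check it against the expected reference response."""
    captured = module(stimulus)
    errors = [abs(a - b) for a, b in zip(captured, reference)]
    return captured, max(errors)

# Impulse test: the captured response should equal the filter taps.
stimulus = [1, 0, 0, 0, 0]
reference = [1/3, 1/3, 1/3, 0, 0]
captured, worst_error = run_testbench(moving_average, stimulus, reference)
print(worst_error)  # worst-case deviation from spec
```

A real Verilog testbench does the same thing: drive stimulus into the module, dump the outputs to a file, and compare against a golden reference.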

But the really interesting thing about simulation is that you can do really powerful measurements - and these measurements are 100% accurate given the input data - and you can explore the audible performance using these real measurements. That's how I can observe -301 dB performance (and stuff below that as well). Once your suite of tests is complete, you can listen to the various modules and then see whether a particular distortion makes a difference to the sound. And this is the really strange thing - ultra-small effects are audible. When designing Dave I started with my usual 200 dB noise shapers, and progressively improved the performance of the noise shaper, noticing that every improvement gave an improvement in depth perception - I ended up with 350 dB noise shaping (the best I can do currently), and to test these noise shapers I used the -301 dB test. Now every module in the digital audio path has to pass the -301 dB test - that is, perfect amplitude accuracy and perfect phase accuracy too - if you want it to be transparent. And I do this now as a matter of course - it's just my standard test. And I re-evaluate it with listening tests - that's why I am confident that this is a good metric to define transparency in the digital domain, as I have repeated these listening tests on many different occasions and still hear the same thing.
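How a figure at the -300 dB level can fall out of a simulation can be shown with a toy measurement: compare the captured output against the bit-exact ideal and express the RMS error relative to full scale in dB. This is an illustrative sketch, not Rob's actual test:

```python
import math

def error_level_db(ideal, actual, full_scale=1.0):
    """RMS level of (actual - ideal) relative to full scale, in dB."""
    rms = math.sqrt(sum((a - i) ** 2 for a, i in zip(actual, ideal)) / len(ideal))
    if rms == 0:
        return float("-inf")  # bit-exact: no measurable error at all
    return 20 * math.log10(rms / full_scale)

# A full-scale sine, and a copy with a tiny 1e-15 relative amplitude error.
n = 1000
ideal = [math.sin(2 * math.pi * 5 * k / n) for k in range(n)]
actual = [s * (1 + 1e-15) for s in ideal]
print(error_level_db(ideal, actual))  # roughly -303 dB
```

Because the simulation's input data is known exactly, there is no measurement noise floor in the usual sense - only the arithmetic precision of the simulator limits what you can resolve.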

Your second question was:

"2) I wasn't suggesting that you should necessarily publish your results in AES or elsewhere, given intellectual property concerns, the time involved, etc. I was pushing back on the notion that people weren't being "rational" in questioning your findings when they have no actual documentation on exactly what those findings are and how you arrived at them. You also state "the tests I do are extremely carefully done and objective", but we don't really have insight into your exact methods or testing setup. It's pretty easy to convince oneself that you're being completely objective (I've done it myself), but you're developing commercial products, not working for a research institution, so without external verification of results, people might reasonably have doubts about that, IMO."

Apologies if I gave the impression that people were not being rational in questioning my findings. It is entirely rational to be sceptical - and even rational to be sceptical about one's scepticism. My observations are just that - things I have evaluated and concluded to be important - and should be treated as just my opinion. I always treat my listening tests as tentative and subject to re-evaluation, no matter how carefully you approach things. In particular, one has to be very careful about whether an SQ change is actually good or bad - it's extremely easy to hear an increase in brightness as better transparency when in fact it's actually worse, due to more noise-floor modulation for example. That said, when you do a listening test and it clearly sounds better in a defined way (like better depth - there is no question of interpretation here), and do that test many times over on many different occasions and it still comes out the same, then it's sensible to conclude that something real is happening - even if it is due to something that appears ultra-small or insignificant.

My annoyance comes when people instantly dismiss listening tests and state that it's impossible that something can make a difference without doing any kind of listening tests themselves. Science is about discovering and understanding new things - that means being both highly sceptical and very open minded at the same time as reality is much more complex than our very limited understanding.

I did some tests myself and eventually came to the conclusion that the earth really is flat, and that on the dark side of the moon there is a hotel where all the guests throw their car keys into a bowl - not sure why, though.
 
Apr 14, 2022 at 10:28 AM Post #19,759 of 26,005
Science is about discovering and understanding new things - that means being both highly sceptical and very open minded at the same time as reality is much more complex than our very limited understanding.

From my own experience, and having the benefit of input from people who know far more than I do technically, this sums up where we are in both the evolution of our own personal systems and the design of new digital products. Unfortunately, some people seemingly take up the sceptical option without necessarily accepting the open-minded aspect.
 
Apr 14, 2022 at 9:09 PM Post #19,760 of 26,005
Following up on Rob's really cool observations on the inferiority of USB vs. optical, which I had previously cast doubt on, I realize that instinctively I was always preferring the sound of Red Book CDs (either on the Blu or on my ultra-über CEC TL0 Mk2 transport) to what I hear from streaming via Qobuz/Roon. I wonder if there's something to his astute observation now. I can't explain it rationally. USB should sound the same as SPDIF. Except it doesn't. I still enjoy playing back my CDs through a high-quality CD transport. Why? Is it because, subliminally, my brain is telling me something is awry with USB streaming?

I think the WTA filter has scope to do more processing on lower-res files than on higher-res ones, and thus the magic of that filter is more pronounced on Red Book material than on, say, 192 kHz material.
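The arithmetic behind that intuition can be sketched as follows; the 705.6/768 kHz internal rates below are assumed purely for illustration, not published Chord figures:

```python
# Interpolation factor needed to reach a DAC's internal rate: the
# 44.1 kHz family targets 705.6 kHz, the 48 kHz family targets
# 768 kHz (notional 16x-of-base rates, assumed for illustration).
def interp_factor(source_rate):
    target = 705_600 if source_rate % 44_100 == 0 else 768_000
    return target // source_rate

for rate in (44_100, 96_000, 192_000):
    print(rate, "->", interp_factor(rate), "x")
```

Under these assumptions the filter synthesizes 16 new samples per input sample for Red Book material but only 4 for 192 kHz material, so the interpolation filter simply has more of the waveform to reconstruct at lower rates.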
 
Apr 14, 2022 at 11:30 PM Post #19,761 of 26,005
Thanks again, Rob. Very good info.

I guess my next question would be: you've done the simulations, and everything is operating with vanishingly low distortion and noise floor modulation. Presumably, you then burn this code onto an FPGA, and listen to the result. There must be other sources of noise and distortion in the actual hardware that overlay what shows up in the simulation, correct? Or am I missing something?
 
Apr 15, 2022 at 1:30 AM Post #19,762 of 26,005
Correct - if possible (which is the vast majority of the time) I will embed the listening options into the FPGA, so you can use a bit switch to listen to the options. In the past, different place-and-route runs (different FPGA configurations) would sound very different. Today they sound very similar, but when doing listening tests you need to remove any possibility of hearing other problems - you only want to hear the variable you are looking at.

The simulation/listening approach is fantastic for steady state stimulus - you can measure and listen to extremely small errors. But it's not good at defining transient errors; for that it's a case of listening to optimize performance and guide your understanding of what is going on.

The analogue hardware is another problem. You can use SPICE simulation (which I do extensively) to measure very small errors, but unlike Verilog simulation, it doesn't perfectly describe the actual performance. And making significant changes often means new layouts, and that takes months. And then there are the errors that you know are audible but can't measure. These delays are the reason why new designs take years to do - and there is always something unexpected or odd that happens. And it's why with new analogue designs I spend a lot of time on SPICE modelling - it's easier to see and correct errors at this stage than with prototypes.
 
Apr 15, 2022 at 1:59 PM Post #19,765 of 26,005
And it's why with new analogue designs I spend a lot of time on SPICE modelling - it's easier to see and correct errors at this stage than with prototypes.

SPICE is circuit design and behaviour simulation software. But does it take extremely small-signal non-linearities into account well enough? Even PCB material choice can change a component's characteristics.

I assume you use it as a 'rough' start, followed by single-component changes and listening tests too?
 
Apr 16, 2022 at 3:27 AM Post #19,766 of 26,005
Sure, you can model non-linearities - I often look at distortion harmonics well below -200 dB. And the models are pretty good for, say, things like the discrete output stage and the second-order noise shaper (the amp in the DACs). But they are models, not exact representations, and for parasitics you have to estimate the effects and model them yourself.
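As a toy illustration of resolving harmonics at these levels in simulation, a single-bin DFT on a coherently sampled sine recovers a deliberately injected cubic non-linearity well below -200 dB. This is a Python sketch with a made-up coefficient, not a SPICE model:

```python
import math

def bin_level_db(signal, harmonic, periods, n):
    """Amplitude of one DFT bin (single-bin DFT), in dB re full scale."""
    k = harmonic * periods
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    amp = 2 * math.sqrt(re * re + im * im) / n
    return 20 * math.log10(amp)

# Unit sine through a weakly non-linear stage y = x + a3*x^3.
# sin^3 folds into a 3rd harmonic of amplitude a3/4 (trig identity),
# so a3 = 1e-11 should show up near 20*log10(2.5e-12) = -232 dB.
n, periods, a3 = 4096, 7, 1e-11
x = [math.sin(2 * math.pi * periods * i / n) for i in range(n)]
y = [s + a3 * s ** 3 for s in x]
print(bin_level_db(y, 3, periods, n))  # ≈ -232 dB
```

Coherent sampling (a whole number of periods in the record) is what lets such a tiny component be picked out: there is no spectral leakage from the fundamental to bury it.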

When doing RF analysis, you need to add PCB track impedances and the internal parasitics of passive components (inductance, series resistance and internal capacitances - the parasitic/internal LRC); these can be estimated quite accurately. How close you get to reality depends upon how much work you do, and how accurate the estimates of these parasitics are.
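A minimal sketch of the parasitic/internal LRC idea, using guessed (not measured) values for a 100 nF capacitor: below its self-resonance it looks like a capacitor, above it the package inductance takes over.

```python
import math

def cap_impedance(freq, c, esr, esl):
    """|Z| of a real capacitor modelled as a series R-L-C network."""
    w = 2 * math.pi * freq
    reactance = w * esl - 1 / (w * c)
    return math.sqrt(esr ** 2 + reactance ** 2)

# 100 nF with 20 mOhm ESR and 1 nH ESL (illustrative guesses):
# at self-resonance the impedance collapses to just the ESR.
c, esr, esl = 100e-9, 0.02, 1e-9
f_res = 1 / (2 * math.pi * math.sqrt(esl * c))  # ~15.9 MHz
for f in (1e6, f_res, 100e6):
    print(f"{f / 1e6:8.1f} MHz  |Z| = {cap_impedance(f, c, esr, esl):.4f} ohm")
```

This is why a decoupling capacitor that looks ideal at audio frequencies can behave completely differently at RF, and why those parasitics have to be added to the SPICE model by hand.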

SPICE is also really good at improving your understanding of a particular distortion or error, and quantifying it from first principles, particularly when you add all the parasitics and internal LRCs. I remember in the early 80s designing high-performance SOTA thick-film hybrid op-amps (0.5 GHz gain-bandwidth, unity-stable audio op-amps) which took months to design with pen, paper and calculator, and months to tweak in hardware. Today with SPICE it would take a day and be much more accurate, with a good chance it would work first time in hardware.
 
Apr 16, 2022 at 5:00 AM Post #19,767 of 26,005

I haven't worked with the program, only seen it on occasion. Do component brands supply LCR values to load into e.g. SPICE, or does it all need to be estimated? A side-effect capacitance for an inductor can work in your favour too.
Or laying PCB tracks in ways to create impedances.
 
Apr 16, 2022 at 6:55 AM Post #19,769 of 26,005
Same with DAVE - maybe a bit, but not a major difference.
 
Apr 16, 2022 at 7:38 AM Post #19,770 of 26,005
Just got the DAVE - how long is the break-in time?
I came from the TT2 and find no significant difference in the sound.
Congratulations on your newly acquired DAVE. Coming over from the TT2 will require a bit of time: your ears have to adjust, as you are still remembering the TT2's 'sound signature'. You will know you have got there when you begin to hear everything in between and behind the music that you previously did not notice; the soundstage will also become much 'airier'. The differences will be very mild at first, and then you will begin to actually listen for them... and hear them. It will take about 20 hours of listening to 'break you out' of the TT2, but the wait, and the listening, is worth the while. May I ask what headphones you are using?
 