Lots of posts to reply to today!
Can you expand on why it will take a long time, Rob? Is it with other projects on the go, for instance?
PS: if you can't, I fully understand, Rob
Primarily other projects. I think non-engineers tend to assume that designing products is just turning a crank, and out pops a new design in minutes. Unfortunately, it's not so simple - each new design typically takes a few years, with 4 or 5 hardware prototypes. I am not happy with a design until it measures acceptably and (more importantly) meets SQ expectations. But Davina has been hit with hardware issues, and problems with the decimation filters - in particular the 104 MHz to 768 kHz decimation.
At the moment I am in Duck mode - nothing much going on on the surface, but frantic paddling under the water!
Is that simply because correct timing of the content <= 22 kHz is more important than the additional content from 22 kHz to 96 kHz that you'd get from 192 kHz PCM?
Absolutely. There is no real evidence that anything above 20 kHz is audible (not quite true, as HF components create intermodulation through the non-linearity of air - but those artifacts fall within the audible bandwidth and would be captured by the microphone anyway), but we have plenty of evidence that the ear/brain is sensitive to timing.
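As an aside, that intermodulation mechanism is easy to demonstrate numerically. This is just a toy sketch - a quadratic non-linearity standing in for air, with an arbitrary coefficient - showing two inaudible ultrasonic tones producing a difference tone squarely in the audible band:

```python
import numpy as np

fs = 192_000
t = np.arange(fs) / fs    # one second of samples

# two ultrasonic tones, both individually inaudible
ultra = np.sin(2 * np.pi * 25_000 * t) + np.sin(2 * np.pi * 26_000 * t)

# toy quadratic non-linearity (stand-in for non-linear air; 0.1 is arbitrary)
heard = ultra + 0.1 * ultra**2

spec = np.abs(np.fft.rfft(heard))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
k = np.argmin(np.abs(freqs - 1_000))   # the 26 kHz - 25 kHz difference tone
print(f"difference tone at 1 kHz: {20 * np.log10(spec[k] / spec.max()):.1f} dB re carriers")
```

The quadratic term turns the two ultrasonic tones into a 1 kHz component about 20 dB below the carriers - in-band content a microphone at the recording end would already have captured, which is the point being made above.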
Yes, but with higher sampling rates you can reconstruct sharper transients, which human ears have evolved to be remarkably sensitive to.
I have to disagree there; it's not about how fast the transient edges are, but whether those transients appear at the correct time. With conventional DACs the edges are constantly moving in error, backwards and forwards, depending upon the past and future music; and it's this uncertainty in transient timing that I am trying to reduce. We know for a fact that an ideal sinc-function filter will have zero uncertainty in the timing of transients, and my quest has been to make my filters as close as possible to that ideal - until such time as increasing the accuracy gives no further benefit. And I am sure that even at 1M taps on the M Scaler we are not there yet.
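To make the tap-count point concrete, here's a minimal numerical sketch - plain truncated Whittaker-Shannon (sinc) interpolation, nothing to do with the actual WTA coefficients - reconstructing a bandwidth-limited transient from its samples with progressively longer sinc filters:

```python
import numpy as np

fs = 44_100.0
peak = 1000.25 / fs                    # transient deliberately placed between samples

def pulse(t):
    # a band-limited (fs/2) impulse: the sharpest transient PCM can carry
    return np.sinc(fs * (t - peak))

samples = pulse(np.arange(2048) / fs)  # what the file actually stores

# reconstruct on a 16x finer grid with a truncated sinc sum of `taps` coefficients
t_fine = (1000 + np.linspace(-4, 4, 129)) / fs
for taps in (16, 64, 256, 1024):
    k = np.arange(1000 - taps // 2, 1000 + taps // 2)
    recon = np.array([np.sum(samples[k] * np.sinc(t * fs - k)) for t in t_fine])
    err = np.max(np.abs(recon - pulse(t_fine)))
    print(f"{taps:5d} taps -> worst-case reconstruction error {err:.1e}")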
As I went into above, for some reason you can still hear differences if you increase the sampling rate even into the MHz region, even though you can - in my case - hear nothing above 12 kHz; if Rob can hear 15 kHz he is doing pretty well. The conjectured reason is timing - evidently the ear can detect timing differences as small as something like 7 µs, with 3 µs considered prudent.
Thanks
Bill
Sure, the evidence is that 4-7 µs is the resolution of timing differences between the ears; this is based on probing neurons in cats within the interaural delay pathway, and in the case of humans, on asking what level of phase-shift discrimination can change the left/right placement. But I think we are much more sensitive to timing errors than this, and my evidence for that is the WTA 2 filter. This takes over from 768 kHz and filters to 256 FS - so it goes from 1.3 µs resolution to 88 ns resolution. Now this replaced a perfectly good third-order IIR-type filter with a WTA filter, and I was surprised how easy it was to hear the difference. From a timing POV the change is very small and subtle; much less than the 4-7 µs perceptual limit that the interaural delay suggests. I have also been looking at timing within noise shapers, and have heard some very small changes when timing was accurate to a few tens of ns; so my view today is that the smallest timing error is important.
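For reference, the resolutions quoted are just the sample periods at each rate (assuming FS here means 44.1 kHz, as is conventional):

```python
fs_in = 768_000          # rate the WTA 2 filter takes over from
fs_out = 256 * 44_100    # 256 FS, assuming FS = 44.1 kHz -> 11.2896 MHz

print(f"768 kHz -> one sample every {1e6 / fs_in:.2f} us")   # ~1.30 us
print(f"256 FS  -> one sample every {1e9 / fs_out:.1f} ns")  # ~88.6 ns
```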
I need to stress that the timing errors that are audible are the non-linear ones, and by this I mean when the timing constantly changes with program material - and that change may be amplitude related, or sampling and signal related. A fixed timing error is not important - so a shift of 1 µs at 5 kHz, say, is inaudible; but a 1 µs shift that is constantly changing is audible, because if that change is music related, then the timing error confuses the brain's ability to perceive audio.
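One way to see the distinction numerically - my own toy illustration, not Rob's analysis, with the tone and a 50 Hz wander picked arbitrarily: a fixed delay only rotates phase, but a delay that varies over time is phase modulation, which puts new content next to the tone:

```python
import numpy as np

fs = 192_000
t = np.arange(fs) / fs                           # one second of samples
f0 = 5_000.0                                     # 5 kHz test tone

fixed = np.sin(2 * np.pi * f0 * (t - 1e-6))      # constant 1 us delay
wobble = 1e-6 * np.sin(2 * np.pi * 50 * t)       # 1 us delay, wandering at 50 Hz
varying = np.sin(2 * np.pi * f0 * (t - wobble))  # tone with a moving timing error

freqs = np.fft.rfftfreq(len(t), 1 / fs)
side = np.argmin(np.abs(freqs - (f0 + 50)))      # look 50 Hz above the tone
for name, x in (("fixed 1 us delay", fixed), ("varying 1 us delay", varying)):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    level = 20 * np.log10(spec[side] / spec.max() + 1e-20)
    print(f"{name}: sideband at 5.05 kHz = {level:6.1f} dB re tone")
```

The fixed delay shows only window leakage far below -100 dB, while the varying delay creates a genuine sideband around -36 dB - new, modulation-correlated content that was never in the signal, which is the sense in which a changing timing error is not benign.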
I think so too, but would love to hear Rob's thoughts on exactly how. I agree with your comments on the microsecond timing nuances.
That part I cannot agree with. That cannot be so. At least not in this universe. Here's an example to illustrate. I created a picture, which I then down-sampled as follows:
Let me know (using whatever proprietary up-sampling algorithm you like) what the original image was. I have a large cash PayPal transfer awaiting the person who gets this right.
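The challenge can be made precise with a toy version - a 1-D stand-in for the image, with values chosen purely for illustration: two different originals that down-sample to identical data, so no up-sampler, proprietary or otherwise, can tell them apart:

```python
import numpy as np

n = np.arange(64)
original_a = np.cos(2 * np.pi * n / 16)                # one "image" row
original_b = original_a + 0.5 * np.sin(np.pi * n / 2)  # visibly different row

kept_a = original_a[::4]    # down-sample by keeping every 4th pixel
kept_b = original_b[::4]

print(np.allclose(kept_a, kept_b))          # True  - identical after down-sampling
print(np.allclose(original_a, original_b))  # False - yet the originals differ
```

The added component lies above the down-sampled Nyquist limit and happens to be zero at every retained pixel, so its information is simply gone - no algorithm can get it back.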
Yes, agreed; I am not in the business of re-creating new information - in a mathematical sense, Nyquist-Shannon sampling theory implies a strictly limited information content. What the WTA filter is about is actually maintaining that information content as perfectly as possible as it goes from sampled data back to the continuous waveform. What conventional filters do is seriously degrade the original information content...
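A quick numerical check of that distinction - my sketch, using FFT zero-padding as a stand-in for ideal sinc interpolation on a periodic signal, with arbitrary rates and tones: if the signal genuinely is bandwidth limited, up-sampling recovers the continuous waveform exactly - nothing invented, nothing lost:

```python
import numpy as np

fs, n, up = 8_000, 512, 8
f1, f2 = 28 * fs / n, 160 * fs / n     # 437.5 Hz and 2500 Hz, exact FFT bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f1 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)

# ideal 8x up-sampling: zero-pad the spectrum (circular sinc interpolation)
X = np.fft.rfft(x)
X_up = np.zeros(up * n // 2 + 1, dtype=complex)
X_up[: len(X)] = X
x_up = np.fft.irfft(X_up, up * n) * up

# compare against the true continuous waveform on the dense grid
t_up = np.arange(up * n) / (up * fs)
ref = np.sin(2 * np.pi * f1 * t_up) + 0.3 * np.sin(2 * np.pi * f2 * t_up)
print(f"worst-case error: {np.max(np.abs(x_up - ref)):.2e}")  # ~1e-13
```

The error sits at floating-point noise: within the band there is nothing to degrade or to invent, which is exactly the "maintaining the information content" framing above.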
Bill, you now seem to be discussing MQA, which is a totally different issue and, arguably, a bit off-topic for this thread.
I was responding to your claim that upsampling perfectly recovers the original 96/24 or 192/24 content. That isn't correct - it cannot do that. And if you're saying it's only noise beyond a certain frequency cut-off and so it doesn't matter, then why bother even attempting to recover it? I think it would help if Rob jumped in and explained the exact purpose of his up-sampling. I believe it is simply to reconstruct (within 16-bit resolution) the original bandwidth-limited signal. I'll eat copious quantities of humble pie if I'm wrong, but I'm fairly sure Rob isn't claiming that his upsampling takes RBCD and converts it perfectly into the original hi-res file, which is what you seem to be suggesting.
Yes, it's about reconstructing the bandwidth-limited signal. And I maintain that that is all one needs to do; my view is that properly bandwidth-limiting to 20 kHz would be perfect SQ-wise.
But what do I know? We can't know anything for certain until we perform rigorous SQ tests. And that gets me nicely back to Davina, with which we can actually run these tests and know for certain.