Watts Up...?
Oct 10, 2017 at 3:43 AM Post #346 of 4,668
I have exactly the same question.

We are trying to reconstruct the original analog signal from before the ADC.

However, what if that signal contains some sampling from keyboards, which has PCM? Do we still hear the original PCM effects?

What if the ADC captured the output of, say, a CD player? Does the M scaler then play back that signal as it was, or does it actually correct for the damage done by the DAC of that CD player?

From what I can hear, I think WTA is able to correct transients for any type of signal, even if the signal at the ADC was of digital origin?

Please forgive the question, but the Blu 2 isn't cheap and this could help me spend the money on it :)

I'll probably buy it anyway regardless of the answer though.
 
Oct 10, 2017 at 7:57 AM Post #348 of 4,668
What does it even mean that RT60 is 1.4 seconds? RT60 is hardly a constant value. I didn't actually get this from the slide.
 
Oct 11, 2017 at 8:56 AM Post #350 of 4,668
The WTA will reconstruct the transients correctly irrespective of their origin; digital or analogue. Indeed, electronica sounds amazing on a BluDave - very fast and tight rhythms. Remember, this issue is about sampling creating timing uncertainties on transients (jitter if you like), and an M scaler reduces these uncertainties to well below the 16 bit level.

As to mixing, so long as all sources are the same sample rate, and the mixing process is linear, then WTA will reconstruct the transients correctly - as if the original recording was straight to stereo.

On the RT60 aspect - for a 16FS filter, the delay depends solely upon the tap length. So Dave's 164,000 taps is about 0.2 seconds - and if it was 1.4 seconds, then it must be a 1M tap filter. 0.2 s is nowhere near the RT60 value, but 1.4 s can be (although it can be substantially higher than this). My suggestion - that the time span of data the filter processes being similar to RT60 is the reason for the huge change in depth - is merely a suggestion; frankly I don't understand why it does it. I have plans to test this idea via Davina, and will then be able to prove or disprove the linkage. And although I came up with the RT60 idea, I am sceptical that this is indeed the reason.
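As a rough sketch of the arithmetic above (assumed figures for illustration, not Chord specifications): at 16FS a filter for 44.1 kHz material runs at 16 × 44,100 = 705,600 samples per second, and an N-tap FIR filter spans N samples of that stream:

```python
def filter_span_seconds(taps, fs_base=44100, oversample=16):
    """Time span of signal covered by an N-tap FIR filter at the given
    oversampled rate (illustrative arithmetic, not a Chord spec)."""
    rate = fs_base * oversample      # 16FS rate: 705,600 samples/s
    return taps / rate               # seconds of signal the taps cover

print(round(filter_span_seconds(164_000), 2))    # → 0.23 (Dave, ~0.2 s)
print(round(filter_span_seconds(1_000_000), 2))  # → 1.42 (1M taps, ~1.4 s)
```

This reproduces both figures in the post: 164,000 taps is roughly 0.2 seconds, and a 1M tap filter spans roughly 1.4 seconds.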
 
Oct 11, 2017 at 9:39 AM Post #351 of 4,668
Thank you Rob.

Since "16 bit" might be misunderstood as the 16 bit recording format of CD: does it mean that there could be, say, an 18 bit WTA filter accuracy?

Is there any relationship between the 16 bit level of WTA uncertainty and the 16 bits of a CD recording?
 
Oct 11, 2017 at 9:52 AM Post #352 of 4,668
When I talk about 16 bit accuracy, that means the WTA filter is identical to an ideal or perfect sinc function to better than 16 bit accuracy - and to do this you need 1M taps.

So with a 16 bit source the transient timing error is smaller than 16 bits.

Let's say the error was at 17 bits, then the error would still be 17 bits whether it is a 16 bit source or a 24 bit source.
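A minimal sketch of what "an error at n bits" means here - an error magnitude of 2^-n of full scale, which is a property of the filter and so is independent of the source's word length:

```python
import math

def error_in_bits(error_fraction):
    """Express an error, given as a fraction of full scale, as a bit level.
    An error of 2**-17 of full scale sits at the 17 bit level, whether the
    source material is 16 bit or 24 bit."""
    return -math.log2(error_fraction)

print(error_in_bits(2**-17))  # → 17.0
print(error_in_bits(2**-16))  # → 16.0
```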
 
Oct 11, 2017 at 10:24 AM Post #353 of 4,668
...what if that signal contains some sampling from keyboards which has PCM? Do we still hear the original PCM effects?
If the recording ADC has had access to the samples in the digital domain (I have no idea about today's recording/studio practice), they will benefit from the M-Scaler's advanced filtering. Otherwise, if the ADC has to deal with them as analogue signals, they won't (more precisely: not to the same degree). In any event, the sampling process itself, with its indispensable low-pass filtering, will introduce some timing errors that the M-Scaler will «accurately» reproduce. This applies to all recordings of the pre-Davina era.
 
Oct 11, 2017 at 10:45 AM Post #354 of 4,668
Thanks for the clarifications. I now totally get that linear mixing doesn't change anything. (It's been 2 decades+ since my own class in circuits/signals/systems!).

I still need to ponder for myself whether non-linear effects applied by audio processors (e.g. there are effects that adjust transient attacks) fit into the picture - whether the audio engineer implicitly voiced the product because s/he happens to be using a more average DAC to monitor, and whether that leads to any deleterious implications when reproduction occurs through the WTA process... It may be one of those questions that can only be answered by listening tests, as opposed to theory alone.
 
Oct 11, 2017 at 12:56 PM Post #355 of 4,668
The only aspect that potentially messes up the WTA is aliasing - for a sinc function to perfectly reproduce the original un-sampled signal, the signal must be bandwidth limited - and modern ADCs pretty much all use poor-quality half-band filters (-6 dB at FS/2), so the signal is messed up by aliasing. But no filter can remove aliasing - once the damage is done, you are in trouble, as it is irreparable.
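A minimal numerical sketch of why aliasing is irreparable: two different tones, one of them above FS/2, produce identical samples, so no downstream filter can tell them apart (illustrative frequencies, not from the post):

```python
import numpy as np

fs = 44100.0
n = np.arange(64)
t = n / fs

# A 30 kHz tone lies above FS/2 (22.05 kHz) and folds down to fs - 30 kHz
# = 14.1 kHz. Once sampled, the two are sample-for-sample identical.
in_band  = np.cos(2 * np.pi * (fs - 30000.0) * t)  # 14.1 kHz tone
aliased  = np.cos(2 * np.pi * 30000.0 * t)         # 30 kHz tone, above FS/2

print(np.allclose(in_band, aliased))  # → True
```

Since the sample streams are identical, any filter applied afterwards acts identically on both - the information needed to separate them was destroyed at the ADC.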

I have nearly 5TB of music, and every single track sounds better using a BluDave, so fortunately this is not an issue in terms of the filter design - in the sense that better tap length works for aliased recordings as well as properly bandwidth limited ones.
 
Oct 16, 2017 at 11:46 AM Post #356 of 4,668
So now to the second presentation, which is about DAC's themselves.

I spent a lot of time thinking about how this presentation would be put together, then I started to put the material together in PowerPoint - and quickly realised that the material I wanted to talk about would last for several hours, so I had to take my pruning shears to it. The problem I have is that this is a very complex subject, so what follows is necessarily brief.

Slide1.JPG


Slide2.JPG


Slide3.JPG


Slide4.JPG


These different noise shapers were put together for a particular client. They had trouble believing that errors that are below the threshold of audibility were important. So I put together this test for one of their engineers in my listening room. He was shocked - not because he could hear a difference - but because it was so easy to hear the difference. The reason why very tiny errors become significant is because they interfere with the brain's ability to process the data from the ears. The ear is actually a very crude transducer, and the brain has remarkable processing powers that seemingly overcomes the technical limitations of the ear.

Slide5.JPG

This is a very important slide - because if you accept this argument, then it means that there is only one optimum design solution for DAC design - there are not multiple paths to true transparency.

Slide6.JPG


Slide7.JPG


Slide8.JPG



Slide9.JPG


Slide10.JPG


Slide11.JPG


This result is truly bizarre, as it suggests that there is no limit to how accurate small signal reproduction needs to be - a conclusion I am profoundly unhappy with. At first, I thought the noise shaper performance was merely a proxy for something else going on in the analogue domain; but you also need 350 dB accuracy for digital-only noise shapers (such as converting from 56 bits down to 24 bits where the application is completely digital). I have repeated the listening tests many times with different noise shapers, and I have always had a consistent change in depth perception. It's very, very strange... But it explains why reproduced audio is so awfully poor in depth reproduction.

Slide12.JPG

Slide13.JPG


Slide14.JPG

Slide15.JPG


This was to get across the idea that using chip DAC's has fundamental limiting problems. Having said that, at least they are competently designed, with good measurements... And this is important too.

Slide16.JPG

Slide17.JPG


I started this slide by stating I loved DSD - in 1994, before I invented pulse array....

Slide18.JPG


Slide19.JPG


Slide20.JPG


DAC's are pretty useless without analogue! The point of this slide is that pulse array has huge benefits on the analogue side too.

Since I will hit the limit on a single post, the last part of the seminar will be next...
 
Oct 16, 2017 at 12:06 PM Post #357 of 4,668
The next part is measurements, and this provides objective justification for what I have been talking about.

Slide21.JPG

I can't tell you how much digital simulation has improved my work. Simulation allows one to measure the exact performance of a digital device, and you can see things that are much smaller than what you can measure in reality - hence the FFT's showing -301dB signals being faithfully reproduced. It's important for two reasons - when you code, it's easy to make mistakes, and running an FFT from your code proves that it is correct - and it allows one to listen to effects, and actually put numbers onto what you are listening to, and this is immensely valuable. In the past it was a struggle to simulate for longer than a few ms; now I can run for fractions of a second, and do 4M point FFT's with ease.
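A hypothetical illustration of how simulation exposes signals far below real-world measurement floors: with coherent sampling (an integer number of cycles per FFT record, so no window or leakage is needed), an FFT recovers a tone 200 dB down essentially exactly. The tone frequencies and levels here are made up for the example, not taken from the slides:

```python
import numpy as np

N = 4096
n = np.arange(N)

# Full-scale tone on bin 101, plus a tone 200 dB down (1e-10) on bin 397.
# Both land exactly on FFT bins, so there is no spectral leakage.
x = np.sin(2 * np.pi * 101 * n / N) + 1e-10 * np.sin(2 * np.pi * 397 * n / N)

spectrum = np.abs(np.fft.rfft(x))
level_db = 20 * np.log10(spectrum[397] / spectrum[101])
print(round(level_db, 1))  # → -200.0
```

In double precision the FFT's own rounding noise sits far below the buried tone, which is why simulation can put hard numbers on effects that no analogue test bench could resolve.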

Slide22.JPG

All of my designs have no measurable noise floor modulation... This one is from Dave, but it applies to Mojo too.

Slide23.JPG


This illustrates the power of modern simulation tools. What it provides is the ability to modify, measure and listen to one variable at a time. You have certainty, without stumbling in the dark.

Slide24.JPG

Ignore marketing claims - see the actual FFT's of jitter.

Slide25.JPG

Slide26.JPG

This shows how accurately small signals are reproduced.

Slide27.JPG

So there it is - it is necessarily simplified, but gives an objective overview into the issues involved in designing a DAC from the bottom up.
 
Oct 17, 2017 at 9:58 AM Post #360 of 4,668
Since I couldn't find any mention of taps anywhere except for Chord products, are there synonyms for taps that other manufacturers use in their specs?

They're just the multiplier terms after each delay in the FIR filter; e.g. see:

https://en.wikipedia.org/wiki/Finite_impulse_response
Your explanation creates more confusion than it helps. «Taps» is just another term for «coefficients». The number of coefficients indicates the complexity of a filter.
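A minimal sketch tying the two descriptions together: the taps are the filter's coefficients, one multiplier per delayed sample, so a 5-point moving average is a 5-tap FIR filter:

```python
import numpy as np

# Five coefficients = five taps: a simple moving-average FIR filter.
taps = np.ones(5) / 5

# A step input, smoothed by convolving with the taps.
signal = np.array([0., 0., 1., 1., 1., 1., 1.])
filtered = np.convolve(signal, taps, mode="valid")

print(len(taps))   # → 5
print(filtered)    # → [0.6 0.8 1. ]
```

More taps mean more delayed samples contribute to each output value - which is why tap count is one measure of an FIR filter's complexity, whatever a manufacturer chooses to call it.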
 
