Watts Up...?
Apr 12, 2017 at 2:15 PM Post #241 of 4,672
Hi Rob,
 
With the Hugo 2 hitting shop floors in the UK soon, will there be an announcement, or better still a solution, regarding the iOS compatibility issues? The problem is affecting some more than others, especially those looking forward to becoming proud new owners of the Hugo 2. An update to the previous message would be great.
 
Many thanks, Mk.
 
Apr 12, 2017 at 3:51 PM Post #242 of 4,672
 
Not been my experience - I have silver-plated solid-core PTFE cables that are decades old, and when you strip back the insulation it is oxide-free, perfect mirror-finish silver.
 
These wires are rated at 200 deg C - so they're impervious to oxidation.
 
The only issue you need to be wary of is soldering it - when it's soldered the silver is absorbed by the solder and you get an inter-metallic layer that does not sound good; using silver solder eliminates this issue. Also, you can't use crimp connections either; but then you can't use crimp with copper anyway.


I thought he was saying there can be a very thin oxide layer between the copper and silver after a period of time, not on the outer surface of the silver.

It's a real effect at 250 deg C with stranded silver-plated PTFE after 2,000 hours - but at 150 deg C it's not an issue. Moreover, internal silver/copper interface oxidation is always accompanied by silver surface oxidation too, as clearly the surface will get oxidized first. So the fact that decades-old PTFE-insulated silver-plated surfaces are still pure indicates that at 20 deg C this is not an issue at all.
 
I used to anneal the copper at 200 deg C overnight, and that process sounded much better, with better transparency and smoother SQ. If this were a real issue, the opposite would have occurred.
 
Apr 25, 2017 at 10:12 AM Post #243 of 4,672
Rob,
I came across this...
https://research.googleblog.com/2016/11/enhance-raisr-sharp-images-with-machine.html?m=1
Sounds like something the audio industry could utilize... to similarly recreate the original audio from 44.1k samples... akin to WTA... but using big data and compute power to determine the 'best' filter.
This would be Google's way of going about building a Davina... or doing what Bob Stuart/MQA is trying to do by reversing digitizing flaws.
Google could run this process on every recorded track they have originals for and reveal the corresponding idealized filter for DAC decoding. Per song!
Or you could combine this with WTA to Davina to fine tune your algorithms.

I'd be curious to hear your view on this.
Thanks
Dan
 
Apr 27, 2017 at 11:22 AM Post #244 of 4,672
Yes it is very interesting - but you can't extrapolate visual interpolation to audio interpolation, even if the theory can be applied to both. The first major difference is that visual interpolation is in two dimensions - the filtering must compute from samples (pixels) up and down, left and right - but audio is only one dimension, in time. But the primary difference is that you can see an individual pixel in the examples given - whereas you can't hear above 20 kHz, and CD from a frequency point of view is perfect (at least that's my assumption today - I will come back to this point later). So if audio were visual, you would not be able to see an individual pixel, so extra high-frequency content is not needed. Now in the case of RAISR you can actually see the pixels, and RAISR is trying to re-create high-frequency content that simply isn't there - so it's inferring it using clever adaptive processing.

Now with audio we do not need to reproduce HF content above 20 kHz, simply because we can't hear it - the visual analogy would be that making a picture sharper is pointless when the eye's focus is the limiting factor, not the image's focus.

But what we categorically do need is for the timing of when the signal changes state (visually: the point where it goes from dark to white) to be accurately reproduced - and this is where the interpolation filter fails, as it creates timing uncertainty. Fortunately, we know absolutely for certain how to do this - simply use an infinite-tap-length FIR filter that has the ideal sinc function impulse response. Then the interpolated output will be identical to the bandwidth-limited signal in the ADC before it was sampled.

Now of course we can't do infinite-tap-length sinc function filters, but we can do more and more taps as technology improves. And we can adjust the filter impulse response so that it does as good a job as possible of recovering the original transient timing given a finite number of taps. Hence my constant pushing on tap length, and constant improvements to the WTA algorithm. So far every increase has given a big improvement in sound quality, and even though I am at 1 million taps with the M scaler, I doubt that this is the end of the process.
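As a rough illustration of the finite-tap idea, here is a minimal NumPy sketch of a windowed-sinc interpolator. The Kaiser window here is just a stand-in - this is not the WTA algorithm, whose coefficients are not public - but it shows how tap count and windowing trade off against the ideal infinite sinc.

```python
import numpy as np

def windowed_sinc_upsample(x, ratio=16, taps_per_phase=64, beta=12.0):
    """Upsample x by an integer ratio with a finite Kaiser-windowed sinc FIR.

    A generic textbook interpolator, not the WTA filter - only meant to
    illustrate how a finite tap count approximates the ideal sinc.
    """
    n_taps = taps_per_phase * ratio                     # total FIR length
    t = (np.arange(n_taps) - (n_taps - 1) / 2) / ratio  # sinc argument, cutoff at the old Nyquist
    h = np.sinc(t) * np.kaiser(n_taps, beta)            # truncated + windowed sinc
    h *= ratio / np.sum(h)                              # DC gain = ratio, offsets the zero-stuffing loss

    up = np.zeros(len(x) * ratio)                       # zero-stuff to the new rate...
    up[::ratio] = x
    return np.convolve(up, h, mode="same")              # ...then low-pass with the sinc FIR

# toy usage: a 48 kHz tone taken up 16x to 768 kHz
fs = 48_000
x = np.sin(2 * np.pi * 1_000 * np.arange(4_800) / fs)
y = windowed_sinc_upsample(x, ratio=16)
print(len(x), "samples ->", len(y))
```

With more taps per phase (and a better-designed window) the reconstructed transient timing gets closer to the band-limited original, which is the point being made about tap length.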

The benefit of the Davina project (the 768 kHz pulse array ADC that I am currently working on) is that it will answer two fundamental questions. Because I can take a 768 kHz recording, decimate it to 48 kHz, then M scale it back up to 768 kHz, we can hear for sure how much of a loss 1 M taps generates - and this will be known for certain. The second point is that I can take the 768 kHz recording and filter it to 24 kHz to ensure bandwidth limiting - this is the filtering one normally does for decimation - but I have the processing capability to filter without actually decimating, so it will remain at 768 kHz. Then we can test the assumption I put at the beginning, that we can't hear the effects of bandwidth limiting - properly done of course - and that the only benefit of higher sample rates is the improvement in transient timing accuracy (which of course we can hear to much greater accuracy than CD allows). Of course, I may be wrong about my assumptions here, but until you do a carefully controlled listening test you know nothing for certain.
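A sketch of those two round-trip tests, using SciPy's generic polyphase resampler and a standard low-pass in place of the WTA / M scaler and the actual decimation filter (so the numbers are only illustrative, and the 997 Hz tone is just a stand-in for real material with transients):

```python
import numpy as np
from scipy import signal

fs_hi, fs_lo = 768_000, 48_000
ratio = fs_hi // fs_lo                                   # 16

# stand-in for a 768 kHz master
t = np.arange(fs_hi // 10) / fs_hi
original = 0.5 * np.sin(2 * np.pi * 997 * t)

# Test 1: decimate to 48 kHz, then interpolate back up to 768 kHz
low_rate   = signal.resample_poly(original, up=1, down=ratio)
round_trip = signal.resample_poly(low_rate, up=ratio, down=1)

# Test 2: band-limit to ~24 kHz but keep the 768 kHz rate (no decimation)
sos = signal.butter(8, 24_000, fs=fs_hi, output="sos")
band_limited = signal.sosfiltfilt(sos, original)

# the residuals the listening tests are meant to put numbers and ears on
n = min(len(original), len(round_trip))
err_decimated = original[:n] - round_trip[:n]
err_bandlimit = original - band_limited
rms = lambda v: np.sqrt(np.mean(v ** 2) + 1e-300)
print("decimate/interpolate residual:", 20 * np.log10(rms(err_decimated) / rms(original)), "dB")
print("band-limit only residual:     ", 20 * np.log10(rms(err_bandlimit) / rms(original)), "dB")
```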

And a final point - adaptive processing sounds terrible - the brain can detect that something is constantly modulating the signal, and you always lose out in realism.

Rob

PS I will, when they become available, be posting the audio files showing the effects of M scaling against the ideal, and the effects of bandwidth limiting...
 
Apr 27, 2017 at 2:45 PM Post #246 of 4,672
I understand all the caveats, but surely this could be a useful tool for comparing your DAC to the original analogue and tweaking it to get nearer. I guess that's what you have been working towards anyway...
 
Apr 27, 2017 at 3:55 PM Post #247 of 4,672
An audio equivalent of the techniques that make pixelated images look more detailed would be to take an absolute garbage low-bitrate MP3 and make it sound a little better, or maybe take 6-bit music and make it sound better.
The point is, as @Rob Watts pointed out, that we need to start with something clearly bad for the subjective improvement to serve a purpose.

When it comes to trying to compensate for the defects of the recording tools, something like the lens and camera modules of DXO Optics Pro would be a closer analogy. At some point we'll have something like that where we enter what amp and headphone we're using, a loudness level (maybe with automated detection/calibration), and maybe data on who's listening (HRTF). Then the signal would be tailored to minimize some of the distortions of the headphone. That could be fun, and in theory it's already doable.
Maybe even non-linear distortions could be accounted for at some point.
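In sketch form, the linear part of that idea is already easy: a correction FIR built from a measured headphone response and applied by convolution. Everything below (the response points, the flat target) is invented purely for illustration:

```python
import numpy as np
from scipy import signal

fs = 48_000
# hypothetical measured headphone response (frequency in Hz, level in dB) - invented numbers
freqs       = np.array([0, 1_000, 3_000, 8_000, 16_000, 24_000])
measured_db = np.array([0.0, 0.0, 4.0, -6.0, -3.0, 0.0])

target_db = np.zeros_like(measured_db)                  # aim for flat
corr_db   = np.clip(target_db - measured_db, -12, 12)   # bounded inverse correction

# linear-phase FIR whose magnitude follows the desired correction curve
fir = signal.firwin2(2047, freqs, 10 ** (corr_db / 20), fs=fs)

music     = np.random.randn(fs)                         # stand-in for a real track
corrected = signal.fftconvolve(music, fir, mode="same")
```

Non-linear distortion is the hard part - that needs a model of the driver, not just an EQ curve.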
 
Jun 10, 2017 at 2:48 AM Post #248 of 4,672
Chord's Japanese distributor, Hiroko from Timelord, kindly sent me a private 768 kHz recording that they have. It's a test recording from an animation song production company.

I initially verified that it was native 768k via an FFT, and yes, it shows the characteristic increasing ADC noise-shaper noise, hitting a maximum of -70 dB at 384 kHz.
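That kind of check is easy to reproduce: look at the spectrum above the audio band and see whether noise-shaper noise rises towards Nyquist. A rough sketch, where the file name and the soundfile reader are only assumptions for illustration:

```python
import numpy as np
import soundfile as sf            # assumption: any reader that returns float samples will do
from scipy import signal

x, fs = sf.read("native_768k_test.wav")   # hypothetical file name
if x.ndim > 1:
    x = x[:, 0]                           # one channel is enough for this check

# Averaged spectrum in relative dB. A genuine 768 kHz delta-sigma ADC capture shows
# noise rising towards 384 kHz (around -70 dB in the example above); an upsampled
# 44.1/48 kHz file is essentially empty above ~24 kHz.
f, pxx = signal.welch(x, fs=fs, nperseg=1 << 16)
db = 10 * np.log10(pxx + 1e-30)

for lo, hi in [(0, 20e3), (20e3, 100e3), (100e3, 300e3), (300e3, fs / 2)]:
    band = (f >= lo) & (f < hi)
    print(f"{lo/1e3:5.0f}-{hi/1e3:5.0f} kHz: peak {db[band].max():7.1f} dB")
```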

What was interesting was that it sounded just like an M-scaled recording: immense sound-stage, with the reverb sounding distinct from the source. It has the unique quality you get from the M scaler, which is difficult to explain but becomes essential once you experience it - a feeling of tangibility, or solidity - in short, things sound just plain real.

I was very excited about this track, and it sounds completely different to the DXD recordings I have heard, which so far have not impressed me. Is this due to poor examples, or is there something different in perceptual terms about 768 kHz recordings rather than DXD 352 kHz? This is something the Davina ADC project will answer, among many other things.

The high-frequency noise from this test track is a problem though; engaging Dave's HF filter made a huge difference in sound quality. This problem won't happen with the Davina project, as noise-shaper noise at 384 kHz will be below -180 dB thanks to the ADC's 11th-order noise shaping running natively at 6 bits / 104 MHz. So ADC noise-shaper noise won't be an issue. I strongly suspect this aspect will prove extremely significant.

Further news - the Davina prototype is now being assembled....
 
Jun 10, 2017 at 5:04 AM Post #249 of 4,672
I wonder if what's happening here is that you are hearing an architectural issue in the way DAVE works?

What if DAVE is working better due to the first-stage WTA being inactive when fed with 16FS (i.e. 768 kHz sample rate) material? The power consumption within DAVE associated with the arduous primary WTA could be substantially lower if it's not being used.

Alternatively, is there a subtle bug in DAVE's first-stage WTA, or some kind of interaction between the 16FS WTA output and the second-stage WTA input?

Also, in the past haven't you said that there's an issue with the coefficients inside the primary WTA, which is the reason the HF filter sounds better when playing back 44.1 kHz material through DAVE, even though "it shouldn't"?

Now playing: Timesbold - Sing
 
Jun 11, 2017 at 1:33 AM Post #251 of 4,672
I wonder if what's happening here is that you are hearing an architectural issue in the way DAVE works?

What if DAVE is working better due to the first-stage WTA being inactive when fed with 16FS (i.e. 768 kHz sample rate) material? The power consumption within DAVE associated with the arduous primary WTA could be substantially lower if it's not being used.

Alternatively, is there a subtle bug in DAVE's first-stage WTA, or some kind of interaction between the 16FS WTA output and the second-stage WTA input?

Also, in the past haven't you said that there's an issue with the coefficients inside the primary WTA, which is the reason the HF filter sounds better when playing back 44.1 kHz material through DAVE, even though "it shouldn't"?

Now playing: Timesbold - Sing

For sure Dave sounds a bit better and measures slightly better via an M scaler or with a 768k input.

And when I first started listening to the M scaler, I could not believe the sound quality improvements of M scaling; and I did seriously entertain the possibility that there was a fault in the WTA coding of Dave.

But for sure, I know that is not the case, for a couple of reasons. Firstly, simulation. A Verilog simulation is not a simulation in the sense that it approximates the output; with a Verilog simulation, if your FPGA module is fed that particular data set, you are guaranteed that actual output (assuming the real FPGA meets timing closure). In the past, simulation was very limited - there was no way I could simulate a WTA filter and get enough data to do an FFT. Today, it's not a problem; I can do a 4-million-point FFT from simulation data and find out exactly how well the module is performing, to a level of accuracy that you can't get with real-world measurements. So for example, here is the plot of the THD and noise performance of the output noise shaper from the M scaler:
[Plot: -301 dB truncation test - FFT of the M scaler output noise shaper]


This is my usual distortion test for digital modules - can it perfectly reproduce a -301 dB signal? I do this test because I know that depth reproduction relies upon perfect reproduction of small signals in terms of amplitude. What is interesting with this test is that the truncator is perfectly reproducing the dither of the 80-bit test tone; the noise floor you can see at -385 dB comes from the test tone. So actually it has better than 80-bit performance up to 15 kHz. All the modules in Dave pass this test, and so does the M scaler. Also, when I test the filter performance I know for certain it is performing as intended - when you examine FFTs of the filter performance, the side lobes are examined against the ideal; if one coefficient is incorrect (even one out of half a million), you will see a difference, and part of my testing process is to ensure they are identical.
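The single-wrong-coefficient point is easy to demonstrate with a generic filter (a plain windowed sinc here, since the WTA coefficients themselves are not public): perturb one tap by one LSB of a 24-bit word and the stop-band of the frequency response visibly lifts away from the ideal.

```python
import numpy as np

# "Golden" stand-in coefficients: a half-million-tap windowed sinc.
n = 500_000
t = (np.arange(n) - (n - 1) / 2) / 16
golden = np.sinc(t) * np.kaiser(n, 20.0)

corrupted = golden.copy()
corrupted[123_456] += 2.0 ** -24          # one tap off by 1 LSB of a 24-bit word

def response_db(h, nfft=1 << 21):
    """Magnitude response in dB relative to the passband peak."""
    mag = np.abs(np.fft.rfft(h, nfft))
    return 20 * np.log10(mag / mag.max() + 1e-200)

diff = response_db(corrupted) - response_db(golden)
print("largest deviation from the ideal response:", round(float(np.abs(diff).max()), 1), "dB")
```

In practice the comparison would be against the exact golden coefficient set, not a stand-in, checking that the responses match exactly.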

So I know objectively that Dave and the M scaler are correct; but the real proof is when you set it to video mode. In this mode, I select a different set of coefficients for some 16,000 out of the half a million, and merely insert the data into a different point in the SRAM buffer. Everything else is the same; Dave can't know there is a change; THD and noise are identical, the signal path is identical; but when you listen to 2/3 million taps against the full one million you hear a surprisingly big difference - the sound-stage opens up tremendously with the full 1 M taps.

So there really is something odd about the full million taps, and running at 750/768 kHz, as the sound-stage also collapses at lower sample rates. And I still do not fully understand why this is the case. I have some ideas why, and will be testing these ideas out with Davina.

Rob
 
Jun 11, 2017 at 6:26 AM Post #252 of 4,672
[...] but the real proof is when you set it to video mode. In this mode, I select a different set of coefficients for some 16,000 out of the half a million, and merely insert the data into a different point in the SRAM buffer. Everything else is the same; Dave can't know there is a change; THD and noise are identical, the signal path is identical; but when you listen to 2/3 million taps against the full one million you hear a surprisingly big difference - the sound-stage opens up tremendously with the full 1 M taps.
Aha, I didn't think about the reduced tap-count mode, which is still upsampling to 16FS. That proof really is bulletproof.

What do you hear when you listen to the "error signal"? If you subtract DAVE's output with 2/3 million taps from DAVE's output with 1 million taps, what kind of signal is that? Are there any clues there?

Perhaps you need Davina to do that experiment?

Or perhaps if you did the subtraction in the digital domain?

Though the two subtractions presumably won't be entirely the same in nature, since the analogue comparison will have been subject to DAVE's noise shaper. I wonder if that difference, between the analogue and digital subtractions, holds another clue.
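In the digital domain that subtraction is just a null test: align the two renders and difference them. A generic sketch, nothing DAVE-specific:

```python
import numpy as np
from scipy import signal

def null_test(a, b):
    """Time-align two renders of the same material and return the residual.

    a, b: 1-D float arrays (e.g. the 1M-tap and 2/3M-tap outputs captured
    or simulated at the same sample rate). Purely illustrative.
    """
    n = min(len(a), len(b))
    a, b = a[:n].astype(float), b[:n].astype(float)

    # align by cross-correlation, in case the two paths have different latency
    lag = int(np.argmax(signal.correlate(a, b, mode="full", method="fft"))) - (n - 1)
    if lag > 0:
        a, b = a[lag:], b[:n - lag]
    elif lag < 0:
        a, b = a[:n + lag], b[-lag:]

    residual = a - b
    rms = lambda v: np.sqrt(np.mean(v ** 2) + 1e-30)
    print(f"residual sits {20 * np.log10(rms(residual) / rms(a)):.1f} dB below the signal")
    return residual
```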

Now playing: Led Zeppelin - Your Time is Gonna Come
 
Jun 11, 2017 at 7:56 AM Post #253 of 4,672
With Davina I have a test planned that will give a measured number for the transient error signal. Of course, that won't tell us why extra taps above 0.66 M improve sound-stage, as that's a perceptual issue, but it will at least put an objective number on what I have been talking about with longer tap lengths and performance.
 
Jun 11, 2017 at 9:22 AM Post #254 of 4,672
Just finished the 17 pages - a good read. I remember posting after I got my Hugo 2.5 years ago and reading about taps; a poster commented he thought taps were what he did with his pencil while he listened to his Hugo at work. I laughed at the frank similarity to my own depth of understanding, which is not to be confused with appreciation of music and the striving for acuity, transparency, and honesty derived from the listening.
While I have all kinds of bias and confounding errors in judgement of the topic, as well as a neophyte understanding of acoustic engineering, Rob Watts' blog and all his posts in Chord threads have shed some light on the plight and success of what, at my price points, I am hearing.
I applaud the work, the questions, and above all the development of these ideas. While I continue to tap my pencil to music with a smile on my face, I have subscribed to this thread and await future instalments.
PS I appreciate, jazz, your questions on TT and power and the use of amps - guilty as charged. As an aside, I look forward to developments down the road in the middle-market desktop DAC scene. Cheers to all as I slip back into lurker mode.
 
Jun 15, 2017 at 11:32 PM Post #255 of 4,672
@Rob Watts
While following the PSAudio DirectStream threads regarding their regular DAC firmware updates, I was surprised that several final release candidates were auditioned prior to each release - and the code was mostly functionally identical, with only changes in signal flow, timing, parallel vs. serial, compilation switches, etc. Yet each variant sounded 'different' - and the final release determination was based on a compromise. What this tells me is that we are so near the bleeding edge that silicon switching noise and ground-plane disturbances affect even a 'perfect' DAC algorithm.

I know you have a handle on the small details... but some of the claims of M-Scaler taps and an ultra-low noise floor affecting perceived sonics seem so incredible. Is there an end to all this? It's starting to sound silly. Are humans blessed with a quantum computer in our heads that can do math at 80-bit precision in real time? Or, like the PSAudio firmware variants... are you just shuffling around the electrons and claiming 'real' differences where there are none?
No offence or disrespect intended, please... I am just curious about your comments.
Thanks
Dan
 
