Chord Electronics - Blu Mk. 2 - The Official Thread
Apr 1, 2017 at 7:12 AM Post #226 of 4,904
Oh dear, we are not comparing the same bit numbers. When I talk about bit depth regarding WTA filters, I mean how close the interpolation filter gets to the signal in the ADC in terms of reproducing transient timing; I am comparing the ideal coefficients that would perfectly reproduce the original bandwidth-limited signal in the ADC at the point of sampling against the actual coefficients used. Non-WTA DACs do this to only 1 to, at best, about 7 bits of accuracy. What you are talking about is the audio-frequency steady-state resolution of the signal, and that is something entirely different!

 

Okay, show me some measurements that prove this!

 

I mean, you have to have tested this on MSB et al. to state something like that. So please include the MSB Select II and the Schiit Yggdrasil that onsionsi used as examples.

 
Apr 2, 2017 at 9:21 AM Post #227 of 4,904
It is a theoretical result. For a constant sinusoid most reconstruction filters do better; for some other signals the situation can be much worse. For music it is hard to know the exact error, but this is not what Rob claims.
 
Hence, back to the claim: it is theoretical and needs no measurement to understand, only the length of the reconstruction filter. The perfect reconstruction filter for a perfectly bandwidth-limited signal is the sinc function, sin(x)/x, so its envelope falls off as 1/x. Take DAVE as an example: with 164,000 taps at 16x, the reconstruction filter covers -5125 to +5125 original samples and is zero outside. The maximum error from outside this interval is then roughly the peak amplitude of the sinc there, i.e. roughly 1/(5125*pi), which is roughly 1/2^14 = 1/16384, i.e. 14 bits. Observe that the actual error could be even larger, but unless we get the coefficients of the filter we cannot check the error within the interval.
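For anyone who wants to check the arithmetic, here is a minimal Python sketch of that bound (the 164,000-tap and 16x figures are the ones quoted in this thread, not official Chord specifications):

```python
import math

# The thread's figures, not official Chord specs:
taps = 164_000      # DAVE's WTA tap count as quoted above
oversample = 16     # 16x: 44.1 kHz -> 705.6 kHz

half_span = taps / (2 * oversample)        # original samples covered per side
worst_tail = 1 / (math.pi * half_span)     # |sin(x)/x| <= 1/x bound at the edge

print(f"half span: {half_span:.0f} original samples")      # -> 5125
print(f"tail bound: {worst_tail:.2e} ~ 2^-{math.log2(1 / worst_tail):.1f}")
# -> ~6.2e-05 ~ 2^-14.0, i.e. roughly the 14 bits stated above
```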
 
Apr 2, 2017 at 10:09 AM Post #228 of 4,904
It is a theoretical result. For a constant sinusoid most reconstruction filters do better; for some other signals the situation can be much worse. For music it is hard to know the exact error, but this is not what Rob claims.

Hence, back to the claim: it is theoretical and needs no measurement to understand, only the length of the reconstruction filter. The perfect reconstruction filter for a perfectly bandwidth-limited signal is the sinc function, sin(x)/x, so its envelope falls off as 1/x. Take DAVE as an example: with 164,000 taps at 16x, the reconstruction filter covers -5125 to +5125 original samples and is zero outside. The maximum error from outside this interval is then roughly the peak amplitude of the sinc there, i.e. roughly 1/(5125*pi), which is roughly 1/2^14 = 1/16384, i.e. 14 bits. Observe that the actual error could be even larger, but unless we get the coefficients of the filter we cannot check the error within the interval.
Absolute rocket science!!!!
 
Apr 2, 2017 at 6:30 PM Post #229 of 4,904
This is just a passing comment; I don't pretend to know how it works, but some universal Blu-ray players from Sony and Oppo output the DSD bitstream directly to the amp through the HDMI connection. The PS Audio DirectStream Memory Player does it through the I2S connection. So it's unusual for an SACD player to deliver the digital data out to a DAC, but possible. However, it probably won't work with the Blu Mk2 as it is.

Cheers, Pierre
 
Apr 2, 2017 at 6:46 PM Post #230 of 4,904
The aforementioned Blu-ray players convert DSD to PCM when sending a signal via HDMI. If you connect a PS Audio DirectStream Memory Player via I2S to a PS Audio DirectStream DAC, you get the same SACD upsampled while still remaining in DSD. The DirectStream DAC, however, upsamples everything to DSD regardless of whether it started out as PCM or DSD.
 
Apr 2, 2017 at 8:07 PM Post #231 of 4,904
   
 
 
Yeah, a little OT, but screw it. There's some weird **** going on in here anyway.
 
So, you just need AnyDVD, which runs in the background once you start it; what it does is strip all of the encryption, region coding, and so on from the Blu-ray or DVD-A.
 
Then you do all of the work in DVD Audio Extractor. It breaks down everything, and there has been nothing I haven't been able to extract in any format I want. I usually rip to WAV first and then convert to FLAC if I want.
 
I just did YES's Tales From Topographic Oceans (Steven Wilson remix) in 24/192 and 24/96, plus the 5.1 mixes as well, all with the same software. No downsampling or any junk like MakeMKV.
 
Have fun!

 
Thank you @EVOLVIST for the pointer to these resources! I finally have all my Blu-ray Audio discs ripped and happily sitting on my Roon Core.
 
For folks that find this by search in the future, here is my workflow on Mac OS X:
 
1. Rip the Blu-ray Audio disc to an ISO using Aurora Blu-ray Copy (strips the copy protection on the disc).
2. Use DVD Audio Extractor (available for OS X) to rip the tracks:
   - Rip 2-channel PCM sources as PCM/WAV.
   - Separately rip multichannel (usually DTS) sources as multichannel FLAC.
3. Clean up tags with Metadatics.
4. Use XLD to convert the 2-channel PCM to FLAC at native resolution, and to Redbook (16/44) Apple Lossless for use on iPhone (a scripted alternative is sketched below).
 
Thanks again!  DVD Audio Extractor was the missing link for me
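If you'd rather script the final WAV-to-FLAC step than run it through XLD, a minimal sketch along these lines also works (it assumes the ffmpeg CLI is installed and on your PATH; the directory name is just a made-up example):

```python
# Hypothetical batch converter for the final step above; assumes the ffmpeg
# CLI is installed.  XLD does the same job through a GUI.
import subprocess
from pathlib import Path

def wav_dir_to_flac(src_dir: str) -> None:
    """Convert every .wav in src_dir to .flac alongside it, losslessly."""
    for wav in sorted(Path(src_dir).glob("*.wav")):
        flac = wav.with_suffix(".flac")
        # -compression_level 8 = smallest FLAC files; -y overwrites quietly
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(wav),
             "-compression_level", "8", str(flac)],
            check=True,
        )

wav_dir_to_flac("rips/tales_from_topographic_oceans")  # made-up example path
```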
 
Apr 2, 2017 at 8:45 PM Post #232 of 4,904
It is a theoretical result. For a constant sinusoid most reconstruction filters do better; for some other signals the situation can be much worse. For music it is hard to know the exact error, but this is not what Rob claims.

Hence, back to the claim: it is theoretical and needs no measurement to understand, only the length of the reconstruction filter. The perfect reconstruction filter for a perfectly bandwidth-limited signal is the sinc function, sin(x)/x, so its envelope falls off as 1/x. Take DAVE as an example: with 164,000 taps at 16x, the reconstruction filter covers -5125 to +5125 original samples and is zero outside. The maximum error from outside this interval is then roughly the peak amplitude of the sinc there, i.e. roughly 1/(5125*pi), which is roughly 1/2^14 = 1/16384, i.e. 14 bits. Observe that the actual error could be even larger, but unless we get the coefficients of the filter we cannot check the error within the interval.


Please explain where the pi (= 3.1415926...) in the 1/(5125*pi) comes from.
Also, this point of view just means the higher the number of taps (on a log scale), the higher the number of bits in the coefficients. Hence an M Scaler is ~3 bits more than a DAVE.
 
Apr 3, 2017 at 12:54 AM Post #233 of 4,904
It comes from the Nyquist rate, i.e. the samples sit at ..., -2*pi, -pi, 0, pi, 2*pi, ... Often you see the samples at -2, -1, 0, 1, 2, etc., but then you use the "other" sinc function, sin(pi*x)/(pi*x).
 
Clearly you need higher bit resolution on the coefficients than the target resolution, but that is not the point. The point is that to recreate the original signal we should have added the value of the sinc function multiplied by each sample outside the window, but we did not, hence we have an error. If such a sample is at maximum, i.e. 1, then its contribution to the error is attenuated only by how small the sinc function has become at that distance, i.e. 1/(5125*pi).
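To make the truncation argument concrete, here is a small numpy sketch of my own (illustrative only; the kernel is simply chopped, whereas a real filter like the WTA is presumably windowed):

```python
# Illustration only: a hard-truncated sinc, not a windowed WTA filter.
import numpy as np

n = np.arange(-20_000, 20_001)              # "original" sample instants
x = np.cos(2 * np.pi * 0.123 * n)           # bandlimited test tone, f < fs/2

def sinc_interp(t: float, half_span: int) -> float:
    """Interpolate x at time t using only samples within +/- half_span."""
    k = np.arange(int(t) - half_span, int(t) + half_span + 1)
    return float(np.sum(x[k - n[0]] * np.sinc(t - k)))  # np.sinc = sin(pi x)/(pi x)

t = 0.5                                      # halfway between two samples
exact = np.cos(2 * np.pi * 0.123 * t)
for span in (16, 256, 5125):
    print(f"span {span:>5}: error ~ {abs(sinc_interp(t, span) - exact):.1e}")
# The error shrinks as the window grows; 1/(5125*pi) is the worst-case
# bound on what any single discarded full-scale sample could contribute.
```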
 
Apr 3, 2017 at 3:36 AM Post #234 of 4,904
The aforementioned Blu-ray players convert DSD to PCM when sending a signal via HDMI. If you connect a PS Audio DirectStream Memory Player via I2S to a PS Audio DirectStream DAC, you get the same SACD upsampled while still remaining in DSD. The DirectStream DAC, however, upsamples everything to DSD regardless of whether it started out as PCM or DSD.


From page 31 of Sony's UHP-H1 manual:
"[DSD Output Mode]
[Auto]: Outputs DSD signals from the HDMI OUT jack when playing a Super Audio CD and DSD format file. Outputs LPCM signals instead if the HDMI-connected device does not support DSD.
[Off]: Outputs PCM signals from the HDMI OUT jack when playing a Super Audio CD and DSD format file."

From page 26 of Sony's BDP-S490 manual:
"[DSD Output Mode]
[On]: Outputs DSD signals from the HDMI OUT jack when playing a Super Audio CD. When [On] is selected, no signal is output from other jacks.
[Off]: Outputs PCM signals from the HDMI OUT jack when playing a Super Audio CD."

From page 60 of Oppo's UDP-203 manual:
"Multi-Channel Digital Audio to Receiver through HDMI
If the player is connected to an A/V receiver or processor with HDMI inputs (as described on page 12), you can send all current audio formats to your receiver in pure digital form. To get the best possible audio via HDMI, you may need to set the following options on the player's Audio Output Setup menu:

If your receiver supports HDMI v1.3 with decoding capability for high resolution lossless audio formats such as Dolby TrueHD and DTS-HD Master Audio, please use the following audio output setup options:
- Secondary Audio: Off (or On if you need secondary audio)
- HDMI Audio Format: Bitstream
- SACD Output: PCM (or DSD if the receiver supports DSD over HDMI)
- S/PDIF Output: (any – not in use)

If your receiver supports HDMI v1.1/1.2 Multi-Channel PCM audio, but not high resolution lossless audio formats such as Dolby TrueHD and DTS-HD Master Audio, please use the following audio output setup options:
- Secondary Audio: Off (or On if you need secondary audio)
- HDMI Audio Format: LPCM
- SACD Output: PCM (or DSD if the receiver supports DSD over HDMI)
- S/PDIF Output: (any – not in use)."

Equally, Oppo's BDP-103D supports DSD through HDMI; see page 84 of the manual. I'm sure other universal players such as Marantz's and Pioneer's do it now as well. Of course you need a DAC or an amplifier capable of receiving and decoding a DSD bitstream through HDMI, but many do now.
So in theory the Blu Mk2 could do it as well and output a DSD bitstream to DAVE, which can decode it, but it would have to be implemented, probably through HDMI, and licence fees paid. Since DAVE can handle DSD through DoP using the USB connection, I suspect the Blu Mk2 will as well, but as I said I'm not an expert.

Cheers and sorry for the off-topic post, Pierre

PS: multiple edits while I retrieved and copy-pasted information.
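Since DoP over USB was mentioned: for the curious, here is a toy sketch of the DoP packing scheme as published in the open DoP v1.1 convention (the alternating marker byte in the top 8 bits is how a DAC tells DoP apart from ordinary PCM):

```python
# Toy sketch of DoP (DSD over PCM) v1.1 framing: each 24-bit PCM word
# carries 16 DSD bits under an alternating 0x05/0xFA marker byte, so
# DSD64 (2.8224 MHz) rides on a 176.4 kHz PCM stream.  Mono, for brevity.
from typing import Iterator

DOP_MARKERS = (0x05, 0xFA)

def dop_frames(dsd_bytes: bytes) -> Iterator[int]:
    """Pack a mono DSD bitstream into 24-bit DoP words, oldest bits first."""
    for i in range(0, len(dsd_bytes) - 1, 2):
        marker = DOP_MARKERS[(i // 2) % 2]           # alternates each frame
        yield (marker << 16) | (dsd_bytes[i] << 8) | dsd_bytes[i + 1]

for word in dop_frames(bytes([0xAA, 0x55, 0xCC, 0x33])):
    print(f"{word:06X}")                              # -> 05AA55, FACC33
```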
 
Apr 3, 2017 at 1:41 PM Post #235 of 4,904
It is a theoretical result. For a constant sinusoid most reconstruction filters do better; for some other signals the situation can be much worse. For music it is hard to know the exact error, but this is not what Rob claims.
 
Hence, back to the claim: it is theoretical and needs no measurement to understand, only the length of the reconstruction filter. The perfect reconstruction filter for a perfectly bandwidth-limited signal is the sinc function, sin(x)/x, so its envelope falls off as 1/x. Take DAVE as an example: with 164,000 taps at 16x, the reconstruction filter covers -5125 to +5125 original samples and is zero outside. The maximum error from outside this interval is then roughly the peak amplitude of the sinc there, i.e. roughly 1/(5125*pi), which is roughly 1/2^14 = 1/16384, i.e. 14 bits. Observe that the actual error could be even larger, but unless we get the coefficients of the filter we cannot check the error within the interval.

 

The length of the reconstruction filter depends on the frequency. Nyquist-Shannon sampling theorem: If a continuous-time signal contains only frequencies below the Nyquist frequency fs/2, then it can be perfectly reconstructed from samples taken at sampling frequency fs. This suggests that prior to sampling, it is reasonable to filter a signal to remove components with frequencies above fs/2.

 

The more you upsample, the longer the reconstruction filter needs to be to cover the increase in bandwidth. Does it also mean that the interpolated samples become more accurate the more you upsample an RBCD recording made at 44.1 kHz (8x, 16x or 32x), or does the signal only have to be below the Nyquist frequency fs/2, in theory?

 
Apr 3, 2017 at 2:36 PM Post #236 of 4,904
 

The more you upsample, the longer the reconstruction filter needs to be to cover the increase in bandwidth. Does it also mean that the interpolated samples become more accurate the more you upsample an RBCD recording made at 44.1 kHz (8x, 16x or 32x), or does the signal only have to be below the Nyquist frequency fs/2, in theory?

 
Upsampling doesn't increase the bandwidth of a signal (which is already bandwidth-limited, as you pointed out yourself); it works together with a low-pass filter that suppresses the spectral images above the original Nyquist frequency and reconstructs the original waveform by smoothing the steps originating from the 44.1 kHz sampling rate. The more taps (= coefficients) the filter comprises, the sharper the filter, which is a good thing, since it helps preserve transient accuracy in the audio band below, in contrast to smoother filters.
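For illustration, this is what that interpolation stage looks like with an off-the-shelf polyphase resampler (scipy's default filter, nothing like a 164,000-tap WTA filter, but the principle of zero-stuffing plus an image-suppressing low-pass is the same):

```python
# Toy illustration, not Chord's filter: scipy's polyphase resampler
# zero-stuffs to the higher rate and low-passes away the spectral images.
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(0, 0.01, 1 / fs)              # 10 ms of signal
x = np.sin(2 * np.pi * 1000 * t)            # 1 kHz tone sampled at 44.1 kHz

x16 = resample_poly(x, up=16, down=1)       # 16x oversampled -> 705.6 kHz
print(len(x), "->", len(x16))               # 441 -> 7056 samples
```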
 
Apr 3, 2017 at 2:44 PM Post #237 of 4,904
If the input sampling frequency increases, then the spacing between samples at the Nyquist rate decreases. Hence, for a fixed target sampling rate, the number of taps needed in the filter for a given accuracy decreases. Back to the DAVE example: we have 164,000 taps at 16x, i.e. 44,100 Hz upsampled to 705,600 Hz, so we would need 82,000 taps at 8x for 88,200 Hz upsampled to 705,600 Hz. Observe that this is the same number of computations for the FPGA, 164,000 × 44,100 × 2 = 82,000 × 88,200 × 2, i.e. the FPGA is still running at full load. The 82,000 taps still cover the same span of original samples, -5125 to +5125, which determines the error outside the covered interval.
 
The higher you upsample with a good filter (the WTA in this example), the better the accuracy compared to e.g. linear interpolation, but there is a trade-off between the length in number of Nyquist samples and the upsampling rate. That is, if we go to 32x the accuracy within the interval increases, but we get more error because the filter becomes too short in the number of original samples covered: the 164,000 taps then only cover -2562 to +2562 original samples.
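A quick sanity check of those figures, using the tap counts and rates quoted in this thread:

```python
# The tap counts and rates quoted in this thread, not official specs.
out_rate = 705_600                                 # fixed output rate, Hz

for taps, in_rate in [(164_000, 44_100), (82_000, 88_200)]:
    ratio = out_rate // in_rate                    # oversampling factor
    mults = taps * in_rate * 2                     # multiplies/s, stereo
    half_span = taps // (2 * ratio)                # original samples per side
    print(f"{taps:>7} taps @ {ratio:>2}x: {mults:.4e} mult/s, +/-{half_span}")
# Both lines give 1.4465e+10 mult/s and +/-5125 samples, matching the post.
```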
 
Apr 3, 2017 at 3:53 PM Post #238 of 4,904
If the input sampling frequency increases, then the spacing between samples at the Nyquist rate decreases. Hence, for a fixed target sampling rate, the number of taps needed in the filter for a given accuracy decreases. Back to the DAVE example: we have 164,000 taps at 16x, i.e. 44,100 Hz upsampled to 705,600 Hz, so we would need 82,000 taps at 8x for 88,200 Hz upsampled to 705,600 Hz. Observe that this is the same number of computations for the FPGA, 164,000 × 44,100 × 2 = 82,000 × 88,200 × 2, i.e. the FPGA is still running at full load. The 82,000 taps still cover the same span of original samples, -5125 to +5125, which determines the error outside the covered interval.
 
The higher you upsample with a good filter (the WTA in this example), the better the accuracy compared to e.g. linear interpolation, but there is a trade-off between the length in number of Nyquist samples and the upsampling rate. That is, if we go to 32x the accuracy within the interval increases, but we get more error because the filter becomes too short in the number of original samples covered: the 164,000 taps then only cover -2562 to +2562 original samples.

 

Yes, the more you upsample the more taps you need to keep the accuracy. There is certainly a trade-off between the length in number of samples and the upsampling rate at some point.

 

What about diminishing returns? Not in cost but in SQ. Diminishing returns because the noise of other components will increase, the responsiveness of voltage changes is not without limits, and then there's the quality of the recordings, the transient response of the transducer, etc.

 
Apr 3, 2017 at 3:56 PM Post #239 of 4,904
   
Upsampling doesn't increase the bandwidth of a signal (which is already bandwidth-limited, as you pointed out yourself); it works together with a low-pass filter that suppresses the spectral images above the original Nyquist frequency and reconstructs the original waveform by smoothing the steps originating from the 44.1 kHz sampling rate. The more taps (= coefficients) the filter comprises, the sharper the filter, which is a good thing, since it helps preserve transient accuracy in the audio band below, in contrast to smoother filters.

It depends; there are three definitions of bandwidth.

 

1) In computer networks, bandwidth is used as a synonym for data transfer rate, the amount of data that can be carried from one point to another in a given time period (usually a second).

2) Bandwidth is the range of frequencies.

3) In business, bandwidth is sometimes used as a synonym for capacity or ability.

 

When we upsample we change the sample rate, so definition 1. 88.2 kHz consists of 100% more samples than 44.1 kHz, and 176.4 kHz of 300% more samples (four times as many) than 44.1 kHz.

 

The length of the reconstruction filter depends on frequency, but also on the number of samples. More samples means more samples to filter, I think.

 

http://searchenterprisewan.techtarget.com/definition/bandwidth

 

 
Apr 3, 2017 at 4:33 PM Post #240 of 4,904
   

Yes the more you up sample the more taps you need to keep the accuracy. There is certainly a trade-off between the length in the number of samples and the up sample rate at some point.

 

What about diminishing returns? Not in cost but in SQ. Diminishing returns because the noise of other components that will increase, responsiveness of the change in voltage is not without limits, the quality of the recordings, the transiency of the transducer etc etc.

Yes, agreed. To me it seems very hard to explain why you would need the number of taps Chord is using, at least with a typical ADC, but this is what makes it so much fun when someone like Rob does the listening and pushes on despite what everyone else in the industry is saying. Listening to WTA DACs proves to me that Rob is right, or at least on the right track. Cannot wait for some music examples from the Davina ADC.
 
But looking at typical test signals, all the errors you would expect from the short reconstruction filters in other DACs seem to cancel out, so why music needs these very large filters is fascinating. Maybe Rob is right ... the brain is the world's most advanced signal processor.
 
