Hugo M Scaler by Chord Electronics - The Official Thread
Dec 17, 2019 at 1:47 PM Post #9,736 of 18,414
HQPlayer PCM upsampling needs only a Core i3. Upsampling 44.1/48 kHz PCM 16x to 705.6/768 kHz PCM uses about 15% of a Core i5 and under 10% of a Core i7. You can set bits=24, filter 1x,Nx = 'Sinc-M' (or 'poly-sinc-long-LP') and dither = none. IMO, after months of listening, this comes very close to WTA1.
 
Dec 17, 2019 at 5:32 PM Post #9,739 of 18,414
Has anyone powered their CCA with an iFi iPower? If so, could the benefits gained be shared here? mk
 
Dec 17, 2019 at 6:16 PM Post #9,742 of 18,414
Also, to add to the above: will the 2go be able to connect to the HMS?
 
Dec 17, 2019 at 6:49 PM Post #9,743 of 18,414
Hello Rob,

Some background:

I use Roon's DSP Convolution engine for 131k tap FIR room correction filters along with the volume leveling feature for volume adjustment free listening. Before any DSP is applied to the signal, Roon expands the audio to 64bit floating point and does all DSP with 64bits of precision. Once the desired DSP has been applied, Roon will dither (TPDF) back down to the maximum word length accepted by the audio device. So in the case of Chord DACs, this would be 32bits for the USB input and 24bits for the Toslink input.
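The TPDF dither step described above can be sketched in a few lines. This is a minimal illustration of the general technique (triangular-PDF noise of one LSB peak-to-peak added before rounding), not Roon's actual implementation:

```python
import numpy as np

def tpdf_dither(samples_float, target_bits):
    """Quantize float samples (in [-1, 1]) to target_bits using TPDF dither:
    add triangular noise spanning +/-1 LSB, then round to the nearest step."""
    lsb = 2.0 ** -(target_bits - 1)  # one quantization step
    # Sum of two uniform variables gives a triangular PDF
    noise = (np.random.uniform(-0.5, 0.5, samples_float.shape) +
             np.random.uniform(-0.5, 0.5, samples_float.shape)) * lsb
    quantized = np.round((samples_float + noise) / lsb) * lsb
    return np.clip(quantized, -1.0, 1.0 - lsb)

# e.g. dither a 64-bit float stream down to 24 bits for a Toslink input
x = np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000) * 0.5
y = tpdf_dither(x, 24)
```

The point of the triangular (rather than uniform) noise is that it fully decorrelates the quantization error from the signal, turning truncation distortion into benign broadband noise.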

My questions:

A. Coming from the 64bit floating point audio stream, is it best to dither to 32bit and then send to the M-Scaler via USB, or should I dither to 24bit and use the optical input for full electrical isolation?

B. Does the M-Scaler output 32bit audio on dual BNC or is it internally dithered to 24bits before leaving the device? If it internally dithers to 24bit, are the advantages of having the 32bit input greater than the cost of dithering twice (64 -> 32 -> 24)?

C. You've stated numerous times that your devices perform their best when given a bit-perfect audio stream, for obvious reasons. In your best guess/subjective opinion, at what level of DSP volume attenuation would one be causing serious auditory consequences? Assuming that this attenuation was being done with 64bit precision and dithered back to 32/24bit using TPDF.
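On question C, a useful back-of-the-envelope framing: each ~6.02 dB of digital attenuation (20·log10(2) dB per bit) shifts the signal down by about one bit of the word length. A quick sketch of that arithmetic (the rule of thumb, not a claim about where audibility actually begins):

```python
import math

def effective_bits(word_bits, attenuation_db):
    """Approximate bits of resolution left after digital attenuation,
    using the ~6.02 dB-per-bit rule of thumb."""
    return word_bits - attenuation_db / (20 * math.log10(2))

# e.g. 20 dB of DSP attenuation on a 24-bit stream:
print(round(effective_bits(24, 20.0), 1))  # roughly 20.7 bits remain
```

So even fairly aggressive volume leveling on a 24/32-bit pipe still leaves well over 16 bits of resolution, which is part of why opinions differ on where the "serious consequences" threshold sits.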


Thanks!
 
Dec 17, 2019 at 6:58 PM Post #9,744 of 18,414
All DSP disabled is the best option. Just a simple, unadulterated RBCD stream, and the M-Scaler will take care of everything.
 
Dec 17, 2019 at 8:24 PM Post #9,747 of 18,414
I only use headphones; room correction is new to me.
 
Dec 17, 2019 at 9:22 PM Post #9,748 of 18,414
HQPlayer PCM upsampling needs only a Core i3. Upsampling 44.1/48 kHz PCM 16x to 705.6/768 kHz PCM uses about 15% of a Core i5 and under 10% of a Core i7. You can set bits=24, filter 1x,Nx = 'Sinc-M' (or 'poly-sinc-long-LP') and dither = none. IMO, after months of listening, this comes very close to WTA1.

Ah, those outdated Intel CPUs... Intel has already admitted it can't compete with AMD any time soon, and, so they say, their goal is no longer to be the biggest CPU manufacturer but the biggest chip manufacturer.
And regarding core count specifically, which is the most relevant factor in the case we're discussing, AMD's dominance is enormous at every price point.
AMD is now coming out with 64 cores (Ryzen Threadripper 3990X), and that's not counting their enterprise CPUs, which beat Intel even harder.

But the curious thing is that, for 1M-tap upsampling to 768 kHz PCM, HQPlayer uses less than 1% of the available processing power. GPU offload (whose processing power is more than 10x that of the CPU) is only used by HQPlayer for DSD processing, as CPU power is only partially used for PCM anyway.

I was also surprised that DSD processing (and conversion) takes much more power than PCM processing.
 
Dec 17, 2019 at 9:37 PM Post #9,749 of 18,414
All DSP disabled is the best option. Just a simple, unadulterated RBCD stream, and the M-Scaler will take care of everything.

@mastercheif
The potential for transparency is maximized when you send the source bits untouched to the MScaler+DAC. Even the 64-bit + dithered processing by Roon will be detrimental to transparency. It's audible for sure.

I fully understand that using DSP will reduce "transparency", it's plainly audible when I do volume matched comparisons between DSP engaged vs disengaged.

This is a difficult concept to explain, and I can only speak for myself; however, you may be surprised to find that "transparency" does not always mean "the most faithful reproduction of the original music performance". "Transparency", in the sense that Rob (I think) uses it, or in general conversations about DACs, is equivalent to "reproducing the analogue signal that went into the ADC". While AD/DA conversion is a critical component of the audio/music reproduction chain, there are other factors to consider as well. Here's an illustration, with commentary, of what I'm getting at:

Musicians playing instruments in a room -> Microphone -> Preamp / Analog mixer -> A/D converter -> Mixing / Mastering -> File.​

This file contains audio information such as:

"at sample 435784237 the amplitude in the left channel is equal to +4bits"​

Easy enough. What happens when you try to play back this file?

File -> D/A converter -> PreAmplification -> Amplifier -> Speakers / Headphones -> Air -> Ears​

As you can see, there are many sources of potential distortions and other nasty bits that can interfere with your ability to even hear your pristine D/A conversion.

The simple answer here is not to buy gear that distorts the audio, and I practice this methodology to the fullest extent possible. This is why I'm seriously considering driving my speakers from a TT2 (or waiting for the DX).

However, money can only get you so much audio gear. The "Air" part of the reproduction chain is where things can really get difficult. I live in a small apartment in NYC, and my listening room is long and narrow. I have a huge room mode at 70 Hz and nasty side-wall reflections. I've mitigated these somewhat with bass traps and absorption panels from GIK Acoustics, but they can only do so much to cheat the physics of the room.
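For context, a 70 Hz axial mode is exactly what the physics of a small room predicts: the n-th axial mode along a dimension of length L falls at f = n·c/(2L). A quick sketch (speed of sound ≈ 343 m/s; the ~2.45 m dimension is purely illustrative, chosen to land the first mode near 70 Hz, not the poster's actual room):

```python
# Axial room modes: f_n = n * c / (2 * L), with c ~ 343 m/s in air.
C = 343.0  # speed of sound, m/s

def axial_modes(length_m, n_modes=3):
    """First few axial-mode frequencies (Hz) for one room dimension."""
    return [n * C / (2 * length_m) for n in range(1, n_modes + 1)]

# A ~2.45 m dimension puts the first axial mode right at 70 Hz:
print([round(f, 1) for f in axial_modes(2.45)])  # [70.0, 140.0, 210.0]
```

The harmonics at 140 Hz and 210 Hz are why a single bad dimension can color a wide stretch of the bass range.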

Getting back to the File -> Ear interface, let's consider again:

"at sample 435784237 the amplitude in the left channel is equal to +4bits"​

What if this amplitude/time curve is somewhere around the nasty 70hz room mode in my room? In that case, the resulting sound that one would be hearing would be more like:

"at sample 435784237 the amplitude in the left channel is equal to +10bits"​

The distortions that the room imposes on the audio heard by the listener can be far more detrimental to a faithful reproduction of the music performance than a slight loss in transparency resulting from the use of DSP.

This is of course assuming that the DSP is done in a competent manner. I use https://www.audiovero.de/en/acourate.php to generate the FIR convolution filters because it allows fine-tuning of nearly every parameter involved in the filter-generation process, letting one tailor the level of correction to their needs instead of taking a static, heavy-handed approach. The Kindle book "Accurate Sound Reproduction Using DSP" is a vital resource for wrapping one's head around the science involved in DSP room correction and how to use Acourate; it's really the "missing manual".
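Applying a 131k-tap correction filter like the ones discussed above is, at its core, just convolution; the reason it's cheap enough for real-time playback is that convolution engines work in the frequency domain (O(N log N) rather than O(N·taps)). A minimal, NumPy-only sketch of the idea, using a stand-in identity filter rather than a real Acourate-generated one (real players also use partitioned convolution to keep latency low, which this skips):

```python
import numpy as np

def fft_convolve(x, h):
    """FFT-based linear convolution: zero-pad both signals to a common
    power-of-two length, multiply spectra, transform back."""
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()  # next power of two >= n
    X = np.fft.rfft(x, nfft)
    H = np.fft.rfft(h, nfft)
    return np.fft.irfft(X * H, nfft)[:n]

fs = 48000
audio = np.random.randn(fs) * 0.1        # 1 s of toy audio
fir = np.zeros(131072)
fir[0] = 1.0                             # stand-in "correction" filter (identity)
corrected = fft_convolve(audio, fir)[:len(audio)]
```

With a measured filter in place of the identity impulse, `corrected` is the room-compensated stream that then gets dithered down to the DAC's word length, as in the Roon chain described earlier in the thread.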
 
