Watts Up...?
Sep 17, 2018 at 1:27 PM Post #1,036 of 4,674
Yes sorry about the tech talk - this blog is strictly no holds barred on the tech side. But if there are things that you don't understand, just ask a question.

Agreed about the sound with upsampling to DSD sounding more analogue - although I would use the terms softer or warmer, as DSD colours the sound. I know this with certainty from DXD recordings (DXD, which is 352.8 kHz PCM, is commonly used to master DSD: it has the benefit of being high resolution, but being PCM, edits and processing can be done transparently). Take a DXD recording and make a DSD out of it, and indeed the DSD sounds soft and warm compared to the DXD. So DSD adds coloration, as the softness is not present on the original. And I can see this in simulation too: if you take a large step-change signal (negative DC that then goes positive) and look at the DSD output, then for large changes the output changes immediately, but for small signals it takes a lot longer for the output to change its state - and we are talking about large time delays of the order of tens of µs for the signal to respond. So we have a delay that depends upon the amplitude of the signal, and this timing non-linearity is highly audible, as it gives transients a timing delay dependent on amplitude. This makes it difficult for the brain to perceive transients accurately; and if you can't perceive transients accurately, things sound soft.
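For readers following along, the principle under discussion can be sketched in a few lines of Python. This is a hypothetical toy, not Rob's simulation: a low-order 1-bit sigma-delta modulator (the mechanism behind DSD), whereas the timing effect described above concerns the much higher-order noise shapers real DSD uses. All parameters are arbitrary choices for the demo.

```python
import numpy as np

# Toy 1-bit second-order sigma-delta modulator: the principle behind DSD.
# Illustrative sketch only; real DSD encoders use higher-order shapers.
def sigma_delta(x):
    i1 = i2 = 0.0
    bits = np.empty(len(x))
    for k, s in enumerate(x):
        y = 1.0 if i2 >= 0.0 else -1.0   # 1-bit quantiser
        i1 += 0.5 * (s - y)              # first integrator
        i2 += 0.5 * (i1 - y)             # second integrator
        bits[k] = y
    return bits

osr = 64                                  # oversampling ratio for the demo
n = np.arange(64 * 1024)
x = 0.5 * np.sin(2 * np.pi * n / 8192)    # slow sine at half full scale

bits = sigma_delta(x)                     # the "DSD" stream: only +1/-1

# Crude decimation filter: two cascaded moving averages (a sinc^2
# response), enough to pull the sine back out of the shaped noise.
box = np.ones(osr) / osr
rec = np.convolve(np.convolve(bits, box, 'same'), box, 'same')

err = np.max(np.abs(rec[2000:-2000] - x[2000:-2000]))
print(err)  # small residual: the sine survives the 1-bit encoding
```

The point of the sketch is only that the audio lives in the duty cycle of a ±1 bitstream; the small-signal and transient-timing artefacts Rob describes arise from the behaviour of the high-order loop, which this toy does not model.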

There are other problems with DSD which are not solvable - small-signal accuracy in particular - and this robs music of the perception of depth and detail resolution, but that's another story.

As to upsampling and maintaining it as PCM - I do not recommend it, as the tap length is smaller, and more importantly they use conventional algorithms, not the WTA filter, which has been specifically designed to recover transients as accurately as possible.

And yes, I am enjoying F1 in Singapore. Lewis Hamilton's pole was simply an exceptional drive, totally unexpected given that the Mercedes is slow around Singapore. The music afterwards was superb - Liam Gallagher (not a fan, but the performance was very good) and later The Killers, who were also brilliant.

Rob

PS: if it's sounding too bright, then you need to fix it by upgrading your system to something better; fixing issues with distortion or colourations never works musically in the long run...

I fully agree with Rob. I used to have a dCS Scarlatti DAC. Upsampling Red Book to DSD, compared to 176 kHz PCM, sounded extremely soft with poor transient timing. It killed the music.
When I am away from home I stream Qobuz on my Samsung S7 to a Mojo. Unfortunately, Android audio up-samples 44.1 kHz FLAC to 192 kHz, and when streaming with the Qobuz app there is no way around this. Fortunately, the USB Audio Player PRO app can output bit-perfect 44.1 kHz Qobuz content. It sounds so much better on the Mojo than the 192 kHz Android up-sampled content. Straight 44.1 has better resolution and dynamics and is far more musical. The only way to up-sample is using the Blu 2 or M-scaler.
 
Sep 17, 2018 at 7:52 PM Post #1,037 of 4,674
Also, if multiple hi-fi listeners globally all report that Chord gear sounds more musical and is a far more entertaining experience compared to other brands, is that not the strongest test of all?
 
Sep 19, 2018 at 2:42 PM Post #1,038 of 4,674
I rejected the analogy because there is no notion of a continuous signal between samples/pixels in a picture. The sampling theorem just doesn't apply for reconstruction between pixels, as we're capturing a scene where each pixel is (ideally) an independent variable - unlike audio, which samples a continuous signal variation. That is the most fundamental difference when we're asked to create extra samples: in audio we reconstruct the band-limited signal; in a picture we interpolate with some setting we prefer and hope for the best.
In a normal camera a lens is sufficiently soft that it will put an upper limit on the amount of fine detail that can be captured. That's your band-limited, continuous, input to the sensor (assuming a digital sensor with sensels arranged in a uniform pattern).

Where you're probably getting confused is that lenses normally have more resolution than the sensor. This means that the picture is full of aliasing, since Nyquist is not being respected.

In photography, when reducing the resolution of an image, it is preferable to first apply a Gaussian blur targeted at the Nyquist limit of the target image. Once the blur has been performed, the downsampling will introduce no more aliasing. If you do not perform the Gaussian blur first, then you will add aliasing artefacts to the image. The most commonly seen such artefacts are moiré patterns on fine detail, such as a stripey shirt.
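The same idea is easy to demonstrate in one dimension with numpy (a hypothetical sketch; the stripe frequency and blur sigma are arbitrary illustrative choices): a "stripe" pattern above the target Nyquist aliases into a bogus low-frequency pattern when decimated directly, but essentially vanishes if Gaussian-blurred first.

```python
import numpy as np

# 1-D sketch of the moire effect: a fine stripe pattern decimated 4x
# with and without a Gaussian pre-blur.
n = np.arange(1024)
stripes = np.sin(2 * np.pi * 250 * n / 1024)  # 250 cycles: well above the
                                              # post-decimation Nyquist (128)

naive = stripes[::4]            # aliases to a spurious 6-cycle pattern

# Gaussian kernel with sigma chosen to crush detail above the new Nyquist.
sigma = 2.0
t = np.arange(-8, 9)
g = np.exp(-t**2 / (2 * sigma**2))
g /= g.sum()
blurred = np.convolve(stripes, g, 'same')[::4]

print(np.abs(naive).max())          # ~1.0: strong fake low-frequency stripe
print(np.abs(blurred[4:-4]).max())  # near zero: the detail that could not
                                    # survive decimation has been removed
```

The interior trim (`[4:-4]`) just avoids the convolution's edge effects; the substance is that the blur removes exactly the content the lower sampling rate cannot represent.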

When you want to upsample this downsampled image, you can only get back to the Gaussian-blurred image. The Gaussian blur has enforced Nyquist, and upsampling with a sinc filter will produce a "perfect result" that looks just like the Gaussian-blurred image. This assumes that you used a suitably specified sinc filter to downsample in the first place, of course. If you downsample carelessly, the errors are irreversible.

Now playing: Lance Friedel, LSO - Bruckner Symphony 5
 
Sep 19, 2018 at 6:51 PM Post #1,039 of 4,674
In a normal camera a lens is sufficiently soft that it will put an upper limit on the amount of fine detail that can be captured. That's your band-limited, continuous, input to the sensor (assuming a digital sensor with sensels arranged in a uniform pattern).

Where you're probably getting confused is that lenses normally have more resolution than the sensor. This means that the picture is full of aliasing, since Nyquist is not being respected.

In photography, when reducing the resolution of an image, it is preferable to first apply a Gaussian blur targeted at the Nyquist limit of the target image. Once the blur has been performed, the downsampling will introduce no more aliasing. If you do not perform the Gaussian blur first, then you will add aliasing artefacts to the image. The most commonly seen such artefacts are moiré patterns on fine detail, such as a stripey shirt.

When you want to upsample this downsampled image, you can only get back to the Gaussian-blurred image. The Gaussian blur has enforced Nyquist, and upsampling with a sinc filter will produce a "perfect result" that looks just like the Gaussian-blurred image. This assumes that you used a suitably specified sinc filter to downsample in the first place, of course. If you downsample carelessly, the errors are irreversible.

Now playing: Lance Friedel, LSO - Bruckner Symphony 5
OK, I expressed myself poorly, and my first sentence is just nonsense now that I reread it without my mental context - sorry about that. It's not that the sampling theorem doesn't apply; it's that increasing pixel resolution is outside its limits. I see how my sentence could somehow be interpreted as if I were saying that digital sampling doesn't follow sampling theory, which would be super weird even for me. ^_^

But your reply seems even more confused to me. The initial post that made me reply was this:
Hello Rob. I have the Qutest DAC and would like to know more about the M scaler; I have never liked upsampling, either through a hardware DAC or software like Roon, and always ended up going back to the original. Does the M Scaler add something to the sound? Like, say, Photoshop, which when upsampling a low-resolution file doubles or triples neighbouring pixels to increase the resolution, which is nonsense - or what? Regards
His question was clearly about a possible similarity between upsampling audio and interpolating pixels in a picture to make it bigger. You're pulling a previous downsampling of the picture (why?) and a Gaussian blur out of your pocket to create a brand-new goalpost nobody was going for.

Here is the simple fact, before you get tempted to pick 20 more notions of sampling-theory requirements in picture processing that are unrelated to the original question: when oversampling the audio signal, the new samples are just more points on a band-limited signal we could already fully reconstruct. The values of the extra samples were available all along, and we're not actually creating new information.
Now try transferring that to the contiguous pixels of a picture and then adding more pixels through interpolation. Oops! It just doesn't work, because the number of pixels and the surface each one captured literally limit the resolution of the image we captured. Which is why I said the analogy was crap and couldn't apply in the context the OP mentioned. Picture interpolation is clearly destructive, as it's trying to create sample points at positions that were never captured. An audio equivalent would be asking a 44.1 kHz record to make up some content in the ultrasonics once oversampled to 96 kHz. We just never captured that ultrasonic information, so if we want some, we have to make it up by following one reasoning or another. But it wasn't characterized by the initial capture, and the result is no perfect anything.
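The audio half of that claim is easy to check numerically. Here is a hedged sketch (hypothetical parameters; a truncated sinc sum, so it is evaluated away from the edges where truncation error lives): sample a band-limited tone, then compute 4x-oversampled instants by Whittaker-Shannon interpolation and compare them to the underlying continuous signal.

```python
import numpy as np

# A band-limited tone sampled at rate 1 (frequency 0.1 < Nyquist 0.5).
# Whittaker-Shannon interpolation recovers the in-between values: the
# "new" samples created by oversampling were implied by the old ones.
f = 0.1
n = np.arange(400)
x = np.sin(2 * np.pi * f * n)          # the stored samples

def interp(t):
    # Truncated sinc sum; exact only with infinitely many samples,
    # so we evaluate well away from the edges of the record.
    return np.sum(x * np.sinc(t - n))

t_new = np.arange(150.0, 250.0, 0.25)  # 4x oversampled instants
rec = np.array([interp(t) for t in t_new])
true = np.sin(2 * np.pi * f * t_new)

print(np.max(np.abs(rec - true)))      # tiny: no information was invented
```

Nothing above Nyquist can be conjured this way, of course: content the original capture never held stays absent, which is exactly the point being made about pixel interpolation.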

Am I less confusing now?

I have to point out that all this was about saying that the bad analogy was bad, something everybody agreed on from the start.
 
Sep 19, 2018 at 7:05 PM Post #1,040 of 4,674
You're pulling a previous downsampling of the picture(why?), and Gaussian blur out of your pocket to create a brand new goalpost nobody was going for.
You seem to forget the indispensable low-pass filtering before A/D conversion.
 
Sep 19, 2018 at 7:52 PM Post #1,042 of 4,674
Not really a new approach: the attempt to remove the Gibbs phenomenon in the belief that it is responsible for the «digital sound». At the latest since Rob demonstrated the irrelevance of the ultrasonic ringing at the filter frequency, we know how little this «cure» does for the transient behaviour in the audio band. Nyquist and Shannon already showed that an ideal low-pass filter rings infinitely - and the ringing contains no frequencies other than the filter frequency itself, which is beyond the audio band.

Nice: «So far, we have identified that we prefer the GTO filter because it has few taps. Because: More taps = more reverberation. Few taps = minimal reverberation.»
 
Sep 19, 2018 at 8:41 PM Post #1,043 of 4,674
Am I less confusing now?
A pixel is not a square. It is an infinitesimally small point.

A picture rendered from continuous squares of colour is aliased, since the little squares were never part of the original image before sampling.

Here's some reading for you:

http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

It's over 20 years old, but sadly people continue to think that pixels are little squares.

Now playing: Zola Jesus - Exhumed
 
Sep 19, 2018 at 10:32 PM Post #1,044 of 4,674
A pixel is not a square. It is an infinitesimally small point.

A picture rendered from continuous squares of colour is aliased, since the little squares were never part of the original image before sampling.

Here's some reading for you:

http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

It's over 20 years old, but sadly people continue to think that pixels are little squares.

Now playing: Zola Jesus - Exhumed
Wait, what? I never even suggested a shape for pixels and clearly wasn't talking about that at all. Why did you even make that post?
I'm back to being confused, just not about interpolation in photography.
 
Sep 19, 2018 at 11:33 PM Post #1,045 of 4,674
Of possible interest: a new offering claiming transient-aligned filtering:

https://ifi-audio.com/wp-content/uploads/2018/08/iFi-audio-Tech-Note-GTO-filter-FINAL.pdf

Just saw it -- don't know much more about it at this time....

What a load of pseudoscience nonsense. Ringing is reverb? They clearly do not understand basic sampling theory; a sinc-function interpolation filter (with an infinite amount of ringing only when fed a non-bandwidth-limited, and hence illegal, impulse) will perfectly reconstruct a bandwidth-limited impulse with no change whatsoever - no ringing or "reverb" at all.
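The "no change whatsoever" claim can be verified numerically. Below is a sketch (illustrative parameters; FFT zero-padding is the circular equivalent of an ideal sinc interpolator): upsample an arbitrary signal 4x and check that the result still passes exactly through the original samples - the "ringing" between them adds nothing new.

```python
import numpy as np

# Ideal (sinc) 4x upsampling via FFT zero-padding. Whatever the
# band-limited input, the upsampled waveform passes exactly through
# the original samples.
rng = np.random.default_rng(0)
N, R = 64, 4
x = rng.standard_normal(N)            # any real signal of length N

X = np.fft.fft(x)
Y = np.zeros(N * R, dtype=complex)
Y[:N // 2] = X[:N // 2]               # positive frequencies
Y[N // 2] = X[N // 2] / 2             # split the Nyquist bin symmetrically
Y[-N // 2] = X[N // 2] / 2
Y[-(N // 2 - 1):] = X[-(N // 2 - 1):] # negative frequencies
y = np.fft.ifft(Y).real * R

print(np.max(np.abs(y[::R] - x)))     # machine epsilon: samples untouched
```

The oscillation visible between samples in such a plot is the band-limited signal itself, not an added artefact - which is why calling it "reverb" is nonsense.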

Boy this business really annoys me sometimes....
 
Sep 20, 2018 at 3:25 AM Post #1,047 of 4,674
Sep 20, 2018 at 1:05 PM Post #1,048 of 4,674
Yes, you did:


Pixels are not contiguous. They are not little squares. They are distinct points.
You don't say? OK, I expect it was obvious that I was trying to draw a parallel with the idea of contiguous audio samples - at least obvious to someone who pretends to care about the point I was making about that analogy, for several posts now, because of you. Even if you took the one definition of contiguous I wasn't using and got confused by my sentence at first, after I insisted that I wasn't saying anything about the shape of pixels, you could maybe have caught on to the issue instead of insisting on telling me what I meant.
Anyway, this isn't DPReview, and you clearly have nothing to say about the actual point I was making in answer to the analogy question, so I'll stop wasting everybody's time and let you go back to making ferrite necklaces as a hobby.

What a load of pseudoscience nonsense. Ringing is reverb? They clearly do not understand basic sampling theory; a sinc-function interpolation filter (with an infinite amount of ringing only when fed a non-bandwidth-limited, and hence illegal, impulse) will perfectly reconstruct a bandwidth-limited impulse with no change whatsoever - no ringing or "reverb" at all.

Boy this business really annoys me sometimes....
My rule of thumb now is that the more someone uses the shape of a Dirac pulse to "demonstrate" that their system will make better audio, the less I trust them. Just the insinuation that band-limiting is bad for audio because of how it looks on a pulse is maddening. Coming from audiophiles, I assume ignorance; coming from marketing, I assume dishonesty.
 
Sep 20, 2018 at 4:51 PM Post #1,049 of 4,674
and you clearly have nothing to say about the actual point I was making in answer to the analogy question,
Since you seem to refuse to be educated that upsampling audio is precisely the same as upsampling imagery, and that the analogy introduced is correct, it really does seem like you should:
stop wasting everybody's time
If you want to spend your time wisely, I suggest you read the PDF I linked, so you can understand why this is utter nonsense:
now try transferring that to the contiguous pixels of a picture and then adding more pixels through interpolation. oops! it just doesn't work because the very size of the pixels is literally limiting the resolution of the image we captured
Pixels can't have size. Read the PDF I linked.
 
Sep 20, 2018 at 5:03 PM Post #1,050 of 4,674
Since you seem to refuse to be educated that upsampling audio is precisely the same as upsampling imagery, and that the analogy introduced is correct, it really does seem like you should:

If you want to spend your time wisely, I suggest you read the PDF I linked, so you can understand why this is utter nonsense:

Pixels can't have size. Read the PDF I linked.
Can you two please take this private and stop polluting the thread...
 
