Can you hear upscaling?
May 23, 2024 at 1:34 PM Thread Starter Post #1 of 132

knownothing2

100+ Head-Fier
Joined
Jan 20, 2024
Posts
150
Likes
46
Location
Seattle, WA, USA
Wondering what members have to say about Rob Watts' newly developed upscaling device, or more precisely, what he says about developing the new device, and the theory behind it. Lots of comments from Watts in this interview about tuning algorithms by ear. Start the video at 11:30 in.



kn
 
May 23, 2024 at 4:02 PM Post #2 of 132
I won't waste my time looking at that video. There exist audibly transparent (audibly perfect) DACs at very affordable prices. Once you have such a DAC you don't need a better one; audibly better than audibly transparent is impossible. Furthermore, we know that Rob Watts sometimes says the most ridiculous things, for example that things at -200 or -300 dB could matter, so he cannot be taken seriously anyway.
 
May 24, 2024 at 6:04 AM Post #4 of 132
Wondering what members have to say about Rob Watts' newly developed upscaling device, or more precisely, what he says about developing the new device, and the theory behind it. Lots of comments from Watts in this interview about tuning algorithms by ear. Start the video at 11:30 in.



kn

Wow. Thanks for the link to the video.

The interview with Rob is very interesting. IMO, he is indeed a digital audio expert (as the YouTuber said). He knows upsampling very well. I consider him one of the few people who really understand upsampling (another being the developer of HQPlayer).

Here is the relevant part of the transcript of what Rob was saying in the interview (from YouTube's auto-generated captions; good enough but not perfect. Better to watch the video directly):
now the important thing about M Scalers is getting

12:33

transients to be reconstructed correctly, and transients are used by the brain so that we can perceive instruments as being separate entities, locating those instruments in space, the timbre and the pitch. With current digital, the big problem is the timing of transients. When you put the digital signal into an interpolation filter, and every single DAC on the planet has got an interpolation filter, the timing of transients is all wrong; they're shifting backwards and forwards continuously, and this shifting backwards

13:14

and forwards of the timing of transients confuses the brain, and as a result your instruments lack separation, you don't get timbre variation, you can't perceive the low-frequency pitch, and you can't locate instruments in space. So by working on the transient reconstruction accuracy we get much better instrument separation and focus, we get the timbre of the instruments coming through much more naturally, and the bass pitch is reproduced much better. Why can't the DAVE do that by itself? Simply

13:52

because of the amount of processing power. More processing power delivers much better sound quality because you can more accurately reconstruct the timing of transients. If you wanted to perfectly reconstruct the original timing you would need an infinite amount of processing, so the more processing you've got the better the sound quality. But what's more important than the amount of processing you've got is the algorithm that you use to do it. And the beauty of the WTA algorithm is that

14:30

it's the only algorithm that has been designed and listened to in order to reconstruct transients correctly. If you buy, you know, a PC upsampler for example, you may have more taps, but you're using the wrong algorithm to reconstruct the transient timing information. So what's unique about the WTA algorithm is the amount of effort that's gone into fine-tuning that algorithm, and that's something that's taken thousands of listening tests to actually get right. The only way you can do it is by ear.

15:07

What I actually do is listen to different aspects, and I've got different test tracks: a couple of different test tracks for depth, different test tracks for timbre variation, different test tracks for instrument separation. Then you score each individual parameter and adjust the sound to optimize that performance. It's been a hell of a time, which is why it's taken six years to work on. It's coming out when it's finished. So you're still

15:36

working on it? So it could be a seven-year project. It could be a seven-year project.
He explained quite well the benefits and why not all upsamplers are created equal.

One thing I am not sure about is that he gave the impression that only the WTA algorithm can reconstruct transients correctly and no PC algorithm can do it well. Hmm... I doubt this.
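For context, the "interpolation filter" Watts refers to can be illustrated with a generic windowed-sinc FIR upsampler. This is only a sketch of the textbook approach; it is emphatically not Chord's proprietary WTA algorithm, and the tap count and Hann window here are arbitrary choices:

```python
import numpy as np

def upsample_sinc(x, factor, taps=64):
    """Upsample x by an integer factor with a Hann-windowed-sinc
    interpolation filter (a generic textbook FIR interpolator)."""
    # Zero-stuff: insert (factor - 1) zeros between the input samples.
    up = np.zeros(len(x) * factor)
    up[::factor] = x
    # Windowed sinc, low-pass at the original Nyquist frequency.
    n = np.arange(-taps, taps + 1)
    h = np.sinc(n / factor) * np.hanning(2 * taps + 1)
    h *= factor / h.sum()   # gain of `factor` to undo the zero-stuffing loss
    return np.convolve(up, h, mode="same")

# A 1 kHz sine at 44.1 kHz, upsampled 8x to 352.8 kHz.
fs = 44100
t = np.arange(441) / fs          # 10 ms of signal
x = np.sin(2 * np.pi * 1000 * t)
y = upsample_sinc(x, 8)
```

The "taps" Watts mentions are the coefficients of `h`; more taps mean a longer sinc kernel and a closer approximation to ideal band-limited reconstruction, which is why tap count alone (his point about PC upsamplers) doesn't tell you which window or weighting the designer chose.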
 
May 24, 2024 at 8:28 AM Post #5 of 132
Can you hear upscaling?
“Upscaling” is a technique for converting images to a higher resolution, so unless you look at images using your ears, you obviously can’t hear upscaling!
Wondering what members have to say about Rob Watts' newly developed upscaling device …
I thought he only made audio products, I didn’t know he’d “newly developed an upscaling” image/video device, so I’ve nothing to say about it. Without watching the video, I presume he’s applying the term to some new audiophile product he’s just released but audio cannot be “upscaled” as there are no pixels in audio, so it would seem to be just more Rob Watts snake oil.
The interview with Rob is very interesting.
“Very interesting” if you want to learn pseudoscience, lies and other BS. Why would you want to learn that though, don’t you already know more than enough?

G
 
May 24, 2024 at 11:08 AM Post #7 of 132
Troll volleyball… one sets it up and the other one spikes it. It’s probably one person with two accounts.
No, it’s me again, and I still know pretty much nothing. @sunjam is another entity altogether. As if anyone who is curious about different approaches to the recreation of soundstage in digital reproduction of music must be such a rare bird that there can only be one. LOL

I am sincerely curious what readers and contributors in this forum think of Watts' "logic" in discussing his techniques and his new product. If this is snake oil, there are probably much cheaper ways to bilk hifi snobs out of their capital gains than spending years developing and honing a new approach. Maybe he is delusional. Maybe he is flat-out lying. Or maybe he is onto something real.

FWIW, I have tried upsampling digital files in PCM to 352.8 kHz or 384 kHz using JRiver on my laptop and USB out to several different DACs, and I did not care for the outcome. With that upsampling implementation in my system, the music sounded smoother, but lost some bite and immediacy. I did not notice an improvement in soundstage reproduction.

Watts' new product, like the current Chord Hugo M-Scaler, as I understand it, is optimized for use with dual BNC out, upscaling to 705.6 or 768 kHz. This would make the Chord DACs that Watts also helped design, with their dual BNC inputs and 768 kHz capability, the logical partnering equipment. In my limited experience, Chord DACs have excellent soundstage reproduction even on their own.

As discussed, to death, in another thread in this forum, there is currently no equipment or method available to measure soundstage reproduction, which I am sure, in objectivist eyes, makes this subjective element of reproduction fertile ground for marketing abuse.
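For anyone wanting to repeat the JRiver-style experiment above offline, SciPy's polyphase resampler performs the same kind of integer-ratio PCM upsampling (44.1 kHz to 352.8 kHz is an exact 8:1 ratio). The test tone and filter defaults below are illustrative only, not JRiver's actual filter:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, factor = 44100, 8                 # 44.1 kHz -> 352.8 kHz
t = np.arange(fs_in) / fs_in             # one second of audio
x = 0.5 * np.sin(2 * np.pi * 440 * t)    # 440 Hz test tone

# Polyphase upsampling: zero-stuff by 8, then low-pass with the
# default Kaiser-windowed FIR to suppress the spectral images.
y = resample_poly(x, up=factor, down=1)
```

Any listening comparison between the original and the upsampled file should of course be level-matched and blind, or the "smoother but less bite" impression can't be separated from expectation bias.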

kn
 
May 24, 2024 at 12:08 PM Post #8 of 132
It is always fun to look at the comments in audio science / sound science forums. The comments are so "creative". Hmm... could I be Rob Watts? Or the developer of HQPlayer? Or an AI-driven bot? Or someone else? I know the answer but I will keep it to myself. :relieved:

Learning how and why people create pseudoscience claims is an interest of mine. In pseudoscience, things that merely look factual are always mixed with things that are genuinely factual. It is a good opportunity for me to learn about the factual parts; meanwhile, the process often shows how well or badly people do when they attempt to use the factual parts to cover up the things that only look factual.

Upsampled music is like a subset of Hi-Res music. If people believe that "Hi-Res music is useless", they will believe that "upsampling is useless" too.

But is Hi-Res music really useless? We had a long discussion on this topic in a now-closed thread. (FYI, that thread was deleted completely, i.e. no one could see it, but it re-surfaced magically after a day or two.) Anyway, have a look at that thread and judge for yourself.

If you ask the question of one of the most advanced AI engines, ChatGPT-4, you get the following:

Screenshot 2024-05-24 214718.png
Screenshot 2024-05-24 214735.png
Based on the above reply, people who cannot perceive the difference between CD-quality and Hi-Res music (for whatever reason) could save a bit on HiFi equipment, disk space, and the cost of acquiring Hi-Res music files.

I would suggest people check whether they can hear any difference by using high-quality upsamplers (e.g. M-Scaler, or HQPlayer). This may help them tell whether they are lucky ones who can save some money. (I am not that lucky, LOL)

Having said that, we all know that AI may not be 100% correct all the time (even with GPT-4).

However, its answer could give us some insights about the question and where to look for more information.

Cheers, :L3000: (Sigh... I'd just spent some money to get a better DAC for playing my DSD256 music that is upsampled from CD-format...)

The problem with ChatGPT is that virtually all of the legitimate research is behind paywalls and not available to the LLM as the model is trained. Sites like AES are unavailable to ChatGPT, so the results are largely based on marketing material.

Your continued use of commercial AI to generate what you believe to be evidence is a fundamentally flawed model. We’ve already covered this yet you seem to not understand the biases in the current model.
 
May 24, 2024 at 12:17 PM Post #9 of 132
What an interesting subject. One time I decided to use the sox application to upscale audio to 176.4 kHz. There was an improvement in clarity on a few records, but the effect of this method was minimal. I think it can make a difference if you nail down an effective method to upscale.
FWIW, I have tried upsampling digital files in PCM to 352.8 kHz or 384 kHz using JRiver on my laptop and USB out to several different DACs, and I did not care for the outcome. With that upsampling implementation in my system, the music sounded smoother, but lost some bite and immediacy. I did not notice an improvement in soundstage reproduction.

Watts' new product, like the current Chord Hugo M-Scaler, as I understand it, is optimized for use with dual BNC out, upscaling to 705.6 or 768 kHz. This would make the Chord DACs that Watts also helped design, with their dual BNC inputs and 768 kHz capability, the logical partnering equipment. In my limited experience, Chord DACs have excellent soundstage reproduction even on their own.
Try DSD256 or 768k if you can. The digital filters play an important role here too. If you haven't tried HQPlayer before, I suggest you try it (it opened my eyes to upsampling music). With the free trial, you get unlimited 30-minute playing sessions with full functionality.

Cheers, :gs1000smile:

Here is what I am listening now (and the HQPlayer's settings):

Screenshot 2024-05-25 001346.png
 
May 24, 2024 at 12:24 PM Post #10 of 132
The problem with ChatGPT is that virtually all of the legitimate research is behind paywalls and not available to the LLM as the model is trained. Sites like AES are unavailable to ChatGPT, so the results are largely based on marketing material.

Your continued use of commercial AI to generate what you believe to be evidence is a fundamentally flawed model. We’ve already covered this yet you seem to not understand the biases in the current model.

Did I say the output of ChatGPT is evidence? I think I didn't. Did I?

What is the fundamentally flawed model you mentioned? What model are you referring to? Are you saying the flawed model is that "virtually all of the legitimate research is behind paywalls and not available to the LLM as the model is trained"?

BTW, I hope you saw this in my comment earlier:

1716567544110.png
 
May 24, 2024 at 12:36 PM Post #11 of 132
Try DSD256 or 768k if you can. The digital filters play an important role here too. If you haven't tried HQPlayer before, I suggest you try it (it opened my eyes to upsampling music). With the free trial, you get unlimited 30-minute playing sessions with full functionality.

Cheers, :gs1000smile:

Here is what I am listening now (and the HQPlayer's settings):


With Chord DACs, DSD is converted back to 16fs PCM anyway, but if HQPlayer's DSD conversion is excellent to begin with, there will be measurable differences between the Chord WTA filter's output for PCM converted from DSD and its output for PCM processed by the FPGA programming.
 
May 24, 2024 at 12:38 PM Post #12 of 132
I got the following graphs from a fellow member.

They show the effects of different oversampling filters on a square wave. This may give you an idea of how critical the filter is when the DAC reconstructs the final audio signal.

Screenshot 2024-05-25 002918.png


Screenshot 2024-05-25 002937.png


Screenshot 2024-05-25 002953.png


Screenshot 2024-05-25 003015.png
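The kind of behaviour those plots show can be reproduced numerically. The sketch below passes a square wave through a generic linear-phase low-pass FIR (a plain truncated sinc; the cutoff and tap count are arbitrary, not the filters used for the screenshots) and exhibits the symmetric pre- and post-ringing around each edge:

```python
import numpy as np

# An ideal square wave: +/-1, 100 samples per half-period.
x = np.repeat(np.tile([1.0, -1.0], 8), 100)

# Generic linear-phase low-pass FIR: a truncated (unwindowed) sinc with
# cutoff at 0.1 x Nyquist. Truncation causes Gibbs ringing at the edges.
taps = 101
n = np.arange(taps) - taps // 2
h = np.sinc(0.1 * n)   # np.sinc(x) = sin(pi*x) / (pi*x)
h /= h.sum()           # unity gain at DC

y = np.convolve(x, h, mode="same")

# Because the taps are symmetric (linear phase), the ringing appears
# before each edge as well as after it, and the peaks overshoot the
# flat top of the square wave.
overshoot = np.max(y) - 1.0
```

A minimum-phase filter would instead push all of that ringing after the edge, which is the usual trade-off the different DAC filter settings in the screenshots are choosing between.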
 
May 24, 2024 at 1:06 PM Post #13 of 132
Did I say the output of ChatGPT is evidence? I think I didn't. Did I?

What is the fundamentally flawed model you mentioned? What model are you referring to? Are you saying the flawed model is that "virtually all of the legitimate research is behind paywalls and not available to the LLM as the model is trained"?

BTW, I hope you saw this in my comment earlier:

1716567544110.png

You definitely did use the ChatGPT output as evidence. Your quote:

“Based on the above reply, people who cannot perceive the difference between CD-quality and Hi-Res music (for whatever reason) could save a bit on HiFi equipment, disk space, and the cost of acquiring Hi-Res music files.

I would suggest people check whether they can hear any difference by using high-quality upsamplers (e.g. M-Scaler, or HQPlayer). This may help them tell whether they are lucky ones who can save some money. (I am not that lucky, LOL)”

I’ll leave the rest up to you- play whatever word games you like. Attempting to engage in any kind of actual discussion with you is futile.
 
May 24, 2024 at 1:24 PM Post #14 of 132
You definitely did use the ChatGPT output as evidence. Your quote:

“Based on the above reply, people who cannot perceive the difference between CD-quality and Hi-Res music (for whatever reason) could save a bit on HiFi equipment, disk space, and the cost of acquiring Hi-Res music files.

I would suggest people check whether they can hear any difference by using high-quality upsamplers (e.g. M-Scaler, or HQPlayer). This may help them tell whether they are lucky ones who can save some money. (I am not that lucky, LOL)”

I’ll leave the rest up to you- play whatever word games you like. Attempting to engage in any kind of actual discussion with you is futile.
I thought the below is a statement of fact (even if I hadn't asked ChatGPT for its reply). Correct?

Based on the above reply, people who cannot perceive the difference between CD-quality and Hi-Res music (for whatever reason) could save a bit on HiFi equipment, disk space, and the cost of acquiring Hi-Res music files.

I would suggest people check whether they can hear any difference by using high-quality upsamplers (e.g. M-Scaler, or HQPlayer). This may help them tell whether they are lucky ones who can save some money. (I am not that lucky, LOL)
 
May 24, 2024 at 2:10 PM Post #15 of 132
Wondering what members have to say about Rob Watts' newly developed upscaling device, or more precisely, what he says about developing the new device, and the theory behind it. Lots of comments from Watts in this interview about tuning algorithms by ear. Start the video at 11:30 in.



kn

I tried, started at 11:30, and at 13:30 I'd had enough. The disregard for the concepts of magnitude and hearing thresholds is the elephant in the room. In general, he's pushing his "forever more" approach: hitting a fly with an atomic bomb. If at the end of the day the fly is gone, I guess it's a job well done...
 
