Watts Up...?
Oct 21, 2017 at 4:03 AM Post #406 of 4,697
I think battery power is about due a 50year heyday Rob. R&D investment in that space has been exponential in recent years. The spinoffs will hopefully feed into portable devices.
 
Oct 21, 2017 at 8:26 AM Post #407 of 4,697
I think battery power is about due a 50year heyday Rob. R&D investment in that space has been exponential in recent years. The spinoffs will hopefully feed into portable devices.

Unfortunately, far wealthier interests may have been investing in suppressing the results of any R&D investment for newer energy technologies, Dave, even though it pains me to say it.

Still, I am hopeful that a tipping point may be reached within the next decade or so.
 
Oct 21, 2017 at 12:29 PM Post #408 of 4,697
With mobile devices I suspect the imperative for 8 or 10 hours between charges will change over time due to a shift toward ubiquitous cable-less charging facilities in public areas. I drive an Audi and when I jump in the car I place my mobile on a rubber mat in the centre console. The phone charges the whole time it's there. I think the tops of tables in coffee shops and all manner of other public areas will charge cable-less whilst the phone can still be used facing you. The size, quality and capability of batteries will continue to improve in the meantime.
 
Oct 21, 2017 at 4:04 PM Post #409 of 4,697
For sure - but - Moore's law is running out of steam. Moreover, it is becoming more and more expensive - a 13nm mask set will set you back $150M - that's a sizeable percentage of an FPGA company's entire annual revenue. Also, the economic sweet spot is two nodes below state of the art - so currently the most economic point is the 28nm process. My guess is 7nm will provide FPGAs allowing an economic portable M scaler; but that would need state of the art to be at about 2 or 3 nm for 7nm to be economic - and we are many years away from that - perhaps decades. The key issue for portability is power - and we need at least a tenfold improvement in power dissipation.
Next year Samsung and TSMC will start producing at 7nm, and within two years they will reach 5-3nm. But what about FPGAs - can't they be made at Samsung or TSMC?
 
Oct 21, 2017 at 8:02 PM Post #411 of 4,697
I would be skeptical -- perhaps he meant by offloading to a GPU... then yes, as modern Nvidia GPUs have well over 2,000 cores. I believe the newest consumer-class gaming GPUs are now over 3,500 cores. HQPlayer's web page does indicate that it supports GPU offloading.

The slides on the M scaler have it using somewhere around 500-600 DSP cores... so the math would easily work out.
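For intuition, the workload those DSP cores (or GPU cores) would be chewing on is essentially one very long FIR convolution. A minimal NumPy sketch, with a toy 64-tap windowed-sinc half-band filter standing in for any real (much longer) upsampling filter:

```python
import numpy as np

# Toy sketch only: 2x upsampling via zero-stuffing plus a short
# windowed-sinc low-pass. Each output sample is one dot product over the
# taps - exactly the operation that parallelizes across many cores.
def upsample_2x(signal, num_taps=64):
    stuffed = np.zeros(len(signal) * 2)
    stuffed[::2] = signal            # zero-stuff to double the sample rate
    n = np.arange(num_taps) - (num_taps - 1) / 2
    taps = 0.5 * np.sinc(0.5 * n) * np.hamming(num_taps)  # cutoff at old Nyquist
    return np.convolve(stuffed, 2 * taps, mode="same")    # gain 2 offsets stuffing

tone = np.sin(2 * np.pi * 1000 * np.arange(480) / 48000)  # 1 kHz at 48 kHz
out = upsample_2x(tone)                                   # 960 samples at 96 kHz
```

Scaling the tap count from 64 into the hundreds of thousands changes nothing structurally - only the arithmetic load per output sample.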

I recall someone at RMAF asking why we don't swap FPGAs for GPUs, and Rob's answer was that it's because he knows FPGAs and not GPUs. Given that it's DSP, which is a core competency of GPUs (as opposed to SHA calculations for Bitcoin mining), GPUs should be competitive with FPGAs on a core-for-core basis. The other reason might be packaging -- I don't know if Nvidia will sell you a chip by itself if you are not making graphics cards.

A fascinating idea I had was -- why not do it in the USB drivers... Imagine BluDave with the upscaling done on the PC in the USB driver instead of in Blu... (I don't know Windows well enough to know whether a driver can communicate with a GPU card, but even if not, an auxiliary process can act as a go-between. Bandwidth for audio is low compared to even one PCIe lane.)

(An even crazier thought: is there a decent enough DSD DAC supporting a high enough bit rate [high enough to avoid the error accumulation issue mentioned in the DAC slides] that someone could write software synthesizing the DSD stream... i.e. put the real filter on the PC side? An open-source filter platform! Alas, while software is my thing, signal processing isn't, or I would be inspired to explore...)
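For what it's worth, the principle behind synthesizing a DSD-style stream in software can be sketched with a first-order delta-sigma modulator. Real DSD modulators are fifth order or higher with carefully managed stability and noise shaping; this toy version only shows the idea:

```python
import numpy as np

# Toy first-order delta-sigma modulator: PCM samples in [-1, 1] become a
# 1-bit stream whose local density of 1s tracks the input value.
def delta_sigma_1bit(samples):
    integrator = 0.0
    feedback = 0.0
    bits = np.empty(len(samples), dtype=np.uint8)
    for i, x in enumerate(samples):
        integrator += x - feedback            # accumulate the quantization error
        bits[i] = 1 if integrator >= 0 else 0 # 1-bit quantizer
        feedback = 1.0 if bits[i] else -1.0   # feed the decision back
    return bits

bits = delta_sigma_1bit(np.full(1000, 0.5))   # DC input of +0.5
# the mean of the +/-1 stream converges on the input value
```

The "real filter on the PC side" would then just be whatever FIR runs ahead of this modulation step.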
 
Oct 22, 2017 at 12:03 AM Post #413 of 4,697
Regarding FPGA limits... my understanding from HQPlayer's designer is that modern CPUs can trivially do many millions of taps.

A modern CPU can't even convert DSD to DoP at DSD512 without hiccups - and that is a simple repackaging process, which an FPGA can do glitch-free, tens of thousands of times over.
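As an aside, the DoP repackaging being referred to is simple to describe: per my reading of the DoP open standard, two DSD bytes per channel ride in the low 16 bits of each 24-bit PCM frame, with the top byte alternating between the markers 0x05 and 0xFA so the DAC can recognize the stream as DSD. A sketch for one channel:

```python
# Sketch of DoP packing for one channel (byte ordering per my reading of
# the DoP 1.1 open standard): 16 DSD bits per 24-bit PCM frame, top byte
# alternating 0x05 / 0xFA as the DSD marker.
def dop_pack(dsd_bytes):
    frames = []
    markers = (0x05, 0xFA)
    for i in range(0, len(dsd_bytes) - 1, 2):
        marker = markers[(i // 2) % 2]
        frames.append((marker << 16) | (dsd_bytes[i] << 8) | dsd_bytes[i + 1])
    return frames

frames = dop_pack([0xAA, 0x55, 0xFF, 0x00])
# frames == [0x05AA55, 0xFAFF00]
```

The point stands that this is pure bit-shuffling with no arithmetic at all - which is why struggling with it at DSD512 rates says something about real-time software audio paths.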

And he uses the term "input related" taps. A Blu Dave has getting on for a trillion "input related" taps - and these pseudo numbers are meaningless, as cascaded FIR filters do not sound the same as a single flat stage. Taps are taps, and when I quote the tap length I only talk about the real numbers in the real, actual filter - not equivalences or fantasy numbers. He also claims that long tap lengths sound worse anyway... If it really were many millions of taps, then the delay would be many seconds to process the data.
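The delay point is easy to sanity-check: a linear-phase FIR delays the signal by roughly half its tap count, measured at the rate the filter runs. A back-of-envelope calculation, assuming the filter runs at the 16FS output rate:

```python
# A linear-phase FIR's group delay is (num_taps - 1) / 2 samples, i.e.
# about half the tap count, at the rate the filter runs. Here that rate
# is assumed to be 16FS = 16 x 44.1 kHz.
def fir_delay_seconds(num_taps, sample_rate_hz):
    return (num_taps / 2) / sample_rate_hz

rate_16fs = 16 * 44100                                # 705,600 Hz
delay_1m = fir_delay_seconds(1_000_000, rate_16fs)    # ~0.71 s for a million taps
delay_50m = fir_delay_seconds(50_000_000, rate_16fs)  # ~35 s for "many millions"
```

So a million taps is already most of a second of latency, and tens of millions would indeed mean waiting many seconds for audio to emerge.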
 
Oct 22, 2017 at 12:20 AM Post #414 of 4,697
Next year Samsung and TSMC will start producing at 7nm, and within two years they will reach 5-3nm. But what about FPGAs - can't they be made at Samsung or TSMC?
A 16nm FPGA from Xilinx today costs thousands of USD - and availability is rocking horse poo... Also, test wafers will be available at 7nm, but with poor yields, with the process not qualified or calibrated; then the tools will need updating and qualifying, which will take a year or so. Then designs can start, and a successful design will be production ready a couple of years after that. So you are looking at three years before parts become available, and even at that point yields will be poor.

I may be wrong, and suddenly an FPGA becomes production-available at 7nm at low cost - but history is against us. Xilinx launched the 7 series way back in 2012, but you could only actually buy the FPGAs in 2015. They have not even talked about the successor to the 7 series yet.
 
Oct 22, 2017 at 2:45 AM Post #415 of 4,697
The secret sauce is the WTA filter, so until HQPlayer can build a filter that sounds as good as the WTA, which Rob took 30 years to refine, I wouldn't pay too much attention to claims of multiple millions of taps.

Ho hum.

We're lucky enough to get a million WTA taps in 2017.
 
Oct 22, 2017 at 4:05 AM Post #416 of 4,697
Absolutely, mic choice and positioning are crucial. But there are vast problems with current ADCs, which Davina will be able to solve. I will be doing a post about these issues soon.
I am very glad you are addressing this issue as well. We talk here all the time about perfectly reconstructing impulses and transients, assuming that the ADC conversion was perfect. But what if it wasn't? Several recordings drove me mad with pumping noise going up and down. I couldn't understand it, since the noise stayed more or less constant when played through different DACs and devices. I guess the noise sneaked in during the ADC process.
I have a few layman's questions: is there any cure for this pumping noise, in the sense of a good DAC saving a bad ADC? Can this noise be filtered out or avoided in the DAC reconstruction process? And if not, how much sense does it make to build ever more perfect DACs without improving the ADC side?
 
Oct 22, 2017 at 8:59 AM Post #418 of 4,697
Rob, are you planning to offer a dual monoblock option in the planned new-style amp releases? I want to swap to a monoblock dual-amp setup as soon as possible.
 
Oct 28, 2017 at 12:40 AM Post #419 of 4,697
The secret sauce is the WTA filter, so until HQPlayer can build a filter that sounds as good as the WTA, which Rob took 30 years to refine, I wouldn't pay too much attention to claims of multiple millions of taps.

Ho hum.

We're lucky enough to get a million WTA taps in 2017.

Agreed; the M scaler isn't telling us how good a million real 16FS FIR taps sound, but how good the algorithm is. The algorithm becomes even more important as the tap length increases - that's why when I initially went from Hugo's 26k taps to Dave's 164k taps there was not a huge change - but tweaking the WTA algorithm made a very much bigger change than simply increasing the tap length. And the changes I initially made were counter-intuitive, in that an algorithm change which intuition predicted would give worse sound quality actually gave better sound quality - but this opened up a whole new avenue for improvements, and I ended up with quite a big overall change to the WTA. But of course, for a given algorithm, (so far at least) increasing the tap length has always improved the sound quality, and technically it gets us closer to the ideal sinc function, which would perfectly reconstruct the original bandwidth-limited analogue signal. For the WTA, doubling the tap length doubles the accuracy against the ideal.
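That "doubling taps doubles accuracy" behaviour matches what you would expect from a plain truncated sinc, whose largest neglected coefficient falls off roughly as 1/N. The WTA window itself is proprietary, so this is only the textbook baseline:

```python
import numpy as np

# Illustration with a plain truncated sinc (not the WTA itself): the
# largest coefficient cut off by truncating at N taps shrinks roughly as
# 1/N, so doubling the tap count halves the worst-case truncation error.
def worst_neglected_coeff(num_taps):
    n = num_taps // 2             # first coefficient index beyond the filter
    return abs(np.sinc(n + 0.5))  # peak of the sinc tail just past the cut

ratio = worst_neglected_coeff(1024) / worst_neglected_coeff(2048)
# ratio comes out very close to 2: twice the taps, half the error
```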
 
Oct 28, 2017 at 12:43 AM Post #420 of 4,697
Rob, are you planning to offer a dual monoblock option in the planned new-style amp releases? I want to swap to a monoblock dual-amp setup as soon as possible.
Yes - a single DX input and it will turn the unused channel into a negative channel; so you can bridge, or bi-amp (and swap the OP connection on the inverted channel).
 
