Fidelizer Pro - Real or Snake Oil?
Feb 18, 2016 at 10:47 AM Post #241 of 683
 
 
Therefore, my conclusion is that, while I wouldn't expect Fidelizer to make a significant difference on many systems (including my current one), I have to say that the claims it makes do in fact make sense. By reducing extraneous processes running in Windows, it MAY reduce the number and magnitude of timing errors and jitter on the data coming from Windows, which in fact MAY produce an audible improvement with SOME DACs.
 

 
I think that's the general consensus of the "objectivist" posters (if I broadly stereotype and label).
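(For a concrete sense of what "extraneous processes" means on a stock Windows install, the sketch below simply counts running processes and lists the busiest ones by CPU share. It is purely observational, assumes the psutil package is installed, and says nothing about audibility.)

# Purely observational sketch: count background processes and list the
# busiest ones by CPU share. Assumes the psutil package is installed.
import time
import psutil

procs = list(psutil.process_iter(["name"]))
for p in procs:
    try:
        p.cpu_percent(None)              # first call only sets a baseline
    except psutil.Error:
        pass

time.sleep(1.0)                          # measure over a one-second window

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.info["name"] or "?"))
    except psutil.Error:
        pass

print(f"{len(procs)} processes currently running")
for pct, name in sorted(usage, key=lambda t: t[0], reverse=True)[:10]:
    print(f"{name:<30} {pct:>5.1f}% CPU")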
 
Feb 18, 2016 at 7:02 PM Post #243 of 683
I think I found some notable improvements with the Fidelizer software after today's test.
 
I made 3 tracks with added silence on the master for alignment, before using Fidelizer, and after using Fidelizer, and took the average of the 2x3=6 data points from each. Here are the results:
 
Align: 152.95 dB (146.0-165.1)
Before: 131.2 dB (125.9-136.2)
After: 136.3 dB (131.6-143.9)
 
Without Fidelizer, correlation depth swings around 12x-13x dB
With Fidelizer, correlation depth swings around 13x-14x dB
 
So Fidelizer isn't snake oil, and the improvements can be measured with the statistics data from DiffMaker. I have now finished my part in proving that I'm not doing something that deserves accusations of selling snake oil.
 
Regards,
Windows X
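(For anyone curious what a "correlation depth" number like the above represents, below is a rough sketch of a null comparison. It is not DiffMaker's actual algorithm, and it assumes the numpy and soundfile packages plus two already sample-aligned captures with hypothetical file names.)

# Rough sketch of a null-depth measurement; NOT DiffMaker's algorithm.
# Assumes numpy and soundfile are installed and that the two captures
# are already sample-aligned.
import numpy as np
import soundfile as sf

def null_depth_db(file_a, file_b):
    a, rate_a = sf.read(file_a)
    b, rate_b = sf.read(file_b)
    assert rate_a == rate_b, "sample rates must match"
    n = min(len(a), len(b))              # compare only the overlapping region
    diff = a[:n] - b[:n]                 # residual after subtracting one capture from the other
    rms = float(np.sqrt(np.mean(diff ** 2)))
    if rms == 0.0:
        return float("inf")              # perfect null: the files are identical
    return -20.0 * np.log10(rms)         # depth in dB below full scale

# Hypothetical file names, for illustration only.
print(null_depth_db("before_fidelizer.wav", "after_fidelizer.wav"))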
 
Feb 18, 2016 at 10:48 PM Post #245 of 683
But 100dB+ worth of null is inaudible anyways. I really don't see why you think this is proof of anything...


If you can measure something audible on bit-perfect playback, something must be broken. Why can't you just accept that I can finally measure some changes with Fidelizer? Be a real objectivist for once, please. It may be inaudible, but there is proof of measurable improvement in the data here.

Regards,
Windows X
 
Feb 18, 2016 at 11:59 PM Post #246 of 683
But 100dB+ worth of null is inaudible anyways. I really don't see why you think this is proof of anything...


If you can measure something audible on bit-perfect playback, something must be broken. Why can't you just accept that I can finally measure some changes with Fidelizer? Be a real objectivist for once, please. It may be inaudible, but there is proof of measurable improvement in the data here.

Regards,
Windows X


Questionable results at best and certainly inaudible. If this isn't simply algorithm rounding errors, it still isn't proof that Fidelizer is making an improvement that any human on the planet could actually hear.

You never mentioned the spec of the system you ran these tests on, nor, I believe, the CPU loading during each phase of the test.
 
Feb 19, 2016 at 12:39 AM Post #247 of 683
Questionable results at best and certainly inaudible. If this isn't simply algorithm rounding errors, it still isn't proof that Fidelizer is making an improvement that any human on the planet could actually hear.

You never mentioned the spec of the system you ran these tests on, nor, I believe, the CPU loading during each phase of the test.



After reading this thread from start to this point, this is exactly what I thought would happen. X bends over backwards to justify his claims, just as you requested, and you say "Questionable results at best..."

You should take note that even on a beastly computer, instantaneous CPU usage spikes can occur. These can have high priority. If the product lessens the impact of spikes, then it's useful.

It is hard to devise tests for all possible cases.

Also, just frequenting the Sound Science forum doesn't make you a scientist or entitle you to make demands on others.

This forum has some of the most arrogant and insulting members in all of head-fi.
 
Feb 19, 2016 at 2:00 AM Post #248 of 683
After reading this thread from start to this point, this is exactly what I thought would happen. X bends over backwards to justify his claims, just as you requested, and you say "Questionable results at best..."

You should take note that even on a beastly computer, instantaneous CPU usage spikes can occur. These can have high priority. If the product lessens the impact of spikes, then it's useful.

It is hard to devise tests for all possible cases.

Also, just frequenting the Sound Science forum doesn't make you a scientist or entitle you to make demands on others.

This forum has some of the most arrogant and insulting members in all of head-fi.

 
Are you sure you understood X's data?
 
Yes, perhaps it did have an effect (although it may have also been quantization), but the results in this specific scenario are in the range of -130 dB, which is inaudible.
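(To put -130 dB in perspective, the quick arithmetic below converts it to a linear fraction of full scale and compares it to the smallest step of 16-bit and 24-bit audio. The bit-depth comparison is an added illustration, not a figure quoted from any post in this thread.)

# Back-of-envelope numbers for why a residual around -130 dBFS is inaudible.
def dbfs_to_linear(db):
    return 10 ** (db / 20)

residual = dbfs_to_linear(-130)          # about 3.2e-7 of full scale
lsb_16bit = 1 / (2 ** 15)                # smallest step of 16-bit audio
lsb_24bit = 1 / (2 ** 23)                # smallest step of 24-bit audio

print(f"-130 dBFS residual: {residual:.2e} of full scale")
print(f"16-bit LSB: {lsb_16bit:.2e} (residual is about {residual / lsb_16bit:.3f} LSB)")
print(f"24-bit LSB: {lsb_24bit:.2e} (residual is about {residual / lsb_24bit:.1f} LSB)")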
 
Feb 19, 2016 at 4:15 AM Post #249 of 683
   
Are you sure you understood X's data?
 
Yes, perhaps it did have an effect (although it may have also been quantization), but the results in this specific scenario are in the range of -130 dB, which is inaudible.

I also think X played along quite well given the demands, though I don't think the demands were too extreme. Also, I don't think one can judge head-fi by levels of 'arrogance'. This forum (though I don't visit it often), and this thread in particular, are intended to ask difficult questions and demand answers that follow some minimum scientific method.
 
I basically just consider myself a consumer of digital audio, with a pretty solid mid-fi system. I'm willing to take some of the hard-earned money from the family funds and invest in a product that can improve on the experience, because it's one of the important ways I relax and enjoy life, in general. 
 
If someone asks hard questions that are considered impertinent to others, I always try to understand whether the question is a bad question, or whether it could have simply been worded more politely. If the second, I applaud. We don't need to be polite all the time, especially when essentially we are trying to solve problems that will benefit everyone in the future.
 
That said, we still have more questions, particularly about load spikes, or even about what happens when CPU usage is often at, say, 80-90% while listening to music.
 
I believe that Fidelizer Pro makes music more involving, rhythmically stable, and spatially accurate when I listen to a whole album. Does that mean that if I were given two 20 second clips, one with Fidelizer Pro, and one without, I would be able to tell the difference blindly? I doubt it. So honestly, while I applaud impertinent questions, and I applaud manufacturers/designers who strive to answer those questions with real science, I do have doubts about the limits of the tests we can do with conclusive (or even reasonably certain) answers. 
 
Feb 19, 2016 at 4:24 AM Post #250 of 683
X bends over backwards to justify his claims ...

 
WindowsX "claims" in big bold letters: "Sound Quality Improvement Solutions for Everyone". Sound quality or indeed the judgement of the quality of anything is relative, "quality" is a comparative term. One therefore has to be able to experience some amount of difference in the first place, in order to be able to make a comparative judgement about quality. A determination of whether one thing is an improvement, higher quality, than something else.
 
WindowsX "bending over backwards" is only relevant if that bending over actually does "justify his claims". Unfortunately, his bending over backwards has done the exact opposite, as indeed he himself predicted it would a number of pages ago! Below -130dBFS we are in the realm of the levels of noise created by electrons colliding inside resistors. Regardless of whether some extremist audiophiles are deluded enough to believe they can hear the sound of sub-atomic particles colliding, no DAC in the world can resolve sound at that level and no speakers or headphones can reproduce sound anywhere even vaguely near that level. There is no difference to be experienced and therefore no judgement of quality is possible.
 
It is hard to devise tests for all possible cases.

 
WindowsX is part of the group which contains "Everyone". Even if we accept that the difference he measured is due to Fidelizer, if he himself cannot attain any audible difference with Fidelizer, this one test alone is enough to prove his claim of "everyone" is a lie!
 
If the product lessens the impact of spikes, then it's useful.

 
Only if that "impact of spikes" is audible AND Fidelizer audibly improves it. If it doesn't, as his test results indicate, then how is it useful?
 
Also, just frequenting the Sound Science forum doesn't make you a scientist or entitle you to make demands on others.

 
1. One doesn't have to be a scientist to know that what happens below -130dB is utterly inaudible.
 
2. If someone makes a claim about a product they are trying to sell, one doesn't have to be a scientist to be entitled to demand evidence of their claims. Are you really saying that no one except a scientist is entitled to demand answers or proof of, for example, the claims of say a car salesman?
 
It seems abundantly clear that Fidelizer does NOT provide "Sound Quality Improvement Solutions for Everyone". A little basic knowledge and some simple deductive reasoning imply that, far from sound quality improvements for "everyone", Fidelizer is actually snake oil for the majority and probably for the vast majority. WindowsX responded with some marketing BS to dispute this inference and refused to provide tests or other reliable evidence on the grounds that it would support rather than refute this inference! Eventually he did attempt an apparently valid test, which does indeed appear to support the inference!
 
All the test evidence done/quoted in this thread so far, including that done by the developer himself, demonstrates no audible difference. WindowsX even stated that "it's impossible" for Fidelizer "to beat the boundaries of audible threshold". Yet bizarrely, you still seem to support the product's claim. What rationale/logic allows you to arrive at such a conclusion? If even the developer's statement and test is not enough for you, what would it take to convince you?
 
G
 
Feb 19, 2016 at 6:14 AM Post #251 of 683
@bfreedma Questionable indeed. We're talking about measuring changes in bit-perfect data in the pure software domain here. It's not something anyone can grasp as easily as running software to read common specifications from D/A conversion.

I configured DiffMaker to make a very detailed comparison with as little rounding error as possible, and made 3 reference samples of the original data with added silence to measure the threshold of rounding error. It was around 152.95 dB (146.0-165.1). So we know the scope of changes for a null result once rounding error is accounted for.

My computer runs an AMD FX-8350 at 4.2GHz with 8MB of L2/L3 cache. I use a high-quality motherboard with 16GB of RAM and a Platinum-grade PSU, so there's no need to worry about a slow computer.

@Harry Manback Thank you. It's indeed a hard test, since "bits are bits" believers don't cooperate or ever state a clear demand. I used to give up at some points because they believe only in measurements in the audible range, and that's an impossible task for Fidelizer. It's like asking to see a gravitational wave with an audible/visible result instead of the very small numbers in the data we have no clue about.

@watchnerd If I make a comparison with audible changes, or even supply diff files where you can hear the result, will you accept those parameters or shoot it down as invalid samples with errors?

Do you understand that we're working with bit-perfect data in a pure software environment? You can't expect anyone to measure this difficult subject for you in the first place. And I did this for you, with results to prove it, and you finally accepted that Fidelizer makes some measurable changes. Audible or not, it's measurable.

If you aren't satisfied with my method and results, try arranging a method for me to test for measurable and audible data that you can accept.

@jdpark Thank you. I think I finally made some progress in getting measurable data out of bit-perfect playback in a pure software environment. The reason I put time and effort into this experiment is to know the answer myself too.
 
Regards,
Windows X
 
Feb 19, 2016 at 7:47 AM Post #252 of 683
 
WindowsX is part of the group which contains "Everyone". Even if we accept that the difference he measured is due to Fidelizer, if he himself cannot attain any audible difference with Fidelizer, this one test alone is enough to prove his claim of "everyone" is a lie!
 

 
Formal logic 101!
 
It's been a while, but I think this is valid: ¬∀x (x∈D → P(x)) ≡ ∃x (x∈D ∧ ¬P(x)), i.e. a single counterexample is enough to refute a claim about "everyone".
 
Feb 19, 2016 at 8:17 AM Post #253 of 683
After reading this thread from start to this point, this is exactly what I thought would happen. X bends over backwards to justify his claims, just as you requested, and you say "Questionable results at best..."

You should take note that even on a beastly computer, instantaneous CPU usage spikes can occur. These can have high priority. If the product lessens the impact of spikes, then it's useful.

It is hard to devise tests for all possible cases.

Also, just frequenting the Sound Science forum doesn't make you a scientist or entitle you to make demands on others.

This forum has some of the most arrogant and insulting members in all of head-fi.

 
Always good to see someone jumping into the middle of a discussion without grasping the context.  Are you actually suggesting that the data presented represents an audible change?
 
As to alleged instantaneous CPU spikes, care to show any examples of this actually happening on a modern computer dedicated to audio reproduction? No one here is arguing that in certain, very limited scenarios, thread prioritization and CPU affinity settings may add value, but the claim being made is that Fidelizer makes an audible improvement in all scenarios, not just when a CPU is oversubscribed. And yes, claims made in Sound Science are going to be challenged and proof requested.
 
Without going overboard on self-credentialing, I'm very comfortable with my knowledge base as it pertains to the discussion of computer performance and tuning. Real arrogance is jumping into this thread and accusing others of lack of domain knowledge while presenting no countering evidence, just insults.....
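(For readers unfamiliar with the terms, the sketch below shows what "thread prioritization and CPU affinity settings" can look like in practice on Windows. It is illustrative only, is not a description of Fidelizer's internals, and assumes the psutil package plus "player.exe" as a placeholder process name.)

# Illustrative only: raise a media player's priority and pin it to two cores.
# NOT a description of Fidelizer's internals. Assumes a Windows host, the
# psutil package, and "player.exe" as a placeholder process name.
import psutil

def prioritize_player(name="player.exe"):
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == name:
            try:
                proc.nice(psutil.HIGH_PRIORITY_CLASS)   # Windows-only priority class
                proc.cpu_affinity([0, 1])               # keep the player on cores 0 and 1
                print(f"Tuned PID {proc.pid}")
            except psutil.Error:
                pass                                    # skip processes we can't modify

if __name__ == "__main__":
    prioritize_player()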
 
Feb 19, 2016 at 8:18 AM Post #254 of 683
Are the measurements currently predicated on the inability to get a truly bit-perfect loopback recording?

I don't pretend to fully understand the issue at the moment, but it appears to be a complicated problem. There was, however, a particular audio interface mentioned in the thread that did produce a truly bit-perfect loopback recording.
https://www.gearslutz.com/board/so-much-gear-so-little-time/471239-getting-bit-perfect-recording.html

Both the output and input of the digital recording chain would be buffered. The inability to get a bit-perfect loopback would seem to be caused by a systematic software error rather than any jitter. After all, it's not as if a "1" on the sending end that falls in the crack between two time slots on the receiving end gets interpolated into two "0.5" samples--either the buffers eliminate the timing differences and produce the original stream, or you get a dropout.

In the limited testing I did I got the same slight volume decrease as Muriel Esteban got in 1a) of post 10 in the above thread. Amplifying to the same amplitude and then comparing the input and output, of course, didn't yield anywhere near a null signal. But the error is very systematic and not at all indicative of a random noise process.

Wouldn't it make more sense to directly compare jitter measurements of the S/PDIF output of a computer with and without Fidelizer?
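(Related to the loopback question above, a simple way to check whether a capture is actually bit-perfect is an exact sample-by-sample comparison after allowing for a constant leading delay. The sketch below assumes the numpy and soundfile packages, identical sample rates and bit depths, and placeholder file names.)

# Minimal bit-perfect check: does the capture contain an exact copy of the
# source, allowing only for a constant leading delay? Assumes numpy and
# soundfile; "source.wav" and "loopback.wav" are placeholder file names.
import numpy as np
import soundfile as sf

def find_bit_perfect_offset(source_path, capture_path, max_offset=48000):
    # Read as integers so "bit-perfect" means exact integer equality.
    src, _ = sf.read(source_path, dtype="int32")
    cap, _ = sf.read(capture_path, dtype="int32")
    for offset in range(max_offset):
        segment = cap[offset:offset + len(src)]
        if len(segment) == len(src) and np.array_equal(segment, src):
            return offset                # exact copy found at this delay
    return None                          # no exact copy within the search window

offset = find_bit_perfect_offset("source.wav", "loopback.wav")
if offset is None:
    print("not bit-perfect within the search window")
else:
    print(f"bit-perfect, capture delayed by {offset} samples")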
 
Feb 19, 2016 at 8:27 AM Post #255 of 683
@bfreedma Questionable indeed. We're talking about measuring changes in bit-perfect data in the pure software domain here. It's not something anyone can grasp as easily as running software to read common specifications from D/A conversion.

I configured DiffMaker to make a very detailed comparison with as little rounding error as possible, and made 3 reference samples of the original data with added silence to measure the threshold of rounding error. It was around 152.95 dB (146.0-165.1). So we know the scope of changes for a null result once rounding error is accounted for.

My computer runs an AMD FX-8350 at 4.2GHz with 8MB of L2/L3 cache. I use a high-quality motherboard with 16GB of RAM and a Platinum-grade PSU, so there's no need to worry about a slow computer.

@Harry Manback Thank you. It's indeed a hard test, since "bits are bits" believers don't cooperate or ever state a clear demand. I used to give up at some points because they believe only in measurements in the audible range, and that's an impossible task for Fidelizer. It's like asking to see a gravitational wave with an audible/visible result instead of the very small numbers in the data we have no clue about.

@watchnerd If I make a comparison with audible changes, or even supply diff files where you can hear the result, will you accept those parameters or shoot it down as invalid samples with errors?

Do you understand that we're working with bit-perfect data in a pure software environment? You can't expect anyone to measure this difficult subject for you in the first place. And I did this for you, with results to prove it, and you finally accepted that Fidelizer makes some measurable changes. Audible or not, it's measurable.

If you aren't satisfied with my method and results, try arranging a method for me to test for measurable and audible data that you can accept.

@jdpark Thank you. I think I finally made some progress in getting measurable data out of bit-perfect playback in a pure software environment. The reason I put time and effort into this experiment is to know the answer myself too.
 
Regards,
Windows X

 
Lots of hand waving and distraction from the fact that you still haven't come close to supporting your claim that Fidelizer makes an audible difference outside of the limited scenario where a CPU is oversubscribed, something that is unlikely to happen on a modern PC, particularly one dedicated to audio reproduction.
 
If you believe "I used to give up at some points because they believe only in measurements in the audible range, and that's an impossible task for Fidelizer," then we should just end the conversation, because that is a completely inaccurate statement. Measuring the audible impact of Fidelizer is not particularly difficult, and the measurements have been described in this thread several times. Instead, you chose to pursue some odd form of "bit perfect" analysis, which is both irrelevant to the discussion and actually produced no results in the audible range.
 