jdpark
100+ Head-Fier · Joined Jan 14, 2014 · Posts: 362 · Likes: 121
I'm afraid I don't understand how this shows that Fidelizer Pro improves sound quality. I'm not a programmer or an engineer, but I can follow basic principles if someone explains them to me.
Is this basically saying that bit-perfect playback doesn't really exist on computers? Or is it saying that the test shows a new test should be done through a DAC, measuring some sort of analogue output?
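For what it's worth, the digital half of "bit perfect" is at least checkable in principle: if you can capture the stream the player actually sends to the device (say, via a digital loopback), you can compare it byte-for-byte against the source file. A minimal sketch of that comparison, assuming two WAV files in the same format, where capture.wav is a hypothetical loopback capture (the file names are placeholders, and real captures would need offset alignment this ignores):

```python
import wave

def read_frames(path):
    """Return (format parameters, raw PCM bytes) for a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getparams(), w.readframes(w.getnframes())

# source.wav is the original; capture.wav is a hypothetical digital
# loopback capture of what the player actually sent to the device.
src_params, src = read_frames("source.wav")
cap_params, cap = read_frames("capture.wav")

if src_params != cap_params:
    print("Formats differ:", src_params, "vs", cap_params)
elif src == cap:
    print("Bit perfect: capture matches source exactly.")
else:
    # Count differing bytes (a naive measure; a real test would first
    # align the capture to the source to account for start-up delay).
    n = min(len(src), len(cap))
    diffs = sum(a != b for a, b in zip(src[:n], cap[:n]))
    print(f"Not bit perfect: {diffs} of {n} bytes differ.")
```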
Perhaps a test should be done that records what playback sounds like under three conditions: when computer A is running at 80-90% CPU capacity (to check for some kind of audible 'jitter' or whatever), when no other non-audio processes are running, and when Fidelizer Pro is active, to see if there is any difference between the three. Obviously, I can tell you one thing without being a computer guy: when the fan starts, it makes noise and vibrations, which I suppose could have a minute effect on the sound. If you can cut CPU usage down to the point that the fan rarely, if ever, starts, then you have already done something.
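A test along those lines could even be scripted. Here is a rough sketch of the first two conditions, assuming the third-party sounddevice and soundfile Python packages and an analogue loopback from the DAC's output back into a recording input (the sample rate, duration, and worker count are arbitrary placeholders):

```python
import multiprocessing as mp
import sounddevice as sd  # third-party: pip install sounddevice
import soundfile as sf    # third-party: pip install soundfile

FS = 48000       # capture sample rate
DURATION = 30    # seconds recorded per condition
N_BURNERS = 4    # processes spinning the CPU for the "loaded" condition

def burn_cpu(stop_event):
    """Busy-loop to hold one core near 100% until told to stop."""
    x = 0
    while not stop_event.is_set():
        x = (x * 31 + 7) % 1000003

def record_condition(label, load_processes):
    """Record DURATION seconds of the analogue loopback under a given CPU load."""
    stop = mp.Event()
    workers = [mp.Process(target=burn_cpu, args=(stop,))
               for _ in range(load_processes)]
    for w in workers:
        w.start()
    try:
        audio = sd.rec(int(DURATION * FS), samplerate=FS, channels=2)
        sd.wait()  # block until the recording finishes
    finally:
        stop.set()
        for w in workers:
            w.join()
    sf.write(f"capture_{label}.wav", audio, FS)

if __name__ == "__main__":
    record_condition("idle", 0)            # condition A: quiet machine
    record_condition("loaded", N_BURNERS)  # condition B: heavy CPU load
    # Condition C would repeat this on the same machine with Fidelizer Pro active.
```

The resulting capture files could then be compared, or fed into the kind of blind listening test mentioned further down.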
Lastly, I do understand the drop-out analogy, since I tend to type very fast and I've almost never had a computer that could keep up with me 100% of the time, either when typing on the Internet or in Microsoft Word. If typing, which I consider (albeit without proof) less demanding than playing, say, a 24/192 WAV file, is so difficult for a computer to manage in real time, then it is possible to speculate that yes, even fairly good computers may need a program that forces priority to audio.
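The "forces priority to audio" part is, as I understand it, an ordinary operating-system feature rather than anything exotic. A minimal sketch of the idea, assuming the third-party psutil package and using foobar2000.exe purely as a placeholder for whatever player process you run:

```python
import psutil  # third-party: pip install psutil

PLAYER = "foobar2000.exe"  # placeholder name for the audio player process

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == PLAYER:
        if psutil.WINDOWS:
            # Windows exposes scheduling priority classes directly.
            proc.nice(psutil.HIGH_PRIORITY_CLASS)
        else:
            # Elsewhere, nice values play the same role (lower = higher
            # priority); negative values may require elevated privileges.
            proc.nice(-10)
        print(f"Raised scheduling priority of PID {proc.pid}")
```

Whether raising priority like this is audible is exactly the open question; the point is only that "prioritizing audio" is a concrete, testable setting, not magic.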
I don't see how that goes against computer science if it matches what I experience in day-to-day computer use.
So if basic intuition and practical experience tend to support the audio-prioritization/noise-reduction concept, why not just jump to blind listener A/B tests?
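An ABX-style blind trial is easy to administer honestly with a little scripting: the computer picks X at random, the listener guesses, and at the end you check whether the hit rate beats coin-flipping. A bare-bones sketch, with playback itself left as a placeholder to be filled in for a real rig:

```python
import random
from math import comb

TRIALS = 16  # arbitrary; more trials give more statistical power

def play(label):
    """Placeholder: play clip A or clip B here without revealing which."""
    input("Playing X... (press Enter when ready to answer) ")

correct = 0
for _ in range(TRIALS):
    x = random.choice("AB")  # the hidden identity of X
    play(x)
    guess = input("Was X clip A or clip B? ").strip().upper()
    correct += (guess == x)

# One-sided binomial p-value: the chance of scoring at least this
# well by pure guessing.
p = sum(comb(TRIALS, k) for k in range(correct, TRIALS + 1)) / 2 ** TRIALS
print(f"{correct}/{TRIALS} correct, p = {p:.3f} under pure guessing")
```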