Some questions about settings (sample rate, latency, etc.) for an external USB DAC running off of ASIO
Aug 3, 2015 at 12:48 PM Post #47 of 138
 
Yeah, when you don't have some lingering effects turned ON from your soundcard, my experience with the Windows mixer isn't bad. But it still really is ludicrous how Windows deals with USB sound.


What do you mean by this?


When you don't have control over some DSP, THX widget, or Beats by Dre crap app, or when we're not even aware they exist because they were bundled when we bought the computer. I imagine that many people find a night-and-day difference with kernel streaming, ASIO, or WASAPI simply because one of those unnoticed DSPs was running in the background.
 
Aug 3, 2015 at 1:06 PM Post #48 of 138
When you don't have control over some DSP, THX widget, or Beats by Dre crap app, or when we're not even aware they exist because they were bundled when we bought the computer. I imagine that many people find a night-and-day difference with kernel streaming, ASIO, or WASAPI simply because one of those unnoticed DSPs was running in the background.

 
@goodyfresh
 
You should share that info you sent me via PM about how when you changed some stuff, the measurements for your equipment improved!
 
Aug 3, 2015 at 2:10 PM Post #49 of 138
   
@goodyfresh
 
You should share that info you sent me via PM about how when you changed some stuff, the measurements for your equipment improved!


Okay then:

So basically, guys, what I told Music Alchemist is that I've been doing some experimentation with RightMark Audio Analyzer, an awesome free benchmarking tool for audio. It tests dynamic range, crosstalk, total harmonic distortion, and of course frequency-response accuracy (both standard and swept-sine) for onboard computer audio or, if you set it up correctly, for external DACs and soundcards.

My HP Envy 15t comes equipped with fairly nice speakers as far as laptops go; they get pretty loud and there's even a subwoofer. The internal sound chipset is one of the higher-end Realtek audio chips, although I haven't been able to figure out exactly which one. That being said, Hewlett-Packard RUINED this thing's audio performance by including BEATS AUDIO software in it. Supposedly you can uncheck the Beats Audio checkbox in the audio control panel to disable the Beats software EQ-ing, but in actuality it never completely goes away. So I ran some tests with RightMark, and lo and behold, even with the Beats Audio box unchecked, this thing gets as high as five percent THD in the lower bass range and high treble, and has a variation of more than +/- FIFTEEN DECIBELS across the audible frequency range; even within the 50 Hz to 10 kHz range it's getting +/- 8 dB. Keep in mind this is the signal being output to the speakers or headphone jack that is already this distorted, not the sound of the speakers themselves.

HOWEVER, without the Microsoft and Beats EQ software, it turns out this thing is capable of doing far, far better. Using either WASAPI or ASIO4ALL bit-perfect audio output, the THD levels decrease to less than 0.05%, and the variation in frequency response from 50 Hz to 20 kHz is less than 0.5 dB! The response rolls off below 50 Hz down to 20 Hz, hitting about -5 dB or so. The dynamic range improves from about 70 dB with the Beats/Microsoft EQ to 80 dB with the bit-perfect output, and the crosstalk numbers improve as well.

That being said, I of course get even BETTER performance when using my FiiO X3 2nd Generation as a USB DAC: THD less than 0.004%, dynamic range of about 90 dB, and less than 0.2 dB of frequency-response variation throughout the entire range from 20 Hz to 20 kHz, with the vast majority of any variation at all falling between 20 Hz and 50 Hz and between 10 kHz and 20 kHz. Oh, and crosstalk of less than -80 dB, which of course is totally inaudible, yay :wink:
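For anyone who wants to relate those THD percentages to the decibel figures you usually see in spec sheets, the conversion is just 20·log10(THD/100). A minimal Python sketch using the numbers quoted above (the labels are just my shorthand for the three test cases, not anything RMAA itself reports):

```python
import math

def thd_percent_to_db(thd_percent):
    """Convert a THD figure in percent to dB relative to the fundamental."""
    return 20 * math.log10(thd_percent / 100.0)

# Figures quoted in this post
for label, thd in [("Beats EQ path", 5.0),
                   ("bit-perfect onboard", 0.05),
                   ("FiiO X3 II as USB DAC", 0.004)]:
    print(f"{label}: {thd}% THD = {thd_percent_to_db(thd):.0f} dB")
# Prints roughly -26 dB, -66 dB, and -88 dB respectively.
```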

By the way, folks who are interested should definitely check out RightMark Audio Analyzer and play around with it, it's fun to experiment with and is GREAT for free software :)
 
Aug 3, 2015 at 2:43 PM Post #50 of 138
RMAA is cool, and for gratis it's amazing software, but you might want to have a look at NwAvGuy's warnings about it (maybe you already did?). I don't think I'm allowed to link to his blog, so go buy Google on eBay.

 
 
About Beats Audio, I remember reading that their trick was that ON gave you the Beats sound, and OFF was just another EQ that sucked, all to make the ON setting seem better. The difficulty is finding where that still-active EQ lives. But you give a good example of what I was talking about: if something like that is active on your computer, of course any bit-perfect output will sound like a revolution and be very obviously audible, and any test will measure badly if it goes through a DSP.
 
 
 
 

 
Aug 3, 2015 at 3:47 PM Post #51 of 138
RMAA is cool, and for gratis it's amazing software, but you might want to have a look at NwAvGuy's warnings about it (maybe you already did?). I don't think I'm allowed to link to his blog, so go buy Google on eBay.

 
 
About Beats Audio, I remember reading that their trick was that ON gave you the Beats sound, and OFF was just another EQ that sucked, all to make the ON setting seem better. The difficulty is finding where that still-active EQ lives. But you give a good example of what I was talking about: if something like that is active on your computer, of course any bit-perfect output will sound like a revolution and be very obviously audible, and any test will measure badly if it goes through a DSP.
 
 
 
 
 


I'm fully aware of RMAA's issues, but unless you're aware of some BETTER audio benchmarking software which is also available as freeware, I'm gonna stick with it :p

And yeah, Beats is so deceptive, I freaking hate them. Dr. Dre is one of the best producers in all of rap/hip-hop and should be ASHAMED of himself for what he's done with that company. I guarantee you that when he's in the privacy of his own home, he never listens to music on his own company's headphones... he's probably using Sennheisers or something :p
 
Aug 3, 2015 at 4:11 PM Post #52 of 138
   
Ignores the fact that the primary reason for the development of ASIO was to reduce latency in live recording/real time monitoring applications.
 
https://en.wikipedia.org/wiki/Audio_Stream_Input/Output
 
"Audio Stream Input/Output (ASIO) is a computer sound card driver protocol for digital audio specified by Steinberg, providing a low-latency and high fidelity interface between a software application and a computer's sound card. "
 
Since the latency (minimum time between recording and playback) during audiophile playback for listening enjoyment is already measured in days or years, sound card driver latency isn't much of a real-world issue outside of its intended use, which is live recording with real-time monitoring of the recording.
 
In many cases, ASIO and the other driver schemes (for example WASAPI, MME, DirectSound, etc.) are indistinguishable on every parameter other than latency.

 
I think we agree on everything you said. (However, there are still ongoing discussions about "why ASIO does/does not sound better than WASAPI".) I suspect it simply boils down to what some people are more familiar with.
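To put some rough numbers on the latency point above: the dominant term in driver latency is simply the buffer size divided by the sample rate (the device and OS add a bit on top). A quick back-of-the-envelope sketch in Python; the buffer sizes below are common illustrative values, not measurements of any particular interface:

```python
def buffer_latency_ms(frames, sample_rate_hz):
    """One-way buffer latency in milliseconds: frames / sample rate."""
    return 1000.0 * frames / sample_rate_hz

# Illustrative buffer sizes at two common sample rates
for rate in (44100, 48000):
    for frames in (64, 256, 2048):
        print(f"{frames} frames @ {rate} Hz ~ {buffer_latency_ms(frames, rate):.1f} ms")
# 64 frames at 48 kHz is about 1.3 ms; 2048 frames is about 43 ms - a range
# that matters for live monitoring and is irrelevant for playback-only listening.
```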
 
Aug 3, 2015 at 4:19 PM Post #53 of 138
   
I think we agree on everything you said. (However, there are still ongoing discussions about "why ASIO does/does not sound better than WASAPI".) I suspect it simply boils down to what some people are more familiar with.


Honestly, I can't hear a difference between ASIO and WASAPI myself. They're both bit-perfect, and they sound the same to me and much better than audio that has gone through DSP. If there really is a quantifiable and measurable sonic difference between ASIO and WASAPI (besides the latency, and how much of a difference is there even in THAT?), then please, by all means, enlighten me! I'm serious. I'd really like to know if there's a measurable difference.

Also, how about Kernel Streaming bit-perfect output? I've read before that it achieves the lowest latency of all, at least on Windows systems... is that true?
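If anyone wants to check the "bit-perfect" claim with data rather than by ear, one rough approach is to loopback-record the same track through each output path (ASIO, WASAPI exclusive, kernel streaming) and compare the captures sample by sample. A minimal sketch, assuming you already have two time-aligned captures saved as WAV files (the file names are made up for the example):

```python
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

# Hypothetical captures of the same track recorded via two different output paths
a, rate_a = sf.read("capture_wasapi.wav", dtype="int32")
b, rate_b = sf.read("capture_asio.wav", dtype="int32")
assert rate_a == rate_b, "captures must share a sample rate"

# Trim to a common length; a real comparison also needs the captures aligned
# to the sample, which this sketch assumes has already been done.
n = min(len(a), len(b))
diff = a[:n].astype(np.int64) - b[:n].astype(np.int64)

print("max absolute sample difference:", np.abs(diff).max())
# 0 means the two paths delivered identical samples over this window.
```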
 
Aug 3, 2015 at 4:41 PM Post #54 of 138
RMAA is cool, and for gratis it's amazing software, but you might want to have a look at NwAvGuy's warnings about it (maybe you already did?). I don't think I'm allowed to link to his blog, so go buy Google on eBay.

 

 
Why the heck not?
 
Aug 3, 2015 at 5:13 PM Post #56 of 138
   
http://www.head-fi.org/t/584763/the-wizard-appreciation-thread-long-live-the-wizard-the-former-ha-appreciation-thread/150#post_8144761

 
So "because page 11 of some 500+ thread." Still don't see why relevant, factual information should be denied from any source, even from personæ non gratæ.
 
Aug 3, 2015 at 5:19 PM Post #57 of 138
  So "because page 11 of some 500+ thread." Still don't see why relevant, factual information should be denied from any source, even from personæ non gratæ.

 
You could ask the powers that be about the details if it interests you. We're also generally not allowed to link to other forums.
 
Aug 3, 2015 at 5:20 PM Post #58 of 138
   
You could ask the powers that be about the details if it interests you. We're also generally not allowed to link to other forums.

 
I tend to ask for forgiveness, not permission ^_^
 
Aug 3, 2015 at 5:22 PM Post #59 of 138
   
By and large, the experimental approach starts out with what is called the null hypothesis, which basically posits that there is no difference between two conditions - such as a control condition and an experimental condition, or say Red Book and upsampled (or whatever). Then we run tests and analyze the data, and we can either conclude that we found a statistically significant difference and "reject the null hypothesis", or that we "fail to reject the null hypothesis". Of course, this is all done under a specific set of conditions.
 
Of course, one study is not enough, but if we run sufficient variations of the tests with big enough samples (bigger is better) and they all end up with us failing to reject the null hypothesis, then - while we never say the case is proven - we pragmatically conclude that the preponderance of evidence supports the null hypothesis (until something contradicts it). At that point we take it as "generally accepted" and study something more interesting, unless we receive contradictory evidence. We no longer need to drop apples from trees.
 
While we can argue the toss about studies like Meyer and Moran, the more interesting question is: given the resources that folks like Sony (SACD), Meridian, Neil Young and so on have, where are their controlled tests that conclusively support the audible benefits of XXXXXX vs YYYYYYY? The best you get is the same old dog and pony shows.

 
The problem with your claim is that there HAVEN'T been "sufficient tests" with "sufficient variations" to prove the point either way. As such, everybody seems to just pick the option they "intuitively believe," consider it to be the null hypothesis, then assume (and claim) that it's true because it hasn't been sufficiently thoroughly disproven. The simple fact is that, in the world of marketing (sales), enough people believe that "more is better" that simply claiming you have more is sufficient - which means a company can sell more players based simply on the fact that they support high-res files while their competitors don't. They have little incentive to provide proof because, to put it bluntly, it won't affect sales. (If they spend the money to do the testing and are proven right, it probably won't increase sales by much; and, of course, if they're proven wrong, it stands to hurt sales. I don't see much incentive there to spend money on testing.) And if some mythical company sells a player that only supports Red Book files, it will probably cost them less to jump on the bandwagon and add support for 96k in their product than it will to try and convince their customers not to bother.

 
Honestly, I'm not aware of any of what I would call properly run tests, so it's more a matter of "no reliable evidence either way" than "a preponderance of good evidence". What I've seen documented are a few tests with far too few test subjects, dubious or badly controlled test conditions, test equipment that hasn't been proven able to reveal any possible differences if they do indeed exist, and equally uncertain test material. The fact that you can list a whole laundry list of folks who have failed to substantiate the claim may not go far towards proving it, but it also doesn't go anywhere towards DISPROVING it either. And, in the absence of sufficient proof either way, even anecdotal claims are somewhat better than nothing.
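To make the "far too few test subjects" point concrete: an ABX-style listening test is typically scored against the null hypothesis of pure guessing (p = 0.5), and with small trial counts even a respectable hit rate is not statistically significant. A quick illustration with SciPy; the trial counts are invented for the example, not taken from any study mentioned here:

```python
from scipy.stats import binomtest  # SciPy >= 1.7

# Probability of scoring at least this well by pure guessing (one-sided test)
for correct, trials in [(7, 10), (14, 20), (35, 50)]:
    p = binomtest(correct, trials, p=0.5, alternative="greater").pvalue
    print(f"{correct}/{trials} correct: p = {p:.3f}")
# 7/10 comes out around p = 0.17 - nowhere near the usual 0.05 threshold -
# while the same 70% hit rate over 50 trials is convincingly significant.
```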
 
Aug 3, 2015 at 5:26 PM Post #60 of 138
The problem with your claim is that there HAVEN'T been "sufficient tests" with "sufficient variations" to prove the point either way. As such, everybody seems to just pick the option they "intuitively believe," consider it to be the null hypothesis, then assume (and claim) that it's true because it hasn't been sufficiently thoroughly disproven. The simple fact is that, in the world of marketing (sales), enough people believe that "more is better" that simply claiming you have more is sufficient - which means a company can sell more players based simply on the fact that they support high-res files while their competitors don't. They have little incentive to provide proof because, to put it bluntly, it won't affect sales. (If they spend the money to do the testing and are proven right, it probably won't increase sales by much; and, of course, if they're proven wrong, it stands to hurt sales. I don't see much incentive there to spend money on testing.) And if some mythical company sells a player that only supports Red Book files, it will probably cost them less to jump on the bandwagon and add support for 96k in their product than it will to try and convince their customers not to bother.

Honestly, I'm not aware of any of what I would call properly run tests, so it's more a matter of "no reliable evidence either way" than "a preponderance of good evidence". What I've seen documented are a few tests with far too few test subjects, dubious or badly controlled test conditions, test equipment that hasn't been proven able to reveal any possible differences if they do indeed exist, and equally uncertain test material. The fact that you can list a whole laundry list of folks who have failed to substantiate the claim may not go far towards proving it, but it also doesn't go anywhere towards DISPROVING it either. And, in the absence of sufficient proof either way, even anecdotal claims are somewhat better than nothing.

 
I'm just gonna nod towards this as sufficient documentation: https://xiph.org/~xiphmont/demo/neil-young.html
 
