What a long, strange trip it's been -- (Robert Hunter)
Jul 6, 2016 at 2:40 AM Post #736 of 14,566

I can understand if people have privacy / security / philosophical reasons for not liking Windows 10...but in my experience, stability is not a concern.


Just a fun fact: my employer blocks Windows 8 machines from the network because they exhibit malware-like behavior. So anyone with a shiny Windows 8 machine couldn't access the internet on that network...


 
I've never had Windows 8 crash on me, but the ergonomics and the impossible-to-figure-out uber-friendly settings were enough of a turn-off for me. But hey, I'm a biased Linux user who ditched Microsoft back when XP was still shiny.

 
Jul 6, 2016 at 3:13 AM Post #737 of 14,566
I've never had Windows 8 crash on me, but the ergonomics and the impossible-to-figure-out uber-friendly settings were enough of a turn-off for me. But hey, I'm a biased Linux user who ditched Microsoft back when XP was still shiny.

 
Totally agree that the Win 8 UI was a disaster, but all of that is more or less fixed in Win 10. There's still a weird, seemingly arbitrary distinction between "Settings" (new) and the old Control Panel way of getting to system stuff, but there are some right-click shortcuts off the Start button that still let you get to most things "the old way".
 
Nothing against Linux by any means...If I didn't have to live in the Windows universe for work, I'd seriously consider the switch.
 
Jul 6, 2016 at 3:19 AM Post #738 of 14,566
 
 
Totally agree that the Win 8 UI was a disaster, but all of that is more or less fixed in Win 10. There's still a weird, seemingly arbitrary distinction between "Settings" (new) and the old Control Panel way of getting to system stuff, but there are some right-click shortcuts off the Start button that still let you get to most things "the old way".


On top of the Windows 8 UI experience, I've been hearing horror stories about the Windows 10 upgrade process (I'm disabling all Windows updates on all my machines until I take the time to figure out this mess):
http://www.pcworld.com/article/3073457/windows/how-microsofts-nasty-new-windows-10-pop-up-tricks-you-into-upgrading.html
 
So yeah, that's the death knell for me as far as Windows is concerned. I'm keeping it around only for the occasional program that I need for work and that isn't available on Linux.
 
Jul 6, 2016 at 4:07 AM Post #739 of 14,566
 
On top of the Windows 8 UI experience, I've been hearing horror stories about the Windows 10 upgrade process (I'm disabling all Windows updates on all my machines until I take the time to figure out this mess):
http://www.pcworld.com/article/3073457/windows/how-microsofts-nasty-new-windows-10-pop-up-tricks-you-into-upgrading.html
 
So yeah, that's the death knell for me as far as Windows is concerned. I'm keeping it around only for the occasional program that I need for work and that isn't available on Linux.

I hear ya. Windows (albeit Windows Server) is how I make my living, so I will be keeping it around for a while :) 
 
Jul 7, 2016 at 11:02 AM Post #740 of 14,566
Having said that, "when your media player plays the audio in your FLAC file, it is essentially decoding the FLAC data to a PCM format prior to sending that PCM data to the sound card (or DAC). It will decompress it to the exact same data that went in; so if 16-bit 44.1 KHz PCM data went in, that's what'll come out, and go to your speakers."

That's why it's called "lossless".
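The "lossless" round-trip is easy to demonstrate with any lossless codec. Here's a minimal sketch of my own in Python, using zlib as a stand-in for FLAC (the standard library has no FLAC decoder) on synthetic 16-bit PCM samples:

```python
import struct
import zlib

# Synthetic 16-bit little-endian PCM samples standing in for audio data.
pcm = struct.pack("<1000h", *range(1000))

compressed = zlib.compress(pcm)        # "encode": smaller than the input
decoded = zlib.decompress(compressed)  # "decode": back to raw PCM

# Lossless means the decoded bytes are bit-identical to what went in.
assert decoded == pcm
```

FLAC works the same way conceptually: the decoder's output is guaranteed to be bit-identical to the PCM that was originally encoded.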


True, but it takes processing power to decode that file in real time and to provide a suitable stream over USB. Something is certainly going on in that process, and FLAC might just be easier to unpack than ALAC within Foobar, or in general, but I don't know. All I know is that the pure WAV file sounds just a hint more in the positive direction.

BTW - I almost always use ASIO when I can. I recently discovered Kernel Streaming, and I must say it is only a slight difference over ASIO with Windows 10. If you are using any version of Windows, please use ASIO or WASAPI. One of my beefs with the streaming services is that they do not allow connection via ASIO or WASAPI on Windows machines, only DirectSound. What a gyp for browser-only hi-fi streaming from places like Tidal.

(Update: Just looked and you now can through the app if you have JRiver Media Center)
 
Jul 7, 2016 at 11:10 AM Post #741 of 14,566
True, but it takes processing power to decode that file in real time and to provide a suitable stream over USB. 

 
It takes a laughable (read: insignificant) amount of computing power to decode FLAC. It's been known to run on 74 MHz ARM cores with room to spare.
 
Jul 7, 2016 at 12:07 PM Post #742 of 14,566
Don't care. I am not claiming it puts a strain on the processor. Just saying it has to be processed, which means stuff has to fly around. The WAV file can just be loaded into RAM and sent along its merry way. I am not sure what happens while unpacking the FLAC in real time, and in theory everything should be the same, just like all USB chips should be relatively the same, but there is a little something happening.

Well...this was also with low buffers. I have now increased my buffer in Foobar enough to unpack the entire song into RAM, and I will run a test again. There is a very advanced setting in Foobar to do this, but it can be done. For those that use Foobar:

Preferences -> Advanced -> Playback -> Full File buffering up to (kb): 2000000

That is basically 2 GB. Overboard, but this setting is the largest file size that will be fully buffered in RAM for any particular file. I realize that 2 GB is about three hours and fifteen minutes of 16/44.1 audio, but only about half an hour of 24-bit/192 kHz audio. So it is just an added insurance policy. I haven't really tested since I made this change, so I will have to test again.
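For what it's worth, those durations are just data-rate arithmetic. A back-of-the-envelope sketch of my own (assumes stereo PCM and treats the Foobar value as kB):

```python
def pcm_bytes_per_second(sample_rate_hz, bit_depth, channels=2):
    """Data rate of uncompressed PCM audio in bytes per second."""
    return sample_rate_hz * channels * bit_depth // 8

BUFFER_BYTES = 2_000_000 * 1024  # the 2,000,000 kB Foobar setting, ~2 GB

for bits, rate in [(16, 44_100), (24, 192_000)]:
    minutes = BUFFER_BYTES / pcm_bytes_per_second(rate, bits) / 60
    print(f"{bits}-bit/{rate} Hz stereo: about {minutes:.0f} minutes fit in 2 GB")
```

That works out to roughly three and a quarter hours for 16/44.1 and about half an hour for 24/192, in line with the figures above.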
 
Jul 7, 2016 at 12:07 PM Post #743 of 14,566
Hmmm...regardless of the PC's processing power, doesn't a well-designed (asynchronous) DAC drive the communication process and have its own buffers and clock management?
 
Jul 7, 2016 at 5:48 PM Post #744 of 14,566
Hmmm...regardless of the PC's processing power, doesn't a well-designed (asynchronous) DAC drive the communication process and have its own buffers and clock management?


I will only defer to Mike's posts on this issue in this very thread:


On Windoze, Macs, and Linux - http://www.head-fi.org/t/784471/what-a-long-strange-trip-its-been-robert-hunter/75#post_12132242

On WAV and FLAC - http://www.head-fi.org/t/784471/what-a-long-strange-trip-its-been-robert-hunter/120#post_12202483

On XMOS chips - http://www.head-fi.org/t/784471/what-a-long-strange-trip-its-been-robert-hunter/180#post_12213457

On Science and Cognitive Neurology - http://www.head-fi.org/t/784471/what-a-long-strange-trip-its-been-robert-hunter/180#post_12213457


So to quote Mike, "As a bonus, if you are not farting around with compression/decompression on the fly (Why??) I attest it to be better performing and sounding than many later, far more expensive Macs. [SIC] it is not my purpose here to make any tech arguments."
 
Jul 7, 2016 at 8:46 PM Post #745 of 14,566
Yeah...I've read his posts. I'm actually looking for a logical explanation...not just "it sounds better".

My limited understanding is that lossless into a well-designed, asynchronous DAC shouldn't be impacted by the PC...assuming it's feeding the DAC a bit-perfect stream when requested by the DAC. If I'm wrong on this, I'd truly like to understand why.

Don't get me wrong, I buy into hardware/software differences on DACs (own a Bimby). That said, I thought the whole purpose of designing asynchronous DACs was to eliminate the PC issues.
 
Jul 7, 2016 at 9:26 PM Post #746 of 14,566
My limited understanding is that lossless into a well-designed, asynchronous DAC shouldn't be impacted by the PC...assuming it's feeding the DAC a bit-perfect stream when requested by the DAC. If I'm wrong on this, I'd truly like to understand why.

 
If you don't hear the difference - just enjoy the music and don't think too much about it; don't let the neurosis ruin it for you. Plenty of people in this hobby are affected by it.
 
Jul 8, 2016 at 1:32 AM Post #747 of 14,566
My limited understanding is that lossless into a well-designed, asynchronous DAC shouldn't be impacted by the PC...assuming it's feeding the DAC a bit-perfect stream when requested by the DAC. If I'm wrong on this, I'd truly like to understand why.
 

The bit-perfect stream is necessary but not sufficient. There is other stuff going on when streaming audio bits to a DAC in real time (followed by immediate conversion to analogue signals), such as timing errors (i.e. jitter) and electrical noise. What's more, neither SPDIF nor USB audio implements any kind of error-correction mechanism, so there is no guarantee that the bit stream arrives at the DAC intact. As for "farting around with compression/decompression on the fly", one theory is that the increase in processing, even if tiny, leads to a subtle increase in electrical noise that feeds into the analogue signal, which your brain may somehow detect. (Of course, some will argue that we're talking about really small levels of noise.)

Here's a nice starter on "bits are bits":
http://www.head-fi.org/t/766347/schiit-yggdrasil-impressions-thread/2220#post_12453511

While there is obviously a lot of neurosis running around, don't let yourself be bullied into the "no diff" neurosis either, and simply trust your perceptions. And if you don't hear a diff, just enjoy the music...
 
Jul 8, 2016 at 7:40 AM Post #748 of 14,566
But if you magnify everything enough, you will see an impact on performance at one point or another. Unplugging my mouse may improve the signal; removing the drivers did have an effect (but then again, Razer drivers brought so many problems to my computer). Underclocking my CPU and removing one of the fans may also help, who knows? Why not lower the screen resolution so that the GPU doesn't have to work as much? And remove the extra hard disk that certainly generates some manner of noise somewhere? Obviously we should also use a dedicated computer and do nothing else while playing music: no Ethernet, no Wi-Fi, etc.
Do I also stop moving or reading because the resolution of my listening drops from lack of concentration?
All of those things may measurably, and at times audibly, improve the sound, so where do we draw the line and decide this is enough?
 
To me it's rather simple: I don't even bother about audibility unless it's clearly annoying. In general, convenience will be my judge, and only if two options are equally practical/nice to use will I linger on sound quality, and mostly on audible background noise. So of course, having a file good for 85 dB vs 88 dB is a problem foreign to the realm of my life. As you can see, I decided to draw a line I could reach without effort ^_^. Lazy people of all countries, unite and let's ... not do too much!

 
Jul 8, 2016 at 7:57 AM Post #749 of 14,566
Thanks, Landroni. I definitely won't be bullied into the "no-diff/sound science" camp. If that were the case, I wouldn't own some of the equipment I own (e.g. multibit DACs and tube amps).

Regarding the electrical noise thing, I get the concept and have dealt with this issue in the past, but I don't currently have those issues to any discernible degree in my systems. Even so, I've given thought to buying a Wyrd to see if it could make an audible difference; for $100, why not? Hmmm, maybe some Nordost cables would help as well?! :wink:

Regarding timing/jitter errors, isn't this what 'asynchronous' is supposed to fix -- assuming, of course, it's well implemented?

Now, on error correction...this actually sounds like something where there could be discernible differences. That said, it should also be easily measurable (bits in = bits out) between brands/models of PCs/components. If differences did show up, it would make sense to choose a specific PC brand/component.
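In principle, that measurement is just a checksum comparison between what the player sends and what arrives at the other end. A trivial sketch of my own of the idea (the two buffers here are stand-ins for a real capture before and after transfer):

```python
import hashlib

def fingerprint(pcm_bytes):
    # Content hash of the raw PCM; identical streams yield identical digests.
    return hashlib.sha256(pcm_bytes).hexdigest()

sent = bytes(range(256)) * 4   # stand-in for PCM pushed out over USB
received = bytes(sent)         # stand-in for a capture at the receiving end

bit_perfect = fingerprint(sent) == fingerprint(received)
print("bit-perfect" if bit_perfect else "corrupted")
```

Any single flipped bit in the received buffer would change the digest, so matching digests mean a bit-perfect transfer.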


That said, my understanding is that many DACs have bi-directional communication and error correction built in to ensure that uncorrupted, bit-perfect data transfer does occur. Some manufacturers, including Schiit, tout this capability...

"Advanced Clock Management, USB Input Standard -- Bifrost uses a sophisticated master clock management system to deliver bit-perfect data to the DAC—unlike many DACs that use asynchronous sample rate conversion (ASRC), which destroys the original samples. And, with our acclaimed Gen 2 USB input now standard, you’re ready for computer, tablet, and even phone-based sources."

If Schiit's statement is true, doesn't it really boil down to the DAC/amp combo at that point -- not the PC (assuming electrical noise isn't an issue)?


On the 'trust my ears' front, I'd buy a high-end set of Beats if I thought they sounded better than my other headphones. :eek:
 
Jul 8, 2016 at 6:42 PM Post #750 of 14,566
snip  
To me it's rather simple: I don't even bother about audibility unless it's clearly annoying. In general, convenience will be my judge, and only if two options are equally practical/nice to use will I linger on sound quality, and mostly on audible background noise. So of course, having a file good for 85 dB vs 88 dB is a problem foreign to the realm of my life. As you can see, I decided to draw a line I could reach without effort ^_^. Lazy people of all countries, unite and let's ... not do too much!

Emphasis mine.
 
This is the primary difference right here…
 
We (meaning us audiophools) aren't looking at improvements in audibility based primarily upon annoyances.
Rather we are looking for improvements for their own sake.
And this road is by definition a more difficult path to take.
 
And for us convenience, and ease of use will always take a back seat to striving for 'Better' SQ.
 
This is a case where our priorities are markedly different, and as such, the advice we use to make our systems 'better' will be different from the advice you provide to meet your goals.
 
Does this help to explain why anyone's advice is never going to be universal, given differing goals and differing notions of what makes a system 'better'?
 
JJ
 
