Foobar v Winamp
Nov 3, 2003 at 10:48 PM Thread Starter Post #1 of 28

pbirkett
Headphoneus Supremus · Joined Jun 12, 2002 · Posts: 3,239 · Likes: 54
Everyone seems to say Foobar sounds better than Winamp. I've tried both and can't really tell the difference. More to the point, the Foobar FAQ says this:

Quote:

Does foobar2000 sound better than other players?
No. Most of "sound quality differences" people "hear" are placebo effect (at least with real music), as actual differences in produced sound data are below their noise floor (1 or 2 last bits in 16bit samples). Foobar2000 has sound processing features such as software resampling or 24bit output on new high-end soundcards, but most of other mainstream players are capable of doing the same by now.


Discuss!
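As a sanity check on the FAQ's "1 or 2 last bits" claim, here's a minimal Python sketch (mine, not from the FAQ) of where the bottom bits of a 16-bit sample sit relative to full scale:

```python
import math

def lsb_level_db(total_bits: int, lsbs: int) -> float:
    """Level of the lowest `lsbs` bits relative to full scale, in dB.

    Each bit is worth roughly 6 dB of dynamic range, so the bottom
    1-2 bits of a 16-bit sample sit around -90 to -84 dBFS.
    """
    return -20 * math.log10(2 ** (total_bits - lsbs))

print(round(lsb_level_db(16, 1), 1))  # -90.3 dBFS
print(round(lsb_level_db(16, 2), 1))  # -84.3 dBFS
```

Whether -84 to -90 dBFS counts as "below the noise floor" is exactly what the rest of this thread argues about.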
 
Nov 3, 2003 at 11:14 PM Post #2 of 28
I use Foobar since it is very customizable, and uses fewer resources than any other player I've used. I just got sick of Winamp after a while (having come from MusicMatch), and Foobar was recommended to me. I haven't used another player since. The only thing I miss is making custom Winamp skins, but that was really just a way for me to waste time.
 
Nov 3, 2003 at 11:15 PM Post #3 of 28
The only way Foobar2000 would sound better would be if it's doing something another player isn't doing but should be.

For example, if Foobar2000 is using kernel streaming or ASIO to avoid Windows messing with the audio, but WinAMP is using a plugin that uses DirectSound or WaveOut -- thus the sound is altered before it reaches the card.

I don't think there's much to discuss beyond that... it's been pretty well established that there's not much value in software upsampling (e.g. 16-bit to 24-bit, or 44.1 kHz to 48 kHz) unless you're doing it (effectively, with dither) to keep the soundcard or Windows drivers from doing it crappily.

BTW, I briefly used the MAD plugin with WinAMP to get 24-bit output (with my 24-bit sound card) and really didn't hear any differences worth noting -- remember, the files themselves are all 16-bit, so what's the point really? Maybe there was a very slight improvement (or just a slight difference that wasn't an improvement), but eventually I gave up on MAD after it refused to play a few of my files (nothing but silence) that the WinAMP decoder handled just fine.
 
Nov 3, 2003 at 11:56 PM Post #4 of 28
Yeah, the only differences are in the quality of the pipeline, the types of processing you add, and the quality of the decoders.

A 64-bit floating-point pipeline is more "accurate". I like the anti-clipping features of Foobar. MAD is kind of old, no?

The compressed files aren't 16-bit anymore, so decoding to 24-bit still gets you closer to the original than a 16-bit decode. I don't really encode to MP3 unless it's going to my portable. I find doing a 16-bit -> 24-bit conversion in CoolEdit and compressing to Musepack actually sounds better to me than a straight 16-bit conversion.
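The point above can be sketched in Python: a decoded lossy stream is effectively a high-precision value, and a 24-bit grid sits closer to it than a 16-bit one. This is my own toy illustration, and the sample value is made up:

```python
def quantize(x: float, bits: int) -> float:
    """Round a full-scale float in [-1, 1) to the nearest step
    of a `bits`-bit grid."""
    steps = 2 ** (bits - 1)
    return round(x * steps) / steps

# Pretend this high-precision value came out of an MP3/Musepack decoder;
# it almost never lands exactly on the 16-bit grid.
decoded = 0.123456789

err16 = abs(decoded - quantize(decoded, 16))
err24 = abs(decoded - quantize(decoded, 24))
assert err24 < err16  # the 24-bit grid preserves more of the decoder's output
```

Of course, whether that residual error is audible is a separate question from whether it exists.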
 
Nov 3, 2003 at 11:57 PM Post #5 of 28
Quote:

Originally posted by pbirkett
No. Most of "sound quality differences" people "hear" are placebo effect (at least with real music), as actual differences in produced sound data are below their noise floor (1 or 2 last bits in 16bit samples).


I use Foobar myself. That said, the developer of Foobar has been peddling this bogus line for quite some time. It's been demonstrated that some other players do indeed corrupt the last 1 or 2 bits in 16-bit samples. The developer of Foobar claims this is below the noise floor of a decent system, but that's simply not true; if the last 2 bits get corrupted you're looking at about a 91 dB noise floor. Most decent sound cards measure better than that, and external DACs almost certainly do.

Moreover -- and this is the critical part -- almost all modern DACs use noise shaping to extend 16-bit recordings to the equivalent of 18-bits or more. (For a technical discussion of this, visit Analog Devices' website and check out their technical article "How Many Bits are Enough?") When you corrupt the last 2 bits of 16-bit recordings, you also prevent the noise shaping from working correctly. Thus, you're really hobbling your hardware if you don't insist on proper, accurate playback.

The developer of Foobar is a software guy, not a hardware or signal processing guy, and his claims there should be taken with a grain of salt.

That said, Foobar is very well-written, and if your drivers can do bit-perfect playback, Foobar will do it for you. In some cases, Foobar's support for Kernel Streaming can be used to coax a card that's not bit-perfect into generating proper output. The developer of Foobar himself was initially skeptical of this, until people on HydrogenAudio and AVSForum demonstrated this in a measurable way, and he retracted his claims that it probably never made a difference. (It is true that in many cases, however, Kernel Streaming makes no difference.)
 
Nov 4, 2003 at 12:05 AM Post #6 of 28
Quote:

Originally posted by MirandaX
I use Foobar myself. That said, the developer of Foobar has been peddling this bogus line for quite some time. It's been demonstrated that some other players do indeed corrupt the last 1 or 2 bits in 16-bit samples. The developer of Foobar claims this is below the noise floor of a decent system, but that's simply not true; if the last 2 bits get corrupted you're looking at about a 91 dB noise floor. Most decent sound cards measure better than that, and external DACs almost certainly do.



The noise floor of CD (the full 16 bits) is -96dB, and it's very rare that much of that dynamic range is even used. I guess if you're concerned about losing that 5dB of signal down there it's worth it... provided you listen to your equipment at earsplitting volume, because otherwise it's below the noise floor of the human ear. -96dB is probably quieter than the sound of blood rushing through your ears.
Quote:


Moreover -- and this is the critical part -- almost all modern DACs use noise shaping to extend 16-bit recordings to the equivalent of 18-bits or more. (For a technical discussion of this, visit Analog Devices' website and check out their technical article "How Many Bits are Enough?") When you corrupt the last 2 bits of 16-bit recordings, you also prevent the noise shaping from working correctly. Thus, you're really hobbling your hardware if you don't insist on proper, accurate playback.



Wouldn't a DAC still noise shape whatever's fed to it, regardless? You're talking about feeding a digital signal to a DAC, how exactly would "corrupting" the last 2 bits prevent the DAC from doing the noise shaping properly? Seems to me the noise shaping or dithering would take place down around those last 2 bits (i.e. at a very low volume) and replace whatever was in those bits in the first place.

BTW, if you wanted to prove something to the author of Foobar, he'd probably accept one of those tests that are unmentionable around these parts.
 
Nov 4, 2003 at 12:28 AM Post #7 of 28
Quote:

Originally posted by fewtch

The noise floor of CD (the full 16 bits) is -96dB, and it's very rare that much of that dynamic range is even used. I guess if you're concerned about losing that 5dB of signal down there it's worth it... provided you listen to your equipment at earsplitting volume, because otherwise it's below the noise floor of the human ear. -96dB is probably quieter than the sound of blood rushing through your ears.


It's a common misconception that the noise floor of CDs is 96dB. It's actually 98.1dB, for reasons that are complicated to explain (for a discussion, see the following technical note from Analog Devices):
http://www.analog.com/UploadedFiles/...3938AN-327.pdf

The dynamic range represented by the encoded bits is important to preserve even if you're not listening at loud volumes -- quantization noise is relative to the current signal level, not absolute. If you have 6dB of quantization noise in the trailing bits, that's noise sitting just 6dB below whatever volume you're currently listening at.

And no, even in absolute terms, -96dB is still much higher than the sound of blood rushing through your ears.

Quote:


Wouldn't a DAC still noise shape whatever's fed to it, regardless? You're talking about feeding a digital signal to a DAC, how exactly would "corrupting" the last 2 bits prevent the DAC from doing the noise shaping properly? Seems to me the noise shaping or dithering would take place down around those last 2 bits (i.e. at a very low volume) and replace whatever was in those bits in the first place.


The DAC has no way of knowing that those last two bits are bad, and it certainly can't replace "whatever was in those bits in the first place." Think about it this way (simplistic analogy, but effective): noise shaping is a lot like predicting the stock market. It's hard to predict what the price of a stock will be tomorrow, even with perfect information, but if your last two days of information is bogus, it's even harder to predict tomorrow's price.
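For reference, the 98.1dB figure above falls out of the standard formula for an ideal N-bit quantizer driven by a full-scale sine wave, SNR = 6.02*N + 1.76 dB. A one-line sketch in Python (my own, not taken from the AD note):

```python
def ideal_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer with a full-scale
    sine wave input: 6.02*N + 1.76 dB (the standard textbook formula)."""
    return 6.02 * bits + 1.76

print(round(ideal_snr_db(16), 1))  # 98.1 dB -- not the oft-quoted 96 dB
```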
 
Nov 4, 2003 at 12:33 AM Post #8 of 28
Quote:

Originally posted by MirandaX
The DAC has no way of knowing that those last two bits are bad, and it certainly can't replace "whatever was in those bits in the first place." Think about it this way (simplistic analogy, but effective): noise shaping is a lot like predicting the stock market. It's hard to predict what the price of a stock will be tomorrow, even with perfect information, but if your last two days of information is bogus, it's even harder to predict tomorrow's price.


Errm... not sure you understand the nature of digital here. There's no such thing as "bad" bits -- a bit is either on (1) or off (0). Whatever is fed to a DAC will get noise-shaped/dithered, including any "corrupted" bits (which would consist of noise rather than music).

To put it another way -- if you fed pure white noise to a DAC (which would be just random data in all 16 bits), it would still get dithered or noise shaped before being converted to analog. Same thing if you fed digital silence to a DAC (00000000 00000000 in binary): it would get dithered or noise shaped as well. The data sent to a DAC doesn't affect how it behaves -- it's a hardware device.

The noise shaping/dithering is usually very quiet, so whatever was in those lower bits will likely be replaced by shaped dither noise anyway. That's what I was saying.
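To make that concrete, here's a toy first-order noise-shaped quantizer in Python (a simplified sketch of the general technique, not any DAC's actual algorithm). It processes whatever values it's given, music or noise alike:

```python
def quantize_with_shaping(samples, bits=16):
    """First-order noise-shaped quantizer: feed each sample's rounding
    error forward into the next sample before rounding. A toy model of
    a dithering/noise-shaping stage -- it operates on whatever values
    arrive, 'corrupted' low bits included."""
    steps = 2 ** (bits - 1)
    out, err = [], 0.0
    for x in samples:
        shaped = x + err               # push the previous error into this sample
        q = round(shaped * steps) / steps
        err = shaped - q               # error to feed forward
        out.append(q)
    return out

# Works the same on arbitrary input -- there are no "bad" bits to reject:
vals = quantize_with_shaping([0.3, -0.7, 0.123456])
assert all(v == round(v * 32768) / 32768 for v in vals)  # all on the 16-bit grid
```

The error-feedback loop is what pushes the quantization noise away from the audible band; it never inspects where its input came from.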
 
Nov 4, 2003 at 12:36 AM Post #9 of 28
Hmm... interesting. I hear much difference between Winamp and Foobar: Foobar has a clearer sound, and Winamp just sounds dull, with or without the equalizer.
 
Nov 4, 2003 at 1:06 AM Post #10 of 28
Hi, I just switched from Winamp to Foobar. There seem to be quite a lot of options, but I don't know that much about the computer settings. I just know that my sound card outputs 16 bits. So should I stick to 16-bit fixed-point output?

Is there a quick and dirty way of customizing Foobar to reach the best sound quality?

Also, which DSPs should I use? Right now I am using volume control, simply surround, cross feeder, and cross fade.
 
Nov 4, 2003 at 1:27 AM Post #11 of 28
Quote:

Originally posted by fewtch
Errm... not sure you understand the nature of digital here. There's no such thing as "bad" bits -- a bit is either on (1) or off (0). Whatever is fed to a DAC will get noise-shaped/dithered, including any "corrupted" bits (which would consist of noise rather than music).


Dude, I've taken Dr. Atlas' digital signal processing class at the UW in Seattle -- I know digital. Corrupted bits make correct noise shaping impossible, period.
 
Nov 4, 2003 at 1:32 AM Post #12 of 28
Quote:

Originally posted by blueice
Hi I just switched from winamp to foobar. There seems to be quite a lot of options, but I dont know that much about the computer settings. I just know that my sound card outputs 16 bits. So should I stick to 16 bit fixpoint output?


Yes, definitely.

Quote:

Is there a quick and dirty way of customizing the foobar to reach best sound quality?


The first thing you can do is switch the output to kernel streaming. This may or may not do anything depending on your sound card or drivers, but it's the most conservative choice. (Personally, I've measured both my systems and know that they're each bit-perfect using both DirectSound and Kernel Streaming, so I use DS to save CPU time, but if I didn't know that, I would use Kernel Streaming.)

Quote:

Also which DSP should I use? Right now I am using volume control, simply surround, cross feeder, cross fade.


If you're interested in maximum sound quality and have an external amplifier with a volume pot, you should disable all the DSPs. If you don't have an external amp, just use the volume control. You can experiment with the others, of course, but minimalism is the path to bit-perfect output with Foobar.
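To illustrate why disabling DSPs is the path to bit-perfect output, here's a toy Python sketch (my own; the sample values are made up): even a plain software volume control re-quantizes every sample, so at anything but unity gain the output bits no longer match the source bits.

```python
def apply_volume(sample: int, gain: float) -> int:
    """Scale a 16-bit integer sample by `gain` and round back onto the
    16-bit grid, as any software volume DSP must."""
    return max(-32768, min(32767, round(sample * gain)))

original = [12345, -20000, 1, 32767]
processed = [apply_volume(s, 0.9) for s in original]

assert processed != original                                  # bits altered by the DSP
assert [apply_volume(s, 1.0) for s in original] == original   # unity gain passes through
```

Hence the advice above: with an external amp handling volume, leave the digital samples untouched.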
 
Nov 4, 2003 at 1:36 AM Post #13 of 28
If you have 4.1 speakers, "convert stereo to 4 channels" is very useful.
 
Nov 4, 2003 at 1:36 AM Post #15 of 28
Quote:

Originally posted by MirandaX
Dude, I've taken Dr. Atlas' digital signal processing class at the UW in Seattle -- I know digital. Corrupted bits make correct noise shaping impossible, period.


The noise shaping (if it's predictive) will be correct for whatever signal is fed to the DAC... period. If it isn't predictive or is just simple dither noise (hiss) added, it's the same thing.

The DAC doesn't care if it's getting music, tones, random data or whatever -- it will convert it to analog in whatever form it's in, and do so properly. Ask Dr. Atlas if I'm right or wrong, if you like... maybe you have his Email address.
Quote:

Originally posted by MirandaX
The first thing you can do is switch the output to kernel streaming.


Not under Win9x, you can't. For Win9x OSes, use WaveOut with the proper device-driver setting (output port) for your soundcard.
 
