
Bit accurate sound from USB?

post #1 of 32
Thread Starter 

 I'm really new at this.  I have the Headroom Total Bithead and the Aune mini USB/DAC/amp.

 

So I basically can plug and play and not worry about anything with both of those.

 

But how can I be sure I'm getting bit-accurate audio when right now I'm probably not?

 

Is there a FAQ on this?  Maybe a useful older thread?

 

If I install ASIO4ALL, would that do it?  Any other drivers as an option?

 

Thanks!!

 

post #2 of 32

If you use ASIO, WASAPI or Kernel Streaming on a Windows-based machine, I think that'd be a great start. I'm not sure exactly which audio players work with which plug-in... I know Foobar2000 can use all 3. MediaMonkey, Winamp and a few others can use the plug-ins too.

post #3 of 32

What OS do you have?

 

For ASIO, I think you'll probably have to use ASIO4ALL, since you won't have dedicated ASIO drivers for your equipment.

 

As for testing it, I'm not sure there's an easy way for a USB amp/DAC. The usual way is to send a DTS-encoded WAV file through optical to a receiver (or other decoding device), but since I don't think either of yours supports that, you won't be able to do it that way.


post #4 of 32

A quick word to the wise, mate.

 

In theory 'bit perfect' is best, BUT if you are playing modern pop CDs you need to be aware of the effect of intersample peaks.

 

http://www.hometracked.com/2007/11/08/prevent-intersample-peaks/

 

http://www.cadenzarecording.com/papers/Digitaldistortion.pdf

 

You can test this yourself. Play a new CD at unity gain (volume slider on max) and re-record it with something like Audacity (again, all sliders on max). Look at the resulting waveform. It'll quite often show clipping.
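
If you'd rather script that check than eyeball the waveform, here's a rough Python sketch of the same idea: it just counts how many 16-bit samples in the capture are pinned at full scale (the filename is a placeholder, and it assumes plain 16-bit PCM):

    import wave
    import struct

    # Open the re-recorded capture (placeholder name) and pull the raw frames.
    with wave.open("recording.wav", "rb") as f:
        assert f.getsampwidth() == 2, "expects 16-bit PCM"
        raw = f.readframes(f.getnframes())

    # Unpack little-endian signed 16-bit samples.
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)

    # Count samples stuck at full scale (+32767 or -32768). Long runs of
    # these are the flat-topped clipping you'd see in the Audacity waveform.
    clipped = sum(1 for s in samples if s >= 32767 or s <= -32768)
    print("%d of %d samples at full scale (%.3f%%)"
          % (clipped, len(samples), 100.0 * clipped / len(samples)))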

 

ASIO and related drivers were not originally written to move data around 'bit perfect'. They are designed to avoid the Windows kmixer and thus reduce latency, for instance when someone recording a song wants to add comfort reverb while monitoring. Usually when tracking (recording) you do so at well below max amplitude (peaks up to -20dB) to avoid this problem and allow headroom for mixing. Then during the final mastering stage the complete tune is compressed to within an inch of its life to obtain maximum volume.

 

Nothing wrong with using ASIO, KS or WASAPI if you want, but I'd recommend setting the playback device to at least -3dB to be on the safe side and avoid the risk of clipping. Strictly speaking it will no longer be 'bit perfect', but it will sound better.
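
For what it's worth, -3dB in the digital domain just means multiplying every sample by 10^(-3/20), roughly 0.708. A quick sanity check in Python:

    # -3 dB expressed as a linear multiplier: 10**(dB/20)
    gain = 10 ** (-3 / 20)
    print("gain = %.4f" % gain)                      # 0.7079
    print("peak 32767 -> %d" % round(32767 * gain))  # 23197, ~3 dB of headroom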

post #5 of 32

Isn't using ReplayGain (to prevent clipping) still bit perfect?

post #6 of 32

Quote:
Originally Posted by baglunch View Post

Isn't using ReplayGain (to prevent clipping) still bit perfect?

Absolutely not.

post #7 of 32


 

Quote:
Originally Posted by FallenAngel View Post

Absolutely not.
From your terse response I decided to look it up.  According to http://www.replaygain.org/

 

"you can losslessly apply the suggested gain change to your mp3 files"

 

Is this different from bit-perfect?  I'll admit to not understanding.

post #8 of 32

Conceptually, ReplayGain is a digital gain multiplier. Let's say the CD is too loud: you multiply the volume by 0.8x; if it's too quiet, you multiply the gain by 1.2x. The result is obviously not a bit-perfect representation of the original, though it can certainly be lossless, as there should be no resolution lost when the calculation is done at a high enough bit depth (such as 24-bit for a 16-bit CD, which leaves TONS of room for gain adjustment) - but that also means the output stream must be 24-bit instead of 16-bit.
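
To make that concrete, here's a toy example in Python (the sample and gain values are made up, not taken from any real ReplayGain scan):

    sample = 12347        # one 16-bit PCM sample
    gain = 0.75           # pretend ReplayGain says the track is too loud

    exact = sample * gain            # 9260.25 - not a whole number
    as_16_bit = round(exact)         # 9260: the fractional detail is rounded away
    as_24_bit = round(exact * 256)   # 2370624: the fraction survives in the
                                     # 8 extra bits of a 24-bit word
    print(exact, as_16_bit, as_24_bit / 256)   # 9260.25  9260  9260.25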

post #9 of 32

What do you use for volume leveling?  Or do you just not?  I tend to listen to my entire library on Shuffle (unless I'm in the mood to listen to an album intact), so volumes are all over the place.  ReplayGain has been the best solution I've found so far.

post #10 of 32

I simply use the pot on my amp. I've tried ReplayGain before, but I chose not to use it.

post #11 of 32

The samples themselves in effect contain 'volume' information. If you don't play the sample at unity gain, i.e. without changing the volume, then it is no longer strictly speaking 'bit perfect'.

 

So if you don't play the samples exactly as they appear on the original source then they are not bit perfect.

 

That's why bit perfect playback is a red herring. Do you see? It's not an obvious concept so if you are still confused feel free to ask again.

post #12 of 32

I think I got it: it's not bit perfect if the volume adjustment is being done in software, but it is if the volume adjustment is done in hardware.

 

If I had a cheap hardware way of keeping the volume consistent across songs without manually fiddling with the volume every song, I'd look into it.  But ReplayGain sounds transparent to me, so it makes me happy. :)

post #13 of 32
Thread Starter 

One question: right now I use Windows Media Player, but my impression now is that the player I use has to support "bit-accuracy" out of the USB port?

post #14 of 32

 

Quote:
It's not bit perfect if the volume adjustment is being done in software, but it is if the volume adjustment is done in hardware.

 

Now it's my turn to be unsure - but I think you've got it now, though maybe not for the right reasons.

 

1) If you change the volume in the digital domain (software) then it is no longer bit perfect.

 

2) If you change the volume in the analogue domain (a hardware amplifier) then it is in a way bit perfect - provided point 1 above has been met.

 

The real point I am trying to make is that you really shouldn't worry about 'bit perfect' at all.

 

Please let me try and explain. You can help me by telling me whether I have succeeded.

 

Your ears are designed to detect small changes in air pressure over time. Your brain translates this information into sound.

 

Now cast your mind back to high school physics class...

 

If you move a magnet in a coil of wire, this generates a current - a change of voltage over time. This is a microphone.

 

Conversely.

 

If you pass a current - a changing voltage over time - through a coil, this will cause a magnet to move. This is a loudspeaker.

 

So you can now envisage analogue audio not as changes of pressure in the air over time, but as changes of voltage over time in a conductor (a wire).

 

A perfect analogue amplifier increases those voltages evenly. The sound doesn't really change. It just gets louder.

 

There is no such thing as a perfect analogue amplifier. They all add some distortion. We will not cover why now as it's not relevant to the current discussion.

 

An attenuator does the same in reverse: it reduces all the voltages evenly over time. This can be done without adding distortion.

 

Now we get to digital audio. What an analogue-to-digital converter (ADC) does, in effect, is measure the voltage in a wire at very precise and very small time intervals.

 

44.1 thousand times a second in the case of Red Book standard CD. The accuracy is determined by the bit depth; again, for CD this is 16 bits.

 

So the voltage at any one instant becomes a number. In fact it's a pure number - a ratio. 

 

So what was originally a measure of air pressure at one instant in time, and then became a voltage in a wire, can now be envisaged for the purposes of our explanation as a number between 0000000000000000 and 1111111111111111.  You will need to do some further research of your own to fully grasp the details, but for now that will do.
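
If it helps, here's a toy version of that measurement step in Python (purely a sketch; real converters are rather more involved):

    # Turn an instantaneous "voltage", scaled to the range -1.0..+1.0,
    # into one of the 2**16 possible 16-bit codes described above.
    def quantize_16bit(v):
        v = max(-1.0, min(1.0, v))   # clamp to full scale
        return round(v * 32767)      # the ratio, stored as a whole number

    print(quantize_16bit(0.25))      # 8192
    print(quantize_16bit(-1.0))      # -32767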

 

So what a digital amplifier does is scale all the numbers up by a set factor, and a digital attenuator scales them down by a set factor.

 

A digital-to-analogue converter (DAC) does the whole process in reverse: it translates the numbers back into a voltage. The analogue amplifier increases the voltages to your desired listening level. The loudspeaker translates the changing voltages back into changes in air pressure. Your ears measure that, and your brain interprets the result as sound.

 

The real miracle in all this is not the technology - it's your brain. As we can now see (if I've succeeded with my explanation), sound is in effect one-dimensional: almost a series of loudnesses. Frequencies - notes, harmonies, words - are not as such recorded. It's your brain that creates them. Pretty cool, eh? Well done, evolution.

 

So the upshot is: don't worry too much about bit perfect digital audio. Set your media player volume control to about -3dB (about 95% of maximum) and all will be well. All that is happening is that all those huge numbers in the samples are being reduced by a small set amount, and it's being done to an accuracy of better than 1 part in 2 raised to the power of 16. That is well beyond your ability to detect - your brain is clever, but not that clever.
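
A quick back-of-the-envelope check of that last claim, in Python: rounding each attenuated sample costs at most half a step out of 2^16, which relative to full scale works out to roughly -96dB:

    import math

    # Worst-case rounding error after a small digital gain change:
    # half of one 16-bit step, relative to full scale.
    half_step = 0.5 / 32768
    print("%.1f dB" % (20 * math.log10(half_step)))   # about -96.3 dB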


post #15 of 32

Your explanation makes sense (and thanks for all that!), and until this thread I'd thought bit perfect was the grail.  I suppose transparency is the goal, or really: sounds that you like... which for me now is transparency.

 

The only thing that doesn't make sense to me is why the sampling frequency was set to 44.1 kHz.  Sure, it sounds like a lot, but why not sample at a million kHz, or whatever?  It stands to reason that more finely grained sampling would lead to a more accurate representation of the original voltage fluctuations, and it'd surprise me if the technology weren't there nowadays to accomplish this.  It could always be downsampled to 44.1 for compatibility with existing devices that only know how to interpret that, while the higher resolution would be available for newer equipment that could handle it.
