Mac OS X Music Players - alternatives to iTunes
Mar 1, 2011 at 2:15 AM Post #286 of 3,495
BTW, I chose the impulse as a test signal because it is theoretically impossible to reproduce this signal as soon as any analog stage is involved, and for most digital (non-IIR) filters this is also the case. In theory this impulse needs a bandwidth of many MHz to reproduce. But as long as you stay in the digital domain it is the same as transporting any other form of data: it should come back exactly as is.
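For anyone who wants to generate such an impulse file outside Audacity, here is a minimal sketch using Python's standard-library wave module; the file name, length and impulse position are arbitrary choices of mine, not part of the original test:

import struct
import wave

SAMPLE_RATE = 96000            # 96 kHz, matching the test described above
NUM_SAMPLES = SAMPLE_RATE      # one second of audio
IMPULSE_POS = NUM_SAMPLES // 2 # put the single full-scale sample in the middle
FULL_SCALE = 2**23 - 1         # maximum positive value for signed 24-bit PCM

with wave.open("impulse_24_96.wav", "wb") as wav:   # hypothetical file name
    wav.setnchannels(1)        # mono
    wav.setsampwidth(3)        # 3 bytes per sample = 24-bit
    wav.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for i in range(NUM_SAMPLES):
        value = FULL_SCALE if i == IMPULSE_POS else 0
        # little-endian signed 32-bit, keep the low 3 bytes for 24-bit PCM
        frames += struct.pack("<i", value)[:3]
    wav.writeframes(bytes(frames))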
 
If I can find the time I will do some more experiments this weekend:
- test with a different waveform (square or stepped shape)
- test by using the optical out of my macbook, hence allowing 'exclusive mode' for some players
 
Any suggestions on how to improve these measurements are welcome!
 
And please DO try this at home

(Audacity is freeware)
 
Mar 1, 2011 at 2:30 AM Post #287 of 3,495
Here's another bit from a VERY long thread... One of the posters there contacted Rob at Channel D to discuss integer/float and bit-perfect operation, as well as resource consumption.

Submitted by Lars on Wed, 03/17/2010 - 14:00.

I would like to share Rob's discussion about floating point and hog mode, which you should find very interesting. Also, with the next release of Pure Music, the activation code for Pure Vinyl should also work with this program, for the impatient guys like myself.

Rob's e-mail to me:

I think there is some confusion regarding "floating point conversion"

Rob Robinson


This was very informative. Thanks!!!
 
Mar 1, 2011 at 3:33 AM Post #288 of 3,495


Quote:
 
You are right! That's why I used a completely digital signal path and a synthesized file, and why I first tested whether any of the other components affected the signal by not being bit perfect: to control the measurement variables.
The fact that every measurement with Audacity gives a sample/bit-accurate signal for the entire chain (software / interface out / interface in / software) is proof that my method is valid. 
 

 
agreed, your method is fine. even with an ADC it is fine, as it serves as a basis of comparison.
 
 
 
Mar 1, 2011 at 3:47 AM Post #290 of 3,495
captured some sections with a capture program as each player was playing, as kwkarth suggested, and saved them to AIFF. the source file was a FLAC, 44.1kHz/16-bit, ripped from CD. the 2 players were Fidelia and Cog. the tracks were lined up in Audacity under high magnification, the Cog track was inverted, and then both tracks were exported to WAV.
 
the WAV file was opened in Audacity and a spectrum analysis performed. attached is that analysis. it's not entirely meaningful as the full playback stack was not analyzed, but at the very least it seems to correlate with my ears, as i hear a high-frequency boost on Fidelia compared to Cog.
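for what it's worth, the same comparison could be done numerically instead of by eye. a rough sketch, assuming the two captures were exported as time-aligned 16-bit WAVs (the file names below are made up):

import wave
import numpy as np

def read_capture(path):
    # read a 16-bit PCM WAV capture into an int32 numpy array (interleaved if stereo)
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "this sketch assumes 16-bit captures"
        raw = w.readframes(w.getnframes())
    return np.frombuffer(raw, dtype="<i2").astype(np.int32)

fidelia = read_capture("fidelia_capture.wav")   # hypothetical file names
cog = read_capture("cog_capture.wav")

n = min(len(fidelia), len(cog))     # trim to the common length after alignment
diff = fidelia[:n] - cog[:n]        # the "null" signal: all zeros if the captures match

print("max |difference| in 16-bit steps:", np.abs(diff).max())
print("samples that differ:", np.count_nonzero(diff))

if the players were truly bit-identical the difference would be all zeros; anything else is what the spectrum analysis is picking up.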
 
i have not determined if Cog was/is bit perfect, but i've read reports that it was for CD formats. all processing options were disabled in the players, fwiw.
 
The level is fairly low, so I'm not sure it matches what I'm hearing exactly, but at the same time this does look to me like a digital filter being applied.
 

 
Mar 1, 2011 at 8:07 AM Post #291 of 3,495
I'm trying out Amarra in combination with iTunes, and I notice it changes the sample rate on the fly. My fear is that Amarra is only changing the Audio MIDI Setup settings, meaning the default behavior of iTunes still applies. That would mean that if the output sample rate was 44.1kHz when iTunes started up and a 96kHz file is played, iTunes downsamples first to 44.1kHz, after which Core Audio upsamples back to 96kHz before sending it to the DAC.
Does anyone know if this is the case?
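For what it's worth, here is a rough way to check how destructive such a down-then-up resample would be, as a sketch assuming Python with NumPy/SciPy; the rates and the 30 kHz probe tone are just illustrative, and this is not a claim about what Amarra or iTunes actually do:

import numpy as np
from scipy.signal import resample_poly

fs_hi = 96000
t = np.arange(fs_hi) / fs_hi                  # one second at 96 kHz
tone = np.sin(2 * np.pi * 30000 * t)          # a 30 kHz tone: fits in 96k, not in 44.1k

down = resample_poly(tone, 441, 960)          # the feared hidden downsample to 44.1 kHz
back = resample_poly(down, 960, 441)          # the upsample back to 96 kHz

print("RMS of original tone:    ", np.sqrt(np.mean(tone ** 2)))  # about 0.707
print("RMS after the round trip:", np.sqrt(np.mean(back ** 2)))  # close to zero: the content is gone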
 
Mar 2, 2011 at 11:55 AM Post #293 of 3,495
I ended up back with Decibel.  It was sounding off because Audirvana automatically changes the bit depth to 24 and Decibel sounds best in 16-bit on my setup.  When I tested the different players they all sound close except for Audirvana, so I think it might be doing something with the sound and I'd rather be trying for bit perfect.  I thought about just using iTunes alone, but something is just a little smoother with Decibel.  It's also the only player I haven't had bugs with, and it plays gapless audio perfectly.  And with the remote hack, I can still use my iPhone remote with it.  I'll give Amarra another try whenever the next upgrade comes out, but I think Decibel is going to work out for me.
 
Mar 2, 2011 at 11:30 PM Post #295 of 3,495
I'm back to Decibel as well. I've had Fidelia switching songs too early and having trouble determining the bit-rate of some tracks (and not switching the output as it should). I need to put in a support ticket for it though.
 
Seems Decibel finally has a price: US$33.
 
Mar 3, 2011 at 1:50 AM Post #296 of 3,495
i'm still comparing the newest versions of Audirvana and Decibel, though i'd better decide quickly since Decibel only allows 24 hours of use without paying. but i think Audirvana is in the lead for me.
 
and also, does anyone know why they always go into 24-bit mode, even when i'm playing 16/44.1 redbook files? the sample rate is right in Audio MIDI Setup, but the bit depth isn't. i even tried closing Audirvana/Decibel, going in and manually setting the bit depth, then reopening, and it just switches it back to 24 bits...
 
Mar 3, 2011 at 2:28 AM Post #297 of 3,495
Purchased Decibel.
It seems a fair price for a great player.
 
Mar 3, 2011 at 7:38 AM Post #298 of 3,495

 
Quote:
That's exactly what I did!
 
In fact I started to doubt the bit-perfect theory after being able to hear differences between these players. Other contributors to this topic are right: "there is no jitter inside the computer". So the only way differences can occur is if the software is not 'bit-perfect'.
 
I've built the following test setup: MacBook > TC Electronic Konnekt D24 SPDIF out > SPDIF in > MacBook. 
 
As test material I created an audio file with Audacity that was completely silent except for 1 sample going to maximum value, in 24-bit/96kHz WAV format.
 

 
Then I played AND recorded it with Audacity. It resulted in exactly the same pulse shape. Conclusion: Audacity's play/record cycle is bit perfect.
...
 
So, how come the measurements are so different?
My theory is that as soon as VLC or iTunes is playing, the kernel/AU uses a different digital audio path.
When I closed VLC and/or iTunes and played a piece with Fidelia or Decibel in 'exclusive mode', the system was sort of reset again. When I played the test file, they both provided bit-perfect output.
It is not possible in this setup to test the 'exclusive mode' of these players because Audacity cannot record the input stream anymore (hence 'exclusive').
 
Anyone with an audio interface with SPDIF in AND out can verify this test.
If you need the test file please PM me and I'll mail it.
 
I'm very curious to hear what you think of these measurements.
Have I forgotten something?
Is there something not correct in my setup?
I would be pleased to get your input!

Possible Explanations.
 
Core Audio has a variety of features, all of which can interfere with the sound output if you are looking for 1-to-1 (and 0-to-0 :)) bit perfect output.  I'm not clear if you tested iTunes and VLC running concurrently, but any time hog mode is off, CoreAudio's mixer -- which combines output of multiple programs together before outputting -- can get involved, and that's a whole can of worms.  Heck ANYTHING can be interfering at that point, even something outputting the simplest beep!  That's why hog mode is an important feature.  That alone could explain the screw ups.  (I have more to say on iTunes in a different post.)
 
It may be that iTunes et al. are engaging some additional CoreAudio feature such as the Equalizer, which can be utilized even if you don't see the interface.  One other issue is the constant recommendation to turn System Volume and iTunes Volume to maximum to achieve bit-for-bit output. I believe this is specific to USB, but I'm not sure.  When I use SPDIF output, the System volume is greyed out, but I can still adjust volume in iTunes.  (Decibel does not allow you to adjust volume unless it's enabled in Preferences!)
 
Also remember Core Audio deals in floating point.  Anything that does math on the floating-point samples -- volume applies multiplication, for example -- will introduce error.  Your DAC cannot deal with floating point, so there must be some conversion back to integer before output, and this involves truncation or rounding error if you have done any math or bit manipulation on the samples in Core Audio.  Once again, the Equalizer or mixer can be major culprits here.  One engineer said the conversion to floating point is analogous to lossless compression; that's only true if you don't touch the data after conversion... once you do any math or mix or transform on the floating point data, you are dealing with something analogous to lossy compression.  That means if you're not in hog mode, or if your volume is wrong, you have introduced potential error.
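To make that concrete, here is a toy sketch (plain Python, purely illustrative, not how Core Audio is implemented) showing that the integer-to-float conversion by itself round-trips losslessly, while any gain other than 1.0 changes the values that come back out:

# integer PCM -> float -> (optional gain) -> back to integer PCM
def roundtrip(sample, gain):
    as_float = sample / 32768.0             # 16-bit integer sample to a float in [-1, 1)
    scaled = as_float * gain                # e.g. a volume-control multiply
    return int(round(scaled * 32768.0))     # re-quantize to integer for the DAC

all_values = range(-32768, 32768)
lossless = all(roundtrip(s, 1.0) == s for s in all_values)
altered = sum(1 for s in all_values if roundtrip(s, 0.9) != s)

print("gain 1.0 round-trips every 16-bit value:", lossless)    # True: conversion alone is lossless
print("values changed by a 0.9 gain:", altered)                # most of the 65536 values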
 
On "Bit Perfect."
 
One last comment, this one on "bit perfect."  Here is an oft-cited paper from Computer Audiophile "proving" bit-perfect output.
 
http://www.acourate.com/OperatingSystemsHandlingOfSampleRates.pdf
 
There is one HUGE problem, though.  They never compared any bits!  Comparing waveforms does not prove bit-perfect output.  The only way to do that is to directly compare the bits from the file (losslessly converted to LPCM) to the bits being output.  From what I can tell of your test, you've fallen into the same trap.  Sure, that waveform looks identical in some pictures, but there could still be bits that were lost or flipped and that error correction took care of.
 
I appreciate your work and do not want to diminish it, but for accuracy's sake, you proved they are "bit similar" rather than "bit perfect."  They may actually be bit perfect, but you have to compare the bits, not the graphical representation of the waveform.  Bit perfect is only bit perfect if you compare the digital data bit-for-bit.
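As a rough illustration of what such a bit-for-bit comparison could look like (a Python sketch; the file names are hypothetical and the capture is assumed to be trimmed so it lines up with the source):

import wave

def pcm_frames(path):
    # the actual LPCM bytes of a WAV file, not a picture of its waveform
    with wave.open(path, "rb") as w:
        return w.readframes(w.getnframes())

source = pcm_frames("test_signal_source.wav")    # the file being played, decoded losslessly to WAV
captured = pcm_frames("loopback_capture.wav")    # the digital loopback recording, aligned to the source

if source == captured:
    print("bit perfect: every PCM byte matches")
else:
    mismatch = next((i for i, (a, b) in enumerate(zip(source, captured)) if a != b), None)
    if mismatch is None:
        print(f"lengths differ ({len(source)} vs {len(captured)} bytes) but the overlap matches")
    else:
        print(f"not bit perfect: first differing byte at offset {mismatch}")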
 
-Pie
 
Mar 3, 2011 at 10:29 AM Post #299 of 3,495
One of the Computer Audiophile links I posted did do a bit compare by taking the AES/EBU output. They did validate that iTunes was running bit perfect, at least for 24/96 (if I recall the test correctly).
 
Mar 3, 2011 at 12:14 PM Post #300 of 3,495


Quote:
Possible Explanations.
 
Core Audio has a variety of features, all of which can interfere with the sound output if you are looking for 1-to-1 (and 0-to-0 :)) bit perfect output. ...
 
They may actually be bit perfect, but you have to compare the bits, not the graphical representation of the waveform. Bit perfect is only bit perfect if you compare the digital data bit-for-bit.
 
-Pie


Well done!
 
 
