I use Amarra, originally full version but I have updated several times.
Good article, here's another - http://archimago.blogspot.com/2013/05/measurements-bit-perfect-audiophile.html - in which six pieces of software are tested, including some DSD tests. So, if we can trust that a piece of software claiming to be bit-perfect is in fact bit-perfect... that is, the exact data from your file is going to your DAC unadulterated, the exact same bits making it in the exact right order from point A to point C (despite the detour at point B, the media player)... then where do the discrepancies in sound come from? Jitter is one possibility, expectation bias is another; what other pieces to this puzzle are there? I can't imagine any sort of processing happening prior to hitting the DAC that would change the sound with no change in the data (bit-perfection).
I really don't know what you are looking for in an answer as I don't really spend that much time listening to the equipment.
I fired up Miles Davis "Kind of Blue" in Fidelia tonight, optical out to the E17 docked in the E09K, then Line Out to the tube amp. Miles just sounds amazing on the GMP 8.300 D Pros through a nice Soviet 6Н23П.
Isn't it all about enjoying the music?
What you are saying sounds too reasonable. I am just trying to prove that the difference I am hearing between the music players is real and not imaginary. I am a bit anal about this topic.
I personally enjoy all the players mentioned, but I prefer Audirvana+ and Amarra. Both have a kind of ambiance to the sound, with Amarra actually sounding warmer (more analog) to me. But I think Fidelia and the JRiver player are probably more accurate.
So there, I am commenting about the subjective aspect of all of this.
Bob Graham
PS: I do like Miles Davis "Kind of Blue". IMO Fidelia is a good player, worth every penny.
[snip]
My understanding of jitter is that it is microsecond-scale variation in the sample clock gating the DAC, caused by variability in the hardware clock itself. I was also under the impression that asynchronous USB prevents this, since the clock is referenced on the DAC side rather than the computer side. Assuming the DAC's hardware clock is low jitter, the software has nothing to do with it: it is simply sending a stream of numbers, and those numbers are the same on every playback of the same file regardless of hardware clock jitter.
In comparing the two waveforms, you may want to try using something like MATLAB to calculate the cross-correlation between them over time and see whether the 'jitter' you are referring to is really the difference.
Well, I am an embedded software engineer who has written wire-level drivers for DACs in the past. Typically, between the USB interface and the DAC there is a FIFO buffer of at least 3-4 KB holding the samples, which are then clocked into the DAC by a local hardware clock sitting next to the DAC; that clock has jitter of its own and is not under software control by the host. The only way the host software can have any effect on the DAC output is to starve the buffer and not keep it full. But I wouldn't call that jitter; it is a 'buffer under-run error', which normal hw/sw design avoids at all costs, because you will certainly hear that sort of problem and it would more than piss off any user, not just an audiophile. I could be wrong, but I would expect the USB interface of any decent DAC to have a sufficient buffer to avoid this, considering USB has more than enough bandwidth to completely fill the buffer at a moment's notice at any bit rate, especially with asynchronous USB, where the DAC-side clock is in charge of when data is delivered.
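To make that concrete, here is a minimal Python sketch of the idea (the buffer size and drain rate are made-up constants, not any particular DAC's): the FIFO isolates the DAC clock from host timing, so the only host-visible failure mode is the under-run.

```python
# Toy model of the FIFO between USB and the DAC. The DAC drains samples
# at a fixed rate on its own local clock; host software can only disturb
# the output by letting the buffer run dry (an under-run), not by adding
# "jitter" to the sample clock.
from collections import deque

BUFFER_SIZE = 4096        # ~4 KB FIFO, as described above (assumed value)
SAMPLES_PER_TICK = 48     # DAC drains 48 samples per 1 ms tick (48 kHz)

fifo = deque()

def host_write(samples):
    """Host side: top up the FIFO whenever USB delivers a packet."""
    free = BUFFER_SIZE - len(fifo)
    fifo.extend(samples[:free])   # anything extra waits for the next packet

def dac_tick():
    """DAC side: driven by the local oscillator, not by the host."""
    if len(fifo) < SAMPLES_PER_TICK:
        raise RuntimeError("buffer under-run: audible glitch, not jitter")
    return [fifo.popleft() for _ in range(SAMPLES_PER_TICK)]

# As long as host_write() keeps up, dac_tick() returns the same numbers
# on every playback; the host's timing never reaches the analog output.
host_write([0] * BUFFER_SIZE)
out = dac_tick()
```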
I recommend running a cross-correlation to find out whether the two waveforms are slowly shifting in time. A cross-correlation should have a peak at the number of samples by which the two waveforms are shifted. Take frames of the audio, say 1K samples, run the correlation between them, and plot the result over the length of the track; that should give you a graph of the timing shift in units of samples. If it is flat at zero, there is no 'jitter' at the 1K resolution; then try halving the frame size until you see the jitter. Something like the sketch below.
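Here is a rough Python/NumPy version of that procedure, assuming the two captures are mono recordings at the same sample rate (the file names are just placeholders):

```python
# Frame-wise cross-correlation between two captures of the same track.
import numpy as np
import soundfile as sf   # any WAV reader (e.g. scipy.io.wavfile) works too

a, rate_a = sf.read("capture_player_a.wav")   # hypothetical file names
b, rate_b = sf.read("capture_player_b.wav")
assert rate_a == rate_b

frame = 1024             # start at ~1K samples; halve it to zoom in
lags = []
for start in range(0, min(len(a), len(b)) - frame, frame):
    fa = a[start:start + frame]
    fb = b[start:start + frame]
    xcorr = np.correlate(fa, fb, mode="full")
    # Peak position relative to the center is the shift in samples.
    lags.append(int(np.argmax(xcorr)) - (frame - 1))

# A flat line at zero means no time shift at this frame resolution.
print("per-frame lag (samples):", lags[:10], "...")
```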
I think the only true way to answer this, though, is to snoop the data on the USB bus going to the DAC while using the various players and do a simple binary diff on that data to see whether the streams are identical. I am not convinced that pulling this data via a software redirect gives you the same data that is actually being sent over the bus. You could even use a logic analyzer to grab the USB data and save it.
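Once you have the two captures saved to disk, the diff itself is trivial; a sketch, with hypothetical file names:

```python
# Byte-for-byte comparison of two captured USB audio payloads.
with open("usb_capture_itunes.bin", "rb") as f:
    a = f.read()
with open("usb_capture_jriver.bin", "rb") as f:
    b = f.read()

if a == b:
    print("bit-identical: %d bytes" % len(a))
else:
    # Report where the two streams first diverge.
    n = min(len(a), len(b))
    first = next((i for i in range(n) if a[i] != b[i]), n)
    print("differ: lengths %d vs %d, first mismatch at byte %d"
          % (len(a), len(b), first))
```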
Personally, I heard a distinct difference between iTunes playback and JRiver, and my ears aren't all that great. The difference was in the high frequencies being less sharp with JRiver than with iTunes, and in being able to hear actual texture within the sibilance, if that makes sense: instead of just a hiss, I could hear the true sound of the thing that was hissing. I have to admit now, though, thinking back on my experiment, that I don't think I was only using lossless files, so it could be I was hearing the difference in the codecs used to decode the files. In any case my library is only part lossless and mostly iTunes Match files, so it makes sense to use a player that has a higher-accuracy decoder even if both are 'bit perfect'.
Are you sure the file you were using is in a lossless format? Could it be differences in the codecs reconstructing the audio? If there is a difference in the codecs used to decode the lossy files, I would put money on the table that JRiver has done a better job of retaining precision when decoding, at the expense of compute cycles and memory. Two AAC codecs, for example, can return very different data for the same frame of audio. I used to implement audio codecs as a research assistant for Alan Gersho during my Master's program in Digital Signal Processing at UCSB.
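One easy way to check this is to decode the same lossy track with two different decoders, export each to WAV, and compare the samples. A quick sketch, assuming you have already made the two exports (file names hypothetical):

```python
# Compare the PCM output of two decoders for the same lossy source file.
import numpy as np
import soundfile as sf

x, _ = sf.read("decoded_with_decoder_a.wav", dtype="int16")
y, _ = sf.read("decoded_with_decoder_b.wav", dtype="int16")

n = min(len(x), len(y))
diff = x[:n].astype(np.int32) - y[:n].astype(np.int32)  # avoid overflow
print("max sample difference:", np.abs(diff).max())
# Zero means the decoders agree exactly; for lossy formats like AAC a
# nonzero result is normal, since the spec tolerates decoder variation.
```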