Head-Fi.org › Forums › Equipment Forums › Dedicated Source Components › USB to SPDIF converters shoot-out : EMU 0404 USB vs. Musiland Monitor 01 USD vs. Teralink-x vs. M2Tech hiFace

USB to SPDIF converters shoot-out : EMU 0404 USB vs. Musiland Monitor 01 USD vs. Teralink-x vs.... - Page 19  

post #271 of 1712
Quote:
Originally Posted by rosgr63 View Post
Dan you are a fine gentleman, you make great products, and you have my utmost respect.
You know a lot more than all us participating in this thread put together.
I think some people are saying things that may not be accurate.
Please ignore them as I feel you are wasting your time.
Thanks a lot!
The payback for trying to help and educate jenkey is his attempt to smear my name by saying that I am misleading people. He did so on another thread, where he also claimed to have a link for the Motorola paper I suggested to read. Note that the claimed link is not there, and could not be, since I did not suggest a paper, I suggested a BOOK (soft cover) titled “Designing with MECL Integrated Circuits”. There is much more I can say about his statements, but it would be a waste of time. This kind of “behavior” stinks! I do not know the motivation, but this is sick. I do not wish to be present where the smell is so bad.

I have been contributing by means of lectures and posts and papers. But I choose where to do so. There are many places where an effort to help is appreciated. I appreciate your comment, but yours is a lonely comment. I do not see many other participants trying to take a more active role in fixing the “environment” here.

I am a 64-year-old professional engineer with a proper education and 39 years of design experience. I am not a kindergarten teacher ready to be accepting when some rowdy child decides to throw kaka at the teacher.

Regards
Dan Lavry
post #272 of 1712
OK, anyone posting off topic needs to STOP now. Have some respect for the original poster and other contributors.
post #273 of 1712
Dan,
I think that silence is more eloquent at moments like these than trying to reassure you that we're on your side. I do think that the whole discussion deserves another thread, and honestly, it's above my very poor technical knowledge.

back to the topic (or kind of...):
I agree with tosehee. I like the simple solution, and if I get the performance that I want from one device I'd be very inclined to sell my current setup to accommodate it. It would have to be better than my DAC at the moment and also others that I'm looking at for an upgrade. I'll be using it with my speaker setup, since I don't have headphones and don't intend to go down that road, but this forum still seems to be the best spot to get information about this kind of product.
Dan, could you tell me if you know anyone here in Brazil who, by any chance, has your DA11? Can inputs be changed through the remote?
post #274 of 1712
Dan, I completely agree with you.
I have been reading your and Vinnie Rossi's posts with a lot of interest, great posts, I wish there would be more people like you contributing!
Education is most important, but only to those who want to learn!
I want to learn.
post #275 of 1712
Quote:
Originally Posted by slim.a View Post
I inquired about separate usb to spdif converters because there might be people (like me) who love the sonic signature of their DACs (even if it is not the most transparent) but are looking for a better transport using their computer, and I thought that a company like yours would make competitive products in that segment.

While I understand that integrating everything in one box would be optimal, there are separate usb to spdif converters popping up every day (bel canto, musiland, Stello, wavelengthaudio ...) and I believe that a "pro" company could make a cost-effective, no-nonsense product in that segment of audio gear that many people would be interested in. This is just my point of view on the subject
OK, I understand what you want. I will reply to that, and not leave it hanging.

I like to do what I do best.

I have been making analog to digital and digital to analog converters for many years. My first one was around 35 years ago (the first 8 bit at 100MHz). I made converters for instrumentation, medical, telecom, industrial and so on. Of course, I have been specializing in audio converters for the last 24 years. I chose audio because I am both a musician and an EE (electronic engineer). To do a good job of it, one has to know both analog and digital, and being a musician certainly helps. So I put it all together into making audio converters and some audio analog gear (mic preamps).

USB is a needed function, and so are SPDIF, AES and more. But for me, those are, for the most part, long protocols, hundreds of pages of what bit to stuff in what register in which memory. It needs to be done, but it is not what I find exciting. So much of those protocols have to do with conforming to standards... The first bit of the first byte tells you if the stereo standard is PRO (AES) or Consumer (SPDIF). There is a bit for copy protection, some bits indicating sample rates, bits for word length... even bits to mark the time of day!
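As an illustration of the kind of bookkeeping Lavry is describing, here is a small Python sketch that decodes two of those flags. Only bit 0 (professional vs. consumer) and bit 1 (audio vs. data) are decoded here; the full channel-status map lives in the IEC 60958 / AES3 specifications, so treat this as a simplified editorial illustration rather than a complete decoder:

```python
def decode_channel_status_byte0(byte0: int) -> dict:
    """Decode two flag bits from the first channel-status byte of a
    digital audio stream (simplified; see IEC 60958 / AES3 for the rest)."""
    professional = bool(byte0 & 0x01)  # bit 0: 1 = professional (AES3), 0 = consumer (S/PDIF)
    non_audio = bool(byte0 & 0x02)     # bit 1: 1 = non-PCM data payload, 0 = linear PCM audio
    return {
        "format": "professional (AES3)" if professional else "consumer (S/PDIF)",
        "non_audio": non_audio,
    }

print(decode_channel_status_byte0(0x00))  # consumer, PCM audio
print(decode_channel_status_byte0(0x01))  # professional
```

The remaining hundreds of pages of such protocols define sample-rate codes, word-length codes, copy flags and more, byte by byte.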

It all needs to be done, and I respect folks that do a good job of it, and when needed I too do it. But it is just not my cup of tea. I like music. I play music. I like electronics hardware design. So I do what I like, and I am my most critical customer.

Other than that, as I already mentioned, integration is a good thing, and putting circuits near each other often yields better results. Using one chassis, one supply (and so on) is more cost effective than using multiple chassis and supplies... Avoiding cables and connectors and using a 1-inch solid soldered trace often has much to offer...

I am not at all sure that my stand-alone USB box would be better than another such box. If both units conform to the protocol correctly, the main difference will be about timing: synchronous or asynchronous operation for the USB (my DA11 is asynchronous) and jitter.

When it comes to jitter, most of it is about how well the jitter is being "cleaned out" INSIDE the DA converter, AFTER the USB to spdif conversion.

A USB to SPDIF offering low jitter but driving a poor DA will yield poor results.
A USB to SPDIF with much more jitter, driving a DA with good jitter rejection will yield much better timing, and if the rest of the DA is good, the results will be good (jitter is only one factor in making of converters).

Keep in mind that with a separate USB to SPDIF, there is still a cable issue, leading to the DA. With the cable, one gets more jitter introduced by electromagnetic interference, termination tolerance, separate grounds for the chassis (thus ground currents) and much more (there are at least 5 more factors that come to my mind). Putting the circuits INSIDE the DA, with a proper layout, yields much better jitter outcome.

Regards
Dan Lavry
post #276 of 1712
Thread Starter 
Quote:
Originally Posted by Dan Lavry View Post
OK, I understand what you want. I will reply to that, and not leave it hanging. [...] Putting the circuits INSIDE the DA, with a proper layout, yields much better jitter outcome.


Dan Lavry,

Thanks a lot for your answer. It is very clear and helpful.

I guess I should stop worrying about the transport side (usb to spdif) and save my next upgrade for a DAC with better built-in jitter rejection, which would yield a bigger jump in sound quality.

Regards,
post #277 of 1712
Quote:
Originally Posted by slim.a View Post
I guess I should stop worrying about the transport side (usb to spdif) and save my next upgrade for a DAC with better built-in jitter rejection, which would yield a bigger jump in sound quality.
As I understand it, jitter is really only an issue when it reaches absurd levels. The jitter you get from a decent commercial digital audio device is unlikely to exceed 5ns (apart from one McIntosh music server); this is below the detection threshold for humans at any frequency and with any jitter spectrum, even as correlated jitter. Random jitter is even less detectable.

There is not one single controlled test of jitter audibility anywhere that indicates that jitter in the sub-ns area is at all audible. You can use whatever techniques you like to lower jitter but "real" evidence that jitter is really detectable at such low levels (500ps etc) just does not exist.

You can model jitter audibility until the cows come home, viz Dunn and Hawksford and so on, but whenever it has been put to the test under controlled conditions these theoretical thresholds turn out to be orders of magnitude lower than real world detection thresholds.

It turns out that our discriminative abilities are just not that good here.

Jitter does degrade objective sound quality: you can see it as distortion sidebands or a raising of the noise floor (lowering effective bit depth), depending on its type, but below pathological levels it is just not worth losing sleep over. Bob Adams of Analog Devices suggests it is not even worth measuring most of the time, since it will show up in conventional downstream measures of THD, IMD etc. anyway. You cannot hear jitter until it is in the analog domain, and in fact, if you look at the measurements for CD players in Stereophile, the correlation between other distortion measures and jitter is extremely strong.

But of course if someone has some new non anecdotal evidence on this I will be happy to read it...
post #278 of 1712
Nick.

Take a look at Ayre QB8 thread. Some people are claiming that they can hear some notes that they never heard before.

Whether that's pure placebo or an actual improvement from the reduction in jitter, who knows..
post #279 of 1712
Thread Starter 
Quote:
Originally Posted by nick_charles View Post
There is not one single controlled test of jitter audibility anywhere that indicates that jitter in the sub-ns area is at all audible. You can use whatever techniques you like to lower jitter but "real" evidence that jitter is really detectable at such low levels (500ps etc) just does not exist.

But of course if someone has some new non anecdotal evidence on this I will be happy to read it...
Nick,

You have already stated many times that jitter is not audible below certain levels. You have also stated where you stand on cables. I get it.

This thread is intended for people who believe there are audible differences at those levels. If they don't think so, they shouldn't be reading a shoot-out between different usb to spdif converters.

If you still want to post about what the threshold of audibility is, you should discuss it in the sound science forum.

Regards,
post #280 of 1712
5 nsec of jitter is terrible!

First on a very basic level:

For a 16 bit system, you have a "quantization grid" of 65,536 levels. For a, say, +/- 1V signal, each quantization level is 2V/65536 = 30.5uV (microvolts). The fastest analog signal within a 44.1KHz system is a 22KHz full-scale sine wave; nothing is faster.

Such a signal has its fastest slope when it crosses the midpoint (the zero crossing). At that point, the slope of that +/- 1V signal is about 138,230 volts per second. In other words, each 1nsec on such a slope costs you an error of about 138uV.

But given that each quantization step is 30.5uV, each 1nsec of error is in fact 138uV / 30.5uV = 4.5 quantization levels. Or you can say that a timing error of about 0.22nsec (220 psec - picoseconds) is an error of one quantization level. Of course, that 220psec is for a full-scale sine wave at 22KHz.

If the sine wave is, say, 11KHz, the jitter can be 440psec for one quantization level off. And a signal at a lower level than full scale can also tolerate more timing error before missing one level. At the limit, you have a DC input, where jitter makes no errors at all: whether you are late or early, the DC did not change, so accurate timing is a non-issue.
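The arithmetic above is easy to check in a few lines of Python (an illustrative editorial sketch of the same worst-case numbers: a full-scale +/- 1V, 22KHz sine on a 16 bit grid):

```python
import math

bits = 16
full_scale = 2.0                    # a +/- 1 V signal spans 2 V
lsb = full_scale / 2 ** bits        # one quantization step, ~30.5 uV

freq = 22_000.0                     # fastest signal in a 44.1 kHz system
amplitude = 1.0                     # full scale, +/- 1 V

# Max slope of A*sin(2*pi*f*t) occurs at the zero crossing: A * 2*pi*f
max_slope = amplitude * 2 * math.pi * freq      # ~138,230 V/s

error_per_ns = max_slope * 1e-9                 # volts of error per 1 ns of jitter
levels_per_ns = error_per_ns / lsb              # ~4.5 quantization levels per ns
jitter_for_one_lsb = lsb / max_slope            # ~221 ps for a one-level error

print(f"LSB              = {lsb * 1e6:.1f} uV")
print(f"max slope        = {max_slope:,.0f} V/s")
print(f"levels per ns    = {levels_per_ns:.1f}")
print(f"jitter for 1 LSB = {jitter_for_one_lsb * 1e12:.0f} ps")
```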

On a more complex level:

Some architectures require low jitter to end up with a low noise floor. Jitter is a complicated issue, and there are all sorts of types and issues. There is random jitter, signal-dependent jitter, non-random interference (such as "tones") and more.

But 5nsec is a lot of jitter. It makes your 16 bit machine into a less than 13 bit machine. It will behave much better when playing lower frequencies and levels, but with extremely fast, high-level signals it is less than 13 bits!


The amplitude and frequency content of the music does matter a lot. As a designer, I need to take care of the worst case. When listening, the conclusion has a lot to do with the music. The faster and louder the music signal, the more impact the jitter has. That is good to keep in mind. I can find you a CD where a couple of nsec is no big deal, and another CD where 2nsec is just real bad. So it is no surprise that people can have conflicting views on the audibility of jitter.


Yet it matters very much WHERE that jitter is. It is only important to have low jitter AT THE CONVERTER, right where the digital is converted to analog. That is the "conversion jitter", and that is the jitter that matters. Moving data around can tolerate 100 times the jitter level with no sonic impact. We call that "data transfer jitter". If we have huge jitter on, say, the spdif cable, but we "clean it" before it gets to the critical circuitry, then we are doing fine.

And BTW, the real difficulties are much more pronounced at, say, 20 bit performance, where the jitter requirement is 16 times tighter than at 16 bits! As a rule, the signal getting to the DA has much higher jitter than the jitter at the critical circuitry, and the DA clocking circuitry needs to clean it up.
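The "16 times tighter" figure follows directly from the earlier slope arithmetic: each extra bit halves the quantization step, so 4 extra bits shrink the allowable jitter by 2^4 = 16. A quick illustrative check in Python (an editorial sketch, not part of the original post):

```python
import math

def jitter_budget_ps(bits: int, freq_hz: float = 22_000.0, amplitude: float = 1.0) -> float:
    """Jitter (in ps) producing a one-LSB error on a full-scale sine at freq_hz."""
    lsb = 2 * amplitude / 2 ** bits                 # quantization step for a +/-A signal
    max_slope = amplitude * 2 * math.pi * freq_hz   # worst-case slope at the zero crossing
    return lsb / max_slope * 1e12

for n in (16, 20, 24):
    print(f"{n} bits -> {jitter_budget_ps(n):.1f} ps")

# The slope cancels in the ratio, leaving exactly 2**(20 - 16) = 16
print(jitter_budget_ps(16) / jitter_budget_ps(20))
```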

Regards
Dan Lavry
Lavry Engineering
post #281 of 1712
Quote:
Originally Posted by Dan Lavry View Post
And BTW, the real difficulties are much more pronounced at, say, 20 bit performance, where the jitter requirement is 16 times tighter than at 16 bits! As a rule, the signal getting to the DA has much higher jitter than the jitter at the critical circuitry, and the DA clocking circuitry needs to clean it up.
Dan,
Does that mean that software oversampling at the player level is in general a bad idea?
post #282 of 1712
Dan.

Did you or someone say that most modern DACs are already equipped with some sort of reclocking chip which cleans up the jitter? If this is true, then most if not all of the new SPDIF converters are just plain marketing, or are they?

I am curious because I do not want a placebo effect influencing my buying decision.

Also, thanks for your thoughtful response and insight.

With kind regards
post #283 of 1712
Quote:
Originally Posted by tosehee View Post
Dan.

Did you or someone say that most modern DACs are already equipped with some sort of reclocking chip which cleans up the jitter? If this is true, then most if not all of the new SPDIF converters are just plain marketing, or are they?
USB input on my DAC is limited to 16 bit/48. I need 24/96. The S/PDIF input on my DAC provides that, and has lower jitter as well.

Those are the reasons I needed a converter. Other people's reasons may vary.
post #284 of 1712
I am using a 16bit/44.1kHz input Mac=>USB=>ND-S1=>Coaxial=>DAC and the sound is great!
Also my CD-Transport at 16bit/44.1kHz output via Coaxial sounds as good as the 24bit/192kHz output via I2S.
post #285 of 1712
Quote:
Originally Posted by Andrew_WOT View Post
Dan,
Does that mean that software oversampling at the player level is in general a bad idea?
I do not see the connection between jitter and software oversampling at the player. But since you brought oversampling up, I will say a couple of words about it that may be of value.

In almost all cases (my guess is over 99.9%), there is going to be oversampling (some may call it up-sampling) of the music data (samples) before it gets to the circuitry that actually does the conversion. The main question (smaller issues aside) is: how good is the oversampling? One can make a great oversampler, a real poor one, or anything in between.

The quality is what counts most. Where you do it is a secondary consideration. Almost all DAs do some oversampling; the exception is NOS DAs, and they suffer seriously from the lack of it. I posted about it in the NOS DA thread.

Say you feed a DA a 44.1KHz rate, but the device is going to oversample by X16 to 705.6KHz, or by X256 to 11.2896MHz, or what not. If you feed the DA a double rate, which is 88.2KHz, the device will need to oversample by X8 to get to the same 705.6KHz, or by X128 for 11.2896MHz operation. You fed it twice the rate, so it skips the first X2 oversampling stage. If you feed it 176.4KHz, it will skip the first X2 and second X2 stages, because they were "already done".

So the issue gets back to the quality of the oversampling. One can do a nice job in a computer, in a dedicated DSP, an FPGA, or even inside the DA. Or one can do a poor job of it anywhere... An external oversampler may be an improvement or a degradation; it depends on which oversampler is better. The concept of doing it outside in a separate box does not hold unless it is done better in the external box.
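The ratio bookkeeping in the example above is just target rate divided by input rate; a minimal sketch (illustrative Python, using the rates from the post):

```python
def oversampling_factor(input_rate_hz: float, target_rate_hz: float) -> int:
    """Oversampling factor a DA must apply to reach its internal target rate."""
    factor = target_rate_hz / input_rate_hz
    if factor != int(factor):
        raise ValueError("rates must be integer-related")
    return int(factor)

TARGET = 705_600.0  # one internal rate from the example: 16 x 44.1 kHz
for rate in (44_100.0, 88_200.0, 176_400.0):
    print(f"{rate / 1000:.1f} kHz in -> x{oversampling_factor(rate, TARGET)}")
# doubling the input rate halves the remaining oversampling work
```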

There are 2 general uses of a DA (and AD):

1. There are devices that are designed with the goal of best sound.

2. There are also devices that are oriented towards real-time monitoring. A guitar player or a drummer may want to hear what they play with a spot monitoring speaker or headphone. Or one may want to overdub tracks on top of each other: you listen to pre-recorded music WHILE playing a new track.
In the first case, one can concentrate on making the best sound with no constraints. In the second case, it is important to keep the time delay through the AD, the DA, and the computer (or workstation) short enough (a few milliseconds). If the delay is too long, the music one plays in real time will be heard too late, so it will sound like an echo, and that is not good for real-time listening or overdubbing. So the second case calls for LOW LATENCY converters (which means low time delay). In the first case, one does not care if the delay is many milliseconds. In the second case, doing things in a hurry is important.

Not surprisingly, when one has more time to do a job, it most often yields better results. One can do a lot more quality work in, say, 5msec than in, say, 0.5msec. But the call for low latency is out there, especially when recording and overdubbing in an audio workstation, utilizing relatively long-delay interface types such as FireWire. While such interfaces are capable of handling a lot of channels, and that is a positive, they add delay. So many DA and AD makers decided to push for low latency, in order to cover all bases.

And of course, some gear makers started advertising low latency as some measure of quality, implying that low latency means better conversion (while the opposite is true). At least one DA IC maker got wise, so they provide dual-mode operation: low latency, and high latency where the sonic quality is better.

Latency is accumulated (AD, computer, interface and DA). Most of the latency is due to interfaces such as FireWire, but the AD and DA do add some to the overall delay. Why am I talking about latency? You brought up oversampling, and that is where the major portion of the delay takes place. That is where the "corner cutting" happens: lower latency means less computational hardware, but at the expense of quality.

In this group (head-fi), most people are interested in listening to already-made music, not in real-time overdubbing or spot monitoring. So it may be of value to realize that the notion of low latency as an indication of good quality is in fact upside down. There is no benefit to low latency, and most often it stands opposite to best sound quality.

The trade-off between delay (latency) and quality takes place in the oversampling computational block. If someone tells you they have a low latency converter, it does not mean better quality. Often the opposite is the case.
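To make the latency/quality trade-off concrete (an editorial sketch with assumed numbers, not figures from the post): a linear-phase FIR oversampling filter with N taps at output rate fs delays the signal by (N - 1) / (2 * fs), so a longer, higher-quality filter directly costs latency.

```python
def fir_group_delay_ms(num_taps: int, output_rate_hz: float) -> float:
    """Group delay of a linear-phase FIR filter: (N - 1) / 2 samples at the output rate."""
    return (num_taps - 1) / 2 / output_rate_hz * 1e3

FS = 705_600.0  # a 16x-oversampled 44.1 kHz rate, as in the earlier example
for taps in (64, 1024, 8192):
    print(f"{taps:5d} taps -> {fir_group_delay_ms(taps, FS):.3f} ms")
```

A 64-tap filter delays the signal by well under 0.1 ms, while an 8192-tap filter with a much better frequency response is already at several milliseconds: fine for playback, troublesome for live monitoring.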

regards
Dan Lavry