Lavry DA10 Inputs...which is better XLR/Optical...and best way to get it there from pc

May 17, 2009 at 2:16 AM Post #16 of 40
Excellent info, one and all... As for Currawong's comments re jitter: I never had a problem in that regard, and I don't know whether it's just a clean signal or the Lavry doing its thing.

However, I still don't have an answer as to whether the 110 ohm balanced XLR or the S/PDIF/optical input is superior... or is it a tie?
 
May 17, 2009 at 11:46 AM Post #17 of 40
AES = win in my experience; I'm able to get a much hotter signal with lower noise using that output vs my coax and optical. Currawong is correct about the jitter issue; unless your DAC has a very good reclocking unit, you will still be stuck with jitter in your signal, and he is right in his quest for a clock. Actually buying a separate clocking unit is a really good idea that I don't see as often as I would have thought on Head-Fi. These are highly prized and very important (read: expensive) pieces of kit in the average studio and can have a huge impact on the sound. If you use AES or coax, it can carry a separate clocking track, whereas with optical and, AFAIK, USB, the clock is always part of the signal and has to be 'recovered' at the receiving end. Units such as the Big Ben by Apogee are the mainstay of many a studio. Even just being able to clock other units, such as my Behringer ADA8000 (an ADAT multichannel ADC/DAC unit), off my Fireface produces far superior results to the otherwise fairly average output it has when running 'freely'.

Just thought I'd add this link from a Stereotimes review of the Big Ben. I seriously recommend that anyone who has the budget consider getting one of these in your system. It's one of those pieces of kit that improves anything that comes into contact with it, whether it's the highest quality component or something more lowly. The first time I heard one it was running in a small studio with a couple of other pieces of Apogee kit and an RME Fireface 800. The Big Ben made improvements that I didn't even know were needed, and I walked away wanting one badly. It's high on my agenda, and Apogee has again shown its ability to produce something equally usable in a professional audio environment or a more pure audiophile setup.

:waits for accusations of being a shill: Hehe, as much as I would love to be on the Apogee payroll, I'm not. I just think that with all the talk of jitter around here, linking something that IMO makes it a non-issue was appropriate, and I would consider buying a Big Ben before upgrading from a mid-fi CD player or a high-res transport/DAC. Sorry for the OT.
 
May 17, 2009 at 5:15 PM Post #18 of 40
Quote:

Originally Posted by bergman2
Excellent info, one and all... As for Currawong's comments re jitter: I never had a problem in that regard, and I don't know whether it's just a clean signal or the Lavry doing its thing.

However, I still don't have an answer as to whether the 110 ohm balanced XLR or the S/PDIF/optical input is superior... or is it a tie?



Hi bergman

A. XLR:

1. It has the advantage of being balanced, which is a good method of overcoming common-mode noise. Let me explain: both the pin 2 and pin 3 wires (the signal wires) occupy the same space (they are near each other and are parallel). So they both pick up nearly the same environmental electromagnetic noise interference. The receiver at the end of the cable "looks at" the difference between the voltages of the pin 2 and pin 3 wires. Since the noise pickup is the same, the difference is zero, so the noise is canceled. But the signal imposed by the driver side is not the same on both wires. It is a "forced" voltage difference... (There is a small sketch of this cancellation after point 4 below.)

2. XLR signals are transformer isolated (per the AES specification). The transformer helps isolate the driver and receiver units from ground loops, which are an unwanted current flow.

3. The AES standard is based on a signal of a few volts, which is a good thing; it helps defeat weaker unwanted interference. Higher voltage means more power. The cable and load are 110 Ohms, so a 2V signal is really about 18mW of power, and a 3V signal about 41mW. (See the short power sketch after section B below.)

4. The XLR has 3 pins. Pin 1 is used for a grounded shield. It is best to have the shield connected only at the driver side. Connecting it at both ends provides a possible path for ground currents between the driver and receiver chassis.
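Here is a minimal numerical sketch of the common-mode cancellation described in point 1 (illustrative values only, not from the original post): the same interference is added to both conductors, and the differential receiver subtracts it away.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * t)                  # the wanted signal
noise = 0.3 * rng.standard_normal(t.size)       # interference coupled onto the cable run

pin2 = +0.5 * signal + noise                    # "hot" conductor: half the signal plus the noise
pin3 = -0.5 * signal + noise                    # "cold" conductor: inverted half plus the SAME noise

received = pin2 - pin3                          # the balanced receiver takes the difference
print(np.allclose(received, signal))            # True: the common-mode noise cancels
```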

B. RCA:

RCA is a single-ended signal (unbalanced), thus no common-mode rejection. There is typically no transformer isolation, and the signal is relatively weak: 400mV into 75 Ohms amounts to 1.07mW of power. That is a lot less than the XLR signal. Also, a typical RCA cable does not have a separate shield.

That is why XLR can be used for very long distances (hundreds of feet) and in harsh environments. RCA is restricted to around 15 feet, if I recall correctly.

But for short distances, say 6 feet, RCA is just fine. Most consumer and hi-fi gear does not offer XLR, but if it does, I would use it. The XLR was designed for pro gear, where distances can be very long.
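A quick check of the power figures above (my arithmetic, not from the post; it assumes the quoted voltages are the peak signal swing, so the mean-square voltage is V squared over 2):

```python
# Illustrative arithmetic only: average power of a digital signal swinging between 0 and V
# into a matched termination, P = (V**2 / 2) / R.
def power_mw(v_swing, r_ohms):
    return (v_swing ** 2 / 2) / r_ohms * 1000

print(f"AES, 2 V into 110 ohm:     {power_mw(2.0, 110):.0f} mW")   # ~18 mW
print(f"AES, 3 V into 110 ohm:     {power_mw(3.0, 110):.0f} mW")   # ~41 mW
print(f"S/PDIF, 0.4 V into 75 ohm: {power_mw(0.4, 75):.2f} mW")    # ~1.07 mW
```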

C. Optical:

Optical has some advantages; it provides the best electrical isolation between units, and an opaque sleeve certainly blocks external light interference. But optical has some issues as well. The limitations depend on the type of optical transmitter, the optical receiver and the light pipe itself. It would take a very long post to explain optical. For the most part, 15 feet or less works fine: better than RCA, and not as good as XLR. I like optical at short distances; it is very robust.

Additional comments:

People often get confused between the hardware and the format. What I said above is about the hardware, not the format. It is true that, traditionally, RCA and optical (Toslink) were invented to support the SPDIF format, and XLR the AES format. But the lines have been blurred. One can send AES format over RCA or optical, and SPDIF over XLR. One of the reasons for the "blurred line" is that while the various format features are different, the part of the data that contains the music is the same. So one can use the same digital audio transmitter and receiver ICs for both formats.

I hope that helps.

And yes, one more comment: in all cases, a short cable or optical link is always better than a longer one. Whenever possible, go for short. I do not mean that one has to overdo it, but if you can use a 6-foot length, do not use a 15-foot one. It may help, and if it does not help, it will certainly not hurt.

Regards
Dan Lavry
 
May 17, 2009 at 9:33 PM Post #19 of 40
Thanks, Dan... this confirms my suspicions. Now I have to decide between a few different USB > AES output options in my price range...
 
May 18, 2009 at 2:11 AM Post #20 of 40
I think doing something other than motherboard S/PDIF out is a good idea. Most motherboard S/PDIF is limited to 48 kHz out, whereas the CD is encoded, and therefore ripped, at 44.1 kHz. There will certainly be resampling by the kmixer from 44.1 to 48 somewhere in the innards of your PC, probably squeezed in among everything else the CPU is doing. Of course, some better motherboard implementations may do better; most do not.
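As a side note on why that 44.1 to 48 kHz conversion is not a trivial operation (my own illustration, not from the post): the two rates are related by an awkward ratio, so the resampler has to interpolate rather than simply repeat or drop samples.

```python
from fractions import Fraction

# The exact ratio between the two common rates: 160/147, i.e. for every 147 input
# samples the resampler must compute 160 output samples by interpolation.
print(Fraction(48000, 44100))   # -> 160/147
```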

A USB-to-S/PDIF or AES converter should provide a bit-perfect data stream to the DAC, and that should improve the sound. I doubt the small jitter element will make it worse. And, as always, you can spend just about as much as you want on a box that does this. Most people are happy with either a good sound card making bit-perfect S/PDIF (even the lowly X-Fi cards can do this) or a USB-to-S/PDIF/AES bit of gear.

Eliminate the kmixer and output bit-perfect to your DAC however you can; you will notice a difference.

Of course, if you had a Mac, you would have bit-perfect optical out already.
 
May 19, 2009 at 11:40 PM Post #21 of 40
Quote:

Originally Posted by qusp
I just think that with all the talk of jitter around here, linking something that IMO makes it a non-issue was appropriate, and I would consider buying a Big Ben before upgrading from a mid-fi CD player or a high-res transport/DAC. Sorry for the OT.


I was quite surprised by your post, you being a member of the trade. With a decent DAC, jitter is already a non-issue. The PLL (or other jitter rejection) circuitry will completely remove any jitter introduced in transport by either SPDIF or AES.

Secondly, a masterclock like the Big-Ben is useful for distributing wordclock in a studio and is likely to improve the timing of cheaper ADCs but I certainly wouldn't want to replace the internal clock of one of Lavry's or PrismSound's professional ADCs (for example) with a Big Ben.

All in all, I certainly would not recommend the use of a Big Ben in a consumer setup. Would I be correct in thinking you actually sell Big Bens?

G
 
May 20, 2009 at 7:07 AM Post #22 of 40
Quote:

Originally Posted by gregorio
I was quite surprised by your post, you being a member of the trade. With a decent DAC, jitter is already a non-issue. The PLL (or other jitter rejection) circuitry will completely remove any jitter introduced in transport by either SPDIF or AES.

Secondly, a masterclock like the Big-Ben is useful for distributing wordclock in a studio and is likely to improve the timing of cheaper ADCs but I certainly wouldn't want to replace the internal clock of one of Lavry's or PrismSound's professional ADCs (for example) with a Big Ben.

All in all, I certainly would not recommend the use of a Big Ben in a consumer setup. Would I be correct in thinking you actually sell Big Bens?

G



LOL; whatever you say, mate; jitter a non-issue, LOL. Even with the best DAC, although it is claimed to be a non-issue, in most cases IMO that is far from the case in reality. Although it is of even more use in a studio, not so much to improve the clock on older units but to keep everything running off the one superior clock. I think it laughable that you think a consumer audio piece like the DA10, as good as it is, would have a clock as steady as the Big Ben. Obviously you have never heard one, so I'll leave it there; guess you didn't read the article either??

And no, as I stated, I have nothing to do with Apogee; I make and sell (mainly analogue) cables.
 
May 20, 2009 at 10:25 PM Post #23 of 40
Quote:

Originally Posted by qusp
LOL; whatever you say, mate; jitter a non-issue, LOL. Even with the best DAC, although it is claimed to be a non-issue, in most cases IMO that is far from the case in reality. Although it is of even more use in a studio, not so much to improve the clock on older units but to keep everything running off the one superior clock. I think it laughable that you think a consumer audio piece like the DA10, as good as it is, would have a clock as steady as the Big Ben. Obviously you have never heard one, so I'll leave it there; guess you didn't read the article either??

And no, as I stated, I have nothing to do with Apogee; I make and sell (mainly analogue) cables.



A clock box is useful when one needs to synchronize multiple units, such as multiple AD chassis. When recording multiple channels, one needs to have them all synchronized to the same clock. This is very important, because if the clocking is different between units, a 100 ppm (parts per million) difference will end up as a 0.36 second time difference between channels after an hour of operation... There are a couple of other reasons why synchronization of units may be important.
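That figure is easy to check (my arithmetic, just restating the numbers above):

```python
# Two free-running clocks differing by 100 ppm drift apart by 100e-6 seconds every second.
ppm_offset = 100e-6     # 100 parts per million frequency difference
duration_s = 3600       # one hour of recording
drift_s = ppm_offset * duration_s
print(f"Accumulated offset after one hour: {drift_s:.2f} s")   # -> 0.36 s
```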

But listening to a single unit (stereo, surround or 16 channels...) does not require external synchronization because the gear inside the chassis is already synchronized to a common internal clock.

So what is it about an external clock that is better than an internal clock? The answer is NOTHING. On the contrary.

Converter technology is very complex and demanding. Clock technology is relative child's play compared to AD and DA conversion. A basic clock is a device that deals with a simple 1,0,1,0,1,0,1,0… sequence. Why would one choose to believe that putting such a circuit in a separate chassis makes it better? It is a ridiculous notion. The same circuits are better when placed next to the converter, where they belong, without a long rubber band between chassis (with all sorts of additional jitter-increasing mechanisms).

A good DA converter has a good built in internal clock and jitter rejection.

Regards
Dan Lavry
 
May 20, 2009 at 11:10 PM Post #24 of 40
Quote:

Originally Posted by qusp
LOL; whatever you say, mate; jitter a non-issue, LOL. Even with the best DAC, although it is claimed to be a non-issue, in most cases IMO that is far from the case in reality. Although it is of even more use in a studio, not so much to improve the clock on older units but to keep everything running off the one superior clock. I think it laughable that you think a consumer audio piece like the DA10, as good as it is, would have a clock as steady as the Big Ben. Obviously you have never heard one, so I'll leave it there; guess you didn't read the article either??

And no, as I stated, I have nothing to do with Apogee; I make and sell (mainly analogue) cables.



Pretty much everything you wrote was incorrect.

I actually said in my post a professional ADC; I didn't mention the DA10. You do know what an ADC is?

Also, I have not only heard the Big Ben but worked with one, and with colleagues tested one against a Lucid masterclock and a Rosendahl Nanosyncs. It performed adequately, but no better than the other two, which was a disappointment, as the Nanosyncs was quite a bit cheaper. We also used the Big Ben as the masterclock for a PrismSound ADA-8XR (ADC); it actually degraded the sound quality (albeit minutely). So the best solution was to distribute the clock from the Prism unit: this gave the best sound quality and was a quarter of the price of the Big Ben! So your statement that the Big Ben is a superior clock is incorrect, and its role as a clock distributor can be duplicated with better results using a simple clock distribution unit rather than an entirely new (and unnecessary) masterclock.

Yes, I did read the article - a typical self-serving audiophile review with no substantive evidence.

G
 
May 21, 2009 at 4:47 AM Post #25 of 40
Quote:

Originally Posted by gregorio
Pretty much everything you wrote was incorrect.

I actually said in my post a professional ADC; I didn't mention the DA10. You do know what an ADC is?

Also, I have not only heard the Big Ben but worked with one, and with colleagues tested one against a Lucid masterclock and a Rosendahl Nanosyncs. It performed adequately, but no better than the other two, which was a disappointment, as the Nanosyncs was quite a bit cheaper. We also used the Big Ben as the masterclock for a PrismSound ADA-8XR (ADC); it actually degraded the sound quality (albeit minutely). So the best solution was to distribute the clock from the Prism unit: this gave the best sound quality and was a quarter of the price of the Big Ben! So your statement that the Big Ben is a superior clock is incorrect, and its role as a clock distributor can be duplicated with better results using a simple clock distribution unit rather than an entirely new (and unnecessary) masterclock.

Yes, I did read the article - a typical self-serving audiophile review with no substantive evidence.

G



Hi,

I did not realize the subject was AD conversion. Since it is, let's compare the signal path between internal and external clocking.

Internal clock:
A fixed-frequency, thus lowest-jitter, clock. It is hooked directly to the converter: same ground, short interconnection. This is optimal.

External clock:
A fixed-frequency, thus lowest-jitter, clock, but in a different chassis. Now you need to connect the clock via a cable to the AD chassis. There are 3 negatives at play: 1. possible electromagnetic interference picked up by the cable; 2. the termination resistance and cable tolerance; 3. different grounds between units.

So now we already have more jitter, and we are entering the AD box. The signal needs to be multiplied (in frequency) by typically x128, x256, x512 or x1024 (depending on the AD design). There is a SECOND oscillator that is not fixed; it has some pull range, and it is controlled by a PLL circuit (phase locked loop), thus more jitter...
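For a sense of the frequencies involved in that multiplication (my illustration with typical sample rates; the exact factor depends on the converter design):

```python
# The PLL must regenerate an internal master clock at (sample rate x N) from the
# incoming word clock; these are only example values for the factors named above.
for fs_khz in (44.1, 48.0, 96.0):
    for n in (128, 256, 512, 1024):
        print(f"{fs_khz:5.1f} kHz word clock x {n:4d} = {fs_khz * n / 1000:8.4f} MHz master clock")
```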

This is all just too fundamental for an experienced designer of gear. So whenever possible, use the internal clock. Use an external clock when there is no other way (when you need to sync multiple chassis), but it will not improve the sound.

Of course there will be a lot of folks that bought into the marketing hype, and their comments are always subjective (such as "it sounds better to me").

But this is a head-fi forum, so ADs are not a typical topic. It is DAs that we use for listening, and an external clock is not called for. We do not need to sync multiple DAs to listen to music... We do not need the external clock...

Folks, I am pretty new here. Let me know if my posts are too technical, and I will "tone it down".

Regards
Dan Lavry
Lavry Engineering
 
May 21, 2009 at 1:46 PM Post #26 of 40
Quote:

Originally Posted by Dan Lavry
Hi,


Folks, I am pretty new here. Let me know if my posts are too technical, and I will "tone it down".

Regards
Dan Lavry
Lavry Engineering




Not at all, I appreciate the information.
 
May 21, 2009 at 3:33 PM Post #27 of 40
Quote:

Originally Posted by Dan Lavry
Hi,

I did not realize the subject was AD conversion. Since it is, let's compare the signal path between internal and external clocking.

Internal clock:
A fixed-frequency, thus lowest-jitter, clock. It is hooked directly to the converter: same ground, short interconnection. This is optimal.

External clock:
A fixed-frequency, thus lowest-jitter, clock, but in a different chassis. Now you need to connect the clock via a cable to the AD chassis. There are 3 negatives at play: 1. possible electromagnetic interference picked up by the cable; 2. the termination resistance and cable tolerance; 3. different grounds between units.

So now we already have more jitter, and we are entering the AD box. The signal needs to be multiplied (in frequency) by typically x128, x256, x512 or x1024 (depending on the AD design). There is a SECOND oscillator that is not fixed; it has some pull range, and it is controlled by a PLL circuit (phase locked loop), thus more jitter...

This is all just too fundamental for an experienced designer of gear. So whenever possible, use the internal clock. Use an external clock when there is no other way (when you need to sync multiple chassis), but it will not improve the sound.

Of course there will be a lot of folks that bought into the marketing hype, and their comments are always subjective (such as "it sounds better to me").

But this is a head-fi forum, so ADs are not a typical topic. It is DAs that we use for listening, and an external clock is not called for. We do not need to sync multiple DAs to listen to music... We do not need the external clock...

Folks, I am pretty new here. Let me know if my posts are too technical, and I will "tone it down".

Regards
Dan Lavry
Lavry Engineering



Hi Dan,

Thanks for your reply. We are not specifically talking about AD conversion; I was responding to qusp, who said the Big Ben would be of even more use in a studio than with a DAC.

As you have said, and as I said, there is no benefit to an external clock for a DAC. The only time it's useful for an ADC is in a situation like mine, where I need to resolve both black burst (video reference) and word clock to a common timing reference because I need to sync video and audio gear.

As an aside (and slightly off topic, sorry), many years ago I used to own two DigiDesign 888s (ADCs); one was the masterclock and one a clock slave. However, I tested an external masterclock (both 888s as clock slaves) and there was a significant improvement in sound quality. Running on the internal clock there was a notable decrease in stereo width and separation and an attenuation of low-mid frequencies (centred roughly around 250Hz). In other words, running on the internal clock, the artifacts of the 888s sounded pretty much identical to certain phase artifacts. This subjective test was carried out by many 888 owners, with almost universal agreement on the improvement (imaging and FR). Discussing it with DigiDesign and the external masterclock manufacturer, it turned out that the internal 888 clock had significantly less jitter than the external clock. So in that case more jitter seemed to improve sound quality. I did some research of my own, and DigiDesign themselves took part in the discussion, but nothing was ever resolved about why more jitter should sound better (in this case). Although it was interesting to note that the clocking structure within the 888 was completely redesigned in DigiDesign's 192 converters.

I always try to minimise my system jitter, but it always plays on the back of my mind that there's a possibility less jitter does not necessarily mean better sound quality, even though logically this doesn't make sense and there's no evidence I've seen which would justify it.

What is your view?

Cheers, G
 
May 21, 2009 at 7:13 PM Post #28 of 40
Quote:

Originally Posted by gregorio
Hi Dan,

....I always try to minimise my system jitter, but it always plays on the back of my mind that there's a possibility less jitter does not necessarily mean better sound quality, even though logically this doesn't make sense and there's no evidence I've seen which would justify it.

What is your view?

Cheers, G



Hi Gregorio,

I do not know what the issue was with the 888; there are a lot of ways to end up with SASU (something all screwed up).

But I would suggest that jitter is never better, and less jitter is always better.

It is difficult to argue with subjective taste. One may like more bass, or the type of distortion of some tube. Those kinds of alterations or distortions tend to be "fixed". They are there "all the time" when you use the particular gear.

But jitter is a different animal. The impact of jitter varies all the time, moment by moment, as the music changes. So how can one like such a thing?

In fact, "Fixed non linearity" (tubes, transformers and some electronic circuits) present a "stationary transfer characteristics", meaning the behavior does not change in time, and it "scales with the signal". Of course I am not talking about long term aging of components; the time frame is seconds not month and years. If you use a tube for a year it may sound different.... Say you feed a tube a 1KHz signal and it generates additional 2KHz and 3KHz. If you feed the tube a 2KHz, it will yield 4KHz and 6KHz. In both case it is responding to the input with adding 2nd and 3rd harmonics. If the tube amplitude response is flat, the harmonic amplitudes ratios will stay the same... In other words, there is some "predictability" here.


When you feed a steady tone into a fixed non-linearity, the distortions are always at HIGHER frequencies, and the distortion energy falls on the harmonic frequencies of the tone. There are no subharmonics. So while the tone is distorted (not the original sound), there is, in some sense, some "musicality" to the outcome. The distortions are at harmonic locations of the original tone. Clearly, when you feed multiple tones into a non-linearity, you end up with a mess (sums and differences of the various frequencies). That is why I do not advocate non-linearity. But at the end of the day, one can argue that they like non-linearity, because taste is subjective, and one is entitled to like anything, even the sound of fingernails on a blackboard.
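To make the harmonic case concrete, here is a minimal numerical sketch (my own, with arbitrary example coefficients, not part of the original post): pass a pure 1 kHz tone through a memoryless non-linearity and the added energy lands only on its harmonics.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                       # one second of signal
x = 0.5 * np.sin(2 * np.pi * 1000 * t)       # pure 1 kHz input tone

# A "stationary" (memoryless) non-linearity: mild 2nd- and 3rd-order terms,
# loosely standing in for a tube-like transfer curve.
y = x + 0.05 * x**2 + 0.02 * x**3

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(freqs[spectrum > 1e-4])                # only 0 (DC from x**2), 1000, 2000, 3000 Hz appear
```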

However, jitter is NOT a stationary behavior. If your jitter is a 120 Hz tone, and you feed a signal of say 1 kHz, you end up with 1.12 kHz as well as 880 Hz. If you feed it 2 kHz, the additional tones are at 2.12 kHz and 1.88 kHz. The distortions are non-harmonic, and their relationship changes with the signal itself! It also changes with the jitter type itself. I used 120 Hz (the typical rectified AC line frequency in the US). The same gear in Europe will yield 1.1 kHz and 900 Hz instead of 1.12 kHz and 880 Hz.... And all that was for the simplest case: a pure, fixed sine wave tone and a steady, fixed, pure-tone jitter. When you have complex music, all hell breaks loose. One cannot "like" it; it is a constantly moving target.
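And here is the same sketch for the jitter case (again my own illustration, with an arbitrary, deliberately simplified jitter amplitude): sample a 1 kHz tone with a clock whose timing error wobbles at 120 Hz, and the spurious energy lands at 880 Hz and 1120 Hz, not at harmonics.

```python
import numpy as np

fs = 48000
n = np.arange(fs)                                # one second of samples
f_sig, f_jit = 1000, 120                         # signal tone and jitter tone, Hz
jitter_amp = 5e-9                                # 5 ns peak sinusoidal clock jitter (example value)

t_ideal = n / fs
t_jittered = t_ideal + jitter_amp * np.sin(2 * np.pi * f_jit * t_ideal)
x = np.sin(2 * np.pi * f_sig * t_jittered)       # what a converter running on this clock captures

spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(x)) / (len(x) / 2) + 1e-20)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in (880, 1000, 1120):
    print(f"{f} Hz: {spectrum_db[freqs == f][0]:7.1f} dB")
# Sidebands appear at 1000 +/- 120 Hz; feed a 2 kHz tone instead and they move to 2000 +/- 120 Hz.
```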

And indeed jitter is a much more complex subject. There is random jitter (covering the whole spectrum), often due to electronic component noise; there is more specific jitter, such as that due to line frequency or some electromagnetic pickup; and there is also jitter that is due to coupling of the digital audio signal into the analog path.

Then there are different circuits, and they respond to various jitter differently. A PCM converter is hardly affected by jitter when the signal level is very low, or when the signal frequency is very low. But a sigma-delta is affected all the time; even the noise floor with no signal at all rises with increased jitter...

Jitter is a very complex subject. But in all cases, even the simplest steady pure sine wave, it makes for very unmusical distortions that change moment by moment, all over the place. It is best to have lower jitter.

I am responding to your post, Gregorio, and this post is even more technical than the previous one. I still wonder if my posting here is too technical for most. I do not want to break the flow for the majority here.

Regards
Dan Lavry
 
May 22, 2009 at 3:56 PM Post #29 of 40
Thanks for that, Dan. It pretty much confirmed what I already believed. As I said, I always try to achieve the lowest system jitter I can, and except for the experience with the 888s I've never known more jitter to be better, bearing in mind that I do studio work and am after neutrality rather than a subjectively pleasing sound. What was going on with the 888s must have been quite severe for such obvious phase artifacts to be audible, but I guess it will just have to remain a mystery.

You mentioned that delta-sigma converters are more prone to jitter problems than PCM converters, but I thought these days most good PCM ADCs employed delta-sigma as part of the oversampling regime before decimation back to standard PCM sampling rates, or am I a bit out of touch? Do most use non-delta-sigma multi-bit oversampling now?

Thanks for your info. BTW, there is a wide spread of people on here; some are quite new and some have a lot of technical knowledge. Strangely, some Head-Fiers seem to have a basic understanding of the science but don't want more because they don't believe it?! So I don't think being quite technical at times is a bad thing. Certainly there are many Head-Fiers who would not have found your last message too technical.

Cheers, G.
 
May 23, 2009 at 3:23 AM Post #30 of 40
Quote:

Originally Posted by gregorio

You mentioned that delta-sigma converters are more prone to jitter problems than PCM converters, but I thought these days most good PCM ADCs employed delta-sigma as part of the oversampling regime before decimation back to standard PCM sampling rates, or am I a bit out of touch? Do most use non-delta-sigma multi-bit oversampling now?

Cheers, G.



No, you are not out of touch. I should have said "non-sigma-delta" and instead I said "PCM"; that was my error. I could have said "resistor based".
I can't believe I made such a basic error.

Yes of course, most converters are PCM coded.

Jitter is an issue when a signal is changing fast, meaning high amplitude and high frequency (we call it "high slew rate"); you can think of it as the "slope" of the voltage changing with time. The slowest changing signal is DC, and if you sample DC a little later or earlier, you still get the same value, so jitter will not have an impact. If the signal changes slowly, say due to low frequency, and you sample a little late, the error will be small. If the signal changes fast, the error will be larger...
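A rough worked example of that relationship (my own numbers, purely illustrative): the worst-case amplitude error is about the slew rate multiplied by the timing error.

```python
import math

f = 20_000        # a full-scale 20 kHz sine (the worst case mentioned later in this post)
a = 1.0           # amplitude as a fraction of full scale
t_err = 1e-9      # assume 1 ns of timing error

max_slew = 2 * math.pi * f * a            # peak slope of the sine, in full-scale units per second
max_err = max_slew * t_err                # worst-case sample error due to the timing error
print(f"{max_err:.2e} of full scale ({20 * math.log10(max_err):.0f} dB)")
# -> about 1.3e-4 of full scale, roughly -78 dB; at 200 Hz the same 1 ns error is 100x smaller.
```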

That is why resistor-type conversion is not sensitive to jitter at DC and has little sensitivity to slow signals (low amplitude and frequency).

But a sigma-delta modulator is based on very fast changing signals in the loop, and they are there all the time, even when the input is DC. That very high frequency energy will be filtered out by the decimation; however, the jitter in the modulator increases the noise in the modulator, and some of that noise is in the audible range...

So yes, it is sigma-delta vs. non-sigma-delta...

I am not at all against sigma-delta. Jitter sensitivity is one of its downsides, but it does have advantages.

Both converter types tend to be impacted more by jitter when the slew rate is higher (such as a full-scale 20 kHz signal).

Regards
Dan Lavry
 
