CHORD ELECTRONICS DAVE

Discussion in 'High-end Audio Forum' started by magiccabbage, May 14, 2015.
  1. romaz
    If you're dead set on driving speakers directly with the DAVE, it is possible but you have to be creative. With very high-sensitivity speakers like a Klipsch horn speaker (sensitivity up to 105 dB) and autoformer boxes (http://www.zeroimpedance.com/zeroimpedance_011.htm), you can potentially boost the impedance of these speakers to 64 ohms, and then I don't see why you couldn't use the DAVE to drive them directly.
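The impedance step described above follows the standard transformer relation: the load seen at the primary scales with the square of the turns ratio. A minimal sketch (the 8-to-64 ohm figures are from the post; the turns ratio is derived for illustration, not a Zero autoformer spec):

```python
import math

def reflected_impedance(z_load_ohm: float, turns_ratio: float) -> float:
    """Impedance seen at the primary of an (auto)transformer.

    The impedance transformation goes as the square of the turns ratio.
    """
    return z_load_ohm * turns_ratio ** 2

# To present an 8 ohm speaker as 64 ohms, the required ratio is sqrt(64/8):
ratio = math.sqrt(64 / 8)                 # ~2.83:1 voltage step-down
print(reflected_impedance(8.0, ratio))    # impedance the DAVE would see
```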
     
    I can see why there is a push to put nothing between the DAVE and your transducers (headphones or speakers) because any preamp or amp would rob the DAVE of some of its magical transparency.
     
  2. Jawed
    There will be Chord digital power amps that work the same way as the DAVE DAC:

    http://www.head-fi.org/t/766517/chord-electronics-dave/615#post_12041809


    My theory of how it will work:

    http://www.head-fi.org/t/756029/chord-hugo-tt-high-end-dac-amp-impressions-thread/210#post_12249147


    DAVE might be able to drive 8 ohm headphones (Mojo can), but they don't demand the tens of watts of power on peaks that these loudspeakers will require :tongue_smile:
     
  3. rkt31
    8 ohms at 4.5 V, 6 V or even 3 V would draw heavy current. I don't know whether it would be safe or not, but in the past some people have tried 8 ohm speakers with the Hugo.
     
  4. rkt31
    I don't know about the Klimax Twin, but given the low- and mid-gain options on the Benchmark AHB2, it could be a good match for the DAVE!
     
  5. Christer
    I suppose digital amps work a lot differently than ordinary old-school amps?
    But will 20 watts, or even 70 watts, really drive anything other than horn speakers?
    Many years ago I had some huge horns that I used to drive with a 30 watt amp, with nice but typically coloured horn sound in the midrange. But it had the best, most powerful bass of any speakers I have ever owned, with 15 inch drivers in a huge horn. Now I am using much more neutral electrostatic hybrids from ML, and before buying them I tested them with quite a few amps in the 100 to 200 watt range before settling on MF's KW550, which outputs 900 watts per channel into 4 ohms and 500 into 8 ohms.
    The first climax of the mighty Second - not really very strong by Mahler standards - in the Channel Classics SACD recording scaled 150 watts on a McIntosh amp.
    With orchestra, choir and soloists at full blast in the real climax of the last movement, the McIntosh's 250 watts were not enough.
    My MF copes very well with double the power, though.
     
  6. Jawed
    20W stereo / 70W mono is the baby one, apparently. Also, the ability to drive loudspeakers is not, as you hinted, the current capability into 8 ohms, but the current into 4, 2 or even 1 ohm (Apogee Scintilla :)

    The 8 ohm rating of an amplifier doesn't tell you what it can do into speakers with lower impedance.
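Jawed's point can be made concrete with Ohm's law: at a fixed output voltage, halving the load impedance doubles both the current and the power the amp must deliver. A quick sketch (the 20 V RMS figure is illustrative, corresponding to a nominal "50 W into 8 ohms", not a Chord spec):

```python
def drive_requirements(v_rms: float, z_ohm: float):
    """Current and power an amp must deliver into a load at a given voltage."""
    i_rms = v_rms / z_ohm          # Ohm's law: I = V / R
    p_watt = v_rms ** 2 / z_ohm    # P = V^2 / R
    return i_rms, p_watt

# the same voltage swing into falling impedances:
for z in (8, 4, 2, 1):
    i, p = drive_requirements(20.0, z)
    print(f"{z} ohm: {i:.1f} A, {p:.0f} W")   # 2.5 A/50 W up to 20 A/400 W
```

The "8 ohm rating" fixes only the first line; whether the amp can actually deliver 20 A into 1 ohm (the Apogee Scintilla case) is a separate current-capability question.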
     
  7. Beolab
    @Rob Watts
     
    Does the DAVE / Hugo also have +3.5 dB of extra digital headroom, and what is your opinion of it, Rob?
     
    Regarding the "3.5 dB high headroom DSP" implemented in Benchmark's DAC2 series, the product page of the DAC2 states the following:

    "All D/A converters need 3.5 dB "excess" digital headroom, but few have any headroom above 0 dBFS.

    All of the digital processing in the DAC2 DX is designed to handle signals as high as +3.5 dBFS. Most digital systems clip signals that exceed 0 dBFS. The 0 dBFS limitation seems reasonable, as 0 dBFS is the highest sinusoidal signal level that can be represented in a digital system. However, a detailed investigation of the mathematics of PCM digital systems will reveal that inter-sample peaks may reach levels slightly higher than +3 dBFS while individual samples never exceed 0 dBFS. These inter-sample overs are common in commercial releases, and are of no consequence in a PCM system until they reach an interpolation process. But, for a variety of reasons, virtually all audio D/A converters use an interpolation process. The interpolation process is absolutely necessary to achieve 24-bit state-of-the-art conversion performance. Unfortunately, inter-sample overs cause clipping in most interpolators. This clipping produces distortion products that are non-harmonic and non-musical. We believe these broadband distortion products often add a harshness or false high-frequency sparkle to digital reproduction. The DAC2 DX avoids these problems by maintaining at least 3.5 dB of headroom in the entire conversion system.

    We believe this added headroom is a groundbreaking improvement delivering significant sonic advantages."

    More detailed explanations and graphs behind the reasons to implement the 3.5 dB high headroom DSP can be found in two application notes at the Benchmark website, written by John Siau:
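The inter-sample-over claim is easy to reproduce numerically: a sine at fs/4 sampled at 45° phase has every sample at 0 dBFS, yet ideal (sinc) interpolation reconstructs a peak about +3 dB higher. A sketch using FFT zero-padding as the ideal interpolator (numpy only; this is not Benchmark's or Chord's actual filter):

```python
import numpy as np

N, L = 1024, 8                      # sample count, upsampling factor
n = np.arange(N)
# fs/4 sine at 45 degrees: every sample lands at +/-0.7071 of the true peak
x = np.sin(np.pi / 2 * n + np.pi / 4)
x /= np.abs(x).max()                # normalise the *sample* peak to 0 dBFS

# ideal band-limited interpolation via zero-padding the spectrum
X = np.fft.rfft(x)
Xp = np.zeros(L * N // 2 + 1, dtype=complex)
Xp[: len(X)] = X
y = np.fft.irfft(Xp, n=L * N) * L   # rescale for the longer transform

peak_dbfs = 20 * np.log10(np.abs(y).max())
print(f"inter-sample peak: +{peak_dbfs:.2f} dBFS")   # ~ +3.01 dBFS
```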
     
  8. Rob Watts
     
    I found this faintly amusing. Yes, all FIR filters require an overload margin embedded in the data, to handle Gibbs phenomena. I have always had an internal overload margin for the last twenty years, and it's just a question of simple competency. But that said, you would be surprised how many devices do not have any overload margin.
     
    Is it a big sound quality issue? Maybe not, as a lot of recordings are clipped anyway. And those that are not clipped or compressed have lots of headroom normally anyway.
     
    But I test with random noise at 0 dBFS and maximum-frequency square waves, and the filter must not overload on interpolation samples. It's just a question of simple competency - nothing to create a fuss over.
     
    Rob
     
  9. Beolab

    Thanks Rob!

    You should write a book of your memoirs, including your technical knowledge and groundbreaking solutions, for posterity :clap::sunglasses:
     
  10. audionewbi
    I think Rob really doesn't consider that aspect of his design groundbreaking, just the bare minimum done right. Perhaps what other brands consider groundbreaking should really be the bare minimum of doing the design right.
     
  11. ecwl
    I really appreciate all the insights Rob Watts provided on DAC designs in this forum. Because it's not only about that. It's also about what to listen for when the measured performance is improved when comparing different DACs. Moreover, the experimentation during the design phase of Chord DAVE truly has a lot more implications for audiophiles beyond DACs. The fact that lower noise floor and improved noise floor modulation are audible beyond previously assumed thresholds of hearing naturally implies that different signal cables, preamplifiers and power amplifiers with varying levels of noise / dynamic range performances matter.  Similarly, the audibility of dynamic transients and timing based on variations in oversampling and tap length also implies that the so-called speed of an amplifier, aka. the ability to reproduce transients, probably matters beyond just having a non-clipping amplifier with flat frequency response between 20-20kHz and low levels of THD.
     
  12. Beolab
    Yes, but I don't think we have many other DAC designers in the world with Rob's in-depth experience. Call it whatever you want, but you need to think of so many aspects when you are designing, and if you have thought about all known aspects, then it is a true achievement in my book.
     
  14. Jawed
    I suspect Rob is still learning. Which is fun and rewarding.
     
  15. romaz
    If you guys are interested in a really good read, just check out all of Rob's posts -- you'll come out feeling a lot smarter. I've compiled some of Rob's comments that are among my favorites (if you are not able to locate certain comments I have attributed to Rob as comments he has publicly made on Head-Fi, it is because some of the comments were made privately to me). Some comments were made with respect to the Mojo but should apply equally to the DAVE. Consider some of his answers as best practices with the DAVE.

    What is most important with the DAVE?

    In simple terms it's about resolution first, then less jitter sensitivity, lower distortion and noise.

    Subjectively the resolution gives better depth perception, and lower THD and noise gives smoother sound.


    Why do you believe SE is better than balanced for a DAC?

    Well this is a complex subject, and sometimes a balanced connection does sound better than single ended (SE) - in a pre-power context - but it depends upon the environment, the pre and power amps, and the interconnect. But the downside of balanced is that you are doubling the number of analogue components in the direct signal path, and this degrades transparency. In my experience every passive component is audible, and every metal-to-metal interface (including solder joints - I once had a lot of fun listening to solder) has an impact - in the case of metal/metal interfaces it degrades detail resolution and the perception of depth. So going balanced will have a cost in transparency.

    In DAC design, going balanced is essential with silicon design; there is simply too much substrate noise and other effects not to. But with discrete DACs you do not need to worry about this, so going SE on a discrete DAC is possible, and is how all my DACs are done. But differential operation hides certain problems (notably in the reference circuit) that have serious SQ effects; going SE means those problems are exposed, which forces one to solve them fundamentally. In short, to make SE work you have to solve many more problems, but solving those problems fixes SQ issues that differential operation merely hides when you do measurements.

    In the case of Dave, I have gotten state of the art measured performance - distortion harmonics below -150 dB, zero measurable noise floor modulation - and there is no way you could do this with a differential architecture. So it is possible to have better measured performance with SE than differential, but it is a lot harder to do it - indeed, the only way of getting virtually zero distortion and noise floor modulation is SE.


    What are distortion figures for the DAVE?

    Distortion components are all below -150dB, so better than 24 bits. Noise is 21 bits. But these numbers, although very important, won't tell you how good it sounds. Noise floor modulation, which is important, is un-measurable, and with my APX555 the noise floor is at -180dB.
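Rob's "better than 24 bits" and "21 bits" figures follow from the standard ideal-quantisation relation, dynamic range ≈ 6.02·bits + 1.76 dB:

```python
def dynamic_range_db(bits: float) -> float:
    """Ideal dynamic range of a b-bit quantiser (full-scale sine vs. noise)."""
    return 6.02 * bits + 1.76

def equivalent_bits(db: float) -> float:
    """Invert the relation: how many bits a given dB figure corresponds to."""
    return (db - 1.76) / 6.02

print(equivalent_bits(150))      # ~24.6 -> harmonics below -150 dB beat 24 bits
print(dynamic_range_db(21))      # ~128 dB -> "noise is 21 bits"
```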

    Why do vocals sound so good on the DAVE?

    The simple answer to why vocals sound so good on Mojo is complete lack of noise floor modulation.


    What is noise floor modulation?

    What is noise floor modulation? When a sine wave signal is used in a DAC, you get different types of distortion - harmonic distortion (distortion at integer multiples of the sine wave fundamental), inharmonic distortion (distortion products at non-integer multiples) and changes to the noise level. So for example you may have a DAC that produces noise at -120dB with a -60dB sine wave (the traditional dynamic range test), but the noise with a 0dB sine wave may be -115dB - thus the noise has increased by reproducing a higher level sine wave - in this case the noise floor (seen by doing an FFT measurement) would increase by 5dB.
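One way to sketch that measurement: FFT the output at two signal levels, notch out the test tone, and compare the residual floors. The `toy_dac` below deliberately has level-dependent noise so there is something to see; it is an illustrative defect model, not how any Chord (or other real) DAC behaves:

```python
import numpy as np

N, K0 = 1 << 16, 1024                  # FFT length; tone placed on an exact bin

def noise_floor_db(y: np.ndarray) -> float:
    """Average FFT noise floor with the test tone notched out."""
    win = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)   # periodic Hann
    spec = np.abs(np.fft.rfft(y * win)) / (N / 2)
    spec[K0 - 4 : K0 + 5] = 0.0        # remove the tone and its window skirt
    return 20 * np.log10(np.sqrt(np.mean(spec[1:] ** 2)) + 1e-30)

def toy_dac(x: np.ndarray, rng) -> np.ndarray:
    """Hypothetical DAC whose noise grows with signal level (the defect)."""
    return x + (1e-6 + 1e-5 * np.abs(x)) * rng.standard_normal(len(x))

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * K0 * np.arange(N) / N)
nf_full = noise_floor_db(toy_dac(1.0 * tone, rng))    # 0 dBFS test tone
nf_low = noise_floor_db(toy_dac(0.001 * tone, rng))   # -60 dBFS test tone
print(f"noise floor rose by {nf_full - nf_low:.1f} dB at full scale")
```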

    Now noise floor modulation is highly audible - it interferes with the brain's processing of data from the ear - and immeasurably small levels of noise floor modulation are audible. I know this as I have listened to noise floor modulation at around -200dB - these numbers are derived from simulation - and heard the effect when the noise floor modulation mechanism was switched on and off.



    What does noise floor modulation sound like?

    Noise floor modulation is extremely important subjectively - you perceive the slightest amount as a brightness or hardness to the sound. When it gets bad, you hear glare or grain in the treble.

    Less noise floor modulation, smoother sound quality. The curious thing about this is that the brain is very sensitive to it, so you can easily hear it. Problem is that many listeners hear the brightness as more detail resolution, and so think it sounds better - but that's another story.


    What is clock jitter, total jitter, source jitter?

    Clock jitter is timing uncertainty (or inaccuracy) on the main clock that is feeding the digital outputs. It's often expressed as cycle-to-cycle jitter as an RMS figure, but can be total jitter, which includes low frequency jitter too. Total jitter is the most important specification. If you want, here is a good definition:

    https://en.wikipedia.org/wiki/Jitter

    As you can see, the jitter subject can get complicated, and it's often abused by marketing...

    But with all of my DACs you do not need to worry at all about source jitter, so all of the above AK numbers are fine. So long as it's below 2 µs (that is, 2,000,000 pS) you are OK, and nobody has jitter that bad!

    1. SPDIF decoding is all digital within the FPGA. The FPGA uses a digital phase lock loop (DPLL) and a tiny buffer. This re-clocks the data and eliminates the incoming jitter from the source. This system took 6 years to perfect, and means that the sound quality defects from source jitter are eliminated. How do I know that? Measurements - 2 µs of jitter has no effect whatsoever on measurements (and I can resolve the noise floor at -180dB with my APX555) - and sound quality tests against RAM buffer systems revealed no significant difference. You can (almost) use a piece of damp string and the source jitter will be eliminated.

    2. USB is isochronous asynchronous. This means that the FPGA supplies the timing to the source, and incoming USB data is re-clocked from the low jitter master clock. So again source jitter is eliminated.
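The buffer-plus-local-clock idea in points 1 and 2 can be sketched as a toy simulation: samples arrive with source jitter, a small FIFO absorbs it, and the output side reads at exact local-clock intervals, so the data is bit-perfect and the output timing carries no trace of the input jitter. (A real DPLL also tracks slow rate drift between the two clocks; this sketch assumes they run at the same nominal rate.)

```python
import random
from collections import deque

FS = 44100
N = 1000
samples = list(range(N))                      # bit-perfect incoming data

# source delivers each sample with up to +/-2 us of timing jitter
random.seed(1)
arrivals = sorted((i / FS + random.uniform(-2e-6, 2e-6), s)
                  for i, s in enumerate(samples))

buf = deque()
out = []
idx = 0
t = 5 / FS                                    # a few samples of buffer latency
for _ in range(N - 10):
    while idx < len(arrivals) and arrivals[idx][0] <= t:
        buf.append(arrivals[idx][1])          # producer side: jittered arrivals
        idx += 1
    out.append(buf.popleft())                 # consumer side: exact local clock
    t += 1 / FS

print(out == samples[: len(out)])             # True: data unchanged, jitter gone
```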


    Does the DAVE have a fancy FEMTO clock like other DACs to help reduce jitter?

    The issue of clocks is actually very complex - way more of a problem than simply installing femto clocks. People always want a simple answer to problems, even when the problem is multi-dimensional and complex. I will give you some examples of the complexities of this issue.

    Some years back a femto clock became available, and I was very excited about using it as it had a third of the cycle to cycle jitter of the crystal oscillators we were using. So I plugged it in, and listened to it. Unexpectedly, it sounded brighter and harder - completely the opposite of all the times I have listened to lower jitter. When you lower jitter levels in the master clock, it sounds smoother and warmer and more natural.

    So I did some careful measurements, and I could see some problems.

    The noise floor was OK, the same as before, and all the usual measurements were the same. But you could see more fringing on the fundamental, and this was quite apparent. Now when you do an FFT of, say, a 1 kHz sine wave, in an ideal world you would see the tone at 1 kHz, and each frequency bucket away from it the output would be the system's noise floor. That is, you get a sharp single line representing the tone. But with a real FFT you get smearing of the tone, due to the windowing function employed by the FFT and jitter problems within the ADC, so instead of a single line you get a number of lines with the edges tailing off into the noise. This is known as side lobes or fringing. Now one normally calibrates the FFT and the instrument so you know what the ideal should be. With a DAC that has low frequency jitter, you get more fringing. Now I have spent many years on jitter and eliminating its effects on sound quality, and I know that fringing is highly audible, as I have done many listening tests on it. What is curious is that it sounds exactly like noise floor modulation - so reducing fringing is the same as reducing noise floor modulation - both subjectively sound smoother and darker with less edge and hardness.

    So a clock that had lower cycle-to-cycle jitter actually had much worse low frequency jitter, and it was the low frequency jitter that was causing the problem, with serious sound quality consequences. So a simple headline statement of low jitter is meaningless. But actually the problem is very much more complex than this.
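The fringing observation is reproducible as a toy FFT experiment: a clean tone on an exact bin shows essentially no skirt, while the same tone with slow random phase wander (a stand-in for low frequency jitter; the magnitude here is arbitrary) throws up sidebands many bins wide:

```python
import numpy as np

N, K0 = 1 << 14, 512                    # FFT length; tone on an exact bin
n = np.arange(N)
win = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # periodic Hann window

def skirt_db(phase: np.ndarray) -> float:
    """Worst leakage 10-50 bins away from the tone, relative to its peak."""
    x = np.sin(2 * np.pi * K0 * n / N + phase)
    spec = np.abs(np.fft.rfft(x * win))
    spec /= spec.max()
    return 20 * np.log10(spec[K0 + 10 : K0 + 50].max() + 1e-30)

clean = skirt_db(np.zeros(N))
rng = np.random.default_rng(0)
wander = np.cumsum(rng.standard_normal(N)) * 1e-4   # low-frequency phase drift
fringed = skirt_db(wander - wander.mean())
print(f"clean skirt {clean:.0f} dB, jittered skirt {fringed:.0f} dB")
```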

    What is poorly understood is that DAC architectures can tolerate vastly different levels of master clock jitter, and this is way more important than the headline oscillator jitter number. I will give you a few examples:

    1. DAC structure makes a big difference. I had a silicon chip design I was working on some years back. When you determine the jitter sensitivity you can specify this - so I get a number for incoming jitter, and a number for the output THD and noise that is needed. So initially we were working with 4pS jitter, and 120dB THD and noise. No problem, the architecture met this requirement, as you can create models to run simulations showing what the jitter will do - and you can run the model so only jitter is changed, nothing else. But then the requirement got changed to 15pS jitter. Again, no problem, I simply redesigned the DAC and then achieved these numbers. So it's easy to change the sensitivity by a factor of 4 just by design of the DAC itself - something that audio designers using chips can't do.

    2. DAC type has a profound effect on performance. The most sensitive is regular DSD or PDM, where jitter is modulation dependent, and you get pattern noise from the noise shaper degrading the output noise, plus distortion from jitter. R2R DACs are very sensitive as they create noise floor modulation from jitter proportionate to the rate of change of the signal (plus other problems due to the slow speed of switching elements). I was very concerned about these issues, and it's one reason I invented pulse array, as the benefit of pulse array is that the error from jitter is only a fixed noise (using a random jitter source with no low frequency problems). Now a fixed noise is subjectively unimportant - it does not interfere with the brain's ability to decode music. It's when errors are signal dependent that the problems of perception start, and with pulse array I only get a fixed noise - and I know this for a fact due to simulation and measurements.

    3. The DAC degrades clock jitter. What is not appreciated is that master clock jitter is only the start of the problem. When a clock goes through logic elements (buffers, level shifters, clock trees, gates and flip-flops, plus induced noise), every stage adds more jitter. As a rough rule of thumb a logic element adds 1pS of extra jitter. So a clock input of 1pS will degrade through the device to be effectively 4pS once it has gone through these elements (this was the number from a device I worked on some years ago). So it's the actual jitter at the DAC's active elements that is important, not the clock's starting jitter.
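The simulation approach mentioned in example 1 can be sketched directly: sample an ideal sine at jittered instants, compare with ideal sampling, and read off the SNR. The theoretical result is SNR = -20·log10(2π·f·σ), so scaling the jitter scales the noise proportionally. (The frequencies and jitter values below are illustrative, not from Rob's chip project.)

```python
import numpy as np

def jitter_snr_db(f_sig: float, sigma_j: float, fs: float = 705600,
                  n: int = 1 << 14, seed: int = 0) -> float:
    """SNR of a full-scale sine sampled with Gaussian clock jitter sigma_j."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    jittered = np.sin(2 * np.pi * f_sig * (t + sigma_j * rng.standard_normal(n)))
    ideal = np.sin(2 * np.pi * f_sig * t)
    err = jittered - ideal
    return 20 * np.log10((1 / np.sqrt(2)) / np.std(err))

# 4 pS vs 15 pS of jitter on a 10 kHz tone: the SNR drops by 20*log10(15/4)
snr_4ps = jitter_snr_db(10e3, 4e-12)
snr_15ps = jitter_snr_db(10e3, 15e-12)
print(f"{snr_4ps:.1f} dB at 4 pS, {snr_15ps:.1f} dB at 15 pS")
```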

    The benefit I have with Pulse Array is that the jitter has no sound-quality-degrading consequences - unlike all other architectures - as it creates no distortion or noise floor modulation. Because the clock is very close to the active elements (only one logic level away), the jitter degradation is minimal and there are no skirting issues at all. This has been confirmed with simulation and measurement - it's a fixed noise, and by eliminating the clock jitter (I have a special way of doing this) noise only improves by a negligible 0.5 dB (127 dB to 127.5 dB).

    This is true of all pulse array DACs, even the simpler 4e ones. In short, the jitter problem was solved many years ago, but I don't bleat on about it as it's not an issue, and because it's way too complex a subject to easily discuss.

    Pulse Array is a constant switching scheme - that is, it always switches at exactly the same rate irrespective of the data, unlike DSD, R2R, or current source DACs. This means that errors due to switching activity and jitter are not signal dependent, so it is innately immune from jitter creating distortion, noise floor modulation and any other signal-related errors. The only other DAC that has constant switching activity is the switched capacitor topology, but this has gain proportionate to absolute clock frequency - so it still has clock problems.

    I plan to publish more detailed analysis of this, but from memory all of my DACs have a negligible 0.5dB degradation due to master clock jitter, so it's a non-issue.

    And yes, you are correct, the absolute frequency is quite unimportant, so forget oven clocks, atomic clocks etc. Also the clock must be physically close to the active elements, with dedicated stripline PCB routing with proper termination. Running the clock externally is a crazy thing to do, as you are simply adding more jitter and noise and an extra PLL to the system.


    With the DAVE, does the quality of the source matter?

    Dave is insensitive to the digital source, assuming the data is bit perfect.

    How is the DAVE impervious to low quality source components like a basic laptop, to the extent that they can sound equivalent to a very expensive, purpose-built music server?

    Going back to when Hugo first came out, I noticed different SQ with different laptops and PCs.

    Now the problem is definitely not jitter from the source - my DACs can tolerate 2 µs of jitter with zero difference to the measurements - also the USB is isochronous asynchronous, so the timing comes from the DAC clock and source jitter is not a problem.

    So I looked into the issue of different SQ with sources and found two sources of error:

    1. RF noise. RF noise is a major pain with audio. With analogue electronics, very tiny amounts of RF noise will cause intermodulation distortion with the audio signal, and the intermodulation products are noise floor modulation.

    2. Correlated current noise. If a tiny current that is signal related but distorted enters the ground plane, then this current will be a source of error, as the current in the ground plane induces small voltages. These then add to or subtract from small signals, thus degrading small signal resolution - and this upsets the brain's ability to calculate depth. Now one of the most fascinating things I discovered with Dave is that there is no lower limit to this error - no matter how small it is, it will have an impact on depth perception.

    So the solution to the above problems is galvanic isolation. This means that RF noise from the source can't get into Dave, and small correlated currents can't get in either. And this approach gave two benefits - much smoother sound quality, and a deeper soundstage.

    Now with Dave I can no longer hear which source is connected, but before without the galvanic isolation it was easy to hear.


    USB is widely believed to be a noisy interface. Some music server companies (Baetis) suggest you should avoid USB at all costs and that SPDIF is superior. Does this apply to the Mojo or DAVE?

    Just to make it 100% clear - the USB input will measure absolutely identically to the coax or optical inputs if the USB data is bit perfect.

    I have set up my APX555 so that it uses the USB via ASIO drivers, and I get exactly the same measurements on all inputs - 125 dB DR, THD and noise of 0.00017% (3V, 1kHz, 300 ohms). I have done careful jitter analysis and FFT analysis down to Mojo's -175dB noise floor, and can measure no difference whatsoever on all inputs (with the APX always grounded on the coax).
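For reference, a THD+N figure given in percent converts to dB as 20·log10(THD/100), so 0.00017 % is about -115 dB:

```python
import math

def thd_percent_to_db(thd_percent: float) -> float:
    """Convert a THD(+N) ratio given in percent to dB."""
    return 20 * math.log10(thd_percent / 100.0)

print(round(thd_percent_to_db(0.00017), 1))   # -115.4 dB
```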

    If somebody does measure a difference, it's down to mangled data on the USB interface (or perhaps poor measuring equipment...)


    Which input sounds the best on the DAVE?

    With Dave the best input (by a tiny margin) is USB, then optical is very close. The BNC/AES depends upon the source and cabling.

    Does the DAVE's USB require 5V power?

    It needs the 5V to power the USB decoder chip - this is how the galvanic isolation works, as the isolation is on the decoded I2S data post USB.

    What USB cables are best?

    So what are the best USB cables? Firstly, be careful. A lot of audiophile USB cables actually increase RF noise and make it sound brighter and superficially impressive - but this is just distortion brightening things up. Going for USB cables that have ferrites in the cable is a good idea - it may also solve any RF issues you may have from a mobile source.

    What about USB purifiers/reclockers (USB Regen)?

    As to USB purifiers: for the DAVE, Hugo TT and 2Qute, don't bother, as they are galvanically isolated. And in any case it's nothing to do with jitter - it's about RF noise and signal-correlated noise upsetting Hugo's analogue electronics, not jitter, as source jitter is eliminated by the internal buffer and DPLL.

    What about SPDIF cables, will any SPDIF cable do?

    Sadly no. Mojo is a DAC, and that means it's an analogue component, and all analogue components are sensitive to RF noise and signal-correlated in-band noise, so the RF character of the electrical cables can have an influence. What happens is random RF noise gets into the analogue electronics, creating intermodulation distortion with the wanted audio signal. The result of this is noise floor modulation. Now the brain is incredibly sensitive to noise floor modulation, and perceives it as a hardness to the sound - easily confused with better detail resolution as it sounds brighter. Reduce RF noise, and it will sound darker and smoother. The second source is distorted in-band noise, which mixes with the wanted signal (a crosstalk source) and subtly alters the levels of small signals - this in turn degrades the perception of soundstage depth. This is another source of error to which the brain is astonishingly sensitive. The distorted in-band noise comes from the DAP, phone or PC internal electronics processing the digital data, with the maximum noise coming as the signal crosses through zero - all the digital data going from all zeroes to all ones. Fortunately mobile electronics are power frugal and create less RF and signal-correlated noise than PCs. Note that an optical connection does not have any of these problems, and is my preferred connection.

    Does this mean that high-end cables are better? Sadly, not necessarily. What one needs is good RF characteristics, and some expensive cables are RF-poor. Also note that if it sounds brighter it's worse, as noise floor modulation is spicing up the sound (it's the MSG of sound). So be careful when listening: if it's brighter it's superficially more impressive but in the long term musically worse. At the end of the day, it's musicality only that counts, not how impressive it sounds.


    Do AC cables make a difference?

    In the 1980s, people started talking about mains cables making a difference to the sound quality - and I didn't believe it either, particularly as my pre-amp had 300 dB of PSU rejection in the power supply. But I did a listening test, and yes, I could hear a difference. Frankly I still could not believe the evidence of my own ears, so I did a blind listening test with my girlfriend. She reported exactly the same observation - mains cables did make a difference to SQ.

    To cut a long story short, I proved the problem was down to RF noise. RF noise inter-modulates with the wanted audio signal within the analogue electronics, and if the RF noise is random, then the distortion is random too and you get an increase in noise floor with signal. This increase in noise floor is noise floor modulation, and the brain is very sensitive to it...


    Should you connect the DAVE to a line conditioner?

    Give RF filters a go. Dave has an incredible amount of RF filtering internally, but you may get a benefit for other gear with RF isolation. "If it sounds smoother and darker, it's better" is the rule here - it will also make dynamics seem quashed, but that's just the reduction in noise floor modulation.

    Is there a problem leaving the DAVE on 24/7 or is it best to put it into standby mode or turn it off completely at the end of each day?

    I leave both my Daves on all the time - but I am just lazy...

    Does the DAVE benefit from mechanical isolation (Stillpoints, etc)?

    Yes all products do.

    Should the HF filter be used for both hi-res files as well as 16/44?

    The HF filter is a sharp cutoff filter set to 60 kHz. The intention was to bandwidth limit high sample rate recordings - DXD and 384k have huge amounts of noise shaper noise from the ADC. This noise will degrade SQ by increasing noise floor modulation as the out of band noise creates intermodulation distortion with the wanted audio signal in the analogue electronics.

    Now it works very well; using it makes things sound smoother and darker - exactly what you get from lower noise floor modulation. But the curious thing is that it also sounds better with 44.1k - curious because the WTA filter typically has a stop band attenuation of 140 dB (worst case 120 dB), so out-of-band noise is very low with 44.1k and I was not expecting a SQ change with the filter on CD. The filter is not something added; it's just a different set of coefficients for the 16FS to 256FS WTA filter.


    On upsampling the source (i.e. HQPlayer) with Chord products:

    Oh dear. Do NOT use your computer to up-sample or change the data when you use one of my DAC's.

    All competent DACs up-sample and filter internally; the issue is how well that filtering is done, in terms of how well the timing of transients is reconstructed from the original analogue. Computers are poor devices for manipulating data in real time, as they are concurrent serial devices - everything has to go through one to eight processors in sequence. With hardware and FPGAs you do not need to do that; you can do thousands of operations in parallel. Dave has 166 DSP cores, with each core able to do one FIR tap in one clock cycle. That is incredibly powerful processing - way more than a PC.

    But its not just about raw processing power but the algorithm for the filter. The WTA filter is the only algorithm that has been designed to reduce timing of transients errors, and the only one that has been optimised by thousands of listening tests.

    So the long and the short of it is: don't let the source mess with the signal (except perhaps with a good EQ program) and let Mojo (or DAVE) deal with the original data, as Mojo (or DAVE) is way more capable.


    On why you shouldn't upsample PCM to DSD and why PCM sounds better than DSD:

    DSD as a format has major problems with it; in particular it has two major and serious flaws:

    1. Timing. The noise shapers used with DSD have severe timing errors. You can see this easily using Verilog simulations. If you use a step-change transient (output is zero, then goes high) with a large signal, then do the same with a small signal, you get major differences in the analogue output - the large signal has no delay, the small signal has a much larger delay. This is simply due to the noise shaper requiring time for the internal integrators to respond to the error. This amplitude-related timing error is of the order of microseconds and is very audible. Whenever there is a timing inaccuracy, the brain has problems making sense of the sound, and perceives the timing error as a softness to the transient; in short, timing errors screw up the ability to hear the starting and stopping of notes.

    2. Small signal accuracy. Noise shapers have problems with very small signals, in that the 64-times 1-bit output (DSD 64) does not have enough innate resolution to accurately resolve small signals. What happens when small signals are not properly reproduced? You get a big degradation in the ability to perceive depth information, and this makes the sound flat, with no layering of instruments in space. Now there is no limit to how accurate the noise shaper needs to be; with the noise shaper in Mojo I have 1000 times more small signal resolution than conventional DACs - and against DSD 64 it's 10,000 times more resolving power. This is why so many users have reported that Mojo has so much better space and sounds more 3D with better layering - and it's mostly down to the resolving power of the pulse array noise shaper. This problem of depth perception is unlimited, in the sense that to perfectly reproduce depth you need unlimited resolving power in the noise shaper.

    So if you take a PCM signal and convert it to DSD you hear two problems - a softness to the sound, as you can no longer perceive the starting and stopping of notes; and a very flat sound-stage with no layering as the small signals are not reproduced accurately enough, so the brain can't use the very small signals that are used to give depth perception.

    So to conclude: yes, I agree, DSD is fundamentally flawed - and unlike PCM, where the DAC is the fundamental limit, with DSD the limit is in the format itself.
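The small-signal flaw is easy to demonstrate with even the crudest noise shaper: a first-order 1-bit sigma-delta leaves a roughly fixed absolute noise floor, so the relative error explodes as the signal shrinks. (This toy modulator is far simpler than any real DSD encoder or Rob's pulse-array noise shaper; it only illustrates the mechanism.)

```python
import numpy as np

def dsm_1bit(x: np.ndarray) -> np.ndarray:
    """Toy first-order 1-bit sigma-delta modulator."""
    integ, y = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        y[i] = 1.0 if integ >= 0 else -1.0   # 1-bit quantiser
        integ += s - y[i]                    # integrate the quantisation error
    return y

N = 1 << 15
tone = np.sin(2 * np.pi * np.arange(N) / 512)
taps = np.ones(64) / 64                      # crude boxcar decimation filter

def relative_error(level: float) -> float:
    """RMS reconstruction error relative to the signal level."""
    x = level * tone
    recon = np.convolve(dsm_1bit(x), taps, "valid")
    ideal = np.convolve(x, taps, "valid")
    return float(np.sqrt(np.mean((recon - ideal) ** 2))) / level

big, small = relative_error(0.5), relative_error(0.0005)
print(f"relative error: {big:.3f} at -6 dB, {small:.1f} at -66 dB")
```

The absolute noise floor is roughly fixed, so the relative error at -66 dB is orders of magnitude worse than at -6 dB - the flat-soundstage mechanism described above.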


    And my favorite comment from Rob (this one is regarding the Mojo):

    I was kind of annoyed that some people were comparing it to $100 DACs when the true competitors were $100K.
     