Chord Mojo(1) DAC-amp ☆★►FAQ in 3rd post!◄★☆
Apr 17, 2016 at 12:13 PM Post #16,171 of 42,765
   
1. Audio mastering needs to be improved, but for this to happen it needs a steady target to aim for (rather than having to cater to everything from mono boomboxes to car stereos to audiophile systems in one recording).

2. Accordingly, a new audiophile music standard needs to be put forward that segregates the responsibilities of audio mastering and audio playback correctly; for a start dynamic compression needs to be specified as a standard playback parameter that can be switched on and adjusted on the playback end to cater to different playback equipment capabilities and listening environments. Equalization and room correction capabilities need to become standard so that mastering engineers can simply aim for the best sound in the studio environment (which should also be standardized), while the wildly varying end-user listening setups can intelligently do their best to match the studio sound, rather than the other way around.

3. A 2nd version of all albums, mastered for binaural (headphone listening) ought to become standard. (I'm sure all head-fiers can get behind that!) For old albums mastered for stereo only, headphone listening systems ought to be updated with speaker system virtualization software that goes beyond the presently common primitive crossfeed options. Darin Fong's OOYH software is a good start. http://www.head-fi.org/t/689299/out-of-your-head-new-virtual-surround-simulator Here's my own humble attempt: http://www.head-fi.org/t/555263/foobar2000-dolby-headphone-config-comment-discuss/810#post_12496793

4. A whole industry of consumer-oriented audio engineering needs to be built from the ground up. For loudspeaker systems it entails proper room setup and speaker calibration by trained professionals rather than end-users all trying to do their own thing. For headphone systems it entails widespread adoption of HRTF measurements a la those done for the Smyth Realiser: http://www.head-fi.org/t/418401/long-awaited-smyth-svs-realiser-now-available-for-purchase

The latter would be an alternative to (3) and Smyth Realiser is in the High-End audio forum for good reason. Most every Realiser user would tell you it makes a joke of all talk of headphone "soundstage" and "realism" on conventional headphone systems. Individual HRTF measurements are necessary because of the wild acoustic variations between individuals when wearing headphones.

5. Audiophile headphones should come standard with compensation curves for arriving at a neutral reference. For (4) the HRTFs should be recorded as deviations from the KEMAR dummy head reference, so that corrections can be applied to the compensation curve to arrive at the studio-intended sound for every listener, using whatever headphones. Software to apply such corrections should come as standard on any audiophile music player for portable use.

But as you can see, every point involves sweeping changes to the audio industry; I'm not sure there's any money to be made from it, and it seems obvious that the majority of the target market won't even appreciate the reasons behind such changes if and when they are proposed. It would need to be proposed as a whole new system for everything from recording and mastering to playback. Everyone would have their own slightly different version of the underlying ideas, and it would be very difficult to arrive at a universally adopted standard.
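Regarding points 3 and 5: the playback-side processing described there boils down to convolution plus EQ. Here is a minimal Python sketch of what a basic software virtualiser could look like - convolve each channel with binaural room impulse responses (BRIRs), then apply a headphone compensation FIR. The file names and the compensation curve are placeholders, not real measurements, and the soundfile library is assumed to be available.

```python
import numpy as np
import soundfile as sf                      # assumed available; any WAV reader works
from scipy.signal import fftconvolve, firwin2

# Placeholder BRIRs: left/right virtual speaker to left/right ear (file names hypothetical).
# Each file is assumed mono, and all four impulse responses are assumed to share one length.
brir = {
    "L": {"L": sf.read("brir_L_to_left.wav")[0], "R": sf.read("brir_L_to_right.wav")[0]},
    "R": {"L": sf.read("brir_R_to_left.wav")[0], "R": sf.read("brir_R_to_right.wav")[0]},
}
x, fs = sf.read("album_track.wav")          # stereo source, shape (n, 2)

# Each ear hears both virtual speakers through its own room/head response.
left_ear  = fftconvolve(x[:, 0], brir["L"]["L"]) + fftconvolve(x[:, 1], brir["R"]["L"])
right_ear = fftconvolve(x[:, 0], brir["L"]["R"]) + fftconvolve(x[:, 1], brir["R"]["R"])

# Point 5: a headphone compensation curve applied as a linear-phase FIR.
# The frequency/gain points below are made-up placeholders, not a real headphone's data.
freq    = [0, 1000, 3000, 8000, fs / 2]
gain_db = np.array([0, 0, -4, 2, 0])
comp = firwin2(1025, freq, 10 ** (gain_db / 20), fs=fs)

out = np.stack([fftconvolve(left_ear, comp), fftconvolve(right_ear, comp)], axis=1)
out /= max(np.max(np.abs(out)), 1e-9)       # crude peak normalisation to avoid clipping
sf.write("track_virtualised.wav", out, fs)
```

A real system (OOYH, the Realiser) adds individually measured HRTFs and head tracking; the point here is only that the playback-side signal chain itself is straightforward once the measurements exist.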

 
 
Valid points throughout, and I totally agree on the mastering issue. From what I've gathered from some of Chord's (@Rob Watts ) comments, they are trying to bring the technology learned from developing the DAVE over to the ADC side of things, which would probably improve or even eliminate some of the issues above entirely.
If a DAC of Chord's caliber can extract and improve upon the details, imagine what they could accomplish by putting that technology into the recording studios as the ADC. We could even go as far as to say that with a specialized DAC, we could extract differing levels of audio quality for the differing listening criteria you listed above.
I'm no expert but these are my thoughts on the matter above.

 
 
 
 
 
 
 
   
 
 
Vinyl rips have their fans, mostly because many consider vinyl to sound superior to digital formats.
 
But the problem is that as soon as you rip vinyl to digital, you are taking away the very thing that supposedly makes it superior - its pure analogue quality.
 
Added to this, one should bear in mind that anyone doing a vinyl rip is unlikely to have a studio-grade ADC (analogue-to-digital converter).
 
So, in my opinion, vinyl rips are just absurdly stupid. I'd much rather have a digital file made, using a studio-grade ADC, from the original studio analogue master tape, than have someone at home, no matter how excellent their record deck may be, converting a vinyl record to a digital file.
 
 
 
In any case, I am really, really looking forward to the day (quite soon) when Rob Watts gets a high tap-count ADC into some commercial studios, so that some analogue master-tape albums can be remastered to digital using his excellent digital conversion approach - these particular remasters should sound very substantially better than any other digital masters or remasters produced thus far.


 
 
Regarding the comment on Rob Watts putting an ADC into some commercial studios, is this a wish, or has there been confirmation that he actually has this in the pipeline?
If so, sign me up at once

 

 
What follows is just a taster, to give you some pointers about Rob's future ADC. I don't want to derail this, the Mojo thread, so if you have further questions about the ADC, it'd be better to ask Rob about it in the DAVE thread.

 
 
I am currently designing an ADC that will match Dave's performance and solve a number of issues that plague conventional ADCs - notably huge noise floor modulation, poor anti-aliasing filters, and poor noise shaper performance.
 
I know from the work with Dave that the perception of depth needs noise shapers of astounding accuracy; indeed, Dave ended up with 350 dB performance noise shapers, in order to ensure that small signals are resolved with zero error - from listening tests, this is needed to ensure the brain can perceive depth correctly.
 
Now I have designed an ADC noise shaper that exceeds 350 dB performance (note these numbers are digital domain performance only, so it is an idealised noise shaper - I am looking at the THD and noise of the noise shaper only). To test the noise shaper I can run Verilog simulations, capture the data, then do an FFT on the data, and then check the results. Before I did that, I thought it would be a good idea to run a similar simulation with Dave's noise shaper. In this case, I am trying to evaluate whether it can accurately encode very small signals, so I am using a -301 dB sine wave at 6 kHz. If it can resolve a signal at -301 dB, then we can safely say that small signals are accurately encoded, at least in the digital domain.
 
So here are the results:
 
 

 
So this is the digital domain performance of the Dave noise shaper, and frequency is from DC to 100kHz (0.1 MHz).
 
The 6 kHz signal is perfectly reconstituted at -301 dB. You can see a flat line at -340 dB, but this is just an FFT issue. The real noise floor at 15 kHz is at -380 dB, which is about 100 trillion times lower noise than conventional high performance noise shapers. Note also the noise at 100 kHz is at -200 dB - that is extraordinarily low for a noise shaper, and shows why I need to do little filtering on the analogue side.
 
-301 dB is better than 50 bits of accuracy.
 
Now to write the code for the ADC!
 
Rob
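For anyone curious how such a check reads out of an FFT, here is a rough numpy sketch of the digital-domain part only: generate a -301 dBFS sine at 6 kHz, window it, FFT it, and read back the peak. The 768 kHz rate and FFT length are chosen purely so the tone lands exactly on an FFT bin; this only illustrates the measurement idea, not Rob's Verilog capture or his noise shaper.

```python
import numpy as np

fs, n = 768000, 1 << 16                 # 6 kHz falls exactly on bin 512 at these settings
amp = 10 ** (-301 / 20)                 # -301 dBFS ~ 8.9e-16, still representable in float64
x = amp * np.sin(2 * np.pi * 6000 * np.arange(n) / fs)

win = np.blackman(n)
spec = np.abs(np.fft.rfft(x * win)) * 2 / win.sum()   # scale so a full-scale sine reads 1.0
db = 20 * np.log10(np.maximum(spec, 1e-300))

print(f"peak at 6 kHz:        {db[512]:8.1f} dBFS")   # ~ -301 dBFS
print(f"floor far from tone:  {db[5000]:8.1f} dBFS")  # FFT/window floor, far below the tone
```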

 
Davina is the first ADC, which is for analogue inputs, so you can listen to vinyl at 768k and record the album at 44.1 at the same time. But really the motivation for the product is a first step towards a pro audio interface, so that pro recording can be done.
Rob

 
.... on the ADC (project code word Davina), it's a project that I have been working on for a long time (actually the first prototype was in 2001). There are a number of key things happening that conventional ADCs don't do well - noise floor modulation, aliasing, and noise shaper resolution. The noise floor modulation issue was solved way back in 2001. Aliasing is a major problem - normal ADC decimation filters are half band, so offer worst case only -6 dB rejection. But I have used -140 dB decimation filters, and can still hear the effects of aliasing. Fortunately it's not difficult to design a filter that has no aliasing, it's just FPGA resources. On the noise shaper side, getting Dave-standard performance (350 dB) is not a problem, I have already designed that noise shaper.
 
We will be doing test recordings later this year, so I will publish test samples too on Head-Fi. I too am very excited about the sound quality possibilities of the ADC.
 
Rob
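The "-6 dB worst case" point is easy to see with a generic half-band filter. The sketch below (scipy, arbitrary parameters) designs a textbook half-band low-pass for decimation by 2 and checks its rejection at and just above the new Nyquist frequency: anything sitting just above the band edge folds back into the top of the band with only a few dB of attenuation. This is the generic half-band property, not a statement about any particular ADC.

```python
import numpy as np
from scipy.signal import firwin, freqz

# 63-tap half-band low-pass: cutoff at a quarter of the (pre-decimation) sample rate.
taps = firwin(63, 0.5)                      # frequencies normalised so original Nyquist = 1.0
odd = taps[1::2]
assert np.allclose(np.delete(odd, 15), 0)   # every second tap (except the centre) is ~zero

# Rejection at the new Nyquist (0.5) and slightly above it, where content aliases back.
freqs = np.array([0.50, 0.51, 0.52, 0.55])
_, h = freqz(taps, worN=freqs, fs=2.0)
for f, mag in zip(freqs, np.abs(h)):
    print(f"|H| at f = {f:.2f} (new Nyquist = 0.50): {20 * np.log10(mag):6.1f} dB")
# at exactly 0.5 the response is about -6 dB by half-band symmetry, and rejection
# immediately above the band edge is still only a handful of dB
```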

 
  One of the good things about the Davina project is that I will have clear answers to these problems.
 
Firstly, timing. The problem that Dave is solving - and it's a very important problem, which exists only because the music has been sampled - is the reconstruction of the timing of transients. Now with a bandwidth-limited signal (that is, zero output at 22.05 kHz and above), an infinite-tap FIR filter with a sinc function for the coefficients would perfectly recover the missing waveform that was within the ADC before it was sampled. So if we have a DAC that has an interpolation filter that is "good enough" - that is, double the taps and you hear no difference, and halve the time from one output sample to the next and you still hear no change - then we will be left with a perfect reconstruction filter, and the DAC will effectively perfectly re-create the signal as it was before it was sampled. What we will hear is the bandwidth-limited signal. Now my question is - will bandwidth limiting within the ADC change the SQ? This I will find out from Davina, and I can test this without using decimation, so I will know this aspect for sure.
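As a generic illustration of the "infinite tap FIR with sinc coefficients" idea Rob describes (textbook Whittaker-Shannon reconstruction, truncated to a finite record - emphatically not the WTA filter), a small numpy sketch:

```python
import numpy as np

fs = 44100.0
n = np.arange(256)
# a band-limited test signal, sampled at fs (all content well below Nyquist)
x = np.sin(2 * np.pi * 2000 * n / fs) + 0.3 * np.sin(2 * np.pi * 9000 * n / fs)

def sinc_reconstruct(samples, t):
    """Whittaker-Shannon: x(t) = sum_k x[k] * sinc(fs*t - k)."""
    k = np.arange(len(samples))
    return samples @ np.sinc(fs * t[None, :] - k[:, None])

# evaluate on an 8x denser time grid, away from the record edges (truncation error lives there)
t_fine = np.arange(64 * 8, 192 * 8) / (8 * fs)
x_hat = sinc_reconstruct(x, t_fine)

# at the original sample instants the reconstruction returns the samples essentially exactly,
# and in between it recreates the band-limited waveform that existed before sampling
print("max error at the original sample points:", np.max(np.abs(x_hat[::8] - x[64:192])))
```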
 
The second issue is amplitude accuracy. Now depth perception requires zero error in small signal accuracy - the smallest error in amplitude, no matter how small, seems to confuse the brain, and so it can't calculate the depth correctly, and we then see a degradation in the perceived depth. Now with Dave the small signal performance of the noise shaper allows a -301dB signal to be reproduced perfectly - that's way better than 50 bits, and actually more like 64 bit accuracy. So how do I encode 64 bit amplitude linearity within a 16 bit system at 44.1? Will triangular dither do it? In principle it will. Normally I use noise shapers to guarantee 64 bit audio performance, but although this works at 768 kHz, it won't work effectively at 44.1 kHz. Again, this is an aspect that I will find out from the Davina project.
 
Rob
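On the "will triangular dither do it?" question, here is a toy numpy demonstration at a far more modest level than Rob is discussing: a 1 kHz tone roughly 20 dB below the 16-bit LSB is lost by plain rounding but survives TPDF-dithered quantisation. The parameters are arbitrary; this only illustrates the principle that dither linearises quantisation of sub-LSB signals.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 44100, 1 << 18
t = np.arange(n) / fs
x = 10 ** (-110 / 20) * np.sin(2 * np.pi * 1000 * t)   # ~20 dB below the 16-bit LSB

q = 1.0 / 32768                                # 16-bit step size (full scale = +/-1)
tpdf = (rng.random(n) - rng.random(n)) * q     # triangular-PDF dither, +/-1 LSB peak

plain    = np.round(x / q) * q                 # undithered: the tone rounds away to silence
dithered = np.round((x + tpdf) / q) * q        # TPDF-dithered 16-bit quantisation

win = np.hanning(n)
b = round(1000 * n / fs)                       # FFT bin nearest 1 kHz
for name, y in (("undithered", plain), ("TPDF dithered", dithered)):
    spec = np.abs(np.fft.rfft(y * win)) * 2 / win.sum()
    level = 20 * np.log10(max(spec[b - 2:b + 3].max(), 1e-20))
    print(f"{name:14s} level near 1 kHz: {level:7.1f} dBFS")
# undithered prints only the numerical floor (the tone is gone); dithered shows the tone
# at about -110 dBFS, sitting above a benign, signal-independent noise floor
```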

 
 
More such posts here:
 
 
www.head-fi.org/newsearch/?search=adc&resultSortingPreference=recency&byuser=rob+watts&output=posts&sdate=0&newer=1&type=all&containingthread[0]=766517&advanced=1

 
 
Apr 17, 2016 at 12:16 PM Post #16,172 of 42,765
So after some quite extensive testing this weekend I have decided that, to me, there is a difference between transports. My Cayin N6 connected with coax sounds a lot nicer than my phone with OTG; I'm not sure if it's the lack of interference, but I swear I could hear more from it.
 
This is bad for me, as I wanted to sell the N6, but no longer.
 
Apr 17, 2016 at 12:20 PM Post #16,173 of 42,765
So after some quite extensive testing this weekend I have decided that, to me, there is a difference between transports. My Cayin N6 connected with coax sounds a lot nicer than my phone with OTG; I'm not sure if it's the lack of interference, but I swear I could hear more from it.
 
This is bad for me, as I wanted to sell the N6, but no longer.

 
 
OK, so the next obvious step for you to (cheaply) try is 1 or 2 ferrite RF chokes on your USB OTG cable.
 
See if you can tell any difference!
 
 
 
Also, I don't know how experienced you are, so forgive me for mentioning it, but do please be sure you're not accidentally upsampling the data signal on the phone, before it gets sent to Mojo.
 
Apr 17, 2016 at 12:20 PM Post #16,174 of 42,765
So after some quite extensive testing this weekend I have decided that, to me, there is a difference between transports. My Cayin N6 connected with coax sounds a lot nicer than my phone with OTG; I'm not sure if it's the lack of interference, but I swear I could hear more from it.
 
This is bad for me, as I wanted to sell the N6, but no longer.

 
Did you try listening to your phone in airplane mode? Shutting down the cellular radio can eliminate some noise (and defeat the purpose of having a phone as a transport at the same time). Nonetheless, shutting down the radios can eliminate extra noise.
 
Apr 17, 2016 at 12:37 PM Post #16,175 of 42,765
There has been some recent discussion about digital filters, in particular closed form mathematics. There is a lot of confusion about what is actually happening, and this is not surprising - filter design is complex, and people talk about things that they have little real understanding of.
 
Indeed, the more time and work I spend in audio, the more I realise how much more there is to know - we are all scratching at the surface, so some humility is needed. "You know nothing Jon Snow" is my favourite quote from Game of Thrones, and I often bear it in mind when thinking about audio, and how to relate something I hear with theory.
 
Now there are two things that are talked about with closed form filter design - one being that the filter coefficients (these are fixed at the design of the filter) are calculated with a closed form algorithm, which just means that there is a formula to calculate the numbers. The second issue is that the original samples are preserved.
 
Now most FIR filter algorithms are closed form. The exception, as pointed out by a poster earlier, is Parks-McClellan, which uses the Remez algorithm to iteratively calculate the optimal solution for the coefficients. It is not a closed form calculation, as it cleverly runs backwards and forwards until it converges onto the desired result. Now is closed form a good or a bad idea? Frankly, it does not matter how the coefficients are calculated; it's what those coefficients are, and what they sound like, that is important. Now I don't like the Parks-McClellan algorithm, as it does not maximise rejection at the points where there is the most out-of-band energy, which is at FS multiples. And it's not very good at recovering timing information for the intermediate samples you are trying to create. But it is not the closed form or iterative process that is important here. Now the WTA algorithm is closed form - you can calculate the ideal coefficients to as much accuracy as you like with one fixed equation. But whether it is closed form or not is just unimportant.
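For readers who want to see the shape of the trade-off being described, the sketch below (scipy, arbitrary parameters) designs an 8x-oversampling low-pass two generic ways - Parks-McClellan/Remez equiripple and a windowed sinc - and reads off the rejection at multiples of 44.1 kHz, which is where the images of low-frequency (high-energy) content land after zero-stuffing. The equiripple design spreads the same rejection evenly across the stopband rather than concentrating it at those points. Neither design is the WTA filter; this is purely generic filter behaviour.

```python
import numpy as np
from scipy.signal import remez, firwin, freqz

fs_in, os = 44100, 8
fs_out = fs_in * os                       # 352.8 kHz after zero-stuffing
numtaps = 255

# equiripple (Parks-McClellan / Remez): pass 0-20 kHz, stop from 24.1 kHz upwards
h_pm = remez(numtaps, [0, 20000, 24100, fs_out / 2], [1, 0], fs=fs_out)
# windowed sinc of the same length for comparison
h_win = firwin(numtaps, 22050, window="blackman", fs=fs_out)

# images of near-DC content sit at multiples of the original sample rate
image_freqs = np.array([1, 2, 3, 4]) * fs_in
for name, taps in (("Remez equiripple      ", h_pm), ("Blackman windowed sinc", h_win)):
    _, h = freqz(taps, worN=image_freqs, fs=fs_out)
    print(name, "rejection at k*44.1 kHz:", np.round(20 * np.log10(np.abs(h)), 1))
# the equiripple stopband is flat (the same rejection at every image), while the windowed
# design's rejection keeps increasing with frequency; neither puts extra rejection
# specifically at the fs multiples
```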
 
The second issue is exactly maintaining the original samples. Now the vast majority of FIR filters for audio are known as half band filters, and to create an 8-times oversampling filter you use a cascade of 3 half band filters. These are guaranteed by design to give back the original data, and they are used because they are computationally efficient, as half the coefficients are zero - you simply return the original sample, no maths. Most are designed with Parks-McClellan, so the issue of closed form actually has nothing to do with retaining the original sample data.
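The "original samples come back untouched" property is easy to verify for one half-band stage (a cascade of three such stages gives the 8x filter). A minimal numpy sketch, using a generic textbook half-band filter rather than anything product-specific:

```python
import numpy as np

# 63-tap half-band low-pass built from a windowed sinc; centre tap is 0.5,
# and every second coefficient away from the centre is (numerically) zero
M = 31
k = np.arange(63)
h = 0.5 * np.sinc(0.5 * (k - M)) * np.hamming(63)

x = np.random.default_rng(3).standard_normal(1000)   # arbitrary input samples

# interpolate by 2: zero-stuff, then filter (gain of 2 restores the level)
up = np.zeros(2 * len(x))
up[::2] = x
y = np.convolve(up, 2 * h)

# every second output sample, offset by the filter delay, is the original sample back again -
# at those positions only the centre tap contributes, because the other odd-offset taps vanish
recovered = y[M : M + 2 * len(x) : 2]
print("original samples preserved:", np.allclose(recovered, x))
print("worst-case deviation:      ", np.max(np.abs(recovered - x)))
```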
 
So maintaining the original sample data is a red herring as regards closed form. But is keeping the original data actually a good idea? It sounds like a great idea - why mess with the actual data?
 
When I was developing the WTA algorithm in the late 1990s I hit a stumbling block. I had designed a very long tap-length half band filter - it was 2048 taps, half being zero, so it returned the original samples perfectly. It sounded very much better than the filters I had before, but I knew that timing recovery and transient accuracy were a problem. I could also see that aliasing issues from the half band filter would degrade transient accuracy, so I needed to remove these measurable aliasing problems. But that would mean the original data would get changed, and I did not like that.
 
One trap that designers and audiophiles fall into is to think that doing XYZ is wrong, and that avoiding it must therefore sound better. That is a very easy trap to fall into - or even to think some idea must sound better, then listen to it, then convince yourself that this soft muddled sound is actually better (or that this bright hard sound is more transparent, and at last I can hear how bad recordings actually are). In other words your thinking is convincing yourself that something is better (of course your lizard brain is not fooled and you end up listening to less music and enjoying it less). I too was stuck in the trap that the best thing to do was to keep the original data. But at the end of the day, you've got to try it, do careful listening tests, and go by the evidence, not by what you think may sound good, nor con yourself into thinking something is better. So eventually I tried eliminating the reconstruction aliasing, and boy did this make a big improvement - even though the samples were not being preserved - bass was much deeper, sound-stage much more accurate, and the flow and timing much more natural. 
 
So some humility is called for, nobody has a perfect understanding of anything, and thinking something must sound better is extremely dangerous. Do the work, listen carefully and neutrally, and base everything on the evidence, not on attractive ideas.
 
Rob
 
Apr 17, 2016 at 12:50 PM Post #16,177 of 42,765
There has been some recent discussion about digital filters, in particular closed form mathematics. There is a lot of confusion about what is actually happening, and this is not surprising - filter design is complex, and people talk about things that they have little real understanding of.
 
So some humility is called for, nobody has a perfect understanding of anything, and thinking something must sound better is extremely dangerous. Do the work, listen carefully and neutrally, and base everything on the evidence, not on attractive ideas.
 
Rob

 
Having no real-world experience, but an inquisitive mind: doesn't the original source dictate how any algorithm will eventually come to a conclusion? If that is a true statement, and playing armchair quarterback... wouldn't it have been advantageous for the industry to have standardized the process of converting an analog signal into a digital format and then reproducing the digital information back into an analog waveform with the most accuracy? Also, is there any factor that accounts for the inherent differences in the myriad of transducers, in multiple sizes and shapes, and the environments they are used in?
 
As stated, these are the ramblings of an inquisitive mind and not of an industry professional.
 
Apr 17, 2016 at 1:10 PM Post #16,178 of 42,765
   
Having no real-world experience, but an inquisitive mind: doesn't the original source dictate how any algorithm will eventually come to a conclusion? If that is a true statement, and playing armchair quarterback... wouldn't it have been advantageous for the industry to have standardized the process of converting an analog signal into a digital format and then reproducing the digital information back into an analog waveform with the most accuracy? Also, is there any factor that accounts for the inherent differences in the myriad of transducers, in multiple sizes and shapes, and the environments they are used in?
 
As stated, these are the ramblings of an inquisitive mind and not of an industry professional.

Yes, it does - some recordings are more sensitive to the WTA than others, in that you get a greater effect: still the same, but with more change. It's one reason why I am so excited about the Davina project (an ADC which will get migrated to pro audio products), as then I can control everything.
 
Apr 17, 2016 at 1:14 PM Post #16,179 of 42,765
   
 
OK, so the next obvious step for you to (cheaply) try is 1 or 2 ferrite RF chokes on your USB OTG cable.
 
See if you can tell any difference!
 
 
 
Also, I don't know how experienced you are, so forgive me for mentioning it, but do please be sure you're not accidentally upsampling the data signal on the phone, before it gets sent to Mojo.

 
I would say I have knowledge but not that much experience, as I have educated myself over the last two years or so in this hobby, but there is soooo much to learn. I have already tried the ferrite cores on both sides of the cable (and different cables) and it didn't make a difference. The interference I'm getting seems to come from the screen: when it's on I get a lot of clicking and popping, and if I start scrolling, the music starts to slow while doing it. With the screen off it reduces greatly and is not too noticeable, but it's there.
 
I'm using UAPP and I think that bypasses the Android upsampling; I have tried all the buffer sizes with little difference. I really don't like the app - it seems to stick to a single song and set the shuffle list from that, so I set up a playlist and cleared the queue, but it was still stuck on the same song until I went and selected another.
 
   
Did you try listening to your phone in airplane mode? Shutting down the cellular radio can eliminate some noise (and defeat the purpose of having a phone as a transport at the same time). Nonetheless, shutting down the radios can eliminate extra noise.

 
I tried airplane mode and everything off, but I don't think it's RF - if RF is like a dial-up modem sound, or like when a phone is near a speaker. I think it may just be the phone make, as someone with the same manufacturer had the same issue.
 
Apr 17, 2016 at 1:23 PM Post #16,180 of 42,765
Having no real-world experience, but an inquisitive mind: doesn't the original source dictate how any algorithm will eventually come to a conclusion? If that is a true statement, and playing armchair quarterback... wouldn't it have been advantageous for the industry to have standardized the process of converting an analog signal into a digital format and then reproducing the digital information back into an analog waveform with the most accuracy? Also, is there any factor that accounts for the inherent differences in the myriad of transducers, in multiple sizes and shapes, and the environments they are used in?

As stated, these are the ramblings of an inquisitive mind and not of an industry professional.


As far as I know, the standard requirements for analog-to-digital conversion are:
1. All frequencies above the Nyquist frequency should be eliminated prior to conversion
2. Preferably, as much as possible of the frequency content below Nyquist is fully preserved
3. Phase relationships should be preserved

Modern ADCs trend closer and closer toward these ideals, while DACs assume that these standards have been ideally met and proceed to try their best to reproduce the waveform accordingly. The process is not perfect, but the deviation from perfection can be calculated. Again, Rob and I would just have to agree to disagree on how much the deviations matter - especially in the case of high-res recordings, where the goalposts have been moved so far out of the field that it's almost like a blind guy could score 10/10.

The trouble comes in recovering those precious old recordings from the dawn of the CD age, converted using the earliest converters with no oversampling and an analog brickwall filter. Again (as with trying to recover above-Nyquist harmonics of the recording), I'm of the opinion that if one were to try to actually "recover" the recording, one should do away with the above three assumptions of ADC/DAC and look at the situation from the reality angle. For example, if one knew that a particular early recording was created with an analog brickwall filter below 22 kHz, then in the interests of recreating the original pre-conversion phase relationships at high frequencies (if it were really that important), the DAC should have an option to counteract the phase distortion introduced by such a brickwall filter rather than simply be linear phase.
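As a rough illustration of what "counteract the phase distortion" could mean in practice, the sketch below models the legacy anti-alias filter as a hypothetical 8th-order elliptic low-pass at 21 kHz (purely an assumption - the real analogue filter would have to be known or measured) and then removes only that filter's phase response in the frequency domain, leaving the magnitude untouched. It is a feasibility sketch, not a claim about how any existing DAC works.

```python
import numpy as np
from scipy import signal

fs = 44100
# hypothetical stand-in for an early analogue brickwall anti-alias filter
b, a = signal.ellip(8, 0.1, 80, 21000, fs=fs)

rng = np.random.default_rng(1)
x = rng.standard_normal(1 << 16)            # stand-in for the pre-conversion signal
y = signal.lfilter(b, a, x)                 # what the legacy ADC chain would have captured

# remove only the filter's phase: multiply the spectrum by e^{-j*arg H(f)}
freqs = np.fft.rfftfreq(len(y), 1 / fs)
_, h = signal.freqz(b, a, worN=freqs, fs=fs)
y_corrected = np.fft.irfft(np.fft.rfft(y) * np.exp(-1j * np.angle(h)), len(y))

# magnitude response is untouched; only the phase relationships are restored
print(np.allclose(np.abs(np.fft.rfft(y)), np.abs(np.fft.rfft(y_corrected))))
# (block-based FFT correction like this ignores edge/wrap-around effects - fine for a sketch)
```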
 
Apr 17, 2016 at 1:44 PM Post #16,181 of 42,765
Tried the small micro-to-micro cable supplied with Hugo between Android and Mojo. I put a single small ferrite core on the cable on the Mojo side. I would say the sound using UAPP was much, much better than the FiiO X3 as transport. It took about an hour or so for Mojo to reach its full sound potential. I tried some Hindi Bollywood music and some Opus 3 Records audiophile stuff. Vocals sounded sort of elevated and there was more punch and dynamism as compared to the FiiO X3 coaxial route. Android was used in airplane mode and the headphone was the Beyer DT880 600 ohm. So far the best ever emotional musical experience! I thought that asynchronous data thing is working even with Android too.
 
Apr 17, 2016 at 2:29 PM Post #16,182 of 42,765
Now I've decided to keep the N6, I want to utilise my home setup. Am I right in thinking I need a 3.5mm to twin RCA phono adapter to use with my stereo amp? N6 to Mojo with coax, then the 3.5mm out of Mojo to the twin RCA in on the amp.
 
Apr 17, 2016 at 2:33 PM Post #16,183 of 42,765
Apr 17, 2016 at 2:34 PM Post #16,184 of 42,765
I apologize for the use of the term "drivel" (which was not even directed at you). I never used the term "audiophoolery". I fail to see how "audiophile" is insulting. I reserve the right to regard much of what is said describing the subjective results of Chord technology as "fluffery", even if it is par for the course in the trade.

As far as I can see, objectivists are usually criticized for regarding almost everything in audio as "not mattering" without putting forth their own theories on what needs improvement, thus serving as no better than roadblocks to progress in audio.


Indeed so. The "no diff" hypothesis is being put forth and defended robustly more often than seems sane or reasonable, IMO.
 
My reaction was curt (even if you were clearly not addressing me), but this is directed at what seems like a general acceptance that it is perfectly fine for some to belittle and disparage those who do hear a difference (DACs, amps, R2R vs DS, tubes vs SS, cables, etc.) by the quasi-systematic use of terms like "silly" or "fluffery" or "audiophools" (which you didn't use) or "drivel" or "ignorance"; whereas any criticism in the other direction is generally met with religious consternation along the lines of "how dare you, ignorant and puny fool, have an opinion on matters audio when science says so?", followed by stamping feet.
 
I find it genuinely shocking how people can behave in such obnoxious ways (in my opinion) without even realizing how rude their manners are, and how insulting it can be for those on the receiving end. Even if "objectivists" were ALWAYS right (which is highly unlikely), this doesn't give one a free pass to routinely treat others as drivel-spouting fools (whether you append "audio" to it or not). In the end, when/if methodology catches up, reality may prove that it's the other way around...
 
Apr 17, 2016 at 2:35 PM Post #16,185 of 42,765
Now I've decided to keep the N6, I want to utilise my home setup. Am I right in thinking I need a 3.5mm to twin RCA phono adapter to use with my stereo amp? N6 to Mojo with coax, then the 3.5mm out of Mojo to the twin RCA in on the amp.

 
That's right, or you can run RCAs to the amp from a mini connector on the Mojo; and if your amp has a mini input, you can also run a mini-to-mini from Mojo to amp. 
 
 
MBP → SDragon → Mojo → AQ Sky → LC → HD 800S (WW Platinum)
 
