Spoiler: Posted this a while back, for anyone who's interested in Rob's ADC efforts
So after some quite extensive testing this weekend, I have decided that, to me, there is a difference between transports. My Cayin N6 connected via coax sounds a lot nicer than my phone over OTG. I'm not sure if it's the lack of interference, but I swear I could hear more from it.
This is bad for me, as I wanted to sell it, but no longer.
OK, so the next obvious step for you to (cheaply) try, is 1 or 2 ferrite RF-chokes on your USB OTG cable.
See if you can tell any difference!
Also, I don't know how experienced you are, so forgive me for mentioning it, but do please be sure you're not accidentally upsampling the data signal on the phone, before it gets sent to Mojo.
Did you try listening to your phone in airplane mode? Shutting down the cellular radio can eliminate some noise (and, admittedly, defeat the purpose of having a phone as a transport at the same time). Nonetheless, shutting down the radios can eliminate extra noise.
There has been some recent discussion about digital filters, in particular closed-form mathematics. There is a lot of confusion about what is actually happening, and this is not surprising - filter design is complex, and people talk about things of which they have little real understanding.
Indeed, the more time and work I spend in audio, the more I realise how much more there is to know - we are all scratching at the surface, so some humility is needed. "You know nothing Jon Snow" is my favourite quote from Game of Thrones, and I often bear it in mind when thinking about audio, and how to relate something I hear with theory.
Now there are two things that get talked about with closed-form filter design. The first is that the filter coefficients (these are fixed at the design of the filter) are calculated with a closed-form algorithm, which just means that there is a formula to calculate the numbers. The second is that the original samples are preserved by the filter.
Now most FIR filter algorithms are closed form. The exception, as pointed out by a poster earlier, is Parks–McClellan, which uses the Remez exchange algorithm to iteratively calculate the optimal coefficients. It is not a closed-form calculation, as it cleverly runs backwards and forwards until it converges onto the desired result. Now, is closed form a good or a bad idea? Frankly, it does not matter how the coefficients are calculated; it's what those coefficients are, and what they sound like, that is important. Now I don't like the Parks–McClellan algorithm, as it does not maximise rejection at the points where there is the most out-of-band energy, which is at FS multiples. And it's not very good at recovering timing information for the intermediate samples you are trying to create. But it is not the closed-form-versus-iterative question that is important here. Now the WTA algorithm is closed form; you can calculate the ideal coefficients to as much accuracy as you like with one fixed equation. But whether it is closed form or not is just unimportant.
The second issue is exactly maintaining the original samples. Now the vast majority of FIR filters for audio are known as half-band filters, and to create an 8-times oversampled filter you use a cascade of 3 half-band filters. These are guaranteed by design to give the original data, and they are used because they are computationally efficient, as half the calculations are zero - you simply return the original sample, no maths. Most are designed with Parks–McClellan, so the issue of closed form actually has nothing to do with retaining the original sample data.
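The half-band property is easy to verify numerically. Below is a minimal sketch of my own (a plain closed-form windowed-sinc design, not Rob's WTA or any shipping filter): every coefficient at an even offset from the centre is exactly zero, so a 2x oversampler built from it hands the original samples back untouched.

```python
import numpy as np

def halfband(num_taps=31):
    """Closed-form half-band low-pass (windowed sinc, cutoff at fs/4).

    Every coefficient at an even offset from the centre is exactly zero
    (the sinc of a non-zero integer), and the centre tap is exactly 0.5.
    """
    assert num_taps % 2 == 1, "odd tap count keeps the centre symmetric"
    n = np.arange(num_taps) - (num_taps - 1) // 2
    return 0.5 * np.sinc(n / 2.0) * np.hamming(num_taps)

def oversample2(x, h):
    """Zero-stuff by 2, filter with gain 2, and trim the filter delay."""
    up = np.zeros(2 * len(x))
    up[::2] = x
    y = np.convolve(up, 2.0 * h)
    delay = (len(h) - 1) // 2
    return y[delay:delay + 2 * len(x)]

h = halfband(31)
x = np.random.default_rng(0).standard_normal(64)
y = oversample2(x, h)
# every second output sample is the original data - no maths was done on it
print(np.allclose(y[::2], x))
```

The odd-index outputs are the interpolated values; only those actually cost multiplies, which is why cascades of half-band filters are the cheap way to oversample.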
So maintaining the original sample data is a red-herring as regards closed form. But is keeping the original data actually a good idea? It sounds like a great idea, why mess with the actual data?
When I was developing the WTA algorithm in the late 1990s I hit a stumbling block. I had designed a very long tap-length half-band filter - 2048 taps, half of them zero, so it returned the original samples perfectly. It sounded very much better than the filters I had before, but I knew that timing recovery and transient accuracy were a problem. I could also see that aliasing issues from the half-band filter would degrade transient accuracy, so I needed to remove these measurable aliasing problems. But that would mean the original data would get changed, and I did not like that.
One trap that designers and audiophiles fall into is to think that doing XYZ is right, and that it must sound better because of this particular idea. That is a very easy trap to fall into - you think some idea must sound better, then listen to it, then convince yourself that this soft, muddled sound is actually better (or that this bright, hard sound is more transparent, and at last you can hear how bad recordings actually are). In other words, your thinking is convincing you that something is better (of course your lizard brain is not fooled, and you end up listening to less music and enjoying it less). I too was stuck in the trap that the best thing to do was to keep the original data. But at the end of the day, you have to try it, do careful listening tests, and go by the evidence, not by what you think may sound good, or con yourself into thinking something is better. So eventually I tried eliminating the reconstruction aliasing, and boy did this make a big improvement - even though the samples were not being preserved - bass was much deeper, the sound-stage much more accurate, and the flow and timing much more natural.
So some humility is called for, nobody has a perfect understanding of anything, and thinking something must sound better is extremely dangerous. Do the work, listen carefully and neutrally, and base everything on the evidence, not on attractive ideas.
Quality Audio is just beauty in the ears of the beholder.
Having no real-world experience, but an inquisitive mind: doesn't the original source dictate how any algorithm will eventually come to a conclusion? If that is a true statement, and playing armchair quarterback... wouldn't it have been advantageous for the industry to have standardised the process of converting an analog signal into a digital format, and then reproducing the digital information back into an analog waveform, with the most accuracy? Also, is there any factor that accounts for the inherent differences in the myriad of transducers, in multiple sizes and shapes, and the environments they are used in?
As stated, these are the ramblings of an inquisitive mind and not of an industry professional.
Yes it does; some recordings are more sensitive to the WTA than others, in that you get a greater effect - still the same, but with more change. It's one reason why I am so excited about the Davina project (an ADC that will get migrated to pro audio products), as then I can control everything.
I would say I have knowledge but not that much experience; I have educated myself over the last two years or so on this hobby, but there is sooooooo much to learn. I have already tried the ferrite cores on both ends of the cable (and different cables) and it didn't make a difference. The interference I'm getting seems to come from the screen: when it's on I get clicking and popping a lot, and if I start scrolling, the music starts to slow while doing it. With the screen off it reduces greatly and is not too noticeable, but it's there.
I'm using UAPP, which I think bypasses the Android upsampling, and I have tried all the buffer sizes with little difference. I really don't like the app; it seems to stick to a single song and set the shuffle list from it, so I set up a playlist and cleared the queue, but it was still stuck on the same song until I went and selected another.
I tried airplane mode with everything off, but I don't think it's RF, if RF sounds like a dial-up modem or like a phone near a speaker. I think it may just be the phone make, as someone with the same manufacturer had the same issue.
As far as I know, the standard for analog-to-digital conversion is:
1. All frequencies above the Nyquist frequency should be eliminated prior to conversion
2. Preferably, the frequencies below Nyquist should be preserved as fully as possible
3. Phase relationships should be preserved
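Point 1 is easy to demonstrate: energy above Nyquist that is not removed before sampling does not disappear, it folds back in-band as a spurious tone. A minimal sketch (illustrative numbers of my own, not any particular ADC):

```python
import numpy as np

fs = 44100                     # CD sample rate; Nyquist = 22050 Hz
n = np.arange(8192)
# a 30 kHz tone sampled with no anti-alias filter in front of the converter
x = np.sin(2 * np.pi * 30000 * n / fs)

# locate the spectral peak of the sampled signal
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum)]
# the energy shows up near fs - 30000 = 14100 Hz, an audible in-band alias
print(peak_hz)
```

Once folded down, the alias is indistinguishable from a genuine 14.1 kHz tone, which is why the filtering has to happen before conversion rather than after.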
Modern ADCs trend closer and closer toward these ideals, while DACs assume that these standards have been ideally met and proceed to try their best to reproduce the waveform accordingly. The process is not perfect but the deviation from perfection can be calculated. Again Rob and I would just have to agree to disagree on how much the deviations matter. Especially in the case of high-res recordings, where the goalposts have been moved so far out of the field that it's almost like a blind guy can score 10/10.
The trouble comes in recovering those precious old recordings from the dawn of the CD age, converted using the earliest converters with no oversampling and an analog brickwall filter. Again (as with trying to recover above-Nyquist harmonics of the recording), I'm of the opinion that if one were to try to actually "recover" the recording, one should do away with the above three assumptions of ADC/DAC and look at the situation from the reality angle. For example, if one knew that a particular early recording was created with an analog brickwall filter below 22kHz, then in the interests of recreating the original pre-conversion phase relationships at high frequencies (if it were really that important), the DAC should have an option to counteract the phase distortion introduced by such a brickwall filter, rather than simply be linear phase.
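To get a feel for how much phase distortion such a filter adds, here is a rough numerical sketch. I am assuming an 8th-order analog Butterworth at 22 kHz as a stand-in for an early brickwall filter (real ones were steeper, elliptic-style designs, so this understates the effect): the group delay near 20 kHz is clearly higher than in the midband, and that frequency-dependent delay is exactly what a corrective allpass in the DAC would have to undo.

```python
import numpy as np

def butter_poles(order, wc):
    """Analog Butterworth poles, spaced on the circle of radius wc (rad/s)."""
    k = np.arange(1, order + 1)
    return wc * np.exp(1j * np.pi * (2 * k + order - 1) / (2 * order))

fc = 22000.0                        # assumed brickwall corner frequency, Hz
poles = butter_poles(8, 2 * np.pi * fc)

f = np.linspace(100.0, 20000.0, 2000)
w = 2 * np.pi * f
# H(jw) = wc^N / prod(jw - p_k); the numerator only scales the gain
H = (2 * np.pi * fc) ** 8 / np.prod(np.subtract.outer(1j * w, poles), axis=1)

phase = np.unwrap(np.angle(H))
group_delay = -np.gradient(phase, w)    # seconds
# delay rises steeply approaching the corner: the filter is not linear phase
print(group_delay[0], group_delay[-1])
```

In this sketch the delay at 20 kHz is well over that at 100 Hz, so high-frequency content arrives measurably late relative to the midband; whether that matters audibly is the point being debated above.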
I tried the small micro-to-micro cable supplied with Hugo between the Android phone and Mojo. I put a single small ferrite core on the cable on the Mojo side. I would say the sound using UAPP was much, much better than with the FiiO X3 as transport. It took about an hour or so for Mojo to reach its full sound potential. I tried some Hindi Bollywood music and some Opus 3 Records audiophile stuff. Vocals sounded sort of elevated, and there was more punch and dynamism compared to the FiiO X3 coaxial route. The Android phone was used in airplane mode and the headphone was a Beyer DT880, 600 ohm. So far the best ever emotional musical experience! I thought the asynchronous data thing was working even with Android too.
Now I've decided to keep the N6, I want to use it in my home setup. Am I right in thinking I need a 3.5mm to twin RCA phono adapter to use with my stereo amp? N6 to Mojo via coax, then the 3.5mm out of Mojo to the twin RCA in on the amp.
Indeed so. The "no diff" hypothesis is being put forth and defended robustly more often than seems sane or reasonable, IMO.
My reaction was curt (even if you were clearly not addressing me), but this is directed at what seems like a general acceptance that it is perfectly fine for some to belittle and disparage those who do hear a difference (DACs, amps, R2R vs DS, tubes vs SS, cables, etc.) by the quasi-systematic use of terms like "silly" or "fluffery" or "audiophools" (which you didn't use) or "drivel" or "ignorance"; whereas any criticism in the other direction is generally met with religious consternation along the lines of "how dare you ignorant and puny fool have an opinion on matters audio when science says so?", followed by stomping feet.
I find it genuinely shocking how people can behave in such obnoxious ways (in my opinion) without even realizing how rude their manners are, and how insulting it can be for those on the receiving end. Even if "objectivists" were ALWAYS right (which is highly unlikely), this doesn't give one a free pass to routinely treat others as drivel-spouting fools (whether you append "audio" to it or not). In the end, when/if methodology catches up, reality may prove that it's the other way around...
This is right, or you can run RCAs to the amp with a mini connector from the Mojo; and if your amp has a mini input, you can also run a mini-to-mini from Mojo to amp.
MBP → SDragon → Mojo → AQ Sky → LC → HD 800S (WW Platinum)