Hugo 2 Listening Impressions (Including Sighted/Unsighted Testing)
Jun 2, 2018 at 3:19 PM Post #46 of 74
Another thought:
What about volume matching, listening to mono recordings, and plugging headphones into two different amp/DACs at the same time (left side into DAC one, right side into DAC two)? This seems like a better test. (Obviously you'd also switch which side is plugged into which.)
 
Jun 2, 2018 at 7:27 PM Post #47 of 74
What about creating a sample track that has two identical 15-sec segments back to back, and does that for several different songs? You could even break it down to individual instruments and notes - just a cymbal, etc.
I like where you guys are taking this. Could save open-minded people lots of money.

My new question is: after noticing limited to no differences, will you keep your Mojo and Hugo 2?

I've read that our higher-resolution auditory memory is our short-term echoic memory (https://en.wikipedia.org/wiki/Echoic_memory), which generally lasts less than 5 secs, and that fits my experience, so that might be a suitable maximum length of music excerpts for A/B comparison.
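Building on the sample-track idea quoted above, here's a rough sketch (not something I've fully built yet) of how such a comparison file could be assembled in Python with pydub, using ~5-sec excerpts repeated back to back so the repeat can be heard on the other DAC/amp. The file names and start times are placeholders, and pydub needs ffmpeg installed to read FLAC/MP3:

```python
from pydub import AudioSegment

excerpts = [                      # (source file, excerpt start in seconds) - placeholders
    ("track_one.flac", 35),
    ("track_two.flac", 62),
    ("cymbal_only.wav", 0),       # could even be a single instrument or note
]

EXCERPT_SECS = 5                                  # keep within short-term echoic memory
short_gap = AudioSegment.silent(duration=500)     # 0.5 sec between the two plays
long_gap = AudioSegment.silent(duration=2000)     # 2 sec before the next excerpt

comparison = AudioSegment.empty()
for path, start in excerpts:
    song = AudioSegment.from_file(path)
    clip = song[start * 1000:(start + EXCERPT_SECS) * 1000]   # pydub slices in milliseconds
    comparison += clip + short_gap + clip + long_gap          # same excerpt twice, back to back

comparison.export("ab_comparison.wav", format="wav")
```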

No plans to sell the H2 any time soon. Lots more testing to do before I could reach a conclusion that I should sell it. I do like the added power, cross-feed option, and aesthetics of the H2, but those wouldn't be sufficient reasons to spend 5x more than the Mojo if it turned out that the H2 doesn't offer any real improvement in sound quality for me.

Another thought:
What about volume matching, listening to mono recordings, and plugging headphones into two different amp/DACs at the same time (left side into DAC one, right side into DAC two)? This seems like a better test. (Obviously you'd also switch which side is plugged into which.)

Interesting idea, but I'd be concerned that our brain couldn't accurately discern a difference in L vs R sound quality under those conditions, since the brain is always trying to construct a holistic auditory 'scene' from the auditory information coming in each ear (https://www.frontiersin.org/articles/10.3389/fnhum.2011.00158/full). There's also the general concern that the farther your test conditions deviate from normal listening, the more uncertainty there is about how relevant the test results are to normal listening (https://en.wikipedia.org/wiki/External_validity).
 
Jun 2, 2018 at 7:47 PM Post #48 of 74
Great points. Our eyes actually do the same thing, although for parts of the visual field that don't overlap you can sometimes make out the difference. I don't have large headphones that make this easy to do, but I tried it with various earbuds and my CIEM, and it was easy to discern differences - though those will also be larger differences than between DACs or amps.
 
Jun 2, 2018 at 8:29 PM Post #49 of 74
Great points. Our eyes actually do the same thing, although for parts of the visual field that don't overlap you can sometimes make out the difference. I don't have large headphones that make this easy to do, but I tried it with various earbuds and my CIEM, and it was easy to discern differences - though those will also be larger differences than between DACs or amps.

Yes, instant switching isn't quite possible with headphones and IEMs, but with headphones switching is possible in 2-3 secs, and the differences are often quite substantial. Of course, biases and misperception can creep into those comparisons also, but IMO we can at least confidently say that the differences between headphones and IEMs are likely to be a lot larger than the differences between DACs and amps.
 
Jun 2, 2018 at 8:50 PM Post #50 of 74
I have some further results to report, again comparing the H2 and Mojo. I listened to several tracks from the album "River: The Joni Letters" by Herbie Hancock. I used the LCD-3.

I first tried my Protocol A, where I had both DACs running in sync (within a fraction of a second), and switched instantly between the H2 and Mojo at varying intervals, ranging from about 2 to 20 secs. I tried to judge whether I liked A or B better, and there was no consistent pattern. I felt significantly uncertain in making those judgements because the music excerpts were never the same, so sometimes I'd be swayed towards A, sometimes B. Because of the way I arranged the setup, sometimes I forgot which DAC was A or B, which increased my uncertainty about which one I liked better.

But when I remembered which one was A or B, since it was a sighted comparison, I could sense that I was biased toward picking out attributes in the sound which caused me to perceive the H2 as sounding better, and possibly downplaying those attributes when listening to the Mojo. Bias may have also influenced when I chose to switch between them (e.g., listen until hearing something which makes the H2 seem more dynamic, then switch, and by 'regression to the mean' wind up hearing something which makes the Mojo seem less dynamic).
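One cheap way to take the "I know which one is A" element out of Protocol A would be to randomize the DAC-to-switch-position assignment in advance and only look at the log afterwards. A rough Python sketch of what I mean (hypothetical - my own setup isn't automated like this, and a helper would still need to plug the cables per the schedule before each trial):

```python
import csv
import random

DACS = ["Hugo 2", "Mojo"]
TRIALS = 16

# Pre-generate which DAC sits behind each switchbox position for every trial,
# and write it to a file the listener doesn't look at until the session is over.
with open("ab_schedule.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["trial", "position_1", "position_2"])
    for trial in range(1, TRIALS + 1):
        order = random.sample(DACS, k=2)          # random assignment per trial
        writer.writerow([trial, order[0], order[1]])

print(f"Wrote {TRIALS} randomized trials to ab_schedule.csv - don't peek until you're done.")
```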

Feeling that Protocol A was quite flawed due to the inconsistency in music excerpts, I once again changed to Protocol C, with the H2 sometimes running 3-4 secs ahead, sometimes the Mojo running ahead by the same amount. With Protocol C, I once again perceived no difference at all between the H2 and Mojo, and any subtle difference which I suspected based on Protocol A disappeared.

I'm increasingly moving towards the conclusion that, for my ears/brain, these two DACs either sound exactly the same, or the difference is so subtle that it's insignificant, and is far smaller than the variation in recording quality of tracks, effects of volume level differences, variations in headphones, and variations in my perception over time. This isn't the conclusion I want to reach, but it's where the evidence is pointing me so far. I'm quite willing to spend more for a DAC/amp which sounds noticeably better - but it needs to really sound noticeably better, not just better due to misperception.

I hope others do similar tests and report their results (the process of doing the tests is itself quite educational as a music listener). I suspect that others who perceive large differences in the sound of these DAC/amps are misperceiving due to biases and lack of controls, but I can't rule out the possibility that others perceive real but subtle differences which I don't.
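If anyone does turn this into counted blind trials (e.g., "which of these two presentations was the H2?"), a quick way to check whether the score actually beats guessing is an exact binomial calculation. A minimal sketch (Python 3.8+ for math.comb; the 12-of-16 figure is just an example):

```python
from math import comb

def p_value_at_least(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: chance of getting at least `correct`
    right out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(p_value_at_least(12, 16), 3))   # ~0.038 - unlikely to be pure guessing
```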

I'd love to have Chord chime in on this thread to provide info on how they've reached the conclusion that the H2 genuinely sounds noticeably better than the Mojo, at least for some listeners.

And of course I haven't ruled out the possibility that differences are more apparent when using formats other than Spotify 320K and Tidal 16/44. My guess is that higher-res wouldn't make much difference, at least for me, but that's only a guess.
 
Jun 3, 2018 at 3:54 AM Post #51 of 74
I've tried all combinations of DAC/amps and iPhones plugged in vs running off batteries, and didn't find that it made any difference.

The setup is below. It has the recommended Apple lightning-USB adapters. The cables from those adapters to the DACs are the ones supplied by Chord. The cables from the DAC/amps to the switchbox are basic Anker 4' audio cables (https://www.amazon.com/gp/product/B00R124LAK/ref=oh_aui_detailpage_o03_s00?ie=UTF8&psc=1), which I assume are good enough to do their job and not adversely affect sound quality. The switchbox is the Sescom Flip2 (https://www.amazon.com/gp/product/B00NUPF30Y/ref=oh_aui_detailpage_o08_s00?ie=UTF8&psc=1). Various headphones have been used, and which iPhone goes to which DAC has also been rotated. The Hugo 2 has been run with no cross-feed and the white (neutral) filter.

Have you tried using the Mojo USB cable on the Hugo 2?
 
Jun 3, 2018 at 10:41 AM Post #52 of 74
Have you tried using the Mojo USB cable on the Hugo 2?

I've used the H2 cable with the Mojo in the past, and no obvious difference jumped out to me, but I haven't deliberately tried that during the more recent controlled testing. I'll try it, but I don't expect it to make any difference, since the H2 and Mojo currently sound the same to me using Protocol C, and it would be odd if the H2 cable made the Mojo sound better or worse compared to the H2.
 
Jun 3, 2018 at 10:56 AM Post #53 of 74
With my Bryston BDP3, the short Mojo USB cable works better, so I put some ferrites on the Hugo 2's USB cable.
But the Aurender N100 works better with the Wireworld optical fiber than with the Hugo 2's USB cable.
 
Jun 3, 2018 at 11:23 AM Post #54 of 74
Interesting article: https://www.soundstagexperience.com...pulse-menu/834-the-problem-with-blind-testing

"That experience involved setting up, conducting, and participating in a blind test of ten portable DAC-headphone amps, including such models as the AudioQuest DragonFly Red and the Oppo Digital HA-2. This test required a great deal more work and expense than a typical equipment review, including having to rebuild one of my testing switchers, setting up four PCs with matching software and test files, then running each of the panelists -- all very experienced audio-equipment reviewers -- through at least nine test rounds, using HiFiMan HE1000 V2 and Sony MDR-7506 headphones and Shure SE535 earphones.

Countless audio writers and forum participants have opined about the sound of the DAC-amps we tested. Considering the large amount of sometimes conflicting opinion, and the fact that the output impedances of headphone amps and the input impedances of headphones vary in ways that can significantly influence the sound, I had no preconceptions about the results we’d get.

While we did hear more differences among the DAC-amps through the Shure earphones (which, because they use balanced armatures, exhibit a huge impedance swing as the frequency rises) than with the HiFiMan headphones (which have near-zero impedance swing), and we did end up agreeing on a couple of marginal favorites, we were all surprised by how elusive and insignificant the differences were, despite reviews we’d read describing large and important differences. One panelist typified the difference as being “maybe 0.5% between the best and worst.” Another wondered aloud, “Why would anybody care about this?”

The difference between this test and most testing done for audio publications was that our test was blind, with documented, carefully measured and matched levels, and effectively identical testing conditions for each product. We knew which ten DAC-amps were being tested, but because in each test the listener used a handheld remote control to switch among three or four different DAC-amps, which were randomly grouped and identified only by number, the listener didn’t know what she or he was hearing. Even the test administrator had no way to tell which product any listener was hearing at any given moment. We really were, as so many audio reviewers insist they do, “trusting our ears” -- in this case, our ears could get no help from our eyes."​

I don't know if any Chord products were among the ten DAC/amps tested, but will try to find out. I'm reluctant to make any inferences from those findings about the Hugo 2 without knowing whether it was among the tested DAC/amps.

I agree with the following statement, and have been exploring ways we audiophiles can do better testing in our homes without much expense. My own experience suggests that simply matching volume and music excerpts, combined with instant switching, may be sufficient to reduce biases and make sighted testing useful (a quick check for the volume-matching part is sketched after the quote below).

"While blind testing is by far the best way to get accurate results, it’s difficult, time-consuming, and requires considerable expertise and patience. For all of those reasons, blind testing is expensive."
EDIT:

Here's more info: https://thewirecutter.com/reviews/best-portable-headphone-amp-with-built-in-dac/. They limited the DAC/amps they tested to under $400, so IMO this test can't be viewed as telling us much (or anything) about how the H2 or Mojo would have fared in this comparison. Bummer that they went through all that trouble and didn't throw Chord units and other more expensive units in the mix.
 
Jun 3, 2018 at 6:42 PM Post #55 of 74
There doesn't seem to be much professional literature which delves into the specifics of how cognitive biases can affect our auditory perception, but I did run across this recent paper on "Overcoming Bias: Cognitive Control Reduces Susceptibility to Framing Effects in Evaluating Musical Performance."

https://www.nature.com/articles/s41598-018-24528-3

The paper focuses on how bias can affect the perception of the quality of music performances, rather than the perception of sound quality, but IMO it has some relevance to our topic. The basic finding was that when listeners are told a performance is by a professional musician rather than a student, they tend to perceive the performance as being of higher quality, and this effect is more consistent than the effect of whether the performance actually was by a professional or a student. The researchers used fMRI to see what was happening in people's brains as they listened, and found that telling a listener the performer is a professional tended to trigger brain responses within a few seconds that cause the performance to be viewed more favorably, and those responses persisted through the 70-sec music excerpts as well as afterwards. These brain responses involved the listeners paying more attention. Once listeners were told a performance was by a professional, it was difficult for them to conclude otherwise, even when the performance provided mounting evidence that it wasn't.

Again, this study looked at music performance quality rather than sound quality, but we can speculate that similar mechanisms could be at work when we do sighted listening evaluations of DAC/amps. This may tie in with the differences I perceived between the H2 and Mojo effectively disappearing when I listened to the same music excerpts back to back with instant switching, which may have left no time for biases to kick in. The observation about listeners paying more attention also ties in with my sense (I could be wrong) that my attention was heightened when I thought I was listening to the H2 rather than the Mojo. If you guys try these various listening protocols yourself and notice how you make your judgments about sound quality, and your confidence in those judgments, all of this may make more sense to you.

Here are some quotes from the paper:

"By modulating expectations and beliefs, contextual information can alter the enjoyability of stimuli as diverse as artworks, soda, and wine, influencing or even dominating actual sensory perception."

"... contextual information can contribute materially to positive perceptual experiences. Aesthetic experiences sometimes depend on the prior activation of a set of beliefs that dispose a person to perceiving this way—a “preparatory set” consisting of expectations and beliefs. For instance, even though listening to Joshua Bell perform a concert on the violin can cost $100 per ticket, an incognito performance by him at a subway station triggered very little interest. Generally, this evidence suggests that contextual information can affect preferences and perception in both nefarious and beneficial ways."

"Previous neuroimaging studies suggest that the influence of beliefs and expectations arises not merely from the sensory system, but from the particular sensitivity to contextual information of reward structures in the brain."

"Our analysis revealed that, when a piece was preferred, the professional pianist frame induced significantly more activity in the primary auditory cortex relative to the student pianist frame (see Fig. 2A). This suggests that beliefs regarding the quality of a performer engendered a bias in attention."

"We observed higher activation in the primary auditory cortex when the player was described as a professional pianist relative to when the player was described as a student. Moreover, this difference in activity remained consistent, exhibiting no significant changes across the 70 seconds of the excerpt. A panel regression of activity in the primary auditory cortex on time showed no significant linear slope (b1 = 0.0003, z = 0.56, p > 0.5). This supports the notion that a bias in attention began almost immediately (i.e. 4 sec) after the presentation of the framing information and remained stable throughout the evidence accumulation period. Contrary to the notion that more evidence should diminish any framing effects generating during the relatively short framing period (i.e. 4 sec), we found that the professional framing gave rise to a constant attentional bias in favor of the professional player."

"...as information about the quality of the performance accumulated, participants needed to exert cognitive control in order to form and retain a negative evaluation for performances that had been framed as played by a professional compared to those that had been framed as played by a student. These data suggest that less cognitive effort was required to dislike a performance when it had been described as played by a student rather than a professional."

"... by expecting better performance from a professional, participants directed more attention toward professionally framed pianists compared to the student-framed performances, and therefore, exhibited a heightened tendency to gather more evidence that would confirm their prior expectation about the professional player’s performance."

"From the perspective of music psychology, these findings reinforce the notion that extrinsic factors—outside the borders of the notes themselves—can impact perception and evaluation as critically as the intrinsic characteristics of the acoustic signal."​
 
Jun 4, 2018 at 11:09 PM Post #59 of 74
The thread's theory as a whole

Not sure what you mean by that. The thread is a journey of observations, experiments, and interpretations. The journey isn't yet over.
 
