O2 AMP + ODAC
Mar 24, 2013 at 2:27 AM Post #1,126 of 5,671
Quote:
Quote:
You bring up some very good points there; thank you for the explanation.
 
You mention measurements often being incomplete, I think the same can be said for the O2 and ODAC as well. The designer of the Objective gear only did so many tests too.
I brought up Leckerton in a previous post because I think it was Currawong(?) who said that the O2 may sound good, but that it may have a non-linear response in certain cases... I can't completely remember what was said. I recently came upon this website with measurements comparing a Leckerton amp and the O2, and from those specific measurements the Leckerton performed better objectively than the O2. Would this non-linearity be a reason why some people say the Leckerton sounds "more detailed" than the O2?

 
For a perfectly ideal linear (and time-invariant) system, one perfect measurement is sufficient.  For everything in the real world, you need more if you want a better characterization.  So yeah, we've seen a lot on the O2 from the designer and others but not everything (can't get everything), and we've seen just some on the ODAC.  However, the more well-behaved something is and the closer to ideal, the closer the output would be to a completely known and predictable response, so I wouldn't expect huge surprises lurking if you were to somehow test some different signal or some other reasonable audio input and setup.
 
An unevenness in frequency response is not a nonlinearity, by the technical meaning of the word, by the way, but I see what you mean.
 
Note that the difference in frequency response as measured for that test is pretty much entirely a direct consequence of the slightly higher output impedance.  With pretty much anything other than a certain IEM with a really wild impedance curve, it would be much, much flatter (and even considering extreme cases like these, do fraction-of-a-dB differences really have much impact on the sound?  Discernible and not to be trivialized, but put it in perspective).  Many or most people talking about the different amps probably aren't using those exact IEMs; you'd see something else with some other IEMs or headphones.
 
Even if you were to assume that everybody comparing were using an IEM like that, an FR difference like that is unlikely to make people think one device is more detailed than another.
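To put rough numbers on that, here's a quick voltage-divider sketch (illustrative only; the load swing and output impedance values below are made up, not taken from that test):

import math

def level_db(z_load, z_out):
    # Voltage reaching the load, relative to an ideal 0-ohm source
    return 20 * math.log10(z_load / (z_load + z_out))

z_min, z_max = 16.0, 60.0    # hypothetical IEM impedance swing across frequency
for z_out in (0.5, 2.0):     # hypothetical amp output impedances, ohms
    ripple = level_db(z_max, z_out) - level_db(z_min, z_out)
    print(f"Zout = {z_out} ohm -> FR variation of about {ripple:.2f} dB")

Even with a fairly wild impedance swing, a couple of ohms of output impedance only buys you fractions of a dB of frequency-response variation, which is the point above.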
 
 
 
As I mentioned before, you can find differences with some bench measurements, but do they really correspond to what people say they hear?

Hm, that's true....:thinking-smile:
 
Quote:
Describe the differences you heard.

From the tests I just did, the instruments sound a little more spread apart in my head on the O2 compared to the C5, and they have better definition. I find that the C5's imaging is a bit more towards the center, which would be better "synergy" for the K 701, since I find the K 701's center imaging a bit lacking (subjective stuff, yeah). The difference isn't huge, but it's noticeable to me with the K 701 and might be a reason why I tend to use the C5 at home with the K 701 over the O2 (not to say the O2 sounds bad; it's just a personal preference).
I might try this test at school with multimeter volume-matching for a more accurate test.
 
 
Is there a relatively inexpensive way to get more accurate in-person testing results? I feel like there's a better way to do these tests without having to constantly switch cables, pause/play music, etc. *this part should probably be in the Sound Science threads, but we're kind of on the topic anyway*
 
Mar 24, 2013 at 3:38 AM Post #1,127 of 5,671
Are you sure it's just not a stronger amplification? The K/Q701 is pretty notoriously annoying to drive for a low impedance dynamic, and I know this because I used to own it.

As for being able to switch between the amplifiers with little delay, find a cable or device that lets you connect the outputs of both amps to one input to the headphones, without summing to mono.
 
Mar 24, 2013 at 4:06 AM Post #1,128 of 5,671
Quote:
Are you sure it's just not a stronger amplification? The K/Q701 is pretty notoriously annoying to drive for a low impedance dynamic, and I know this because I used to own it.

As for being able to switch between the amplifiers with little delay, find a cable or device that lets you connect the outputs of both amps to one input to the headphones, without summing to mono.

I could try it with the M-100 at a later time [probably tomorrow].
 
Likewise, I could try it with the V-MODA SharePlay cable, maybe?
 
Mar 24, 2013 at 5:06 AM Post #1,129 of 5,671
Hey guys, I'm using a pair of Beyerdynamic DT880 600 ohms with a custom-made O2 that I purchased. My source is an Asus Xonar DX sound card with the computer volume at 100%. My problem is I can only get normal listening levels on low gain (I usually listen at 3 o'clock on the volume knob... but I can easily turn it to max and it's not at ear-deafening levels at all). I would like some extra headroom on the volume pot. On high gain, I notice a lot of distortion/crackling noises (really noticeable in deep sub-bass frequencies)... is this what you call clipping? So my question is: can I get these DT880s to play a lot louder without getting any distortion on high gain?
 
Mar 24, 2013 at 5:25 AM Post #1,130 of 5,671
Quote:
Hey guys, I'm using a pair of Beyerdynamic DT880 600 ohms with a custom-made O2 that I purchased. My source is a Xonar DX sound card with the computer volume at 100%. My problem is I can only get normal listening levels on low gain (I usually listen at 3 o'clock on the volume knob... but I can easily turn it to max and it's not at ear-deafening levels at all). I would like some extra headroom on the volume pot. On high gain, I notice a lot of distortion/crackling noises... is this what you call clipping? So my question is: can I get these DT880s to play a lot louder without getting any distortion on high gain?

 
Full-scale output is supposedly 2V rms on the DX, so ideally you'd want a gain of 3.5x.  Actually, default low gain is 2.5x; the amp really isn't capable of all that much more than the max of what you get on low gain, with a source at around Redbook 2V level.  In that case, yeah, high gain would cause clipping.
 
I guess it's a matter of perspective, what you consider normal, etc.  Some people say they get about normal levels out of an iPod, with 600 ohms Beyers (but not loud levels).
 
So by either changing the gain by swapping resistors internally, or setting high gain + turning volume down in software to the point where you don't get clipping, you could get say 2.9 dB more than what you're getting at low gain, if the DX really goes to 2V.  You could confirm with say a multimeter and a quick measurement.
 
 
You know, if low gain is default 2.5x and DX outputs 2V and you're reaching that (no volume turned down somewhere in software), you're getting 5V rms out max on low gain.  You might need to go to a different price tier to find amps that can output a lot more than that.  O2, FiiO E9, some others top out at around 7V, which is 2.9 dB more.  Schiit Magni supposedly does, say, 8.8V at most, or 4.9 dB over your current setup with O2 on low gain.
 
I mean, around 10 dB extra is considered perceptually around twice as loud.  To get 10 dB more, you'd be looking at a pretty powerful amp, looking for about 16V rms output, or a huge 416 mW into 600 ohms.  On the other hand, this is already really really loud, at least for most people, and relative to levels that cause hearing damage.  Actually, those kinds of levels can damage the headphones too.  They're rated for 100 mW nominal.
 
 
So, actually, because this is so loud, maybe you should double-check the output levels you're getting.
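In case anyone wants to sanity-check the arithmetic, here's a quick sketch (assuming the nominal 2V source and the stock 2.5x low gain; substitute your own measured numbers):

import math

source_v = 2.0    # assumed Xonar DX full-scale output, V rms
low_gain = 2.5    # stock O2 low-gain setting
v_low = source_v * low_gain    # ~5 V rms max on low gain

candidates = [
    ("O2 / E9 max", 7.0),
    ("Magni (claimed)", 8.8),
    ("+10 dB target", v_low * 10 ** (10 / 20)),
]
for name, v_max in candidates:
    gain_db = 20 * math.log10(v_max / v_low)    # headroom over low gain
    p_mw = v_max ** 2 / 600 * 1000              # power into 600-ohm DT880, mW
    print(f"{name}: {v_max:.1f} V rms, {gain_db:+.1f} dB vs low gain, {p_mw:.0f} mW into 600 ohms")

Which lines up with the numbers above: a couple of dB extra from a bigger amp, and roughly ten times the power (well past the DT880's 100 mW rating) for a perceived doubling of loudness.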
 
Mar 24, 2013 at 5:49 AM Post #1,131 of 5,671
Quote:
 
Full-scale output is supposedly 2V rms on the DX, so ideally you'd want a gain of 3.5x.  Actually, default low gain is 2.5x; the amp really isn't capable of all that much more than the max of what you get on low gain, with a source at around Redbook 2V level.  In that case, yeah, high gain would cause clipping.
 
I guess it's a matter of perspective, what you consider normal, etc.  Some people say they get about normal levels out of an iPod, with 600 ohms Beyers (but not loud levels).
 
So by either changing the gain by swapping resistors internally, or setting high gain + turning volume down in software to the point where you don't get clipping, you could get say 2.9 dB more than what you're getting at low gain, if the DX really goes to 2V.  You could confirm with say a multimeter and a quick measurement.
 
 
You know, if low gain is default 2.5x and DX outputs 2V and you're reaching that (no volume turned down somewhere in software), you're getting 5V rms out max on low gain.  You might need to go to a different price tier to find amps that can output a lot more than that.  O2, FiiO E9, some others top out at around 7V, which is 2.9 dB more.  Schiit Magni supposedly does, say, 8.8V at most, or 4.9 dB over your current setup with O2 on low gain.
 
I mean, around 10 dB extra is considered perceptually around twice as loud.  To get 10 dB more, you'd be looking at a pretty powerful amp, looking for about 16V rms output, or a huge 416 mW into 600 ohms.  On the other hand, this is already really really loud, at least for most people, and relative to levels that cause hearing damage.  Actually, those kinds of levels can damage the headphones too.  They're rated for 100 mW nominal.
 
 
So, actually, because this is so loud, maybe you should double-check the output levels you're getting.

 
Thanks for that info. I might have to lower the software volume then. Would lowering the Windows volume cause signal degradation? I've read somewhere that it should be kept at 100%?
 
Mar 24, 2013 at 5:59 AM Post #1,132 of 5,671
Quote:
Are you sure it's just not a stronger amplification? The K/Q701 is pretty notoriously annoying to drive for a low impedance dynamic, and I know this because I used to own it.

As for being able to switch between the amplifiers with little delay, find a cable or device that lets you connect the outputs of both amps to one input to the headphones, without summing to mono.

Quote:
Quote:
Are you sure it's just not a stronger amplification? The K/Q701 is pretty notoriously annoying to drive for a low impedance dynamic, and I know this because I used to own it.

As for being able to switch between the amplifiers with little delay, find a cable or device that lets you connect the outputs of both amps to one input to the headphones, without summing to mono.

I could try it with the M-100 at a later time [probably tomorrow].
 
Likewise, I could try it with the V-MODA SharePlay cable, maybe?

 
I said I would try it tomorrow, but I just couldn't wait because I'm curious myself to know if it's due to the headphones, hahaha.
Anyhow, stereo and binaural tracks, 1 kHz rough volume-matching, V-MODA M-100, manual cable switching (SharePlay cable splitter didn't work too well).
 
I can still perceive similar things with the M-100 as with the K 701. The C5's L/R placement of instruments in my head seems closer towards the center than the O2's. Maybe closer isn't the right word and it's confusing to you guys. The C5 seems more diagonally placed from my head (narrower), whereas the O2 seems more horizontally placed from my head (wider). The O2 has better "synergy" with the M-100 in this sense, the opposite of the K 701, since the M-100's soundstage always seemed fairly cramped in width.
 
Since I usually don't listen to the M-100's at home, I kind of forgot how it sounds for home use. Eh....totally different from the 701's. XD
 
Tus-Chan, you mention that the amp shouldn't affect the soundstage. Do you think what I'm perceiving as the soundstage is really different colourations of the frequency response somehow (even though the C5 measures flat according to JDS Lab's measurements)?
 
Mar 24, 2013 at 6:03 AM Post #1,133 of 5,671
Quote:
Quote:
 
Full-scale output is supposedly 2V rms on the DX, so ideally you'd want a gain of 3.5x.  Actually, default low gain is 2.5x; the amp really isn't capable of all that much more than the max of what you get on low gain, with a source at around Redbook 2V level.  In that case, yeah, high gain would cause clipping.
 
I guess it's a matter of perspective, what you consider normal, etc.  Some people say they get about normal levels out of an iPod, with 600 ohms Beyers (but not loud levels).
 
So by either changing the gain by swapping resistors internally, or setting high gain + turning volume down in software to the point where you don't get clipping, you could get say 2.9 dB more than what you're getting at low gain, if the DX really goes to 2V.  You could confirm with say a multimeter and a quick measurement.
 
 
You know, if low gain is default 2.5x and DX outputs 2V and you're reaching that (no volume turned down somewhere in software), you're getting 5V rms out max on low gain.  You might need to go to a different price tier to find amps that can output a lot more than that.  O2, FiiO E9, some others top out at around 7V, which is 2.9 dB more.  Schiit Magni supposedly does, say, 8.8V at most, or 4.9 dB over your current setup with O2 on low gain.
 
I mean, around 10 dB extra is considered perceptually around twice as loud.  To get 10 dB more, you'd be looking at a pretty powerful amp, looking for about 16V rms output, or a huge 416 mW into 600 ohms.  On the other hand, this is already really really loud, at least for most people, and relative to levels that cause hearing damage.  Actually, those kinds of levels can damage the headphones too.  They're rated for 100 mW nominal.
 
 
So, actually, because this is so loud, maybe you should double-check the output levels you're getting.

 
Thanks for that info. I might have to lower the software volume then. Would lowering the Windows volume cause signal degradation? I've read somewhere that it should be kept at 100%?

If you're using Vista (maybe it's 7) and above, the software shouldn't degrade the signal from what I recall.
http://blog.szynalski.com/2009/11/17/an-audiophiles-look-at-the-audio-stack-in-windows-vista-and-7/
Quote:
The Vista/Win7 audio engine automatically feeds your sound card with the highest-quality output stream that it can handle, which is usually 24 bits per sample. Perhaps you’re wondering why you should care, given that most music uses only 16 bits per sample. Suppose you’re playing a 16-bit song with a digital volume control set to 10%. This corresponds to dividing each sample by 10. Now let’s assume the song contains the following two adjacent samples: 41 and 48. In an ideal world, after the volume control we would get 4.1 and 4.8. However, if the output stream has a 16-bit depth just like the input stream, then both output samples will have to be truncated to 4. There is now no difference between the two samples, which means we have lost some resolution. But if we can have an output stream with 24 bits per sample, for each 16-bit level we get 2⁸ = 256 additional (“fractional”) levels, so we can still preserve the difference between the two attenuated samples. In fact, we can have ≈4.1016 and ≈4.8008, which is within 0.04% of the “ideal” samples of 4.1 and 4.8.

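A tiny sketch of that example, just to make it concrete (simplified: it truncates rather than dithers, so the exact decimals differ slightly from the blog's rounded figures):

# Scale two adjacent 16-bit samples to 10% volume, then quantize to the
# output format. With a 16-bit output both collapse to the same value;
# with 24 bits, the 8 extra fractional bits keep them distinct and close
# to the ideal 4.1 and 4.8.
samples = [41, 48]
volume = 0.10

for out_bits in (16, 24):
    frac_bits = out_bits - 16            # extra bits below the 16-bit LSB
    scale = 2 ** frac_bits
    quantized = [int(s * volume * scale) / scale for s in samples]
    print(f"{out_bits}-bit output: {quantized}")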
 
Mar 24, 2013 at 8:30 AM Post #1,134 of 5,671
Quote:
 
Full-scale output is supposedly 2V rms on the DX, so ideally you'd want a gain of 3.5x.  Actually, default low gain is 2.5x; the amp really isn't capable of all that much more than the max of what you get on low gain, with a source at around Redbook 2V level.  In that case, yeah, high gain would cause clipping.
 
I guess it's a matter of perspective, what you consider normal, etc.  Some people say they get about normal levels out of an iPod, with 600 ohms Beyers (but not loud levels).
 
So by either changing the gain by swapping resistors internally, or setting high gain + turning volume down in software to the point where you don't get clipping, you could get say 2.9 dB more than what you're getting at low gain, if the DX really goes to 2V.  You could confirm with say a multimeter and a quick measurement.
 
 
You know, if low gain is default 2.5x and DX outputs 2V and you're reaching that (no volume turned down somewhere in software), you're getting 5V rms out max on low gain.  You might need to go to a different price tier to find amps that can output a lot more than that.  O2, FiiO E9, some others top out at around 7V, which is 2.9 dB more.  Schiit Magni supposedly does, say, 8.8V at most, or 4.9 dB over your current setup with O2 on low gain.
 
I mean, around 10 dB extra is considered perceptually around twice as loud.  To get 10 dB more, you'd be looking at a pretty powerful amp, looking for about 16V rms output, or a huge 416 mW into 600 ohms.  On the other hand, this is already really really loud, at least for most people, and relative to levels that cause hearing damage.  Actually, those kinds of levels can damage the headphones too.  They're rated for 100 mW nominal.
 
 
So, actually, because this is so loud, maybe you should double-check the output levels you're getting.

 
One day I'll be able to make such calculations too.

 
 
Quote:
 
Would lowering the Windows volume cause signal degradation? I've read somewhere that it should be kept at 100%?


If you scroll back a few pages in the thread, you'll see a whole discussion on this. To sum it up quickly:
- As miceblue said:
 
Quote:
If you're using Vista (maybe it's 7) and above, the software shouldn't degrade the signal from what I recall.
http://blog.szynalski.com/2009/11/17/an-audiophiles-look-at-the-audio-stack-in-windows-vista-and-7/

- If you set your DAC to output 24 bits while playing 16-bit audio, you're "adding fake bits", and it's those padding bits that get eaten by the digital volume lowering, minimizing degradation to the signal.
- This is probably irrelevant anyway, as you (or at least I, and a few others) won't notice any degradation unless you get down to about 9 effective bits, which is unlikely unless you're lowering the digital volume a lot (see the rough sketch below).
- Of course, YMMV and it all depends on your setup. Everything is specific to your gear.
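Back-of-the-envelope only (my own rule-of-thumb sketch, not a measurement): every ~6 dB of digital attenuation pushes roughly one more bit of a 16-bit signal down toward the output format's least significant bit, so a 24-bit output path leaves a lot of headroom before you get anywhere near 9 effective bits.

def effective_bits(source_bits, out_bits, attenuation_db):
    # Bits of the source that survive above the output format's LSB after a
    # digital volume cut (ignores dither and the analog noise floor).
    bits_lost = max(0.0, source_bits + attenuation_db / 6.02 - out_bits)
    return source_bits - bits_lost

for atten_db in (6, 20, 42):
    for out_bits in (16, 24):
        print(f"{atten_db} dB cut, {out_bits}-bit output: "
              f"~{effective_bits(16, out_bits, atten_db):.1f} effective bits")

So with a 24-bit output path, even a big software cut shouldn't cost 16-bit material any resolution in the digital domain.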
 
Mar 24, 2013 at 9:35 AM Post #1,135 of 5,671
Quote:
 
Tus-Chan, you mention that the amp shouldn't affect the soundstage. Do you think what I'm perceiving as the soundstage is really different colourations of the frequency response somehow (even though the C5 measures flat according to JDS Lab's measurements)?

 
An amp shouldn't negatively impact the soundstage, but it could; it processes the entire audio signal, so theoretically it could do almost anything to it. What isn't possible, however, is for an amp to improve the soundstage beyond what a headphone is capable of. It's safe to say that any amp that measures well will allow for the full soundstage as brought forth by the transducer. The idea, for instance, that tube amps create a bigger soundstage than solid-state amps simply because they are tube amps makes no sense. Likewise, if some person says they felt the soundstage increase while tube rolling, that is no evidence of anything other than the problems of tube amps and the caveats of human perception.
 
Of course, perceived soundstage is different; there's a huge amount of uncertainty and unpredictability involved there. The trick, of course, is that even an amp that theoretically does not alter the soundstage might still be perceived to offer a smaller soundstage than an amp that actually diminishes it; when human perception enters the picture, things get unreliable and uncertain fast.
 
Mar 24, 2013 at 4:31 PM Post #1,137 of 5,671
Quote:
If you're using Vista (maybe it's 7) and above, the software shouldn't degrade the signal from what I recall.
http://blog.szynalski.com/2009/11/17/an-audiophiles-look-at-the-audio-stack-in-windows-vista-and-7/

 
The software audio stack wouldn't be mangling your audio, but we're all still using real-world hardware that isn't noiseless, isn't perfect.  Reduce the output levels, and if the noise levels aren't reduced by the same factor, then you've effectively reduced the SNR.  (Depending on the setup, quite possibly not by an audible amount or by any difference worth caring about, but still.)
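A trivial worked example with made-up numbers, just to show the shape of it: if the analog noise floor of the DAC/amp stays put while the signal is pulled down in software, the cut comes straight off the usable SNR.

dac_snr_db = 110       # hypothetical full-scale SNR of the DAC/amp chain
digital_cut_db = 20    # volume reduction done in software

# The analog noise floor doesn't move; only the signal level drops,
# so SNR relative to the (quieter) signal shrinks by the same amount.
effective_snr_db = dac_snr_db - digital_cut_db
print(f"Effective SNR at the new level: {effective_snr_db} dB")  # 90 dB

Whether that difference is audible in practice is a separate question, as noted above.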
 
Mar 24, 2013 at 4:35 PM Post #1,138 of 5,671
As I mentioned before, you can find differences with some bench measurements, but do they really correspond to what people say they hear?


I noticed a significant difference between my Clip+ and O2/ODAC soundstage: the Clip+ sounds almost as if it had crossfeed in comparison. I was really quite surprised. The difference in the measurements was 65 dB crosstalk vs 50 dB. That 15 dB is audible to me. Certainly, if I were buying a new DAC or recommending one, I would be careful to make sure it was -65 dB on crosstalk.

It so happens that even prior to this I would switch on the Rockbox "stereo width" feature to get a better soundstage, whereas I rarely come across music that needs it when I play CDs off my laptop through the O2/ODAC (which I often do, as it happens, since I am working my way through my flatmate's 1,000-CD alternative collection; it's better than Spotify, I'm so fortunate).

It's surprising to me that miceblue can hear a narrower soundstage on his C5 when it measures about the same as the O2.
 
Mar 24, 2013 at 4:38 PM Post #1,139 of 5,671
Hm, that's true....:thinking-smile:
I might try this test at school with multimeter volume-matching for a more accurate test.


Is there a relatively inexpensive way to get more accurate in-person testing results? I feel like there's a better way to do these tests without having to constantly switch cables, pause/play music, etc. *this part should probably be in the Sound Science threads, but we're kind of on the topic anyway*


Make sure to play a 60 Hz sound file, as multimeters don't read AC voltage accurately at much beyond that frequency (per the O2 designer's notes on how to test for output impedance).

I believe there is some gadget for doing blind testing using two devices and the same headphone. Maybe someone can tell us more.
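For reference, a minimal sketch of the usual no-load/loaded multimeter method for output impedance (resistor value and readings below are made up for illustration); the same AC-volts reading with the 60 Hz tone is also what you'd use to level-match two amps:

def output_impedance(v_no_load, v_loaded, r_load):
    # Voltage divider: v_loaded = v_no_load * r_load / (r_load + z_out)
    return r_load * (v_no_load - v_loaded) / v_loaded

# Example with made-up readings: 1.000 V rms unloaded at the jack,
# 0.985 V rms across a 33-ohm load resistor, 60 Hz test tone playing.
print(f"Zout is about {output_impedance(1.000, 0.985, 33.0):.2f} ohm")  # ~0.50 ohm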
 
Mar 24, 2013 at 4:52 PM Post #1,140 of 5,671
Thanks for that info. I might have to lower the software volume then. Would lowering the Windows volume cause signal degradation? I've read somewhere that it should be kept at 100%?
If you're using Vista (maybe it's 7) and above, the software shouldn't degrade the signal from what I recall.
http://blog.szynalski.com/2009/11/17/an-audiophiles-look-at-the-audio-stack-in-windows-vista-and-7/


Quoting the others:

The Vista/Win7 audio engine automatically feeds your sound card with the highest-quality output stream that it can handle, which is usually 24 bits per sample.


I found that my Vista computer automatically set the ODAC to 16-bit?!?! I didn't discover this for a while. I've also understood that the Vista/Win7+ audio engine processes everything internally at 32-bit, so even 16-bit output may not be affected for volume-control purposes, whereas XP definitely is. However, that quoted text does suggest to me that there could still be undesirable degradation at 16-bit output even if Vista is processing at 32-bit, so to be on the safe side, set the card to 24-bit. Don't be tempted to set it to 96 kHz, though. If you do, then play a 60 Hz test tone at high volume: you will likely hear high-frequency artifacts. I can hear them with the ODAC. 24-bit/44.1 kHz for music and 48 kHz for DVD is best (although downsampling may be good enough these days).
 
