O2 AMP + ODAC
Feb 18, 2013 at 3:07 PM Post #946 of 5,671
Quote:
Quote:
Does anyone here have experience with the Leckerton UHA-6S? I'm curious as to how it compares to the O2 as a more portable option.

 
Not me, but I seem to recall that some of the pirate crew (purrin, Anax, maybe others) had heard both and preferred the Leckerton.  Of course, these are the guys who think NwAv's getting money under the table for the O2 / ODAC.
 
 
Seems like Mr. Leckerton knows what's up; he's an applications engineer at Cirrus.  The UHA-6S MKII uses a high-end Cirrus Logic DAC.  There are some measurements of the older version published on the site, not completely comprehensive, but let's assume the rest checks out too.  According to the objectivist camp and so on, there probably shouldn't be a discernible difference in sound between the two devices when operating at levels below clipping.
 
So at least by the specs, the limitations compared to O2 / ODAC are:
  1. limited to 16/48 over USB (but there is an S/PDIF input)
  2. lower output power levels — 30 mW @ 16 ohms, 55 mW @ 32 ohms, 110 mW @ 100 ohms, 55 mW @ 300 ohms (presumably with stock output op amp)
  O2 gets 353 mW @ 15 ohms, 534 mW @ 33 ohms, (interpolated by me) 272 mW or so @ 100 ohms, 94 mW or so @ 300 ohms, on a mid-high charge on battery
 
So the O2 is better for some planar magnetics, maybe AKG Q701 if you listen really loud.  Otherwise, no lower-impedance sets actually need that kind of power.  O2 is also a little louder for high-impedance sets.
 
Supposedly the older UHA-6S has higher noise than the MKII. Even though there's no noise level listed, you can look at the published THD+N spectrum graph for the original UHA-6S and see most of it sitting around -145 dBV or so, whereas the O2 is around 5 dB quieter across the band.  So at least the amp portion is really, really quiet: good if you use some super-sensitive IEMs.
 
Listed battery life is better than that of the standard O2.

Ah, thanks for the information. I read somewhere that the Leckerton was preferred over the O2. I also read that the Leckerton offers better detail retrieval than the O2.
Kind of related to the O2, John of JDS Labs said their new portable amp, the C5, should sound as transparent as the O2. I pre-ordered one, so I guess I'll make a comparison between the two if I can hear any differences.
 
And I just realised I said the UHA-6S in my previous post when I meant to say UHA-4, the portable amp. XD
 
Feb 18, 2013 at 3:28 PM Post #947 of 5,671
Quote:
Originally Posted by miceblue
 
Ah, thanks for the information. I read somewhere that the Leckerton was preferred over the O2. I also read that the Leckerton offers better detail retrieval than the O2.

 
The Leckerton doesn't provide better detail retrieval than the O2 because it's not possible: The O2 is transparent. It's easy to believe, although I have no personal experience with the Leckerton, that its performance is every bit as good as the O2.
 
Is there an actual difference in sound between the two? If there is, a listener might choose either one based on his or her preferences. It's important to differentiate between subjective preferences and claims which are unjustified.
 
Feb 18, 2013 at 3:42 PM Post #948 of 5,671
Quote:
Quote:
Originally Posted by miceblue
 
Ah, thanks for the information. I read somewhere that the Leckerton was preferred over the O2. I also read that the Leckerton offers better detail retrieval than the O2.

 
The Leckerton doesn't provide better detail retrieval than the O2 because it's not possible: The O2 is transparent. It's easy to believe, although I have no personal experience with the Leckerton, that its performance is every bit as good as the O2.
 
Is there an actual difference in sound between the two? If there is, a listener might choose either one based on his or her preferences. It's important to differentiate between subjective preferences and claims which are unjustified.

Yeah I know, I'm just restating what I read. I would like to do a [insert sound science term only] test if I ever get the chance.
 
This probably should be asked in the sound science section, but so many things about the O2/ODAC overlap with it anyway that I'll just ask here. What measurements account for detail retrieval and soundstage? I can't come up with any objective reasons other than maybe distortion, but I own the FiiO E7 as well as the O2/ODAC, and compared next to each other the O2/ODAC has superior detail retrieval, instrument separation, and a much more open-sounding soundstage (yes, yes, I know these are all subjective observations). The designer of the O2 certainly made positive remarks about the E7 for its price.
 
Feb 18, 2013 at 3:43 PM Post #949 of 5,671
Quote:
 
The Leckerton doesn't provide better detail retrieval than the O2 because it's not possible: The O2 is transparent. It's easy to believe, although I have no personal experience with the Leckerton, that its performance is every bit as good as the O2.
 
Is there an actual difference in sound between the two? If there is, a listener might choose either one based on his or her preferences. It's important to differentiate between subjective preferences and claims which are unjustified.

IMO the O2 is more neutral than the Leckerton, not that the Leckerton isn't neutral....the O2 is just more neutral. That's the only way I can describe it.
 
Feb 18, 2013 at 3:50 PM Post #950 of 5,671
Quote:
Quote:
 
The Leckerton doesn't provide better detail retrieval than the O2 because it's not possible: The O2 is transparent. It's easy to believe, although I have no personal experience with the Leckerton, that its performance is every bit as good as the O2.
 
Is there an actual difference in sound between the two? If there is, a listener might choose either one based on his or her preferences. It's important to differentiate between subjective preferences and claims which are unjustified.

IMO the O2 is more neutral than the Leckerton, not that the Leckerton isn't neutral....the O2 is just more neutral. That's the only way I can describe it.

"All animals are equal, but some animals are more equal than others" - George Orwell
 
Sorry, I couldn't help it. XD
 
Feb 18, 2013 at 3:55 PM Post #951 of 5,671
Quote:
Hopefully nobody yells at me for responding to old posts.
 
 
16 bits to 24 bits is a difference of having 2^16 = 65536 possible values vs. 2^24 = 16777216 possible values, or a factor of 2^8 = 256 difference.  So around 20 * log10(256) = 48 dB.  Theoretically you could throw away 99.6% of the signal.  Note that 66% in some software volume control setting may not actually correspond to 66% of the signal.  The scaling often doesn't work that way.
 
If you're playing a 16-bit file with 24-bit output at 100% volume control, let's say those 24 bits for a particular sample are the ones below, as a simplistic example, ignoring a couple of small details.  (The order is from most to least significant, starting from the left.)  Notice the rightmost eight digits are 0 because you only have 16 bits of information:
01010110 00100111 00000000
 
If you reduce the volume somewhat digitally, the output will essentially be shifted to the right:
00010101 10001001 11000000
 
With a 16-bit output device, the two things would look like
01010110 00100111
and
00010101 10001001  (whoops where'd the last couple bits go?)
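The padding and shifting described above can be sketched in Python; the sample value and the 2-bit shift (roughly 12 dB of attenuation) are just the illustrative numbers from the example:

```python
# A 16-bit sample padded to 24 bits, then attenuated by shifting right.
sample_16 = 0b0101011000100111        # 16 bits from the file
sample_24 = sample_16 << 8            # 24-bit form: low 8 bits are zero padding

attenuated_24 = sample_24 >> 2        # digital volume cut on a 24-bit output
attenuated_16 = sample_16 >> 2        # same cut on a 16-bit output

# On the 24-bit path the shifted-out bits were only the zero padding, so the
# original information survives; on the 16-bit path real bits fall off the end.
assert (attenuated_24 << 2) == sample_24
assert (attenuated_16 << 2) != sample_16
print(f"{attenuated_24:024b}")        # 000101011000100111000000
```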

 
Awesome, great post. I saw the topic being mentioned by others like Defiant in the Modi discussion threads but I never found the math behind it. Thanks for clearing that up with actual math instead of my fuzzy logic. 
 

 
Feb 18, 2013 at 4:01 PM Post #952 of 5,671
Quote:
Quote:
Hopefully nobody yells at me for responding to old posts.
 
 
16 bits to 24 bits is a difference of having 2^16 = 65536 possible values vs. 2^24 = 16777216 possible values, or a factor of 2^8 = 256 difference.  So around 20 * log10(256) = 48 dB.  Theoretically you could throw away 99.6% of the signal.  Note that 66% in some software volume control setting may not actually correspond to 66% of the signal.  The scaling often doesn't work that way.
 
If you're playing a 16-bit file with 24-bit output at 100% volume control, let's say those 24 bits for a particular sample are the ones below, as a simplistic example, ignoring a couple of small details.  (The order is from most to least significant, starting from the left.)  Notice the rightmost eight digits are 0 because you only have 16 bits of information:
01010110 00100111 00000000
 
If you reduce the volume somewhat digitally, the output will essentially be shifted to the right:
00010101 10001001 11000000
 
With a 16-bit output device, the two things would look like
01010110 00100111
and
00010101 10001001  (whoops where'd the last couple bits go?)

 
Awesome, great post. I saw the topic being mentioned by others like Defiant in the Modi discussion threads but I never found the math behind it. Thanks for clearing that up with actual math instead of my fuzzy logic. 
 

Wait, so it is OK to have the source at a low-ish volume level without losing information with a 24-bit output?

 
Feb 18, 2013 at 5:30 PM Post #953 of 5,671
Quote:
Wait, so it is OK to have the source at a low-ish volume level without losing information with a 24-bit output?

 
OK?  I don't think anybody's going to label you a sick deviant, no matter how you adjust your volume.  

 
When representing a 16-bit word on a 24-bit system, you could reduce the level a lot without losing information.  As noted earlier, that's because the 24-bit representation just pads the last 8 bits with zeros.  There's a buffer there.  This is related to the reason why when mixing / mastering in the studio, they use 24 bits (or higher, or floating point).  They need to do all kinds of processing, and boosting and cutting multiple times.  The extra bits provide some good margin to work with.
 
For outputting 16-bit material on a 24-bit DAC, that's different because the hardware's noise level is a factor.  For example, if you were to play back a signal at a very very low 24-bit level (say all zeros except in the last few digits, meaning something much quieter than can even be represented with 16 bits), that would get lost under the noise.  For a 24-bit DAC with say 19 bits of effective resolution, if you play back 16-bit material, the 16-bit quantization noise (difference between the "true" signal and the 16-bit representation on the disc, from sample to sample) is higher than any noise from the DAC, so the noise from the DAC makes practically zero difference.  Reduce the volume enough, and the quantization noise may be lower than the DAC's own noise, because the music samples have been divided down to a low enough level.  Then it's going to be the DAC that's the limiting factor.
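As a rough back-of-the-envelope check on the comparison above, the standard full-scale-sine figure (about 6.02 dB per bit plus 1.76 dB) puts the two noise floors at:

```python
def ideal_snr_db(bits: float) -> float:
    # Ideal quantization SNR for a full-scale sine wave: 6.02 dB/bit + 1.76 dB
    return 6.02 * bits + 1.76

# 16-bit material vs. a 24-bit DAC with ~19 effective bits (the assumed figure):
print(round(ideal_snr_db(16), 1))   # 98.1  -> quantization noise dominates
print(round(ideal_snr_db(19), 1))   # 116.1 -> DAC's own floor, well below it
```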
 
Like I said, there is noise in recordings, noise in listening environments, and very importantly—effects of auditory masking when music is playing (the louder stuff makes certain smaller details and things harder, if not impossible, to discern).  For any of the above to matter, you'd need the considerations in the previous sentence to somehow not matter, plus a whole lot of amplification after the DAC or some very sensitive IEMs, so any small amount of noise becomes actually audible.
 
Keep in mind that ~100 dB is a whole lot.  If you can hear the 16-bit quantization noise, you are listening way too ****ing loud.  Let's make up somewhat-realistic numbers.  Let's say you reduce the volume in software by 35 dB, such that the signal level out of the DAC is low enough that the effective SNR is 70 dB.  (Oh noes, 70 dB is not hi-fi.  You just used software volume control past the "okay" point and violated everything I said above.  Actually, ODAC and some better DACs should get better performance than that.)  Let's say you have the amp and headphones set such that when volume is on full blast, the peaks reach a deafening 115 dB SPL.  This means with the volume turned down 35 dB, peaks are at 80 dB SPL, for a comfortable level.  So the noise from the DAC is at 10 dB SPL.  You know what kind of room you need to hear noise at 10 dB SPL?  And then instantly when there is any sound from the system, you wouldn't hear the noise anymore.
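The arithmetic in that made-up scenario, spelled out (every number here is one of the assumptions above, not a measurement):

```python
full_scale_peak_spl = 115.0   # dB SPL with the volume at full blast (assumed)
software_cut_db = 35.0        # digital attenuation applied in software
effective_snr_db = 70.0       # SNR remaining at the reduced DAC level (assumed)

peak_spl = full_scale_peak_spl - software_cut_db    # where music peaks land
noise_spl = peak_spl - effective_snr_db             # where the DAC noise lands

print(peak_spl, noise_spl)    # 80.0 10.0
```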
 
It's effectively really not a big deal.  Use whatever is convenient for you.
 
Feb 18, 2013 at 6:17 PM Post #954 of 5,671
Quote:
Originally Posted by mikeaj
 
1x gain is not so much adding current as adding the ability to deliver more current if necessary.  The current actually being delivered depends on the output volume level and the load impedance.  The electronics and structure are different for one amp as compared to another—there may be a difference in noise levels, output impedance, frequency response, nonlinear distortion at different loads / output levels / frequencies and so on.  This may or may not translate into some reliably-perceptible difference in sound quality.
 
Actually, with 1x gain, you are amplifying any noise from the input by a lesser amount, so that is slightly better, possibly, along with more negative feedback for the gain-stage op amp (which is not the limiting factor; the output stage op amp is).  Probably not any difference in practice.

 
I really am sorry, and if what I ask is impossible to do, please just tell me, and I promise I won't bug you anymore.
If *1 isn't "adding" anything, then how does it work? If all I'm doing is "adding ability", then how come the signal gets louder? If I'm multiplying by 1, why would anything change? Is there a way of explaining in layman's terms what actually changes when I move the volume control while using *1 gain?
 
Quote:
 
OK?  I don't think anybody's going to label you a sick deviant, no matter how you adjust your volume.  

 
When representing a 16-bit word on a 24-bit system, you could reduce the level a lot without losing information.  As noted earlier, that's because the 24-bit representation just pads the last 8 bits with zeros.  There's a buffer there.  This is related to the reason why when mixing / mastering in the studio, they use 24 bits (or higher, or floating point).  They need to do all kinds of processing, and boosting and cutting multiple times.  The extra bits provide some good margin to work with.
 
For outputting 16-bit material on a 24-bit DAC, that's different because the hardware's noise level is a factor.  For example, if you were to play back a signal at a very very low 24-bit level (say all zeros except in the last few digits, meaning something much quieter than can even be represented with 16 bits), that would get lost under the noise.  For a 24-bit DAC with say 19 bits of effective resolution, if you play back 16-bit material, the 16-bit quantization noise (difference between the "true" signal and the 16-bit representation on the disc, from sample to sample) is higher than any noise from the DAC, so the noise from the DAC makes practically zero difference.  Reduce the volume enough, and the quantization noise may be lower than the DAC's own noise, because the music samples have been divided down to a low enough level.  Then it's going to be the DAC that's the limiting factor.
 
Like I said, there is noise in recordings, noise in listening environments, and very importantly—effects of auditory masking when music is playing (the louder stuff makes certain smaller details and things harder, if not impossible, to discern).  For any of the above to matter, you'd need the considerations in the previous sentence to somehow not matter, plus a whole lot of amplification after the DAC or some very sensitive IEMs, so any small amount of noise becomes actually audible.
 
Keep in mind that ~100 dB is a whole lot.  If you can hear the 16-bit quantization noise, you are listening way too ****ing loud.  Let's make up somewhat-realistic numbers.  Let's say you reduce the volume in software by 35 dB, such that the signal level out of the DAC is low enough that the effective SNR is 70 dB.  (Oh noes, 70 dB is not hi-fi.  You just used software volume control past the "okay" point and violated everything I said above.  Actually, ODAC and some better DACs should get better performance than that.)  Let's say you have the amp and headphones set such that when volume is on full blast, the peaks reach a deafening 115 dB SPL.  This means with the volume turned down 35 dB, peaks are at 80 dB SPL, for a comfortable level.  So the noise from the DAC is at 10 dB SPL.  You know what kind of room you need to hear noise at 10 dB SPL?  And then instantly when there is any sound from the system, you wouldn't hear the noise anymore.
 
It's effectively really not a big deal.  Use whatever is convenient for you.


I'm afraid I've gotten confused by the explanation of the 0's moving to the left...
I got that the answer to "if I use the 24-bit option for 16-bit audio, does that mean I can use digital volume control without losing fidelity?" was basically a yes, but here's another question.
 
I got the original notion from watching Ethan Winer's video on YouTube (I was the one who recommended it here). Now, in that video he demonstrated that actual, audible degradation only occurs at around 8-10 bits (YMMV and all that, OF COURSE). But I noticed that even that was mostly just added noise (or was it that the dynamic range shrank, raising the noise floor?). Is that all there is to it? Is that how loss of fidelity manifests itself when degrading bit depth: noise? Because if so, auditory masking aside, one can quite easily HEAR when he's gone too far with the digital volume control, without any speculation.
Or is there more to it, such as loss of micro detail, or bass texture, or something else?
 
And thank you for your time explaining all this. Cheers!
 
Feb 18, 2013 at 7:29 PM Post #955 of 5,671
Quote:
Quote:
Wait, so it is OK to have the source at a low-ish volume level without losing information with a 24-bit output?

 
OK?  I don't think anybody's going to label you a sick deviant, no matter how you adjust your volume.  

 
When representing a 16-bit word on a 24-bit system, you could reduce the level a lot without losing information.  As noted earlier, that's because the 24-bit representation just pads the last 8 bits with zeros.  There's a buffer there.  This is related to the reason why when mixing / mastering in the studio, they use 24 bits (or higher, or floating point).  They need to do all kinds of processing, and boosting and cutting multiple times.  The extra bits provide some good margin to work with.
 
For outputting 16-bit material on a 24-bit DAC, that's different because the hardware's noise level is a factor.  For example, if you were to play back a signal at a very very low 24-bit level (say all zeros except in the last few digits, meaning something much quieter than can even be represented with 16 bits), that would get lost under the noise.  For a 24-bit DAC with say 19 bits of effective resolution, if you play back 16-bit material, the 16-bit quantization noise (difference between the "true" signal and the 16-bit representation on the disc, from sample to sample) is higher than any noise from the DAC, so the noise from the DAC makes practically zero difference.  Reduce the volume enough, and the quantization noise may be lower than the DAC's own noise, because the music samples have been divided down to a low enough level.  Then it's going to be the DAC that's the limiting factor.
 
Like I said, there is noise in recordings, noise in listening environments, and very importantly—effects of auditory masking when music is playing (the louder stuff makes certain smaller details and things harder, if not impossible, to discern).  For any of the above to matter, you'd need the considerations in the previous sentence to somehow not matter, plus a whole lot of amplification after the DAC or some very sensitive IEMs, so any small amount of noise becomes actually audible.
 
Keep in mind that ~100 dB is a whole lot.  If you can hear the 16-bit quantization noise, you are listening way too ****ing loud.  Let's make up somewhat-realistic numbers.  Let's say you reduce the volume in software by 35 dB, such that the signal level out of the DAC is low enough that the effective SNR is 70 dB.  (Oh noes, 70 dB is not hi-fi.  You just used software volume control past the "okay" point and violated everything I said above.  Actually, ODAC and some better DACs should get better performance than that.)  Let's say you have the amp and headphones set such that when volume is on full blast, the peaks reach a deafening 115 dB SPL.  This means with the volume turned down 35 dB, peaks are at 80 dB SPL, for a comfortable level.  So the noise from the DAC is at 10 dB SPL.  You know what kind of room you need to hear noise at 10 dB SPL?  And then instantly when there is any sound from the system, you wouldn't hear the noise anymore.
 
It's effectively really not a big deal.  Use whatever is convenient for you.

Wow, thank you for taking the time to write all of that out! :xf_eek:
I'll have to read it more carefully when I get the chance.
 
Feb 18, 2013 at 8:57 PM Post #956 of 5,671
Quote:
If *1 isn't "adding" anything, then how does it work? If all I'm doing is "adding ability", then how come the signal gets louder? If I'm multiplying by 1, why would anything change? Is there a way of explaining in layman's terms what actually changes when I move the volume control while using *1 gain?


 
So if I understand you correctly, you plug headphones into one thing, get a certain volume.  Plug that thing into O2 with 1x gain and volume control turned all the way to the right, plug headphones into O2, and get a louder sound?  If there's a difference in volume there and the original is not heavily distorted, then the difference in volume is a consequence of the O2 having a lower output impedance.
 
What headphones are you using?  For (more or less) any amplifier in this context or audio source, the signal output of the device is split between the source (meaning whatever is connected to the headphones) output impedance and the headphone impedance.  It's a voltage divider.  The source output impedance is a technical modeling term that describes one aspect of how the electric circuit is configured and operates.  The headphones get a percentage share of the signal based on their impedance and the source output impedance.  If the source output impedance is smaller, the headphones see a larger percentage of the voltage the source is outputting.  There are a few other ramifications here with output impedance, but this much is sufficient to explain changes in volume.
 
Which volume control are you talking about?  Software volume control is usually (but not always) as I described earlier.  The volume control on the O2 and most amps is a potentiometer.  Regardless of whatever the gain is, by moving the pot's position, you are changing a few resistance values inside the circuit.  This changes how much of the signal from one part of the amp gets transferred to another part.
 
In the end, the voltage the headphones receive directly determines how much current is sent and how much power is delivered to the headphones.  i.e. if you know the headphone impedance and the voltage, you can calculate the current and power from that.  The greater the voltage, the greater the current and the greater the power delivered.  Power is energy / time.  The more power delivered, the more energy / time is being converted into mechanical motion and thus the greater the change in sound pressure levels that are getting to your ear.  Thus by reducing the signal level (can be achieved through any number of mechanisms), you can reduce pressure of the sound waves and thus how loud it is.
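A small Python sketch of the voltage divider and the resulting power delivery; the helper name and all impedance/voltage values here are hypothetical, chosen only to show the shape of the calculation:

```python
def load_voltage(v_source: float, z_out: float, z_load: float) -> float:
    """Share of the source voltage that appears across the headphones."""
    return v_source * z_load / (z_out + z_load)

z_headphone = 32.0   # ohms, hypothetical load
v_source = 1.0       # volts the source is trying to put out

for z_out in (0.5, 10.0):               # low vs. high output impedance
    v_hp = load_voltage(v_source, z_out, z_headphone)
    current = v_hp / z_headphone        # Ohm's law: I = V / R
    power = v_hp ** 2 / z_headphone     # watts delivered into the load
    print(f"Zout={z_out:>4} ohm: {v_hp:.3f} V, "
          f"{current * 1000:.1f} mA, {power * 1000:.2f} mW")
```

With the lower output impedance the headphones see nearly the full source voltage, hence the higher volume; with 10 ohms in the way they only get about three quarters of it.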
 
 
Quote:
i got the original notion from watching ethen winers video on youtube (i was the one who recommended it here). now, in that video, he demonstrated that actual, audible degradation only occurs at around 8-10 bits (ymmv and all that OF COURSE). but i noticed that even that, was mostly just added noise (or was it that the dynamic range shrunk, raising the noise floor?). is that all there is to it? is that how loss of fidelity manifests itself when degrading bit depth - noise? because if so, auditory masking aside, one can quite easily HEAR when hes gone to far with the digital volume control, without any speculations.
or is there more to it, such as loss of micro detail, or bass texture, or something else?

 
First of all, if you think 10 bits is okay in Ethan Winer's video (12 bits?  some value; anyhow, I agree), then all this discussion is about things that don't make any practical difference.  In general there are a lot of people freaking out over pretty much nothing.  I don't mean to blame anybody, but that's how it is.  Maybe that's the audiophile experience.
 
What's being shown in Ethan Winer's video is pretty much what would happen if you used software volume control to lower the volume on a 16-bit output system, while simultaneously increasing the volume with your amp such that the total level in the end remains constant (and assuming there are no ramifications of boosting the volume with the amp, amp is noiseless, etc.).  By the way, at 10 bits of resolution, that is effectively like 36 dB of volume control, so a pretty large amount.  The noise you get here is really more like the difference between the original signal and the quantized representation.  Actually, maybe not exactly, depending on whether there is dithering.
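The bits-to-decibels conversion being used here (roughly 6 dB per bit) is a one-liner:

```python
import math

def bits_to_db(bits: int) -> float:
    # Each bit doubles the number of representable levels: 20*log10(2) ~ 6.02 dB
    return 20 * math.log10(2 ** bits)

print(round(bits_to_db(16 - 10), 1))  # 36.1: attenuation leaving ~10 usable bits
print(round(bits_to_db(8), 1))        # 48.2: the 24-bit vs. 16-bit headroom
```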
 
Anyway, this is not what we're talking about.  When reducing volume on a 24-bit output system, it's the inherent noise level of the DAC output itself that is the floor, from thermal noise and whatnot, unless there's something I don't get either.  This kind of noise may or may not have the same characteristics as the quantization noise in the myths video.  Whether that kind of noise or difference gets perceived as noise added to the original (that's pretty much what it is, mathematically), loss of micro detail, bass texture, etc. depends on how you hear things.  I don't really know.  I really doubt it would be bass texture, though, as it should affect all frequencies.  Most likely it's mostly similar to whatever you heard in the video.
 
Keep in mind that if you reduce the volume in software (even 16 bits), you're not increasing the noise or anything.  Actually, reducing the level could reduce the noise a bit too, depending on the device.  You're just bringing the signal level down so it's quieter.  As a consequence, the signal-to-noise ratio is lower because you've reduced the signal.  No big deal.
 
The problem is only if you reduce the volume way too much in software and then boost the signal a lot to get to a normal listening level (why would you do this?), bringing up the noise level with it.  
 
Feb 18, 2013 at 9:56 PM Post #957 of 5,671
Quote:
 
 
So if I understand you correctly, you plug headphones into one thing, get a certain volume.  Plug that thing into O2 with 1x gain and volume control turned all the way to the right, plug headphones into O2, and get a louder sound?  If there's a difference in volume there and the original is not heavily distorted, then the difference in volume is a consequence of the O2 having a lower output impedance.
 
What headphones are you using?  For (more or less) any amplifier in this context or audio source, the signal output of the device is split between the source (meaning whatever is connected to the headphones) output impedance and the headphone impedance.  It's a voltage divider.  The source output impedance is a technical modeling term that describes one aspect of how the electric circuit is configured and operates.  The headphones get a percentage share of the signal based on their impedance and the source output impedance.  If the source output impedance is smaller, the headphones see a larger percentage of the voltage the source is outputting.  There are a few other ramifications here with output impedance, but this much is sufficient to explain changes in volume.
 
Which volume control are you talking about?  Software volume control is usually (but not always) as I described earlier.  The volume control on the O2 and most amps is a potentiometer.  Regardless of whatever the gain is, by moving the pot's position, you are changing a few resistance values inside the circuit.  This changes how much of the signal from one part of the amp gets transferred to another part.
 
In the end, the voltage the headphones receive directly determines how much current is sent and how much power is delivered to the headphones.  i.e. if you know the headphone impedance and the voltage, you can calculate the current and power from that.  The greater the voltage, the greater the current and the greater the power delivered.  Power is energy / time.  The more power delivered, the more energy / time is being converted into mechanical motion and thus the greater the change in sound pressure levels that are getting to your ear.  Thus by reducing the signal level (can be achieved through any number of mechanisms), you can reduce pressure of the sound waves and thus how loud it is.
 

 
To be polite, I will answer your question and say I'm using the DT770 (250 ohm). But let me explain:
My O2 has the standard gain options (*2.5 and *6.5). Then we got to the subject of using *2.5 gain with sensitive headphones (specifically the M-100, 32 ohm, 105 dB sensitivity), and someone mentioned that *1 gain may be more suitable. Which made me ask how *1 is even possible, because when you multiply by one, you keep the original value (obviously). That's what I'm trying to understand: why does *1 gain make a difference? Obviously it's "adding" something. What is it? How is it different from *2.5 gain (for argument's sake)?
 
Quote:
 
First of all, if you think 10 bits is okay in Ethan Winer's video (12 bits?  some value; anyhow, I agree), then all this discussion is about things that don't make any practical difference.  In general there are a lot of people freaking out over pretty much nothing.  I don't mean to blame anybody, but that's how it is.  Maybe that's the audiophile experience.
 
What's being shown in Ethan Winer's video is pretty much what would happen if you used software volume control to lower the volume on a 16-bit output system, while simultaneously increasing the volume with your amp such that the total level in the end remains constant (and assuming there are no ramifications of boosting the volume with the amp, amp is noiseless, etc.).  By the way, at 10 bits of resolution, that is effectively like 36 dB of volume control, so a pretty large amount.  The noise you get here is really more like the difference between the original signal and the quantized representation.  Actually, maybe not exactly, depending on whether there is dithering.
 
Anyway, this is not what we're talking about.  When reducing volume on a 24-bit output system, it's the inherent noise level of the DAC output itself that is the floor, from thermal noise and whatnot, unless there's something I don't get either.  This kind of noise may or may not have the same characteristics as the quantization noise in the myths video.  Whether that kind of noise or difference gets perceived as noise added to the original (that's pretty much what it is, mathematically), loss of micro detail, bass texture, etc. depends on how you hear things.  I don't really know.  I really doubt it would be bass texture, though, as it should affect all frequencies.  Most likely it's mostly similar to whatever you heard in the video.
 
Keep in mind that if you reduce the volume in software (even 16 bits), you're not increasing the noise or anything.  Actually, reducing the level could reduce the noise a bit too, depending on the device.  You're just bringing the signal level down so it's quieter.  As a consequence, the signal-to-noise ratio is lower because you've reduced the signal.  No big deal.
 
The problem is only if you reduce the volume way too much in software and then boost the signal a lot to get to a normal listening level (why would you do this?), bringing up the noise level with it.  
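A rough simulation of that last point, assuming plain rounding to a 16-bit grid with no dither: if you attenuate digitally and boost back up with the amp, the signal returns to full level but the quantization error rides up with it, so the SNR drops by roughly the amount of digital attenuation.

```python
# Attenuate a sine digitally, round to 16-bit steps, then boost back up.
# The quantization error gets boosted along with the signal.
import math

N = 48000
full = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(N)]

def snr_db(atten_db: float) -> float:
    g = 10 ** (-atten_db / 20)
    # attenuate, round onto the 16-bit grid, then boost back to full level
    out = [round(s * g * 32767) / 32767 / g for s in full]
    err = sum((o - s) ** 2 for o, s in zip(out, full)) / N
    sig = sum(s * s for s in full) / N
    return 10 * math.log10(sig / err)

print(round(snr_db(0)))   # plain 16-bit: in the ballpark of 98 dB
print(round(snr_db(36)))  # ~36 dB digital attenuation: roughly 36 dB worse
```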

Oh, I definitely agree there's a lot of fuss about nothing going around.  This discussion (for me), like many others, is my way of trying to weed out the truth from among all the placebos and biases.
Just to explain where I'm coming from (and in no way am I declaring war on those who object; everything henceforth is my own very humble and very uninformed opinion): there's no (audible) difference between 16-bit and 24-bit audio.  But then I got to reading and found out that reducing volume digitally restricts the bits going through, thus degrading sound quality.  That would explain using 24-bit audio ("fake" or otherwise): you could then reduce volume digitally to restrict the level going into the O2 and allow use with more sensitive headphones, without opening it up and messing about with its interior, while still maintaining high-quality sound.
In the video, the biggest difference I heard was a lot of noise, so I wondered if that's the ONLY difference when degrading bit depth or if there was more to it, which is why I asked, and gave bass texture as an example of something else that might be affected.  You say all frequencies would be affected the same, so you're saying frequency response DOES suffer from bit depth degradation?  Say I reduce the digital volume by, like, 60%; what effect do you think that would have on sound quality?  Again, if one (or rather, I) wanted to use his O2 with headphones such as the M-100, using 2.5x gain, and had to reduce the digital volume in order to have a functioning gain switch that wasn't too sensitive, would musical fidelity suffer?  How so?
 
I reduced foobar's volume and my laptop's volume to very low levels and then, using high gain on the O2, brought it up to my regular listening level.  Why?  To test my theory about noise.  I could hear no noise.  Perhaps it is auditory masking, perhaps it's not, but the actual effect on fidelity will take a long time to determine, and will be hard to test without proper A/B comparisons, which is why I asked here what would be affected by stripping bits.
 
And thank you again for giving such in-depth explanations, much obliged =]
 
Feb 18, 2013 at 10:16 PM Post #958 of 5,671
Quote:
 
To be polite, I will answer your question and say I'm using the DT 770 (250 ohm).  But let me explain:
My O2 has the standard gain options (2.5x and 6.5x).  Then we got to the subject of using 2.5x gain with sensitive headphones (specifically the M-100: 32 ohms, 105 dB sensitivity), and someone mentioned that 1x gain might be more suitable.  Which made me ask how 1x gain is even possible, because when you multiply by one, you keep the original value (obviously).  That's what I'm trying to understand: why does 1x gain make a difference?  Obviously it's "adding" something.  What is it?  How is it different from, say, 2.5x gain?

 
Oh, I totally misread you then.  My first reply is what's relevant then.
 
In the real world, if you're connecting headphones (speakers) to an amp and asking that amp to drive them, the amp will not do a perfect job.  There will be some deviation from the original (the input to the amp).  The deviation depends on the electronic design of the amp, the headphones, the output level, and the nature of the input.  A so-called high-fidelity amp will make the signal the headphones receive look very much like the input to the amp, no matter what the other factors are.  Other amps may behave differently: they may try and fail to maintain an output that looks like the input.
 
Some people spend megabucks to try to get amps that deviate in certain specific ways, hence a different sound.
 
By using an amp with 1x gain, you are still using that amp's electronics to run the headphones, so it's not pointless.  You need some kind of amp, or else you don't get sound.  Some are just better than others.  If you use some device not designed to drive headphones at all, you could get some terrible garbled junk out.
 
So internally, there is some step where you are multiplying by 1, but that's not all that's going on.
 
 

You say all frequencies would be affected the same, so you're saying frequency response DOES suffer from bit depth degradation?  Say I reduce the digital volume by, like, 60%; what effect do you think that would have on sound quality?  Again, if one (or rather, I) wanted to use his O2 with headphones such as the M-100, using 2.5x gain, and had to reduce the digital volume in order to have a functioning gain switch that wasn't too sensitive, would musical fidelity suffer?  How so?


 
Frequency response is about the relative amplitudes of signals at the output of the device at different frequencies (technically it should probably include phase as well as amplitude, but...).  That's unrelated.  Be careful when you talk about reducing volume by a certain amount in software.  Reducing by 60% is turning down about 8 dB in foobar.  But if you set your computer's volume to 40 / 100, it's probably reducing it by more than 60%; those sliders rarely scale linearly.
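The percent-to-decibels conversion above is just 20·log10 of the remaining fraction, e.g.:

```python
# A linear volume fraction maps to decibels as 20*log10(fraction),
# so a 60% reduction (40% remaining) is about -8 dB.
import math

def fraction_to_db(remaining: float) -> float:
    """Convert the remaining linear volume fraction to dB."""
    return 20 * math.log10(remaining)

print(round(fraction_to_db(0.40), 1))  # 60% reduction -> -8.0 (dB)
print(round(fraction_to_db(0.50), 1))  # 50% reduction -> -6.0 (dB)
```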
 
Fidelity would suffer in the sense that you might reduce the SNR, but the noise is probably still inaudible.
 
 
 
I reduced foobar's volume and my laptop's volume to very low levels and then, using high gain on the O2, brought it up to my regular listening level.  Why?  To test my theory about noise.  I could hear no noise.  Perhaps it is auditory masking, perhaps it's not, but the actual effect on fidelity will take a long time to determine, and will be hard to test without proper A/B comparisons, which is why I asked here what would be affected by stripping bits.



 
You might need some more-sensitive headphones or IEMs and reduce the volume even more in software, maybe more than 6.5x gain too.  Also, play some music or test track that doesn't have noise in the recording itself.  Then you might hear the noise from the DAC without totally blasting your ears off when the actual sounds start playing.  DT 770 250 ohms is too insensitive for this stuff to be a big deal.
 
Feb 19, 2013 at 12:56 AM Post #960 of 5,671
Quote:
The 6.5x gain on my O2 is a joke. It's like an "add massive amounts of distortion" button. Is this normal?

 
Yes.  Or at least, it's not unusual.
 
 
As discussed before, this is a result of putting the volume control after the gain stage.  You can't control the input voltage with the volume control.  If the input multiplied by the gain is too high, then the gain stage op amp cannot handle outputting anything that high and will clip.  Lots of clipping -> massive amounts of distortion.
 
Barring some trickery, electronics can't handle inputs greater than their positive supply rail or less than their negative supply rail, and in practice they need some margin inside those limits (how much depends on the design).  The O2's power supply rails are at +12 V and -12 V, for a range of 24 V.  The gain stage can handle roughly 7 V rms (sine wave), which is almost 20 V peak-to-peak.  Higher than that, and the signal gets too close to the supply rails, so it clips.
 
So any input greater than 20 V / 6.5 = 3.08 V peak-to-peak (1.09 V rms for a nominal sine wave) will be clipped at 6.5x gain.  Or technically, anything above roughly +1.5 V on the positive side or below roughly -1.5 V on the negative side.
 
Many computer outputs and DACs have a full-scale output level greater than 1.09 V rms.  In fact, the Redbook standard is 2 V, and the ODAC follows it.  Lots of audiophile gear goes even higher, maybe because (1) you can claim a higher SNR more easily if you're allowed to increase the signal level, and (2) in an A/B comparison, louder usually sounds better, so it's an arms race to juice it up.  Why 6.5x gain as a default?  Presumably the designer envisioned people using phones, portable media players, etc. as sources, and these often output 1 V or less.
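The clipping arithmetic above can be sketched in a few lines, taking the approximately 20 V peak-to-peak usable swing as a given:

```python
# Given ~20 Vpp of usable swing on +/-12 V rails, find the largest
# sine-wave input each gain setting can take before the gain stage clips.
import math

USABLE_VPP = 20.0  # approximate usable swing (from the ~7 Vrms spec above)

def max_input_vrms(gain: float) -> float:
    """Largest sine input (in Vrms) that stays below clipping at this gain."""
    vpp_in = USABLE_VPP / gain           # max input peak-to-peak
    return vpp_in / (2 * math.sqrt(2))   # sine wave: Vrms = Vpp / (2*sqrt(2))

print(round(max_input_vrms(6.5), 2))  # 1.09 Vrms -> a 2 Vrms ODAC will clip
print(round(max_input_vrms(2.5), 2))  # 2.83 Vrms -> a 2 Vrms source is fine
```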
 
