Spectrum Analysis of Headphone Amps -- checking linearity
Sep 11, 2011 at 1:59 PM Thread Starter Post #1 of 13

cws5

New Head-Fier
Joined
Jan 14, 2005
Posts
47
Likes
0
I've been curious about analyzing my headphone amps for linearity. This is a good way to begin to understand a large part of the reason why some amps sound better to us than others.
 
To those who have done this: what is needed besides spectrum analyzer software? In particular, we need a means to run the amp's headphone output back into the computer so that the software can access it. So how do we get the amp's output back to line level for proper input to the computer's input jack? And if the amp has a preamp out, should we use that, or might we then be analyzing the headphone-out signal plus the effect of some additional components on it?
 
Additional Info: I have a recently-manufactured Intel Mac. I'm now in the process of searching the web for spectrum analysis software.
 
Sep 11, 2011 at 5:20 PM Post #3 of 13
You need a pretty decent ADC for this as well.
 
Any kind of sound card recording has some limitations though.  As you were alluding to earlier, you're limited by the input level of the line in.  For some reasonable comparisons you could just set the output and amp at a level such that the output of the amp does not clip the line in.  But you can't measure the full output of the amp that way.
 
The most important part is to actually load the amp with a reasonable headphone load--either actual headphones or maybe just resistors.  Then measure the voltage at the output of the amp via a Y-splitter or whatnot, while it's driving that realistic headphone load.  Amp performance is much better driving a high impedance line in rather than a headphone-level impedance, so measuring without a load would not be a fair test.
 
 
Sep 11, 2011 at 5:31 PM Post #4 of 13
Hi digger945. Given our previous discussion of the DV336 problem, it looks like we're beginning a healthy head-fi conversation. That's surely a good thing.
 
I want to test the 336, but with the express purpose of comparing its handling of an input signal with that of my MG Head Mark II OTL.
 
I prefer the MG Head by a fairly wide margin. The problems with the 336 are these: anemic low end and what seems to be a peak around 4K. The peak makes things glassy, and the glassiness is exacerbated by the anemic low end, since one tends to want to increase volume to increase bass response. The 336 shares those problems with a DV332 I owned previously.
 
The low end problem would seem to be caused by the DV designer's decision to fit an output capacitance of merely 30uf. All the DV amps I've seen the insides of are fitted that way. That choice seems crazy, given that on most accounts a corner that high up in the frequency spectrum affects the upper mids. And of course most tube headphone amps are fitted with 100uf output capacitors, presumably partly for the reason I've offered (but also to better handle low impedance phones).
 
The cause of the 4k peak is less obvious. Perhaps the MG Head de-emphasizes the 4k area. I don't know. So I'd like to find out.
 
Another difference between the two amps is the circuit: the DV336 is a plate follower directly coupled to a cathode follower, while the MG Head is a plate follower capacitor-coupled to a second plate follower. And to make the matter of comparing them even more complex, the MG Head's output tube is choke loaded (by the transformers installed in the first version of the amp -- Mark I -- which was designed without the output transformerless function).
 
In learning about the difference between cathode follower and plate follower output stages, I've come across the web pages of a few DIYers who make it a priority to convert the output stage of any cathode follower amp they purchase to a plate follower. I am just beginning to understand some of the differences. One is that cathode followers (at least those without a capacitor-bypassed resistor) have 100% local feedback, which is why they are merely buffer stages and not amplifier stages; plate followers do not have 100% local feedback. The question of why local feedback might be a bad thing is a complicated one I've not yet understood.
 
Perhaps you can enlighten me about some of this stuff. Or at least we can try little by little to learn something about it together via casual discussion.
 
 
Sep 11, 2011 at 6:07 PM Post #5 of 13
 
Quote:
You need a pretty decent ADC for this as well.
 
Any kind of sound card recording has some limitations though.  As you were alluding to earlier, you're limited by the input level of the line in.  For some reasonable comparisons you could just set the output and amp at a level such that the output of the amp does not clip the line in.  But you can't measure the full output of the amp that way.
 
Most important part is to actually load up the amp with a reasonable headphones load--either actual headphones or maybe just resistors.  Then measure the voltage at the output of the amp via a Y-splitter or whatnot, while it's driving that realistic headphones load.  Amp performance is much better driving a high impedance line in rather than headphones-level impedances, so that would not be a fair test.

Hi mikeaj. I'm unfamiliar with the acronym ADC; what does it stand for?
 
I understand what you say about simply setting the amp's output at a level such that clipping of the line in doesn't occur.
 
When you say "measure the voltage at the output via a Y-splitter," do you mean a Y-splitter on a 1/4" jack plugged into the headphone out, with one of the splits going to the spectrum analyzer and another to the headphones?
 
When you say "that would not be a fair test," do you intend to refer to the Y-splitter setup? If so, then I'm unsure what a "high impedance line in" would amount to. That is, aren't we going to measure the output while feeding the amp in with something like a white noise sine wave?
 
Sep 11, 2011 at 6:49 PM Post #6 of 13
 
Quote:
The low end problem would seem to be caused by the DV designer's decision to fit an output capacitance of merely 30uf. All the DV amps I've seen the insides of are fitted that way. Their doing this is crazy given that a corner that high up in the frequency spectrum on most accounts affects the upper mids. And of course most tube headphone amps are fitted with 100uf output capacitors, presumably partly for the reason I've offered (but also to better handle low impedance phones).
 


The decision to use a ~30uf cap actually makes sense in a round-about sort of way. 
 
Let's start: as caps get bigger (more capacitance), they generally sound worse. It's doubly true when you're using "who cares how it works as long as it survives voltage and measures right" parts. 
 
Now that that's out there, since we are talking about measurements & calculations before sound: simple cathode followers (like the Darkvoice) absolutely suck for driving low impedance loads. I happen to like the way they sound most of the time, but if we only care about the distortion measurements, simple cathode followers don't work with low impedance headphones. In real life you can tell something is different at a quick listen, but I don't think it's bad. 
 
So for the last part, crunch the numbers where the amp works better: with a 300ohm headphone & 33uf you have a -3db point of ~16hz, which is on the slightly high side compared to the standard 20hz. OTOH, when you consider that VERY little music actually goes below 50hz (and that sub-50hz is really hard to hear on headphones even if the amp were flat to DC), you still have a bit of breathing room. With 600ohm or 2Kohm headphones the -3db frequency is low enough to be meaningless. In a practical world, the choice to save a few bucks and use a slightly smaller part represents a very reasonable compromise for an amp like the DV.
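Those corner figures are easy to verify. Here's a minimal Python sketch, assuming a simple first-order high-pass formed by the output coupling cap and a purely resistive headphone load (real headphone impedance isn't flat, so these are ballpark numbers):

```python
# -3 dB corner of the RC high-pass formed by the output cap and
# the headphone's nominal impedance: f = 1 / (2 * pi * R * C)
from math import pi

def corner_hz(r_ohms, c_farads):
    """-3 dB frequency of a first-order RC high-pass."""
    return 1.0 / (2 * pi * r_ohms * c_farads)

C = 33e-6  # 33 uF output cap, as in the Darkvoice
for r in (300, 600, 2000):
    print(f"{r:>4} ohm load: corner at {corner_hz(r, C):5.1f} Hz")
```

Running it gives roughly 16 Hz at 300 ohm, 8 Hz at 600 ohm, and 2.4 Hz at 2K ohm, matching the figures above.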
 
Sep 11, 2011 at 7:37 PM Post #7 of 13


Quote:
 

The decision to use a ~30uf cap actually makes sense in a round-about sort of way. 
 
Let's start: as caps get bigger (more capacitance), they generally sound worse. It's doubly true when you're using "who cares how it works as long as it survives voltage and measures right" parts. 
 
Now that that's out there, since we are talking about measurements & calculations before sound: simple cathode followers (like the Darkvoice) absolutely suck for driving low impedance loads. I happen to like the way they sound most of the time, but if we only care about the distortion measurements, simple cathode followers don't work with low impedance headphones. In real life you can tell something is different at a quick listen, but I don't think it's bad. 
 
So for the last part, crunch the numbers where the amp works better: with a 300ohm headphone & 33uf you have a -3db point of ~16hz, which is on the slightly high side compared to the standard 20hz. OTOH, when you consider that VERY little music actually goes below 50hz (and that sub-50hz is really hard to hear on headphones even if the amp were flat to DC), you still have a bit of breathing room. With 600ohm or 2Kohm headphones the -3db frequency is low enough to be meaningless. In a practical world, the choice to save a few bucks and use a slightly smaller part represents a very reasonable compromise for an amp like the DV.


I appreciate your point, nikongod, about the relevance to design of making cost-efficient tradeoffs. That is almost surely the reason the DV designers chose 30uf caps rather than ones with a greater value.
 
As I suggested in my comment about the 30uf caps, there's by no means a consensus as to whether a -3db corner at frequencies where music happens is sufficient. For example, here's a blurb from the V-Cap site:
 
"The real reason we don't select 20 Hz [for the corner frequency] is because near the -3db point, there may be some phase anomalies introduced into the signal, and therefore we want to operate with a buffer from this ragged edge. We recommend using a -3db point of 1/10th of your desired low frequency response. For human beings with audio systems manufactured on Earth, that would be 1/10th of 20 Hz, or 2 Hz."
 
Of course the V-cap people aren't audiophile gods to whom we must defer, but it's worth considering that inputting the value of 30uf for a 300ohm headphone into their calculator yields a value of 176Hz for what they refer to as "optimal low frequency response."
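For what it's worth, the arithmetic behind that calculator result is easy to reproduce. A quick Python sketch, assuming (as their blurb suggests) that the "optimal low frequency response" figure is just ten times the first-order RC corner:

```python
# V-Cap "1/10th rule": the -3 dB corner should sit at one tenth of the
# lowest frequency you want reproduced cleanly, so the "optimal" low
# end is ten times the corner.  30 uF into a 300 ohm (HD600-class) load:
from math import pi

C = 30e-6   # 30 uF output cap, as fitted in the DV336
R = 300     # nominal headphone impedance
f3db = 1.0 / (2 * pi * R * C)   # first-order high-pass corner, ~17.7 Hz
optimal = 10 * f3db             # V-Cap's recommended margin
print(f"-3 dB corner: {f3db:.1f} Hz; 'optimal' low end: {optimal:.0f} Hz")
```

That lands within a hertz of the 176Hz the calculator reports.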
 
The salient question, I think, is this: if the 30uf caps aren't the cause of the DV336's poor low-frequency response with high-impedance phones like the HD600s, then what is? Evidence that the caps may be the cause (and I'm not arguing that they are, as I'm unsure) comes from the number of head-fiers who have mentioned the poor low-frequency response and said that the most significant sound-altering mod they've made to the 336 is swapping the caps for larger ones.
 
 
Sep 11, 2011 at 8:13 PM Post #8 of 13
Quote:
Hi mikeaj. I'm unfamiliar with the acronym ADC; what does it stand for?
 
I understand what you say about simply setting the amp's output at a level such that clipping of the line in doesn't occur.
 
When you say "measure the voltage at the output via a Y-splitter," do you mean a Y-splitter on a 1/4" jack plugged into the headphone out, with one of the splits going to the spectrum analyzer and another to the headphones?
 
When you say "that would not be a fair test," do you intend to refer to the Y-splitter setup? If so, then I'm unsure what a "high impedance line in" would amount to. That is, aren't we going to measure the output while feeding the amp in with something like a white noise sine wave?

 
Sorry for being lazy and not multi-quoting below.  Maybe you know all this already; just making sure you've thought of the basics.

ADC is analog to digital converter, opposite of a DAC.  I just mean that you'll need an accurate output as well as an accurate input.  Should be a given.
 
Yes, I mean to split the output from the headphone jack such that one side is connected to a load and the other side is connected to the interface input for measurement.  It doesn't need to be a Y-splitter, but that seems like the most convenient way.  The key point is to measure the amp while it's driving a non-trivial load, since that's harder to do and better represents what driving headphones is like.  With a lower-impedance load to drive, the amp has to source a lot more current at a given output level.  If you just hooked the output straight to the line in, all the amp would be doing is driving the line in, which has a high impedance and thus doesn't take much current (and so is easy, and should show higher linearity than the amp has into headphones).  It's probably better to run the results for different impedances.  You should see more distortion when the amp is driving a lower impedance load.
 
Not sure what you mean by "white noise sine wave."  Maybe you mean white noise or sine wave--or probably just a sine wave or maybe a combination of sine waves for an IMD test?  White noise has equal power in all frequencies, so it shouldn't be looking like any sine wave.
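For anyone following along, here's a rough Python sketch of what the spectrum-analysis software boils down to with a single sine test tone: FFT the captured samples and compare harmonic power to fundamental power (THD). The small cubic nonlinearity standing in for the amp is purely illustrative; with a real rig you'd FFT the recorded line-in capture instead.

```python
# THD of a sine tone from its spectrum.  The "captured" signal here is
# simulated: a 1 kHz sine plus a small cubic term standing in for amp
# distortion (which adds a 3rd harmonic).  Uses numpy only.
import numpy as np

fs = 48000          # sample rate, Hz
f0 = 1000           # test-tone fundamental, Hz
n = fs              # one second of samples -> 1 Hz per FFT bin
t = np.arange(n) / fs

clean = np.sin(2 * np.pi * f0 * t)
captured = clean + 0.01 * clean**3      # stand-in for amp nonlinearity

spectrum = np.abs(np.fft.rfft(captured)) / n
fund = spectrum[f0]                     # bin index == frequency in Hz here
harmonics = [spectrum[k * f0] for k in range(2, 6)]
thd = np.sqrt(sum(h**2 for h in harmonics)) / fund
print(f"THD: {100 * thd:.3f} %")
```

Note the tone frequency divides the sample rate evenly, so there's no spectral leakage; with arbitrary frequencies you'd want a window function before the FFT.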
 
Sep 11, 2011 at 8:17 PM Post #9 of 13
If you're testing the frequency spectrum with a dummy load, how would you account for the impedance curves of many headphones? Some have relatively flat curves, but others have big swings. Is it possible to use particular headphones as a load while measuring?
 
Sep 11, 2011 at 8:52 PM Post #10 of 13


Quote:
Not sure what you mean by "white noise sine wave."  Maybe you mean white noise or sine wave--or probably just a sine wave or maybe a combination of sine waves for an IMD test?  White noise has equal power in all frequencies, so it shouldn't be looking like any sine wave.

 
Absolutely right, of course. My mistake, mikeaj. White noise is not a sine wave, since it has equal power in all frequencies.
 
Quote:
 
ADC is analog to digital converter, opposite of a DAC.  I just mean that you'll need an accurate output as well as an accurate input.  Should be a given.

 
Re/ADC: got it. (1) With regard to output, I was thinking that white noise generated by software on a laptop would be fed via a DAC to the amp. I'm assuming that nearly any decent DAC will be linear enough for a test at the level of precision I'm concerned with. Perhaps that's naive; let me know. (2) I hadn't at all considered the ADC. I had tacitly assumed that feeding the audio input of my Mac would be good enough. God knows, though, how linear the laptop's ADC is. Please let me know your thoughts on this. It might help to remember that the goal of all this is to compare two headphone amps. That is, if there's non-linearity in components other than the amps, then readings of both amps will be equally affected, which should negate, or at least mitigate, its effect on the comparison.
 
 
Quote:
It's probably better to run the results for different impedances.

 
I'm concerned with the two amps' handling of a signal with my own headphones, the HD600s. (So I figure the Y-splitter should be fine.) Am I understanding matters correctly, or is there some reason I'm unaware of for running the test with different load impedances?
 
Sep 12, 2011 at 8:22 PM Post #11 of 13


Quote:
If you're testing the frequency spectrum with a dummy load, how would you account for the impedance curves of many headphones? Some have relatively flat curves, but others have big swings. Is it possible to use particular headphones as a load while measuring?


...use a variable load, break the spectrum analysis into sections and approximate the load at each step?
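One way to sketch that approach in software rather than with a physical variable load: model the headphone's impedance curve and compute the cap-coupled output response against it frequency by frequency. The impedance model below is purely illustrative (a flat 300 ohm with a made-up low-frequency resonance bump), not a measured curve.

```python
# Response of a series output cap driving a frequency-dependent load.
# The load is a voltage divider: gain = Z_load / (Z_load + Z_cap).
import numpy as np

C = 30e-6  # output coupling cap

def z_headphone(f):
    """Toy impedance curve: 300 ohm nominal plus a resonance bump ~100 Hz."""
    return 300 + 200 / (1 + ((f - 100) / 50) ** 2)

def response_db(f):
    """Gain (dB) of the series cap into z_headphone(f), ideal source assumed."""
    zc = 1 / (2j * np.pi * f * C)        # cap impedance (complex)
    h = z_headphone(f) / (z_headphone(f) + zc)
    return 20 * np.log10(np.abs(h))

for f in (20, 50, 100, 1000):
    print(f"{f:>5} Hz: {response_db(f):6.2f} dB")
```

The bump actually *reduces* the bass rolloff around the resonance, since a higher impedance there pushes the local corner down -- which is why a fixed resistive dummy load can misstate what happens with real headphones.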
 
 
Sep 12, 2011 at 9:30 PM Post #13 of 13
