tomb
Member of the Trade: Beezar.com
Joined: Mar 1, 2006 · Posts: 10,891 · Likes: 1,066
Wow! Thank you for those very kind remarks! Yeah, 0.25 is what will allow maximum output, but maximum output is rarely what we're after, so it may not matter much.
A possible issue with using a higher voltage to feed the amp is that you might reach high distortion sooner (at a lower listening level into a given load/headphone). I phrase this so conditionally because it really is conditional:
1/ Tube amps lean into distortion in a softer, more progressive way than SS ones.
2/ When they do, for many tube amp users, that's when they finally get the tube sound they like.
3/ The headphone is an important variable: it sets both the impedance the amp sees as a load and the voltage needed for your preferred listening level. As a rule of thumb, a lower-impedance, lower-sensitivity headphone is more likely to cause distortion/clipping issues.
4/ The amp design also has a big say in this.
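To make point 3/ concrete, here is a minimal sketch of the standard sensitivity arithmetic. The specific headphones, sensitivity figures, and SPL target below are hypothetical examples, not anything from the posts above:

```python
import math

def required_voltage(spl_target_db, sensitivity_db_mw, impedance_ohm):
    """Voltage (V RMS) needed to hit spl_target_db with a headphone rated
    sensitivity_db_mw (dB SPL at 1 mW) into impedance_ohm."""
    # Power needed in mW: every +10 dB SPL requires 10x the power
    p_mw = 10 ** ((spl_target_db - sensitivity_db_mw) / 10)
    # P = V^2 / R  ->  V = sqrt(P * R), with P converted to watts
    return math.sqrt(p_mw / 1000 * impedance_ohm)

# Hypothetical 110 dB SPL peak target:
# a sensitive 300-ohm dynamic vs an insensitive 32-ohm planar-style load
print(round(required_voltage(110, 97, 300), 2))  # ~2.45 V
print(round(required_voltage(110, 83, 32), 2))   # ~4.0 V, plus far more current
```

The low-impedance, low-sensitivity case needs more voltage *and* much more current, which is why it is the one more likely to push an amp toward clipping.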
You certainly can try and check if you notice differences. For example, with a lower input level you will need to apply more gain on the amp to reach the same listening level, and that could raise the background noise. Or, the other way around, you might find that the distortion character (texture, warmth, or however that manifests itself) is different. If you struggle to notice a difference, then most likely you're nowhere near the max of what your amp can do and you're not clipping the signal, so whatever you use is likely still working in a pretty nominal way.
About the life expectancy of the tubes or the amp itself, and possible damage, I have no idea. Maybe someone like @tomb has some advice on this? Not sure how expert he is, as he discusses stuff I usually know nothing about, but I know he's one willing to help a fellow Head-Fier in need anytime he sees one on the side of the audio road in the help section or DIY areas.
I can't speak to the details of this discussion because there's not really enough specific info available. I can state a few generalities, though:
- Equation above - That S, R, I equation up there fails dimensional analysis, because the ohms have no cancelling term. It could be that the poster simply left it off one of the terms, though.
- Gain vs Attenuation - Technically speaking, "gain" is fixed by the amplifier circuit design: it is the ratio of the output voltage or current to the input level. The volume control has no effect on this - it only attenuates the incoming signal, and the amp still amplifies whatever reaches it by the same "gain." Of course, this is just semantics in the context of this conversation. We all know the volume control will reduce the overall output of the amplifier.
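The distinction above can be sketched in a few lines. The 5x gain figure is a hypothetical value, not any particular amp's spec:

```python
def amp_output(v_in, pot_fraction, gain):
    """The volume pot attenuates the input; the amp then applies its fixed gain."""
    v_attenuated = v_in * pot_fraction  # attenuation: 0.0 (muted) .. 1.0 (wide open)
    return v_attenuated * gain          # gain is set by the circuit, not the knob

# Hypothetical amp with a fixed voltage gain of 5x (~14 dB):
print(amp_output(2.0, 1.0, 5))  # knob wide open -> 10.0 V
print(amp_output(2.0, 0.5, 5))  # knob at half   -> 5.0 V
```

Turning the knob changes `pot_fraction`, never `gain`; the output drops only because less signal reaches the amplifying stage.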
- Signal Strength vs Amplifier Optimization - The question of optimizing amplifier performance relative to signal input is a good one. It's answered in almost every case by the industry standard of 2V RMS output for signal sources, along with a volume control. What is in play here is that if the signal is too small, the amplifier amplifies some of its own noise floor along with the signal. Conversely, if the signal is too strong, the amplifier can clip. Think about phono preamps. There the signal is so small a pre-amp is absolutely required. Why? Because an amp designed for a 2V RMS input will not only not be loud enough, it will be amplifying mostly noise.
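The phono comparison is easy to put in numbers. The 5 mV cartridge output below is a typical ballpark figure for a moving-magnet cartridge, used here as an assumed example:

```python
import math

def gain_db(v_out, v_in):
    """Voltage gain in dB needed to bring v_in up to v_out."""
    return 20 * math.log10(v_out / v_in)

# A line-level source already sits at the ~2 V RMS standard:
print(round(gain_db(2.0, 2.0), 1))    # 0.0 dB -- no extra gain needed
# A hypothetical moving-magnet cartridge outputs around 5 mV:
print(round(gain_db(2.0, 0.005), 1))  # ~52 dB of gain required first
```

Roughly 52 dB is a 400x voltage boost; any noise at the amplifier's input gets the same treatment, which is why the dedicated low-noise phono stage exists.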
- Where is the Volume Control? - For a signal too strong, you might say, "But the volume control will prevent clipping by attenuating the input signal." Well, yes and no. It depends on where in the circuit you place the volume control. NWAVGUY notably placed the volume control of his O2 amplifier between the gain stage and the output stage. This meant that if a source produced a signal too strong, the O2 would clip - and it didn't matter where you adjusted the volume knob! Thankfully, that arrangement was fairly unique to the O2 and you probably won't find it anywhere else. That said, there are audio companies who like to "play" with the signal output, letting it go a bit higher than 2V RMS. This is an old trick, used back in the day when auditioning speakers: what sounds loudest at first hearing sounds the best.
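The effect of pot placement can be sketched with a crude hard-clipping model. All the numbers (3 V input, 5x gain, 12 V rails, pot at 25%) are hypothetical, and this is a simplification of the O2 topology described above, not its actual schematic:

```python
def clip(v, rail):
    """Crude hard clip at the supply rail."""
    return max(-rail, min(rail, v))

def vol_before_gain(v_in, pot, gain, rail):
    """Conventional layout: attenuate first, then amplify."""
    return clip(v_in * pot * gain, rail)

def vol_after_gain(v_in, pot, gain, rail):
    """O2-style layout per the post: gain stage first, pot after it."""
    return clip(v_in * gain, rail) * pot

# Hypothetical numbers: 3 V peak input, 5x gain, 12 V rails, pot at 25%
print(vol_before_gain(3.0, 0.25, 5, 12))  # 3.75 V, clean
print(vol_after_gain(3.0, 0.25, 5, 12))   # 3.0 V, but it's a clipped waveform
```

In the second layout the gain stage hits the 12 V rail (3 V x 5 = 15 V) before the pot ever sees the signal, so turning the knob down only attenuates an already-clipped waveform.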
- Tubes and Distortion - Tubes are OK driven to distortion ... at the cost of a shorter life. It's how guitar amplifiers work: you drive the tubes into distortion to get all those great sound effects. They certainly don't fail right away, but you'd have to ask a guitar player how long they last. Generally speaking, a stronger input signal is not going to damage a tube, even if it clips. You'd have to run the tube that way for a long time, and assuming it ran hotter than it should while doing so, it might have a shorter life. But then, you'd be adjusting the volume knob so it wouldn't do that, right?
- Guitar Amps are Different - Guitar amps drive a tube to distortion by using a much higher voltage on the plates. This is much more severe than the distortion from running a tube into clipping. If anything, most tube headphone amplifiers run with lower voltages on the plates, making them safer to operate and assemble. Famous DIY designs by Pete Millett and Alex Cavalli were designed this way: 100, 150, 200V tubes running on 24 or 48V. The tubes become less linear in frequency response, but traditional distortion is still kept low - and they are safer to build.