Warning click bait: I hate to EQ

Jun 7, 2023 at 5:32 PM Post #31 of 110
Anything in the mix presented in mono should do that, regardless of crossfeed. Vocals are often mixed mono and appear inside your skull with headphones.

Ah, you are right, sorry, it was badly explained. What I mean is that these left/right sounds actually move "into" the phantom middle (where the soundstage is a little smaller but more cohesive), which overall seems like a better soundstage for headphones, because the transition from "middle" to right/left sounds kinda strange without crossfeed
 
Jun 8, 2023 at 5:08 AM Post #32 of 110
Hi all -

I'm relatively new here but I've built a nice little collection of stuff and I want to share what I guess would be considered a semi-controversial observation (I think?):

I hate to EQ.

Every time I apply the EQ presets that are custom-tailored for certain headphones or IEMs, the overall sound experience sounds thinner/less warm/more metallic. The decibel level clearly goes down, so maybe some of the difference is attributable to that? On Mac I'm using SoundSource, and a Qudelix 5K on portable.

I'm not posting this here to pick a fight with EQ-lovers. I'm just curious if I'm doing something wrong?

P.S. I will say there is one case where EQ really felt like it was needed: UM Mest MKII. It was a muddled mess unless I put EQ on. The rest of my stuff (u12t and Arya) felt like they were better without EQ.

Any other insights/tricks I'm missing?

Rich
there are a couple ways to think about how to deal with the audio signal that gives you the music you listen to.

1) leave it alone, try to get the most transparent gear you can, just enjoy the artist/producer/engineers' vision of what they've worked hard to give you.
2) process it, essentially becoming an additional mastering engineer. any processing fits in this category, and eq is a popular one.

if you're going to process music to make it more in line with what you like, i would learn some about the technical aspects of it, for example what a shelf eq is doing or what the controls of a parametric eq do and what changing the bandwidth around the center frequency does in terms of zooming in on a particular frequency.
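to make those parametric eq controls concrete, here's a minimal python sketch of one "bell"/peaking band using the well-known Audio EQ Cookbook (RBJ) biquad formulas. the sample rate, frequency, gain and Q below are just illustrative values, not a recommendation:

```python
# One band of a parametric EQ as a "peaking" biquad filter
# (Audio EQ Cookbook formulas). All settings here are illustrative.
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Return normalized biquad coefficients (b0, b1, b2, a1, a2)."""
    a = 10 ** (gain_db / 40)        # amplitude: sqrt of the linear gain
    w0 = 2 * math.pi * f0 / fs      # center frequency in radians/sample
    alpha = math.sin(w0) / (2 * q)  # bandwidth term derived from Q
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def process(samples, coeffs):
    """Run samples through the biquad (direct form I)."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for s in samples:
        y = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = s, x1, y, y1
        out.append(y)
    return out
```

a nice property of this filter: the boost is exactly gain_db at f0 and falls back to 0 dB far away from it, which is what the "zooming in" with the bandwidth/Q control means in practice.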

but even more importantly, i'd learn to connect what works for you about a track to frequencies. this is what great recording engineers do. sometimes some of what a microphone picked up might not be helpful to getting the music to sound the way you want it. for example, they'll - say - solo the snare drum and find resonant frequencies that hurt what the music is trying to do and 'notch' them out using a parametric eq. or they'll gate the snare signal to remove the signal from other sources when the snare drum has stopped sounding. or they'll find other frequencies where adding more makes the track more musically effective, for example high frequencies are commonly boosted in vocals.

when you're eq-ing a whole mix as a consumer, you obviously don't have access to individual tracks, but you do have access to frequency ranges, and sometimes boosting or dipping certain frequency ranges can make a musical track far more engaging. mastering engineers do this every day. a crude version of this process everyone knows about is "turning up the bass", which is just boosting low frequencies with a shelf eq. a more surgical version is running your song through parametric eqs and picking the frequency, the q or bandwidth (how much do you want to zoom tightly in around that frequency) and choosing how much to boost or dip it. what you need to do to get the music to sound its most engaging will differ on every track and obviously depend on your taste. doing this is jumping through hoops, but it does give you a lot of power as a listener and if you experiment a lot finding what you like, can really improve your listening experience.
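the "turning up the bass" shelf can be sketched the same way. this uses the Audio EQ Cookbook low-shelf formulas with shelf slope S = 1; the corner frequency and gain are illustrative:

```python
# "Turning up the bass" as a low-shelf biquad (Audio EQ Cookbook,
# shelf slope S = 1). gain_db applies fully below the corner frequency
# and tapers to 0 dB above it. Values are illustrative.
import math

def low_shelf_biquad(fs, f0, gain_db):
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cosw, sinw = math.cos(w0), math.sin(w0)
    alpha = sinw / math.sqrt(2)          # shelf slope S = 1
    sq = 2 * math.sqrt(a) * alpha
    b0 = a * ((a + 1) - (a - 1) * cosw + sq)
    b1 = 2 * a * ((a - 1) - (a + 1) * cosw)
    b2 = a * ((a + 1) - (a - 1) * cosw - sq)
    a0 = (a + 1) + (a - 1) * cosw + sq
    a1 = -2 * ((a - 1) + (a + 1) * cosw)
    a2 = (a + 1) + (a - 1) * cosw - sq
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)
```

unlike the bell above, a shelf boosts everything below (or above) the corner by the same amount, which is why it's the natural shape for a broad bass or treble adjustment.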

good luck -

ps any signal going through an eq or other processor will be degraded, but if what comes out sounds better to you than what went in, have at it.
 
Last edited:
Jun 8, 2023 at 5:58 AM Post #33 of 110
Was that written using AI?
 
Jun 8, 2023 at 6:09 AM Post #34 of 110
yes miniature soundstage is a good wording too

what really is noticeable with crossfeed is that it actually gives you a "phantom-middle" like speakers do, but instead of in front of you, the phantom point is inside your head
In my case the center of the miniature soundstage can be a little bit in front of my face, but it requires the whole stereo image and spatial cues. The quality and nature of the recording are important.

Typically crossfeeders use X-topology, meaning filtered/delayed versions of the channels are fed independently to the "other side." This arrangement doesn't handle mono sound in a neutral/unaltered way: we get mono colorization just as with speakers, except the colorization doesn't change when we move our head from side to side. In that way it is a stable version of mono colorization.
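A toy sketch of the X-topology idea: each output is the direct channel plus a low-passed (head-shadowed), attenuated copy of the opposite channel. The cutoff and level here are illustrative, not taken from any particular crossfeed design. Note that a mono input (left == right) comes out altered, which is the mono colorization of X-topology:

```python
# Toy X-topology crossfeed: direct signal plus a low-passed, attenuated
# copy of the opposite channel. Cutoff/level values are illustrative.
import math

def one_pole_lowpass(samples, fs, fc):
    """Simple one-pole low-pass (the head shadows high frequencies)."""
    a = math.exp(-2 * math.pi * fc / fs)
    y, out = 0.0, []
    for s in samples:
        y = (1 - a) * s + a * y
        out.append(y)
    return out

def crossfeed_x(left, right, fs, fc=700.0, level_db=-6.0):
    g = 10 ** (level_db / 20)
    lf_r = one_pole_lowpass(right, fs, fc)
    lf_l = one_pole_lowpass(left, fs, fc)
    out_l = [l + g * r for l, r in zip(left, lf_r)]
    out_r = [r + g * l for r, l in zip(right, lf_l)]
    return out_l, out_r
```

Feeding the same signal to both inputs shows the effect directly: the output is still symmetric, but its spectrum is tilted by the crossfed copy, i.e. mono is not passed through untouched.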

Some crossfeeders, such as those by Jan Meier, use H-topology, which kind of creates the crossfed channels by looking at both channels simultaneously and "figuring out" the crossfed channels from them. The left and right channels kind of consult each other about what to do. The benefit of this is total mono neutrality: H-topology crossfeeders do nothing to mono sound, because the "figuring out" process detects mono sound and lets it through unaltered.
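The mono-neutrality property is easiest to see in mid/side terms. This is not Meier's actual circuit (a real H-topology design filters the channels rather than just scaling a difference signal), only a minimal sketch of the property: if you touch only the side (L-R) component, a mono signal has zero side component and passes through bit-exact:

```python
# Mono-neutral "crossfeed" sketch in mid/side form: narrow the stereo
# image by shrinking the side (L-R) component, leave the mid (L+R)
# untouched. A simplification, not any shipping crossfeed design.
def crossfeed_mid_side(left, right, side_gain=0.7):
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid, side = (l + r) / 2, (l - r) / 2
        side *= side_gain          # only the stereo difference is touched
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```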

H-topology crossfeeders become more aggressive as the channel separation increases, while X-topology crossfeeders treat everything the same. H-topology crossfeeders kind of simulate a multichannel speaker system (center speaker => mono neutrality), while X-topology crossfeeders simulate stereo speakers. They have different sound image signatures. However, both organize the spatiality created for speakers into something that works better with headphones (for some people at least).

Crossfeed doesn't really give a phantom middle, because that's in the recording all along, but it does make the sound image seem more stable/solid/whole, and that can give the impression of a better-defined middle.
 
Jun 8, 2023 at 6:14 AM Post #35 of 110
What is the benefit of doing that, would there be reverb/echo if mixed in stereo?
You want to have clear center points in your sound image to make things stable, and have other sounds placed left and right relative to that. Vocals mixed off center can sound annoying. In many genres of music, low frequencies such as bass are also mixed dead center. A lot of the stuff in music can be mono as long as there are some stereo elements to make it sound spacious, wide and stereophonic.

Reverb/echo can be mono or stereophonic. Sounds mixed to the center don't need to be monophonic; they just need to have equal energy left and right.
 
Last edited:
Jun 8, 2023 at 6:17 AM Post #36 of 110
Drums can be mixed to the middle too.
 
Jun 8, 2023 at 6:39 AM Post #37 of 110
What I mean is that these left/right sounds actually move "into" the phantom middle (where the soundstage is a little smaller but more cohesive), which overall seems like a better soundstage for headphones, because the transition from "middle" to right/left sounds kinda strange without crossfeed
That's how I experience crossfeed too. It organizes the sounds from left to right, locking them in their places better. Music without crossfeed sounds to me "scattered all over the place," because my spatial hearing can't interpret the excessive spatiality correctly. Especially impulse-like clangy sounds without crossfeed appear to me like objects in a hall of broken mirrors: they are "seen" fractured in multiple directions as reflections from mirrors, while crossfeed "removes" the mirrors and the sounds appear only in their proper places. My spatial hearing probably "invents" these fractured mirror images to "explain" the excessive spatiality somehow.

Stereo sound mixed for speakers is kind of raw and requires further "processing." For speakers, the room acoustics do that, refining the spatiality for the listener. For headphones, crossfeed is a way to do something rather than nothing.
 
Jun 8, 2023 at 6:43 AM Post #38 of 110
Jul 2, 2023 at 1:31 PM Post #41 of 110
I had to document all this for another project, so I thought it might be useful here.

An outline tutorial example of what is involved with using the major EQ type - a parametric EQ system, with my particular headphone system:

I have a system with a Samsung Galaxy S-4 tablet acting as the music source, playing mostly ripped and downloaded CD music files (16/44) on the Android operating system. This source drives a Denafrips Terminator II DAC via USB, with a Sparkos Audio Aries headphone amp and Monoprice AMT (Air Motion Transformer) headphones.

I found that the superb AMT headphones required considerable midrange EQ, so I researched different Android USB apps and selected UAPP (USB Audio Player Pro). It is specialized for Android and has a USB processing module somewhat better sounding than the Galaxy's Android processing, optimizing the USB digital data handling for the interface between the Galaxy tablet source and the driven DAC. It also furnishes a very sophisticated parametric EQ app called Toneboosters, embedded in UAPP and requiring a small fee to enable. The Toneboosters parametric EQ contains 6 different filters of several selectable types, like digital bell and analog bell (peaking filters), low shelf (for low frequency boost or cut) and high shelf (for high frequency boost or cut). The parameters of each filter (which have to be set) include frequency in Hz, amplitude in dB, and Q (damping factor).

This parametric EQ system can be enabled by selecting it and paying a few bucks in the UAPP screen. Then a music selection is cued up and Toneboosters selected. It then requires setting the parameters for the 6 parametric EQ filters. To compensate for the slump in midrange response, my AMT headphones require one filter with a considerable boost of 4-5 dB in the midrange around 2.5 kHz, with a moderate Q damping factor of 0.83. The Q value controls the time and frequency response of the selected filter. A peaking or bell filter is used to establish a peak or dip in response.

Greatly facilitating the EQ parameter setting process is the frequency response graphic display, which shows and identifies each of the 6 filters, gives their parameter values, and most crucially, displays the summed overall frequency curve from 20 Hz - 20 kHz resulting from all the filter settings. If all 6 filters aren't needed, the amplitude parameters of the unneeded filters should be set to 0, so that they are essentially out of the sonic picture.

Low Q values create a broad frequency response filter with values of 0.7-0.8 being about optimal damping, that is, transient response with minimal ringing. Higher Q values steepen the filter response so it can suppress narrow peaks or dips caused in the headphones, with the result of some ringing in the filter response.

According to information on the blog for these AMT headphones, the peak filter for the AMT needed to be +5 dB at 2.5 kHz with a Q of 0.83. This Q establishes broad, gradual frequency coverage for the filter, at least 2 octaves, peaking at 2.5 kHz and sloping off to minimal value at 7 kHz or so.
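A rough sanity check of how wide that bell is: under one common convention the Q relates center frequency and bandwidth as Q = f0 / bandwidth, with the band edges geometrically symmetric around f0. EQ tools differ slightly in how they define Q, so treat this as a ballpark, not a spec:

```python
# Ballpark width of a bell at f0 = 2.5 kHz with Q = 0.83, using the
# common convention Q = f0 / bandwidth and geometrically symmetric
# band edges (f1 * f2 = f0^2). Conventions vary between EQ tools.
import math

f0, q = 2500.0, 0.83
bw = f0 / q                                         # ~3012 Hz wide
f1 = (-bw + math.sqrt(bw * bw + 4 * f0 * f0)) / 2   # lower band edge
f2 = f1 + bw                                        # upper band edge
octaves = math.log2(f2 / f1)                        # ~1.6 octaves
```

That gives roughly 1.4-4.4 kHz between the band edges, about 1.6 octaves; the filter's audible skirt extends well beyond those edges, consistent with a broad boost that has faded to little effect by 7 kHz or so.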

I should mention that the other main type of EQ is the multiple bell peaking filter "slider" type where there is a spaced array of individual narrow band peaking filters with fixed preselected Q and frequency parameters, and settable amplitude of course. There generally is no global overall filter response display, making it difficult to visualize the cumulative frequency response of the EQ. And shelf type filters are hard to achieve with the "slider" type of EQ.

The next step is to set the last Toneboosters EQ parameter, which is the overall gain of the entire filter. In my case I selected -3.4 dB, required to minimize peak clipping. Peak clipping response is also displayed on one of the precursor Toneboosters displays.

The last step is to listen to the music selection and decide whether the tonal balance and overall reproduction are better and satisfyingly remedy the sonic problems to your ear.

If not, then you either adjust the parameters of the enabled existing filters, or create new filters by enabling some of the formerly disabled ones.

In a tweaking process, you then work out the optimal filter EQ arrangement.
 
Jul 5, 2023 at 8:24 AM Post #43 of 110
Low Q values create a broad frequency response filter with values of 0.7-0.8 being about optimal damping, that is, transient response with minimal ringing.
“Q” simply determines the bandwidth of frequencies affected by the bell curve. It doesn’t affect damping, and it doesn’t affect transient response or cause ringing, unless you have a very high “Q” setting along with a very large gain setting (which you would never use to correct for HP freq response).

You also mentioned you raised a band around 2.5k by 5dB and lowered the EQ input by 3.4dB. To be sure to avoid clipping, you should lower the input by at least the same amount as you’re boosting (5dB) and preferably a little more.

G
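That headroom rule of thumb can be wrapped in a tiny helper. This is a hypothetical function, and the 1 dB safety margin is an illustrative choice, not a standard:

```python
# Conservative pre-gain rule of thumb: cut the input by at least the
# largest boost among the EQ bands, plus a small margin. The helper
# name and the 1 dB margin are illustrative choices.
def safe_pregain(band_gains_db, margin_db=1.0):
    max_boost = max((g for g in band_gains_db if g > 0), default=0.0)
    return -(max_boost + margin_db) if max_boost > 0 else 0.0
```

For the settings discussed above (a +5 dB band), this suggests a pre-gain of about -6 dB rather than -3.4 dB.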
 
Jul 5, 2023 at 9:18 AM Post #44 of 110
“Q” simply determines the bandwidth of frequencies affected by the bell curve. It doesn’t affect damping, and it doesn’t affect transient response or cause ringing, unless you have a very high “Q” setting along with a very large gain setting (which you would never use to correct for HP freq response).
Yes, Q-factor is defined by fr/∆f, where fr is the resonance frequency and ∆f the half-power bandwidth of the resonance. However, Q-factor can also be defined as the ratio of the energy stored in the system to the power loss, which is linked to damping/ringing. Systems with a higher Q factor lose their stored energy more slowly:

Q < 0.5 means overdamped system
Q = 0.5 means critically damped system
Q > 0.5 means underdamped system

It doesn't take "very high" Q factor to have "ringing" (mathematically anything over 0.5), but for the ringing to be audibly significant in audio, the Q factor may need to be somewhat high. In general Q factors up to 0.8 are considered "safe" in audio.
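The link between Q and damping can be written down directly: for a second-order resonance the damping ratio is ζ = 1/(2Q), which reproduces the thresholds above:

```python
# Damping ratio implied by Q for a second-order system: zeta = 1/(2Q).
# zeta > 1: overdamped, zeta == 1: critically damped (Q = 0.5),
# zeta < 1: underdamped, i.e. the system rings.
def damping_class(q):
    zeta = 1.0 / (2.0 * q)
    if zeta > 1.0:
        return "overdamped"
    if zeta == 1.0:
        return "critically damped"
    return "underdamped"
```

(The exact float equality at Q = 0.5 is only for illustration; a real implementation would use a tolerance.)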

You also mentioned you raised a band around 2.5k by 5dB and lowered the EQ input by 3.4dB. To be sure to avoid clipping, you should lower the input by at least the same amount as you’re boosting (5dB) and preferably a little more.

G
Theoretically yes, but in practice the signal level in the 2.5 kHz band might be so low that there is no danger of clipping at all. Gain reduction of 3.4 dB alone means the signal energy is reduced to about 46 % of the original. The 2.5 kHz band alone would need to contain near-clipping peaks in the original signal for there to be a risk of clipping. However, doing the same in the bass frequencies is another story, because most of the energy tends to live there.
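The 46 % figure comes straight from the dB-to-power conversion:

```python
# dB to linear power (energy) ratio: P2/P1 = 10^(dB/10).
def power_ratio(gain_db):
    return 10 ** (gain_db / 10)

# a -3.4 dB cut leaves roughly 46 % of the original signal energy
remaining = power_ratio(-3.4)
```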
 
Jul 5, 2023 at 2:05 PM Post #45 of 110
Theoretically yes, but in practice the signal level in the 2.5 kHz band might be so low that there is no danger of clipping at all. Gain reduction of 3.4 dB alone means the signal energy is reduced to about 46 % of the original. The 2.5 kHz band alone would need to contain near-clipping peaks in the original signal for there to be a risk of clipping. However, doing the same in the bass frequencies is another story, because most of the energy tends to live there.
In practice, most music has peaks at or very close to 0 dBFS. I'm pretty sure these peaks contain frequencies over a rather wide range, so just about any positive gain at any frequency would cause clipping in most cases. In rare cases, even slightly negative gains (in dB) or no gain at all could cause clipping due to unlucky phase shifts and potential oversampling. I think we can agree that in general, it's a good idea to set the pre-gain cut at least as large as the maximum boost.
 
Last edited:
