The most reliable/easiest way to EQ headphones properly to achieve the most ideal sound (for non-professionals)
Feb 22, 2016 at 11:16 PM Post #166 of 316
Koukol, have you tried my video EQ tutorial?

And yeah, all the VSTs I have work fine with Audacity, so I don't know what's up with that...
 
Feb 23, 2016 at 2:14 AM Post #168 of 316
Koukol, it's this one: http://www.head-fi.org/t/794467/how-to-equalize-your-headphones-2016-update

It should help you pinpoint your actual problem frequency easily. You can also play with the dB and bandwidth / Q setting on the equalizer you use until the problem frequency is toned down by the correct amount with the appropriate steepness of filter. :smile:
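If you're comfortable with a little code, here's a minimal Python sketch (not from the linked tutorial; the function name and parameters are just illustrative) of the kind of peaking filter those dB and bandwidth/Q settings control, using the well-known RBJ audio-EQ-cookbook formulas:

```python
# A minimal sketch of one parametric EQ band (RBJ audio-EQ-cookbook
# peaking filter). f0, gain_db and q are the "frequency", "dB" and
# "bandwidth/Q" knobs mentioned above; names are illustrative.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """Apply a single peaking band to signal x (sample rate fs)."""
    a_lin = 10.0 ** (gain_db / 40.0)       # amplitude from dB
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)         # narrower band for higher Q
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

# e.g. tame a problem region near 2kHz by 4dB with a fairly narrow band:
# y = peaking_eq(x, fs=44100, f0=2000.0, gain_db=-4.0, q=3.0)
```

Raising q narrows the affected band, which is why the tutorial has you adjust gain and Q together until only the problem region is touched.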
 
Feb 23, 2016 at 7:55 AM Post #169 of 316
Quote:
We do agree on many points, but just with some different emphases.

 
Agreed. Although the differences between us are quite small, they lead to a quite significant difference in our final conclusion.
 
Quote:
Originally Posted by Lunatique
 
... you can EQ your system to be a lot more accurate than without EQ. ... I can't imagine you, or any other audio professional, playing back a log sweep and hearing obvious spikes and dips in the frequency response and then just shrug and not do anything about it.

 
In practice, just shrugging and not doing anything about it is a very common response! Of course, it's not quite that simple in reality. We would first identify the cause of those dips and spikes, as that will suggest an appropriate treatment. Commonly, EQ is NOT an appropriate treatment! In the case of a dip caused by a cancellation, for example, EQ is typically not an effective treatment because EQ boosting simply increases the amount of energy equally for both the direct sound and the reflections causing the cancellation, resulting in a net gain of very little or nothing at all. Absorption or the re-direction (diffusion) of those cancelling reflections would be very substantially more effective but, in the case of reflections caused by say the mixing console, we obviously can't cover the console in absorber or diffuser panels. There's really not much option other than just shrugging and doing nothing about it!

Even in the case of spikes, EQ is sometimes no more than a band-aid rather than a cure. Ideally, we need to think in terms of the time domain itself, rather than just the timing of reflections and the resultant effect on freq response. If a spike is caused by some sort of resonance (or ringing) for example, then we not only have some amount of signal summing but also a substantial increase in the duration of that ringing freq, i.e. not just a freq problem but a time/duration problem. Just using EQ as the treatment may lower the average amount of energy at a particular freq to the point where the response looks flat, but it hasn't addressed the time/duration issue. In other words, to counteract the increase in total energy due to the longer duration of that energy at a particular freq, we've reduced the total energy so our freq response looks flat, but if we were to take a snapshot of a particular instant then that freq would have significantly less energy (be a dip). That's why a "waterfall" plot is a useful measurement tool, in addition to just a standard freq response plot. Absorption would probably be the best solution here but again, applying absorption may be a practical impossibility.
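As a concrete illustration of what a waterfall plot measures, here's a rough Python sketch (my illustration of the general idea, not any particular measurement package) of a cumulative spectral decay computed from a measured impulse response; the window length, hop and slice count are arbitrary choices:

```python
# Rough sketch of a cumulative spectral decay ("waterfall"): spectra of
# progressively later slices of an impulse response. A resonance that EQ
# has flattened in the averaged response still shows up here as a ridge
# that decays more slowly than its neighbours. Parameters are arbitrary.
import numpy as np

def waterfall(ir, fs, n_slices=20, win_len=2048):
    hop = win_len // 8
    window = np.hanning(win_len)
    slices = []
    for k in range(n_slices):
        seg = ir[k * hop : k * hop + win_len]
        if len(seg) < win_len:                       # zero-pad the tail
            seg = np.pad(seg, (0, win_len - len(seg)))
        mag = np.abs(np.fft.rfft(seg * window))
        slices.append(20 * np.log10(mag + 1e-12))    # dB, avoid log(0)
    freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
    times = np.arange(n_slices) * hop / fs
    return freqs, times, np.array(slices)
```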
 
Shrugging and doing nothing about it is the typical option for problems above about 800Hz although, thanks to the initial design, construction and treatment, there shouldn't be too many really serious problems left. Higher freqs are particularly sensitive to very small changes in position. What may have been a 5dB dip at say 1.5kHz may become a 5dB boost, just by moving the measurement mic an inch or two. We obviously can't tune a listening point to just a square inch. Even if we could position our head that accurately all the time, we have two ears which are more than an inch apart! How do you treat that with EQ?
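The position sensitivity is easy to sanity-check with a back-of-envelope comb-filter calculation (illustrative numbers only, assuming a single dominant reflection):

```python
# Back-of-envelope comb-filter arithmetic: for a single reflection whose
# path is path_diff metres longer than the direct sound, cancellations
# sit at odd multiples of c / (2 * path_diff). Numbers are illustrative.
C = 343.0  # speed of sound in air, m/s

def first_null_hz(path_diff_m):
    return C / (2.0 * path_diff_m)

print(first_null_hz(0.114))          # ~1504 Hz: a dip near 1.5kHz
print(first_null_hz(0.114 - 0.025))  # ~1927 Hz: path difference changed
                                     # by ~an inch, the 1.5kHz dip is gone
```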
 
From all this, a few things should be apparent: 1. Acoustics is one of those audio rabbit hole areas; the more you investigate, the deeper you realise the hole goes! 2. EQ is both a blunt and frequently ineffective acoustic treatment tool. 3. A flat freq response is only part of the picture. It's entirely possible that a "flat" mix/mastering room is neither particularly accurate, particularly neutral nor conducive to producing quality audio, even if creating a "flat" room were attainable in the first place!
 
Quote:
  So if we can agree that audio professionals do have an objective baseline standard for accuracy that we try to aim for ...

 
Ah, but this is our biggest point of disagreement! There are two elements to my disagreement: the first I've addressed before, and in more detail above. There have been some fairly extreme solutions to the issue of attaining an accurate/neutral response while avoiding the even worse pitfalls of an anechoic chamber; here's an example of such an extreme mastering room solution:
 
[embedded image: a mastering room covered almost entirely in quadratic diffusers]
 
While covering almost every inch of the studio in quadratic diffusers probably gives an amazing result, the reflective surface closest to the mastering engineer (and directly between him and the monitors), the console, is obviously not covered in quadratic diffusers. So however flat/neutral/accurate this mastering suite is, it's still probably some way off "ideal". This mastering suite is obviously substantially different from the pictures you previously linked to of other mastering studios and would presumably sound at least somewhat different.
 
The second element of my disagreement is subjectivity, the personal preference/s of the mastering engineer. Although not a bass-head, I do like a little more bass than average and my tendency would therefore be to add a little too much bass to my masters. I sometimes counter this by adding a few dB of bass to my b-chain when mixing or mastering. Some other engineers add even more, most a little less. Obviously, this is all subjective rather than objective. It's a subjective observation that I tend to prefer a little more bass than others, a subjective determination of how much and a subjective determination of whether to counter it with just personal awareness or by actually altering my b-chain.
 
Putting these two elements together, I disagree that there is an "objective baseline standard for accuracy". IMO, there is a "subjective baseline standard" for what constitutes an environment conducive to good mastering, and typically that means a fairly inaccurate freq response, both deliberately and due to unavoidable circumstance. There is no objective baseline standard: all mastering studios are audibly different and all mastering engineers are individuals and at least somewhat different. I've been in some wonderful mastering rooms and also some which I felt were poor enough to preclude me from producing my best work, but which don't preclude (and actually aid) other engineers in producing top quality results, and that's even in cases where my idea of "top quality" is actually identical to another mastering engineer's! What you are "trying to aim for" is based on the fallacy that mastering suites (and mastering engineers) adhere to some objective standard. The actual target you are "trying to aim for" is a creation of your own imagination, of what you think/feel a mastering suite should be, not what they actually are!
 
I don't dispute your (or anyone else's) right to EQ your equipment however you wish. I also don't dispute that the results of that EQ may very well sound better to you and possibly even to me. What I'm saying is that acoustics, mastering suites and the personal act of mastering itself is a warren of rabbit holes, a complex set of objective and subjective variables. Reducing this to a single variable, reliably/easily treated with a single and rather blunt tool (EQ), is over-simplifying the issue to the point that it's just as likely to be counter-productive. I realise that system tweaking is an integral part of audiophilia for many, and therefore any advice against tweaking is anathema. The question is: what is our goal, and if that goal is not tweaking itself, does tweaking get us closer to it? In this particular case, if our goal is to create a sound we personally like, then tweak away. If our goal is to try and experience what the artists/engineers intended then the answer is not so simple; tweaking may get us closer or it may take us further away but crucially, we're never going to know unless we're lucky enough to visit the mastering studio where the music was mastered!
 
Quote:
For a while now I've been having a problem with shrillness around the 2K range with every HP I've used (many, including the HD650s).
Some recordings do sound great though.
I think I have ear damage or might be suffering from neurological problems (I just had an MRI), or maybe I own too many brick-walled CDs :)

 
My advice is to visit an audiologist. Not only will they help you to identify exactly where you have a hearing weakness, making it much easier for you to attempt to EQ around the problem, but they may also be able to identify why you have a weakness and/or refer you to a consultant, who in turn might be able to treat the cause and/or help reduce the possibility of further deterioration. If I were you, I would be visiting an audiologist at my earliest opportunity!
 
G
 
Feb 23, 2016 at 11:58 AM Post #170 of 316
Thanks guys.
I guess I'd better see an audiologist.
I've never had this problem before and I've been listening with headphones for about 40 years.
 
BTW, Reaper is giving me more accurate results.
I'm wondering if I downloaded the right Audacity version now.
 
Feb 23, 2016 at 4:08 PM Post #171 of 316
   
 
Quote:
[Post #169 quoted in full above]

I understand the pain of nulls from room modes too well. I dealt with it for a long time and couldn't do much about it, since EQing a null in an acoustic environment doesn't help much--it just puts unnecessary strain on the drivers. I finally had to add a subwoofer to fill out that null (and luckily the null was below the crossover point of the subwoofer). This is one area where headphones have an advantage, since the acoustic space is taken out of the equation, and that also makes EQing easier since you're not fighting against some stubborn null that can't be fixed.
 
Audyssey's room/speaker correction technology involves time-domain correction too, not just frequency response: https://audyssey.zendesk.com/entries/20352398-Time-Domain-correction-explained
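Audyssey's actual algorithm is proprietary, but as a rough Python sketch of the generic idea behind FIR-based time-domain correction (regularised inversion of a measured impulse response; the function and parameter names are mine):

```python
# Sketch only: Audyssey's real algorithm is proprietary. This is the
# generic idea behind FIR correction, a regularised frequency-domain
# inversion of a measured impulse response, where the regularisation
# floor stops deep nulls being boosted into driver-straining peaks.
import numpy as np

def inverse_fir(ir, n_taps=4096, reg=0.01):
    """Return correction FIR taps for measured impulse response ir."""
    H = np.fft.rfft(ir, n=n_taps)
    power = np.abs(H) ** 2
    eps = reg * power.max()              # regularisation floor
    H_inv = np.conj(H) / (power + eps)   # damped inverse
    return np.fft.irfft(H_inv, n=n_taps)
# A real implementation would also window the result and manage latency.
```

Note how the regularisation ties back to the null problem above: without it, inverting a deep cancellation would demand huge boosts that the drivers can't deliver.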
 
It also uses multiple measurements within the listening area, to accommodate the movement of the head while seated at the listening position: https://audyssey.zendesk.com/entries/73287-How-does-MultEQ-apply-room-correction-
 
As for your most important point regarding subjectivity vs objectivity, I think maybe we can look at it from another angle. I agree that mastering engineers have their own subjective taste, since it's as much art as it is science. But let's simplify the issue down to one basic point, which is this:
 
If given a choice, wouldn't most mastering engineers prefer that the person listening to their masters is using a system that's more neutral/accurate than significantly colored? Regardless of whether that person can know or achieve the same sonic signature as the mastering facility, or even know what the mastering engineer really intended subjectively, does it not make logical sense that a neutral/accurate system is ultimately more desirable, because it will be able to play back the widest range of different subjective tastes from different mastering engineers without veering off that cliff of a basic standard for fidelity?
 
I know you'll probably bring up the possibility of corrections causing more harm than good, but that is not a foregone conclusion, since it depends on multiple factors such as the gear being corrected, the tool used for correction, and the knowledge/skill of the person doing the correction. Generally speaking, my stance is that trying to achieve a more neutral/accurate sounding playback system is overall a good thing, and the more care taken with the process and tools and the better the gear is, the more positive the outcome will be. From your previous posts you seem to more or less agree with this.
 
So it appears the only real remaining issue I'm trying to see if we can agree on is whether a neutral/accurate sonic signature is ultimately more desirable than a significantly colored one (which unfortunately is far too common in consumer audio).
 
Feb 24, 2016 at 6:05 AM Post #172 of 316
Quote:
Audyssey's room/speaker correction technology involves time-domain correction too, not just frequency response ...

 
Yes, I've got a basic grasp of how the technology works. However, given the current state of audio processing capabilities, the practical implementation limitations present within consumer AVRs and the limitations of the average consumer speaker system, I can't see how this technology can apply enough corrections to provide a completely corrected, flat response. Having said this, I have heard the Audyssey and Dirac systems and there's no doubt in my mind that for the average consumer they do provide a considerable improvement; indeed, I did recommend their use earlier in this thread. We do need to put this improvement in context though, the context being numerous truly horrific acoustic problems to start with, the severity of which is far, far worse than most consumers, even including many audiophiles, would ever suspect. A situation only complicated and worsened by 5.1 (or higher) home cinema systems.
 
Quote:
It also uses multiple measurements within the listening area, to accommodate the movement of the head while seated at the listening position ...

 
Again, your use of the word "accommodate" implies a cure. Indeed, it may provide a cure in some freq ranges but I'm highly sceptical it can provide a cure throughout the spectrum, a general improvement sure but not a cure. Logically, it must work on some sort of average (albeit a complexly constructed one) and apply correction to that average rather than attempt to perfectly correct for each listening position individually. This would have to involve some degree of compromise, even to the point of making one particular listening position worse, to improve the majority of the other listening positions measured. Here in the science forum I need to clarify that this is just my personal opinion. I cannot easily integrate a consumer AVR into my studio setup to run the tests required to really understand what is going on under the hood and substantiate my assumptions with real evidence.
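As a sketch of the "correct to an average" idea being described here (whether any given product averages in dB, in power, or something cleverer is exactly the under-the-hood detail in question):

```python
# Sketch of "correct to an average": measure several positions around
# the seat, average the magnitude responses, and derive one correction
# from that average rather than from any single point in space.
import numpy as np

def averaged_response_db(measurements):
    """measurements: list of equal-length linear magnitude spectra."""
    db = [20 * np.log10(np.asarray(m) + 1e-12) for m in measurements]
    return np.mean(db, axis=0)           # simple dB average

# correction_db = target_db - averaged_response_db([pos1, pos2, pos3])
```

Whatever the averaging scheme, a position whose response differs strongly from the average will, by construction, receive a compromised correction, which is the point being made above.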
 
Quote:
This is one area where headphones have an advantage, since the acoustic space is taken out of the equation, and that also makes EQing easier since you're not fighting against some stubborn null that can't be fixed.

 
Agreed. However, that's not the end of the story, we don't just eliminate all the complex variables of room acoustics, we exchange them for another set of variables and importantly, a set of variables which are extremely difficult to objectively measure, unlike the variables of room acoustics. EQ'ing headphones is near impossible for serious consumers on any basis except subjectivity and, like with speakers and rooms, EQ is again only part of the picture.
 
Quote:
 
If given a choice, wouldn't most mastering engineers prefer that the person listening to their masters is using a system that's more neutral/accurate than significantly colored?

 
The answer to this is an emphatic "no"! Or more precisely, the answer is "yes", given a specific set of conditions which rarely exist. The answer could only be "yes" if the vast majority of those listening were using a system which is relatively flat/neutral. If it were an equal number or a significant minority, that would be problematic. Under those conditions, I personally would be looking to create a master for consumers somewhere between flat/neutral and the usual "significantly coloured" (or rather, inversely "significantly coloured"). In other words, a master somewhere in the middle and somewhat compromised for both groups of listeners within the target group! Currently, those listening with flat/neutral systems are part of a very small minority, an almost insignificant minority and therefore I can create a master with fewer compromises for that vast majority, fewer concessions to the flat/neutral group. I'm of course rather over-simplifying my approach to mastering but this is the basics of the equation.
 
There are some circumstances where those specific conditions mentioned above do exist, or exist up to a point. For example, mixing audio for cinema. Cinemas are calibrated (albeit not entirely flat and with a fair margin of error) and of course our mix (dub) stages are similarly calibrated, meaning no counter colouration is required. A broadly similar situation existed with SACD. SACDs were relatively expensive, as were the players, and the players weren't portable. The logical inferences of these facts, from a mastering perspective, are that SACD consumers generally had higher quality playback systems than the average consumer (if they spent a significant sum on a SACD player and on content, they probably also had significantly better than average speakers), far better listening environments (SACDs were not playable in cars, trains or other portable scenarios) and SACD consumers would tend to listen more critically, i.e. they were more likely to be concentrating on the listening experience rather than just playing music in the background while doing something else. These inferences made this a rather specific target group. While that didn't necessarily make a huge difference as far as neutrality/flatness were concerned (even higher-end consumer systems are still coloured, although maybe slightly less so on average), it did make a significant difference in other aspects of mastering, the amount of audio compression applied perhaps being the most obvious (but not the only) example. For this reason, although intrinsically no better than 16/44.1 as far as resolution, dynamic range or any other aspect of sound quality is concerned, SACD does somewhat represent the pinnacle of mastering as far as critical listening with a high quality system is concerned.
 
G
 
Feb 26, 2016 at 6:01 PM Post #173 of 316
Quote:
For this reason, although intrinsically no better than 16/44.1 as far as resolution, dynamic range or any other aspect of sound quality is concerned, SACD does somewhat represent the pinnacle of mastering as far as critical listening with a high quality system is concerned.

G


... as long as you don't need any bass redirection or digital room correction, because SACD allows for no such things. :mad:

And as long as the recording and mastering engineers are under no grandiose illusion of being able to mix the whole thing without any PCM-domain DSP. I've heard one such "pure" recording and I could only facepalm.

http://www.head-fi.org/t/782131/why-high-res-audio-is-bad-for-music-take-2
 
Feb 27, 2016 at 3:14 AM Post #174 of 316
Quote:
... as long as you don't need any bass redirection or digital room correction, because SACD allows for no such things. :mad:


And as long as the recording and mastering engineers are under no grandiose illusion of being able to mix the whole thing without any PCM-domain DSP.

 
I wasn't making a case FOR SACD; personally I think it's a flawed format which offers no fidelity, resolution or audible benefits over 16/44.1. The reason I mentioned SACD is simply because, from a mastering perspective, it presented a highly targeted consumer demographic. There is absolutely no technical reason why the exact same end result could not be achieved with 16/44.1: simply create two 16/44.1 masters, one standard version and another more "purist" version with more dynamic range (less compression) and generally better suited to more critical listening on higher quality systems. This "purist" version could be called the "hi-fidelity" version or, just as legitimately as say 24/192 or SACD, a "high-def" version. The only reason this doesn't currently happen has nothing to do with any actual format limitations of 16/44.1 and everything to do with the perceived marketing limitations. It's (rightly or wrongly) perceived as more difficult, from a marketing point of view, to differentiate a standard-priced 16/44.1 version from a higher-priced version in exactly the same format. There is only the marketers' word that one version is worth more than another, but with SACD or 24/192 the marketers have bigger numbers which they can use to support a difference, regardless of the reality that those bigger numbers don't actually result in any better quality.
 
I believe hardly any serious, knowledgeable professional mastering engineers are under the "grandiose illusion" you are talking about, or indeed the grandiose illusion of 24/192 either. There may appear to be more than there actually are though; I personally know some who publicly support the marketers' claims, even though they're well aware it's BS, because they have families to support and can't risk upsetting/losing their clients by undermining their marketing strategies.
 
G
 
Mar 25, 2016 at 8:45 PM Post #175 of 316
Guys, I'm trying all this for the first time and I'm lost at the beginning. UNDERSTAND that I know basically nothing about EQ'ing. I don't know how to listen for spikes or dips. I think I may have identified spikes based on a harsh/grating sound in the high frequencies, but am completely lost at finding dips in bass frequencies.

I'm using JRiver on Mac, the provided test tones and the MarvelGEQ plugin as an EQ in JRiver.

Help please
 
Mar 25, 2016 at 9:25 PM Post #176 of 316
Quote:
[Post #175 quoted in full above]

I replied to you via PM, but I'll post my replies here so others can benefit from them too:
 
Are you using the test tones I posted? If you listen to the log sweep, you should hear clearly that as the tone sweeps from 20Hz to 20kHz, there will be obvious spikes/dips--especially in the 1kHz~10kHz region. You can't miss it--it's very, very obvious. If your ears have no problems and you can discern differences in volume, then you will hear it.
 
Same with the sine wave test tones. If you play back the test tones sequentially, you will hear the differences in amplitude at certain frequency intervals. Again, this is usually most obvious in the 1kHz to 10kHz region. Some frequencies in that range will sound especially sharp/loud, while some will sound duller/quieter. Those can be the spikes/dips, but you're not necessarily listening for harshness/dullness--you're listening for relative volume (how loud or quiet each tone "feels" to you when compared to the others).
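If you'd rather generate your own test signals than download them, here's a minimal Python sketch of both signal types (durations, levels and step frequencies are arbitrary choices, not the exact files posted in the tutorial):

```python
# Generating the two test-signal types yourself: a 20Hz-20kHz log sweep
# and stepped sine tones. Durations/levels are arbitrary; write to disk
# with e.g. scipy.io.wavfile.write(path, FS, signal.astype('float32')).
import numpy as np

FS = 44100

def log_sweep(f1=20.0, f2=20000.0, dur=30.0):
    t = np.arange(int(FS * dur)) / FS
    k = np.log(f2 / f1)                       # exponential sweep rate
    phase = 2 * np.pi * f1 * dur / k * (np.exp(t / dur * k) - 1)
    return 0.5 * np.sin(phase)

def stepped_tones(freqs, dur=2.0):
    t = np.arange(int(FS * dur)) / FS
    return np.concatenate([0.5 * np.sin(2 * np.pi * f * t) for f in freqs])

# Third-octave-ish steps through the region highlighted above:
tones = stepped_tones([1000, 1250, 1600, 2000, 2500,
                       3150, 4000, 5000, 6300, 8000, 10000])
```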
 
As for bass frequencies, are you using a frequency response measurement graph as instructed? It's much easier if you have one (search at InnerFidelity to see if your headphone has been measured). When you can see the measurement, it puts what you hear into much clearer context. It takes some practice to discern the relative difference in energy level between bass frequencies, but if you go with your gut instinct and "feel" which tones sound louder or quieter, you can get pretty close. But keep in mind that 40~50Hz is supposed to "feel" relatively stronger than neighboring bass frequencies, though not overwhelmingly so.
 
Mar 25, 2016 at 10:10 PM Post #177 of 316
Quote:
[Post #176 quoted in full above]


I cannot hear bass from 16Hz to 20Hz unless I increase the volume. Does that mean I need an EQ bump at those frequencies?
 
Mar 25, 2016 at 11:43 PM Post #178 of 316
Quote:
I cannot hear bass from 16Hz to 20Hz unless I increase the volume. Does that mean I need an EQ bump at those frequencies?

You're not supposed to be able to hear frequencies below 20Hz--you're supposed to mainly "feel" the low frequency vibration instead.
 
Looks like you need to learn some basic lessons about audio. I highly recommend you Google search terms like "learn audio basics" or "music production basics." You really should have at least some basic understanding of audio; otherwise you're going to be making all kinds of dumb mistakes (but it's good that you're asking, so we can help you avoid those mistakes).
 
Jun 18, 2016 at 4:24 PM Post #179 of 316
Lunatique,
I don't get the idea of first EQing the headphones flat and then EQing according to the target curve, only to afterwards flatten the response again. If you already have flat sound with the first EQ, then it is flat according to your hearing; but if you afterwards apply the target curve EQ and then try to flatten that sound again, you are practically neutralizing the target curve. Am I right, or am I missing some point here? From what Tyll Hertsens explains, the NAD VISO HP50 are headphones with an almost neutral FR that follows the Harman curve, which creates the feeling of relatively flat sound. The Harman curve practically compensates for the dip at 20-50 Hz, the dip between 3-4 kHz and the peak around 8 kHz.
 
Jun 18, 2016 at 4:40 PM Post #180 of 316
Quote:
[Post #179 quoted in full above]

Looks like you mixed up the process a bit. 
 
It's actually very straightforward. The end goal is to match the Harman Target Response Curve, as that is the current ideal according to headphone experts. So during the entire process, just keep that in mind as the end goal. 
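In code terms, the whole process reduces to one subtraction per band: the numbers below are made up purely for illustration, so read the real ones off your headphone's measurement graph and a plot of the Harman target.

```python
# Illustration of the arithmetic only: correction = target - measured,
# per band. All numbers below are made up; read real values off your
# headphone's measurement graph and a plot of the Harman target.
import numpy as np

freqs       = np.array([  20,   50,  100, 1000, 3000, 8000])   # Hz
measured_db = np.array([-8.0, -4.0, -1.0,  0.0, -2.0,  5.0])   # hypothetical
target_db   = np.array([ 4.0,  2.0,  0.0,  0.0,  3.0, -1.0])   # hypothetical
correction_db = target_db - measured_db    # dB to dial into each EQ band
print(dict(zip(freqs.tolist(), correction_db.tolist())))
```

So the two EQ stages don't cancel each other out: flattening the measured response and then applying the target collapses into a single set of per-band gains, and the target is preserved rather than neutralized.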
 
