EQ Settings for 700+ Headphones
Feb 12, 2022 at 10:51 AM Post #136 of 165
You can't really average parametric equalizer filters if they have different center frequencies and qualities. You can download the Sample 2 GraphicEq.txt file on your phone and import it into Wavelet.
Yes, sorry, I wasn't clear. I meant testing all 3 options: Sample 1, Sample 2, and the average.
But cool, I didn't know Wavelet allowed external imports. That's easier than migrating those GEQ files into Neutron's supported format.
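To illustrate that point: the parametric filters themselves can't be averaged, but the GraphicEQ curves can be averaged point by point. A minimal Python sketch of that, assuming both samples' GraphicEQ.txt files share the same frequency grid (AutoEq's exports normally do); the file names below are placeholders:

Code:
def parse_graphic_eq(path):
    """Return {frequency: gain_dB} from a 'GraphicEQ: f1 g1; f2 g2; ...' file."""
    with open(path) as fh:
        text = fh.read().strip()
    text = text.split("GraphicEQ:", 1)[1]
    pairs = [p.split() for p in text.split(";") if p.strip()]
    return {float(f): float(g) for f, g in pairs}

def average_graphic_eq(path_a, path_b, out_path):
    a, b = parse_graphic_eq(path_a), parse_graphic_eq(path_b)
    if a.keys() != b.keys():
        raise ValueError("The two files use different frequency points")
    avg = {f: (a[f] + b[f]) / 2 for f in sorted(a)}
    line = "GraphicEQ: " + "; ".join(f"{f:g} {g:.1f}" for f, g in avg.items())
    with open(out_path, "w") as fh:
        fh.write(line + "\n")

# Placeholder file names
average_graphic_eq("Sample 1 GraphicEQ.txt", "Sample 2 GraphicEQ.txt",
                   "Average GraphicEQ.txt")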
 
Sep 18, 2022 at 1:16 PM Post #137 of 165
:tada: AutoEq just got a new and improved parametric eq optimizer :tada:

While all major types of equalizers, including parametric equalizers, have been supported for a long time now, the optimizer which finds the best parameters for the parametric eq was slow and produced problems in certain rare(ish) cases.

The new parametric eq optimizer runs a lot faster, supports low and high shelf filters and has limits for filter (band) center frequencies, gains and qualities (widths). Together with the recent addition of multiprocessing, the new version generates the results over 100x faster and the speedup for a single optimization run is around 10x. The low and high shelf filters make it easier to adjust the bass and upper treble levels in your eq app. This is especially useful as the preferred levels for both vary wildly from one listener to another. And finally the limits on the filter parameters ensure that there won't be values produced which you cannot add to your eq app.

I put quite a lot of effort into ensuring the new optimizer is robust, but this is still the first time the new results have been released out into the wild, so please let me know if you find something wrong/odd/funny/weird.

Here's an illustration of how the parametric eq optimizer finds the best filter parameters
peq.gif
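For the curious, here is a minimal toy sketch of the general idea (not AutoEq's actual optimizer code): fit the center frequency, gain and Q of a handful of standard RBJ peaking filters to a target curve, with hard limits on every parameter, using scipy. The filter count, bounds and toy target below are made-up examples.

Code:
import numpy as np
from scipy.optimize import minimize

FS = 48000
F = np.logspace(np.log10(20), np.log10(20000), 200)  # evaluation grid in Hz

def peaking_db(fc, gain_db, q, f=F, fs=FS):
    """Magnitude response in dB of an RBJ peaking EQ biquad."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    z = np.exp(-1j * 2 * np.pi * f / fs)  # z^-1 on the evaluation grid
    h = (b[0] + b[1] * z + b[2] * z ** 2) / (a[0] + a[1] * z + a[2] * z ** 2)
    return 20 * np.log10(np.abs(h))

def total_db(params):
    """Sum the dB responses of all filters; params is [fc, gain, q] * n."""
    p = np.asarray(params).reshape(-1, 3)
    return sum(peaking_db(fc, g, q) for fc, g, q in p)

# Toy target: a broad +6 dB correction centered around 3 kHz.
target = 6 * np.exp(-((np.log10(F) - np.log10(3000)) ** 2) / (2 * 0.15 ** 2))

x0 = np.array([200.0, 0.0, 1.0,     # initial fc / gain / Q per filter
               1000.0, 0.0, 1.0,
               5000.0, 0.0, 1.0])
bounds = [(20.0, 10000.0), (-12.0, 12.0), (0.3, 6.0)] * 3  # parameter limits

result = minimize(lambda x: np.mean((total_db(x) - target) ** 2),
                  x0, bounds=bounds, method="L-BFGS-B")

for fc, g, q in result.x.reshape(-1, 3):
    print(f"fc={fc:7.1f} Hz  gain={g:+5.2f} dB  Q={q:4.2f}")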


Hope you enjoy!
 
Dec 11, 2022 at 7:29 PM Post #138 of 165
If you previously followed my guides (on page 9 of this thread) to get AutoEQ working on M1 Mac (Arm64) machines, be aware that @jaakkopasanen has made changes to AutoEQ and the old guides no longer work. One of the major changes is that Tensorflow is no longer required. Anyway, here are some new instructions and a few tips if you already had an older version of AutoEQ running and want to update to the latest improved version.

Visit https://github.com/jaakkopasanen/AutoEq to read about the latest changes.

1. Download the latest AutoEQ repository zip file.

https://github.com/jaakkopasanen/AutoEq/archive/refs/heads/master.zip

Extract the zip file and rename the extracted folder to AutoEQ. Move or copy the AutoEQ folder to your Users folder.

TIP: If you previously followed my instructions to get AutoEQ running, you can skip steps 2 through 6.

2. Install Xcode from the App Store. The Xcode Command Line Tools should install automatically;

if not, open Terminal and install them with:

Code:
$ xcode-select --install


3. Check for latest version of Miniforge3 at:

https://github.com/conda-forge/miniforge#download

Download the file for arm64 (Apple Silicon).


4. Launch Terminal and navigate to the directory where you saved the "Miniforge3-MacOSX-arm64.sh" file.

Example: if you saved it to your Downloads directory:


Code:
$ cd ~/Downloads


5. Install Miniforge3:

Code:
$ bash Miniforge3-MacOSX-arm64.sh

You must accept the licensing agreement to continue the installation.


6. After the installation completes, you are prompted to initialize Miniforge3:

Code:
Do you wish the installer to initialize Miniforge3
   by running conda init? [yes|no]
   [no] >>> yes


7. AutoEQ now runs under the latest version of Python 3. Create a virtual environment and install Python 3.11 and the AutoEQ dependencies:

TIP: If you followed my previous instructions to get AutoEQ running, you need to delete the previous virtual environment before continuing; you'll recreate it in the next steps. If you are installing for the first time, skip this deletion step.

Assuming you installed Miniforge3 in your Users folder: Delete the autoeq_venv folder located here: /Users/<Username>/miniforge3/envs/autoeq_venv

TIP: The list of AutoEQ dependencies is found in the pyproject.toml file in the AutoEQ folder. The dependency versions listed there may be lower than what Python 3.11 requires, so I install the dependencies manually with the command that follows.

Code:
(base) MAC_Name:~ Username$ conda create --name autoeq_venv python=3.11 pillow matplotlib pandas scipy numpy tabulate pyyaml tqdm

Activate the virtual environment:

Code:
(base) MAC_Name:~ Username$ conda activate autoeq_venv

The terminal prompt changes to:

Code:
(autoeq_venv) MAC_Name:~ Username$


8. Install dependencies not available from conda-forge:

Code:
$ conda install -c bricew soundfile

TIP: To deactivate a conda virtual environment use:

Code:
$ conda deactivate

NOTE: In the normal AutoEQ setup instructions you would create and activate a venv directory within the AutoEQ directory.
With Miniforge3 you create and activate the virtual environment within the ~/miniforge3/envs directory instead.

9. Verify AutoEQ:

Code:
$ cd ~/AutoEQ

$ python -m autoeq --help

NOTE: This is only a minimal verification! The AutoEq help may be displayed even if the setup is a little wonky. It's better to run a full EQ graph pass.

Here's an example for true verification from https://github.com/jaakkopasanen/AutoEq#equalizing-individual-headphones :

Equalizing Sennheiser HD 650 and saving results to my_results/HD650:

Code:
$ python -m autoeq --input-dir="measurements/innerfidelity/data/onear/Sennheiser HD 650" --output-dir="my_results/HD650" --compensation="measurements/innerfidelity/resources/innerfidelity_harman_over-ear_2018_wo_bass.csv" --bass-boost=4 --convolution-eq --parametric-eq --ten-band-eq --fs=44100,48000


10. Keep conda updated:

Code:
$ conda update conda --all

That's it. Good luck and have fun with AutoEQ on your M1 Mac Arm64 machine.
 
Jan 9, 2023 at 12:08 AM Post #139 of 165
So I just started using Equalizer APO and AutoEQ this weekend. It's awesome.

I have settings for all of my cans besides my Sennheiser HD590. I owned a pair of 590s and they broke, but a few years ago I was able to track down a pair in the original box and have them shipped over from England. I got a new cable, headband, and those awesome teardrop earcups.

Anyway, they measure very similarly to the HD650 in the bass and mids but sound way different in the highs.

I found a website that measured the HD590s and I was planning on making my own EQ by basically leaving the lows/mids the same as the 650 and then trying my best to lower the highs.

Anyone have experience making their own AutoEQ and want to help me with this?
Here are the 590s measured:
https://diyaudioheaven.wordpress.com/headphones/measurements/brands-s-se/hd590-prestige/
Here is the 590 vs the 650 taken from that site:
o6fy7ga.png



Here is the AutoEQ for the 650 I was going to use as a base:
GraphicEQ: 20 -0.2; 21 -0.2; 22 -0.2; 23 -0.2; 24 -0.2; 26 -0.2; 27 -0.2; 29 -0.2; 30 -0.2; 32 -0.2; 34 -0.2; 36 -0.2; 38 -0.2; 40 -0.2; 43 -0.4; 45 -0.6; 48 -1.1; 50 -1.3; 53 -1.6; 56 -1.7; 59 -1.7; 63 -2.3; 66 -2.9; 70 -3.2; 74 -3.3; 78 -3.9; 83 -4.9; 87 -5.4; 92 -6; 97 -6.4; 103 -6.7; 109 -6.8; 115 -7.1; 121 -7.4; 128 -7.6; 136 -7.7; 143 -7.9; 151 -8.1; 160 -8.4; 169 -8.4; 178 -8.5; 188 -8.6; 199 -8.8; 210 -8.9; 222 -8.7; 235 -8.7; 248 -8.5; 262 -8.3; 277 -7.9; 292 -7.7; 309 -7.5; 326 -7.4; 345 -7.2; 364 -7.2; 385 -7.1; 406 -7.1; 429 -7; 453 -7; 479 -7; 506 -6.8; 534 -6.8; 565 -6.8; 596 -6.8; 630 -6.7; 665 -6.7; 703 -6.6; 743 -6.5; 784 -6.6; 829 -6.8; 875 -6.9; 924 -7; 977 -6.7; 1032 -6.4; 1090 -7; 1151 -7.4; 1216 -7.3; 1284 -7.1; 1357 -7; 1433 -6.8; 1514 -6.7; 1599 -6.5; 1689 -6.3; 1784 -6.1; 1885 -5.8; 1991 -5.5; 2103 -5.2; 2221 -5.3; 2347 -5.6; 2479 -6; 2618 -6.1; 2766 -6.3; 2921 -6.5; 3086 -6.4; 3260 -6.1; 3443 -6.1; 3637 -6.1; 3842 -5.4; 4058 -4.8; 4287 -4.1; 4528 -4.2; 4783 -4.6; 5052 -5.1; 5337 -5.5; 5637 -5.4; 5955 -4.3; 6290 -2.9; 6644 -1.7; 7018 -1.9; 7414 -2.5; 7831 -2.6; 8272 -2.6; 8738 -2.8; 9230 -2.8; 9749 -2.7; 10298 -2.9; 10878 -3.2; 11490 -3.5; 12137 -3.9; 12821 -4.4; 13543 -5; 14305 -5.6; 15110 -6.3; 15961 -7.1; 16860 -7.9; 17809 -8.9; 18812 -9.8; 19871 -10.9
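A minimal sketch of that plan in Python: keep the lows/mids as-is and apply an extra cut above a chosen corner frequency. The 4 kHz corner and -3 dB amount are placeholder values, not a recommendation; you'd read the real offsets off the HD 590 vs HD 650 comparison graph.

Code:
# Paste the full GraphicEQ line from above here; shortened placeholder shown.
hd650_line = "GraphicEQ: 20 -0.2; 21 -0.2"

def adjust_highs(line, corner_hz=4000.0, extra_cut_db=-3.0):
    """Apply an extra gain offset to all points at or above corner_hz."""
    pairs = [p.split() for p in line.split("GraphicEQ:", 1)[1].split(";") if p.strip()]
    out = []
    for f, g in pairs:
        f, g = float(f), float(g)
        if f >= corner_hz:
            g += extra_cut_db
        out.append(f"{f:g} {g:.1f}")
    return "GraphicEQ: " + "; ".join(out)

print(adjust_highs(hd650_line))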
 
Jan 9, 2023 at 6:29 AM Post #140 of 165
Read Jaakko's tutorial "Equalizing Headphones the Easy Way".
https://medium.com/@jaakkopasanen/make-your-headphones-sound-supreme-1cbd567832a9

You will learn how to convert the graph image into a dataset which you can use in AutoEq to generate a custom EQ.

The tutorial is a little dated and does not give a recent AutoEq command that will work with the latest version of AutoEq. Visit the AutoEq GitHub page to get the latest command examples. Being able to create the dataset from the graph image will be very helpful, though.
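For reference, a small sketch of the "graph image to dataset" step, under a couple of assumptions: the digitizer exports plain x,y CSV points, and AutoEq's measurement CSVs use frequency,raw columns (which --standardize-input then resamples onto AutoEq's own grid). File names below are placeholders.

Code:
import csv

def digitized_points_to_autoeq_csv(in_path, out_path):
    """Convert 'x,y' points traced off a graph into a frequency,raw CSV."""
    points = []
    with open(in_path, newline="") as fh:
        for row in csv.reader(fh):
            try:
                points.append((float(row[0]), float(row[1])))
            except (ValueError, IndexError):
                continue  # skip headers or malformed rows
    points.sort()  # ascending frequency
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["frequency", "raw"])
        writer.writerows(points)

# Hypothetical file names: traced HD 590 points in, an AutoEq-style
# measurement CSV out (a folder you then pass with --input-dir).
digitized_points_to_autoeq_csv("hd590_digitized.csv",
                               "my_input/590/Sennheiser HD 590.csv")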
 
Jan 9, 2023 at 10:59 PM Post #141 of 165

Thanks for the push in the right direction. The new commands were confusing, but I managed to figure it out. I'm not sure if I did it correctly, but following the example I made a 2019v2 EQ with a 6 dB bass boost, just like in the example.

Seeing this was one of the coolest things I have done on a computer in a while, lol. Think I have enough dots?
590Dotgraph.png


Good thing I studied CompSci and work in IT for a living; I don't know how anyone would figure this out if they weren't heavily into the command line and didn't understand concepts like Git.
 
Jan 9, 2023 at 11:03 PM Post #142 of 165
So this is the graph I started with:
no-compression.png


and this is what I came up with on the first try:

590.png
 
Jan 10, 2023 at 12:19 AM Post #143 of 165
:tada: AutoEq just got a new and improved parametric eq optimizer :tada:

Thank you for this amazing work!!!
 
Jan 10, 2023 at 12:41 AM Post #144 of 165
You did well, though I believe the 2019 V2 compensation is for IEMs. You should try again with an 'Over Ear' compensation to get the correct EQ for your Senn HD 590s. I think you'll have better results after that. I'm far from being super proficient with this stuff but I'm glad I could help.
 
Jan 10, 2023 at 1:32 AM Post #145 of 165

Lol, I was just coming here to say I realized I used the in-ear target, not the over-ear one. I ran everything again with the 2018 over-ear target and the results were pretty similar.

I also found one other set of measurements from a different site and ran it again.

It's cool that I got this all figured out tonight, but there's something about the EQ I don't like as much on the 590s vs my other cans. It just sucks a lot of the energy out of the music at 100-300 Hz. I'm no audio engineer, but on stuff I've heard 1000x, like Nirvana - Scentless Apprentice, it sounds like the drums have had some of the life sucked out of them.

Nirvana - In Utero was one of my favorite CDs as a teen, so I have heard it on a ton of different speakers and headphones, and I have never heard the drum section sound like that. It could also be that I have no idea what I am doing and messed the commands up:

Code:
python __main__.py --input-dir="my_input/590" --output-dir="my_output/590" --compensation="compensation/harman_over-ear_2018_wo_bass.csv" --parametric-eq --ten-band-eq --convolution-eq --bass-boost=6 --standardize-input

590.png
 
Jan 13, 2023 at 12:21 PM Post #146 of 165
I just set up AutoEQ in a Docker container to play around with some extensive EQ.
Since I found out about convolution filters, I think that making one headphone sound like another could get pretty close results.

Especially since I have "similar" headphones like the Meze Elite and Empyrean, which basically share the same chassis, earcups, etc., I wonder how close it will get in the end. Who knows, maybe I can even sell one of them afterwards.

Technically everything seems to work fine; however, I still have some trouble understanding the syntax, what AutoEQ actually does, and how I have to use it to get my desired results. I hope someone can chime in and help me.

Let's take an example:
"Making an HD 800 sound like an HD 650" from the example commands.

python -m autoeq
--input-dir="measurements/oratory1990/data/onear/Sennheiser HD 800" (here I usually take the measurements of the headphone I have and want to use)
--output-dir="my_results/Sennheiser HD 800 (HD 650)" (just the output directory; it could be anything, but the name should make sense for identification)
--compensation="compensation/harman_over-ear_2018_wo_bass.csv" (here I thought this is the target response, and I would have expected the HD 650 graph here instead of the Harman graph)
--sound-signature="results/oratory1990/harman_over-ear_2018/Sennheiser HD 650/Sennheiser HD 650.csv" (instead it is here at the sound-signature parameter, and I'm confused)
--parametric-eq --parametric-eq-config=8_PEAKING_WITH_SHELVES --ten-band-eq (the EQ type configuration, easy and straightforward)
--bass-boost=4 (I wonder if using the Harman target instead of the Harman target_wo_bass would make this one obsolete; why is it done like this?)
--convolution-eq --fs=44100,48000 (convolution filter sample rates, also easy; add more if higher rates are desired)

I hope you guys can help me. If you want to explain using an example, the ideal case would be making the Meze Elite sound like the Meze Empyrean using oratory1990's measurements.


Edit:
I have now created some EQ profiles to make the Elite sound similar to the Empyrean, and the resulting target graph is actually similar to the Empyrean's base graph. I still don't understand why the Harman compensation is used in the calculation.

Until my Wavelight Server comes out I can't use convolution filters; until then I have to use the parametric EQ in UAPP. Unfortunately, it seems the EQ in UAPP is not completely transparent.
 
Jan 14, 2023 at 1:51 AM Post #147 of 165
This is the description from the AutoEq GitHub:

Equalizing Sennheiser HD 800 to sound like Sennheiser HD 650 using pre-computed results. Both have been measured by oratory1990 so we'll use those measurements. Pre-computed results include 4dB of bass boost for over-ear headphones and therefore we need to apply a bass boost of 4dB here as well.

I believe the example is to have the HD 800 sound like the HD 650 which is already compensated to the Harman target. It is not to EQ the HD 800 to sound like a 'raw' HD 650. I hope that makes sense.

Maybe you could use a command like this for that purpose:

Code:
 python -m autoeq --input-dir="measurements/oratory1990/data/onear/Sennheiser HD 800" --output-dir="my_results/HD650" --compensation="measurements/oratory1990/data/onear/Sennheiser HD 650/Sennheiser HD 650.csv" --convolution-eq --parametric-eq --ten-band-eq --fs=44100,48000

About the bass boost, this is from the AutoEq GitHub page:

None of these targets have bass boost seen in Harman target responses and therefore a +4dB boost was applied for all over-ear headphones, +6dB for in-ear headphones and no boost for earbuds. Harman targets actually ask for about +6dB for over-ears and +9dB for in-ears but since some headphones cannot achieve this with positive gain limited to +6dB, a smaller boost was selected. Above 6 to 12 kHz data is filtered more heavily to avoid equalizing the narrow dips and notches that depend heavily on the listener's own ears.
 
Jan 14, 2023 at 4:55 AM Post #148 of 165
I believe the example is to have the HD 800 sound like the HD 650 which is already compensated to the Harman target. It is not to EQ the HD 800 to sound like a 'raw' HD 650. I hope that makes sense.
That's what I would think as well, but the results from that command show a target line which is similar to the raw line of the headphone given in "sound signature", not a line close to Harman.

I will compare it to the results I get when I put the target headphone's measurement directly in as the compensation.

None of these targets have bass boost seen in Harman target responses
Maybe this is outdated. We have the harman_wo_bass targets and a regular Harman target response as well.
and therefore a +4dB boost was applied for all over-ear headphones, +6dB for in-ear headphones and no boost for earbuds. Harman targets actually ask for about +6dB for over-ears and +9dB for in-ears but since some headphones cannot achieve this with positive gain limited to +6dB, a smaller boost was selected. Above 6 to 12 kHz data is filtered more heavily to avoid equalizing the narrow dips and notches that depend heavily on the listener's own ears.
Is there a +6 dB limit?
Does that mean that if the response of one headphone is more than 6 dB away from another's, we can't equalize them?
Edit: there is by default, but it can be adjusted with the --max-gain parameter.

This opened up more questions.
Also, does this mean that the generated results with +4 dB still lack 2 dB to reach the Harman target?

Sorry for all these questions.

Edit:
So I have now compared the results of a calculation in which I used the "compensation" parameter for the target response with one in which I used the "sound signature" parameter as the target response, and then compared both targets to the raw measurement of the headphone I want to imitate.
All 3 are incredibly close to each other. I created a graph in Excel with all 3 and they basically lie on top of each other; the differences are ~0.02 or smaller in the actual values.
1673697350986.png


This leads me to believe that when the "Sound Signature" parameter is used, the compensation parameter is ignored?
@jaakkopasanen, as the creator of the tool, can you chime in on what the differences between the two usages are?
Also, does the "equalized" graph show the response when the parametric EQ is applied or when the convolution filter is applied?
Since the different EQ variants differ slightly, the equalized result does as well.

And one more question regarding USB Audio Player Pro (UAPP):
Is the "analog bell" the equivalent of "peak"?
There is also a digital bell; which one should I use to get the closest results?
Edit 2: the analog bell smooths things over, which ends up "putting a veil" on everything.
Therefore the digital bell seems much more transparent for this purpose.
 
Jan 14, 2023 at 10:12 AM Post #149 of 165
Nice. Hope @jaakkopasanen will answer your questions.
 
Jan 15, 2023 at 12:47 PM Post #150 of 165
The "sound" of a headphone is its deviation from neutral and this in AutoEq can be seen in the red error curve. Sound signature picks up the error data from CSV and adds that to the compensation. So you use the headphones you have as input, the headphones you wish to imitate as sound signature and use a normal compensation (Harman target). The bass needs to match whatever was in the results of the headphone you're simulating. This scheme allows you to simulate headphones measured on a different rig, although the usual disclaimer of rigs aren't compatible and you can transform measurements made with one to another still apply. If both measurements have been made with compatible rigs, then you could simply use these headphone you wish to imitate as the compensation.
 
