Testing audiophile claims and myths
Oct 18, 2018 at 9:27 AM Post #9,721 of 17,589
[Just to note that I am NOT presenting this as "scientifically verified data".... you asked - so I respond.]

The most recent DACs that I had the opportunity (and the inclination) to compare directly with any degree of care were:
- a Wyred4Sound DAC2 (the original version; not the later various updated versions)
- one of our Emotiva DC-1 units

The Wyred4Sound unit used the Sabre DAC (I believe an ESS9018; it was their top part when that DAC was designed)...
The Wyred4Sound also offered a choice of five or six different filters (we were using the most normal seeming one).
The DC-1 uses an Analog Devices AD1955 DAC and an AD1896 ASRC (it offers only the DAC's internal filter).
Both were connected to the same digital source... and the levels were matched.
I don't have the specs for either DAC handy, but both certainly have THD, IMD, S/N, and frequency response specs that are all "good enough that they should be audibly perfect".
I should also note that I believe the Wyred4Sound uses custom filters rather than the ones included with the Sabre DAC; and one or two of them are apodizing filters that do introduce significant frequency response alterations.
We used one of the ones specified to be flat.
However, if you dig out the manufacturer's data sheets on both DACs, you will see that the impulse responses on their filters are visibly quite different.
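For anyone curious what "visibly quite different" means here, the effect is easy to reproduce. These are NOT either DAC's actual filters - just a numpy sketch of two generic windowed-sinc lowpass filters with the same cutoff, showing how differently they can ring:

```python
import numpy as np

def windowed_sinc(num_taps, cutoff, window):
    """Linear-phase lowpass FIR: a sinc prototype shaped by the given window.
    cutoff is normalized so that 1.0 = Nyquist."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(cutoff * n) * window(num_taps)
    return h / h.sum()  # normalize to unity gain at DC

# Two hypothetical reconstruction filters with the same 0.9*Nyquist cutoff:
sharp = windowed_sinc(127, 0.9, np.ones)       # rectangular window: sharp cutoff, long ringing
gentle = windowed_sinc(127, 0.9, np.blackman)  # Blackman window: gentler cutoff, short ringing

def taps_above(h, db=-60.0):
    """Count impulse-response taps still above `db` relative to the peak tap."""
    mag = np.abs(h) / np.abs(h).max()
    return int(np.sum(mag > 10 ** (db / 20)))

# The rectangular-windowed filter keeps far more of its taps above -60 dB:
print(taps_above(sharp), "vs", taps_above(gentle))
```

Plot the two arrays and the difference is exactly the sort of thing you see on the data-sheet impulse-response pages.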

There were only two of us present, so there was no opportunity to invite someone else to run the switch.
We had two different amplifiers, a variety of music, and three different sets of speakers, with which to try them (we were auditioning both speakers and the DACs).
The differences were NOT subtle at all.
In fact, we both agreed that the differences were about as obvious as the differences between the various sets of speakers.
(And, no, we did not test both DACs to confirm that both were operating up to spec.)

Of course, NOT HAVING BEEN THERE, feel free to insist that "we must have been imagining what we heard".
Incidentally, in terms of bias, we both expected to notice a slight difference, but were both surprised about the magnitude of the difference.
We agreed that SUBJECTIVELY the difference seemed about equivalent to a boost of about 1.5 dB, centered around 5-7 kHz (on the part of the Sabre DAC).
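(For reference, a boost of that shape is trivial to model. This is NOT a measurement of either unit - just a sketch using the standard RBJ-cookbook peaking EQ, with the 6 kHz center, +1.5 dB gain, and Q of 1 picked to match our purely subjective estimate:)

```python
import numpy as np

def peaking_eq_gain_db(f, f0=6000.0, gain_db=1.5, q=1.0, fs=48000.0):
    """Magnitude response in dB at frequency f of an RBJ-cookbook peaking EQ.
    f0/gain_db/q are stand-ins for the subjective "1.5 dB around 5-7 kHz"."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    num = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    z = np.exp(-1j * 2 * np.pi * f / fs * np.arange(3))  # [1, z^-1, z^-2]
    return 20 * np.log10(abs(num @ z) / abs(den @ z))

print(round(float(peaking_eq_gain_db(6000)), 2))  # full +1.5 dB at the center
print(round(float(peaking_eq_gain_db(1000)), 2))  # nearly flat well below the peak
```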

Fine, we agree that these were subjective impressions. Where we disagree is where to go from there.

You seem convinced that given the large subjective difference we should take it as given that the subjective difference has basis in the signal and we should analyze the signal for the cause. *

The rest of us would say: if the subjective difference is as large as you say, you guys should have no difficulty turning out a statistically significant result in a proper volume-matched double blind test. And given that any factor we theorize / distill from the differences in the DAC signals as the possible cause would have to be vetted by such DBTs anyway to become scientific fact, you may as well do us the favor of vetting your findings that way before those two devices are supplied as raw data for any kind of psychoacoustic analysis...

To those who say DBT isn't sensitive enough, I say: come up with a repeatable, falsifiable way of detecting these unconscious differences as *sensed by humans* that doesn't simply consist of "let's just forget about not letting the subject know which is which in the first place", and win a Nobel prize. Now THAT would be pushing forward the state of the art.
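(The scoring side of such a test, at least, is not the hard part. Assuming a standard ABX protocol, the exact binomial math fits in a few lines - the 13-of-16 session below is hypothetical:)

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided exact binomial p-value: the probability of getting at least
    `correct` answers right out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 13 of 16 ABX trials identified correctly.
p = abx_p_value(13, 16)
print(f"p = {p:.4f}")  # about 0.011 - unlikely to be pure guessing
```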

*Sorry, I know you said that you don't present your findings as scientific fact--but given the argument that has gone down in the past several pages I consider this response fair game...
 
Oct 18, 2018 at 10:11 AM Post #9,722 of 17,589
In my experience, if you can see a difference in the IRs you can usually hear it, too. As such, I'm not inclined to doubt Keith's result (even though it's more or less anecdotal) - as he's said, why would DAC manufacturers bother building in extra filters if there was no plausible chance that anyone could tell a difference? Since the consumer won't be dealing with filter selection, they know their audience is audio pros who are going to be doing some amount of serious, if not statistically rigorous, evaluation. If they knew it was just marketing hokum, I think Wolfson would probably stop at 10 or 15 filters; 20 is just overkill :wink:

All that said, a solid, irrefutable DBT (one way or the other) would certainly be a tonic for this forum.
 
Oct 18, 2018 at 10:27 AM Post #9,723 of 17,589
I'm not suggesting that you go anywhere "from here".
(If you know me well enough to value my judgment, then you should probably consider my claim, and that I may know something you don't... while making up your own mind.)
I was simply answering BigShot's question.

I agree with you.
I think it would be great if someone would sponsor, fund, and carry out a properly designed, properly executed, and properly scaled, test to determine exactly how many people can hear a difference.
I think it would make a great topic for a college research paper or thesis.
And, if I was still in college, I might actually consider doing it.
I also include "the audibility of high-res vs CD quality music" on my list of such things.

The "bottom line", as usual, is simply that nobody who has the budget to do so has the inclination to do so.
(The main reason that pharmaceutical companies run all those studies, and publish many of the results, is simply that they're required to, or that they actually expect it to increase sales.)

I've mentioned this before, but it's worth repeating.....

Doing these sorts of studies properly is expensive and time consuming (and, face it, people will dispute the results anyway).
There simply aren't enough academics or hobbyist audio societies with enough interest and funding to do them properly.
And, quite honestly, as "pure science", there are plenty of other things that people are a lot more interested in - and a lot more willing to fund.

Also, to be quite blunt, there is very little financial incentive for a commercial company to do so.
Remember that companies are in business to do business... and generally not "to advance science".

Let me offer you a very trivial business analysis of the exact claim I offered...

I was comparing a DAC that Emotiva sold for between $500 and $600 for most of its product life to one sold by one of our competitors for about twice that much.
We have no strong incentive to "prove" that our product is audibly better than one costing twice as much; we're quite content if you believe they sound the same (ours is half the price of theirs).
It doesn't benefit us in the least if you buy our DC-1 "because we proved it's better" instead of "because you believe they're both the same and ours is a much better deal".
We have no incentive to pay a lot of money to fund a study showing that ours is better; we've already "won by default".
And, for the folks who are already totally convinced that the other product is better, "sound unheard".... odds are they won't read the study, or won't believe it, anyway - so we gain nothing there.

Now, from their side of the fence, it would only benefit THEM if they could prove that a lot of people find their product clearly and significantly better.
If the study were to prove that nobody heard a difference at all - they "lose on price".
If the study were to prove that people heard a difference, but didn't express a clear preference - they lose on price.
If the study were to prove that people heard a difference, and a few people liked their product a little bit better - they still lose on price (at double the price, a few people finding it "a little bit better" would count as a "loss").
And, finally, if the study were to find that people preferred the sound of our DC-1, yet again, they lose.
Therefore, unless they honestly believe that ENOUGH people will find their product CLEARLY BETTER, they also have no incentive to fund a study.
There is simply no way that funding such a study is likely to make them enough profit to justify the cost.
(And they most certainly have no incentive to run a study, and disclose the results, unless they clearly "win".)
It is far more profitable for them to have at least some people assume that their product is better "because it's more expensive" or because they find the logic they use in their advertisements credible.

Therefore, there's very little chance that the study would actually benefit EITHER of our companies in terms of sales.

And, sadly, likewise, funding studies like this doesn't even benefit the audio magazines.
The few remaining magazines these days sell issues by publishing interesting articles, and by encouraging debate and controversy.
If they were to run a totally awesome study, and publish the results, it would sell a lot of issues.... that month.
But, then, what would they have to talk about next month?

Even worse, they have a very strong bias to find differences, whether they exist or not.
Think about it... which would sell more magazines?
1) "After careful study, we've found that all DACs sound the same, so buy whichever one you like." (Now we're all going home since there's nothing more to write on the subject.)
2) "Next month we'll start an extensive story, in ten segments, with tests and opinions by industry experts, about all the exciting differences between DACs."

This is the same reason why you'll never see a real study about "the audibility of different interconnect cables".....
Because neither the manufacturers of $500 interconnects, nor the makers of $5 interconnects, nor the audio magazines, has a solid financial incentive to sponsor one.

 
Oct 18, 2018 at 10:48 AM Post #9,724 of 17,589
Agree with everything Keith said. Case in point: Bose is one of the top selling audio manufacturers in the world, if not THE top, by revenue. They don't even publish meaningful specs for many of their products, let alone fund (public*) studies to quantify the audibility of things.

I think the only hope of advancing this debate materially is for a bunch of hobbyists to get together and run a Kickstarter, and use the money to hire some PhD students to run a proper study. You'll probably want at least 100 people off the street all paid about $200 each (say for a half day spent in a lab listening to stuff), plus another $50K or so for the equipment and another $50K or so to pay the scientists. I think this is certainly achievable, much dumber things have raised more money.

Maybe this *exact* thing is not the *only* hope, but waiting for audio companies or publications to do it, and grousing about it on the internet, are not going to change anything. Conducting a proper DBT test is a ton of work. It took me an entire day to conduct a (not proper at all, really shaky) DBT test of some whiskey once. The test cost hundreds of dollars and about a dozen people were involved. It took a week to plan. It took hours to conduct. And this was just for fun! Imagine what's involved when you actually need statistical significance about something that's hard to notice, and people actually give a crap about the results!
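To put rough numbers on "statistical significance about something that's hard to notice": assume a listener who is right only 60% of the time (an arbitrary stand-in for a subtle difference, not a measured figure). An exact binomial power calculation shows how many same/different trials each listener would need before a real effect reliably shows up:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def trials_needed(p_true=0.6, alpha=0.05, power=0.8):
    """Smallest trial count such that a listener who is right `p_true` of the
    time passes an exact binomial test at level `alpha` with probability
    `power`. p_true = 0.6 is an assumed, not measured, detection rate."""
    for n in range(10, 1000):
        # smallest pass mark keeping the false-positive rate at or below alpha
        k = next(k for k in range(n // 2, n + 1) if binom_tail(n, k, 0.5) <= alpha)
        if binom_tail(n, k, p_true) >= power:
            return n

print(trials_needed())  # well over 100 trials per listener for a subtle effect
```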

*I am sure Bose does some amount of research on what works and doesn't in acoustics and digital audio... they just aren't sharing.

NB: The winning bourbon was Corner Creek. Very Old Barton 100 also was a surprise favorite. Everyone agreed that after 6 or so 1/4oz portions, it was very hard to tell the difference between anything.
 
Oct 18, 2018 at 11:46 AM Post #9,725 of 17,589
A Kickstarter project just might work.... and it might be interesting to try.

Just an aside....

This whole discussion reminds me of a somewhat parallel situation (involving entirely different things).

There's a place called Oak Island (I forget exactly where it is located; I think off Nova Scotia) - where there is widely believed to be a significant amount of buried pirate treasure.
Essentially, on this small island, there is a big hole.....
It's a pit something like a hundred feet deep... and full of water... and assorted debris.
The story started when someone found a hint that there was pirate treasure buried at this particular spot.
They financed a "dig", and found some suggestive stuff (heavy old wood platforms buried every ten feet as you go down).
Eventually, when they thought they were getting close, the hole flooded and partly collapsed.
It was later determined that sea water was filling the hole... and it was widely believed that "ancient pirates had dug water tunnels as booby traps".
Since then, several different people and groups have purchased the island, and attempted to dig out the treasure.
The hole has been excavated several times, has collapsed multiple times, and several people have died in the attempt over the years.
There have been lots of books written about Oak Island, and TV specials, and even several SEASONS of a TV show called "The Curse of Oak Island".
A few drill shafts have been sunk... one or two of which have retrieved small gold items (really small).

The bottom line, and my point, is that I have little doubt that a well financed operation could easily dig out the hole and settle the "mystery".
However, there is only believed to be something like ten million dollars worth of treasure down there.
And, to put it bluntly, while that sounds like a lot to a guy with a shovel, it isn't anywhere near enough to justify a full scale underwater mining excavation.
In fact, it's clear that, over the years, FAR more money has been spent, and made, on books, TV specials, TV shows, and mystery tours than the treasure could POSSIBLY be worth.
In other words, the "mystery" is worth FAR more than the solution could ever possibly be.
And actually SOLVING the mystery would collapse an entire cottage industry.

I believe that to be the case here.

Face it, if someone were to do an extensive study, and prove beyond any doubt that no living human could tell the difference between fifty popular DACs...
It would NOT convince people who believe otherwise...
And someone will always find some flaw in the test methodology (or claim the test was "fixed")...
And it would NOT stop a company introducing a new DAC next year, after the study was completed, and claiming that "finally, this time, this one actually sounds different"...
At best, with luck, if a study were to find that there clearly are audible differences, it would advance the science, and provide "something new to work on"....
(But, based on the number of products that come to market every year anyway, claimed to solve perceived issues that already exist, this would be of limited "commercial value".)
Therefore, it seems to me something like a Kickstarter campaign is virtually the only way such a study will ever actually happen.

 
Oct 18, 2018 at 11:56 AM Post #9,726 of 17,589

Why don't people start by sampling the DACs' output and compare them?
 
Oct 18, 2018 at 12:38 PM Post #9,727 of 17,589
All that said, a solid, irrefutable DBT (one way or the other) would certainly be a tonic for this forum.

You're talking about a mythical beast there... Even if someone did a solid test, it wouldn't be irrefutable. You've been around Sound Science long enough to know the drill.... Someone does a test and reports back to us on it. If someone doesn't like the results, they question the testing methodology. The discussion shifts from the results to proper testing procedures, the difficulties in doing "proper" double blind tests, complex mathematics regarding probability, then the conversation isn't about sound quality any more and everyone gets bored and moves on. In a week or two, the same misconceptions raise their head again and the cycle begins again.

In order for proof to be convincing, a person has to be capable of being convinced. THAT'S the problem here, not the testing methodology.
 
Oct 18, 2018 at 12:47 PM Post #9,728 of 17,589
You're talking about a mythical beast there... Even if someone did a solid test, it wouldn't be irrefutable. You've been around Sound Science long enough to know the drill.... Someone does a test and reports back to us on it. If someone doesn't like the results, they question the testing methodology. The discussion shifts from the results to proper testing procedures, the difficulties in doing "proper" double blind tests, complex mathematics regarding probability, then the conversation isn't about sound quality any more and everyone gets bored and moves on. In a week or two, the same misconceptions raise their head again and the cycle begins again.

In order for proof to be convincing, a person has to be capable of being convinced. THAT'S the problem here, not the testing methodology.

Sure, "irrefutable" doesn't exist, and that's not limited to audio either. Maybe a better word would be "unambiguous". I think that's something we can realistically hope for. People may invent their own ambiguity, but if the results themselves don't suggest it, that would be a victory.
 
Oct 18, 2018 at 12:50 PM Post #9,729 of 17,589
Why don't people start by sampling the DACs' output and compare them?

Even that requires more resources than the average consumer has. In order to compare 2 DACs, you need an ADC which is known to have a great deal better performance than the devices being tested, probably a lot more expensive. Most people don't go out and buy $5K ADCs to test $500 DACs, and nobody who DOES have that equipment has the inclination to do such a test, as Keith has outlined.
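For anyone who does have the gear, the basic recipe ("null testing" - capture both outputs, align them, level-match, subtract) can be sketched in a few lines. The captures below are synthetic - random noise standing in for music, with a made-up delay, level offset, and ADC noise floor:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096

# Stand-ins for two ADC captures of the same material from two DACs:
# identical except for a 3-sample delay, a -0.2 dB level offset, and ADC noise.
music = rng.standard_normal(N)
cap_a = music
cap_b = np.roll(music, 3) * 10 ** (-0.2 / 20) + 1e-4 * rng.standard_normal(N)

# 1) Align: find the lag that maximizes the cross-correlation.
lag = int(np.argmax(np.correlate(cap_b, cap_a, mode="full"))) - (N - 1)
b_aligned = np.roll(cap_b, -lag)

# 2) Level-match with a least-squares gain, then subtract ("null").
gain = np.dot(b_aligned, cap_a) / np.dot(cap_a, cap_a)
residual = cap_a - b_aligned / gain

# 3) Depth of the null, in dB relative to the signal.
null_db = 20 * np.log10(np.linalg.norm(residual) / np.linalg.norm(cap_a))
print(f"lag = {lag}, null depth = {null_db:.1f} dB")  # roughly -80 dB: only the noise floor is left
```

With real captures you'd also need sample-rate drift correction, which is where it stops being a few lines.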
 
Oct 18, 2018 at 1:00 PM Post #9,730 of 17,589
They have.... that's what ALL of the various measurements and graphs are... but we've gone way beyond that.
Most DACs are similar in a number of ways... and different in a number of ways... to varying degrees.
If you were hoping to see that "they all measure exactly the same" then you're destined to be disappointed.
Nobody is suggesting that all DACs "measure the same in every measurable way"; NOBODY is disputing that the outputs are in fact different.
The disagreement is about whether SPECIFIC differences are AUDIBLE or not.

There are many different ways of "sampling" or "measuring" something.

A "frequency response specification number" is one way of measuring frequency response...
An actual graph of the frequency response as a plot of amplitude vs frequency is another.
A frequency spectrum plot is yet another.
And each of those particular methods tells us certain things very clearly; and totally obscures other things.

Likewise, S/N is a nice simple metric for "how noisy something is"...
But actually plotting the noise SPECTRUM is more accurate, and more informative...
And something with a lot of noise at 60 Hz is going to sound very different than something with a lot of noise at 5 kHz.
It is common established practice to make audio devices and systems sound audibly quieter by "shifting the noise spectrum to frequency ranges we are less sensitive to".
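That's exactly what the standard A-weighting curve encodes. A quick sketch of the IEC 61672 formula shows how differently equal noise power at 60 Hz and at 5 kHz gets counted:

```python
import math

def a_weight_db(f):
    """A-weighting (IEC 61672) in dB, normalized to 0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.0

print(round(a_weight_db(60), 1))    # mains-hum region: discounted by over 25 dB
print(round(a_weight_db(5000), 1))  # presence region: counted at nearly full weight
```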

And, likewise, there are several different ways of measuring ringing and impulse response...
One is the popular "impulse response graph"...
Another is to plot the output spectrum over time after applying an impulse signal as the input...
Another would be to describe the frequency at which the output rings, and the time it takes for the amplitude of the ringing to drop by 60 dB...
Or you might describe the ringing frequency and then specify how many cycles of ringing occur after the input signal stops... (until it is "no longer visible on an oscilloscope").
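Those last two descriptions are directly related, by the way: for an exponentially decaying resonance, the Q fixes the decay time constant (tau = Q / (pi * f)), so the -60 dB time and the cycle count fall straight out. The 20 kHz ring at Q = 10 below is purely illustrative:

```python
import math

def ring_decay(f_ring, q):
    """For an exponential decay envelope exp(-t/tau) with tau = Q/(pi*f),
    return the time to fall 60 dB (in ms) and the ring cycles that fit in it."""
    tau = q / (math.pi * f_ring)
    t60 = tau * math.log(1000)  # -60 dB means the envelope has fallen to 1/1000
    return t60 * 1e3, t60 * f_ring

ms, cycles = ring_decay(20000.0, 10.0)
print(f"{ms:.3f} ms to -60 dB, {cycles:.1f} ring cycles")  # ~1.1 ms, ~22 cycles
```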

Here's a link to the data sheet for the Wolfson 8741 DAC chip:
https://www.mouser.com/ds/2/76/WM8741_v4.3-1141934.pdf

You will see that several pages are dedicated to "looking at the sampled output" in various ways when various configuration options are chosen.
You will see that they provided frequency response graphs and numbers for each of the filter choices... and all of them are different.
They also provided numerical data for many of what they consider to be the important specifications.
You will also note that all of their measurements were performed under specific conditions, and using specific test signals... so, if you measure them differently, odds are you'll get different results.
And Wolfson chose not to include the oscilloscope images of impulse response that other folks like to examine.
And, of course, these are all measurements of the chip itself... and the output of a complete DAC product is also determined by all of the associated circuitry connected to it.
(You can feel free to sample and measure them all yourself... but I doubt you'll catch Wolfson out on anything important.)

If you look at that data sheet... I'll refer you to page 54.
Compare figures #30 and #31 to figures #32 and #33.
They show the output amplitude responses of two different filter choices.
Would you expect to be able to hear a difference between them or not?
If so, which one would you expect to sound better, or more accurate?
(See.... it's not as simple as many people think.)



 
Oct 18, 2018 at 1:07 PM Post #9,731 of 17,589
Great! Now there's something to work with! I'm going to cut out the stuff where you try to figure out the reasons why there is a difference because that needs to be determined. Excuse me if I reorganize your quote a little bit.

The most recent DACs that I had the opportunity to compare directly with any degree of care were:

- a Wyred4Sound DAC2 (the original version; not the later various updated versions)
- one of our Emotiva DC-1 units.

Both were connected to the same digital source... and the levels were matched.
There were only two of us present, so there was no opportunity to invite someone else to run the switch.
We had two different amplifiers, a variety of music, and three different sets of speakers, with which to try them (we were auditioning both speakers and the DACs).
(And, no, we did not test both DACs to confirm that both were operating up to spec.)

The differences were NOT subtle at all. In fact, we both agreed that the differences were about as obvious as the differences between the various sets of speakers. Incidentally, in terms of bias, we both expected to notice a slight difference, but were both surprised about the magnitude of the difference. We agreed that SUBJECTIVELY the difference seemed about equivalent to a boost of about 1.5 dB, centered around 5-7 kHz (on the part of the Sabre DAC).

I don't have the specs for either DAC handy, but both certainly have THD, IMD, S/N and frequency response specs that are all "good enough that they should be audibly perfect". However, if you dig out the manufacturer's data sheets on both DACs, you will see that the impulse responses on their filters are visibly quite different.

OK. We will assume that your testing procedures were good enough to give an accurate result. You understand the process of setting up a listening test, and you have a proven record of fairness here that makes that assumption not a huge leap.

My first question is, which one of these DACs do you suspect is transparent, and which one do you think is colored? Have you compared either of them to other DACs and found them to sound the same?

The question of published specs and independent measurements (if they exist) is the next thing to look into. A 1.5 dB difference should be clearly measurable. The fact that it is up in the 5-7 kHz range (an area that isn't a core range of hearing or an important range in recorded music) means that if it sounds like 1.5 dB, it is likely a bit more than that; in that range, it would take a sizable boost to be noticeable. That would be the top end of a snare drum, and planted squarely on cymbals. That range would be a likely suspect for a manufacturer trying to goose the response to make their product stand out as "brighter/sharper" sounding without dragging in shrillness. That is worth checking out if it is true.

I'm getting ready for work right now so I don't have time, but when I get a spare moment, I'll google up the published specs and any measurements I can find. We'll see if the boost you're talking about shows up in the specs. If this exists, it should be measurable. Once we figure that out, we can go to the next question.

Thanks for something specific to chase down!
 
Oct 18, 2018 at 1:08 PM Post #9,732 of 17,589
Even that requires more resources than the average consumer has. In order to compare 2 DACs, you need an ADC which is known to have a great deal better performance than the devices being tested, probably a lot more expensive. Most people don't go out and buy $5K ADCs to test $500 DACs, and nobody who DOES have that equipment has the inclination to do such a test, as Keith has outlined.

Then consider me an exception.
 
Oct 18, 2018 at 1:14 PM Post #9,733 of 17,589
Most people don't go out and buy $5K ADCs to test $500 DACs

In my comparisons, I haven't found any correlation between price and audible performance, at least when it comes to solid state electronics.
 
Oct 18, 2018 at 1:35 PM Post #9,734 of 17,589
They have.... that's what ALL of the various measurements and graphs are... but we've gone way beyond that.
Most DACs are similar in a number of ways... and different in a number of ways... to varying degrees.
If you were hoping to see that "they all measure exactly the same" then you're destined to be disappointed.
Nobody is suggesting that all DACs "measure the same in every measurable way"; NOBODY is disputing that the outputs are in fact different.
The disagreement is about whether SPECIFIC differences are AUDIBLE or not.

There are many different ways of "sampling" or "measuring" something.

A "frequency response specification number" is one way of measuring frequency response...
An actual graph of the frequency response as a plot of amplitude vs frequency is another.
A frequency spectrum plot is yet another.
And each of those particular methods tells us certain things very clearly; and totally obscures other things.
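For what it's worth, the relationship between those representations is easy to show in code. This sketch (plain Python, with a made-up 3-tap filter rather than any real DAC's filter) evaluates a frequency-response "graph point" directly from an impulse response:

```python
import math

def magnitude_db(h, f, fs):
    """Magnitude response (dB) of impulse response h at frequency f, via the DTFT."""
    w = 2 * math.pi * f / fs
    re = sum(tap * math.cos(w * n) for n, tap in enumerate(h))
    im = -sum(tap * math.sin(w * n) for n, tap in enumerate(h))
    return 20 * math.log10(math.hypot(re, im))

h = [0.25, 0.5, 0.25]                    # toy 3-tap lowpass, NOT any real DAC filter
print(magnitude_db(h, 0, 48_000))        # 0 dB at DC
print(magnitude_db(h, 12_000, 48_000))   # about -6 dB at a quarter of the sample rate
```

The same three taps could instead be plotted directly (the "impulse response graph") or swept across frequencies (the amplitude-vs-frequency plot) — same device, different pictures, each hiding what the others show.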

Likewise, S/N is a nice simple metric for "how noisy something is"...
But actually plotting the noise SPECTRUM is more accurate, and more informative...
And something with a lot of noise at 60 Hz is going to sound very different than something with a lot of noise at 5 kHz.
It is common established practice to make audio devices and systems sound audibly quieter by "shifting the noise spectrum to frequency ranges we are less sensitive to".
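That practice (noise shaping) can be sketched in a few lines. This is a textbook first-order error-feedback loop with toy values — not any particular product's implementation — and it shows the quantization noise moving out of the low band:

```python
import math

def quantize(x, step):
    return step * round(x / step)

fs, f0, n, step = 48_000, 997, 2_048, 2 / 256   # toy ~8-bit quantizer; 997 Hz tone
signal = [0.5 * math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]

plain_err, shaped_err, e = [], [], 0.0
for x in signal:
    plain_err.append(quantize(x, step) - x)     # ordinary quantization error
    q = quantize(x + e, step)                   # feed the previous sample's error back
    e = (x + e) - q                             # in before quantizing (1st-order shaping)
    shaped_err.append(q - x)                    # output error = e_prev - e_new: high-passed

def band_power(sig, k_lo, k_hi):
    """Crude DFT power summed over bins k_lo..k_hi-1."""
    m, total = len(sig), 0.0
    for k in range(k_lo, k_hi):
        re = sum(s * math.cos(2 * math.pi * k * i / m) for i, s in enumerate(sig))
        im = sum(s * math.sin(2 * math.pi * k * i / m) for i, s in enumerate(sig))
        total += re * re + im * im
    return total

low_plain = band_power(plain_err, 1, 40)        # roughly 23 Hz to 915 Hz
low_shaped = band_power(shaped_err, 1, 40)
print(low_shaped < low_plain)                   # the low-band noise went down
```

Both versions have "quantization noise"; a single S/N number might even favor the unshaped one (shaping raises total noise power), while the spectrum shows why the shaped one sounds quieter.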

And, likewise, there are several different ways of measuring ringing and impulse response...
One is the popular "impulse response graph"...
Another is to plot the output spectrum over time after applying an impulse signal as the input...
Another would be to describe the frequency at which the output rings, and the time it takes for the amplitude of the ringing to drop by 60 dB...
Or you might describe the ringing frequency and then specify how many cycles of ringing occur after the input signal stops... (until it is "no longer visible on an oscilloscope").
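The last two descriptions are easy to compute. A sketch using a made-up exponentially decaying ring (toy numbers, not measured from any actual filter):

```python
import math

fs = 48_000            # sample rate (Hz)
f_ring = 21_000        # assumed ringing frequency, just below Nyquist
tau = 0.0005           # assumed exponential decay constant (seconds)

# Time for the ringing envelope exp(-t/tau) to fall by 60 dB
# (an amplitude ratio of 10**(-60/20) = 0.001).
threshold = 10 ** (-60 / 20)
t60 = next(i / fs for i in range(fs) if math.exp(-(i / fs) / tau) <= threshold)

# Equivalent description: how many cycles of ringing occur before that point.
cycles = f_ring * t60
print(f"-60 dB after {t60 * 1000:.2f} ms (~{cycles:.0f} cycles at {f_ring} Hz)")
```

The analytic answer is tau x ln(1000), about 3.45 ms here; "time to -60 dB" and "number of cycles of ringing" are just two ways of stating the same behavior.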

Here's a link to the data sheet for the Wolfson 8741 DAC chip:
https://www.mouser.com/ds/2/76/WM8741_v4.3-1141934.pdf

You will see that several pages are dedicated to "looking at the sampled output" in various ways when various configuration options are chosen.
You will see that they provided frequency response graphs and numbers for each of the filter choices... and all of them are different.
They also provided numerical data for many of what they consider to be the important specifications.
You will also note that all of their measurements were performed under specific conditions, and using specific test signals... so, if you measure them differently, odds are you'll get different results.
And Wolfson chose not to include the oscilloscope images of impulse response that other folks like to examine.
And, of course, these are all measurements of the chip itself... and the output of a complete DAC product is also determined by all of the associated circuitry connected to it.
(You can feel free to sample and measure them all yourself... but I doubt you'll catch Wolfson out on anything important.)

If you look at that data sheet... I'll refer you to page 54.
Compare figures #30 and #31 to figures #32 and #33.
They show the output amplitude responses of two different filter choices.
Would you expect to be able to hear a difference between them or not?
If so, which one would you expect to sound better, or more accurate?
(See.... it's not as simple as many people think.)
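For anyone who wants to see where those pictures come from, here is a generic linear-phase lowpass — a plain Hann-windowed sinc, emphatically NOT the actual filters in the WM8741 — whose impulse response shows the symmetric pre- and post-ringing that the datasheet figures trade off against frequency response:

```python
import math

def windowed_sinc(num_taps, cutoff):
    """Generic linear-phase FIR lowpass; cutoff is a fraction of the sample rate."""
    m = num_taps - 1
    taps = []
    for i in range(num_taps):
        x = i - m / 2
        s = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        taps.append(s * (0.5 - 0.5 * math.cos(2 * math.pi * i / m)))  # Hann window
    g = sum(taps)
    return [t / g for t in taps]        # normalize for unity gain at DC

h = windowed_sinc(63, 0.25)
peak = h.index(max(h))                  # the "main event" sits at the center tap
pre_ring = max(abs(t) for t in h[:peak - 1])
print(peak, pre_ring > 0)               # ringing exists BEFORE the impulse too
```

A minimum-phase filter with the same magnitude response would push all of that ringing to after the impulse: an identical "frequency response graph," but a visibly different impulse response.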

Rather than looking at a theoretical filter simulation, I prefer looking at the DAC output in its environment.
With proper samples you can retrieve whatever you want: spectrum, phase, etc.
With samples and proper decimation you can also simply compare input and output, subtract them, and check the result against audibility levels.
I do not have your experience in audio. I started building digital filters and looking at my DAC's output out of my own curiosity.
I am lucky to have good instrumentation available at work, and lucky to understand measurements. Time is another issue.
Correlating data is not simple, as we both already mentioned.
But in your anecdotal case, a measurement of the DAC's output would have shown your frequency boost much more convincingly than any words.
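The subtract-and-compare ("null test") just described is simple to sketch. Toy signals here: a made-up 0.2% gain error plus a -80 dB third harmonic standing in for "whatever the DAC adds" (a real test would also need sample-accurate time alignment and clock-drift correction):

```python
import math

fs, n = 48_000, 4_800
ref = [math.sin(2 * math.pi * 1_000 * i / fs) for i in range(n)]    # the "input"
dut = [1.002 * r + 0.0001 * math.sin(2 * math.pi * 3_000 * i / fs)  # the "output":
       for i, r in enumerate(ref)]                                  # gain error + harmonic

rms = lambda s: math.sqrt(sum(x * x for x in s) / len(s))

gain = rms(dut) / rms(ref)                    # level-match first, as in any fair test
residual = [d - gain * r for d, r in zip(dut, ref)]
null_db = 20 * math.log10(rms(residual) / rms(ref))
print(f"null depth: {null_db:.1f} dB")
```

The level-matching removes the (inaudible-as-distortion) gain difference, so the residual exposes only what the device actually changed — here the -80 dB harmonic — which can then be compared against known audibility thresholds.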
 
Oct 18, 2018 at 2:55 PM Post #9,735 of 17,589
First of all, as I stated earlier in the thread, both DACs actually have very flat frequency response - according to spec.
I can vouch for the fact that the Emotiva DC-1 actually is flat within a small fraction of a dB (we build them, and we test a lot of them, they all pass that spec just fine).
Also, as far as I know, Wyred4Sound is a very "reputable" company... so I suspect that the DAC2 is also measurably as flat as they claim it is.
(I've also seen several reviews on various Wyred4Sound products, and they generally do meet their published specs.)
I wish I'd had an opportunity to confirm that on both - but I suspect that it would be confirmed that both really were quite flat.

You'll note that I specifically said that what I heard was "a difference that SUBJECTIVELY SOUNDED AS IF THERE WAS A BOOST OVER THAT RANGE OF FREQUENCIES".
I strongly suspect that, if you were to confirm the measurements, you would find that the frequency response on both is virtually identical, and the difference lies elsewhere.

And, in terms of grouping, I have found that many DACs that use various Sabre DAC chips sound this way to me...
While other brands of DAC chips sound more natural to me, and sound more similar to each other.

My theory, which is that it is due to differences in the impulse response of the filters, is based on two things.

First, there is plenty of precedent for similar effects in other areas that also involve perception.
For example, consider Photoshop, or any similar image editor... and, more specifically, its very popular "Unsharp Mask" sharpening feature.
This feature is used to make images APPEAR PERCEPTUALLY TO BE SHARPER... and virtually everyone agrees that, when images are "processed" with it, they APPEAR to be much sharper.
This is quite similar to the way the "sharpness" feature on many TVs works... although some modern ones involve more complex processing.
In actual fact, what it does is to detect borders between different colors or brightness levels, and create artificial "halos" at the boundary between them.
So, for example, if a light object borders on a dark one, a light halo is added at the edge of the bright object, and a dark halo is added at the edge of the dark object.
This artificially boosts the local contrast between the edges where they touch.
The result is a sort of optical illusion that, to humans, makes it APPEAR as if the edges and details are sharper.
In fact, the actual sharpness or focus of the image remains unaffected; it only "appears" to have become sharper.
It makes sense that a similar effect might work for the "edges" of audible details as well... making them "stand out more sharply".
(This is similar to what I perceive when I turn up the range of frequencies between 5 and 7 kHz - details seem to be exaggerated.)
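The mechanism is easy to demonstrate in one dimension. A toy sketch (a soft step edge and a crude 3-tap blur standing in for Photoshop's actual Gaussian kernel):

```python
# A soft edge from dark (0) to light (1).
signal = [0.0] * 8 + [0.25, 0.5, 0.75] + [1.0] * 8

def blur(s):
    """Crude 3-tap moving average (stands in for the Gaussian blur)."""
    return [(s[max(i - 1, 0)] + s[i] + s[min(i + 1, len(s) - 1)]) / 3
            for i in range(len(s))]

amount = 1.0
sharpened = [x + amount * (x - b) for x, b in zip(signal, blur(signal))]

# The edge now undershoots below 0 on the dark side and overshoots above 1
# on the light side -- the "halos" that make it LOOK sharper.
print(min(sharpened) < 0, max(sharpened) > 1)
```

Note that the overall brightness (the mean of the signal) is unchanged; only the local contrast at the transition is exaggerated — which is exactly the analogy being drawn to the "edges" of audible transients.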

Second, I have noticed this difference very often specifically with Sabre DACs (ESS)... and many other people characterize many products using Sabre DACs as sounding "grainy or etched".
To me, this sounds very much like a description of the audible equivalent of how images sharpened using unsharp mask appear visually.
If the process is over applied, images look "over sharpened", and almost as if the edges of individual objects have been "etched into the surface".
And, in their early product literature, ESS actually described, as part of their design process, "organizing focus groups to decide which filter parameters appealed to the most people".
In short, they basically said that, rather than design for the greatest accuracy, they based their design on "customer preference".

And, based on my somewhat extensive experience, while not everyone claims to hear a difference between Sabre DACs and other brands...
The majority of people who do claim to notice a difference tend to describe it the same way.
Those who dislike them say that "the Sabre DAC sounds grainy or etched compared to the other one".
Those who like them say that "the Sabre DAC sounds more detailed".
To me, these seem likely to be descriptions of the same phenomenon, differing in whether they are viewed as being positive or negative.

In other words, I do not believe that Sabre "gooses" the frequency response in the upper treble...
I believe they "goose" their filters in such a way that they cause "the audible boundaries" at the edge of impulses and other high frequency sounds to seem clearer and more prominent.
And, to me, with my human ears, this produces the same "perceived difference" as I would get by boosting that frequency range.
And, because this phenomenon only affects the edges of transients, it does not show up on steady state frequency response measurements.
(Much as applying Unsharp Masking doesn't technically alter the overall brightness of an image.)

And, yes, this should be something that can be determined scientifically with relative ease...
Compare the impulse response of various DACs...
Ask a large sample of people to listen to several DACs and describe how they characterize the sound...
See if there is a correlation between certain types of filter responses and certain descriptions...
(Perhaps I'm wrong, and the filter response won't correlate well at all, but some other characteristic will..... )

Great! Now there's something to work with! I'm going to cut out the stuff where you try to figure out the reasons why there is a difference because that needs to be determined. Excuse me if I reorganize your quote a little bit.



OK. We will assume that your testing procedures were good enough to give an accurate result. You understand the process of setting up a listening test, and you have a proven record of fairness here that makes that assumption not a huge leap.

My first question is, which one of these DACs do you suspect is transparent, and which one do you think is colored? Have you compared either of them to other DACs and found them to sound the same?

 
