
AES 2012 paper: "Relationship between Perception and Measurement of Headphone Sound Quality" - Page 2

post #16 of 130
Quote:
Originally Posted by autumnholy View Post

So it's all about consistency in hearing the same note? Or noticing it?

 

 

Well, that bit at the end was something of a simplification, only half the picture.  By "consistency" I mean that when you listen to A multiple times, you give similar responses (ratings) for A, likewise for B or anything else.

 

It's really a measure of rating things that are different as different, while rating things that are the same as the same.  It's a ratio, so the statistical power is increased when (1) you rate things that are different as even more different or (2) you are even more consistent in rating things that are the same as the same.  The low positioning of the non-trained listeners could be either due to (1) not distinguishing much between things that are different, (2) not being consistent in rating things that are the same, or (3) a combination.
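
To make the ratio concrete, here's a quick Python sketch (toy numbers I made up, nothing from the paper): one listener rates two speakers four times each, and a one-way ANOVA turns those ratings into an F value.

```python
from scipy.stats import f_oneway

# Hypothetical repeat ratings on a 1-10 preference scale
ratings_A = [7.0, 7.5, 7.0, 7.2]  # consistently higher for speaker A
ratings_B = [4.0, 4.5, 4.2, 4.1]  # consistently lower for speaker B

# F is the ratio of between-speaker variance to within-speaker
# (repeat-rating) variance
F, p = f_oneway(ratings_A, ratings_B)
print(f"F = {F:.1f}, p = {p:.4f}")
```

Push A and B further apart and the numerator grows; repeat your ratings more consistently and the denominator shrinks. Either way F goes up, which is exactly (1) and (2) above.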

 

They seemed to just use a 1-10 preference scale, as far as I could tell from a quick skim.  That doesn't really have anything to do with hearing the same note, unless I misunderstand what you mean.

 

Maybe this graph is easier to understand:

 

[Graph: mean preference ratings for each loudspeaker by listener group, from http://seanolive.blogspot.com/2012/05/more-evidence-that-kids-even-japanese.html]

 

though that just shows the averages, so it does not give you information about the variability within a given group/loudspeaker combination. You can see there, at least, that the trained listeners distinguish more between the different speakers: their score for A is much higher than their score for D.


Edited by mikeaj - 10/19/12 at 7:49pm
post #17 of 130
I absolutely agree that those data are likely meaningless for establishing differences between the groups presented. To your point, the arms are not balanced in sample size, nor are the individual results or the standard error around the means shown. I suspect that even if there were truly differences between the populations, given the size of the differences they are trying to detect and the variability within a group, a much larger sample size would be necessary.
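
For what it's worth, the missing error bars are trivial to produce. A minimal sketch with hypothetical ratings for one group on one speaker:

```python
import numpy as np

ratings = np.array([6.5, 7.0, 5.5, 8.0, 6.0])  # invented example data
sem = ratings.std(ddof=1) / np.sqrt(len(ratings))  # standard error of the mean
print(f"mean = {ratings.mean():.2f} +/- {sem:.2f} (SEM)")
```

Without something like that on the chart, we can't judge whether the gaps between group means are meaningful.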
post #18 of 130

I'm interested in programme 10-6 as well, about micro-speaker non-linear distortion. AES has some interesting stuff up for sure. I would also like to hear the results of the AES recording competition that's going on.

post #19 of 130

On average, a flat response is preferred... but what if half of the group prefers boosted bass and the other half prefers boosted treble? Can we really conclude that a flat response is the most desirable?

post #20 of 130
To me, the fact that flat response sounds better isn't a big surprise.
post #21 of 130

Flat response is a consistent response. There is no standard way to shoot a movie or record sound, so it makes sense for your speakers to "do a job" (i.e., replicate a recording faithfully) rather than try to "convince you" that big bass, big treble, or big something else is always the best way to hear something.


Edited by MrMateoHead - 10/23/12 at 6:55pm
post #22 of 130

The graph shows the performance of different groups of untrained listeners relative to the trained listeners. Here, the performance metric is based on the average individual listener loudspeaker F-statistic within each group. The F-statistic is calculated from an ANOVA on the individual listener data and, in layman's terms, represents the extent to which the listener can give discriminating and repeatable ratings.
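
A quick illustrative sketch of how such a metric can be computed (the data here are invented, purely to show the mechanics):

```python
import numpy as np
from scipy.stats import f_oneway

def listener_F(ratings_by_speaker):
    """One list of repeated ratings per loudspeaker."""
    F, _ = f_oneway(*ratings_by_speaker)
    return F

# Two hypothetical listeners in one group, each rating 3 speakers twice
group = [
    [[8, 7], [5, 6], [3, 2]],  # discriminating and repeatable: high F
    [[6, 4], [5, 6], [5, 4]],  # noisy and undiscriminating: low F
]
print(np.mean([listener_F(l) for l in group]))  # group performance metric
```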

 

We've generally found that untrained listeners -- whether they are high school age, college, Japanese, or American -- tend to like the same speakers as trained listeners, but rate them higher on the scale. See http://seanolive.blogspot.com/2012/05/more-evidence-that-kids-even-japanese.html

 

Trained listeners tend to use a larger range or spread of ratings and give more consistent ratings. Plus, through training, they are better able to describe the sonic differences they hear, so you can understand the underlying reasons behind their preferences.

 

Cheers

Sean Olive

post #23 of 130
Quote:
Originally Posted by iim7V7IM7 View Post

I absolutely agree that those data are likely meaningless for establishing differences between the groups presented. To your point, the arms are not balanced in sample size, nor are the individual results or the standard error around the means shown. I suspect that even if there were truly differences between the populations, given the size of the differences they are trying to detect and the variability within a group, a much larger sample size would be necessary.

 

Agreed. It would be nice to have a larger, balanced sample size across groups, but the trends and conclusions about training effects are pretty consistent across different studies done by myself and others. Trained listeners tend to be more discriminating and consistent in their loudspeaker ratings compared to untrained listeners, and they give lower ratings.


Edited by Tonmeister2008 - 10/23/12 at 8:16pm
post #24 of 130
Quote:
Originally Posted by Tonmeister2008 View Post

Agreed. It would be nice to have a larger, balanced sample size across groups, but the trends and conclusions about training effects are pretty consistent across different studies done by myself and others. Trained listeners tend to be more discriminating and consistent in their loudspeaker ratings compared to untrained listeners, and they give lower ratings.

The trends of the means or individual data?

Small differences in means require a large sample size in order to reject a null hypothesis. I would like to see whether the groups were normally distributed, whether they were skewed, and whether outliers shifted the means. Were the groups block randomized in terms of the order of speakers and the pieces of music they listened to?

All of these factors matter when attempting to objectively separate difference from random chance. I tend to agree with you in terms of logic; it is just that the data presented do not make much of a case.
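
As a ballpark, here's the kind of power calculation I mean (the effect size is a hypothetical Cohen's f, not something estimated from the paper):

```python
from statsmodels.stats.power import FTestAnovaPower

# Total N needed for a one-way ANOVA across 4 listener groups to detect
# a small-to-medium effect (f = 0.25) at 80% power
n_total = FTestAnovaPower().solve_power(effect_size=0.25,
                                        alpha=0.05,
                                        power=0.80,
                                        k_groups=4)
print(f"total N ~ {n_total:.0f}")  # on the order of 180 listeners
```

If the true between-group differences are small, samples of a dozen or two per group simply won't get there.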
post #25 of 130
Quote:
Originally Posted by iim7V7IM7 View Post


The trends of the means or individual data?
Small differences in means require a large sample size in order to reject a null hypothesis. I would like to see whether the groups were normally distributed, whether they were skewed, and whether outliers shifted the means. Were the groups block randomized in terms of the order of speakers and the pieces of music they listened to?
All of these factors matter when attempting to objectively separate difference from random chance. I tend to agree with you in terms of logic; it is just that the data presented do not make much of a case.

When I talk about trends I refer to the means of untrained groups of listeners compared to trained groups. In several published studies on loudspeaker preference I have done ANOVA where training was an independent variable and the effect is statistically significant. 
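In rough outline, such an analysis looks like this (an illustrative sketch only; the column names and data file here are invented, not from the actual studies):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per rating, with columns
# rating, training ("trained"/"untrained"), and speaker
df = pd.read_csv("preference_ratings.csv")
model = ols("rating ~ C(training) * C(speaker)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for each factor
```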

post #26 of 130
Quote:
Originally Posted by Tonmeister2008 View Post

When I talk about trends I refer to the means of untrained groups of listeners compared to trained groups. In several published studies on loudspeaker preference I have done ANOVA where training was an independent variable and the effect is statistically significant. 

Thanks Sean.

I would like to read these studies. Do you have any links to them, or can you direct me to the journal, authors, and title(s)? Also, is the analysis of variance you are referring to performed on the data behind the figure with the unbalanced sample sizes, or from another study?
post #27 of 130
Thread Starter 

Tonmeister2008, will you be making a blog post about this work? I'm eagerly waiting to read the paper but I'm not sure when AES will have it online. Your blog posts have been great at making the science accessible to more people, both in terms of availability and understandability.

post #28 of 130
Quote:
Originally Posted by JMS View Post

Tonmeister2008, will you be making a blog post about this work? I'm eagerly waiting to read the paper but I'm not sure when AES will have it online. Your blog posts have been great at making the science accessible to more people, both in terms of availability and understandability.

 

The AES typically don't do this. You need to join and then you can buy a cheap ($5) copy, otherwise it is normally $20 for a paper.

post #29 of 130
Quote:
Originally Posted by nick_charles View Post

 

The AES typically don't do this. You need to join and then you can buy a cheap ($5) copy, otherwise it is normally $20 for a paper.

 

Up from $10. Bastards.

 

se

post #30 of 130
Quote:
Originally Posted by JMS View Post

Tonmeister2008, will you be making a blog post about this work? I'm eagerly waiting to read the paper but I'm not sure when AES will have it online. Your blog posts have been great at making the science accessible to more people, both in terms of availability and understandability.
Thanks. Yes, I intend to blog about this research and will include links to the paper and slides that were presented at the AES Convention in San Francisco last week.