hypersonic effect discussion
May 30, 2012 at 1:35 PM Post #106 of 111
You claimed that Sony was a proponent of such frequencies, and used misleading and incomparable frequency ranges to support your point. I'm asking for graphs to determine whether Sony actually utilizes these frequencies any more than other manufacturers do.

 

1.png

K-20090123-213449-5.png

K-20090123-172233-2.png

MDR_EX500SL_009.png

K-20090123-211707-8.png

K-20090124-12307-6.png

K-20090124-12245-6.png

K-20090123-215738-8.png
^Note: This is testing linearity from 6 kHz to 100 kHz with three different driver materials; the black line is polyethylene / polyimide successive layering.

MDR-EX700.gif

May 30, 2012 at 3:18 PM Post #107 of 111
Quote:
 
So from reading the paper, your take on it is that flipping a coin 4 times with 4 tails as the result = a 1/16 chance. Then can I ask why there are statistics in it such as p = 0.006?
 
Some of them are actually wrong according to my calculations, and (without being immodest) I have rather more background in stats and experimental design than the main author, but it is all about how you roll the numbers up. My point was that 4 trials per individual is insufficient for statistical significance, which is why a minimum of 10, or better 14 to 20, is normally recommended.
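To make that concrete (my own back-of-envelope figures, not from the paper): in a forced-choice test with a 50% guessing rate, even a perfect score over 4 trials cannot reach the usual 5% significance level, whereas 10 or more trials can. A minimal sketch:

```python
# Minimal sketch (my own illustration, not from the paper): probability of a
# perfect score arising purely by guessing in a forced-choice test where each
# trial has a 50% chance of being "right" by luck.
for n in (4, 10, 14, 20):
    p_perfect = 0.5 ** n  # chance of n/n correct by pure guessing
    verdict = "significant at 5%" if p_perfect < 0.05 else "NOT significant at 5%"
    print(f"{n:2d} trials, perfect score: p = {p_perfect:.2g} ({verdict})")

# 4 trials  -> p = 0.062  (a flawless run still fails the 5% criterion)
# 10 trials -> p ~ 0.001
# 14 trials -> p ~ 6e-05
# 20 trials -> p ~ 1e-06
```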
 
 
Plus... why is it then a peer-reviewed study in the AES journal? Which seems to be 'the' qualifier for 'science', along with dScopes, blind testing and null hypotheses, if I'm taking in all the recent hype correctly.
 
Actually it is not a journal paper, it is a conference paper. The rules are a bit more relaxed for conference papers, i.e. a paper that would not make a journal can be accepted for a conference on the grounds of interesting findings, even if there are minor issues with it (I review conference papers). Also, even journal papers are not necessarily perfect, i.e. not holy writ; I should know, I have several of those. M and M is a journal paper (Engineering Report: JAES Volume 55 Issue 9, pp. 775-779; September 2007), and you don't think that is perfect, do you?
 
Generally, if you have a really strong paper you put it in a journal. Often you can start with a conference paper, gauge the response, and if it is good enough beef it up for primetime (adding enough to make it a different paper, as you really are not supposed to publish the same paper twice). P and G seem to have done this with a paper on engineers. Pras is a PhD student and recorder/producer.
 
Don't think I hate the paper; it is an interesting paper. I just don't think the data supports their big conclusions - see below.

 
 
Quote:
"Findings from the listening tests suggest that expert listeners can detect differences between musical
excerpts presented at 88.2 kHz and 44.1 kHz.
They cannot justifiably make that conclusion; the data just does not support it. They have two out of 15 subsamples with a significant result, only one of which is a direct 88.2 vs 44.1 recording comparison; the other 13 show no effect of sampling rate or downsampling. This is not strong enough data to support the claim, especially since, when properly aggregated, the effects disappear! (See the back-of-envelope sketch a little further down.)
Moreover, the qualitative analysis of verbal descriptors indicates that these differences were perceived in terms of spatial reproduction, high frequency content, timbre and precision. However, the ability to perceive these differences depends on the format comparison and musical excerpt. Listeners could significantly discriminate between files recorded at different sample rates only for the orchestral excerpt, the only recording of a complex scene with different musical instruments playing in a medium concert hall. This finding provides support for theories that high-resolution formats..."
They did not compare the perceived differences against the actual rate of discrimination, so the above is extremely dubious. You cannot tell whether those who said "I detected a difference in x" actually detected a difference at all; statistically it is highly unlikely that they did, since overall none of them were capable of reliably doing so across all samples. 13 of 16 described perceptual differences, but we know for a fact that considerably fewer than 13 were able to actually reliably detect differences.
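To put a rough number on the two-significant-subsamples point (my own back-of-envelope sketch, assuming the 15 subsample tests are roughly independent and each run at the usual 5% level, which the paper does not state):

```python
# Back-of-envelope sketch (my assumptions: 15 roughly independent subsample
# tests, each at alpha = 0.05). How often would pure chance hand you two or
# more nominally "significant" results?
from math import comb

n_tests, alpha = 15, 0.05
p_fewer_than_2 = sum(comb(n_tests, k) * alpha**k * (1 - alpha)**(n_tests - k)
                     for k in range(2))
print(f"P(>= 2 false positives out of {n_tests}) ~ {1 - p_fewer_than_2:.2f}")  # ~0.17
```

On those assumptions that pattern turns up roughly one time in six with no real effect at all, which is why the aggregated result matters more than the two isolated hits.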
You must not say things in a paper that you cannot support with data. P is clearly a strong subjectivist, as evidenced by her other publications, including:
  1. Qualitative evaluation of Wave Field Synthesis with expert listeners
  2. Improving the sound quality of recordings through communication between musicians and sound engineers
  3. Subjective Evaluation of MP3 Compression for Different Musical Genres
That is fine, but the paper in question calls for much more circumspection.
How did you arrive at this?
M and M cite about 60 subjects as the subject pool (I agree this is rather imprecise), so I used 60 as the sample size. Both 0/10 and 10/10 have a 1 in 1024 chance, so we would not expect either of those by chance (about a 5.8% chance of one turning up in the pool). A 1/10 score can occur in exactly 10 ways (only trial 1, or only trial 2, or only trial 3, ...), so 10/1024, or about 1 in 102; with a subject pool of 60 the expected count is roughly 0.6 (a bit under an even chance of seeing at least one), so not seeing one is well within the realms of probability. For 2/10, however, there are 45 ways of getting it: trial 1 plus any of trials 2-10 (9 ways), trial 2 plus any of trials 3-10 (8 ways), and so on down to trial 9 plus trial 10 (1 way), for a total of 45. For one individual there is about a 4.4% chance of getting 2/10 (or equivalently 8/10 - 2 right or 2 wrong is the same thing), so with 60 participants you would expect about 2.6 of them to manage 8/10.
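For anyone who wants to check the arithmetic, here is the same calculation written out (my own sketch, using the ~60-subject pool and 10 trials per subject):

```python
# Sketch of the arithmetic above (assumptions: ~60 independent subjects,
# 10 trials each, 50% chance of a correct answer per trial by guessing).
# Each score is treated separately, e.g. 0/10 and 10/10 each have p = 1/1024.
from math import comb

pool, trials = 60, 10

for correct in (10, 9, 8):                         # 10/10, 9/10, 8/10 (mirror images of 0, 1, 2)
    p_one = comb(trials, correct) * 0.5 ** trials  # per-subject chance probability
    expected = pool * p_one                        # expected count in the pool
    p_at_least_one = 1 - (1 - p_one) ** pool       # chance of seeing at least one such subject
    print(f"{correct}/10: per-subject p = {p_one:.4f}, "
          f"expected in pool ~ {expected:.2f}, P(at least one) ~ {p_at_least_one:.2f}")

# 10/10: per-subject p ~ 0.0010, expected ~ 0.06, P(>=1) ~ 0.06
# 9/10:  per-subject p ~ 0.0098, expected ~ 0.59, P(>=1) ~ 0.44
# 8/10:  per-subject p ~ 0.0439, expected ~ 2.64, P(>=1) ~ 0.93
```

On these assumptions, a couple of 8/10 scores in a pool of 60 guessers is entirely unremarkable.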


 
May 30, 2012 at 4:06 PM Post #108 of 111
Ok... I still need to read the paper like... five times... have you read this part?
 
"It should also be noted that all the files used in this
study were recorded and presented in 24 bits. Thus,
we were not comparing the CD standard (i.e.
44.1 kHz, 16 bits) with high-resolution formats but
restricted our experiment to sample rate
discrimination. This choice was based on the fact that
limitations of bit-depth of the CD standard at 16 bits
have been identified and documented [10]. Therefore,
differences between CD standard and high-resolution
audio formats should be easier to detect than the
differences observed in this study."

 
 
So they intentionally selected 24/44 versus 24/88, since they think 16 bit is too easy and already documented?
 
They reference
 
"[10] Stuart, J., “Coding for high-resolution audio
systems”, J. Audio Eng. Soc., vol. 52(3),
pp. 117-144. (March 2004)"

 
Edit: http://www.meridian-audio.com/w_paper/Coding2.PDF
 
May 30, 2012 at 5:04 PM Post #109 of 111
Quote:
Ok... I still need to read the paper like... five times... have you read this part?
 
"It should also be noted that all the files used in this
study were recorded and presented in 24 bits. Thus,
we were not comparing the CD standard (i.e.
44.1 kHz, 16 bits) with high-resolution formats but
restricted our experiment to sample rate
discrimination. This choice was based on the fact that
limitations of bit-depth of the CD standard at 16 bits
have been identified and documented [10]. Therefore,
differences between CD standard and high-resolution
audio formats should be easier to detect than the
differences observed in this study."

 
 
So they intentionally selected 24/44 versus 24/88, since they think 16 bit is too easy and already documented?
 
They reference
 
"[10] Stuart, J., “Coding for high-resolution audio
systems”, J. Audio Eng. Soc., vol. 52(3),
pp. 117-144. (March 2004)"

 
Yes, I read that part. Bob Stuart (http://dhantalradio.com/bt/aes/Journal%20AES%20Vol%2052%20No.%203.pdf) does a fine job of describing measurable issues with 16/44.1, but he starts with a bias. He is the head man at the Acoustic Renaissance for Audio (ARA), a group fully committed to high-res PCM encoding (and very anti-DSD as well, FWIW), so far from bias-free.
 
Stuart, however, insists on noise-free playback of pure sine waves at 120 dB at the listening position (roughly equivalent to having an ambulance siren in the room with you) as the only acceptable standard.
 
However, his paper does not support his assertions with empirical listening tests, so it cannot be used as proof that the CD standard's limitations are intrinsically audible below absurd volume levels. The limitations of CD at very high volume levels with null signals are something M and M tell us about anyway.
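For context (textbook numbers of my own, not taken from either paper): the theoretical dynamic range of linear PCM is roughly 6.02 × N + 1.76 dB for an N-bit full-scale sine, which is why the 16-bit limitation only starts to bite at extreme playback levels like the 120 dB Stuart demands:

```python
# Rough textbook figures (my own illustration, not from the papers under
# discussion): theoretical dynamic range of undithered linear PCM,
# using the standard 6.02 * N + 1.76 dB formula for a full-scale sine.
def pcm_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits}-bit PCM: ~{pcm_dynamic_range_db(bits):.0f} dB")
# 16-bit ~ 98 dB, 20-bit ~ 122 dB, 24-bit ~ 146 dB
```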
 
May 31, 2012 at 4:35 PM Post #110 of 111
Quote:
^Note: This is testing linearity from 6 kHz to 100 kHz with three different driver materials; the black line is polyethylene / polyimide successive layering.

 
Just the materials, not the drivers themselves? I'd like to see the drivers in action.
 
The first graph only appears to go up to 20 kHz, like a typical headphone graph. The second graph has no labeled axes. The third and fourth have no labeled scale on the vertical axes. I don't even know what the fifth is trying to represent, because the poor resolution and grid lines obscure everything. The last graph suggests that Sony does not make use of hypersonic frequencies, because of the roll-off by 20 kHz. And that graph is of an actual headphone, not just driver material.
 
May 31, 2012 at 7:11 PM Post #111 of 111
 
Just the materials, not the drivers themselves? I'd like to see the drivers in action.
 
The first graph only appears to go up to 20 kHz, like a typical headphone graph. The second graph has no labeled axes. The third and fourth have no labeled scale on the vertical axes. I don't even know what the fifth is trying to represent, because the poor resolution and grid lines obscure everything. The last graph suggests that Sony does not make use of hypersonic frequencies, because of the roll-off by 20 kHz. And that graph is of an actual headphone, not just driver material.

 
Lol, you are correct there. The last graph isn't from Sony; it looks like it's from "mister_terch". For some reason it's very difficult to capture what's above 10 kHz accurately when measuring IEMs; that site goldenears says 100 Hz to 10 kHz is their "safe zone".
 
The fifth one is from Sony's development; they are testing drivers of different materials, not only driver material. The significance is that they found polyethylene / polyimide / polyethylene (PET-PI-PET) layering to have a nicer response up at, err... well, all the way up to 100 kHz. That's not for 22 kHz to 100 kHz listening, though; it's just to test how the driver material performs overall. It's a pretty complicated process to layer PET-PI-PET several hundred times in opposite directions just to make a successful multi-layer driver, and then they used it in the Sony EX700 IEM, which had limited success.
 
I don't know why Sony didn't provide FR data from their labs for the SA-5000 or Qualia drivers; surely customers would like to know the specs are real? Perhaps it's in a Qualia pamphlet somewhere.
 
To be honest, if the hypersonic effect is real, it's unlikely you can hear it in a headphone; it's more likely to exist via speakers. The 'most' scientific explanation is that it enters through your eyeball.
 
