I bet this was all started by the cable cooking mafia!
It is interesting that you, Tyll, and others with similar opinions always overlook a very pertinent artifact of such tests.
Tyll did not prove that any changes were very small and that they probably would not really be audible.
He DID prove that what we can measure now had changes that were very small. There are many things we cannot measure, but that we can hear.
We cannot measure everything that we hear, yet you and Tyll assume that what we can measure covers most of what is audible, and that whatever is audible must also be measurable.
To assume this is called hubris. What we know about headphone design in 2014 may very well be 1 part in 1,000 of what there is to know.
I would very much like to see the test that can show the 2-D nature of the HA-S500 soundfield when compared to a Takstar Pro 80. Or the phase artifacts of a Beats Pro. Or the FR lobing that SuperLux headphones have when unboxed, which diminishes with time.
If you know of tests that can do this, then please post the test results that map 1:1 with acceptable p-values for the following audible artifacts of a headphone:
2) Instrument dimensionality
3) Harmonic completeness
5) Instruments out of phase within an in-phase soundfield and headphone
6) FR and Soundstage lobing, discreteness, non-unification
It's a conspiracy!
You really believe that any change in any of those will not translate into some measurable change? Did sound become something more than vibrating air at some point and I wasn't informed? It's not about making up some complex analysis of the sound, it's about recording a change. Our software can confirm a bit-perfect copy, but it couldn't see the difference between 2 recordings from the same phone, at the same place, on the same dummy head, several days apart, if there was any?
Microphones and technology in a studio are good enough to record the band, but for some weird reason, super-specific microphones calibrated with huge precision won't be able to register a sonic change? That is your claim?
And then you can hear the change and remember it after a month of use and other daily activities?
Sorry, but I tend to see one tiny little weak point in that theory.
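For what it's worth, the "bit-perfect copy" check mentioned above is trivial to sketch. This is just an illustration with made-up byte strings standing in for two captures, not any real measurement rig:

```python
import hashlib

def bit_perfect(data_a: bytes, data_b: bytes) -> bool:
    """Two captures are bit-perfect copies iff every byte matches."""
    return hashlib.sha256(data_a).digest() == hashlib.sha256(data_b).digest()

# hypothetical captures of the same signal (synthetic data, for illustration only)
take_1 = bytes([0, 127, 255, 64] * 4)
take_2 = bytes([0, 127, 255, 64] * 4)
take_3 = bytes([0, 127, 254, 64] * 4)  # one byte off by a single LSB

print(bit_perfect(take_1, take_2))  # True: identical down to the last bit
print(bit_perfect(take_1, take_3))  # False: the computer flags even a 1-bit change
```

The point is simply that a computer comparison has no perceptual threshold: any change at all, however far below audibility, shows up.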
I always figured that the difference I heard after a "burn-in period" was just my ears getting used to the fit of the new headphones, the way they project sound, and the new sound signature.
Then again, I've never just set up my headphones and left them playing for 50+ hours until they were ready...that just seems silly. I could be using them instead.
But that's just me.
Indeed, it shows a complete misunderstanding of what is going on, or deliberate confusion of the issue.
For this challenge to have any meaning, you would need to show that these are actually intrinsic sonic properties of headphones, and not just things in your head. For example, what on earth is "instrument dimensionality," and can different listeners hear it the same way on the same headphone in an ABX test? Another way to put it -- do ears meet your own challenge?
The whole thing is just a big fat red herring.
It's like showing two very noisy pictures to some people. We're not interested in what some people see (a face, an animal, a tree?), but the difference between the pictures themselves.
Btw, just changing the distance to the picture a bit, or the lighting, or telling people beforehand what they should see will inevitably bias them. Just staring at the picture long enough could result in a person detecting some different pattern. Did the picture burn in? No.
They can see different things even if it is the same picture.
Just saying, xnor has agreed with me twice in 48 hours; I feel like I've received a brain conformity certificate.
I'm as proud as if NwAvGuy contacted me to tell me he learned something from one of my posts.
next step: listen to more phones than Jude and launch noggin-fi.org
I agree with a lot of the stuff that some of you guys write. I'm just not a "+1" poster. I'd rather post why I disagree when I disagree.
Please provide said measurements to differentiate only the Soundstage aspect between said phones.
It's *your* claim that it can be measured. So prove it.
Go out and measure Soundstage. I will wait...
I stand corrected: add straw man on top of the red herring.
Aww, that's cute.
Honestly though, soundstaging takes place in the brain, so... yeah... that could possibly be measured through fMRI or something, but when you're discussing sound, you're discussing waves moving through air, nothing more.
I'd ask you to show me a graph (or neurological image) which proves a change in soundstage.
If you make a claim, it's up to you to prove it here. I'm getting a little tired of people making claims and then asking for proof their statement is incorrect.
"I believe that I slip into another dimension when I fall asleep. PROVE THAT I DON'T!"
Not the best sort of statement to be made in the Sound Science forum. We'll be happy to look at anything solid that you have to bring to the table.
A stupid photo analogy, as often with me:
Say I took 2 pictures 100 hours apart of an underground room with no window and 1 light bulb, the camera on a tripod, shot remotely; I didn't even go into the room, and the light was never turned off during the experiment. Then I tell you, from a computer analysis of the 2 pictures, that they are almost exactly the same, to such a degree that any doctor would say our eyes can't see the difference. You then come and ask me to prove to you that the contrast of some shade on the wall has changed, and by how much.
I might not be able to answer your question, but if that shade had been modified in any way, the computer would have picked up the difference, because for it one pixel can have 16,777,216 different values, and it can tell every single value apart from every other, for every single pixel. Something no human can dream of doing.
And the way the experiment is done, it doesn't matter if what the camera called "white" was in fact slightly "cyan", because we are not looking for a perfect analysis of the picture, we are looking for a difference between 2 pictures taken the same way. So all errors would be duplicated on both pics and irrelevant to the test. Let's say the sensor sucks and we end up with "only" 5,000,000 values per pixel; it's still far above what we as humans can do (we see something around 16-bit).
In that case, if you see something and the computer doesn't, it's not a matter of knowledge or technological advancement. It is just you looking for something that doesn't exist. There is simply no other way to justify it.
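That pixel comparison can be sketched in a few lines. The pixel values below are invented for illustration; each image is just a flat list of (r, g, b) tuples:

```python
def max_pixel_diff(img_a, img_b):
    """Largest per-channel difference between two same-sized images.

    Each image is a list of (r, g, b) tuples with channels in 0-255,
    so a pixel can take 256**3 = 16,777,216 distinct values.
    """
    return max(
        abs(ch_a - ch_b)
        for px_a, px_b in zip(img_a, img_b)
        for ch_a, ch_b in zip(px_a, px_b)
    )

shot_1 = [(200, 200, 200), (10, 20, 30)]
shot_2 = [(200, 200, 200), (10, 20, 30)]  # retaken 100 hours later, unchanged
shot_3 = [(200, 201, 200), (10, 20, 30)]  # one channel shifted by 1 step out of 256

print(max_pixel_diff(shot_1, shot_2))  # 0: no change anywhere
print(max_pixel_diff(shot_1, shot_3))  # 1: detected, though far below what an eye resolves
```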
For sound it's the same: once digitized, sound analysis is already far beyond what humans can hear. You mistake the human failure to convert precise information into your own way of thinking about sound (like soundstage) for a failure to know how the sound actually behaved; the computer knows a lot more than we do about that sound. You can say that an FR graph isn't enough, and you would be right, but if we cannot measure a difference between 2 takes (any kind of difference), there is no difference, "period" (Obama style).
(The spell checker told me that I misspelled soundstage 50,000,000,000 times in a row; do I need to trust it?)
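The "difference between 2 takes" idea for sound is just a null test: subtract the time-aligned takes and look at the residual. A minimal sketch with synthetic samples (the 440 Hz tone and the tiny offset are invented for illustration, not real measurements):

```python
import math

def residual_db(take_a, take_b):
    """Peak level of (take_a - take_b) relative to full scale, in dBFS.

    Identical takes null to -infinity; any change at all leaves a residual.
    """
    peak = max(abs(a - b) for a, b in zip(take_a, take_b))
    return -math.inf if peak == 0 else 20 * math.log10(peak)

# two hypothetical aligned takes, samples in [-1.0, 1.0]
take_day_1 = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(480)]
take_day_5 = [s + 1e-6 for s in take_day_1]  # a change of about -120 dBFS

print(residual_db(take_day_1, take_day_1))         # -inf: perfect null
print(round(residual_db(take_day_1, take_day_5)))  # -120: measured, far below audibility
```

If 100 hours of playing changed the driver in any way that reaches the microphone, a residual would show up here, whether or not we have a name for the change.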
I highly doubt soundstage takes place in the brain exclusively. The best comparison is LCD-2/3 vs HD800. I highly doubt it is the doing of my brain that the instruments sound farther away and the vocalist sounds very upfront on the HD800, while the others present the instruments much closer and the vocals farther away. That was subjective, but many do (generally) agree on how these 2 compare to each other. I don't think there is a way to measure "soundstage" yet, and it isn't represented in the frequency response.
BTW, since you are asleep and your brain is "making sense" of your past events without your consciousness, you are pretty much in a different dimension, since you aren't experiencing reality consciously. Just my opinion; I am sure other psychologists can argue otherwise.