Head-Fi.org › Forums › Equipment Forums › Sound Science › I Don't Understand You Subjective Guys

# I Don't Understand You Subjective Guys - Page 14

Quote:
Originally Posted by TWIFOSP

Fourier series use a convergent approximation to plot a harmonic. You can only compare Fourier series when A and B are of the same signal type: periodic, semi-periodic, etc. Music is a combination of harmonic signal types and, it turns out, cannot be compared with a Fourier transform.

This article basically sums it up: music signals cannot be accurately compared. Convergence is a bitch.

http://en.wikipedia.org/wiki/Time%E2%80%93frequency_analysis_for_music_signals

I'm a little confused by your statement, but I think you are confusing Fourier Series with the Discrete Time Fourier Transform. They are not the same thing. What do you mean by "Convergence is a bitch," and where in that wikipedia article does it say that "music signals can not be accurately compared"?

Edited by ultrabike - 7/26/12 at 2:53pm
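To illustrate the distinction: the discrete Fourier transform operates on any finite block of samples, periodic or not, so two recordings of the same music can be compared spectrum against spectrum. A minimal NumPy sketch, with made-up test tones standing in for "music":

```python
import numpy as np

fs = 48000                        # sample rate (Hz); 1 second of samples
t = np.arange(fs) / fs

# A non-periodic "music-like" test signal: two steady tones plus an
# exponentially decaying pluck. Nothing requires periodicity here.
sig_a = (np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*660*t)
         + np.exp(-3*t) * np.sin(2*np.pi*220*t))
# A second signal differing only by a tiny amount of noise.
sig_b = sig_a + 1e-4 * np.random.default_rng(0).standard_normal(fs)

# The DFT applies to any finite block; windowing controls spectral leakage.
win = np.hanning(fs)
spec_a = np.fft.rfft(sig_a * win)
spec_b = np.fft.rfft(sig_b * win)

# Compare the two spectra bin by bin, relative to the spectral peak.
max_diff_db = 20 * np.log10(np.max(np.abs(spec_a - spec_b))
                            / np.max(np.abs(spec_a)))
print(f"largest spectral difference: {max_diff_db:.1f} dB below peak")
```

The tiny injected difference shows up directly as a small bin-by-bin spectral residue, far below the signal peak, with no convergence issue in sight.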

"The fundamental frequency, or F0, is an essential descriptor of music sound signals. Although single-F0 estimation algorithms are considerably developed, their applications to music signals remain limited, because most music signals contain concurrent harmonic sources. Therefore, multiple-F0 estimation is a more appropriate analysis, which broadens the ranges of applications to source separation, music information retrieval, automatic music transcription, amongst others. The difficulty of multiple-F0 estimation lies in the fact that sound sources often overlap in time as well as in frequency. The extracted information is partly ambiguous. Above all, when musical notes are played in harmonic relations, the partials of higher notes may overlap completely with those of lower notes. Besides, spectral characteristics of musical instrument sounds are diverse, which increases the ambiguity in the estimation of partial amplitudes of sound sources."

From the Ph.D. thesis cited before. Used for speech recognition, etc.

Fundamental frequency is extracted via Fourier analysis. How to use this to compare a \$150 DAC to a \$1000 DAC? Well, I don't know...
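The overlap problem the quoted thesis describes is easy to see with hypothetical numbers. For two notes an octave apart, every partial of the upper note lands exactly on an even partial of the lower one, so frequency alone cannot separate the sources:

```python
# Partials of two notes an octave apart (A3 = 220 Hz, A4 = 440 Hz).
# Since 440*n = 220*(2n), every harmonic of the upper note coincides
# with an even harmonic of the lower note.
f_low, f_high = 220.0, 440.0
partials_low = {f_low * n for n in range(1, 21)}    # first 20 partials of A3
partials_high = {f_high * n for n in range(1, 11)}  # first 10 partials of A4
overlap = partials_high & partials_low

print(sorted(overlap))  # all ten partials of the upper note overlap
```

This is why multiple-F0 estimation has to lean on amplitude models and priors rather than the spectrum alone, though none of this bears on measuring a DAC's output fidelity.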

Quote:
Originally Posted by Arietites

How to use this to compare a \$150 DAC to a \$1000 DAC? Well, I don't know...

That was never the point of this thread.

Quote:

"The fundamental frequency, or F0, is an essential descriptor of music sound signals. Although single-F0 estimation algorithms are considerably developed, their applications to music signals remain limited, because most music signals contain concurrent harmonic sources. Therefore, multiple-F0 estimation is a more appropriate analysis, which broadens the ranges of applications to source separation, music information retrieval, automatic music transcription, amongst others. The difficulty of multiple-F0 estimation lies in the fact that sound sources often overlap in time as well as in frequency. The extracted information is partly ambiguous. Above all, when musical notes are played in harmonic relations, the partials of higher notes may overlap completely with those of lower notes. Besides, spectral characteristics of musical instrument sounds are diverse, which increases the ambiguity in the estimation of partial amplitudes of sound sources."

From the Ph.D. thesis cited before. Used for speech recognition, etc.

Fundamental frequency is extracted via Fourier analysis. How to use this to compare a \$150 DAC to a \$1000 DAC? Well, I don't know...

Category error: you are citing a discussion of real-time signal identification, not of what limits measurements, which can be done on digitized data recordings.

Edited by jcx - 7/26/12 at 3:19pm
Quote:
Originally Posted by billybob_jcv

Can the ear sense signal artifacts not detectable by the best spectrum analyzers?  There are analyzers capable of seeing low-level spread-spectrum microwave signals buried under much stronger wideband signals.

Isn't the ear just a pressure transducer?

There's far more than just the ear involved. There are two ears attached to a brain, and the system is acutely sensitive to phase and amplitude relationships between the two ears. It is how the brain integrates and processes the two signals juxtaposed against one another that we have not yet learned to measure and quantify.
Quote:
Originally Posted by billybob_jcv

I think the inner workings of the ear->nerve->brain are irrelevant.  The input signal to the ear is differential pressure across the eardrum.  Describe that accurately, and you have described the entire audio spectrum.  Anything else after that interface is *created* by the listener's components in the system (ear, nerves, eyes, skin, nose, tongue, brain).

Irrelevant? You've got to be kidding! Yes indeed, just because we're already steeped in ignorance, let's throw the baby out with the rest of the bath water while we're at it. Wow. This line of reasoning belongs in the JIR!
Quote:
Originally Posted by ultrabike

Quote:
Originally Posted by Xaborus

Why would anyone buy a \$1000 DAC when the \$150 ODAC performs just as well in blind testing as a DAC1?

Sometimes I think most of you guys are just buying an expensive placebo effect, just like \$1000 cables.

Edit: Please note, I'm not trolling. My viewpoints are explained on the second page.
Quote:
Originally Posted by Xaborus

NOT TROLLING. JUST SCIENCE. 192K is harmful: "
192kHz digital music files offer no benefits. They're not quite neutral either; practical fidelity is slightly worse. The ultrasonics are a liability during playback.
Neither audio transducers nor power amplifiers are free of distortion, and distortion tends to increase rapidly at the lowest and highest frequencies. If the same transducer reproduces ultrasonics along with audible content, any nonlinearity will shift some of the ultrasonic content down into the audible range as an uncontrolled spray of intermodulation distortion products covering the entire audible spectrum. Nonlinearity in a power amplifier will produce the same effect. The effect is very slight, but listening tests have confirmed that both effects can be audible."-- http://people.xiph.org/~xiphmont/demo/neil-young.html
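The intermodulation mechanism the article describes is simple to demonstrate numerically. A sketch with hypothetical tone frequencies and a made-up weak nonlinearity: two ultrasonic tones passed through it produce a difference tone squarely in the audible band:

```python
import numpy as np

fs = 192000                       # one second at 192 kHz
t = np.arange(fs) / fs

# Two ultrasonic tones a 192 kHz file can carry but nobody can hear.
ultra = np.sin(2*np.pi*24000*t) + np.sin(2*np.pi*26000*t)

# A mildly nonlinear "amplifier": a small second-order term.
out = ultra + 0.01 * ultra**2

spec = np.abs(np.fft.rfft(out)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1/fs)

# Second-order intermodulation puts energy at 26k - 24k = 2 kHz, audible.
level_2k = spec[np.argmin(np.abs(freqs - 2000))]
print(f"IMD product at 2 kHz: {20*np.log10(level_2k / 0.5):.1f} dB "
      f"relative to one input tone")
```

With these made-up numbers the 2 kHz product sits 40 dB below the input tones; a real amplifier's nonlinearity is what sets the actual level.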

"It's worth mentioning briefly that the ear's S/N ratio is smaller than its absolute dynamic range. Within a given critical band, typical S/N is estimated to only be about 30dB. Relative S/N does not reach the full dynamic range even when considering widely spaced bands. This assures that linear 16 bit PCM offers higher resolution than is actually required.
It is also worth mentioning that increasing the bit depth of the audio representation from 16 to 24 bits does not increase the perceptible resolution or 'fineness' of the audio. It only increases the dynamic range, the range between the softest possible and the loudest possible sound, by lowering the noise floor. However, a 16-bit noise floor is already below what we can hear"
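The dynamic-range claim in that quote follows from the usual rule of thumb of roughly 6 dB per bit for linear PCM; a one-liner makes the 16-bit vs. 24-bit comparison concrete:

```python
import math

# Dynamic range of linear PCM: the ratio between full scale and one
# quantization step, 20*log10(2^bits), i.e. about 6.02 dB per bit.
def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # ~96 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # ~144 dB
```

Going from 16 to 24 bits only pushes the noise floor from about -96 dB to about -144 dB; it adds nothing between the steps we could already resolve.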

Digital is digital. USB = Optical = Coax. This argument is only valid if you can't physically use USB as an input, and only then, I admit, do you have a valid argument.

Now I will concede a single point to the subjectivists, and it is a very valid point: the placebo effect is very, very real. If you believe something will make your audio sound better, it simply will sound better. It doesn't matter if it's a \$1000 bag of rocks taped to your speakers; it will work if you truly believe it does.

But I personally believe that scientifically proven low-distortion methods of music playback, regardless of price, are the best way to obtain the highest audio quality. My placebo effect is not my wallet but scientific charts and graphs.

While I agree that there are some wild claims out there, I don't see how your arguments regarding 192kHz sampling, digital transfer media, and bit depths invalidate subjective experience at all. In order to determine all these parameters, subjective experiences and observations had to be collected in the first place. Models of hearing thresholds, loudness levels, audibility thresholds, etc. are (and were) sanity-checked against the collective experiences of real people (unless the papers about them flat-out lie). All these objective models have limitations, and will hopefully be improved as our understanding of ourselves improves. Measurements do not substitute for our real-world perception; they complement it.

Regarding the design of the audio systems out there: there is a huge collection of topologies built under different sets of requirements, goals, and assumptions. Saying that two "good" but different topologies will sound the same, based on our limited knowledge, is flawed. In the selection of one topology over another there are usually trade-offs and unknowns. I know this is the case in the design of communication systems, and I don't see how audio systems differ in this regard.

I believe a consumer is better served using all tools available to him/her in his/her selection of a product, and that includes BOTH subjective and objective instruments.

Ok, ok, I've got it, yes, I've finally got it! What we need to solve this whole issue are high-fidelity wallets. At least 24 bits deep. None of this two-bit stuff.
Quote:
Originally Posted by kwkarth

Ok, ok, I've got it, yes, I've finally got it! What we need to solve this whole issue are high-fidelity wallets. At least 24 bits deep. None of this two-bit stuff.

LOL! Hit the nail on the head! My wife keeps me in check, reminding me that our wallet is only 16 bits deep. Got to keep the significant other happy... She is my balancing act... and so I'm a bang-for-the-buck dude (for now).

Edited by ultrabike - 7/26/12 at 4:33pm
Quote:
Originally Posted by ultrabike

Quote:
Originally Posted by kwkarth

Ok, ok, I've got it, yes, I've finally got it! What we need to solve this whole issue are high-fidelity wallets. At least 24 bits deep. None of this two-bit stuff.

LOL! Hit the nail on the head! My wife keeps me in check, reminding me that our wallet is only 16 bits deep. Got to keep the significant other happy... She is my balancing act... and so I'm a bang-for-the-buck dude (for now).

The highest fidelity of all!
Quote:
Originally Posted by kwkarth

The highest fidelity of all!

Hey man, don't want to find myself locked out of the house with my hi-fi stuff in the front yard...

Quote:
Originally Posted by ultrabike

Quote:
Originally Posted by kwkarth

The highest fidelity of all!

Hey man, don't want to find myself locked out of the house with my hi-fi stuff in the front yard...

Ya got that right, Skippy!
Quote:
Originally Posted by kwkarth

Irrelevant? You've got to be kidding! Yes indeed, just because we're already steeped in ignorance, let's throw the baby out with the rest of the bath water while we're at it. Wow. This line of reasoning belongs in the JIR!

You are missing the point. Consider basic systems engineering. The listener's head is a black box. The input to that black box is all of the stimuli received. That is ALL that can be affecting the black box. Inside the black box can be all the magical deciphering you want to consider, but the input stays the same. The audio signal being transmitted to the black box is a time-varying three-dimensional differential pressure that strikes the eardrum. If you know of some other input being delivered to the listener, please enlighten me. If you want to consider each ear separately, fine: each ear is presented with a different differential pressure gradient.

My point is that it was stated that you can't measure everything, and therefore you can't know by measurement whether two systems are actually identical. I claim that if the two systems produce exactly the same time-varying three-dimensional differential pressure, then they are producing exactly the same sound. All the hand-waving about "complex music is different than sine waves" really boils down to air pressure; that is what sound is.

The question you should be asking yourself is whether the time-varying differential pressure can be captured in a way that is at least as accurate as the human ear.
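The black-box argument above reduces to comparing two captured pressure waveforms. A crude numeric sketch (with hypothetical test signals standing in for microphone captures): time-align the two waveforms, subtract, and report the residual energy relative to the signal:

```python
import numpy as np

# If two systems deliver the same pressure waveform at the eardrum, they
# produce the same sound. A basic check: residual energy after subtracting
# one captured waveform from the other, in dB relative to the signal.
def residual_db(a: np.ndarray, b: np.ndarray) -> float:
    residual = a - b
    return 10 * np.log10(np.sum(residual**2) / np.sum(a**2))

fs = 48000
t = np.arange(fs) / fs
system_a = np.sin(2*np.pi*1000*t)                     # reference capture
system_b = system_a + 1e-5*np.sin(2*np.pi*3000*t)     # tiny added harmonic
print(f"residual: {residual_db(system_a, system_b):.0f} dB")  # -100 dB
```

A residual 100 dB down is far below any plausible audibility threshold; the open question is only whether the capture itself is at least as accurate as the ear.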

Quote:
Originally Posted by kwkarth

Irrelevant? You've got to be kidding! Yes indeed, just because we're already steeped in ignorance, let's throw the baby out with the rest of the bath water while we're at it. Wow. This line of reasoning belongs in the JIR!

When we do an A/B test we are effectively trying to do a null test: we discard everything we hear that is the same and try only to tell what's different. It makes sense that our auditory system is a signal processor just like every other component in the chain, and that it adds coloration just like every other component in the chain. However, since that coloration is common to both A and B, it is nulled, so it's fair to take it out of the equation.

But of course you can't quite do that, because the time-varying nature of the auditory system makes it a poor instrument for the job. As you know, there are much better ways of doing a null test...
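The nulling argument can be sketched directly. Assuming the shared "ear" coloration acts like a fixed linear filter (a deliberate simplification; a real auditory system is neither fixed nor linear), applying the same filter to both paths and subtracting cancels everything common, leaving only the genuine device difference:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
source = np.sin(2*np.pi*500*t)

device_a = source
device_b = source + 0.001*np.sin(2*np.pi*1500*t)   # B adds a small 3rd harmonic

# Identical "ear" coloration applied to both paths, modeled here as a
# simple one-pole smoothing filter (a stand-in, not a real ear model).
def ear(x, alpha=0.3):
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = alpha*v + (1 - alpha)*acc
        y[i] = acc
    return y

null = ear(device_a) - ear(device_b)

# The 500 Hz content common to both paths cancels exactly; only the
# (filtered) 1.5 kHz difference survives the null.
peak_bin = np.argmax(np.abs(np.fft.rfft(null)))
print(np.fft.rfftfreq(len(t), 1/fs)[peak_bin])     # -> 1500.0
```

The catch the post identifies is real: this cancellation only works because the filter is identical on both passes, which a time-varying auditory system never is.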

This is all just a bunch of small minded BS.

The fact is that measurements are meaningless without the subjective experience.  The subjective experience is the whole point.  Measurements can help us further understand that experience.  That's it.  None of the rest of this matters.

Quote:
Originally Posted by rhythmdevils

This is all just a bunch of small minded BS.

The fact is that measurements are meaningless without the subjective experience.  The subjective experience is the whole point.  Measurements can help us further understand that experience.  That's it.  None of the rest of this matters.

RD, you are a better man than I.  You're still holding out hope, trying to convince people.  I gave up long ago.  Maybe it's time you did too.
