Head-Fi.org › Forums › Equipment Forums › Headphones (full-size) › Headphones and 'Hi-Res' audio debate

Headphones and 'Hi-Res' audio debate  

post #1 of 8
Thread Starter 

Hi everyone,


I have been following the growing hype around hi-res recordings. Many DACs will now handle 384kHz, but most ear/headphones I have seen do not have frequency responses that go above ~20kHz, meaning that the remaining ~172kHz of bandwidth will be completely wasted, surely?


Am I missing something, or is the current crop of ear/headphones simply not good enough to actually play back hi-res audio?


I won't even begin to discuss the limitations of the human auditory system with regard to hi-res :)

post #2 of 8

"I won't even begin to discuss the limitations of the human auditory system with regard to hi-res"

then you are missing the point of the discussion.


Even in high-end products (with markings up to something like 28-45kHz) you can see a sudden drop-off after 10-12kHz.


There is a point about the high-res side of the DAC, which amounts to this logic:


Let's say you have a sine wave of frequency 20kHz;

then with 40kHz sampling you are roughly able to capture just the values +1,-1,+1 (or maybe +1,0,-1,0,+1,...), but not the intermediate ones.


That makes the original sine wave appear, after the digital processing, as a square wave (in particular, you get large step changes in voltage)

(that is, we are able to determine the frequency, but not the shape of the wave).


So if you improve the sampling (or upsample with interpolation), this makes the transitions less rough.


The square vs. sine difference is audible at lower frequencies, but I'm not sure up to what frequency the effect remains audible

(from my experience it is not audible between 44kHz and 192kHz sample rates)
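As a plain-Python sketch of the sampling described above (the tone frequency, phase, and sample rates are arbitrary illustrative choices; this shows only the raw sample values, not what a DAC's reconstruction filter would output):

```python
import math

def sample_tone(freq_hz, rate_hz, phase_rad, n):
    """Return n ideal point samples of sin(2*pi*f*t + phase)."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz + phase_rad)
            for k in range(n)]

# A 20kHz sine sampled at exactly 40kHz (the Nyquist limit itself):
# the raw samples just alternate between two phase-dependent values.
print([round(s, 3) for s in sample_tone(20000, 40000, math.pi / 2, 4)])
# prints [1.0, -1.0, 1.0, -1.0]

# With a margin (44.1kHz), the samples land on many different points
# of the wave instead of only two.
print([round(s, 3) for s in sample_tone(20000, 44100, 0.0, 6)])
```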

post #3 of 8
Thread Starter 

Your comments about higher sampling rates improving the signal by making it 'less rough' are completely incorrect. Going to higher sampling rates does not improve anything in the 0-22kHz frequency range; it simply lets you sample a wider frequency range.


I think this quote from a well known website describes things better than I can:


'The most common misconception is that sampling is fundamentally rough and lossy. A sampled signal is often depicted as a jagged, hard-cornered stair-step facsimile of the original perfectly smooth waveform. If this is how you envision sampling working, you may believe that the faster the sampling rate (and more bits per sample), the finer the stair-step and the closer the approximation will be. The digital signal would sound closer and closer to the original analog signal as sampling rate approaches infinity.'


'All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling; an infinite sampling rate is not required. Sampling doesn't affect frequency response or phase. The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal.'
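The "captured perfectly" claim can be sanity-checked numerically with Whittaker-Shannon (sinc) interpolation. A toy plain-Python sketch (the 15kHz tone, 44.1kHz rate, and 400-sample window are arbitrary choices; a finite window only approximates the ideal infinite sum, so a small residual error remains):

```python
import math

RATE = 44100.0   # sample rate (Hz)
FREQ = 15000.0   # test tone, below the 22050 Hz Nyquist frequency
N = 400          # samples in this toy window

samples = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(N)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t_sec):
    """Whittaker-Shannon interpolation from the stored samples."""
    return sum(s * sinc(t_sec * RATE - n) for n, s in enumerate(samples))

# Evaluate BETWEEN sample instants, near the middle of the window
# (away from the truncation error at the edges), and compare with
# the true analog value: no stair-steps appear.
worst = 0.0
for k in range(180, 220):
    t = (k + 0.37) / RATE    # an arbitrary inter-sample instant
    worst = max(worst, abs(reconstruct(t) - math.sin(2 * math.pi * FREQ * t)))
print(f"worst inter-sample error: {worst:.6f}")  # small, and it shrinks
                                                 # as the window grows
```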


Do you challenge these comments? :)

post #4 of 8

I will limit myself to this scenario:


Let's say you use 500 samples per second; then you could capture a 250Hz wave,

however those 500 points are not sufficient to say what shape the wave takes

(that is, you have only the values +1,-1,+1,... for the 250Hz tone)

and you can match a sine wave, a triangle wave, or a square wave to them;

but at 250Hz humans are easily able to discriminate between square, sine, and triangle waves.


And most likely the original wave was some form of sine, but the DAC (at least an idealised one) will output a square wave, as it sees just +1 for 1/500s, then -1 for 1/500s.


So the Nyquist theorem only speaks about reconstructing frequencies, not shapes.

I'm guessing that there needs to be an additional assumption

that the sound is composed of the same shapes (like just sines),

because, for example, to reconstruct a square wave from sines you would need an infinite number of them (which means infinite frequency and sampling rate).
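A plain-Python sketch of how many square-wave harmonics actually fit below Nyquist at a 250Hz fundamental (an illustrative toy using a truncated Fourier series, not a full reconstruction; frequencies are arbitrary choices):

```python
import math

FUND = 250.0        # fundamental frequency (Hz)
NYQUIST = 22050.0   # half of a 44.1 kHz sample rate

# A band-limited square wave keeps only the odd harmonics that fit
# below the Nyquist frequency; at a 250 Hz fundamental that is still
# dozens of harmonics, so the "square" shape survives digitisation.
harmonics = range(1, int(NYQUIST / FUND) + 1, 2)
print(f"odd harmonics below Nyquist: {len(harmonics)}")

def bl_square(t):
    """Fourier series of a square wave, truncated below Nyquist."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * k * FUND * t) / k for k in harmonics)

# Shortly after the zero crossing the band-limited square has already
# jumped close to +1, while a pure 250 Hz sine is still near zero:
t = 0.1e-3   # 0.1 ms into the 4 ms period
print(round(bl_square(t), 3), round(math.sin(2 * math.pi * FUND * t), 3))
```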

Edited by xdog - 2/19/14 at 7:35am
post #5 of 8

One additional argument is that:

let's say that we sample even 80k points per second;

then if we had a 20kHz sine wave, then depending on the phase of the signal we could get values like:

0,+1,0,-1,0,+... for phase 0

+0.7,+0.7,-0.7,-0.7,+0.7 in the case of phase 45 degrees [or 0.5 in the case of a triangular wave]


From a mathematical point of view, both of those datasets reconstruct the same sine wave.

But one could imagine our brain perceiving a different loudness, since in the first case it receives a stronger impulse

[for instance, the second signal might not be strong enough to trigger a neuron response]
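One way to check the loudness worry numerically: the energy (RMS) of the sample sets is identical regardless of phase. A plain-Python sketch (20kHz tone at 80k samples/s as in the scenario above; this only looks at the digital samples, not at neural thresholds):

```python
import math

RATE, FREQ, N = 80000, 20000.0, 8   # 4 samples per cycle, 2 full cycles

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

for phase_deg in (0, 45, 90):
    phase = math.radians(phase_deg)
    s = [math.sin(2 * math.pi * FREQ * n / RATE + phase) for n in range(N)]
    # The individual sample values differ wildly with phase, but the
    # energy they carry (and hence the reconstructed loudness) does not.
    print(phase_deg, [round(v, 2) for v in s], round(rms(s), 4))
# every row ends with the same RMS, 0.7071
```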

post #6 of 8
Thread Starter 

I think you are confusing the concept of oversampling with recording at sample rates above 44.1kHz, which is how hi-res audio is made.


Also, it might pay you to look through these useful links to get a better understanding. Sine wave or not, Nyquist/Shannon at 44.1kHz can perfectly represent any analog audio recording within the general human hearing range (<20kHz). The shape of the waves is irrelevant: if they are in the audible range, they are below 20kHz and can be represented precisely by sampling at 44.1kHz. I agree there is a case for going above 16-bit in some instances, but with good mastering even that is questionable. If people say they can hear a difference, then good for them. Upgrading kit keeps the economy going, and that can only be a good thing, right? :)





Edited by Krisman - 2/19/14 at 8:20am
post #7 of 8

No, I'm not confusing concepts.

Please draw yourself the situation I have described,

or please provide some picture of how you are able to discriminate between a square/triangle/sine wave of frequency X having only 2X points (or even 4X points, or any finite number of points).

[I'm pretty sure that is not mathematically feasible unless you assume something about the signal.]


I might have made some unrealistic assumptions, for example:

that the sample is taken at exactly time T; but in real life [I don't know] it might be the average value between T and T+[time between samples], so for example the 0.7-vs-1 effect from the second case might be limited.
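The "average between T and T+[time between samples]" idea can be quantified: averaging over the sample interval scales a sine of frequency f by sinc(f/rate), a mild attenuation that is the same for every phase. A plain-Python sketch (real ADCs use sample-and-hold rather than a full-interval average, so this only explores the hypothetical case above):

```python
import math

RATE, FREQ = 80000.0, 20000.0
T = 1.0 / RATE

def averaged_sample(n, steps=1000):
    """Average of sin(2*pi*FREQ*t) over [n*T, (n+1)*T], by numeric integration."""
    return sum(math.sin(2 * math.pi * FREQ * (n * T + (i + 0.5) * T / steps))
               for i in range(steps)) / steps

# Averaging over the interval scales a sine of frequency f by
# sinc(f/RATE) = sin(pi*f/RATE)/(pi*f/RATE): the same factor for
# every phase, so it cannot make one phase "stronger" than another.
x = math.pi * FREQ / RATE
print(round(math.sin(x) / x, 4))   # predicted attenuation, about 0.9
# Each averaged sample equals the mid-interval value times that factor,
# so the largest averaged sample is about 0.6366 = 0.9003 x 0.7071:
print(round(max(abs(averaged_sample(n)) for n in range(8)), 4))
```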



post #8 of 8

Even the 24bit-vs-16bit-the-myth-exploded article is at least mathematically wrong in places.

I will assume that 16-bit values map linearly to voltage, so 1=xV, 2=2xV, and so on (I think that this is the case).

If you have 16 bits you are able to represent circa 65k values; that is, you map the recorded/measured voltage (or SPL or whatever) to values from 1 to 65k.

16-bit gives 96dB of dynamic range (20·log10(2^16) ≈ 96dB, roughly 6dB per bit).

You set up your recording system so that the loudest recorded sound is 96dB, and thus (at 16-bit) the quietest representable step is 6dB.

Let's say that you record, one after another, two sine waves [for instance at 250Hz]:

one with an amplitude of 96dB; there, having a set of 65k levels available (your monitor most likely has only 1k of them), you are able to approximate the sine wave very nicely (even perfectly),

but when you try to record a sine with an amplitude of 6dB (12dB), then you have at your disposal only 2 (3) levels, 0dB, 6dB, (12dB), so you are only able to depict the sine wave as a square wave.


This is somewhat theoretical, because if somebody wants to record a sound using just 0.01% of the dynamic range, he is just doing something wrong.

Or if you have the two above signals simultaneously [let's say the second has a different frequency], then the second is practically inaudible.
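A plain-Python sketch of the low-level quantization point (illustrative only; it ignores dither, which real 16-bit systems use to turn this stepping into benign noise, and the 250Hz/44.1kHz choices are arbitrary):

```python
import math

BITS = 16
print(round(20 * math.log10(2 ** BITS), 1))   # dynamic range: 96.3 dB

def quantize(x, bits=BITS):
    """Round a sample in [-1, 1] to a signed grid with 2**bits levels."""
    full_scale = 2 ** (bits - 1) - 1          # 32767 for 16-bit
    return round(x * full_scale) / full_scale

def distinct_levels(amplitude, n=2000):
    """How many distinct quantized values a 250 Hz sine of the given
    amplitude produces over n samples at 44.1 kHz (no dither)."""
    return len({quantize(amplitude * math.sin(2 * math.pi * 250 * t / 44100))
                for t in range(n)})

print(distinct_levels(1.0))               # full scale: hundreds of levels
print(distinct_levels(10 ** (-90 / 20)))  # 90 dB down: just 3 levels
```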

This thread is locked  