Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Sep 11, 2015 at 9:58 PM Post #1,246 of 3,525
   
To me "rather audible" means that, when I switch back and forth, I hear an obvious difference.
 

 
So then the question is: how are you listening to hear these obvious differences? As both Roly and I have pointed out, the artifacts from *common* decimation/quantization methods in widely available software are way down below the actual level of the music, and the "loudest" parts are way up in the spectrum, past the edge of the commonly held audible range. Thus the only possibilities I can fathom are a) you are cutting out an extremely quiet section of music and jacking up the volume, and also have an upper hearing limit actually near 20k, or b) you are switching between resampling algorithms whose passband features would not be considered flat within the audible range. And there seems to be an underlying assumption that being able to see pre-ringing on a linear-phase low-pass filter immediately implies audibility *at actual music listening levels*, which just isn't the case.
 
No arguments on your last point; that's exactly how I do things for my own ABX: convert to 16/44.1 then go back up to the original hi-res spec.
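For anyone who wants to try that round trip themselves, here is a minimal sketch in Python (the filename, scipy's polyphase resampler, and the flat TPDF dither are my own illustrative assumptions, not anyone's exact workflow). The residual it prints contains both the 16-bit quantization noise and whatever content above 22.05 kHz the filter removed:

```python
# Round-trip a hi-res file through 16/44.1 and back, then null it
# against the source. A sketch only: filename, resampler, and dither
# choices are illustrative assumptions, not a standard procedure.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

x, fs = sf.read("master_24_96.wav")        # hypothetical 24/96 source
assert fs == 96000

y = resample_poly(x, 147, 320, axis=0)     # 96k -> 44.1k (ratio 147/320)

lsb = 1.0 / 32768.0                        # 16-bit step at +/-1.0 full scale
tpdf = (np.random.rand(*y.shape) - np.random.rand(*y.shape)) * lsb
y16 = np.round((y + tpdf) / lsb) * lsb     # dithered 16-bit quantization

z = resample_poly(y16, 320, 147, axis=0)   # back up to 96k for comparison

n = min(len(x), len(z))
resid = z[:n] - x[:n]                      # quant. noise + removed >22.05k content
db = lambda v: 20 * np.log10(max(v, 1e-12))
print("residual peak %.1f dBFS, RMS %.1f dBFS"
      % (db(np.abs(resid).max()), db(np.sqrt(np.mean(resid ** 2)))))
```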
 
Sep 11, 2015 at 10:17 PM Post #1,247 of 3,525
 
But your analogies aren't actually correct.
 
When HD TVs (1920 x 1080) first appeared, it was NOT universally agreed that the extra resolution actually made a significant difference for most customers. Many people in fact argued that there was very little HD content available, and that using "full 1080p HD resolution" on a screen smaller than 30" was a total waste anyway - because nobody could see the difference between 720p and 1080p on a screen that small. However, today, almost every TV of any size is full 1080p HD, and we're having the same argument about 4k.
 
The problem with your argument is that the basic premise is limited. Yes, if there were absolutely reliable proof that frequency response above 20 kHz positively produces no audible difference, then it would be unnecessary (although I'm still not convinced that having a "safety margin" above the bare minimum isn't still a good idea). However, the proof you're offering isn't at all "absolute" or "conclusive". In fact, most of those tests were conducted with inadequately sized sample groups, using obsolete equipment, and frequently using dubious test methodology. The fact that twenty or thirty people, using 1980s-vintage technology and 1980s-vintage recordings, heard no difference is NOT compelling proof that the difference doesn't exist - at least not to me. And, if we were in fact to prove, with properly sized and run tests, that the difference wasn't audible with the best equipment available today, that wouldn't constitute evidence about whether there might be a difference that is audible with the equipment available in twenty years. I simply don't believe that we actually understand 100.0% of how human hearing works; especially since human hearing takes place partly in the brain - and we certainly don't understand anywhere near 100% of how THAT works.
 
(The reality is that there have been several tests run in recent times which tend to suggest that frequency response above 20 kHz can in fact produce audible effects - in different ways and with different implications. The recent AES paper seems to show that a small sample of individuals was able to "beat the odds" in terms of telling whether a given sample was high resolution or not. Another test I recall reading about produced a result that demonstrated that, while the participants didn't hear what they considered to be an audible difference with band-limited content, the location of instruments in the sound stage was perceived as being shifted with the band-limited version, which is in fact "an audible effect". Note that I don't consider either of those results to be "compelling" either but, when balanced against tests run decades ago, with the audio equipment then current, I think they raise enough questions to make it unreasonable to "fall back" on those outdated results as being "absolute facts" without confirmation.)

 


I don't believe that the analogies are incorrect on the basic point. Increased bit depth does increase the quality of video, due to pixelation, but does not increase sound quality in audio beyond 16-bit, because the sound wave is already perfect and all you are doing is increasing the dynamic range and lowering the noise floor. That the extra video resolution of 1080p was not agreed to be an improvement back in the day is neither here nor there; the limitation was largely, as you say, the screen size. With audio, the limitation beyond 16-bit is not the hardware but our ears. Whereas the eye can pick out high-res video because of smaller pixels, there is no pixelation in audio. If you can actually find a home consumer DAC that can actually resolve 24 bits, and have the equipment to go with it, it will not make the sound wave any more perfect than 16-bit digital - or 8-bit digital, for that matter.

The analogy of going beyond 20 kHz in audio with TVs being able to reproduce frequencies into the infrared is valid. In both cases there is no point, as it is outside the human range. I don't disagree that frequencies higher than 20 kHz may produce audible effects, but these are more likely to be distortions in the stereo system reacting to ultrasonic content. There is documented evidence that 24/192 can cause distortion in some stereos for exactly this reason.

The bottom line is that you appear to be challenging the established facts that humans can't hear well above 20 kHz and can't resolve a dynamic range greater than what 16 bits provides. This is a bit like challenging the notion that humans are unable to pick up sonar signals like dolphins and bats. Surely the burden of proof is on those making the claim. And using special pleading such as "we do not know 100% how human hearing works" to justify the claim that we can hear ultrasonic sound waves, or have machine-like resolution, is a well-known ploy used in most pseudosciences.
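That "distortion in some stereos" mechanism (intermodulation when ultrasonic content hits a nonlinear amplifier or tweeter) is easy to sketch numerically. The polynomial "amplifier" below is a toy model I made up purely for illustration, not a measurement of any real system:

```python
# Toy demonstration: two ultrasonic tones through a weakly nonlinear
# stage create an intermodulation product inside the audible band.
# The polynomial "amplifier" is an arbitrary made-up model.
import numpy as np

fs = 192000
t = np.arange(fs) / fs                       # 1 second at 192 kHz
x = 0.5 * np.sin(2 * np.pi * 30000 * t) + 0.5 * np.sin(2 * np.pi * 33000 * t)

y = x + 0.05 * x**2 + 0.05 * x**3            # mild 2nd/3rd-order distortion

spec = np.abs(np.fft.rfft(y)) / len(y)
f = np.fft.rfftfreq(len(y), 1 / fs)
tone = lambda hz: spec[np.argmin(np.abs(f - hz))]
print("difference tone at 3 kHz: %.1f dB below the 30 kHz carrier"
      % (20 * np.log10(tone(30000) / tone(3000))))
```

The 33 kHz minus 30 kHz difference tone lands at a clearly audible 3 kHz, even though neither source tone is audible on its own.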
 
Sep 11, 2015 at 11:04 PM Post #1,248 of 3,525
I don't believe that the analogies are incorrect on the basic point. Increased bit depth does increase the quality of video, due to pixelation, but does not increase sound quality in audio beyond 16-bit, because the sound wave is already perfect and all you are doing is increasing the dynamic range and lowering the noise floor. That the extra video resolution of 1080p was not agreed to be an improvement back in the day is neither here nor there; the limitation was largely, as you say, the screen size. With audio, the limitation beyond 16-bit is not the hardware but our ears. Whereas the eye can pick out high-res video because of smaller pixels, there is no pixelation in audio. If you can actually find a home consumer DAC that can actually resolve 24 bits,

This is the best point of all. Namely, there aren't any DACs which ACTUALLY resolve past 20 or at most 21 bits anyway, folks. They may be able to receive and process signals at 24 or 32 bits, but they can't actually fully process and output anything beyond the first 21.
 
The other thing is the difference between resolution in video and sampling in audio. They are fundamentally different. Video with pixels is always going to be a DISCRETE approximation of what is (ignoring the quantum-mechanical scale, at least) a CONTINUOUS phenomenon in the real world. Audio sampling is different: while it also uses a discrete approximation of a continuous phenomenon, the waveform can be reproduced EXACTLY within the audible frequency range from that approximation. This is due to the Nyquist sampling theorem, which says that any waveform containing no frequency at or above some value f can be perfectly reconstructed, without any error at all, from samples taken at a rate of 2f. That is why a 44.1 kHz sampling rate is able to perfectly reproduce audio up to (just over) 22 kHz in frequency, for example.
 
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
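A quick numerical illustration of the theorem (my own sketch, with arbitrary tone frequencies below 20 kHz; a finite truncated-sinc sum is used, so the error is merely tiny rather than the exactly-zero error of the theorem's infinite sum):

```python
# Numerical check of the sampling theorem: tones below 20 kHz, sampled
# at 44.1 kHz, are rebuilt BETWEEN the samples by sinc interpolation.
import numpy as np

fs = 44100
n = np.arange(-4000, 4000)                                  # sample indices
sig = lambda t: 0.6 * np.sin(2 * np.pi * 997 * t) \
              + 0.3 * np.sin(2 * np.pi * 17000 * t + 0.7)   # band-limited test signal
x = sig(n / fs)                                             # the discrete samples

t_new = (500 + 1000 * np.random.rand(50)) / fs              # arbitrary between-sample times
recon = np.array([np.sum(x * np.sinc(fs * t - n)) for t in t_new])

print("max reconstruction error: %.2e" % np.abs(recon - sig(t_new)).max())
```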

Anyone proposing that there is an actual audible difference from sample rates higher than 44.1 kHz is fundamentally misunderstanding the Nyquist sampling theorem and how sampling in Fourier analysis even works, mathematically speaking. Such misunderstanding makes sense, given that the math is fairly high-level and well past the high-school precalculus or calculus most people top out at. Higher sampling rates are only better if you want extra "room" for DSPs to play around with the waveform during mastering, thanks to the accurate reproduction of much smaller wavelengths, i.e. frequencies above 22 kHz. In terms of the actual PLAYBACK of the music, they simply cannot, mathematically speaking, produce an audible difference within the range of human hearing.
 
Bit-depth is a more complicated issue, but in the end it comes down to something similar: our ears don't hear as precisely as our eyes see. So, unlike the difference between 16-bit and 24- or 32-bit color in video (where our eyes can clearly see the difference in the range of colors and their relative brightness levels), the actual maximum dynamic range of human hearing when listening to a significantly audible signal (which, note, is different from the dynamic range when suddenly hearing noises after an extended time in a completely quiet environment) can be completely reproduced by 16-bit or PERHAPS, in the most extreme cases, 20-bit audio. And the extreme cases don't even apply to music: it is simply impossible to find music with a dynamic range beyond about 25 to 30 decibels at most, and the vast majority of music has less than 15 decibels of dynamic range.
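For the arithmetic behind that, the standard rule of thumb for the quantization signal-to-noise ratio of a full-scale sine is about 6.02 dB per bit plus 1.76 dB:

```python
# Rule-of-thumb quantization SNR for a full-scale sine: 6.02*N + 1.76 dB.
for bits in (16, 20, 21, 24):
    print("%2d-bit: about %5.1f dB" % (bits, 6.02 * bits + 1.76))
# 16-bit:  ~98 dB -- already dwarfs the ~15-30 dB dynamic range of music
# 24-bit: ~146 dB -- wider than the whole threshold-of-hearing-to-pain span
```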
 
It's too bad this won't convince people, though. They'll still be convinced that they are hearing a difference with 32/192 audio compared to 16/44.1. The placebo effect and expectation bias are very strong things when it comes to human perception.
 
Sep 12, 2015 at 1:35 AM Post #1,249 of 3,525
I just bit the proverbial bullet and BOUGHT the AES paper (AES Convention Paper 9174). (Apparently AES papers remain copyrighted, and cannot be reprinted for free, which is why everyone is talking about the paper but not many people seem to have read it.) Note that the title of the paper refers to "filters", but what they're testing is whether the necessary application of band-limiting, as applied to CD content, is audible. Basically, for the study, they took some 24/192k content and band limited it using filters equivalent to those you would use to record CD and DVD audio content (filtered to cut it off at the Nyquist frequencies of 22 kHz and 24 kHz respectively). The results were that the subjects WERE in fact able to tell the "CD version" from the "high-def version" - with a reliability that exceeded random chance (in general, it was between 56% and 66%, which may not be overwhelming, but is statistically better than random). The study also found that the difference was more audible with certain passages than with others, and included some interview data (taken afterwards) where the subjects explained subjectively the differences they claimed to hear (which do coincide with the differences many audiophiles also claim to hear).
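Whether 56% to 66% correct genuinely "beats the odds" depends entirely on how many trials were run. A quick way to check significance (the trial counts below are made-up examples, not the paper's actual numbers):

```python
# How convincing is ~60% correct? It depends entirely on trial count.
# Trial numbers here are made-up examples, not the paper's actual data.
from scipy.stats import binomtest

for n in (20, 60, 160):
    k = round(0.60 * n)                    # ~60% correct answers
    p = binomtest(k, n, 0.5, alternative="greater").pvalue
    print("%d/%d correct: one-sided p = %.3f" % (k, n, p))
```

With 20 trials, 60% correct is entirely consistent with guessing; only with large trial counts does that hit rate become statistically meaningful.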

If you are a member of the AES you can download 25 papers this month for free. Having read the paper, I think the test was pretty well done. They give enough detail that you could try to replicate it, although they do not provide much detail on the MATLAB filters. There are a few issues I see. First of all, they reduced the bandwidth and the resolution at the same time; I would have liked to see them change one variable at a time. They also did not filter the audio at 96k to see whether the filter they created was itself audible. If a 96k filter were audible, then it is not the bandwidth limit that is audible but the filter. In this test, though, the purpose was to test the audibility of typical downsampling and bit reduction.
 
I have purchased the tracks they used, and since I am testing some large studio monitors that go out to 40k, I plan to test the same tracks. I will use iZotope to process them and see if I can repeat the results.
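One way to implement the "change one variable at a time" suggestion: apply the same style of steep FIR filter to the 96k files, but with the corner moved up near the 96k Nyquist, so essentially no program content is removed. A sketch (filter length and corner frequency are my own guesses, and the filenames are hypothetical):

```python
# Apply a CD-style steep FIR low-pass, but with its corner up near the
# 96k Nyquist, so essentially no content is removed. If *this* version
# is ABX-able against the original, the filter itself (not the 22 kHz
# bandwidth limit) is what's being heard.
import soundfile as sf
from scipy.signal import firwin, lfilter

x, fs = sf.read("track_24_96.wav")         # hypothetical 24/96 source
assert fs == 96000
taps = firwin(1023, 46000, fs=fs)          # linear-phase FIR, corner ~46 kHz
y = lfilter(taps, [1.0], x, axis=0)        # adds only a constant 511-sample delay
sf.write("track_filter_only.wav", y, fs, subtype="PCM_24")
```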
 
Sep 12, 2015 at 4:47 AM Post #1,250 of 3,525
Most people discuss the difference in the audible frequency range. Personally, I think 16/44.1 is far too inaccurate for 3D sound - both for amplitude and for timing.
 
Given that I can pinpoint the sound trailing a plane high in the sky, a quick look at the math needed to do so strongly supports the idea that 16/44.1 is insufficient for that. That in turn implies that human hearing is more accurate than 16/44.1.

 
 
Damn good thing that stereo is not 3D, then. It has never claimed to be.
 
Imagination plays a huge, valid part in listening to music, just as it does in reading a book. It is supposed to happen. The engineers may be able to suggest depth by varying levels, and even timing, but there is no up/down pan on the control board.
 
Congratulations on your imagination: you are using it in exactly the right way!
 
When I was a very young child, I preferred radio to TV. According to me, the pictures were better.
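On the timing claim quoted above: interaural timing cues can be as fine as ~10 µs while the 44.1 kHz sample period is ~22.7 µs, but sampling does not quantize the timing of band-limited signals to the sample period. A sketch that recovers a 5 µs delay, a fraction of one sample, straight from 44.1 kHz samples (the least-squares sine fit is simply the easiest estimator for a demonstration):

```python
# A 2 kHz tone delayed by 5 us (about a quarter of a sample period at
# 44.1 kHz) is sampled, and the sub-sample delay is recovered exactly
# from the samples via a least-squares sine fit.
import numpy as np

fs, f0, tau = 44100, 2000.0, 5e-6            # tone frequency, true delay
t = np.arange(4410) / fs                     # 0.1 s of samples
x = np.sin(2 * np.pi * f0 * (t - tau))       # the sampled, delayed tone

# Fit x ~ a*sin(2*pi*f0*t) + b*cos(2*pi*f0*t); the phase encodes the delay.
A = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
a, b = np.linalg.lstsq(A, x, rcond=None)[0]
print("recovered delay: %.3f us" % (-np.arctan2(b, a) / (2 * np.pi * f0) * 1e6))
```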
 
Sep 12, 2015 at 1:20 PM Post #1,251 of 3,525
This is the best point of all. Namely, there aren't any DACs which ACTUALLY resolve past 20 or at most 21 bits anyway, folks. They may be able to receive and process signals at 24 or 32 bits, but they can't actually fully process and output anything beyond the first 21.
...

 
And they should not. The threshold of pain is at about 120 dB.
 
https://en.wikipedia.org/wiki/Audio_bit_depth
 
Still does not change much, but sure raises a lot of new questions.
 
Great point.
 
It seems to me that the human ability to hear is not where the problem is, but rather the complexity of digital sound reproduction, including jitter and noise effects. This is going nowhere, as the difference going HD is clearly heard by many.
 
Sep 12, 2015 at 1:26 PM Post #1,252 of 3,525
   
It seems to me that the human ability to hear is not where the problem is, but rather the complexity of digital sound reproduction, including jitter and noise effects. This is going nowhere, as the difference going HD is clearly heard by many.

 
Clearly heard by many when they know the track is HD. Curiously the effect seems to disappear when blinders are put on and they can only use their ears.
 
Sep 12, 2015 at 2:48 PM Post #1,253 of 3,525
   
Clearly heard by many when they know the track is HD. Curiously the effect seems to disappear when blinders are put on and they can only use their ears.


exactly

 
Sep 14, 2015 at 10:14 AM Post #1,254 of 3,525
 
   
>>>>>>>>>>snip snip

The bottom line is that you appear to be challenging the established facts that humans can't hear well above 20 kHz and can't resolve a dynamic range greater than what 16 bits provides. This is a bit like challenging the notion that humans are unable to pick up sonar signals like dolphins and bats. Surely the burden of proof is on those making the claim. And using special pleading such as "we do not know 100% how human hearing works" to justify the claim that we can hear ultrasonic sound waves, or have machine-like resolution, is a well-known ploy used in most pseudosciences.

 
Just to be clear - I'm not actually challenging anything one way or the other - because I haven't run a properly controlled test (and, again, even if I personally couldn't hear a difference, that wouldn't prove that nobody can). My main point is that "the established science" may simply not be right. Five hundred years ago, the established science they taught in school was that the Earth was flat, and tomatoes were poisonous; now we know better. When I went to high school, they taught in science class that all matter was made up of protons, neutrons, and electrons - which were the smallest indivisible "pieces" of matter; and that model was good enough to bring us nuclear power plants and the fusion bomb; but now we find that notion quaint, and there's an active debate about whether matter is "really" vibrating 11-dimensional energy strings, or a collection of smaller particles called quarks, or something not quite either one. And, the last time I looked, we still don't know exactly how the human brain works (and "hearing" occurs in both the ears and the brain).
 
Incidentally, for an interesting experiment, go buy yourself one of those new souped-up half-watt LASER pointers that operates at 720 nm or 840 nm; that's the "invisible infrared" region used by a lot of remote controls, and a LASER puts out a very clean single frequency. Shine the dot somewhere and you will probably find that the "invisible" dot is in fact clearly visible; I can see it quite clearly as a pale pink - and so can most people. So I guess the "science" about IR light being "invisible" is wrong too. (Actually, in order to be visible to most of us, it has to be so bright that it is somewhat dangerous to look at for more than a few seconds, but my point stands - the "commonly accepted fact" is in fact wrong. And, in fact, a TV that was actually able to display long-wave IR, and so make the bright sun in a picture of the desert actually feel warm on your face, would - at least to me - have much better fidelity than the one I have now.)
 
I don't know for sure whether the difference between 16/44k and 24/192k is audible - everything else being exactly equal - but I'm absolutely positive that I don't trust the "truth" as "discovered" by scientists back when most audiophiles were certain that a Dynaco Stereo 70 and Koss Pro4AAs "sounded audibly perfect" because both "covered the entire audible spectrum". And, with many modern DACs offering selectable filters, there are differences that many people find audible, which seem to coincide with different sample rates and different filter responses producing audible differences. Perhaps there's something there; or perhaps what we're hearing is simply that a given DAC handles 16/44k differently than it handles 24/192k - because it uses a different oversampling multiple; and perhaps the endless discussions in one or two pro sound forums about how certain sample rate converters sound better or worse with certain types of music are all superstition as well (audiophiles have nothing on pros for superstitious beliefs). However, I'm not quite prepared to say that "audio science is at its end because there's lots of equipment available today that's audibly perfect, so there's nothing to improve."
 
Personally, since the science shows clearly that high-res files are in fact superior in quality (frequency response and dynamic range) - whether that superiority is audible or not - then to me that's enough justification for continuing to improve things.... and for studying whether those technical improvements lead to some sort of audible improvements. I can also say that, personally, I'm willing to pay a bit extra for a technical improvement even if that improvement doesn't yield anything that's currently important - or even noticeable. (If it turns out that nobody can hear the difference, that still won't prove that the extra information that's there won't be useful to some new "3D decoder" someone comes out with next year, or some other gadget neither of us can guess at, and so won't prove it "totally useless".) I also simply see the latest "fad" for high-res remasters as being generally a good thing - because at the very least it encourages people to listen to music carefully enough that they are actually hearing it. (I'd rather see people spending money on high-res players that don't sound different than on cheesy 128k MP3 players which they imagine "don't sound much different" - because the latter is a slippery slope I'd rather avoid approaching.) 
 
Now, if you want to start a new thread entitled "What is the best and most practical sample rate and bit depth to use for distributing consumer music?" then I might well be inclined to agree with you on a lot more things.
 

 
Sep 14, 2015 at 10:30 AM Post #1,255 of 3,525
If you are a member of the AES you can download 25 papers this month for free. Having read the paper, I think the test was pretty well done. They give enough detail that you could try to replicate it, although they do not provide much detail on the MATLAB filters. There are a few issues I see. First of all, they reduced the bandwidth and the resolution at the same time; I would have liked to see them change one variable at a time. They also did not filter the audio at 96k to see whether the filter they created was itself audible. If a 96k filter were audible, then it is not the bandwidth limit that is audible but the filter. In this test, though, the purpose was to test the audibility of typical downsampling and bit reduction.
 
I have purchased the tracks they used, and since I am testing some large studio monitors that go out to 40k, I plan to test the same tracks. I will use iZotope to process them and see if I can repeat the results.

 
I agree there... there are many different filter types and options - and they only tested one of them (and proving that a single filter is audible doesn't prove that others are). I would also have liked it if they had included a few more types of test equipment (since they basically used one DAC and one pair of speakers). I've always found electrostatic headphones to be the best thing for picking out minute audible differences, so that would have been my choice there.
 
As for the accusations that the authors "had an agenda".... personally I'm inclined to say that, if they did, I saw no evidence to that effect. (To put it bluntly, while the test produced a "statistically significant" result, it would hardly serve to convince people that there's some sort of significant and obvious difference worth paying for. In fact, it tended more to suggest that the difference was there, but was rather minor and difficult to hear.)
 
I have to admit that I'm at a bit of a disadvantage here in that classical music isn't what I normally listen to, so I personally would not be the most likely person to notice whether the sounds of specific instruments, or the sound of the ambiance in a real concert hall, were or were not "rendered accurately".
 
Sep 14, 2015 at 12:24 PM Post #1,256 of 3,525
 ... whether the sounds of specific instruments, or the sound of the ambiance in a real concert hall, were or were not "rendered accurately".

can't expect recorded music to be "accurate" to start with - the art and illusion start with microphone choice and positioning and continue throughout the mastering process - pianos are notoriously different-sounding in recordings
 
the most you can say is whether you like one or another set of recording choices - I would expect it to be a bad idea to listen for what recordings deliberately manipulate "to taste" when trying to distinguish digital audio sample rate
 
Sep 14, 2015 at 12:55 PM Post #1,257 of 3,525
Just to be clear - I'm not actually challenging anything one way or the other - because I haven't run a properly controlled test (and, again, even if I personally couldn't hear a difference, that wouldn't prove that nobody can). My main point is that "the established science" may simply not be right. Five hundred years ago, the established science they taught in school was that the Earth was flat,
>>>>>>>>>>snip snip

Sorry Keith, but this old bollocks always touches a nerve with me, and it's amazing how many otherwise knowledgeable people will trot it out. The truth is that the only reason "the established science" was that the earth was flat was the insistence of the church, and in those days, if your views ran counter to the church, nasty, painful things tended to happen to you. Excommunication was probably the best you could hope for; at least then the church ignored you!

Science established that the earth was anything other than flat centuries before the church intervened. No Greek writer after about 500 BC considered the earth anything other than non-flat; an Egyptian "scientist" had calculated the diameter to within 2%, iirc, centuries before that; and the Phoenicians, being a seafaring nation, had guessed it from ships disappearing over the horizon. Even the heliocentric, as opposed to the geocentric, solar system had been proposed centuries BC. All that science was "lost" as a result of religious belief, which threw Europe into what we now call the Dark Ages. But of course it's easier to blame religious ignorance on science, isn't it?

A pi** poor example imo.
 
Sep 14, 2015 at 3:25 PM Post #1,258 of 3,525
Sorry Keith, but this old bollocks always touches a nerve with me, and it's amazing how many otherwise knowledgeable people will trot it out. The truth is that the only reason "the established science" was that the earth was flat was the insistence of the church, and in those days, if your views ran counter to the church, nasty, painful things tended to happen to you. Excommunication was probably the best you could hope for; at least then the church ignored you!

Science established that the earth was anything other than flat centuries before the church intervened. No Greek writer after about 500 BC considered the earth anything other than non-flat; an Egyptian "scientist" had calculated the diameter to within 2%, iirc, centuries before that; and the Phoenicians, being a seafaring nation, had guessed it from ships disappearing over the horizon. Even the heliocentric, as opposed to the geocentric, solar system had been proposed centuries BC. All that science was "lost" as a result of religious belief, which threw Europe into what we now call the Dark Ages. But of course it's easier to blame religious ignorance on science, isn't it?

A pi** poor example imo.

 
Not even the church has ever held the idea of a flat earth as official doctrine - mainly, I suspect, because a spherical earth didn't challenge the idea of mankind's central position in god's creation.
 
The "Egyptian" you're thinking of is Eratosthenes - a Greek working in Alexandria - who was ~16% off in his calculations.
 
Sep 14, 2015 at 3:39 PM Post #1,259 of 3,525
Not even the church has ever held the idea of a flat earth as official doctrine - mainly, I suspect, because a spherical earth didn't challenge the idea of mankind's central position in god's creation.
 
The "Egyptian" you're thinking of is Eratosthenes - a Greek working in Alexandria - who was ~16% off in his calculations.


16% off, what a noob!
 
 
 
 
I'm pretty sure I wouldn't be that precise if I had to say how far away my car is parked. Humans really kick ass (well, some of them).
 
Sep 14, 2015 at 3:46 PM Post #1,260 of 3,525
Sorry Keith, but this old bollocks always touches a nerve with me, and it's amazing how many otherwise knowledgeable people will trot it out.
>>>>>>>>>>snip snip

A pi** poor example imo.

 
Actually I disagree.
 
If you actually try to find information about "the frequency range of human hearing", you will find the subject mentioned in a lot of books... and most of them seem to agree that everybody else agrees that "the commonly accepted range of human hearing is 20 Hz to 20 kHz". At one point I tried to research exactly where that number came from, and I reached a point where most books were simply quoting other books, or saying that it was "commonly accepted". Now, while it may in fact be true, I am always leery of things that "everybody knows" but nobody seems to want to quote the original research to substantiate. (So, while most people in the Middle Ages "knew" the world was flat because their priest said so, a lot of people today just seem to accept that "it's commonly known that the limit of human hearing is 20 Hz to 20 kHz", which to me seems a lot like the same blind acceptance of presumed authority. While I found plenty of books and references that state the limits of human hearing as "commonly accepted", I entirely failed to find mention of an actual test, performed by an author of one of those books, to confirm this "well known" information.) Considering that, in the last twenty or thirty years, a lot of "commonly known facts" have turned out NOT to be true, I'm not willing to simply accept this one at face value just because it's been repeated a lot - for a very long time.
 
Note that I have run across several references where individuals state that "they have found this to be true" - which is clearly anecdotal. I've also found one reference saying that "human hearing extends down to 12 Hz under laboratory conditions"; one or two others claiming to have "detected response to frequencies well above 20 kHz in humans under some circumstances"; and at least one other that claimed to have shown that test subjects heard differences in samples that were band-limited to 20 kHz when compared to those that weren't (that study seemed to show that limiting the bandwidth to 20 kHz caused a shift in the perceived location of some instruments in the sound stage).
 
In short, I don't think that "fact" rises anywhere near the level of certainty necessary to justify using it to claim that further research is pointless or unnecessary.
 
 
 

 
 

 
