Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Sep 15, 2015 at 12:30 PM Post #1,276 of 3,525
I don't doubt people exist who can hear 20kHz or maybe even a bit beyond. I'm sure my son hears higher than I do. "People can't hear over 20kHz" is just shorthand for the much longer, but more correct statement that would go something like:
"A large percentage of adult human beings have an in-lab frequency hearing limit below 20kHz. The amplitude at which they can hear a tone at their limit will be substantially higher than the amplitude needed to hear tones in the main audible range, and thus their effective hearing limit at normal music-listening volumes will be lower than their lab value."
 
You seem to want absolute truth, but that just isn't something any statistical procedure can deliver. If there were, at any point in time, 0.0001% of humans who could benefit from hi-res given their particular music-listening situation, it's doubtful we're ever going to put in the $$ to figure that out statistically. And even if we did find one of these guys, we're still going to have some false-positive rate on the test, so we can never even be sure. Even this Meridian experiment with filters, if it is all legit, still has some possibility that all the positive results were by chance. The best one can do in a statistical environment is have your own standards on errors and verify, to the limits of your ability, the assumptions of your test, statistical or otherwise.
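(To put a number on that false-positive point: a minimal Python/scipy sketch, with a made-up trial count and pass threshold rather than any standard protocol, of how often a pure guesser would "pass" a forced-choice listening test.)

```python
# Probability that someone guessing randomly still "passes" an ABX-style
# forced-choice listening test. Trial count and threshold are made up.
from scipy.stats import binom

n_trials = 16        # forced-choice trials; chance of a lucky hit is 0.5
pass_threshold = 12  # correct answers required to "pass"

# P(X >= 12) where X ~ Binomial(16, 0.5): survival function at 11
p_false_positive = binom.sf(pass_threshold - 1, n_trials, 0.5)
print(f"False-positive rate: {p_false_positive:.4f}")  # ~0.038

# Screen 1000 pure guessers and you'd expect ~38 "golden ears" by luck alone
print(f"Expected lucky passes per 1000 guessers: {1000 * p_false_positive:.0f}")
```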
 
Sep 15, 2015 at 12:42 PM Post #1,277 of 3,525
Quote: "I don't doubt people exist who can hear 20kHz or maybe even a bit beyond. [...]" (post #1,276 above, quoted in full)

 
I agree absolutely. (If no difference was audible, you could conclude that neither the sample rate nor the filters made an audible difference; but, if a difference is audible, I can't think of a way to eliminate the possibility that it's due to whatever sample rate conversion process was used. And, even if you were to record the same analog original using two identical recorders set to different sample rates, how could you rule out the possibility that a recorder performs differently in other ways at each sample rate, or that the DAC you're using does?)
 
My point is that a lot of people posting on this thread seem to believe that "nobody can hear a difference between different sample rates - because, in order to do so, they would have to be able to hear past 20 kHz, which no human can do". That seems like an absolute statement to me - which we both seem to agree would be impossible to prove.
 
Once we get past that sticking point, I'm quite prepared to agree with you that most listeners, using most equipment, and listening to most samples, are quite unlikely to be able to detect a difference. At which point I will also agree that, as far as an individual is concerned, it's up to them whether they want to test themselves, or buy high-res files on the chance that they can hear a difference, or simply decide whether to "buy the best - just in case" or "not pay extra for something they probably can't hear".
 
As I've mentioned before, I'm inclined to "buy the best just in case", and I've found that many high-res remasters do sound better, which, to me, justifies the few extra bucks - although quite possibly the difference isn't due to the higher sample rate itself.
 
Sep 15, 2015 at 1:05 PM Post #1,278 of 3,525
   
Quote: "My point is that a lot of people posting on this thread seem to believe that 'nobody can hear a difference between different sample rates'... [...]" (quoted from post #1,277 above)

 
I think people are also thinking about audibility in the context of music, and thus being able to hear 20kHz only at, say, +50dB over where you hear 1kHz is probably not going to cut it.
 
The issue isn't that hi-res exists, it's the underlying sales pitch that "oh, NOW we can deliver you good masters because of hi-res," which is bunk.
 
Sep 15, 2015 at 1:40 PM Post #1,279 of 3,525
I know I start losing it at 16.5kHz, so I feel pretty confident when making claims about my own hearing ^_^. and I use mostly IEMs that roll off like mad before 15kHz. and I get an audible noise floor from most amp sections that sits far above 16-bit quantization noise. so high res is useless to me, I can really make that claim. I lack all the circumstances that might create an audible difference.
 
the little problem with keeping an open mind and accepting people who say they can hear a difference (and to be honest, for a great number of reasons, I'm sure a lot of people do hear "a" difference) is that the vast majority of people making the claim happen to have never tested it under controlled conditions, and better still, are usually strongly opposed to blind testing. it gives off a bad feel of "officer, I'm not lying, but I refuse to take a lie detector test!".
when on the other hand, the people saying they can't hear a difference happen to be mostly the ones accepting blind testing and the need to remove bias. that's what makes me believe almost nobody hears a difference. not human hearing, not the number of people on each side, but how the people ignoring controls tend to rush toward the same side of the argument. I guess it's a bias of its own for me ^_^.
 
outside of blind testing, do I believe I hear a difference? absolutely!!!  but then again I feel that music is different after I take a ****. I'm the difference, not the music.
I often talk about how I'll EQ something for a minute, only to realize at the end that I had the EQ bypassed. that's the kind of stuff I experience all the time without controls: differences from suggestion instead of actual differences.
I wouldn't trust myself after a sighted evaluation, so how could I trust some random guy on the net? I cannot.
so I wouldn't say nobody can tell the difference, because I don't know that. but I can say I have massive trust issues, and rejecting controlled testing is enough for me to believe the guy is a joke. does that make me narrow-minded?
 
 

 
Sep 15, 2015 at 3:43 PM Post #1,280 of 3,525
Quote: "I know I start losing it at 16.5kHz, so I feel pretty confident when making claims about my own hearing ^_^ [...]" (post #1,279 above, quoted in full)
 
 
 


One thing I love about my Fiio X3ii is that it never seems to have ANY audible noise floor with ANYTHING! :)  Also, your hearing and mine apparently top out at exactly the same frequency, 16.5kHz!  Haha.

LMAO music is different after you take a ****, huh?  Maybe due to a resonance in your intestines?  AHAHAHAHA, I can't stop laughing after reading that, man you always manage to crack me up, castleofargh >_<
 
It's not narrow-minded to not take folks seriously who reject controlled or blind testing.  Those people are the narrow-minded ones who are being ignorant.  Because when it comes to evaluating stuff like how we perceive the physical world around us (I made sure to include the word "physical" in order to preclude any potential consideration of spiritual perceptions, of course) the only thing we can really rely on is the Scientific Method!  Anything that is "established" without the use of proper scientific control is absolutely bunk, really.
 
I've had the same experience as you with EQ... set up a custom EQ curve, then played some music and was like "dude I can totally hear more bass now," or "wow the mids are so much more forward, NICE," only to realize 30 minutes later when I went to change the EQ again that I had it bypassed the whole time.  I heard more bass or mids because I WAS LISTENING FOR MORE BASS OR MIDS.  Hahahahahahaha.  The power of suggestion and placebo is very, very strong when it comes to human hearing.  "ERMAHGERD HI-RES SOUNDS LIKE, SO MUCH BETTER, MAAAAN!"  Yeah, sure, when you KNOW it's high-res!  I'm not taking a hit off the communal high-res hookah/bong/blunt/whatever, sorry guys.  And neither is castleofargh, apparently *high five*
 
Sep 15, 2015 at 10:32 PM Post #1,281 of 3,525
   
Quote: "I think people are also thinking about audibility in the context of music... [...]" (quoted from post #1,278 above)

I think that is the main point.  I have no doubt that there are young people who can hear above 20kHz.  The last time I saw an audiologist he told me of one kid who could hear up to 23kHz, but two things stand out - they are very young and they are outliers.
 
Even if you could hear up to 20kHz, the sound would be so faint that it would be masked by other music content.  It is the same concept in regard to 16-bit vs. say 20-bit.  The noise floor of 16-bit is very low but still audible if you pick a silent passage and turn the stereo up loud (assuming no dithering has been applied).  It is, however, masked by music content, so you wouldn't pick it out under any normal listening conditions.
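(For reference, the textbook figure for an ideal undithered quantizer is roughly 6.02 x bits + 1.76 dB below full scale; a quick sketch, with the caveat that real converters and dithered masters behave differently:)

```python
# Theoretical noise floor of an ideal N-bit quantizer (full-scale sine,
# no dither): SNR ~= 6.02*N + 1.76 dB. Real, dithered audio differs.
import math

def quantization_snr_db(bits: int) -> float:
    # 20*log10(2**bits) + 10*log10(1.5) == 6.02*bits + 1.76
    return 20 * math.log10(2 ** bits) + 10 * math.log10(1.5)

for bits in (16, 20, 24):
    print(f"{bits}-bit: noise floor ~{quantization_snr_db(bits):.1f} dB below full scale")
# 16-bit: ~98.1 dB; 20-bit: ~122.2 dB; 24-bit: ~146.2 dB
```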
 
What is more relevant is our ability to discriminate within the music content as we age.  The effects of masking increases as we get older.  In healthy ears we normally do not notice the deterioration except in subtle ways such as needing to turn up the TV louder to better hear dialogue over background music, or more concentration required to follow a conversation in a noisy background such as in parties.  If we could instantly turn back the clock to how our hearing was when we were say 16, we'd be blown away with the extra clarity and detail in the music. No amount of hi res would compensate for this progressive loss in hearing detail.
 
I hope that in part this illustrates why I believe that all this attention to hi res audio is futile.  Perhaps not in our time, but I think the biggest breakthrough in hi fidelity will be when genetic engineering allows a regeneration of our hearing to what it was when we were children or very young adults.  Who knows, the bio technology may even allow us to hear like dogs and then appreciate 24/96 playback.
 
Sep 15, 2015 at 11:02 PM Post #1,282 of 3,525
   
Quote:
Actually I think the example of a Stereo 70 is absolutely pertinent. First, if you look around, you will see many claims that "there is no audible difference between tube and solid state electronics as long as the frequency response and THD remain below audible limits" - and the Stereo 70 would fit the criteria stated in those claims for "a tube amp that shouldn't sound audibly different from a solid state amp of equivalent power as long as you don't overload either one".

However, my real point there was that, when most of the tests most people reference were actually performed, those were both "the latest equipment"... and, back when the Stereo 70 was current, a lot of people did in fact claim that "there was no point in doing any further development because it was plenty good to satisfy the abilities of human hearing" - and most of us no longer consider that claim to be true. In fact, that same claim has been made for tube amplifiers, vinyl recordings, cassette recordings, open reel recordings, and CDs... but opinions of whether it is true or not for each of them have changed over time.

Perhaps, twenty years from now, people will look back and say "they were right - and CDs really are good enough to sound perfect within the limits of human hearing", but I'm not convinced about that - at least not yet. (Perhaps, instead, everyone will own a $20 pair of headphones - or some other sort of technology for listening to music entirely - through which the difference between CDs and high-res files is obvious. I read one interesting, but somewhat vague, paper claiming that humans had been confirmed to be able to hear well above 20 kHz using bone conduction rather than through-the-air conduction - which bypasses the mechanisms of the middle ear, which they claimed was "what limited human hearing to 20 kHz".)
 
As for DACs, I agree that no well-designed DAC should have high enough noise or distortion, or a frequency response far enough off-flat, that it should be audible. However, their transient responses can vary considerably depending on how their filter is designed, and I don't recall anyone doing any definitive tests about whether that is audible or not. And transient response is generally shown with an oscilloscope trace picture - so there is no single commonly accepted "spec" to compare. (There seems to be general agreement that time errors become audible at some point - but nobody seems to agree on where that line would be.)
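(To illustrate that transient-response point: a minimal scipy sketch, with made-up filter parameters rather than any actual DAC's filter, comparing a linear-phase brick-wall lowpass, which rings both before and after an impulse, against a minimum-phase filter with the same magnitude response, which rings only afterward.)

```python
# Linear-phase vs minimum-phase lowpass: same magnitude response, very
# different transient behavior. Illustrative parameters, not a real DAC.
import numpy as np
from scipy.signal import firwin, minimum_phase

fs = 44_100
lin = firwin(255, cutoff=20_000, fs=fs)  # linear-phase FIR, ~20 kHz cutoff
minph = minimum_phase(lin)               # minimum-phase equivalent

def pre_ring_fraction(h: np.ndarray) -> float:
    # Fraction of impulse-response energy arriving BEFORE the main peak
    peak = int(np.argmax(np.abs(h)))
    return float(np.sum(h[:peak] ** 2) / np.sum(h ** 2))

print(f"linear-phase pre-ringing energy:  {pre_ring_fraction(lin):.3f}")   # ~0.5
print(f"minimum-phase pre-ringing energy: {pre_ring_fraction(minph):.3f}") # ~0
```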
 
I definitely agree with what I consider your most important point - which (to me) is that the single biggest issue with most modern recordings is the mastering itself. Very few modern CDs are produced well enough that they sound anywhere near as good as the format is capable of. I also agree that, for most people, speakers and the acoustics of the room they're located in also probably make a much bigger difference.
 
I do, however, disagree with what I guess would be the logistics of a few of your other statements.
 
Assuming that "the music industry" was monolithic, I agree that I would rather see money spent on better production values and mastering than on higher resolution. However, the production industry is not monolithic. The companies selling DACs are not the same companies who are producing albums. And the choice of whether to deliver a given master at 16/44k or 24/192k is merely a matter of picking a different setting (or, at worst, buying one new piece of equipment). In short, I don't see producing content at a higher resolution as "diverting funds from anywhere else". I also believe that the current obsession with high-res content, even if it turned out to be technically meaningless, is still "a step in the right direction", because at the very least it encourages people to pay attention to the technical aspects of the music they're listening to. (Given the choice, I'd rather have consumers wondering whether 24/192k sounds better rather than wondering if 128k MP3 files are "good enough for them". So I see the trend of simply paying attention to the production quality of the music as a good thing.) In other words, perhaps, if people really are paying attention to what the music they're buying sounds like, and are a little more demanding when they are asked to pay extra for a "high-res version", that will in fact encourage the industry to use better production values all around. (But I do agree that it won't help if people start assuming or imagining that the new version is better because it's high-res alone - to the point of ignoring whether it actually sounds good or not.)
 
I'm also a firm believer in "trickle down technology".... the idea that, if manufacturers of players, and amplifiers, and speakers, work to make their top end products capable of playing flat to 40 kHz, just maybe the end result will be that even their low end products get a tiny bit better - as better technology becomes "the norm". (Maybe, if the DAC vendors get more orders for 24/192k DACs, they'll drop ones that don't even work well at 16/44k from the bottom of their product line, which will mean that you'll end up with a better DAC in the next $20 player you buy - because this year's "cheapest DAC you can buy" will be a little bit better than last year's.)

Those claims made about the Stereo 70, valve amps etc. may have been made in hi-fi mags (most of which are very subjective and emotional) but I doubt that they had the backing of audio science.  That digital audio was developed and refined around the same time is a case in point.  I was around (just) when reel-to-reels were the benchmark in home audio (with the right tapes) and we whinged about the record player.  They perhaps were the best, or even perfection, for that time, but I don't recall people saying that no improvements were possible.  Even if they did, the measurements would give the lie to that.
 
I have heard of that bone conduction theory and it is just that, a theory without any supporting evidence.  The only study of peer-review quality I am aware of that purported to find humans can hear or perceive ultrasonic content is the Japanese study of the late 90s.  Professor Oohashi and his team designed an experiment which showed that humans can perceive (though not hear) ultrasonic sound.  That would have been a breakthrough finding, pointing the way to further research.  Unfortunately, his peers were not able to replicate his findings and his methodology was later found to be flawed.  Oohashi later accepted that the study was flawed, though you still see his paper quoted in some hi-res and vinylphile forums.
 
I agree with what you say about the HiFi manufacturing industry not being homogeneous, with some specialising in speakers or amps and others in DACs or ancillaries.  That is part of the problem.  A bit like cables, DAC and home playback digital technology has plateaued.  To stave off being commoditised, a whole lot of marketing tripe around hi-res, high-end DACs etc. is employed so people keep spending and upgrading when it is not necessary.  This is commercial expediency for products that really have matured.  That is the point I was making.  An informed consumer would be prioritising their HiFi spend on things that really matter, i.e. speakers, room acoustics and quality mastered material.
 
Sep 16, 2015 at 12:47 AM Post #1,283 of 3,525

 
 
Here is the graph of measured hearing thresholds for mammals and man, from the 1969 study. As you can see, when you approach the upper limit, sensitivity for all mammals drops dramatically. The more I read the old research, the more 20kHz seems to be the outlier and not the norm for anyone over 20.
 
Sep 16, 2015 at 9:53 AM Post #1,284 of 3,525
   
Quote: "Actually I think the example of a Stereo 70 is absolutely pertinent. [...]" (quoting the post excerpted in #1,282 above)

 
Simply not true.
 
I'm not a relative newbie to this audio thing having owned several Stereo 70s back in the day.
 
If you load it with pure resistive loads the FR of a Stereo 70 performs better than audible limits, but that's irrelevant to actual use with loudspeakers.
 
http://home.indy.net/~gregdunn/dynaco/components/ST70/hfst70.jpg
 
This reprints High Fidelity magazine's 1959 technical review, which puts its damping factor at 9, corresponding to a source impedance of approximately 1 ohm or worse (tested at an 8-ohm resistive load at a probable 1 kHz test frequency).
 
Take a modern speaker that dips to 4 ohms in the audible range and you have an audible frequency response variation.  Take a look at the test results for nonlinear distortion and you see more than 0.1% below rated output, which is again audible under a critical test.
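(For scale, here's that arithmetic as a quick sketch: the review's damping factor of 9 into 8 ohms gives a source impedance near 0.9 ohms, and a hypothetical speaker dipping from 8 to 4 ohms then sees a level change of roughly 0.8 dB.)

```python
# Response variation from output impedance interacting with speaker
# impedance. Damping factor 9 (per the 1959 review) -> ~0.9 ohm source;
# the 4-ohm dip is a hypothetical modern-speaker figure.
import math

z_source = 8 / 9  # ohms: damping factor = Z_load / Z_source = 8 / (8/9) = 9

def level_db(z_load: float) -> float:
    # Voltage divider: fraction of the signal appearing across the speaker
    return 20 * math.log10(z_load / (z_load + z_source))

variation = level_db(8.0) - level_db(4.0)
print(f"Level shift between 8-ohm and 4-ohm regions: {variation:.2f} dB")
# ~0.83 dB - and it tracks the impedance curve, so the error changes
# across the audible band rather than being a uniform volume offset.
```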
 
As a rule you have to find an exceptional, not an average, tube amp to compare to SS in order to find "no audible difference".   The Stereo 70 may be the best-selling tube audio amp ever, and its performance wasn't bad for the day, but here's another example of why tubes fell out of favor among audiophiles who prefer sonically accurate performance. 
 
Sep 16, 2015 at 10:59 AM Post #1,285 of 3,525
Exactly......  but a significant number of people still insist "the difference must be inaudible" based on those not-totally-relevant measurements.
 
I consider myself to be an "absolute objectivist" - meaning that I do not believe that it's even possible for an audible difference to exist that cannot be measured (if it exists, then it can be measured - because our current technology allows us to make measurements much more precisely, and over a much wider range, than we can hear). However, just because something can be measured doesn't mean that we're currently taking the correct measurements to "see" it.
 
If you go to the right discussion group, you will still find people who ignore things like real-world speaker loading and insist that "the Stereo 70 must sound the same - because its noise floor and THD are inaudibly low", and those people insist that anyone who claims to hear a difference "must be imagining it" - based on those two single numbers - measured under one specific set of conditions. And they totally ignore all the other differences which are clearly visible when you use different tests - such as a load that simulates an actual speaker instead of a resistor. And, if you go to another forum, you'll find a group of people cheerfully declaring that "all good modern DACs sound exactly alike" - based on frequency response and S/N alone, and ignoring several other measurements which clearly show differences, based on an assumption that none of those other measurements are audible.
 
My point is that the way our brains work to locate sounds in space, and to "extract" other information from what our ears pick up, is in fact rather complicated - and still not entirely understood. Therefore, I find the claim that "high-res files can't possibly sound different because the only difference is that they have better frequency response - and that difference only exists beyond the limit of audibility", to be on the same level as those claims that the Stereo 70 "couldn't possibly sound different"... both are based on incomplete information being quoted by people who may not understand that the information is in fact incomplete.
 
True, there have been plenty of tests that show that a typical human being, under typical listening conditions, cannot consciously detect the presence or absence of frequencies above 20 kHz, but that is not at all the same as saying that "frequency response above 20 kHz is useless". In fact, there have been tests that concluded that subjects could detect a difference when an audio signal was bandwidth-limited to 20 kHz, which suggests that there might be something involved besides consciously "hearing" the presence of specific frequencies or not.
 
(Part of the reason I can easily tell the difference between actual bright sunlight and a TV picture of sunlight is that the TV picture lacks the invisible spectrum of the long wave light frequencies we call "heat", so it's bright but it doesn't feel warm; so I guess that being unable to reproduce those invisible frequencies actually does reduce the "fidelity and accuracy" of a TV picture, and having a camera that would record them, and a TV that could play them, actually would be an improvement in fidelity - even though they are invisible. So I guess the jokes about "how silly it would be to make TVs that could reproduce light frequencies we can't see" are somewhat misguided.)
 
So far there seem to be a lot of unknowns and undetermined details involved. Perhaps frequencies above 20 kHz, and the very minute differences in timing which require bandwidth above 20 kHz to record and reproduce accurately, while not audible as consciously detected sound, do have something to do with how we determine location. Or perhaps the test results are simply audible artifacts of the sample rate conversions and filtering used to produce the samples. And, to be honest, I'd like to know..... but, more to the point, until we know for sure, it's premature to label high-resolution audio as "a hoax" or "clearly being different only in people's imaginations". I personally consider "anti-snake-oil" (the act of labeling something as snake oil before you know all the facts) to be technically as bad as the opposite (accepting every claim as true). I'd prefer to wait until a bit more information is available before making blanket assumptions like that.
Quote: "Simply not true. [...] here's another example of why tubes fell out of favor among audiophiles who prefer sonically accurate performance." (quoted from post #1,284 above)

 
Sep 16, 2015 at 11:06 AM Post #1,286 of 3,525
Come on Keith, I proved you wrong (again) and all you've got to fall back on is a lot of apparently unattributed anecdotes.
 
I think I can come up with a proper attribution for those false claims - you are their author!
 
I think that it would be good for you to take responsibility for them!
 
Sep 16, 2015 at 11:32 AM Post #1,287 of 3,525
Quote: "Come on Keith, I proved you wrong (again)... [...]" (quoted from post #1,286 above)

 
I'm sorry - I seem to be missing something here - exactly what "claims" are you talking about ???
 
You agree with me that a Stereo 70 in fact does sound quite different in many situations than an "equivalent solid state amp" - even assuming that both are being operated below clipping, and neither is generating what are generally considered to be "audible levels of distortion". We seem to be in agreement there.
 
And, if you Google the subject, you will also find many discussions, reaching from the distant past to the present, arguing that the sole sonic differences between tubes and solid state are related to overload level and that, as long as you absolutely prevent the amplifier from clipping, you would not be able to hear an audible difference between those two amplifiers. (The subject was frequently discussed in magazines before the Internet became popular, but plenty of articles managed to get copied, referenced, and posted, and the topic is still discussed today; one side claiming that "tubes sound different because of x, y, and z" and the other claiming that, if you avoid overload, they don't sound different at all, and that anyone claiming to hear a difference "must be imagining it". And since, back in those days, "everybody knew that distortion below 0.5% or so is inaudible", the fact that the distortion spectra of the various types of amplifiers might be audibly different was generally dismissed as "one of those things people imagined they were hearing".)
 
I don't understand what "claims" I've made that you take exception to.....
 
Sep 16, 2015 at 11:47 AM Post #1,288 of 3,525
Quote: "Part of the reason I can easily tell the difference between actual bright sunlight and a TV picture of sunlight... [...]" (quoted from post #1,285 above)

 
A bit off topic, but I still think it would be totally silly for a TV to emit frequencies beyond the visible spectrum.  Yes, I can feel the heat of the Sun, but I wouldn't want that from my TV.  The last thing I would want is to get a Sunburn from my TV.  Or a painful burn on my eyes from looking at someone welding on TV (or the Sun, for that matter).  In terms of visible fidelity and accuracy, no, those things would not improve it.  Nobody is arguing that the TV is reproducing all the feelings of actually standing in the Sun when the Sun is on screen, it's only reproducing the images we detect with our eyes.  Just as a speaker or headphone is only reproducing the sounds we hear, nothing else.   
 
IR and UV light in levels accurate to the original source on screen is not something I would want from my television (nor any of the other radiation the Sun emits, nor would I want a video of the ocean to flood my house, nor would I want to smell the stuff Mike Rowe works with), and I wouldn't equate it with "image" fidelity, because it isn't part of the image.  
 
If there is some heretofore unknown impact upon our bodies from sound frequencies outside the audible range (at the dB levels produced by headphones/speakers at normal listening volumes), who is to say it is even desirable?  Honest question: are sounds at those frequencies (above 20kHz) even produced in a recording studio?  What's to reproduce or "feel" if they aren't even there in the first place?
 
Sep 16, 2015 at 12:18 PM Post #1,289 of 3,525
   
Quote: "If there is some heretofore unknown impact upon our bodies from sound frequencies outside the audible range... Are sounds at those frequencies (above 20kHz) even produced in a recording studio?" (quoted from post #1,288 above)

 
Frequencies that high are definitely produced and captured (a spectrogram of any legit hi-res track should show that). But certainly by 25kHz we aren't using our normal hearing mechanism to sense them, if indeed we are sensing them at all. Any effect is likely quite small (that's my conjecture), and it is legitimate to question whether any effect they could be shown to have is in fact "musical."
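(Easy to check for yourself; a minimal sketch assuming the soundfile and scipy packages, with "track.flac" as a stand-in for whatever hi-res file you have on hand:)

```python
# What fraction of a track's energy sits above 20 kHz?
# "track.flac" is a placeholder for a local hi-res file (e.g. 24/96).
import numpy as np
import soundfile as sf
from scipy.signal import welch

audio, fs = sf.read("track.flac")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # fold multichannel to mono

freqs, psd = welch(audio, fs=fs, nperseg=8192)  # averaged power spectrum
ultrasonic_fraction = psd[freqs > 20_000].sum() / psd.sum()
print(f"Sample rate: {fs} Hz")
print(f"Energy fraction above 20 kHz: {ultrasonic_fraction:.2e}")
```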
 
Sep 16, 2015 at 1:04 PM Post #1,290 of 3,525
   
Quote: "A bit off topic, but I still think it would be totally silly for a TV to emit frequencies beyond the visible spectrum. [...]" (quoted from post #1,288 above)

 
I'm inclined to agree with you about IR light - and I wouldn't want to actually feel the shock wave from a bomb blast in the movie I was watching either (although I can imagine someone complaining that the lack of IR was "why watching a sunset on TV just didn't seem real"). Personally, I think I would prefer a sort of compromise, where I would feel a bit of warmth - but with strict safety limits - and an option to disable it entirely.
 
However, in the context of the current discussion, at least one study seems to have shown that limiting the bandwidth of a high-res audio recording at least sometimes causes subjects to report a shift in the sound stage of the recording. The subjects didn't hear anything "missing" from the version that was band limited, or even claim that it sounded different, but they perceived the various instruments as occupying slightly different positions in the sound stage. The authors of the test suggested that, even though audio frequencies above 20 kHz weren't directly audible as sounds, perhaps some of the phase cues that we use to resolve the location of sounds cannot be accurately reproduced at the lower bandwidth. 
 
One possible mechanism whereby that might occur is simple cancellation. If, for example, you play the same tone from two speakers near each other, the result will be a cancellation pattern (usually referred to as a comb filter) - and you will hear this pattern as a series of louder and quieter spots as you move your head from left to right. If you delay the sound being sent to one of those speakers by a tiny amount, the locations of the nulls and peaks in that pattern will shift. And, under certain circumstances, even moving one of those speakers a fraction of an inch (which corresponds to a very tiny time shift - inaudible by itself) may shift the pattern significantly more than the distance the speaker was moved. The authors of that particular article suggested that a sample rate of at least 50k would be required to ensure that the signal was reproduced accurately enough to prevent this sort of alteration.
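(A toy version of that mechanism, with geometry and numbers of my own choosing rather than the study's setup: two coherent 10 kHz sources a metre apart, and the position of the first interference null on a line two metres away, with and without a 10 microsecond delay on one source.)

```python
# How far an interference null moves when one of two coherent sources
# is delayed by 10 microseconds. Illustrative geometry, not the study's.
import numpy as np

c, f = 343.0, 10_000.0                   # speed of sound (m/s), tone (Hz)
src_a, src_b = np.array([-0.5, 0.0]), np.array([0.5, 0.0])
x = np.linspace(0.0, 0.10, 10_001)       # window containing the first null
pts = np.stack([x, np.full_like(x, 2.0)], axis=1)

def first_null(delay_s: float) -> float:
    r_a = np.linalg.norm(pts - src_a, axis=1)
    r_b = np.linalg.norm(pts - src_b, axis=1)
    phase = 2 * np.pi * f * ((r_b - r_a) / c + delay_s)
    return x[np.argmin(np.abs(np.cos(phase / 2)))]  # two-phasor sum magnitude

shift = first_null(10e-6) - first_null(0.0)
print(f"Null shift for a 10 us delay: {shift * 1000:.1f} mm")
# ~7 mm, versus c * 10 us ~ 3.4 mm of equivalent source movement.
```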
 
Please note that this was just one study, and I'm NOT specifically endorsing the results; in fact it's quite possible that what they experienced was simply an artifact of the sample rate conversion - but I do take it as an indication that "all of the facts may not be in yet".
 
In terms of technology, some microphones have response that extends well above 20 kHz, but many do not. Ditto for other studio equipment - some yes; some no. You also need to bear in mind that what information can be resolved is not the same as frequency response alone. For example, if I had two microphones which were absolutely identical, and absolutely didn't respond above 20 kHz, I could still record a single instrument using a pair of them and have the sound arrive at one of them 1/100,000 of a second sooner than at the other - and that 1/100,000 of a second difference in timing could be measured by comparing the zero crossing points. The information collected by the two microphones together, relying partly on their position relative to each other, could in fact contain information that either microphone separately couldn't record; I could then use an oscilloscope to determine the comparative location of the signal source and the two microphones. Of course, whether our ears can resolve this sort of detail is one of those things which I consider to still be somewhat undetermined.

Likewise, a software program, designed to simulate echo or room reverberance, or to produce entirely artificial sounds, could generate "fake" information that extends well above 20 kHz. (As a trivial example, if I were to sample cymbals using a good microphone, then play that sample back backwards at 2x speed as a sound effect, that altered sample would have frequency content extending twice as high as the original microphone could record - as long as I did all of the processing at an appropriately high sample rate.) In either case, you would absolutely require a bandwidth of at least 40 kHz to accurately reproduce the minute time difference between those two microphones or all of the overtones in the frequency-doubled cymbal (again ignoring whether limiting that would produce an audible difference or not).
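(The timing point is easy to demonstrate numerically; a sketch with a made-up 96 kHz session and 20 kHz band-limited noise, recovering a 10 microsecond inter-channel delay by cross-correlation:)

```python
# A 10 us arrival difference between two channels band-limited to 20 kHz
# is still recoverable from the waveforms. Made-up numbers throughout.
import numpy as np

fs = 96_000
true_delay = 10e-6  # 10 us, i.e. 0.96 samples at 96 kHz
rng = np.random.default_rng(0)

# Band-limit white noise to 20 kHz in the frequency domain
n = 8192
spec = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, d=1 / fs)
spec[freqs > 20_000] = 0.0
mic_a = np.fft.irfft(spec)

# Second "microphone": the same signal, fractionally delayed via phase shift
mic_b = np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * true_delay))

# Cross-correlation peak plus parabolic interpolation for sub-sample lag
xc = np.correlate(mic_b, mic_a, mode="full")
k = int(np.argmax(xc))
y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
estimate = (k - (n - 1) + frac) / fs
print(f"Estimated delay: {estimate * 1e6:.2f} us (true: 10.00 us)")
```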
 
