hypersonic effect discussion

Oct 28, 2011 at 8:10 AM Post #77 of 111
Oct 28, 2011 at 11:25 AM Post #78 of 111
^ They're using audible sound, nothing hypersonic. Blind people have been using this technique for years, listening with their ears.

The cool thing about this is the ability to create a 3D model of their surroundings as bats do using (again audible) sound cues.
 
Jan 14, 2012 at 9:58 PM Post #80 of 111
 
Quote:
You would hope they record at 24/192; labels that record popular genres of music generally aren't that particular, except for rare "indie" labels. So someone has actually said 24/192 is overkill and just a huge waste of space, huh? With that kind of attitude it really isn't much of a surprise that so many recordings have given "mediocrity" a whole new definition (as in garbage). It wasn't a so-called "professional" that made this statement, was it? (Though it wouldn't surprise me, as there doesn't appear to be any shortage of "pros" in any field who are so clueless they give the real pros a bad name - this is only my non-professional opinion of course!) (And I generally don't care for lawyers either! Hehehe)


It is in fact a giant waste of space.
 
http://www.benchmarkmedia.com/discuss/sites/default/files/Upsampling-to-110kHz.pdf
 
Also, that white paper doesn't cover it, but even the best 24-bit A/D and D/A converters cannot actually resolve the entire dynamic range afforded by 24 bits of quantization. At best, you can get 20 bits or so. That is, unless you cryogenically cool all your electronics.
 
So the absolute best that recording technology can possibly record is roughly 20 bit, 110 kHz, give or take a few kHz depending on the chip/implementation. Anything higher than that is a total waste of space in terms of storage. There are definite exceptions when higher (24 or possibly 32) bit depth can be useful in processing audio files, such as when mastering, mixing, or even using software volume control. But as far as the capture and storage of audio goes, that's the best that electronics can possibly do.
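As a sanity check on those figures, an ideal quantizer's dynamic range follows the standard 6.02 × N + 1.76 dB rule of thumb; a minimal sketch (the function name is my own):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale
    sine wave: 20*log10(2**N) + 1.76, i.e. ~6.02*N + 1.76 dB."""
    return 20 * math.log10(2 ** bits) + 1.76

# 16-bit: ~98 dB; 20-bit (roughly what the best converters achieve): ~122 dB;
# 24-bit: ~146 dB on paper, which no real-world A/D or D/A chain delivers.
for n in (16, 20, 24):
    print(f"{n}-bit: {dynamic_range_db(n):.1f} dB")
```

So a "24-bit" converter that resolves only ~20 bits is leaving the last ~24 dB of its nominal range buried in analog noise.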
 
And that's before we even look at audibility. To sum it up shortly, the only difference between 24 bit and 16 bit audio is quantization distortion and quantization noise. Quantization distortion is nasty and can easily be audible because it distorts waveforms in distinct patterns. However, quantization distortion is easily eliminated through the use of dither. Proper noise-shaping dither masks audible quantization distortion patterns with very, very low level noise (that is entirely inaudible under normal listening volumes) added to the signal.
 
It's possible to hear this quantization noise if you listen at what would be extremely high, ear-damaging volumes with normal music - by turning up very quiet passages to very loud volumes. But at normal listening volumes, level matching, and with proper dither (not always a given), the difference between 24 bit and 16 bit audio is for all intents and purposes inaudible. Also, occasionally, intermodulation distortion in equipment can create audible artifacts at higher sample rates (i.e. 96 kHz vs. 44.1 kHz). But that has nothing to do with higher sample rates themselves being audible.
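A quick way to see what dither buys is to quantize a tone sitting below one least-significant bit, with and without TPDF dither (a toy sketch; the tone frequency and amplitude are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
STEP = 2 / 2**16                 # one 16-bit LSB over a +-1.0 full-scale range
RATE, N = 44100, 44100

def quantize(x: float) -> float:
    """Round a sample to the nearest 16-bit quantization level."""
    return STEP * round(x / STEP)

plain, dithered = [], []
for i in range(N):
    s = 0.25 * STEP * math.sin(2 * math.pi * 440 * i / RATE)   # tone below 1 LSB
    plain.append(quantize(s))
    tpdf = (random.random() - random.random()) * STEP          # TPDF dither, +-1 LSB
    dithered.append(quantize(s + tpdf))

# Undithered, the sub-LSB tone rounds to digital silence (pure truncation
# distortion); dithered, its energy survives as signal plus low-level noise.
print(any(v != 0 for v in plain), any(v != 0 for v in dithered))   # False True
```

The undithered path destroys the low-level signal entirely, while the dithered path trades that distortion for a benign noise floor - which is exactly the behavior described above.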
 
Similarly, different masters can very easily sound different. This is why high resolution audio sounds better most of the time - not because it is high resolution, but because it was better mastered to begin with. This alone is a good enough reason to buy high resolution audio, but if the same master is available at 16/44.1 or even 24/96 versus 24/192, it's safe to go with the 16/44.1 files.
 
This is a good overview of the topic:
http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded
 
And hydrogenaudio extensively covers ABX'ing high resolution versus CD resolution audio:
http://www.hydrogenaudio.org/forums/index.php?showtopic=49843

 
Quote:

 
Yeah, that's the guy I was thinking of, John Siau, he developed the DAC1 right?
 
Scroll down to "Recent Reviews" click on "Epiphany Acoustics", read comment "The DAC1 sounds serrated and awfully bright".

 
Anyway, the article you linked seems to cover only upsampling, not recording, no? I believe the other article is the same, and neither of them has any pretty pictures, links or sources.
 
You can't say it's a waste of space and then link to an article that only covers upsampling, which has no effect on storage space, can you?
 
It says something about the AD1853...
 
If they're only talking about measuring op-amp chips, then I'm skeptical, because the North-Western video guy - following the same dScope line of thought - says he thinks no one can hear the difference between an NE5532 and an OPA2134 in a blind test, which I find a bit weird, idk...
 

 
Okay, well... if no proof is given, I don't see much reason to believe it, when there are other papers which state that 24/192 kHz is ideal and actually show at least one pretty picture to defend their claims - http://www.cirrus.com/en/pubs/whitePaper/DS668WP1.pdf
 
 

 
Quote:
 
Yeah, that's the guy I was thinking of, John Siau, he developed the DAC1 right?
 
Scroll down to "Recent Reviews" click on "Epiphany Acoustics", read comment "The DAC1 sounds serrated and awfully bright".

 
 
Anyway, the article you linked seems to cover only upsampling, not recording, no? I believe the other article is the same, and neither of them has any pretty pictures, links or sources... =/



Yes, this is his (and, going by the copyright/hosting of the file, Benchmark Media's) position on sample rates. If you actually read the white paper, the bit on 96 kHz vs. 192 kHz sampling rates is in general terms, for D/A and A/D conversion, period - not anything to do with upsampling.
 
I agree that no proof is given, and it would be nice to have (I can't remember if there was another paper that went into more detail), but I'm not about to argue with his conclusion when the difference between 44.1 kHz and 96 kHz is inaudible for all intents and purposes to begin with. 

Quote:
 
Okay, well... if no proof is given, I don't see much reason to believe it, when there are other papers which state that 24/192 kHz is ideal and actually show at least one pretty picture to defend their claims - http://www.cirrus.com/en/pubs/whitePaper/DS668WP1.pdf
 


I see no pretty picture depicting 192 kHz, nor do I see any more evidence concerning 192 kHz than in the Benchmark white paper.
 
You realize what the impulse response graph is depicting, right? Bandwidth. Take out the frequencies above what 48 kHz can record (i.e. a little under 24 kHz) from the 96 kHz graph, and you're left with the same graph. Since frequencies that high are inaudible anyway (again, except for possible and unwanted intermodulation distortion in the audible band caused by said higher frequencies), the whole premise is a sham.
 
 
Considering that neither one (96 kHz or 192 kHz sample rates) is audible, why do we care in the first place? I know I don't and I sure as heck am not wasting my valuable hard drive space (twin 750 GB hard drives in my laptop are full enough with 200000+ photos) on sample rates that are inaudible, much less sample rates that aren't necessarily even measurably better on the best electronics available.

 
Quote:
I see no pretty picture depicting 192 kHz, nor do I see any more evidence concerning 192 kHz than in the Benchmark white paper.
 
You realize what the impulse response graph is depicting, right? Bandwidth. Take out the frequencies above what 48 kHz can record (i.e. a little under 24 kHz) from the 96 kHz graph, and you're left with the same graph. Since frequencies that high are inaudible anyway (again, except for possible and unwanted intermodulation distortion in the audible band caused by said higher frequencies), the whole premise is a sham.
 
 
Considering that neither one (96 kHz or 192 kHz sample rates) is audible, why do we care in the first place? I know I don't and I sure as heck am not wasting my valuable hard drive space (twin 750 GB hard drives in my laptop are full enough with 200000+ photos) on sample rates that are inaudible, much less sample rates that aren't necessarily even measurably better on the best electronics available.


It looked like the impulse response graph was indicating there is less pre-echo in the 96kHz recording, now that I'm looking more closely... the Y axis on the graph is different, I'm confused.
 
It says "it is the removal of digital decimation and interpolation filters, not the extended frequency range, that produce the audible improvements offered by SACD over conventional PCM"
 
The thing is, people can install a simple upsampler in foobar (or sometimes it's in their driver), upsample to 96 or 192 kHz, and hear a (slight) difference; so then they believe there is an audible difference in native 96 kHz and 192 kHz material too, even if that's not the same as upsampling. What do you think?

 

Quote:
It looked like the impulse response graph was indicating there is less pre-echo in the 96kHz recording, now that I'm looking more closely... the Y axis on the graph is different, I'm confused.
 
It says "it is the removal of digital decimation and interpolation filters, not the extended frequency range, that produce the audible improvements offered by SACD over conventional PCM"
 
The thing is, people can install a simple upsampler in foobar (or sometimes it's in their driver), upsample to 96 or 192 kHz, and hear a (slight) difference; so then they believe there is an audible difference in native 96 kHz and 192 kHz material too, even if that's not the same as upsampling. What do you think?
 

 
Yes, I understand that filtering is a legitimate reason to increase the sample rate. However, 44.1 kHz was chosen for a reason - its Nyquist limit of 22.05 kHz sits comfortably above what typical adults can perceive, so the reconstruction filtering has more or less zero impact on audibility. Factor in psychoacoustics with actual music, and the audibility of sounds even at audible frequencies in the 16 kHz - 20 kHz range is almost nil (see lossy compression).
 
Regarding their impulse response graphs, I'm still unsure how this is supposed to apply at all in real-world conditions. Again, our ear/brain system acts as a low-pass filter, so we don't hear anything above our own ears' frequency response curve (i.e. X dB at 16 kHz, X-Y dB at 18 kHz, X-Y-Z dB at 20 kHz, etc.). Additionally, impulses like the ones they are demonstrating just don't occur in real sound - the closest thing I can think of is cutting off a waveform at some voltage at the start or end of a track. That would occur if you don't have gapless playback and there's no sort of fade-in/out imposed on the signal. Of course, this is an undesirable situation to be in in the first place, and we couldn't care less about reproducing it or not.
 
As for resampling and hearing differences - well, how the software and hardware handle different sample rates can affect what happens. I can't detail individual cases, but more often than not, hardware and the drivers that service it treat different sample rates differently, resulting in audible artifacts. Obviously not every DAC is going to have this problem, but it does exist and it isn't uncommon. Another problem is that upsampling requires interpolated values between the actual sample values - and because the interpolated values aren't going to fall perfectly onto the quantization levels afforded by the bit depth, you're going to get quantization error. That means quantization distortion and noise, and thus dither to eliminate the distortion. If it's not done right, it could be audible. At the very least, you're raising the noise floor over the original file.
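The requantization point can be illustrated with a deliberately naive 2x upsample (real resamplers use band-limited sinc interpolation, not midpoints; this sketch of mine only shows that interpolated values land off the quantization grid):

```python
import math

STEP = 2 / 2**16                       # one 16-bit quantization step

def q(x: float) -> float:
    """Snap a value onto the 16-bit quantization grid."""
    return STEP * round(x / STEP)

rate, n = 44100, 4410
# A 1 kHz tone already quantized to 16 bits, i.e. sitting exactly on the grid.
src = [q(0.5 * math.sin(2 * math.pi * 1000 * i / rate)) for i in range(n)]

# Naive 2x upsampling: insert linear midpoints between existing samples.
mids = [(a + b) / 2 for a, b in zip(src, src[1:])]
off_grid = sum(1 for m in mids if q(m) != m)
print(f"{off_grid} of {len(mids)} interpolated samples fall off the grid")
```

Every off-grid midpoint must be requantized (and dithered), and that requantization is exactly where the raised noise floor comes from.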
 
Now, if you were to talk about oversampling in the DAC, that'd be an entirely different thing...

 
Okay, well you said "I understand that filtering is a legitimate reason to increase sample rate.", and the discussion is relevant since the PS3 community, the upcoming Fostex HP-A8, cheaper SACD players, DSD over USB, and such are increasing the popularity of and interest in this a little in 2012.
 
Not to mention there are cheap 32-bit/384 kHz capable DAC/amps now (the Musiland MM03); it's going to make consumers think "Oh, my 16/44 is lousy!"

 
I'm not going to claim I've heard any differences with any of this high-res stuff... it could just be some phantom distortion in my hardware for all I know. The same applies to the SoX plugin in foobar, but it does sound different, and I don't know what all those algorithms are.
 
 
128x oversampling (whatever that is) and NOS DACs like the PCM1704 are a different discussion, yeah.

 
Anyway, this thread is interested in the former...
 

[size=medium]Going to clarify what I stated earlier:[/size]
 
[size=medium]Bit depth refers to the number of bits you have to capture audio. The easiest way to envision this is as a series of levels at which audio energy can be sliced at any given moment in time. With 16-bit audio, there are 65,536 possible levels. With every additional bit of resolution, the number of levels doubles. By the time we get to 24-bit, we have 16,777,216 levels. Remember, we are talking about a slice of audio frozen in a single moment of time. [/size]
[size=medium]Now let's add our friend Time into the picture. That's where we get the sample rate.[/size]
[size=medium]The sample rate is the number of times your audio is measured (sampled) per second. So at the Red Book standard for CDs, the sample rate is 44.1 kHz, or 44,100 slices every second. So what is the 96 kHz sample rate? You guessed it: 96,000 slices of audio sampled each second. [/size]
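Those level counts are just powers of two; a one-line check (the helper name is mine):

```python
def levels(bits: int) -> int:
    """Number of discrete amplitude levels at a given bit depth."""
    return 2 ** bits

for bits in (16, 17, 24):
    print(f"{bits}-bit: {levels(bits):,} levels")
# 16-bit: 65,536 levels; each extra bit doubles that; 24-bit: 16,777,216 levels
```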
 
[size=medium]Space required for stereo digital audio[/size]
[size=medium]Bit depth | Sample rate | Bit rate | One stereo minute | Three-minute song[/size]
[size=medium]16 | 44,100 | 1.35 Mbit/s | 10.1 MB | 30.3 MB[/size]
[size=medium]16 | 48,000 | 1.46 Mbit/s | 11.0 MB | 33 MB[/size]
[size=medium]24 | 96,000 | 4.39 Mbit/s | 33.0 MB | 99 MB[/size]
[size=medium]MP3 | 128 kbit/s | 0.13 Mbit/s | 0.94 MB | 2.82 MB[/size]
[size=medium]So you see how recording at 24/96 more than triples your file size. Let's take a 3-minute multi-track song and add up the numbers. Just to put the above into greater relief, I included the standard MP3 file's specs. [/size]
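The stereo figures above follow directly from bit depth × sample rate × channels; a small calculator (it reproduces the table if you read its "megabytes" as binary MiB and compute the three-minute size from the rounded per-minute size, which appears to be what the table does):

```python
def stereo_sizes(bits: int, rate: int, channels: int = 2):
    """Raw PCM bit rate and file sizes, rounded the way the table rounds."""
    bps = bits * rate * channels                 # bits per second of raw PCM
    mbit = round(bps / 2**20, 2)                 # bit rate in binary Mbit/s
    per_min = round(bps * 60 / 8 / 2**20, 1)     # MiB per stereo minute
    return mbit, per_min, round(per_min * 3, 1)  # three-minute song

for bits, rate in ((16, 44100), (16, 48000), (24, 96000)):
    print(bits, rate, stereo_sizes(bits, rate))
```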
 
[size=medium]Hard disk requirements for a multi-track 3-minute song[/size]
[size=medium]Bit depth/sample rate | Mono tracks | Size per mono track | Size per song | Songs per 20 GB disk | Songs per 200 GB disk[/size]
[size=medium]16/44.1 | 8 | 15.1 MB | 121 MB | 164 | 1640[/size]
[size=medium]16/48 | 8 | 16.5 MB | 132 MB | 150 | 1500[/size]
[size=medium]24/96 | 8 | 49.5 MB | 396 MB | 50 | 500[/size]
[size=medium]16/44.1 | 16 | 15.1 MB | 242 MB | 82 | 820[/size]
[size=medium]16/48 | 16 | 16.5 MB | 264 MB | 74 | 740[/size]
[size=medium]24/96 | 16 | 49.5 MB | 792 MB | 24 | 240[/size]
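The per-track numbers in the multi-track table follow the same arithmetic; a short sketch (the songs-per-disk columns in the table look like rougher approximations, so this only reproduces the track and song sizes):

```python
def multitrack(bits: int, rate: int, tracks: int, minutes: int = 3):
    """Size of one mono PCM track (MiB) and of the whole song (MiB)."""
    track_bytes = bits * rate // 8 * minutes * 60   # bytes per mono track
    song_bytes = track_bytes * tracks
    return round(track_bytes / 2**20, 1), round(song_bytes / 2**20)

print(multitrack(16, 44100, 8))    # (15.1, 121) - the table's first row
print(multitrack(24, 96000, 16))   # sixteen tracks at 24/96
```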
 
[size=medium]You should be noting two things now:[/size]
[size=medium]1. Recording at 24/96 yields greatly increased audio resolution - over 250 times that of 16/44.1. [/size]
[size=medium]2. Recording at 24/96 takes up roughly 3 1/4 times the space of recording at 16/44.1.[/size]
 
[size=medium]Now let's get to the subjective side of how music sounds at these different bit depths and sample rates. No one can really quantify how much better a song is going to sound recorded at 24/96. Just because a 24/96 file has 250 times the audio resolution does not mean it will sound 250 times better; it won't even sound twice as good. In truth, your non-musically-inclined friends may not even notice the difference. You probably will, but don't expect anything dramatic. Can you hear the difference between an MP3 and a WAV file? If so, you will probably hear the difference between different sample rates. For example, the difference between 22.05 kHz and 44.1 kHz is very clear to most music lovers. A trained ear can tell the difference between 32 kHz and 44.1. But when 44.1 and 96 kHz are compared, it gets really subjective. But let's try to be a little objective here.[/size]
 
[size=medium]Let's talk about sample rate and the Nyquist theorem. The theorem says that the actual upper frequency threshold of a piece of digital audio tops out at half the sample rate. So if you are recording at 44.1, the highest frequencies captured will be around 22 kHz. That is 2 kHz higher than a typical human with excellent hearing can hear. Now we get into the real voodoo. Audiophiles have claimed since the beginning of digital audio that vinyl records on an analog system sound better than digital audio. Indeed, you can find evidence that analog recording and playback equipment can be measured up to 50 kHz, over twice our threshold of hearing. Here's the great mystery: the theory is that audio energy, even though we don't hear it, exists and has an effect on the lower frequencies we do hear. Back to Nyquist: a 96 kHz sample rate translates into potential audio output at 48 kHz, not too far from the finest analog sound reproduction. This leads one to surmise that the same principle is at work - the audio is improved in a range we cannot perceive, and it makes what we can hear "better". Like I said, it's voodoo.[/size]
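The Nyquist half-rate limit isn't really a theory so much as arithmetic: once sampled, a tone above half the sample rate produces exactly the same sample values as a "folded" tone below it. A quick sketch (the 30 kHz test tone is an arbitrary choice of mine):

```python
import math

rate = 44100
nyquist = rate / 2          # 22,050 Hz upper limit
f_high = 30000              # a tone above Nyquist...
f_alias = rate - f_high     # ...is indistinguishable from 14,100 Hz once sampled

# The two tones hit identical sample values at every sampling instant.
for i in range(8):
    t = i / rate
    hi = math.cos(2 * math.pi * f_high * t)
    lo = math.cos(2 * math.pi * f_alias * t)
    assert abs(hi - lo) < 1e-9
print(f"{f_high} Hz sampled at {rate} Hz aliases to {f_alias} Hz")
```

This folding is why recorders must filter out everything above half the sample rate before sampling, and why players filter it again on the way out.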

 
Mischa, Kiteki, and Ben, if I may, I'd like to approach the subject from a neuroscience perspective and hopefully shed some light on the subject of voodoo and frequency. First of all, in audio circles it has been taken almost as universal truth that we cannot hear below 20 hertz, hence the 20 Hz - 20 kHz limit. In fact, we know that at 20 hertz or below it is more of a feel than a sound. This has misled many to conclude that whatever we cannot hear has no use to us, or that it doesn't matter since we cannot process it. However, it has been proven time and again that the human brain can process and discern down to 0.1 hertz, and that a change from 0.1 hertz to 0.2 hertz can activate or evoke a different EEG or brain response pattern, or emotional feel, in a person. This has been published frequently in work done in the field of EEG biofeedback, or neurofeedback. In neurofeedback, if a person is trained to recognize the feel at 0.1 hertz and is then asked to respond at the limit of 0.2 hertz, the change is immediately recognizable to the trained person. Our brain is that sensitive - probably more so than any existing measuring device or scope. Therefore I take issue when I read comments saying that a difference of a few hertz doesn't matter, when it has been proven that it can have an impact on our emotional response, and that we can recognize a difference as small as 0.1 hertz anywhere in the frequency spectrum and beyond. Furthermore, I believe this has also led to the overused concept of the placebo effect in our many arguments on the subjects of cable differences, burn-in effects, and so on. The argument usually goes: since objectively we are not able to measure any "single dimensional" differences, if a person hears some difference between two cables or two bit rates, it must be placebo.
Not that the placebo effect doesn't exist, but scientifically, before we attribute something to placebo, we had better recognize the limitations of our measuring methods. And in this case science has proven that the brain can process a lot more than we can measure.
 
Second, an orchestra has somewhere around 50 to 80 instruments, and with that it can make an endless number of pieces of music without repeating itself. The human brain has over 10 trillion synaptic connections, and each inhibiting or disinhibiting firing of a synaptic connection forms a unit of brain signature for an external or internal event. The combined firing of those trillions of synaptic connections forms the basis of our various emotions, knowing, and consciousness. In fact, we have a separate brainwave signature, or brainwave composition, for a similar song played at 16/44 and at 24/96. Or we can discern the difference between the same song with a high noise level and with a low noise level, even though objectively they may measure the same. That is also why the mother of monozygotic twins can tell the difference between the twins when scientifically and genetically they are the same. Or why we can tell the difference between frozen orange juice and fresh-squeezed orange juice even though their composition is the same. The link that Kiteki refers to shows a difference in the pre-echo stage, and I believe that alone will cause the brain to notice a difference on just that one factor. But music is a complex and multidimensional event to the brain. The brain is exceptional in its capability to notice minute differences, even down to changes below 0.1 percent. That is why we are able to identify abstract concepts like house sound, soundstage, body, headroom, etc.

 
Quote:
Mischa, Kiteki, and Ben, if I may, I'd like to approach the subject from a neuroscience perspective and hopefully shed some light on the subject of voodoo and frequency. First of all, in audio circles it has been taken almost as universal truth that we cannot hear below 20 hertz, hence the 20 Hz - 20 kHz limit. In fact, we know that at 20 hertz or below it is more of a feel than a sound. This has misled many to conclude that whatever we cannot hear has no use to us, or that it doesn't matter since we cannot process it. However, it has been proven time and again that the human brain can process and discern down to 0.1 hertz, and that a change from 0.1 hertz to 0.2 hertz can activate or evoke a different EEG or brain response pattern, or emotional feel, in a person. This has been published frequently in work done in the field of EEG biofeedback, or neurofeedback. In neurofeedback, if a person is trained to recognize the feel at 0.1 hertz and is then asked to respond at the limit of 0.2 hertz, the change is immediately recognizable to the trained person. Our brain is that sensitive - probably more so than any existing measuring device or scope. Therefore I take issue when I read comments saying that a difference of a few hertz doesn't matter, when it has been proven that it can have an impact on our emotional response, and that we can recognize a difference as small as 0.1 hertz anywhere in the frequency spectrum and beyond. Furthermore, I believe this has also led to the overused concept of the placebo effect in our many arguments on the subjects of cable differences, burn-in effects, and so on. The argument usually goes: since objectively we are not able to measure any "single dimensional" differences, if a person hears some difference between two cables or two bit rates, it must be placebo.
Not that the placebo effect doesn't exist, but scientifically, before we attribute something to placebo, we had better recognize the limitations of our measuring methods. And in this case science has proven that the brain can process a lot more than we can measure.
 
Second, an orchestra has somewhere around 50 to 80 instruments, and with that it can make an endless number of pieces of music without repeating itself. The human brain has over 10 trillion synaptic connections, and each inhibiting or disinhibiting firing of a synaptic connection forms a unit of brain signature for an external or internal event. The combined firing of those trillions of synaptic connections forms the basis of our various emotions, knowing, and consciousness. In fact, we have a separate brainwave signature, or brainwave composition, for a similar song played at 16/44 and at 24/96. Or we can discern the difference between the same song with a high noise level and with a low noise level, even though objectively they may measure the same. That is also why the mother of monozygotic twins can tell the difference between the twins when scientifically and genetically they are the same. Or why we can tell the difference between frozen orange juice and fresh-squeezed orange juice even though their composition is the same. The link that Kiteki refers to shows a difference in the pre-echo stage, and I believe that alone will cause the brain to notice a difference on just that one factor. But music is a complex and multidimensional event to the brain. The brain is exceptional in its capability to notice minute differences, even down to changes below 0.1 percent. That is why we are able to identify abstract concepts like house sound, soundstage, body, headroom, etc.

 
Sorry, but that's all a load of junk. Especially the bit about our ear/brain system being more sensitive than any measuring device.
 
How did you make the leap from "we perceive frequencies below 20 Hz" to "placebo can't automatically be attributed to x"? No one at all is arguing that we can't perceive frequencies below 20 Hz. You can literally feel the pressure changes and vibrations - it's just that 20 Hz or so is roughly where the pressure waves begin sounding like a tone. The problem is above 20 kHz - a very different situation. There is not one single study in which test subjects have perceived, in any way, frequencies above their normal listening range (which for the absolute best ears is perhaps in the 23 kHz range) that cannot be attributed to intermodulation distortion or other distortion artifacts in the playback equipment (i.e. the Oohashi study). If you've got evidence to the contrary, I'd love to see it - including your claim that a different brainwave signature is apparent when listening to high resolution music (with the exact same mastering, matched (and normal listening, not elevated) levels, correct noise-shaped dithering, and no hardware-related artifacts differing between the two sample rates and bit depths).
 
The problem isn't just automatically attributing differences that are heard to placebo - it's that these sorts of differences completely disappear with properly conducted blind testing. It's trivial to measure a difference between all cables, DACs, amplifiers, bit depths/sample rates, etc. with proper test equipment. Despite that, when comparing such devices in blind testing, it has been found that in many cases people can't hear any difference at all when they don't know what they're listening to. By finding the limits of distortion, changes in frequency response, noise, etc. that can in fact be distinguished in blind testing, it is possible to infer that those limits can conservatively be applied to general situations - e.g. extrapolating that a given cable with appropriate RLC properties will almost certainly not be distinguishable from a counterpart in blind testing, despite what a listener may say after sighted listening - without actually performing the blind test in every single case. This is the basis upon which science is founded; to dismiss such thinking in general is to dismiss the scientific process.
 
Similarly, your examples of two things being perceived as different though they measure the same objectively - flat out, either there is a difference and you're not measuring it, or you're pulling out magic as the explanation. The same song with a high noise level and a low noise level? If you can distinguish between the two, we can measure the difference. It's that simple - and measuring noise in a recording is trivial. A mother of identical twins? Is that really a serious example? Yes, they are genetically identical. No, they are not physically the same. Scars, marks, brain and body development, etc. differ between two twins - again, trivial to measure. Frozen versus fresh-squeezed orange juice - again, not something that can't be done. Send the samples to a flavor science lab and have them analyze it - they'll be able to tell you what the difference is.
 
Sorry, but there's not a single thing that we can perceive about the world around us that we can't measure and quantify better than our senses can. No, that measuring cannot yet simulate exactly how we perceive things in every case - but it sure can detect changes of any sort far more sensitively than our own senses can.
 
Example, to go with the music theme? How about a recording of a symphony orchestra, so complex and full of many different sounds. Trained listeners can identify the frequency ranges which are most prominent, what sorts of instruments are playing, how many of each (with more than a few of a given instrument playing each part, this would be a rough estimate), perhaps which soloist is playing, and maybe even what particular brand/model of tympani or gong or whatever is being played. Every one of those differences could be measured, and with the right software (and samples serving as the equivalent of a listener's memory) interpreting it, you could quantify that as well. The tympani or the soloist? Their harmonics or characteristic style could be picked out and analyzed to determine what/who is playing. Etc., etc.
 
But what if you added some noise at -130 dB to your 24-bit recording? 130 dB down? You could measure it easily. At normal listening volumes, you could not perceive it at all. Similarly, change one of the three third-trumpet players' instruments from a Bach to a Yamaha. Let's say the difference between the two is down at -110 dB, and that's the only difference in the recording at all. No listener could ever hope to hear that difference, but it's trivial to do a null test between the two to find the exact difference in the recording. Would you be able to nail down exactly what the difference is? If you had recordings of only the two different trumpets, almost certainly.
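The null test described here is easy to demonstrate: synthesize two "takes" that differ only by a -110 dBFS component, subtract them, and the residual pops out at exactly that level (a toy sketch of mine, with a 1 kHz sine standing in for the trumpet difference):

```python
import math

rate, n = 44100, 4410
level = 10 ** (-110 / 20)        # a -110 dBFS difference component

take_a = [0.5 * math.sin(2 * math.pi * 440 * i / rate) for i in range(n)]
take_b = [a + level * math.sin(2 * math.pi * 1000 * i / rate)
          for i, a in enumerate(take_a)]

# Null test: subtracting the takes cancels everything they share,
# leaving only the tiny difference component.
null = [b - a for a, b in zip(take_a, take_b)]
rms = math.sqrt(sum(x * x for x in null) / n)
print(f"residual: {20 * math.log10(rms):.1f} dBFS")   # -113.0 (sine RMS = peak - 3 dB)
```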
 
Abstract concepts? Not at all. They're merely words describing human perception of physical phenomena that can be measured and quantified.
 
House sound? Merely the typical frequency response, distortion characteristics, and driver positioning of a given headphone company. Extend appropriately to other equipment manufacturers.
 
Sound stage? At least this one's a little difficult. There are tons of specific effects that are known to have an effect on soundstage and can be measured. The big one is reverb, which could be called the sum total of the soundstage - it's the sum of all the reflections of a sound in a space, everything but the direct sound traveling through the air (and floor/walls). How rooms behave is measured and modeled every single day as part of modern acoustic design. Another effect is that of our ears' pinnae and other physical structures on the perception of sound, including time delay between ears, frequency response (particularly ear canal resonance), phase response, etc. No, we can't identify every single aspect, and certainly not at will with ease, but we can measure the exact differences between recordings (or between two different headphones on a dummy head, etc.) and use null testing to show us the sum total of differences between the two. Are you familiar with the Smyth SVS Realizer? You should look it up - you'd be amazed at what modern acoustic modeling can do in a commercially available product.
 
Body? What does that term even mean? If it means a given characteristic "sound", that sound can be quantified relative to what a given listener says it is. You might say instruments of a given type sound "warm" compared to other instruments of another type playing the same note. It's trivial to measure them and other examples, find the differences in harmonics, and identify what attributes of the harmonic balance cause that sound. But if you can't consistently identify what "body" or "warm" or "bright" sounds like - and between untrained listeners that's a sure fact - you can't assign an objective sound attribute to the word. That's not a flaw with the measuring; it's a flaw with the labeling.

Headroom? A trivial concept that actually has nothing to do with what we hear, but rather denotes that an amplifier has a certain maximum voltage/current output curve, which can be conveniently quantified for real-world use as the dB level at which the amplifier is capable of driving a given load (separately, peak and RMS) above the level the transducer is playing at. Say you're listening to headphone A at 75 dB RMS with peaks at 95 dB. Say that the amplifier is capable of maintaining 110 dB RMS and 115 dB peak with that load. You have 35 dB of RMS headroom and 20 dB of peak headroom. Another way headroom is defined is the peak level the amplifier is capable of minus the RMS level in decibels that the amplifier is capable of. That is, a power amplifier capable of putting out 100 W RMS into 8 ohms but capable of 200 W peak would have 3 dB of headroom.
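Both headroom definitions in that paragraph reduce to simple arithmetic; a sketch with the numbers from the example (the function names are mine):

```python
import math

def headroom_db(capable_db: float, playing_db: float) -> float:
    """Headroom as amplifier capability minus program level, in dB."""
    return capable_db - playing_db

# Listening at 75 dB RMS / 95 dB peaks; amp good for 110 dB RMS / 115 dB peak:
print(headroom_db(110, 75), headroom_db(115, 95))   # 35 20

def headroom_from_power(peak_w: float, rms_w: float) -> float:
    """Power-amp headroom: peak output over continuous output, in dB."""
    return 10 * math.log10(peak_w / rms_w)

print(round(headroom_from_power(200, 100), 1))      # 3.0
```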

 
[size=medium][size=14pt]WARNING: THIS IS OFF TOPIC PLEASE DO NOT READ.[/size][/size]
 
[size=medium][size=14pt]I'm sorry, Ben, but what you wrote is an insult to every scientist and engineer and is total bs, and I just can't stop myself from responding.[/size][/size]
 
[size=medium][size=14pt]First of all:[/size][/size]
 
[size=medium][size=14pt]What takes reason a hundred years to build a bridge to, only takes the wings of faith a single night to reach. There are a million things we can’t measure but we know are there through logic alone. We shape science so that it can explain what we can detect and comprehend. We know there is a super massive black hole at middle of Milky Way but we have no means to detect it or tell where it is. However, by using a modified brilliant version of string theory we were able to prove it exists. We know that dark matter exists but we have no way of detecting it or proving it. There are a million things that we can somehow detect and comprehend that science has yet to catch up to love, faith, etc.[/size][/size]
 
[size=medium][size=14pt]Also, did you know that your brain discards 80% of the information it gets from your ears? I'll give you an example: if I go to class I can tell you what people were wearing and what color the board and chairs were, etc., but if you ask me to recite which sounds I remember hearing then my answer would be the professor and my fellow students. But ask my blind sister the same question and you will be stunned by her answer, which is totally different from mine. She can even tell what a telephone number is from listening to the different dials, and can tell who is approaching by the sound of their walk. The difference between her and me, or between a trained audiophile and a normal person, is that they have trained their brains to process more and distinguish things that normal people cannot. [/size][/size]
 
[size=medium][size=14pt]Actually there is no scientific reason that would prevent you from “perceive frequencies below 20 Hz”. Your brain just chooses to discard information it gets because it doesn't think it's important. I'll give you another example: why don't you think your brother is hot, for example? Or why do you think some guys are hotter than others? He could very well be the hottest guy on the block. However, you would still think it's gross. It's because, through millions of years of experience, the human race realized that mating with close family is a major risk and could lead to a lot of diseases. People who didn't mate with their family managed to survive, and people who did mate with their families ended up dying. It's because your brain filters family out of your hot-people list. That's my point: your brain puts a speed limit, or sound limit, at these frequencies. What you're implying is that a car that is moving at 120 miles an hour because of a speed limit is not able to surpass that speed; which is total bs. The brain puts a limit on your hearing because, well, it's what it thinks would give you the best chance to survive. For example, let's assume that people who can listen to really low frequencies can't sleep or are always afraid or whatnot. [/size][/size]
 
[size=medium][size=14pt]Why are audiophiles able to detect things normal people cannot? Why can my blind sister process sound better than I can? Why can a person who runs a marathon each year beat me at a long-distance race? Your brain functions just like any other muscle: if you train it you will see significant results. Why can some people tell the difference between 44.1 and 96 and others cannot? Does it mean that they're lying, or that it's a placebo effect, or that it's in their head? Unfortunately, some people are trained and others aren't. Most people aren't really used to listening to songs at a 96 kHz sample rate, and to expect them to be able to tell the difference between a 44.1 and a 96 kHz song is like expecting you to win a marathon on your first try.[/size][/size]
 
[size=medium][size=14pt]Furthermore, just because our brain decided to discard information, because it doesn't think it's important, you can't conclude that we are unable to hear it or detect it. Because there are people who can hear it, and we might discover ways of making most people detect it. Furthermore, even though brain decides to filter it sound you hear, songs at higher frequency still sound better because brain is limited to how it filters song and what your brain decides to filter out ends up affecting what you do actually process and hear. Because even though your brain decides to discard sound information it cannot discard everything you hear. Thus, what you can't hear affects what you can hear.[/size][/size]
 
 
 
[size=10pt]Quote:[/size]
[size=10pt]Originally Posted by BlackbeardBen [/size]
 
[size=10pt]Sorry, but that's all a load of junk. Especially the bit about our ear/brain system being more sensitive than any measuring device.[/size]
[size=10pt]How did you make the leap from "we perceive frequencies below 20 Hz" to "placebo can't automatically be attributed to x"? No one at all is arguing that we can't perceive frequencies below 20 Hz. You can literally feel the pressure changes and vibrations - it's just that 20 Hz or so is roughly where the pressure waves begin sounding as a tone. The problem is above 20 kHz - a very different situation. There is not one single study where test subjects have perceived in any way frequencies above their normal listening range (which perhaps for the absolute best ears is in the 23 kHz range) that cannot be attributed to intermodulation distortion or other distortion artifacts in the playback equipment (e.g. the Oohashi study). If you've got evidence to the contrary, I'd love to see it - including your claim that a different brainwave signature is apparent when listening to high resolution music (with the exact same mastering, matched (and normal listening, not elevated) levels, correct noise-shaped dithering, and no hardware-related artifacts different between the two sample rates and bit depths).[/size]
[size=10pt]The problem isn't just automatically attributing differences that are heard to placebo - it's that these sort of differences completely disappear with properly conducted blind testing. It's trivial to measure a difference between all cables, DACs, amplifiers, bit depths/sampling rates, etc. with proper test equipment. Despite that, comparing such devices in blind testing, it has been found that in many cases people can't hear any difference at all when they don't know what they're listening to. By finding the limits of distortion, changes in frequency response, noise, etc. that can in fact be distinguished in blind testing, it is possible to induce that those limits can conservatively be applied to general situations - e.g. extrapolating that a given cable with appropriate RLC properties will almost certainly not be distinguishable from a counterpart in blind testing despite what a listener may say from sighted listening tests - without actually performing the blind test in every single case. This is the basis upon which science is founded - to dismiss such thinking in general is to dismiss the scientific process.[/size]
[size=10pt]Similarly, your examples of two things being perceived as different though they measure the same objectively - flat out, either there is a difference and you're not measuring it, or you're pulling out magic as the explanation. The same song with a high noise level and a low noise level? If you can distinguish between the two, we can measure the difference. It's that simple - and measuring noise in a recording is trivial. A mother of identical twins? Is that really a serious example? Yes, they are genetically identical. No, they are not physically the same. Scars, marks, brain and body development, etc. differ between two twins - again, this is trivial to measure. Frozen versus fresh-squeezed orange juice - again, not something that can't be done. Send the samples to a flavor science lab and have them analyze it - they'll be able to tell you what the difference is.[/size]
[size=10pt]Sorry, but there's not a single thing that we can perceive about the world around us that we can't measure and quantify better than our senses can. No, that measuring cannot yet simulate exactly how we perceive things in every case - but it sure can detect changes of any sort far more sensitively than our own senses can.[/size]
[size=10pt]Example, to go with the music theme? How about a recording of a symphony orchestra, so complex and full of many different sounds. Trained listeners can identify the frequency ranges which are most prominent, what sort of instruments are playing, how many of each (with more than a few of a given instrument playing each part, this would be a rough estimate), perhaps what soloist is playing, and maybe even what particular brand/model tympani or gong or whatever is being played. Every one of those differences could be measured, and with the right software (and samples serving as the equivalent of a listener's memory) interpreting it, you could quantify that as well. The tympani or the soloist? Their harmonics or characteristic style could be picked out and analyzed to determine what/who is playing. Etc., etc.[/size]
[size=10pt]But what if you added some noise at -130 dB on your 24-bit recording? 130 dB down? You could measure it easily. At normal listening volumes, you could not perceive it at all. Similarly, change one of the three third-trumpet instruments from a Bach to a Yamaha. Let's say the difference between the two is down at -110 dB, and that's the only difference in the recording at all. No listener could ever hope to hear that difference, but it's trivial to do a null test between the two to find the exact difference in the recording. Would you be able to nail down exactly what the difference is? If you had recordings of only the two different trumpets, almost certainly.[/size]
[size=10pt]Abstract concepts? Not at all. They're merely words describing human perception of physical phenomena that can be measured and quantified.[/size]
[size=10pt]House sound? Merely the typical frequency response, distortion characteristics, and driver positioning of a given headphone company. Extend appropriately to other equipment manufacturers.[/size]
[size=10pt]Sound stage? At least this one's a little difficult. There are tons of specific effects that are known to have an effect on soundstage and can be measured. The big one is reverb, which could be called the sum total of the soundstage - it's the sum of all the reflections of a sound in a space - everything but the direct sound traveling through the air (and floor/walls). How rooms behave is measured and modeled every single day as part of modern acoustic design. Another effect is that of our ears' pinnae and other physical structures on the perception of sound, including time delay between ears, frequency response (particularly ear canal resonance), phase response, etc. No, we can't identify every single aspect, and certainly not at will with ease, but we can measure the exact differences between recordings (or between two different headphones on a dummy head, etc.) and use null testing to show us the sum total of differences between the two. Are you familiar with the Smyth SVS Realiser? You should look it up - you'd be amazed at what modern acoustic modeling can do in a commercially available product.[/size]
[size=10pt]Body? What does that term even mean? If it means a given characteristic "sound", that sound can be quantified relative to what a given listener says it is. You might say instruments of a given type sound "warm" compared to other instruments of another type playing the same note. It's trivial to measure these and other examples, find the differences in harmonics, and identify what attributes of the harmonic balance cause that sound. But if you can't consistently identify what "body" or "warm" or "bright" sounds like - and between untrained listeners that's a near certainty - you can't assign an objective sound attribute to the word. That's not a flaw with the measuring; it's a flaw with the labeling.

Headroom? A trivial concept that actually has nothing to do with what we hear; it simply denotes that an amplifier has a certain maximum voltage/current output curve, which can be conveniently quantified for real-world use as the dB margin by which the amplifier can drive a given load (separately, peak and RMS) over the level the transducer is playing at. Say you're listening to headphone A at 75 dB RMS with peaks at 95 dB. Say that the amplifier is capable of maintaining 110 dB RMS and 115 dB peak with that load. You have 35 dB of RMS headroom and 20 dB of peak headroom. Another way in which headroom is defined is the peak level the amplifier is capable of minus the RMS level, in decibels, that the amplifier is capable of. That is, a power amplifier capable of putting out 100 W RMS into 8 ohms but capable of 200 W peak would have 3 dB of headroom.
[/size]
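As an aside, the null test mentioned in the quote above is simple enough to sketch. A toy Python example (pure standard library; the signals and the roughly -110 dB difference component are made-up illustrations, not real recordings):

```python
import math

def null_test_db(a, b):
    """Peak level, in dB re full scale, of the difference between two
    equal-length sample sequences; -inf means a perfect null."""
    residual = max(abs(x - y) for x, y in zip(a, b))
    return 20 * math.log10(residual) if residual else float("-inf")

# Two hypothetical "recordings": identical except for one component
# buried at roughly -110 dBFS (the trumpet-swap thought experiment).
sig_a = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(1000)]
tiny  = [10 ** (-110 / 20) * math.sin(2 * math.pi * 1000 * n / 44100)
         for n in range(1000)]
sig_b = [x + d for x, d in zip(sig_a, tiny)]

# The null (difference) recovers the buried component's level,
# far below anything a listener could perceive:
print(round(null_test_db(sig_a, sig_b)))   # → -110
```

The point being: subtraction pulls out a -110 dB difference trivially, while no listener ever could.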
 

 
 
Kittens and supermassive black-holes...
 
Frozen orange juice, therefore DSD.
 
Hmm... needs a new thread. ^^
 

 
 
Jan 14, 2012 at 10:30 PM Post #81 of 111
 
Hey, gregorio, I thought you were perma-banned, are you back on head-fi now?
 
 
You know what, there is proof that guitar cables display audible differences; this was discussed recently in a cable thread. In other words, cables do sound different (in that specific case).
 
So... believing "all cables are nonsense, they all sound the same" doesn't really help anyone ^^. The same applies to this thread, imho; there are usually specific cases which can break the rules. No need to look at everything in black and white, cynic vs. believer - I think that's fruitless, just my view.
 
 
Edit:  What I mean is, cables are nonsense 99% of the time, but there's usually a rule breaker, which is evidently guitar cables, there could be more, like using tin or lead in custom IEM cables, maybe.  Just my view.
 
 
Jan 14, 2012 at 10:32 PM Post #82 of 111
Mischa, sorry to tell you this, but we can detect and measure everything you mentioned in the first paragraph of that last quote.

 
Supermassive black hole at the center of the galaxy? Orbits of objects at the center of the galaxy. Those are actual orbits, plotted through observation. Using that we can even calculate the mass of the black hole (approximately, anyway, allowing for errors in measurement). Ever heard of Kepler's Laws? Newton made some tweaks, and we can use those to determine the mass of objects using only the size of their orbit's semimajor axis and its period.
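As a rough illustration of that method (not a rigorous calculation - the S2 orbital figures below are approximate values from memory, so check current published numbers before relying on them):

```python
def mass_solar(a_au, period_years):
    """Newton's form of Kepler's third law, for an orbiter much lighter
    than the central body: M ≈ a^3 / T^2 in solar masses, with the
    semimajor axis a in AU and the period T in years."""
    return a_au ** 3 / period_years ** 2

# Sanity check: Earth's orbit (1 AU, 1 yr) gives 1 solar mass.
assert mass_solar(1, 1) == 1.0

# Approximate figures for the star S2 orbiting Sgr A* (assumed here as
# a ≈ 1000 AU, T ≈ 16 yr):
print(f"{mass_solar(1000, 16):.2e} solar masses")  # → 3.91e+06 (≈ 4 million)
```

Even with rough inputs, that lands on the ballpark figure astronomers quote for Sgr A*: a few million solar masses, derived purely from observed orbits.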
 
Dark matter? There are a few reasons we know it exists. One is the orbital speed of objects in our galaxy. Using Newton's version of Kepler's Laws again, we should observe a rapid increase in orbital speed as we approach the center of the galaxy. But we don't. The outer edges rotate much too fast. From this we can definitely conclude that the majority of the galaxy's mass lies outside of the center. Yet the majority of the observable mass is at the center. So we know there's unobservable mass in the halo. Again, this is not guesswork. It's mathematically provable exactly how much dark matter surrounds our galaxy, using observation and measurement. We can also determine, from the concentration of ions and the distribution of background radiation in the observable universe, how fast the universe is growing and accelerating, and different rates would be linked to different quantities of mass (due to the pull of gravity). From this we can also determine that observable matter only makes up about one sixth of the total mass in the observable universe. Again, this is through observation.
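The flat-rotation-curve argument can likewise be sketched numerically. A toy Python example (the 220 km/s speed and the radii are round illustrative figures, not fitted data):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # one kiloparsec, m

def enclosed_mass_solar(v_km_s, r_kpc):
    """Mass enclosed within radius r for a circular orbit at speed v,
    from v^2 = G*M/r rearranged to M = v^2 * r / G."""
    v = v_km_s * 1000.0
    r = r_kpc * KPC
    return v * v * r / G / M_SUN

# Round illustrative figures: the Sun orbits at ~220 km/s at ~8 kpc, and a
# flat rotation curve keeps v near 220 km/s far beyond the visible disk,
# so the enclosed mass keeps growing with radius:
print(f"{enclosed_mass_solar(220, 8):.1e}")    # → 9.0e+10 (solar masses)
print(f"{enclosed_mass_solar(220, 30):.1e}")   # → 3.4e+11
```

Since the visible disk contributes little mass at 30 kpc, the extra enclosed mass implied by the flat curve has to be something unseen - that's the rotation-curve evidence for dark matter in one equation.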
 
So that's 2 down, 999,998 to go. Got anything else we "can't measure"? That is, if you're still around and kiteki isn't just pulling up some random quotes from a decade ago.
 
Oh, and I sort of didn't read the rest of your post because I lost all respect for you. But if you assure me there's something good in there, I'll do it.

 
You know, now that I think about it, dark matter was a really bad example of what you wanted to say. We can't observe dark matter directly, but we can measure it because of its effects. So it's sort of the opposite of what you wanted.
 
Jan 14, 2012 at 10:40 PM Post #83 of 111
 
We can measure frozen orange juice too, actually - with a microscope. I believe the ice crystals 'damage' the orange juice, so when it's back at room temperature it doesn't taste as good as fresh orange juice.
 
There is a way around that: freeze the orange juice with liquid nitrogen. Then it freezes so quickly that no crystal formations can damage its structure.
 
 
Jan 14, 2012 at 11:24 PM Post #84 of 111
Okay, here's my reply to Mischa's post... I PM'ed it but since there's now a place for it...
 
Quote:
WARNING: THIS IS OFF TOPIC PLEASE DO NOT READ.

 

I’m sorry ben but what you wrote is an insult to every scientist and engineer and is total bs, and I just can’t stop myself from responding.

 

First of all:

 

What takes reason a hundred years to build a bridge to, only takes the wings of faith a single night to reach. There are a million things we can’t measure but we know are there through logic alone.  A million things?  Well, I am in fact rather confident that there's a million mathematical proofs and philosophical deductive arguments.  But everything else - yes, faith, emotions, etc. - we can measure to some degree. We shape science so that it can explain what we can detect and comprehend. We know there is a super massive black hole at middle of Milky Way but we have no means to detect it or tell where it is. WRONG.  Horribly wrong. However, by using a modified brilliant version of string theory we were able to prove it exists.  1. There's no such thing as absolute "proof" outside of mathematics and deductive reasoning.  2. String theory has nothing to do with the detection and measurement of the supermassive black hole at the middle of our galaxy and others, although it of course does have mathematical applications concerning them.  3.  Yes, we can detect, measure, and project the mass, size, and location of the supermassive black hole in the center of the galaxy, by observing its effects on objects we can see.  Just like everything else in the physical world, there is no "proof" for it, only a body of evidence consisting of measurements and supporting maths.  We know that dark matter exists but we have no way of detecting it or proving it.  Similarly, indirect measurements and maths support the existence of dark matter.  Just because we can't measure something directly doesn't mean its effects are unmeasurable. There are a million things that we can somehow detect and comprehend that science has yet to catch up to love, faith, etc.  No doubt that not everything can be measured completely yet.  Even so, we can still measure things such as love, faith, happiness, etc.  - brainwave patterns and trivially projecting concepts/things that influence said emotions.  
No, these emotions and feelings cannot be completely measured and quantified, but they can be to a very useful (but incomplete) degree.  Oh, and we know from the Heisenberg uncertainty principle that we'll never be able to measure absolutely everything at once, but we're not concerned with sub-atomic physics here when it comes to audio.

 

Also, did you know that your brain discards 80% of the information it gets from your ears? I'll give you an example: if I go to class Still a student? I can tell you what people were wearing and what color the board and chairs were, etc. I probably wouldn't remember that myself... but if you ask me to recite which sounds I remember hearing then my answer would be the professor and my fellow students. But ask my blind sister the same question and you will be stunned by her answer, which is totally different from mine. She can even tell what a telephone number is from listening to the different dials, and can tell who is approaching by the sound of their walk.  Oh, I have no doubt.  I can usually tell who is walking in my house, so I can imagine she is much better at that sort of thing. The difference between her and me, or between a trained audiophile Trained?  How?  Have you actually done formal listening training (e.g. with Harman's free "How To Listen" software)? and a normal person, is that they have trained their brains to process more and distinguish things that normal people cannot. Yes, with formal training.  You might be surprised, however, that untrained listeners have the same preferences for loudspeakers as trained listeners.

 

Actually there is no scientific reason that would prevent you from “perceive frequencies below 20 Hz”. Your brain just chooses to discard information it gets because it doesn't think it's important.  Is that so?  Could you explain how the upper frequency hearing limit in humans decreases with age as a function of damage to the hair cells in the cochlea, then?   I'll give you another example: why don't you think your brother is hot, for example? Or why do you think some guys are hotter than others? He could very well be the hottest guy on the block. However, you would still think it's gross. It's because, through millions of years of experience, the human race realized that mating with close family is a major risk and could lead to a lot of diseases. People who didn't mate with their family managed to survive, and people who did mate with their families ended up dying. It's because your brain filters family out of your hot-people list.  And how is that attribute manifested and passed on?  Physically, through our genes. That's my point: your brain puts a speed limit, or sound limit, at these frequencies.  Not true; it's a physical limit within our cochlea.  What you're implying is that a car that is moving at 120 miles an hour because of a speed limit is not able to surpass that speed; which is total bs.  Not at all; I'm arguing that the car can't pass 120 mph because it physically is not capable of doing so.  The brain puts a limit on your hearing because, well, it's what it thinks would give you the best chance to survive. For example, let's assume that people who can listen to really low frequencies can't sleep or are always afraid or whatnot.

 

Why are audiophiles able to detect things normal people cannot?  Can you point me to a double blind ABX test where they can?  You can present all the subjective sighted impressions you want but they're meaningless as objective evidence.  Why can my blind sister process sound better than I can?   She's still going to be no better at identifying specific frequency ranges correctly without specific training.  I don't doubt far better acuity overall, but the problem is correlating to measurements.  That's why she can tell who is approaching or where a sound is coming from better than you or I - she has the formal training (i.e. life for her) to correlate what she hears with reality.  Why can a person who runs a marathon each year beat me at a long-distance race? Your brain functions just like any other muscle: if you train it you will see significant results.  Of course, but formal training is necessary to see relevant results.  Why can some people tell the difference between 44.1 and 96 and others cannot?  In properly conducted double blind ABX testing at normal listening volumes and with proper dithering performed, no one has - your claim that there is an audible difference is entirely unfounded.  Does it mean that they're lying, or that it's a placebo effect, or that it's in their head?  Without blind testing, claims that there are actually audible differences in the files at normal listening levels are not valid.  Remember, you can never "prove" the null hypothesis - so it is unreasonable to ask someone to do so.  Unfortunately, some people are trained and others aren't. Most people aren't really used to listening to songs at a 96 kHz sample rate, and to expect them to be able to tell the difference between a 44.1 and a 96 kHz song is like expecting you to win a marathon on your first try.  No, not at all.  Training helps listeners, yes, but no amount of training has been shown to make humans magically hear (not see) the difference between 44.1 kHz and 96 kHz sampling rates.
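For what it's worth, the statistics behind such a double blind ABX test are plain binomial probabilities. A minimal Python sketch (the 16-trial session and scores are arbitrary illustrations):

```python
from math import comb

def abx_p_value(correct, trials):
    """Chance of scoring at least `correct` out of `trials` ABX
    presentations by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative 16-trial session: 12/16 correct is significant at the
# usual 5% level, while 10/16 is entirely consistent with guessing.
print(round(abx_p_value(12, 16), 3))  # → 0.038
print(round(abx_p_value(10, 16), 3))  # → 0.227
```

This is also why a handful of trials proves nothing either way: the coin-flip baseline is only beaten convincingly with enough trials and a high enough score.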

 

Furthermore, just because our brain decided to discard information, because it doesn't think it's important, you can't conclude that we are unable to hear it or detect it.  Of course, and that's why double blind testing is absolutely imperative for objective listening tests.  Because there are people who can hear it Possibly but highly unlikely and we might discover ways of making most people detect it. Furthermore, even though brain decides to filter it sound you hear, songs at higher frequency still sound better because brain is limited to how it filters song and what your brain decides to filter out ends up affecting what you do actually process and hear. What?  Could you clarify this statement - it's incoherent.   Because even though your brain decides to discard sound information it cannot discard everything you hear. Thus, what you can't hear affects what you can hear.  Yes, of course.  People who hear differences between bit/sample rates, cables, most modern DACs, magic CD markers and sprays, magic rocks you hang on your amplifiers, etc. aren't delusional; they're human.  The differences we hear aren't actually in the sound itself.  Just like you say, they arise from the other factors at play - what we see and what we know about what we're listening to (price of equipment, theoretical but not audible advantages, reviews, etc.).  If you're willing to pay to get that perceived (but not audible in blind testing) improvement, be my guest.  To me it doesn't make sense to spend money like that.

 
 
Jan 14, 2012 at 11:56 PM Post #86 of 111
 
Umm... I think dogs have more receptors in their nose and more hair cells in their ears or whatever, that's why they're better at hunting than we are, coz they can hear and smell more.
 
On the other hand, they can't listen to music through their face.
 
Edit: ...and quite conveniently, listening "through your face" is said to have an impact on realism and soundstage, which are the most difficult factors to measure, afaik, so a bit annoying for the scientists. =P
 
Sure, most of the studies are from a decade ago, but now there are millions of people with a unit that says SACD in their living room, when they didn't even ask for it. =P
 
I don't really care very much personally; I just want to listen to music, and my hunch is microphones (recording) and speakers (playback) have much more impact than 19.5 Hz optical illusions and 65 kHz rainforest qualia. Still kind of interesting nonetheless.
 
The funny thing is the Oohashi type studies have shown that deep tissue in the brain is stimulated, we can only assume it was faked, right?  Some wikipedia entry about flipping a coin doesn't disprove it.
 
Jan 15, 2012 at 12:29 AM Post #87 of 111
Sony has been a big proponent of the notion for some time.  From the current sound that Sennheiser puts out I'd venture to say they're playing along.  Whether for real or for marketing purposes I leave to the 'experts' here to figure out.  I once read a statement from Sony about it but I'd have to hunt it down.  Till then I'll avoid getting dragged into anything.  Yes, I'm sure the figures below are nicely massaged to varying degrees, still....
 
SA5000
[size=small]Freq Resp: 6 Hz - 110 kHz[/size]
[size=small]Qualia[/size]
[size=small]Freq Resp: 5 Hz - 120 kHz[/size]
 
[size=small]Stax SR-009[/size]
[size=small]Freq Resp: 5 Hz - 42 kHz[/size]
 
[size=small]Sennheiser HD800[/size]
[size=small]Freq Resp: 6 Hz - 51 kHz (-10 dB), 14 Hz - 44 kHz (-3 dB)[/size]
 
Jan 15, 2012 at 12:45 AM Post #88 of 111
Quote:
 
Umm... I think dogs have more receptors in their nose and more hair cells in their ears or whatever, that's why they're better at hunting than we are, coz they can hear and smell more.
 
On the other hand, they can't listen to music through their face.
 
Edit: ...and quite conveniently, listening "through your face" is said to have an impact on realism and soundstage, which are the most difficult factors to measure, afaik, so a bit annoying for the scientists. =P
 
Sure, most of the studies are from a decade ago, but now there are millions of people with a unit that says SACD in their living room, when they didn't even ask for it. =P
 
I don't really care very much personally; I just want to listen to music, and my hunch is microphones (recording) and speakers (playback) have much more impact than 19.5 Hz optical illusions and 65 kHz rainforest qualia. Still kind of interesting nonetheless.
 
The funny thing is the Oohashi type studies make it look like deep braincore tissue is stimulated.

Dogs can hear up to around 60 kHz, depending on the breed.
 
Can we listen through our face? I still haven't seen a blind test to support that.
 
The Oohashi study, once again (how many times must this be repeated?) may have been fatally flawed. Obviously if there is audible IMD, we'll be able to hear it and it'll show up on a brain scan. It needs to be repeated with that consideration in mind.
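To illustrate that IMD concern numerically: if two ultrasonic tones pass through even a mildly nonlinear channel, a difference tone lands squarely in the audible band. A toy Python sketch (the tone frequencies and the quadratic nonlinearity are arbitrary illustrations, not a model of any real DAC or amplifier):

```python
import math

fs = 96000             # sample rate (Hz)
n = 9600               # 0.1 s of samples
f1, f2 = 24000, 26000  # two tones, both above the audible range

# Pass the ultrasonic pair through a mildly nonlinear channel: y = x + 0.1*x^2
x = [math.sin(2 * math.pi * f1 * t / fs) + math.sin(2 * math.pi * f2 * t / fs)
     for t in range(n)]
y = [v + 0.1 * v * v for v in x]

def dft_mag(signal, freq):
    """Magnitude of a single DFT bin (freq must land exactly on a bin)."""
    k = round(freq * len(signal) / fs)
    re = sum(v * math.cos(2 * math.pi * k * t / len(signal))
             for t, v in enumerate(signal))
    im = sum(v * math.sin(2 * math.pi * k * t / len(signal))
             for t, v in enumerate(signal))
    return math.hypot(re, im) / len(signal)

# The quadratic term creates a difference tone at f2 - f1 = 2 kHz, squarely
# in the audible band, even though neither input tone is audible:
print(round(dft_mag(x, 2000), 4), round(dft_mag(y, 2000), 4))  # → 0.0 0.05
```

So a brain scan showing a response to an "ultrasonic" stimulus doesn't by itself distinguish hypersonic perception from a perfectly ordinary 2 kHz distortion product generated by the playback chain.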
 
Quote:
Sony has been a big proponent of the notion for some time.  From the current sound that Sennheiser puts out I'd venture to say they're playing along.  Whether for real or marketing purposes I leave to the 'experts' here to figure out.  I once read a statement from Sony about it but I'd have to hunt it down.  Till then I'll avoid getting dredged into anything.  Yes, I'm sure the figures below are nicely massaged to varying degrees, still....
 
SA5000
[size=small]Freq Resp: 6 Hz - 110 kHz[/size]
[size=small]Qualia[/size]
[size=small]Freq Resp: 5 Hz - 120 kHz[/size]
 
[size=small]Stax SR-009[/size]
[size=small]Freq Resp: 5 Hz - 42 kHz[/size]
 
[size=small]Sennheiser HD800[/size]
[size=small]Freq Resp: 6 Hz - 51 kHz (-10 dB), 14 Hz - 44 kHz (-3 dB)[/size]


Now show us graphs, because specs like these are generally useless. Grado publishes numbers down to 20 Hz or lower, but their headphones all start to roll off rapidly below 100 Hz.
 
Jan 15, 2012 at 12:48 AM Post #89 of 111
 
I'd venture the only conclusive data is that we can't 'hear' that extension; the issue arises when we can 'feel' or 'see' frequencies, lol.
 
I mean, you can make a window break at a specific frequency, so who's to say you can't make a brain 'break'? Just a theory, of course.
 
 
