The Subconscious Case for HD Audio
May 17, 2023 at 6:00 PM Post #31 of 57
Your presentation of the human ear is, IMO, a case for demonstrating that ultrasonic content, if it has any influence at all, is unlikely to influence hearing. The audible high-frequency range is already quite packed at the entrance of the cochlea, having to share a more limited number of hair cells (maybe that's why our sensitivity at high frequencies is lower from the get-go? I'm not entirely sure). Those are also the hair cells that get damaged the most as we age or are exposed to damaging sound levels (being at the entrance, and resonating with higher energy for a similar amplitude: bad engineering if you ask me ^_^, for something some consider quite important).

Or if you look at the frequency distribution of the cochlea, it might correspond to our evolutionary biology, i.e. the ranges most needed for hearing being those of speech and oncoming storms (so it's OK to lose those high-frequency waves when you become a senior). One of the reasons there is a concentration of high frequencies at the base of the basilar membrane is that the base is the narrowest and stiffest part (the membrane gets wider and more flexible toward the apex, where the low frequencies resonate). There may also be other nerve innervations, not part of our conscious auditory connections, influencing the "feelings" some people are reported to experience in these ultrasonic experiments that go above 20 kHz... Do they get much attention for data and funding? Seems not, as it's not part of people's music enjoyment... It might become a military interest/funding matter if it were demonstrated that you could have a device that consistently makes everyone uneasy.

Diagram of frequency range in human cochlea:
basilar-membrane-sound-frequencies-analysis-base-fibres.jpg


IMO, open a window, hold a pet, get your legs up: those have real, demonstrated positive impacts on your music listening experience and general health.

Being well fed and feeling good also keeps your auditory system at optimal performance: it keeps the ear fluids in homeostasis and the membranes from going rigid. Why worry about what you probably don't have, instead of enjoying what you do?

Interesting to see how humans compare to other animals (from an evolutionary standpoint you can see why fish don't have much range while sea mammals do: more complex hearing organs, plus the medium of water extending the upper range):

Animal_hearing_frequency_range.svg.png
 
Last edited:
May 17, 2023 at 8:08 PM Post #32 of 57
Seems some research into tigers and their roar yields some interesting points: at over 110 dB, the tiger's roar contains a significant amount of infrasonic content below 18 Hz that has a momentary paralysing effect on the nervous system of its prey, followed by an uneasy feeling, and it has even been noticed by tiger handlers in zoos.
From that, some have proposed that “sick building syndrome” could be caused by infrasonic vibrations from lifts, air conditioning systems, etc.
 
May 17, 2023 at 8:47 PM Post #33 of 57
Seems some research into tigers and their roar yields some interesting points: at over 110 dB, the tiger's roar contains a significant amount of infrasonic content below 18 Hz that has a momentary paralysing effect on the nervous system of its prey, followed by an uneasy feeling, and it has even been noticed by tiger handlers in zoos.
From that, some have proposed that “sick building syndrome” could be caused by infrasonic vibrations from lifts, air conditioning systems, etc.

Interesting!!!

I googled this and it seems to be true...

https://www.sciencedaily.com/releas...ncies from,and even passing through mountains.
 
May 17, 2023 at 10:17 PM Post #34 of 57
Seems some research into tigers and their roar yields some interesting points: at over 110 dB, the tiger's roar contains a significant amount of infrasonic content below 18 Hz that has a momentary paralysing effect on the nervous system of its prey, followed by an uneasy feeling, and it has even been noticed by tiger handlers in zoos.
From that, some have proposed that “sick building syndrome” could be caused by infrasonic vibrations from lifts, air conditioning systems, etc.
And maybe there's a remaining influence from early hominid development in Africa, where our ancestors might have needed to hear in the lower bass range to know when to scurry on out! But also, when it gets to that uneasy feeling in the lift... that's a classic example of how it's not just about your hearing. Our inner ear is also an important component of our vestibular system, and if you're sick, your vestibular system can be more sensitive. We also know we feel sub-bass through bone conduction.
 
Last edited:
May 17, 2023 at 11:17 PM Post #35 of 57
First, I will say I am a bit saddened you didn’t take the time to return my quote cheek. I was personally hoping for ‘Butt Hurtford’, but I suppose I will have to settle with using my imagination :)

castleofargh said:
I did hold you to high standards because you looked like you could do it (although, you're much, much better at critical thinking when data goes against what you want it to be. But hey, I'm like that too).

I insist on the paper about phase having no place here. I didn't develop because I thought it was quite obvious why, but here we go:
It's an ultra-specific experiment, so the conclusions are just as specific. Sometimes it's easy to see why we can extend some conclusions to a wider range of conditions, but often you can't, and you would have to demonstrate that they apply to your conditions. That experiment is very clearly about signal interference between the two transducers. What's perceived is not a delay, but the mess in the signal caused by that phase shift. Claiming that's direct delay perception is a stretch.
Let me imagine a new experiment to make my point clearer. Let's do the same with the same tone played twice, and note when we notice the amplitude change through interference, or maybe some modulation of sorts (would that even happen with the same tone?... anyway). I then declare that the phase delay at which we first notice something relates to the highest frequency a human can detect. You readily find that rationale absurd, right? But you're using the same logical fallacy when presenting that paper:
Bret Halford said:
Indeed, we can similarly find objective evidence of human hearing WAY above 20 kHz by focusing on timing differences rather than music or other 'informational' signals.
Do you see it, or is it just that much harder to be a critic of your own ideas?

Your thought experiment doesn’t sweep the delay at all, if I understand it correctly; indeed, it doesn’t sound like you are introducing any time delay or equivalent phase shift (with your steady tone). I don’t think it replicates the rationale I use to claim Boson is evidence of ultrasonic detection, but maybe I am not understanding your description correctly.

While we’re on the subject of critiquing one’s own ideas, I’ll note that it gets harder the more entrenched the view is. Being exposed to a contrary view of any quality tends to make us ‘double down’ on the sincerely held belief we feel is threatened. I can appreciate the layers and layers of sedimentary FUD that you wade through in this forum, and I encourage you to consider that your (and others’ in this thread) initial reaction reflected that more than the actual substance of the OP warranted.


Back on topic, I’m not particularly fixated on the coupling mechanism (as an aside, I came across an interesting paper studying breast response in nursing mothers to the ultrasonic content in their baby’s cries. It revved up the milkers, but only when skin contact was present!). But for the record, saying something is ‘evidence’ isn’t a conclusion as you claim.

And yes, time discrimination is absolutely evidence of the equivalent frequency. That’s not ‘my idea’; it comes from a nice old dead guy named Fourier. Perhaps our struggle here is in understanding that monochromatic phase is a time delay?
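
Since the F = 1/T mapping keeps coming up, here is the arithmetic both sides are using, as a quick sketch (the 22°/7 kHz and 10 μs figures are just illustrative numbers from this thread, not claims of their own):

```python
def phase_to_delay(phase_deg, freq_hz):
    # For a single tone, a phase offset is indistinguishable from a time
    # delay: t = phase / (360 * f).
    return phase_deg / (360.0 * freq_hz)

def delay_to_equiv_freq(delay_s):
    # The f = 1/T reading of a time delay that this thread is arguing over.
    return 1.0 / delay_s

print(phase_to_delay(22.0, 7e3))   # ~8.7e-06 s, roughly 8.7 microseconds
print(delay_to_equiv_freq(10e-6))  # 100 kHz for a 10 microsecond ITD
```

Whether that last conversion tells you anything about audibility is exactly the disagreement here; the formula itself is uncontroversial.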

If you want stuff in the μs range, we have plenty of work on ITD. Most studies found a minimum around a 10 μs delay between the ears being perceivable (I guess that proves we can hear 100 kHz, according to your logical fallacy of fluid permutations between any time and heard frequencies).
That's achieved when simply using test tones. Some tests went lower with tailored signals. But that too is specific research that does not align with or validate your beliefs in any way, shape, or form. In fact, those experiments clearly show that discrimination for ITDs sucks pond water at high frequencies, and people achieve the lowest times with a signal around 1 kHz. I'm obviously taking this example to oppose your logic; I'm not trying to claim anything beyond that about the need for high frequencies or what can or cannot be heard in general.

The Boson paper was using a 7 kHz test tone, FYI (re: better at low frequencies); it doesn’t sound so different from the studies you mention, in conclusion or technique.

‘Fluid [amplitude] permutations between any time’ is actually a pretty good description of transient signals generally subject to Shannon’s theorem (including music, both sonic and ultra), at least assuming you mean fluid figuratively rather than physically. I don’t think you can correctly call a difference of opinion on how much actual info is discernible there a fallacy, but I guess it’s your post, so suit yourself. My view is that it’s simply quite easy to retain a great deal of ultrasonic info in its entirety using even modern commodity-grade components; why limit yourself by stubbornly hedging your bets against human potential?

I’d be very interested in additional resources to better understand discrimination limits. I wouldn’t be surprised at all if our abilities on the specifics are minimal given there’s no conscious detection of statistical note, but that also necessarily limits the scope of the data to that comparably sluggish neocortex…

Something can obviously have no perceivable immediate impact while causing some more or less long-term impact of some sort (like uncle Putin's special tea, or UV on our skin). I agree, of course, that there can be more than near-immediate conscious impact. But if there is no clearly conscious impact, even long term, that can be demonstrated to actually do something, then how do we tell if it's good, bad, or a thing at all?
The brain wave paper is a minuscule step toward that, but let's not kid ourselves with the conclusions you decided to draw from it in your first post. It's allegedly tied to ultrasounds, but ultrasounds alone do nothing. The impact (whatever that is) occurs after a known time, but listening tests including that delay are inconclusive. Is there a lasting effect? Who knows?
All the paper suggests is that if you listen to hires while wearing a silly hairnet with sensors, a screen with your alpha wave reading will show more red somewhere, and that's all you get.

You're using a reductionist fallacy here (https://en.wikipedia.org/wiki/Fallacy_of_the_single_cause - we’re getting a good list of actual bad logic!).

All any EEG data shows is ‘what happens when you’re wearing an EEG device’ by definition; what’s your point? All physical data that is not directly observable is captured via a probe; that doesn’t invalidate the scientific method outside of our primary senses.

Nor is electroencephalography a particularly young or fringe method (‘silly’, really?)... the roles of alpha and beta waves are quite well understood. We’re not talking about mind reading here; these are frequently observed and correlated neurological emotional states.

IMO, open a window, hold a pet, get your legs up: those have real, demonstrated positive impacts on your music listening experience and general health.

On this point we absolutely agree. Personally, I find the best enhancement for music, of any resolution, is to first turn off the screen that I am almost certainly inclined to be looking at while listening… removing all senses other than sound sure helps me focus, but that is often a bit more impractical than just putting the laptop/phone down. Regular exercise and THC/CBD help enormously too :wink:

I simply retain the interest and means to spend an additional ~$12 a year to listen to HD audio whenever possible beyond that. Indeed, the quest to push my rig and source well beyond perceivable limits is fun entirely on its own for me and absolutely enhances the listening experience on a purely psychological/emotional level.

VNandor said:
It is not just semantics; a transient by itself is going to have a different spectrum than a periodic signal that repeats the same transient over and over. You could hear a "transient" by itself if it's played once, but you likely won't hear that very same transient if it's repeated hundreds of thousands of times a second, because one signal will contain frequencies that the ear is sensitive to while the other will not. Yes, the brain (or rather, the ears) will be totally oblivious to periodic transients if the signal doesn't contain anything in the audible frequency range.
The time resolution of digital audio is limited by the bit depth, not the sampling rate. Sampling a signal at a higher sampling rate allows for higher bandwidth, not higher timing resolution. Sampling a signal more often than twice its highest frequency won't net a higher time resolution; increasing the bit depth, however, does. I know a site called "troll-audio" does not exactly inspire confidence, especially if you have no way to verify whether the math is correct. Here's another site with a better name that delves into the intricacies of the time resolution of digital audio.
I probably should have taken the time to respond properly on its own to this link, rather than burying it as part of the lengthy exchange with Castle. I can’t blame you (or others) for missing it:
Butt Hurtford said:
The link that quantifies time errors for a 20 kHz sinusoid is derived for (and only valid for) a 20 kHz sinusoid. Higher frequencies will of course have steeper crossover slopes and need more resolution to qualify. The authors note that for Redbook this is indeed the worst case scenario as any higher frequencies will be filtered out. As far as allowable phase error and its relation to bit depth go, it’s just reinforcing that for an adequately sampled signal (sampled above twice its highest frequency) you get complete info; it’s not claiming that you can replicate a transient signal, waveform or (even worse) spike that has timing periods of that same duration. Shannon’s theorem is absolutely applicable here and anywhere you want to extract information from a signal.
Indeed, the folks over at ‘Audio-Troll’ clearly identify the assumption of max 20 kHz audibility as a premise. As such, when fielded in rebuttal to observed ultrasonics, this is a textbook example of assuming the premise in your argument (https://en.wikipedia.org/wiki/Begging_the_question).

VNandor said:
Unfortunately, while I understand how digital audio and sound work, I don't know anything about brain waves. Something I do know is that ultrasonic sound can indeed affect us; we just can't hear it. My theory is it probably has something to do with that.
I’d agree we definitely can’t hear them in the conscious sense that we typically use that term. I do suspect that the lower brain and its faster survival focused reactions are a clear part of the puzzle, given the subconscious/conscious discrepancy in detection.

It sure is fun to speculate. Who knows, maybe the mesentery residents are having a conversation with the lower brain. I don’t know whether intestinal bacteria’s response to ultrasonics has ever been studied, but I find my gut tells me all sorts of useful info subconsciously…

VNandor said:
If I understand the paper correctly, it also does not account for the comb-filtering effect caused by emitting the same sound from slightly displaced sources (when d is not zero), which is a very likely cause of the audible differences. 7 kHz being off by 3 mm corresponds to the phase being ~22 degrees off as the sounds combine. That would attenuate the signal by 0.6 dB, which is not particularly hard to hear when listening closely.

This seems like a fair point to me. That said, 0.6 dB channel match is below most transducer pairing specs for headphones and IEMs at least (the better ones will typically guarantee within 1 dB); maybe ribbon speakers like those used in the paper have tighter pairing tolerances? I guess a mismatch would be more obvious with a single tone, but still, I'm not sure 0.6 dB would be as dramatic as you say.
 
May 17, 2023 at 11:45 PM Post #36 of 57
All any EEG data shows is ‘what happens when you’re wearing an EEG device’ by definition; what’s your point? All physical data that is not directly observable is captured via a probe; that doesn’t invalidate the scientific method outside of our primary senses.

Nor is electroencephalography a particularly young or fringe method (‘silly’, really?)... the roles of alpha and beta waves are quite well understood. We’re not talking about mind reading here; these are frequently observed and correlated neurological emotional states.
Yet the only thing you're basing your conceptions on is the non-peer-reviewed graduate paper you cited in your OP, "High-Resolution Audio with Inaudible High-Frequency Components Induces a Relaxed Attentional State without Conscious Awareness", correct? Some of the other studies the grad paper cited are better known, and were participating studies. The best results are that there can be extra brain waves under various conditions. And if you actually look up the original studies by neuroscientists, one of their first disclaimers is that you can't read too much into things (one author of this paper is some kind of audio engineering student and the other some kind of psychologist). You can't isolate the human brain to just the afferent auditory pathways. AFAIK, the best the studies have shown is that there's no consistent pool of people who experienced anything, and for those who did, it's not always a relaxed feeling.
 
Last edited:
May 18, 2023 at 3:38 AM Post #37 of 57
I simply retain the interest and means to spend an additional ~$12 a year to listen to HD audio whenever possible beyond that. Indeed, the quest to push my rig and source well beyond perceivable limits is fun entirely on its own for me and absolutely enhances the listening experience on a purely psychological/emotional level.
You don't have to (and can't) prove scientifically what is fun or enjoyable to you. What I find fun is not fun for many people, and vice versa. It is all subjective.
 
May 18, 2023 at 6:11 AM Post #38 of 57
This seems like a fair point to me. That said, 0.6 dB channel match is below most transducer pairing specs for headphones and IEMs at least (the better ones will typically guarantee within 1 dB); maybe ribbon speakers like those used in the paper have tighter pairing tolerances? I guess a mismatch would be more obvious with a single tone, but still, I'm not sure 0.6 dB would be as dramatic as you say.
First off, I made a mistake there: it's not 0.6 dB, it is actually lower than that. However, both the theoretical mismatch in volume and any mismatch coming from different drivers are unimportant, because the paper measures the mismatch in SPL at that frequency, and it was anywhere between 0.2 dB and 0.5 dB at 2.9 mm. Also note that in this case the signal is essentially mono; the paper isn't talking about mismatch in pairing because there is really no "pairing" to speak of. Think of it as playing back music or a test tone at a certain SPL, then trying to play it back again 0.5 dB lower, because that is what's actually happening in the test. Picking up on a 0.5 dB difference (worst case) is easy with both music and test tones. It is not dramatic, but it is easy to hear in a blind test. Consistently discerning a 0.2 dB difference (they could not get a better match than this) is quite hard but not impossible.
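
For anyone wanting to sanity-check the comb-filtering numbers, here is a minimal sketch of two equal coherent sources summing with a small path difference, assuming a 343 m/s speed of sound (the exact figure used in the paper may differ):

```python
import math

C = 343.0  # assumed speed of sound in air, m/s

def displacement_attenuation_db(freq_hz, displacement_m):
    # Two equal, coherent sources sum to amplitude 2*cos(phi/2) instead of 2,
    # where phi is the phase offset caused by the extra path length.
    phi = 2 * math.pi * freq_hz * displacement_m / C
    return 20 * math.log10(abs(math.cos(phi / 2)))

# 7 kHz tone, 3 mm extra path: phase offset of roughly 22 degrees
print(round(displacement_attenuation_db(7e3, 0.003), 2))  # -0.16 (dB)
```

So the theoretical attenuation at 3 mm is indeed well below the 0.6 dB first quoted, consistent with the measured 0.2 to 0.5 dB range.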

The link that quantifies time errors for a 20 kHz sinusoid is derived for (and only valid for) a 20 kHz sinusoid.
No, the link quantifies time errors for any and all signals. The error depends on the amplitude of the signal, the frequency of the signal, and the bit depth. Note that the link starts off by deriving the general formula for the "timing error" and then plugs in the numbers for some specific cases. I would also like to point out the distinct lack of sampling rate in the equation.

The authors note that for Redbook this is indeed the worst case scenario as any higher frequencies will be filtered out.
For Redbook, the 22 kHz full-scale sine wave is actually the best case scenario in terms of timing, and the author points that out. He also points out that in a more typical case the timing is worse, but it doesn't matter much anyway because the accuracy is generally in nanoseconds, not microseconds.
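
The slope argument is easy to reproduce; here is a minimal sketch of the worst-case 1-LSB timing error for a full-scale sine, assuming the same maximum-slope derivation the linked article uses (note that the sampling rate never appears):

```python
import math

def lsb_timing_error_s(bits, freq_hz):
    # Worst-case time shift equivalent to a 1-LSB amplitude error, taken at
    # the zero crossing where a full-scale sine has its maximum slope:
    # step q = 2 / 2**bits (for amplitude +/-1), max slope = 2*pi*f,
    # so dt = q / slope. The amplitude cancels; the sampling rate never enters.
    q = 2.0 / 2**bits
    return q / (2 * math.pi * freq_hz)

print(lsb_timing_error_s(16, 20e3))  # ~2.4e-10 s: a fraction of a nanosecond
```

More bits shrink the error; a higher sampling rate does not.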
It’s just reinforcing that for an adequately sampled signal (sampled above twice its highest frequency) you get complete info; it’s not claiming that you can replicate a transient signal, waveform or (even worse) spike that has timing periods of that same duration. Shannon’s theorem is absolutely applicable here and anywhere you want to extract information from a signal.:wink:
It is claiming that you can replicate any and all signals (including transients, spikes and repeating spikes) as long as the signal was properly bandlimited and sampled.
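
That reconstruction claim can be demonstrated numerically; here is a sketch using Whittaker-Shannon interpolation on a bandlimited click whose peak falls between sample instants (a truncated sinc sum, so recovery is near-perfect rather than exact):

```python
import numpy as np

fs = 44100.0              # Redbook sampling rate, Hz
t0 = 3.7e-6               # click arrival time, well inside one sample period
n = np.arange(-400, 401)  # sample indices kept (truncated interpolation sum)

def click(t):
    # An ideal impulse lowpassed to fs/2 becomes a sinc pulse: a bandlimited
    # "transient" whose peak sits between sample instants.
    return np.sinc(fs * (t - t0))

samples = click(n / fs)   # all the ADC stores: values on the sample grid

def reconstruct(t):
    # Whittaker-Shannon interpolation at an arbitrary instant t
    return float(np.sum(samples * np.sinc(fs * t - n)))

# Evaluate between samples: the inter-sample peak and its timing come back.
t_fine = np.linspace(-2 / fs, 2 / fs, 801)
err = max(abs(reconstruct(t) - click(t)) for t in t_fine)
print(f"max inter-sample reconstruction error: {err:.1e}")
```

The sub-sample arrival time t0 survives sampling because it is encoded in the sample values themselves, not in the sample positions.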
 
Last edited:
May 18, 2023 at 5:19 PM Post #39 of 57
First, I will say I am a bit saddened you didn’t take the time to return my quote cheek. I was personally hoping for ‘Butt Hurtford’, but I suppose I will have to settle with using my imagination :)
I'll have to disappoint here. Maybe 20 years ago, with a play on words in French, I could have entertained you, but nowadays and in English, I'm way out of my comfort zone. Also, as a mod it might not be the smartest thing for me to try.

I can appreciate the layers and layers of sedimentary FUD that you wade through in this forum, and I encourage you to consider that your (and others’ in this thread) initial reaction reflected that more than the actual substance of the OP warranted.
Sadly, besides the paper on side-by-side speakers, it's not our first rodeo with the other ones. You're obviously not responsible for all the other guys driving us nuts, but you know how the human brain sees patterns; well, I'm guessing the regulars here all thought something like "here we go again" and sighed before even reading one line of your post beyond the title.

‘Fluid [amplitude] permutations between any time’ is actually a pretty good description of transient signals generally subject to Shannon’s theorem (including music, both sonic and ultra), at least assuming you mean fluid figuratively rather than physically. I don’t think you can correctly call a difference of opinion on how much actual info is discernible there a fallacy, but I guess it’s your post, so suit yourself. My view is that it’s simply quite easy to retain a great deal of ultrasonic info in its entirety using even modern commodity-grade components; why limit yourself by stubbornly hedging your bets against human potential?
I'm just not a great fan of truth without proof. That's all there is to my general behavior in this hobby. If once bitten, twice shy, then at this point I'm 3 thousand times a skeptic. This hobby has ruined me in that way.

You wrongly used F=1/T to deduce things that aren't true about frequency heard.
That's been my main point all along, and it's also what @danadam argued against. I happened to focus more on another place where you did it, but they're mistakes with the same origin. You take a time T that you find lying around in hyper-specific listening experiments, and your idea is that it proves we can hear 1/T as a frequency, or that we need 1/T as the sample rate to get enough time resolution in PCM. You made those conclusions, not the papers.


You're using a reductionist fallacy here (https://en.wikipedia.org/wiki/Fallacy_of_the_single_cause - we’re getting a good list of actual bad logic!).

All any EEG data shows is ‘what happens when you’re wearing an EEG device’ by definition; what’s your point? All physical data that is not directly observable is captured via a probe; that doesn’t invalidate the scientific method outside of our primary senses.

Nor is electroencephalography a particularly young or fringe method (‘silly’, really?)... the roles of alpha and beta waves are quite well understood. We’re not talking about mind reading here; these are frequently observed and correlated neurological emotional states.
No, what I meant was different and also somehow worse ^_^.
My point was, I need to wear the silly looking hairnet to prove that the ultrasonic content in my music is doing something to me. Because nothing else manages to prove it does anything at all for my listening experience.
I meant it should become an audiophile accessory, like the blue light for MQA or the golden Hires logo on Sony players. Something that helps us "hear" the difference with our eyes.
 
May 18, 2023 at 9:09 PM Post #40 of 57
I prefer science that applies in some way to listening to music with optimal fidelity for human ears in the home. None of the points involving brain waves, tiger roars, and personal happiness have anything to do with that. However, I do think spending $12 a month on HD audio to make yourself happy is a bargain. OnlyFans would probably be considerably more expensive.
 
May 18, 2023 at 11:00 PM Post #41 of 57
VNandor said:
First off, I made a mistake there: it's not 0.6 dB, it is actually lower than that. However, both the theoretical mismatch in volume and any mismatch coming from different drivers are unimportant, because the paper measures the mismatch in SPL at that frequency, and it was anywhere between 0.2 dB and 0.5 dB at 2.9 mm. Also note that in this case the signal is essentially mono; the paper isn't talking about mismatch in pairing because there is really no "pairing" to speak of. Think of it as playing back music or a test tone at a certain SPL, then trying to play it back again 0.5 dB lower, because that is what's actually happening in the test. Picking up on a 0.5 dB difference (worst case) is easy with both music and test tones. It is not dramatic, but it is easy to hear in a blind test. Consistently discerning a 0.2 dB difference (they could not get a better match than this) is quite hard but not impossible.

There are two speakers. Transducers don’t all sound exactly alike, even when they're the same model, each fed the same mono signal. Pair matching is absolutely a potential factor, but probably not an important one either way, as they displaced both sides separately.

Discerning 0.2 dB amplitude differences over <10 μs periods sure sounds a lot like we might want to retain signal information there too. Particularly when it's essentially trivial to do so for less than the monthly cost of a cup of coffee, with even the cheapest of dongle DAPs.


VNandor said:
No, the link quantifies time errors for any and all signals. The error depends on the amplitude of the signal, the frequency of the signal, and the bit depth. Note that the link starts off by deriving the general formula for the "timing error" and then plugs in the numbers for some specific cases. I would also like to point out the distinct lack of sampling rate in the equation.

For Redbook, the 22 kHz full-scale sine wave is actually the best case scenario in terms of timing, and the author points that out. He also points out that in a more typical case the timing is worse, but it doesn't matter much anyway because the accuracy is generally in nanoseconds, not microseconds.

It is claiming that you can replicate any and all signals (including transients, spikes and repeating spikes) as long as the signal was properly bandlimited and sampled.

There, bolded right at the end, is where you assume the premise.

Ultrasonics are not being properly sampled and are (hopefully) beyond the bandlimit of the lowpass filter used to make redbook recordings. The corresponding transient signals are thus not recoverable on CD quality audio, regardless of bit depth.

Indeed, that necessary condition’s criterion was the exact formula I used to relate the time delays to their equivalent spectrum! You're using a counter-argument that is literally built upon the theorem you are arguing is inapplicable.

It is absolutely true that for 44.1 kHz sampling you can theoretically retain complete information about the signal up to 22 kHz (in practice, real filters make this more like 20 kHz, as Audio-Troll explicitly assumes, but that's pretty irrelevant on the scale of ultrasonics). However, for signals with periods comparable to the timing observations captured here, that condition is not satisfied. The derivation for bit depth is based on maximum slope, which is driven by the maximum frequency. If you have frequencies above 20 kHz, the time-error recovery margin is reduced in absolute terms.


castleofargh said:
I'm just not a great fan of truth without proof. That's all there is to my general behavior in this hobby. If once bitten, twice shy, then at this point I'm 3 thousand times a skeptic. This hobby has ruined me in that way.

You wrongly used F=1/T to deduce things that aren't true about frequency heard.

That's been my main point all along, and it's also what danadam argued against. I happened to focus more on another place where you did it, but they're mistakes with the same origin. You take a time T that you find lying around in hyper-specific listening experiments, and your idea is that it proves we can hear 1/T as a frequency, or that we need 1/T as the sample rate to get enough time resolution in PCM. You made those conclusions, not the papers.

From the get-go I acknowledged we can’t consciously hear those frequencies (hence the title and opening preamble!). The output data from the two-speaker testing is clearly conscious, and thereby in contrast to that…

What I did suggest is that the (quite incredible!) time differentiation humans show on displaced audible tones may be evidence of some of that subconscious ability. Tying those timings to their corresponding frequency spectrum isn’t dishonest or misleading in that very clear context. I guess I can understand the appearance of math being taken as some assumed proof, but I’m literally just pointing out the spectrum corresponding to those timings… it’s pretty amazing that humans can do that so precisely, and I definitely think it’s happening subconsciously (I mean, who could do that kind of trigonometry in their head :p).

We clearly detect amplitude variation at some level on that time scale in the displaced-tone experiments. It really doesn’t seem like a stretch to me to say we can use existing audio tech to include the entirety of that sound information, so our subconscious and body can react however they normally would to natural sound that has not been band-limited. The technology exists to make it a moot point… why not?


castleofargh said:
No, what I meant was different and also somehow worse ^_^.

My point was, I need to wear the silly looking hairnet to prove that the ultrasonic content in my music is doing something to me. Because nothing else manages to prove it does anything at all for my listening experience.

I meant it should become an audiophile accessory, like the blue light for MQA or the golden Hires logo on Sony players. Something that helps us "hear" the difference with our eyes.

Fair enough :)

I’ll stick with my ADI-2 just telling me what sample frequency it’s locked to. I hate all the different color codes and flashing-light timings, lol; I’m too old for that crap… just tell me the exact frequency, I don’t want a laser light show from my DAC lol!
 
May 19, 2023 at 1:44 AM Post #42 of 57
Tones are a good model for music playback as long as you understand that audibility with tones is logarithmically more sensitive than anything you'd hear in music playback. You don't need that kind of accuracy to sit on the couch and listen to your favorite album. You especially don't need frequencies you can't even hear. You don't even need the ones at the bleeding edge of the range of hearing that you can barely hear.

There's a myth in audiophoolery that I call the "one more thing..." syndrome. If something is important, people will say that a little more than that is important. They keep saying it until that little more becomes commonly agreed to be important. Then the cycle repeats, with someone saying "well, if that's important, we need a little more than that, just to be safe." That cycle never ends. It's easy to go down that rabbit hole if you just look at numbers in the abstract and never relate those numbers to actual audible sound.

Human ears have finite limits. Those thresholds have been studied for over a century. We know what the limits are. But audiophiles only focus on the technicalities of the specs of their equipment. They don't research the specs of those fleshy things on the sides of their heads.
 
Last edited:
May 19, 2023 at 5:06 AM Post #43 of 57
Ultrasonics are not being properly sampled and are (hopefully) beyond the bandlimit of the lowpass filter used to make redbook recordings. The corresponding transient signals are thus not recoverable on CD quality audio, regardless of bit depth.

Digital audio captures, out of necessity, a band-limited version of the original signal. We assume that, for human ears, the band-limited version at a 44.1 kHz sampling rate sounds identical and is therefore a 100% "recovered" version of the original signal from the standpoint of human ears. Dogs and bats, however, would need a higher sampling rate than 44.1 kHz.
 
May 19, 2023 at 5:15 AM Post #44 of 57
But audiophiles only focus on the technicalities of the specs of their equipment.
It is also interesting how little many talk about, say, room treatment and other things that do affect sound quality dramatically, while being hyper-focused on microscopic things such as ultrasonic frequencies in transient sounds.

They don't research the specs of those fleshy things on the sides of their heads.
The idea that our ears have limits isn't pleasant to many, for self-esteem reasons I guess...
 
May 19, 2023 at 5:26 AM Post #45 of 57
Our ears do have limits, and as said previously, we’ve known the thresholds of hearing and pain, frequency response, and sensitivity for years via external measurements. More in-depth reading reveals that those simple microphones, our eardrums, are connected, in average healthy young adults, to what amounts to a 1500-channel mixing console, and then to something that has memory recall of familiar sounds…
 
