First, I will say I am a bit saddened you didn’t take the time to return my quote cheek. I was personally hoping for ‘Butt Hurtford’, but I suppose I will have to settle for using my imagination.
castleofargh said:
I did hold you to high standards because you looked like you could do it (although, you're much, much better at critical thinking when data goes against what you want it to be. But hey, I'm like that too).
I insist that the paper about phase has no place here. I didn't elaborate because I thought the reason was quite obvious, but here we go:
It's an ultra specific experiment, the conclusions as such are just as specific. Sometimes it's easy to see why we can extend some conclusions to wider ranges of conditions, but often you can't and would have to demonstrate that they apply to your conditions. That experiment is very clearly about signal interference between the 2 transducers. What's perceived is not a delay, but the mess in the signal caused by that phase shift. Claiming that's direct delay perception is a stretch.
Let me imagine a new experiment to make my point clearer. Let's do the same with the same tone played twice, and note when we notice the amplitude change through interference, or maybe some modulation of sorts (would that even happen if it's the same tone?... anyway). I then declare that the phase delay for what we first notice relates to the highest frequency a human can detect. You readily find that rationale absurd, right? But you're using the same logical fallacy when presenting that paper:
Bret Halford said:
Indeed, we can similarly find objective evidence of human hearing WAY above 20 kHz by focusing on timing differences rather than music or other 'informational' signals.
Do you see it, or is it just that much harder to be a critic of your own ideas?
Your thought experiment doesn’t sweep the delay at all, if I understand it correctly; indeed, it doesn’t sound like you are introducing any time offset or equivalent phase at all (at your steady tone). I don’t think it replicates the rationale I use to claim the Boson paper is evidence of ultrasonic detection, but maybe I am not understanding your description correctly.
While we’re on the subject of critiquing one’s own ideas, I’ll note that it gets harder the more entrenched the view is. Being exposed to a contrary view of any quality tends to make us ‘double down’ on the sincerely held belief that we feel is threatened. I can appreciate the layers and layers of sedimentary FUD that you wade through in this forum and I encourage you to consider that your (and others in this thread) initial reaction reflected that more than the actual substance of the OP warranted.
Back on topic, I’m not particularly fixated on the coupling mechanism (as an aside, I came across an interesting paper studying breast response in nursing mothers to the ultrasonic content in their baby’s cries. It revved up the milkers, but only when skin contact was present!). But for the record, saying something is ‘evidence’ isn’t a conclusion as you claim.
And yes, time discrimination is absolutely evidence of the equivalent frequency. That’s not ‘my idea’; it comes from a nice old dead guy named Fourier. Perhaps our struggle here is in understanding that monochromatic phase is a time delay?
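To make that equivalence concrete, here's a minimal sketch (my own illustration, not from any cited paper): for a single tone of frequency f, a phase shift φ is indistinguishable from a time delay Δt = (φ/360)/f, and any time interval has a frequency whose period matches it. Whether that equivalence licenses conclusions about audibility is exactly what's being argued.

```python
# For a pure tone: sin(2*pi*f*(t - dt)) == sin(2*pi*f*t - phi), phi = 2*pi*f*dt
import math

def phase_to_delay(phi_deg: float, f_hz: float) -> float:
    """Time delay (seconds) equivalent to a phase shift at frequency f."""
    return (phi_deg / 360.0) / f_hz

def delay_to_frequency(dt_s: float) -> float:
    """Frequency whose full period equals the given time interval."""
    return 1.0 / dt_s

# A 90-degree shift of a 7 kHz tone is the same as a ~35.7 us delay:
print(phase_to_delay(90, 7000) * 1e6)   # ~35.7 (microseconds)

# A 10 us ITD corresponds to a 100 kHz period -- the equivalence the
# thread is arguing about, not by itself proof of 100 kHz hearing:
print(delay_to_frequency(10e-6))        # ~100 kHz
```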
If you want stuff in the μs range, we have plenty of work on ITD. Most showed a minimum around a 10 μs interaural delay being perceived (I guess it proves we can hear 100 kHz, according to your logical fallacy of fluid permutations between any time and heard frequencies).
That's achieved when simply using test tones. Some tests went lower with tailored signals. But that too is specific research that does not align with or validate your beliefs in any way, shape or form. In fact, those experiments clearly show that discrimination for ITDs sucks pond water at high frequencies, and people achieve the lowest times with a signal around 1 kHz. I'm obviously taking this example to oppose your logic; I'm not trying to claim anything beyond that about the need for high frequency or what can or cannot be heard in general.
The Boson paper was using a 7 kHz test tone, FYI (re: better at low frequencies); it doesn’t sound so different from the studies you mention, in either conclusion or technique.
‘Fluid [amplitude] permutations between any time’ is actually a pretty good description of transient signals generally subject to Shannon’s theorem (including music, both sonic and ultra), at least assuming you mean fluid figuratively rather than physically. I don’t think you can correctly call a difference of opinion on how much actual info is discernible there a fallacy, but I guess it’s your post, so suit yourself. My view is that it’s simply quite easy to retain ultrasonic info in its entirety using even modern commodity-grade components, so why limit yourself by stubbornly hedging your bets against human potential?
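A hedged sketch of the Shannon point (my own toy example; the 96 kHz rate and 30 kHz tone are arbitrary choices, not from the thread): Whittaker-Shannon interpolation recovers a band-limited signal between sample instants, so sub-sample timing is retained as long as the signal stays below fs/2.

```python
import math

fs = 96000.0          # sample rate (Hz), comfortably above 2 * 30 kHz
f = 30000.0           # an "ultrasonic" 30 kHz tone, still below Nyquist
N = 2000              # number of samples kept around the point of interest

samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Whittaker-Shannon reconstruction at an arbitrary time t (seconds)."""
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

# Evaluate 1/3 of a sample period off-grid, between samples 1000 and 1001:
t = (1000 + 1.0 / 3.0) / fs
print(abs(reconstruct(t) - math.sin(2 * math.pi * f * t)))  # small (truncation error only)
```

The residual error comes only from truncating the infinite sinc sum to N samples, not from the sample grid itself.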
I’d be very interested in additional resources to better understand discrimination limits. I wouldn’t be surprised at all if our conscious abilities on the specifics are minimal, given there’s no statistically notable conscious detection, but that also necessarily limits the scope of the data to that comparably sluggish neocortex…
Something can obviously have no perceivable immediate impact while causing some more or less long term impact of sort (like uncle Putin's special tea, or UV on our skin). I agree, of course, that there can be more than near immediate conscious impact. But if there is no clearly conscious impact even long term that can be demonstrated to actually do something, then how do we tell if it's good, bad, or a thing at all?
The brain wave paper is a minuscule step toward that, but let's not kid ourselves with the conclusions you decided to draw from it in your first post. It's allegedly tied to ultrasounds, but ultrasounds alone do nothing. The impact (whatever that is) occurs after a known time, but listening tests including that delay are inconclusive. Is there a lasting effect? Who knows?
All the paper suggests is that if you listen to hires while wearing a silly hairnet with sensors, a screen with your alpha wave reading will show more red somewhere, and that's all you get.
You're using a reductionist fallacy here (https://en.wikipedia.org/wiki/Fallacy_of_the_single_cause); we're getting a good list of actual bad logic!
All any EEG data shows is ‘what happens when you’re wearing an EEG device’ by definition; what’s your point? All physical data that is not directly observable is captured via a probe, and that doesn’t invalidate the scientific method outside of our primary senses.
Nor is electroencephalography a particularly young or fringe method (‘silly’, really?)... the roles of alpha and beta waves are quite well understood. We’re not talking about mind reading here; these are frequently observed and correlated neurological emotional states.
IMO, open a window, hold a pet, get your legs up, those have real demonstrated positive impact on your music listening experience and general health.
On this point we absolutely agree. Personally, I find the best enhancement for music of any resolution is to first turn off the screen that I am almost certainly inclined to be looking at while listening… removing all senses other than sound sure helps me focus, but it is often a bit more impractical than just putting the laptop/phone down. Regular exercise and THC/CBD help enormously too.
I simply retain the interest and means to spend an additional ~$12 a year to listen to HD audio whenever possible beyond that. Indeed, the quest to push my rig and source well beyond perceivable limits is fun entirely on its own for me and absolutely enhances the listening experience on a purely psychological/emotional level.
VNandor said:
It is not just semantics; a transient by itself is going to have a different spectrum than a periodic signal that repeats the same transient over and over. You could hear a "transient" by itself if it's played once, but you likely won't hear that very same transient if it's played hundreds of thousands of times a second, because one signal will contain frequencies that the ear is sensitive to while the other will not. Yes, the brain (or rather, the ears) will be totally oblivious to periodic transients if the signal doesn't contain anything in the audible frequency range.
The time resolution of digital audio is limited by the bit depth, not the sampling rate. Sampling a signal at a higher sampling rate allows for higher bandwidth, not higher timing resolution. Sampling a signal more often than twice its highest frequency won't net a higher time resolution; increasing the bit depth, however, does. I know a site called "troll-audio" does not exactly inspire confidence, especially if you have no way to verify whether the math is correct.
Here's another site with a better name that delves into the intricacies of the time resolution of digital audio.
I probably should have taken the time to respond properly on its own to this link, rather than burying it as part of the lengthy exchange with Castle. I can’t blame you (or others) for missing it:
Butt Hurtford said:
The link that quantifies time errors for a 20 kHz sinusoid is derived for (and only valid for) a 20 kHz sinusoid. Higher frequencies will of course have steeper slopes and need more amplitude resolution to resolve the same time shift. The authors note that for Redbook this is indeed the worst case, since any higher frequencies will be filtered out. As for allowable phase error and its relation to bit depth: the link is just reinforcing that for an adequately sampled signal (fs > 2× the highest frequency) you get complete info; it's not claiming that you can replicate a transient signal, waveform or (even worse) spike whose timing features are of that same duration. Shannon's theorem is absolutely applicable here and anywhere you want to extract information from a signal.
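For reference, a back-of-envelope version of that bit-depth/timing relationship (my own sketch of the steepest-slope argument, not a quote from the linked article; the exact constants there may differ): the smallest time shift of a full-scale sine that moves any sample by at least one quantization step is the step size divided by the waveform's maximum slope.

```python
import math

def min_detectable_shift(f_hz: float, bits: int) -> float:
    """Smallest time shift (s) of a full-scale sine at f_hz that changes a
    sample by one quantization step, using the steepest-slope bound.
    Full scale is +/-1, so the step is 2 / 2**bits and the max slope of
    sin(2*pi*f*t) is 2*pi*f.  Result: 1 / (pi * f * 2**bits)."""
    step = 2.0 / (2 ** bits)
    max_slope = 2.0 * math.pi * f_hz
    return step / max_slope

# 16 bits, 20 kHz sine: roughly a quarter of a nanosecond
print(min_detectable_shift(20000, 16) * 1e9)  # ~0.24 ns

# Raising the sample rate does not appear in the formula at all;
# adding one bit halves the figure:
print(min_detectable_shift(20000, 17) * 1e9)  # ~0.12 ns
```

Note the frequency dependence: a lower-frequency sine has a shallower slope, so under this model its time resolution is coarser at the same bit depth, which matches the "worst case at 20 kHz" framing above.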
Indeed, the folks over at ‘Audio-Troll’ clearly identify the assumption of max 20 kHz audibility as a premise. As such, when fielded in rebuttal to observed ultrasonics, this is a textbook example of assuming the premise in your argument (https://en.wikipedia.org/wiki/Begging_the_question).
VNandor said:
Unfortunately, while I understand how digital audio and sound work, I don't know anything about brain waves. Something I do know is that ultrasonic sound can indeed affect us; we just can't hear it. My theory is it probably has something to do with that.
I’d agree we definitely can’t hear them in the conscious sense that we typically use that term. I do suspect that the lower brain and its faster survival focused reactions are a clear part of the puzzle, given the subconscious/conscious discrepancy in detection.
It sure is fun to speculate. Who knows, maybe the mesentery residents are having a conversation with the lower brain. I don’t know whether intestinal bacteria’s response to ultrasonics has ever been studied, but I find my gut tells me all sorts of useful info subconsciously…
VNandor said:
If I understand the paper correctly, it also does not account for the comb filtering effect that's caused by emitting the same sound from slightly displaced sources (when d is not zero), which is a very likely cause of the audible differences. 7 kHz being off by 3 mm corresponds to the phase being ~22 degrees off as the sounds combine. That would attenuate the signal by 0.6 dB, which is not particularly hard to hear when listening closely.
This seems like a fair point to me. That said, a 0.6 dB channel mismatch is below most transducer pairing specs for headphones and IEMs at least (the better ones will typically guarantee matching within 1 dB); maybe ribbon speakers like those used in the paper have tighter pairing tolerances? I guess a mismatch would be more obvious with a single tone, but still, I'm not sure 0.6 dB would be as dramatic as you say.
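For what it's worth, a minimal sketch of that interference arithmetic (my own, and hedged: it assumes a 343 m/s speed of sound and simple equal-amplitude coherent summation, so the resulting level figure depends entirely on that model, not on the paper's actual geometry):

```python
import math

C = 343.0  # speed of sound in air, m/s (assumed)

def path_diff_phase_deg(d_m: float, f_hz: float) -> float:
    """Phase offset (degrees) from a path-length difference d at frequency f."""
    wavelength = C / f_hz
    return 360.0 * d_m / wavelength

def sum_level_db(phi_deg: float) -> float:
    """Level of two summed equal-amplitude coherent tones with phase offset
    phi, relative to the in-phase sum: |1 + e^(j*phi)| / 2 = cos(phi/2)."""
    return 20.0 * math.log10(math.cos(math.radians(phi_deg) / 2.0))

phi = path_diff_phase_deg(0.003, 7000)  # 3 mm offset at 7 kHz
print(round(phi, 1))                    # ~22 degrees, matching the quote
print(round(sum_level_db(phi), 2))      # level change under this model
```

The ~22 degree phase figure reproduces straightforwardly; the dB impact is more model-sensitive, which is partly why I'm unsure how dramatic it would be in practice.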