Off Topic Thread: Off Topic Is On Topic Here
Apr 7, 2020 at 6:25 PM Post #61 of 184
Modo rambling before impact::rage:
Debating ideas/facts = good
Judging/attacking the guy who disagrees with you = bad
Apes together = strong

Those are the core principles of this forum (I might paraphrase a bit).
I already pretend that I can't read to justify not moderating the hell out of the last posts. You know I'm a one-trick modoponey: if I have to act, the entire posts will go and everybody loses.
Swear at the screen like I do, it's therapeutic and you can still try to look like a sir online. You don't know it, but allegedly, after reading comments on Head-Fi, @sander99 often gets up to throw axes into a wall while insulting each and every one of us, fluently, in 4 languages. Then he sits down and makes his usual aggression-free posts. Be more like him.
 
Apr 7, 2020 at 8:23 PM Post #62 of 184
The only aural physiognomy I really care about is my own. That is what I hear with and it affects whatever kind of sound reproduction equipment I listen to. My multichannel speaker system sounds fantastic to me and all my friends. You don't need special headsets or computers or custom readings of the shape of your noggin. It's one size fits all. You just sit down and the sound is all around you. If you ever get to Los Angeles (and you could be polite) I would be happy to have you over to listen to some music. I'm betting you would enjoy it if you can get past the attitude.
This is perhaps one of the least objective things I have come across in this thread. You keep quoting "your preference" in sound science. Not everyone needs to share the same opinion, or usage preferences.

How is 10us two times more precision than 55ps? It's several orders of magnitude less than 44.1kHz can offer!

G

I don't know where you got that number from. The spacing between samples at 44.1 kHz is 22.6 microseconds, which is longer than 10 microseconds.

The overall ability to resolve phase and timing depends on both bit depth and sampling frequency. The Nyquist-Shannon theorem works only for properly anti-aliased signals that are sampled at the right time. A time difference of 10 microseconds will need the anti-aliasing filter and sampling frequency to be able to accommodate that, and also the necessary bit depth to preserve the information.

Think of a scenario where the zero-crossing delta is 10us between channels for a specific pan of an object. A delta of 10us will fall in between two sampling points, assuming the other channel is sampled right at the zero crossing. When you try to reconstruct it, you'll be playing a game where your wave start will begin 22us before the first valid sample, while the required difference should have been 10us. A constantly fixed delta of 10us in the design stages won't work either, since this number will vary depending upon the pan.

As I said before, you don't necessarily need higher sampling rates to accomplish this. You could have a different encoding technique that preserves these timing deltas while still sampling at 44.1 kHz, but IMO it would be a lot more effort to design an ADC, encoder and decoder that does that than just upping the sampling rate to 96 kHz or, to be on the safe side, 192 kHz.
 
Apr 7, 2020 at 8:43 PM Post #63 of 184
Swear at the screen like I do

I do it even better... As soon as I bump into that kind of rudeness, I just hit the reset button and move on to the next post! I'm not required to absorb other people's anger just to figure out something about headphones or whatever. It's too much work and aggravation for me to deal with that stuff.

This is perhaps one of the least objective things I have come across in this thread. You keep quoting "your preference" in sound science.

My physiognomy isn't my preference; I was born with it.

My point was the best HRTF for each of us is riding on our shoulders every single minute of every day. With speakers, you don't need to worry if the settings are diddled in correctly. It just works. You can sit next to me on the couch and hear my system perfectly attuned to your skull, just like I do mine. That's why speakers are better. Headphones are fine for what they do and they are the best at not bothering the neighbors. But two channel in headphones can't compare to 5.1 speakers, much less a full blown Atmos setup. I'm sure it's theoretically possible to do sound processing and head tracking to synthesize that kind of experience, but it would be complicated, expensive and subject to all kinds of error. It is more efficient to just put the transducer producing the sound in the spot you want the sound to emanate from.

If you don't have a listening room, and you can afford the equipment to synthesize multichannel sound, and you have the patience to tweak it to your own specifications, and you don't mind listening to music without anyone else joining you, then feel free to do it the hard way.
 
Apr 8, 2020 at 8:18 AM Post #64 of 184
I don't know where you got that number from. The spacing between samples at 44.1 kHz is 22.6 microseconds, which is longer than 10 microseconds.
IIRC, you are some kind of engineer. You may have had some training in sampled-data systems (aka discrete-time systems). The ability to resolve temporal differences between 2 sets of sampled data depends on the nature of the signal (the highest slope therein) and the number of possible values (resolution or bit depth). The steeper the max slope and the larger the number of possible values (states), the better the temporal resolution. For sine waves the calculation is rather straightforward:
∆t_res = 1 / (2 · π · f_sig · N_st)
where ∆t_res is the temporal resolution, f_sig is the frequency of the sine wave (the signal) and N_st is the number of states of the values.
The best case sine wave for 44.1/16 is: f_sig=22050Hz and N_st=65536, then ∆t_res=110ps. I imagine @gregorio mistakenly used the sampling frequency rather than the Nyquist frequency.
Perhaps more "typical" would be f_sig=2205Hz at -20dBFS (N_st=6554), with ∆t_res=11ns

FYI, I've seen a handful of papers giving human auditory temporal discrimination of about 5µs, easily handled by 44.1/16, unless the signal is small and exclusively low frequency.
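For anyone who wants to check the arithmetic, here is a minimal Python sketch of the formula above; it is purely illustrative and only reproduces the two cases quoted.

import math

def temporal_resolution(f_sig, n_states):
    # delta_t_res = 1 / (2 * pi * f_sig * N_st), for a sine wave spanning n_states levels
    return 1.0 / (2.0 * math.pi * f_sig * n_states)

# best-case sine for 44.1/16: 22050 Hz at full scale (65536 states)
print(temporal_resolution(22050, 65536))   # ~1.1e-10 s, i.e. ~110 ps

# more "typical": 2205 Hz at -20 dBFS (~6554 states)
print(temporal_resolution(2205, 6554))     # ~1.1e-8 s, i.e. ~11 ns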
 
Apr 8, 2020 at 10:30 AM Post #65 of 184
I imagine @gregorio mistakenly used the sampling frequency rather than the Nyquist frequency.

Well spotted, my mistake, I did indeed use 1/(2 * pi * Fs * number_of_levels) when I should have used 1/(2 * pi * Bandwidth * number_of_levels)!

[1] I don't know where you got that number from.
[2] The spacing between samples at 44.1 kHz is 22.6 microseconds, which is longer than 10 microseconds.
[2a] The Nyquist-Shannon theorem works only for properly anti-aliased signals that are sampled at the right time.
[4] Think of a scenario where the zero-crossing delta is 10us between channels for a specific pan of an object. A delta of 10us will fall in between two sampling points, assuming the other channel is sampled right at the zero crossing.
[4a] When you try to reconstruct it, you'll be playing a game where your wave start will begin 22us before the first valid sample, while the required difference should have been 10us.

1. You're correct, my mistake, I apologise for my sloppiness, but my basic point still stands: 110ps is nearly five orders of magnitude below your quoted 10us!

2. True, but of course digital audio doesn't output samples (a "stair-step"), it outputs a continuous wave/function. In other words, a basic tenet of digital audio is that information BETWEEN the sample points is captured and reproduced (provided it's below the Nyquist Freq). Your assertion effectively violates the Nyquist/Shannon Theorem!
2a. The Nyquist/Shannon Theorem clearly states the requirement to band-limit the signal (to half the sample rate) in order to capture all the information, but I don't recall it saying anything about being "sampled at the right time". Is that something you've added?

4. Yes, the delta of 10us could fall between two sampling points but that doesn't matter, it will be captured just as accurately as if it fell on a sample point. Again, we don't output a stair-step of sample points, we output a continuous wave, with ALL the information between the sample points. If your assertion were valid, it would be impossible to have a sub-sample delay, and it's easy to verify that you can (a quick numerical sketch follows below): take a 44.1k audio file, convert it to, say, 88.2k, move the waveform by one sample (which of course equals half a sample at 44.1k) and convert it back to 44.1k. This file now has different sample values from the original 44.1k file, and reconstructing it will result in the analogue waveform having a delay of about 11us; or convert it back to 88.2k again and observe that the half-sample delay has been maintained. This isn't just theory, sub-sample delays are quite common; they're often used in reverb processors, for example.
4a. It will be a valid sample; a sample value of 0 is valid, and the waveform starting between the sample points will be reconstructed (assuming the appropriate subsequent sample values)!
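A minimal Python/NumPy sketch of the verification described in point 4 (the 1 kHz test tone and the use of SciPy's resample_poly are illustrative assumptions, not the only way to do it):

import numpy as np
from scipy.signal import resample_poly

fs = 44100
n = 4096
t = np.arange(n) / fs
# band-limited test signal: a 1 kHz tone, faded with a Hann window to tame edge effects
x44 = np.sin(2 * np.pi * 1000 * t) * np.hanning(n)

# upsample to 88.2k, shift by one sample (= half a sample at 44.1k), downsample again
x88 = resample_poly(x44, 2, 1)
y44 = resample_poly(np.concatenate(([0.0], x88[:-1])), 1, 2)

# the 44.1k sample values now differ from the original, i.e. the ~11.3us delay
# is carried by the 44.1k data itself
print(np.max(np.abs(y44 - x44)))                      # clearly non-zero

# round-trip back to 88.2k: the half-sample (one sample at 88.2k) delay is still there
y88 = resample_poly(y44, 2, 1)
lag = np.argmax(np.correlate(y88, x88, mode="full")) - (len(x88) - 1)
print(lag)                                            # ~1 sample at 88.2k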

My explanation might not be very good/clear; Monty's is much better, not least because he can demonstrate the point visually and in practice. Jump to about 20:50 if you want the section specifically on timing resolution:

[embedded video: Monty's demonstration]

G
 
Apr 8, 2020 at 10:42 AM Post #66 of 184
It has been several centuries since any single person could absorb all knowledge, and even then it was truly rare. These days, what each of us doesn't know far exceeds what we do know. There is no shame in it. There is just so much knowledge now.

Even someone knowledgeable about audio can be excused for not understanding what an HRTF is, not knowing the definition of "physiognomy" and that it is completely unrelated to HRTFs, not understanding what a Smyth Realiser A16 is or how it works, not understanding speaker virtualization or spatial audio at all. There is no shame... unless...
My physiognomy isn't my preference; I was born with it.

My point was the best HRTF for each of us is riding on our shoulders every single minute of every day. With speakers, you don't need to worry if the settings are diddled in correctly. It just works. You can sit next to me on the couch and hear my system perfectly attuned to your skull, just like I do mine. That's why speakers are better. Headphones are fine for what they do and they are the best at not bothering the neighbors. But two channel in headphones can't compare to 5.1 speakers, much less a full blown Atmos setup. I'm sure it's theoretically possible to do sound processing and head tracking to synthesize that kind of experience, but it would be complicated, expensive and subject to all kinds of error. It is more efficient to just put the transducer producing the sound in the spot you want the sound to emanate from.

If you don't have a listening room, and you can afford the equipment to synthesize multichannel sound, and you have the patience to tweak it to your own specifications, and you don't mind listening to music without anyone else joining you, then feel free to do it the hard way.
... you pretend to have a clue, when you don't.
 
Apr 8, 2020 at 11:14 AM Post #67 of 184
[1] So anyone else will be able to see post #38 and see no mention of binaural: no lie.
[2] Sorry that again I'm going to have to question your credentials (as you seem keen on this). Last time you said you had one Atmos production to your name. Sorry I'm not going to quake in the knees about this.
[3] You say you back up your assertions without actually linking any source (doh).
[3a] But even if you look up "Head-related transfer function" on Wikipedia you will see pinnae included in their summary for another variable of sound before what's neurologically sent to brain.
[4] Apart from the antics we're going on about now with sound perception and cinema quality, you're the only one in these last few posts I've seen that claims headphones can innately render a source "anywhere" (it takes any processing to have overhead sound). I'm not going to resort to saying you're a liar, you're either that far up your know what or willingly obtuse.

1. Anyone else will be able to see in post #38 that I clearly mention (and explain) HRTFs with headphones, and therefore that I can ONLY be talking about binaural!

2. My credentials are entirely superfluous, I don't even need the credentials of an average high schooler to read and quote basic definitions from Wikipedia! If you want to play that game though, what about your credentials?

3. Is Wikipedia not a source? doh!
3a. Exactly, which is why it's included in a HRTF! Wikipedia - "HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head, PINNA, and torso, before the sound reaches the transduction machinery of the eardrum and inner ear (see auditory system)." - You didn't seem to notice the inclusion of "pinna" when I posted this source previously, even though I bolded and explained it!!!!

4. How is it my fault if you:
A. Yet again make up a false quote and attribute it to me.
B. Can't be bothered to understand the basic principles of how stereo works.
C. Can't be bothered to find out what Binaural/HRTFs are, or look up any of the large body of research and practical implementations in this area.
D. Then argue about all of the above.
E. Then call me obtuse or "too far up you know what"!

Unbelievable. Or it would be anywhere other than an audiophile forum!!

G
 
Apr 8, 2020 at 11:36 AM Post #68 of 184
1. Anyone else will be able to see in post #38 that I clearly mention (and explain) HRTFs with headphones, and therefore that I can ONLY be talking about binaural!

2. My credentials are entirely superfluous, I don't even need the credentials of an average high schooler to read and quote basic definitions from Wikipedia! If you want to play that game though, what about your credentials?

3. Is Wikipedia not a source? doh!
3a. Exactly, which is why it's included in a HRTF! Wikipedia - "HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head, PINNA, and torso, before the sound reaches the transduction machinery of the eardrum and inner ear (see auditory system)." - You didn't seem to notice the inclusion of "pinna" when I posted this source previously, even though I bolded and explained it!!!!

4. How is it my fault if you:
A. Yet again make up a false quote and attribute it to me.
B. Can't be bothered to understand the basic principles of how stereo works.
C. Can't be bothered to find out what Binaural/HRTFs are, or look up any of the large body of research and practical implementations in this area.
D. Then argue about all of the above.
E. Then call me obtuse or "too far up you know what"!

Unbelievable. Or it would be anywhere other than an audiophile forum!!

G

And it doesn't take an audio engineer to know HRTF is not the same as a binaural recording! Your stance was that headphones can localize sound "anywhere" with HRTF alone (and claiming perception of sound stems from just two single points on the head):smirk:. Even a high schooler will tell you they can't hear a sound X feet in front or in back of them with a normal stereo recording, with no virtual DSP, over headphones. You have the outer ear, middle ear, and inner ear: all influence sound perception before you get into efferent and afferent neural pathways.

I suspect you'd still obfuscate, make up your own random outline responses, engage everyone in a mean fashion no matter what forum you'd visit!!
 
Apr 8, 2020 at 12:39 PM Post #69 of 184
[1] And it doesn't take an audio engineer to know HRTF is not the same as a binaural recording!
[2] Your stance was that headphones can localize sound "anywhere" with HRTF alone (and claiming perception of sound stems from just two single points on the head):smirk:.
[3] Even a high schooler will tell you they can't hear a sound X feet in front or in back of them with a normal stereo recording, with no virtual DSP, over headphones.
[4] You have the outer ear, middle ear, and inner ear: all influence sound perception before you get into efferent and afferent neural pathways.
[5] I suspect you'd still obfuscate, make up your own random outline responses, engage everyone in a mean fashion no matter what forum you'd visit!!

1. Please explain how you can have a binaural recording without HRTFs!
"With a simple recording method, two microphones are placed 18 cm (7") apart facing away from each other. This method will not create a real binaural recording. The distance and placement roughly approximates the position of an average human's ear canals, but that is not all that is needed. More elaborate techniques exist in pre-packaged forms. A typical binaural recording unit has two high-fidelity microphones mounted in a dummy head, inset in ear-shaped molds to fully capture all of the audio frequency adjustments (known as head-related transfer functions (HRTFs) in the psychoacoustic research community)" - Wikipedia! However, a binaural recording can be manufactured rather than recorded, much the same as a stereo recording can be manufactured, except using binaural/HRTF principles rather than stereo "panning".

2. Yes, given an accurate HRTF a live binaural recording can localise sound anywhere at any distance, within range of the microphones obviously. In the case of a manufactured binaural recording, then obviously you'd need some additional DSP for distance (EG. A reverb processor), just as you would for a manufactured stereo recording.

3. Huh, what's that got to do with it? I'm not talking about "a normal stereo recording" on headphones, I'm talking about a recording with applied HRTFs on headphones (a binaural recording!). How could you possibly not understand that?

4. Not just the different parts of the ears themselves but also the head shadowing and torso effects. However, the point you seem to be consistently missing is what an HRTF actually is and what it does. Given an accurate HRTF, the sound that arrives at the ear drum would be identical to the sound that arrives from any location in real life. This cannot be achieved with any commercial surround speaker format!

5. Pot, kettle, black.

G
 
Apr 8, 2020 at 1:08 PM Post #70 of 184
1. Please explain how you can have a binaural recording without HRTFs!
"With a simple recording method, two microphones are placed 18 cm (7") apart facing away from each other. This method will not create a real binaural recording. The distance and placement roughly approximates the position of an average human's ear canals, but that is not all that is needed. More elaborate techniques exist in pre-packaged forms. A typical binaural recording unit has two high-fidelity microphones mounted in a dummy head, inset in ear-shaped molds to fully capture all of the audio frequency adjustments (known as head-related transfer functions (HRTFs) in the psychoacoustic research community)" - Wikipedia! However, a binaural recording can be manufactured rather than recorded, much the same as a stereo recording can be manufactured, except using binaural/HRTF principles rather than stereo "panning".

2. Yes, given an accurate HRTF a live binaural recording can localise sound anywhere at any distance, within range of the microphones obviously. In the case of a manufactured binaural recording, then obviously you'd need some additional DSP for distance (EG. A reverb processor), just as you would for a manufactured stereo recording.

3. Huh, what's that got to do with it? I'm not talking about "a normal stereo recording" on headphones, I'm talking about a recording with applied HRTFs on headphones (a binaural recording!). How could you possibly not understand that?

4. Not just the different parts of the ears themselves but also the head shadowing and torso effects. However, the point you seem to be consistently missing is what an HRTF actually is and what it does. Given an accurate HRTF, the sound that arrives at the ear drum would be identical to the sound that arrives from any location in real life. This cannot be achieved with any commercial surround speaker format!

5. Pot, kettle, black.

G

This is worse obfuscation than your claim that 3D audio is just two horizontal planes. Anyone else will see that "HRTF" and "binaural recording" have their own separate entries in Wikipedia (again showing how they are separate concepts). I never said there wasn't a relationship between a HRTF and binaural recording. You were the one who claimed headphones can localize from "anywhere" (with absolutely no disclaimer about binaural recording or virtual surround DSP). You can keep back-pedalling now, but last time I checked, most all recordings are stereo and not binaural:rolling_eyes: You also claimed there were just 1 point for the sensory organ and that all decoding of audio is from brain only. Sorry, as someone who took an anatomy and physiology class from a prof who only focused on pars petrosa, sensory perception has many factors. Including even efferent nerve innervation that affects the middle ear muscles or frequency amplification in the inner ear.

Funny you say Pot, kettle, black!
 
Apr 9, 2020 at 4:21 AM Post #71 of 184
IIRC, you are some kind of engineer. You may have had some training in sampled-data systems (aka discrete-time systems). The ability to resolve temporal differences between 2 sets of sampled data depends on the nature of the signal (the highest slope therein) and the number of possible values (resolution or bit depth). The steeper the max slope and the larger the number of possible values (states), the better the temporal resolution. For sine waves the calculation is rather straightforward:
∆t_res = 1 / (2 · π · f_sig · N_st)
where ∆t_res is the temporal resolution, f_sig is the frequency of the sine wave (the signal) and N_st is the number of states of the values.
The best case sine wave for 44.1/16 is: f_sig=22050Hz and N_st=65536, then ∆t_res=110ps. I imagine @gregorio mistakenly used the sampling frequency rather than the Nyquist frequency.
Perhaps more "typical" would be f_sig=2205Hz at -20dBFS (N_st=6554), with ∆t_res=11ns

FYI, I've seen a handful of papers giving human auditory temporal discrimination of about 5µs, easily handled by 44.1/16, unless the signal is small and exclusively low frequency.

It's humbling to hear those sentences. Truth be told, I learnt signal processing by myself, just by practicing derivations (I have quite a bit of interest in math, and of course audio). I didn't have much formal education on that apart from Fourier transforms and my university project. I'm an engineer, sure, but from a completely different domain: VLSI.

I'm still not convinced that the inter sample deviations fall within the assumptions. I would like to be pointed in the right direction with documents on that but it doesn't make sense to me intuitively. Cameras face things like moire, false colour and other stuff on similar grounds afaik.

I'm making a search using this sentence "nyquist theorem and sub sample artefacts". Will let you know if I find anything interesting.
 
Apr 9, 2020 at 5:15 AM Post #72 of 184
[1] This is worse obfuscation than your claim that 3D audio is just two horizontal planes.
[2] Anyone else will see that "HRTF" and "binaural recording" have their own separate entries in Wikipedia (again showing how they are separate concepts). I never said there wasn't a relationship between a HRTF and binaural recording.
[3] You were the one who claimed headphones can localize from "anywhere" (with absolutely no disclaimer about binaural recording or virtual surround DSP).
[4] You can keep back-pedalling now,
[4] but last time I checked, most all recordings are stereo and not binaural:rolling_eyes:
[5] You also claimed there were just 1 point for the sensory organ and [5c] that all decoding of audio is from brain only.
[5b] Sorry, as someone who took an anatomy and physiology class from a prof who only focused on pars petrosa ...
[6] Funny you say Pot, kettle, black!

1. POT, KETTLE, BLACK!!!

2. And anyone else will see that a binaural recording is defined by having some form of HRTF: without some form of HRTF, a recording is NOT a "binaural" recording, and they can see this by reading the quote in the post to which you're replying!

3. You've got to be joking?? My first response to you in this thread (post #22): "The argument of accurate spatiality is therefore similar to arguing that an image of a white unicorn is more accurate than an image of a pink unicorn! The exception, ironically, is binaural recordings (reproduced on headphones), which ARE spatially accurate, although only relative to a certain generic HRTF." and "HRTFs, etc, replicate all that spatial information being reduced down to the two datum points of your ear drums and therefore headphones with the correct HRTFs, etc., should be "better at localisation" than even the latest surround format with multiple speakers."

My next response to you (post #38): "In fact, this is entirely possible, this calculation is called a HRTF (Head Related Transfer Function). This is inherently BETTER at localisation than speakers, because ..."

And the next (post #47): "Please provide some supporting evidence that most people who've heard a binaural recording (with a compatible HRTF) have not been able to localise "anywhere"".

And the next (post #51): "I absolutely did not say "never mind binaural or not" and I've clearly been stating ears (which include pinna) and HRTFs, which also includes pinna."

And even my very first post in this thread (#2): "This OBVIOUSLY doesn't prove that for an individual listener in their own home, speakers are always better than headphones with a binaural recording suited to their HRTF. " ... And: "Given a good binaural recording suited to an individual's HRTF, headphones can indeed sound better than a 5.1 speaker setup..."

It's hard to imagine how your assertion (of "absolutely no disclaimer about binaural recording") could be more FALSE!!

4. But I'm NOT "back-pedalling", I'm doing the exact opposite!
4a. And "last time I checked most all recordings are not" Dolby Atmos, didn't seem to stop you from arguing about it though! However, the point (which I've also made clear) is that headphones can be better at localisation than speakers and there are enough binaural recordings (some of which are aimed at mass markets) to make this assertion realisable in practice (for some people) and not just a theoretical possibility.

5. And again, another FALSE quote attributed to me! I claimed there were TWO points and TWO sensory organs; don't you know what the word "binaural" means? Therefore:
5b. How do you think humans decode localisation information? Do you think maybe our ears talk to each other and work it out themselves, without the involvement of the brain?
5c. You should indeed be "sorry" and probably sue for your class fees to be returned if your anatomy/physiology prof taught you that the auditory nerves are connected to each other rather than to the brain. You would have been far better served by spending a few minutes on Wikipedia:

"The sound localization mechanisms of the mammalian auditory system have been extensively studied. The auditory system uses several cues for sound source localization, including time- and level-differences (or intensity-difference) between both ears, spectral information, timing analysis, correlation analysis, and pattern matching. " ... "The brain utilizes subtle differences in intensity, spectral, and timing cues to allow us to localize sound sources."

6. And sad that I should have to!!

G
 
Apr 9, 2020 at 6:06 AM Post #73 of 184
I think there is a fundamental misunderstanding here. A headphone matched to your HRTF and without phase lag/lead artefacts will not project a soundstage of its own. It'll take on the soundstage of the recorded content. If the content carries a map of the space in a form your brain can decipher, you'll be able to perceive it. In my opinion that is what I'd call "accuracy": I'd like to be placed at the position of the mic.

Comparing spatial accuracy with stereo recordings is futile, not because you can't perceive space but because there are multiple ways of creating a sense of space, and only one of them can be accurate to the recorded ambience.

Pinna interaction is a necessity, but headphones do interact with the pinna. Alternatively, for IEMs, you can have the binaural mic placed inside the ear and compensate for pinna effects within the recording.

On the topic of localization precision: we have excellent recognition of left and right, including distance to some extent. Front-back discrimination is aided by both vision and the asymmetric nature of the outer ear, which reflects sound differently from different directions. Our top-down (vertical) spatial resolution is very poor, and we relate a lot of the vertical space to tonality. Try listening to a thunder recording on headphones: your mind will automatically image it above your head. Part of sonic imaging is inherently tied to vision and prior correlated experience.
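To make the "manufactured binaural" idea from the earlier posts concrete, here is a minimal Python sketch of binaural synthesis by HRIR convolution. The two impulse responses below are crude stand-ins (a bare inter-aural time and level difference); in practice you would load a measured HRIR pair for the desired direction from a dummy-head or individual measurement set.

import numpy as np
from scipy.signal import fftconvolve

fs = 44100

# stand-in HRIRs for a source off to the left (placeholders, not measurements):
# left ear hears it earlier and louder, right ear ~0.68 ms later and attenuated
hrir_left = np.zeros(256);  hrir_left[10] = 1.0
hrir_right = np.zeros(256); hrir_right[40] = 0.6

# a dry mono source
t = np.arange(fs) / fs
mono = 0.3 * np.sin(2 * np.pi * 440 * t)

# binaural synthesis: convolve the dry source with each ear's HRIR,
# then play the two channels over headphones
left = fftconvolve(mono, hrir_left)
right = fftconvolve(mono, hrir_right)
binaural = np.stack([left, right], axis=1)

A real HRIR additionally carries the pinna, head-shadow and torso filtering discussed above, which is what provides front/back and elevation cues rather than just left/right.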
 
Apr 9, 2020 at 6:13 AM Post #74 of 184
I'm still not convinced that the inter sample deviations fall within the assumptions. I would like to be pointed in the right direction with documents on that but it doesn't make sense to me intuitively.

I already gave you an example where you can easily create and maintain an "inter sample deviation" for yourself. I also posted a video which actually demonstrates the timing "inter sample deviations" at the output of a DAC with an oscilloscope. In addition, if you have access to iZotope RX, a suite of audio tools used almost ubiquitously in the music/sound recording industry, the Azimuth adjustment tool allows you to shift audio channels by increments of 0.1 of a sample. And lastly, if "inter sample deviations" were not possible, then the Nyquist/Shannon Theorem would be wrong!

Careful about looking for documents though. Not only do audiophile documents routinely repeat the fallacy that timing resolution is limited to the sample period, but a few scientists and scientific papers do too, notably Kunchur and Bob Stuart. It's a mistake that was at one time made by many (I seem to recall even Ken Pohlmann did at one point), but it's a now-debunked myth. Hydrogenaudio has covered this quite a few times.

G
 
Apr 9, 2020 at 6:53 AM Post #75 of 184
I already gave you an example where you can easily create and maintain an "inter sample deviation" for yourself. I also posted a video which actually demonstrates the timing "inter sample deviations" at the output of a DAC with an oscilloscope. In addition, if you have access to iZotope RX, a suite of audio tools used almost ubiquitously in the music/sound recording industry, the Azimuth adjustment tool allows you to shift audio channels by increments of 0.1 of a sample. And lastly, if "inter sample deviations" were not possible, then the Nyquist/Shannon Theorem would be wrong!

Careful about looking for documents though. Not only do audiophile documents routinely repeat the fallacy that timing resolution is limited to the sample period, but a few scientists and scientific papers do too, notably Kunchur and Bob Stuart. It's a mistake that was at one time made by many (I seem to recall even Ken Pohlmann did at one point), but it's a now-debunked myth. Hydrogenaudio has covered this quite a few times.

G

I went through that video long ago and unfortunately it didn't answer my specific question. Regarding the visualizations using instruments, they can be deceptive. I am looking for the pure math that deals with this, like the link I posted above.

I'm not worried about high-frequency components falling at points between the samples; I'm worried about preservation of the timing delta between the channels. A simple 10us delta all the time won't work, since the timing delta varies with the position of the source.

And I don't see phase plots in any of his visualizations, and I didn't see signals that can have phase deviations between different frequency components. An FFT, as I understand it, has both an amplitude and a phase component. With certain signals the phase might be uniform; with others it may not be. So I am seeing a lot of missing coverage in the analysis in that video.
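For what it's worth, here is a minimal Python sketch of the kind of inter-channel timing/phase check being discussed: one channel is delayed by 10us (a fraction of the 22.7us sample period at 44.1 kHz) and the delay is then read back from the cross-spectrum phase slope. The test tones and the circular (FFT-domain) fractional delay are assumptions made purely for illustration.

import numpy as np

fs = 44100
n = 4096
t = np.arange(n) / fs
# band-limited test signal: a few tones well below Nyquist
x = sum(np.sin(2 * np.pi * f * t) for f in (997.0, 3001.0, 7919.0))

# delay the "other channel" by 10us via a linear phase ramp in the frequency domain
tau = 10e-6
freqs = np.fft.rfftfreq(n, 1 / fs)
X = np.fft.rfft(x)
y = np.fft.irfft(X * np.exp(-2j * np.pi * freqs * tau), n)

# the inter-channel delta is encoded in the cross-spectrum phase slope
cross = np.fft.rfft(y) * np.conj(X)
band = (freqs > 500) & (freqs < 10000)
slope = np.polyfit(freqs[band], np.unwrap(np.angle(cross[band])), 1)[0]
print(-slope / (2 * np.pi) * 1e6)   # ~10.0 microseconds, recovered from 44.1k samples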

I would make the effort to code the same in MATLAB once I am clear on the thesis part. I'm more of a math/non-linearities guy who loves to look at the derivation in full, with all the bounding criteria.

Regarding papers, I think we are mere humans and everyone can make a mistake. But being published as a paper gives better confidence and trust. Nor would I trust Hydrogenaudio unless it covers every scenario I ask for, or unless they also published their inferences as a paper.

I'll do my homework to derive the same thing before coming to a conclusion. I do it for everything; I don't like accepting something just because someone said so. In the end I'll have a much better understanding and conclusion.
 
