johncarm
I read the first pages of Gregorio's 24-bit/16-bit digital "myth exploded" thread. The first few pages were all theory: there should be no audible difference between 16 bits and 24 bits, assuming the equipment works in an ideal manner.
But does the fact that no audio device is perfect affect the ideal sample rates and bit depths?
Let me clarify what I mean by "no audio device is perfect."
I DO NOT mean the fact that digital signals are band-limited and require anti-aliasing filters. Nor do I consider the need for dither to be a malfunction.
One "imperfection" could be jitter. Also, maybe nonlinearities in the analog portion of ADCs and DACs. Also, maybe nonlinearities in ADC conversion if any such errors could have a pervasive effect on distortion in the output signal.
So basically the question is: does the existence of real-world problems such as these require higher sample rates or bit depths?
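To put rough numbers on it, here is a back-of-the-envelope sketch in Python. It compares the quantization noise floor of an ideally dithered 16- and 24-bit channel against the SNR ceiling that RMS clock jitter imposes on a full-scale sine; the jitter figures are illustrative guesses, not measurements of any real device.

```python
import math

# Quantization floor of an ideally dithered N-bit channel:
# SNR ~ 6.02 * N + 1.76 dB (full-scale sine vs. quantization noise).
for bits in (16, 24):
    print(f"{bits}-bit: noise floor ~ -{6.02 * bits + 1.76:.1f} dBFS")

# Jitter imposes its own ceiling. For a full-scale sine at frequency f
# and RMS clock jitter t_j:  SNR ~ -20 * log10(2 * pi * f * t_j).
f = 20_000                      # worst-case audio frequency, Hz
for t_j in (1e-9, 100e-12):     # 1 ns and 100 ps RMS (illustrative values)
    snr = -20 * math.log10(2 * math.pi * f * t_j)
    print(f"jitter {t_j * 1e12:.0f} ps -> SNR limit ~ {snr:.0f} dB")
```

On those numbers, 100 ps of jitter lands right around the 16-bit floor (~98 dB), so it is at least conceivable that converter imperfections, rather than bit depth itself, set the practical limit. That is the kind of thing I am asking about.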
I also want to ask a question about distortion conceived of "in time."
I usually see distortion described by its level relative to 0 dBFS, and in digital it is obviously pretty tiny. But that describes only the amplitude of the distortion.
I'm curious if it could also be characterized as a time distortion.
Let me explain where I'm coming from with this. I did some sound synthesis via software in college, and I wanted to make synthesized instruments "sound real." Some kinds of tiny imperfections in time, like randomly varying the phase of a waveform by amounts that were not consciously perceptible, increased the realism. This was more obvious when the waveform had high-frequency transients, i.e. spikes.
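I no longer have that code, but the idea was roughly the following (a Python/NumPy sketch; the wobble depth, harmonic count, and fundamental are arbitrary illustrative choices, not what I used back then):

```python
import numpy as np

sr = 44_100
t = np.arange(sr) / sr            # one second of samples
f0 = 220.0                        # fundamental, Hz
rng = np.random.default_rng(0)

def tone(phase_wobble, n_harmonics=15):
    """Additive tone; each harmonic's phase drifts slowly and randomly
    within +/- phase_wobble radians over the course of the note."""
    out = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        ctrl = rng.uniform(-phase_wobble, phase_wobble, size=8)  # control points
        phase = np.interp(np.linspace(0.0, 7.0, t.size), np.arange(8), ctrl)
        out += (1.0 / k) * np.sin(2 * np.pi * k * f0 * t + phase)
    return out / n_harmonics

static_tone = tone(0.0)       # perfectly periodic; tends to sound "synthetic"
wobbly_tone = tone(0.05)      # tiny, sub-perceptual phase drift per harmonic
```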
A "spike" in a waveform is defined not only by its amplitude and frequency content, but also by the moment in time that it occurs. It's an event, so to speak.
It was interesting: maybe my ear was hearing the relative position of spikes at a high level of precision. We would need some psychoacoustic experimentation to find out.
There is another kind of sound synthesis which involves "events," namely granular synthesis. You start by defining a short sound, and then create a sustained sound by overlapping many instances of the short sound. The sustained sound's characteristics can be modified by choosing the relationship in time among the "grains," as the instances are called.
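A minimal sketch of the technique (again Python/NumPy; grain length, hop, and scatter are arbitrary values I picked for illustration):

```python
import numpy as np

sr = 44_100
rng = np.random.default_rng(1)

# The grain: a 30 ms Hann-windowed burst of a 440 Hz sine.
g_len = int(0.030 * sr)
g_t = np.arange(g_len) / sr
grain = np.hanning(g_len) * np.sin(2 * np.pi * 440.0 * g_t)

# A one-second sustained texture from many overlapping copies of it.
out = np.zeros(sr)
hop = int(0.010 * sr)             # nominal 10 ms between grain onsets
scatter = int(0.003 * sr)         # up to +/- 3 ms of timing scatter
for onset in range(0, sr - g_len, hop):
    j = onset + int(rng.integers(-scatter, scatter + 1))
    j = max(0, min(j, sr - g_len))
    out[j:j + g_len] += grain

out /= np.max(np.abs(out))        # normalize to full scale
```

The point is that changing only the scatter, with the grain itself untouched, audibly changes the texture: relative timing acting as a musical parameter.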
I'm picturing impulses travelling through the brain's neurons, and a pattern in the nervous system is established by the relative timing of these impulses.
It seems to me that digital distortion arising from sources such as anti-aliasing filters alters the shape of a transient, which could then affect the timing of the neural signal it triggers.
In that case, understanding the effects of distortion would be a matter of understanding the ear's response not to amplitude, but rather to relative timing of many transients.
So my question is: what work has been done on this in psychoacoustics, and is there any analysis or experiment demonstrating that anti-aliasing filters don't disrupt the brain's perception of transient timing?
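I don't know the psychoacoustics literature here, but the linear-systems half of the question is easy to poke at in a sketch (Python/SciPy; the filter length and cutoff are arbitrary stand-ins for a real converter's filter):

```python
import numpy as np
from scipy.signal import firwin, lfilter

sr = 44_100
taps = 101
# A linear-phase FIR lowpass, as a stand-in for an anti-aliasing filter.
h = firwin(taps, cutoff=20_000, fs=sr)

# Two unit impulses ("transients") a known distance apart.
x = np.zeros(2048)
x[500] = 1.0
x[900] = 1.0
y = lfilter(h, 1.0, x)

# Each impulse is smeared into sinc-like ringing, but a linear-phase
# filter delays every frequency by the same (taps - 1) / 2 samples,
# so the two peaks should stay exactly 400 samples apart.
peaks = sorted(np.argsort(np.abs(y))[-2:])
print(peaks)    # expect [550, 950]: both shifted by 50, spacing preserved
```

At least for an idealized linear-phase filter, the relative timing of the transients survives exactly; each one is smeared in shape, though, and whether that smearing matters perceptually is exactly the part that would need psychoacoustic experiments.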