24bit vs 16bit, the myth exploded!
Aug 16, 2021 at 10:08 PM Post #6,301 of 7,175
I think we shouldn't forget what Castle brought up: the actual signal coming from the mic. It doesn't matter how perfectly modeled your reproduced sine wave is; it starts with your input, and your input is limited by SNR. In most situations, it's the analog noise floor and the maximum signal/saturation point that define a dynamic range below the capabilities of current digital formats. And when it comes to arguments about vinyl having frequency content beyond 20 kHz... you might be able to measure such frequencies from a TT, but why assume they come from the original source and not from random fluctuations?
 
Aug 16, 2021 at 10:11 PM Post #6,302 of 7,175
Your interpretation seems a bit nonsensical to me, to put it bluntly. If there is an audible difference, I’m quite certain it would be obvious and measurable if the examples could be provided for independent analysis.
 
Aug 16, 2021 at 10:29 PM Post #6,303 of 7,175
The key idea here is that the audible frequency content and the frequency content building the waveform are two completely different things, and looking for a correspondence between them doesn't really make sense.
It does make sense, they are exactly the same, except that the inaudible frequency content can be left out.
You do realise by the way that as part of the AD conversion the analog input signal is first low pass filtered to avoid aliasing?
What later is reconstructed is of course the waveform as it was after the low pass filtering. But the only difference between that and the original analog waveform is composed of inaudible frequencies.
(And using a shallow analog filter and oversampling and digital filtering on the AD side can mitigate problems caused by steep analog filtering before you start about that.)

Your argument makes sense, but I don't think that it is sufficient to explain the extremely poor sonic result achieved with 8 kHz sampling.
Yes it does fully explain it. [Edit: My apologies, mistakes in how the test was performed could have contributed to even further degradation of the result. But still, it is fully explainable by the established sound science.]
If you were right, the result would sound deprived of clarity rather than totally distorted.
I know from personal experience that lack of overtones can give a strong subjective impression of distortion, probably because our brain senses something is wrong and interprets it that way. Besides, I don't recall @Vamp898 using the word distorted.
My point is that conflating the frequency input of the waveform and the frequency content of the waveform structure is a mistake, because there is a mathematical function between them.
It is not a mistake. The total waveform is the summation of the sine waves. Only the audible sinewaves matter.
My interpretation of low rate sampling is that it produces a global waveform distortion.
Of course the waveform without the inaudible frequency content looks different from the waveform with inaudible frequency content. But the difference is not audible. Because the difference is the inaudible frequency content.
Basically, we want to capture perfectly every audible frequency in order to be able to reproduce them. These frequencies are sinewaves and we have to make sure that they are reproduced as perfect sinewaves and not as distorted sinewaves, because the difference is audible.
The bandlimited version of the analog input signal (after low pass filtering) can be perfectly reconstructed from the sampled points. That is what the proven sampling theorem states. We sure don't need frequencies above Fs/2 for that.
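As a minimal illustration of that claim (my own numpy/scipy sketch, not anything from the tests in this thread): a signal whose content lies entirely below Fs/2 can be interpolated back onto an arbitrarily fine time grid from its samples with essentially zero error. The sample rate, length and frequencies below are assumptions chosen so that the tones land on exact FFT bins, which makes the FFT-based resampler mathematically exact for this periodic case.

```python
# Hedged sketch: sample a bandlimited signal below Fs/2, reconstruct it on an
# 8x finer grid with FFT-based interpolation, and compare to the known original.
import numpy as np
from scipy.signal import resample

fs, N = 8000, 256                        # assumed sample rate / length for the demo
n = np.arange(N)
f1, f2 = 437.5, 2500.0                   # exact FFT bins, both below the 4 kHz Nyquist limit
x = 0.5 * np.sin(2 * np.pi * f1 * n / fs) + 0.3 * np.sin(2 * np.pi * f2 * n / fs)

up = 8
x_up = resample(x, N * up)               # FFT-based (zero-padded spectrum) interpolation
t_up = np.arange(N * up) / (fs * up)     # times of the interpolated points
x_true = 0.5 * np.sin(2 * np.pi * f1 * t_up) + 0.3 * np.sin(2 * np.pi * f2 * t_up)

print("max reconstruction error:", np.max(np.abs(x_up - x_true)))   # at floating-point rounding level
```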
the sinewaves produced by the decoding job in the inner ear will not be perfect
The inner ear doesn't produce sinewaves. The hairs in the inner ear just react to the frequencies they are tuned to. By the way: an "imperfect sinewave" is of course always a summation of multiple perfect sinewaves.
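To make the "summation of perfect sinewaves" point concrete, here is a small illustrative sketch (mine, with assumed values): the FFT of a square wave, which looks nothing like a sine, contains only odd harmonics at the amplitudes Fourier analysis predicts.

```python
import numpy as np

fs, N, f0 = 48000, 4800, 100             # assumed values; 10 full cycles fit the window exactly
n = np.arange(N)
square = np.sign(np.sin(2 * np.pi * f0 * n / fs))     # a very "imperfect" sine

spec = np.abs(np.fft.rfft(square)) / (N / 2)          # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(N, 1 / fs)
peaks = [(f, a) for f, a in zip(freqs, spec) if a > 1e-3]
for f, a in peaks[:5]:                                # only odd harmonics show up
    print(f"{f:6.0f} Hz  amplitude {a:.3f}   (Fourier series: 4/(pi*k) = {4 / (np.pi * f / f0):.3f})")
```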
The recording experiment made by Vamp898 (see above) should be considered as a proof for what I say.
No. It can be fully explained by the established science of audio. It does not prove that your wild, speculative, unsubstantiated, and totally illogical theories disprove the established science.
 
Last edited:
Aug 17, 2021 at 5:45 AM Post #6,304 of 7,175
Hey guys, why argue? Measure

I used リライト from Asian Kung Fu Generation (2016 version), 96 kHz / 24 bit. I changed the sample rate to 44.1 kHz, changed it back to 96 kHz, and let Audacity analyze the difference.

Here is the result

[attached image: 1629193300012.png (difference spectrum)]


The biggest peak/difference is right after 20 kHz, but as we can all see, there is a difference at all other frequencies too.

There is a difference, Audacity says so. It exists. There is nothing to argue about whether it exists or not; it's there and it's measurable.

You can argue about whether it matters, but not about whether it's there.
 
Aug 17, 2021 at 5:55 AM Post #6,305 of 7,175
You are forgetting that even the instruments that produce these low frequencies have harmonics and overtones, and these go well beyond 4 kHz.
Especially the initial attacks of notes may contain lots of high frequencies. These frequencies probably die out fast, but are important in the sonic character.

Also, if you play an 8 kHz sample rate file, it probably needs to be resampled in real time to a higher sample rate supported by the system you are using, and this sample rate conversion can be crude and can sound ugly. Here is an example of how much sample rate conversion quality can matter: I generated a 2.5 kHz sine at an 8 kHz sample rate (top of the picture). Then I resampled it to 44.1 kHz using the lowest (LQ) and highest (HQ) quality options. The highest quality mode gives a really good result, a perfect sine, while the lowest quality mode struggles a lot.

[attached image: resamplequality.png (2.5 kHz sine at 8 kHz, then resampled with LQ and HQ settings)]
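For anyone who wants to poke at the same idea without Audacity, here is a hedged sketch using scipy: the same 2.5 kHz sine at 8 kHz, brought to 44.1 kHz once with a filtered polyphase resampler and once with naive linear interpolation standing in for a "low quality" converter. The residual measure and trim length are my own choices, and scipy's default filter is decent rather than mastering-grade, so treat the printed numbers as indicative only.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out, f0 = 8000, 44100, 2500
n = np.arange(fs_in)                                  # 1 second of signal at 8 kHz
x = np.sin(2 * np.pi * f0 * n / fs_in)

hq = resample_poly(x, 441, 80)                        # 8 kHz -> 44.1 kHz, filtered polyphase
t_out = np.arange(len(hq)) / fs_out
lq = np.interp(t_out, n / fs_in, x)                   # naive linear interpolation

def residual_db(y, t):
    # everything that is NOT the 2.5 kHz fundamental, in dB relative to the fundamental
    s, c = np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)
    fund = (y @ s) / (s @ s) * s + (y @ c) / (c @ c) * c    # least-squares fit of the sine
    return 20 * np.log10(np.std(y - fund) / np.std(fund))

trim = slice(2000, -2000)                             # skip edge/filter transients
print("HQ residual:", round(residual_db(hq[trim], t_out[trim]), 1), "dB")
print("LQ residual:", round(residual_db(lq[trim], t_out[trim]), 1), "dB")
```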
 
Aug 17, 2021 at 6:32 AM Post #6,306 of 7,175
Hey guys, why argue? Measure

I used リライト from Asian Kung Fu Generation (2016 version), 96 kHz / 24 bit. I changed the sample rate to 44.1 kHz, changed it back to 96 kHz, and let Audacity analyze the difference.

Here is the result

[attached image: 1629193300012.png (difference spectrum)]

The biggest peak/difference is right after 20 kHz, but as we can all see, there is a difference at all other frequencies too.

There is a difference, Audacity says so. It exists. There is nothing to argue about whether it exists or not; it's there and it's measurable.

You can argue about whether it matters, but not about whether it's there.
What quality settings did you use? I did a similar test with white noise: I generated white noise at a 96 kHz sample rate (24 bit). Then I downsampled it to 44.1 kHz (24 bit) using the best quality setting, but without dither. Then I upsampled it back to 96 kHz. I got this for the difference signal spectrum:

[attached image: test96.png (difference spectrum, white noise round trip)]


I won't lose my sleep over that -140 dB difference below 15 kHz. It is most probably just quantization noise.
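A quick back-of-the-envelope check of why a floor around -140 dB smells like quantization noise (my own arithmetic, not part of the measurement above): an ideal B-bit quantizer has an SNR of roughly 6.02*B + 1.76 dB for a full-scale sine, and one LSB at 24 bits sits near -138 dBFS.

```python
import math

bits = 24
# Ideal quantization SNR for a full-scale sine: 6.02*B + 1.76 dB (about 146 dB at 24 bit)
print(f"ideal {bits}-bit SNR: {6.02 * bits + 1.76:.1f} dB")
# One least significant bit relative to full scale (about -138 dBFS at 24 bit)
print(f"one LSB: {20 * math.log10(2 ** -(bits - 1)):.1f} dBFS")
```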
 
Aug 17, 2021 at 7:01 AM Post #6,307 of 7,175
@audiokangaroo I see the point you are trying to make, and I also think there may be something there.
Such exploration is true science; whether it proves fruitless or genius is for the future, but the endeavour is science.
Unfortunately this thread, in spite of its name, has little to do with science; it would be better named "sound engineering".
You will not find any help in your quest here; I see the personal attacks are already beginning.
I wish you all the best in your search for the truth.
Is it true science to question evidence of something very solid, well accepted, and oppose it with no actual evidence that it might be wrong? All because someone happened to fill big gaps in his knowledge with intuition and ego? I don’t think that is the role of science at all. We didn’t burn all the books and start all research from scratch after some fool decided that the planet was flat.

Of course it's nothing new: anybody trying to make sense of digital audio without learning a good deal about it will end up with some varying amount of nonsense, and will have some typical intuition that it is super flawed in some way for some reason. But not every physical phenomenon works in an intuitive way! Why so many audiophiles think otherwise is beyond me.
There are things we need to learn just so our brain can start thinking within that frame of reference, and that will allow us to understand and learn some more. And at some point, maybe someone will get a brilliant idea about how reality works.
One cannot just make stuff up from nothing and magically grasp sampling theory, how waves behave, human hearing and psychoacoustics. Kanga badly jumped to conclusions about what was audible, then made up his own idea of why (kind of). He now relies on false axioms to spam more deduced falsehoods, and openly questions well-established, well-demonstrated, well-tested knowledge. All because he doesn't know much of anything on the topic (he probably learned more about waves in the last few days than in his entire life before that). And at no point in that already too long spam of blatant nonsense has he given a hint of being able to admit when he's wrong.
You seeing science within his posts is a little scary.

But you do have a point, this forum isn't a research lab, or even a place where scientists meet. There is no rule to stop someone from posting utter nonsense, clearly. So of course a bunch of posts (no matter the position supported on a topic) will be gluten and fact free. The sheer number of claims posted per page without supporting evidence tells you that, while science oriented, little of what's in it is science.
That doesn't mean that completely made-up nonsense has the same value as fact-based knowledge, or that ignorant guesses are science.
 
Aug 17, 2021 at 7:52 AM Post #6,308 of 7,175
What quality settings did you use? I did a similar test with white noise: I generated white noise at a 96 kHz sample rate (24 bit). Then I downsampled it to 44.1 kHz (24 bit) using the best quality setting, but without dither. Then I upsampled it back to 96 kHz. I got this for the difference signal spectrum:

[attached image: test96.png (difference spectrum, white noise round trip)]

I won't lose my sleep over that -140 dB difference below 15 kHz. It is most probably just quantization noise.
Congratulations. By testing something different, you got different results.

Of course the sampling works much better with white noise or even a plain sine wave.

That's why I used a complex song with erratic patterns. The more complex and erratic the signal is, the more prone it is to errors.

So yes, it works better with white noise. But I rarely listen to white noise; I listen to music.
 
Aug 17, 2021 at 8:28 AM Post #6,309 of 7,175
Congratulations. By testing something different, you got different results.

Of course the sampling works much better with white noise or even a plain sine wave.

That's why I used a complex song with erratic patterns. The more complex and erratic the signal is, the more prone it is to errors.

So yes, it works better with white noise. But I rarely listen to white noise; I listen to music.
Noise is a maximally complex signal, and with white noise the spectrum is flat, making it easy to see how the resampling behaves at different frequencies.
The result of your test is so bad it must be due to some lousy algorithm used in the resampling.
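To illustrate how much the resampling algorithm itself can matter, here is a hedged sketch of the same 96 kHz -> 44.1 kHz -> 96 kHz round trip on white noise using scipy.signal.resample_poly, once with its default (fairly short) Kaiser anti-alias filter and once with a much more aggressive Kaiser window. The window choices and the measurement band are my own assumptions, and the absolute numbers will differ from whatever converter Audacity uses; the point is how far the in-band difference floor moves with filter quality alone.

```python
import numpy as np
from scipy.signal import resample_poly, welch

rng = np.random.default_rng(0)
fs = 96000
x = rng.uniform(-0.5, 0.5, 2 * fs)                    # 2 s of white noise at 96 kHz

def round_trip(sig, beta):
    # 96 kHz -> 44.1 kHz -> 96 kHz (44100/96000 reduces to 147/320)
    down = resample_poly(sig, 147, 320, window=('kaiser', beta))
    return resample_poly(down, 320, 147, window=('kaiser', beta))[:len(sig)]

for beta in (5.0, 14.0):                              # 5.0 is scipy's default Kaiser beta
    diff = round_trip(x, beta) - x
    f, p_x = welch(x, fs, nperseg=8192)
    _, p_d = welch(diff, fs, nperseg=8192)
    band = (f > 500) & (f < 8000)                     # well inside both filters' passbands
    floor = 10 * np.log10(np.mean(p_d[band]) / np.mean(p_x[band]))
    # expect the sharper filter to push the floor down by many tens of dB
    print(f"kaiser beta={beta:4.1f}: in-band difference floor ~ {floor:6.1f} dB")
```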
 
Aug 17, 2021 at 8:45 AM Post #6,310 of 7,175
What do you think I didn't understand about the way a mic works?
This: "That's true, and then the signal received by the microphone and which is converted into a voltage isn't limited either ( inside the frequency response limits of the microphone, but we have to consider this response at a very low amplitude level, not only around -3dB, which is not relevant for our problem )."

First of all, the acoustic signal (sound pressure wave) is limited. It's limited by the laws of physics, the loss of amplitude over distance (even more so the higher the frequency), the noise floor of the environment (Brownian motion etc.), and that's before we even get to the microphone. A microphone capsule has mass and is therefore subject to the laws of motion of objects with mass: inertia, friction, etc. ALL microphones therefore MUST have a limit to the sound wave frequency to which they can respond. The response of many music recording microphones rolls off significantly above around 14 kHz, most others at around 20 kHz, and only a handful or so go up to or beyond 40 kHz. The analogue signal output by a mic is not band limited, HOWEVER, much beyond the mic's stated response that frequency content is just noise/distortion generated by the mic's internal electronics and is unrelated to the acoustic signal hitting the mic's capsule.

So OF COURSE this is "relevant for our problem"! If there are no recordable frequencies above say 20kHz (besides unwanted noise/distortion), then the questions of recording, reproducing or being able to hear above 20kHz are all moot to start with.
I'm open to learn new things if you have something to explain.
Clearly that's not true, you've had a number of things explained to you but you have NOT learned from any of them and you just keep repeating the same self-contradictory nonsense.
The waveform is the result of a process. I don't want to be overconfident, but this process seems to have a combinatory dimension and a random dimension.
Again, you really haven't thought through your (false) explanation. If sine wave production were random as you've stated, then we wouldn't need musicians; you could put a violin on a stage and it would randomly start playing itself. Better still, the sine waves it was randomly producing would mean that instead of a violin, it could sound like a piano, the spice girls, a deer being hit by a truck or all three at the same time! How much confidence do you have in your explanation now? Wouldn't any amount of confidence be "overconfident"?
I don't think that explaining the production of the waveform with a mathematical function is a wrong idea.
The production of all natural waveforms is obviously a mechanical process; the plucking or hitting of a string or some other material or the mechanical movement of air by the lungs through the voice box or a wind instrument. However, we can of course manufacture artificial waveforms, in which case it can be an electronic process, such as a signal generator or synthesizer and the only time it's a mathematical process is for certain artificial sounds generated purely in the digital domain. What maths is good for, is expressing the properties of sound waves and of course, sound waves and all the electronics used to record and reproduce them are governed by physical laws, which are again expressed with mathematics.
You have the right to compare my theory with science fiction, but it would be more useful if you could explain precisely what is wrong among the things I tried to explain.
Again, how is it "more useful" if you just completely ignore it and continue on regardless?
I can give you again the example of multiplication, which is a rather simple mathematical function. If you take as input the integer numbers from 0 to 9, you can see that the output range goes from 0 to 81. The output range is wider than the input range. I don't see any reason it should be any different with the building of a waveform from elementary audio frequencies.
Exactly, it's been explained to you but you have NOT "been open to learn new things from it" and you just repeat the same nonsense regardless! It has been explained to you that sound waves lose high/ultrasonic frequencies over distance/time; high/ultrasonic frequencies are not added and they certainly are not multiplied. We also lose high/ultrasonic freqs with analogue signal recording/reproduction, except for the added unwanted electronic distortion and (Johnson) noise. Therefore, you could hardly have picked a worse analogy than simple multiplication; you'd have been far better using subtraction or division as an analogy, but of course that would have contradicted your false explanations. However, I can't see why we need any sort of analogy, when we already have the exact, proven mathematical function/s!!
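A tiny illustrative aside on that point (my own sketch, with assumed tone frequencies): summing two sine waves, which is what linear superposition of sounds does, creates no new frequencies at all; it is multiplication (a nonlinearity) that creates sum and difference products.

```python
import numpy as np

fs, N = 48000, 48000                     # 1 second, so FFT bins fall on whole hertz
n = np.arange(N)
a = np.sin(2 * np.pi * 1000 * n / fs)
b = np.sin(2 * np.pi * 1500 * n / fs)

def peaks(x):
    spec = np.abs(np.fft.rfft(x)) / (N / 2)            # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(N, 1 / fs)
    return [int(f) for f, m in zip(freqs, spec) if m > 0.01]

print("sum     ->", peaks(a + b))        # [1000, 1500]: nothing new is created
print("product ->", peaks(a * b))        # [500, 2500]: difference and sum frequencies
```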
@audiokangaroo I see the point you are trying to make, and I also think there may be something there.
Such exploration is true science; whether it proves fruitless or genius is for the future, but the endeavour is science.
What "exploration" or "endeavour"? Just making up nonsense explanations that contradicts actual science, without a shred of reliable supporting evidence, is not ANY sort of science, let alone "true science". In fact, it's pretty much the opposite of true science!

In my day, they taught the basics of what science is and the scientific method to all children in middle school; when did they stop teaching it?

G
 
Aug 17, 2021 at 8:46 AM Post #6,311 of 7,175
Noise is a maximally complex signal, and with white noise the spectrum is flat, making it easy to see how the resampling behaves at different frequencies.
The result of your test is so bad it must be due to some lousy algorithm used in the resampling.
I clicked on "change sample rate" in audacity and changed it there + back.

You used the exact same method so we're using the same lousy algorithm which should have caused the exact same picture.

Obviously it doesn't. I checked my Audacity settings, it says "Best quality (slowest)"

I assume you used the same.

If noise were maximally complex, why is it so much easier to remove noise than to remove vocals or instruments?

Because noise is static and everything else is (more or less) erratic.

The more erratic it is, the more the algorithms struggle.
 
Aug 17, 2021 at 8:52 AM Post #6,312 of 7,175
white noise
Of course, since white noise is more complex than music due to its equal spectral content from DC to Fs/2. Also, it has a mean of 0 and (ideally) infinite variance. The PDF of a white noise function will give you equal probability with infinite dispersion (standard deviation). Music is usually correlated, and many times there is collinearity between signals as well. White noise is not collinear. There is no signal more complex, or harder to describe in terms of pure probability and a Fourier decomposition, than white noise.
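A small sketch of the correlation point (mine; since an actual music track can't be included here, a sustained chord of sines stands in for "correlated" material): white noise shows essentially no sample-to-sample correlation and a comparatively flat spectrum, while the tonal stand-in is strongly self-correlated and concentrates its energy in a few bins.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, N = 44100, 1 << 16
t = np.arange(N) / fs
noise = rng.standard_normal(N)
chord = sum(np.sin(2 * np.pi * f * t) for f in (220.0, 277.18, 329.63))   # A major triad as a stand-in

def lag1_corr(x):
    # correlation between each sample and the next one
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def spectral_flatness(x):
    # geometric mean / arithmetic mean of the power spectrum (1.0 would be perfectly flat)
    p = np.abs(np.fft.rfft(x)) ** 2 + 1e-20
    return np.exp(np.mean(np.log(p))) / np.mean(p)

for name, sig in (("white noise", noise), ("tonal stand-in", chord)):
    print(f"{name:15s} lag-1 correlation {lag1_corr(sig):+.3f}   flatness {spectral_flatness(sig):.2e}")
```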
 
Aug 17, 2021 at 8:56 AM Post #6,313 of 7,175
This is what I get when I try the 96 kHz => 44.1 kHz => 96 kHz test with music (chipmunked from 44.1 kHz to 96 kHz), with triangle dither used in the resampling:

[attached image: test96music.png (difference spectrum, music round trip)]
 
Aug 17, 2021 at 9:11 AM Post #6,314 of 7,175
I clicked on "change sample rate" in audacity and changed it there + back.

You used the exact same method so we're using the same lousy algorithm which should have caused the exact same picture.

Obviously it doesn't. I checked my Audacity settings, it says "Best quality (slowest)"

I assume you used the same.

If noise were maximally complex, why is it so much easier to remove noise than to remove vocals or instruments?

Because noise is static and everything else is (more or less) erratic.

The more erratic it is, the more the algorithms struggle.
I use Tracks / Resample. Clearly the sampling theorem works for me and has always worked as it should. For you it doesn't work, and I don't know why that is; something weird is happening in your system. Are you sure the quality setting is for "high quality conversion" and not for "real time conversion"? My Audacity has a setting for each. Also, I make sure the result of each conversion is 24 bit.

Vocals are much louder than noise, and I also think removing noise sounds better to human ears. Technically, noise is more complex: theoretically infinitely complex, making it totally random.
 
Aug 17, 2021 at 9:15 AM Post #6,315 of 7,175
If noise were maximally complex, why is it so much easier to remove noise than to remove vocals or instruments?

Because noise is static and everything else is (more or less) erratic.
This is something very simple to answer. The problem with removing things such as vocals or instruments is that their respective descriptive functions cannot be generated as fast as an already-predefined white noise function. This is due to latency. You will have to analyze every part of, for example, a voice track, to decompose each frequency in real-time. This can be done, but it requires removing a bunch of latency between components and having enough computing power to quickly calculate the functions that describe the waveform. White noise is already known in terms of the generation function, so this is not an issue. Also, there are known noise shaping methods for white noise, on top of the use of randomly constructive noise (dither) that we can use as well.
 
