24bit vs 16bit, the myth exploded!
May 2, 2017 at 8:00 PM Post #3,841 of 7,175
If you wish to discuss accuracy of frequency, then 44/16 can do so down to 55 picoseconds, which is, in a roundabout way, the accuracy limit for depicting frequency. However, dither will decrease that number further. 24 bit could depict frequency to a finer level of accuracy. A higher sample rate could as well.

So just for instance, 16/44 digitally created could give you 1000 Hz and 1000.000055 Hz. Would it be possible to determine that difference in the analog world once you construct those signals? Maybe, though probably not. Go to 24 bit and the smaller differences become something that can't be determined against the random motion of the air itself. So certainly at 44/24 (and almost surely at dithered 44/16) the ability to digitally construct something exceeds the accuracy with which it can physically exist in air. Random air molecules aren't that stable.

So the practical limit, in the terms you are asking about, is the bit depth. With enough bit depth you can describe a signal to any level of accuracy desired. Sample rate increases, however, are not required: within the Nyquist band you can perfectly reconstruct the wave without needing additional bandwidth. For audio purposes we have already exceeded the accuracy of what can physically exist.

As for software, even something free like Audacity can generate waves to, I think, 6 decimal places. That isn't the limit of what is possible, just how far the software goes. Matlab can construct whatever is mathematically possible.
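For what it's worth, that 6-decimal limit is just Audacity's input box, not the math; a couple of lines of numpy (standing in here for Matlab, purely as an illustrative sketch) specify a frequency to full double precision:

```python
import numpy as np

fs = 44_100                       # sample rate in Hz
t = np.arange(fs) / fs            # one second of sample times
f = 1000.000055                   # frequency given well past 6 decimals
x = np.sin(2 * np.pi * f * t)     # float64 carries ~15-16 significant digits
```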
 
May 2, 2017 at 9:52 PM Post #3,842 of 7,175
Hi people!


What I have learnt about better-than-CD quality, from this forum plus a nice person on this forum:
snip

1) With 24 bit, the minimum loudness step is smaller (in other words, loudness precision is increased), although we cannot discern it.


3) With higher sampling rates, the smallest frequency increment we can specify is smaller, in other words higher frequency precision, although standard CD quality already exceeds human hearing in reproduction.
snip


About 1 and 3, please only contribute if you have solid information. I can't say they are 100% true. But please don't tell me to go watch the xiph videos, or that we don't need to talk about things we can't perceive with the auditory system, or any other kind of side-tracking, please. I have read half of this thread and checked most of the external links given.

5) So my question is: does a higher sampling rate also reduce delays and improve timing (I'm not asking whether we can discern it or need it)?

Okay, backing up a bit to your earlier post.

I gave an example of the timing accuracy of digital at 44/16. The shortcut formula is:
1 / (number of levels x sample rate x 2pi)

So for Redbook, 1 / (65,536 x 44,100 x 2pi) gives you a number between 55 and 56 picoseconds.
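If you want to check the arithmetic, the shortcut formula is one line of code (Python, purely for illustration):

```python
import math

def timing_resolution(bits, sample_rate):
    """1 / (number of levels x sample rate x 2*pi)."""
    return 1 / (2**bits * sample_rate * 2 * math.pi)

print(timing_resolution(16, 44_100))   # ~5.5e-11 s, i.e. 55-56 picoseconds
print(timing_resolution(24, 44_100))   # ~2.2e-13 s, i.e. ~0.2 picoseconds
```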

Higher numbers of bits mean smaller steps between levels, and this increases the timing accuracy. Why? Sampled systems can reconstruct or interpolate the wave between samples accurately. That is how a given sine wave can start in between samples and the time it began is accurately reconstructed.

Imagine a 1 kHz sine wave that starts exactly on one sample. The following sample, instead of reading zero, reads some non-zero amount. Now imagine you move the starting point to 10 microseconds after the first sample, less than half a sample period at 44,100 Hz. Your second sample will still read non-zero, but the level of that reading is lower than in the first instance. For any given sine wave there is only one group of samples that fits, so as further samples are taken, only one starting time between samples will fit and can be reconstructed.

Staying with Redbook, let us say you instead move the start time forward by only 1 picosecond. The following sample at infinite precision would also read less, but the amount less is smaller than the step between the least significant bit and the next one, so the move would be missed. There is a small region of indeterminacy about exactly when that sine wave began. Now if I change to 44/24 sampling, the new timing accuracy from the formula is about 0.2 picoseconds (roughly 215 femtoseconds), so a 1 picosecond shift is now larger than that region of indeterminacy, and the reconstruction of the sampled wave can be done to correspondingly higher timing accuracy, at least in theory.
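If you want to see the single-sample version of that argument in numbers, here is a rough sketch (Python, purely illustrative; a 1 nanosecond shift is used for the second case because a single sample on its own can't show sub-LSB shifts at either bit depth, whereas pooling many samples, as the formula does, resolves far smaller ones):

```python
import numpy as np

fs = 44_100
f = 1_000
t1 = 1 / fs                          # the sample following the sine's start

for shift in (10e-6, 1e-9):          # 10 microseconds, then 1 nanosecond
    delta = abs(np.sin(2 * np.pi * f * t1)
                - np.sin(2 * np.pi * f * (t1 - shift)))
    for bits in (16, 24):
        lsb = 2 / 2**bits            # one step of a +/-1 full-scale signal
        print(f"shift {shift:.0e}s, {bits} bit: sample moves {delta:.1e} "
              f"({'>' if delta > lsb else '<'} LSB of {lsb:.1e})")
```

The 10 microsecond shift changes the sample at either bit depth; the 1 nanosecond shift is below one 16-bit step but above one 24-bit step.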

Now, yes, this is highly over-simplified, but hopefully it gets the point across without being out-and-out misinformation.

So why the bit about perfect reconstruction? Well, the theorem demonstrates it, and it is true. The assumptions, however, are infinitely long sample times, infinitely steep brickwall filtering and infinite sampling precision. Other mathematicians have worked out that the same ideas about perfect reconstruction hold with somewhat less stringent infinities involved.
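To make "accurately reconstructed between samples" concrete, here is a minimal Whittaker-Shannon interpolation sketch, with a long but finite window of samples standing in for the theorem's infinite one (so the match is merely very close rather than mathematically exact):

```python
import numpy as np

fs = 44_100
n = np.arange(-20_000, 20_000)                     # finite stand-in for infinity
x = np.sin(2 * np.pi * 1_000 * (n / fs - 3.7e-6))  # sine starting 3.7 us off-grid

def reconstruct(t):
    """Whittaker-Shannon: sum of samples weighted by shifted sincs."""
    return np.sum(x * np.sinc(fs * t - n))

t = 1.234e-4                                       # an instant between samples
print(reconstruct(t))                              # agrees closely with...
print(np.sin(2 * np.pi * 1_000 * (t - 3.7e-6)))    # ...the true off-grid value
```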

So people on this subforum get a bit touchy when people come in, as they regularly do, and try to tell us the Shannon-Nyquist theorem isn't really true. It is true. It has been proven, and the practical shortcomings of real-world implementations have been rigorously worked out as well. Things really do work that way. So with reasonable sample rates and 24 bits, the genuine accuracy of the digital system exceeds what can be physically created, because noise and real-world fluctuations swamp the quantization noise and timing limitations.
 
May 2, 2017 at 9:53 PM Post #3,843 of 7,175
Whether that is relevant to human hearing is another matter.

Since we all hear with human ears, I would say that relevance to human hearing is the only thing that really does matter. And since we all listen to music on our sound systems, I'd say that fidelity in music is more important than in abstract waveforms like square waves. Audiophiles can go off the deep end with "what ifs". It's better to focus on what makes music sound better on our stereos.
 
May 2, 2017 at 11:21 PM Post #3,844 of 7,175
Since we all hear with human ears, I would say that relevance to human hearing is the only thing that really does matter. And since we all listen to music on our sound systems, I'd say that fidelity in music is more important than in abstract waveforms like square waves. Audiophiles can go off the deep end with "what ifs". It's better to focus on what makes music sound better on our stereos.

Yes, staying grounded in reality keeps one from going off the rails like a lunatic.
 
May 3, 2017 at 7:00 AM Post #3,847 of 7,175
keep an open mind, maybe he trained in a secret temple in Tibet? as a kid, at the Wednesday matinee I saw many documentaries where individuals develop superhuman senses in such temples.
plus there are all the people living near Smallville or Central City, around radioactive spiders, toxic waste... they don't register at a statistical level, but that doesn't mean they don't exist.

I remember seeing him in some video from RMAF or some show like that, with flames on his shirt, and I remember thinking that if I had that shirt I would probably experience life differently too.
 
May 3, 2017 at 8:31 AM Post #3,848 of 7,175
Okay, backing up a bit to your earlier post.
snip

TOP.
Big thanks for the in-depth explanation. I know the theorem will work perfectly if we measure the samples' coordinates perfectly, which is not possible in real life, thus leading to limitations.
I also do believe that for digital audio, 44.1/24 (or 44.1/16 dithered, although I don't have extensive knowledge about dithering) is enough for today, and will be enough for many years to come, if not forever.

Thank you very, very much for the clarification. That is one solid reply. I got the answer I wondered about. Respects, sir.
 
May 4, 2017 at 10:53 AM Post #3,850 of 7,175
So the practical limit, in the terms you are asking about, is the bit depth. With enough bit depth you can describe a signal to any level of accuracy desired. Sample rate increases, however, are not required: within the Nyquist band you can perfectly reconstruct the wave without needing additional bandwidth.

This isn't really correct, or rather it's only partially correct. For it to be correct, the Nyquist-Shannon theorem would have to be incorrect! You've got the sampling-rate part right but not the bit-depth part. From what you've said, and from our last discussion, I suspect you've got all the information (or nearly all) but you haven't quite managed to join all the dots (excuse the pun), or put another way, you haven't fully appreciated all the implications. The part of the story you appear not to fully appreciate is the statistical nature of digital audio: you appear to think about sample values as individual absolute values rather than as a series of probabilistic ordinates. A sinc function does of course take in the absolute sample values, but it effectively processes them statistically, and part of this statistical process is dither.
It appears you only partially appreciate the purpose, usage and implications of dither. Let me re-phrase what I stated in the OP: while the output of a DAC is a continuous waveform, the transfer curve "would be" the infamous stair-step (defined by the quantisation levels/steps). I say "would be" because that's not what happens in practice (if it did, it would NOT satisfy Nyquist-Shannon). What happens in practice is dither: the random modulation of the signal between adjacent quantisation steps/levels. This results in our stepped transfer curve becoming perfectly linear (statistically)!

Let's take the worst-case scenario: 1 bit. With one bit we only have two values: 0 (which is digital silence) and 1 (which is overload). If we look at the result of quantisation in terms of these absolute individual values (a stream of individual values), then 1 bit of data gives us precisely zero resolution! However, if we look at these values as a statistical group (including dither) rather than as individual values, then we don't only have digital silence and digital overload but also every value in between (i.e. infinite resolution)! The proof is SACD, which otherwise simply couldn't work at all (as it would have zero resolution).
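That 1-bit point is easy to sanity-check with a toy simulation (a sketch only; real 1-bit systems like SACD add noise shaping and very high sample rates, but the statistical idea is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
level = 0.3127                      # arbitrary level between silence and full scale

# Dithered 1-bit quantiser: every individual output is -1 or +1, nothing between...
d = rng.uniform(-1.0, 1.0, 1_000_000)
bits = np.where(level + d > 0, 1.0, -1.0)

# ...yet statistically the in-between level is fully encoded
print(bits.mean())                  # ~0.3127, give or take ~0.001 of random noise
```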

Consider this quote from Hugh Robjohns (2011 SOS forum):

"The myth of 'digital resolution' also bears a comment here. In undithered systems higher recording levels results in lower quantisation distortion, and some very early digital recording systems were not adequately dithered. ... However, this problem went away nearly thirty years ago, too. A properly dithered system (as all now are and have been for decades) has no quantising distortion at all. None. Consequently, the audio 'resolution' is 100% perfect regardless of recording level....

'Audio resolution' does not vary with recording level, nor the wordlength of the system. Only the absolute level of the noise floor varies with wordlength. And while we're at it, only the recorded audio bandwidth varies with sample rate, too."

I've highlighted that part because there appears to be a tendency to consider dither as an optional extra which can be applied to digital audio, rather than as an essential ingredient. The application of dither is not optional; a dithering quantiser is employed in all ADCs (as I mentioned in the OP). Where some of the confusion about the application of dither probably lies is not in the initial quantisation but in any subsequent, operator-chosen re-quantisation. As my OP and Hugh Robjohns state, resolution is infinite at any bit depth, i.e. ABSOLUTELY ALL the information is there (as Nyquist-Shannon states). However, getting at all that information is the tricky part, because some of it is buried in the resultant dither noise. The issue then is purely one of noise and has nothing to do with resolution. Just saying "noise" is not as simple as it first appears either, though, because we've got all sorts of different noise, at various points/places and with various statistical implications. For example, we've got thermal noise, which tends to have a Gaussian distribution, acoustic noise, which doesn't have a definable distribution, etc.

For the above reasons, your previous explanation/example of greater bit depth improving timing is not correct.

[1] For the dBFS part, I have found it myself. ... This is called quantization error. It is unavoidable

2nd) It is quite sad that YOU DON'T KNOW, OR CAN'T EVEN THINK, that it is possible to create an audio file with more precision than a tone generator, using just a plain computer.

Now, for a demonstration, I need to know: for a given computer-generated continuous tone, let's say a tone that has to be encoded with the frequency 10,000.55555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555 Hz (200 digits after the decimal), how precise will the assignment of this tone's frequency value be, comparing sample rates of 44.1 kHz and 192 kHz, without any dither?

1. That's utter nonsense; of course quantisation error is avoidable! For example, if you don't quantise (or re-quantise), you're obviously going to avoid any quantisation error! Is it just me, or is this not ridiculously obvious?

2. Huh? What are you talking about? I'm saying, both in the OP and in my responses to you, that an audio file has infinite resolution; you're the one arguing that it doesn't!

3. Going back to point #1: on your computer, get a 16-bit signal generator, generate your sine wave, then write it to a 16-bit wav file. Where's the quantisation error going to come from? How is there going to be any sort of error at all? The audio file can store anything you can generate, PERFECTLY! Maybe you want to use, say, a 64-bit float signal generator though, in which case you would need to dither the output to a 16-bit file to get infinite resolution. You can't say "no dither", because by doing so you are breaking the rules of digital audio which make it linear in the first place. It's like asking about the performance of an electric car when you don't give it any electricity!
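A sketch of that last scenario (scipy is assumed here for the wav writing; the dither is the textbook triangular-PDF kind spanning +/-1 LSB):

```python
import numpy as np
from scipy.io import wavfile

fs = 44_100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1_000.0 * t)    # 64-bit float sine, 1 second

q = 1.0 / 32768.0                            # one 16-bit step at +/-1 full scale
tpdf = (np.random.uniform(-0.5, 0.5, x.size)
        + np.random.uniform(-0.5, 0.5, x.size)) * q
x16 = np.clip(np.round((x + tpdf) / q), -32768, 32767).astype(np.int16)

wavfile.write("sine_16bit_tpdf.wav", fs, x16)
```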

G

 
May 4, 2017 at 11:36 AM Post #3,851 of 7,175
would you please be so kind as to edit your post and remove the really unnecessary personal comments.
 
May 4, 2017 at 2:51 PM Post #3,852 of 7,175
This isn't really correct, or rather it's only partially correct. For it to be correct, the Nyquist-Shannon theorem would have to be incorrect!
snip

I don't believe I wrote anything that was incorrect if dither were not in use. Perhaps you overlooked where I said I knew it was over-simplified. I also don't know what discussion convinced you I don't know how dither works, what its results are, or what that means. Also, despite you acting as if no dither isn't an option, plenty of software will let you do things without dithering. It isn't a wise choice to make, but it happens.

Trying to get the point across to someone who doesn't yet know is, in my opinion, better done in digestible chunks. Otherwise it seems like so much indecipherable magic.

Now, if the previous rambling about this stuff made sense but you don't get the seemingly magical claims for dither made by gregorio, perhaps this will help.

http://bitperfectsound.blogspot.com/2013/09/dither-some-hard-data.html

And the prior article, which explains in simple terms what is going on.

http://bitperfectsound.blogspot.com/2013/09/dither.html
 
May 5, 2017 at 6:54 AM Post #3,853 of 7,175
For the original post's writer, @gregorio

Quantisation error is not avoidable in real-life digital audio sampling. Its even distribution is possible with dither, which reduces its side effect, quantisation noise, overall. But the assignment precision is still reduced. Want to see it first hand? Try downsampling a PCM audio file to 1 bit, as you said. You hear quantisation noise, of course. But what I'm describing is this: 2^1 = 2, so you have only 2 values to assign to the sample points. The tones on the file (aside from the quantisation noise) will either be completely silent (assigned to zero) or they will all be at the same loudness. This proves that there is an error, and the higher the bit depth, the smaller the error will be.
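You can check the undithered half of that claim in a couple of lines (a sketch; with dither, as discussed above, the level does survive statistically, but without it the loudness really is gone):

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
loud  = 0.9 * np.sin(2 * np.pi * 440 * t)
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)

# Undithered 1-bit quantisation keeps only the sign of each sample
print(np.array_equal(np.sign(loud), np.sign(quiet)))   # True: the levels are indistinguishable
```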

As an engineer in a different field, I would be ashamed after a situation like this, instead of playing word tricks and still trying to look like I am right. If somebody came and asked me about calculating the square root of 5 to 100 digits after the decimal, I would not say "I use an industry-standard calculator which is only capable of 8 digits after the decimal" or "why do you need that amount of precision?". I am the engineer, and I am expected to provide a solution to the asker or simply give way. I would say:
1) A good, simple, short "I DON'T KNOW" (it seems some people can't say those words and try a bit too hard to know everything), plus a recommendation: "you might find the answer or useful information from this source, or by searching on this site."
2) Here you go: http://www.ttmath.org/online_calculator (the exact thing they need), or a few lines of code, as sketched below.
3) There is no good third option; rambling on about a slightly different topic to show how much knowledge you have, as some people do, is basically a loss of time and focus. I haven't done that on purpose, and I will try not to do it in the future, as an engineer.
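For the record, the 100-digit square root itself is three lines of Python (a sketch; note that prec counts significant digits, hence the guard digits):

```python
from decimal import Decimal, getcontext

getcontext().prec = 105       # 100 digits after the point, plus guard digits
print(Decimal(5).sqrt())      # 2.2360679774997896964091736687...
```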

Also, I would not make jokes about the asker, because I know there is a possibility the asker is working on some kind of digital coding that I can't even think of, or in a field I do know where calculating at very high precision is required. And I actually do know that for air-to-air missiles, 16 digits after the decimal is not enough: even in 1 km (short) shots you have the potential to miss a turning target by 1.5 metres, just from the growth of the mathematically accumulated error.
Even if someone is asking about an insane level of precision, I would try to show my interest in a good way, like "wow, that's super high! how/why will you use it!?", not make fun of him/her. Next time, please don't try to mask the limits of your knowledge by going for the third option and playing word tricks, because some people will understand what you are doing and will form a judgement about your quality.

3. Going back to point #1: on your computer, get a 16-bit signal generator, generate your sine wave, then write it to a 16-bit wav file. Where's the quantisation error going to come from? How is there going to be any sort of error at all? The audio file can store anything you can generate, PERFECTLY! Maybe you want to use, say, a 64-bit float signal generator though, in which case you would need to dither the output to a 16-bit file to get infinite resolution. You can't say "no dither", because by doing so you are breaking the rules of digital audio which make it linear in the first place. It's like asking about the performance of an electric car when you don't give it any electricity!

That's exactly it: you can't say option 1), "I don't know what will happen if dither is not used", so you go for the third option and blame me for asking a stupid question, like asking about "the performance of an electric car when you don't give it any electricity". Really.

If humanity were able to store, and had the processing power to extract, an infinite amount of information instantly from just a digital audio file, we wouldn't need supercomputers for simulation, we wouldn't need terabytes of HDD, companies wouldn't invest millions of dollars in more capable processing devices, and there would not be prizes for calculating the first xxx digits of the mathematical term "pi" within a specific time, and so on.

You always back your thoughts with the Nyquist-Shannon theorem. Digital audio is not 100% equal to the Nyquist-Shannon theorem. Its logic and operating principle are the same; the difference is that you (or should I say we?, lol) have predefined values for the taken samples in the computer world, and also in digital audio. And these values are simply not perfect. Values of endless precision would require an infinite amount of data. Just the first 10 million digits after the decimal of a tone's frequency would cost you 10 million bytes, which is about 10 megabytes, already exceeding the size of a 1-second wav file.

......

I am both happy and sad about the explosion of you and your 7-year-old original post, which you have always pointed to like a religious book, even on different sites. I, as a person who doesn't have considerable experience in electrical engineering or signal processing, or any strong claim in digital audio like many here, am completely happy that my logic discerned that you, some professional-looking sources (e.g. xiph's explanations), and the many media and tutors on various sites repeating what those sources say over and over again, can't be the whole story: there need to be some imperfections beyond dynamic range with bit depth, and some limit on the frequency precision of the taken sample points. And this despite all the pressure from you and this famous Head-Fi forum's various members, who mostly identify themselves as real-life audio engineers/tutors, or as people who have given many years to digital and analog audio and understood almost all the concepts by heart.

I am sad about how, all these years, this flawed, insufficient explanation has stood strong among most of the very best of these forum members (except @spruce music, from what I saw while I was here, and there might be some others I didn't see or realise), who are supposed to have a tremendous amount of knowledge, and how this flawed explanation has fooled many, many people. IMPORTANT (if you haven't read my older posts): I am not talking about whether 16-bit audio is or isn't enough for digital audio; I am talking about its non-perfection, and about its explanation in the original post and in the writer's statements.
 
May 5, 2017 at 9:11 AM Post #3,854 of 7,175
@HAWX sorry but just no.
you say to forget about recording and playback boundaries that would render your conditions impossible, then you argue about how 1 bit would sound... come on. disregard the DAC but care about how it sounds?

we're talking about the specific case of reconstructing a band-limited sine wave signal, and you bring up computational processing???? the Nyquist theorem is about sine waves, not about solving complex operations. this makes no sense, and you end up talking about how you have the higher ground and are happy to have found the flaw in many people's logic. look in a mirror.

if you cared more about the theorem instead of repeating that you understand it, you would see that the finite limit you so dearly desire to demonstrate is right there before your eyes: band limiting! it is the hard limit that cannot be avoided, and it gives, if not a finite resolution (we might have to define that more clearly), at least a finite content. if you move the band limit, you lose data or create aliasing, both with the direct consequence of non-perfect reconstruction of the signal. if a signal is not band-limited, we cannot reconstruct it perfectly, even in theory.
if you decide to assign fewer bits to code the signal, you raise the noise floor. interpret that however you like, but that's all you do. the signal is still there, and perfect, in all the amplitude above the noise floor.
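the aliasing half of that is easy to see in numbers: sample a tone above Nyquist and you get exactly the samples of an in-band tone (a quick sketch):

```python
import numpy as np

fs = 44_100
n = np.arange(64)
x_hi    = np.sin(2 * np.pi * 30_000 * n / fs)         # 30 kHz, above Nyquist
x_alias = np.sin(2 * np.pi * (fs - 30_000) * n / fs)  # 14.1 kHz, in band

# the two sets of samples are identical up to a sign flip
print(np.allclose(x_hi, -x_alias))                    # True
```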

and of course there are very real problems with practical application that would show obvious limits in resolution, for multiple reasons, but you're the one insisting so much on disregarding real-life encoding and playback boundaries. Greg has simply been following your conditions better than you have. don't blame him for that.
if you make up a theoretical case, stay theoretical. if you talk about a real analog signal, you cannot disregard all the reconstruction steps of real life.

now, as I said before, if all your posts were really about claiming that discrete values are discrete, you could have just said so and won the internet with a nice Captain Obvious meme. but if you're trying to demonstrate something else, I'm still sincerely wondering what it is.
 
May 5, 2017 at 9:48 AM Post #3,855 of 7,175
@castleofargh "you say to forget about recording and playback boundaries that would render your conditions impossible, then you argue about how 1 bit would sound... come on. disregard the DAC but care for how it sounds?"

I gave an example of how 1-bit sampling loses the precision of the samples' loudness. You don't have to play it back, but if you do, you will understand what I mean. You think I'm going to say that 16 bit sounds better than 1 bit, and therefore that 24 bit sounds better than 16 bit? No. Try to understand my explanation of how the samples have lost the precision of their loudness values relative to each other: every sample will be assigned either to the same loudness level or to zero. I have been talking about REAL-LIFE DIGITAL AUDIO SAMPLING the whole time. You don't have to play the sound in real life; if you know how to measure it (I seriously don't), you will see what I mean. If you do decide to play it back, you will have the limitations of the analog chain, I know, but you can still see WHICH PART I SPECIFICALLY MEAN FOR THE BIT DEPTH.

"interpret that however you like but that's all you do. the signal is still there and perfect in all the amplitude above the noise floor. " That is, wrong. That's why I have given 1 bit example. You don't seem to understand what is the purpose of that example.

"Nyquist theorem is about sine waves, not about solving complex operations." OK. You take a sample, and measure It's corrdinates. And you look It's coordinates and reconstruct it. Right? Now can you please tell me, how do you take perfect samples, please. Because what I am saying in short term is, that's not possible.

I know everything is band-limited. Even if you had complete perfection over a very short band-limited range, you would still need an infinite amount of data, and thus storage. There is no other way.

The rest of your writing is even more irrelevant. Please try to understand my writing fully.
 
