Why does quieter music require less data? (FLAC file size)
Aug 26, 2016 at 1:05 AM Post #16 of 33
No, I don't understand the topic, not even the very basics, but thanks for trying to help. I wish I understood what the 1.0 and -1.0 means in a track (http://imgur.com/a/K5B79). Max? 1.0 = 16th bit? No..
 
Aug 26, 2016 at 2:42 AM Post #17 of 33
  So there is more audible detail in a larger waveform? It's better to record as loud as you can before clipping?

 
No! You're working on the principle that the limiting factor is the digital audio container, the number of bits available/used. In reality the limiting factor is the acoustic environment, the mics and the mic preamplifier(s). In an acoustic environment we always have a noise floor: a relatively high noise floor in the case of a concert hall (with an audience) and a relatively low noise floor in a commercial recording studio, but still always a noise floor. Let's take an example: a studio with a noise floor around -74dB and a musician capable of a dynamic range of 50dB above that noise floor, therefore peaking at -24dB. Peaking at -24dB means the top 4 bits are never used. We can of course increase the gain on the mic-pre by 24dB, bringing our musician's peaks up to 0dB with no wasted bits. However, we have not increased the audible detail (or fidelity, or whatever you want to call it), because we haven't just boosted the level of the musician by 24dB, we've boosted everything hitting the mic by 24dB, including the noise floor! The signal to noise ratio, and therefore the audible detail, is unchanged. Or rather, it would be unchanged if we had a theoretically perfect mic-pre amp, which of course we don't: as we turn up the gain on the mic-pre it produces more internal noise, thereby reducing the signal to noise ratio (and audible detail)!
 
The limiting factor of audible detail is the amount of audible detail which actually exists acoustically and can be captured by analogue components, not the number of bits available/used.
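 
To put rough numbers on it, here's a quick Python sketch using the same made-up levels as above (and assuming a perfect mic-pre, which actually flatters the gain-boosting approach):

noise_floor_db = -74.0                 # acoustic noise floor of the studio (dBFS)
peak_db = noise_floor_db + 50.0        # musician peaks 50dB above it, i.e. -24dBFS
print(peak_db - noise_floor_db)        # SNR before: 50.0 dB

gain_db = 24.0                         # turn the mic-pre up so peaks hit 0dBFS
peak_db += gain_db                     # 0dBFS, no "wasted" bits any more
noise_floor_db += gain_db              # but the noise floor comes up to -50dBFS too
print(peak_db - noise_floor_db)        # SNR after: still 50.0 dB, no extra detail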
 
G
 
Aug 26, 2016 at 8:13 AM Post #18 of 33
  No, I don't understand the topic, not even the very basics, but thanks for trying to help. I wish I understood what the 1.0 and -1.0 means in a track (http://imgur.com/a/K5B79). Max? 1.0 = 16th bit? No..

 
Audacity operates at 32-bit floating point, and the norm is for 32-bit floating point samples to go from -1 to 1, as this helps on-the-fly conversion to other formats. But you can store much larger values as an intermediate step in editing using floating point.
 
16-bit samples are typically stored as two's-complement integers, and thus technically can range from -32768 to 32767. When you load a 16-bit sample into Audacity, it first converts the 16-bit integers into 32-bit floating point numbers.
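 
A minimal sketch of that conversion, assuming the common divide-by-32768 convention (Audacity's exact internals may differ slightly):

import numpy as np

pcm16 = np.array([-32768, -16384, 0, 16384, 32767], dtype=np.int16)   # 16-bit samples

as_float = pcm16.astype(np.float32) / 32768.0    # map onto roughly [-1.0, 1.0)
print(as_float)                                  # [-1.  -0.5  0.   0.5  ~1. ]

back = np.clip(np.round(as_float * 32768.0), -32768, 32767).astype(np.int16)
print(np.array_equal(pcm16, back))               # True: the round trip is lossless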
 
Aug 26, 2016 at 9:02 AM Post #19 of 33
  No, I don't understand the topic, not even the very basics, but thanks for trying to help. I wish I understood what the 1.0 and -1.0 means in a track (http://imgur.com/a/K5B79). Max? 1.0 = 16th bit? No..

They are just the full scale values. If you use 16 bit samples, the max at 1.0 would be a binary value of 1111111111111111, which is 65535 in decimal, and -1.0 would be all zeros.
 
Aug 26, 2016 at 9:12 AM Post #20 of 33
 
16-bit samples are typically stored as two's-complement integers, and thus technically can range from -32768 to 32767. When you load a 16-bit sample into Audacity, it first converts the 16-bit integers into 32-bit floating point numbers.

 
Yes, of course they are. That's what I get for posting when I should be sleeping. Doh.
 
Aug 26, 2016 at 1:36 PM Post #21 of 33
   
No! You're working on the principle that the limiting factor is the digital audio container, the number of bits available/used. In reality the limiting factor is the acoustic environment, the mics and the mic preamplifier(s). In an acoustic environment we always have a noise floor: a relatively high noise floor in the case of a concert hall (with an audience) and a relatively low noise floor in a commercial recording studio, but still always a noise floor. Let's take an example: a studio with a noise floor around -74dB and a musician capable of a dynamic range of 50dB above that noise floor, therefore peaking at -24dB. Peaking at -24dB means the top 4 bits are never used. We can of course increase the gain on the mic-pre by 24dB, bringing our musician's peaks up to 0dB with no wasted bits. However, we have not increased the audible detail (or fidelity, or whatever you want to call it), because we haven't just boosted the level of the musician by 24dB, we've boosted everything hitting the mic by 24dB, including the noise floor! The signal to noise ratio, and therefore the audible detail, is unchanged. Or rather, it would be unchanged if we had a theoretically perfect mic-pre amp, which of course we don't: as we turn up the gain on the mic-pre it produces more internal noise, thereby reducing the signal to noise ratio (and audible detail)!
 
The limiting factor of audible detail is the amount of audible detail which actually exists acoustically and can be captured by analogue components, not the number of bits available/used.
 
G

OK, that was my going assumption based on things like Monty's presentation about high-res audio. It just seems like I've heard more detail in larger-waveform recordings, volume adjusted. I thought it was likely my imagination, but there's also the factor that older recordings with more moderate levels (or whatever it's called, gain?) were made on older equipment, tape equipment... 
 
I think with the word "volume" you might picture, at least subconsciously, a beaker with a liquid in it, right? Whichever beaker has more "volume" has more water. Then another problem with the language is "resolution." It doesn't mean the same thing as in imagery. A picture with higher native resolution captures more detail, but the same word is used in audio, so it might be confusing. Of course, the same word is used in a political context too, to "pass a resolution" and to be resolute. I still don't know what audio resolution is, though. If you resolve something, that means you break it down to its bare constituents. So something higher in resolution in audio should be getting more detail. Is it? 
 
When I think about the high-res debate, I think "no need for what bats and elephants hear" (ultrasonics and infrasonics). But can't there just be increasing resolution (detail) in a narrow human-audible range (Nyquist frequency), the vocal range in particular? Why can't there be more and more detail captured there? 16-bits is so many samples. More samples doesn't equal more detail? Then why is the word "resolution" used? If you have a higher resolution digital photo it has more detail than the lower resolution version. Because it's more resolving. 
 
I know I'm confusing sample rates with bit depths, but the two are brought in hand-in-hand when there is a "step up" in technology. One doesn't buy a better DAC that is 32-bit but with a maximum sample rate of only 44.1. So I might as well tie them together.
 
 
Is the noise floor dependent on atmospheric density? Is it better to do outdoor recordings near sea level?
 
I think you need at least 20 bits to get certain sounds like the cannons in the 1812 Overture without compression. (http://imgur.com/a/CFf70)
But what if a purely electronic artist wants to use 64-bits or more? Maybe he wants to make dance music for bats and elephants. He's not dealing with mics then anyway. You still have to deal with a noise floor with purely electronic music? Not from samples, but from tones generated by the synthesizer (like FM, not MIDI)? 
 
I'm probably better off not trying to understand this subject. There's a certain limit average sorts have in understanding mathematics and science; you bump your head on your ceiling pretty quickly, especially on a forum in front of others. Better to hang out with the Michael Fremer types, who are more emotional and can't think so well.
 
Aug 26, 2016 at 1:39 PM Post #22 of 33
  They are just the full scale values. If you use 16 bit samples, the max at 1.0 would be a binary value of 1111111111111111, which is 65535 in decimal, and -1.0 would be all zeros.

 
Why not display the actual bits (0 to 15) instead of a scale? It shows -1 to 1 whether you set it to 32-bit float, 16-bit or other. If it did, you could see different-sized waveforms? Like, importing a 16-bit FLAC into 32-bit float would show a lot of headroom added for editing? Instead it shows it near max, as I guess it's converted. I find that all confusing. 
 
Aug 26, 2016 at 4:27 PM Post #23 of 33
 
  They are just the full scale values. If you use 16 bit samples, the max at 1.0 would be a binary value of 1111111111111111, which is 65535 in decimal, and -1.0 would be all zeros.

 
Why not display the actual bits (0 to 15) instead of a scale? It shows -1 to 1 whether you set it to 32-bit float, 16-bit or other. If it did, you could see different-sized waveforms? Like, importing a 16-bit FLAC into 32-bit float would show a lot of headroom added for editing? Instead it shows it near max, as I guess it's converted. I find that all confusing. 


All of that is only an interpretation and representation of the signal. The digital data itself is a series of points (samples) and that's all. It can be shown as points, as a wave, as a spectrum, or identified at the end of a sound system as being the sound of a guitar... Representations are only for convenience, and the scale could show anything from dB to volts to bit values depending on where in the system we wish to interpret the data. You could show several of those things in Audacity by changing some visualization settings, but it doesn't change what the digital data really is.
The PCM system (the lossless formats, excluding DSD) records the vertical value (amplitude) of the signal at a given moment. That's one sample, and the file is made of X samples per second. That's all. If the music is overall quieter, then the signal will record smaller amplitude values; that's the only difference at the digital level for the file itself. There's no difference in quality or quantity of information: those things are decided by the bit depth (precision of the amplitude) and the sample rate (number of points recorded per second), not by the loudness of the file.
 
-Your intuition that because the FLAC file was smaller for quiet music, quiet music might contain less information or precision, is shown to be wrong because it doesn't happen when you do the same test with .wav files. WAV files come out the same size regardless of loudness, so that argument can't be applied to .wav.
 
-We also know that FLAC and WAV contain the same final information, as both are lossless formats. You can go from WAV to FLAC and back, and lose nothing.
 
So from those two points you can conclude that the smaller FLAC size is only a consequence of FLAC's compression algorithm, not a loss of data or precision or whatever.
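 
If you want to check those two points yourself, here is a rough sketch in Python. It uses zlib as a stand-in for FLAC's compressor (a completely different algorithm, but the same lossless promise) and a sine plus a little noise as stand-in "music":

import io, wave, zlib
import numpy as np

def wav_bytes(samples, rate=44100):
    # pack int16 samples into an in-memory mono 16-bit WAV file
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(samples.tobytes())
    return buf.getvalue()

rng = np.random.default_rng(0)
t = np.arange(44100) / 44100                                  # one second
music = np.sin(2 * np.pi * 440 * t) + 0.01 * rng.standard_normal(t.size)
music /= np.abs(music).max()

loud = (music * 32767).astype(np.int16)                       # peaks near 0dBFS
quiet = (music * 32767 * 10 ** (-30 / 20)).astype(np.int16)   # peaks near -30dBFS

loud_wav, quiet_wav = wav_bytes(loud), wav_bytes(quiet)
print(len(loud_wav), len(quiet_wav))          # same WAV size, loud or quiet
print(len(zlib.compress(loud_wav, 9)), len(zlib.compress(quiet_wav, 9)))
# the quiet one compresses smaller, yet either decompresses back bit-for-bit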
 
 
 
Now, in relation to the overall resolution of the signal you'll get out of your headphone: all components and all devices generate some amount of noise. If you record music too quietly, the music ends up closer to all those noises and the signal to noise ratio will be poor. So from that particular perspective, a signal recorded closer to the maximum amplitude (louder) will indeed be further away from the quiet noise floor, which results in a better SNR and, in a way, a superior resolution. So your initial idea about quiet music and resolution isn't wrong; it's only the link between smaller FLAC file size and signal quality that was wrong.
 
I hope what I say makes some sense, I suck when it comes to pedagogy.
 
Aug 26, 2016 at 4:43 PM Post #24 of 33
   
Why not display the actual bits (0 to 15) instead of a scale? It shows -1 to 1 whether you set it to 32-bit float, 16-bit or other. If it did, you could see different-sized waveforms? Like, importing a 16-bit FLAC into 32-bit float would show a lot of headroom added for editing? Instead it shows it near max, as I guess it's converted. I find that all confusing. 

 
See my response above: Audacity converts to 32-bit float when importing, so you'll always see -1 to 1. If you change the view to Waveform (dB), you can see a decibel scale instead.
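 
For reference, the dB view is just a logarithmic relabelling of the same -1 to 1 numbers. Assuming 0 dB means full scale, the mapping is simply:

import math

def to_dbfs(sample):
    # float sample in (-1.0, 1.0] -> dB relative to full scale
    return 20 * math.log10(abs(sample)) if sample else float("-inf")

print(to_dbfs(1.0), to_dbfs(0.5), to_dbfs(0.1))   # 0.0, about -6, -20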
 
Aug 26, 2016 at 5:25 PM Post #25 of 33
 
All of that is only an interpretation and representation of the signal. The digital data itself is a series of points (samples) and that's all. It can be shown as points, as a wave, as a spectrum, or identified at the end of a sound system as being the sound of a guitar... Representations are only for convenience, and the scale could show anything from dB to volts to bit values depending on where in the system we wish to interpret the data. You could show several of those things in Audacity by changing some visualization settings, but it doesn't change what the digital data really is.
The PCM system (the lossless formats, excluding DSD) records the vertical value (amplitude) of the signal at a given moment. That's one sample, and the file is made of X samples per second. That's all. If the music is overall quieter, then the signal will record smaller amplitude values; that's the only difference at the digital level for the file itself. There's no difference in quality or quantity of information: those things are decided by the bit depth (precision of the amplitude) and the sample rate (number of points recorded per second), not by the loudness of the file.

 
Quote:
 -Your intuition that because the FLAC file was smaller for quiet music, quiet music might contain less information or precision, is shown to be wrong because it doesn't happen when you do the same test with .wav files. WAV files come out the same size regardless of loudness, so that argument can't be applied to .wav.

 
The WAVs were the same file size whether quiet or loud, but the FLAC is able to "throw away" (hide?) wasted data and still play back the exact same bits. Nonetheless the FLAC manages to shrink down more with a quieter file. How? It means it manages to find more worthless data to throw away/hide for compression. 
 
 
 
-We also know that FLAC and WAV contain the same final information, as both are lossless formats. You can go from WAV to FLAC and back, and lose nothing.

 
Yes, but a FLAC can shrink a quiet file more, while the WAV isn't trying to shrink anything, so it keeps the maximum bitrate (1411 kbps) regardless. Then that means the quieter waveform has less data in it, less detail, less sound. So it's better to record loud. The loudness war was a good thing. The CDs from the 80s don't have as much information in them; when FLACed they produce smaller files. 
 
When you have a photograph that is 8 megapixel it has more bits than one that has only 2 megapixels. And the 8 megapixel one has more detail. It can't be shrunk down as much with compression without losing that detail. 
 
The details may not be visible. 4 megapixels or so is already approaching the limits of eyesight for most viewing distances and display sizes. Similarly, 16-bit / 44.1 kilohertz was chosen because it seemed to capture enough. 
 
So from those two points you can conclude that the smaller FLAC size is only a consequence of FLAC's compression algorithm, not a loss of data or precision or whatever.

No, I can't. You can't magically have smaller file sizes without actually losing data (not counting wasted/blank space that's cut out). 
 
Think about images. You have a 4000x8000 blank canvas in an image editor (Photoshop). In the corner you paste in a photo that takes up only about 1/15th of that blank canvas, leaving the rest white. That white space is easily discarded by compression. Is it like the extra headroom in a quiet audio file, when using FLAC or MP3? Well, yes, but my point is that if your photo only takes up 1/15th of the 4000x8000 canvas (at native size, without stretching or shrinking, of course), then that photo's "real data" when pasted onto that canvas is, say, 640x480. That's low resolution by today's standards; it's VGA. You can blow it up, but that just increases how large the pixels display on your screen (or how large they save, increasing the file too); it's essentially the same amount of "data". You didn't increase the detail, only perhaps your ability to take it all in, depending on your eyesight.
 
But if you import a photo that is 3000x7000 pixels, and that is its native resolution, you know it's going to have a lot more detail in it compared to the 640x480 photo. It's higher resolution. It resolved (captured) more of reality. So it still has some white space (1000 pixels on either side), but you can't compress it as much. With a quieter audio file you can compress more. Therefore a louder recording, because it can't be FLACed to as small a file size, has more detail in it, more of the reality it was trying to capture. 
 
The WAV proof doesn't work, as explained above, because a WAV saves the same amount regardless. That's why I brought up FLAC to begin with, so I could avoid discussing the destructive (lossy) compression that MP3s do. But MP3s are also smaller when the source waveform was smaller. 
 
I just tried this with both Ogg Vorbis and FLAC from Audacity: first using the Amplify feature to maximize the sound (without clipping) and exporting the file, then starting over with the same raw data (from the CD) and shrinking it down a good bit by putting "-18" into the Amplify feature, so the waveform becomes pretty small. The quieter FLAC and OGG have smaller file sizes for the same length, and Foobar shows that the -18 version is 589 kbps. That's also what the file properties show in Windows: http://imgur.com/a/GuVCR
How can it store the same amount of data -- why wouldn't the louder one be the same size (or very close)? The discrepancy suggests it's like shrinking an image down in a photo editor to a smaller canvas size -- I mean not just temporarily zooming out in the editor (although that would be a matter of the amount of data in RAM), but what you actually save to the hard disk, where the smaller image is now 380x879 when before it was 1024x768 (or whatever). You know that the 380x879 version will not only take up less space on the hard disk but will have less picture data too. It had to remove some of it to get that small, unless we're talking vector graphics. I think. 
 
I mean I'm not talking about a GIF or PNG that was originally 380x879 then blown up to 1024x768. 
 
I don't see how the same doesn't apply to audio data. It shouldn't take a lot more space to represent larger waves. For instance, if you have a digital drawing that was originally created at 640x480 and you double its size to 1280x960, the program ought to be able to store that data as the 640x480 original plus a "times two" note, as a shortcut, so it saves space. It should be taught to double each pixel upon opening. It would in a sense keep its original size by working in multiples rather than at a new ratio. 
 
I don't see why it wouldn't work that way with audio data. The program (Foobar, iTunes, whatever) would be taught to play back a file at multiples of 2, 3, 4... expanding the audio "size" (how loud it sounds). That way data could be saved, because for some reason it requires less data to save the same audio when it's a smaller waveform.
 
 
Now, in relation to the overall resolution of the signal you'll get out of your headphone: all components and all devices generate some amount of noise. If you record music too quietly, the music ends up closer to all those noises and the signal to noise ratio will be poor. So from that particular perspective, a signal recorded closer to the maximum amplitude (louder) will indeed be further away from the quiet noise floor, which results in a better SNR and, in a way, a superior resolution. So your initial idea about quiet music and resolution isn't wrong; it's only the link between smaller FLAC file size and signal quality that was wrong.

 
Is that why the file size is larger then? The noise makes for less data? 
 
I hope this discussion isn't making everyone facepalm too much. I just seem to hear more music, more detail, in fairly loud recordings (large waveforms that are nearly clipping) versus moderate to conservative recordings, say from the 80s, even with very electronic sorts of music. And classical recordings seem to have gotten better, but they've also all been getting louder, a "casualty" of the so-called "loudness war." But maybe the loudness war is a good thing: they're recording more of the audible data that we call music. So we should increase the bit depth too, not just to record the cannons in the 1812 Overture, and increase the sample rate, because it's not just about bat and elephant hearing; you can capture more detail within a given audible range. 
 
But that is a separate matter. 
 
I'll post this even though I figure later today or tomorrow I'll regret it, thinking how stupid I am for not really understanding this. I hate that. I always want to discuss things, but then I get a bit shy because I'll probably get shot down. 
 
Aug 26, 2016 at 6:16 PM Post #27 of 33
Resolution in audio is actually very similar to resolution in images. If you increase your image resolution from 480p to 1080p you see more detail (depending on the size of your screen). If you increase your audio sample rate from 1kHz to 2kHz, you hear more detail. In imaging, you will reach a point where you can increase the resolution but won't be able to see any more detail; I think that point is around 300 pixels per inch. Packing any more pixels into a smaller space is pointless because the naked eye can never see that well. Similarly, once you increase the audio sample rate beyond ~40kHz, your ears cannot hear the added detail, so there's no point. We have had 44.1kHz sample rate audio equipment for decades, but we have not yet reached the point where all displays are >300PPI; we are only just now getting that sort of pixel density for consumers in modern smartphone displays.
 
You can make similar comparisons between audio bit depth, image color depth, and color range.
 
Aug 26, 2016 at 7:20 PM Post #28 of 33
@ OP. 
Talking about Vorbis or anything lossy is plain wrong here; you cannot compare lossy and lossless.
 
 
 
Finding a correlation between loudness and FLAC file size only tells you that FLAC isn't as efficient at compressing loud signals as it is at compressing quiet signals, and that's all you can conclude.
 
It's a language matter, not a resolution matter.
An analogy that would almost work: if I turn a number into the same number in Roman numerals, it might not take the same space in this post to write some values, but we can still extract the same meaning with 100% accuracy.
"1" would be "I" in roman number.
frown.gif
in this post they both occupy 1 space, the compression is sucky. now "100" will become "C" woot!!!! I just saved 2 spaces out of 3. do you conclude that 100 is of lower resolution because in roman numbers it use less space than in arabic? of course not so why do it for flac? a given value will take whatever storage space it needs in the language it uses. in english it's "a hundred", in french it's "cent", but the actual information is always there, if they were both samples for music, 1 or 100 would both tell us a position on a vertical axis and do so with both the accuracy of an integer. in what language they are express and whatever correlation with the occupied space you will find doesn't imply anything about resolution.
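 
Maybe a toy example says it better than words. This is not FLAC's real algorithm, just the same general idea: predict each sample from the previous one and store only the (small) difference, using only as many bits as that difference needs:

import numpy as np

def rough_bit_count(samples):
    # store each sample as its difference from the previous one ("residual"),
    # charging roughly log2(|residual|) + 2 bits per value -- a toy cost model
    residuals = np.diff(samples.astype(np.int32), prepend=0)
    return int(np.sum(np.ceil(np.log2(np.abs(residuals) + 1)) + 2))

t = np.arange(44100) / 44100
tone = np.sin(2 * np.pi * 440 * t)
loud = (tone * 32000).astype(np.int16)
quiet = (tone * 1000).astype(np.int16)     # the same tone, roughly 30dB quieter

print(rough_bit_count(loud), rough_bit_count(quiet))
# the quiet version needs far fewer bits, yet summing its residuals back up
# reproduces its samples exactly: smaller file, identical information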
 
If this doesn't work, I must admit I'm out of ideas :P

 
Aug 26, 2016 at 7:28 PM Post #29 of 33
  @ OP. 
Talking about Vorbis or anything lossy is plain wrong here; you cannot compare lossy and lossless.
 
 
 
Finding a correlation between loudness and FLAC file size only tells you that FLAC isn't as efficient at compressing loud signals as it is at compressing quiet signals, and that's all you can conclude.
 
It's a language matter, not a resolution matter.
An analogy that would almost work: if I turn a number into the same number in Roman numerals, it might not take the same space in this post to write some values, but we can still extract the same meaning with 100% accuracy.
"1" would be "I" in roman number.
frown.gif
in this post they both occupy 1 space, the compression is sucky. now "100" will become "C" woot!!!! I just saved 2 spaces out of 3. do you conclude that 100 is of lower resolution because in roman numbers it use less space than in arabic? of course not so why do it for flac? a given value will take whatever storage space it needs in the language it uses. in english it's "a hundred", in french it's "cent", but the actual information is always there, if they were both samples for music, 1 or 100 would both tell us a position on a vertical axis and do so with both the accuracy of an integer. in what language they are express and whatever correlation with the occupied space you will find doesn't imply anything about resolution.
 
If this doesn't work, I must admit I'm out of ideas :P

 
I think that makes sense. 
 
What is the reason there is a negative dB limit in the Amplify function in Audacity? http://imgur.com/a/Be2Bz
I can't hit "OK" there; it's grayed out. When moving the slider the limit is shown to be at -50 dB. 
 
 
Edit: when I save the -50 dB file, then reopen it in Audacity, apply +50 dB amplification to bring it back, and listen to that, there is a hiss -- is that the noise floor being brought into the mix? 
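 
A rough numerical sketch of where that hiss could come from, assuming the quiet file was saved as plain 16-bit with no dither (the track's own noise floor gets boosted by 50 dB as well, and so does the quantization error added by the 16-bit save):

import numpy as np

t = np.arange(44100) / 44100
x = 0.9 * np.sin(2 * np.pi * 440 * t)            # a clean, nearly full-scale tone

attenuated = x * 10 ** (-50 / 20)                # Amplify by -50 dB
saved = np.round(attenuated * 32767).astype(np.int16)   # save as 16-bit, no dither
restored = (saved / 32767.0) * 10 ** (50 / 20)   # reopen and Amplify by +50 dB

error = restored - x
error_db = 20 * np.log10(np.sqrt(np.mean(error ** 2)) / np.sqrt(np.mean(x ** 2)))
print(round(float(error_db), 1))   # roughly -47: the added noise ends up only about
                                   # 47 dB below the signal, which is easily audible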
 
Aug 27, 2016 at 5:21 AM Post #30 of 33
I think with the word "volume" you might picture, at least subconsciously, a beaker with a liquid in it, right? Whichever beaker has more "volume" has more water.

 
Using that analogy, we have to imagine that the beaker has some pebbles in the bottom, which represent the noise. If we want to increase what the beaker contains, we can only increase everything already in the beaker equally, both the water and the pebbles; we can't increase the water only. If we do this we would "see" more water and more pebbles, but the ratio between them would remain roughly the same, although increasingly our method of raising the contents adds more pebbles than water!
 
Originally Posted by stalepie
 
But can't there just be increasing resolution (detail) in a narrow human-audible range (Nyquist frequency), the vocal range in particular? Why can't there be more and more detail captured there? 16-bits is so many samples. More samples doesn't equal more detail? Then why is the word "resolution" used? If you have a higher resolution digital photo it has more detail than the lower resolution version. Because it's more resolving.

 
Analogies with images are only useful up to a point; ultimately there are differences between audio and images which cause the analogy to break down. In this case, we have to change the analogy to be more representative of what actually happens with audio. So let's say the only visual images we can have are perfect circles. In the analogue world, what we do is measure the circumference of our circle continuously and reproduce those measurements. In digital the approach is entirely different! We start with a mathematical function which can only create perfect circles and nothing else. We then only need to measure a couple of points on our circle, because any circle created by our function which passes through our measured points must be a perfect recreation of the original circle. Measuring more points on our original circle which our function must pass through is not going to make any difference at all: our perfect circle is not going to be any more perfect, it's not going to have any more detail, because there is no more detail; we always have perfect circles! Too few points and we can't be sure we're going to get a circle of the same diameter or position, but more points than needed add nothing except more data to measure and store.
 
In digital audio we obviously aren't drawing perfect circles, we're "drawing" sine waves, but the principle is the same. You are making the classic mistake of applying analogue thinking to digital methodology: more resolution (more measurements and measurement accuracy) means more detail in analogue, but in digital it doesn't! Hence:
 
Quote:
  If you increase your audio sample rate from 1kHz to 2kHz, you hear more detail.
 
I think we need to be very careful with our wording here. Further to my last paragraph, we don't get any more "detail" if we up the sample rate from 1kHz to 2kHz; we get exactly the same amount of detail! The only difference is that we get that same amount of detail over a wider band of audio frequencies. If we have, say, a 300Hz audio signal, we don't get any more detail from a 2kHz sample rate than from a 1kHz sample rate, or indeed from any sample rate greater than 2kHz. What our 2kHz sample rate allows is that same detail/resolution on audio signals up to about 1,000Hz, rather than up to about 500Hz in the case of a 1kHz sample rate.
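 
One way to sanity-check that claim numerically: the sketch below uses SciPy's FFT-based resampler as an idealised stand-in for a DAC's reconstruction filter, with a 300Hz tone chosen to fit the analysis window exactly. Whether it was captured at 1kHz, 2kHz or 8kHz, the reconstructed waveform is the same to within floating-point noise:

import numpy as np
from scipy.signal import resample

f, dur, hi_rate = 300.0, 0.1, 48000
t_hi = np.arange(int(dur * hi_rate)) / hi_rate
truth = np.sin(2 * np.pi * f * t_hi)              # the "real" waveform at 48kHz

for rate in (1000, 2000, 8000):                   # capture the tone at 1, 2 and 8kHz
    t = np.arange(int(dur * rate)) / rate
    captured = np.sin(2 * np.pi * f * t)
    rebuilt = resample(captured, len(t_hi))       # band-limited reconstruction
    print(rate, np.max(np.abs(rebuilt - truth)))  # tiny error at every rate:
                                                  # the extra samples added no detail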
 
  [1] I think you need at least 20 bits to get certain sounds like the cannons in the 1812 Overture without compression. (http://imgur.com/a/CFf70)
[2] But what if a purely electronic artist wants to use 64-bits or more? Maybe he wants to make dance music for bats and elephants. He's not dealing with mics then anyway. You still have to deal with a noise floor with purely electronic music? Not from samples, but from tones generated by the synthesizer (like FM, not MIDI)?

 
1. Ultimately our limitation is still not in the digital domain but in the analogue and acoustic domains. Obviously it depends on the amount of acoustic energy the cannon produces and how far away from it we place our mic(s). Too close and we will break our mics, not to mention our amp/speakers and indeed our eardrums. In practice we don't want to distribute anything with more than about 60dB of dynamic range, which equates to about 10 bits of data.
 
2. In theory you could create 64bit signals inside a computer, but in practice you could only get a tiny fraction of those 64bits out of your computer (to listen to). Again, our limitation is still not the digital domain but the analogue and acoustic domains. Actually, in such an extreme example our primary limitation is the laws of physics! The loudest sound wave possible (in air) is 194dB, which equates to only about 32bits. And well before that, even 180dB (~30bits) is so much acoustic energy it will not just burst your eardrums but kill you. In practice, we can't make sound systems capable of reproducing a dynamic range anywhere even remotely close to just 20bits, let alone 64bits, and again, even a 60dB dynamic range is too great for comfort for the vast majority of consumers.
 
With a physical synth, rather than a virtual (software) synth, we are still limited by a physical noise floor. Obviously not an acoustic noise floor, but an analogue one: the (thermal) noise generated by the electronic components inside the synth. Bear in mind that even the thermal agitation of electrons in a single 10kOhm resistor will produce noise around the -138dB level (~23bits), and obviously a physical synth contains far more components than a single resistor! In practice, I'm not aware of any physical synth capable of a dynamic range greater than could be completely captured with 16bits.
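 
For reference, the bit figures above all come from the usual rule of thumb of roughly 6dB of dynamic range per bit for an ideal PCM quantiser:

import math

db_per_bit = 20 * math.log10(2)          # ~6.02 dB per doubling of amplitude

for db in (60, 138, 180, 194):
    print(db, "dB ->", round(db / db_per_bit), "bits")
# 60dB -> 10 bits, 138dB -> 23 bits, 180dB -> 30 bits, 194dB -> 32 bits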

 
G
 
