HiResSndWizzard
New Head-Fier
Joined: Jun 17, 2015 · Posts: 20 · Likes: 10
Hi all,
I want to learn everything about the how-to of oversampling versus software upsampling to recover, more or less, the original sound of a music file. It is definitely there, to some degree, in every file I have processed to date.
I have been told that you never lose anything by downsampling a file, only when you reduce the bit depth. Also, that 16 bits is enough depth to capture all of the music, at least the part you really want to hear below 10 kHz, easily storing up to 22 kHz with minimal apparent losses.
Where the resolution (the fine detail) is lost is in the conversion from the 24-bit master file down to the 16 bits of CD and MP3.
Is this detail really worth saving? For the record, this is about quantization rather than the sampling theorem: dynamic range drops by about 6 dB for every bit removed. Going from 24-bit to 20-bit divides the number of quantization levels by 16 (2^4), and going from 20-bit to 16-bit divides them by 16 again, a 256-fold reduction overall.
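As a quick sanity check on those factors (the bit depths are from the post; the arithmetic is just 2^N):

```python
# Quantization levels for each bit depth, and the reduction factor
# when bits are dropped (2^4 = 16 per 4 bits, 2^8 = 256 overall).
levels = {bits: 2 ** bits for bits in (16, 20, 24)}

print(levels[24] // levels[20])  # 16:  24-bit -> 20-bit
print(levels[20] // levels[16])  # 16:  20-bit -> 16-bit
print(levels[24] // levels[16])  # 256: 24-bit -> 16-bit
```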
Volume resolution suffers a severe loss of the original dynamic quality when you reduce the bit depth. Picture it as a staircase from the basement to the attic.
In linear PCM the steps are evenly spaced in amplitude, which makes them wildly uneven in decibels: down near the noise floor each step spans several dB, so you need a ladder, then a helicopter, to climb from one level to the next; up near the 0 dB ceiling the steps shrink to tiny fractions of a dB, and you take them on the run, hundreds at a time.
That is the 20·log10(2^N) rule: 65,536 volume levels in 16-bit, about 16.7 million in 24-bit. The count bottoms out around -96 dB for 16-bit (roughly -144 dB for 24-bit), so against an average noise floor of -84 dB, the bottom 12 dB of a 16-bit file is already buried in noise on the way up to 0 dB.
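A minimal stdlib sketch of the 6-dB-per-bit rule and the uneven dB spacing of linear PCM steps (the level numbers probed are my own illustrative picks):

```python
import math

def dynamic_range_db(bits):
    # Full-scale range of an N-bit linear PCM word, in dB:
    # 20*log10(2^N), roughly 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB
print(round(dynamic_range_db(24), 1))  # ~144.5 dB

def step_db(level):
    # dB gap between adjacent quantization levels `level` and `level + 1`.
    return 20 * math.log10((level + 1) / level)

print(round(step_db(1), 2))  # ~6.02 dB per step down near the floor
print(step_db(32767))        # a tiny fraction of a dB near full scale
```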
Going up in bit depth, can you dither? Well, yes. But you cannot restore what was never stored: near the upper frequency limit of the file, say the 22,050 to 24,000 Hz region, each cycle is represented by barely two samples, and padding the file from 16-bit to 24-bit with dither does not add that waveform detail back.
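Dither is easiest to see in the downward direction. Here is a hypothetical stdlib-only sketch of TPDF dither applied while quantizing to 16-bit; the 997 Hz test tone and the error bound are my own choices, not from the post:

```python
import math
import random

def quantize_with_tpdf_dither(x, bits, rng=random.Random(0)):
    # Quantize a sample in [-1, 1) to `bits` with triangular-PDF dither:
    # the dither decorrelates the quantization error, trading distortion
    # for a slightly higher but benign, noise-like floor.
    q = 2.0 ** -(bits - 1)                      # quantization step (1 LSB)
    dither = (rng.random() - rng.random()) * q  # TPDF, +-1 LSB peak
    return round((x + dither) / q) * q

samples = [math.sin(2 * math.pi * 997 * n / 44100) for n in range(1000)]
reduced = [quantize_with_tpdf_dither(s, 16) for s in samples]

# Worst-case error stays within 1.5 LSB of a 16-bit word.
err = max(abs(a - b) for a, b in zip(samples, reduced))
print(err <= 1.5 * 2 ** -15)  # True
```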
Researching the subject turned up some interesting points about this digital high-res revolution.
First, I found support for the statement above that nothing is lost in a sample-rate downconversion at equal bit depth (at least within the band the lower rate can carry).
The Nyquist theorem: it comes from the Bell Laboratories sampling-theorem work of Harry Nyquist and Claude Shannon, which says that frequencies present in a capture are reproduced without aliasing only if the sample rate is at least twice the highest captured frequency.
If there are not enough samples on playback, the highest frequencies in the file, everything above half the sample rate, are folded in reverse frequency order back into the spectrum of sound your ear hears.
I.e., at a 44.1 kHz rate, 23 kHz aliases to 21.1 kHz and 24 kHz aliases to 20.1 kHz (the alias lands at fs − f), for a file that started as 48 kHz/16-bit and was downsampled to 44.1 kHz/16-bit without filtering. Those frequencies, originally captured at 16-bit, are then reproduced into the upper limits of your audio as a form of musically related noise.
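The fold-back arithmetic is easy to check with a small helper (my own sketch, assuming ideal sampling with no anti-alias filter):

```python
def alias_frequency(f, fs):
    # Fold a frequency back into the baseband [0, fs/2], mirroring
    # anything above the Nyquist frequency fs/2.
    f = f % fs
    return fs - f if f > fs / 2 else f

fs = 44100
print(alias_frequency(23000, fs))  # 21100: appears 1.05 kHz below Nyquist
print(alias_frequency(24000, fs))  # 20100: higher input, lower alias
print(alias_frequency(21000, fs))  # 21000: below Nyquist, unchanged
```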
The aliasing mostly sounds like a loud room echo, or an increase in the "live" sound of the recording studio.
It is made up largely of ultrasonic sounds made audible when they are sampled below the Nyquist rate on playback.
Aliasing plays the role in digital that intermodulation distortion plays in analog: it degrades the reproduction quality of the original stereo sound image produced by your favorite speakers.
With upsampling alone, most software converters erase the aliases, provided an anti-alias filter is employed during the upsample. That is to say, you lose a small part of the resolution package hidden in the bit depth.
The Nyquist theorem again: if the upsampling software does not apply aliasing filters, then the high-frequency limit of the original recording can be reproduced by simply upsampling to at least twice the original capture rate of the music file.
Under some conditions I get a better result if I upsample from 44.1 kHz to 48 kHz first, without an aliasing filter on that step.
Then I upsample again from 48 kHz to 96 kHz, this time using the aliasing filter to remove everything above 24 kHz, which is generally the original frequency bandwidth of most music.
Therefore, when upsampling to twice the original rate, you don't want to add any aliases above the 24 kHz limit of the original music data in the 96 kHz file. Again, the Nyquist theorem.
When processed properly, you end up with a finished 96 kHz file whose data exists only below 24 kHz. Everything above that, including noise between 24 kHz and 48 kHz, is digitally suppressed down to around -144 dB or lower.
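To see why the filter on that second step matters, here is a hypothetical stdlib-only experiment (the 6 kHz tone, lengths, and DFT probe are my own choices): zero-stuffing a tone from 48 kHz to 96 kHz keeps the tone in place but also creates a mirror image at 48 − 6 = 42 kHz with the same energy, which is exactly the kind of content the 24 kHz filter removes.

```python
import cmath
import math

def dft_mag(x, f, fs):
    # Normalized magnitude of the DFT of x probed at frequency f,
    # computed as a plain correlation against a complex exponential.
    return abs(sum(s * cmath.exp(-2j * math.pi * f * k / fs)
                   for k, s in enumerate(x))) / len(x)

fs = 48000
x = [math.sin(2 * math.pi * 6000 * k / fs) for k in range(480)]

# Naive 2x upsample: insert a zero after every sample (rate is now 96 kHz).
y = []
for s in x:
    y.extend([s, 0.0])

print(dft_mag(y, 6000, 96000))   # ~0.25: the original tone survives
print(dft_mag(y, 42000, 96000))  # ~0.25: the unwanted ultrasonic image
```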
Does this all make a lot of sense (and cents) when it comes to selling or buying so-called high-res music files, when it is possible to recover much the same file, give or take a noise factor on the order of SNR ∝ √N, from a lowly 16-bit MP3?
I have achieved some amazing results, to say the least, using several methods of resolution recovery on just about any available file of 16-bit or better.
How has the tech world been doing it? Can it really be done well by anyone, considering that what is recovered by any means of upsampling (as well as the original master) is subject to about 3 dB more broadband noise when the recording input bandwidth doubles from 20 kHz to 40 kHz, if those frequencies are not suppressed during the original capture?
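For reference, under a flat-noise-density assumption the bandwidth penalty is a one-liner (the 20/40 kHz figures are from the post):

```python
import math

# Integrated noise power grows in proportion to bandwidth, so doubling
# the capture bandwidth from 20 kHz to 40 kHz costs 10*log10(2) dB,
# assuming the noise density is flat across the band.
penalty_db = 10 * math.log10(40_000 / 20_000)
print(round(penalty_db, 2))  # 3.01
```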
That refers to making the original master capture at a 96 kHz sample rate, with live capture of frequencies that bats would have trouble hearing: everything from 20 Hz to 48 kHz.
Which is pretty much saying that what you recover in resolution from the bit depth of MP3s, as well as from the market-proclaimed hi-res file, is, as they say, not much more than musical noise your pet bat might enjoy.
Oh, and don't forget the random ultrasonic noises picked up by the microphones around the recording studio, like antique ultrasonic denture cleaners and other sounds one would never imagine existed in a sound recording studio.
So my goal is recovery of the original eight or so octaves of the music found below 10 kHz, improving the resolution of that segment in the hi-res upgrade to 96 kHz, 24- or 32-bit. Octave 0 starts at C0, about 16.35 Hz; octave 8 runs from roughly 4.19 kHz up to 8.37 kHz.
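Counting octaves up from C0 ≈ 16.35 Hz (the standard scientific-pitch figure; the loop bounds are my own):

```python
C0 = 16.35  # Hz, octave 0 in scientific pitch notation

# Each octave doubles in frequency; list the spans through octave 9.
spans = {n: (C0 * 2 ** n, C0 * 2 ** (n + 1)) for n in range(10)}
for n, (lo, hi) in spans.items():
    print(f"octave {n}: {lo:8.1f} - {hi:8.1f} Hz")

# Octave 8 spans roughly 4.19-8.37 kHz, so about nine octaves of
# musical fundamentals fit below 10 kHz.
```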
Sorry to say it, but the greatest leap in quality of recorded music has to come from the studio engineers who ultimately create the sound contained in the music file.
What is your take on the best way to take what you already bought in your music collection and improve it with various sampling methods, rather than getting taken to the cleaners re-purchasing all of your favorite tracks as Pono downloads?
Sorry, Neil, for the slap-down of your new compression format. I'm not sure any high-res file sold at any price is worth any markup above what the average MP3 is worth.
In any answers to this thread, give me a rundown of how you apply the conversion, and with what hardware, software, and OS.
Thanks for your thoughts.