Head-Fi.org › Forums › Equipment Forums › Sound Science › Double blind test 128Kbps vs lossless? I'll be amazed if you can tell much difference

Double blind test 128Kbps vs lossless? I'll be amazed if you can tell much difference - Page 12

post #166 of 257
Quote:
Originally Posted by Anaxilus View Post

 

Ok, maybe I don't understand something basic here.  Disregarding the whole dynamic range issue for the moment.  The explanation of 24-bit sampling, as it was presented to me by certain audio companies about 6-7 years ago, was that one of the points of higher bit rates is to increase the resolution of a digital sample and get it closer to an analog curve.  If we imagine a normal analog curve with a peak and a valley, a 16-bit digital sample will divide that section into 16 distinct areas (rectangles), just like approximating the limit of a curve in calculus.  Everything not covered by the rectangles under the curve is lost audio data.  By sampling at 24 bits you get 24 rectangles, which is 50% more data collection for the same sample section.  Hence you create a smoother sampling with more resolution, closer to the original analog signal.  This sampling would apply throughout the entire sample regardless of the volume level or frequency in question, right?  I don't understand how the increased data or resolution attained throughout an entire song sampled at 24 bits only exists at a certain dB level.  What am I missing here?

 

By reading the following along with your previous post:

 

 

"A set of digital audio samples contains data that, when converted into an analog signal, provides the necessary information to reproduce the sound wave. In pulse-code modulation(PCM) sampling, the bit depth will limit quantities such as dynamic range and signal-to-noise ratio. The bit depth will not limit frequency range, which is limited by the sample rate.

By increasing the sampling bit depth, smaller fluctuations of the audio signal can be resolved (also referred to as an increase in dynamic range). The 'rule-of-thumb' relationship between bit depth and dynamic range is, for each 1-bit increase in bit depth, the dynamic range will increase by 6 dB (see Signal-to-noise ratio#Fixed point). 24-bit digital audio has a theoretical maximum dynamic range of 144 dB, compared to 96 dB for 16-bit; however, current digital audio converter technology is limited to dynamic ranges of about 120 dB (20-bit) because of 'real world' limitations in integrated circuit design.[1]"

 

It is clear that 24-bit extends the maximum dynamic range to 144 dB.  But that is the maximum.  I don't see anything that says increased resolution does not exist at any level below the maximum of 144 dB.  Are you saying that at 96 dB a 24-bit sample has the same amount of data and resolution as a 16-bit sample?


You're mostly right.  With 16 bit you can have 2^16 different levels of intensity, meaning the little rectangles drawn up like a Riemann sum have 2^16 different possible heights, and with 24 bit they have 2^24 possible heights.  You can have one, two, three and so on from 0 all the way up to 65535, but you can't go higher than that or lower than zero, and all steps have to be whole numbers.  One or two, there can't be a 1.5.  The minimum (no signal at all) and the maximum possible signal are determined by the quality and power of the equipment.  24 bit allows you to take finer steps, each 1/256th of one 16-bit step.  Incidentally, the width of each rectangle is determined by the sample rate.  This extra data does capture a more accurate, true-to-life approximation of the actual sound, beyond the increased dynamic range.  The real question is how much of this data is audible to humans.
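The level counts above are easy to verify. A quick sketch (my own illustration, not from the post):

```python
# 16-bit vs 24-bit PCM: number of quantization levels and relative step size.

levels_16 = 2 ** 16   # 65536 possible sample values (0..65535)
levels_24 = 2 ** 24   # 16777216 possible sample values

# Over the same full-scale range, each 16-bit step is subdivided into
# 2^(24-16) = 256 finer 24-bit steps.
steps_per_16bit_step = levels_24 // levels_16

print(levels_16, levels_24, steps_per_16bit_step)  # 65536 16777216 256
```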

 


post #167 of 257

Ah gotcha!  Bit depth is based on Amplitude and not frequency sampling.  Damn those misrepresented charts!  So the resolution I was referring to and hearing in DVD audio samples was the 192khz sampling, nothing (or little) to do w/ 24 bit depth.  Appreciate the edumacation.

 


Edited by Anaxilus - 7/16/10 at 10:28pm
post #168 of 257

 

Quote:
Originally Posted by Ham Sandwich View Post

 


The bit depth tells you the potential dynamic range.  How many rectangles you can stack vertically under the loudest possible sound.  In 16 bit audio you can stick 16 rectangles under the loudest possible sound of 96 dB.  In 24 bit audio you can stick 24 rectangles under the loudest possible sound of 144 dB.  In each case each bit represents about 6 dB.  So the height of a rectangle in 16 bit or 24 bit audio is the same.  No difference in resolution there.  Just a difference in how loud you can go. 

 

The sampling rate tells you how wide each rectangle is.

The bit depth tells you how loud you can go, it does not change how tall each rectangle is.

 

 

I didn't explain that as well as I would like.  It all gets more complicated the deeper you are willing to go.  Each bit does give you about 6 dB of dynamic range.  So in that sense each bit is the same height in 24 bit and 16 bit.  But 16 bits gives you values of 0 to 65535 spread out from 0 to 96 dB.  24 bits gives you values of 0 to 16777215 spread out from 0 to 144 dB.  So the height of each rectangle in 24-bit audio is smaller, finer.  In that sense 24 bits gives you more resolution of dynamic range.  Is that higher resolution of dynamic range audible once everything is sampled and then converted back to analog for listening?  Sound Science would say the answer to that question is no.
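The 6-dB-per-bit rule of thumb quoted earlier falls straight out of the math: doubling the number of levels adds 20*log10(2) ≈ 6.02 dB. A quick sketch (mine, not from the thread):

```python
import math

def dynamic_range_db(bits):
    # Theoretical dynamic range of n-bit PCM: 20 * log10(2^bits),
    # i.e. about 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```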

post #169 of 257

See this thread for the discussion of 16/44.1 vs. 24/96.  I don't think it'd be a good idea to take this thread so far off-topic. 

post #170 of 257


 

Quote:

 

Also, I have a hard time believing you can distinguish 24 and 16 bit files of the same master. Did you downsample the 24 bit file to 16 bit (with dither), or did you use a different 16 bit source? There was another discussion thread where it was concluded most (if not all) of the audible difference is because of the mastering, not the bit depth.


I did do a blind listening test.  But I suspect you won't believe me until you're the one running it.  So you'll just have to accept my word or not.  That's as close to proof as I can give.  I know the masters were the same because I made them.  I was mastering the song.

   
Here's the thing.  Dithering involves noise shaping, which does modify the original signal.  And with enough listening, one can actually learn to hear it.  I made it my business to do so because I had to decide which noise shaping algorithm to use.  Mind you, in a regular listening environment, I would never be sensitive enough to pick out distortion from noise shaping.  It's ridiculously subtle.


So when I claim to be able to distinguish the two files, I'm not hearing differences in the actual dynamic range.  96 dB is more than enough for anything I've ever made.  I'm just picking out the noise shaping.  That's the very, VERY minor problem with 16-bit files.  You can't get them without noise shaping, or worse, truncating.
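The truncation-vs-dither distinction above can be sketched in a few lines. This is my own simplified illustration (plain TPDF dither, no noise shaping, no clipping guard), not the poster's actual mastering chain:

```python
import random

def truncate_to_16bit(sample_24):
    # Drop the low 8 bits outright: the rounding error correlates with
    # the signal and shows up as quantization distortion.
    return sample_24 >> 8

def dither_to_16bit(sample_24, rng=random):
    # TPDF dither: add triangular noise of +/- one 16-bit LSB before
    # rounding, which decorrelates the error (a steady noise floor
    # instead of distortion).
    noise = rng.random() - rng.random()  # triangular PDF on (-1, 1)
    return round(sample_24 / 256 + noise)

print(truncate_to_16bit(512))   # 2
print(dither_to_16bit(512))     # 1, 2, or 3 depending on the noise
```

Noise-shaping algorithms go one step further and push that dither noise toward frequencies where the ear is least sensitive, which is the coloration the poster says he trained himself to hear.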

post #171 of 257


 

Quote:
Originally Posted by chengbin View Post




Just one thing about your test. iTunes' MP3 encoder probably sucks. Use LAME. Foobar2000 has an internal GUI for LAME that makes LAME MP3 encoding very easy (just make sure not to use VBR new (fast mode)).

 

I can tell a difference between 96Kbps and 128Kbps, but not 128Kbps vs >128Kbps 


Well at this point I'm going to turn the tables on you: take a track, encode it with iTunes at 128Kbps, then encode the same track in Foobar with LAME at 128Kbps and do a double blind. Let me know how you performed.

post #172 of 257

Is that a recent recommendation? Two or three years ago, VBR new was made the default because its quality was better than VBR old.

post #173 of 257

I've discovered rubbish music sounds way better at 0 kbps.

post #174 of 257
I noticed that they are both 1411 kbps. MediaMonkey displays the bitrate.
post #175 of 257
Quote:
Originally Posted by Olli1324 View Post

I noticed that they are both 1411 kbps. MediaMonkey displays the bitrate.


Yes, they're both .wav files. But one is a lossless rip of the original, and the other is a 128kbps mp3 that has been converted to a .wav so that people can't tell the difference just by looking at the file type, reported bitrate, or file size.
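The 1411 kbps figure is just the raw PCM bitrate of CD-quality audio, which any 16-bit/44.1 kHz stereo .wav will report regardless of what it was converted from. A quick check (my own arithmetic, not from the post):

```python
# Raw PCM bitrate of CD-quality audio: sample rate x bit depth x channels.
sample_rate = 44100   # Hz
bit_depth = 16        # bits per sample
channels = 2          # stereo

bits_per_second = sample_rate * bit_depth * channels
print(bits_per_second // 1000)  # 1411 kbps -- same for both .wav files
```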

post #176 of 257

Ah, I didn't realise that the 128 converted to wav would have come up as 1411.

 

Anyway, I couldn't tell a difference at all.

post #177 of 257
Thread Starter 
Quote:
Originally Posted by dsf3g View Post


 


Well at this point I'm going to turn the tables on you: take a track, encode it with iTunes at 128Kbps, then encode the same track in Foobar with LAME at 128Kbps and do a double blind. Let me know how you performed.

 

OK, since I created this test, I've gotta take this challenge. I downloaded iTunes, enabled the highest-quality VBR, encoded the same song used here at 128Kbps, and checked whether I could hear a difference.

 

Double blind test is the ****. I couldn't tell a difference. 

 

Sorry for the misinformation earlier.

 

P.S. People say iTunes is so user friendly and stuff, but I found it one of the hardest pieces of software to use (I think the only software that beats iTunes in difficulty is Audacity and Photoshop). It took a little while to figure out how to import a song. It took another while to find the convert button. Then it took a really long while to find how to change the default conversion from AAC to MP3, so long that I gave up and looked for a guide on Google. Then finally, when I converted it, I realized I didn't know where iTunes saved the file, and it took some searching to find that as well. I consider myself to be VERY tech savvy, and I can't believe iTunes' UI is this horrifically bad.


Edited by chengbin - 7/21/10 at 7:42pm
post #178 of 257
Quote:
Originally Posted by chengbin View Post

 

P.S. People say iTunes is so user friendly and stuff, but I found it one of the hardest pieces of software to use (I think the only software that beats iTunes in difficulty is Audacity and Photoshop). It took a little while to figure out how to import a song. It took another while to find the convert button. Then it took a really long while to find how to change the default conversion from AAC to MP3, so long that I gave up and looked for a guide on Google. Then finally, when I converted it, I realized I didn't know where iTunes saved the file, and it took some searching to find that as well. I consider myself to be VERY tech savvy, and I can't believe iTunes' UI is this horrifically bad.

 

Lol, as Apple would say, there's an app, err....thread for that.    iTunes was the straw that broke Apple's back for me.
 

post #179 of 257

I just converted all of my FLAC to MP3... Bet you'll never guess why?

post #180 of 257
Quote:
Originally Posted by chengbin View Post

P.S. People say iTunes is so user friendly and stuff, but I found it one of the hardest pieces of software to use (I think the only software that beats iTunes in difficulty is Audacity and Photoshop). It took a little while to figure out how to import a song. It took another while to find the convert button. Then it took a really long while to find how to change the default conversion from AAC to MP3, so long that I gave up and looked for a guide on Google. Then finally, when I converted it, I realized I didn't know where iTunes saved the file, and it took some searching to find that as well. I consider myself to be VERY tech savvy, and I can't believe iTunes' UI is this horrifically bad.


While others, like me, find iTunes very user friendly and easy to find their way around.

Our mileage certainly varies! :)
