Head-Fi.org › Forums › Equipment Forums › Computer Audio › Reasons I don't hear a difference between 128kbps, 256kbps, and ALAC??

Reasons I don't hear a difference between 128kbps, 256kbps, and ALAC?? - Page 3

post #31 of 43
Quote:
Originally Posted by skamp View Post

It would be stupid to dismiss a lossy codec solely on the ground that it incurs slight variations in volume, if that can be fixed with Replaygain.

Also, I actually apply replaygain while transcoding, so that it works on devices/players that don't support Replaygain, like the iPod Classic's original firmware. Are those files audibly different from their source? Obviously they are, since I've applied large negative gains to them! Does it mean that they're not transparent? Absolutely not, it's just a matter of adjusting the volume.

 

Hmm, I am actually trying to say that the difference in Replaygain values resulting from transcoding doesn't come out of nowhere; it's there because there are encoder artifacts... it is not a case of simply applying a flat gain across the entire song, as in your example.

Or in simpler words: since it's not the result of the lossy encoder applying a flat gain across the entire song, it cannot be undone by simply applying another flat gain.
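To illustrate the distinction (a quick numpy sketch, not anything an encoder actually does): a flat gain can be inverted exactly, but a sample-by-sample deviation cannot be undone by any single flat gain:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(48000)                 # one second of "audio"

# A flat (global) gain is perfectly undone by the inverse flat gain.
flat = x * 0.7
assert np.allclose(flat / 0.7, x)

# A sample-by-sample deviation (a stand-in for encoder artifacts) is
# NOT undone by any single flat gain, not even the best-fitting one.
envelope = 1.0 + 0.1 * np.sin(np.linspace(0, 20 * np.pi, x.size))
varying = x * envelope
best_flat = np.dot(varying, x) / np.dot(x, x)  # least-squares flat gain
residual = varying / best_flat - x
print(np.max(np.abs(residual)))                # clearly non-zero
```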

 

 

Quote:
Originally Posted by stv014 View Post

 

That is not true, especially at low bit rates the level is often attenuated by a small amount (~0.5 dB). Such difference is already enough for a positive ABX result, even if the files would contain otherwise identical sound. For a fair comparison, the louder file needs to be attenuated on playback to match the levels; with 24-bit output resolution being available on any decent DAC and even onboard HDA codecs, the effect of this on sound quality is negligible compared to that of lossy compression.

 

Is there any support/proof for this? I've never heard that an encoder is supposed to purposely alter the volume automatically...

post #32 of 43
Quote:
Originally Posted by spaark View Post

Quote:
Originally Posted by Jaywalk3r View Post

No, that's not correct.


That the quantity of data is reduced does not imply that sound is missing, i.e., no longer producing SPL. At least for audible frequencies, much of the "missing" data is estimated and replaced during playback. Compared to the original data, the estimations inevitably have errors, but we should be careful not to assume that the error always results in the lossy passage being quieter than the original. It's just different. If we wish to determine whether or not that difference is audible, we must level match for our ABX tests.
At certain bitrates, lossy encoding can lowpass-filter the sound, so he isn't wrong. At 128 kbps for MP3, the cutoff frequency could be 17 kHz (this appears to be the case with LAME; see http://wiki.hydrogenaudio.org/index.php?title=LAME#Recommended_settings_details).

For many/most people, 17 kHz isn't an audible frequency, which is why such low-bitrate encoding may filter off frequencies that "low." That's not inconsistent with what I wrote.

Much of audio compression theory is about how to make small changes to the data matrix representing the audio track so that certain mathematical properties are guaranteed to hold, properties that allow the modified matrix to be significantly compressed with no further data loss. Consequently, we cannot say that an MP3 file that is only 1/10 the size of the WAV file from which it was transcoded contains only ten percent of the original data. Some of the original data IS lost, but much of that compression comes from the mathematical methods used in the compression process rather than from data being discarded.
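Here's a toy illustration of the transform-and-quantise idea (assuming scipy is available; a real MP3 encoder uses an MDCT, a psychoacoustic model, and Huffman coding, so this is only a sketch of the principle):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)            # one block of "audio" samples

# Transform to the frequency domain (real codecs use an MDCT).
coeffs = dct(x, norm='ortho')

# Coarse quantisation: this is where information is actually discarded.
step = 0.5
quantised = np.round(coeffs / step) * step

# The quantised coefficients take few distinct values, so an entropy
# coder (e.g. Huffman) can then store them losslessly in far fewer bits.
reconstructed = idct(quantised, norm='ortho')

# The reconstruction differs from the original, but the error is not a
# uniform attenuation -- it is roughly zero-mean, i.e. "just different".
error = x - reconstructed
print(abs(np.mean(error)) < 0.05, np.max(np.abs(error)) > 0)
```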
post #33 of 43
Quote:
Originally Posted by kn19h7 View Post
Is there any support/proof for this? I've never heard that an encoder is supposed to purposely alter the volume automatically...

 

You can easily test it: encode some track with LAME at 128 kbps, and then decode the resulting MP3 file. The decoded WAV will be consistently about half a dB quieter than the original. Even if the source WAV contains only a 1 kHz sine wave (something that should not be too hard to encode without artifacts, even at 128 kbps), the decoded version has lower RMS level by ~0.45 dB or ~5 %.
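If you want to reproduce the measurement, something like this works (assuming numpy; the LAME round-trip itself is done on the command line, so the decoded file is simulated here with a scaled sine):

```python
import numpy as np

def rms_db_difference(a, b):
    """Return the RMS level of `a` relative to `b`, in dB."""
    rms = lambda s: np.sqrt(np.mean(np.square(s)))
    return 20 * np.log10(rms(a) / rms(b))

# In practice you would load original.wav and the LAME round-trip
# (lame -b 128 original.wav; lame --decode original.mp3 decoded.wav)
# and compare those. Here we simulate a decode that is 0.45 dB quieter.
fs = 44100
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 1000 * t)
decoded = original * 10 ** (-0.45 / 20)

print(round(rms_db_difference(decoded, original), 2))   # -0.45
```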


Edited by stv014 - 1/22/13 at 10:08am
post #34 of 43
Very interesting topic and something that I've always wondered about too. Oftentimes, I find it difficult to hear the difference between Apple Lossless and a 256 kbps AAC file. The only difference I could pin down came down to how good or bad the source was. If the original was recorded well, the compressed versions sounded good too, and vice versa with badly recorded material.
post #35 of 43
What we want to eliminate with Replaygain is global gain (volume) changes. If volume varies from the source throughout the song, then yes that would be a lossy encoding artifact that may be detected in an ABX session. But I'm not aware of that happening, and in any case, it's not a reason not to use Replaygain when ABXing.
post #36 of 43
Quote:
Originally Posted by stv014 View Post

 

You can easily test it: encode some track with LAME at 128 kbps, and then decode the resulting MP3 file. The decoded WAV will be consistently about half a dB quieter than the original. Even if the source WAV contains only a 1 kHz sine wave (something that should not be too hard to encode without artifacts, even at 128 kbps), the decoded version has lower RMS level by ~0.45 dB or ~5 %.

Hmm, if the encoder did apply a fixed gain associated with the bitrate profile, then applying the exact reverse of that gain (not any other gain value) during a blind test seems reasonable.

 

But I am still against the idea of enabling Replaygain in a blind test when the encoder did not apply a global gain (which I believe should be the case at high bitrates); it's like messing things up...

post #37 of 43
Quote:
Originally Posted by kn19h7 View Post

I am still against the idea of enabling Replaygain in a blind test when the encoder did not apply a global gain (which I believe should be the case at high bitrates); it's like messing things up...

If you want the ABX test to provide meaningful results with respect to audio transparency, or lack thereof, of the codec, volume matching is mandatory.
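For the curious, converting a measured level difference (e.g. the ~0.45 dB from the LAME example above) into the attenuation a player applies is one line, and the cost in resolution is tiny (plain Python sketch):

```python
import math

# Linear gain factor corresponding to a -0.45 dB level difference.
db = -0.45
linear = 10 ** (db / 20)
print(round(linear, 4))            # 0.9495

# Attenuating by 0.45 dB costs well under one bit of resolution, so a
# 24-bit output path loses essentially nothing by level matching.
bits_lost = -math.log2(linear)
print(round(bits_lost, 3))         # 0.075
```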
post #38 of 43

Related to the whole hearing-differences thing, here's a neat test by Pioneer that you can do. The downside is that it requires Facebook: https://www.facebook.com/pioneer.electronics/app_328245200605088?ref=ts

 

When I did it, I thought it was easy up to around level 6; after that it became hard. I did it twice and failed at the 9th and the 12th song out of 15.

post #39 of 43

What a fun topic. My experiments began with The Rolling Stones tune Love In Vain. I have this tune on an original LP mastered in analog. I made a CD of this tune and put it in my iTunes library as a WAV file. I also downloaded the tune in high-res FLAC from the HDtracks store, as well as at 256k from the iTunes store. From various media players I played every version of the tune and found I could not distinguish between any of the files. I made a compilation CD of all the files and let one of my friends try to distinguish between them. He had no problem discerning the original analog recording from all the digital files. I suspected that he could do that by listening for the pops and crackles present on the analog recording. At any rate, I'm convinced I cannot hear any difference between 256k and hi-res. Nevertheless, I am a big fan of SACD. It's physical and I like that for a multitude of reasons.


Edited by sterling1 - 1/23/13 at 5:48am
post #40 of 43
Thread Starter 
Interesting stuff. I did the mp3ornot test and got 6 for 6 (I didn't know why I picked any particular one, though; I just thought X sounded slightly closer to A or B), but then again this could easily be down to a slight volume difference.
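For what it's worth, 6/6 by pure guessing would only happen about 1.6% of the time:

```python
# Chance of 6 correct out of 6 by guessing alone (fair 50/50 odds).
p_perfect = 0.5 ** 6
print(p_perfect)        # 0.015625, i.e. about 1.6 %
```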
post #41 of 43
Quote:
Originally Posted by lorriman View Post


Wouldn't that be due to the amping more than the dac?

I used the FiiO with USB (DAC and amp) and with the 3.5 mm jack (no DAC, but still the amp).

So in this case, no.

post #42 of 43

Neat test. I found some tracks tricky because there was no reference sample to compare the compressed audio against. I got fooled by one track: I thought the original was too sibilant and the compressed one sounded cleaner. It just shows how this comes down to what pleases people's ears. The track was Night Lover by Eppu.
 

post #43 of 43

Don't want to resurrect a dead thread or anything, but I was curious about this as well.
I used mp3ornot.com and ran two separate tests:

1. Chrome > Fiio e10 dac/amp > Denon D7000
2. Chrome > Fiio e10 dac/amp > LCD-2 (rev 2.2) (I'm going to anticipate some criticism of this setup, but I formerly ran the LCD with a Schiit Lyr, and I found the difference between the e10 and the Lyr, to my surprise, wasn't astronomical.)

Regardless, these are the setups.

I scored ~66% after 20 rounds, with both setups! 

I was honestly quite surprised with this result, since I expected the LCD-2 to be quite a bit more revealing. To test whether the amp was a bottleneck, I repeated the setups using only my PC's audio out, and scored similarly again with both headphones.


I repeated this test with some of my own files, ripped at 128 kbps and in lossless ALAC. Here, I was able to pick out the differences with 100% accuracy.
It is interesting to me that the jump from 128 to 256 is nearly inaudible (as my score is just slightly above the 50% chance baseline), while the jump from 128 to lossless is clearly audible. 
Paradoxically, I repeated the tests with 256/lossless, and did not get a perfect score (my own tests shook out at ~75% under these conditions). 

Illuminating stuff. 
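Out of curiosity, the chance of those scores arising from pure guessing can be worked out (assuming 20 rounds and fair coin-flip odds, which may not match mp3ornot exactly):

```python
from math import comb

def p_at_least(successes, trials):
    """Chance of scoring `successes` or more out of `trials` by guessing."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

# ~66 % of 20 rounds is about 13 hits; ~75 % is about 15.
print(round(p_at_least(13, 20), 3))   # 0.132 -- quite plausibly chance
print(round(p_at_least(15, 20), 3))   # 0.021 -- much less likely to be chance
```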





 
