Can you tell the difference between FLAC, MP3 and WAV?
Oct 31, 2011 at 6:32 PM Post #61 of 73
Quote:
Do you have some of your blind test results handy?



I didn't keep a spreadsheet or any permanent digital record of the results.  I spent a good hour or so on it with the help of my girlfriend.  I used both my D5000 and mini-monitors from my HDP for the testing.
 
All the MP3s were created from the FLAC files with foobar2k, or ripped from the CD using CDex/LAME, and compared to the actual CD played from a Samsung WriteMaster optical drive.
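(For anyone who wants to reproduce that kind of test file outside foobar2000 or CDex, here is a minimal sketch of the same FLAC-to-320kbps-MP3 route on the command line, driven from Python. It assumes the flac and lame tools are installed and on the PATH, and the filenames are just placeholders, not the exact settings used in this thread.)

import subprocess

def flac_to_mp3(flac_path, mp3_path):
    # Decode the FLAC to an intermediate WAV (lossless step).
    wav_path = flac_path.rsplit(".", 1)[0] + ".wav"
    subprocess.run(["flac", "-d", "-f", flac_path, "-o", wav_path], check=True)
    # Encode the WAV to a 320 kbps CBR MP3 with LAME (lossy step).
    subprocess.run(["lame", "-b", "320", wav_path, mp3_path], check=True)

flac_to_mp3("input.flac", "output.mp3")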
 
Oct 31, 2011 at 8:32 PM Post #62 of 73
Quote:
Originally Posted by fallingreason
 
I didn't keep a spreadsheet or any permanent digital record of the results.  I spent a good hour or so on it with the help of my girlfriend.  I used both my D5000 and mini-monitors from my HDP for the testing.
 
All the MP3s were created from the FLAC files with foobar2k, or ripped from the CD using CDex/LAME, and compared to the actual CD played from a Samsung WriteMaster optical drive.

 
Try comparing the FLAC to mp3 using foobar's abx comparator plugin, it automatically creates a text file which you can just post the results here easily. It'll take less than 10 minutes if you pick a song that shows a relatively big difference.
 
Nov 7, 2011 at 1:56 PM Post #64 of 73
Quote:
I can easily tell the difference between FLAC and WAV.

 
Well, so can I, but we don't mean by looking at the bitrate ;)

 
You wouldn't mind doing some ABX tests of course. Right?
 
Nov 7, 2011 at 2:02 PM Post #65 of 73
I'd recommend you work on your internet sarcasm. Make use of italics, smiley faces (feel free to drop these if you want it to sound dry), a clearly exaggerated writing style, and excessively silly adjectives and adverbs.
 
You see, I'm something of a master of textual sarcasm, as I'm sure you are now aware :P

 
Nov 7, 2011 at 2:15 PM Post #66 of 73
You could have tried something like "I can easily tell the difference between FLAC and WAV. It's so obvious. They're definitely not the same at all. Nope :rolleyes:" But then some objectivist will probably call you out on it anyway.
 
What don't you like about ABX tests? They're the same as any sighted test, just without the sight.
 
Nov 13, 2011 at 5:08 PM Post #67 of 73
As requested.... BTW, the % next to the score is the chance of getting at least that many trials right just by guessing.
 
Quote:
 
Try comparing the FLAC to mp3 using foobar's abx comparator plugin, it automatically creates a text file which you can just post the results here easily. It'll take less than 10 minutes if you pick a song that shows a relatively big difference.

 
     foo_abx 1.3.4 report
foobar2000 v1.1.6
2011/11/13 13:44:56
 
File A: C:\Users\*****\Downloads\Brian McKnight\1997 - Anytime\01. Anytime.flac
File B: C:\Users\*****\Desktop\Anytime.mp3
 
13:44:56 : Test started.
13:45:53 : Trial reset.
13:48:07 : 01/01  50.0%
13:48:59 : 02/02  25.0%
13:50:25 : 02/03  50.0%
13:50:29 : Trial reset.
13:51:33 : 01/01  50.0%
13:52:21 : 02/02  25.0%
13:54:04 : 03/03  12.5%
13:55:21 : 03/04  31.3%
13:56:14 : 03/05  50.0%
13:56:49 : 04/06  34.4%
13:57:01 : Trial reset.
13:57:10 : 00/01  100.0%
13:57:12 : Trial reset.
13:57:35 : 00/01  100.0%
 
13:57:37 : Trial reset.
13:57:49 : 01/01  50.0%
13:57:59 : 02/02  25.0%
13:58:08 : 03/03  12.5%
13:58:16 : 04/04  6.3%
13:58:26 : 05/05  3.1%
13:58:33 : 06/06  1.6%
13:58:42 : 07/07  0.8%
13:59:00 : 08/08  0.4%
13:59:34 : 09/09  0.2%
13:59:52 : 10/10  0.1%
14:01:43 : 10/11  0.6%
14:02:12 : Test finished.
 
 ---------- 
Total: 16/22 (2.6%)
 
 
Used Brian McKnight - Anytime, FLAC 16/48, and converted that file to LAME MP3 320kbps. Did a couple of trials playing around with the plugin (first time using it), then listened intently on the last trial and got 10/10 (the final run in the log), then missed one. The difference is subtle, but if you know what you're listening for it's unmistakable. The music sounds more analog and real, and there is more space in it.
 
I was using S/PDIF from the motherboard to the NuForce HDP to the D5000. I was not using WASAPI.
 
_______________________________________________________
Edit: notice the timing between the ABX selections; it doesn't take long to tell which is which.
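A side note on the statistics: the percentages in the foo_abx log line up with the one-sided binomial tail, i.e. the chance of getting at least that many trials right by flipping a coin. A quick Python check, just to illustrate the statistic (not foo_abx's actual code), reproduces the figures above:

from math import comb

def guess_probability(correct, trials):
    # Chance of getting at least `correct` right out of `trials` by pure guessing (p = 0.5 per trial).
    tail = sum(comb(trials, k) for k in range(correct, trials + 1))
    return tail / 2 ** trials

for correct, trials in [(10, 10), (10, 11), (16, 22)]:
    print(f"{correct}/{trials}: {guess_probability(correct, trials):.1%}")
# prints 0.1%, 0.6% and 2.6%, matching the entries in the log above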
 
 
Nov 15, 2011 at 8:45 AM Post #69 of 73
WAV is uncompressed lossless audio. FLAC is a container format that compresses audio losslessly, but it takes a bit of CPU power to decompress, so many MP3 players don't support it. MP3 is lossy compressed audio.
 
There is a huge difference between MP3 and lossless on a spectrum analyser (you would see the frequencies above 16kHz and below 50Hz rolled off), but the difference is very small to your own ears if the encoder used is decent enough.
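If anyone wants to see that rolloff for themselves, here is a rough sketch of the check (not any particular analyser's method). It assumes both the lossless track and the MP3 have already been decoded to WAV at the same sample rate, NumPy/SciPy are installed, and the filenames are placeholders.

import numpy as np
from scipy.io import wavfile

def energy_above(path, cutoff_hz=16000.0):
    # Fraction of total spectral energy sitting above cutoff_hz.
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)  # mix stereo down to mono
    spectrum = np.abs(np.fft.rfft(data.astype(np.float64)))
    freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)
    return float(np.sum(spectrum[freqs >= cutoff_hz] ** 2) / np.sum(spectrum ** 2))

for name in ("lossless.wav", "mp3_decoded.wav"):
    print(name, energy_above(name))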
 
Nov 24, 2011 at 7:10 PM Post #70 of 73
I reckon the reason the soundstage is diminished in MP3 vs FLAC is the loss of the "air": frequencies above 16-18kHz. The ear and brain use many tricks to position things around you, and two sounds that are only ever so slightly offset left-right need a higher frequency to be differentiable. It comes down to the phase offset of the sound at the two ears: if a sound is closer to one ear than the other, the waveform is shifted forward or back relative to the other ear's, so the rise of the wave entering the closer ear happens before the rise at the other. If this is a 100Hz sound like a subwoofer, there is not much difference between the two waves at the ears when the sound source is moved a few degrees over, because the waves are so long. This is why sub position in a room doesn't matter much, e.g. you don't hear that there is only one sub on the left side. Positioning a guitar, especially its rotation (the head resonating at high frequencies and the heavier body resonating lower), needs very high-frequency content for you to make out whether it is rotating in the guitarist's grasp or moving a few degrees over with each strum. High frequencies have a lot to do with imaging, and there is no limit to how high you need the content to go; it simply increases the spatial resolution of the soundstage.
 
The wavelength of a 20kHz wave in normal air is about 17mm. That means when you turn your head so that one ear is 1mm closer to the sound source (visualize this for a moment: moving one ear 100mm closer to the source turns your world about 45 degrees, so 1mm is roughly half a degree of rotation), your ears and brain have to detect that the peaks (or zero crossings, or any other identifiable point) of the wave at each ear are shifted by about 3 microseconds. For a 20,000Hz wave, this means one ear gets the peak of the wave (100% of its level) while the other is approaching or dropping away from the peak at about 93%. I am not sure whether the brain and ears can sense that difference accurately in time. If the source is not half a degree but two degrees off centre, the wave is at zero in one ear and at +100% in the other, which is more realistic. And if it were eight degrees off centre, the peaks would hit both ears at the same time; this is why sine waves are difficult to localize in space. For real-world sounds, the crude positioning is done with lower frequencies, where there is an offset, and finer and finer positioning with higher-frequency content.
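For what it's worth, the arithmetic above checks out if you take the speed of sound as roughly 343 m/s; here is a quick sketch of the calculation:

import math

c = 343.0          # speed of sound in air, m/s (assumed value)
f = 20_000.0       # tone frequency, Hz
delta_d = 0.001    # one ear 1 mm closer to the source, in metres

wavelength = c / f                     # ~0.017 m, i.e. about 17 mm
delay = delta_d / c                    # ~2.9e-6 s, about 3 microseconds
phase = 2 * math.pi * f * delay        # ~0.37 rad, about 21 degrees
relative_level = math.cos(phase)       # ~0.93, the "93%" figure above

print(f"wavelength = {wavelength * 1000:.1f} mm")
print(f"interaural delay = {delay * 1e6:.1f} us")
print(f"level at the far ear relative to the peak = {relative_level:.0%}")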
 
The sampling rate, or rather the temporal accuracy/resolution of the brain areas processing this phase difference, limits the spatial resolution of your environment (or of a recording). So either this high-frequency content has to exist to give higher spatial resolution, or your brain and ears have to be super sensitive to a one-percent difference in wave height between the two ears for a lower-frequency sound at any instant in time.
 
Nov 26, 2011 at 2:00 AM Post #71 of 73


Quote:
I reckon the reason the soundstage is diminished in MP3 vs FLAC is the loss of the "air": frequencies above 16-18kHz. [...]



wow chim, but thanks!
 
Nov 29, 2011 at 9:26 AM Post #73 of 73

Quote:
I reckon the reason the soundstage is diminished in MP3 vs FLAC is the loss of the "air": frequencies above 16-18kHz. The ear and brain use many tricks to position things around you, and two sounds that are only ever so slightly offset left-right need a higher frequency to be differentiable. It comes down to the phase offset of the sound at the two ears [...]


 
No. Frequencies above roughly 1-4 kHz (I forget the exact crossover, and there is some overlap) are localised using differences in volume between the two ears. Below that, it is the phase difference, as you have described.
 

Quote:
Well, this is why sine waves are difficult to localize in space.

 
Speak for yourself. Okay, I didn't mean that, I'm sure your hearing is fine. But seriously, I don't think you have even bothered to test your theory, which seems a shame because you have obviously put some time into thinking it through. All it requires is generating a high-frequency sine wave (anything above 10kHz) and listening through some speakers.
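If anyone wants to try that, here is a minimal sketch that writes a few seconds of a 12kHz tone to a WAV file using only Python's standard library; the frequency, duration and filename are arbitrary choices, and the level is kept low to spare your ears and tweeters.

import math
import struct
import wave

RATE = 44100        # samples per second
FREQ = 12_000.0     # tone frequency in Hz (above 10 kHz, as suggested)
SECONDS = 5
AMPLITUDE = 0.3     # well below full scale

with wave.open("sine_12khz.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(RATE)
    for n in range(RATE * SECONDS):
        sample = AMPLITUDE * math.sin(2 * math.pi * FREQ * n / RATE)
        w.writeframes(struct.pack("<h", int(sample * 32767)))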
 
