Quote: Originally Posted by hybris
Anyone that can hear "massive differences" between 192kbps and 320kbps (assuming a good quality encoder) . . . .

Why do the differences have to be "massive"? If there's a difference one can hear, isn't that enough? And assuming, arguendo, that someone does hear a difference, isn't it up to that individual to decide what to do about it (e.g., rip to lossless and give up the disk space)?

Quote: Originally Posted by hybris
Bring home a few friends and encode, say, 30 seconds of a few tracks at 128, 192 and 320kbps MP3 with the LAME encoder. Then put the same track (at the different bitrates) in a playlist several times -- say 15 entries, each bitrate repeated five times -- in random order. Now the rest of you, who don't know the playlist order, try to rate the sound quality, or even guess which bitrate is playing for each of the 15 tracks. I can tell you right now that your results will be essentially random; at best you might place the 128kbps track a few times.

That test is not very good, IMO. I'll bet lots of people have trouble hearing differences under those conditions. I can see about five obvious flaws in it -- basically, it doesn't mirror normal listening conditions, or the conditions under which people might actually be able to hear differences.

Here's what I'll try instead. I'll have two songs that I listen to regularly recorded to my MP3 player at 128kbps (just to start there). Then I'll listen regularly to those two songs in lossless over the next week or so. Then I'll have my wife play the songs at 128kbps or lossless (her choice) and see if I can tell the difference. Isn't that a fairer test -- one that doesn't involve (1) listening to songs you don't know, (2) using other people's equipment or equipment you don't regularly use, (3) flipping back and forth rapidly between versions, or (4) being forced to identify which bitrate is which rather than simply detecting a difference?
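For anyone who wants to run hybris's version of the test, the randomization and scoring are easy to get wrong by hand. Here's a minimal Python sketch of that part of the protocol -- the function names and the seed are my own choices, not anything hybris specified, and the actual encoding (e.g. with LAME) and playback happen outside this script:

```python
import random

def make_blind_playlist(bitrates=(128, 192, 320), repeats=5, seed=None):
    """Build a shuffled playlist for the blind test described above:
    each bitrate appears `repeats` times, in random order.
    Returns a list of bitrates, one per trial (15 trials by default)."""
    rng = random.Random(seed)  # fixed seed makes the order reproducible
    order = [b for b in bitrates for _ in range(repeats)]
    rng.shuffle(order)
    return order

def score_guesses(truth, guesses):
    """Count correct bitrate identifications and report chance level.
    With 3 bitrates and 15 trials, pure guessing averages 5 correct."""
    correct = sum(t == g for t, g in zip(truth, guesses))
    chance = len(truth) / len(set(truth))
    return correct, chance

# One person generates the order and queues the clips; the listeners
# write down one guess per trial, then the guesses are scored.
order = make_blind_playlist(seed=42)
print(order)  # e.g. a shuffled list of fifteen 128/192/320 values
```

Note the scoring distinction this makes explicit: hybris's test asks listeners to *identify* the bitrate, which is a harder task than merely *detecting* a difference -- exactly the fourth objection raised above.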