Head-Fi.org › Forums › Equipment Forums › Sound Science › How to hear hi-def?

How to hear hi-def?

post #1 of 16
Thread Starter 

How can one hear the difference between FLAC and MP3 without even focusing? Or hear the difference between Mogami and stock cables?

Just curious, I mean how can one acquire such a skill?

post #2 of 16

You can't. With practice you can learn to differentiate between high bitrate lossy and lossless, but it will never be a huge difference and not one you'd hear immediately. Take some tests online, compare your own lossy and lossless copies of files with Foobar and the ABX plugin. Try to figure out where the differences are and what they sound like. They're usually most noticeable in the treble.
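When you do take ABX tests, it helps to know how many correct answers actually mean something. This is a minimal sketch of the standard binomial calculation (not the Foobar plugin's actual code): the probability of scoring at least that well by pure guessing.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` answers right out of
    `trials` ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 out of 16 correct is unlikely to be chance (p ~ 0.04);
# 9 out of 16 is entirely consistent with guessing.
print(round(abx_p_value(12, 16), 3))
```

A common rule of thumb is to run at least 16 trials and look for p below 0.05 before concluding you heard a difference.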

 

You can't hear the difference between cables, period, unless one is seriously defective.

 

Interesting choice of forum to post this in. I think it would do better somewhere else, maybe Sound Science. You're just going to get a lot of "the differences are HUGE!" posts up here.

post #3 of 16
Thread Starter 
Quote:
Originally Posted by Head Injury View Post


Interesting choice to forum to post this. I think it would be better somewhere else, maybe Sound Science.


You're right; if only some mod were kind enough to move this thread there.

 

post #4 of 16

Not every kind of music or peripheral equipment, such as cables, is going to show much difference in a "high-def" (it used to be called hi-fi) presentation.  Some recordings may also be so bad at the outset that they may even sound worse in high def, since it will highlight their deficiencies.

 

I would also say that you are not going to hear much difference between most cables, but don't make the error of overgeneralizing to all cables.  There are thousands of cables out there, and they are not all going to sound alike.  I have had several cables where the differences were immediately apparent.  I recall in particular going from a conventional cable, between my portable CD player and portable Stax electrostatic headphone unit, to a used silver cable which I got from someone on this site.  Voilà, I was suddenly hearing a rich, highly ambient sound which I had never heard with any of several more conventional cables.

 

One problem with cables is that merely handling some cables can affect the sound adversely.  I remember a set of mid-price Monster cables that would lose bass when you first moved them from one unit to another, so you had to leave them in place for a while before they sounded right.  This obviously makes an immediate A/B comparison between two cables difficult.  I have seen the claim that many pros who set up for concerts try to get everything rigged well ahead of time to avoid this problem.

 

Re: digital sound, I am not a fan of the MP3, but I bought some iPods for my teenage daughters.  I recorded some classical music from a CD and installed it on their players. (They of course later deleted it.)  Initially I was quite impressed with the sound quality, but by the next day the music didn't seem very involving, and then I realized I couldn't even hear certain instruments, such as a triangle.  So rather than comparing MP3s only with their ilk, I would suggest comparing MP3s with a good CD reproduction of the same music.

 

Finally,  you are going to have a hard time distinguishing high-def from low-def source material unless you have high-def playback, especially speakers and headphones.  Electrostatics are the gold standard of high definition playback because of their exceedingly light diaphragms.  Regular dynamic phones and even orthodynamics have quite heavy diaphragms by comparison.  This makes them inherently slow to respond to high frequency detail  and to show overhang/hysteresis when driven hard.   However even among stats there are a lot of differences.  The current best of the best is evidently the Stax SR009 which is going to set you back about $5K and probably as much again for the best stat amp to run it, the BHSE.

 

I find it not surprising that most of the people who try to debunk the sonic differences between things like cables are not using what I consider high-def phones.  Accordingly, they can't hear differences which are more obvious on more resolving systems.

post #5 of 16

For me, anyway, it depends a lot on what your rig is.  I took some of those 'can you tell the difference between 128 v 320?' tests on a SR225i and an entry-level DAC and amp, and I was about 50/50 accuracy, when I made an effort not to concentrate while listening (closer to 75/25 when I made an active effort at trying).  If I can't tell the difference in casual listening, it's going to be irrelevant for me since I'm a chronic multitasker.

 

I'm on an O2 and GES now (with a $300 DAC), took a bunch of the tests last night in response to another thread here, and had a 100% accuracy in telling 128 and 320 apart.  The O2 is so detailed that I've blind tested friends on my system and they can reliably tell the difference between 16/44, 24/48, and 24/96 recordings, often without any repeats (including when I pretend to switch the track but don't).  The best way I can describe listening to music on an electrostat (or the O2 anyway) is that it's like when you see a highly compressed image (such as a jpg) and notice the artifacting around lines.  On a good electrostat system, you'll immediately tell when something just doesn't sound right around each drumbeat or cymbal crash.

 

I have a reasonably trained ear as a former classical musician, and used to have perfect pitch.  I probably lost it a long time ago since I don't practice an instrument 2-4 hours/day anymore, but I've heard some people claim that formal musical training helps with separating sounds in daily listening.  There was a related piece on NPR recently, based on research from Northwestern's Auditory Neuroscience Laboratory:

http://www.npr.org/blogs/health/2011/08/23/139805307/how-music-may-help-ward-off-hearing-loss

 

Anyway, the best thing is to be realistic about how good your hearing really is and keep that in mind when designing your system.  edstrelow is right about how important the mastering quality is.  There are plenty of redbook recordings I prefer over the hi-res release because the hi-res just doesn't sound as good to me.  There's a weird obsession with 24/192 or listening to 'audiophile-grade music'.

post #6 of 16
Quote:
Originally Posted by Elysian View Post

I'm on an O2 and GES now (with a $300 DAC), took a bunch of the tests last night in response to another thread here, and had a 100% accuracy in telling 128 and 320 apart.  The O2 is so detailed that I've blind tested friends on my system and they can reliably tell the difference between 16/44, 24/48, and 24/96 recordings, often without any repeats (including when I pretend to switch the track but don't).


128 vs 320 is usually easier than 320 vs lossless. Have you taken any of those tests?

 

Were these recordings of various bit depths and sampling rates all from the same high res recording, just downsampled? Did you take any precautions to make sure there wasn't any extra distortion in the audible range on the downsampled files? Sound Science would be pretty darn interested in some valid positive results from a test like this! To my knowledge there hasn't been a single one yet.

post #7 of 16

I haven't tried the ABX plugin for Foobar.  All my 320-vs-lossless testing has been informal, comparing something like the first US release vs. a remastered edition vs. an official 24-bit release.  The online 128 vs 320 tests are pretty straightforward, but my least favorite is mp3ornot, just because I don't like any of the music they use.

 

So, no, they weren't all from the same high res recording downsampled (which, from my unscientific understanding, would be the most accurate way to approach this).  It was more along the lines of 'is it worth looking into building a chain that works well with hi-res files? can I even tell the difference?', since there's quite a bit of debate out there whether DVDA/SACD is even worth it, given how many people admit that they can't tell a difference.  Lately, it seems like a lot of the hi-res files in the pay-for-download sites are just upsampled versions of the original recording, occasionally with tweaked EQ, which many are discovering to their dismay after shelling out a lot of money.

 

It'd probably be pretty easy to set up a reasonable test for Sound Science, but the toughest part would be dealing with copyright issues, since a lot of tracks which would be good to test (Massive Attack, Opeth, Porcupine Tree, UNKLE, solo Baroque violin or cello, singer-songwriters, etc.) wouldn't be kosher for public posting, and I think people are best at testing their favorite artists.

post #8 of 16
Quote:
Originally Posted by edstrelow View Post

One problem with cables is that merely handling some cables can affect the sound adversely.  I remember  a set of mid price Monster cables that would lose bass when you first moved them from one unit to another.  So you have to leave them in place for a while before they sounded right.  This obviously makes an immediate A/B comparison between 2 cables difficult.  I have seen the claim that many pros who set up for concerts try to get everything rigged well ahead of time to avoid this problem.


If a cable changes from merely handling it, then it is faulty, probably a poorly soldered joint. Leaving a cable in place until it sounds better is ridiculous; just replace it immediately with a cable which is not faulty. It is bad enough to claim a difference between cables (excluding faulty cables), but to falsely claim that professionals believe in cable differences (beyond good basic construction) and actually work their set-up routine around cable quality differences is just bare-faced lying! Surely audiophiles under the illusion of differences in cables can find some evidence better than making up a ridiculous lie which is so easy to expose by any live sound engineer. The reason we set up well ahead of time is so that the musicians can have a sound check and then have plenty of time to change, relax and prepare for the performance. It has absolutely nothing to do with some ridiculous notion of allowing time for cables to settle. "Nice try" at finding an excuse for not being able to A/B cables, but if you want to be taken seriously, please invent something which cannot be so easily discredited!
Quote:
Originally Posted by Elysian View Post

The O2 is so detailed that I've blind tested friends on my system and they can reliably tell the difference between 16/44, 24/48, and 24/96 recordings, often without any repeats (including when I pretend to switch the track but don't)


All the evidence suggests that you didn't hear a difference between formats but between differently mastered recordings, or that your DAC has deficient filters at some sample rates. All else being equal, in the many tests which have been conducted, no one has managed to tell a difference between these various sample rates and bit depths. Of course, the difficulty is making all things equal, such as mastering and DAC filters.
Quote:
Originally Posted by Elysian View Post

There are plenty of redbook recordings I prefer over the hi-res release because it just doesn't sound as good to me.  There's a weird obsession over 24/192 or listening to 'audiophile-grade music'.

It is a particularly weird obsession seeing as 24/192 as a format is deficient compared to 24/96. Your observation matches the reality of the situation though. The quality of the recording/mastering influences perceived quality orders of magnitude more than whether the format is 320kbps, 16/44.1 or 24/96.

Back to the OT: from my own observation, listening in a recording studio, I can sometimes hear the difference between 320kbps and lossless (or WAV). In my experience, comparing like for like, it depends on the genre (and even the individual piece) of music and the quality of the recording. They are not night-and-day differences though: some slight weaknesses in the high-mids and HF, reverb tails, etc.

G
Edited by gregorio - 9/6/11 at 4:07am
post #9 of 16
Cables sound different as they move?

Gee, they must be constantly changing, considering the rotation of the earth and our expanding universe. Not to mention changes in size due to room temperature and if things are that delicate, then the sunspot count and level of cosmic background radiation must surely be at play, too.

If your ears are golden enough to hear the unmeasurable differences between cables, then surely the measurable channel imbalance in your amp must be driving you insane. Tell us, is the left or right channel louder? Because it is very unlikely that the channels are matched to better than ±5%, unless you hand-built your amp, bought a lot of extra components and matched them between channels. I did this with a crossover in a pair of speakers. About 20 components, and it cost a lot more and took hours to mirror the crossovers.

I doubt your gear is that carefully made. So is the left or right louder? It'd be easy to measure the difference between the two, but I imagine you don't need test gear to tell the difference with miraculous hearing that hears the unmeasurable.

Channel imbalance aside, it is remarkable that the cable believer crowd isn't hopping up and down with indignation when ordinary components drift in value after a few years of use.

Yes, even the most blessed and sacred Blackgate changes value after a few thousand hours of use. A .01uf cap might change into a .016uf cap.

I find it very odd that those with the most magical of hearing skills don't notice this. Why, if electrically identical copper and silver cables have "night and day" differences, then how come nobody notices when a 100 Ohm resistor turns into a 150 Ohm resistor?

Maybe the same voodoo that causes a cable to change frequency response without changing frequency response causes a 100 Ohm resistor that drifted to 150 Ohms to keep working like a 100 Ohm resistor when it is actually a 150 Ohm resistor.

And maybe I'll build a little box with a 5 Ohm resistor on one channel and a 6 Ohm resistor on the other. You wouldn't have any trouble telling those apart, right?
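For scale, the 5 Ω vs. 6 Ω box is easy to put a number on. This is a sketch of the series-resistor voltage divider, assuming a hypothetical 32 Ω headphone load (the load value is my assumption, not anything stated in the thread):

```python
from math import log10

def level_db(r_series: float, r_load: float = 32.0) -> float:
    """Level at the load relative to the source, in dB, for a
    simple series-resistor voltage divider into a resistive load."""
    return 20 * log10(r_load / (r_load + r_series))

# Channel difference between a 5 ohm and a 6 ohm series resistor
# into a 32 ohm load: roughly 0.23 dB.
diff = level_db(5.0) - level_db(6.0)
```

About a quarter of a dB, i.e. on the order of the channel imbalance already present in many amps, which is the point being made.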

Also, many thanks for letting me know that not being able to "hear" a difference is mostly because I have terrible equipment. I had no idea. I'll leave the Omega 2, Mk. 1, HD-800, HP-2, K-1000, ESL-63s, and Response 2.5 clones out for trash collection since they are obviously no good.

This evening, I need to drop by the stables to feed my pet unicorn. After that, I have to visit my witch doctor to have a hex removed. Maybe we could also examine some chicken entrails to help me find a more expensive stereo that will make the differences apparent. It's just money after all and I must not have spent enough. $30,000 sounds better than $20,000. I know this is true because I used to be a bank teller during undergrad. Sometimes, in the vault, I'd hold $10,000 up to my ear. It did not sound as good as holding $20,000 to my ear. $50,000 sounded even better. And all of us tellers did this so I know it is true. Everyone could hear the difference between a stack of $10,000 and $20,000.
Edited by Uncle Erik - 9/6/11 at 6:07pm
post #10 of 16
Quote:
Originally Posted by Head Injury View Post


128 vs 320 is usually easier than 320 vs lossless. Have you taken any of those tests?

 

Were these recordings of various bit depths and sampling rates all from the same high res recording, just downsampled? Did you take any precautions to make sure there wasn't any extra distortion in the audible range on the downsampled files? Sound Science would be pretty darn interested in some valid positive results from a test like this! To my knowledge there hasn't been a single one yet.


Nah, previous results show benefits of 24-bit over 16-bit for playback... but only if the music is really quiet and the volume is turned way up, such that the noise floor of the 16-bit version is audible.  This is a very fringe case in practice.  But yeah, you'll want to downsample and downconvert 24/96 or whatever into 16/44.1 for any type of valid comparison.
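The bit-depth half of that conversion is simple enough to sketch. This is a minimal illustration of requantizing 24-bit samples to 16-bit with TPDF dither (the sample-rate half needs a proper resampler such as SoX or ffmpeg, which this does not attempt):

```python
import random

def requantize_24_to_16(samples_24: list[int]) -> list[int]:
    """Reduce 24-bit integer samples to 16-bit with TPDF dither.
    Dropping 8 bits is a division by 256; the dither (sum of two
    uniform randoms, giving a triangular PDF of +/-1 LSB) decorrelates
    the rounding error from the signal instead of leaving distortion."""
    out = []
    for s in samples_24:
        dither = random.random() + random.random() - 1.0  # TPDF in (-1, 1) LSB
        q = round(s / 256 + dither)
        out.append(max(-32768, min(32767, q)))  # clip to 16-bit range
    return out
```

In practice you would use a mature tool for both steps, but this shows why a careless conversion (truncation without dither, or a poor resampling filter) can itself introduce audible differences that get misattributed to the format.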

 

 

Quote:
Originally Posted by Uncle Erik View Post

If your ears are golden enough to hear the unmeasurable differences between cables, then surely the measurable channel imbalance in your amp must be driving you insane. Tell us, is the left or right channel louder? Because it is very unlikely that the channels are matched more than +/- 5%, unless you hand-built your amp, bought a lot of extra components and matched them between channels. I did this with a crossover in a pair of speakers. About 20 components and it cost a lot more and took hours to mirror the crossovers.

 

I agree here, except for 5% channel matching being unlikely.  5% is 10*log(1.05) = 0.21 dB, probably well inaudible.  However, there are definitely headphone amps that achieve 0.2 dB channel balance or better over most of the range of the volume control.  Even something like the FiiO E7 can do that, though that's helped by the digital volume control.  But even with a basic potentiometer implementation, near the upper part of the range (forget about it near the bottom), 0.2 dB is not that difficult with certain designs and just 1% resistors, without careful hand-picking of components.
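One caveat on the arithmetic: 10·log10 is the power-ratio convention. If the 5% mismatch is in voltage (gain), the usual convention is 20·log10, which gives about 0.42 dB rather than 0.21 dB; either way the conclusion that it is small stands. A quick sketch of the two conventions:

```python
from math import log10

def ratio_to_db(ratio: float, voltage: bool = True) -> float:
    """Convert a linear ratio to dB.
    Voltage/amplitude ratios use 20*log10; power ratios use 10*log10."""
    return (20 if voltage else 10) * log10(ratio)

print(round(ratio_to_db(1.05), 2))                 # 5% voltage mismatch -> 0.42 dB
print(round(ratio_to_db(1.05, voltage=False), 2))  # 5% power mismatch   -> 0.21 dB
```

Which convention applies depends on whether the "5%" spec refers to gain or to power; amp channel-balance specs are normally voltage gain.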

post #11 of 16
Quote:
Originally Posted by Uncle Erik View Post

Cables sound different as they move?

Gee, they must be constantly changing, considering the rotation of the earth and our expanding universe. Not to mention changes in size due to room temperature and if things are that delicate, then the sunspot count and level of cosmic background radiation must surely be at play, too.


Oh Uncle Erik. They only change with relative movement. The earth's rotation and universe expansion won't affect it! Everyone knows that.

post #12 of 16

Quote:

Originally Posted by edstrelow View Post

I find that it is not surprising that   most of the persons who try to debunk the sonic differences between things like cables  are not using what I consider high-def phones.   Accordingly they can't hear differences which are more obvious with more resolving systems.


I for one welcome our new snobbish overlord...

 

Edstrelow! What a beautiful gospel thou does speaketh... I plead, nay, beg, on mine knees, with mine lowly transducers in hand, that thou might take pity upon us common folk and our backward ways! Please oh lord, point thine stubby finger so as to put us, your humble subjects, upon a path of audiophilic righteousness, towards the promised land of $5k ESLs, cryo-cables, and tube amplifiers (Lest us not forget thy mighty vibration isolation table and cable stands)! For how dread we the corrupting wrath of your nemesis: The Scientific Method, and all ghastly aberrational claims it makes against your, our, faith.

 

How lucky are we, us common folk, to be under your watchful and loving guidance, oh lord Edstrelow!


Edited by Dr. Strangelove - 9/6/11 at 7:25pm
post #13 of 16

I shouldn't, but :D

post #14 of 16
Quote:
Originally Posted by edstrelow View Post

Finally,  you are going to have a hard time distinguishing high-def from low-def source material unless you have high-def playback, especially speakers and headphones. 


Well that rules you out compared to me then, unless of course you have spent more than half a million on your recording studio and have better equipment and acoustic design!
Quote:
Originally Posted by mikeaj View Post

Nah, previous results show benefits of 24 bit over 16 bit for playback...but only if the music is really quiet and the volume is turned way up, such that the noise floor of the 16 bit version is audible.  This is a very fringe case in practice.  But yeah you'll want to downsample and downconvert 24/96 or whatever into 16/44.1 for any type of valid comparison.


Not true. If you turn up the volume so you can hear the noise floor, you are hearing the noise floor of the recording, not the noise floor of the 16-bit digital system. The difference between the noise floor of 16-bit digital and the noise floor of an average recording is likely to be around 60dB (a factor of 1000) or more. If you turned your system up so that you could hear the digital noise floor, when normal-level music (near 0dBFS) came along, the chances are you would blow your amp, speakers or ears! There are no benefits of 24-bit compared to 16-bit for playback.
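The underlying numbers here come from the standard rule of thumb for linear PCM: roughly 6 dB of dynamic range per bit (the often-quoted "96 dB" for 16-bit is 6.02 × 16, before the 1.76 dB full-scale-sine term). A quick sketch:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of undithered linear PCM:
    ~6.02 dB per bit, plus 1.76 dB for a full-scale sine
    relative to the quantization noise."""
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))  # ~98.1 dB
print(round(dynamic_range_db(24), 1))  # ~146.2 dB
```

So 16-bit already exceeds the noise floor of typical recordings and rooms by a wide margin, which is the point being made: the extra ~48 dB of 24-bit buys nothing at playback time.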

Quote:
Originally Posted by mikeaj View Post

I agree here except for 5% channel matching being unlikely.  5% is 10*log(1.05) = 0.21 dB, probably well inaudible.  However, there are definitely headphone amps achieve 0.2 dB channel balance or better at most of the range of the volume control.  Even something like FiiO E7 can do that, though that's helped by the digital volume control.  But even with a basic potentiometer implementation, near the upper part of the range (forget about it near the bottom), 0.2 dB is not that difficult with certain designs and just 1% resistors, without careful hand-picking components.


0.21dB is orders of magnitude more than the difference between cables. Which is the point Uncle Erik was making.

G
post #15 of 16
Quote:
Originally Posted by gregorio View Post


Not true. If you turn up the volume so you can hear the noise floor, you are hearing the noise floor of the recording, not the noise floor of the 16bit digital system. The difference between the noise floor of 16bit digital and the noise floor of an average recording is likely to be around 60dB (1000 times) or more. If you tried to turn up your system so that you can hear the digital noise floor, when normal level music (near 0dBFS) comes along, the chances are you would blow your amp, speakers or ears! There are no benefits of 24bit compared to 16bit for playback.

 

Does that not depend on the music?  For almost all recorded music, the noise floor of the recording should be higher than the digital noise floor, since getting much above 60 dB of SNR from microphones, even in a recording studio, is difficult or impossible.  But if a hypothetical instrument were voiced with peaks at, say, -40 dBFS and that was the only thing going on at the time, would not the digital noise floor be higher than the noise of the recording at that point?  Or rather, the noise contributed by the recording of the instrument does not add to the noise level of the 16-bit master because of the 16-bit digital noise floor (with certain other conditions met, and further handwaving).  This is a fairly artificial example, but that was what I was referring to as a potential extremely rare fringe case where it is possible to tell the difference between 24-bit and 16-bit.  Maybe the wording was wrong earlier.

 

Anyway, for any kind of sane recordings and not strange examples concocted in the lab, 16-bit is fine, as in indistinguishable from 24-bit or higher.

 

Your point still stands, though, that the 16-bit digital noise floor is irrelevant because of the playback levels it implies near 0dBFS (blown amp, speakers, or ears).  (Or raise your hand if you listen in an anechoic chamber with 10 dB SPL ambient or lower.)  96 dB of dynamic range is plenty, way more than people reasonably make use of anyway.  These days I'll be glad if producers can make use of 15 dB of dynamic range on some recordings...


Edited by mikeaj - 9/7/11 at 12:32pm