Why 24 bit audio and anything over 48k is not only worthless, but bad for music.

Discussion in 'Sound Science' started by keanex, Apr 30, 2014.
  1. bigshot
    I think a better analogy is...

    You tell your best friend that you're going to cut school and go fishing. He is afraid he's going to get in trouble, so he goes to school while you head for the creek. You fish all day and catch some beautiful trout. You wrap them up like they came from the market, put them on your front doorstep and ring and run. Your mom finds them and thinks they're a delivery from the market. That night neither you nor your friend who went to school get in trouble because your mom never found out. But you get pan fried brook trout for dinner, and he gets leftover tuna casserole.

    Aesop says: What mom doesn't know won't hurt you... and it might actually be better than doing it by the book!
     
  2. 71 dB
    If it takes 6 minutes to rip one CD, you have spent thousands of hours on that hobby. :stuck_out_tongue_winking_eye:
     
  3. KeithEmo
    Perhaps "proprietary" isn't the correct word - although many people I know would consider it to be correct in this context (someone else owns it and you aren't allowed to change it).
    As you say, "people (can't) tinker with the encoding process and make their own version of AAC" - whereas you CAN do exactly that with MP3.
    AAC currently offers lots of options - but I believe that the actual processing used by each is spelled out in the standard.....
    There is essentially an MP3 DECODER standard.... but there is no MP3 ENCODER standard per se (or you may prefer to view it as having a huge number of possible variations).
    Your MP3 encoder can discard whatever it likes, using whatever version of perceptual encoding it likes; as long as the output plays on an MP3 decoder, it is "a valid MP3 file".

    Obviously we each deal with very different segments of the population.
    The majority of people I've spoken to about the subject don't even know that MP3 and AAC are lossy CODECs (and many don't even notice the type of file they're playing).
    This applies to a significant proportion of the customers I speak to officially at Emotiva as well as the majority of my (non-audiophile) personal friends.
    For example, most of the people I speak to who use iTunes don't even know that it uses AAC; they simply "rip their CDs with iTunes" and have no idea what it's set to.
    (Unfortunately, these people are also quite unlikely to run audibility tests either..... they simply leave everything set to the defaults - or blindly follow the instructions of someone they trust.)

     
  4. 71 dB
    Maybe the $1 gets him a hamburger to eat?
     
  5. bigshot
    I've fed in CDs as I work at my computer. It rips in the background.

    I don't manufacture equipment, but my understanding is that proprietary means that you can only include the technology if you get permission from the owner. An open standard means that you can use the technology without permission as long as you pay a mechanical license. There really are just two main types of MP3 encoder: Fraunhofer and LAME. LAME was a separate implementation designed to optimize the MP3 standard. It has the same MP3 suffix but it's a different codec. In the old days there were DACs that implemented MP3 decoding poorly, but that was more of a design error than it was intentional.

    You might find it surprising, but a lot of people in this forum talk about the audibility of lossy artifacting without ever doing an audibility test either! They just assume that because the name says "lossy" it must sound inferior to "lossless". Those people are just as misinformed as the ones who don't know how iTunes works. However, the default encoding in iTunes and the iTunes store is AAC 256 VBR, which is audibly transparent for just about everyone, so Apple has dummy-proofed the process for them. The people who use lossless without understanding it just get stuck with less music to choose from on their phone or DAP. No one is dummy-proofing for them.
     
  6. JaeYoon
    Yeah 256 VBR is a very good idea for me.
    I don't want to buy a 400 GB sd card for $249.

    I got a 256 GB SD card, but my entire ripped library is almost that size.
    I also have another library, bought from music stores, that adds around an extra 80 GB. That won't fit together, so ripping to lossy for my SD card is a perfect choice to make it all fit.
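    The storage arithmetic behind that choice can be sketched quickly. The sizes below are the poster's approximate figures, and the ~4:1 lossy-versus-lossless ratio is a typical rule of thumb for AAC 256 VBR against 16/44.1 lossless, not a guarantee:

```python
# Rough storage arithmetic for the lossy-vs-lossless choice above.
# Sizes are approximate figures from the post; the 4:1 compression
# ratio is an assumed rule of thumb, not a measured value.

CARD_GB = 256
RIPPED_LOSSLESS_GB = 250      # "almost that size"
PURCHASED_GB = 80

total_lossless = RIPPED_LOSSLESS_GB + PURCHASED_GB
print("lossless total:", total_lossless, "GB, fits on card:",
      total_lossless <= CARD_GB)

LOSSY_RATIO = 0.25            # assumed AAC 256 VBR vs lossless size
total_lossy = total_lossless * LOSSY_RATIO
print("lossy total:", total_lossy, "GB, fits on card:",
      total_lossy <= CARD_GB)
```

    At roughly a quarter of the size, both libraries fit on the card with plenty of room to spare.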
     
  7. 71 dB
    You work from home? That's convenient…
     
  8. bigshot
    I work all the time. I have a job at a studio during the day and weekends and nights I operate a non-profit digital archive out of my home.
     
  9. pinnahertz
    I don't agree with that assertion either, because that is not MY assertion. If you're going to quote me, please do so exactly, and stop making things up!
    That's a fine goal...but you actually do the opposite quite frequently.
    As I said above, I disagree with that assertion! "WE" do NOT know that.
    OK, fine...but you already do use lossy compression, like it, know it, or not. It has a place, it has an application. Please realize that something is not categorically "bad" just because it can be misapplied.
    The GPS example fails: that's not the same as a lossy codec based on perceptual coding.
    Again, your analogy fails. That's not the same as a lossy codec based on perceptual coding, that's just brute-force loss.
    Yes! What you're saying is that lossy codecs can achieve their goal of transparency and not use all the original data. Mission accomplished.
    Wrong analogy again! That's just information loss, and that is absolutely NOT the same as a lossy codec based on perceptual coding!
    So we did change your mind, then.
    It wasn't made as a general statement, the conditions were specific. Go read Bigshot's post again. And you are misapplying science. Again.
    I cannot possibly imagine what THAT analogy is all about. Stupidity?
     
  10. pinnahertz
    I understand that in your binary view of the world if a codec's impact is audible in one test performed by one person out of 8 billion on earth, then it's audible. That's unrealistic, and not how codecs are designed or used. That's your binary view of the world only.
    That's a very important gray area though. And it applies to all codecs, not just lower-quality ones.
    Actually, file size is the definition of how much data a file contains. That data may not be audio data, or usable data, or necessary data, but it IS data. FLAC files are losslessly compressed, and the original audio data can be perfectly recovered, but that's because the actual FLAC file contains LESS DATA, using data of a different type to represent the actual sample data.
    Here's a link that clearly defines what file size means:
    https://en.wikipedia.org/wiki/File_size

    And FLAC is described here: https://en.wikipedia.org/wiki/FLAC
    • "FLAC uses linear prediction to convert the audio samples. There are two steps, the predictor and the error coding. The predictor can be one of four types (Zero, Verbatim, Fixed Linear and FIR Linear). The difference between the predictor and the actual sample data is calculated and is known as the residual. The residual is stored efficiently using Golomb-Rice coding. It also uses run-length encoding for blocks of identical samples, such as silent passages."
    See? Different data results in perfect storage of audio information...but using LESS DATA. In fact LESS DATA is the entire goal of FLAC.
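    The prediction-and-residual idea quoted above can be illustrated in a few lines. This is a simplified sketch of a fixed first-order predictor (one of FLAC's four predictor types), not the actual FLAC implementation: each sample is predicted from the previous one, only the small residual is kept, and reconstruction is bit-exact.

```python
# Simplified sketch of FLAC-style fixed linear prediction (order 1).
# Not the real FLAC code -- just the core idea: store small residuals
# instead of raw samples, and recover the samples exactly.

def encode(samples):
    """Predict each sample as the previous one; keep the residual."""
    residuals = [samples[0]]  # first sample stored verbatim
    for i in range(1, len(samples)):
        residuals.append(samples[i] - samples[i - 1])
    return residuals

def decode(residuals):
    """Undo the prediction: a running sum recovers the originals."""
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)
    return samples

audio = [1000, 1002, 1005, 1004, 1001, 998]  # slowly varying samples
res = encode(audio)                          # [1000, 2, 3, -1, -3, -3]
assert decode(res) == audio                  # bit-exact reconstruction
```

    The residuals are much smaller numbers than the raw samples, so they need fewer bits to store (FLAC then packs them with Golomb-Rice coding), yet nothing is lost.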
    Yes, of course. Thank you for being so literal. I thought that much was understood.
    Fine. No disagreement. What does any of that have to do with anything we are discussing now? Or the title of this poor corrupted thread?
     
  11. KeithEmo
    You seem to have a knack for "deciding" when things should and should not be "taken literally"...

    When discussing DATA there is no ambiguity about "exactly the same" or "different".
    You do a bit compare; if the bits are identical - it passes; if a single bit is different - it fails.
    There is no ambiguity and the definition is quite well established.

    The easiest way that "WE" know that LOSSY CODECs are lossy is that they are described that way (if they didn't alter the data then they would be LOSSLESS CODECs).
    FLAC is lossless because, if I take a WAV file, convert it to FLAC, convert it back again, and compare the new copy to the original, they will be IDENTICAL.
    All of the bits will be the same........ therefore, when we convert our original file to FLAC, data was rearranged, but NO DATA WAS LOST OR PERMANENTLY ALTERED.
    There is a fundamental difference between discarding or altering data and simply changing its format.
    The encoding used by FLAC DOES NOT DISCARD ANY DATA - it simply stores it temporarily in a different format.

    Your statement is incorrect - FLAC does NOT store less data - it stores 100% of the original data in a more compact format.
    Not only won't you hear a difference, but no test known to man will be able to detect one.... which is the difference between "no audible difference" and simply "no difference".
    If I were to convert a WAV file to AAC, then convert the AAC file back to a WAV, then do a bit compare, the result will tell me if the process was lossless or not... a simple binary fact.
    When we look at the digital data we will find that it is NOT the same.......
    (So, when we use AAC, the original data CANNOT be recovered exactly; but, when we use FLAC, it can.)
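    The round-trip bit-compare test described above is easy to demonstrate. Since FLAC itself needs an external encoder, this sketch uses Python's zlib as a stand-in lossless codec; the test logic (compress, decompress, compare every bit) is exactly the same:

```python
# The round-trip test described above, with zlib standing in for FLAC
# (both are lossless; the bit-compare logic is identical either way).
import zlib

original = bytes(range(256)) * 100       # stand-in for WAV sample data

compressed = zlib.compress(original)     # "convert to FLAC"
restored = zlib.decompress(compressed)   # "convert back to WAV"

# The binary verdict: either every bit matches, or the codec is lossy.
assert restored == original
```

    A lossy codec like AAC would fail this assertion by design: the decoded bytes would differ from the originals, even when the difference is inaudible.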

    The title of "this poor corrupted thread" is based on a claim that is far overreaching...... which is why I dispute it.
    The thread (and the article) don't say that "most audiophiles would be silly to digitize audio at a sample rate over 48k" or that "for most people 48k is more than good enough" (I would probably agree with those claims).
    It makes a blanket assertion that using higher sample rates has no value (or even negative value) - and NEVER has any positive value.
    (And nowhere does the original article suggest that lossy compression is "good enough" either - therefore all of this discussion about lossy compression is far afield from the original thread.)

    And, just for the record, I make no apology for "taking reality literally".

    If I were to say "most swans are white", I believe I would be statistically correct.
    (I would also have correctly described the experience of most occupants of North America.)
    If I were to say that "ALL swans are white" I would be wrong.
    The fact that most people might not realize that I'm wrong doesn't make me right - it just means that most people don't KNOW I'm wrong (and I will have contributed to their incorrect "knowledge" by providing them with incorrect information).
    If I wanted to avoid saying something that was untrue, I might say that "it is statistically very unlikely that you'll ever see a swan that's any color except white in North America".
    (There is a species of BLACK swan that lives mostly in Australia - although there are a few in the UK, and New Zealand - and there are probably a few in North American zoos.)
    There is an excellent book on the subject (named "The Black Swan") which discusses the pitfalls of propagating errors and inaccurate generalizations - among other things.

     
  12. KeithEmo
    I NEVER said that "lossy compression was categorically bad" or that "lossy compression should never be used"; in fact, I do occasionally use it myself.
    However, since I place a high priority on intellectual certainty, and a low priority on storage efficiency, I use it very rarely.
    (I would personally prefer to pay a little extra to KNOW that I've got all the data, or, at the very worst, that I haven't contributed more errors to those already present that I can't avoid.)
    I think lossy compression is an excellent solution to the engineering goal of "how can I store music in a lot less space in such a way that the majority of people will think it sounds good".

    I do have a nasty habit of generalizing and treating both the GPS location system and the street mapping system used in most street navigators as a single process - which they are not.
    The actual mechanism whereby the GPS system figures out your location is a form of triangulation, based on satellite broadcasts, and is simply limited by the accuracy of the process itself.
    (GPS location data used to be deliberately corrupted, and provided at reduced accuracy, as a way to prevent terrorists and foreign governments from using it, but that was discontinued some time ago.)
    HOWEVER, the way that information is correlated with features like street addresses is somewhat closer to a form of lossy perceptual compression.
    The GPS system uses longitude and latitude..... but it is a database in your "street navigator" that maps that information to local features like building address numbers.
    And that database includes a lot of "perceptual approximations" (for example, it knows that the house at one end of the block is #100, and the house at the other is #120, so it ASSUMES that #110 is half-way between them).
    The database OMITS the specific information about the longitude and latitude of each individual home based on the assumption that it isn't critical - which is why it sometimes puts you in front of the wrong building.
    At some point, it was decided that, in the interest of minimizing database size, some points would be stored exactly, while approximating others was "good enough".
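    That kind of address interpolation might look like the following sketch. The function and the coordinates are hypothetical (real navigation databases are far more involved), but the core approximation is the one described above: store only the block's endpoints and linearly interpolate every house number in between.

```python
# Hypothetical sketch of street-address interpolation: the map stores
# only the block's endpoint coordinates, and every other house number
# is estimated by linear interpolation between them.

def interpolate_address(number, lo_num, lo_pos, hi_num, hi_pos):
    """Estimate (lat, lon) for a house number between two known ones."""
    frac = (number - lo_num) / (hi_num - lo_num)
    return (lo_pos[0] + frac * (hi_pos[0] - lo_pos[0]),
            lo_pos[1] + frac * (hi_pos[1] - lo_pos[1]))

# Block endpoints (made-up coordinates): #100 and #120.
pos_100 = (36.0000, -86.0000)
pos_120 = (36.0010, -86.0000)

# #110 is ASSUMED to sit exactly halfway along the block -- which is
# why the navigator is wrong whenever houses aren't evenly spaced.
estimate = interpolate_address(110, 100, pos_100, 120, pos_120)
```

    Discarding the per-house coordinates saves enormous amounts of database space, at the cost of occasionally putting you in front of the wrong building.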
    I could list several specific situations where my navigator is able to return me to a physical location I've told it to store within a few feet, yet, when I ask it to take me to that address by map, it is off by as much as a hundred yards.

    Since it's up to you and me to decide whether being put in front of the house next door to the one we entered is "a critical error" or "good enough" I would consider that to be a "decision based on perception".
    I would perceive it as a problem if a missile that was aimed at my next door neighbor were to hit my house by mistake; or an assassin were to show up at my door by mistake; but people who end up next door due to a GPS error usually seem to find the Emotiva office pretty quickly.
    (Even though the GPS system itself is accurate to a few yards, the street map shared by most popular systems seems to have the location of our main office incorrect by about thirty yards.)
    I don't know what percentage of errors that occur with a typical street navigator occur due to measurement inaccuracy, what percentage are due to actual data errors in the map, and what percentage are due to "lossy approximations"... but there are certainly some of each.
    (And, yes, those errors occur often enough that I DO check the address on the door before assuming that the system has brought me to the correct building - because it frequently does not.)

    I'm guessing you wouldn't be happy if, when you balanced your checkbook, the answer you got was: "Your balance is about $1100; it may be a few cents one way or the other, but the difference is trivial, so don't worry about it."
    (And that's how I feel about my digital audio files.)

    You seem to prefer the logic path: "If we decide to discard information, then we need to determine if the loss is audible, and, if it is, whether the benefits outweigh the costs".
    I prefer the simpler: "If we ensure that NO information is lost, then those other questions are clearly moot."
    (And, with digital audio files, thanks to a lot of effort spent figuring out how to store and transmit computer files accurately, it happens to be very simple to confirm - or fail to confirm - that no change has occurred.)
    Note that the logic is "one way".
    If the data has not changed, then I know with certainty that it will be audibly the same.
    But, if the data HAS changed, then I have to either test it to see if the change was inaudible THIS TIME, or ABSOLUTELY TRUST whatever has changed it to have done so inaudibly.

     
    Last edited: Nov 29, 2017
  13. MrIEM
    There is a valid reason to use high bit depths and sample rates at the recording stage: additive noise. If you are recording voice, instruments, and digital sources, it's hard to avoid having to re-route and process sounds multiple times. That means you're doubling and tripling noise. With 16 bits it can easily reach a point where the noise is a distraction. With higher depths it's not.

    As for sample rate, the same logic applies. Re-processing and re-routing sounds can knock the edges off. If the sample rate is generous to begin with, the result will still sound perfect when finalized at 16-bit/48k. If you start where you want to end up, though, you limit your ability to play with the sound before artifacts become a problem.
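    The additive-noise argument can be put in numbers. Summing N independent noise sources of equal level raises the noise floor by 10*log10(N) dB; the -96 dB and -144 dB figures below are the usual rule-of-thumb quantization-noise floors for 16-bit and 24-bit audio, not exact values for any particular converter:

```python
# Sketch of the additive-noise argument: each independent processing
# stage contributes its own noise, and N equal, uncorrelated sources
# sum to a floor 10*log10(N) dB higher than one source alone.
import math

FLOOR_16BIT_DB = -96.0   # rule-of-thumb quantization floor, 16 bits
FLOOR_24BIT_DB = -144.0  # same rule of thumb, 24 bits

def summed_noise_floor(per_stage_db, n_stages):
    """Noise floor after summing n independent, equal-level sources."""
    return per_stage_db + 10 * math.log10(n_stages)

for n in (1, 2, 4, 8):
    print(n, "stages:",
          round(summed_noise_floor(FLOOR_16BIT_DB, n), 1), "dB (16-bit),",
          round(summed_noise_floor(FLOOR_24BIT_DB, n), 1), "dB (24-bit)")
```

    After eight generations of processing the 16-bit floor has risen to about -87 dB, while the 24-bit floor is still around -135 dB, far below audibility even after a final 16-bit master is made.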
     
  14. bigshot
    Is intellectual certainty a cure for OCD? You probably don't need intellectual certainty if you simply do a controlled listening test to determine your perceptual thresholds, and then go with it and not worry any more. That's what I did. I don't care about theoretical sounds that I've proven to myself I can't hear. I guess that's a form of intellectual certainty too!
     
    Last edited: Nov 29, 2017
  15. KeithEmo
    Intellectual certainty is a cure for ANY sort of uncertainty :beerchug:

    Obviously a lot of all this depends on what you're hoping for (or expecting).
    Personally, when I listen to music, my goal is to hear exactly what the artist and mixing engineer intended.
    And any doubt that this isn't the case takes away from my enjoyment.
    (I may choose to alter a file if I detect what I consider to be a flaw, but I absolutely do not want anyone or anything making that decision for me.)

    As I mentioned before, when I look at a picture on my monitor, I may not know what the original looked like - so I calibrate my monitor; that way I can trust that it is showing me what the original looked like.
    I don't want to use my judgment to decide whether what I'm seeing is as close to the original as I can tell.... that's extra work.... I'd rather just have a guarantee I can trust that it is.
    I look at my audio system much the same way; I may not know for sure what the original sounded like, so I rely on my system to let me hear what's there without alteration.
    (Of course I cannot know that the music file I have is accurate to the true original; but at least I can ensure that I don't change it any further.)

    Back when MP3 was the norm, I recall various claims that "MP3 files sounded just like the original"; however, when I listened to them, I occasionally heard artifacts in certain recordings. (I don't recall the settings involved.)
    And, when I investigated, I found that I WAS able to make up special test files which NO current MP3 encoder was able to encode without artifacts.
    When you dig into the encoders, and the assumptions they make, it's often not difficult to figure out how to "trick" them (I used to test computer network products for a living).
    And, with today's modern sampling synthesizers, any waveform I can concoct in a test file MIGHT turn up in electronic music.

    A similar situation occurs when a video tape master is converted to a DVD......
    There is an algorithm which sets a filter level which is used to remove tape noise (which is necessary if you want to achieve a reasonable level of compression).
    However, in specific instances, a particular visual feature (in this case dark swirling clouds or smoke), sometimes ends up "tricking the intelligence", and being INCORRECTLY removed by the filter.
    (The encoder usually does a very accurate job - but, in some small percentage of instances, it gets it significantly wrong.)

    I haven't tried this with AAC, so it's possible that it NEVER makes mistakes and removes something that might have been audible....
    However, because of the complexity of the algorithms involved, I would have to run an awful lot of tests to claim that I was 100.0% sure it would NEVER happen.
    Alternately, I would have to listen carefully to EVERY file I encoded to ensure that "it wasn't the one where the process failed".

    Now, to a lot of people, reducing the size of their music library by 75%, even at the risk that one of their 10,000 songs might contain a single audible artifact, might seem like an excellent tradeoff.
    However, since I have no issue whatsoever with storage size, and I'll admit to being a bit OCD when it comes to knowing what's going on, I'd still rather take the sure thing.
    It also boils down to a matter of process.
    If I wanted to fit a bunch of files on an iPod, I could encode them, then carefully compare each to the original to confirm that it was audibly perfect.
    However, since I'm not going to discard the original copy anyway, that's an awful lot of extra work (the time it would take is worth more to me than the cost of buying a bigger SD card).
    (My "library drive" is simply one of the three copies of each file I retain - one live copy plus two backups - so, if I were to create another AAC encoded copy, it would simply be another copy to keep track of.)

    Also, to be honest, I tend to compartmentalize how I prioritize my music.
    I could probably cheerfully get rid of about 90% of the CDs I currently own - and simply listen to whatever version of those tunes I can punch up on Tidal.
    However, for the small percentage of my collection that I very much care about, a single bit out of place counts as a "fatal flaw".

    And, much as everyone on this thread likes to disparage such claims.....
    I absolutely have encountered situations where certain details or flaws became audible on a new piece of equipment (that were totally inaudible on my previous equipment).
    Therefore, I am very disinclined to believe with absolute certainty that I cannot possibly discover differences tomorrow that really were inaudible today.
    (And, again, the EASIEST way to ensure against that is simply to keep the original file intact.)

    And, yes, I DID have quite a few MP3 copies of albums "back in the day"....
    And, yes, it cost me a lot of money to go out and buy the CDs for all of them when I realized there was an audible difference.

     