Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Nov 28, 2017 at 12:39 PM Post #2,761 of 3,525
I think a better analogy is...

You tell your best friend that you're going to cut school and go fishing. He is afraid he's going to get in trouble, so he goes to school while you head for the creek. You fish all day and catch some beautiful trout. You wrap them up like they came from the market, put them on your front doorstep and ring and run. Your mom finds them and thinks they're a delivery from the market. That night neither you nor your friend who went to school get in trouble because your mom never found out. But you get pan fried brook trout for dinner, and he gets leftover tuna casserole.

Aesop says: What mom doesn't know won't hurt you... and it might actually be better than doing it by the book!
 
Nov 28, 2017 at 12:44 PM Post #2,762 of 3,525
My whole music library is AAC 256 VBR. I've ripped tens of thousands of CDs to the music server and boxed the discs up in the garage. It all fits on one disc drive. It's sorted automatically by iTunes. And it sounds perfect on my best equipment. For me, that's all a win with no loss.

If it takes 6 minutes to rip one CD, you must have spent thousands of hours on that hobby. :stuck_out_tongue_winking_eye:
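(For scale — a back-of-the-envelope check, assuming ten thousand discs at six minutes each:)

```python
# Rough ripping-time estimate (assumed numbers: 10,000 discs, 6 minutes each).
discs = 10_000
minutes_per_disc = 6
print(discs * minutes_per_disc / 60, "hours")  # -> 1000.0 hours
```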
 
Nov 28, 2017 at 12:46 PM Post #2,763 of 3,525
Perhaps "proprietary" isn't the correct word - although many people I know would consider it to be correct in this context (someone else owns it and you aren't allowed to change it).
As you say, "people (can't) tinker with the encoding process and make their own version of AAC" - whereas you CAN do exactly that with MP3.
AAC currently offers lots of options - but I believe that the actual processing used by each is spelled out in the standard.....
There is essentially an MP3 DECODER standard.... but there is no MP3 ENCODER standard per se (or you may prefer to view it as having a huge number of possible variations).
Your MP3 encoder can discard whatever it likes, using whatever version of perceptual encoding you like; as long as it plays on an MP3 decoder, it is "a valid MP3 file".

Obviously we each deal with very different segments of the population.
The majority of people I've spoken to about the subject don't even know that MP3 and AAC are lossy CODECs (and many don't even notice the type of file they're playing).
This applies to a significant proportion of the customers I speak to officially at Emotiva as well as the majority of my (non-audiophile) personal friends.
For example, most of the people I speak to who use iTunes don't even know that it uses AAC; they simply "RIP their CDs with iTunes" and have no idea what it's set to.
(Unfortunately, these people are also quite unlikely to run audibility tests either..... they simply leave everything set to the defaults - or blindly follow the instructions of someone they trust.)

Some of your info is out of date there... AAC isn't proprietary. It's been an open standard for years now. Open doesn't mean that people can tinker with the encoding process and make their own version of AAC. Every current AAC encoder works exactly the same. The encoding and decoding are performed by stock code burned right into the chips of the DAC. There's no difference in quality. It's a standard, even if it is an open standard. And it doesn't work the same as an MP3... it's part of MPEG-4, which is a totally different and more advanced compression scheme. AAC is audibly transparent, which means that to human ears it's identical to the original. No loss in fidelity. And generation loss is also transparent for more generations than anyone would be likely to need to re-encode. From a practical standpoint it's all positives and no drawbacks.

I think you're projecting a bit on other people about lossy. Everyone knows that it throws out data. It says so right there in the name "lossy". They just don't care because it's inaudible information. I don't care about things I can't hear. I never have. I focus on improving things I *can* hear. That gets me a lot further when it comes to sound quality, because the best sounding systems sound the best because of the way they present the core audible frequencies. What you hear is what matters. A truly great system will sound just as good with high rate lossy as it does with lossless. The only argument I've heard in favor of lossless is that it assuages people's OCD. I can totally understand that. If I had anxiety over bitrates, I'd want to make sure my file sizes were portly too, I suppose.

Lossy... lossless... none of that matters. What matters is how the music sounds. Audio reproduction has advanced to the point where bitrates don't matter. They're nothing more than advertising points, especially in blu-ray where absurd bitrates are touted as being "necessary". The truth is that redbook is plenty and high bitrate AAC sounds exactly like lossless. So if it doesn't make you neurotic to be throwing out unnecessary bits, then it makes sense to use it. My whole music library is AAC 256 VBR. I've ripped tens of thousands of CDs to the music server and boxed the discs up in the garage. It all fits on one disc drive. It's sorted automatically by iTunes. And it sounds perfect on my best equipment. For me, that's all a win with no loss.

The reason I share my listening test with people is so they can find out for themselves where their line of transparency lies. That is very important. If you know that, then you don't have to worry about lossy throwing out something important. You know that above a certain rate, it's identical to lossless. That should be a comforting thought.
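(For anyone who wants to find that line for themselves, here's a rough starting point — a sketch assuming ffmpeg is installed; the file names are placeholders, and a dedicated ABX tool is the proper way to score the trials:)

```python
import random
import subprocess

# Encode a lossless rip to AAC at ~256 kbps with ffmpeg's built-in encoder
# (iTunes uses a different AAC encoder, but the idea is the same),
# then decode it back to WAV so both files can be played the same way.
subprocess.run(["ffmpeg", "-y", "-i", "original.wav",
                "-c:a", "aac", "-b:a", "256k", "encoded.m4a"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "encoded.m4a", "decoded.wav"], check=True)

# Crude blind trial: present the pair in a random order and try to
# tell them apart. (A real ABX comparator gives you proper statistics.)
pair = ["original.wav", "decoded.wav"]
random.shuffle(pair)
print("File A:", pair[0])
print("File B:", pair[1])
```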
 
Nov 28, 2017 at 12:59 PM Post #2,765 of 3,525
If it takes 6 minutes to rip one CD, you must have spent thousands of hours on that hobby. :stuck_out_tongue_winking_eye:

I've fed in CDs as I work at my computer. It rips in the background.

As you say, "people (can't) tinker with the encoding process and make their own version of AAC" - whereas you CAN do exactly that with MP3.

I don't manufacture equipment, but my understanding is that proprietary means that you can only include the technology if you get permission from the owner. An open standard means that you can use the technology without permission as long as you pay a mechanical license. There really are just two major MP3 encoders: Fraunhofer and LAME. LAME was a separate project designed to optimize MP3 encoding. Its files carry the same MP3 suffix, but it's a different encoder. In the old days there were players that implemented MP3 decoding poorly, but that was more of a design error than it was intentional.
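(To make that encoder-side freedom concrete — a minimal sketch, assuming the LAME command-line binary is installed and a hypothetical input.wav:)

```python
import subprocess

# Two different encoder settings, both producing spec-compliant MP3 files:
# -V 0 is LAME's highest-quality VBR preset; -b 128 forces 128 kbps CBR.
# Any MP3 decoder must accept both, which is exactly the freedom described above.
subprocess.run(["lame", "-V", "0", "input.wav", "vbr_high.mp3"], check=True)
subprocess.run(["lame", "-b", "128", "input.wav", "cbr_128.mp3"], check=True)
```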

most of the people I speak to who use iTunes don't even know that it uses AAC; they simply "RIP their CDs with iTunes" and have no idea what it's set to.
(Unfortunately, these people are also quite unlikely to run audibility tests either..... they simply leave everything set to the defaults - or blindly follow the instructions of someone they trust.)

You might find it surprising but a lot of people in this forum talk about the audibility of lossy artifacting without ever doing an audibility test too! They just assume that because the name says "lossy" it must sound inferior to "lossless". Those people are just as misinformed as the ones who don't know how iTunes works. However, the default encoding in iTunes and the iTunes store is AAC 256 VBR which is audibly transparent for just about everyone, so Apple has dummy proofed the process for them. The people who use lossless without understanding just get stuck not having as much music to choose from on their phone or DAP. No one is dummy proofing for them.
 
Nov 28, 2017 at 1:56 PM Post #2,766 of 3,525
You might find it surprising but a lot of people in this forum talk about the audibility of lossy artifacting without ever doing an audibility test too! They just assume that because the name says "lossy" it must sound inferior to "lossless". Those people are just as misinformed as the ones who don't know how iTunes works. However, the default encoding in iTunes and the iTunes store is AAC 256 VBR which is audibly transparent for just about everyone, so Apple has dummy proofed the process for them. The people who use lossless without understanding just get stuck not having as much music to choose from on their phone or DAP. No one is dummy proofing for them.
Yeah, 256 VBR is a very good idea for me.
I don't want to buy a 400 GB SD card for $249.

I got a 256 GB SD card, but my entire ripped library is almost that size.
I also have another library, bought from music stores, that adds around 80 GB. They won't fit together, so ripping to lossy for my SD card is a perfect way to make it all fit.
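(Rough numbers — assuming the rips are CD-rate lossless at 1411 kbps; lossless-compressed rips would start smaller, so the savings would be less dramatic, but it would still fit:)

```python
# Back-of-the-envelope: CD audio runs 1411 kbps; AAC 256 VBR averages ~256 kbps.
library_gb = 256 + 80            # ripped library plus purchased library
ratio = 256 / 1411               # AAC bitrate over CD bitrate
print(f"~{library_gb * ratio:.0f} GB after conversion")  # -> ~61 GB
```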
 
Nov 28, 2017 at 5:06 PM Post #2,768 of 3,525
I work all the time. I have a job at a studio during the day, and on weekends and nights I operate a non-profit digital archive out of my home.
 
Nov 28, 2017 at 6:33 PM Post #2,769 of 3,525
I'm not sure that I agree with your primary assertion: "That 'everyone knows the difference between lossy and lossless'". From my experience, a lot of people seem NOT to know the difference.
I don't agree with that assertion either, because that is not MY assertion. If you're going to quote me, please do so exactly, and stop making things up!
That's why I tend to argue against statements which might conceivably mislead people who actually don't know into thinking that they are the same.
That's a fine goal...but you actually do the opposite quite frequently.
I think we're sort of discussing two different things here.

As far as I'm concerned, in terms of the technology, there's nothing to try.
Whenever I listen to a piece of music I've never heard before, I don't know what it sounds like, so I'm relying on my system to let me find out - by playing it accurately.
We all KNOW that lossy CODECs alter the information; they "say so right on the package"; so there's nothing to question.
As I said above, I disagree with that assertion! "WE" do NOT know that.
To me the choice between lossless and lossy would be like the choice between buying a GPS that at least claims that it will take me to the exact right address.....
And one that is advertised as: "It never takes you to the exact right place; in fact it specifically avoids taking you to the exact right place; but it will get you close enough that you won't mind".
Personally, rather than wonder how big the error is, I'd rather just buy the one that takes me to the right place.
(And, in order to convince me to deliberately take the inaccurate version, they're going to have to offer a pretty compelling reason....... and, to me, smaller file size just isn't a compelling reason.)
That's why I personally am never going to try or use lossy compression..... because, to me, it has at least potential serious drawbacks, and no significant benefits.
OK, fine...but you already do use lossy compression, like it, know it, or not. It has a place, it has an application. Please realize that something is not categorically "bad" just because it can be misapplied.
The GPS example fails: that's not the same as a lossy codec based on perceptual coding.
However, as far as my statement about cumulative errors summing..... well, that's just math.
If you were to ask me "what does 2 + 2 = " I wouldn't go out and buy a bunch of marbles, put two in my left hand, two in my right hand, then put them together and confirm that I now have four on the table.
I would use math and logic to figure out what to expect..... based on how the process works.
Again, your analogy fails. That's not the same as a lossy codec based on perceptual coding, that's just brute-force loss.
Now, on every lossy audio CODEC I've ever read the description of, there is a design intent to ensure that the first generation copy will be "audibly identical to the original" - at least as much as possible.
However, I've never seen any that claim that there is any mechanism included that will prevent iterative changes from summing to a value greater than a single change.
Yes! What you're saying is that lossy codecs can achieve their goal of transparency and not use all the original data. Mission accomplished.
If you were to tell me that you were going to walk for one block in a random direction from your home.... we can both agree that you will end up one block from home.
However, if you were to tell me that you're going to walk for one block in a random direction, then, starting from there, walk for one block in a random direction, and repeat the process five times.....
MATH tells me that, at least some of the time, you will end up more than one block from home.
(Remember that we've included nothing in the process to ensure that this doesn't happen.)
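(A quick Monte Carlo sketch of that random-walk claim, for anyone who wants to check the math:)

```python
import math
import random

# Simulate 5 unit-length steps in uniformly random directions, many times
# over, and count how often we end up more than one block from home.
trials, far = 100_000, 0
for _ in range(trials):
    x = y = 0.0
    for _ in range(5):
        a = random.uniform(0, 2 * math.pi)
        x += math.cos(a)
        y += math.sin(a)
    if math.hypot(x, y) > 1.0:
        far += 1
print(f"{100 * far / trials:.1f}% of walks end more than one block away")
# For independent unit steps, E[distance^2] = 5, so the typical end point
# is about sqrt(5) ~ 2.24 blocks from home.
```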
Wrong analogy again! That's just information loss, and that is absolutely NOT the same as a lossy codec based on perceptual coding!
However, I don't dispute that running a lossy CODEC multiple times on the same content MAY, IN SOME CASES, still result in a final copy that is audibly indistinguishable from the original.
And neither do I dispute that, in a specific situation, and with a specific CODEC, a certain person may have had that experience.
So we did change your mind, then.
However, I do oppose presenting it as a general statement, when the science suggests that we're looking at the exception and not the general case.
It wasn't made as a general statement, the conditions were specific. Go read Bigshot's post again. And you are misapplying science. Again.
(And, yes, if someone were to suggest that "a cup of Drano is a great cure for a stomach ache" I would probably argue against that too...... WITHOUT trying it.)
I cannot possibly imagine what THAT analogy is all about. Stupidity?
 
Nov 28, 2017 at 6:49 PM Post #2,770 of 3,525
1)
Some things are grey - and some are black and white - but, in many cases, which one applies depends on your point of view.
For example...... in terms of DATA, the question is clear black and white, either data is retained accurately or not.
I can do a bit compare - and the result will be a simple black and white pass or fail.
(Personally, I like black and white, I can run a checksum on my music library and KNOW, with absolute certainty, that it's exactly the same as yesterday..... and nothing has been changed.)
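(A minimal sketch of that kind of checksum run in Python — the library path and manifest file name are hypothetical:)

```python
import hashlib
import json
from pathlib import Path

LIBRARY = Path("/music/library")     # hypothetical library location
MANIFEST = Path("manifest.json")     # yesterday's saved checksums

def checksums(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

current = checksums(LIBRARY)
if MANIFEST.exists():
    previous = json.loads(MANIFEST.read_text())
    changed = sorted(f for f in current if current.get(f) != previous.get(f))
    print("library unchanged" if not changed else f"changed: {changed}")
MANIFEST.write_text(json.dumps(current))  # save today's manifest for tomorrow
```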

In terms of technology, lossy CODECs aren't "a grey area" at all.
Lossy CODECs discard information.... this is a given.
Likewise, either the result is or is not audibly identical to the original.... and that's also black and white.
I understand that in your binary view of the world if a codec's impact is audible in one test performed by one person out of 8 billion on earth, then it's audible. That's unrealistic, and not how codecs are designed or used. That's your binary view of the world only.
The only grey area I see would be with lower quality CODECs.... where we concede that the losses are audible, but there is a question of opinion about whether the loss is justified.
(The "grey" arises because it's a matter of opinion whether the loss is significant or not.... and whether we consider the cost to be justified by the benefits.)
There is also an area of UNCERTAINTY..... it may turn out that, on 95% of all files processed, the result is perfect, but on 5% it is not......
(If so, we may still claim - in black and white - that "on 95% of a random selection of processed files nobody can hear the difference".)
That's a very important gray area though. And it applies to all codecs, not just lower quality ones.
2)
My problem with your "size assertion" is simply that it isn't true.
Your assumption that "if the file is the same size then it contains the same amount of data" is entirely incorrect.
It is in fact quite simple to make a file larger or smaller without changing the amount of data it contains; or to add or remove data without changing the size of the file.

When we initially run the CODEC, we can assume that, if the file got smaller, then information was discarded.......but that is a VERY special case.
There are several unstated assumptions on which that assertion is based...... and lots of exceptions.
For example, I can compress a file using FLAC, and the file will get smaller, but NO information will have been discarded.
Likewise, any process that makes the information LESS CORRECT, but does so in a way that doesn't result in LESS information, may leave the file the same size or even make it larger.
Actually file size is the definition of how much data it contains. That data may not be audio data, or usable data, or necessary data, but it IS data. FLAC files are losslessly compressed, and the original audio data can be perfectly recovered, but that's because the actual FLAC file contains LESS DATA, using data of a different type to represent the actual sample data.
Here's a link that clearly defines what file size means:
https://en.wikipedia.org/wiki/File_size

And FLAC is described here: https://en.wikipedia.org/wiki/FLAC
  • "FLAC uses linear prediction to convert the audio samples. There are two steps, the predictor and the error coding. The predictor can be one of four types (Zero, Verbatim, Fixed Linear and FIR Linear). The difference between the predictor and the actual sample data is calculated and is known as the residual. The residual is stored efficiently using Golomb-Rice coding. It also uses run-length encoding for blocks of identical samples, such as silent passages."
See? Different data results in perfect storage of audio information...but using LESS DATA. In fact LESS DATA is the entire goal of FLAC.
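(A toy illustration of that predictor/residual idea — a first-order fixed predictor only; real FLAC chooses among several predictors and then Rice-codes the residuals:)

```python
# Toy version of FLAC's fixed-prediction step: predict each sample from the
# previous one and keep only the (usually small) residual.
samples = [100, 102, 103, 103, 101, 98]

residuals = [samples[0]] + [samples[i] - samples[i - 1]
                            for i in range(1, len(samples))]
# residuals == [100, 2, 1, 0, -2, -3] -- small numbers, cheap to Rice-code

# Decoding reverses the prediction exactly, so no information is lost:
decoded = [residuals[0]]
for r in residuals[1:]:
    decoded.append(decoded[-1] + r)
assert decoded == samples
```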
Your base assertion that "reduction in size is the goal...make that the ONLY goal...of a lossy codec" is incorrect.
The goal of a lossy CODEC is to reduce the size of the file while avoiding altering the contents in an audible way....
And the indicator of success would be that the file has gotten smaller but remains audibly the same.
Yes, of course. Thank you for being so literal. I thought that much was understood.
However, there are several possible "indicators of failure"........
- if the file got larger that would be a definite fail
- if the file sounded audibly different that would be a definite fail
- if the file sounded the same and remained the same size that would be a sort of null result (a waste of time but no harm done).

There is also the potential for "generational failure"...... which is a concept that is applied deliberately in certain copy protection schemes (including the original CD-R music protection scheme).
In "generational failure" the copy is functionally the same as the original - but only in CERTAIN regards - while being very different in other ways.
In one such copy protection scheme, the user was allowed to make a copy of an "original".
However, even though the copy was AUDIBLY identical to the original, the user was unable to make a copy of that copy.
Therefore, while the copy was AUDIBLY identical, it was inferior in OTHER WAYS.
(For a user whose goal was strictly to listen, the copy was 100% perfect; for a user who wished to copy it, it was "broken".)
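(A simplified sketch of how such a generation flag works, modeled loosely on SCMS, the Serial Copy Management System; the real scheme lives in the S/PDIF channel-status bits, which this toy model ignores:)

```python
# Simplified model of an SCMS-style generation flag: the original permits
# one generation of copying, and the copy is flagged "no further copies".
def digital_copy(track: dict) -> dict:
    if track["copy_status"] == "prohibited":
        raise PermissionError("recorder refuses: this is already a copy")
    # The copy's audio is bit-identical, but its flag blocks generation two.
    return {"audio": track["audio"], "copy_status": "prohibited"}

original = {"audio": b"...pcm...", "copy_status": "permitted"}
first_gen = digital_copy(original)   # allowed: audibly (and bit) identical
try:
    digital_copy(first_gen)          # second generation is refused
except PermissionError as err:
    print(err)
```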
Fine. No disagreement. What does any of that have to do with anything we are discussing now? Or the title of this poor corrupted thread?
 
Nov 29, 2017 at 9:58 AM Post #2,771 of 3,525
You seem to have a knack for "deciding" when things should and should not be "taken literally".......

When discussing DATA there is no ambiguity about "exactly the same" or "different".
You do a bit compare; if the bits are identical - it passes; if a single bit is different - it fails.
There is no ambiguity and the definition is quite well established.

The easiest way that "WE" know that LOSSY CODECs are lossy is that they are described that way (if they didn't alter the data then they would be LOSSLESS CODECs).
FLAC is lossless because, if I take a WAV file, convert it to FLAC, convert it back again, and compare the new copy to the original, they will be IDENTICAL.
All of the bits will be the same........ therefore, when we convert our original file to FLAC, data was rearranged, but NO DATA WAS LOST OR PERMANENTLY ALTERED.
There is a fundamental difference between discarding or altering data and simply changing its format.
The encoding used by FLAC DOES NOT DISCARD ANY DATA - it simply stores it temporarily in a different format.

Your statement is incorrect - FLAC does NOT store less data - it stores 100% of the original data in a more compact format.
Not only won't you hear a difference, but no test known to man will be able to detect one.... which is the difference between "no audible difference" and simply "no difference".
If I were to convert a WAV file to AAC, then convert the AAC file back to a WAV, then do a bit compare, the result will tell me if the process was lossless or not... a simple binary fact.
When we look at the digital data we will find that it is NOT the same.......
(So, when we use AAC, the original data CANNOT be recovered exactly; but, when we use FLAC, it can.)
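(A minimal round-trip test of exactly that claim — assuming ffmpeg is installed; it compares decoded PCM frames rather than raw files, since container headers can legitimately differ:)

```python
import subprocess
import wave

def pcm_frames(path: str) -> bytes:
    """Return just the PCM audio frames from a WAV file."""
    with wave.open(path, "rb") as w:
        return w.readframes(w.getnframes())

# WAV -> FLAC -> WAV round trip: expected to be bit-identical.
subprocess.run(["ffmpeg", "-y", "-i", "in.wav", "tmp.flac"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "tmp.flac", "flac_back.wav"], check=True)
print("FLAC lossless:", pcm_frames("in.wav") == pcm_frames("flac_back.wav"))

# WAV -> AAC -> WAV round trip: expected to differ at the bit level,
# even when no difference is audible.
subprocess.run(["ffmpeg", "-y", "-i", "in.wav", "-c:a", "aac",
                "-b:a", "256k", "tmp.m4a"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "tmp.m4a", "aac_back.wav"], check=True)
print("AAC lossless: ", pcm_frames("in.wav") == pcm_frames("aac_back.wav"))
```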

The title of "this poor corrupted thread" is based on a claim that is far overreaching...... which is why I dispute it.
The thread (and the article) don't say that "most audiophiles would be silly to digitize audio at a sample rate over 48k" or that "for most people 48k is more than good enough" (I would probably agree with those claims).
It makes a blanket assertion that using higher sample rates has no value (or even negative value) - and NEVER has any positive value.
(And nowhere does the original article suggest that lossy compression is "good enough" either - therefore all of this discussion about lossy compression is far afield from the original thread.)

And, just for the record, I make no apology for "taking reality literally".

If I were to say "most swans are white", I believe I would be statistically correct.
(I would also have correctly described the experience of most occupants of North America.)
If I were to say that "ALL swans are white" I would be wrong.
The fact that most people might not realize that I'm wrong doesn't make me right - it just means that most people don't KNOW I'm wrong (and I will have contributed to their incorrect "knowledge" by providing them with incorrect information).
If I wanted to avoid saying something that was untrue, I might say that "it is statistically very unlikely that you'll ever see a swan that's any color except white in North America".
(There is a species of BLACK swan that lives mostly in Australia - although there are a few in the UK, and New Zealand - and there are probably a few in North American zoos.)
There is an excellent book on the subject (named "The Black Swan") which discusses the pitfalls of propagating errors and inaccurate generalizations - among other things.

 
Nov 29, 2017 at 10:38 AM Post #2,772 of 3,525
I NEVER said that "lossy compression was categorically bad" or that "lossy compression should never be used"; in fact, I do occasionally use it myself.
However, since I place a high priority on intellectual certainty, and a low priority on storage efficiency, I use it very rarely.
(I would personally prefer to pay a little extra to KNOW that I've got all the data, or, at the very worst, that I haven't contributed more errors to those already present that I can't avoid.)
I think lossy compression is an excellent solution to the engineering goal of "how can I store music in a lot less space in such a way that the majority of people will think it sounds good".

I do have a nasty habit of generalizing and treating both the GPS location system and the street mapping system used in most street navigators as a single process - which they are not.
The actual mechanism whereby the GPS system figures out your location is a form of triangulation, based on satellite broadcasts, and is simply limited by the accuracy of the process itself.
(GPS location data used to be deliberately corrupted, and provided at reduced accuracy, as a way to prevent terrorists and foreign governments from using it, but that was discontinued some time ago.)
HOWEVER, the way that information is correlated with features like street addresses is somewhat closer to a form of lossy perceptual compression.
The GPS system uses longitude and latitude..... but it is a database in your "street navigator" that maps that information to local features like building address numbers.
And that database includes a lot of "perceptual approximations" (for example, it knows that the house at one end of the block is #100, and the house at the other is #120, so it ASSUMES that #110 is half-way between them).
The database OMITS the specific information about the longitude and latitude of each individual home based on the assumption that it isn't critical - which is why it sometimes puts you in front of the wrong building.
At some point, it was decided that, in the interest of minimizing database size, some points would be stored exactly, while approximating others was "good enough".
I could list several specific situations where my navigator is able to return me to a physical location I've told it to store within a few feet, yet, when I ask it to take me to that address by map, it is off by as much as a hundred yards.
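(A toy version of that address interpolation, with made-up coordinates:)

```python
# Toy address interpolation (hypothetical numbers): the map database stores
# only the block's endpoints and guesses everything in between linearly.
block = {100: (36.0000, -86.0000), 120: (36.0020, -86.0000)}

def locate(house: int):
    (n0, (lat0, lon0)), (n1, (lat1, lon1)) = sorted(block.items())
    t = (house - n0) / (n1 - n0)
    return (lat0 + t * (lat1 - lat0), lon0 + t * (lon1 - lon0))

print(locate(110))  # assumed to sit exactly halfway -- in reality it may not
```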

Since it's up to you and me to decide whether being put in front of the house next door to the one we entered is "a critical error" or "good enough" I would consider that to be a "decision based on perception".
I would perceive it as a problem if a missile that was aimed at my next door neighbor were to hit my house by mistake, or if an assassin were to show up at my door by mistake; but people who end up next door due to a GPS error usually seem to find the Emotiva office pretty quickly.
(Even though the GPS system itself is accurate to a few yards, the street map shared by most popular systems seems to have the location of our main office incorrect by about thirty yards.)
I don't know what percentage of errors that occur with a typical street navigator occur due to measurement inaccuracy, what percentage are due to actual data errors in the map, and what percentage are due to "lossy approximations"... but there are certainly some of each.
(And, yes, those errors occur often enough that I DO check the address on the door before assuming that the system has brought me to the correct building - because it frequently does not.)

I'm guessing you wouldn't be happy if, when you checked your checkbook, the answer you got was: "Your balance is about $1100; it may be a few cents one way or the other, but the difference is trivial, so don't worry about it."
(And that's how I feel about my digital audio files.)

You seem to prefer the logic path: "If we decide to discard information, then we need to determine if the loss is audible, and, if it is, whether the benefits outweigh the costs".
I prefer the simpler: "If we ensure that NO information is lost, then those other questions are clearly moot."
(And, with digital audio files, thanks to a lot of effort spent figuring out how to store and transmit computer files accurately, it happens to be very simple to confirm - or fail to confirm - that no change has occurred.)
Note that the logic is "one way".
If the data has not changed, then I know with certainty that it will be audibly the same.
But, if the data HAS changed, then I have to either test it to see if the change was inaudible THIS TIME, or ABSOLUTELY TRUST whatever has changed it to have done so inaudibly.

 
Nov 29, 2017 at 10:41 AM Post #2,773 of 3,525
Just as a bit of interesting trivia... 44.1K covers the full spectrum of frequencies that humans can hear- 20Hz to 20kHz, with a bit to spare. Higher sampling rates extend the frequency response higher, far beyond our ability to hear, but the core frequencies below 20kHz are rendered exactly the same at 44.1 as they are at 192. So whatever it is that you seem to think is clearly audible isn't audible with human ears. Perhaps a bat!

However, it is possible that your equipment isn't designed to deal with super high frequencies and is adding distortion down in the audible range. So if you are positive you are hearing a difference, it is almost certainly noise, not music.

There is a valid reason to use high sample rates and bit depths at the recording stage. Additive noise. If you are recording voice, instruments and digital sources it's hard to avoid having to re-route and process sounds multiple times. That means you're doubling and tripling noise. With 16 bits it can easily reach a point where the noise is a distraction. With higher depths it's not.

As for sample rate, the same logic applies. Re-processing and re-routing sounds can knock the edges off. If the resolution is excessive to begin with, then when the track is finalized at 16 bits it will still sound perfect. If you start where you want to end up, though, you limit your ability to play with the sound before artifacts become a problem.
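(Rough numbers behind that — assuming the textbook quantization-SNR formula and that uncorrelated noise from repeated passes simply sums:)

```python
import math

def snr_db(bits: int) -> float:
    # Ideal quantization SNR for a full-scale signal: 6.02 * bits + 1.76 dB.
    return 6.02 * bits + 1.76

for bits in (16, 24):
    for passes in (1, 4, 16):
        # n equal, uncorrelated noise contributions raise the noise floor
        # by 10 * log10(n) dB relative to a single pass.
        floor = snr_db(bits) - 10 * math.log10(passes)
        print(f"{bits}-bit after {passes:2d} noisy passes: ~{floor:.0f} dB SNR")
```

Even after sixteen noisy passes, 24-bit still leaves far more margin above the 16-bit delivery floor.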
 
Nov 29, 2017 at 12:20 PM Post #2,774 of 3,525
Is intellectual certainty a cure for OCD? You probably don't need intellectual certainty if you simply do a controlled listening test to determine your perceptual thresholds and then go with it and not worry any more. That's what I did. I don't care about theoretical sound I've proven to myself that I can't hear. I guess that's a form of intellectual certainty too!
 
Nov 29, 2017 at 1:26 PM Post #2,775 of 3,525
Intellectual certainty is a cure for ANY sort of uncertainty :beerchug:

Obviously a lot of all this depends on what you're hoping for (or expecting).
Personally, when I listen to music, my goal is to hear exactly what the artist and mixing engineer intended.
And any suspicion that this isn't the case takes away from my enjoyment.
(I may choose to alter a file if I detect what I consider to be a flaw, but I absolutely do not want anyone or anything making that decision for me.)

As I mentioned before, when I look at a picture on my monitor, I may not know what the original looked like - so I calibrate my monitor; that way I can trust that it is showing me what the original looked like.
I don't want to have to use my own judgment to decide whether what I'm seeing is as close to the original as I can tell.... that's extra work.... I'd rather just have a guarantee I can trust that it is.
I look at my audio system much the same way; I may not know for sure what the original sounded like, so I rely on my system to let me hear what's there without alteration.
(Of course I cannot know that the music file I have is accurate to the true original; but at least I can ensure that I don't change it any further.)

Back when MP3 was the norm, I recall various claims that "MP3 files sounded just like the original"; however, when I listened to them, I occasionally heard artifacts in certain recordings. (I don't recall the settings involved.)
And, when I investigated, I found that I WAS able to make up special test files which NO current MP3 encoder was able to encode without artifacts.
When you dig into the encoders, and the assumptions they make, it's often not difficult to figure out how to "trick" them (I used to test computer network products for a living).
And, with today's modern sampling synthesizers, any waveform I can concoct in a test file MIGHT turn up in electronic music.
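(A sketch of that kind of synthetic torture file — isolated single-sample clicks, a classic pre-echo stressor for transform codecs; standard library only, output name is made up:)

```python
import struct
import wave

# Write a 44.1 kHz / 16-bit mono test file: silence punctuated by
# single-sample clicks -- sharp transients are a well-known trigger for
# pre-echo artifacts in MP3-family encoders.
RATE, SECONDS = 44100, 2
frames = bytearray()
for i in range(RATE * SECONDS):
    sample = 30000 if i % 4410 == 0 else 0   # one click every 0.1 s
    frames += struct.pack("<h", sample)

with wave.open("torture.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```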

A similar situation occurs when a video tape master is converted to a DVD......
There is an algorithm which sets a filter level which is used to remove tape noise (which is necessary if you want to achieve a reasonable level of compression).
However, in specific instances, a particular visual feature (in this case dark swirling clouds or smoke), sometimes ends up "tricking the intelligence", and being INCORRECTLY removed by the filter.
(The encoder usually does a very accurate job - but, in some small percentage of instances, it gets it significantly wrong.)

I haven't tried this with AAC, so it's possible that it NEVER makes mistakes and removes something that might have been audible....
However, because of the complexity of the algorithms involved, I would have to run an awful lot of tests to claim that I was 100.0% sure it would NEVER happen.
Alternately, I would have to listen carefully to EVERY file I encoded to ensure that "it wasn't the one where the process failed".

Now, to a lot of people, reducing the size of their music library by 75%, even at the risk that one of their 10,000 songs might contain a single audible artifact, might seem like an excellent tradeoff.
However, since I have no issue whatsoever with storage size, and I'll admit to being a bit OCD when it comes to knowing what's going on, I'd still rather take the sure thing.
It also boils down to a matter of process.
If I wanted to fit a bunch of files on an iPod, I could encode them, then carefully compare each to the original to confirm that it was audibly perfect.
However, since I'm not going to discard the original copy anyway, that's an awful lot of extra work (the time it would take is worth more to me than the cost of buying a bigger SD card).
(My "library drive" is simply one of the three copies of each file I retain - one live copy plus two backups - so, if I were to create another AAC encoded copy, it would simply be another copy to keep track of.)

Also, to be honest, I tend to compartmentalize how I prioritize my music.
I could probably cheerfully get rid of about 90% of the CDs I currently own - and simply listen to whatever version of those tunes I can punch up on Tidal.
However, for the small percentage of my collection that I very much care about, a single bit out of place counts as a "fatal flaw".

And, much as everyone on this thread likes to disparage such claims.....
I absolutely have encountered situations where certain details or flaws became audible on a new piece of equipment (that were totally inaudible on my previous equipment).
Therefore, I am very disinclined to believe with absolute certainty that I cannot possibly discover differences tomorrow that really were inaudible today.
(And, again, the EASIEST way to ensure against that is simply to keep the original file intact.)

And, yes, I DID have quite a few MP3 copies of albums "back in the day"....
And, yes, it cost me a lot of money to go out and buy the CDs for all of them when I realized there was an audible difference.

 
