Bits Are Bits! Right?

Feb 1, 2025 at 10:20 PM Post #16 of 48
Hey there,

I just received a cheap CD player from SMSL earlier today. I haven't used a CD player in the last 15 years or so; instead I purchased the CDs and ripped them with the Apple Lossless Encoder to my computer's hard drive. I wanted to get back to playing CDs, so here I am. Well, playing the same CD in this cheap player I hear a significant improvement in bass, mids, sound stage, and a "quickness" I hadn't heard before. Same DAC. Same amplifier. Same headphone. Same sound level. Is this a bias on my part? Are the bits that Apple didn't encode actually making a difference in sound reproduction even though they shouldn't? Something else?

I'm sure I'll get a lot of feedback on this newbie question. All replies are appreciated.

Thanks!
 
Feb 2, 2025 at 5:24 AM Post #17 of 48
ALAC itself is lossless, so it doesn't change the level; but the two playback chains may still differ in level by a fraction of a dB, and even small level differences are reliably heard as "quality" differences. Level matching would most likely make the difference disappear.
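If you want to check, here's a rough Python sketch of measuring the level offset between two captures of the same passage (assuming the third-party numpy library; the array names and workflow are placeholders, not a prescribed method):

import numpy as np  # third-party: pip install numpy

def rms_db_offset(capture_a, capture_b):
    # Relative RMS level of two captures of the same passage, in dB.
    # Positive means capture_a is louder than capture_b.
    rms_a = np.sqrt(np.mean(np.square(capture_a)))
    rms_b = np.sqrt(np.mean(np.square(capture_b)))
    return 20 * np.log10(rms_a / rms_b)

# capture_a / capture_b would be float sample arrays recorded from the
# CD player chain and the computer chain at the same volume setting.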
 
Feb 2, 2025 at 5:35 AM Post #18 of 48
If Apple's lossless encoder didn't loose any bits, there wouldn't be any reduction in the file size (data compression), but there is!
It doesn’t lose bits, it just encodes them efficiently. Oversimplistically, it does the equivalent of writing, say, “50 zeroes” instead of “0” fifty times. No bits are lost; on decode, the exact same “0” fifty times is reconstituted, hence why it’s called a lossless codec. I know you know this, maybe you’re just getting caught up in the semantics of the word “lose”?
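A toy sketch of the idea in Python (run-length encoding, which is only a stand-in for the prediction-plus-entropy-coding schemes ALAC and FLAC actually use; the round-trip guarantee is the point):

def rle_encode(samples):
    # Collapse runs of identical values into [value, count] pairs.
    encoded = []
    for s in samples:
        if encoded and encoded[-1][0] == s:
            encoded[-1][1] += 1
        else:
            encoded.append([s, 1])
    return encoded

def rle_decode(encoded):
    # Expand [value, count] pairs back into the original sample list.
    out = []
    for value, count in encoded:
        out.extend([value] * count)
    return out

samples = [0] * 50 + [3, 3, 7]
packed = rle_encode(samples)          # [[0, 50], [3, 2], [7, 1]] -- far smaller
assert rle_decode(packed) == samples  # bit-for-bit identical on decode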
I guess I won’t get much love for this post, but my experience is that it is quite difficult, and certainly far more expensive, for a streaming chain to even match the sound quality of a very cheap CD transport.
Not a lot of love, because “experience” doesn’t necessarily constitute reliable evidence. When transporting/transferring analogue audio, the laws of physics dictate there must be some loss of information, and each different method of transporting it (vinyl, cassette, AM or FM radio, even just down a wire, etc.) will lose different amounts and types of information, e.g. add different amounts of noise and types of distortion. Digital audio, on the other hand, does not; that is why it was invented in the first place. It does not matter what method of transportation is used: the data is the same, there is no loss of information and there is no difference.

Maybe you’re getting confused by the fact that the information transported in the first place may be different; it may be a different master (as you mention) or, historically, a lossy codec employed at a bitrate that isn’t transparent, due to restrictive data bandwidth. The latter has been avoidable for many years; it isn’t difficult and is only marginally more expensive (for a higher tier, higher bitrate stream).

G

Edit: Spelling changed as per 2leftears observation!
 
Last edited:
Feb 2, 2025 at 8:26 AM Post #19 of 48
I know you know this, maybe you’re just getting caught up in semantics of the word “loose”?
Yes, I already mentioned in an earlier post that this may be down to semantics, albeit I was referring to the word "bits"

I do wonder how many of the audiophile misconceptions encountered re. digital data transmission stem from a failure to understand the difference between bits as a data signal carrier and bits as an information carrier. The two are NOT the same, and lossless compression is one technology that emphasises the difference. You would hope people are aware of the difference, but the debates encountered across the many forums sometimes make me wonder.

On a lighter note: before talking about semantics, maybe we should care about spelling. Right now I feel the urge to tighten those "loose" bits; them rattling around can't be any good... :xf_wink:
 
Feb 2, 2025 at 10:24 AM Post #20 of 48
I do wonder how many of the audiophile misconceptions encountered re. digital data transmission stem from the failing to understand the difference between bits as a data signal carrier, and bits as an information carrier.
Yes, it’s quite shocking how many audiophiles are prepared to argue that digital audio is in effect analogue audio, i.e. demonstrate they don’t know what digital audio is or how it differs from analogue audio. Just look at all the threads (and the snake oil products employed) in other subforums here regarding maintaining/improving the “quality” of digital data signals, and how vociferously they’ll argue against anyone presenting the basic facts of what digital data is and how data signals work.

BTW, good spot on the typo. In all fairness, I was talking about semantics “loosely”. That’s my excuse and I’m sticking to it, :L3000:

G
 
Last edited:
Feb 2, 2025 at 1:43 PM Post #21 of 48
Yes, my remark was about semantics. A wav file and a flac file encoded from it are not the same size. The flac file is smaller. If it wasn't, there wouldn't be a point in using flac. Smaller size means fewer bits, but fortunately there is an equal amount of information.
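A quick way to convince yourself, if you're curious (a rough Python sketch assuming the third-party soundfile library and a WAV/FLAC pair made from the same source; the file names are placeholders):

import hashlib
import soundfile as sf  # third-party: pip install soundfile

# Decode both files to raw PCM samples and hash them.
wav_pcm, wav_rate = sf.read("track.wav", dtype="int16")
flac_pcm, flac_rate = sf.read("track.flac", dtype="int16")

wav_hash = hashlib.sha256(wav_pcm.tobytes()).hexdigest()
flac_hash = hashlib.sha256(flac_pcm.tobytes()).hexdigest()

# The files differ in size on disk, but the decoded samples do not.
assert wav_rate == flac_rate
assert wav_hash == flac_hash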
 
Feb 2, 2025 at 2:25 PM Post #22 of 48
Yes, my remark was about semantics. A wav file and a flac file encoded from it are not the same size. The flac file is smaller. If it wasn't, there wouldn't be a point in using flac. Smaller size means fewer bits, but fortunately there is an equal amount of information.
I thought FLAC was also designed to better survive transmission. Fold in those wingtips. Bouncing music across satellites and through miles of wire has its wear and tear, and error correction isn't perfect.

I thought it would be obvious that a CD transport would be cleaner. But I'm a layman, what do I know. I don't even have the sense to be embarrassed by this, but I do want to be humble about it :)

Also, I don't think FLAC and WAV sound the same.

And no, I don't belong in this forum. Sorry ish
 
Last edited:
Feb 2, 2025 at 2:43 PM Post #23 of 48
I thought FLAC was also designed to better survive transmission. Fold in those wingtips. Bouncing music across satellites and through miles of wire has its wear and tear, and error correction isn't perfect.
I haven't heard about that.

I thought it would be obvious that a CD transport would be cleaner. But I'm a layman, what do I know. I don't even have the sense to be embarrassed by this, but I do want to be humble about it :)
Why would that be obvious?

Also, I don't think FLAC and WAV sound the same.
They should if they are from the same origin.

And no, I don't belong in this forum. Sorry ish
So why are you here if you feel that way? I am not in control of whether you are here or not. You are.
 
Feb 2, 2025 at 3:04 PM Post #24 of 48
I thought FLAC was also designed to better survive transmission. Fold in those wingtips. Bouncing music across satellites and through miles of wire has its wear and tear, and error correction isn't perfect.
Error correction is effectively perfect and therefore there is no wear and tear; that's the basis of digital data, as laid out and proven by Claude Shannon in 1948, and it is what enables the digital age. You appear to be confusing analogue information with digital information. Take the obvious example of a modern smartphone, which is capable of tens of billions of instructions per second across its cores, each instruction consisting of 64 bits, so on the order of a trillion bits a second. So, if there were even one bit error in a trillion, your smartphone would potentially crash every second. There would be no smartphones, or indeed a digital age, if everything crashed every second. Clearly, data integrity/error correction must be way better than one error in a trillion.
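Putting rough numbers on that argument (a Python sketch; all figures are illustrative assumptions, not measurements):

# Back-of-envelope check of the "crash every second" argument.
instructions_per_second = 20e9   # assumed: tens of billions across all cores
bits_per_instruction = 64
bits_per_second = instructions_per_second * bits_per_instruction  # 1.28e12

bit_error_rate = 1e-12           # assumed: one uncorrected error per trillion bits

errors_per_second = bits_per_second * bit_error_rate
print(f"expected uncorrected errors per second: {errors_per_second:.2f}")  # ~1.28

# Even a one-in-a-trillion uncorrected error rate would corrupt more than one
# instruction every second, so real data integrity must be far better still.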
I thought it would be obvious that a CD transport would be cleaner.
If anything, a CD transport would be more error prone (less clean), because CDs are liable to damage and the early error correction employed on CDs is limited in the number of consecutive errors it can correct.
Also, I don't think FLAC and WAV sound the same.
The data entering the DAC processor is identical, therefore they must sound the same. That doesn’t guarantee that your brain will necessarily perceive them as the same though, just that the output of your audio gear is identical.

G
 
Feb 2, 2025 at 3:41 PM Post #25 of 48
Kind responses, thank you.

I don't think error correction is perfect, but I have no science to offer. I will continue to question my beliefs and appreciate what I can learn from these forums. Ultimately, I think if you enjoy what you hear you're not wrong.

I do personally A/B occasionally, and have since first trying FLAC fifteen years ago (not blind, not X), and I consistently prefer WAV over FLAC from the same source. It's both louder and quieter at the same time (more dynamic?), and also denser. I'm consistently told I'm wrong about this, so I will continue to question my beliefs. FLAC is my format of choice when downloading, though.

I'll return to lurking :)
 
Last edited:
Feb 2, 2025 at 4:02 PM Post #26 of 48
I don't think error correction is perfect, but I have no science to offer.
I gave a practical demonstration of how it must be, otherwise the digital age could not exist. You could refer to Shannon's 1948 paper ("A Mathematical Theory of Communication"), where he mathematically proves how it can be effectively perfect. Or you could look up the internet and Ethernet protocols, which dictate that errors are detected and the correct data is resent, thereby ensuring perfect error correction. You can of course think whatever you like, but it appears to be in contradiction to probably the most well demonstrated scientific fact in the whole of human history!
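If it helps, here's a toy Python sketch of that detect-and-resend principle (a simplified stop-and-wait scheme over a simulated noisy link; an illustration only, not the real TCP state machine):

import random
import zlib

def noisy_link(payload: bytes) -> bytes:
    # Simulate a link that sometimes flips a bit in transit.
    corrupted = bytearray(payload)
    if random.random() < 0.3:  # assumed 30% corruption rate, for the demo
        corrupted[random.randrange(len(corrupted))] ^= 0x01
    return bytes(corrupted)

def deliver(payload: bytes) -> bytes:
    checksum = zlib.crc32(payload)  # sender attaches a CRC-32
    while True:
        received = noisy_link(payload)
        if zlib.crc32(received) == checksum:
            return received  # verified bit-perfect, however noisy the link
        # checksum mismatch: discard the packet and request a resend

data = b"one packet of PCM audio"
assert deliver(data) == data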
I do personally A/B occasionally (not blind, not X) and consistently prefer WAV over FLAC from the same source. It's both louder and quieter at the same time (more dynamic?), and also denser.
No one is questioning your preference or whether you perceive one as louder than the other. We’re just pointing out that WAV and FLAC are identical at the point of conversion, so by definition there are not any differences and therefore the differences you’re perceiving must be purely the result of your own perception, and have nothing to do with the file format.

G
 
Feb 2, 2025 at 4:20 PM Post #27 of 48
Photos sent over the internet in the 90's resulted in many errors. The algorithms to correct them drastically improved over time. I don't think the same attention was paid to music. But this is perhaps a myth for me to unlearn. I'm in over my head here.

Thanks again :)
 
Feb 2, 2025 at 4:25 PM Post #28 of 48
Kind responses, thank you.

I don't think error correction is perfect, but I have no science to offer. I will continue to question my beliefs and appreciate what I can learn from these forums. Ultimately, I think if you enjoy what you hear you're not wrong.
There are many different error correction schemes used for digital data, depending on the storage medium & transmission technology. Some are more robust than others.

  • Some have error detection and perfect data recovery, and will return an error message if 100% recovery wasn't feasible (e.g. computer hard disk drives). I.e. it is either 100% correct data, or it is an unrecoverable error and a read abort with an error message.
  • Some have error detection and error correction, and will either rewrite the medium or move the data to a different location on the medium before 100% error correction becomes impossible (e.g. computer solid state drives).
  • Others have error detection, and data re-transmission upon detection of a corrupted packet (e.g. Ethernet)
  • Others again have error detection, but no data re-transmission upon detection of a corrupted packet (e.g. USB Audio)
  • The music (audio) CD (CD-DA) has error detection and error correction, with 100% recoverable error correction up to a certain error rate. Beyond that error rate it will employ error concealment, in conjunction with the CIRC cross-interleaving, which greatly improves the ability to conceal errors (a toy illustration follows this list).
  • The data CD (CD-ROM) inherits its fundamental data structure from the audio CD and employs the same basic level of error correction. But whilst the further ability for error concealment makes sense in the context of audio, it doesn't make sense for computer data discs, which is why the CD-ROM omits the error concealment but adds an additional layer of data redundancy and error correction instead (at a slight loss of data capacity).
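To illustrate the concealment idea in the CD-DA bullet above, a toy Python sketch that patches a sample flagged as uncorrectable by interpolating its neighbours (real CIRC-based concealment is considerably more sophisticated):

def conceal(samples, bad_indices):
    # Replace each sample flagged as uncorrectable with the average of its
    # neighbours (simple linear interpolation). CIRC de-interleaving spreads
    # burst errors apart, so bad samples rarely sit next to each other.
    out = list(samples)
    for i in bad_indices:
        left = out[i - 1] if i > 0 else 0
        right = out[i + 1] if i + 1 < len(out) else 0
        out[i] = (left + right) // 2
    return out

pcm = [100, 200, -32768, 400, 500]  # index 2 flagged by the decoder as bad
print(conceal(pcm, [2]))            # [100, 200, 300, 400, 500]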
 
Last edited:
Feb 2, 2025 at 4:50 PM Post #29 of 48
Photos sent over the internet in the 90's resulted in many errors. The algorithms to correct them drastically improved over time. I don't think the same attention was paid to music. But this is perhaps a myth for me to unlearn. I'm in over my head here.
I’m not sure what you’re talking about, maybe the very lossy compression that had to be employed on photographs in the 90’s due to the very low speeds/bandwidth available through the internet at that time? The Transmission Control Protocol (TCP) used by the internet for the World Wide Web, streaming media, email, etc. does not have algorithms to correct errors; the packets of data containing errors are simply resent. As there are no correction algorithms, they obviously cannot have “drastically improved over time”.

You are correct that the same attention was not paid to music though; as far as I’m aware, absolutely no attention was paid to music at all, or to cakes, diapers and a whole host of other things that have nothing to do with the internet. The ONLY thing that was paid attention to was digital data, that’s it, nothing else. The internet has no idea whether that data represents music, cake recipes, enquiries about diapers or a video of a cat being stupid. You are also correct that you’re in over your head here!

You clearly don’t understand the fundamental basics of digital information and I’m not sure what to advise you. I was in your shoes many years ago, but I don’t remember what resources I used to learn and they’re probably not available now anyway. I would recommend the “digital data” and “Transmission Control Protocol” pages on Wikipedia, but maybe someone else here can suggest a more easily digestible resource?

G
 
Last edited:
Feb 2, 2025 at 5:29 PM Post #30 of 48
I have no specific expertise re. the internet data protocols, but it is worth noting that server/client software employs an application-layer protocol on top of the internet's TCP/IP (e.g. HTTP for the web, plus FTP, IMAP, etc.)

I would imagine that re-sending data upon detection of corrupted packets is attempted only up to some maximum number of times before an excessive error rate is flagged, and I guess it is then up to the sending/receiving programs and their application-layer protocols to decide how to deal with that. Back in the 90's that may well have resulted in visible errors in images, prioritising data transmission rate over data quality if the client was downloading an image.

Whilst the fundamental internet TCP/IP protocol would be unaware that an image was being sent, the server and client software's HTTP layer might certainly be aware of it, and the client software might decide to continue the download and display a part-complete image (which will look quite different depending on, e.g., JPEG baseline vs. progressive format).

I have no expertise in audio data streaming over the internet, but internet data transmission rates can be quite choppy; your connection may be fast, but there may still be contention on the server. Since I don't do music (or movie) streaming, I have no idea how a streaming client would deal with that (error message, concealment, dropping the bit rate, something else?)
 
Last edited:
