Dali's Soft Magnetic Composite Driver
Apr 6, 2024 at 11:28 AM Post #151 of 231
Where is this border (what is this indicator)?
The minimum bitrate for audible transparency depends on the codec. Some codecs are more efficient and reach transparency at a lower bitrate.
With a high degree of probability, the opposite situation is also possible.
For example, the Mark Levinson No. 5909 Bluetooth headphones, which support LDAC and are considered among the best.
The stronger argument for headphone quality is the app's EQ settings, along with the headphone's transducer and damping design. So, as others have been saying, in real-world practice lossless LDAC is overkill for audible reproduction.
 
Apr 6, 2024 at 11:42 AM Post #154 of 231
What are their arguments (reasons)?
Studies indicate that AAC is transparent at 128kbps stereo, for example — that is, at that bitrate the average listener can no longer distinguish it from the original. There are theoretical advantages to a higher bitrate, such as extended frequency response or dynamic range, but if you only need a format that covers the range of human hearing (for music reproduction), it isn't necessary to go further.
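For scale, here is a quick sketch (plain Python, nothing codec-specific) of how far below uncompressed CD audio the bitrates discussed in this thread sit. The codec figures are just the numbers quoted in these posts, not official specifications:

```python
# Uncompressed CD-quality PCM bitrate vs. common codec bitrates.
sample_rate_hz = 44_100
bit_depth = 16
channels = 2

pcm_kbps = sample_rate_hz * bit_depth * channels / 1000  # 1411.2 kbps

# AAC transparency estimates, SBC "high quality", LDAC maximum.
for codec_kbps in (128, 192, 328, 990):
    ratio = pcm_kbps / codec_kbps
    print(f"{codec_kbps} kbps is {ratio:.1f}x smaller than CD PCM")
```

Even LDAC's maximum 990kbps mode is still well below the raw PCM rate, which is why it needs its own (lossy) compression.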
 
Apr 6, 2024 at 11:54 AM Post #155 of 231
So, as others have been saying, in real-world practice lossless LDAC is overkill for audible reproduction.
I see another hypothesis here (possibly even collusion): lossless Bluetooth could collapse the huge market of the wired-audio industry.
There is only one argument that immediately pulls the rug out from under an audiophile's feet: whether there are technical losses during sound transmission. If you can technically ensure lossless transmission, there are no more questions to answer.

There is another argument.
I have wonderful DALI iO-12 Bluetooth headphones; they have three main connection modes. Guess which mode sounds worst? For me (and many other real owners) it is the passive wired mode, i.e. lossless transmission. The remaining modes are all active, and their sound quality is very close (USB is slightly better).
Earlier this year I had a stationary music system worth 15K. As soon as I heard the DALI iO-12, I immediately sold my system. Why? I did not hear a 14x improvement in my system over the DALI iO-12. I'll tell you a secret: there wasn't even a twofold improvement in sound quality.
DALI iO-12 are unique headphones.
 
Last edited:
Apr 6, 2024 at 1:27 PM Post #156 of 231
Studies indicate that AAC is transparent at 128kbps stereo, for example — that is, at that bitrate the average listener can no longer distinguish it from the original. There are theoretical advantages to a higher bitrate, such as extended frequency response or dynamic range, but if you only need a format that covers the range of human hearing (for music reproduction), it isn't necessary to go further.
Example.
We listen to music on DALI iO-12 headphones (via Bluetooth) using the Qobuz and Deezer streaming services. When listening, a difference in sound quality between Deezer and Qobuz (Qobuz clearly sounds better) is plainly audible. I assume that in both cases I am getting that same 128kbps stereo.
Question. Why do I hear a difference in sound quality when the same codec is used?
 
Apr 6, 2024 at 1:28 PM Post #157 of 231
If you think SBC is perfect for sound, why would companies spend a lot of money on developing other bluetooth codecs?
For the same reason Sony developed SACD as a consumer format, the same reason 24/96 and then 24/192/384 and other so-called hi-res formats were developed for consumers, the same reason R2R and NOS DACs are developed, etc.: as a marketing differentiator. That said, there are potentially legitimate reasons, for example lowering latency for certain applications (not music playback) or improving robustness to signal degradation, although I don't know whether LDAC actually achieves either of them.
Where is this border (what is this indicator)?
With SBC it's any profile lower than the recommended "high quality" setting. For other codecs it depends on the codec, as already mentioned.
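For context on where SBC's "high quality" figure comes from: the A2DP specification derives the bitrate from a "bitpool" parameter via a frame-length formula. A sketch for the recommended joint-stereo settings (bitpool 53, 8 subbands, 16 blocks, 44.1kHz), which works out to the commonly quoted ~328kbps — the function name and defaults here are mine, for illustration:

```python
import math

def sbc_bitrate_joint_stereo(bitpool: int, subbands: int = 8,
                             blocks: int = 16, fs: int = 44_100) -> float:
    """SBC bitrate (bps) for joint stereo, per the A2DP frame-length formula."""
    channels = 2
    frame_len = (4                                                # header bytes
                 + (4 * subbands * channels) // 8                 # scale factors
                 + math.ceil((subbands + blocks * bitpool) / 8))  # join + audio bits
    return 8 * frame_len * fs / (subbands * blocks)

# Recommended "high quality" profile: bitpool 53 at 44.1 kHz.
print(round(sbc_bitrate_joint_stereo(53) / 1000))  # 328
```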
With a high degree of probability, the opposite situation is also possible.
But we don’t know what probability and it’s still a possibility.
For example, the Mark Levinson No. 5909 Bluetooth headphones, which support LDAC and are considered among the best.
And those headphones would sound identical if they didn't support LDAC and only provided SBC (in "high quality" mode).
There is only one argument that immediately pulls the rug out from under an audiophile's feet: whether there are technical losses during sound transmission.
Sure, that sort of view is common amongst audiophiles about all sorts of things to do with digital audio, but it's due to them being suckered by audiophile marketing BS!

G
 
Apr 6, 2024 at 1:33 PM Post #158 of 231
Question. Why do I hear a difference in sound quality when the same codec is used?
A number of potential reasons:
1. It’s actually a different master or a different source of the same master.
2. Some sort of processing, such as loudness normalisation.
3. The internet, a server or your connection is busy and the bitrate has been stepped down.
4. There’s no actual difference in sound quality but you’re perceiving one due to perceptual/cognitive bias.
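Reason 2 is easy to picture with a sketch: a streaming app measures a track's integrated loudness and applies a fixed gain so every track plays back at a common level. The -14 LUFS target and the helper name below are assumptions for illustration; services document different targets and policies:

```python
# Hypothetical loudness-normalisation gain calculation.
TARGET_LUFS = -14.0  # assumed playback target; varies by service

def normalisation_gain_db(track_lufs: float) -> float:
    """Gain (dB) the player applies; negative values turn the track down."""
    return TARGET_LUFS - track_lufs

# A loud master at -8 LUFS is turned down by 6 dB; a quiet one at
# -18 LUFS may be turned up (or left alone, depending on the service).
print(normalisation_gain_db(-8.0))   # -6.0
print(normalisation_gain_db(-18.0))  # 4.0
```

A few dB of level difference between two services is easily perceived as a "quality" difference, which is why it tops this list of explanations.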

G
 
Last edited:
Apr 6, 2024 at 1:46 PM Post #159 of 231
A number of potential reasons:
1. It’s actually a different master or a different source of the same master.
2. Some sort of processing, such as loudness normalisation.
3. The internet, a server or your connection is busy and the bitrate has been stepped down.
4. There’s no actual difference in sound quality but you’re perceiving one due to perceptual/cognitive bias.
Processing (2) and bitrate (3) are possible; the rest are unlikely.
 
Apr 6, 2024 at 3:13 PM Post #160 of 231
Processing (2) and bitrate (3) are possible; the rest are unlikely.
I don't think bitrate is really an issue with today's internet; cell data now exceeds 100Mbps. The same is true of Bluetooth if your headphones are close to your phone (so the bitrate won't be stepped down). If there is an audible difference, it could well be:
1: Different masters of the song
2: Different sound level of the song (normalization)
3: These are different apps, so they can have different EQ settings
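On the bandwidth point above, even lossless streams need only a tiny fraction of a modern connection. A rough sketch — the FLAC figure is an assumed typical average, not a fixed value:

```python
# Rough headroom: typical stream bitrates vs. a 100 Mbps connection.
connection_mbps = 100
streams_kbps = {
    "AAC 256k": 256,
    "CD-quality FLAC (typical avg)": 1000,  # assumption: ~1 Mbps
    "LDAC max": 990,
}
for name, kbps in streams_kbps.items():
    headroom = connection_mbps * 1000 / kbps
    print(f"{name}: ~{headroom:.0f}x headroom")
```

So a network would have to degrade enormously before the stream itself, rather than the Bluetooth link, became the bottleneck.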
 
Last edited:
Apr 6, 2024 at 3:24 PM Post #161 of 231
I don't think bitrate is really an issue with today's internet; cell data now exceeds 100Mbps. The same is true of Bluetooth if your headphones are close to your phone. If there is an audible difference, it could well be:
1: Different masters of the song
2: Different sound level of the song (normalization)
3: These are different apps, so they can have different EQ settings
Yeah, it's probably (2) or (3).
 
Apr 6, 2024 at 3:56 PM Post #163 of 231
Studies indicate that AAC is transparent at 128kbps stereo, for example — that is, at that bitrate the average listener can no longer distinguish it from the original. There are theoretical advantages to a higher bitrate, such as extended frequency response or dynamic range, but if you only need a format that covers the range of human hearing (for music reproduction), it isn't necessary to go further.
The qualitative study I linked to by kamedo2 differs from that conclusion: notable degradation of signal quality is observed at 128k AAC across its 27 musical test samples, and significant degradation at 237k SBC with the same samples. Bigshot and gregorio also seem to concur that 192k AAC is the threshold for transparency in AAC.
 
Apr 6, 2024 at 4:10 PM Post #164 of 231
Okay, but why is there a difference in sound quality between Tidal and Qobuz when you listen to them through Roon (the song master is the same, the codec is the same).
But Tidal and Qobuz are different services, right? Do you have both apps, or are they served through the Roon app? If they're different services, you can't be certain they use the same master or normalization.
The qualitative study I linked to by kamedo2 differs from that conclusion: notable degradation of signal quality is observed at 128k AAC across its 27 musical test samples, and significant degradation at 237k SBC with the same samples. Bigshot and gregorio also seem to concur that 192k AAC is the threshold for transparency in AAC.
I was going off Wikipedia, which cites the ITU stating 128kbps VBR as transparent. Maybe there's some discrepancy between codec versions or settings. I've gone back in this thread, and gregorio mentions 170kbps being transparent, though most recordings are transparent at 128kbps. Either way, this doesn't change my point: the point of transparency differs per codec (each has its own efficiency), and it sits well below LDAC's 990kbps lossless mode.
 
Last edited:
Apr 6, 2024 at 6:03 PM Post #165 of 231
Well, the most nuanced term for a lossy file that's 24/96 would be "lossy hi-res". Pretty sure marketers wouldn't go for that :grinning: So marketing "hi-res" (which includes lossy) versus "lossless hi-res" would at least be a compromise. Consumers would understand lossless hi-res as the more premium tier, as they do now with Dolby Digital vs Dolby TrueHD.
Anything that can play a frequency above 48kHz is allowed to call itself hi-res. Fidelity or bit depth aren't involved. At least that's what Sony's golden logo means.

Some people hold themselves to better standards, but that's up to them; there is no obligation to do better. In the music industry, some got really mad at songs that used any sample recorded at 44 or 48kHz: even if the producer mixed more than a hundred tracks recorded at 24/192, one little short sample lifted from an old CD or MP3 meant some libraries would not accept the song in their hi-res catalogue. And to show how arbitrary this is, as gregorio mentioned, you can take some old recording from the '60s, capture it from your turntable because no better source exists, set your ADC to 32/384 for no reason, and it becomes an acceptable hi-res album for most of those same hi-res libraries.


The concept of fidelity connoted by the words "high resolution" is subjectively very strong, but actual fidelity is irrelevant to the definition of that logo, so why should it matter to them whether the file is lossy? It only matters to some because "lossy" is a word with a terrible mental image, even though lossy can be, and usually is, audibly clean. It's more a battle of words than any active effort toward better fidelity. Not that any of it will change how transducers suck more than almost anything else.
 
