iPhone AAC vs. aptX and aptX HD in the real world

Jan 26, 2025 at 3:35 AM Post #361 of 375
You guys still listen to music? I just read sheet music notation now and can hear the music in my imagination. It’s the highest resolution encoding available.
 
Jan 26, 2025 at 4:34 AM Post #363 of 375
So say it then. AAC is the best there is.
Nothing will sound more accurate to human ears than AAC at high data rates.

“The best” is something else. Signal processing might produce something that sounds subjectively “better”, but it won’t be as accurate.
 
Jan 26, 2025 at 5:33 AM Post #364 of 375
Oh, I'm not arguing …
So you don’t even know that you’re arguing, that’s funny!
Sure, but you can't say two contradictory statements at the same time. There is either better sound than AAC or not.
He didn’t make “two contradictory statements at the same time”, and since AAC isn’t sound but a digital codec, it obviously cannot be “better sound”. A real engineer would know that; in fact even a beginner student engineer should, but you don’t, so what does that tell us? An engineer or student engineer would know that it’s about the “threshold of transparency”, not “better sound”, and this has been extensively studied and tested for the last 25 years or so by both the scientific and engineering communities, including the international standards and engineering organisations: the ITU, EBU, IEEE and numerous others.

What you have stated is a false dichotomy and therefore a fallacious argument. An engineer would know this, along with all the above but you apparently don’t know ANY of it, demonstrating yet again that your claim of being an engineer was a lie!
Yeah, that's funny when logic gets you, isn't it 😂
What’s even funnier is someone claiming to be on the side of logic and/or science while avoiding any semblance of logic or science: endlessly arguing fallacies, repeating BS assertions and flat-out lying, all while failing to provide even a single scrap of reliable evidence to support their claims. Just keep on digging your hole deeper and deeper! lol

G
 
Jan 26, 2025 at 5:34 AM Post #365 of 375
That's not all! Stop CD production, all high-end audio gear; fcuk, stop producing wires. All these wired headphones are gimmicks, nobody needs those, there is no difference from AAC Bluetooth. You simply can't hear the difference because 20 years ago a 256k codec was ideal for space saving 😂 This is science! I feel the millennial vibe, that's for sure. They know BS the best.
Instead of false analogies and strawman arguments trying to bait people into making absolute claims so they're as wrong as you are when you make your own, how about you consider the real situation and stop a fight that seems entirely supported by your ego and logical fallacies?
Your actual argumentation despite so many posts seems limited to this:
1/ More is better.
2/ Try that one device I keep bringing up.
Did I miss something? I admit to not reading all the boring, pointless back and forth, so tell me if there is more.

If not, let's address those 2 points. "More is better" is reasonable. Clearly oversimplified for real life, but who wouldn't want the idea of better whatever? It's appealing; I at least get that much.
But do you get more sound? Let's say the recording had a use for a bigger encoding box (which is often debatable for consumer releases, IMO); we still have to consider the level of fidelity of our playback gear and the level of noise where we're using it. You ignore that, or didn't think of it; you just jump to conclusions from a false, massively oversimplified model. If you're going to bring up engineering, maybe start by walking the walk.
But wait, there's more! You also entirely ignore psychoacoustic encoding and the limits of human hearing, which is kind of a problem when judging lossy encoders/decoders, as their entire reason for existing and working so well is the psychoacoustic model they are based on. Auditory masking is real; ignoring it is bad analysis.

IMO, as your reasoning is missing most of the relevant variables, you probably should reflect on it a little. As for actual evidence of audibility, like most people, you have offered none. So no evidence and false logic are what had you going for several pages. Still proud and feeling sarcastic?


About 2/ now. Can I assume that you keep bringing up one device because all your generalised certainty and motivation in this discussion come from how you feel using it in one anecdotal setup? As an engineer, surely you know the value of that. Issues can come from the codec, from the human or from the gear, so we're circling back to the same issue: why did you decide to blame the codec and ignore the rest?


Until I see supporting evidence that you actually can hear a difference, I will elect to start my hypothesis with human error. Then, if a miracle occurs and you substantiate your claim, that probably still wouldn't prove the codec itself is at fault.
If I take my own anecdotal experience, I have failed many ABX tests of lossy tracks vs. lossless. Of course, I'm talking about files I encoded myself, not about judging different masters and inventing stories about codecs to rationalise some flawed tests.
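Since the thread keeps coming back to ABX testing, it's worth spelling out the arithmetic behind judging a run: under the null hypothesis of pure guessing, correct answers follow a binomial distribution with p = 0.5. A minimal stdlib Python sketch (the function name is mine, not from any ABX tool):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the chance of scoring at least `correct` out of
    `trials` blind ABX trials if the listener were purely guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # ~0.0384, below the conventional 5% threshold
```

By contrast, 5/8 gives p of roughly 0.36 and proves nothing, which is why very short runs are meaningless.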
Besides low-bitrate tests, which are audible, I have had a few situations where the audibility was clear:
1/ The bitrate I set on my device was not the one actually used, because of bad reception between the source and my wireless gear. A few solutions lock the rate, and the music just keeps dropping out when the signal is bad, but the default Bluetooth behaviour is to fall back to a lower rate, or even a lower-quality codec, until the connection is stable.

2/ Intersample clipping. It is a fact that, if nothing is done about it, lossy files tend to end up with higher clipping. Some DAC manufacturers leave a few dB of headroom for that. Some encoders (like iTunes') shift the signal down a bit to mitigate the issue, but then of course that signal is now quieter, and who knows how many people misinterpreted that as some other sound difference. Some apps have a loudness feature of sorts that will take the highly compressed tracks most likely to show intersample clipping down a few dB, solving the issue. And on most devices, you can simply lower the digital volume of the source by 2 or 3 dB yourself and be done. In 2025, if the difference you're hearing is still intersample clipping on lossy files, I would argue that, in a more or less direct way, it's your fault.
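The overshoot mechanism behind intersample clipping is easy to demonstrate numerically: a full-scale sine at fs/4 with a 45-degree phase offset has every sample landing at about ±0.707, so the sample peak reads 0 dBFS while the reconstructed waveform peaks roughly 3 dB higher. A pure-Python sketch (the windowed-sinc interpolator here is illustrative, not any particular DAC's reconstruction filter):

```python
import math

FS = 44_100
F = FS / 4  # 11_025 Hz: exactly four samples per cycle

# Sample the sine so no sample lands on the waveform crest:
# with a 45-degree phase offset, every sample sits at +/-sin(45 deg).
N = 256
x = [math.sin(2 * math.pi * F * i / FS + math.pi / 4) for i in range(N)]

# Normalise so the largest *sample* is exactly 0 dBFS (1.0).
peak = max(abs(s) for s in x)
x = [s / peak for s in x]

def true_peak(samples, oversample=8, taps=32):
    """Estimate the intersample (true) peak by oversampling with a
    Hann-windowed sinc interpolation kernel."""
    best = 0.0
    for i in range(taps, len(samples) - taps):
        for k in range(oversample):
            t = i + k / oversample
            acc = 0.0
            for j in range(i - taps, i + taps):
                d = t - j
                if d == 0.0:
                    acc += samples[j]  # interpolation passes through samples
                else:
                    window = 0.5 * (1.0 + math.cos(math.pi * d / (taps + 1)))
                    acc += samples[j] * math.sin(math.pi * d) / (math.pi * d) * window
            best = max(best, abs(acc))
    return best

tp = true_peak(x)
print(f"sample peak: 0.00 dBFS, true peak: {20 * math.log10(tp):+.2f} dBFS")
# The true peak comes out around +3 dBFS: a full-scale signal can clip
# in the reconstruction filter even though no individual sample clips.
```

This is the same reason true-peak meters oversample before measuring instead of just taking the largest sample value.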

3/ There are some legendary killer tracks; usually, the more famous those become, the more likely it is that the codec makers will tackle the flaw (if they can). Nowadays, that mostly comes up with abandoned MP3 encoders.

4/ On occasion, something will simply not be done right. That has happened to me twice with freshly released DAPs: they would play a format, but something didn't sound right, and after I made the manufacturers aware of it, a firmware update fixed things.

Because I stay away from ultrasonic content and gear that could make a mess of it instead of just rolling it off, I have never had any of the sound differences (direct, or distortion resulting from extra ultrasonic content) that could come from such scenarios. But I'd consider that possibility too, depending on the sample rate of the lossless track used as the reference. In the same vein, most lossy files are at 48 kHz nowadays, while we tend to test them against 16/44.1 PCM, so it could be a good idea to check that the difference in sample rate isn't causing the audible difference instead of the lossy codec (or, if either file is resampled, that it isn't done so badly that it's audible).

And last, of course, the simple possibility that the codec is not transparent. Obviously that's also worth testing; it just happens that we have to properly control everything else first for a positive result to be conclusive about the codec.



I think most of those situations won't even exist for the typical user, so after checking for a loudness difference (not by ear!) and possible clipping, most people can go straight to testing the codec, which they should always encode themselves from the same lossless file they will use in the ABX. I was just trying to make a fairly exhaustive list because I know I'm talking to a serious engineer. :wink:
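Checking loudness "not by ear" can be as simple as computing RMS levels and applying a matching gain before the ABX. A minimal stdlib Python sketch (function names are mine, for illustration):

```python
import math

def rms_dbfs(samples):
    """RMS level in dB relative to digital full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def gain_to_match(reference, candidate):
    """Linear gain that brings `candidate` to the same RMS as `reference`."""
    return 10 ** ((rms_dbfs(reference) - rms_dbfs(candidate)) / 20)

# Toy example: the same 440 Hz sine, one copy 2 dB quieter
ref = [math.sin(2 * math.pi * 440 * i / 44_100) for i in range(44_100)]
quiet = [s * 10 ** (-2 / 20) for s in ref]

g = gain_to_match(ref, quiet)
matched = [s * g for s in quiet]
print(f"offset: {rms_dbfs(ref) - rms_dbfs(quiet):+.2f} dB, gain: {g:.4f}")
# A real comparison would also compensate for any codec delay before
# lining the decoded file up against the lossless original.
```

Formal listening-test practice generally calls for matching levels to within a small fraction of a dB, since even sub-dB level offsets are readily heard as "quality" differences.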
 
Jan 26, 2025 at 6:10 AM Post #367 of 375
A bit of ChatGPT for the science experts... (It doesn't "know", as it has never experienced it as a human... but...)

Is it true or false that humans de facto can't distinguish sounds transferred via the AAC Bluetooth codec to headphones from any other source, including other Bluetooth codecs or wired headphone connections?

It's false to claim this as a universal truth, but true for most practical cases. Here’s why:


• For most people: The AAC codec is good enough that they wouldn’t notice any difference compared to other codecs or wired connections, especially in everyday listening scenarios with typical consumer headphones. Factors like background noise, the quality of the headphones, and personal hearing ability matter more than the codec itself.


• For audiophiles or critical listening: Some can tell the difference. AAC performs inconsistently across devices because it relies on the encoding and decoding process, which varies depending on the hardware/software. It also has higher latency compared to some other codecs.

In short: For casual listening, differences are negligible or non-existent. For high-end audio setups or trained ears, other codecs (like LDAC or aptX) or wired connections might offer a noticeable improvement.


So it seems that anyone outside of this "scientific" forum has a different opinion... I think you should take on the world and try to stop further research into the subject, as it's futile. AAC already won decades ago...
 
Jan 26, 2025 at 6:35 AM Post #369 of 375
A bit of ChatGPT to science experts... (It doesn't "know" as it never experienced it as a human.... but...)
[rant] Observe the gradual transition of human knowledge towards "the world according to ChatGPT".

I have been warning about this for quite a while now :rolling_eyes:. Unintentional misinformation is getting engrained and slowly gaining unjustified credence.

LLMs are not ready for prime-time until semantic AI has been developed further. [/rant]
 
Jan 26, 2025 at 8:32 AM Post #370 of 375
A bit of ChatGPT to science experts...
And again (!), we ask for reliable evidence and the only thing you present is fallacies, lies and now quotes from ChatGPT. Do you have any actual reliable evidence or is it all just BS?
In short: For casual listening, differences are negligible or non-existent. For high-end audio setups or trained ears, other codecs (like LDAC or aptX) or wired connections might offer a noticeable improvement.
If you’re going to cite what ChatGPT states then well done, you’ve managed to prove yourself WRONG even by YOUR OWN standards of evidence! VNandor’s ChatGPT response states:

“High-End Systems: On high-end audio systems, the argument is that more details in the music become audible, potentially revealing compression artifacts. However, scientific studies indicate that AAC at high bitrates remains audibly transparent even on such systems unless you're using extremely low bitrates (e.g., 96 kbps or lower).” - Even your own source disagrees with you, that’s funny!

And also, how do you know what level of “audio setup” I (or others here) have or am used to, or what level of listening skills (trained ears)?

So it seems that anyone outside of this "scientific" forum has a different opinion...
Your own quoted source has the same opinion we’ve been stating, so according to this latest nonsense assertion, that means ChatGPT must be someone inside “this “scientific” forum”. Thanks for that wonderful example of your logic! lol
I think you should take on the world and try to stop further research into the subject, as it's futile. AAC already won decades ago...
Oh good, let’s end with another falsehood and fallacious (strawman) argument for good measure, even though you’ve already been accused of falsehoods and fallacious arguments. How deep is your hole now, any sign of the bottom yet or are you just going to keep digging until you can’t think of any more ridiculousness?

G
 
Jan 26, 2025 at 8:55 AM Post #371 of 375
When you have to depend on AI chat to make your arguments for you, it’s time to admit that you really don’t know what you’re talking about.

Say goodnight, Gracie.
 
Feb 14, 2025 at 10:55 AM Post #372 of 375
Are people still getting mad that AAC and Vorbis are already fully transparent by 160-192 kbps VBR? They have to ignore that these codecs don't even have the hard limits MP3 has, where even 320 kbps/V0 can suck: they can do 350-580 kbps in heavy passages, which MP3 can't. I hope they realise that AAC/Vorbis is what consoles since the PS2 and 360 have used to fit audio onto single carts/discs.
 
Feb 14, 2025 at 11:34 AM Post #373 of 375
Are people still getting mad that AAC and Vorbis are already fully transparent by 160-192 kbps VBR? They have to ignore that these codecs don't even have the hard limits MP3 has, where even 320 kbps/V0 can suck: they can do 350-580 kbps in heavy passages, which MP3 can't. I hope they realise that AAC/Vorbis is what consoles since the PS2 and 360 have used to fit audio onto single carts/discs.
The PS2 uses ADPCM for audio processing (and also supports outputting Dolby Digital or DTS). aptX and LDAC are based on ADPCM.
 
Feb 18, 2025 at 2:40 AM Post #374 of 375
The PS2 uses ADPCM for audio processing (and also supports outputting Dolby Digital or DTS). aptX and LDAC are based on ADPCM.
Yeah, for voice/effects/etc., but 95% of PS2 games used Vorbis/LAME, even the DVD-DL ones. WavPack Hybrid is also based on ADPCM, but it's a much improved version as of WavPack 5.8: it can reach transparency at 384 kbps ABR, and lossyWAV allows TVBR in WavPack Hybrid.

Only a few games managed to get true DD 5.1, since the 2 MB SPU struggled; 98% of surround PS2 games were Pro Logic II.
 
Feb 18, 2025 at 3:36 AM Post #375 of 375
Yeah, for voice/effects/etc., but 95% of PS2 games used Vorbis/LAME, even the DVD-DL ones. WavPack Hybrid is also based on ADPCM, but it's a much improved version as of WavPack 5.8: it can reach transparency at 384 kbps ABR, and lossyWAV allows TVBR in WavPack Hybrid.

Only a few games managed to get true DD 5.1, since the 2 MB SPU struggled; 98% of surround PS2 games were Pro Logic II.
Sorry, all technical specs say it was based on ADPCM, with Dolby Digital and DTS being bitstreamed for DVDs (and later games getting Dolby Surround Pro Logic II).

https://en.wikipedia.org/wiki/PlayStation_2_technical_specifications#Audio
 
