iPhone AAC vs. Aptx and Aptx-hd real world

Discussion in 'Sound Science' started by neil74, Oct 4, 2017.
  1. shortwavelistener
    So, are there any Mastered for iTunes music files that are smashed to high levels, aka badly remastered?
     
  2. Monstieur
    aptX HD is not even true 16-bit. Bits are discarded at the very beginning due to its ADPCM-like encoding.
     
    shortwavelistener likes this.
  3. PiSkyHiFi
    There is no discarding at the beginning.... At the start the algorithm knows nothing about what it can safely reduce or discard without losing accuracy of representation. ADPCM initially uses a differential signal to reduce the number of bits required to represent the *same* digital signal as closely as possible, only discarding information when unusual cases arise.

    Imagine a single 1 kHz maximum-volume sine wave represented in 16-bit PCM: with each sample it rises from zero up to the upper limit (32767), falls back through zero and down to the lower limit (-32768), then returns to zero and repeats. ADPCM starts by storing the difference between consecutive samples instead of the absolute value of each sample. That difference is going to be a much smaller number than the full 16-bit range.

    We can estimate the maximum difference between consecutive samples in this scenario: within a quarter of one period the wave goes from the zero point up to the maximum level, and at 44.1 kHz a quarter period of a 1 kHz tone spans 44.1 kHz / (4 * 1 kHz), roughly 11 samples.
    So, roughly divide the full swing (0 to 32767) by those ~11 steps - call it a factor of 10 - and the samples run from zero, to roughly 3,300 (32767 / 10), to around 6,600, and so on. (The true steepest step, near the zero crossing where the sine rises fastest, works out to around 4,700.)

    So ADPCM in this case only needs about 10% of the original range to represent the same original signal. That's not 10% of the information though; we're using binary, so 10% of the range is roughly a saving of 3 bits per sample (2 to the power of 3 is 8, roughly 10).

    So straight away, ADPCM without doing anything else can represent this sine wave with roughly 13-14 bits per sample instead of 16 - without any loss at all.
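
    To make that concrete, here's a minimal sketch (plain Python, not the actual aptX code) that generates roughly one period of that full-scale 1 kHz tone at 44.1 kHz and compares the range needed for the raw 16-bit samples against the range needed for the sample-to-sample differences:

        import math

        FS = 44_100          # sample rate (Hz)
        F = 1_000            # tone frequency (Hz)
        FULL_SCALE = 32_767  # largest positive value in signed 16-bit PCM

        # Roughly one period of the sine, quantised to 16-bit integers.
        n = FS // F
        samples = [round(FULL_SCALE * math.sin(2 * math.pi * F * i / FS)) for i in range(n)]

        # Differential form: the first sample, then successive differences.
        diffs = [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, n)]

        def bits_needed(values):
            """Bits for a signed integer covering the largest magnitude seen."""
            peak = max(abs(v) for v in values)
            return peak.bit_length() + 1  # +1 for the sign bit

        print("peak raw sample:     ", max(abs(v) for v in samples))
        print("peak difference:     ", max(abs(v) for v in diffs[1:]))
        print("bits for raw PCM:    ", bits_needed(samples))       # 16
        print("bits for differences:", bits_needed(diffs[1:]))     # 14 for this tone

    The steepest step sits near the zero crossing and comes out around 4,700 for this particular tone, so the toy example lands at about 14 bits rather than 13, but it's the same ballpark and the same idea: the differences need fewer bits than the absolute values.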

    That's the starting point; it's actually more complicated than that of course, with adaptive ranges (the step size can vary) and predicted waveforms that are subtracted out to further reduce the range of the differential representation in terms of total information.

    If the encoded data rate is a reasonably high proportion of the original PCM data rate, then quite a lot of the signal comes through very close to lossless. It's not trying to pick out frequencies and reduce their accuracy using psycho-acoustic models like other codecs do; it's just a different technique to start from, with other methods layered on to adapt the compression.

    AptX uses sub-bands initially too, to help isolate and restrict possible ranges, based on narrower frequency bands.
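
    And just to illustrate the "adaptive range" idea in isolation (this is a crude toy, nothing like the real aptX algorithm, which uses tuned tables, prediction and the sub-band split): quantise each difference with a fixed, small number of levels, and let the step size grow when the quantiser keeps saturating and shrink when it doesn't.

        import math

        def adpcm_encode(samples, bits=4, start_step=16):
            """Toy adaptive differential quantiser: returns small integer codes."""
            step, predicted, codes = start_step, 0, []
            levels = 2 ** (bits - 1)                     # e.g. codes -8..+7 for 4 bits
            for s in samples:
                code = max(-levels, min(levels - 1, round((s - predicted) / step)))
                codes.append(code)
                predicted += code * step                 # track what the decoder will rebuild
                # crude adaptation: widen the step when saturating, narrow it otherwise
                step = min(32768, step * 2) if abs(code) >= levels - 1 else max(1, (3 * step) // 4)
            return codes

        def adpcm_decode(codes, bits=4, start_step=16):
            """Mirror of the encoder: rebuilds an approximation of the waveform."""
            step, predicted, out = start_step, 0, []
            levels = 2 ** (bits - 1)
            for code in codes:
                predicted += code * step
                out.append(predicted)
                step = min(32768, step * 2) if abs(code) >= levels - 1 else max(1, (3 * step) // 4)
            return out

        # Quick check on a 1 kHz full-scale tone: 4-bit codes instead of 16-bit samples.
        tone = [round(32767 * math.sin(2 * math.pi * 1000 * i / 44100)) for i in range(441)]
        rebuilt = adpcm_decode(adpcm_encode(tone))
        print("worst-case error:", max(abs(a - b) for a, b in zip(tone, rebuilt)), "out of 32767")

    That's lossy, obviously - 4 bits per sample can't follow the waveform exactly - but it shows the mechanism: the coder spends its few bits on the *differences* and keeps rescaling its range to follow the signal.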

    So.... I hope some of that made sense to you.

    This is sound science.
     
    Last edited: Jul 27, 2018
    shortwavelistener likes this.
  4. bigshot
    I wouldn't be surprised if some crept in, but the standards for qualification as Mastered for iTunes would preclude that. You can google up the specs on the Apple site if you're interested.
     
  5. castleofargh Contributor
    the guidelines and main concern for the whole "Mastered for iTunes" label were basically to limit the chances of getting intersample clipping (which can occur more easily with lossy formats and in general with lower sample rate PCM). so aside from Apple trying to turn anything they use into their own money-maker, the whole thing can pretty much be summarized as "we lower the gain slightly when needed". which in itself isn't a bad idea, all lossy encoders should care about that (or all mastering engineers should stop sticking the signal at -0.1dB). but it's subjectively less impressive than "Mastered for iTunes" ^_^.
     
  6. bigshot
    Don't they have some sort of requirement about the master file format? Or am I mixing it up with MQA?
     
  7. Monstieur
    The aptX implementation of ADPCM does not use sufficient bits for anything close to lossless representation. It's not that audible to humans, but a spectrogram shows the loss immediately - and it's far worse than MP3 or AAC.
     
    Last edited: Jul 27, 2018
  8. PiSkyHiFi
    I don't think any ADPCM codec does... ADPCM was originally developed for voice transmission.

    I'd agree that at the same data rate, AAC is a lot more transparent, it's called advanced for a reason.

    However, aptX HD is 576 kbps... that's about 40% of the uncompressed Red Book rate, which means the cleverness of AAC matters less because you are throwing less away. AAC is going to have more issues with phase reproduction, for instance, than aptX at this rate, because AAC completely deconstructs the original signal with an MDCT and applies selective precision to different frequencies, while aptX merely tries to approximate the original waveform in sub-bands. They are just different, but when the data rate is a large percentage of uncompressed, the differences between lossy codecs that cover the full range of frequencies we can hear become smaller.
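
    The rough numbers behind that "about 40%" (just back-of-the-envelope, nothing codec-specific):

        # Red Book CD audio: 44,100 samples/s x 16 bits x 2 channels
        redbook_bps = 44_100 * 16 * 2        # = 1,411,200 bit/s
        aptx_hd_bps = 576_000                # nominal aptX HD rate
        aac_bps = 256_000                    # a typical high-quality AAC rate

        print(f"aptX HD keeps ~{aptx_hd_bps / redbook_bps:.0%} of the Red Book bitrate")  # ~41%
        print(f"AAC 256 keeps ~{aac_bps / redbook_bps:.0%} of the Red Book bitrate")      # ~18%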
     
  9. PiSkyHiFi
    That does mean that for encoding redbook CD audio, AptX HD is a *lot* better than AptX.
     
  10. Steve999
    I knew I'd vaguely remembered this from somewhere. I've just got to say, I'm extremely grateful that someone took on the tremendous challenge of trying to get Sun Ra's catalog in the best possible shape. If that's all the good the Mastered for iTunes format and process and iTunes Plus AAC VBR encoding ever does, that's good enough for me. :L3000: But I have a feeling it's done a lot more good than just this. I never put this all together before. I'm feeling really good about my Apple Music subscription now.

    Here's the link:

    https://jazztimes.com/news/sun-ra-music-archive-reissues-21-albums-exclusively-for-itunes/

    And here's my limited cut-and-paste:

    [attached image: upload_2018-7-27_19-22-29.png]
     
    Last edited: Jul 27, 2018
  11. shortwavelistener
    Like this?

    [attached image: Untitled.png]
     
    Last edited: Jul 27, 2018
  12. bigshot
    It's not just more transparent, it's transparent.... just like redbook and high bit/sampling rate audio and a bunch of other compressed codecs at a sufficient data rate. The advantage of AAC over other compression codecs is that it achieves transparency at a lower data rate than most other codecs.
     
  13. PiSkyHiFi
    No mate.... more transparent doesn't mean it is transparent at all - I've told you, transparent is when you don't know if you're listening to a sound system or not.

    You have absolutely no idea what I'm talking about, do you?

    You mentioned sound science, but I see nothing scientific about comparing equipment and concluding that because they sound the same with compressed and uncompressed sources of the same content, they are transparent.

    It's possible your ears aren't great - it's possible your mind isn't great, it's possible you were having a bad day.
    It's possible the equipment was crap - or even just not up to the demands of such a task.

    The fact that you can tell whether it's a sound system or not is sufficient to say it is not transparent; that does not preclude degrees of transparency, which imply degrees of opacity.

    Now pay attention, or I shall taunt you a second time.
     
  14. PiSkyHiFi
    Is it possible that that ADPCM decoder is rendering to something other than 16-bit? It looks like the original was mastered with a low-pass filter into 16-bit, showing nothing above 21 kHz, probably deliberately.

    It's like the ADPCM file has aliasing right up to 22050 Hz - either just noise from the codec or possibly rendered into floating point or something.

    I recall using WAV files that had detail all the way up to 22050 Hz and their AAC versions losing detail above about 18.5 kHz - kind of the opposite of what I'm seeing here.

    Edit: I guess you're just pointing out a nicely mastered recording having noise added by the ADPCM codec.

    Try 576 kbps 24-bit ADPCM if you can and see what that shows. All the graph really tells us is where aliasing is added; the accuracy of the signal is not revealed just by looking at which frequencies are present.

    Edit again, not aliasing, sorry, I mean artifacts.
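
    If anyone wants to check properly rather than eyeball one picture, something like this (a sketch only - the file names are made up, and the two files need to be time-aligned, e.g. any codec delay trimmed off) shows both the frequency content and, more usefully, how far the decoded waveform actually drifts from the original, which is the part a spectrogram alone doesn't tell you:

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import spectrogram

        fs, original = wavfile.read("original.wav")          # hypothetical file names
        _, decoded = wavfile.read("codec_roundtrip.wav")

        def to_mono(x):
            """First channel as float (assumes integer 16-bit PCM WAV data)."""
            return (x[:, 0] if x.ndim > 1 else x).astype(np.float64)

        original, decoded = to_mono(original), to_mono(decoded)
        n = min(len(original), len(decoded))
        original, decoded = original[:n], decoded[:n]

        # What the posted pictures show: which frequencies are present in each version.
        f, t, s_orig = spectrogram(original, fs)
        f, t, s_dec = spectrogram(decoded, fs)

        def top_freq(spec, freqs, floor_db=-90):
            """Highest frequency bin within floor_db of the loudest bin."""
            level_db = 10 * np.log10(spec.mean(axis=1) + 1e-20)
            present = freqs[level_db > level_db.max() + floor_db]
            return present.max() if len(present) else 0.0

        print(f"content up to ~{top_freq(s_orig, f):.0f} Hz in the original")
        print(f"content up to ~{top_freq(s_dec, f):.0f} Hz in the decode")

        # What they don't show: the residual between the two waveforms (16-bit full scale assumed).
        residual = decoded - original
        rms = np.sqrt(np.mean(residual ** 2))
        print(f"residual RMS: {20 * np.log10(rms / 32768 + 1e-12):.1f} dBFS")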
     
    Last edited: Jul 27, 2018
  15. bigshot
    OK then, AAC 256 and other high-bitrate lossy codecs are *audibly identical* to redbook and HD audio. Once it hits the point where it's audibly identical, the sound quality can't get any better as long as your ears are human. The file just gets bigger.

    That isn't at all unusual. Super-audible frequencies in commercial music are undesirable; they're often filtered out in the mix. Ultrasonic frequencies can't improve sound quality, they can only degrade it. See the article in my sig, CD Sound Is All You Need.

    By the way, AAC filters off frequencies above 18 kHz if you use 192 kbps. If you use 256 or 320 kbps it goes up to the edge of human hearing.
     
    Last edited: Jul 28, 2018