ALAC vs. FLAC
Oct 20, 2016 at 7:57 PM Post #166 of 183
What did you use to generate the 125-tone test signal? Did you apply a 10kHz HPF before the DUT and an 8kHz LPF after the DUT? What devices did you test?


I tested DACs and preamps, all at line level. The results didn't provoke me to continue with things like power amps.
 
Now, I don't remember it being a 125-tone test signal. I seem to remember something like 12 or 15 tones.
 
I also used wideband noise, with different sections filtered out of the noise to see if any IMD artefacts showed up in the filtered area (usually filtering out an octave at a time). I used recordings of things with significant output in the 20-40 kHz range (jangling keys, cymbals, and such), looking for artefacts at lower frequencies. I also digitally generated a few square waves with base frequencies that were not even multiples of each other, to see what showed up when they were mixed.
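For reference, here is roughly what generating that kind of octave-notched noise probe could look like in Python with numpy/scipy. This is only a sketch of the idea described above, not the actual signal that was used; the sample rate, the octave that gets blanked, and the output filename are all placeholder choices of mine.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs = 192_000                    # high rate so the noise extends well past 20 kHz
dur = 10.0                      # seconds of probe signal
f_lo, f_hi = 2_000, 4_000       # the octave to leave empty (placeholder choice)

rng = np.random.default_rng(0)
noise = rng.standard_normal(int(fs * dur))

# A steep band-stop carves the empty octave out of the wideband excitation;
# anything the DUT puts back into this band is added noise or distortion.
sos = signal.butter(8, [f_lo, f_hi], btype="bandstop", fs=fs, output="sos")
probe = signal.sosfiltfilt(sos, noise)

probe /= np.max(np.abs(probe)) * 1.05        # leave a little headroom
wavfile.write("notched_noise_probe.wav", fs, (probe * 32767).astype(np.int16))
```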
 
While the fact that it isn't a commonly done measurement wouldn't mean it is of no value, I would think you would see something like this used more often if it showed results that correlate with audible differences that regular measurements miss. And that doesn't seem to be the case.
 
Oct 20, 2016 at 8:35 PM Post #167 of 183
 

Ok, well, I'm not sure what you did exactly, but the test in the paper utilized 125 tones 1kHz apart. They used a 10kHz HPF to clean up the generator and keep anything out of the test range, then used a test bandwidth of 30Hz to 8kHz, again with an 8kHz LPF to keep the test signal out of the analyzer and just look at the resulting products. They had over 100dB of dynamic range, and that was with 1988 hardware. Nothing else in your description even comes close to this. You did some reasonable, but pretty conventional, tests. If you try to use recordings as the test signal you run into issues with the reference, and with analyzer bandwidth vs. amplitude response time. Transients and high-resolution FFTs don't get along well. The problem with using noise would be, again, low resolution.
 
There is one company marketing an analyzer system that uses a methodology related to SCT, but their system is intended for production testing and is so completely out of reach of the small lab (much less the hobbyist) that we just won't see it. The problem I see is that SCT is hard to do properly, and while it presents good data, it's new and unfamiliar, even after 28 years. Of course, the guy who could have pushed it along is no longer with us (Jensen), so that's probably part of the reason it stalled. That doesn't mean it's not valid, and it may even be one of the big keys to audible differences in devices that measure similarly with conventional methods. The brass ring would be correlation with audibility and sound quality.
 
I recognize that replicating the test in the paper is a serious undertaking. I've tried myself, and I'm still missing a few key components to pull it off. Just generating the test signal is non-trivial, and then you really need a few good analog filters (they built theirs with Jensen's 990 opamps... I still have a few around). We really need an REW-level software generator, and somebody to put together the analog filters. The FFT part we have.
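Since the FFT part is the piece we do have, here is a minimal sketch of that half, assuming the DUT output has been captured to a WAV file (the filename and analysis parameters are placeholders of mine): average the spectrum and look only inside the 30Hz-8kHz window, where the excitation itself should contribute nothing.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs, y = wavfile.read("dut_output.wav")       # hypothetical capture of the DUT output
y = y.astype(np.float64)
if y.ndim > 1:
    y = y[:, 0]                              # analyze one channel
y /= np.max(np.abs(y)) + 1e-12

# Long, averaged FFT for a low noise floor; only the 30 Hz - 8 kHz window matters,
# since the excitation lives entirely above 10 kHz.
f, pxx = signal.welch(y, fs=fs, nperseg=1 << 16, average="median",
                      scaling="spectrum")
band = (f >= 30) & (f <= 8_000)
worst_db = 10 * np.log10(pxx[band].max() + 1e-30)
print(f"Worst in-band contamination product: {worst_db:.1f} dB (relative, uncalibrated)")
```

In the real rig it's the analog 8kHz LPF in front of the analyzer that buys the extra dynamic range; a purely digital analysis like this is ultimately limited by the capture ADC.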
 
I doubt anyone would see the value or point of spectral contamination testing without the full rig, though.  
 
Oct 20, 2016 at 9:58 PM Post #168 of 183


Okay, bad memory.  I read some version of this.  At one time it was attached to an article about Jensen and Sokolich.  Assuming that memory isn't bad too.
 
http://www.tmr-audio.com/pdf/jon_risch_biwiring.pdf
 
The latter part of this loosely describes the version in the Jensen and Sokolich article.
 
So what frequency did this other spectral contamination signal start at? Obviously it extends to at least 125 kHz. I am not so sure I see the direct connection with audio frequencies if the test signal is ultrasonic, so there must be some idea(s) missing in how I am viewing this. The near-ultrasonic I get, but why so far beyond 20 kHz?
 
Oct 20, 2016 at 10:17 PM Post #169 of 183
 

(from the paper) "One of our favorite excitation patterns is made up of energy at 120Hz intervals from 10kHz to 25kHz with the analysis window between 30Hz and 8kHz. Another pattern utilizes excitation at supersonic frequencies to show resulting cross modulation products in the audio range. Another interesting possibility could use energy covering the entire audio range except for an empty "window" in the mid-frequency range."
 
However, using the proper filters is very important in extending the dynamic range of the test. The block diagram tells the story.
 
However, since nothing was standardized on either the excitation-pattern side or the analysis side, I guess you could use a multitude of different excitation patterns. The important part seems to be using many frequencies that push up into the higher range where the potential for nonlinearity may be present, and extending the dynamic range of the analysis system. Closely spaced ultrasonic tones intermodulate, and their difference products fall back into the audio band; for example, tones at 21 kHz and 25 kHz can produce a second-order difference product at 4 kHz, well inside a 30Hz-8kHz analysis window.
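To make the paper's first pattern concrete, here is a rough Python sketch of that kind of excitation: tones at even intervals from 10 kHz up to 25 kHz, so the 30Hz-8kHz analysis window should contain nothing but cross-modulation products. The exact spacing, phases, sample rate, and level are my placeholder choices, and the analog HPF/LPF stages of the actual rig are only noted in comments.

```python
import numpy as np
from scipy.io import wavfile

fs = 96_000                               # comfortably above the 25 kHz top tone
dur = 5.0
t = np.arange(int(fs * dur)) / fs

tones = np.arange(10_000, 25_000, 120)    # ~125 tones at 120 Hz intervals, 10-25 kHz
rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, tones.size)   # random phases keep the crest factor sane

x = np.zeros_like(t)
for f0, ph in zip(tones, phases):
    x += np.cos(2 * np.pi * f0 * t + ph)
x /= np.max(np.abs(x)) * 1.05             # normalize with a little headroom

wavfile.write("sct_excitation.wav", fs, (x * 32767).astype(np.int16))
# In the hardware setup this signal would then pass through the analog 10 kHz HPF,
# drive the DUT, and the DUT output would go through the 8 kHz LPF before the
# analyzer, so only products falling below 8 kHz reach the measurement.
```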
 
Oct 20, 2016 at 10:51 PM Post #170 of 183


I used a variant of the second pattern described. I left octaves blank each time and went up to 20 kHz. I didn't do the filtering on the analog end; instead I would digitally filter the recorded signal, which allowed me to listen to the blank octave and to amplify it. I didn't uncover much that was interesting. So I understand this is not exactly according to the block diagram.
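As I understand the "listen to the blank octave" step described above, it could be sketched like this: band-pass the captured DUT output to the octave that was left empty in the excitation, apply gain, and write it out for listening. Filenames, octave edges, and gain are placeholders, not the actual values used.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs, y = wavfile.read("dut_capture.wav")    # hypothetical recording of the DUT output
y = y.astype(np.float64)
if y.ndim > 1:
    y = y[:, 0]
y /= np.max(np.abs(y)) + 1e-12             # normalize regardless of source bit depth

f_lo, f_hi = 2_000, 4_000                  # the octave left blank in the probe signal
sos = signal.butter(8, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
octave = signal.sosfiltfilt(sos, y)

octave *= 10 ** (30 / 20)                  # +30 dB so residual products become audible
octave = np.clip(octave, -1.0, 1.0)
wavfile.write("blank_octave_amplified.wav", fs, (octave * 32767).astype(np.int16))
```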
 
Jan 30, 2017 at 4:49 PM Post #172 of 183
  If an Apple Lossless track has a higher bitrate like over 1000kbps, does it take more time to fully decompress?

 
Compared to what?
 
Jan 30, 2017 at 5:42 PM Post #174 of 183
Well I'm assuming that a larger ALAC file with a higher bitrate takes a little more time to decompress than an ALAC file that has a bitrate in the 800kbps range.

 
In theory, sure. In practice, it's such a trivial amount of data for any modern processor that there's not much point in measuring it.
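If you did want to put a number on it anyway, a quick and dirty way (assuming ffmpeg is installed; "track.m4a" is a placeholder for any ALAC file) is to time a full decode to nowhere and compare it to the track's playing time:

```python
import subprocess
import time

start = time.perf_counter()
# Decode the whole ALAC file and discard the PCM output.
subprocess.run(
    ["ffmpeg", "-v", "quiet", "-i", "track.m4a", "-f", "null", "-"],
    check=True,
)
elapsed = time.perf_counter() - start
print(f"Decoded the entire file in {elapsed:.2f} s")
```

On any modern machine the result is a small fraction of the track's duration, which is why the bitrate difference between one ALAC file and another doesn't matter in practice.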
 
Are you worried about some kind of problem?
 
Jan 30, 2017 at 6:28 PM Post #176 of 183
No, I'm not worried about anything. Just curiosity. I wish there was a way to tell if the tracks are fully decompressed or not.

If they weren't decompressed they would sound like noise, if you could get the stream to a DAC at all. You can't partially decompress a stream and expect to get good audio; it's pretty much an all-or-nothing deal. A compressed bitstream is completely different from an uncompressed one.
 
Jan 30, 2017 at 6:34 PM Post #177 of 183
OK, well, there's no noise, so that would mean full decompression is pretty much instant, am I correct?
 
Jan 30, 2017 at 7:17 PM Post #178 of 183

The fact that you're getting good audio means it was decompressed.  There's no "fully" since there's also no "partly" when it comes to decompression. It's either decompressed, or not.
 
No processing is technically "instant", though "instant" is also relative to the perception of time. I'm not sure why there's any concern about how long it takes a stream to decompress; it comes out fully baked, done, ready to hear. If decoding latency were 1ms or 500ms, would it matter? You'd never know either way, since you have no reference for when decompression starts.
 
Decompression time becomes an issue when audio and picture have to sync, but picture pretty much always takes longer, and there aren't any consumer-level applications for lossless video codecs. Picture-with-sound applications almost always have a means of re-establishing sync.
 
Mar 10, 2017 at 2:23 PM Post #179 of 183
There are many threads about ripping CDs to iTunes as ALAC files, but is an album downloaded from a site like HD Tracks going to sound noticeably better? I have bought my favorite albums on LP, then MFSL LP, then CD, then MFSL CD, and now I am looking at these high-bit-count albums that cost $18. Is it worth it?
 
Mar 10, 2017 at 2:29 PM Post #180 of 183

 
It might be if the mixing/mastering on the HD versions were ALWAYS better, which simply isn't a given.
 
