

Discussion in 'Sound Science' started by jonasras, Mar 5, 2013.
  1. spruce music

    I tested DACs and preamps, all at line level. The results didn't prompt me to continue with things like power amps.
    Now, I don't remember it being a 125-tone test signal.  I seem to remember something like 12 or 15 tones.
    I also used wideband noise with different sections filtered out (usually an octave at a time) to see whether any IMD artefacts showed up in the filtered area.  I used recordings of things with significant output in the 20-40 kHz range (jangling keys, cymbals, and such), looking for artefacts at lower frequencies.  I also digitally generated a few square waves with fundamentals that weren't at even multiples of each other, to see what showed up when they were mixed.
    While the fact that it isn't a commonly done measurement wouldn't mean it has no value, I would expect to see something like this used more often if it showed results that correlate with audible differences that regular measurements miss.  And that doesn't seem to be the case.
  2. pinnahertz
    OK, well, I'm not sure what you did exactly, but the test in the paper used 125 tones 1kHz apart.  They used a 10kHz HPF to clean up the generator and keep everything out of the test range, then used an analysis bandwidth of 30Hz to 8kHz, again with an 8kHz LPF to keep the test signal out of the analyzer and just look at the resulting products.  They had over 100dB of dynamic range, and that was with 1988 hardware.  Nothing in your description comes close to this.  You did some reasonable, but pretty conventional, tests.  If you try to use recordings as the test signal you run into issues with the reference, and with analyzer bandwidth vs. amplitude response time.  Transients and high-resolution FFTs don't get along well.  The problem with using noise would be, again, low resolution.
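For a concrete picture of the excitation side, here's a minimal numpy sketch of that kind of multitone signal as described above (125 tones at 1kHz spacing).  The starting frequency, sample rate, and random phases are my assumptions, not from the paper, and the spacing is just a parameter:

```python
import numpy as np

def multitone(f0_hz=10_000.0, n_tones=125, spacing_hz=1_000.0,
              fs=384_000, dur=1.0, seed=0):
    """Equal-amplitude tones at f0, f0+spacing, f0+2*spacing, ...
    Random phases keep the crest factor of the sum manageable."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k in range(n_tones):
        phase = rng.uniform(0.0, 2.0 * np.pi)
        x += np.cos(2.0 * np.pi * (f0_hz + k * spacing_hz) * t + phase)
    return x / np.max(np.abs(x))  # normalise to full scale

x = multitone()  # tones from 10kHz up to 134kHz; needs a high sample rate
```

In the actual test rig, this signal would still pass through the analog HPF before the device under test, so that generator distortion can't land in the analysis band.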
    There is one company marketing an analyzer system that uses a methodology related to SCT, but their system is intended for production testing and is so completely out of reach of the small lab (much less the hobbyist) that we just won't see it.  The problem I see is that SCT is hard to do properly, and while it produces good data, it's new and unfamiliar, even after 28 years.  Of course, the guy who could have pushed it along is no longer with us (Jensen), so that's probably part of the reason it stalled.  That doesn't mean it isn't valid; it may even be one of the big keys to audible differences between devices that measure similarly with conventional methods.  The brass ring would be correlation with audibility and sound quality.
    I recognize that replicating the test in the paper is a big undertaking.  I've tried it myself and am still missing a few key components to pull it off.  Just generating the test signal is non-trivial, and then you really need a few good analog filters (they built theirs with Jensen's 990 opamps...I still have a few around).  We really need an REW-level software generator, and somebody to put together the analog filters.  The FFT part we have.
    I doubt anyone would see the value or point of spectral contamination testing without the full rig, though.  
  3. spruce music

    Okay, bad memory.  I read some version of this.  At one time it was attached to an article about Jensen and Sokolich.  Assuming that memory isn't bad too.
    The latter part of this loosely describes the version in the Jensen and Sokolich article.
    So what frequency did this other spectral contamination signal start at?  Obviously it extends to at least 125 kHz.  I'm not sure I see the direct connection with audio frequencies if the test signal is ultrasonic, so there must be some idea(s) missing in how I'm viewing this.  The near-ultrasonic part I get, but why go so far beyond 20 kHz?
  4. pinnahertz
    (from the paper) "One of our favorite excitation patterns is made up of energy at 120Hz intervals from 10kHz to 25kHz with the analysis window between 30Hz and 8kHz.  Another pattern utilizes excitation at supersonic frequencies to show resulting cross modulation products in the audio range.  Another interesting possibility could use energy covering the entire audio range except for an empty "window" in the mid-frequency range."
    However, using the proper filters is very important in extending the dynamic range of the test. The block diagram tells the story.
    However, since nothing was standardized either on the excitation pattern side or the analysis side, I guess you could use a multitude of different types of excitation patterns.  The important part seems to be using many frequencies that push up into the higher end where the potential for nonlinearity may be present, and extending the dynamic range of the analysis system. 
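The analysis side then amounts to: look inside the quiet 30Hz-8kHz window and see what leaked in.  Here's a rough software-only sketch of that idea; with no analog filters it gets nowhere near the paper's 100dB+ dynamic range, and the windowing and metric are my own choices:

```python
import numpy as np

def contamination_db(x, fs, lo=30.0, hi=8000.0):
    """Level (dB) of the strongest component inside [lo, hi] relative to the
    strongest component anywhere in the spectrum (i.e. the excitation)."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (f >= lo) & (f <= hi)
    return 20.0 * np.log10(X[band].max() / X.max())

fs = 96_000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 20_000 * t)                # excitation only
dirty = clean + 1e-3 * np.sin(2 * np.pi * 1_000 * t)  # plus a -60dB product
print(contamination_db(clean, fs), contamination_db(dirty, fs))
```

With the clean input the window shows only the numerical noise floor; add a -60dB in-band product and the metric reports roughly -60dB.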
  5. spruce music

    I used a variant of the second one described.  I left octaves blank each time and went up to 20 kHz.  I didn't do the filtering on the analog end.  I would digitally filter the recorded signal, which allowed me to listen to the blank octave and to amplify it.  I didn't uncover much that was interesting.  So I understand this is not exactly according to the block diagram.
  6. Maddog510
    If an Apple Lossless track has a higher bitrate, like over 1000kbps, does it take more time to fully decompress?
  7. watchnerd
    Compared to what?
  8. Maddog510
    Well I'm assuming that a larger ALAC file with a higher bitrate takes a little more time to decompress than an ALAC file that has a bitrate in the 800kbps range.
  9. watchnerd
    In theory, sure.  In practice, it's such a trivial amount of data for any modern processor, there's not much point in measuring it.
    Are you worried about some kind of problem?
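For a rough sense of scale, here's a sketch timing the decompression of about ten megabytes, roughly a minute of CD-quality PCM.  It uses zlib as a stand-in rather than the actual ALAC codec (an assumption on my part), so the absolute number is only illustrative, but it shows why decode time is a non-issue on a modern CPU:

```python
import time
import zlib

# ~10 MB of stand-in "PCM" (about a minute of 16-bit/44.1kHz stereo).
pcm = bytes(range(256)) * 41_344
blob = zlib.compress(pcm, level=6)

t0 = time.perf_counter()
out = zlib.decompress(blob)
elapsed_ms = (time.perf_counter() - t0) * 1000.0

print(f"decompressed {len(out) / 1e6:.1f} MB in {elapsed_ms:.1f} ms")
```

On any recent machine this finishes far faster than the minute of audio it represents, i.e. well under real time.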
  10. Maddog510
    No, I'm not worried about anything, just curiosity. I wish there were a way to tell if the tracks are fully decompressed or not.
  11. pinnahertz
    If they weren't decompressed they would sound like noise, if you could get the stream to a DAC at all.  You can't partially decompress a stream and expect to get good audio; it's pretty much an all-or-nothing deal.  A compressed bitstream is completely different from an uncompressed one.
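The "it would sound like noise" point is easy to demonstrate.  Here's a small sketch, with zlib again standing in for a lossless codec (my assumption, but the principle carries over): compress a pure tone, then misread the compressed bytes as if they were PCM samples.  The tone concentrates its energy in one FFT bin; the compressed bitstream spreads flat across the whole spectrum, which is exactly what noise looks like:

```python
import zlib
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
tone = (16_000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)  # pure 440Hz

blob = zlib.compress(tone.tobytes())
# Misinterpret the compressed bitstream as 16-bit PCM samples:
garbage = np.frombuffer(blob[: len(blob) // 2 * 2], dtype=np.int16).astype(float)

def spectral_peakiness(x):
    """Ratio of the largest FFT magnitude to the mean magnitude."""
    X = np.abs(np.fft.rfft(x))
    return X.max() / X.mean()

print(spectral_peakiness(tone.astype(float)), spectral_peakiness(garbage))
```

The tone's ratio lands in the thousands; the misread bitstream's is in the single digits, i.e. a flat noise spectrum.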
  12. Maddog510
    OK, well, there's no noise, so that would mean full decompression is pretty much instant. Am I correct?
  13. pinnahertz
    The fact that you're getting good audio means it was decompressed.  There's no "fully" since there's also no "partly" when it comes to decompression. It's either decompressed or not.
    No processing is technically "instant", though "instant" is also relative to the perception of time.  I'm not sure why there's any concern about how long a stream takes to decompress; it comes out fully baked, done, ready to hear.  If decoding latency were 1ms or 500ms, would it matter?  You'd never know either way; you have no reference for when decompression starts.
    Decompression time becomes an issue when audio and picture have to sync, but picture pretty much always takes longer, and there aren't any consumer-level applications for lossless video codecs.  Picture-with-sound applications almost always have a means of re-establishing sync.
  14. roulduke
    There are many threads about ripping CDs to iTunes as ALAC files, but is an album downloaded from a site like HD Tracks going to sound noticeably better? I have bought my favorite albums on LP, then MFSL LP, then CD, then MFSL CD, and now I'm looking at these high-bit-count albums that cost $18. Is it worth it?
  15. RRod
    It might be if the mixing/mastering on the HD versions were ALWAYS better, which simply isn't a given.