Lossless vs. Lossy

Discussion in 'Sound Science' started by so74, Oct 30, 2013.
  1. gregorio
    No, to be honest I'm not sure what you mean by a lossless master. More to the point perhaps, I'm not sure you do either!
     
    You seem to be under the impression that the whole recording, mixing and mastering process occurs at some fixed "hi-rez" bit depth/sample rate, and that the resultant intermediate master can either be distributed as is or further processed ("mixed-down"), where data is lost and what you're left with is therefore a sort of lossy 16/44.1 final master. If I've understood correctly, what you seem to be saying is that you want to get your hands on this lossless "hi-rez" intermediate master? If so, I see this type of demand quite often from audiophiles but, at the risk of sounding insulting, all it does is demonstrate a complete lack of understanding of the basic principles of the recording, mixing and mastering processes. This misunderstanding among many audiophiles is not entirely surprising; from a pro-audio perspective they often appear gullible towards the marketing from various companies selling "hi-rez", "reference quality" or "studio" masters, recordings or equipment.
     
    The reason this is just marketing is that the "intermediate master" I mentioned above doesn't actually exist - I just made it up to try and follow audiophile logic! In reality, the recording, mixing and mastering process NEVER occurs at a single bit depth and sampling frequency, but ALWAYS uses a variety of bit depths and sample rates, not only at different points in the recording/mixing/mastering process but even at the same time! There is no audio file format to which these simultaneously different sample rates and bit depths could be exported, and no equipment which could play it back even if such a file format did exist!
     
    Audio is usually professionally digitised at a sample rate of >10MHz and a bit depth of around 6 bits or so. We can't write a file in this format or process it in a DAW, so it's decimated to say 24/96, 24/192 or whatever and then stored as a wav or aiff. So even before we can play it back, let alone before we can start to mix and process this recording, it's already "lossy". However, we NEVER mix at 24bit resolution (for reasons explained here); most commonly we mix at 64bit float or 56bit fixed, while the actual processing can occur at 48bit fixed, 32 or 64bit float, or indeed quite commonly even using some combination of these various bit depths. Additionally, some of the processors we use (particularly the non-linear ones) often upsample, process and then downsample again. So our intermediate master only ever exists virtually, is in 56bit fixed or 64bit float format and already contains one or many "lossy" processes. If we want to print this mix for distribution, we have to truncate or dither from our 56bit fixed or 64bit float to 24bit, 16bit or whatever distribution format is required by the client.
     
    Bear in mind that even DSD (SACD) goes through essentially this same process: you can't mix or process DSD in its raw state, so it's usually converted to 24/96, processed and mixed as standard (at 32bit, 48bit, 56bit, 64bit) and then converted back to DSD for distribution. In other words, ALL recordings and masters are "lossy" (I've employed your use of the term "lossy" in this post) and have to be "lossy", regardless of whether you are listening to DSD, 24/192 or 16/44. The only question is at what point this inevitable loss of data affects the quality of what you are listening to. The answer, as far as a final or production/distribution master is concerned, is somewhere below the CD standard of 16/44, and therefore you can consider a 16/44 master to be lossless or, more accurately, for the loss of data to be irrelevant.
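     
    A minimal numpy sketch of that final truncate-or-dither step - reducing a high-resolution float mix to 16-bit with TPDF dither. This is a generic illustration only (the function name and the test tone are made up for the example), not the actual tools or settings of any particular mastering chain:
     
        import numpy as np

        def dither_to_16bit(mix_float, rng=np.random.default_rng()):
            """Reduce a float mix bus (samples in -1.0..1.0) to 16-bit PCM
            using TPDF dither rather than plain truncation."""
            lsb = 1.0 / 32768.0                       # one 16-bit quantisation step
            # TPDF dither: the sum of two uniform noises, +/- 1 LSB peak
            tpdf = (rng.uniform(-0.5, 0.5, mix_float.shape) +
                    rng.uniform(-0.5, 0.5, mix_float.shape)) * lsb
            # Quantise to the 16-bit grid and clip to the legal range
            pcm16 = np.round((mix_float + tpdf) * 32767.0)
            return np.clip(pcm16, -32768, 32767).astype(np.int16)

        # Example: a -6 dBFS 1 kHz tone rendered in 64-bit float, then dithered to 16-bit
        t = np.arange(44100) / 44100.0
        mix = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
        print(dither_to_16bit(mix)[:8])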
     
    Again, I'm not intending to be insulting, just pointing out why what you are saying/asking for makes no sense.
     
    G
     
  2. nick_charles Contributor
     
    This will be easy then - I expect you can manage 20/20 
     
    A
    B
     
  3. adamlr

    (I don't know why it won't show the full quote...) 
     
    anyway, cheers, much appreciated. I think I'll go back to lurking now; this thread has become quite interesting 
     
     
  4. StudioSound
    This example is a very simple solo piano piece which has been encoded at a relatively quiet level.
    I suspect that the piece is simple enough, and the bitrate high enough, that nothing was discarded when encoding to a "lossy" format.
    They're also 24/96 files, which are not at all common with lossy, and I don't know how that might affect the encoder.
     
    So no, I cannot tell the difference with your cherry-picked example.
     
    Any time I have tried encoding complex tracks for comparison - particularly tracks which have been mastered "hot", as is standard practice today (automatically adjusted to -0.1dB) - it's generally not that difficult to tell lossy and lossless apart.
     
    Of course you can find cases where the "lossy" encoder has not had to discard anything and has not run into clipping, but that's rare, and disk space is cheap enough these days that it's not worth the possibility of discarding data. The only exception would be if you are putting files on a portable device, as the upper limit seems to be 160GB.
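     
    One way to check for that kind of clipping is simply to compare peak levels before and after the lossy round trip. A rough sketch, assuming the lossy file has already been decoded back to WAV, that the soundfile package is installed, and using purely hypothetical filenames:
     
        import numpy as np
        import soundfile as sf  # assumes the soundfile package is available

        def peak_dbfs(path):
            """Peak sample level of a file in dBFS (0 dBFS = full scale)."""
            data, _ = sf.read(path, dtype="float64")
            return 20 * np.log10(np.max(np.abs(data)))

        # Hypothetical filenames: the original "hot" master and the same track
        # after a lossy encode/decode round trip.
        for name in ("master.wav", "master_lossy_decoded.wav"):
            print(name, f"{peak_dbfs(name):+.2f} dBFS")
        # A decoded lossy file peaking above 0 dBFS suggests the encoder's
        # reconstruction has overshot the original -0.1 dBFS ceiling.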
     
  5. xnor
    So StudioSound, go ahead and choose a track yourself then. I'd be surprised if you could ABX 256 kbps AAC, let alone high-bitrate 320, which you claim you can "easily tell the difference" with.
     
  6. StudioSound

    Check this topic for someone else noticing the deficiencies of 256K AAC: http://www.head-fi.org/t/689031/itunes-aac-swirlies-i-dont-get-it
    And check this video (you only need to watch 5-10 minutes) for examples of distortion caused by AAC encoding: https://www.youtube.com/watch?v=BhA7Vy3OPbc&t=10m15s
     
  7. nick_charles Contributor
     
    Actually I chose that example as one that should be easy - noise would curtail the decay, and time-domain problems would cause pitch issues (i.e. warbling) which should be easy to detect - so yes, it was cherry-picked, but not for the reasons you thought. Nevertheless, here is another example, from a Punk Rock combo - anticipating your 20/20 DBT log 
     
     
    http://www.divshare.com/download/24739912-32e
     
    http://www.divshare.com/download/24739913-956
     
  8. nick_charles Contributor
  9. nick_charles Contributor
     
    With lossy encoding, stuff always gets thrown away - even a 320K mp3 file is no more than about 1/5th, or perhaps 1/4 at worst, of the size of the corresponding wav file. Stuff will often be lost at the high end, 19kHz and above, and the psychoacoustic model also throws away frequencies that are predicted to be masked by proximal frequencies. The encoder may allocate more or fewer bits to frequency bands in a particular frame depending on the signal level, but it will not refrain from cutting stuff out just because it happens to have a few bits to spare - it cuts stuff out when it considers it irrelevant, i.e. masked. Even allowing for better efficiency in encoding the audio data, you are losing a lot of the audio data itself. If you are saying you can't hear what has been discarded, then that is exactly how it is meant to work! 
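     
    For reference, the size ratio quoted above follows directly from the bitrates; a quick back-of-the-envelope calculation:
     
        # CD-quality PCM: 44100 samples/s x 16 bits x 2 channels
        pcm_kbps = 44100 * 16 * 2 / 1000       # = 1411.2 kbps
        mp3_kbps = 320                         # highest standard MP3 bitrate
        print(f"PCM: {pcm_kbps:.1f} kbps, ratio: {pcm_kbps / mp3_kbps:.1f}:1")
        # -> roughly 4.4:1, i.e. a 320K mp3 is a bit under 1/4 of the wav size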
     
  10. StudioSound
     
    Is 0.4% enough? I got to 8/8 before getting bored of listening to the same thing over and over.
     
    foo_abx 1.3.4 report
    foobar2000 v1.2.6
    2013/11/07 19:40:52

    File A: H:\Downloads\A.wav
    File B: H:\Downloads\B.wav

    19:40:52 : Test started.
    19:40:59 : 01/01  50.0%
    19:41:07 : 02/02  25.0%
    19:41:14 : 03/03  12.5%
    19:41:36 : 04/04  6.3%
    19:42:24 : 05/05  3.1%
    19:43:35 : 06/06  1.6%
    19:45:41 : 07/07  0.8%
    19:47:16 : 08/08  0.4%
    19:47:44 : Test finished.

     ----------
    Total: 08/08 (0.4%)
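     
    The percentages in the log are the probability of scoring at least that well by pure guessing; a quick sketch of the calculation (the 10/10 and 20/20 rows are added here purely for comparison):
     
        from math import comb

        def p_guess(correct, trials):
            """Probability of getting at least `correct` of `trials` right
            by guessing, with p = 0.5 per trial."""
            return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

        for c, n in ((8, 8), (10, 10), (20, 20)):
            print(f"{c}/{n}: {100 * p_guess(c, n):.4f}%")
        # 8/8 -> 0.3906% (the 0.4% in the log); 10/10 -> 0.0977%; 20/20 -> ~0.0001%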
     
  11. nick_charles Contributor
     Are you in the UK? (+5 hours)  
     
    I'd rather see 20/20 or at least 10/10, as it is possible to get lucky guessing - nevertheless, I'll accept that. 
     
  12. xnor
     
    This has nothing to do with what I was asking.
     
    a) We don't know what iTunes/Apple did to that file, or what the source was.
    b) Listening to M/S separately (and mostly 128 kbps) has what exactly to do with showing how you can easily distinguish lossless from high bitrate lossy? Precisely, nothing.
     
  13. brunk
    No, it doesn't need to go into a regression loop. You are (again) not being rational. We can get masters now, today, in 2013. There are people who record/master in 24/96 and DSD - granted, it's maybe 1% of mastering engineers, and the majority who use tape are disqualified from a comparison over the internet - but we can make a comparison.
     
  14. xnor
    What on earth are you talking about?
     
  15. Don Hills
     
    In addition to your good description of the process, there is one other reason to consider the CD as "lossless". The mastering engineer has a certain sound in mind. He knows when he hears it. Assuming he's producing a 44.1/16 CD master, if it doesn't sound exactly the way he wants he will tweak the input until he gets it. So the CD master is the "lossless" version(*). If he also has to produce a "hi res" version (e.g. 24/96 or SACD), he is again going to make the result sound the way he wants. (This is assuming that the two versions are intended to sound the same, which sadly is rarely the case.)
     
    (*) And many mastering engineers have tales of cases where what came back from the pressing plant didn't sound like what they sent, yet it nulled perfectly when compared with what was sent...
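     
    A minimal sketch of the kind of null test described above - subtract two sample-aligned files and look at the residual peak. It assumes the soundfile package and uses hypothetical filenames:
     
        import numpy as np
        import soundfile as sf  # assumes the soundfile package is available

        def null_test(path_a, path_b):
            """Subtract two sample-aligned files and return the residual peak in dBFS.
            A residual at or below roughly -90 dBFS for 16-bit material means the
            files are effectively identical."""
            a, _ = sf.read(path_a, dtype="float64")
            b, _ = sf.read(path_b, dtype="float64")
            n = min(len(a), len(b))
            residual = a[:n] - b[:n]
            peak = np.max(np.abs(residual))
            return -np.inf if peak == 0 else 20 * np.log10(peak)

        # Hypothetical filenames for the master that was sent and the pressing-plant return
        print(f"residual peak: {null_test('sent_master.wav', 'plant_return.wav'):.1f} dBFS")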
     