MQA: Revolutionary British streaming technology
Jan 11, 2017 at 11:20 AM Post #631 of 1,869
I'm extremely sceptical about the benefits of the MQA technology itself, to the point where I'm uninterested. What I am interested in is whether they're using better masters (or remastering) for the MQA albums, and whether that's what the improvement is. Been searching high and low for answers to that.

 
I suspect it is not a true remastering, though there may be some exceptions, considering the number of files available at Tidal.  It would seem that something akin to a signal EQ is occurring in the process to convert original tracks to an MQA format.   From what I could see, the MQA version is different enough to not be directly comparable to a FLAC version, in the same way that different masters are not comparable.  I guess, then, the answer is sort of, depending on the technical definition of mastering.
 
Wondering if a fully decoded MQA file could be recorded to PCM and compared in a blind listening test.  The test can't be done with software alone until a proper decoder is available that can be used for fast switching with precise volume-level matching.  It could be done with the right hardware, but it would be challenging to set up and proctor.
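 
If someone does capture one, a rough first pass (just a sketch; flac_version.wav and mqa_decoded.wav are placeholder names, and the two captures would need to be sample-aligned and rate-matched first) could be a null test with sox:
 
# Invert one capture and mix it with the other; the residual is the difference.
sox -m -v 1.0 flac_version.wav -v -1.0 mqa_decoded.wav diff.wav
# Measure the residual level (stats prints to stderr).
sox diff.wav -n stats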
 
Jan 11, 2017 at 11:23 AM Post #632 of 1,869
 
 
We can pretend to argue forever. If you believe ultrasound content will save your life, then go for stuff with a high sampling rate and be happy (DSD128 must be the real bomb). If, like me, you know for a fact that you can't hear the difference past 16/44 on correctly implemented DACs, ultrasounds will always be a waste of space however you encode them. It's really that simple IMO.

 
Agreed.  
 
I stopped caring about high resolution once I failed enough ABX tests.
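 
For anyone who wants to try the same: a simple way to set one up (a sketch, assuming a hi-res FLAC as the source; hires.flac is a placeholder name) is to make a CD-quality copy with sox and ABX the pair in something like foobar2000's ABX comparator:
 
# Requantize/resample a hi-res file to 16/44.1 with shaped dither.
sox hires.flac -b 16 -r 44100 cd_version.flac dither -s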
 
Jan 11, 2017 at 2:15 PM Post #633 of 1,869


You only need a modicum of common sense. ;)

The new encoding and origami data folding are not going to improve the SQ over existing uncompressed high-rez formats. It's NOT the bee's knees.
They cannot, just by use of some proprietary algorithm, improve already-existing recordings. How would they eliminate "time smear" that was embedded at the time of recording?
And they want your money, pretty simple isn't it?
 
Jan 11, 2017 at 2:17 PM Post #634 of 1,869
 
We can pretend to argue forever. If you believe ultrasound content will save your life, then go for stuff with a high sampling rate and be happy (DSD128 must be the real bomb). If, like me, you know for a fact that you can't hear the difference past 16/44 on correctly implemented DACs, ultrasounds will always be a waste of space however you encode them. It's really that simple IMO.

 
I know it has been brought up before (by RRod among others), but even 16/44.1 is more than enough.
 
I put together a little script that creates 10- to 16-bit versions of a given input file. The files are themselves 16-bit, but the data is of lower resolution. I've only gone through a few, but already at 12 bits it is getting really difficult to hear a difference.
 
#!/bin/bash
# Usage: ./bitdepth.sh input.wav
# Writes 10- to 16-bit versions (stored as 16-bit/44.1k WAVs); the stats
# effect prints to stderr, so 2>&1 | grep depth shows each file's Bit-depth.
for i in {10..16}
do
  sox "$1" -b 16 -r 44100 "${1%.*}-${i}bit.wav" \
    dither -s -p $i stats 2>&1 | grep depth
done
 
 
eio, this might be a good place to start. Rather than reading all sorts of opinions, test yourself and get some real data.
 
Jan 11, 2017 at 5:50 PM Post #635 of 1,869
 
 
We can pretend to argue forever. If you believe ultrasound content will save your life, then go for stuff with a high sampling rate and be happy (DSD128 must be the real bomb). If, like me, you know for a fact that you can't hear the difference past 16/44 on correctly implemented DACs, ultrasounds will always be a waste of space however you encode them. It's really that simple IMO.

 
I know it has been brought up before (by RRod among others), but even 16/44.1 is more than enough.
 
I put together a little script that creates 10- to 16-bit versions of a given input file. The files are themselves 16-bit, but the data is of lower resolution. I've only gone through a few, but already at 12 bits it is getting really difficult to hear a difference.
 
#!/bin/bash
# Usage: ./bitdepth.sh input.wav
# Writes 10- to 16-bit versions (stored as 16-bit/44.1k WAVs); the stats
# effect prints to stderr, so 2>&1 | grep depth shows each file's Bit-depth.
for i in {10..16}
do
  sox "$1" -b 16 -r 44100 "${1%.*}-${i}bit.wav" \
    dither -s -p $i stats 2>&1 | grep depth
done
 
 
eio, this might be a good place to start. Rather than reading all sorts of opinions, test yourself and get some real data.


Oh sure, I keep 16/44 as the reference because that's what we have.
I still believe a move to 48kHz will happen at some point, if only to stop the nonsense of video using 48 and audio using 44.1.
 
Jan 11, 2017 at 6:19 PM Post #637 of 1,869
However, what I am interested in is whether they're using better masters (or remastering) for the MQA albums and whether that's what the improvement is. If the masters are better than what I can find on CD and hi-res due to remastering, I may have an interest in obtaining some albums. Been searching high and low for answers to that.

This is also my main hope.  Originally it was part of the MQA claim that the original master would be reprocessed with an approved ADC, which would hopefully reduce the ridiculous level compression we have been putting up with for decades, and we would get our music back.
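 
(If a less squashed master ever does show up, it should be visible in the numbers: sox's stats effect prints peak and RMS levels, and only a few dB between them is the fingerprint of a loudness-war master. A quick check, nothing rigorous:)
 
# Compare "Pk lev dB" and "RMS lev dB" in the output.
sox track.wav -n stats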
 
Jan 11, 2017 at 6:21 PM Post #638 of 1,869

Go read about MQA: when asked about ripping, the first thing Dr. Stuart talks about is that pirating has decimated the recording industry and that we need a method that "makes music more available".  Translation: WE NEED TO GET RECORD COMPANIES PAID, BECAUSE THEY ARE THE SOURCE AND THEY HOLD ALL THE MARBLES.
 
  This is also my main hope.  Originally it was part of the MQA claim that the original master would be reprocessed with an approved ADC, which would hopefully reduce the ridiculous level compression we have been putting up with for decades, and we would get our music back.
 

 
Seems to me no one knows the answers here, or the chips involved, other than that it's embedded data below the noise floor.  But you are all sure you should shell out money every month for a subscription (and pay for a DAC that they license, hand-hold, and watch through the entire process; that's insane).  If it's merely a chip and a license, that's all it should be; this whole thing reads like propaganda that never gives a straight answer and just keeps discussing theories.  I want to know what this is in real terms: a chip? A license? A monthly streaming fee?  What does this do that USB Audio Class 2 doesn't?  Save internet bandwidth?  That wouldn't be a problem if we had internet speeds like some other countries.  And do they have a patent?  How come someone else can't copy this method, make a software decoder, and cut their chip, their license, and their data format out of the loop altogether?  I'm not saying I'm going to do this, but I don't like someone coming in and taking over the whole world with propaganda and a simple data scheme.
 
So everyone with a TOTL DAC will be out of luck, and now you have to buy one of these other DACs that Meridian has hand-held, or an AudioQuest DragonFly, because it doesn't have USB Audio Class 2?  And how come so many brands and record companies are already jumping on board, so fast?  Because it's some kind of attack on MP3 pirating, that's why.  I just want it explained how, because it's left out of all their propaganda about it.
 
Mod Edit - removed language and personal attacks
 
Jan 11, 2017 at 6:25 PM Post #639 of 1,869
   
I suspect it is not a true remastering, though there may be some exceptions, considering the number of files available at Tidal.  It would seem that something akin to a signal EQ is occurring in the process to convert original tracks to an MQA format.   From what I could see, the MQA version is different enough to not be directly comparable to a FLAC version, in the same way that different masters are not comparable.  I guess, then, the answer is sort of, depending on the technical definition of mastering.
 
Wondering if a fully decoded MQA file could be recorded to PCM and compared in a blind listening test.  The test can't be done with software alone until a proper decoder is available that can be used for fast switching with precise volume-level matching.  It could be done with the right hardware, but it would be challenging to set up and proctor.


It was originally supposed to be a true remastering.  That may be seriously compromised as you say, but I'm keeping an eye on it.
 
It is not EQ as such.  There is more to a signal than frequency response.  If only the digital master is available, but the ADC is known, and it is an early one with known issues, the claim is that some of those issues can be compensated for.  Jitter, I suspect, unfortunately may not be fixable, and early ADCs were not great there.
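 
Purely as an illustration of the general idea (not MQA's actual method): if an ADC's residual impulse response were known, a correction could in principle be applied as an ordinary FIR filter. sox's fir effect takes a file of coefficients; inverse_fir.txt here is a hypothetical precomputed inverse:
 
# Apply hypothetical correction coefficients to a digital master.
sox master.wav corrected.wav fir inverse_fir.txt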
 
I imagine it can.  Capturing the output of the Tidal desktop app and putting it through a DAC with a switchable apodizing filter would probably be the nearest simple way.  The Tidal app must output PCM.
 
Jan 11, 2017 at 6:29 PM Post #640 of 1,869
I still believe a move to 48kHz will happen at some point, if only to stop the nonsense of video using 48 and audio using 44.1.

 
Yeah, most software, hardware, and formats are 44.1/48-agnostic at this point. 44.1 is essentially just a vestige of the CD era; no real point in keeping it around.
Plus, 48 is a much nicer number than the messy 44.1.
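 
(coreutils' factor makes the point; the pair of 7s in 44100 is what makes integer ratios to other common rates so messy:)
 
$ factor 44100 48000
44100: 2 2 3 3 5 5 7 7
48000: 2 2 2 2 2 2 2 3 5 5 5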
 
Jan 11, 2017 at 6:48 PM Post #641 of 1,869
 
Oh sure, I keep 16/44 as the reference because that's what we have.
I still believe a move to 48kHz will happen at some point, if only to stop the nonsense of video using 48 and audio using 44.1.


Ironic, given 44.1kHz came from NTSC and PAL video (early digital audio was mastered through PCM adaptors onto video tape, so the sample rate had to fit the video line structure):

NTSC: 245 active lines/field × 60 fields/second × 3 samples/line = 44,100 samples/second
(490 active lines per frame, out of 525 lines total)

PAL: 294 active lines/field × 50 fields/second × 3 samples/line = 44,100 samples/second
(588 active lines per frame, out of 625 lines total)
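 
(Easy to sanity-check in a shell:)
 
$ echo $((245*60*3)) $((294*50*3))
44100 44100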

 
Jan 11, 2017 at 6:50 PM Post #642 of 1,869
Well, finally watchnerd posted some real info about Benchmark's critique of MQA, though others say the difference is moot. But I noticed at the bottom that they have applied for a patent, so they are trying to corner the market and make the recording market dependent on them. This is merely a scheme to attack MP3 pirating, sell it to the recording industry, and get them their Blockbuster Music money back like in the 90s.
 
Mod Edit - Removed personal attacks
 
Jan 11, 2017 at 7:26 PM Post #643 of 1,869
   
I put together a little script that creates 10- to 16-bit versions of a given input file. The files are themselves 16-bit, but the data is of lower resolution. I've only gone through a few, but already at 12 bits it is getting really difficult to hear a difference.
 

 
For many (popular) tracks I've tried, it's only a fade-out or a single softer section that makes them need even that. Something that starts loud and stays loud can get into the 8-bit range before things get even noticeable, let alone annoying. The next trick is to see how low you can push the sample rate before you hear things. One really starts to see how lossy codecs can work if you think about doing these things on small sections of the track, one at a time.
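 
If anyone wants to try the sample-rate version, a quick variant of the earlier script (a sketch; it resamples down and then back to 44.1k so every file plays through the same output chain):
 
#!/bin/bash
# Downsample to various rates, then return to 44.1k for playback.
for r in 8000 11025 16000 22050 32000
do
  sox "$1" -r $r -t wav - | sox -t wav - -r 44100 "${1%.*}-${r}Hz.wav"
done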
 
 
Oh sure, I keep 16/44 as the reference because that's what we have.
I still believe a move to 48kHz will happen at some point, if only to stop the nonsense of video using 48 and audio using 44.1.

 
This seems to be the viewpoint of Opus: resample everything to 48k and get on with life.
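 
(It is more than a viewpoint; the format only operates in the 48k family, so opusenc resamples whatever it is fed, e.g.:)
 
# opusenc converts any input rate to 48 kHz internally.
opusenc --bitrate 128 input.flac output.opus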
 
Jan 11, 2017 at 7:27 PM Post #644 of 1,869


No special chips are required, as I understand it. Just careful design within a specification.

MQA has nothing to do with USB Class 2 as a format.

There's nothing wrong with fighting piracy if some money actually gets into the artists' hands. But even there this may not help, as the record labels and streaming services are taking the lion's share. See the articles by David Byrne.

Mod Edit - removed personal attacks
 
Jan 11, 2017 at 7:36 PM Post #645 of 1,869
For many (popular) tracks I've tried, it's only a fade-out or a single softer section that makes them need even that. Something that starts loud and stays loud can get into the 8-bit range before things get even noticeable, let alone annoying. The next trick is to see how low you can push the sample rate before you hear things. One really starts to see how lossy codecs can work if you think about doing these things on small sections of the track, one at a time.


I listened to some similar treatment, but it only sampled loud, compressed stuff.
This seems to be the viewpoint of Opus: resample everything to 48k and get on with life.


This is fine. However, my point is: shouldn't we, as people who hopefully love audio for the music, strive to move forward and innovate?

Currently we cannot reproduce an ensemble as if it were there in the room with us. Most of this is almost certainly because of acoustic problems. However, as those problems are solved, is it beyond the realm of possibility that current standard-def formats may be found wanting? Shouldn't we try to be ahead of the other issues?

MQA is trying to do that, along with lowering the file size. I suspect that as the file size becomes less relevant, they may do an unfolded version which just does the ADC and DAC correction.
 
