Nov 17, 2018 at 10:51 AM Post #10,606 of 19,072
That's exactly what I was talking about.

Most modern room correction systems use an impulse as their preferred test signal.
They then analyze the returning echoes from the impulse, along with a lot of heavy math, to learn all sorts of things about the room.
This can be done most accurately when you have the option of creating a specific and precisely known impulse as a stimulus.
However, any waveform that approximates an impulse will provide you with data, although it will contain more variables, and so be less precise.
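To make the "heavy math" a bit more concrete: one standard analysis is estimating a room's RT60 reverberation time from a measured impulse response using Schroeder backward integration. Here's a minimal Python/NumPy sketch, run on a synthetic impulse response (the decay constant and the -5 dB to -25 dB fitting range are illustrative choices, not something from this thread):

```python
import numpy as np

def schroeder_decay_db(ir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]   # energy remaining after each sample
    energy /= energy[0]
    return 10.0 * np.log10(np.maximum(energy, 1e-12))

def rt60_estimate(ir, fs):
    """Estimate RT60 by fitting the -5 dB to -25 dB slope, extrapolated to -60 dB."""
    edc = schroeder_decay_db(ir)
    t = np.arange(ir.size) / fs
    fit = (edc <= -5.0) & (edc >= -25.0)
    slope, _ = np.polyfit(t[fit], edc[fit], 1)  # decay slope in dB per second
    return -60.0 / slope

# Synthetic IR: exponentially decaying noise with a nominal RT60 of 0.5 s.
fs = 48_000
t = np.arange(fs) / fs
ir = np.random.default_rng(1).standard_normal(t.size) * 10.0 ** (-3.0 * t / 0.5)
rt60 = rt60_estimate(ir, fs)   # ≈ 0.5 s for this synthetic IR
```

Real measurements would band-filter the impulse response first and deal with the noise floor, but the core idea is just this: integrate the energy backwards, fit the decay slope, extrapolate to -60 dB.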

Now, let's assume I have a multi-track recording of a vocalist singing with a band...
The band was recorded in a large room, but the vocalist was recorded in a sound booth at the studio, and mixed in later.
It's obvious that, at least to begin with, the background tone of the band's performance isn't going to match that of the vocalist.
If the band recorded in a cathedral, there will be echoes of the drums from the walls, and other sorts of "venue ambience".
However, those room size cues will be missing from the vocal track (there won't be any of those echoes in the vocal track because the vocalist wasn't singing there).
If the mix was well mastered, the engineer will have added reverb to the vocal track to match the ambience associated with the music.
He'll have used a plugin to create echoes and other ambience in the vocal track to make it seem as if the vocalist was singing in the same room as the band was playing.
And, if that wasn't done, some humans might complain that the recording sounded quite unnatural, and was "obviously multi-tracked".
A few recent mastering plugins offer the ability to fix this automatically, by "extracting the tone from one track and applying it to another".

If you've been keeping track, you'll realize that there is a long history of including various "DSP modes" in home theater processors.
Most of them simulate the sounds of specific types of rooms by adding processing to the audio.
Yamaha was well known for offering DSP modes like "concert hall" and "cathedral" as options on their home theater gear.

Could someone sell a new product that includes a DSP algorithm that "makes unnatural-sounding recordings sound more natural"?
The answer there is an obvious yes... because many such products already exist.
Could such an algorithm make use of information about the original venue where most of the tracks were recorded to do a better job?
I'll bet it could.

Also note that you don't always have to have "complete, detailed, and fully extracted information" in order for it to be useful.
For example, I can record the impulse response of a room, and that impulse response can be analyzed to create a "signature" of how that room sounds.
I can then use a convolver algorithm to apply that signature to a different recording.
And, after I do so, it will make my new track "sound as if it was played in that room".
For example, I can record a vocal track in a sound booth, and use my convolver to apply the impulse response from Winchester Cathedral...
And, after I do, I'll end up with a recording that SOUNDS very much like that vocalist was singing in Winchester Cathedral...
That impulse file of Winchester Cathedral "contains" information about the dimensions and other acoustic properties that make Winchester Cathedral unique...
And, even more interesting, I can apply that information to another recording to alter it...
AND I CAN DO THIS *WITHOUT* ACTUALLY ANALYZING THE FILE OR EXTRACTING THE SPECIFIC INFORMATION FROM IT.
I can make it sound as if my singer was singing in Winchester Cathedral.... without actually bothering to calculate the dimensions of the cathedral.
This is well known current technology.
Here's a free plugin for FooBar2000 that uses it.... http://wiki.hydrogenaud.io/index.php?title=Foobar2000:Components_0.9/foo_convolve
The main catch with the current technology is that it requires a special impulse file.
(Someone has to actually play an impulse sound in Winchester Cathedral and record the result to create the impulse file.)
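The convolver step being described is literally a single convolution. A minimal, NumPy-only sketch, with synthetic decaying noise standing in for a real impulse-response capture of Winchester Cathedral (which would be loaded from a WAV file in practice):

```python
import numpy as np

def apply_room(dry, ir):
    """Make `dry` sound as if it were performed in the room whose
    impulse response is `ir`, by direct convolution."""
    wet = np.convolve(dry, ir)               # length = len(dry) + len(ir) - 1
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet   # normalize to avoid clipping

# Toy demo: a click as the "dry" signal, decaying noise as the "room".
fs = 48_000
dry = np.zeros(fs // 10)
dry[0] = 1.0
t = np.arange(fs // 2) / fs
ir = np.random.default_rng(0).standard_normal(t.size) * np.exp(-6.0 * t)
wet = apply_room(dry, ir)
```

Plugins like the FooBar2000 convolver linked above do exactly this, just with FFT-based convolution for speed and a real measured impulse file in place of the synthetic one.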

However, wouldn't it be cool if the processor you buy next year could create a "pretty good" approximation of that impulse file by analyzing the recording itself?
It might even do a better job of simulating the sound of Winchester Cathedral than the "cathedral DSP mode" in a current processor.
You might push a button, play it a recording you like, and it would make your other albums "sound like that one"....
Or it might have a mode that "makes poorly mixed multi-track recordings sound more natural by repairing obvious inconsistencies".
If you doubt the market for that... just see how many pieces of audiophile gear claim, as their main selling point, that they "make music sound more natural".
(A variation on that claim has certainly gotten MQA plenty of buzz... and, apparently, earned them a lot of financing.)

As far as I know nobody has gotten this to work really well... yet... although I could be wrong there.
But, considering how quickly technology advances, it's only a matter of time...
(And, if someone wants me to give it a try, I'll be glad to... but I will need some financing to pay the programmers to write the code....)

What you said above is going to happen - real soon, if it is not actually already being used by someone out there.

One can wax lyrical that RBCD is enough, left, right, up, down and/or around... but once confronted with the real live mic feed - or an HR digital recording of it - the game is over in an instant.

Ultrasonics are LOW in level - well below 0.1% ( add as many zeroes between the decimal point and the 1 as you feel appropriate ) - but they are there and do serve their purpose.

How many times, in the analogue days, did you read that one can "picture" the acoustics of the venue - right after the stylus hits the groove, before the music has even begun to play? Using high-speed phono gear - starting with the stylus/cartridge, of course, and following throughout the system, right to the end electroacoustic transducer that allows us to hear what's engraved in the groove. It is here that the most dramatic difference between (most, > 99%) MM cartridges and MC cartridges occurs - with MCs as a group clearly outperforming MMs as a group in this regard. Up to 20 kHz, both may be remarkably similar in frequency response - but above 20-30 kHz, MCs will generally run rings around MMs. There are < 1% of MMs that can match or even exceed the performance of MCs as a group - but that "exceed" would - perhaps - account for 0.000 ..................01 % of the actual MM cartridges still in use today.

I was lucky enough to be in a hall some 357.82 metres away from my home ( depending on which corner of each place you take for reference... ) back in 2009, when an older colleague was measuring the "reverberation" of that hall - using the pulse method. In no time I was there with a DSD64 recorder and mics that extend to at least 40 kHz - how linearly exactly I do not know (yet), but output > 50 kHz can regularly be seen in recordings of instruments that go that high.

How does one create an acoustic pulse extending WELL PAST the officially audible 20 kHz - up into the MHz region, in fact? Answer: an explosion. A small, controlled and repeatable one. The first use of such an impulse I saw was by A. R. Bailey in his paper on the transmission line loudspeaker enclosure in Wireless World in 1965. http://diyaudioprojects.com/Technical/Papers/Non-resonant-Loudspeaker-Enclosure-Design.pdf He used precisely machined copper wire ( thinner in the middle ) clamped at the ends with contacts, both leading to a switch and a low-inductance high-voltage capacitor. After charging the capacitor, it was discharged through the "exploding wire(s)" - giving a point-source, precisely repeatable, omnidirectional impulse with frequency content well beyond what any known microphone can measure.

My friend used a simplified version; a high-voltage low-inductance 10 uF 4000 V oil capacitor ( from a B-17 downed during WW II over our territory - the Germans were here up to late April '45... ) was allowed to charge from the mains through a voltage multiplier circuit - creating a spark and discharge through the air across appropriately spaced electrodes - producing, again, a repeatable sonic pulse, only slightly less point-source and omnidirectional than the "exploding wire(s)". Alternatively, a series of equal and equally inflated toy balloons was recorded bursting (punctured with a needle/cigarette ???).
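One way to sanity-check such recordings is to measure how much of the captured energy actually sits above 20 kHz. A small Python/NumPy sketch on a synthetic signal (the tone frequencies and the 0.1%-amplitude ultrasonic component are made-up illustrations, chosen to match the "well below 0.1%" level mentioned above):

```python
import numpy as np

def band_energy_db(x, fs, f_lo, f_hi):
    """Energy of x in the band [f_lo, f_hi), in dB relative to total energy."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = spec[(freqs >= f_lo) & (freqs < f_hi)].sum()
    return 10.0 * np.log10(band / spec.sum() + 1e-30)

# Toy "recording": a 1 kHz tone plus a 0.1%-level 30 kHz component, at 192 kHz.
fs = 192_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1_000 * t) + 1e-3 * np.sin(2 * np.pi * 30_000 * t)
ultra_db = band_energy_db(x, fs, 20_000, fs / 2)   # ≈ -60 dB
```

A 0.1% amplitude component works out to roughly -60 dB of relative energy, which is the sort of ratio you'd be looking for when checking whether a mic/recorder chain actually captured anything ultrasonic.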

I can provide these files, if interested.
 
Last edited:
Nov 17, 2018 at 11:23 AM Post #10,607 of 19,072
Nov 17, 2018 at 12:15 PM Post #10,608 of 19,072
I haven't really followed the details of the ultrasonics discussion, but will just note that sounds well above 20 kHz are clearly produced in the real world and contain "information," since many species can hear and make use of such sounds. I don't have an opinion on the relevance to humans and audio gear.

https://en.wikipedia.org/wiki/Hearing_range#/media/File:Animal_hearing_frequency_range.svg

That means a ferret should be on the Hirez Audio logo !
 
Nov 17, 2018 at 12:29 PM Post #10,609 of 19,072
One can wax lyrical that RBCD is enough, left, right, up, down and/or around... but once confronted with the real live mic feed - or an HR digital recording of it - the game is over in an instant.

There is no "game" between resolutions in digital audio, because there is no "resolution" in digital audio in that sense. Audio DSP just does not work that way at all.
 
Nov 17, 2018 at 1:38 PM Post #10,611 of 19,072
I don't have an opinion on the relevance to humans and audio gear.

A controlled comparison test of music that contains ultrasonics and the same music without it would give you a better idea of that. (I've done this.)

Here's a thought: intelligent, informed, and truth-seeking people may not always agree on what's true or false, possible or not possible.

Yes, people can disagree. But that doesn't mean that all opinions are created equal. Some opinions are based on applying specific criteria for judging and supporting arguments. Some are based on nothing but psychological self-justification and bias. (You know that.) The way you determine which is which is to put it to the test... or as Gregorio so colorfully says, "Put up or shut up." If someone refuses to do that and just keeps blathering on with more semantics, irrelevant anomalies and a complete lack of facts, you don't throw up your hands and say "GOSH! We all can't agree, so I don't know WHAT to think!" You dismiss the person who is full of crap with a wave of the hand. And no one is required to be polite when they do that. Respect is earned, it isn't a God-given right.

If someone is going to talk a lot, it's best to talk about things they actually know. Not make stuff up and then spit out a bunch of empty words to try to dazzle people into thinking that qualifies as an opinion.
 
Last edited:
Nov 17, 2018 at 2:11 PM Post #10,612 of 19,072
I was thinking the other night about the absurdity of people arguing that super audible content is important to the reproduction of recorded music in the home... Super audible sound is BY DEFINITION not audible. We can't detect it in a controlled listening test. (I know. I've tried.) That shouldn't be surprising, and it shouldn't even be up for argument because inaudible things can't be heard. It's self evident.

OK. So someone says that at high volume levels, some super audible frequencies can cause some sort of indefinite readout on a brain scan. They try to argue that if it can be detected on a brain scan, it can be perceived. Then they make the leap that because it can be perceived, it MIGHT THEORETICALLY be important to perceived sound fidelity. OK. I can't consciously perceive it, but it might be affecting my perception of the music. Let's talk about that...

What about things that CAN consciously be perceived? I can definitely perceive the color red in lighting. Since I can perceive that, do red lights improve sound fidelity? I can perceive the texture of the fabric on my living room sofa. Does that make a difference too? When the dog sitting at my feet cuts a fart... You get the idea. Are all these things we need to consider as perhaps important to listening to music in our home? In order to determine if we are hearing sound as it was intended by the artists and engineers, do we need to paint the walls the same color as in the studio and buy the same swivel chairs? Do we need to eat the same lunch they ate?

It's reductio ad absurdum... all I just did here was take the reductio a little further, to demonstrate how absurd it is to worry about inaudible sound.

The determining factor for whether something affects how good our stereo system sounds is whether it audibly affects the sound in a positive or negative way. You can't assume that if you can perceive it (especially unconsciously!), it will make your music sound better to you. The way you determine that is simple. You take two samples of music... one with ultrasonics and one without... and you compare them in a controlled way and see which one you prefer.
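For anyone wanting to run that comparison, the "without ultrasonics" sample can be generated from the original so the pair differs only above the audible band. A hedged sketch (NumPy-only FFT brick-wall on a synthetic signal; a proper listening test would use a well-designed filter, level matching and blind presentation):

```python
import numpy as np

def remove_ultrasonics(x, fs, cutoff=20_000.0):
    """Return a copy of x with all content above `cutoff` zeroed out.

    An FFT brick-wall keeps this sketch dependency-free and guarantees the
    A/B pair differs ONLY in the ultrasonic band.
    """
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spec[freqs > cutoff] = 0.0
    return np.fft.irfft(spec, n=x.size)

# Toy "music": an audible 440 Hz tone plus a weak 25 kHz component, at 96 kHz.
fs = 96_000
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 25_000 * t)
filtered = remove_ultrasonics(original, fs)
```

You'd then ABX `original` against `filtered` in a controlled way; any preference has to come from the ultrasonic content alone, since everything below 20 kHz is bit-for-bit shared.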

Until you actually do a controlled listening comparison yourself, you are only guessing. A test like this is DROP DEAD SIMPLE to do. There is absolutely no excuse for not doing it. If you want to argue that ultrasonics are theoretically important, yet you haven't bothered to check for yourself if they are, I am going to assume that you are either too lazy to know anything, or unwilling to know the truth. The second you outright REFUSE to do that test, I lose all faith in you because that tells me that you aren't just ignorant. You are willfully ignorant. I have no time and no interest in dealing with people like that. I give them a chance, then I give them another, and another... but at some point, it's clear that I'm being played by a fool and I dismiss with a wave of a hand.

Thankfully, I've only had to do that a few times. It just seems like more often because those two or three people spew out more foolish words than anyone else. I just skip right on by them and encourage others to do the same via PM.

I am judging. I admit it. I don't have to be nice about it if someone is willfully ignorant. This is Sound Science. We get to ask for proof and then judge based on it.
 
Last edited:
Nov 17, 2018 at 2:35 PM Post #10,613 of 19,072
2. With drums; . . . In practice, the snare drum in a kit is always extremely closely mic'ed (typically an inch or less), to reduce spill from the other instruments in the kit and thereby allow us some ability to process and mix the snare drum without too adversely affecting the other instruments. The consequence of this is that: A. If we're reducing spill significantly from the other loud instruments in the kit which are only a few inches or a foot or so away, then obviously we're reducing the relatively distant and quieter reflections of the snare drum by significantly more. B. This close mic has to be placed on the far side of the drum (otherwise the drummer would hit it), pointing downwards and towards the drummer. So most of the wall reflections arriving straight at the mic (where it is most sensitive) would have to pass through the drummer first! Additionally, we would get little/no reflections from the floor, as they would have to pass through the snare drum first. If all that isn't bad enough, the snare drum in a kit is typically quadruple mic'ed: The batter head mic (which I've just described), the snare head mic (a mic placed underneath the snare drum pointing upwards) to capture the snare "sizzle" which is somewhat lost in the batter head mic (due to the batter head being in the way), then also the "overheads" (which I'll come back to) and lastly, often a room mic. . . . Lastly, the whole point of close mic'ing the snare drum in the first place was so we can process it somewhat independently. Compression, EQ and also (pretty much without exception) some artificial reverb. And while we're on artificial reverb, I can't recall off the top of my head ever seeing a reverb preset that didn't roll off its output dramatically above 12kHz and the vast majority, at about 7kHz. Most in fact have a low pass filter.
There's absolutely no doubt that the snare drum does contain ultrasonic content, does contain a lot of acoustic information and that acoustic information can be relatively easily differentiated, but I can't imagine how on earth you'd actually analyse it to get any sort of intelligible information about the room's acoustics out of all that disparate phase and acoustic information, unless all the exact details of mic positions and exactly what was done during mixing were available to the processor (so it could maybe attempt some sort of reverse engineering) - but of course none of that information is logged or available and some/most of it never will be. Additionally of course, we've just been talking about the snare drum on its own, which is available to the mix engineer but not the consumer. What the consumer gets is the snare drum mixed with the rest of the drumkit plus all the different processing (EQ/compression/reverb etc.) of those other drumkit instruments and of course all the other instruments and vocals in the band/ensemble, each with their own processing and DIFFERENT acoustic information. I don't recall ever having seen any ultrasonic acoustic info and, once it's mixed, the most dominant (and perceptually important) acoustic information present doesn't extend beyond 12kHz (at most!).

The kick drum has some ultrasonic content but quite often it's filtered out. If it isn't though, just maybe there'd be some ultrasonic acoustic information. Again, I don't recall ever having seen any, if there is, the ratio puts it below the noise floor. Certainly within the audible band the primary kick mic does definitely provide significant acoustic information, although again, even if it could be extracted from a completed mix and analysed, I fail to see how it could provide any useful (rather than harmful) information to a reproduction system. The kick drum is virtually always recorded with the mic actually inside the kick drum or just slightly outside the hole in the resonant head (again to provide high signal to spill ratio), so all the acoustic information captured is the reflections of the inside of the drum.

The cymbals are generally NOT extremely close mic'ed and therefore we avoid all the loss, phase and other issues of the acoustic information we have with the other kit instruments. Better still, the primary source for the cymbal sound in a drumkit sub-mix is typically the overhead mics, which are a stereo pair, and that means two coherent signals which can be phase compared and in theory, much more detailed/accurate acoustic information could be extracted than can be extracted from single (mono) mics. Unfortunately, there's an elephant in the ointment!

There are still other considerations, not least that music produced in the last 15-20 years commonly doesn't use an actual drumkit in the first place: drumkit samples are used virtually exclusively in EDM and other electronic genres, and even in the more traditional rock genres, in the past 10 years or so drumkit samples have become so good that it's difficult even for highly experienced engineers and drummers to tell the difference.

G

The above-quoted portions are like cotton candy to me. Thanks. It helps me picture what I am hearing when I listen to a recording. I just mean this very earnestly, I really really enjoyed learning this stuff and it will stick with me probably for the rest of my life.

I am not passing judgment on other parts of the post, I am just saying that this stuff up here, I know it was a pain in butt for you to write, but I find it just inherently extremely interesting.

Now, I do have one thing. You talk about an elephant in the ointment. I am having a hard time picturing that. It's a very entertaining image and I'm glad you said it, but are you sure you did not mean a fly in the ointment or an elephant in the room? I mean, first, that would be a lot of ointment. What kind of ointment is it? Does the elephant seek it out? Does he or she drink or eat it? Do they use it at zoos or something? From the general meaning of the two related phrases and the context of your usage I assume we do not want the elephant in there in the ointment and that it is causing problems.

If you are not in the mood for such humor I sincerely apologize. It's just where my mind goes. I can't help it. I like to laugh. I know this is the Sound Science forum, not the what does Steve999 find funny forum. :wink:

So I will re-emphasize--I really, really enjoyed learning the above posted information, it is fascinating to me. Thank you. :)

So I am listening to my Spotify release radar tracks and I am imagining. . . That's not real drums, that's drum synths, definitely. As someone whose first love in music is acoustic jazz I definitely have had a long-term preference for live instruments with human beings behind the wheel. But as I listen to the new stuff more and more I think I am getting it, I'm enjoying it. And I couldn't care less about if there are any ultrasonic frequencies recorded in it.
 
Last edited:
Nov 17, 2018 at 4:03 PM Post #10,614 of 19,072
So I am listening to my Spotify release radar tracks and I am imagining. . . That's not real drums, that's drum synths, definitely. As someone whose first love in music is acoustic jazz I definitely have had a long-term preference for live instruments with human beings behind the wheel. But as I listen to the new stuff more and more I think I am getting it, I'm enjoying it. And I couldn't care less about if there are any ultrasonic frequencies recorded in it.

It's impressive how far the technology has come with electronic instruments. I have Roland electronic drums as well as acoustic drums, and with the right headphones the e-drums sound and feel really good, plus they have versatility and consistency you can't get with acoustic drums.
 
Nov 17, 2018 at 10:59 PM Post #10,615 of 19,072
MQA is a complicated subject - in part because it really encompasses several different processes and claims which are being promoted under a single "brand".

1) Part of MQA is a process for encoding audio at reduced size. This can be applied to original recordings, existing analog master files, or even existing digital audio files. They claim that files produced with their process, when played through the appropriate (licensed) decoder, can deliver better quality with smaller files than other current compression methods. Furthermore, their files can also be played on standard equipment, without being "completely decoded", and still produce pretty good results. This process can be applied to the creation of new recordings, or can be used to compress existing recordings, and they claim it essentially "gives you a result equal to a high-resolution file in a file that's smaller than a standard resolution file". (There are several variations in terms of how MQA can be decoded, and the claimed results, but that's the gist of that part of the process.)

Strictly when considered as "a better lossy compression algorithm" - MQA seems to work pretty well.
(And Tidal has "signed on" with it.)

2) Another separate claim is that, when processing an existing digital master file, they can "reverse engineer and correct" some of the errors that were caused by the original A/D conversion process, and so produce an "improved" version of the master. Some of the exact details here are somewhat vague. Part of the process entails applying some sort of apodizing filter designed to cancel out the known effects of the sharp cutoff filters often used when digital files are first converted. They seem to have claimed to be able to identify and correct for specific flaws caused by specific converters and other hardware - but the details there are somewhat vague. (It should be noted that the full specified decoding process includes using a DAC with a special "leaky slow-rolloff filter" - designed to comply with their specific requirements.)

3) It has been mentioned in a few MQA press releases that, while these "improvements" are produced by the "standard automated encoding process", there is also a higher tier of custom service available. This service includes "human interaction" and presumably is much more expensive.

The bottom line, based on a lot of assorted reviews, and my limited personal listening of MQA files, is that sometimes they sound distinctly different than the original, and other times they do not. A lot of people say they prefer the sound of MQA files. However, since they aren't supposed to sound the same, whether you prefer the altered version is always going to be somewhat subjective. Likewise, it's difficult to say whether the differences are "improvements" or just "euphonic distortion". (MQA insists that their new altered masters are closer to the acoustic original than before... but there's no real way to judge that.)

I thought the MQA fingerprinting created marginally audible distortions, no? I could be wrong. I’m not asserting, I’m asking. Here is what I was thinking of:

https://mattmontag.com/audio-listening-test/
 
Nov 17, 2018 at 11:31 PM Post #10,616 of 19,072
I have an interesting question for some people here.... and, yes, based on actual existing technology.

Let's assume that I have a photographic slide that I consider important.
That slide is somewhat old, it's been around a while, and it has quite a few nasty scratches on it.

I feed that slide into a slide scanner, and it accurately scans all of the visible light frequencies contained on that slide, producing a visibly perfect copy.

Now I feed that slide into another scanner, which scans the visible light frequencies equally accurately, but also makes a second scan in the far infrared range.
We all agree, beyond any doubt, that the infrared scan, and all the information it contains, is TOTALLY INVISIBLE to humans.
Therefore, the initial output of this scanner will be visibly identical to the output of the first scanner.
However, because my image processing software is "ICE enabled", it uses the information contained in the infrared scan to identify and remove visible scratches from the image.
Therefore, by using that infrared information, it produces a picture that LOOKS BETTER TO HUMANS than would have been possible without that information.

Would you agree that the infrared information contained on that slide "was useful after all"?
Would you agree that we have gained an improvement in useful results by preserving and utilizing that totally invisible information?
Would you agree that it would be a mistake to discard that infrared information "because no human can possibly see it"?
(ICE is a real technology that's been around, and used, in slide scanners, even high-end consumer slide scanners, for quite a long time.)
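The ICE trick can be sketched in a few lines: use the infrared scan as a defect mask (film defects block IR, so scratches show up dark), then fill the masked pixels from their visible-light neighbours. A toy diffusion fill in Python/NumPy - this is not Kodak's actual Digital ICE algorithm, and the threshold and iteration count are arbitrary illustrative values:

```python
import numpy as np

def inpaint_scratches(image, ir_scan, threshold=0.5, iters=50):
    """Fill pixels flagged by the IR scan with the average of their
    visible-light neighbours, repeated until the fill settles."""
    mask = ir_scan < threshold            # True where the IR scan sees a defect
    fixed = image.astype(float)
    fixed[mask] = np.nan                  # mark defective pixels as unknown
    for _ in range(iters):
        padded = np.pad(fixed, 1, mode="edge")
        neighbors = np.nanmean(
            np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                      padded[1:-1, :-2], padded[1:-1, 2:]]), axis=0)
        fixed[mask] = neighbors[mask]     # only defective pixels are rewritten
    return fixed

# Toy demo: a flat gray "slide" with a dark scratch down one column,
# which the IR scan also "sees" at the same location.
img = np.full((8, 8), 0.8)
img[:, 3] = 0.0
ir = np.ones((8, 8))
ir[:, 3] = 0.0
repaired = inpaint_scratches(img, ir)
```

The point of the analogy survives even in the toy: the visible-light data alone can't distinguish a scratch from a deliberately dark detail, but the invisible IR channel can, and that invisible information produces a visibly better picture.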

I'm simply positing a use for similar information contained in audio recordings...
I honestly cannot understand why anybody considers this in the least unlikely...

I'm perfectly willing to concede that there isn't a working version of it in this year's version of ProTools...
But that in no way convinces me that it won't be there in NEXT year's update... or the year after...
In fact, to me, it seems like a pretty obvious next step from the technology we have today...

I was thinking the other night about the absurdity of people arguing that super audible content is important to the reproduction of recorded music in the home... Super audible sound is BY DEFINITION not audible. We can't detect it in a controlled listening test. (I know. I've tried.) That shouldn't be surprising, and it shouldn't even be up for argument because inaudible things can't be heard. It's self evident.

OK. So someone says that at high volume levels, some super audible frequencies can cause some sort of indefinite readout on a brain scan. They try to argue that if it can be detected on a brain scan, it can be perceived. Then they make the leap that because it can be perceived, it MIGHT THEORETICALLY be important to perceived sound fidelity. OK. I can't consciously perceive it, but it might be affecting my perception of the music. Let's talk about that...

What about things that CAN consciously be perceived? I can definitely perceive the color red in lighting. Since I can perceive that, do red lights improve sound fidelity? I can perceive the texture of the fabric on my living room sofa. Does that make a difference too? When the dog sitting at my feet cuts a fart... You get the idea. Are all these things we need to consider as perhaps important to listening to music in our home? In order to determine if we are hearing sound as it was intended by the artists and engineers, do we need to paint the walls the same color as in the studio and buy the same swivel chairs? Do we need to eat the same lunch they ate?

It's reductio ad absurdum... all I just did here was take the reductio a little further, to demonstrate how absurd it is to worry about inaudible sound.

The determining factor for whether something affects how good our stereo system sounds is whether it audibly affects the sound in a positive or negative way. You can't assume that if you can perceive it (especially unconsciously!), it will make your music sound better to you. The way you determine that is simple. You take two samples of music... one with ultrasonics and one without... and you compare them in a controlled way and see which one you prefer.

Until you actually do a controlled listening comparison yourself, you are only guessing. A test like this is DROP DEAD SIMPLE to do. There is absolutely no excuse for not doing it. If you want to argue that ultrasonics are theoretically important, yet you haven't bothered to check for yourself if they are, I am going to assume that you are either too lazy to know anything, or unwilling to know the truth. The second you outright REFUSE to do that test, I lose all faith in you because that tells me that you aren't just ignorant. You are willfully ignorant. I have no time and no interest in dealing with people like that. I give them a chance, then I give them another, and another... but at some point, it's clear that I'm being played by a fool and I dismiss with a wave of a hand.

Thankfully, I've only had to do that a few times. It just seems like more often because those two or three people spew out more foolish words than anyone else. I just skip right on by them and encourage others to do the same via PM.

I am judging. I admit it. I don't have to be nice about it if someone is willfully ignorant. This is Sound Science. We get to ask for proof and then judge based on it.
 
Nov 18, 2018 at 1:46 AM Post #10,617 of 19,072
I have an interesting question for some people here.... and, yes, based on actual existing technology.

Let's assume that I have a photographic slide that I consider important.
That slide is somewhat old, it's been around a while, and it has quite a few nasty scratches on it.

I feed that slide into a slide scanner, and it accurately scans all of the visible light frequencies contained on that slide, producing a visibly perfect copy.

Now I feed that slide into another scanner, which scans the visible light frequencies equally accurately, but also makes a second scan in the far infrared range.
We all agree, beyond any doubt, that the infrared scan, and all the information it contains, is TOTALLY INVISIBLE to humans.
Therefore, the initial output of this scanner will be visibly identical to the output of the first scanner.
However, because my image processing software is "ICE enabled", it uses the information contained in the infrared scan to identify and remove visible scratches from the image.
Therefore, by using that infrared information, it produces a picture that LOOKS BETTER TO HUMANS than would have been possible without that information.

Would you agree that the infrared information contained on that slide "was useful after all"?
Would you agree that we have gained an improvement in useful results by preserving and utilizing that totally invisible information?
Would you agree that it would be a mistake to discard that infrared information "because no human can possibly see it"?
(ICE is a real technology that's been around, and used, in slide scanners, even high-end consumer slide scanners, for quite a long time.)
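The principle behind ICE-style cleanup can be sketched in a few lines: film dyes are largely transparent to infrared, so anything that blocks the IR scan is physical damage, and the IR channel becomes a defect mask. Below is only a toy illustration of that principle, using a hypothetical `remove_scratches` helper, not the actual ICE algorithm:

```python
import numpy as np

def remove_scratches(rgb, ir, threshold=0.5):
    """Toy ICE-style cleanup (hypothetical helper, not real ICE).
    Film dyes pass infrared, so pixels that come out dark in the IR
    scan are physical defects (scratches, dust).  Each defective
    pixel is replaced with the mean of the non-defective pixels in
    its 3x3 neighbourhood."""
    defect = ir < threshold                 # boolean scratch/dust mask
    cleaned = rgb.astype(float).copy()
    h, w = defect.shape
    for y, x in zip(*np.nonzero(defect)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = cleaned[y0:y1, x0:x1]
        good = ~defect[y0:y1, x0:x1]
        if good.any():
            cleaned[y, x] = patch[good].mean(axis=0)
    return cleaned
```

A real implementation would use proper inpainting rather than a neighbourhood mean, but the point stands: the invisible channel tells the software *where* to repair the visible ones.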

I'm simply positing a use for similar information contained in audio recordings...
I honestly cannot understand why anybody considers this in the least unlikely...

I'm perfectly willing to concede that there isn't a working version of it in this year's version of ProTools...
But that in no way convinces me that it won't be there in NEXT year's update... or the year after...
In fact, to me, it seems like a pretty obvious next step from the technology we have today...

It seems to me this is analogous to repairing old audio recordings, which we've been doing for decades. Photoshop or Lightroom can take information I don't begin to notice or comprehend and use it to fix flaws in pictures, and the same is true of audio software fixing scratches on records, which is not unlike fixing scratches on slides. Having used both audio and photography software at the amateur level, I see nothing remarkable about that. But please, let's not go around comparing analogies and debating whether my analogy or yours holds up better. That's a dead end, in my view at least.

Both technologies will become even more mind-blowing as time goes by. It has been said that any technology that does not seem like magic is not sufficiently advanced.

Very sincerely, very earnestly, could we please move on to other topics and areas of discussion? You’re mostly just arguing by analogy and I don’t think it’s constructive to do that once we get in the weeds. :)

https://www.nbc.com/saturday-night-live/video/weekend-update-john-belushi-on-march/n33439

Surely you have other concepts and substance to dig into. Once a disagreement arises, analogies just create imprecision and lack of substance. I know you have other things to contribute, it’s obvious.

Could you post something in the music thread? Let’s see what your deal is!!! :beerchug:
 
Nov 18, 2018 at 1:48 AM Post #10,618 of 19,072
MQA is a complicated subject - in part because it really encompasses several different processes and claims which are being promoted under a single "brand".

1) Part of MQA is a process for encoding audio at reduced size. This can be applied to original recordings, existing analog master files, or even existing digital audio files. They claim that files produced with their process, when played through the appropriate (licensed) decoder, can deliver better quality with smaller files than other current compression methods. Furthermore, their files can also be played on standard equipment, without being "completely decoded", and still produce pretty good results. This process can be applied to the creation of new recordings, or can be used to compress existing recordings, and they claim it essentially "gives you a result equal to a high-resolution file in a file that's smaller than a standard-resolution file". (There are several variations in terms of how MQA can be decoded, and the claimed results, but that's the gist of that part of the process.)

Strictly when considered as "a better lossy compression algorithm" - MQA seems to work pretty well.
(And Tidal has "signed on" with it.)
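MQA's actual codec is proprietary, so the following is only a loose, hypothetical illustration of the general "folding" idea it describes publicly: extra data hidden in the least-significant bits of the PCM stream, where legacy playback hears it as very low-level noise and an aware decoder can strip it back out.

```python
import numpy as np

K = 4  # number of low-order bits sacrificed for the hidden payload

def embed(base24, extra_bits):
    """Hide payload bits in the K least-significant bits of 24-bit
    PCM samples.  `base24` is an int array of samples, `extra_bits`
    an (N, K) array of 0/1 values.  On ordinary playback the payload
    is just very low-level noise; an aware decoder recovers it."""
    payload = (extra_bits * (1 << np.arange(K))).sum(axis=1)
    return (base24 & ~((1 << K) - 1)) | payload

def extract(embedded):
    """Recover the (N, K) payload bits an encoder embedded."""
    payload = embedded & ((1 << K) - 1)
    return (payload[:, None] >> np.arange(K)) & 1
```

This is not MQA's algorithm (which involves filtering, noise shaping, and much more), just a sketch of why a single file can serve both legacy and "decoding" playback chains.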

2) Another, separate claim is that, when processing an existing digital master file, they can "reverse engineer and correct" some of the errors that were caused by the original A/D conversion process, and so produce an "improved" version of the master. Some of the exact details here are somewhat vague. Part of the process entails applying some sort of apodizing filter designed to cancel out the known effects of the sharp cutoff filters often used when digital files are first converted. They also seem to have claimed the ability to identify and correct for specific flaws caused by specific converters and other hardware - but, again, few specifics have been published. (It should be noted that the full specified decoding process includes using a DAC with a special "leaky slow-rolloff filter" designed to comply with their specific requirements.)
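The trade-off behind a "leaky slow-rolloff filter" is easy to demonstrate. The sketch below (a plain Hamming-windowed sinc, not MQA's actual filter design) shows that a short filter rolls off gently with little time-domain ringing, while a long one cuts off sharply but rings far longer:

```python
import numpy as np

def windowed_sinc_lowpass(numtaps, cutoff_hz, fs):
    """Plain Hamming-windowed sinc low-pass filter, DC gain 1."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs * n) * np.hamming(numtaps)
    return h / h.sum()

fs = 44100
sharp = windowed_sinc_lowpass(511, 20000, fs)  # long: steep cutoff, long ringing
leaky = windowed_sinc_lowpass(31, 20000, fs)   # short: slow rolloff, little ringing

# Compare magnitude responses near the Nyquist frequency (22.05 kHz):
H_sharp = np.abs(np.fft.rfft(sharp, 8192))
H_leaky = np.abs(np.fft.rfft(leaky, 8192))
freqs = np.fft.rfftfreq(8192, 1 / fs)
i = np.argmin(np.abs(freqs - 22050))
# The long filter suppresses content just above cutoff far harder,
# at the cost of much longer time-domain ringing (pre/post-echo);
# the "leaky" filter leaks near Nyquist but barely rings at all.
```

Which trade-off sounds better is exactly the kind of question MQA's claims hinge on.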

3) It has been mentioned in a few MQA press releases that, while these "improvements" are produced by the "standard automated encoding process", there is also a higher tier of custom service available. This service includes "human interaction" and presumably is much more expensive.

The bottom line, based on a lot of assorted reviews and my limited personal listening to MQA files, is that sometimes they sound distinctly different from the original, and other times they do not. A lot of people say they prefer the sound of MQA files. However, since they aren't supposed to sound the same, whether you prefer the altered version is always going to be somewhat subjective. Likewise, it's difficult to say whether the differences are "improvements" or just "euphonic distortion". (MQA insists that their new altered masters are closer to the acoustic original than before... but there's no real way to judge that.)

Thank you. That was interesting and informative and entertaining. :)
 
Nov 18, 2018 at 6:36 AM Post #10,619 of 19,072
[1] Would you agree that the infrared information contained on that slide "was useful after all"?
[2] I'm perfectly willing to concede that there isn't a working version of it in this year's version of ProTools...
[2a] But that in no way convinces me that it won't be there in NEXT year's update... or the year after...
[2b] In fact, to me, it seems like a pretty obvious next step from the technology we have today...

1. Yes, provided ALL the following are true:
A. Consumers actually have some old scratched slides to start with.
B. The slides contain some infrared information to start with.
C. The infrared information (when there is any) on those slides actually contains the details necessary for it to be of any use in the removal of scratches.
D. Science has figured out a theory for how to extract, analyse and use those details in the infrared information to remove scratches.
E. Technology exists which actually implements that theory.
F. Consumers actually own that technology (an infrared slide scanner).

2. Wow, a willingness to concede that something that doesn't exist, really doesn't exist, that's novel!
2a. And here we go with the suggestions, suggestions which are not only unsupported by any evidence but that actually contradict all the evidence.
2b. Yep, you keep doing that. It all sounds reasonable and therefore, your assertion that "it seems like a pretty obvious next step" isn't too much of a logical leap and seems entirely reasonable. I can't prove it, but being a ProTools expert of many years, it seems "pretty obvious" to me that ProTools will NOT include an infrared slide scanner in the foreseeable future and actually, I find it pretty absurd to even suggest that it might, regardless of how reasonable it might sound to others! Wait a minute ... are you NOT asserting that ProTools will include an infrared slide scanner? Was the slide scanner thing just an analogy, another one of your completely non-analogous analogies?!

Let's look at YOUR analogy shall we, and apply it to commercial music/sound: Assuming "A" is a commercial music/sound recording, then it is true. "B" is only true sometimes (only sometimes is there anything other than useless information in the >20kHz band). There's no shred of evidence to suggest that "C" is ever true, in fact all the evidence indicates that it MUST be false. D, E and F are also false. So out of the 6 requirements, ALL of which have to be true, in fact only one and a half are. Clearly then it's a terrible analogy, so how come it sounds so reasonable? Simple: You gloss over the fact that "B" is only sometimes true. You omit to even mention "C", deliberately ignore every request for any evidence that "C" exists and instead misrepresent other unrelated facts as providing that evidence. You claim "D" is true and continue to do so even when the scientific evidence is presented proving that it's false. You get away with E and F by stating they may one day be true but that too is false because E and F can NEVER be true until after C and D are both true.

We're all used to the rapid advance of science and digital technology, which provides products/solutions to things that seemed impossible or even unimaginable. It's this expectation of science and technology that KeithEmo is fallaciously abusing to make his suggestions seem so plausible! In reality, science and technology rapidly advance and achieve the seemingly impossible by creating detailed information, then extracting and using that information. For example, the global positioning system (GPS) works by having satellites provide extremely detailed timing information, and by us owning technology which extracts and uses that information. How would GPS work if there were no satellites? How would it work if there were satellites but they weren't providing any timing information? Even if the technology for extracting and using satellite timing information advanced to a level that isn't even imaginable today, that still wouldn't make the slightest bit of difference if there were no satellite timing information in the first place! To bring this analogy back to music/sound: once we start using radar or sonar transmitters (or bats) in the studio, and equipment to capture the reflections of those signals, then I'll be happy to record at high sample rates to capture that detailed information - provided it doesn't affect what I'm able to do in the audible frequency band, and provided there may actually be the potential for that information to be beneficial, and a demand for it. So far, only 2 potential benefits have been suggested: one which wouldn't actually be a benefit, and the other too laughably ridiculous to even repeat!
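For what it's worth, the "extracting and using satellite timing information" step is concrete enough to sketch. This toy 2-D trilateration (hypothetical `locate` helper; real GPS works in 3-D and must also solve for the receiver's clock bias) turns arrival delays into a position:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def locate(sat_positions, arrival_delays):
    """Toy 2-D trilateration.  Each delay gives a range
    r_i = C * t_i; subtracting the first range equation from the
    others cancels the quadratic terms, leaving a linear system
    A x = b in the receiver position, solved by least squares."""
    p = np.asarray(sat_positions, dtype=float)
    r = C * np.asarray(arrival_delays, dtype=float)
    A = 2 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum())
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Note that every line of this depends on the timing information existing in the first place, which is exactly the point being made: no signal, no extraction.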

G
 
Nov 18, 2018 at 9:04 AM Post #10,620 of 19,072
[1] The above-quoted portions are like cotton candy to me. Thanks. It helps me picture what I am hearing when I listen to a recording. I just mean this very earnestly, I really really enjoyed learning this stuff and it will stick with me probably for the rest of my life.
[2] Now, I do have one thing. You talk about an elephant in the ointment. I am having a hard time picturing that.

1. So much of the audiophile talk and marketing is based on fallacies. They present a fact as true, describe a technology that can do this other thing, put the two assertions together and come up with some conclusion that *might* sound reasonable but is in fact utter nonsense. We can accelerate billions of particles to 99.9% the speed of light; therefore, once we have a solution for wind drag and tyre friction, it's pretty obvious that NEXT year's updated Ford Focus should have a top speed very close to the speed of light. Assuming you don't know too much about cars or physics, that sounds reasonable, doesn't it? I admit though, I'm not as good at it as KeithEmo. Maybe it would help if I'd said that science has come a long way with aerodynamics and that there are technologies with almost no friction?

We've all seen images of a mic being placed in front of a musician, someone hitting a record button, moving a few faders, twiddling a few knobs, and then we've got a recording which is sent to a mastering engineer to be compressed - job done. Some may realise there's a bit more to it than that, but in general they have no idea of even a tiny fraction of what's really involved in practice. To an extent I can't blame them: they see some footage of a band performing in a studio with a bunch of mics and an engineer doing something at a mixing desk and assume they're witnessing what actually happens, when in fact what they're consuming is a product that's been manufactured to fulfil their expectations and promote the band and the real recording (which happened at a different time and in a completely different way). This isn't specific to music production; we see it all the time in many other fields. I often see students who want to be film directors: they see the fame, fortune and respect, they've even seen the footage of what a director does - sitting in a chair with the word "director" printed on the back, telling everyone what to do in order to make a film. They know it's hard work and more complicated than it looks, and they're willing to learn. Except they really have no idea; that's not how a film is made, and that tough job sitting in the director's chair during filming that they think they'll enjoy only actually accounts for about 5% of what a director really does. The other 95% you don't see in the footage!

All I can do here is give little snippets of what really happens but most people just want it all to be quick and easy to understand and therefore prefer to believe the over-simplified falsehoods and incorrect assertions/conclusions which are based on that misconception of it all being simple. And of course, those who market products to these audiophiles want this too, because it provides the opportunity of making up all kinds of enticing nonsense that sounds plausible precisely because you have that oversimplified, false belief of how it's all done. The reality is that sound engineering students don't spend 3 years just learning how to plug in a microphone and that even after their 3 years they are not a recording or mix engineer, they've just qualified as a beginner, another 3-5 years and they might be ready. Hence why I can only give little snippets.

I thought that's what this forum (and this thread) was for: people willing to put in the effort for the actual facts/reality, just because they want to know and/or because they're maybe sick of the marketing BS. If that's the case though, why aren't others as outraged at KeithEmo as I am? Is it not really about the actual facts at all, just about who sounds like a nice, reasonable guy? If so, why have this sub-forum at all, when we've already got that in just about all the other sub-forums?

2. Yep, "fly in the ointment" and "elephant in the room" just didn't seem to do justice to the point I was trying to convey, which effectively was the same as: Yes the rapid advance of technology is surprising, yes we can accelerate billions of particles to 99.9% the speed of light and yes a Ford Focus is also made of billions of particles but a near light speed Ford Focus update isn't just around the corner, because there's an elephant in the ointment!

G
 
