That's exactly what I was talking about.
Most modern room correction systems use an impulse as their preferred test signal.
They then analyze the echoes that return from that impulse, along with a lot of heavy math, to learn all sorts of things about the room.
This works most accurately when you have the option of generating a specific, precisely known impulse as the stimulus.
However, any waveform that approximates an impulse will still provide you with data; it will just contain more unknowns, and so be less precise.
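To make that concrete, here's a minimal sketch of the core math, assuming a known stimulus: you deconvolve the stimulus out of the recording (frequency-domain division) to recover the room's impulse response. Real systems use swept sines, windowing, and averaging, so treat the function name and the synthetic "room" below as illustrative only:

```python
import numpy as np

def estimate_impulse_response(stimulus, recording, eps=1e-8):
    """Estimate a room's impulse response by deconvolving the known
    stimulus out of the recorded response (regularized FFT division)."""
    n = len(stimulus) + len(recording) - 1
    S = np.fft.rfft(stimulus, n)
    R = np.fft.rfft(recording, n)
    # Regularized division avoids blowing up at frequencies
    # where the stimulus has little energy.
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)

# Synthetic check: a "room" that adds one attenuated echo.
rng = np.random.default_rng(0)
stimulus = rng.standard_normal(4096)          # broadband noise burst
true_ir = np.zeros(256)
true_ir[0] = 1.0                              # direct sound
true_ir[100] = 0.5                            # a single echo
recording = np.convolve(stimulus, true_ir)    # what the mic "hears"
ir = estimate_impulse_response(stimulus, recording)
```

Run against the synthetic room, the recovered `ir` shows the direct arrival and the echo at the expected positions, which is exactly the kind of "signature" the correction system then feeds into its heavy math.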
Now, let's assume I have a multi-track recording of a vocalist singing with a band...
The band was recorded in a large room, but the vocalist was recorded in a sound booth at the studio, and mixed in later.
It's obvious that, at least to begin with, the background tone of the band's performance isn't going to match that of the vocalist.
If the band recorded in a cathedral, there will be echoes of the drums from the walls, and other sorts of "venue ambience".
However, those room size cues will be missing from the vocal track (there won't be any of those echoes in the vocal track because the vocalist wasn't singing there).
If the recording was well mixed, the engineer will have added reverb to the vocal track to match the ambience associated with the music.
He'll have used a plugin to create echoes and other ambience in the vocal track to make it seem as if the vocalist was singing in the same room as the band was playing.
And, if that wasn't done, some humans might complain that the recording sounded quite unnatural, and was "obviously multi-tracked".
A few recent mastering plugins offer the ability to fix this automatically, by "extracting the tone from one track and applying it to another".
If you've been keeping track, you'll realize that there is a long history of including various "DSP modes" in home theater processors.
Most of them simulate the sounds of specific types of rooms by adding processing to the audio.
Yamaha was well known for offering DSP modes like "concert hall" and "cathedral" as options on their home theater gear.
Could someone sell a new product that includes a DSP algorithm that "made unnatural sounding recordings sound more natural"?
The answer there is an obvious yes... because many such products already exist.
Could such an algorithm make use of information about the original venue where most of the tracks were recorded to do a better job?
I'll bet it could.
Also note that you don't always have to have "complete, detailed, and fully extracted information" in order for it to be useful.
For example, I can record the impulse response of a room, and that impulse response can be analyzed to create a "signature" of how that room sounds.
I can then use a convolver algorithm to apply that signature to a different recording.
And, after I do so, it will make my new track "sound as if it was played in that room".
For example, I can record a vocal track in a sound booth, and use my convolver to apply the impulse response from Winchester Cathedral...
And, after I do, I'll end up with a recording that SOUNDS very much like that vocalist was singing in Winchester Cathedral...
That impulse file of Winchester Cathedral "contains" information about the dimensions and other acoustic properties that make Winchester Cathedral unique...
And, even more interesting, I can apply that information to another recording to alter it...
AND I CAN DO THIS *WITHOUT* ACTUALLY ANALYZING THE FILE OR EXTRACTING THE SPECIFIC INFORMATION FROM IT.
I can make it sound as if my singer was singing in Winchester Cathedral.... without actually bothering to calculate the dimensions of the cathedral.
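In code, that convolver step really is just a convolution. Here's a minimal sketch using NumPy; the function name, the `wet_mix` parameter, and the toy one-echo "cathedral" impulse response are mine, standing in for a real measured IR file:

```python
import numpy as np

def apply_room(dry, impulse_response, wet_mix=1.0):
    """Convolve a dry track with a room's impulse response so the
    result sounds as if it was performed in that room."""
    wet = np.convolve(dry, impulse_response)
    wet /= max(np.max(np.abs(wet)), 1e-12)          # crude peak normalization
    # Pad the dry signal so it can be blended with the longer wet signal.
    dry_padded = np.pad(dry, (0, len(wet) - len(dry)))
    return (1.0 - wet_mix) * dry_padded + wet_mix * wet

# Toy example: a single-sample "voice" through an IR with one echo.
dry = np.array([1.0, 0.0, 0.0, 0.0])
cathedral_ir = np.array([1.0, 0.0, 0.5])   # hypothetical stand-in for a measured IR
wet = apply_room(dry, cathedral_ir)
```

The point is that the code never "knows" the cathedral's dimensions; all of that information rides along inside the impulse response samples and comes out in the convolution.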
This is well known current technology.
Here's a free plugin for FooBar2000 that uses it....
http://wiki.hydrogenaud.io/index.php?title=Foobar2000:Components_0.9/foo_convolve
The main catch with the current technology is that it requires a special impulse file.
(Someone has to actually play an impulse sound in Winchester Cathedral and record the result to create the impulse file.)
However, wouldn't it be cool if the processor you buy next year could create a "pretty good" approximation of that impulse file by analyzing the recording itself?
It might even do a better job of simulating the sound of Winchester Cathedral than the "cathedral DSP mode" in a current processor.
You might push a button, play it a recording you like, and it would make your other albums "sound like that one"....
Or it might have a mode that "makes poorly mixed multi-track recordings sound more natural by repairing obvious inconsistencies".
If you doubt the market for that... just see how many pieces of audiophile gear claim, as their main selling point, that they "make music sound more natural".
(A variation on that claim has certainly gotten MQA plenty of buzz... and, apparently, earned them a lot of financing.)
As far as I know nobody has gotten this to work really well... yet... although I could be wrong there.
But, considering how quickly technology advances, it's only a matter of time...
(And, if someone wants me to give it a try, I'll be glad to... but I will need some financing to pay the programmers to write the code....)