Testing audiophile claims and myths
Nov 16, 2018 at 1:36 PM Post #10,591 of 17,336
All true. And the solution couldn't be simpler: KeithEmo supports his claims with some evidence. Or are you advocating that intelligent, informed and truth-seeking people should accept any old made-up, unsubstantiated theory or BS?

G

Because you're so sure you're right and can't see any other possibility, you're not seeing that judging what's "unsubstantiated theory" and "BS" is sometimes a matter of perspective.

Sincerely, I suggest that you get some help in dealing with whatever personal issues are motivating you to fight with people on the internet about topics which are pretty unimportant in the scheme of things. Try breathing exercises, meditation, spending time outdoors in nature, yoga, time with loved ones, etc.
 
Nov 16, 2018 at 1:38 PM Post #10,592 of 17,336
That's what analoguesurvivor does. He's just following the trend.

Following the trend is THE LAST thing I do.

I stopped reading JAES when developing MP3s became all the rage - when I clearly heard and felt that the info a CD is capable of conveying is not enough. That was in the mid '80s, when I was approx 25 years old - and I built from scratch the best headphones ever. Basically, a Jecklin Float WITHOUT all the limitations, constraints, design errors, you name it... - whether in the technical, financial or, unfortunately, safety department. In US states that still have the death penalty, they'd be more frugal with electricity to see someone off than what powers these babies... - the thing went into storage in late 1999 and I, reluctantly, swallowed the sour pill of replacing it with Stax. Talk about a MAJOR downgrade !!!

I bought my first CD player not because of music - but because it was the cheapest low distortion signal generator when using test discs. Limited to 20 kHz, but up to that frequency it had less THD than the signal generator I could afford.

I simply seek the closest approach to the live sound - and if miniaturized Martians rubbing their noses with each other happen to be the best means to achieve it, that is what I would be interested in.
In that quest, I am as consistent as it gets.

About time you learned to spell my name correctly - that "E" is there for (a) (m)any reason(s) - or I will start misspelling yours .
 
Nov 16, 2018 at 1:48 PM Post #10,594 of 17,336
You've got most of the basic facts more or less right.... but you've sort of missed some of the details and the conclusions.

It's well established that many things produce some output up to and including very high frequencies.
For example, even a relatively clean 1 kHz square wave produces harmonics into the megahertz (theoretically up to infinitely high frequencies).
In fact, many complex waveforms contain harmonics that theoretically extend to infinitely high frequencies.
(Many old style LED displays were chopped using a square wave at a few hundred Hz.... and produced harmonics high enough to interfere with AM radios.)
And things like a cymbal hit contain at least some components reaching into the very high ultrasonic range.
And that information is PRESENT... whether we humans can hear it or not.
The argument about "whether high-res recordings are audibly different" focuses on whether those sounds may in fact be audible to some people.
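The harmonic structure of an ideal square wave makes the point above concrete: its Fourier series contains only odd harmonics, with the nth harmonic at 1/n the amplitude of the fundamental, so the series extends upward without limit. A minimal sketch (the function name is mine, not from any library):

```python
import math

def square_wave_harmonics(f0, f_max):
    """Frequencies and relative amplitudes of an ideal square wave's
    harmonics up to f_max. The Fourier series is
    (4/pi) * sum over odd n of sin(2*pi*n*f0*t) / n,
    so only odd multiples of f0 appear, each at 1/n the fundamental's level."""
    harmonics = {}
    n = 1
    while n * f0 <= f_max:
        harmonics[n * f0] = 4.0 / (math.pi * n)
        n += 2  # even harmonics cancel in a symmetric square wave
    return harmonics

# A 1 kHz square wave still has a harmonic at 999 kHz, though it sits
# roughly 60 dB below the fundamental (20*log10(1/999)).
h = square_wave_harmonics(1_000, 1_000_000)
```

The amplitudes fall off only as 1/n, which is why square-wave-chopped LED drivers could radiate far enough up the spectrum to bother AM radios.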

HOWEVER, excluding for the moment whether anyone can HEAR them or not, those high frequency components can be used to obtain OTHER information.
For example, by analyzing the arrival times of echoes of some of those high frequencies, we may be able to tell how large the studio was, and what it was made of.

When someone hits a cymbal, we can tell what the walls and floor were made of by analyzing the spectrum of the echoes....
For example, concrete floors absorb certain frequencies more thoroughly than others, and wood walls act differently.
So, by comparing the spectrum of the original hit, to the spectrum of the echo, we can tell what the wall was made of by comparing the amounts of various frequencies present in both.
As a simple example, if the original cymbal hit was bright, but the first echo sounded dull, then the wall that first echo bounced off of was probably padded.
And, if that first echo was bright, then that wall was probably very reflective.
(And we can tell how far away the wall was by measuring the delay between the first hit and the echo.)
This gives us INFORMATION about where that cymbal was recorded.
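The distance estimate in the parenthetical above is simple geometry: if the microphone sits near the cymbal, the first echo travels to the wall and back, so its extra path is twice the wall distance. A toy calculation (assuming a co-located source and mic, and ~343 m/s for sound in room-temperature air):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def wall_distance(echo_delay_s):
    """Distance to a reflecting wall, given the delay between the direct
    sound and its first echo. Assumes the source and microphone are
    co-located, so the echo's extra path is out-and-back: 2 * distance."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# A ~17.5 ms gap between the hit and its first echo implies a wall
# about 3 m away.
d = wall_distance(0.0175)
```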

That information may then be useful for something (other than listening).
Maybe I just want to know about the studio.
Or, maybe, the vocalist sounds as if she was recorded in a different room.
By knowing what both rooms were like, I can add some specific reverb to the vocals, and make them sound like they were recorded in the same room as the cymbals.

And, more to the point of what I was saying....
A modern surround sound processor might use information like that to learn about the original venue so as to adjust its operation in some fashion.

I pointed out that we already DO have processes that use inaudible information for quite useful purposes.
Click-and-pop reducers use ultrasonic information to tell record clicks from sounds recorded in the music.
The Plangent process uses inaudible high-frequency residual record bias to correct tape flaws.
The ICE optical process uses invisible infrared components to accurately tell the difference between scratches on a photo negative and lines that are part of the actual photo.

The list goes on and on, and I merely suggested that some of the information that is obviously present in those recordings may well prove useful - even in "consumer gear".
(You might be amazed how much computer processing goes into, for example, synthesizing height channels from a two-channel recording.)
Perhaps next week's surround decoder will use that information it figures out about the studio to make more accurate height speaker channels...
If so, then it will work much better with recordings that have retained that information than with those that haven't.
I don't know... and neither can anyone else.


Thank you. I enjoyed reading that. I’ll try to study up on it.
 
Nov 16, 2018 at 1:52 PM Post #10,595 of 17,336
[1] Keith's point about sonar being a useful technology indoors does prove that reflected ultrasound is reliably detectable in normal spaces. Not all recording venues completely lack surfaces that might return some reasonable-amplitude ultrasound to the mic's position.
[2] If the wall is 3 meters from the musician, we might get (say) 6dB attenuation from the air plus (say, conservatively) another 20dB from the reflection; for all I know, at -26dB the 24kHz crap coming off a cymbal is still an intelligible signal for some arbitrary purpose.

1. There can't be any doubt that ultrasonic freqs can be used for acoustic purposes, we all learned about bats at school and hopefully Radar too. That though is unrelated to the issue because musical instruments have evolved over the decades/centuries specifically for human hearing, not for transmitting radar or bat echolocation signals. Bats for example produce massive amounts of ultrasonic content (up to 120dB I believe) while musical instruments do the exact opposite, they dramatically reduce output levels even in the high frequency range, before we even get to the ultrasonic range. So, we've got massively less ultrasonic signal to start with and what is there is massively more absorbed than much lower freqs. Even at 120dB, echolocation only operates for bats over relatively short distances.

2. I previously provided evidence that the >20kHz content from a violin accounts for 0.04% of its output. For that ultrasonic content to be at 120dB (like bats), the total audio energy of the violin would rupture your eardrums or kill you. The level we've actually got is down around -70dB; take off another 26dB and now we're at -96dB, and even the first couple of initial reflections are below the noise floor of even very quiet mics. And of course, we typically record the violin to sound 10m or so away, so we've got a lot more attenuation of the ultrasonic content being produced in the first place, causing even lower reflection levels, plus those reflections have a lot more air to travel through. The cymbal and similar untuned percussion instruments are an interesting case. Unlike tuned instruments, they do produce some significant ultrasonic content, though still far less than what's in the audible band. However, being untuned, what they're producing contains relatively little identifiable harmonic content, particularly in the high freqs (around 10kHz or so), where the decaying harmonics disappear into the general wash of sound; in fact it becomes audibly indistinguishable from white noise. That's a problem because you can't do anything with it acoustically. Try adding artificial reverb to a splash cymbal: nothing happens except it gets louder, because reflections of noise are also noise; they just sum together and produce more noise, and neither we as humans nor processors can differentiate it except as just more noise. BTW, this experiment won't work equally well with all cymbals; it's best with splash cymbals because they have relatively little lower-freq harmonic content.
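The level arithmetic above works because losses expressed in dB simply add (as amplitude ratios they multiply). A trivial check of the -96dB figure, using the numbers quoted in this exchange:

```python
import math

def db_to_amplitude(db):
    """Convert a dB figure to the equivalent amplitude ratio."""
    return 10.0 ** (db / 20.0)

# Losses in dB add; the equivalent amplitude ratios multiply.
violin_ultrasonic = -70.0    # >20 kHz content relative to full output
air_plus_reflection = -26.0  # ~6 dB air absorption + ~20 dB at the wall
total_db = violin_ultrasonic + air_plus_reflection  # -96.0

# Sanity check: multiplying the amplitude ratios gives the same -96 dB.
ratio = db_to_amplitude(violin_ultrasonic) * db_to_amplitude(air_plus_reflection)
total_db_from_ratio = 20.0 * math.log10(ratio)
```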

The problem is that we can't really analyse and derive useful acoustic information even in the frequency range where virtually all of it exists (400Hz - 7kHz). In theory we can, but the theory doesn't match the practical reality of how recordings are made: often with multiple different mic positions (and therefore different acoustic information in each) and different recording locations, then processed and other reflections added, and then all those processed + added reflection channels mixed together. I can't see how any future DSP would ever sort all that out, but even if I'm wrong and some super advanced quantum computer/software manages the task, how would that ever benefit music or film sound reproduction? Probably 99% of commercial audio recordings are not supposed to acoustically sound anything like the actual acoustic space they were recorded in. KeithEmo could hardly have picked a worse area to push his ultrasonic agenda because he can't even show there is any acoustic info there, let alone how to differentiate and analyse it or how it could be useful even if we could overcome all of these impossibilities!

G
 
Nov 16, 2018 at 2:19 PM Post #10,596 of 17,336
Because you're so sure you're right and can't see any other possibility, you're not seeing that judging what's "unsubstantiated theory" and "BS" is sometimes a matter of perspective.

Perspective is usually more useful if it’s supported with facts, not just to try to muddle up and confuse other perspectives that are being supported. That’s a trend lately with some posters. Someone states something and backs it with relevant examples. Then someone else says “you can’t know that because of (irrelevant analogy) or (logical fallacy) or (unsubstantiated belief)”. We’re expected to take all perspectives as being created equal, regardless of whether the person even knows what they’re talking about.

Gregorio pours out paragraph after paragraph of hard facts to support his position. No one else comes anywhere close to him in that regard. But certain people ignore his facts and go right back to their pet theories and beliefs without bothering to back them up at all. They pump out paragraph after paragraph of semantic arguments, untested hypotheses and pseudo-scientific verbiage that doesn't address the point at hand.

It’s just a matter of intellectual honesty. If someone really did have a position to argue, they would love to have someone like Gregorio to challenge them. He’s smart, experienced and knowledgeable. That is exactly what you need to sharpen your argument. But instead we get the same old blather, ignoring every point he makes, and complaints that he isn’t being “nice” enough. OK. That tells me something. I don’t need to follow that very closely. I’ll just read the stuff that makes a point and supports it.
 
Nov 16, 2018 at 2:57 PM Post #10,597 of 17,336
A few posts back I provided @KeithEmo a link to hi-res files of close-miked cymbal hits. As expected, there is ultrasonic content up to 40kHz. My purpose was not to challenge him but rather to learn what kind of information he could retrieve from one of them. I am waiting in case he wishes to show some concrete material.
IMHO, music is not released for the purpose of eventually analyzing the flight of a fly/mosquito during a live event.
If one wants to dig inside 44.1/16 files, there is already lots of information to deal with. It's useless to look at ultrasonics when 50/60Hz, for example, can give you clues...
 
Nov 16, 2018 at 3:21 PM Post #10,598 of 17,336
Probably 99% of commercial audio recordings are not supposed to acoustically sound anything like the actual acoustic space they were recorded in. KeithEmo could hardly have picked a worse area to push his ultrasonic agenda because he can't even show there is any acoustic info there, let alone how to differentiate and analyse it or how it could be useful even if we could overcome all of these impossibilities!

G

This is a good response overall, I don't disagree on any of it really. I would think it goes without saying that you basically wouldn't bother looking for ultrasonics from anything except percussion.

When it comes to drums / cymbals, I could imagine an algorithm that might do something interesting using ultrasonic content.

Since that band will be relatively free from interference from anything except percussion, you might be able to use the cymbals to approximate some kind of impulse response and then derive the size / composition of the space based on certain assumptions (rectangular room, for example), and from there generate some kind of useful "clean" IR?

From other comments, I gather this is already attempted by some software, but I don't have any idea of how well it works. And it seems more like a novelty for producers / engineers than something a consumer would want, but it's sort of fascinating to think about.
 
Nov 16, 2018 at 11:37 PM Post #10,599 of 17,336
That's exactly what I was talking about.

Most modern room correction systems use an impulse as their preferred test signal.
They then analyze the returning echoes from the impulse, with a lot of heavy math, to learn all sorts of things about the room.
This can be done most accurately when you have the option of creating a specific and precisely known impulse as a stimulus.
However, any waveform that approximates an impulse will provide you with data, although it will contain more variables, and so be less precise.
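The core of that analysis can be sketched as frequency-domain deconvolution: if the stimulus is known, dividing the recording's spectrum by the stimulus's spectrum recovers an estimate of the room's impulse response. Real room-correction systems add regularization, windowing and averaged sweeps; this toy version (all names are mine) only shows the principle:

```python
import numpy as np

def estimate_impulse_response(stimulus, recording, eps=1e-12):
    """Estimate a room impulse response by dividing out the known
    stimulus spectrum. eps guards against division by zero; real
    systems use proper regularization instead."""
    n = len(recording)
    S = np.fft.rfft(stimulus, n)
    R = np.fft.rfft(recording, n)
    return np.fft.irfft(R / (S + eps), n)

# Toy room: direct sound plus a single echo at sample 40, half amplitude.
room_ir = np.zeros(256)
room_ir[0], room_ir[40] = 1.0, 0.5

# With a perfect unit impulse as the stimulus, the recording IS the IR;
# a less ideal stimulus (a cymbal hit, say) adds variables and error.
stimulus = np.zeros(256)
stimulus[0] = 1.0
recording = np.convolve(stimulus, room_ir)[:256]
estimated = estimate_impulse_response(stimulus, recording)
```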

Now, let's assume I have a multi-track recording of a vocalist singing with a band...
The band was recorded in a large room, but the vocalist was recorded in a sound booth at the studio, and mixed in later.
It's obvious that, at least to begin with, the background tone of the band's performance isn't going to match that of the vocalist.
If the band recorded in a cathedral, there will be echoes of the drums from the walls, and other sorts of "venue ambience".
However, those room size cues will be missing from the vocal track (there won't be any of those echoes in the vocal track because the vocalist wasn't singing there).
If the mix was well mastered, the engineer will have added reverb to the vocal track to match the ambience associated with the music.
He'll have used a plugin to create echoes and other ambience in the vocal track to make it seem as if the vocalist was singing in the same room as the band was playing.
And, if that wasn't done, some humans might complain that the recording sounded quite unnatural, and was "obviously multi-tracked".
A few recent mastering plugins offer the ability to fix this automatically, by "extracting the tone from one track and applying it to another".

If you've been keeping track, you'll realize that there is a long history of including various "DSP modes" in home theater processors.
Most of them simulate the sounds of specific types of rooms by adding processing to the audio.
Yamaha was well known for offering DSP modes like "concert hall" and "cathedral" as options on their home theater gear.

Could someone sell a new product that includes a DSP algorithm that "makes unnatural sounding recordings sound more natural"?
The answer there is an obvious yes... because many such products already exist.
Could such an algorithm make use of information about the original venue where most of the tracks were recorded to do a better job?
I'll bet it could.

Also note that you don't always have to have "complete, detailed, and fully extracted information" in order for it to be useful.
For example, I can record the impulse response of a room, and that impulse response can be analyzed to create a "signature" of how that room sounds.
I can then use a convolver algorithm to apply that signature to a different recording.
And, after I do so, it will make my new track "sound as if it was played in that room".
For example, I can record a vocal track in a sound booth, and use my convolver to apply the impulse response from Winchester Cathedral...
And, after I do, I'll end up with a recording that SOUNDS very much like that vocalist was singing in Winchester Cathedral...
That impulse file of Winchester Cathedral "contains" information about the dimensions and other acoustic properties that make Winchester Cathedral unique...
And, even more interesting, I can apply that information to another recording to alter it...
AND I CAN DO THIS *WITHOUT* ACTUALLY ANALYZING THE FILE OR EXTRACTING THE SPECIFIC INFORMATION FROM IT.
I can make it sound as if my singer was singing in Winchester Cathedral.... without actually bothering to calculate the dimensions of the cathedral.
This is well known current technology.
Here's a free plugin for FooBar2000 that uses it.... http://wiki.hydrogenaud.io/index.php?title=Foobar2000:Components_0.9/foo_convolve
The main catch with the current technology is that it requires a special impulse file.
(Someone has to actually play an impulse sound in Winchester Cathedral and record the result to create the impulse file.)
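For anyone who wants to see the mechanism, convolution reverb is a one-liner at its core: convolve the dry signal with the room's impulse response. (Real convolvers, including the foobar2000 component linked above, use partitioned FFT convolution for speed; plain np.convolve shows the idea. All names here are mine.)

```python
import numpy as np

def apply_room(dry, impulse_response):
    """Convolve a dry recording with a room's impulse response so it
    sounds as if it were played in that room, then normalize the peak
    to avoid clipping."""
    wet = np.convolve(dry, impulse_response)
    return wet / max(np.max(np.abs(wet)), 1e-12)

# Toy "cathedral" IR: the direct sound plus two decaying echoes.
ir = np.zeros(1_000)
ir[0], ir[300], ir[700] = 1.0, 0.6, 0.3

dry = np.random.default_rng(0).standard_normal(2_000)  # stand-in for a vocal
wet = apply_room(dry, ir)
```

The convolved result is longer than the dry signal by the length of the IR minus one sample: the room's decay "rings out" past the end of the source.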

However, wouldn't it be cool if the processor you buy next year could create a "pretty good" approximation of that impulse file by analyzing the recording itself?
It might even do a better job of simulating the sound of Winchester Cathedral than the "cathedral DSP mode" in a current processor.
You might push a button, play it a recording you like, and it would make your other albums "sound like that one"....
Or it might have a mode that "makes poorly mixed multi-track recordings sound more natural by repairing obvious inconsistencies".
If you doubt the market for that... just see how many pieces of audiophile gear claim, as their main selling point, that they "make music sound more natural".
(A variation on that claim has certainly gotten MQA plenty of buzz... and, apparently, earned them a lot of financing.)

As far as I know nobody has gotten this to work really well... yet... although I could be wrong there.
But, considering how quickly technology advances, it's only a matter of time...
(And, if someone wants me to give it a try, I'll be glad to... but I will need some financing to pay the programmers to write the code....)

This is a good response overall, I don't disagree on any of it really. I would think it goes without saying that you basically wouldn't bother looking for ultrasonics from anything except percussion.

When it comes to drums / cymbals, I could imagine an algorithm that might do something interesting using ultrasonic content.

Since that band will be relatively free from interference from anything except percussion, you might be able to use the cymbals to approximate some kind of impulse response and then derive the size / composition of the space based on certain assumptions (rectangular room, for example), and from there generate some kind of useful "clean" IR?

From other comments, I gather this is already attempted by some software, but I don't have any idea of how well it works. And it seems more like a novelty for producers / engineers than something a consumer would want, but it's sort of fascinating to think about.
 
Nov 17, 2018 at 1:13 AM Post #10,600 of 17,336
None of this suggests any utility in recording ultrasonics.

Also nothing in his writing is backed up by any evidence that calculating the acoustics or structure of the original venue would be beneficial to the reproduction of the record.

MQA is a good example of the fact that audiophiles demand impossible things and the fact that unscrupulous companies who pretend to serve them the moon on a plate succeed over engineers who try to deliver actual improvements on an everyday basis.
 
Nov 17, 2018 at 3:23 AM Post #10,601 of 17,336
None of this suggests any utility in recording ultrasonics.

Also nothing in his writing is backed up by any evidence that calculating the acoustics or structure of the original venue would be beneficial to the reproduction of the record.

MQA is a good example of the fact that audiophiles demand impossible things and the fact that unscrupulous companies who pretend to serve them the moon on a plate succeed over engineers who try to deliver actual improvements on an everyday basis.

I thought the MQA fingerprinting created marginally audible distortions, no? I could be wrong. I’m not asserting, I’m asking. Here is what I was thinking of:

https://mattmontag.com/audio-listening-test/
 
Nov 17, 2018 at 7:27 AM Post #10,602 of 17,336
[1] And, yes, the echoes of bat signals do contain acoustic information...And bats do use them to collect data about things like... where the walls are... and what they're made of. And, yes, they seem to be somewhat better at analyzing that data than we are... at least for now.
[2] No.... I'm really done being baited into responding this time

1. Stop with the schoolboy BS! What schoolboy doesn't know about bat echolocation? Now all you have to do is give a single example of an album that contains some bat echolocation signals (and their reflections). If you can't, then explain what possible relevance this has to music/sound production and how you're not trying to promote a correlation fallacy as fact.

2. What do you mean "you're done being baited"? You repeatedly and deliberately make up BS, thereby attempting to pervert the whole point of this forum and insult its members, and YOU'RE the one being baited? This thread is "testing audiophile claims and myths" NOT "let's make up a whole new bunch of unsubstantiated audiophile BS claims"! Who's baiting whom here? The level of hypocrisy is staggering!!

Because you're so sure you're right and can't see any other possibility, you're not seeing that judging what's "unsubstantiated theory" and "BS" is sometimes a matter of perspective.

Great, if there's another/different perspective that is factually or scientifically valid, then by definition of that alternative perspective being factually/scientifically valid there MUST be some valid evidence to support it. Show me some, anything.

[1] I would think it goes without saying that you basically wouldn't bother looking for ultrasonics from anything except percussion.
[2] When it comes to drums / cymbals, I could imagine an algorithm that might do something interesting using ultrasonic content. ...

1. In theory, that would seem entirely logical and I'd agree. In practice though, the vast majority of the time it couldn't work.
2. With drums: I'd certainly agree there could be some acoustically useful ultrasonic content, particularly with the snare drum, because we've got a high level of signal with a reasonable level of ultrasonic content, a pronounced transient and a fairly short decay which won't interfere too much with any acoustic reflections and therefore should make differentiating the direct signal from the reflections relatively simple. In theory then, the snare drum is the most logical place to look for analysable ultrasonic acoustic information. But then we run into that pesky problem of what actually happens in the real world in practice! In practice, the snare drum in a kit is always extremely closely mic'ed (typically an inch or less), to reduce spill from the other instruments in the kit and thereby allow us some ability to process and mix the snare drum without adversely affecting the other instruments too much. The consequence of this is that: A. If we're reducing spill significantly from the other loud instruments in the kit which are only a few inches or a foot or so away, then obviously we're reducing the relatively distant and quieter reflections of the snare drum by significantly more. B. This close mic has to be placed on the far side of the drum (otherwise the drummer would hit it), pointing downwards and towards the drummer. So most of the wall reflections arriving straight at the mic (where it is most sensitive) would have to pass through the drummer first! Additionally, we would get little/no reflections from the floor, as they would have to pass through the snare drum first.
If all that isn't bad enough, the snare drum in a kit is typically quadruple mic'ed: the batter head mic (which I've just described), the snare head mic (a mic placed underneath the snare drum pointing upwards) to capture the snare "sizzle" which is somewhat lost in the batter head mic (due to the batter head being in the way), then also the "overheads" (which I'll come back to) and lastly, often a room mic. That's a great deal of conflicting phase and acoustic information, particularly between the two main snare mics (batter and snare head mics), because the distance between them causes severe phase cancellations; so much so that the phase of the snare head mic is commonly "flipped" (180deg out of phase). Lastly, the whole point of close mic'ing the snare drum in the first place was so we could process it somewhat independently: compression, EQ and also (pretty much without exception) some artificial reverb. And while we're on artificial reverb, I can't recall off the top of my head ever seeing a reverb preset that didn't roll off its output dramatically above 12kHz, and the vast majority do so at about 7kHz. Most in fact have a low pass filter. There's absolutely no doubt that the snare drum does contain ultrasonic content, does contain a lot of acoustic information and that acoustic information can be relatively easily differentiated, but I can't imagine how on earth you'd actually analyse it to get any sort of intelligible information about the room's acoustics out of all that disparate phase and acoustic information, unless all the exact details of mic positions and exactly what was done during mixing were available to the processor (so it could maybe attempt some sort of reverse engineering). But of course none of that information is logged or available, and some/most of it never will be. Additionally, we've just been talking about the snare drum on its own, which is available to the mix engineer but not the consumer.
What the consumer gets is the snare drum mixed with the rest of the drumkit, plus all the different processing (EQ/compression/reverb etc.) of those other drumkit instruments, and of course all the other instruments and vocals in the band/ensemble, each with their own processing and DIFFERENT acoustic information. As I explained (and provided the evidence), science doesn't yet even have a theory on how we could analyse all that and get intelligible results. And lastly, acoustic information in the ultrasonic range is the last place to look: even with the raw snare drum recordings, which contain significant ultrasonic content, because of the way we record it I don't recall ever having seen any ultrasonic acoustic info, and once it's mixed, the most dominant (and perceptually important) acoustic information present doesn't extend beyond 12kHz (at most!).

So where else could we look?

The kick drum has some ultrasonic content but quite often it's filtered out. If it isn't though, just maybe there'd be some ultrasonic acoustic information. Again, I don't recall ever having seen any, if there is, the ratio puts it below the noise floor. Certainly within the audible band the primary kick mic does definitely provide significant acoustic information, although again, even if it could be extracted from a completed mix and analysed, I fail to see how it could provide any useful (rather than harmful) information to a reproduction system. The kick drum is virtually always recorded with the mic actually inside the kick drum or just slightly outside the hole in the resonant head (again to provide high signal to spill ratio), so all the acoustic information captured is the reflections of the inside of the drum. I suppose if someone wanted their recording to sound like the band/ensemble were all inside a kick drum then it would be useful, otherwise it would be harmful.

The cymbals are generally NOT extremely close mic'ed and therefore we avoid all the loss, phase and other issues of the acoustic information that we have with the other kit instruments. Better still, the primary source for the cymbal sound in a drumkit sub-mix is typically the overhead mics, which are a stereo pair, and that means two coherent signals which can be phase compared; in theory, much more detailed/accurate acoustic information could be extracted than can be extracted from single (mono) mics. Unfortunately, there's an elephant in the ointment! Unlike the drums, cymbals by design have a long (and high level) decay, which is effectively random noise, and we cannot even differentiate, let alone analyse, reverb/reflections that are going to be at minimum 20dB below that random noise. Now I need to be a little more precise here, because KeithEmo has already tried to misrepresent this fact. White noise, while random, has equal intensity at all freqs, giving it equal power spectral density. This means that white noise has statistical probability properties, and that provides a potential differentiation and analysis opportunity. We could in theory have a signal that is the same level or even somewhat lower than white noise, analyse this combined signal and justly assume that deviations from the constant statistical probability of white noise must represent the signal that is buried in the white noise. While not perfectly accurate (we only ever have a probability of accuracy), it has been shown to be accurate enough to be audibly indistinguishable (in dbx tests, subjects were unable to distinguish the original signal from the extracted signal). However, there are two points to consider: Firstly, the signal can only be a little lower than the white noise. Once it falls below a certain level, it can no longer modulate/sum with the freqs within the white noise by enough to fall outside of its statistical amplitude range.
Secondly, while all this development and research is interesting and might somehow lead to a useful application in the future, the real problem, with real recordings and real cymbals is that while it might sound like white noise, it isn't, it's random noise but it's not specifically white noise (white noise only exists as a mathematically constructed, generated signal, it doesn't occur in the natural world) and therefore, the whole house of cards comes crashing down because this random noise no longer has the statistical probability properties to compare against. In a real recording, we cannot hear (nor differentiate with DSP) the reflections/reverb caused by the cymbals because it's below the level of the direct cymbal sound, which is random (but not white) noise. And BTW, there's an additional problem if we're looking only in the ultrasonic range. A near coincident stereo pair is phase coherent BUT only up to the high freq range. Very high freqs have very short wavelengths and the small distance that must exist between the mic capsules therefore causes phase incoherency. So any calculations based on phase/timing (such as reflection arrival times for example) are going to be incorrect.
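The statistical trick described above - relying on white noise's flat expected spectrum to spot a buried deterministic signal - can be sketched by averaging many periodograms. Here a tone roughly 29dB below the noise floor still produces an unmistakable spectral peak, precisely because the noise really is white; replace it with the non-white "wash" of a real cymbal and the flat reference floor is gone. (All parameter choices are mine, purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_seg, n_segs = 48_000, 4_800, 200   # 10 Hz bin spacing
t = np.arange(n_seg) / fs

# Deterministic tone ~29 dB below the unit-RMS white noise floor,
# placed exactly on an FFT bin to avoid spectral leakage.
tone = 0.05 * np.sin(2 * np.pi * 1_000.0 * t)

# Average many periodograms: white noise averages toward a flat floor,
# so any persistent deviation from flatness betrays the buried signal.
psd = np.zeros(n_seg // 2 + 1)
for _ in range(n_segs):
    psd += np.abs(np.fft.rfft(tone + rng.standard_normal(n_seg))) ** 2
psd /= n_segs

peak_bin = int(np.argmax(psd))
peak_freq = peak_bin * fs / n_seg
```

The detection works only because the expected noise spectrum is known to be flat; with unknown, non-flat random noise there is no reference to measure deviations against, which is the "house of cards" point above.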

There are still other considerations, not least that music produced in the last 15-20 years commonly doesn't use an actual drumkit in the first place. Drumkit samples are virtually exclusively used in EDM and other electronic genres, and even in the more traditional rock genres, in the past 10 years or so drumkit samples have become so good that it's difficult even for highly experienced engineers and drummers to tell the difference.

G
 
Last edited:
Nov 17, 2018 at 8:52 AM Post #10,603 of 17,336
Now, let's assume I have a multi-track recording of a vocalist singing with a band...
The band was recorded in a large room, but the vocalist was recorded in a sound booth at the studio, and mixed in later.
It's obvious that, at least to begin with, the background tone of the band's performance isn't going to match that of the vocalist.
If the band recorded in a cathedral, there will be echoes of the drums from the walls, and other sorts of "venue ambience".
However, those room size cues will be missing from the vocal track (there won't be any of those echoes in the vocal track because the vocalist wasn't singing there).
If the mix was well mastered, the engineer will have added reverb to the vocal track to match the ambience associated with the music.
He'll have used a plugin to create echoes and other ambience in the vocal track to make it seem as if the vocalist was singing in the same room as the band was playing.
And, if that wasn't done, some humans might complain that the recording sounded quite unnatural, and was "obviously multi-tracked".
A few recent mastering plugins offer the ability to fix this automatically, by "extracting the tone from one track and applying it to another".

If you've been keeping track, you'll realize that there is a long history of including various "DSP modes" in home theater processors.
Most of them simulate the sounds of specific types of rooms by adding processing to the audio.
Yamaha was well known for offering DSP modes like "concert hall" and "cathedral" as options on their home theater gear.

Could someone sell a new product that includes a DSP algorithm that "made unnatural sounding recordings sound more natural"?
The answer there is an obvious yes... because many such products already exist.
Could such an algorithm make use of information about the original venue where most of the tracks were recorded to do a better job?
I'll bet it could.

I can only assume you made your "baiting" cry in your earlier post as some sort of attempt to cover the fact that you intended some severe baiting of your own. You and I both know that you know nothing about mixing or mastering (as the quote above demonstrates) and you are also aware that I've been a professional engineer for many years. So, what reason other than baiting could there possibly be for posting a bunch of utter BS about mixing/mastering to an actual mixing and mastering engineer??

For everyone else, I'm sure KeithEmo's post sounds entirely reasonable to you. It certainly doesn't sound unreasonable, but that's what he's good at (and what I hope Emotiva pays him well for)! However, it is in fact all utter nonsense: almost every single line above is incorrect/false, and the couple of assertions which are actually correct he's used to promote a conclusion/assertion which is false! But how would you recognise all this utter BS unless you had practical experience in a professional recording studio and had experienced for yourself how recordings are actually created? It really is impressive: how does one make such utter BS/nonsense sound so believable and reasonable? Politicians pretty much do that for a living, and many/most don't achieve it as well as KeithEmo just has. Again, I hope he's well paid for it!

Particularly from the responses of @Phronesis, but also one or two others, I'm starting to get the impression that you actually prefer reasonable-sounding BS to the actual facts/science. If so, either you're in the wrong place, or this forum is mis-named and I'm the one in the wrong place? Either way, unless there is anyone here interested in the actual facts, there's no point going through the above, explaining why it's all utter BS and what the actual facts are, because at best all I'd be doing is attempting to ruin your enjoyment, belief and support of very reasonable-sounding BS.

G
 
Last edited:
Nov 17, 2018 at 9:21 AM Post #10,604 of 17,336
^ I have no issue with vigorous debate about technical issues. What I have a problem with is personal attacks and rudeness. Other people may be reluctant to call out rudeness, but I'm not.
 
Last edited:
Nov 17, 2018 at 10:51 AM Post #10,605 of 17,336
That's exactly what I was talking about.

Most modern room correction systems use an impulse as their preferred test signal.
They then analyze the returning echoes from the impulse, along with a lot of heavy math, to learn all sorts of things about the room.
This can be done most accurately when you have the option of creating a specific and precisely known impulse as a stimulus.
However, any waveform that approximates an impulse will provide you with data, although it will contain more variables, and so be less precise.
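As one concrete example of that "heavy math": a room's reverberation time can be estimated from a recorded impulse response using Schroeder backward integration (the standard technique behind T20/T30 measurements). Here's a rough sketch with a synthetic impulse response of known decay; the numbers are illustrative, not from any real room.

```python
import numpy as np

def rt60_from_impulse(ir, fs):
    """Estimate RT60 from a recorded room impulse response using
    Schroeder backward integration and the T20 method."""
    energy = ir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    # Fit the -5 dB .. -25 dB portion and extrapolate to -60 dB (T20).
    i5 = np.argmax(edc_db <= -5.0)
    i25 = np.argmax(edc_db <= -25.0)
    slope = (edc_db[i25] - edc_db[i5]) / ((i25 - i5) / fs)  # dB per second
    return -60.0 / slope

# Synthetic IR: exponentially decaying noise with a known RT60 of 1.5 s.
fs = 48_000
t = np.arange(int(2.0 * fs)) / fs
rng = np.random.default_rng(1)
ir = rng.standard_normal(t.size) * 10 ** (-3 * t / 1.5)  # -60 dB at 1.5 s
print(rt60_from_impulse(ir, fs))   # prints an estimate near 1.5
```

A real measurement would of course use a recorded sweep or impulse rather than synthetic noise, and band-filter the IR per octave, but the integration-and-fit step is the same.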

Now, let's assume I have a multi-track recording of a vocalist singing with a band...
The band was recorded in a large room, but the vocalist was recorded in a sound booth at the studio, and mixed in later.
It's obvious that, at least to begin with, the background tone of the band's performance isn't going to match that of the vocalist.
If the band recorded in a cathedral, there will be echoes of the drums from the walls, and other sorts of "venue ambience".
However, those room size cues will be missing from the vocal track (there won't be any of those echoes in the vocal track because the vocalist wasn't singing there).
If the mix was well mastered, the engineer will have added reverb to the vocal track to match the ambience associated with the music.
He'll have used a plugin to create echoes and other ambience in the vocal track to make it seem as if the vocalist was singing in the same room as the band was playing.
And, if that wasn't done, some humans might complain that the recording sounded quite unnatural, and was "obviously multi-tracked".
A few recent mastering plugins offer the ability to fix this automatically, by "extracting the tone from one track and applying it to another".

If you've been keeping track, you'll realize that there is a long history of including various "DSP modes" in home theater processors.
Most of them simulate the sounds of specific types of rooms by adding processing to the audio.
Yamaha was well known for offering DSP modes like "concert hall" and "cathedral" as options on their home theater gear.

Could someone sell a new product that includes a DSP algorithm that "made unnatural sounding recordings sound more natural"?
The answer there is an obvious yes... because many such products already exist.
Could such an algorithm make use of information about the original venue where most of the tracks were recorded to do a better job?
I'll bet it could.

Also note that you don't always have to have "complete, detailed, and fully extracted information" in order for it to be useful.
For example, I can record the impulse response of a room, and that impulse response can be analyzed to create a "signature" of how that room sounds.
I can then use a convolver algorithm to apply that signature to a different recording.
And, after I do so, it will make my new track "sound as if it was played in that room".
For example, I can record a vocal track in a sound booth, and use my convolver to apply the impulse response from Winchester Cathedral...
And, after I do, I'll end up with a recording that SOUNDS very much like that vocalist was singing in Winchester Cathedral...
That impulse file of Winchester Cathedral "contains" information about the dimensions and other acoustic properties that make Winchester Cathedral unique...
And, even more interesting, I can apply that information to another recording to alter it...
AND I CAN DO THIS *WITHOUT* ACTUALLY ANALYZING THE FILE OR EXTRACTING THE SPECIFIC INFORMATION FROM IT.
I can make it sound as if my singer was singing in Winchester Cathedral.... without actually bothering to calculate the dimensions of the cathedral.
This is well known current technology.
Here's a free plugin for FooBar2000 that uses it.... http://wiki.hydrogenaud.io/index.php?title=Foobar2000:Components_0.9/foo_convolve
The main catch with the current technology is that it requires a special impulse file.
(Someone has to actually play an impulse sound in Winchester Cathedral and record the result to create the impulse file.)
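For the curious, the convolution step itself is only a few lines. A minimal numpy sketch with a toy impulse response (a real convolver plugin would load a measured IR file and use partitioned convolution for low latency, but the core operation is just this):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Apply a room's impulse response to a dry recording via FFT
    convolution -- the operation a convolver plugin performs."""
    n = len(dry) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()             # next power of two
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft), nfft)
    return wet[:n]

# Toy "cathedral" IR: direct sound plus two decaying reflections.
fs = 48_000
ir = np.zeros(fs)                # one second of impulse response
ir[0] = 1.0                      # direct sound
ir[int(0.25 * fs)] = 0.4         # first reflection at 250 ms
ir[int(0.60 * fs)] = 0.2         # later reflection at 600 ms

dry = np.zeros(fs)
dry[0] = 1.0                     # a single click as the "dry vocal"
wet = convolve_ir(dry, ir)       # the click now carries the room's echoes
```

Feeding a click through the convolver reproduces the IR itself, which is exactly why the IR "contains" the room: every sample of the dry track gets its own scaled, delayed copy of those echoes.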

However, wouldn't it be cool if the processor you buy next year could create a "pretty good" approximation of that impulse file by analyzing the recording itself?
It might even do a better job of simulating the sound of Winchester Cathedral than the "cathedral DSP mode" in a current processor.
You might push a button, play it a recording you like, and it would make your other albums "sound like that one"....
Or it might have a mode that "makes poorly mixed multi-track recordings sound more natural by repairing obvious inconsistencies".
If you doubt the market for that... just see how many pieces of audiophile gear claim, as their main selling point, that they "make music sound more natural".
(A variation on that claim has certainly gotten MQA plenty of buzz... and, apparently, earned them a lot of financing.)

As far as I know nobody has gotten this to work really well... yet... although I could be wrong there.
But, considering how quickly technology advances, it's only a matter of time...
(And, if someone wants me to give it a try, I'll be glad to... but I will need some financing to pay the programmers to write the code....)

What you said above is going to happen - real soon, if it is not actually already being used by someone out there.

One can wax lyrical that RBCD is enough - left, right, up, down and/or around... - but once confronted with the real live mike feed - or a HR digital recording of it - the game is over in an instant.

Ultrasonics are LOW in level - well below 0.1% (add as many zeroes between the decimal point and the 1 as you feel appropriate) - but they are there and do serve their purpose.

How many times, in the analogue days, did you read that one could "picture" the acoustics of the venue right after the stylus hits the groove, before the music has even begun to play? Using high-speed phono gear - starting with the stylus/cartridge, of course, and following through the system, right to the end electroacoustic transducer that allows us to hear what's engraved in the groove. It is here that the most dramatic difference between (most, > 99% of) MM cartridges and MC cartridges occurs - with MCs as a group clearly outperforming MMs as a group in this regard. Up to 20 kHz, both may well be remarkably similar in frequency response - but above 20-30 kHz, MCs will generally run rings around MMs. There are < 1% of MMs that can match or even exceed the performance of MCs as a group - but that "exceed" would - perhaps - account for 0.000 ..................01 % of the actual MM cartridges still in use today.

I was lucky enough to be in a hall some 357.82 metres away from my home (depends which corner of each place you take as the reference...) back in 2009, when an older colleague was measuring the "reverberation" of that hall using the pulse method. In no time I was there with a DSD64 recorder and mics that extend to at least 40 kHz - how linearly, exactly, I do not know (yet), but output > 50 kHz can regularly be seen in recordings of instruments that go that high.

How does one create an acoustic pulse extending WELL PAST the officially audible 20 kHz - up into the MHz region, in fact? Answer: an explosion. A small, controlled and repeatable one. The first use of such an impulse I saw was by A.R. Bailey in his paper on the transmission line loudspeaker enclosure in Wireless World in 1965. http://diyaudioprojects.com/Technical/Papers/Non-resonant-Loudspeaker-Enclosure-Design.pdf He used precisely machined copper wire (thinner in the middle), clamped at the ends with contacts, both leading to a switch and a low-inductance, high-voltage capacitor. After charging, the capacitor was discharged through the "exploding wire(s)" - giving a point-source, precisely repeatable omnidirectional impulse with a frequency response beyond what any known microphone can measure.

My friend used a simplified version: a high-voltage, low-inductance 10 uF 4000 V oil capacitor (from a B-17 downed over our territory during WW II - the Germans were here up to late April '45...) was allowed to charge from the mains through a voltage multiplier circuit, creating a spark discharge through the air across appropriately spaced electrodes - again a repeatable sonic pulse, only slightly less of a point source and slightly less omnidirectional than the "exploding wire(s)". Alternatively, a series of equal and equally inflated toy balloons was recorded bursting (with needle/cigarette ???).
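For a sense of scale (my arithmetic, not from the original measurement - just the standard capacitor-energy formula applied to the figures quoted above):

```python
# Stored energy in the spark-gap capacitor described above:
# E = 1/2 * C * V^2, using the quoted 10 uF / 4000 V figures.
C = 10e-6      # farads
V = 4000.0     # volts
E = 0.5 * C * V ** 2
print(E)       # 80.0 joules per discharge
```

Eighty joules dumped through a spark gap in microseconds is why such a discharge makes a usefully loud, broadband, repeatable pulse.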

I can provide these files, if interested.
 
Last edited:
