Nov 18, 2018 at 5:29 PM Post #10,636 of 19,070
Back in the day, the record labels genuinely competed on the sound quality they ultimately delivered to the customer.

With the advent of digital recording - and particularly through the limitations of early digital, which unfortunately culminated in the de facto standard for recording and distributing music, the (in)famous RBCD - sound quality took a big hit and nosedived to almost an afterthought. Things only got better once digital sample rates went high enough, around the end of the millennium, but an entire generation - roughly 20 years - was lost, and the good recording techniques already mastered in the analogue days had to be rediscovered by a new generation of recording engineers.

Here is a nice video on the use of what would most likely be termed "unnecessary overkill" by @bigshot & Co: the 35 mm magnetic tape that was used for making vinyl records and, much later, was also made available on CD. But the true sound of these tapes is available either on the original pressing LPs or on the HR (PCM and DSD) digital downloads that have recently been produced from the 35 mm masters.



Try to listen to at least ONE of these recordings, either on record or HR digital, and you might begin to understand that, although admittedly expensive, "overkill" DOES produce better results. These recordings are now roughly 60 years old - back then no one (except in the studio while making them) could reproduce them with anything approaching the quality available today - if not exactly to the masses, then at least to anyone interested enough in sound quality to dedicate the money required for equipment that can show these recordings in their proper light.

The recording engineers of the past tried to use the best equipment available, cost be damned - rather than using clever equipment choices mostly as a way to reduce cost, relegating sound quality to the back seat.
 
Nov 18, 2018 at 7:04 PM Post #10,637 of 19,070
I like your comparison to the GPS system.
However, I will note that, in many situations where we don't happen to have beacons already in place, we EXTRACT information from existing sources.
For example, we calculate the locations of cosmic events, based on information extracted from background radiation levels, pulsar emissions, and other cosmic events.
And, long before we had GPS satellite beacons, people were navigating based on the position of naturally occurring "beacons".
(For example, by pointing a sextant at the sun and the moon, and calculating our position based on information we extracted about their positions.)
This is analogous.

So, the GPS timing signal is analogous to acoustic information and if it isn't there (as it isn't in the case of ultrasonic freqs), then GPS won't work. So your suggestion is to use, say, the sun, stars or other cosmic events instead of the GPS timing signal. As "This is analogous", then presumably you've similarly got some suggestions for what we could use instead of reflections/reverb on a recording to provide us with acoustic information? Great, let's hear them then! Or are you saying we can actually extract pulsar emissions from an album and somehow use that? As ridiculous as that sounds, it's no more ridiculous than what you seem to be suggesting, so I really can't tell!

For everyone interested in facts, let's look at some:

As mentioned before, instruments like the snare drum in a drum kit are mic'ed extremely closely and, as they produce a significant amount of ultrasonic content, we can record it and you can see it in a spectral analysis. However, the consequence of such close mic'ing is that we largely lose everything else (especially the relatively distant reflections), which is of course the point. So, if we want to record those reflections, we have to move the mic significantly further away. In practice that won't work when recording a drum kit (because of spill), but let's say for now we're just recording a solo, unaccompanied snare drum.

Let's say the mic is 10m (about 33ft) away, to keep the figures simple and because in live gigs the ideal seating position will be at least that far away. So what will actually happen to the sound received by our mic? I've mentioned high freq absorption before, but that's not the whole story, because in addition to air absorption we've also got air damping. Let's say from an inch away we've got a 110dB snare hit, which we'll say is 0dBFS, and most likely the ultrasonic freqs are at around -40dB. With a mic position 10m away we lose roughly 50dB (across the entire sonic and ultrasonic range) due to air damping, leaving us with roughly -48dB in the audible range and -88dB in the ultrasonic range. However, we've also lost roughly an additional 12dB of the ultrasonic range due to high freq air absorption, so now we're down to about -100dB - and that's for the direct sound.

For the reflections the situation is different, because our closely positioned snare drum mic was already quite far away from the reflection source (the walls), so while we're moving 10m away from the drum, our relative distance to the wall reflections is only a little more than it was. So closely mic'ed, the reflections were most likely around 50dB below the direct sound, but at 10m we're much closer to parity, with the reflections most likely somewhere around -50dB. The ultrasonic content of those reflections, though, will be significantly lower than the -100dB of the direct signal, because we've also got wall absorption and some more air to consider - most likely it's around -120dB.

Apart from the actual air damping and absorption amounts, these figures can vary quite a bit, depending on the exact size of the room, how far from the walls the snare drum is placed and how far away the mic is relative to the walls (both close and far mic'ed). I'm just giving an average from my experience and being somewhat conservative, to avoid objections of cherry picking the most favourable figures for my argument.

We've got one more variable to consider: we can turn the mic up - or rather, we can't turn a mic up, that's impossible, but what we can do is turn up the amplification of the signal coming out of the mic. So, when we move the mic 10m away we can simply turn the mic's output up by say 48dB and again hit our 0dBFS. Our direct ultrasonic level is now up to about -52dB and the ultrasonic reflection level up to say about -70dB. So, that's ridiculously low but there's something there that maybe could be extracted? Well, no! Remember we can't turn up a mic, only its output, and that means we've not only turned up the signal by 48dB but also the noise floor of the recording venue and the self-noise of the mic, and increased the noise produced by the amp. When closely mic'ed we probably had a combined noise floor down at -100dB or so, in a very well isolated studio with a particularly quiet mic, but after 48dB of gain our noise floor is closer to about -45dB, putting our ultrasonic reflection level some 20 times or so lower than the noise floor.

That's just in theory of course. In disputes about these sorts of levels there often seems to be an assumption that a mic is somehow infinitely sensitive and that everything is captured down to an infinitely quiet level, just buried in noise. This is of course a fallacy, just as it's a fallacy to assume a Ford Focus could travel at near light speed if it weren't for tyre friction and wind resistance. In practice, once we get into the noise floor of the mic itself, that's it, there's nothing there to even potentially extract. Those ultrasonic reflections might exist at those extremely low levels but we can't record them (or of course hear them), and if we can't record them, they're obviously not in any recordings and there's nothing there to be extracted!

Furthermore, using analogies of analogue tape or vinyl is particularly ridiculous, because it was only when the marketing guys started pushing high sample rates to consumers that there became a need to actually put something up there. Before that time (around the turn of the millennium) there were no studio mics spec'ed beyond 20kHz, and while they produced some response to particularly loud ultrasonic freqs (when very closely mic'ed), it was greatly reduced and their noise floors were higher - in addition to the much higher noise floors of vinyl and tape, of course!

Again, there may be some future tech that can extract acoustic info and if so, it would be very useful in the studio (although still useless to the consumer), but whatever happens in the future, the ultrasonic range is the very last place to look for that info! If *someone* is looking for a marketing gimmick to push ultrasonic freqs, they're barking up completely the wrong tree. Not that they'll let a few inconvenient facts get in the way of a good story, though!
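For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch of the same level budget. The constants are just the rough, illustrative round numbers from the text (in dB re. full scale), not measurements, and the lumped 20dB "reflection extra" is simply whatever makes the reflected path land at the -120dB mentioned above.

# Rough sketch of the level budget described above (illustrative figures only).
def level(start_db, *changes_db):
    return start_db + sum(changes_db)

SNARE_CLOSE       = 0.0     # 110dB SPL snare hit an inch away, recorded at 0dBFS
ULTRASONIC_OFFSET = -40.0   # ultrasonic content roughly 40dB below the hit
DISTANCE_LOSS     = -48.0   # air damping over ~10m (whole spectrum)
HF_ABSORPTION     = -12.0   # additional loss of the ultrasonic range only
REFLECTION_EXTRA  = -20.0   # lumped wall absorption + extra air on the reflected path
MAKEUP_GAIN       = 48.0    # preamp gain to bring the distant mic back up to 0dBFS
NOISE_CLOSE       = -100.0  # venue noise + mic self-noise when closely mic'ed

direct_ultrasonic    = level(SNARE_CLOSE, ULTRASONIC_OFFSET, DISTANCE_LOSS,
                             HF_ABSORPTION, MAKEUP_GAIN)           # about -52
reflected_ultrasonic = level(direct_ultrasonic, REFLECTION_EXTRA)   # about -72
noise_floor          = level(NOISE_CLOSE, MAKEUP_GAIN)              # about -52, before extra amp noise

print(direct_ultrasonic, reflected_ultrasonic, noise_floor)
print("reflections above the noise floor?", reflected_ultrasonic > noise_floor)  # False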

G
 
Last edited:
Nov 18, 2018 at 7:16 PM Post #10,638 of 19,070
With the advent of digital recording -

Oh good, we've gone from one guy pushing a totally ridiculous fantasy future to another one pushing a totally ridiculous fantasy history. So, that covers just about everything, all we need to do now is join it all together, maybe an Edison disk packed with ultrasonic acoustics would make everyone happy? Jeez, I thought the Cables forum was bad!

G
 
Nov 18, 2018 at 7:37 PM Post #10,639 of 19,070
Oh good, we've gone from one guy pushing a totally ridiculous fantasy future to another one pushing a totally ridiculous fantasy history. So, that covers just about everything, all we need to do now is join it all together, maybe an Edison disk packed with ultrasonic acoustics would make everyone happy? Jeez, I thought the Cables forum was bad!

G


If you can make that ultrasonic Edison roll available in 784kHz DSD, I’ll take two! Just make sure to charge me a lot, because if it’s not expensive, it won’t sound as good...
 
Nov 18, 2018 at 10:59 PM Post #10,640 of 19,070
Technology has a way of advancing much more rapidly than we can even imagine.
I used to think it would be a very long time before "computer driven cars" were considered to be safe enough to be allowed on public streets.
Yet here we... and they... are.

Here's a thought.... for fans of multi-channel recording.

If we wanted to be able to reproduce a recording of an orchestra very accurately...
We could record each instrument on its own audio track, keep track of the exact location of each, and encode all of that information into the recording.
Then, when we played back the recording, our decoder could figure out which speakers to send each track to so that the instrument appeared in the correct location.
It could calculate things so that, no matter how many speakers we had, or where they were located in the room, each instrument seemed to come from the correct apparent physical location.
Yes, it would take a lot of information, and a lot of computing power, but it at least seems possible, right?

If I'd suggested that this was possible ten years ago, at least a few people here would have "called BS" and said "it was both useless and impossible".
But, if you haven't gotten the joke yet, this isn't science fiction... it's a description of Dolby Atmos (and DTS:X).
And, if you buy a new home theater receiver this year, even a relatively cheap one, you'll probably be getting it.
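Just to make the idea concrete, here is a toy Python sketch of object-based rendering: each "object" is a mono track plus a position, and per-speaker gains are worked out at playback time for whatever speaker layout happens to exist. To be clear, this is not the actual Atmos or DTS:X algorithm, just the general idea.

import numpy as np

def render(objects, speakers):
    # objects: list of (mono_samples, (x, y, z)); speakers: list of (x, y, z).
    # Returns one output signal per speaker.
    n = max(len(sig) for sig, _ in objects)
    outs = [np.zeros(n) for _ in speakers]
    for sig, pos in objects:
        # Simple distance-based gains, normalised to preserve overall power.
        d = np.array([np.linalg.norm(np.subtract(pos, spk)) for spk in speakers])
        g = 1.0 / np.maximum(d, 0.1)
        g = g / np.sqrt(np.sum(g ** 2))
        for out, gain in zip(outs, g):
            out[:len(sig)] += gain * np.asarray(sig)
    return outs

# e.g. a violin front-left and a cello front-right, over an arbitrary 3-speaker layout:
fs = 48000
violin = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
cello  = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)
feeds = render([(violin, (-2.0, 3.0, 0.0)), (cello, (2.0, 3.0, 0.0))],
               speakers=[(-3.0, 3.0, 0.0), (3.0, 3.0, 0.0), (0.0, -3.0, 2.0)])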

If you can make that ultrasonic Edison roll available in 784kHz DSD, I’ll take two! Just make sure to charge me a lot, because if it’s not expensive, it won’t sound as good...
 
Nov 18, 2018 at 11:18 PM Post #10,641 of 19,070
Technology has a way of advancing much more rapidly than we can even imagine.
I used to think it would be a very long time before "computer driven cars" were considered to be safe enough to be allowed on public streets.
Yet here we... and they... are.

Here's a thought.... for fans of multi-channel recording.

If we wanted to be able to reproduce a recording of an orchestra very accurately...
We could record each instrument on its own audio track, keep track of the exact location of each, and encode all of that information into the recording.
Then, when we played back the recording, our decoder could figure out which speakers to send each track to so that the instrument appeared in the correct location.
It could calculate things so that, no matter how many speakers we had, or where they were located in the room, each instrument seemed to come from the correct apparent physical location.
Yes, it would take a lot of information, and a lot of computing power, but it at least seems possible, right?

If I'd suggested that this was possible ten years ago, at least a few people here would have "called BS" and said "it was both useless and impossible".
But, if you haven't gotten the joke yet, this isn't science fiction... it's a description of Dolby Atmos (and DTS:X).
And, if you buy a new home theater receiver this year, even a relatively cheap one, you'll probably be getting it.


I don’t see why people would have doubted that audible sound could be reproduced in an object oriented model. What does that have to do with the point you’re trying to make about inaudible ultrasonics being viable in the future? I see this as yet another false equivalence.

Not to say that it’s not great that most receivers have valuable features like Atmos. And working room eq.
 
Nov 18, 2018 at 11:27 PM Post #10,642 of 19,070
Technology has a way of advancing much more rapidly than we can even imagine.
I used to think it would be a very long time before "computer driven cars" were considered to be safe enough to be allowed on public streets.
Yet here we... and they... are.

Technology also goes around in roundabout ways. Analogsurvivor's example of an esoteric high-end audio source... it was from the same era as CinemaScope and 70mm Panavision in movie theaters (the heyday of big screens that could take advantage of higher resolving power). Then, with the advent of multiplexes and smaller screens... there was less demand for high-resolving film.

While the technology for autonomous cars is readily accessible, and it's been shown they are now more accurate than humans... I do think it's still an uphill battle to make them the de facto choice for travel. People just love their cars and will fight tooth and nail to stay the driver (even if they're now more distracted by their cell phones while stuck in the stop-and-go traffic that autonomous travel could have alleviated).
 
Nov 19, 2018 at 1:01 AM Post #10,643 of 19,070
So, the GPS timing signal is analogous to acoustic information and if it isn't there (as it isn't in the case of ultrasonic freqs), then GPS won't work. So your suggestion is to use, say, the sun, stars or other cosmic events instead of the GPS timing signal. As "This is analogous", then presumably you've similarly got some suggestions for what we could use instead of reflections/reverb on a recording to provide us with acoustic information? Great, let's hear them then! Or are you saying we can actually extract pulsar emissions from an album and somehow use that? As ridiculous as that sounds, it's no more ridiculous than what you seem to be suggesting, so I really can't tell!

For everyone interested in facts, let's look at some:

As mentioned before, instruments like the snare drum in a drum kit are mic'ed extremely closely and, as they produce a significant amount of ultrasonic content, we can record it and you can see it in a spectral analysis. However, the consequence of such close mic'ing is that we largely lose everything else (especially the relatively distant reflections), which is of course the point. So, if we want to record those reflections, we have to move the mic significantly further away. In practice that won't work when recording a drum kit (because of spill), but let's say for now we're just recording a solo, unaccompanied snare drum.

Let's say the mic is 10m (about 33ft) away, to keep the figures simple and because in live gigs the ideal seating position will be at least that far away. So what will actually happen to the sound received by our mic? I've mentioned high freq absorption before, but that's not the whole story, because in addition to air absorption we've also got air damping. Let's say from an inch away we've got a 110dB snare hit, which we'll say is 0dBFS, and most likely the ultrasonic freqs are at around -40dB. With a mic position 10m away we lose roughly 50dB (across the entire sonic and ultrasonic range) due to air damping, leaving us with roughly -48dB in the audible range and -88dB in the ultrasonic range. However, we've also lost roughly an additional 12dB of the ultrasonic range due to high freq air absorption, so now we're down to about -100dB - and that's for the direct sound.

For the reflections the situation is different, because our closely positioned snare drum mic was already quite far away from the reflection source (the walls), so while we're moving 10m away from the drum, our relative distance to the wall reflections is only a little more than it was. So closely mic'ed, the reflections were most likely around 50dB below the direct sound, but at 10m we're much closer to parity, with the reflections most likely somewhere around -50dB. The ultrasonic content of those reflections, though, will be significantly lower than the -100dB of the direct signal, because we've also got wall absorption and some more air to consider - most likely it's around -120dB.

Apart from the actual air damping and absorption amounts, these figures can vary quite a bit, depending on the exact size of the room, how far from the walls the snare drum is placed and how far away the mic is relative to the walls (both close and far mic'ed). I'm just giving an average from my experience and being somewhat conservative, to avoid objections of cherry picking the most favourable figures for my argument.

We've got one more variable to consider: we can turn the mic up - or rather, we can't turn a mic up, that's impossible, but what we can do is turn up the amplification of the signal coming out of the mic. So, when we move the mic 10m away we can simply turn the mic's output up by say 48dB and again hit our 0dBFS. Our direct ultrasonic level is now up to about -52dB and the ultrasonic reflection level up to say about -70dB. So, that's ridiculously low but there's something there that maybe could be extracted? Well, no! Remember we can't turn up a mic, only its output, and that means we've not only turned up the signal by 48dB but also the noise floor of the recording venue and the self-noise of the mic, and increased the noise produced by the amp. When closely mic'ed we probably had a combined noise floor down at -100dB or so, in a very well isolated studio with a particularly quiet mic, but after 48dB of gain our noise floor is closer to about -45dB, putting our ultrasonic reflection level some 20 times or so lower than the noise floor.

That's just in theory of course. In disputes about these sorts of levels there often seems to be an assumption that a mic is somehow infinitely sensitive and that everything is captured down to an infinitely quiet level, just buried in noise. This is of course a fallacy, just as it's a fallacy to assume a Ford Focus could travel at near light speed if it weren't for tyre friction and wind resistance. In practice, once we get into the noise floor of the mic itself, that's it, there's nothing there to even potentially extract. Those ultrasonic reflections might exist at those extremely low levels but we can't record them (or of course hear them), and if we can't record them, they're obviously not in any recordings and there's nothing there to be extracted!

Furthermore, using analogies of analogue tape or vinyl is particularly ridiculous, because it was only when the marketing guys started pushing high sample rates to consumers that there became a need to actually put something up there. Before that time (around the turn of the millennium) there were no studio mics spec'ed beyond 20kHz, and while they produced some response to particularly loud ultrasonic freqs (when very closely mic'ed), it was greatly reduced and their noise floors were higher - in addition to the much higher noise floors of vinyl and tape, of course!

Again, there may be some future tech that can extract acoustic info and if so, it would be very useful in the studio (although still useless to the consumer), but whatever happens in the future, the ultrasonic range is the very last place to look for that info! If *someone* is looking for a marketing gimmick to push ultrasonic freqs, they're barking up completely the wrong tree. Not that they'll let a few inconvenient facts get in the way of a good story, though!

G

Thanks @gregorio for the added information. As a man interested in facts, I did some homework with closely mic'ed cymbal samples at 176.4kHz/24-bit.
Worth mentioning that the ultrasonic content accounts for less than 0.1dB RMS TPL when comparing the original file with the low-pass-filtered one.
Also worth mentioning that I didn't find any echolocation clues in the ultrasonics.
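For anyone who wants to repeat that kind of comparison, a minimal Python sketch might look like this. It assumes the soundfile and scipy libraries and a hypothetical hi-res file name; it illustrates the general measurement (full-band RMS versus RMS below 20kHz), not necessarily the exact method used above.

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-20)

x, fs = sf.read("cymbals_176k4.wav")      # hypothetical 176.4kHz/24-bit file
if x.ndim > 1:
    x = x.mean(axis=1)                    # fold to mono for a single figure

sos = butter(8, 20000, btype="lowpass", fs=fs, output="sos")  # remove everything above 20kHz
x_lp = sosfiltfilt(sos, x)

print("full-band RMS: %.3f dBFS" % rms_db(x))
print("<20kHz RMS:    %.3f dBFS" % rms_db(x_lp))
print("difference:    %.3f dB" % (rms_db(x) - rms_db(x_lp)))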
B.R.
 
Nov 19, 2018 at 8:46 AM Post #10,644 of 19,070
Interesting.

Could you please provide a few details about which algorithms you used to analyze the information when you were looking for that "echolocation information"?
I don't know offhand which commercially available programs do that - or what exact analysis technique they use.
(MATLAB is usually good for general purpose "number crunching".)
It would have involved looking for "unique and identifiable energy bursts", then analyzing the signal afterwards to find and identify the specific "echoes" associated with them.
(I would have started by trying to identify the highest peaks, figuring out the decay envelopes associated with them, then looking for anomalous bumps or dips.)
I would have expected it to take some serious software development to determine exactly what methods would work best and develop a working software prototype.
I assume you did something a little more rigorous than "shifting the pitch and listening for echoes a human would recognize".
(If you could tell us what you did that didn't work, perhaps the next person who tries to do it can avoid that dead end...)
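For what it's worth, a rough Python sketch of that kind of analysis might look like the following: peak detection, a fitted decay envelope, and a search for anomalous bumps above it. The band limits, thresholds and window lengths are arbitrary illustrative choices, not a tested method.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

def echo_candidates(x, fs, band=(20e3, 40e3), min_peak_db=-60.0):
    # 1. Isolate the ultrasonic band we want to inspect.
    sos = butter(8, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))             # 2. amplitude envelope
    env_db = 20 * np.log10(np.maximum(env, 1e-12))

    # 3. Find the strongest transients ("unique and identifiable energy bursts").
    peaks, _ = find_peaks(env_db, height=min_peak_db, distance=int(0.05 * fs))

    results = []
    win = int(0.05 * fs)                                    # look 50ms past each burst
    for p in peaks:
        seg = env_db[p:p + win]
        if len(seg) < 16:
            continue
        t = np.arange(len(seg)) / fs
        slope, intercept = np.polyfit(t, seg, 1)            # 4. straight-line decay fit (dB/s)
        residual = seg - (intercept + slope * t)
        bumps, _ = find_peaks(residual, height=6.0)         # 5. bumps >6dB above the fitted decay
        results.append({"sample": int(p),
                        "decay_db_per_s": float(slope),
                        "bump_delays_ms": (bumps / fs * 1e3).tolist()})
    return results

# e.g. echo_candidates(signal, 176400) on a hi-res snare or cymbal recording.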

Thanks @gregorio for the added information. As a man interested in facts, I did some homework with closely mic'ed cymbal samples at 176.4kHz/24-bit.
Worth mentioning that the ultrasonic content accounts for less than 0.1dB RMS TPL when comparing the original file with the low-pass-filtered one.
Also worth mentioning that I didn't find any echolocation clues in the ultrasonics.
B.R.
 
Nov 19, 2018 at 9:24 AM Post #10,645 of 19,070
It occurred to me that, if you want to do some general research on "how to extract information from noisy and chaotic signals", there are a few people who have used it lately for various things.

The US Navy, of course, continuously researches both active and passive SONAR.
(With active SONAR you send out a ping; but, by doing so, you also advertise your location. Passive SONAR simply means picking out things like echoes and engine noises passively and extracting information, like where that submarine is, from that noise.)
Unfortunately (if you're a researcher), the Navy isn't that big on sharing.

I hear the Earth Science folks have also been doing a lot lately with analyzing the echoes of the sounds made by earthquakes to visualize structures inside the Earth.
(For everything from looking for oil to mapping the internal structure of the planet.)
This seems to me as if the math would be very much like what you would need to locate walls by listening to echoes in music.

I think they're all working with similarly complex situations - with lots of noise and very low-level signals.

Thanks @gregorio for the added information. As a man interested in facts, I did some homework with closely mic'ed cymbal samples at 176.4kHz/24-bit.
Worth mentioning that the ultrasonic content accounts for less than 0.1dB RMS TPL when comparing the original file with the low-pass-filtered one.
Also worth mentioning that I didn't find any echolocation clues in the ultrasonics.
B.R.
 
Nov 19, 2018 at 9:40 AM Post #10,646 of 19,070
I figure there are hundreds of ways one can differentiate between DACs if one has the proper instruments to do so... but could we all bear in mind that all of this should preferably end up enriching the experience of listening to music... by HUMAN BEINGS.
Yet we keep going back to the supposedly faulty ways of blind tests.

Now, I usually drink water from a glass or a bottle. Does this mean that if scientists suddenly brought forth the proposition that glasses don't work perfectly, I should revert to using my hands as a means to shovel water into my mouth?

I am sure science will figure out a way to do better tests in the future, no doubt, but why should a future eureka undermine the best tests we have for judging sound quality now?
If we are to believe most of the audiophile crowd, we should throw blind testing in the trash and base our purchases wholly on sighted listening.
Bias doesn't exist... except in other people's lives. Real music aficionados can hear stuff dogs can't... mostly because the canine never knows exactly where to listen for the ultrasonic content of a titanium violin with spiderweb strings.
 
Last edited:
Nov 19, 2018 at 10:03 AM Post #10,647 of 19,070
^ I absolutely agree that blind tests are better than sighted. The question is how best to do blind tests and how to interpret them. If blind tests appear to give null results, but there are doubts about how the tests were done or interpreted, that undermines the ability of the tests to provide evidence that there are really no significant audible differences (i.e., could be false negative results). This stuff needs to be done to scientific standards, subject to scrutiny by qualified scientists. The tests cited in the first post of this thread generally don't seem to meet those standards.
 
Last edited:
Nov 19, 2018 at 10:32 AM Post #10,648 of 19,070
I don't see any problems with blind testing... other than that it often shows people what they don't want to hear/see.

The past few pages, here as well as in the dac thread, seem to have nosedived back into obfuscation.
I have absolutely no problems with blind tests if a) they are properly set up (no visual cues and matched volume levels) and b) the person(s) being tested have as much time on their hands as needed to explore every possible thing that pops up.
This way you can listen all day to one dac/amp and then shift...or shift all the time if you find that to be a better way forth.

It is a very simple and accurate test in order to find out if you prefer one over the other or if you indeed are able to distinguish between the two.
This test will most likely never make any scientific journals, sure, but it may have saved you a couple of thousand dollars.
If you can't hear any differences between two units over the course of a weekend shifting from hd800s to studio monitors...then you can't hear a difference. Simple as. If you can then good for you. I'd personally be very interested in seeing some kind of doctor perform some hearing tests then....mostly because scientific anomalies interest me.

Everything else just seems like obfuscating scientific trifle that indeed will matter to sperm whales and owls on the hunt for the next upgrade to their rigs.
 
Nov 19, 2018 at 11:00 AM Post #10,649 of 19,070
I don't see any problems with blind testing... other than that it often shows people what they don't want to hear/see.

The past few pages, here as well as in the dac thread, seem to have nosedived back into obfuscation.
I have absolutely no problems with blind tests if a) they are properly set up (no visual cues and matched volume levels) and b) the person(s) being tested have as much time on their hands as needed to explore every possible thing that pops up.
This way you can listen all day to one dac/amp and then shift...or shift all the time if you find that to be a better way forth.

It is a very simple and accurate test in order to find out if you prefer one over the other or if you indeed are able to distinguish between the two.
This test will most likely never make any scientific journals, sure, but it may have saved you a couple of thousand dollars.
If you can't hear any differences between two units over the course of a weekend shifting from hd800s to studio monitors...then you can't hear a difference. Simple as. If you can then good for you. I'd personally be very interested in seeing some kind of doctor perform some hearing tests then....mostly because scientific anomalies interest me.

Everything else just seems like obfuscating scientific trifle that indeed will matter to sperm whales and owls on the hunt for the next upgrade to their rigs.
Life would be simpler if everything was binary: yes or no, good guys or bad guys, black or white.

Are blind tests good or bad? Are they simple or complex? Do they have problems or are they accurate? Are they easy to perform or difficult? Do sighted tests have zero value or some value?
It's just not that simple. It all depends on what you want to test, both the item (DACs, file formats, amplifiers, etc.) and what you are asking (same/different, better/worse, etc.). That will determine what factors are important in setting up the test.

Some tests are easy for nearly anyone to perform, but some tests require care if you want the results to have any meaning. You might have to think a bit, which seems to scare some people.

I, for one, welcome the fact that most of life lies between the pure black and pure white. I like shades of gray and the full spectrum of colors. That is not just obfuscating scientific trifle.
 
Nov 19, 2018 at 11:03 AM Post #10,650 of 19,070
I was thinking the other night about the absurdity of people arguing that super audible content is important to the reproduction of recorded music in the home...

For the record, although I'm willing to entertain the idea that ultrasonics might have some value to the listening experience... someday, somehow... I don't argue that it's important to home listening in today's world. It's more of an intellectual curiosity - either trying to put the final nail in the coffin, or finding some good reason to pry it open again.

@gregorio - thanks for the long response regarding the practical problems in extracting any useful information from the ultrasonic band of a finished / mixed recording. It all makes sense and agrees completely with the limited knowledge I have about recording. I would totally agree that if you intend to extract an impulse response (or any other direct measurement) from a finished recording, you are going to have a very hard time. I won't really speculate as to what one might actually be able to learn from analyzing ultrasonic content, but we can definitely agree that the task is not a simple one, whatever the end goal is.

Also, I know that over-reliance on analogies is pretty dangerous here now, but here's one - if astronomers are able to look at stars behind a black hole by algorithmically removing the distortion of spacetime itself (not to mention all kinds of other interference), I imagine there is hope for similarly fancy processing of audio.
 
