Hi-Rez - Another Myth Exploded!

Sep 18, 2011 at 1:15 PM Post #91 of 156
Hello G,
I wasn't trying to get too detailed or specific in my previous e-mail.
I understand that the mastering engineer is usually creating a master for the average listener to play anywhere, not a master for an audiophile sound system.
Which explains why we are all "punished" by over-compression, etc., when we try to enjoy pop recordings on our audiophile systems.
Question: since professional recording standards are usually 48k or 96k, would it make more sense for us to prefer 48k consumer formats?
C


I know you weren't getting too specific but it seemed like a good opportunity to bring the mastering point up.

44.1kS/s is the standard for CD (as you know) and 48kS/s was/is the standard for DTV and film (Dolby Digital 5.1, DTS, etc.). The music production standard is increasingly 96kS/s, and a lot of the professional processing plugins are designed to operate better at this sampling rate. Also, most DACs upsample (or oversample) to a high degree, which alleviates reconstruction filter difficulties. To be honest, the difference between these sample rates is minute; given the choice, go for 48kS/s, but even a mastering engineer would struggle to tell them apart. I'd far rather have a well mastered recording at 44.1k than a not so well mastered recording at 96k (the point of my last post). The only formats to be a little wary of are 192kS/s and higher, but even so, the quality of the mastering is by far the most important factor, not the format.

G
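To illustrate why the choice between these sample rates matters so little for band-limited audio, here is a minimal numpy sketch of my own (the tone, lengths and tolerances are my choices, not anything from the thread): Whittaker-Shannon (sinc) interpolation recovers a value *between* two 44.1kS/s samples of a 1kHz tone almost exactly, which is the sense in which a band-limited signal is fully described by its samples at any rate above Nyquist.

```python
import numpy as np

fs = 44100.0   # CD sample rate
f = 1000.0     # a 1 kHz test tone, well inside the audio band
N = 2048
n = np.arange(N)
x = np.sin(2 * np.pi * f * n / fs)   # the sampled signal

# Whittaker-Shannon (sinc) interpolation: recover the value at an
# arbitrary instant t (in seconds) that falls between two samples.
# np.sinc is the normalized sinc, sin(pi*x)/(pi*x), which is exactly
# the ideal reconstruction kernel for unit sample spacing.
def reconstruct(t):
    return float(np.sum(x * np.sinc(fs * t - n)))

t = 1000.5 / fs                      # halfway between samples 1000 and 1001
exact = np.sin(2 * np.pi * f * t)
print(abs(reconstruct(t) - exact))   # tiny: limited only by truncating the sinc sum
```

The residual error here comes purely from using a finite number of samples; with an infinite sinc sum the reconstruction would be mathematically exact.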
 
Sep 18, 2011 at 1:28 PM Post #92 of 156
Quote:
Would it be possible to see a picture of your room? And the treatments for example?


I'll provide two photos: one of my current studio (well, dubbing theatre more than music studio, but I do some music in there too) and one of the Massenburg mastering studio. Notice that the Massenburg studio is covered in diffusers rather than just absorption. BTW, the precise length of each one of those little pieces of wood is mathematically calculated!





G

 
Sep 18, 2011 at 1:32 PM Post #93 of 156
Love that Massenburg studio....I drool every single time I see it.
 
Sep 20, 2011 at 1:58 PM Post #94 of 156
Interesting web page: http://www.channld.com/vinylanalysis1.html 
 
While it shows recorded harmonics from bell percussion extending out to 96kHz (and likely beyond), this page doesn't specifically answer two questions:
 
- Whether failure to render harmonics above 48kHz has an audible effect (e.g., realism of the sound of the instrument).
 
- Whether filtering of signals above 48kHz has an audible effect (e.g., aliasing artifacts).
 
Sep 20, 2011 at 2:24 PM Post #95 of 156
Quote:
While it shows recorded harmonics from bell percussion extending out to 96kHz (and likely beyond), this page doesn't specifically answer two questions:

- Whether failure to render harmonics above 48kHz has an audible effect (e.g., realism of the sound of the instrument).

- Whether filtering of signals above 48kHz has an audible effect (e.g., aliasing artifacts).


There are still a number of problems, not just this one: after many decades and many tests, there is still no proof that humans can perceive anything above 22kHz, let alone 48kHz or 96kHz.

Apart from bell percussion, no instruments produce any energy above 48kHz; a violin, for example, produces only 0.04% of its sound energy above 20kHz. No standard studio mics can record above 48kHz (very few go beyond 20kHz). There are virtually no headphones or speakers which reproduce 96kHz. And lastly, no producer can mix frequencies which they can't hear! Haven't I stated all this before?

G
 
Sep 20, 2011 at 3:51 PM Post #96 of 156


Quote:
There are still a number of problems, not just this one: After many decades and many tests there is still no proof that humans can perceive anything above 22kHz, let alone 48kHz or 96kHz.

Apart from bell percussion, no instruments produce any energy above 48kHz; a violin, for example, produces only 0.04% of its sound energy above 20kHz. No standard studio mics can record above 48kHz (very few go beyond 20kHz). There are virtually no headphones or speakers which reproduce 96kHz. And lastly, no producer can mix frequencies which they can't hear! Haven't I stated all this before?


Pardon my ignorance, but do we know for a fact that the 'inaudible' spectrum has no effect on the audible spectrum? For example, something like constructive interference.
 
For the sake of argument, what if the lack of speakers, mics, headphones and people that can produce, record or master 'X' frequency were a contributing factor in the lack of true fidelity in the process?
 
On the one hand I get your point about the sufficiency or adequacy of the various formats. However, you yourself acknowledge the recording process is not perfect.
 
Here's one DBT I'd like to see performed.  Ask a subject to identify a live performance versus the same recorded sample played back.  If we truly know all there is to know about audio reproduction and have the means to fully reproduce it then the subject should be at a loss to tell the difference between the two.
 
 
Sep 20, 2011 at 4:15 PM Post #97 of 156
Quote:
Pardon my ignorance, but do we know for a fact that the 'inaudible' spectrum has no effect on the audible spectrum? For example, something like constructive interference.
 
Here's one DBT I'd like to see performed.  Ask a subject to identify a live performance versus the same recorded sample played back.  If we truly know all there is to know about audio reproduction and have the means to fully reproduce it then the subject should be at a loss to tell the difference between the two.


If inaudible sounds affected audible frequencies, the effect on the audible frequencies would have been recorded in the first place and would appear in both a 44.1kHz file and a 96kHz file. For example, if a 50kHz sound has some effect on a 5kHz sound, the microphone would capture that altered 5kHz wave. If you mean an effect during playback, then a 44.1kHz and a 96kHz file would sound different in the audible range. That would be easy to test, but I don't know of any blind tests (except for the flawed Oohashi stuff) that have shown a difference.
 
Why test live versus recorded? There would be obvious differences because no speaker sounds completely real and no microphone will capture everything without distortion. Not to mention mistakes by the musicians, or other differences between plays. None of the differences would be related to inaudible frequencies. You'd at best prove correlation, not causation. Why not just use a 96/88.2kHz file versus a 48/44.1kHz file using the same mastering? Why complicate things?
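The first point above, that any effect an ultrasonic has on an audible tone is itself captured in the audible band, can be sketched numerically. This is a toy example of my own (signal choices and numbers are mine): band-limit and decimate a 192kS/s recording containing a 5kHz tone plus a 50kHz component down to 48kS/s, and the 5kHz component survives with its amplitude intact.

```python
import numpy as np

fs_hi, fs_lo = 192000, 48000
N = 19200                          # 0.1 s at 192 kS/s; both tones land on exact FFT bins
t = np.arange(N) / fs_hi
# An "audible" 5 kHz tone plus an "inaudible" 50 kHz component.
x = np.sin(2 * np.pi * 5000 * t) + 0.5 * np.sin(2 * np.pi * 50000 * t)

# Ideal band-limiting and decimation via the FFT: keep only the bins
# below the new Nyquist frequency (24 kHz), then invert at the new length.
M = N * fs_lo // fs_hi             # 4800 samples at 48 kS/s
X = np.fft.rfft(x)
y = np.fft.irfft(X[:M // 2 + 1], M) * (fs_lo / fs_hi)

# Amplitude of the 5 kHz component before and after (bin spacing is 10 Hz,
# so 5 kHz is bin 500 at both lengths).
amp_hi = 2 * np.abs(np.fft.rfft(x))[500] / N
amp_lo = 2 * np.abs(np.fft.rfft(y))[500] / M
print(amp_hi, amp_lo)              # both ~1.0: the audible part is unchanged
```

The 50kHz component simply disappears; whatever it had already done to the 5kHz wave before the microphone (or here, before the band-limiting) is preserved in the lower-rate file.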
 
Sep 20, 2011 at 4:24 PM Post #98 of 156
Quote:
Pardon my ignorance, but do we know for a fact that the 'inaudible' spectrum has no effect on the audible spectrum? For example, something like constructive interference.

For the sake of argument, what if the lack of speakers, mics, headphones and people that can produce, record or master 'X' frequency were a contributing factor in the lack of true fidelity in the process?

On the one hand I get your point about the sufficiency or adequacy of the various formats. However, you yourself acknowledge the recording process is not perfect.

Here's one DBT I'd like to see performed. Ask a subject to identify a live performance versus the same recorded sample played back. If we truly know all there is to know about audio reproduction and have the means to fully reproduce it then the subject should be at a loss to tell the difference between the two.


There have been quite a number of tests comparing 20kHz limited recordings and 48kHz limited (96kS/s) recordings, obviously using lab equipment to record and replay the frequencies, and so far no evidence has been found that music in these higher frequencies can be perceived in any way. Constructive interference only operates to create the phenomenon of standing waves. Inter-modulation distortion (IMD) is an interesting phenomenon, but for it to occur in the ear, all the modulating frequencies must be within the hearing spectrum. For example, playing a 25kHz sine wave and a 15kHz sine wave should cause IMD in the ear, but it doesn't, because the 25kHz is filtered out by the ear before we hear it and therefore IMD doesn't occur. It's pretty easy to set up an experiment like this for yourself (providing you have cans which go higher than 20kHz). You just have to make sure you don't get IMD occurring in your speakers or cans.

I'm not sure how you could get your DBT to work in practice. The main difficulty is that in a live performance, perception is influenced by a combination of sights, emotions and expectation, none of which can be captured in a recording. How would you eliminate these influences to test just the sound reproduction? Additionally, there are technical problems of just capturing the sound waves, mic choice and positioning for example. If it were possible to conduct an accurate DBT, then I believe an experienced listener should quite easily be able to tell the difference. If we are talking about an orchestra, for example, then the live performance would have say 100 individual sound sources, plus reflections from all directions, as opposed to just two speakers or a pair of cans (which in themselves are not generally very linear). There are a number of weaknesses throughout the recording and playback chain which should make differences quite obvious. Out of all the various weaknesses though, digital conversion is the least of our problems; digital conversion and distribution is the most linear part of the recording and playback chain. My argument in this thread is against 176.4kS/s sample rates and higher, as with these sample rates we start moving away from the level of digital linearity we have already attained.

G
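The IMD experiment described above can be sketched numerically. This is a toy second-order nonlinearity of my own choosing (a stand-in for any nonlinear stage such as an overdriven transducer, not a model of the ear): pass 15kHz and 25kHz sines through it and a difference tone appears at 25 − 15 = 10kHz that is entirely absent from the linear signal.

```python
import numpy as np

fs, N = 192000, 19200            # 0.1 s; both tones land on exact FFT bins
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 15000 * t) + np.sin(2 * np.pi * 25000 * t)

# A mild second-order nonlinearity: y = x + 0.2*x^2. The x^2 term
# generates sum and difference frequencies (10 kHz and 40 kHz here).
y = x + 0.2 * x**2

freqs = np.fft.rfftfreq(N, 1 / fs)
b10k = int(np.argmin(np.abs(freqs - 10000)))   # bin of the difference tone
amp_lin = 2 * np.abs(np.fft.rfft(x))[b10k] / N
amp_nl = 2 * np.abs(np.fft.rfft(y))[b10k] / N
print(amp_lin, amp_nl)   # ~0.0 vs ~0.2: the 10 kHz tone only exists after the nonlinearity
```

This is why the test requires the nonlinearity to happen *inside* the ear: if the 25kHz tone is filtered out first, the x squared term never sees it and no audible 10kHz product can form.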
 
Sep 20, 2011 at 11:33 PM Post #100 of 156
 
Quote:
Here's one DBT I'd like to see performed.  Ask a subject to identify a live performance versus the same recorded sample played back.  If we truly know all there is to know about audio reproduction and have the means to fully reproduce it then the subject should be at a loss to tell the difference between the two.
 
In a live performance, different instruments have different radiation patterns, whereas during playback you are limited to the speaker's polar radiation pattern. These patterns then interact with the room's acoustics, which, aside from the overall reverberation decay time, also differ based on where the listener is sitting/observing from. I'm not going to say that setting up a test like the one you speak of would be impossible, just that the difficulties render it impractical, and any viable test setup would have severe limitations.
 
 
 
Sep 21, 2011 at 1:30 AM Post #101 of 156
I think it was Yo-Yo Ma and some other artists that did this: I recall a CD he recorded where the artists played together simultaneously in studios in different parts of the world. Using an approach like that, it might be possible to let the music come together in whatever listening room you happened to be in, much the same way it would in a live scenario.

A neat possibility is that artists who record a piece this way could essentially perform with artists today or in the far future.
 
Sep 21, 2011 at 6:50 AM Post #102 of 156
If one uses a DAC with an oversampling filter (like 64x), then what's the difference if the original signal was 44.1 or 48 or 96? It still gets oversampled before conversion to analog, no?
 
Sep 21, 2011 at 2:23 PM Post #103 of 156
Quote:
If one uses a DAC with an oversampling filter (like 64x), then what's the difference if the original signal was 44.1 or 48 or 96? It still gets oversampled before conversion to analog, no?


Oversampling, the most common technique used in DACs, shouldn't make much difference whichever sample rate of original file you use. As I mentioned earlier, there are some very slight potential benefits to 96kS/s files, but any advantages (or not) are already built into the files during recording. It's only 176.4kS/s files and higher that have problems.

G
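As a rough illustration of why an oversampling DAC is largely indifferent to the source rate, here is a toy sketch of my own (an idealized oversampler, not any particular DAC's filter): 4x oversampling of a 44.1kS/s tone, done by zero-padding its spectrum, reproduces exactly the samples you would get by sampling the same tone at 176.4kS/s.

```python
import numpy as np

fs, f, N, up = 44100, 1000, 441, 4   # 1 kHz lands exactly on an FFT bin
n = np.arange(N)
x = np.sin(2 * np.pi * f * n / fs)

# Ideal 4x oversampling: zero-pad the spectrum (equivalent to zero-stuffing
# in time followed by a perfect brick-wall lowpass at the old Nyquist).
X = np.fft.rfft(x)
Xup = np.zeros(N * up // 2 + 1, dtype=complex)
Xup[:len(X)] = X
y = np.fft.irfft(Xup, N * up) * up   # scale to restore the original amplitude

# Compare against directly sampling the same tone at 176.4 kS/s.
ref = np.sin(2 * np.pi * f * np.arange(N * up) / (fs * up))
print(np.max(np.abs(y - ref)))       # essentially zero (floating-point only)
```

No new information appears above the original Nyquist frequency; the oversampler only interpolates what the file already contains, which is why the benefits (or not) of a given source rate are baked in at recording time.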
 
Sep 21, 2011 at 4:51 PM Post #104 of 156
Just wanted to invite folks here to have a look at a thread on Computer Audiophile (yes, I started it) : http://www.computeraudiophile.com/content/Better-Understanding-ShannonNyquist
 
Got lots of good feedback from audio engineers, university professors, authors of audiophile playback software, etc., a couple of whom are quite familiar with details of the recording and playback process.  There's at least one academic research article cited as well.  Some of the information there does seem to run counter to a few things stated in this thread as fact.  That doesn't mean the folks there are necessarily correct and those here not so, or vice versa; but I do think it's healthy to get different perspectives from (apparently) knowledgeable people.
 
Sep 21, 2011 at 5:09 PM Post #105 of 156
It's an interesting thread, but the person with the most substantial contribution (wgscott) says at the end that he's not sure how this applies to audio. I'm guessing he's a math or other professor? I understand his desire not to toss away data (it's a professional habit), but it's not clear that that preference has any real bearing on the quality of recorded music. It's a "just in case" sort of thinking that's not, as far as I can tell, borne out by practical results.

Also, the place to worry about cut-off was at 22kHz, but as gregorio has repeatedly mentioned in these threads, recording and mixing happens at 44.1kHz, which takes care of the transient concerns. That your CD is lower is not going to affect what you can or cannot hear.

Finally, this seems an argument from authority, particularly in how you frame the initial post.
 
