Any benefits from having a higher sample rate?
Mar 23, 2010 at 4:05 PM Post #76 of 106
Nyquist–Shannon sampling theorem - Wikipedia, the free encyclopedia

Have fun reading that. It essentially boils down to what many people have already said: whenever audio or image data is interpolated to resample or resize it, some artifacts will always be created. The only way to avoid them entirely would be to use a sinc or jinc function, but those functions extend to infinity in both directions. A Lanczos function is a windowed sinc, i.e. a sinc truncated to finite extent, and once you use a window of about 5 lobes or more it becomes very close to a true sinc. If you are using foobar2000 you can enable its resampler in the DSP preferences and it will use a more accurate interpolation function than the ones sound card drivers typically enable by default.
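
For the curious, here's a minimal sketch of what a Lanczos resampler actually computes. This assumes NumPy; the function names and the brute-force loop are mine (real resamplers use polyphase filters), so treat it as an illustration rather than production code:

```python
import numpy as np

def lanczos_kernel(x, a=5):
    """Lanczos kernel: sinc(x) windowed by sinc(x/a), zero outside |x| < a."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)   # np.sinc is normalized: sin(pi*x)/(pi*x)
    out[np.abs(x) >= a] = 0.0
    return out

def lanczos_resample(x, ratio, a=5):
    """Brute-force 1-D resample by `ratio`, using 2*a Lanczos taps per output sample."""
    n_out = int(len(x) * ratio)
    out = np.zeros(n_out)
    for i in range(n_out):
        t = i / ratio                                # output position, in input-sample units
        k = np.arange(int(np.floor(t)) - a + 1, int(np.floor(t)) + a + 1)
        k = k[(k >= 0) & (k < len(x))]               # clamp the window at the signal edges
        out[i] = np.sum(x[k] * lanczos_kernel(t - k, a))
    return out
```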

As others have said, the comments about DACs supporting oversampling are mostly hype, and in truth it degrades the signal. Recordings made at rates above 44.1 kHz may have better quality, but at what point are we actually able to tell the difference?

[attached image: graph of the Lanczos filter lobes]
 
Mar 24, 2010 at 4:24 PM Post #78 of 106
Does anyone know of any research that measures the human brain's response to music comparing regular and high sampling rate recorded music?

There was a paper I've since lost which used some type of brain imaging to see whether the brain reacts differently to high-sample-rate music than to regular-sample-rate music, and it found a significant result, but someone said the statistical calculation was done wrong. How did that pan out? Were there any replication attempts?
 
Mar 24, 2010 at 9:43 PM Post #79 of 106
Quote:

Originally Posted by b0dhi
Does anyone know of any research that measures the human brain's response to music comparing regular and high sampling rate recorded music?

There was a paper I've since lost which used some type of brain imaging to see whether the brain reacts differently to high-sample-rate music than to regular-sample-rate music, and it found a significant result, but someone said the statistical calculation was done wrong. How did that pan out? Were there any replication attempts?



You aren't thinking of Oohashi's paper, are you?
 
Mar 24, 2010 at 10:57 PM Post #80 of 106
A lot of the things in this thread are over my head (I'm not good with engineering), so I'm just wondering: is there any easy way to get a higher sample rate than 44.1 kHz? Do any CD-ripping programs give me the tools to do that?
 
Mar 24, 2010 at 11:50 PM Post #81 of 106
Quote:

Originally Posted by theKraken11
I'm just wondering if there's any easy way to get a higher sample rate than 44.1 kHz?


you really don't want to upsample... look at the harmonic distortion figures I measured here (I used libsamplerate 0.1.7 at the highest-quality SINC setting): SlySoft Forum - View Single Post - ReClock 1.8.5.5

notice how THD and THD+N increase and the SNR drops the more I increase the sampling rate.
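
if anyone wants to reproduce that kind of measurement, here's a rough sketch of the usual method (windowed FFT with the fundamental notched out). It assumes NumPy/SciPy rather than libsamplerate, and the helper name is mine, so the exact figures will differ from my ReClock numbers:

```python
import numpy as np
from scipy.signal import resample_poly

def thd_n_db(x, fund_bin, guard=8):
    """THD+N in dB relative to the fundamental, via a Hann-windowed FFT."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    spec[:guard] = 0.0                               # discard DC leakage
    fund = spec[fund_bin - guard:fund_bin + guard + 1].sum()
    return 10 * np.log10((spec.sum() - fund) / fund)

n, k = 1 << 16, 1486                                 # k cycles in n samples: ~1 kHz at 44.1 kHz
x = np.sin(2 * np.pi * k * np.arange(n) / n)
print("44.1 kHz :", thd_n_db(x, k))
y = resample_poly(x, 4, 1)                           # 44.1 kHz -> 176.4 kHz
print("176.4 kHz:", thd_n_db(y, k))                  # same bin index: 4x the samples at 4x the rate
```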

and you may wanna read this too: Upsampling vs. Oversampling for Digital Audio - Audioholics
Quote:

Oversampling has been around for a very long time and has been used extensively in audio products to not only improve sound quality through 'better' filtering but to make these same products much cheaper. Upsampling, on the other hand, is relatively newer and debated greatly. The effects of upsampling are no doubt overstated. By carefully designing the sampler, ADC, digital processing path, and oversampling DAC, the upsampling and asynchronous rate transfer can, in my opinion, be avoided.


oversampling happens in the DAC chip, and you want it as high as possible (128x at all rates on the AK4396)... the "miracle" AK4396 DAC is supposed to be able to reach the performance of the R-2R chips (the supposedly better-sounding DACs): Google Translate
Quote:

"This DAC is a wide departure from other delta-sigma DACs designed by us and others like BurrBrown, Analog Devices and Cirrus Logic. The AK4396 is an entirely new modulator, pioneered and patented by AKM . It achieves something unique. In the past, many of the old Phillips and BurrBrown shares were R-2R * based products. These older products were looked upon as some of the best. One of the reasons was high frequency noise. In older R 2R-shares, HF noise was not present. In all delta-sigma shares prior to the AK4396, everyone has fought HF noise caused from the delta-sigma modulator with the insertion of large filters and other parts to attempt to solve a problem created by The delta-sigma design. The AK4396 Effectively today does not suffer any RF modulator-induced noise and is over 60dB better than the nearest Cirrus and BB devices. All of this HF noise can cause many audible artifacts downstream. That is the 'miracle' we believe is making the difference today. This part gives you the performance and linearity of a delta-sigma device with the noise performance of an R-2R hand, something that was never previously available."


R-2R DACs are supposedly better sounding because they output a cleaner waveform... but luckily delta-sigma DACs use op-amp low-pass filters to clean up this HF noise, as I understand it: Mother of Tone - Conversion Techniques

IMHO upsampling really should be avoided; all it does is create interpolation artifacts, make the sound brighter and more colored, and feed the DAC a distorted signal... in some cases it can lower jitter or make up for a dark-sounding setup, so as usual there are no hard rules in audio.
 
Mar 25, 2010 at 12:28 AM Post #82 of 106
Quote:

Originally Posted by nick_charles
You aren't thinking of Oohashi's paper, are you?


That's the one! Thank you.
 
Mar 25, 2010 at 1:45 AM Post #83 of 106
Quote:

Originally Posted by b0dhi
That's the one! Thank you.


That paper has sparked some robust debate ;)


Oohashi does not provide the raw data, so I cannot do the stats myself. There have been a few replication studies; the only one I have seen is by Ashihara and Kiryu. They suggested that Oohashi's results may have been due to IMD caused by the arrangement of the normal transducers and supertweeters. When they altered the speaker arrangement, the IMD went away, and so did the differences between the conditions, so it is hard to draw any strong conclusions either way.

Oohashi used an A...B...B...A test protocol for the subjective tests, which I am a bit skeptical about: subjects always knew that the conditions were different between trials 1 and 2 and between trials 3 and 4. Beyond that the methods were okay, but I am no neurophysiologist!

One interesting related finding from 30 years of psychophysics research is that high-frequency sounds at high enough levels can have marked physiological effects, including nausea, vomiting and headaches.
 
Apr 23, 2010 at 2:43 PM Post #84 of 106
Hi all, new to this forum but I thought I'd give my take on this... I am actually in the digital imaging business, and many of the comparisons here are somewhat invalid. In imaging, the reconstruction filter is generally your eyes: samples are displayed verbatim as pixels and your eyes/brain do the low-pass filtering. In digital audio you need to build a reconstruction filter, and a perfect one is impossible in the analog domain (i.e. with actual physical components) because the time-domain version of the filter requires knowledge of the signal's future (it is non-causal). You need reconstruction because your transducers require a time-domain analog signal.

This is why oversampling/upsampling is popular and necessary in digital audio for signals where the required filter cutoff is too close to important content (i.e. CD). Oversampling means you get to build an analog filter with a cutoff nowhere near your band of interest and do the filtering around your original cutoff in the digital domain, where you can make a non-causal filter (i.e. in the time domain it can use the signal's future, since you already have the data).

I resample all my CDs from 16/44.1 to 24/176.4 (oversampling) offline and encode the result as FLAC files (I was making DVD-A discs but I just started listening with the computer). One of the big advantages of using an exact multiple of the original sample rate is that zero-filling between your samples leaves the original baseband signal completely intact (you haven't changed it at all), with images of the signal repeated in the frequency domain. This means everything from this point on depends on how good you want to make your filter. Part of making this filter good is (a) filter window size, (b) calculation precision, (c) outputting at a higher bit depth after your ultra-precise calculations :wink:
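
In code, that zero-fill-then-filter step looks roughly like this (a sketch assuming NumPy/SciPy; the tap count and function name are arbitrary choices of mine):

```python
import numpy as np
from scipy.signal import firwin, fftconvolve

def upsample_4x(x, taps=8191):
    """44.1 kHz -> 176.4 kHz: zero-stuff by 4, then low-pass at the old Nyquist."""
    up = np.zeros(4 * len(x))
    up[::4] = x                      # baseband is untouched; images sit around 44.1 and 88.2 kHz
    h = firwin(taps, 0.25)           # cutoff = 22.05 kHz = 0.25 of the new Nyquist (88.2 kHz)
    return 4 * fftconvolve(up, h, mode="same")   # x4 restores the gain lost to zero-stuffing
```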

(a) and (b) are where oversampling offline can outdo the oversampling built into a DAC or a real-time oversampling DSP. For my own oversampling I use very large windows and 64-bit floating point, with the computer's FPU set to its highest precision mode. Even at 176.4 the DAC will still do its own oversampling, but now its digital filter cutoff is very far from the audio band, and filter problems generally get worse as you approach the cutoff. (a) and (b) also let you create really accurate digital crossovers if you want to make multichannel versions to drive crossoverless speakers.

DACs would generally do (c), and this is actually a key to CD oversampling where the output bit depth is 24-bit, in that the new samples can sit at 1/256 steps of the original 16-bit samples (2^(24-16) = 256). Also, we are not creating the new samples, we are FINDING them: an exact reconstruction filter would find an exact value for each new sample. We have a mathematical model which tells us what each sample should be; it is just up to us to get as close as we can.

One thing to note, and possibly a reason why some DACs do not correctly oversample, is that it is possible for a new sample to peak at a higher value than the highest peak at the old sample rate. This is because the samples which represent a waveform need not lie at a peak, so the extra intermediate samples may indeed lie closer to one... performing the filter offline lets you check for this and ensure no clipping of these intersample peaks occurs, so the new data set makes the best use of the dynamic range available...
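
A quick way to see this intersample-peak effect for yourself (again a sketch assuming NumPy/SciPy): a tone at fs/4 can be phased so that no 44.1 kHz sample ever lands on a crest, and the reconstructed intermediate samples come out about 3 dB higher:

```python
import numpy as np
from scipy.signal import resample_poly

fs, n = 44100, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)          # samples straddle the crests
print("peak of the 44.1 kHz samples:", np.abs(x).max())   # ~0.707
y = resample_poly(x, 4, 1)                                # find the intermediate samples
print("peak after 4x oversampling :", np.abs(y).max())    # ~1.0, about 3 dB higher
```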

Anyway, just my ramblings - if anyone is interested in a 24/176.4 flac of something they would like to compare feel free to PM me...
 
Apr 23, 2010 at 5:18 PM Post #85 of 106
Awesome 1st post Macaque.
 
Apr 23, 2010 at 5:49 PM Post #86 of 106
Quote:

Originally Posted by macaque
I resample all my CDs from 16/44.1 to 24/176.4 (oversampling) offline and encode the result as FLAC files ...


Have you done any blind tests to see if the 3x size multiplication (assuming FLAC reduces the size by 50%) has any audible benefit?
 
Apr 23, 2010 at 7:04 PM Post #87 of 106
Quote:

Originally Posted by nick_charles
Have you done any blind tests to see if the 3x size multiplication (assuming FLAC reduces the size by 50%) has any audible benefit?


I haven't done blind tests (only me switching the same track between 44.1, 88.2, and 176.4 on the DVD-A player, or switching between files on the computer). The 176.4 always sounds the best to me on the DVD-A player. I just got the sound card this week (Audiotrak HD2 Advance DE, firmware flashed to allow the ESI drivers and therefore native 88.2 and 176.4), so I haven't done enough testing with that yet.

As far as file size is concerned, storage is very, very cheap compared to most things in the audiophile world! :)

The gear in question:

Pre: creek obh-12 passive volume control
Amp: musical fidelity A3.2cr
Speakers: PMC FB1

I tried bypassing the volume pot with the Audiotrak, but then I have to run the Audiotrak anywhere from -13 to -20 dB on its volume control. From what I gather, either the Envy24 or the AKM DAC chip then applies digital attenuation to the data before conversion, which throws away resolution (roughly one bit for every 6 dB), so I am sticking with the outboard pot and leaving the Audiotrak at 0 dB.
 
Apr 24, 2010 at 5:21 AM Post #88 of 106
Quote:

Originally Posted by macaque
I resample all my CDs from 16/44.1 to 24/176.4 (oversampling) offline and encode the result as FLAC files (was making DVD-A discs but I just started listening with computer). One of the big advantages of using an exact multiple of the original sample rate is by zero filling your samples means your original baseband signal is completely in tact (you haven't changed it at all) with images of your signal repeated in the frequency domain.

(b) also let you create really accurate digital crossovers if you want to make multichannel versions to drive crossoverless speakers..



I am not that technical, but these two statements make so much sense to my simple mind. I hate maths and really don't know what to do with remainders (best if there aren't any), hence I dug deep for my DAC, which upsamples 44.1 to 176.4 and 48 to 192; these ratios are not selectable and are fixed.

I will not go into speakers until I have a preamp with a DAC and digital crossovers feeding multiple amps, each dedicated to a specialised frequency band that a dedicated driver can capitalise on, leaving no room for inefficiencies. Why are humans so slow? :(
 
Apr 24, 2010 at 10:29 AM Post #89 of 106
Quote:

Originally Posted by theKraken11
A lot of the things in this thread are over my head (I'm not good with engineering)...


Ah good, I thought I was the only one that couldn't understand what the likes of Dan Lavry were explaining. :(
 
Apr 24, 2010 at 11:31 AM Post #90 of 106
Quote:

Originally Posted by spookygonk
Ah good, I thought I was the only one that couldn't understand what the likes of Dan Lavry were explaining. :(



simple link: Upsampling vs. Oversampling for Digital Audio

oversampling in the DAC: you want it as high as possible, to reduce imaging and increase the effective conversion resolution. The AK4396 does it at 128x for all sample rates.

upsampling in the source: terrible idea IMO (and that link's conclusion agrees); all it does is feed the DAC worthless interpolated data and increase THD+N dramatically (the sound will appear brighter and more distorted).

 
