Any benefits from having a higher sample rate?

Discussion in 'Sound Science' started by seamless sounds, Jul 30, 2009.
  1. MrOutside
    I suggest leeperry go and listen to some of these upsampling dacs, as many of them sound very sweet.
    Upsampling doesn't increase aliasing at all; downsampling leads to aliasing.
    Learn your stuff, please. Don't post those THD graphs, because they're stupid. Listen to the music, not a graph.
    If you're that concerned about such low THD ruining your music, stay away from tubes, stay away from vinyl, hell, stay away from headphones and cased speakers. In fact, don't listen to music at all.
    Audiophile-grade equipment is generally from companies that don't just throw in upsampling so they can say "we have a 9001 mhz sample rate!11" - it's generally there to help make the DAC/whatever sound nice, because that is what will get them good reviews and repeat customers.
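The downsampling half of that claim is easy to demonstrate numerically: decimating without an anti-alias filter folds a high tone back into the audible band, which is what aliasing means. A minimal Python sketch (NumPy assumed; the 18 kHz tone and 2:1 ratio are arbitrary illustrative choices):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                    # one second of audio
tone = np.sin(2 * np.pi * 18000 * t)      # 18 kHz test tone

# Naive 2:1 decimation with no anti-alias (low-pass) filter first.
decimated = tone[::2]                     # new rate 22050 Hz, Nyquist 11025 Hz
freqs = np.fft.rfftfreq(len(decimated), d=2 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(decimated)))]

# 18 kHz is above the new Nyquist, so it folds down to 22050 - 18000 Hz.
print(peak)  # 4050.0
```

A proper downsampler low-passes below the new Nyquist frequency before decimating, which is exactly why it discards content; upsampling has no such folding to do.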
     
  2. leeperry
    Quote:

    Originally Posted by MrOutside
    Don't post those THD graphs, because they're stupid. Listen to the music, not a graph.



    well, that's what mp3 does to your music: http://www.head-fi.org/forums/5785853-post42.html

    resampling is pretty much the same; I posted a lot of benchmarks on the Reclock forum...too bad the imagehost went AWOL..

    if you want your source to be compromised, be my guest...resample it, EQ it, compress it, reverb it
    Quote:

    Originally Posted by MrOutside
    stay away from tubes, stay away from vinyl



    hehe, I do...no worries ^^

    I only tolerate vinyl when it's been recorded w/ some killer equipment that's pretty much dead silent (who would have thought vinyl could sound this good?)

    I've got a James Brown OCD, I just wanna have all his goodness..whatever remastered CD (no loudness war ***) or vinyl:

    [​IMG]

    I've got a 24/96 of this LP that just sounds mind-blowing! but if I could get a direct copy of the master tape on DVD-A, then I'd drop it in the blink of an eye....and vinyl noise fits funk rather well anyway

    but I want my 7.1 TrueHD/DTS-HD lossless movies as pristine as possible, so no resampling for me, kthx
     
  3. moonboy403
    My crappy expensive dac, which upsamples everything to 24/192kHz, sounds so bad that I'll take it over most dacs that I've heard any day of the week.

    But what do I know? It doesn't measure as well as the Essence, or any of my past dacs for that matter, in the measurements I can find (S/N ratio and dynamic range). Heck, even my soundcard can match it. My hearing must be going bad.
     
  4. leeperry
    Quote:

    Originally Posted by moonboy403
    My crappy expensive dac, which upsamples everything to 24/192kHz, sounds so bad that I'll take it over most dacs that I've heard any day of the week.



    oh right! it sounds good because it upsamples to 24/192, I knew it all along! 24/192 is the magic number

    get ready for next-gen, it'll be 32/384!
     
  5. moonboy403
    Leeperry, out of curiosity, what dacs have you heard that you're really familiar with?
     
  6. anwaypasible
    i feel like putting this to rest.
    sampling is the same thing as dots per inch (DPI) when describing a computer mouse's resolution.

    another example..
    sampling is just like the number of pixels in a picture.
    and quite frankly, this is the easiest and closest comparison.
    what is in front of the lens before the picture is taken is the actual raw/original (comparable to the raw sound).
    when the camera takes a photo, the lens (and the included analog-to-digital chips) chops the scene before the lens into a grid. each x,y RGB coordinate is called a pixel.
    the more pixels taken from the raw scene, the sharper and more visually pleasant the photo is (color accuracy doesn't count when talking about the number of pixels).

    the sampling rate is comparable to the camera taking a picture for the following reasons.
    the raw/original sound is before the microphone.
    what happens when you record audio is, the waveform goes before an analog-to-digital converter chip.
    analog-to-digital and/or digital-to-analog chips are the pieces of hardware that have sample rates.
    what the analog-to-digital chip does is just like the camera when it takes the raw/original and chops it up into pixels, except for audio, those pixels are called 'samples'.

    and this is where you need to have advanced knowledge of how things work to speak about why things are the way they are.

    so your analog to digital chip took the waveform at a resolution of 44.1khz samples.
    (this is like saying the analog to digital chip in the camera took a photo at a resolution of 3 mega pixels)

    what companies don't tell you is that those analog-to-digital chips can distort.. and what distort means is that there is an x,y coordinate that isn't perfectly aligned.
    most x,y graphs start at 1x - 1y and work their way up.
    in the example, the grid goes up to 4x - 3y.

    [​IMG]

    a perfect analog to digital chip will accurately create a sample at each intersecting coordinate (0,0 1,1 2,1 3,1 ~ 1,2 1,3 1,4)

    when an analog to digital chip distorts, it creates a distorted coordinate.. such as 1.30,2.60.
    the chip tried to create coordinate 1,3 but made 1.30,2.60
    as seen in the example below

    [​IMG]

    when an analog-to-digital converter distorts.. the coordinate 1,3 remains empty.
    so when you play the waveform on your pc, you won't hear anything at all for coordinate 1,3 if your computer is playing that waveform at a sampling rate of 44.1khz.
    that is a loss of detail.

    when you increase the sampling rate on your computer, the digital-to-analog chip has more coordinates available.. and if you have more coordinates available, the chances of your digital-to-analog chip reading data from coordinate 1.30,2.60 have now become substantially higher.
    normally the digital-to-analog chip on your computer will match each coordinate (1,3 with 1,3) when the sampling rate is at the same 44.1khz that the waveform was originally recorded at.

    but when you turn the sampling rate up on your computer's sound card, the digital-to-analog chip starts to look for data 'in-between' the lines. and a good example of data in-between the lines is the 1.30,2.60 error.

    the data that was read at coordinate 1.30,2.60 is totally free from harm, but the analog-to-digital converter missed its mark.
    rather than hearing neither coordinate 1,3 nor 1.30,2.60, you can increase the sampling rate to a higher resolution than what the raw/original waveform was recorded with.

    this is exactly why when you turn up the sample rate there is more detail in the music. coordinates like 1.30,2.60 are now being converted into an analog signal and can be heard.

    if your analog-to-digital chip made zero mistakes and all the coordinates have rounded numbers with no decimal places, then increasing the sampling rate on your digital-to-analog chip is not going to find any hidden coordinates.
    but analog to digital chips vary in quality and therefore they do make mistakes when creating coordinates.
    digital to analog chips vary in quality also and therefore they can and do make mistakes when reading coordinates.

    this is why the resolution (sample rate) is so important when recording/playing-back audio.
    ideally you want both chips looking at a resolution as high as they can go.
    just like when taking a picture, you want the picture to look its best.. therefore you purchase a camera with the highest megapixels available.

    although you don't want to record audio at a higher sampling rate than what you can play back.
    for example.. you recorded yourself on a microphone at a 96khz sample rate, but your computer can only play back that audio recording at a 48khz sample rate.
    this means @ 48khz there are coordinates that exist and are not being picked up by your digital-to-analog converter, thus you hear a loss of quality/detail.

    as a re-cap.. higher sample rates read coordinates with decimals such as 1.30,2.60

    the maximum error fluctuation of 44.1khz is derived by a net-catch of 64khz (thus meaning 64khz will find absolutely all of the errors created by an analog-to-digital converter with a recording rate of 44.1khz)
    that is a government mandated quality-control law.

    88.2khz drains the amplifier's voltage rail, resulting in the waveform seeking amperage from an outside power source, thus increasing (well, maximizing really) the harmonic distortion and signal-to-noise ratio.
    what happens at 88.2khz is really just the digital-to-analog chip sucking up so much current that the amplifier's voltage rail stops looking at a 'buffer over-run', thus maximizing the amplifier's ability to amplify sound.
    an example..
    it is like doing aerobics with a sweatshirt on, and then taking that sweatshirt off to feel free and have unobstructed movement.
    (it maximizes the signal-to-noise ratio.. just like taking the sweatshirt off maximizes arm and torso movement)

    i'm quite confident that this write-up has informed those of you who had no clue how dac's and adc's work (and why their resolution/sample-rate is so important).

    mp3 encoding simply removes x,y coordinates from the grid.. and that makes the file size smaller.
    the differences in mp3 encoders are simply what coordinates are removed.
    the codecs are programmed to remove certain frequencies for example.
    others are programmed to begin with removing harmonic distortion and transients before taking the loud coordinates out of the grid (lame mp3 to be specific).
    the lame mp3 codec is also programmed to remove coordinates that have no or little audio data (digital silence).

    if you can't hear the difference between 44.1khz and 64khz you only have your ears and/or your speakers to blame.
    most often there is a lack of quality in the speakers to allow the listener to hear the difference.
    anything bose/klipsch or below is considered consumer-audio.
    and consumer-audio is junk! (although bose is supposed to have the best detail and klipsch is supposed to be the loudest.. but they make products for average people with average needs)

    and for the record.. bose is supposed to be 'bang-for-your-buck' performance, until you realize that for about $250 you can buy two bookshelf speakers, build your own cabinets, and raise the quality of your listening experience higher than you thought attainable in your own home.
     
  7. jcx
  8. thisbenjamin Contributor
    Quote:

    Originally Posted by jcx
    awesome!

    did you use DadaDodo ?




    +1

    I came back here to reference some of Dan's comments, and saw this.. wow. anwaypasible's post is such a mess of repeating ideas, obvious information and general slop - take a look at the user's other posts, it's a common thread.
     
  9. anwaypasible
    mmhm.. and the new technology that is coming out looks like ray-tracing.

    a picture being drawn strictly with dots - today's technology
    [​IMG]

    imagine a picture being drawn with other pictures - tomorrow's sound processors
    i can't find a picture to give an example

    but rasterization plants itself as a single pixel.. ray tracing plants itself as a triangle and gathers all pixels around it.

    that is why ray tracing needs spatial processing.
    you'll read that ray tracing is used to create a path for waves/pixels - but it is the actual spatial processing taking grid-notes from the ray tracing that calculate the 'regions of varying propagation velocity, absorption characteristics, and reflecting surfaces.'

    spatial:
    2. existing or occurring in space; having extension in space.

    ray tracing creates the space.. spatial processing are the particles/waves within that space.

    rasterization.. well, you have to do all calculations (inaccurately) in your head and create one drawing at a time (a flipbook) and then run those drawings back-to-back at 30 frames per second.

    same stuff nvidia is talking about - there will now be an environment (virtual space) for such particles in space.
    one great example would be a fireworks display. the pack leaves the chute and soars into the air, then the explosion occurs and the particles lit on fire scatter (based on accurately set physics settings.. weight/direction/velocity)

    in other words..
    grabbing one pixel at a time leaves data behind (nuances and such)
    grabbing chunks of pixels (in the form of triangles, and all those around the triangle) will gather more information/detail

    basically, i see it as the picture above made from dots.. to.. somebody pushing their face through hotwax/plastic.
    being that ray tracing creates 'space' then the sound will have room to bounce around and be captured while it is bouncing around.

    and the best way to compare ray tracing to the hotwax/plastic is quite easy:
    the hot sheet of plastic/wax is the triangle, and the 'pressing of the face into the plastic/wax' is the spatial processing (literally.. the spatial processing records the molecular structure of the plastic/wax as the sheet stretches and conforms to whatever face is pushing against it.)

    rasterization..
    you would have to make a sheet of plastic/wax pixel by pixel until the sheet was finished.
    then you would have to move the pixels outwards one at a time - that is like trying to make that same face bulge into the plastic/wax with a hammer and a threading needle.

    sure, you might have the ability to change the pixels x,y coordinates (and again the coordinates of pushing in or pulling out) but then you will have gaps in the sheet where the pixel was just at.
    the spatial processing will automatically fill in those holes.
    and when recording.. spatial processing can say 'hey, go wherever the hell you want and i'll meet you there (and follow/record the path you took to get there).

    the biggest thing in audio is.. the processing will have more detail because there wont be any holes in the sheet from pushing a pixel inwards or outwards.

    kinda like a robot with only up/down left/right (4) possible movements in its arm, compared to a robot with up/down left/right diagonal1/diagonal1 diagonal2/diagonal2 (8).

    and again.. the limit of 4 or 8 movements is completely released, because now there is space (also known as unlimited servos).

    (and truly.. the space is only as fine/detailed as the resolution it was programmed to be, but usually it is a humongous lump sum of previous technology)

    i like the robot example and the sheet stretching example - and what it can do for sound has already been said.


    ACTUALLY
    the closest thing i can think of to give you an 'idea' is the spectrogram visualization in the foobar audio player.

    that visual has had an upgrade.. eheheh
    (imagine watching bubbles in boiling water.. the new spectrograms are like liquid/gas)

    sometimes when you play with 3d programs you can 'pretend' to use spatial processing, but in reality you are just moving a wire and the 3d engine is adding/removing pixels to keep the texture wrap from busting open (think of the solid sphere demo, where you can stretch the bubble and manipulate it - the 3d engine is just filling in or deleting pixels as necessary)
    spatial processing will allow you to program how the pixels in the texture will move, not by wire and relying on an engine to fill in any voids, but by telling the pixels to do what you want and calculating tears damage from that point on.

    putting a bullet hole into a piece of wood is now like stacking a bunch of crates 8 wide x 10 tall x 6 deep.. then shoving the projectile through those stacked crates.
    those crates now become little pieces of wood flying about.

    another example that would really show off the new technology would be watching a knife stab into a piece of fabric and then proceed to move downward (slicing/cutting into the fabric as the blade moves down)

    now we can program the software to determine what kind of fabric we are working with.
    you can stick the knife into silk.. wool.. anything in between - and the ray tracing rendering will calculate the visual differences between stabbing silk or wool.

    silk is very light and will blow around easily.. wool is thick and stubborn.
    if you wanted to give an example of stabbing silk or wool with today's technology, you would have to program the wires to move manually, and then you would have to program the texture to follow (with precise stiffness)
    and i don't see many 3d frames being broken on the spot either.
    a rip in fabric would need a wire frame adjustment AND breakage.
     
  10. twylight
    I block your great wall of babbling text with my +5 Shield
     
  11. xnor
    Quote:

    Originally Posted by twylight
    I block your great wall of babbling text with my +5 Shield



    hehehe, love that

    @anwaypasible:
    Your comparisons are very poor and partly nonsense. These explanations are wrong and people will be more confused after reading it than before - if they read it at all. Please learn how to structure a text, find out what a paragraph is, what the shift key is good for ...

    Sorry for being off-topic.


    @leeperry:
    I don't know what you do wrong when you do those 'tests', but mine have shown otherwise. Btw, ever verified your claims with ABX tests? Guess not. Anyway please don't post such rubbish.
    kthxbye
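The ABX suggestion is easy to quantify: with n forced-choice trials, the probability of scoring at least k correct by pure guessing is a one-sided binomial tail. A small Python sketch (standard library only; the 16-trial numbers are just illustrative):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of >= `correct` right answers out of `trials`
    if the listener is purely guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 right is fairly convincing evidence of an audible difference;
# 9 of 16 is entirely consistent with guessing.
print(round(abx_p_value(12, 16), 3))  # 0.038
print(round(abx_p_value(9, 16), 3))   # 0.402
```

The usual convention is to call the difference "heard" only when this p-value falls below some threshold such as 0.05.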
     
  12. techenvy
    Quote:

    Originally Posted by Dan Lavry
    Since the early 1990's, virtually all DA's have built in up-sampling. There are 2 reasons you want that:

    1. Without up-sampling, making a post DA good analog filter (required to remove the image energy) is virtually an impossible task. Up-sampling moves the image energy up to higher frequencies, separating the wanted audio energy from the unwanted image energy making a good analog filtering possible. The higher the up-sampling, the more separation (gap) thus making the filtering easier.

    2. DA's do not generate narrow pulses (RZ signals - return to zero), because there is not much energy in narrow pulses. Instead, the DA output signal (before the analog anti imaging filter) is a NRZ signal (NRZ - not return to zero). The signal looks like "steps" (not like narrow pulses), and such signal does contain the needed energy to drive the analog filter. However, when you use NRZ signals without up-sampling, you lose some high frequencies. The flatness response is compromised.

    You can see a plot in my web site, in a paper "Sampling, Oversampling, Imaging and Aliasing".

    Look at page 3 for a plot titled Sin(X)/X plots for X2,X4,X8 and X16.

    http://www.lavryengineering.com/white_papers/sample.pdf

    You can see that at X2 up-sampling you still lose around 0.8dB at 20KHz. With X4 up-sampling, the loss is around 0.2dB at 22KHz. At X8 it is less than 0.1dB; at X16 the loss is a non-issue...

    The horizontal axis is frequency (0-320KHz). Your interest is mostly 0-22KHz (audible range), shown by the red line marked "22". The vertical axis is dB loss in amplitude. The curve that drops fastest is the X2 up-sampling. At X16, your 3dB loss is all the way up near 320KHz, with hardly any loss at 20KHz.

    True, one can compensate for the loss, but it takes a lot of DSP (signal processing). It is not possible to compensate well for the Sin(X)/X curve with analog circuits.

    So the bottom line is: you do not want a DA without any up-sampling. You may not need to up-sample a lot, but you need some up-sampling to enable good filtering and flat response. Most DA's today up-sample between X64 and X1024. This is overkill for flat response, and is very good from an analog filter standpoint. The reasons for up-sampling so much are due to modern DA converter architectures such as sigma delta designs.

    My answer is a bit technical, but a proper and solid answer must be based on technical understanding of the issues. I try my best to simplify it for the casual reader (no math, minimal engineering terminology). I hope what I wrote is not too difficult to understand.

    Regards
    Dan Lavry
    Lavry Engineering
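The Sin(X)/X figures quoted above can be reproduced directly: an NRZ (zero-order-hold) output attenuates a frequency f by sinc(f / (L·fs)) when running at L-times up-sampling. A minimal Python sketch (NumPy's `sinc` is the normalized sin(πx)/(πx); the 44.1 kHz base rate matches the quote):

```python
import numpy as np

def zoh_loss_db(f_hz: float, upsample: int, fs: float = 44100.0) -> float:
    """Amplitude loss (dB) of an NRZ/zero-order-hold DAC output at f_hz
    when running at `upsample` times the base rate fs."""
    return -20 * np.log10(np.sinc(f_hz / (upsample * fs)))

for L in (2, 4, 8, 16):
    print(f"X{L}: {zoh_loss_db(20000, L):.2f} dB at 20 kHz")
# X2 is ~0.75 dB, X4 ~0.18 dB, X8 ~0.05 dB, X16 ~0.01 dB,
# matching the ~0.8 dB / ~0.2 dB / <0.1 dB figures in the quote.
```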





    thankyou thankyou

    interested to get your opinion on the best sound card for my sony vaio aw190.
    my current setup is not so good: i run hdmi from the laptop to the lcd tv, then optical toslink out to my pioneer 800c dolby headphone receiver, then out to my audio technica AT-HA25D headamp (sadly lacking bass). but whatever, i'm getting a new headamp and am deciding to get one with usb input, so i'm interested in trying a sound card as well... i have 8 gigs of memory; i don't know if that matters, probably not.

    so, what soundcard would sound best, or would you recommend, for my laptop setup? i'd really like to be able to eq the sound right on my computer screen. i'm about to order a used presonance mini eq just for this reason; if nothing else it will be fun. currently i use an old onkyo 8511 to eq (it doesn't have sub outs and the bass is terrible), so that's the reason for the mini eq... like you care

    my second question is: do you think the signal would sound better coming out of my laptop via hdmi into my LG lcd tv and then optical out as described before, or do you think i should go usb straight to the headamp? i have no idea how i will incorporate my 800c dolby headphone receiver, as it has only one 3.5mm stereo out.
    fyi, i previously had an onkyo tx 606 hdmi receiver and the headphone jack sounded twice as good as my current onkyo tx8511, but i never used it, so i pawned that behemoth receiver!

    thankyou

    gear in use: Pioneer dhp 2000, dhp800 IR's, Denon 7000, ue superfi 5 EB,
    Westone 3, sony mdr6000 RF's (don't buy unless you like beeping noises!),
    bose around ear, and 800c pioneer base station dolby headphone receiver and audio technica AT-HA25D headamp (sadly lacking the bass i desire)
     
  13. leeperry
    Quote:

    Originally Posted by xnor
    @leeperry:
    I don't know what you do wrong when you do those 'tests', but mine have shown otherwise. Btw, ever verified your claims with ABX tests? Guess not. Anyway please don't post such rubbish.




    resampling increases harmonic distortion; it's easy to measure w/ WaveSpectra....and to hear, too.

    writing complete bs on a blog doesn't all of a sudden make it true, my dear friend
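Whatever one makes of the claim, the measurement itself is simple to sketch: resample a pure tone and compare the energy away from the fundamental against the fundamental. The Python below uses deliberately crude linear-interpolation resampling as a stand-in (real resamplers, and whatever WaveSpectra measures, will behave differently):

```python
import numpy as np

fs_in, fs_out = 44100, 48000
t_in = np.arange(fs_in) / fs_in
tone = np.sin(2 * np.pi * 1000 * t_in)    # 1 kHz test tone, one second

# Crude linear-interpolation resample from 44.1 kHz to 48 kHz.
t_out = np.arange(fs_out) / fs_out
resampled = np.interp(t_out, t_in, tone)

# Compare fundamental energy vs. everything else (a THD+N-style ratio).
spectrum = np.abs(np.fft.rfft(resampled * np.hanning(len(resampled)))) ** 2
fund = np.argmax(spectrum)                # bin of the 1 kHz fundamental
signal = spectrum[fund - 2 : fund + 3].sum()
noise = spectrum.sum() - signal
print(f"THD+N: {10 * np.log10(noise / signal):.1f} dB")
```

Even this worst-case interpolator leaves the distortion products far below the fundamental; whether any given level is audible is exactly what an ABX test would settle.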
     
  14. chinesekiwi
    I haven't read the thread, so I dunno if it's been posted, but hey, don't let science get in the way of your thinking....

    A Sampling Theory paper by Dan Lavry.

    Basically it says you cannot hear more than what the 44.1 kHz sampling rate captures, and in fact it is worse if you upsample.
    http://www.lavryengineering.com/docu...ing_Theory.pdf
    See the bottom of page 26 and page 27 for the choice, to-the-point quote.
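The paper's point rests on the sampling theorem itself: a signal band-limited below fs/2 is exactly recoverable from its samples, so nothing audible lives "between" them. A small Python sketch of Whittaker-Shannon (sinc) interpolation (the tone frequency and window length are arbitrary illustrative choices):

```python
import numpy as np

fs = 44100
n = np.arange(2000)                       # a 2000-sample window
f0 = 15000                                # well below Nyquist (22050 Hz)
samples = np.sin(2 * np.pi * f0 * n / fs)

# Reconstruct the waveform at a time that was never sampled,
# halfway between samples 1000 and 1001, via sinc interpolation.
t = 1000.5 / fs
reconstructed = np.sum(samples * np.sinc(fs * t - n))
exact = np.sin(2 * np.pi * f0 * t)
print(abs(reconstructed - exact))  # small; limited only by the finite window
```

With an infinite window the reconstruction is exact; the residual here comes purely from truncating the sinc sum, not from any missing information between samples.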
     
  15. chinesekiwi
    Quote:

    Originally Posted by anwaypasible
    mmhm.. and the new technology that is coming out looks like ray-tracing.

    a picture being drawn strictly with dots - today's technology
    [​IMG]




    um, you do know that dot matrix printers 'print' the picture in the same way, so that technology's been around a very long while.
     