Watts Up...?
Oct 30, 2017 at 11:48 PM Post #421 of 4,668
for Rob,

I have a Mac desktop and laptop, and I've read about the disadvantages of driverless USB. Very simply: can I run a Windows emulator like Parallels on my Macs and load it up when using Roon, to ensure bit-perfect results and the re-sending of data when errors (timing etc.) occur, which the Mac's USB 2 does not do? Will a Windows emulator work here, as it seems to have done for someone else on Head-Fi, or do you need real Windows hardware? I also noted you mentioned Windows 10 with Chrome as if to make a point that something didn't work; could you explain that? Many thanks, mk.

I just came across this on Roon's site, which states that bit-perfect playback on a Mac is achieved via Core Audio. Doesn't this contradict the driverless scenario above?

"The software achieves bit-perfect playback using CoreAudio (Mac)"

 
Oct 31, 2017 at 12:13 AM Post #422 of 4,668
Although the Chord Windows driver re-sends faulty packets, I suspect that driverless is OK in practice. To check, just run DoP DSD; if it isn't bit perfect, you will hear drop-outs.
 
Oct 31, 2017 at 4:19 AM Post #423 of 4,668
In the long run, however, when a laptop needs replacing due to age etc., I take it new Windows hardware with the Chord Windows driver is the best way to go? Thanks, mk.
 
Oct 31, 2017 at 7:49 AM Post #425 of 4,668
I hate Windows. Sorry, just me, but I was shipped a Surface notebook by mistake (yes, free), thinking jeez, maybe I could use it in the garden to listen to Hugo. I packed it away after turning it on and went back to Apple; I'm hooked... good luck though, mk.
 
Oct 31, 2017 at 7:06 PM Post #426 of 4,668
Rob, can you help me understand something in the first deck?

Time resolution and frequency are related. Rise time from zero (50%) to peak (100%) is 1/4 of the period, so a 20 kHz signal has a period of 50 microsecs and rises in 12.5. If the human ear/brain can resolve 4 microsecs, why can't we hear higher frequencies? A signal that rises in only 4 microsecs is at least 60 kHz. Since we can only hear to about 20 kHz (which is generous; most people can't hear 20 kHz), it seems like our transient-response rise-time resolution is 12.5 microsecs, not 4.

By this rise-time metric, CD has an 11 microsec rise-time resolution (1/4 of the period of 22,050 Hz).

Or, is the 4 microsec resolution only detected as an arrival-instant delay from one ear to the other (interaural time difference), not as rise time within a signal? Even if the interaural time difference resolution were as low as 4 microsecs, that seems to apply to localization, not transient response.
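
As a quick sanity check of that arithmetic, here is a minimal Python sketch. The zero-to-peak definition of rise time and the figures are taken from the post above; the 22,050 Hz value is CD's Nyquist frequency.

```python
# Quarter-period "rise time" arithmetic from the question above.
# Assumes rise time = zero-crossing to peak = one quarter of a sine's period.

def quarter_period_us(freq_hz):
    """Quarter of the period of a sine at freq_hz, in microseconds."""
    return 1e6 / (4 * freq_hz)

def freq_for_rise_time_hz(rise_us):
    """Sine frequency whose quarter period equals the given rise time."""
    return 1.0 / (4 * rise_us * 1e-6)

print(quarter_period_us(20_000))       # 12.5  -> a 20 kHz tone rises in 12.5 us
print(quarter_period_us(22_050))       # ~11.3 -> CD Nyquist, the "11 us" figure
print(freq_for_rise_time_hz(4) / 1e3)  # 62.5  -> a sine rising in 4 us is ~62.5 kHz
```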
 
Oct 31, 2017 at 11:53 PM Post #428 of 4,668
Rob, can you help me understand something in the first deck?

Time resolution and frequency are related. Rise time from zero (50%) to peak (100%) is 1/4 of the period, so a 20 kHz signal has a period of 50 microsecs and rises in 12.5. If the human ear/brain can resolve 4 microsecs, why can't we hear higher frequencies? A signal that rises in only 4 microsecs is at least 60 kHz. Since we can only hear to about 20 kHz (which is generous; most people can't hear 20 kHz), it seems like our transient-response rise-time resolution is 12.5 microsecs, not 4.

By this rise-time metric, CD has an 11 microsec rise-time resolution (1/4 of the period of 22,050 Hz).

Or, is the 4 microsec resolution only detected as an arrival-instant delay from one ear to the other (interaural time difference), not as rise time within a signal? Even if the interaural time difference resolution were as low as 4 microsecs, that seems to apply to localization, not transient response.

The issue here is non-linearity: the reconstruction of transient timing is non-linear when you do not use an ideal sinc-function interpolation filter. The instant non-linearities come into play, you can no longer relate the time domain to the frequency domain.

I will give you an example. If I said a DAC had random jitter of 100 ns (a huge figure), you would have no problem saying this was audible, as it would create measurable and audible intermodulation products in the audio bandwidth. The transient timing error from an interpolation filter can be thought of exactly like jitter, in that the transient timing is constantly changing relative to the original, un-sampled signal at the ADC; except that the timing error is much bigger than 100 ns. To give you an idea of how big the errors can be, take a look at this slide:

Slide10.JPG
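
To make the jitter comparison concrete, here is a small toy sketch in Python/NumPy. It is not Rob's WTA filter and not taken from the slide; it simply places a band-limited transient between sample instants at 44.1 kHz, reconstructs it once with crude linear interpolation and once with a long sinc interpolation, and measures the error in the reconstructed peak time.

```python
import numpy as np

fs = 44_100.0                          # CD sample rate
t_true = 1000.3 / fs                   # transient peak deliberately placed between samples

# Band-limited "transient": an ideal sinc pulse centred at t_true, sampled at fs.
n = np.arange(2000)
x = np.sinc(fs * (n / fs - t_true))

# Fine time grid (64x oversampled) around the transient, for reconstruction.
tf = np.arange(990 * 64, 1010 * 64) / (fs * 64)

# (a) Linear interpolation between samples: a crude, very short filter.
lin = np.interp(tf, n / fs, x)

# (b) Long sinc interpolation, approaching the ideal reconstruction filter.
sinc_rec = np.array([np.sum(x * np.sinc(fs * t - n)) for t in tf])

err_lin = abs(tf[np.argmax(lin)] - t_true) * 1e6        # microseconds
err_sinc = abs(tf[np.argmax(sinc_rec)] - t_true) * 1e6
print(f"peak-time error, linear interpolation: {err_lin:.2f} us")   # ~6.8 us
print(f"peak-time error, long sinc filter    : {err_sinc:.2f} us")  # well under 1 us
```

Even this mild example puts the transient's reconstructed peak several microseconds off with the short filter, i.e. tens of times larger than the 100 ns jitter figure quoted above.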
 
Nov 1, 2017 at 3:30 AM Post #429 of 4,668
That might partly explain why Sony's ZX1 sounds so horrendously artificial and contrived, to my ears.

Sony implemented a very interesting digital/analogue hybrid circuit in their ZH1ES amp that blends their FPGA-based S-Master Class D amplifier with a Class A stage to address the switching issue. It would be interesting if you could check it out and see whether it still sounds artificial.
 
Nov 1, 2017 at 3:52 AM Post #430 of 4,668
If the new DAVE is Davinia, is the new Mojo Mojette? :)
 
Nov 1, 2017 at 4:24 AM Post #431 of 4,668
Nov 1, 2017 at 5:39 AM Post #432 of 4,668
Rob, can you help me understand something in the first deck?

Time resolution and frequency are related. Rise time from zero (50%) to peak (100%) is 1/4 of the period, so a 20 kHz signal has a period of 50 microsecs and rises in 12.5. If the human ear/brain can resolve 4 microsecs, why can't we hear higher frequencies? A signal that rises in only 4 microsecs is at least 60 kHz. Since we can only hear to about 20 kHz (which is generous; most people can't hear 20 kHz), it seems like our transient-response rise-time resolution is 12.5 microsecs, not 4.

By this rise-time metric, CD has an 11 microsec rise-time resolution (1/4 of the period of 22,050 Hz).

Or, is the 4 microsec resolution only detected as an arrival-instant delay from one ear to the other (interaural time difference), not as rise time within a signal? Even if the interaural time difference resolution were as low as 4 microsecs, that seems to apply to localization, not transient response.

In reality, you can't compare the physical waveforms to hearing capability. Between a sound wave and the brain there is a series of mechanisms - the eardrum, the middle-ear bones, the cochlea, the neurons - each with its own physical limitations. The cochlea, which decodes the frequencies, is only capable of a certain range of frequencies, and evolution has probably shaped the other parts to optimise the dynamic range for these frequencies too. The brain may still be capable of detecting time differences more minute than 1/20,000 s, which is important for the spatial location of sounds, but the sound must be within the physical capabilities of the ear. Makes sense?
 
Nov 1, 2017 at 7:47 AM Post #433 of 4,668
Imagine you have a long row of microphones that can only register 200 Hz at the upper limit. Now imagine that the spacing between these microphones is shorter than the 200 Hz wavelength (172 cm). You have a system that can only record sounds as high as 200 Hz, but which has a temporal accuracy that is much greater, thanks to the short distance between each microphone. This is the ear. Claims of 4 µs accuracy should be thoroughly investigated and the papers looked at in detail.

The truth is never as simple as presented.
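
The analogy can be put into numbers. Below is a toy sketch with assumed figures (a 192 kHz simulation rate and a 10-sample, roughly 52 µs, inter-channel delay): two channels that contain nothing above 200 Hz, yet a simple cross-correlation recovers the delay to tens of microseconds, far finer than the 5 ms period of the band limit.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 192_000                         # simulation rate
N = fs                               # one second of signal, so FFT bins are 1 Hz wide

# Band-limit white noise to 200 Hz: the "microphones" register nothing higher.
spec = np.fft.rfft(rng.standard_normal(N))
spec[201:] = 0.0                     # keep only bins 0..200 Hz
mic1 = np.fft.irfft(spec)

delay_samples = 10                   # 10 / 192000 s ~= 52 us
mic2 = np.roll(mic1, delay_samples)  # the second microphone, slightly further away

# Circular cross-correlation via FFT; its peak sits at the true delay.
ccf = np.fft.irfft(np.fft.rfft(mic2) * np.conj(np.fft.rfft(mic1)))
print(f"estimated delay: {np.argmax(ccf) / fs * 1e6:.1f} us")   # ~52 us despite a 200 Hz bandwidth
```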
 
Nov 1, 2017 at 11:19 AM Post #434 of 4,668
Leo and sq225917 have paraphrased my last point, which is that perhaps our ITD (interaural time difference) detection can resolve finer timing than the rise-time argument allows. But if that was Rob's point, the improvement would be in the spatial image, not in transient response. And even that would be lost with many recordings: those that are close-miked, where whatever image we hear is artificially constructed during mixing.

Rob's response suggests his point was that most DACs don't achieve 1/4-period rise times with transient signals. The reconstruction filters spread the transient over time in order to avoid other distortions such as pre-ringing. I take Rob's point to be that this cure is worse than the disease: smearing the transients has a bigger impact on sound perception than we think, and to avoid smearing them the DAC needs to operate at a much finer time resolution.
 
Nov 1, 2017 at 2:31 PM Post #435 of 4,668
Imagine you have a long row of microphones that can only register 200 Hz at the upper limit. Now imagine that the spacing between these microphones is shorter than the 200 Hz wavelength (172 cm). You have a system that can only record sounds as high as 200 Hz, but which has a temporal accuracy that is much greater, thanks to the short distance between each microphone. This is the ear. Claims of 4 µs accuracy should be thoroughly investigated and the papers looked at in detail.

The truth is never as simple as presented.

I don't quite follow you. If the separation is exactly one wavelength, or a multiple of wavelengths, you will record exactly the same signal. So if you place the microphones at 1/4, 1 1/4 or 105 1/4 wavelengths' separation, your measurement will be exactly the same (attenuation aside). What really matters is the phase difference (and in the case of full-wavelength distances, the phase difference is zero), and in this experiment you could measure it very accurately by subtracting the two signals.

The brain actually uses different mechanisms to detect these delays depending on the frequencies involved. It uses phase delays to detect time differences at low frequencies (i.e. the difference in the rise times of the sound waves) and group delays (in simple words, the delay in the change of amplitude across several waves) for higher frequencies.
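
A rough illustration of that split, under assumed numbers (3 kHz carrier, 100 Hz envelope, 200 µs interaural delay; Python with NumPy/SciPy): at 3 kHz the carrier phase difference has already wrapped and reports the wrong delay, while the envelope (a group-delay-style cue) still recovers it.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000
t = np.arange(fs) / fs                        # one second, so FFT bins are 1 Hz wide
carrier, env_f = 3_000.0, 100.0               # 3 kHz carrier, 100 Hz envelope
delay_samples = 20                            # 200 us interaural delay

left = (1 + 0.8 * np.sin(2 * np.pi * env_f * t)) * np.sin(2 * np.pi * carrier * t)
right = np.roll(left, delay_samples)          # the far ear hears it 200 us later

# Carrier-phase cue: wraps, because 200 us is more than half a 3 kHz period.
k = int(carrier)
phase = np.angle(np.fft.rfft(left)[k] * np.conj(np.fft.rfft(right)[k]))
print(f"carrier-phase delay : {phase / (2 * np.pi * carrier) * 1e6:7.1f} us")  # ~-133 us (wrong)

# Envelope cue: cross-correlate the Hilbert envelopes instead.
env_l = np.abs(hilbert(left)) - 1.0
env_r = np.abs(hilbert(right)) - 1.0
lag = np.argmax(np.correlate(env_r, env_l, mode="full")) - (len(t) - 1)
print(f"envelope delay      : {lag / fs * 1e6:7.1f} us")                       # ~200 us (right)
```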
 
