What effect does an audio processor have on headphones?
Dec 11, 2014 at 3:03 AM Thread Starter Post #1 of 3

D126 · Head-Fier · Joined Jan 9, 2013 · Posts: 82 · Likes: 11
If I were to use a Modi/Magni combo with my laptop, would I be getting optimal audio versus my desktop, where I have a DX/Modi/Magni combo?
 
How exactly does an audio processor affect sound quality? I thought all it did was change things to Dolby/Surround Sound... Does SNR correspond to an audio processor (or is it a manufacturer's way of rating an overall processor/DAC/amp setup)? How exactly would onboard audio hinder a Modi/Magni in producing clean audio?  
 
Also, are USB and Optical DACs equal?
 
Dec 11, 2014 at 5:18 AM Post #2 of 3
 
How exactly does an audio processor affect sound quality? 

 
It doesn't, really, aside from any special effects you need the processor or DSP to apply (a soundcard has a DSP, plus a DAC and sometimes a headphone amp). If you don't need virtual surround, for example, then it's not necessary - there are software-level EQs that work with any computer.
 
The only time a DSP really affects SQ is when it's meant to correct environmental issues with the sound system. For example, software-level crossfeed (which feeds a range of frequencies across both channels to simulate speakers, where each ear hears a bit of the speaker on the other side) tries to minimize the basic issue with headphone listening: you can't hear each channel with the opposite ear.
 
In serious car audio systems, you need a DSP to apply a time-correction profile - essentially a custom delay (in microseconds) on the nearer speakers so that the sound waves from all of them arrive in sync. This results in a consistent soundstage along the dashboard, with bass drums and guitars that seem to be there even though the subwoofer is in the trunk. The microsecond corrections make it seem like you're listening to speakers where the tweeter (+midrange), midwoofer, and subwoofer are placed as they would be in a nearfield system at home. So even while you're sitting off-center in a car, assuming you installed the speakers to minimize reflections, the DSP can center the vocals on the dash and place everything else in a proportional, believable location relative to that.
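To make the time-correction idea concrete, here's a minimal sketch of the arithmetic a car DSP does: delay each nearer speaker so its sound arrives at the same moment as the farthest one. The speaker names and distances are made-up example values, not measurements from any real install.

```python
# Hypothetical sketch: per-speaker time-alignment delays for a car-audio DSP.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def alignment_delays_us(distances_m):
    """Delay each speaker (in microseconds) so every arrival
    lines up with the farthest speaker, which gets zero delay."""
    farthest = max(distances_m.values())
    return {name: (farthest - d) / SPEED_OF_SOUND * 1e6
            for name, d in distances_m.items()}

# Example distances from the driver's seat to each speaker (meters):
distances = {
    "left_tweeter": 0.9,
    "right_tweeter": 1.4,
    "left_midwoofer": 1.0,
    "right_midwoofer": 1.5,
    "subwoofer": 2.2,
}
delays = alignment_delays_us(distances)
# The nearest speaker gets the largest delay; the farthest gets none.
```

With these example numbers the left tweeter (closest) is held back by a few thousand microseconds while the trunk subwoofer plays immediately, which is why the bass can seem to come from the dashboard.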
 
 
Does SNR correspond to an audio processor (or is it a manufacturer's way of rating an overall processor/DAC/amp setup)? 

 
AFAIK, if you're transmitting a digital signal, SNR doesn't matter unless there's some kind of interference - it's really a spec for the analog stages. The question is whether the computer you use has any noise coming through USB, or sometimes even S/PDIF, caused by a bad USB implementation (you might notice that some motherboards now advertise that they're designed to work with USB DACs).
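For reference, SNR is just the ratio of signal level to noise level expressed in decibels. A quick sketch of the math (the spec figure used below is an arbitrary example, not a Modi/Magni measurement):

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in dB from RMS signal and noise voltages."""
    return 20 * math.log10(signal_rms / noise_rms)

# A spec like "108 dB SNR" means the noise floor is this many times
# smaller than the full-scale signal:
ratio = 10 ** (108 / 20)  # roughly 250,000:1
```

So a higher dB figure just means a quieter noise floor relative to full-scale output on the analog side.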
 
 
 
How exactly would onboard audio hinder a Modi/Magni in producing clean audio?  

 
If your motherboard isn't sending a consistent current or a clean signal out of its USB port then that can be a problem. And no, the onboard audio circuitry won't matter for USB since it bypasses the DSP chip on it (and the soundcard altogether).
 
 
Also, are USB and Optical DACs equal?

 
They're different. As above, USB bypasses all DSP chips, while S/PDIF goes through the DSP. That means any processing done by your computer gets sent as-is to the optical output - so you can get virtual surround over optical but not over USB.
 
USB soundcards like the Xonar U3 are different from DACs as they have their own DSP chips. That's the difference between a USB soundcard and a USB DAC.
 
If I were to use a Modi/Magni combo with my laptop, would I be getting optimal audio versus my desktop, where I have a DX/Modi/Magni combo?

 
Even if that laptop has a DSP, it won't apply to a USB DAC; but if you don't use DSP effects for games anyway, it won't matter.
 
Dec 11, 2014 at 9:12 AM Post #3 of 3
Inside the DAC, the chip actually performing the D->A conversion must be fed a signal format called "I2S". So, no matter what type of digital connection is used - USB, TOSLINK optical S/PDIF, or coax S/PDIF - the digital data must be converted to I2S before being sent to the actual DAC chip. Everything upstream of the DAC chip serves only two purposes: to get all the data bits to the DAC chip, and to make sure the timing of those bits is as close to perfect as possible.
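As a rough illustration of that timing requirement: the I2S bit clock has to run fast enough to shift out every bit of every sample, so its minimum rate is just sample rate × bit depth × channels. (Real hardware often pads each channel to a 32-bit slot, so actual bit clocks can run higher than this minimum.)

```python
def i2s_bit_clock_hz(sample_rate, bits_per_sample, channels=2):
    """Minimum I2S bit-clock frequency needed to carry the stream."""
    return sample_rate * bits_per_sample * channels

# CD audio (44.1 kHz, 16-bit stereo):
cd_bclk = i2s_bit_clock_hz(44_100, 16)     # 1,411,200 Hz
# Hi-res audio (96 kHz, 24-bit stereo):
hires_bclk = i2s_bit_clock_hz(96_000, 24)  # 4,608,000 Hz
```

Every interface feeding the DAC chip ultimately has to deliver bits at (at least) this rate with stable timing, regardless of whether they arrived over USB or S/PDIF.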

So, why are there different ways to get the digital bits from one box to another? In my jaded and cynical (yet humble) opinion, it's because the companies that build these interfaces know that whoever owns the patents on the most popular interface design will make a boatload of money in licensing for years to come. TOSLINK was Toshiba; S/PDIF was Sony & Philips; I2S was Philips; USB was a consortium of computer and electronics manufacturers that included Intel, IBM, Microsoft, Compaq, etc.

I'm sure there are minuscule differences in the details of clock timing, leading-edge detection, EMI rejection, and so on among the myriad digital interface standards - and IMHO, I doubt any of those minuscule differences make an audible difference in the music coming out of the headphones.
 
