OK, I read a little bit on delta-sigma converters and I get the general idea: convert each sample to a sequence of pulses such that the duty cycle corresponds to the desired discrete output level, then filter out the inaudible high-frequency noise.
My intuition is still failing me regarding volume control. I looked at the specs for the CS4382 DAC used on the Audigy 2 and X-Fi sound cards. The DAC itself has per-channel volume control, and in the block diagram this volume control seems to occur before the delta-sigma converter, which suggests that the actual sample value is modified in some way first. Ultimately, what I'm getting at is whether or not you effectively throw away information (bits) by attenuating the signal. If so, what does this mean qualitatively? Does every -6 dB of attenuation correspond to one less bit, such that if the attenuation is -48 dB on a 16-bit signal, it's as if I were using only 8 bits? If so, that seems bad, and it seems you would be better off loading with a pot if you want to attenuate, rather than effectively cutting the bit precision in half - although people have said they can't tell the difference. Is this related to how a good analog-domain amp can "bring out details" even if you are not necessarily increasing the overall volume or dynamic range?
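To make sure I'm asking the right question, here's a quick Python sketch of what I *assume* "attenuating in the digital domain" means - scale each sample by the gain factor, then requantize back to an integer. (This is my guess at the mechanism, not anything from the CS4382 datasheet.)

```python
def attenuate(sample: int, db: float) -> int:
    """Scale a 16-bit PCM sample by `db` decibels, then requantize to an integer.

    Assumed model of digital volume control: multiply, then round.
    Each ~6.02 dB of attenuation halves the number of distinct output levels.
    """
    gain = 10 ** (db / 20.0)  # amplitude ratio; -6.02 dB ~= 0.5
    return round(sample * gain)

# A full-scale 16-bit signal spans 65536 levels. After -48 dB the same
# signal only spans roughly 65536 / 2^8 = 256 levels, i.e. about 8 bits
# of effective resolution.
full_scale = 32767
print(attenuate(full_scale, 0.0))    # 32767: untouched
print(attenuate(full_scale, -48.0))  # 130: the whole signal now fits in ~8 bits
```

If that model is right, the rounding step is where the information is actually lost - the low-order bits of the original sample can no longer be distinguished in the attenuated output.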
When you adjust the volume level on your computer with the control panel, does it directly affect the volume control on the CS4382 DAC? What is happening when you adjust a volume control in an application such as a game (e.g. the music and FX settings)? Is that setting combined with the Windows setting and mapped to a final volume control setting for the DAC? Or do the samples actually get attenuated (i.e. bits thrown away) in software before they are even sent to the sound card?