Yikes, there is an awful lot of incorrect science out there...
1) While the differences between the various sample rates and bit depths may not be obvious to all listeners, or with all equipment, or with all music, I think it's pretty widely agreed that they are audible a significant percentage of the time. Besides that, whenever you convert from one sample rate to another you introduce slight alterations to your content. For both of those reasons, it makes obvious sense to play your music bit-perfect. (Assuming you want it to be as close to the original as possible, you don't want to introduce extra changes, right?) I would suggest that anyone wondering about this listen for themselves.
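To see why resampling is never "free", here's a toy sketch (not any real player's converter - it uses deliberately simple linear interpolation, and the signal and rates are just illustrative): round-trip a signal from 44.1 kHz to 48 kHz and back, and the bits come out different.

```python
import math

def resample_linear(samples, src_rate, dst_rate):
    """Resample by linear interpolation (a deliberately crude converter)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional index into source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(round(samples[lo] * (1 - frac) + samples[hi] * frac))
    return out

# 10 ms of a 1 kHz tone as 16-bit-ish integer samples at 44.1 kHz
original = [round(10000 * math.sin(2 * math.pi * 1000 * n / 44100))
            for n in range(441)]
upsampled = resample_linear(original, 44100, 48000)
round_trip = resample_linear(upsampled, 48000, 44100)[:len(original)]

changed = sum(1 for a, b in zip(original, round_trip) if a != b)
print(f"{changed} of {len(original)} samples differ after 44.1k -> 48k -> 44.1k")
```

Real converters use much better filters than this, so the alterations are far smaller - but they are still alterations, which is the whole argument for bit-perfect playback.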
2) Latency per se is meaningless for listening to music. If you're in a studio, recording multiple tracks that have to be synchronized, then it is critical that latency (delay) be kept to an absolute minimum. However, when you're just listening, you aren't going to hear whether the music plays 3 milliseconds after you hit the Play button or 30 milliseconds after. The real issue for playback is that you want to minimize variations in timing - and, arguably, systems with the most latency overall are also likely to have latency that varies the most.
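The arithmetic here is simple: buffer latency is just buffer length divided by sample rate. A quick sketch (the buffer sizes are hypothetical examples, not any driver's defaults) shows how small these fixed delays really are:

```python
def buffer_latency_ms(buffer_frames, sample_rate_hz):
    """Time it takes to play out one buffer of audio, in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate_hz

# Even a "huge" playback buffer is only a fixed, inaudible start-up delay.
for frames in (64, 256, 2048):
    ms = buffer_latency_ms(frames, 44100)
    print(f"{frames:5d} frames at 44.1 kHz -> {ms:6.2f} ms of latency")
```

Whether the fixed delay is 1.5 ms or 46 ms, you only "pay" it once when playback starts - it's the variation in timing, not the delay itself, that could matter.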
Luckily for us, most USB DACs these days use an asynchronous USB input, which means that the DAC controls the timing, and the computer really doesn't make much difference.
3) The biggest problems for USB audio DACs are things like dropouts (where the computer gets distracted and stops sending packets for long enough that the music halts until things catch up). Changing the latency and buffer settings can help here; a bigger buffer takes longer to fill up, and longer to empty, so whether bigger is better or worse depends on where the bottleneck lies in your particular system. Likewise, with some player software, some DAC drivers, and some computers, you may find that ASIO works better than WASAPI - or vice versa. In the end, though, it really doesn't matter as long as the bits all get there intact. (Often a particular buffer setting will work better with a particular computer and its drivers; there is no universally preferable setting. It's simply a matter of trying the different options and finding the buffer size and latency that work best on your particular system. Do not assume that the lowest latency - or the highest - will be the best choice for your setup.)
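A toy model makes the dropout mechanism concrete (the numbers below are made up for illustration): you get an underrun - an audible dropout - whenever the operating system keeps the player away from the audio device for longer than one buffer's worth of audio.

```python
def has_dropout(buffer_frames, sample_rate_hz, worst_gap_ms):
    """True if the worst scheduling gap outlasts the buffered audio."""
    buffer_ms = 1000.0 * buffer_frames / sample_rate_hz
    return worst_gap_ms > buffer_ms

# A hypothetical 10 ms scheduling hiccup:
print(has_dropout(128, 44100, worst_gap_ms=10))    # ~2.9 ms buffered -> True
print(has_dropout(4096, 44100, worst_gap_ms=10))   # ~92.9 ms buffered -> False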
I mentioned that most USB DACs are asynch these days. If yours isn't, then, as they say, all bets are off - because, if the DAC doesn't control the timing, then the computer does... in which case, to get the best sound quality, you DO need to optimize everything on the computer to give you the "smoothest" data feed (with the least variation in timing). In general, though, if you're doing it this way, you're spending a lot of effort trying to match the performance that an asynch input gives you by default.
However, do try to avoid sweeping generalizations. ASIO is NOT "better" or "worse" than WASAPI; it's more accurate to say that, on some computers, and with some DAC drivers, one or the other works better.