Before digitizing analog audio myself, I had always thought that the 44.1 kHz sample rate was the main weakness of CD audio, not the 16-bit depth. I was wrong.
There's a right way and a wrong way to digitize analog audio for CD.
The right way: Digitize to 24 bits. Make all intermediate calculations at 32 bits. Dither down to 16 bits for output.
In other words, use a software option that rounds to one of the two adjacent 16-bit values at random (weighted by how close the sample sits to each), rather than simply to the nearest 16-bit value. There are various recipes for "at random", but all are said to sound roughly like rounding to the nearest 19-bit value.
The wrong way: Make all intermediate calculations at 16 bits, or round to the nearest 16-bit value. There is actually software out there, e.g. for removing record scratches and tape hiss, that insists on working at 16-bit precision only. Such software is utter trash.
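Here's a minimal sketch of the two paths, assuming NumPy; the function name and the quiet test tone are only illustrative, not anyone's actual tool:

```python
import numpy as np

def quantize_16bit(samples, dither=True, rng=None):
    """Quantize float samples in [-1.0, 1.0) down to 16-bit integers.

    With dither=True, triangular (TPDF) dither of about +/- 1 LSB is added
    before rounding, so each sample lands on one of the adjacent 16-bit
    values with a probability weighted by proximity -- the quantization
    error becomes benign noise instead of correlated distortion.
    With dither=False this is the plain nearest-value rounding described
    above as the wrong way.
    """
    rng = np.random.default_rng() if rng is None else rng
    scaled = samples * 32767.0  # map to the 16-bit integer range
    if dither:
        # TPDF dither: sum of two uniform variables spanning +/- 1 LSB
        noise = rng.uniform(-0.5, 0.5, samples.shape) + rng.uniform(-0.5, 0.5, samples.shape)
        scaled = scaled + noise
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)

# A very quiet test tone, well below one 16-bit LSB in level: plain rounding
# erases it completely, while dithered rounding keeps it, spread into noise.
t = np.arange(44100) / 44100.0
quiet_tone = 0.00001 * np.sin(2 * np.pi * 440.0 * t)
print(quantize_16bit(quiet_tone, dither=False)[:10])  # all zeros
print(quantize_16bit(quiet_tone, dither=True)[:10])   # a sprinkling of -1/0/+1
```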
One can hypothesize all night long about why dithering works, but I could hear a dramatic difference.
I'd venture to say that most of what we don't like in the sound of our rigs is the 16-bit source material. Read what anyone says after listening to decent 24-bit DVD-Audio. I can't wait for storage capacities to increase and lawyers to retire so we can listen to 24-bit iPods.
Meanwhile, I made the assertion years ago on various forums that we should be compressing audio directly from 24-bit sources, for better sound quality. Most audio compression formats make no internal reference to the source bit depth; they just store an approximate description of the waveform as continuous data. From there, all it would take is one enterprising company designing an audio player that uses more than 16 bits in playback to get even better sound quality.
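A sketch of what that playback stage might look like, again in NumPy; `render_for_dac` and the decoder call are hypothetical, just to show that a codec's floating-point output can be rendered at whatever word length the hardware accepts:

```python
import numpy as np

def render_for_dac(decoded, output_bits=24, rng=None):
    """Final stage of a hypothetical player: take the decoder's floating-point
    output (lossy codecs such as MP3/AAC/Vorbis reconstruct the waveform as
    floats, with no 16-bit assumption baked in) and quantize it to the word
    length the DAC accepts. Nothing here cares whether output_bits is 16 or
    24; the only 16-bit bottleneck is the one we choose."""
    rng = np.random.default_rng() if rng is None else rng
    full_scale = 2 ** (output_bits - 1) - 1
    scaled = decoded * full_scale
    # TPDF dither at the output word length, same idea as in the earlier sketch
    scaled += rng.uniform(-0.5, 0.5, decoded.shape) + rng.uniform(-0.5, 0.5, decoded.shape)
    return np.clip(np.round(scaled), -(full_scale + 1), full_scale).astype(np.int32)

# Usage, with a stand-in for whatever decoder the player uses:
# samples_float = some_decoder.decode(stream)          # floats, codec-agnostic
# pcm24 = render_for_dac(samples_float, output_bits=24)
```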
I dispute any contention that the 17th bit doesn't matter. Nothing in the linked article surprises me or seems relevant. Our ears are amazingly sensitive systems.