I think a clarification of terminology is called for here.
First, there are three distinct terms: "sample rate", "bit depth" and "bit rate".
1. "Sample rate" is measured in kHz; Red Book CDs use 44.1kHz. This follows from the Nyquist–Shannon sampling theorem: to capture all frequencies up to some limit, you must sample at more than twice that limit. Most humans can't hear anything beyond 20kHz or so (if you can hear beyond 20kHz don't blame the messenger, please talk to Nyquist), so sampling at a bit more than twice that frequency captures all the audible information reasonably well. Again, I am only relaying the design decision. Please don't shoot the messenger.
2. "Bit depth" is how many bits you use to store each sample. Again, Red Book CDs use 16 bits, which gives 2^16 = 65,536 different values or levels per sample.
3. "Bit rate" is measured in bits/second (or kilobits/second or megabits/second) and represents how fast the bitstream is transferred over a medium. Again, the Red Book CD bit rate is 1,411,200 bit/s, or roughly 1.4Mbit/s. This is calculated as 44100 * 16 * 2; the final (* 2) accounts for the left and right channels. Lossy compressed music such as MP3 or AAC is often specified by bit rate, as in "320kbps MP3" or "192kbps AAC". This signifies the level of compression applied (irreversibly) down from 1.4Mbit/s. A higher bit rate file retains more of the original information than a lower bit rate one, because the algorithm is "lossy": it throws away (supposedly inaudible) information to achieve the required level of compression, saving disk space being one of the chief reasons.
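The arithmetic above can be sketched in a few lines of plain Python (no audio libraries involved; the constants are just the Red Book values from the list):

```python
SAMPLE_RATE_HZ = 44_100   # Red Book CD sample rate (a bit over 2 x 20kHz)
BIT_DEPTH = 16            # bits per sample
CHANNELS = 2              # stereo: left + right

# Number of distinct levels a 16-bit sample can represent
levels = 2 ** BIT_DEPTH
print(levels)             # 65536

# Uncompressed CD bit rate: samples/sec * bits/sample * channels
cd_bit_rate = SAMPLE_RATE_HZ * BIT_DEPTH * CHANNELS
print(cd_bit_rate)        # 1411200 bit/s, i.e. roughly 1.4 Mbit/s

# How much a 320 kbps MP3 shrinks the stream relative to CD audio
mp3_bit_rate = 320_000
print(round(cd_bit_rate / mp3_bit_rate, 2))  # 4.41
```

So even the highest common MP3 bit rate carries less than a quarter of the bits of the uncompressed CD stream, which is why the encoder has to discard information.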
I hope all is clear as mud now.