A 1-bit DAC is the easiest and cheapest to implement, as it has only two possible output levels. With only two levels, the conversion itself is, in theory and ignoring other factors, perfectly linear: two points always lie on a straight line, so errors in the output levels appear as gain and offset errors rather than distortion. With current technology, raising the clock frequency into the MHz range is also cheap, as is the DSP needed to implement the oversampling and noise shaping. By contrast, a high-resolution, accurate resistor ladder (one that could match the ability of a delta-sigma DAC to convert 24-bit PCM data at less than 0.001% distortion) is more difficult and expensive to build. In essence, the complexity is moved from the analog domain to the digital one, where sound quality becomes a function of speed and transistor count, both of which can be increased at low cost, unlike analog accuracy. The use of oversampling also allows the reconstruction filter to be implemented digitally, making the analog lowpass filter simpler and cheaper.
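The principle can be illustrated with a minimal sketch of a first-order delta-sigma modulator (a simplified model, not any particular chip's design): the quantization error is fed back through an integrator, so the 1-bit output stream tracks the oversampled input on average, with the error pushed toward high frequencies where the analog lowpass filter removes it.

```python
def delta_sigma_1bit(samples):
    """Convert samples in [-1, 1] to a +/-1 bitstream (first-order modulator)."""
    integrator = 0.0
    feedback = 0.0
    out = []
    for x in samples:
        integrator += x - feedback           # accumulate input-minus-output error
        bit = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
        out.append(bit)
        feedback = bit                       # feed the output back
    return out

# A DC input of 0.25, heavily oversampled: the bitstream's average
# converges on the input value, even though each sample is only +/-1.
bits = delta_sigma_1bit([0.25] * 1000)
print(sum(bits) / len(bits))
```

Because the integrator stays bounded, the running average of the output can deviate from the input only by an amount that shrinks as more samples are taken, which is why heavy oversampling recovers the resolution that the 1-bit quantizer lacks.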
The reason multi-bit DACs are still used is that in a 1-bit format it is impossible to apply proper dither, i.e. dither that makes the quantization error uncorrelated with the input signal: with only two output levels there is no headroom, so the sum of the signal and the dither noise gets clipped. A low-resolution multi-bit DAC (e.g. 4-bit) avoids the dithering problem, reduces the total amount of quantization noise, and is a good compromise overall.