With RAID it is best to buy your replacement drives up front, at the time of the initial purchase, and from different vendors and manufacturing batches, since drives from the same batch are more likely to fail around the same time. You shouldn't be scrambling for replacements the second something goes wrong. This obviously adds to the cost rather significantly, but in my opinion it is the right way to do it.
Controller cards are good for offloading I/O away from the CPU and the rest of the system. If you want redundancy within a server that is doing something other than pure file storage, such as a server running SQL, then you'll want hardware RAID, since the CPU has enough to worry about already. Controller cards are also useful for situations where you want fast external storage, but even then, having a controller card isn't always about RAID; such cards often feature JBOD compatibility for a reason.
Solid software RAID solutions do exist, even at the enterprise level, and not having a controller makes recovery a lot simpler when a failure occurs: there is no proprietary card to track down and replace before you can read your array. Sure, you are going to eat up system resources with software RAID, but on a dedicated storage machine, a NAS for example, this isn't going to matter.
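To make the "eats CPU" point concrete: the core of single-parity RAID is just XOR across the data disks, recomputed on every write. This toy Python sketch is purely illustrative (real mdadm or ZFS code is far more involved), but it shows the kind of work the CPU takes on in a software RAID setup:

```python
# Toy illustration of single-parity (RAID-5-style) math done in software.
# Every stripe write means recomputing parity on the CPU, which is why
# software RAID costs system resources and hardware RAID offloads it.

def xor_blocks(blocks):
    """XOR byte-wise across equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three "data disks" each holding one stripe of a file.
disk0 = b"AAAA"
disk1 = b"BBBB"
disk2 = b"CCCC"
parity = xor_blocks([disk0, disk1, disk2])  # stored on the parity disk

# Disk 1 dies: rebuild its stripe from the survivors plus parity.
rebuilt = xor_blocks([disk0, disk2, parity])
assert rebuilt == disk1
```

Double-parity schemes like RAID-6 or RAID-Z2 add a second, more expensive parity computation on top of this, which is exactly the work that doesn't matter on a box whose only job is storage.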
For almost all home server data storage use, I would argue that hardware RAID is the wrong choice. For the cost of a hardware RAID controller card and its eventual replacement, you can build a dedicated NAS capable of handling quite a bit of load for around $200 on top of the drives themselves.
At home I run a dedicated FreeNAS machine based on an AMD Zacate board. It currently has 5x 2TB drives under RAID-Z2 (two-disk parity) with a flash drive for the OS. The total cost of the machine, including drives, was under $700. (This was back before the flooding drove drive prices up.)
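For anyone unfamiliar with RAID-Z2, the capacity trade-off is simple: two drives' worth of space goes to parity, so the pool survives any two simultaneous disk failures. The arithmetic for my setup (ignoring ZFS metadata overhead and TB-vs-TiB marketing differences):

```python
# Rough usable capacity of a RAID-Z2 pool: two drives' worth of space
# is consumed by parity, the rest holds data.
drives = 5
drive_tb = 2
parity_drives = 2  # RAID-Z2 tolerates any two simultaneous disk failures

usable_tb = (drives - parity_drives) * drive_tb
print(usable_tb)  # roughly 6 TB usable out of 10 TB raw
```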
The NAS not only handles my AD shares, but my server uses it for large storage as well. Both the server and the NAS have 2x Gigabit Intel NICs that I picked up for around $30 each at some point. With 2x teamed ports on each end and a managed switch in the middle, my transfer rates between the server and the NAS (and any other box with network teaming, such as my workstation) average 110MB/s, which isn't bad. (Reads can sometimes be faster depending on how much is held in memory versus waiting for disk I/O.)
Will it beat hardware RAID with enterprise disks? No, not really, but it doesn't have to. I have never been left wanting for more throughput, even when the NAS is handling streaming and general usage simultaneously.