A few replies, but it's late and I'm not going to quote the posts I'm replying to. Ooops.
It's really too bad there aren't more viable platforms for low-power computing. A simple ARM or MIPS CPU, a moderate amount of RAM, and a NIC with DMA are all you really need for this. Unfortunately, aside from the prebuilt NAS units, there aren't really any affordable products out there that offer it, and it's a shame we have to hack up Linksys routers and the like to get a simple low-power computer for an embedded device. VIA's EPIA boards are expensive, but might be worth it depending on your energy costs. It's really rather wasteful to use a full-fledged computer just to serve files to one or two client machines when it could be done at a tenth the power if only the hardware were more readily available.
If you're going to build it yourself, I'd choose hardware with modern interconnects. Yes, you could use a 386, but then you're limited to the speed of the 16-bit ISA bus for both your disks and your network. With a total available bus bandwidth that (IIRC) a single 100 Mbit NIC can saturate, you're in trouble, and you'll be lucky to get 5 MB/s out of it without RAID (probably much less, given the lack of DMA).

At the very least I'd recommend a P3-class CPU; that gets you bus-mastering PCI for your disks and network. Still not enough to saturate a gigabit LAN, especially with RAID, but it should get you into the 30-50 MB/s range sans RAID.

For top performance you'll need a PCIe or PCI-X NIC and disk controller. PCI-X has been around for a while, but you'll only find it on server-class boards, which are $$$, as are the NICs and disk controllers for that bus. PCIe is much newer, and it's hard to find NICs and disk controllers that use it. Your best bet if you want to go that route is a motherboard with an onboard PCIe NIC (this is getting easier, but many - most, even - still hang their onboard gigabit NICs off PCI). Most do put the disk controller on the faster bus, however. With this setup your bottleneck will be the disks, not the bus.

If you want to do software RAID5 you'll need some CPU speed; otherwise it doesn't really matter. RAID1 and RAID0 are trivial and barely touch the CPU at all. As was said, though, you can build a workable system for pretty much nothing aside from the cost of the disks; it just depends on what performance you're looking for. Streaming audio from it will work with virtually any hardware you can put together for free - it's a low-bandwidth application - as long as you're prepared to wait while the files slowly copy over initially.
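To put rough numbers on that, here's a back-of-envelope sketch. The figures are nominal spec peaks I'm quoting from memory, not real-world throughput - actual numbers vary a lot by chipset and implementation:

```python
# Back-of-envelope bus bandwidth comparison (theoretical peaks, rough numbers).
# The point: a disk or NIC can't move data any faster than the bus it sits on.

BUSES_MB_S = {
    "ISA (16-bit, 8 MHz)": 8,        # shared by your disks *and* your NIC
    "PCI (32-bit, 33 MHz)": 133,     # shared among all PCI devices
    "PCI-X (64-bit, 133 MHz)": 1066,
    "PCIe x1 (v1.x)": 250,           # per lane, per direction
}

GIGE_MB_S = 1000 / 8  # gigabit Ethernet line rate = 125 MB/s

for bus, peak in BUSES_MB_S.items():
    verdict = "can" if peak >= GIGE_MB_S else "cannot"
    print(f"{bus}: {peak} MB/s peak -> {verdict} carry a saturated GigE link")
```

Note that even plain PCI nominally beats gigabit on paper, but since the bus is shared between the NIC and the disk controller, real transfers eat that 133 MB/s twice.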
'Hardware' RAID these days is basically equivalent to software RAID, with the disadvantage that the on-disk format is proprietary and undocumented. Unless you're spending at least a few hundred bucks on a RAID controller, all of the actual RAID work happens in the driver, much like winmodems and most printers these days. With processor speed as cheap as it is, it's really not worth it, especially for a dedicated device. I'd recommend software RAID unless you're willing to plop down the coin for a good RAID card. Software RAID also lets you build arrays that span cards if you want many disks on several cheap controllers. I would put at most one disk per channel on any IDE controller, though; the IDE bus handles reading/writing from/to two devices at the same time very poorly, and sharing a channel will kill performance.
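To show how cheap the actual "RAID work" is, here's a toy sketch of RAID5-style XOR parity. This is illustrative only - a real software RAID driver is obviously far more involved - but the core math really is just XOR:

```python
# The heart of RAID5 is XOR parity -- trivial work for any modern CPU.
# Compute the parity block for a stripe, then rebuild a "lost" block
# from the survivors.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
parity = xor_blocks(data)            # parity block, written to the 4th disk

# Simulate losing disk 1: XOR the surviving blocks with the parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
print("rebuilt block:", recovered)   # b'BBBB'
```

RAID1 (mirroring) and RAID0 (striping) don't even need the XOR, which is why they barely touch the CPU.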
FreeNAS is based on FreeBSD, but the distribution includes everything you need for a working system. You don't need a separate OS; the stripped-down FreeBSD core is an integral part of it. All you need is a computer and FreeNAS to set one up. I don't know whether the NAS storage itself is accessible from the web interface (which is normally used for configuration), but it definitely is via FTP, Windows shares, NFS, etc. You should be able to set it up to serve other web data as well, but it might take some work.
Jumbo frames - I wouldn't really worry about it. GigE is still probably faster than your disks are anyway. You should be able to make 75MB/s on the wire easily without jumbo frames, and probably can better that with good hardware. If you're just concerned about streaming audio and video, even a cheap 100mbit network with a hub would suffice. Few media files reach anywhere near the 10MB/s or so you can realize with such a setup. As was mentioned, a vanilla GigE setup can easily do 5x that which is fast enough for any media currently available.
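The per-frame overhead math shows why jumbo frames buy so little on GigE. A quick sketch, assuming the standard 38 bytes of Ethernet framing overhead (preamble+SFD, header, FCS, inter-frame gap) and bare 20-byte IP and TCP headers, and ignoring ACK traffic and TCP options:

```python
# Why jumbo frames gain so little: per-frame overhead arithmetic.

LINE_RATE_MB_S = 125.0             # 1 Gbit/s on the wire
WIRE_OVERHEAD = 8 + 14 + 4 + 12    # preamble/SFD + eth header + FCS + gap
IP_TCP_HEADERS = 20 + 20           # bytes of headers inside the MTU

def tcp_goodput(mtu):
    """Best-case TCP payload rate for a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + WIRE_OVERHEAD
    return LINE_RATE_MB_S * payload / on_wire

print(f"MTU 1500: {tcp_goodput(1500):.1f} MB/s")   # ~118.7 MB/s
print(f"MTU 9000: {tcp_goodput(9000):.1f} MB/s")   # ~123.9 MB/s
```

About a 4% difference in the theoretical ceiling - and since your disks will top out well below either number, jumbo frames aren't worth chasing here.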
Data degradation - there are integrity checks at multiple layers of the network stack to detect this. When an error is detected, the stack will usually retransmit the data automatically until it arrives intact: you take a performance hit, but you won't get the wrong data. In the rare case where a retransmit isn't possible (or fails several times), you'll get an outright failure. If the bits you get out of the network are wrong, your network is seriously broken. Network hardware is designed to either work completely or fail completely; there is no middle ground (statistically speaking, anyway). The bit error rate of modern networks is astronomically low to begin with, and when errors do occur they should be caught by the layer 2 FCS (a CRC-32). If that somehow fails, TCP carries its own 16-bit checksum. Basically you stack three one-in-a-million chances on top of each other and end up with a one-in-ten-lifetimes chance of an error being missed and the wrong data getting passed up.
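A toy demonstration of those stacked checks, with zlib's CRC-32 standing in for the Ethernet FCS and a simplified RFC 1071 ones'-complement sum standing in for the TCP checksum (the real TCP checksum also covers a pseudo-header, omitted here):

```python
# Flip a single bit in a payload and both integrity checks notice.
import zlib

def internet_checksum(data):
    """RFC 1071 ones'-complement sum over 16-bit words (simplified)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

payload = b"some file data going over the wire"
fcs = zlib.crc32(payload)            # stand-in for the layer 2 FCS
csum = internet_checksum(payload)    # stand-in for the TCP checksum

corrupted = bytearray(payload)
corrupted[3] ^= 0x01                 # flip a single bit "in transit"

assert zlib.crc32(bytes(corrupted)) != fcs          # CRC-32 catches it
assert internet_checksum(bytes(corrupted)) != csum  # checksum catches it
print("single-bit flip detected by both checks")
```

A CRC-32 is guaranteed to catch any single-bit error (and all burst errors up to 32 bits); the 16-bit checksum is weaker, which is why it's the backstop rather than the first line of defense.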
And that post was too long. Sorry.