Manyak
Joined Aug 3, 2011
Quote:
Zpool is not RAID. It's more like SPAN and doesn't start to act like RAID until you use some level of RAIDZ (1-3) and mirroring.
Yeah, except if you're worried about maintaining uptime after an HDD failure you'd definitely be using RAIDZ. If you aren't, then this whole conversation is pointless, because you should be making separate backups no matter what.
Oh, I also forgot to mention one of the greatest advantages of ZFS over RAID: with RAID it's very hard to mix RAID levels beyond mirroring and striping, while ZFS can have any combination of RAIDZ and mirroring it chooses. You could, for example, have two RAIDZ-3 arrays of 10 disks (7 usable disks with 3-drive failure protection) that are mirrored, so if one RAIDZ zpool goes down the other is still running. You just get an error saying, "This disk is dead. You might want to replace it so you don't lose any data." It's not nearly as simple to do that with RAID.
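A rough sketch of that kind of mixed layout (all device names here are invented; note that within a single pool ZFS stripes across the vdevs, so mirroring two whole RAIDZ pools against each other would be done with replication like zfs send/receive rather than in one zpool create):

```shell
# Hypothetical: one pool built from two 10-disk RAIDZ-3 vdevs.
# Device names are made up; ZFS stripes data across the two vdevs.
zpool create tank \
    raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    raidz3 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19

# Check health; a dead disk shows up here with a "replace it" notice.
zpool status tank
```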
It works in exactly the same way. Whatever management software the RAID card comes with alerts you, locally and/or over the network or via email, and the card also starts beeping really loudly unless you turn that option off. That said, you are correct that a RAIDZ-3 equivalent doesn't exist for hardware solutions, but RAIDZ-1 = RAID5 and RAIDZ-2 = RAID6. They work in exactly the same way, distributing the parity information across all the drives in the array/pool.
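The distributed-parity idea both schemes share can be shown with a toy single-parity example (the RAID5/RAIDZ-1 case): XOR the data blocks together, and any one missing block can be rebuilt from the survivors.

```shell
# Toy demo of RAID5/RAIDZ-1 style single parity using XOR.
d1=170; d2=85; d3=204          # three "data blocks" (one byte each)
parity=$(( d1 ^ d2 ^ d3 ))     # parity block, stored on a fourth drive

# Drive holding d2 dies: rebuild it from the surviving blocks + parity.
recovered=$(( d1 ^ d3 ^ parity ))
echo "$recovered"              # prints 85
```

RAID6/RAIDZ-2 adds a second, independent parity calculation so any two blocks can be lost, but the principle is the same.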
There is some level of it that you can do in some of the more advanced implementations, but ZFS allows you to do any configuration you can think of. Should you wish, you could have ten 10-drive RAIDZ-1 zpools all acting as a RAIDZ-3 zpool. It'd be a ridiculous amount of data redundancy, but you could do it.
You can do that on a RAID card too. They're called nested RAID levels - RAID 0+1, 10, 50, and 60. If you want even more redundancy, such as the setup you described above, you can create several hardware-based RAID1 arrays and then create a software RAID6 array (or even a RAIDZ-3 array) on top of them. Either way, that much redundancy isn't really necessary whichever route you go.
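The hybrid idea - software RAID6 layered over hardware RAID1 pairs - would look roughly like this with Linux mdadm (all device names are hypothetical; /dev/md0 through /dev/md3 stand in for the existing RAID1 arrays):

```shell
# Hypothetical nested setup: a software RAID6 striped across four
# pre-built RAID1 mirrors (a "RAID61" of sorts). Names are invented.
mdadm --create /dev/md10 --level=6 --raid-devices=4 \
    /dev/md0 /dev/md1 /dev/md2 /dev/md3
```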
And no, RAID isn't the preamp with ZFS being the amp. That would imply that you'd mix the two, which you wouldn't do. You'd lose half the benefits of ZFS, keeping only the basic checksum and versioning support, and you'd lose the end-to-end data integrity that makes it worthwhile. That's why you never hear someone saying, "Buy this RAID box to use with your ZFS implementation." They always say, "Find a box that can do JBOD and let ZFS handle everything."
You wouldn't, but you could - which also applies to preamps in certain situations. But like I said, this is the most important difference between using RAID and zpools: with RAID you need everything hooked up to a single card or multiple identical cards (IF they even support that), while with ZFS you just need a bunch of disks connected any way you want. This gives ZFS the upper hand in cost and expandability. However, there is one small catch. If you want to use an external enclosure for lots of storage (we're talking business/enterprise level here, not your average USB enclosures), a lot of the time you will need a RAID card anyway to support an external connection via SAS or FC with an expander backplane in the enclosure. You don't necessarily have to use the RAID functions, but you may need the card anyway.
A better comparison would be that RAID-5 + RSYNC + some level of system-based checksum monitoring is equivalent to RAIDZ-1.
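The checksum-monitoring piece of that equation might look something like this (the paths are made up for the example; a scheduled rsync would cover the replication half):

```shell
# Record checksums once, then re-verify later to catch silent
# corruption -- a crude stand-in for ZFS's built-in checksumming.
mkdir -p /tmp/demo
echo "important data" > /tmp/demo/file.txt
( cd /tmp/demo && sha256sum file.txt > checksums.sha256 )

# Later: verify. Any flipped bit makes this report FAILED.
( cd /tmp/demo && sha256sum -c checksums.sha256 )

# The redundancy half would be a scheduled rsync, e.g.:
# rsync -a --delete /tmp/demo/ /backup/demo/
```

The big difference is that ZFS does this on every read and can repair from redundancy automatically, while this setup only detects problems when you run the check.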
And no, I wouldn't compare it to NTFS, HFS+ or EXT4. Why? Because you don't compare things based on the most basic level of shared functionality (they're all filesystems); you compare a thing to what it's actually used as, which is so much more. For that matter, I wouldn't even compare HFS+ to NTFS, because even with all the issues with HFS+ (or EXT4) it's still miles beyond NTFS. None are particularly modern, though. Now, if you really wanted to compare a filesystem to ZFS you'd need to compare it to BTRFS. It's got a lot of the same features as ZFS. It is, however, very early days and it's still little more than an experiment. It's not ready for prime-time.
Yes, other filesystems are really lacking in MANY ways compared to ZFS. No argument there. But still, ZFS is a filesystem and LVM combined. You compare the LVM part to other LVMs, and you compare the filesystem part to other filesystems. You can't say that "NTFS is better than RAID" because they have nothing to do with each other. That's what I'm trying to say here.
Now, if you REALLY wanted to make a pro for RAID over ZFS it's quite simple: You can't run a ZFS zpool attached to a Windows OS. There is no support, on any level. You can get support on Linux and OS X (not to mention Solaris, obviously) but that's really it. No one's taken the time to put it on Windows. Though, ZFS is mostly used on servers and in that context it only matters what the host runs. Be it something like FreeNAS (full, native support) or Ubuntu (kernel-extension support) you can use it just fine and then set up sharing with any other OS. With FreeNAS it's as simple as flipping a switch.
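On Ubuntu, for instance, the basic flow is roughly this (the pool and dataset names are hypothetical, and SMB sharing assumes Samba is installed):

```shell
# Hypothetical: install ZFS on Ubuntu, create a dataset on an
# existing pool "tank", and share it to Windows clients over SMB.
sudo apt install zfsutils-linux
sudo zfs create tank/share
sudo zfs set sharesmb=on tank/share
```

So Windows machines can still consume ZFS-backed storage over the network; they just can't run a zpool locally.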
Even in enterprises, ZFS is only used in SANs and dedicated storage servers. Front-end servers don't normally use it, as they don't need that sort of security or expandability, because the servers themselves are normally redundant. Most commonly a simple RAID1 setup is used instead, even on *nix servers.
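That common front-end setup is a one-liner with Linux software RAID, for example (partition names are hypothetical):

```shell
# Hypothetical: a simple two-disk RAID1 mirror for a front-end box.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
```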
Also, RAID is much easier to maintain when you want the server to be a hypervisor. In an ideal virtualization scenario you'd want each VM to have its own separate HDD/array so they don't slow each other down. Having multiple hardware RAID arrays is a lot simpler and easier to maintain than having to install a full host OS just to run ZFS, at which point you couldn't use a bare-metal hypervisor anyway.
So I would say my getting "it" was not the problem, no.
We've really strayed off-topic here...talking about FC SANs and ZFS when the OP was just worried about his WD MyBook failing.