What makes a hard drive reliable?
Sep 16, 2011 at 10:46 PM Post #76 of 80

 
Quote:
Zpool is not RAID. It's more like SPAN and doesn't start to act like RAID until you use some level of RAIDZ (1-3) and mirroring.
 
Yeah, except if you're worried about maintaining uptime after an HDD failure you'd definitely be using RAIDZ. If you aren't, then this whole conversation is pointless because you should be making separate backups no matter what :)
 
Oh, I also forgot to mention one of the greatest advantages of ZFS over RAID: with RAID it's very hard to mix levels beyond basic mirroring and striping, while ZFS can use any combination of RAIDZ and mirroring you choose. You could, for example, have two RAIDZ-3 arrays of 10 disks each (7 usable disks with three-drive failure protection) that are mirrored, so if one RAIDZ array goes down the other is still running. You just get an error saying, "This disk is dead. You might want to replace it so you don't lose any data." It's not nearly as simple to do that with RAID.
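To put rough numbers on that layout, here's a quick back-of-the-envelope sketch in Python (the 2 TB drive size is just an assumed figure for illustration):

```python
# Back-of-the-envelope capacity math for the layout described above.
# A RAIDZ-N vdev spends N disks' worth of space on parity, so a 10-disk
# RAIDZ-3 vdev holds roughly 7 disks' worth of data (ignoring metadata).

def raidz_usable_tb(disks: int, parity: int, disk_tb: float) -> float:
    """Approximate usable capacity of one RAIDZ vdev, in TB."""
    return (disks - parity) * disk_tb

DISK_TB = 2.0  # assumed drive size, purely for illustration

one_array = raidz_usable_tb(disks=10, parity=3, disk_tb=DISK_TB)  # ~14 TB
# Mirroring the two RAIDZ-3 arrays gives you the capacity of only one of
# them, but an entire array can die and the data stays online.
mirrored = one_array

print(f"Each RAIDZ-3 array: {one_array:.0f} TB usable")
print(f"Mirrored pair:      {mirrored:.0f} TB usable out of {20 * DISK_TB:.0f} TB raw")
```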
 
It works in exactly the same way. Whatever management software the RAID card comes with alerts you, locally and/or over the network or via email, and the card also starts beeping really loudly unless you turn that option off. That said, you are correct that a RAIDZ-3 equivalent doesn't exist in hardware solutions, but RAIDZ-1 = RAID5 and RAIDZ-2 = RAID6. They work in exactly the same way, distributing the parity information across all the drives in the array/pool.
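If anyone's curious what "distributing the parity" actually means, here's a toy single-parity example in Python - real RAID5/RAIDZ-1 works on fixed-size stripes across the physical drives and rotates which drive holds the parity, but the XOR math is the same:

```python
# Toy illustration of single parity, the idea behind RAID5 / RAIDZ-1:
# the parity block is the XOR of the data blocks, so any ONE lost block
# can be rebuilt by XOR-ing together everything that survived.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three "drives"
parity = xor_blocks(d1, d2, d3)          # parity block on a fourth "drive"

# Say the drive holding d2 dies: rebuild it from the survivors plus parity.
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
print("recovered block:", rebuilt)
```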
 
There is some level of it that you can do in some of the more advanced implementations, but ZFS allows you to do any configuration you can think of. Should you wish, you could have ten 10-drive RAIDZ-1 zpools all acting as a RAIDZ-3 zpool. It'd be a ridiculous amount of data redundancy, but you could do it.
 
You can do that on a RAID card too. They're called nested RAID levels - RAID 0+1, 10, 50, and 60. If you want even more redundancy, such as the setup you described above, you can create several hardware-based RAID1 arrays and then create a software RAID6 array (or even a RAIDZ-3 array) on top of them. Either way, it's not really necessary.
 
And no, RAID isn't the preamp with ZFS being the amp. That would imply that you'd mix the two, which you wouldn't do. You'd lose half the benefits of ZFS, keeping only the basic checksum and versioning support, and you'd lose the real data-integrity guarantees. That's why you never hear someone saying, "Buy this RAID box to use with your ZFS implementation." They always say, "Find a box that can do JBOD and let ZFS handle everything."
 
You wouldn't, but you could - which also applies to preamps in certain situations. But like I said, this is the most important difference between using RAID and zpools: with RAID you need everything hooked up to a single card or multiple identical cards (if they even support it), while with ZFS you just need a bunch of disks connected any way you want. This gives ZFS the upper hand in cost and expandability. However, there is one small catch - if you want to use an external enclosure for lots of storage (we're talking business/enterprise level here, not your average USB enclosures), a lot of the time you will need a RAID card anyway to support an external connection via SAS or FC to an expander backplane in the enclosure. You don't necessarily have to use the RAID functions, but you may need the card anyway.
 
A better comparison would be that RAID-5 + RSYNC + some level of system-based checksum monitoring is equivalent to RAIDZ-1.
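For reference, the "system-based checksum monitoring" part is the piece most people skip, and it's basically just a stored manifest of file hashes that you re-verify on a schedule. A minimal sketch (the /srv/music path and manifest filename are placeholders):

```python
# Bare-bones bit-rot check: hash every file under a directory and compare
# against a previously saved manifest. It's a manual stand-in for the
# end-to-end checksumming ZFS does automatically on every read.
import hashlib
import json
import os

ROOT = "/srv/music"          # placeholder path to the data being protected
MANIFEST = "checksums.json"  # placeholder manifest location

def hash_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> dict:
    return {
        os.path.join(dirpath, name): hash_file(os.path.join(dirpath, name))
        for dirpath, _, names in os.walk(root)
        for name in names
    }

if __name__ == "__main__":
    current = scan(ROOT)
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            old = json.load(f)
        # Note: a mismatch can also just mean the file was legitimately
        # edited, which is exactly why this is weaker than ZFS checksums.
        changed = [p for p in old if p in current and current[p] != old[p]]
        missing = [p for p in old if p not in current]
        print(f"{len(changed)} changed, {len(missing)} missing")
    with open(MANIFEST, "w") as f:
        json.dump(current, f)
```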
 
And no, I wouldn't compare it to NTFS, HFS+, or EXT4. Why? Because you don't compare things based on the most basic level of shared functionality (they're all filesystems); you compare them based on what they're actually used for, which in ZFS's case is so much more. Then again, I wouldn't even compare HFS+ to NTFS, because even with all the issues with HFS+ (or EXT4, for that matter) it's still miles beyond NTFS. None of them are particularly modern, though. Now, if you really wanted to compare a filesystem to ZFS you'd need to compare it to BTRFS. It has a lot of the same features as ZFS. It is, however, still very early days and little more than an experiment. It's not ready for prime time.
 
Yes, other filesystems are really lacking in MANY ways compared to ZFS. No argument there. But still, ZFS is a filesystem and an LVM combined. You compare the LVM part to other LVMs, and you compare the filesystem part to other filesystems. You can't say that "NTFS is better than RAID" because they have nothing to do with each other. That's what I'm trying to say here.
 
Now, if you REALLY wanted to make a pro for RAID over ZFS it's quite simple: You can't run a ZFS zpool attached to a Windows OS. There is no support, on any level. You can get support on Linux and OS X (not to mention Solaris, obviously) but that's really it. No one's taken the time to put it on Windows. Though, ZFS is mostly used on servers and in that context it only matters what the host runs. Be it something like FreeNAS (full, native support) or Ubuntu (kernel-extension support) you can use it just fine and then set up sharing with any other OS. With FreeNAS it's as simple as flipping a switch.
 
Even in enterprises, ZFS is only used in SANs and dedicated storage servers. Front-end servers don't normally use it, as they don't need that sort of security or expandability because the servers themselves are normally redundant. Most commonly a simple RAID1 setup is used instead, even on *nix servers.
 
Also, RAID is much easier to maintain when you want the server to be a hypervisor. In an ideal virtualization scenario you'd want each VM to have its own separate HDD/array so they don't slow each other down. Having multiple RAID arrays is a lot simpler and easier to maintain than installing a full host OS just to get ZFS, because then you wouldn't be able to use a bare-metal hypervisor.
 
So I would say my getting "it" was not the problem, no.


 
We've really strayed off-topic here...talking about FC SANs and ZFS when the OP was just worried about his WD MyBook failing :p
 
Sep 16, 2011 at 10:51 PM Post #77 of 80
Seems relevant to the thread title to me.  

 
Sep 16, 2011 at 11:45 PM Post #78 of 80


Quote:
Seems relevant to the thread title to me.  



Well yeah, but talk about overkill. Suggesting that someone set up a Solaris server with a 20-drive RAID-Z3 array over iSCSI because he's worried that his MyBook might fail is like suggesting that you carry around an RSA B-52 and a 1kVA UPS to use as your portable amp.

 
Sep 17, 2011 at 1:41 AM Post #79 of 80
I never suggested that. I said you could if you so wanted. My idea of the perfect home server costs $900 and includes six 2 TB drives in a ZFS RAIDZ-2 zpool, giving you 8 TB of usable storage and two-disk failure protection. From what I've seen, the speeds over LAN on that setup are pretty nice as well.
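For anyone wondering what setting that up actually looks like, the pool creation itself is a one-liner; here's a hedged sketch (pool name and device paths are made up, you'd run it as root on a box with OpenZFS installed, and really running it wipes whatever is on those disks):

```python
# Sketch of creating the six-drive RAIDZ-2 pool described above.
# Pool name and device paths are placeholders; needs root and OpenZFS,
# and actually running it destroys whatever is on those disks.
import subprocess

POOL = "tank"  # hypothetical pool name
DISKS = [f"/dev/disk/by-id/ata-example-{i}" for i in range(1, 7)]  # placeholders
DRY_RUN = True  # flip to False to really create the pool

cmd = ["zpool", "create", POOL, "raidz2", *DISKS]
if DRY_RUN:
    print("would run:", " ".join(cmd))
else:
    subprocess.run(cmd, check=True)
```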
 
Manyak, I'm going to avoid quoting everything (for brevity) but I'm sure you can figure out what bits are responding to what.
 
————
 
Yes, but my point wasn't that you shouldn't bother with RAIDZ in a zpool; it was that a zpool doesn't require RAIDZ or a mirrored pair, and so isn't directly comparable to RAID, as you had implied.
 
————
 
Yes, they do work in a very similar fashion, with that I agree. But I wasn't saying they didn't. I was saying that the great difference between the two is that with RAID, aside from the standard levels, you can't really mix and match to build your own custom solution - certainly not as easily as you can with ZFS. In fact, the only real limiter with ZFS is that it requires a lot of RAM to run at speed. However, for large arrays you can simply dedicate an SSD to cache instead of relying on system memory, which keeps the speed up.
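And the SSD-as-cache part really is a single command against an existing pool - a sketch with made-up pool and device names:

```python
# Attaching an SSD as a read cache (L2ARC) to an existing pool.
# "tank" and the device path are placeholders; needs root and OpenZFS.
import subprocess

DRY_RUN = True  # flip to False to really add the cache device

cmd = ["zpool", "add", "tank", "cache", "/dev/nvme0n1"]
if DRY_RUN:
    print("would run:", " ".join(cmd))
else:
    subprocess.run(cmd, check=True)
```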
 
————
 
Sure, you can create hardware RAID arrays and then fill in the gap with a software RAID, but then you lose all the benefits of hardware RAID and add all the problems of a software RAID. You are literally getting the worst of both worlds, and it drastically increases the chance of a massive failure. So while it's possible (if rather complicated), it's by no means optimal or advisable. ZFS, on the other hand, was designed specifically to do exactly these sorts of things.
 
————
 
Actually, there is no need for a RAID card. A simple port-multiplier expansion card will do the trick, or any basic controller card that supports JBOD rather than the full set of RAID features. Oh, and if you're doing a multi-drive installation and you're using USB, you're doing it wrong for so, so many reasons.
 
————
 
But ZFS isn't multiple parts. It's one thing. You can compare it both to other filesystems and to different RAID implementations. Saying you can't compare the two is just an attempt to avoid a comparison that doesn't always come out in RAID's favor. Though sometimes it does - it depends on the intended use. A hardware RAID implementation still has the edge in speed, but if redundancy and data integrity are of equal concern then ZFS offers a viable alternative. Unless you want it directly attached to a Windows installation. Then it fails.
 
————
 
You know with ZFS there's no reason you couldn't have multiple installations each with their own zpool on a box of disks, right? In that sense, anything RAID can do ZFS can do. As for maintainability, that's debatable. The reason I think ZFS isn't used more in the enterprise is because, in my experience, large companies do not like open source - there's no one to turn to for help if things go wrong. The reason Microsoft is so popular in the enterprise world has nothing to do with the quality of their products (which I will maintain are never as good as the competition's) and more to do with the fact that Microsoft dedicates a lot of resources to supporting their enterprise customers.
 
————
 
But yeah, I can see this is going a bit beyond the original post, but we did that a few pages back anyway. Now we're talking about strategies to make sure it doesn't happen in the future. In that sense, since we're talking about home users, I would suggest that unless you are completely uncomfortable with rolling your own solution (which some people are), a NAS running ZFS is a better option than RAID for two reasons. Firstly, it's much more stable in terms of data integrity, and given how most audiophiles feel about their music libraries I'd say that's paramount. Secondly, extreme speed is not as much of a concern. Even if the array ran at 50 Mbps that would still be more than enough to seamlessly stream HD audio content. Of course, the system I have in mind has been known to run at twice that.
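To put numbers on the streaming point (straight PCM arithmetic, not measurements from any particular setup):

```python
# Even "extreme" lossless audio is tiny next to a 50 Mbps link.
# Uncompressed stereo PCM bitrate = sample_rate * bit_depth * channels.

def pcm_mbps(sample_rate: int, bit_depth: int, channels: int = 2) -> float:
    return sample_rate * bit_depth * channels / 1_000_000

print(f"16/44.1 CD audio: {pcm_mbps(44_100, 16):.2f} Mbps")    # ~1.4 Mbps
print(f"24/96 hi-res:     {pcm_mbps(96_000, 24):.2f} Mbps")    # ~4.6 Mbps
print(f"24/192 hi-res:    {pcm_mbps(192_000, 24):.2f} Mbps")   # ~9.2 Mbps
# All comfortably under 50 Mbps, and FLAC compression roughly halves these.
```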
 
Sep 17, 2011 at 10:04 AM Post #80 of 80


Quote:
I never suggested that. I said you could if you so wanted. My idea of the perfect home server costs $900 and includes six 2 TB drives in a ZFS RAIDZ-2 zpool, giving you 8 TB of usable storage and two-disk failure protection. From what I've seen, the speeds over LAN on that setup are pretty nice as well.
 
I wasn't saying that you did, I was just putting things in perspective for others. :)
 
By the way, my home server is actually running ZFS. I've got 8x1TB drives in various RAID arrays for VMs and 8x2TB drives in RAIDZ-2, all attached to a RAID card with battery-backed cache and connected to the network via a quad-gigabit NIC with LACP. I've also got a second server - it doesn't have any major storage (just a single RAID1 array), but it serves as a failover for services that I need 100% uptime for (AD, DNS, Group Policy, RADIUS, VPN, and so on).
 
————
 
Sure, you can create hardware RAID arrays and then fill in the gap with a software RAID, but then you lose all the benefits of hardware RAID and add all the problems of a software RAID. You are literally getting the worst of both worlds, and it drastically increases the chance of a massive failure. So while it's possible (if rather complicated), it's by no means optimal or advisable. ZFS, on the other hand, was designed specifically to do exactly these sorts of things.
 
Yeah it's not something you should ever do, I was just pointing out that it's possible. :)
 
————
 
Actually, there is no need for a RAID card. A simple port-multiplier expansion card will do the trick, or any basic controller card that supports JBOD rather than the full set of RAID features. Oh, and if you're doing a multi-drive installation and you're using USB, you're doing it wrong for so, so many reasons.
 
Yes, but that expansion card in the enclosure has to connect to a SAS port on the host in the first place, and no motherboard that I've seen has SFF-8088 connectors on the back. So you need a card that has them, and most cards that do are RAID cards. Yes, there are HBAs that have them too, but they're not very common.
 
And yeah let's not even start with USB :p
 
————
 
You know with ZFS there's no reason you couldn't have multiple installations each with their own zpool on a box of disks, right? In that sense, anything RAID can do ZFS can do. As for maintainability, that's debatable. The reason I think ZFS isn't used more in the enterprise is because, in my experience, large companies do not like open source - there's no one to turn to for help if things go wrong. The reason Microsoft is so popular in the enterprise world has nothing to do with the quality of their products (which I will maintain are never as good as the competition's) and more to do with the fact that Microsoft dedicates a lot of resources to supporting their enterprise customers.
 
The problem with using ZFS for that isn't that it can't do it, but that it becomes a royal pain in the ass to use it. Hypervisors such as vSphere don't support ZFS directly, so you run into a similar problem as with Windows. You either have to give up the speed and versatility of the hypervisor and install *nix as your host OS, work around it using two separate storage servers that mirror each other (remember, you want uptime!), or use RAID. That last option is almost always the easiest and most cost-effective one.
 
Enterprises do actually use it a lot, but just like I said, it's usually limited to storage servers. You've always got to weigh the costs against the benefits, and sometimes that extra bit of data integrity isn't worth the effort (and time is money!). Remember: both systems protect against hard drive failures; ZFS just adds protection against data corruption. The chances of a front-end server suffering data corruption are pretty much zero, and if it does happen it's very easy to restore from a backup. It's the storage servers - where really important data is constantly written and read - that benefit.
 
On a side note, there's one really, really nice thing in all of MS's stuff, which is Active Directory & Group Policy. It's the only part of their server software that I think is better than the *nix equivalents. Compared to Linux/LDAP it's extremely easy to set up and maintain, and it's a lot more powerful. Besides, would you want to answer calls from 1000 people asking where the Start button went? :p


 
Sorry I cut out a lot of your post but at this point we're not even disagreeing on anything, we're just arguing about how we're arguing and nitpicking about technicalities that make no real difference.
 
Bottom line: RAID protects against HDD failures, ZFS does the same and adds protection against data corruption, and you pick the one that works best for your needs.
 
