Quote:
Originally Posted by xnor
Heh yeah, I'm sure you know what a checksum is (and how such algorithms work)? Heard of AccurateRip?
You can easily verify if something went wrong (Not that there is a need to!) and this actually happens automatically at rip-time if you use e.g. EAC or similar tools.
Most filesystems also incorporate checksums or similar algorithms to detect errors.
You're totally missing the point, and you've got some things wrong, too.
Wrong: Red Book has no verifiable checksums. Go check. A DVD carries a true filesystem, but Red Book is not a filesystem; it's simply a stream of audio data. (CD-DA does include CIRC error correction at the frame level, but that only corrects or conceals read errors on the fly; there is no per-block checksum you can verify a rip against.) This is why "ripping" is an inexact art: you can confirm that you get the same block on every re-read, but that does NOT prove it's the true bit-block.
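A minimal sketch of what rippers actually do here, assuming a hypothetical `read_block` callable standing in for the drive: re-read the same block several times and keep the majority result. Note what the confidence figure really means: agreement across reads, not agreement with the master.

```python
import hashlib
from collections import Counter

def rip_block(read_block, lba, retries=5):
    """Re-read one audio block several times and keep the most common
    result. Consistency across reads only shows the drive is repeatable;
    with no per-block checksum on the disc, it cannot prove the bytes
    match the original master. (read_block is a hypothetical callable
    mapping a logical block address to raw bytes.)"""
    reads = [read_block(lba) for _ in range(retries)]
    digests = Counter(hashlib.sha256(r).hexdigest() for r in reads)
    best_digest, votes = digests.most_common(1)[0]
    for r in reads:
        if hashlib.sha256(r).hexdigest() == best_digest:
            # Return the winning data plus the fraction of reads agreeing.
            return r, votes / retries
```

Even a 5/5 agreement here is repeatability, not ground truth, which is the whole point above.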
How you missed the point: even if you used ZFS and had lots of checking done on disk, how on earth does that fix the errors that hit the RAM chips while the data sits in memory?
The only way to truly fix this is end-to-end data integrity, where integrity metadata is checked at EVERY hand-over: device drivers, RAM, disks, everything. Linux has some work in that area (its block-layer data integrity framework for T10 DIF/DIX; I've not seen the equivalent in FreeBSD yet), and I believe Solaris can do that kind of checking. Regular Microsoft OSes can't; they simply do not have the architecture to pass "hand-off" metadata along with the data. It's only been in Linux for a year or so, or something very recent like that.
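The hand-over idea above can be sketched like this: a checksum travels with the payload, and every stage re-verifies it before passing the data on. This is only an illustration of the concept; the stage functions are made-up placeholders, while the real Linux mechanism referred to is the block-layer integrity framework for T10 DIF/DIX.

```python
import zlib

def tag(payload: bytes):
    # Origin attaches integrity metadata to the payload.
    return payload, zlib.crc32(payload)

def verify(payload: bytes, crc: int):
    # Each stage re-checks the tag before handing the data onward,
    # so corruption introduced at any hop is caught at the next one.
    if zlib.crc32(payload) != crc:
        raise IOError("integrity check failed at hand-over")
    return payload, crc

def pipeline(payload: bytes) -> bytes:
    tagged = tag(payload)        # application attaches the tag
    tagged = verify(*tagged)     # e.g. driver re-checks before DMA
    tagged = verify(*tagged)     # e.g. controller re-checks before write
    return tagged[0]
```

The design point is that the check rides with the data across every boundary, which is exactly what on-disk-only checksums (ZFS and the like) cannot give you for errors that happen in RAM mid-flight.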
At the very least, having ECC will minimize errors to the data WHILE it is in RAM.
Lots of things go through RAM, and making the RAM a bit more trustworthy can only help. The "speed loss" may matter for games, but I'm not talking about gaming here. I never was.