Please help me with my survey (about storage)
Jul 7, 2014 at 4:38 PM Post #226 of 255
I think the more technology improves, the more good choices become available to more people.  Now if our own government would get out of the way and stop impeding progress, we could speed everything up and really open up this world wide web... The picture on the right is my computer from 6-2-11 when I had 3 TB; now I have 6 TB and I am going way overboard...
 
Jul 7, 2014 at 5:30 PM Post #227 of 255
I think the more technology improves, the more good choices become available to more people.  Now if our own government would get out of the way and stop impeding progress, we could speed everything up and really open up this world wide web... The picture on the right is my computer from 6-2-11 when I had 3 TB; now I have 6 TB and I am going way overboard...

Lovely, I see you are (were?) another ASUS fan :).
 
Jul 7, 2014 at 6:38 PM Post #228 of 255

Norfolk, VA area...
Read through your post and agree... I would also like to see that Samsung 3D tech work well and coordinate with other components.
Craftyhack, you may want to address how fast some of these "new" technologies are going to be obsolete in five years: problems like storage compliance changing, companies being gobbled up or going bankrupt, proprietary designs, even simple design factor limitations (like heat dissipation), rip-off conglomerates, and my favorite, government interference/intervention...
 
Jul 7, 2014 at 6:58 PM Post #229 of 255
 
Norfolk, VA area...
Read through your post and agree... I would also like to see that Samsung 3D tech work well and coordinate with other components.
Craftyhack, you may want to address how fast some of these "new" technologies are going to be obsolete in five years: problems like storage compliance changing, companies being gobbled up or going bankrupt, proprietary designs, even simple design factor limitations (like heat dissipation), rip-off conglomerates, and my favorite, government interference/intervention...

Ya, I was going to keep going, but the post was getting too long :D.  As far as compliance, yep, we are having to replace many arrays with new versions that support hardware encryption, where that same tech has now trickled down to consumer drives and OSes (256-bit AES, definitely good enough).  In the enterprise space we have to pay licensing for hardware encryption that a consumer gets for free in any current-gen SSD (aka rip-off conglomerates, whose support we pay 20% a year for and which pretty much sucks relative to how much we pay, but whatever :/).

Ironically OCZ, probably the most notorious for crappy SSDs in the beginning, went bankrupt and got acquired.  They had good SSDs too... just too many variations if ya ask me, too dang confusing; I think they were trying too hard with too many designs and options at once, or their product management wasn't aligned... or something like that.  But they made a huge contribution to getting SSDs into the hands of consumers, so I am glad they were around!

Heat is probably the biggest enemy in a data center; we have had our CRACs go out (a loooong time ago), and that was very, very bad :).  I think cooling is the #1 energy consumer in the DC after the compute gear itself, is currently the limiting factor on density, and makes DC design SO much more complicated.  I imagine the design of portable electronics faces the same problem... which solid state helps solve handily... vs. mech HDDs anyway.
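For anyone curious how big the cooling slice actually is, here is a minimal back-of-the-envelope sketch in Python using PUE (total facility power divided by IT power).  The PUE values and the share of overhead assumed to be cooling are illustrative assumptions, not figures from our DC:

```python
# Back-of-the-envelope: how much of a data center's power goes to cooling,
# using PUE (total facility power / IT equipment power).  The PUE values and
# the assumption that ~80% of the non-IT overhead is cooling are illustrative,
# not measurements from any particular DC.

def cooling_share(pue, cooling_fraction_of_overhead=0.8):
    """Estimate the fraction of total facility power spent on cooling."""
    overhead = (pue - 1.0) / pue          # non-IT share of total facility power
    return overhead * cooling_fraction_of_overhead

for pue in (2.0, 1.6, 1.2):               # legacy DC, typical 2014 DC, very efficient DC
    print(f"PUE {pue}: ~{cooling_share(pue):.0%} of total facility power goes to cooling")
```

At a legacy-ish PUE of 2.0, roughly 40 cents of every power dollar is cooling, which is exactly why heat ends up limiting density long before anything else.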
 
Jul 7, 2014 at 8:45 PM Post #230 of 255

Very well said.
In the area I live in, we have the world's largest Navy port and a NATO presence.
There are so many ways the security and daily operations of our bureaucracies could be made more efficient and economically feasible.
You would think that a simple combination of private and government research groups working together could solve this and share the new discoveries and methods.
... I believe a new type of material will be found; something like a ceramic wafer composite sandwiched with something else may be in our future to help combat many of these problems.
Naturally (like with the drug companies), we will be paying for both that research and the marketing.
AND again at launch, and then again when the updates start coming every year... (can someone say Microsoft).
 
out
 
Jul 8, 2014 at 1:27 AM Post #231 of 255
Ya, I was going to keep going, but the post was getting too long :D.  As far as compliance, yep, we are having to replace many arrays with new versions that support hardware encryption, where that same tech has now trickled down to consumer drives and OSes (256-bit AES, definitely good enough).  In the enterprise space we have to pay licensing for hardware encryption that a consumer gets for free in any current-gen SSD (aka rip-off conglomerates, whose support we pay 20% a year for and which pretty much sucks relative to how much we pay, but whatever :/).

Ironically OCZ, probably the most notorious for crappy SSDs in the beginning, went bankrupt and got acquired.  They had good SSDs too... just too many variations if ya ask me, too dang confusing; I think they were trying too hard with too many designs and options at once, or their product management wasn't aligned... or something like that.  But they made a huge contribution to getting SSDs into the hands of consumers, so I am glad they were around!

Heat is probably the biggest enemy in a data center; we have had our CRACs go out (a loooong time ago), and that was very, very bad :).  I think cooling is the #1 energy consumer in the DC after the compute gear itself, is currently the limiting factor on density, and makes DC design SO much more complicated.  I imagine the design of portable electronics faces the same problem... which solid state helps solve handily... vs. mech HDDs anyway.


Totally agree with the heat comment. It's why Dell and lots of other vendors are pushing for running gear hotter: the longevity you lose is more than offset by the cooling cost savings. But I'd have to disagree on compute power being the limit on density; it's still SAN storage on a raw capacity basis, where SSD just isn't economical. When you need 100 TB for your data warehouse cube, SSDs just don't cut it, unfortunately...
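To put rough numbers on that 100 TB point, here is a minimal Python sketch of raw media cost.  The drive prices are ballpark mid-2014 street prices used as assumptions, and it ignores RAID overhead, controllers, power, and support contracts:

```python
# Rough raw-media cost for a 100 TB warehouse on SSD vs. HDD.  Drive prices are
# ballpark mid-2014 street prices used as assumptions; RAID/hot-spare overhead,
# controllers, power, and support contracts are all ignored.

capacity_needed_tb = 100

drives = {
    "1 TB SATA SSD (~$450)":     {"capacity_tb": 1, "price_usd": 450},
    "2 TB 2.5-inch HDD (~$140)": {"capacity_tb": 2, "price_usd": 140},
}

for name, d in drives.items():
    count = -(-capacity_needed_tb // d["capacity_tb"])   # ceiling division
    total = count * d["price_usd"]
    print(f"{name}: {count} drives, ~${total:,} in raw media")
```

Call it roughly $45k vs. $7k before you even touch enclosures or licensing, which is why spinning rust still wins on raw capacity.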
 
Jul 8, 2014 at 4:15 AM Post #232 of 255
Totally agree with the heat comment. It's why Dell and lots of other vendors are pushing for running gear hotter: the longevity you lose is more than offset by the cooling cost savings. But I'd have to disagree on compute power being the limit on density; it's still SAN storage on a raw capacity basis, where SSD just isn't economical. When you need 100 TB for your data warehouse cube, SSDs just don't cut it, unfortunately...

Sorry, I meant that we have had challenges with density where heat has in the past limited our compute density and still does (once we redesigned the data center to eliminate power delivery per cubic foot as the bottleneck, several years ago now).  In one major example, mech HDDs were what killed us: heat caused premature failures, and the excessive heat was caused by server designs that were too dense and didn't allow for proper cooling of the mech HDDs.  This was even in a well designed DC layout (e.g. cold rows maintained at spec, where we could easily have lowered the spec given our CRACs... but cold row temp can only go so low before other concerns crop up).  I am not including stupid things like channeling cooled air directly from the CRAC through narrow ducting straight to poorly designed blade server chassis intakes, which would be a crazy requirement IMO for commodity server blades :).

I am referring specifically to the now extinct BL10s from HP, which crammed two 2.5" drives side by side into the front of the blade, with the rest of the server components crammed into the slim enclosure between the drives at the front and the exhaust end, and then those blades were crammed together into the blade chassis.  Basically you were staring at nothing but 2.5" drives when looking at a fully populated BL10 blade chassis... without the possibility of fans inside the blades to aid in pulling air over the drives.  There just wasn't room for fans big enough to do anything, as the blade width (when looking at the blade inserted into the chassis and racked) was equal to the height of a 2.5" drive, and the height of the blade was equal to the width of two 2.5" drives.  Basically, looking at a rack with a few populated BL10 chassis was like looking at 2.5" arrays in a SAN, but without the cooling architecture of a SAN.

Given that 2.5" drives running at 7200 RPM (or faster) aren't known for their ability to stay cool without good ventilation, and given the problem was made worse by the heat generated by the rest of the blade preventing any other type of effective conduction or convection cooling for the HDDs (rather heating them up more), those supposedly high-temp-safe enterprise-class 2.5" drives dropped like flies.  We always configured each HDD pair in a BL10 in RAID 1 (a BL10 could only hold two 2.5" drives); sometimes we would lose the second drive before the mirror even finished rebuilding after replacing the first failed one, probably due to the high I/O of the rebuild... it was that bad.  BTW, I am talking low-I/O servers, given that is what BL10s were designed for, like very low traffic web servers, nothing that would have made a 2.5" drive overheat in normal (non-crappy blade design) scenarios.  Had those 2.5" drives been SSD (or SAN booting, if that was even possible with BL10s, not sure as we didn't do much SAN booting back then)... I am thinking we wouldn't have had the same issues.  But in the combination I described, ultimately it was heat (and bad server design relative to cooling) that limited our compute density until we could get rid of all of the BL10s.  That is the long version of what I meant by heat limiting density.
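For a feel of how cooked those drives must have been, here is a toy exponential-failure sketch in Python.  The MTBF figures and the 6-hour rebuild window are assumptions for illustration, not measured BL10 numbers:

```python
# Toy exponential-failure model: odds of the surviving drive in a RAID 1 pair
# dying before the rebuild finishes, for a few assumed effective MTBFs.  The
# MTBF figures and the 6-hour rebuild window are illustrative assumptions,
# not measured BL10 numbers.
import math

def p_second_failure(mtbf_hours, rebuild_hours):
    """P(the surviving drive fails before the rebuild completes)."""
    return 1.0 - math.exp(-rebuild_hours / mtbf_hours)

rebuild_hours = 6  # small 2.5" mirror, slowed down by production + rebuild I/O
for label, mtbf in [("spec-sheet MTBF, 1.4M hours", 1_400_000),
                    ("heat-stressed drive, 50k hours", 50_000),
                    ("thoroughly cooked drive, 500 hours", 500)]:
    print(f"{label}: {p_second_failure(mtbf, rebuild_hours):.4%} chance per rebuild")
```

At spec-sheet MTBF a double failure during a rebuild should basically never happen, so when it becomes a "sometimes" event, the effective MTBF of the surviving drives has collapsed by orders of magnitude, which tells you how hard those drives were being cooked.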
 
Regarding SAN storage, we aren't running into density issues as was forecasted years ago, at least not to the same degree... not even in the same league as we anticipated.  This is given the changes in app architecture that have supported (required, really) moving to DAS.  The biggest example is what I will call our big mama DH (sort of a DH, but not really :D): data spread across Hadoop clusters indexed via Solr.  This new stuff supports our new application architecture as well as new features in our current apps that use real-time analytics, which in turn leverage HUGE near-real-time-updated reference data sets to do some really cool stuff.  All of this is only possible with Hadoop or similar technologies, which do not recommend or even really support SAN-presented storage, so we dodged the SAN density constraint again.  We certainly could not do the same things via any RDBMS architecture that I am aware of, even with big data appliances facilitating; this is due to the MANY bottlenecks in a typical RDBMS architecture that one hits one right after the other.  For this new distributed data processing and storage architecture we haven't really needed SSD (yet) given the massively parallel nature of processing in the Hadoop clusters; our bottlenecks are elsewhere... like the network.  Even on 10GbE we hit bottlenecks in the Hadoop clusters, especially when crossing subnets, so we are moving to 40 and 100 now at the appropriate layers of the network to mitigate.  Who knew THAT was coming 10 years ago... fantastically magical and cool tech!
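A minimal sketch of why the fabric, not local disk, becomes the bottleneck: the time to push a large re-replication or shuffle burst through a single link at different speeds.  The 2 TB per node figure and the 70% usable-throughput factor are assumptions, just to show the shape of it:

```python
# Rough feel for why the network, not local disk, is the Hadoop bottleneck:
# time to push a large data movement through a single link at different
# speeds.  The 2 TB figure and the 70% usable-throughput factor are
# assumptions for illustration only.

def transfer_minutes(data_tb, link_gbps, efficiency=0.7):
    bits = data_tb * 8e12                      # decimal TB -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # protocol + congestion overhead
    return bits / usable_bps / 60

data_tb = 2  # e.g. re-replicating a dead node's HDFS blocks, or a big shuffle
for link_gbps in (10, 40, 100):
    print(f"{link_gbps} GbE: ~{transfer_minutes(data_tb, link_gbps):.0f} min to move {data_tb} TB over one link")
```

Meanwhile a node full of commodity 7.2K spindles can collectively stream about as much as a single 10GbE link can carry, so the disks end up waiting on the network rather than the other way around.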
 
We were certainly worried about SAN and were exploring the many options to scale SAN/NAS usage... but fundamental changes in app and data processing architectures just made that worry more or less disappear overnight.  We had new and different things to worry about, but SAN density or I/O with the drives themselves was never *really* one of them for very long, *especially* once SSD came along :).  That said, if SSD were cheaper than HDD, it would be a no-brainer for us to use SSD for DAS in these servers as well; the reduction in power draw and heat generation alone for our thousands of servers in these clusters would be realized immediately and would be very nice indeed!!  The unneeded performance boost would just be a bit of future-proofing, I think, for when we once again upgrade infrastructure and have moved the bottleneck back to the servers.
 
In other cases, like LDAP directories or relatively small but very high I/O local DBs that don't warrant the cost of SAN and the associated HBAs, FC infrastructure, etc., we have migrated to DAS SSD with EXCELLENT results yet again.  We have effectively removed the use case where needing I/O performance justifies the cost of SAN, assuming a LOT of space isn't *also* needed (it is rare to need more than what can be done with 14 local SSDs in an IBM x3630, and most cases need far less).  We run these types of apps with replicas for scaling and redundancy vs. shared DAS or SAN architectures, removing yet another SAN use case... HA access to the data.  An example is the underlying architecture of MS Active Directory, which uses an LDAP directory and robust replication strategies to fulfill both HA and scaling requirements.
 
Our backups have also moved to DAS (we were originally on tape, then migrated to SAN, and right after finishing that migration moved to DAS :)), again removing what was a huge SAN (and FC network) bottleneck.
 
We are mainly left with SAN supporting a small subset of the traditionally architected (aka legacy) solutions we host... basically the RDBMS stuff with mega DBs, whether traditional reporting DH architectures or application DB tiers.  The biggest remaining and most common use case for SAN right now for us is VMware-based virtualization, where the majority of our 60k server OS instances are virtual.  Given ESXi, we are pretty much entirely dependent on SAN for virtualization... no local HDD space is even really needed.  All of the VMDKs boot and run straight from the SAN, most VMDKs are small, and as most VMs are typically low I/O to the SAN, RAM and CPU end up being our bottleneck, not SAN.

The tier 1 SAN / tier 1 app combo I referenced basically correlates to a small subset of our traditionally architected applications that have VERY large Oracle/DB2 RDBMS DBs (many TB that must be in the same DB schema) with thousands of concurrent r/w connections.  BUT even in that scenario, for the majority of our app instances the speed of the SAN is no longer the bottleneck given SSD; now it is the number of processing threads, given we have hit the upper limit on the number of threads a single server has available on the x86 and RISC servers we use (mainly HP and a few IBM for x86, and IBM for RISC, which is going away in favor of RHEL on x86).  Unfortunately our application architecture doesn't allow true active/active RAC, as we can only distribute certain types of processing threads.  Basically our app was developed long before RAC was even conceived, roughly 20 years ago now, and wasn't designed to handle independent middleware nodes accessing the DB in r/w mode concurrently.  Given that we cannot scale compute for an RDBMS linearly as Oracle RAC is designed to do (which would probably then make SAN the bottleneck)... SAN density limiting compute density has never happened (or never lasted for long), and when it did it was an I/O issue, especially once SSDs started to be used.

For the apps we have that do support true active/active (like a traditional DH populated with hundreds of ETL-type feeds, with apps like BO or OBIEE providing the analytics), we have moved towards big data type "appliances" like Exadata and Vertica, so again moving away from "traditional" SAN/NAS architecture and the bottlenecks that would indirectly limit compute density.
 
Basically, in all of the places we *thought* we were going to hit issues due to SAN (primarily I/O) that would limit compute density, it just hasn't happened.  We haven't been constrained by storage density limiting compute density; moving to SSD bought us more than enough time for RDBMS to become obsolete for our massive data processing needs.  Now the limiting factors have become all kinds of other things depending on the architectures of the apps in question... or at least design limitations.  What I mean by that would be another WOT :), but basically we occasionally have issues with workloads changing after hosts had storage provisioned for their DBs, so that too many hosts are trying to leverage the same access gateway, director, and SAN port(s) (in the case of multipathing), causing either saturation issues or port CPU utilization issues.  This is easily remedied by redistributing hosts' paths to storage.  We are almost entirely an FC shop vs. FCoE, BTW... if we had gone FCoE we would have been in better shape, had we needed to be.
 
Jul 8, 2014 at 1:37 PM Post #234 of 255
I was just reading that SanDisk is soon releasing 6-8TB SSDs in a 2.5" form factor; that's nucking futs.

Love the Avatar, JC is the MAN!
 
Awesome, I hadn't seen that, thanks for posting!  I have been watching Samsung pretty closely given their approach to new memory chip design to solve density issues; I had no idea that suppliers had already obliterated 2.5" mech HDD density to this degree!
 
Given that 4TB is already here(-ish... I cannot find pricing yet, and since we go through a partner who in turn OEMs from another supplier, I dunno when we will get access to them in our DC), and those 6-8TB drives arrive next year, it isn't hard to fathom 1TB or more in a portable media player in 2015...  Maybe the Geek Wave folks will be the ones; they have already done the "DAD" upgrade to bring the xd128 to 256GB (https://www.indiegogo.com/projects/geek-wave-it-s-not-a-next-gen-ipod-it-s-a-no-compromise-portable-music-player/contributions/new?perk_amt=256&perk_id=2084988), and I wouldn't be surprised to see them do another option later, a granddad upgrade to 512GB or more...
 
More info here, good stuff:  http://www.sandisk.com/enterprise/sas-ssd/optimus-max-ssd/ .  I could only find other Optimus models maxing out at 2TB, like here for around $3K:  http://www.shopblt.com/item/sandisk-sdlloc6r-020t-5ca1-2tb-optimus-eco-ssd/sandsk_sdlloc6r020t5ca1.html
 
That already blows away the biggest I know of from Hitachi at 1.2TB for 10K RPM (WD now, whatever :)):  http://www.hgst.com/hard-drives/enterprise-hard-drives/enterprise-sas-drives/ultrastar-c10k1200, although they are just "a bit" cheaper at $400 a pop if bought in 20 packs:  http://www.amazon.com/Ultrastar-C10K1200-HUC101212CSS600-Internal-Drive/dp/B00CD8O0S0
 
Pretty cool stuff right now; it's crazy how fast tech keeps improving... I have read that people said Moore was crazy back when he first predicted how fast tech would continue to evolve... and he just keeps getting proven right somehow.  The guy was a genius, and he had some guts to put that theory out there when it was so easy to be proven wrong over time and depended on so many things going right... but from your article, it looks like they are saying they will be improving SSD density on a Moore's law timeline...
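Just for fun, here is a minimal sketch of what a Moore's-law-ish cadence does to 2.5" SSD capacity if you start from this year's 4TB parts; the two-year doubling period is an assumption pulled from that "Moore's law timeline" claim, not any vendor roadmap:

```python
# What a Moore's-law-ish cadence does to 2.5" SSD capacity, starting from
# 2014's 4 TB parts.  The two-year doubling period is an assumption taken
# from the "Moore's law timeline" claim, not a published roadmap.

start_year, start_tb, doubling_years = 2014, 4, 2

for year in range(start_year, start_year + 11, 2):
    capacity_tb = start_tb * 2 ** ((year - start_year) / doubling_years)
    print(f"{year}: ~{capacity_tb:.0f} TB in a single 2.5-inch SSD")
```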
 
Jul 8, 2014 at 7:27 PM Post #235 of 255
I have about 1 TB of music in FLAC format, and that same music in LAME MP3 V0 format uses about 350GB of storage.  As the quality of portable music playback increased, I went from LAME MP3 V5 to V0, but I find it's a pain that basically nothing affordable and high quality also offers 400GB or more of on-board storage.  It's like there's a race among player manufacturers to improve real and perceived quality of reproduction while forgetting that higher quality music takes more storage, or assuming that users are OK with wasting huge amounts of time selecting and moving just portions of their library to the player.  Storage is cheap, but you'd never know it looking at today's portable music products.
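For anyone wondering why ~1 TB of FLAC shrinks to roughly a quarter to a third of the size as V0, a minimal sketch with rough average bitrates (the averages are assumptions; real libraries vary a lot by genre and mastering):

```python
# Why ~1 TB of FLAC shrinks to a quarter-to-a-third of the size as LAME V0:
# rough average bitrates.  These averages are assumptions; real libraries
# vary quite a bit by genre and mastering.

HOURS = 2_500  # roughly what ~1 TB of FLAC works out to at the rate below

formats = {
    "FLAC (16/44.1, avg)": 900,   # kbps
    "LAME MP3 V0 (avg)":   245,   # kbps
    "LAME MP3 V5 (avg)":   130,   # kbps
}

for name, kbps in formats.items():
    gb = kbps * 1000 / 8 * 3600 * HOURS / 1e9
    print(f"{name}: ~{gb:,.0f} GB for {HOURS:,} hours of music")
```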
 
Jul 8, 2014 at 9:35 PM Post #236 of 255
My needs/use case is fairly special.  I'm a bit of a PC junkie, so my rig is very custom.  However, I'm not above reusing a hard drive here or there.  This has led to a 150GB HDD that's dedicated to music storage.  However, this drive is nearly a decade old, so I don't trust my 80GB music collection to this sucker alone.  I back up the music drive to my 2TB HDD on a nightly basis so I have a bit of redundancy in case of failure.

For other media (not including games), I use a folder on the 2TB drive.  All together, it totals about 350-400GB.  If you factor games into the equation, it'd tip the scale over to the 500GB-1TB side of things.
 
Jul 8, 2014 at 11:55 PM Post #237 of 255


Since Intel® still has the reliability lead for SATA solid-state secondary storage, I'll wait for their 1 TB versions, which will allow me to use those Western Digital® Caviar Red® HDDs, under consideration as main drives as of 8 July 2014, for network-attached storage for routine backup of documents, app data, and of course audio and video.
 
Jul 9, 2014 at 1:15 AM Post #238 of 255
The conversation about 10-15K RPM datacenter-grade SAS drives and even SSDs is certainly interesting, but it's completely unnecessary for them to be integrated into the type of appliances being discussed here. The data requirements of streaming music are tiny: even the most absurd, high resolution audio file you care to throw at it (let's say you want the appliance to process and play an 8-channel, 192kHz, 24-bit WAV file) requires less than 5MB (less than 40Mb) per second of data transfer. Even if you include overhead from it also being the drive for the OS, we are at the opposite end of the spectrum from where SSDs make sense. Heck, those speeds were possible back in the day when PIO was the only standard. I could see making a ruggedized appliance for people who don't want to, or can't, provide a standard electronic environment and need an SSD for that purpose, but I for one wouldn't pay extra for that. When a 1TB SSD is around $450 and a 2TB 15mm 2.5" hard drive is $140, I'll pocket the savings and use regular ole' spinning rust. It's ideal for this purpose, where cheap storage density outweighs the need for speed. A smart memory-based caching system in the appliance's application more than makes up for it when displaying metadata while the display is in use.
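The arithmetic behind that "less than 5MB per second" claim, for anyone who wants to check it; plain PCM math, ignoring the small WAV container overhead:

```python
# Raw PCM bitrate = channels * sample rate * bit depth (WAV container
# overhead is negligible and ignored here).

def pcm_rate(channels, sample_rate_hz, bits_per_sample):
    bits_per_sec = channels * sample_rate_hz * bits_per_sample
    return bits_per_sec / 1e6, bits_per_sec / 8 / 1e6   # (Mb/s, MB/s)

for label, ch, sr, bits in [("CD stereo (16/44.1)",  2,  44_100, 16),
                            ("Stereo 24/192",        2, 192_000, 24),
                            ("8-channel 24/192 WAV", 8, 192_000, 24)]:
    mbps, mBps = pcm_rate(ch, sr, bits)
    print(f"{label}: {mbps:.1f} Mb/s ({mBps:.2f} MB/s)")
```

Even the 8-channel worst case sits under 37 Mb/s, which any modern interface or drive handles without breaking a sweat.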
 
I love SSDs as much as anyone. Without the extensive use of SSDs in my home SAN array as a read and write cache, I'd never be able to support as many virtual machines as I do alongside my video and audio media libraries. But an audio appliance, as much as we treasure these things, is really not an expensive proposition hardware-wise. The real battle that I'd like to see a company tackle is a real OS that offers a proper 10-foot interface and proper backup systems for my media library. An appliance cannot, and never will, be my only place of media storage, so I'm surprised how many appliances out there are built as if that's the only place my media should be.
 
Jul 9, 2014 at 6:25 PM Post #240 of 255
The conversation about 10-15K RPM datacenter-grade SAS drives and even SSDs is certainly interesting, but it's completely unnecessary for them to be integrated into the type of appliances being discussed here. The data requirements of streaming music are tiny: even the most absurd, high resolution audio file you care to throw at it (let's say you want the appliance to process and play an 8-channel, 192kHz, 24-bit WAV file) requires less than 5MB (less than 40Mb) per second of data transfer. Even if you include overhead from it also being the drive for the OS, we are at the opposite end of the spectrum from where SSDs make sense. Heck, those speeds were possible back in the day when PIO was the only standard. I could see making a ruggedized appliance for people who don't want to, or can't, provide a standard electronic environment and need an SSD for that purpose, but I for one wouldn't pay extra for that. When a 1TB SSD is around $450 and a 2TB 15mm 2.5" hard drive is $140, I'll pocket the savings and use regular ole' spinning rust. It's ideal for this purpose, where cheap storage density outweighs the need for speed. A smart memory-based caching system in the appliance's application more than makes up for it when displaying metadata while the display is in use.
 
I love SSDs as much as anyone. Without the extensive use of SSDs in my home SAN array as a read and write cache, I'd never be able to support as many virtual machines as I do alongside my video and audio media libraries. But an audio appliance, as much as we treasure these things, is really not an expensive proposition hardware-wise. The real battle that I'd like to see a company tackle is a real OS that offers a proper 10-foot interface and proper backup systems for my media library. An appliance cannot, and never will, be my only place of media storage, so I'm surprised how many appliances out there are built as if that's the only place my media should be.

Totally agree with your points, and the real constraint is the price of SSD, but streaming isn't the only use case in a media device.  From a user perspective you have:
 
1) Moving data to/from the device
2) Browsing content (and I totally agree that there needs to be a good interface that also integrates multiple sources well; Google Music is pretty good and close, but I don't know of a media player that integrates well/natively with Google Music to manage replication, etc.)
3) Indexing content (even with my AK240/ZX1, which are flash based, the indexing takes a minute or two when I add/remove content, and that is ~100 GB of FLAC, so the number of files vs. space consumed is low; if we are talking a TB of, say, 320kbps MP3s on a mech hard drive, that will take a looong time... see the sketch at the end of this post).
 
Then you have the engineering use cases:
 
1) Heat
2) Power consumption
3) Size of device(s) (for HDD you are constrained to one huge "thing"; for flash/SSD, you can put the packages on the device PCB, which is much more flexible)
4) Cost :(
5) Capacity
 
In all of the engineering use cases except 4), SSD kills HDD, assuming the latest technologies are used (i.e. USB 3.0 or Thunderbolt, not USB 1.0 :)).  For 5), if the size of the media device isn't a concern, it's a wash.
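And here is the sketch promised in user use case 3): a rough file-count comparison showing why indexing a big MP3 library on a mech HDD takes so much longer than a small FLAC one.  Average track sizes are assumptions for illustration:

```python
# Rough file-count math: a big 320 kbps MP3 library has far more files than a
# small FLAC one, and a library scan scales (at least) with file count.
# Average track sizes are assumptions for illustration.

libraries = {
    "~100 GB of FLAC":       {"size_gb": 100,  "avg_track_mb": 30},
    "1 TB of 320 kbps MP3s": {"size_gb": 1000, "avg_track_mb": 10},
}

base_tracks = None
for name, lib in libraries.items():
    tracks = lib["size_gb"] * 1000 / lib["avg_track_mb"]
    base_tracks = base_tracks or tracks
    print(f"{name}: ~{tracks:,.0f} tracks (~{tracks / base_tracks:.0f}x the scan work)")
```

That is roughly 30x the files to stat and tag-parse, on a drive with a tiny fraction of the random-read performance of flash, so a "minute or two" turns into a coffee break.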
 
