Reliability of SSDs

garyi

Happy to accept I might be unlucky, but I have had four SSD drives now and only one has lasted more than six months.

A SanDisk 256GB one here has just gone tits up after purchase in April. It's not even seen, let alone recognised, on anything I plug it into.

Thank god for Time Machine.
 
Gary,
how does this "going tits up" actually happen?
A normal HD I can understand, but how does an SSD fail? Do you simply start to lose sectors, and with them parts of your data, or does the whole thing fail immediately with all data lost?

PS: I'm on my very first SSD right now, and fascinated by the speed of it!
 
It all depends on which SSD you buy. Across the board there is quite a mixture of technologies and controllers. IIRC SanDisk have known problems with reliability, as do some OCZ drives.

Last time I looked into it, the Crucial M4 stood out as a well-proven choice.

We also selected the OCZ Vertex 4 for server usage and have had absolutely no problems with very heavy usage over the last 18 months or so.

Our hosting servers typically use Intel SSDs, which again have had no problems with heavy usage.
 
Yeah, nothing lost, because the point of the SSD for me was super-fast start-ups etc., with the main stuff being on servers/elsewhere.

Still, it's a pain in the arse.

As for how they go tits up, two ways, it would appear.

1. OCZ Petrol drives. Avoid like a small pellety turd. Just general issues that constantly led to errors on the drive, which you had to fix with Disk Utility before it eventually failed entirely. Reformatting the drive made it good for another few months or so. Went through two like this before giving up.

2. This SanDisk. Working happily away on the Mac, then the Finder crashes, then all apps crash one by one. Couldn't get out sensibly, so I force-restarted the Mac, whereby the Mac does not even see that there is a knackered drive in the bay. I have removed it and inserted it into two hot bays; neither recognises there is a drive in it. Luckily it was ordered from Amazon, who as usual are excellent and have already put a new one in the post for delivery tomorrow.

Still a pain in the arse though.
 
I recently got a Samsung SSD to put in my old MacBook Pro. Made a big difference, and the silence is great. No issues so far.
 
I had an OCZ Vertex that was great for nearly three years, then bricked without warning. Fortunately OCZ has a three-year warranty and it was replaced within a week.

I've had no problems with two other SSDs, a Vertex 4 and a Corsair Force. All used regularly as boot drives.
 
Not sure if this is valid, but the SSD in my MacBook Pro Retina is about a year old now. I haven't hammered it, but zero problems so far.
 
Always ensure TRIM is enabled; this is not done automatically in most OSes unless the drive's profile is recognised by the OS. Unless you explicitly set it, there is a good chance the drive is not TRIM-enabled. Under Mac OS, for example, TRIM only auto-enables when it's an SSD with Apple firmware on it (i.e. SSDs supplied by Apple, natch). So various TRIM enablers exist for third-party and aftermarket drives, usually freeware...
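For anyone wondering why TRIM matters: without it the controller cannot tell deleted data from live data, so garbage collection keeps copying stale pages around, wearing the flash faster. A toy Python sketch of the idea (the block layout and the simplified copy-on-erase model are purely illustrative, not how any real controller works):

```python
# Toy model: one erase block holding 4 pages. The OS has deleted two files,
# but only with TRIM does the controller learn those pages are stale.
def gc_copies(pages, trimmed):
    """Pages the controller must copy elsewhere before erasing the block:
    every page it still believes is live gets rewritten."""
    live = [p for p in pages if p not in trimmed]
    return len(live)

pages = ["fileA", "fileB", "fileC", "fileD"]
deleted = {"fileB", "fileD"}                    # deleted at the filesystem level

print(gc_copies(pages, trimmed=set()))          # 4 - controller knows nothing
print(gc_copies(pages, trimmed=deleted))        # 2 - TRIM flagged the stale pages
```

With TRIM, half the copy work (and the associated wear) disappears in this toy case.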
 
I've had one for over six years and that's still going strong, plus a couple of newer ones for six months, and as yet no issues. PC, not Mac. I would think you have something going on that's killing them.
 
Fox, I have always enabled TRIM; willing to accept I am just being unlucky here.

I swear to god, though, this MacBook is exhibiting typical HD issues now, with a SanDisk drive about a year old: plenty of strange pauses, stuff taking an age to install, fans spinning up with plenty of spinning beach balls, etc.
 

SSD memory is limited in the number of write cycles it can endure, in the range of around 100,000 per cell for older flash types (and considerably fewer for cheaper consumer cells). Normally this is not a problem, as firmware in the SSD uses a wear-levelling algorithm to make sure the locations are balanced in their usage. If this goes on the fritz for any reason, e.g. it's badly written and the same piece of memory keeps getting written to, then the drive will fail.
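A minimal sketch of what wear levelling is trying to do (a toy allocator, not any vendor's actual algorithm): send each write to the least-worn block, so no single block burns through its cycle budget long before the rest.

```python
# Toy wear levelling: always write to the block with the fewest erase cycles.
def pick_block(wear_counts):
    """Return the index of the least-worn block."""
    return min(range(len(wear_counts)), key=lambda i: wear_counts[i])

wear = [0, 0, 0, 0]        # erase counts for 4 flash blocks
for _ in range(10):        # 10 incoming writes
    blk = pick_block(wear)
    wear[blk] += 1         # writing wears that block a little more

print(wear)  # writes spread evenly: [3, 3, 2, 2]
```

Without this, a naive "always write block 0" policy would put all 10 cycles on one block, which is exactly the premature-failure mode described above.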
 
There's a good reason why SSDs supplied by EMC for their SANs cost vastly more than consumer versions.

I've had one Samsung micro SSD fail within 10 months on a Lenovo laptop. Its replacement has lasted four years. I use SSDs in a couple of servers at home, but only for the relatively "read-only" O/S partitions. No issues thus far (OCZ and SanDisk). I would still be wary of using consumer-grade SSDs for all partitions, though, unless RAIDed to give some sort of protection against failure.

As with HDDs, the more you pay the lower your failure rate, although nothing is ever guaranteed.

In trays of enterprise-class SSDs in a SAN with RAID redundancy and hot spares, by all means use it for a high-transaction-volume SQL Server database, but don't try this at home with consumer-grade kit. Maybe in a few years...
 
I am an engineer for Western Digital and would echo the write-cycle limitation mentioned earlier. An HDD may be less shock- and heat-resistant, being an electro-mechanical device, but for data reliability HDD is still king.

SSDs come in various flavours, from single-level cell (SLC) at the high end, through MLC (multi-level cell) at the low end, to TLC (triple-level cell) at the very low end.

As with all things, it's all about fitness for purpose. If you have a low-write application, then SSDs are fine; if you are writing often, then SSDs won't last. SLC drives are pricey.

Finally, HDDs tend to degrade prior to failure (unless you drop one into the toilet), so you stand a pretty good chance of getting to your data. Conversely, when SSDs fail they fall off a cliff and the data is kaput.
 
That is interesting and reflects my experience of the price of enterprise SSDs vs. consumer units and their applicability.

What are the technical differences that give rise to the differing longevities between HDDs such as WD's RE4s vs. the Reds (which I use in my NAS) and the Greens/Blues?
 
SSDs, with their finite usable life (as mentioned above, in the vicinity of 100,000 write/erase cycles per block), should really be regarded as consumable items.

HDDs have a mechanism for identifying "bad sectors" and marking them as unavailable so that future write operations do not try to use them. SSDs take a similar approach, marking logical blocks as unavailable, and many go one step further via over-provisioning (a bullshit term which should really read "under-utilisation"), which reserves a chunk of storage space that is "released" a logical block at a time as and when any logical block is marked unavailable. This is why you will see SSDs marked as 240GB capacity instead of 256GB: 16GB has been reserved to allow the controller to allocate reserve logical blocks to replace "dead" ones without reducing the published capacity.
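A toy sketch of that remapping idea (the structure and numbers are illustrative only, not any controller's real data layout): the drive exposes 240 logical blocks, keeps 16 physical blocks in reserve, and swaps a spare in whenever a live block dies, so the published capacity never shrinks.

```python
# Toy over-provisioning: 256 physical blocks, 240 exposed, 16 held in reserve.
class ToySSD:
    def __init__(self, physical=256, exposed=240):
        self.mapping = list(range(exposed))            # logical -> physical block
        self.reserve = list(range(exposed, physical))  # spare physical blocks

    def mark_dead(self, logical):
        """Remap a failed logical block onto a spare physical block."""
        if not self.reserve:
            raise RuntimeError("out of spares: published capacity would shrink")
        self.mapping[logical] = self.reserve.pop()

ssd = ToySSD()
ssd.mark_dead(7)             # logical block 7's physical cell wore out
print(len(ssd.mapping))      # still 240 logical blocks exposed
print(len(ssd.reserve))      # 15 spares left
```

Once the reserve pool is exhausted, there is nothing left to swap in, which is part of why these drives are best treated as consumables.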

S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) was introduced to monitor the occurrence rate of a number of disk error conditions, as a mechanism to help predict when a drive is likely to fail.

SSDs also make use of S.M.A.R.T., and a number of tools are available that allow users to interrogate the data maintained and produce a report on SSD condition. One such tool is available as a freeware download from SSD Life:

SSD Life homepage
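Command-line tools such as smartmontools' smartctl expose the same S.M.A.R.T. attributes in text form. A quick sketch of reading the wear-related values from one attribute line (the sample line below is hypothetical, modelled on smartctl's tabular output; real attribute names and IDs vary by vendor):

```python
# Parse the normalised value of a wear-levelling S.M.A.R.T. attribute from
# one line of hypothetical, smartctl-style output. The normalised value
# typically counts down from ~100 towards the failure threshold.
sample = "177 Wear_Leveling_Count 0x0013 099 099 000 Pre-fail Always - 1"

fields = sample.split()
normalised = int(fields[3])   # current normalised value (99 here)
threshold = int(fields[5])    # failure threshold (0 here)

print(f"roughly {normalised - threshold} points of rated endurance margin left")
```

Tools like SSD Life are essentially doing this kind of interpretation for you, with vendor-specific knowledge of which attributes matter.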

There are a number of factors that can negatively influence the "life" of an SSD, and some are rather technical, but one that needs to be mentioned is the myth that "wear levelling" is an unqualified benefit. Wear levelling is achieved by re-writing data after the initial write, which increases the total number of writes and, as a result, has the effect of reducing the usable life of an SSD. Over-provisioning achieves a similar end result without the overhead of "write amplification" (a term describing the ratio between writes executed and writes requested; ideally 1:1).
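To put a number on write amplification (the figures below are made up purely for illustration):

```python
# Write amplification = data the flash actually wrote / data the host asked to write.
# Illustrative figures only: the host wrote 100 GB, but garbage collection and
# wear levelling moved another 60 GB of data around internally.
host_writes_gb = 100
internal_writes_gb = 60

write_amplification = (host_writes_gb + internal_writes_gb) / host_writes_gb
print(write_amplification)  # 1.6 -> the flash wore 1.6x faster than the ideal 1:1
```

At a factor of 1.6, a cell rated for 100,000 cycles effectively delivers only about 62,500 cycles' worth of host writes.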

I'm on my third SSD (all used as system drives in a Wintel environment)...

#1 was an el cheapo 120GB SATA 6G drive that occasionally took too long to wake up during boot and failed to be recognised, resulting in boot failures (gave it away).

#2 was an OCZ Vertex 3 120GB SATA 6G drive bought to replace #1, which worked rather well except that 120GB turned out to be a bit too small (also donated, this time to a colleague building a new PC, as SSD Life seemed to think it would last another 5-6 years).

#3 is another OCZ, this time their Vertex 4 in 256GB capacity, which has performed extremely well for over a year now (and SSD Life predicts another 9 years of bliss).

I've also heard horror stories about SSD failures, but most of these have been from users of low-price devices and few from users of SSDs from reputable manufacturers.

A good indicator of SSD quality is the warranty period on offer: if it's 5 years, buy it!

My next SSD? If soon, it will probably be OCZ's latest Vector model, as it comes with some intriguing design concepts, performs well and has a 5-year warranty.

See review at HardwareCanucks:

OCZ Vector SSD Review

Personally, I'd never revert to an HDD as a boot disk...
:cool:
 
On the other hand, I've had consistently poor experience with HDDs. A very large Google study found no greater reliability between brands, or between enterprise-class and basic designs. Their advice was to expect failure and plan accordingly.

In moving to SSDs for server databases we plan for failure just as we did with HDDs, but get vastly better performance.

From what I can see, enterprise SSDs simply include more redundancy (i.e. a greater number of memory chips vs. logical capacity) and offer proven controller technology.
 
You can try this nifty little tool: http://www.wdc.com/en/products/productpicker/EnterpriseAV/

Red is a small-scale NAS drive and could start to suffer in an enclosure with more than 4-6 drives (a performance drop due to rotational vibration). The REs and above are for larger, more data-centre-related storage, traditionally called the enterprise sector. There are very real differences in the components involved, and the design aims are very specific from the start. You wouldn't plough a field with a motorbike. PM me if you want more info.
 

