SCIENTIFIC-LINUX-USERS Archives

September 2012

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From:
Sean Murray <[log in to unmask]>
Reply To:
Sean Murray <[log in to unmask]>
Date:
Wed, 5 Sep 2012 10:34:24 +0200
Content-Type:
multipart/signed
Parts/Attachments:
text/plain (2145 bytes) , smime.p7s (1921 bytes)
On 09/04/2012 09:21 PM, Konstantin Olchanski wrote:
> On Sun, Sep 02, 2012 at 05:33:24PM -0700, Todd And Margo Chester wrote:
>>
>> Cherryville drives have a 1.2 million hour MTBF (mean time
>> between failure) and a 5 year warranty.
>>
>
> Note that MTBF of 1.2 Mhrs (137 years?!?) is the *vendor's estimate*.
It's worse than that: if you read their docs, that number is based on an
average write/read rate of (Intel) 20 GB per day, which is painfully little
for a server.

I looked at these for caching data, but for our use case, and of course
assuming the vendor's MTBF figure to be accurate, the effective MTBF would
be about 6 months.
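A quick sketch of the arithmetic behind these numbers. The 1.2 Mh MTBF and the 20 GB/day assumption are from the thread; the 5000 GB/day server write rate is a hypothetical figure chosen only to illustrate how a heavy caching workload shrinks the effective lifetime to roughly the 6 months mentioned above:

```python
# Back-of-envelope MTBF check (write rates other than 20 GB/day are
# hypothetical illustrations, not measured values).
HOURS_PER_YEAR = 24 * 365

vendor_mtbf_hours = 1.2e6
vendor_mtbf_years = vendor_mtbf_hours / HOURS_PER_YEAR
print(f"Vendor MTBF: {vendor_mtbf_years:.0f} years")  # ~137 years

# If wear (and hence lifetime) scales inversely with daily write volume,
# a server writing far more than the assumed 20 GB/day sees a
# proportionally shorter effective MTBF.
assumed_gb_per_day = 20
server_gb_per_day = 5000  # hypothetical heavy caching workload
effective_mtbf_years = vendor_mtbf_years * assumed_gb_per_day / server_gb_per_day
print(f"Effective MTBF: {effective_mtbf_years * 12:.1f} months")  # ~6.6 months
```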

Sean

>
> Actual failure rates observed in production are unknown, the devices have
> not been around long enough.
>
> However, if you read product feedback on newegg, you may note that many SSDs
> seem to suffer from the "sudden death" syndrome - a problem we happily
> no longer see on spinning disks.
>
> I guess the "5 year warranty" is real enough, but it does not cover your
> costs in labour for replacing dead disks, costs of down time and costs of lost data.
>
>>
>> ... risk of dropping RAID in favor of just one of these drives?
>>
>
> To help you make an informed decision, here is my data.
>
> I have about 9 SSDs in production use (most are in RAID1 pairs), oldest has been
> running since last October:
> - 1 has 3 bad blocks (no RAID1),
> - 1 has SATA comm problem (vanishes from the system - system survives because
>    it's a RAID1 pair).
> - 0 dead so far
>
> I have about 20 USB and CF flash drives in production used as SL4/5/6 system disks,
> some in RAID1, some as singles, oldest has been in use for 3 ( or more?) years.
> There are zero failures, apart from 1 USB flash drive with a few bad
> blocks and some infant mortality (every USB3 drive, and all but 1 brand
> of USB2 flash drive, failed within a few weeks).
>
> All drives used as singles are backed up nightly (rsync).
>
> All spinning disks are installed in RAID1 pairs.
>
> Would *I* use single drives (any technology - SSD, USB flash, spinning)?
>
> Only for a system that does not require 100% uptime (is not used
> by any users) and when I can do daily backups (it cannot be in a room
> without a GigE network).
>
>
