SCIENTIFIC-LINUX-USERS Archives

September 2012

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From:
Nico Kadel-Garcia <[log in to unmask]>
Reply To:
Nico Kadel-Garcia <[log in to unmask]>
Date:
Mon, 3 Sep 2012 07:57:30 -0400
Content-Type:
text/plain
Parts/Attachments:
text/plain (29 lines)
On Mon, Sep 3, 2012 at 1:17 AM, Todd And Margo Chester
<[log in to unmask]> wrote:

> Hmmmmmm.  Never had a bad hardware RAID controller.  Had several
> mechanical hard drives go bad.

Lord knows I have. The worst were these "they fell off my uncle's truck
in southeast Asia" adapters with the chip numbers apparently burned
off them, pretending to the kernel that they were an Adaptec chipset.
They didn't even fit properly in the cases, due to badly machined and
badly mounted mounting plates. I managed to get those shipped back as
unacceptable: any cost savings from getting cheaper hardware faster was
completely wasted on the testing failures, the "just wedge them
in!!!" workarounds, and the cards popping loose when the
systems warmed up.

I've also dealt with some high performance disk controller failures in
bleeding edge hardware. It's why I tend to avoid bleeding edge
hardware: let someone who needs that bleeding edge debug it for me.

> Anyone have an opinion(s) on SSD's in a small work group server?

They're very expensive for what are, usually, large but unnecessary
performance gains. For certain high-performance proxy or database
workloads, they're invaluable. Monitoring system performance with
them can get..... a little weird. I've seen system loads go over 150
on SL 6 compatible systems, yet the systems were still active,
responsive, and performing well.
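(A quick way to see the numbers behind that observation: a minimal Python
sketch comparing the Linux load averages against the core count. Load
counts runnable plus uninterruptible tasks, so with fast SSD I/O a load
far above the core count can coexist with a responsive box.)

```python
import os

# os.getloadavg() returns the 1-, 5-, and 15-minute load averages
# (Unix only; on Linux these come from /proc/loadavg).
one, five, fifteen = os.getloadavg()
cores = os.cpu_count()

# Load >> cores means a long run queue; on SSD-backed systems the
# queue can drain quickly enough that the machine still feels fine.
print(f"load: {one:.2f} {five:.2f} {fifteen:.2f} on {cores} cores")
```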
