SCIENTIFIC-LINUX-USERS Archives

April 2017

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: Konstantin Olchanski <[log in to unmask]>
Reply To: Konstantin Olchanski <[log in to unmask]>
Date: Tue, 11 Apr 2017 09:44:30 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (32 lines)
On Tue, Apr 11, 2017 at 11:13:25AM +0200, David Sommerseth wrote:
> 
> But that aside, according to [1], ZFS on Linux was considered stable in
> 2013.  That is still fairly fresh, and my concerns regarding the time it
> takes to truly stabilize file systems for production [2] still stand.
> 

Why do you worry about filesystem stability?

So what if it eats your data every 100 years due to a rare bug? You do have
backups (filesystem stability does not protect you against fat-fingering
the "rm" command), and you do archive your data, yes?
You do have a hot-spare server (filesystem stability does not protect
you against a power-supply fire), and you do have disaster mitigation plans
(filesystem stability does not protect you against "server is down,
you are fired!").

So what if it eats your data every week due to a frequent bug? How is that
different from faulty hardware eating your data (like the cheap Intel PCIe SSD
that ate all data on XFS and ext4 within 10 seconds of booting)? You build
a system, you burn it in, you test it; if it works, it works, and if it does not,
you throw ZFS (or the hardware) into the dumpster and start again with yfs, qfs, whatever.

How else can you build something that works reliably? Using only components anointed
by the correct penguin can only take you so far.

-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
