SCIENTIFIC-LINUX-USERS Archives

July 2013

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject: Re: Large filesystem recommendation
From: "Brown, Chris (GE Healthcare)" <[log in to unmask]>
Reply-To: Brown, Chris (GE Healthcare)
Date: Thu, 25 Jul 2013 17:57:30 +0000
Overview: http://www.nexenta.com/corp/zfs-education/203-nexentastor-an-introduction-to-zfss-hybrid-storage-pool-


The ZIL:
See:
https://blogs.oracle.com/realneel/entry/the_zfs_intent_log
https://blogs.oracle.com/perrin/entry/the_lumberjack
http://nex7.blogspot.com/2013/04/zfs-intent-log.html

Accordingly, it is actually quite OK to use a cheap SSD.
Two things to do if going that route, however (see the sketch after this list):
1) Low latency is key; keep this in mind when selecting the prospective SSD.
2) Mirror and stripe the vdev, e.g. a RAID10 ZIL across 4x SSD, to be safe.
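A minimal sketch of rule 2, assuming a pool named "tank" and placeholder /dev/sdX device names (both hypothetical, not from this thread). Adding two mirrored pairs as log devices gives the RAID10-style layout, since ZFS stripes writes across multiple log vdevs:

  # Add a RAID10-style ZIL: two mirrored SSD pairs, striped by ZFS.
  # "tank" and the /dev/sd[b-e] names are placeholders for your pool/devices.
  zpool add tank log mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde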

The L2ARC:
https://blogs.oracle.com/brendan/entry/test
http://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/

Accordingly, with the L2ARC it is also OK to use cheap SSDs, and the same two rules above apply. However, because the L2ARC is only a read cache and losing it costs nothing but performance, a plain striped vdev of 2 SSDs is fine as well (see the sketch below).
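A sketch under the same assumptions ("tank" and the device names are placeholders); cache devices are simply listed, and ZFS stripes reads across them:

  # Add two L2ARC cache devices; ZFS stripes across them automatically.
  # Cache vdevs cannot (and need not) be mirrored.
  zpool add tank cache /dev/sdf /dev/sdg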


Foregoing the details, one can also achieve much the same general idea, up to a point, with an external journal under ext4.
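A hedged sketch, assuming /dev/sdf1 is a partition on the SSD and /dev/sda1 is the data disk (both placeholders): create the journal device first, then point the new ext4 filesystem at it.

  # Create an external journal on the SSD partition...
  mke2fs -O journal_dev /dev/sdf1
  # ...then build the ext4 filesystem using that journal device.
  mkfs.ext4 -J device=/dev/sdf1 /dev/sda1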
Also with BTRFS: mkfs.btrfs -m raid10 -d raid10 <ssd> <ssd> <ssd> <ssd> <disk> <disk> <disk> <disk>
(Note that mkfs.btrfs takes a single device list; the raid10 profiles stripe metadata and data across all listed devices rather than pinning metadata to the SSDs.)

- Chris

-----Original Message-----
From: [log in to unmask] [mailto:[log in to unmask]] On Behalf Of Graham Allan
Sent: Thursday, July 25, 2013 12:54 PM
To: Yasha Karant
Cc: scientific-linux-users
Subject: Re: Large filesystem recommendation

I'm not sure if anyone really knows what the reliability will be, but the hope is obviously that these SLC-type drives should be longer-lasting (and they are in a mirror).

Losing the ZIL used to be a fairly fatal event, but that was a long time ago (ZFS v19 or something). I think with current ZFS versions you just lose the performance boost if the dedicated ZIL device fails or goes away.
There's a good explanation here:
  http://www.nexentastor.org/boards/2/topics/6890

Graham

On Thu, Jul 25, 2013 at 10:41:50AM -0700, Yasha Karant wrote:
> How reliable are the SSDs, including actual non-corrected BER, and 
> what is the failure rate / interval ?
> 
> If a ZFS log on a SSD fails, what happens?  Is the log automagically 
> recreated on a secondary SSD?  Are the drives (spinning and/or SSD) 
> mirrored? Are primary (non-log) data lost?
> 
> Yasha Karant
