SCIENTIFIC-LINUX-USERS Archives

July 2013

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject: Re: Large filesystem recommendation
From: "Brown, Chris (GE Healthcare)" <[log in to unmask]>
Reply-To: Brown, Chris (GE Healthcare)
Date: Thu, 25 Jul 2013 19:31:12 +0000
Content-Type: text/plain
I would actually direct ZoL support questions to the zfs-discuss mailing list: http://zfsonlinux.org/lists.html

Also, we (the GEHC Compute Systems Team) work with SL/Fermi via HELiOS (http://helios.gehealthcare.com), our internal GE Linux distribution based on SL.
See: http://scientificlinuxforum.org/index.php?showtopic=1336

- Chris

-----Original Message-----
From: [log in to unmask] [mailto:[log in to unmask]] On Behalf Of Yasha Karant
Sent: Thursday, July 25, 2013 1:42 PM
To: scientific-linux-users
Subject: Re: Large filesystem recommendation

Based upon the information below, ZFS is under consideration for our disk farm server system.  At one point, we had to run Lustre to meet an external funding "recommendation" -- but we do not have that aegis at present.  However, one important question:

Porting a file system to an OS environment is not always trivial, and can result in actual performance (and in some cases, reliability) degradation.  Is the port of ZFS to EL N x86-64 (N currently 6) professionally supported, and if so, by which entity?  Do understand that I regard SL as professionally supported because there are (paid) professional staff working on it via Fermilab/CERN -- and TUV EL definitely is so supported.

I found "Native ZFS on Linux -- Produced at Lawrence Livermore National Laboratory" at:
http://zfsonlinux.org/

that references:

http://zfsonlinux.org/zfs-disclaimer.html

Is LLNL actually supporting ZFS?

Yasha Karant

On 07/25/2013 10:57 AM, Brown, Chris (GE Healthcare) wrote:
> Overview:
> http://www.nexenta.com/corp/zfs-education/203-nexentastor-an-introduction-to-zfss-hybrid-storage-pool-
>
>
> The ZIL:
> See:
> https://blogs.oracle.com/realneel/entry/the_zfs_intent_log
> https://blogs.oracle.com/perrin/entry/the_lumberjack
> http://nex7.blogspot.com/2013/04/zfs-intent-log.html
>
> Accordingly it is actually quite OK to use cheap SSD.
> Two things to do if doing so, however:
> 1) Low latency is key; keep this in mind when selecting the prospective SSD.
> 2) Mirror and stripe the vdev, e.g. a RAID10 ZIL across 4x SSD, to be safe.
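>
> As a rough sketch (assuming an existing pool named "tank" and hypothetical
> SSD device names sdb through sde; adjust to your hardware):
>
>    # mirrored log vdev on two SSDs
>    zpool add tank log mirror sdb sdc
>    # stripe of two mirrors across four SSDs -- the "RAID10" layout above
>    zpool add tank log mirror sdb sdc mirror sdd sde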
>
> The L2ARC:
> https://blogs.oracle.com/brendan/entry/test
> http://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/
>
> Accordingly, with the L2ARC it is also OK to use cheap SSD; the same two rules above apply. However, due to the nature of the cache data, a striped vdev of 2 SSDs is fine as well.
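>
> A minimal sketch, again assuming a pool named "tank" and two hypothetical
> SSDs sdb/sdc; cache devices are simply striped and need no redundancy,
> since losing one only costs you cached reads:
>
>    zpool add tank cache sdb sdc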
>
>
> Forgoing the details, one can also achieve the same sort of general idea, up to a point, with an external journal on ext4 (rough sketch below).
> Also with Btrfs: mkfs.btrfs -m raid10 -d raid10 <ssd> <ssd> <ssd> <ssd> <disk> <disk> <disk> <disk>
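>
> For the ext4 external-journal route, a rough sketch (assuming /dev/sdb is
> the SSD, /dev/sdc the data disk, and a matching 4K block size between the
> journal device and the filesystem):
>
>    mke2fs -O journal_dev -b 4096 /dev/sdb
>    mkfs.ext4 -b 4096 -J device=/dev/sdb /dev/sdc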
>
> - Chris
>
> -----Original Message-----
> From: [log in to unmask] 
> [mailto:[log in to unmask]] On Behalf Of 
> Graham Allan
> Sent: Thursday, July 25, 2013 12:54 PM
> To: Yasha Karant
> Cc: scientific-linux-users
> Subject: Re: Large filesystem recommendation
>
> I'm not sure if anyone really knows what the reliability will be, but the hope is obviously that these SLC-type drives should be longer-lasting (and they are in a mirror).
>
> Losing the ZIL used to be a fairly fatal event, but that was a long time ago (ZFS v19 or something). I think with current ZFS versions you just lose the performance boost if the dedicated ZIL device fails or goes away.
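>
> If a dedicated log device does fail, the pool keeps running; a rough sketch
> of checking and recovering (assuming a pool named "tank" and hypothetical
> device names sdb/sdc):
>
>    # a failed log device shows up under the "logs" section
>    zpool status tank
>    # replace the failed SSD, or drop a standalone log device entirely
>    zpool replace tank sdb sdc
>    zpool remove tank sdb
>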
> There's a good explanation here:
>    http://www.nexentastor.org/boards/2/topics/6890
>
> Graham
>
> On Thu, Jul 25, 2013 at 10:41:50AM -0700, Yasha Karant wrote:
>> How reliable are the SSDs, including actual non-corrected BER, and 
>> what is the failure rate / interval ?
>>
>> If a ZFS log on a SSD fails, what happens?  Is the log automagically 
>> recreated on a secondary SSD?  Are the drives (spinning and/or SSD) 
>> mirrored? Are primary (non-log) data lost?
>>
>> Yasha Karant
