There is an ELRepo user who has mentioned ZFS on the ELRepo list and has a
repo with ZFS packages in it. I don't know what the status of ZFS in ELRepo
is, but given Akemi's comments on the link I think the ELRepo folks have
talked about this before.
Alas, my history may not go far enough back to follow the full story there.
http://lists.elrepo.org/pipermail/elrepo/2013-February/001660.html
Pat
On 03/22/2013 02:54 PM, Farkas Levente wrote:
>
> Is there any special requirement for zfs other than these 4 rpms? Since
> all kmod addon rpms for rhel are collected in elrepo, wouldn't it be
> better to add zfs to elrepo rather than sl-addons?
>
> On 2013.03.22. 20:30, "Pat Riehecky" <[log in to unmask]> wrote:
>
> The forums are 'unofficial', so go ahead and put whatever you feel
> is appropriate.
>
> Pat
>
> On 03/22/2013 02:08 PM, Brown, Chris (GE Healthcare) wrote:
>
> All,
>
> The preliminary plan for ZOL is to include it in SL
> addons for SL 6.4.
> This will give the SL community a chance to shake it down and
> provide thoughts/feedback on ZOL.
> Additionally, this spreads the ZOL project beyond its existing user
> base: us (GEHC HELiOS), LLNL, Ubuntu, and the other distros that
> have already integrated it.
> Linux is, and has been, in dire need of a file system with the
> features and maturity of ZFS. Thus far in our evaluation and
> testing, ZOL has been nothing short of impressive.
>
> If things go well with ZOL for SL, the next step would be to promote
> ZFS into SL 6.5 directly.
>
>
> Pat,
> Let me know if it would be ok to additionally post the below to the
> unofficial SL forums and scientific-linux-users.
>
> >> FAQ for Questions, Comments, and Concerns with ZOL in SL <<
>
> *QuickLinks*
> - Main ZFS on Linux Website: http://zfsonlinux.org/
> - ZOL FAQ: http://zfsonlinux.org/faq.html
> - SPL Source Repository: https://github.com/zfsonlinux/spl
> - ZFS Source Repository: https://github.com/zfsonlinux/zfs
> - ZFS Announce:
> https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups#!forum/zfs-announce
> - ZFS_Best_Practices_Guide (written for Solaris but most things
> still apply):
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> - ZFS_Evil_Tuning_Guide (written for Solaris but most things still
> apply):
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
>
> - For providing feedback, or for general use and installation
> questions, use the following mailing lists:
> --> scientific-linux-users
> or
> --> the existing zfs-discuss list
> (https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups#!forum/zfs-discuss)
>
> - For development questions, use the following existing ZOL
> mailing list:
> --> https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups#!forum/zfs-devel
>
> - To report SPL bugs:
> --> https://github.com/zfsonlinux/spl/issues
>
> - To report ZFS bugs:
> --> https://github.com/zfsonlinux/zfs/issues
>
> *Some general best practices to note with ZOL*
> - SSD ZIL and L2ARC devices help performance in general
> - Always try to use low-latency SSD devices!
> - Multipath your disks using multiple HBAs whenever possible
> - Something to note in ZOL is this change in rc12:
> --> https://github.com/zfsonlinux/zfs/commit/920dd524fb2997225d4b1ac180bcbc14b045fda6
> --> Translation: ZOL now better handles and avoids the situation,
> seen with BSD and native Solaris ZFS, described here:
> http://www.nex7.com/node/12
> - Try to limit pools to no more than 48 disks
> - If using ZOL to store KVM virtual machine images, I have found the
> following setup yields the best performance:
> --> ZVOLs formatted with ext4 at a larger blocksize yield the best
> performance when combined with KVM and RAW thick- or
> thin-provisioned file-backed disks.
> --> Create a zvol as follows (example): zfs create -V 100G -o
> volblocksize=64K das0/foo
> --> After that, a simple mkfs.ext4 -L <zvolname> /dev/das0/foo
> --> Mount command (example): mount /dev/das0/foo /some/mount/point -o
> noatime
> --> /some/mount/point is exported via NFS v3
> --> Feel free to enable NFS async for additional performance, with
> the understanding of the implications of doing so (see the example
> export entry after this list)
> --> Additionally, set the qemu/kvm VM disk cache policy to none and
> the IO policy to threaded.
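>
> An example /etc/exports entry for the NFS setup above; the mount
> point and client subnet here are illustrative placeholders, not from
> our actual config:
> # /etc/exports -- NFSv3 export of the ext4-on-zvol mount
> # 'async' trades write safety for speed, as noted above
> /some/mount/point 192.168.0.0/24(rw,async,no_root_squash)
> # Apply the export change without restarting NFS:
> exportfs -ra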
>
> *ZFS Examples*
> # Limit the ZFS ARC, else it defaults to total system RAM minus 1GB
> # Example: 64GB (68719476736 bytes)
> # Create /etc/modprobe.d/zfs.conf and add the following line:
> options zfs zfs_arc_max=68719476736
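>
> To verify or change the limit on a running system without a reboot
> (the sysfs path is standard for ZOL; the value is just the 64GB
> example from above):
> cat /sys/module/zfs/parameters/zfs_arc_max
> echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
> # Watch current ARC usage:
> grep ^size /proc/spl/kstat/zfs/arcstats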
>
> ZFS Zpool Creation Commands (examples use dm-multipath disks)
> # EX: Create a 24 Disk Raidz2 (Raid6) pool
> zpool create das0 raidz2 mpathb mpatha mpathc mpathd mpathe mpathf
> mpathg mpathh mpathi mpathj mpathk mpathl mpathm mpathn mpatho
> mpathp mpathq mpathr mpaths mpatht mpathu mpathv mpathw mpathx
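>
> # Sanity-check the new pool: 'zpool status' confirms the raidz2
> # vdev layout and 'zpool list' shows the usable capacity
> zpool status das0
> zpool list das0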
>
> # EX: Create a 24 Disk Striped Mirrors (Raid10) pool
> zpool create das0 mirror mpathb mpatha mirror mpathc mpathd mirror
> mpathe mpathf mirror mpathg mpathh mirror mpathi mpathj mirror
> mpathk mpathl mirror mpathm mpathn mirror mpatho mpathp mirror
> mpathq mpathr mirror mpaths mpatht mirror mpathu mpathv mirror
> mpathw mpathx
>
> # EX: Create a 24 Disk Striped Mirrors (Raid10) pool with the ashift
> option
> # Setting ashift=12 is required when dealing with Advanced Format
> drives
> # Using it with non-AF drives can give a performance boost with some
> workloads
> # Using it does decrease overall pool capacity
> zpool create -o ashift=12 das0 mirror mpathb mpatha mirror mpathc
> mpathd mirror mpathe mpathf mirror mpathg mpathh mirror mpathi
> mpathj mirror mpathk mpathl mirror mpathm mpathn mirror mpatho
> mpathp mirror mpathq mpathr mirror mpaths mpatht mirror mpathu
> mpathv mirror mpathw mpathx
>
> Zpool autoexpand option
> # Can be specified at pool creation time
> # Can be set at any time
> # Needed if you want the pool to grow onto larger replacement drives
> # At pool creation time: -o autoexpand=on
> # After pool creation: zpool set autoexpand=on das0
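>
> # For example, with the das0 pool from above:
> zpool set autoexpand=on das0
> zpool get autoexpand das0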
>
> Add two SSDs, striped together, as read cache (L2ARC) to a pool
> zpool add das0 cache /dev/<disk-by-path> /dev/<disk-by-path>
>
> Add two SSDs, striped together, as the write cache / ZFS Intent Log
> (ZIL) for a pool
> zpool add das0 log /dev/<disk-by-path> /dev/<disk-by-path>
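>
> A concrete invocation for illustration only; the by-path names below
> are made up, so substitute your own SSD devices:
> zpool add das0 log /dev/disk/by-path/pci-0000:05:00.0-sas-phy0-lun-0
> /dev/disk/by-path/pci-0000:05:00.0-sas-phy1-lun-0
> # Listing two devices stripes them; 'zpool add das0 log mirror
> # <dev1> <dev2>' would mirror the ZIL instead, trading capacity for
> # safety.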
>
> Create a ZFS Filesystem
> # EX: create ZFS filesystem named foo in pool das0
> zfs create das0/foo
>
> Create a ZFS zvol
> # EX: create a 100G zvol named foov in pool das0
> zfs create -V 100G das0/foov
>
> Create a sparse ZFS zvol with a custom blocksize
> zfs create -s -V 500G -o volblocksize=64K das0/foo
>
> Grow a ZFS zvol
> # EX: grow foov on pool das0 from 100G to 500G
> # Note: if the zvol is formatted with an FS, use that FS's tool to
> # grow the FS after the resize
> # EX: resize2fs (ext4/ext3), xfs_growfs (xfs)
> zfs set volsize=500G das0/foov
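>
> # End-to-end example, assuming foov was formatted ext4 as above:
> zfs set volsize=500G das0/foov
> resize2fs /dev/das0/foov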
>
> Export and import a zpool
> # EX: a zpool created from devices listed in /dev/mapper and
> # /dev/disk/by-id
> zpool export das0
> zpool import -f -d /dev/mapper -d /dev/disk/by-id -a
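>
> # Running 'zpool import' with -d but no pool name and no -a simply
> # lists the pools available for import -- a safe preview step:
> zpool import -d /dev/mapper -d /dev/disk/by-id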
>
> - Chris
>
> -----Original Message-----
> From: [log in to unmask] [mailto:[log in to unmask]] On Behalf
> Of Andras Horvath
> Sent: Friday, March 22, 2013 12:29 PM
> To: Pat Riehecky
> Cc: Brown, Chris (GE Healthcare); [log in to unmask]
> Subject: Re: [SCIENTIFIC-LINUX-DEVEL] Initial ZFS On Linux (ZOL)
> YAML for SL-addons
>
> The zfs package installed fine now.
>
> Thanks!
>
>
> On Fri, 22 Mar 2013 12:16:34 -0500
> Pat Riehecky <[log in to unmask]> wrote:
>
> The sl-addons repo has a fixed zfs-modules-dkms package now.
>
> You will want to run a 'yum clean expire-cache' to catch the
> updates.
>
> Since DKMS is in EPEL6, where it is actively maintained, adding it to
> SL Addons would create maintenance work without adding much.
>
> It does make installing some packages more complex, though.
>
> Pat
>
>
> On 03/22/2013 12:00 PM, Andras Horvath wrote:
>
> Hi,
>
> I've just tried to install ZFS on a recently updated 6.4
> system (64-bit). Yum complains about a dependency error, see below.
>
> Even manually installing spl-modules-dkms doesn't help, though
> that's the package it complains about.
>
> BTW, is it normal that the dkms package is available only from the
> EPEL repo and not the main one?
>
>
> Thanks,
> Andras
>
>
> # lsb_release -d
> Description: Scientific Linux release 6.4 (Carbon)
>
> # yum install --disableplugin=fastestmirror --enablerepo=sl-addons zfs
> Loaded plugins: protectbase, refresh-packagekit, security
> 82 packages excluded due to repository protections
> Setting up Install Process
> Resolving Dependencies
> --> Running transaction check
> ---> Package zfs.x86_64 0:0.6.0-rc14.el6 will be installed
> --> Processing Dependency: zfs-modules for package: zfs-0.6.0-rc14.el6.x86_64
> --> Running transaction check
> ---> Package zfs-modules-dkms.noarch 0:0.6.0-rc14.el6 will be installed
> --> Processing Dependency: spl-modules-dkms = X for package: zfs-modules-dkms-0.6.0-rc14.el6.noarch
> --> Finished Dependency Resolution
> Error: Package: zfs-modules-dkms-0.6.0-rc14.el6.noarch (sl-addons)
>        Requires: spl-modules-dkms = X
>        Installed: spl-modules-dkms-0.6.0-rc14.el6.noarch (@sl-addons)
>            spl-modules-dkms = 0.6.0-rc14.el6
> You could try using --skip-broken to work around the problem
>
>
> On Fri, 22 Mar 2013 11:16:14 -0500
> Pat Riehecky <[log in to unmask]> wrote:
>
> The HELiOS folks, Connie, and I have been in communication about
> this for some time, so the quick turnaround on our end is not
> unexpected.
>
> I've just now posted the packages to sl-addons (x86_64
> ONLY).
>
> Thanks Chris!
>
> Pat
>
> On 03/22/2013 11:08 AM, Brown, Chris (GE Healthcare) wrote:
>
> Team,
>
> Attached is the YAML for the initial ZFS inclusion
> into SL addons.
>
> *Release Explanation*
>
> ZFS on Linux (ZOL) is currently manually added to HELiOS
> (Healthcare Enterprise Linux Operating System).
>
> HELiOS is a spin of Scientific Linux created and
> maintained by the
> GE Healthcare Compute Systems team (CST).
>
> HELiOS strives to stay as close to upstream as possible; thus,
> including ZOL in SL helps us better maintain upstream purity in
> HELiOS.
>
> Including ZOL in SL also allows the rest of the SL
> community to
> benefit from our work with ZOL.
>
> **Release Notes**
>
> Core ZFS development started in 2001, with ZFS being officially
> released by Sun in 2004.
>
> Testing and evaluation of ZOL by CST has shown better
> performance, scalability, and stability than BTRFS.
>
> As a result of the maturity of core ZFS, ZOL
> inherits ZFS features
> which are many years ahead of BTRFS.
>
> Testing by CST with ZOL on a proper hardware setup with proper
> SSD ZIL/L2ARC devices has shown ZOL to yield better performance
> than both BTRFS and native Solaris ZFS.
>
> Also, performance tests of ZOL by CST have yielded better
> results than running the equivalent tests on Sun/Oracle or
> Nexenta ZFS storage appliances.
>
> Additional testing by CST has shown that ZOL combined with
> GlusterFS yields a very powerful, redundant, and almost
> infinitely scalable storage solution.
>
> ZOL is the work of Lawrence Livermore National
> Laboratory (LLNL)
> under Contract No. DE-AC52-07NA27344 (Contract 44)
> between the
> U.S. Department of Energy (DOE) and Lawrence
> Livermore National
> Security, LLC (LLNS) for the operation of LLNL.
>
> regards,
>
> *Chris Brown*
> *GE Healthcare Technologies
> **/Compute Systems Architect/*
>
>
>
>
> --
> Pat Riehecky
>
> Scientific Linux developer
> http://www.scientificlinux.org/
>
--
Pat Riehecky
Scientific Linux developer
http://www.scientificlinux.org/