SCIENTIFIC-LINUX-USERS Archives

July 2014

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: "Patrick J. LoPresti" <[log in to unmask]>
Reply-To: Patrick J. LoPresti
Date: Wed, 9 Jul 2014 15:48:40 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (36 lines)
On Wed, Jul 9, 2014 at 1:27 PM, Lamar Owen <[log in to unmask]> wrote:
>
> I don't recall if I had to specify that option or not with CentOS 5.10:
> +++++++++++++++++++++++++++
> [root@backup-rdc ~]# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> ...
> /dev/mapper/plates-cx3--80
>                        27T   26T  805G  98% /opt/plates
> /dev/mapper/vg_opt-lv_backups
>                       5.8T  5.4T  365G  94% /opt/backups
> [root@backup-rdc ~]# blkid

You are getting a little bit lucky, I think... The failure happens
when the first 16TB of the block device (as opposed to the file
system) is in use. Since XFS allocates blocks from allocation groups
spread all over the disk, it is improbable that the first 16TB is ever
entirely in use until the whole file system fills up.
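(For anyone who wants to check whether a mounted file system has
already handed out inode numbers too big for 32-bit callers -- the
symptom behind this class of failure -- here is a quick sketch; the
path to scan is a placeholder, not anything from this thread:)

```python
import os

INODE32_MAX = 2**32 - 1  # largest inode number a 32-bit stat field can hold

def inodes_over_32bit(path):
    """Walk `path` and return (inode, path) pairs whose inode numbers
    do not fit in 32 bits."""
    oversized = []
    for dirpath, dirnames, filenames in os.walk(path):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            try:
                ino = os.lstat(full).st_ino
            except OSError:
                continue  # file vanished or unreadable; skip it
            if ino > INODE32_MAX:
                oversized.append((ino, full))
    return oversized

# Example: scan the current directory tree and report offenders
for ino, p in inodes_over_32bit("."):
    print(ino, p)
```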

We have had a few dozen ~30TB XFS storage systems in the field for the
past several years, and I have only ever seen this failure once. At
the time, the file system was maybe 85% full. Then again, our files
typically run to hundreds of gigabytes each, which perhaps makes the
failure more likely to trigger (?)

I swear I am not making up this problem; see e.g.
http://www.doc.ic.ac.uk/~dcw/xfs_16tb/

Anyway, inode64 is the recommended mount option for large XFS file
systems unless you have some specific legacy need (like exporting via
NFSv2 to 32-bit Solaris... guess how I know).
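
(For reference, inode64 is just a mount option, so it can go in
/etc/fstab like any other. The device path and mount point below are
hypothetical examples, not from anyone's actual config:)

```
# /etc/fstab entry for a large XFS volume with 64-bit inode numbers enabled
# (device mapper path and mount point are placeholders)
/dev/mapper/vg_data-lv_big  /opt/data  xfs  inode64,noatime  0 0
```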

Cheers.

 - Pat
