SCIENTIFIC-LINUX-USERS Archives

November 2008

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From:         Jon Peatfield <[log in to unmask]>
Reply-To:     Jon Peatfield <[log in to unmask]>
Date:         Thu, 20 Nov 2008 00:18:27 +0000
Content-Type: TEXT/PLAIN
On Wed, 19 Nov 2008, Miles O'Neal wrote:

> Our local vendor built us a Supermicro/Adaptec
> system with 16x1TB SATA drives.  We have a 12TB
> partition that they built as EXT2.  When I tried
> to add journaling, it took forever, and then the
> system locked up.  On reboot, the FS was still
> EXT2, and takes hours (even empty) to fsck.  Based
> on the messages flying by I am also not confident
> fsck really understands a filesystem this large.

Last year I had some pain with ext3 on an sl4 box with fsck going into an 
infinite loop.  That was with a 6.4TB file system, and after some searching 
I found that the latest e2fsck included a fix.  Apparently the bug was 
caused by accidentally using floating point in a calculation, and the 
rounding errors caused it to stop progressing... (or maybe I misread the 
report).
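
If you hit something similar, it is worth checking which e2fsprogs you 
actually have before letting fsck loose on a big filesystem.  A quick 
sketch (the exact version and changelog contents will vary):

$ rpm -q e2fsprogs                      # version shipped with the distro
$ rpm -q --changelog e2fsprogs | less   # look for recent fsck fixes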

On our box fsck would stop making any progress once it had processed 
through about 3.5 TB of the disk - this was during the stage where it 
walks all the inode tables.
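
For what it's worth, e2fsck's -C flag will show you whether it is 
actually progressing.  A rough example, with a made-up device name:

$ e2fsck -f -C 0 /dev/sdb1   # -f forces a full check, -C 0 prints a progress bar

That at least makes a stalled pass obvious rather than leaving you 
guessing from the messages flying by.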

The fixed fsck still took close to 24 hours to check the disk, though. 
This was after a nasty hardware failure made worse by a firmware bug in 
the external raid box.

After cursing loudly and getting the 'fixed' fsck to check it, I ended up 
dumping the data, splitting things up, and restoring - in part to avoid 
getting close to those limits.  In fact I needed to dump/restore anyway 
'cos I was re-organising disks and changing raid levels etc.
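
The dump/restore itself was nothing clever - roughly this shape (device 
and mount point names are invented for the example):

# level-0 dump of the old fs, piped straight into restore on the new one
$ dump -0 -f - /dev/mapper/vg0-old | ( cd /mnt/new && restore -rf - )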

I did seriously think about switching to XFS (or anything else) but 
decided against it at the time 'cos I just didn't have time to investigate 
all the implications.

> Is the XFS module stable on 5.1 and 5.2?  (The
> vendor installed 5.1 because that's what they
> have, but I ran "yum update").
>
> Anyone have experience with filesystems this large
> on a Linux system?

On 'a Linux system' yes, on SL not yet.  Note that some vendors use XFS a 
lot, especially the one which employed the people who wrote it...
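
If you do try XFS on SL5, the shape of it is the usual mkfs/mount dance, 
assuming xfsprogs and an xfs kernel module are installable from your 
repositories (the package names below are a guess and may differ):

$ yum install xfsprogs kmod-xfs   # module package name varies by repo
$ mkfs.xfs /dev/sdb1              # placeholder device
$ mount -t xfs /dev/sdb1 /data

But as I say, I haven't run it on SL myself yet.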

>  Will XFS work well for this?
>
> If any of you have successfully used EXT3 on a
> filesystem this large, are there any tuning tips
> you recommend?  I was thinking of turning on
> dir_index, but somewhere I saw a warning this
> might not work with other OSes.  Since we do have
> some Windows and Mac users accessing things via
> SMB, I wasn't sure that was safe, either.

If you see warnings like that, they *usually* mean that the on-disk 
format might not be understood by older systems (say, a Windows ext3 
layer).

By the time apps like samba (for SMB) are involved it is *just* a Unix 
file system, and how it is implemented on-disk isn't relevant.
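
For what it's worth, turning dir_index on after the fact is just a 
feature flag plus an fsck pass to hash the existing directories 
(placeholder device again, and the fs needs to be unmounted for the 
fsck):

$ tune2fs -O dir_index /dev/sdb1   # set the feature flag
$ e2fsck -fD /dev/sdb1             # -D optimises/indexes existing directories

New directories get indexed automatically once the flag is set; the -D 
pass is only needed for ones that already exist.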

My home directory is currently exported from a box which is using the 
following ext3 filesystem features:

$ tune2fs -l /dev/mapper/CingulumRaid00-home | grep features
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file

(needs_recovery is there 'cos the fs is mounted atm.)  Some or most of 
those were added automatically when the fs was created.  On sl5 you 
should see most or all of them...
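
On sl5 you can see what mke2fs will turn on by default without creating 
anything - the defaults live in /etc/mke2fs.conf:

$ cat /etc/mke2fs.conf   # the base_features line lists the default feature flags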

That works fine for SMB access from Windows/Macs or AFP access from Macs 
or NFS access from UNIX/Linux/Macs...

> This is a 64bit system. 8^)

Of course... :-)

  -- Jon
