Date: Wed, 19 Nov 2008 16:29:03 -0600
XFS has been stable for a while (at least on 32 bit), so I wouldn't
worry too much about that. It is designed to handle large filesystems
like this. The largest I've run is a 6x500GB array in RAID5, roughly
2.5TB, and it works fine as a single volume under Debian Etch (4.0).
One thing a lot of people do, if the card supports it, is
carve/autocarve: the card splits the RAID into several volumes under
2TB (or some chosen size), which are then joined back together with
LVM. XFS also lets you tune for the stripe size and various other
aspects of the RAID at mkfs time.
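If you go the carve-and-LVM route, the general shape is something
like the sketch below. The device names, volume group name, and RAID
geometry are all hypothetical; check what your controller actually
exposes and what chunk size it was configured with:

```shell
# Suppose the controller carved the array into three sub-2TB volumes,
# seen by Linux as /dev/sda, /dev/sdb, /dev/sdc (hypothetical names).
pvcreate /dev/sda /dev/sdb /dev/sdc
vgcreate bigvg /dev/sda /dev/sdb /dev/sdc

# One logical volume spanning all the free space in the group.
lvcreate -l 100%FREE -n bigvol bigvg

# mkfs.xfs can be told the RAID geometry so allocations align with
# the stripes: su = per-disk stripe unit, sw = number of data disks.
# Example values for 64k chunks on a 16-disk RAID5 (15 data disks).
mkfs.xfs -d su=64k,sw=15 /dev/bigvg/bigvol
```

Getting su/sw right isn't required for correctness, just for
performance, so if you can't find the card's chunk size you can leave
them off.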
Hope that helps at least a little,
Mark
Miles O'Neal wrote:
> Our local vendor built us a Supermicro/Adaptec
> system with 16x1TB SATA drives. We have a 12TB
> partition that they built as EXT2. When I tried
> to add journaling, it took forever, and then the
> system locked up. On reboot, the FS was still
> EXT2, and takes hours (even empty) to fsck. Based
> on the messages flying by I am also not confident
> fsck really understands a filesystem this large.
>
> Is the XFS module stable on 5.1 and 5.2? (The
> vendor installed 5.1 because that's what they
> have, but I ran "yum update").
>
> Anyone have experience with filesystems this large
> on a Linux system? Will XFS work well for this?
>
> If any of you have successfully used EXT3 on a
> filesystem this large, are there any tuning tips
> you recommend? I was thinking of turning on
> dir_index, but somewhere I saw a warning this
> might not work with other OSes. Since we do have
> some Windows and Mac users accessing things via
> SMB, I wasn't sure that was safe, either.
>
> This is a 64bit system. 8^)
>
> Thanks,
> Miles
>
>
--
Mr. Mark V. Stodola
Digital Systems Engineer
National Electrostatics Corp.
P.O. Box 620310
Middleton, WI 53562-0310 USA
Phone: (608) 831-7600
Fax: (608) 831-9591