Jorge,
On Wed, 17 Aug 2005, Jorge Izquierdo (UAM) wrote:
> Thank you Connie, I've already read that, but I thought it wouldn't be
> necessary to update to 4.1 to get support for filesystems larger
> than 2 TB, and I hoped that somebody on the list could give me a hint to
> resolve this issue without the need to update.
I do not think you understand. If you use the GPT option, run
mke2fs -b 4096 when you make the filesystem, use the latest errata kernel,
and do not use LVM2, then you should be OK.
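For concreteness, the steps above might look like this (a sketch only;
the device name /dev/sda and the 5186982 MB size are taken from your
dmesg output, so adjust as needed, and note that mklabel destroys any
existing partition table):

```shell
# Put a GPT label on the whole device, then create one
# partition spanning it (start/end given in megabytes).
parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary ext3 0 5186982

# Make the ext3 filesystem with 4 KB blocks, as the release
# notes recommend for pre-4.1 e2fsprogs.
mke2fs -j -m 0 -b 4096 /dev/sda1

# Verify the size the filesystem ended up with.
mount /dev/sda1 /mnt
df -h /mnt
```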
Do you think the above does not work?
-Connie Sieh
>
> But I will try with SL4.1 (or maybe first with XFS) if there is no
> other solution.
>
> Thanks again
>
> Jorge
>
>
>
> On Wed, 2005-08-17 at 17:30, Connie Sieh wrote:
> > Jorge,
> >
> > On Wed, 17 Aug 2005, Jorge Izquierdo (UAM) wrote:
> >
> > > Hi everybody, I'm new on the list, so I apologize if this mail is not
> > > as clear as it should be.
> > >
> > > I'm having some trouble with SL4.0 trying to configure a 5 TB
> > > filesystem on my SCSI storage (Promise VTrak 15110).
> > > When the system starts, the boot messages show that my SCSI device is
> > > correctly detected and the reported size of the RAID
> > > array is right:
> > >
> > > scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 1.3.11
> > > <Adaptec 39320A Ultra320 SCSI adapter>
> > > aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 67-100Mhz,
> > > 512 SCBs
> > >
> > > (scsi0:A:0): 160.000MB/s transfers (80.000MHz DT, 16bit)
> > > Vendor: Promise Model: 14 Disk RAID5 Rev: V0.0
> > > Type: Direct-Access ANSI SCSI revision: 04
> > > scsi0:A:0:0: Tagged Queuing enabled. Depth 4
> > > SCSI device sda: 2532706176 2048-byte hdwr sectors (5186982 MB)
> > > SCSI device sda: drive cache: write through
> > > sda: sda1 sda2
> > > Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
> > >
> > > (as you can see, I'm using the Adaptec 39320A HBA with the driver
> > > aic79xx included with SL)
> > >
> > > But the problems start when I try to partition my storage with fdisk
> > > and then create the ext3 filesystem with mke2fs. First of all,
> > > fdisk doesn't let me create a single partition with the full
> > > capacity of the disk array: when I select a partition size larger
> > > than 4 TB it doesn't work properly. Is this normal? It seems the
> > > 64-bit support is not reflected at partition-creation time.
> > >
> > > So I decided to create two partitions (sda1 and sda2, as shown in the
> > > dmesg output), one of 4 TB and the other with the rest. And here comes
> > > the second problem: I create an ext3 FS on the sda1 partition with:
> > > mke2fs -j -m 0 -b 4096 /dev/sda1
> > > When I mount the new filesystem and check it, I
> > > get the following:
> > > $# mount /dev/sda1 /mnt
> > > $# df -h /mnt
> > > Filesystem Size Used Avail Use% Mounted on
> > > /dev/sda1 2.0T 103M 2.0T 1% /mnt
> > >
> > > So the size of my filesystem is only 2 TB, although I created a 4 TB
> > > partition with fdisk. Any ideas about this behaviour? Should I create
> > > my filesystem some other way to support sizes larger than 2 TB? Is the
> > > 64-bit support broken somewhere in SL 4.0? Am I missing something?
> >
> > From the Upstream Vendor Update 1 release notes. I realize that you are
> > on 4.0 and not 4.1, but the info here is important. Note that if you
> > have the latest errata kernel, you have the kernel from 4.1.
> > ----------------------------------------------------------------------------
> >
> > o Scientific Linux 4.1 provides support for disk devices
> > that are larger than 2 terabytes (TB). Although there is limited
> > support for this feature in the Scientific Linux 4.0 release,
> > 4.1 contains many improvements (both in user space programs and
> > in the kernel). In general, 4.1 is considered a requirement for
> > support of disk devices larger than 2 TB.
> >
> > Please note the following guidelines and restrictions related to large
> > device support:
> >
> > . Typical disk devices are addressed in units of 512-byte blocks. The
> > size of the block address in the SCSI command determines the maximum
> > device size. The SCSI command set includes commands that have 16-bit
> > block addresses (device size is limited to 32 MB), 32-bit block
> > addresses (limited to addressing 2 TB), and 64-bit block addresses.
> > The SCSI subsystem in the 2.6 kernel has support for commands with
> > 64-bit block addresses. To support disks larger than 2 TB, the Host Bus
> > Adapter (HBA), the HBA driver, and the storage device must also support
> > 64-bit block addresses. We have tested the QLogic qla2300 driver and
> > the Emulex lpfc driver, included in Scientific Linux 4.1,
> > on an 8 TB logical unit on a Winchester Systems FX400 (rev. 3.42B and
> > above is required).
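The 2 TB figure falls straight out of the block-address width; a quick
check of the arithmetic, assuming the 512-byte blocks mentioned above:

```shell
# Maximum addressable bytes = 2^(address bits) * 512-byte blocks
echo $(( (1 << 16) * 512 ))   # 16-bit: 33554432 bytes  = 32 MB
echo $(( (1 << 32) * 512 ))   # 32-bit: 2199023255552 bytes = 2 TB
# A 4 TB partition addressed with 32-bit commands therefore wraps
# past the limit, which matches the 2.0T size that df reported.
```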
> >
> > . The commonly used MS-DOS partition table format cannot be used on
> > devices larger than 2 TB. For devices larger than 2 TB, the GPT
> > partition table format must be used. The parted utility must be used
> > for the creation and management of GPT partitions. To create a GPT
> > partition, use the parted command mklabel gpt.
> >
> > Scientific Linux requires that all block devices be initialized with a valid
> > partition table, even if there is a single partition encompassing the
> > entire device. This requirement exists to prevent potential problems
> > caused by erroneous or unintended partition tables on the device.
> >
> > . The / and /boot directories must be located on devices that are 2 TB
> > in size or less.
> >
> > . Various issues with LVM2 on large devices are fixed in
> > Scientific Linux 4.1. Do not use LVM2 on devices larger than 2
> > TB prior to installing 4.1.
> >
> > As noted above, Scientific Linux requires that a partition table be written to
> > the block device, even when it is used as part of an LVM2 Volume
> > Group. In this case, you may create a single partition that spans the
> > entire device. Then, be sure to specify the full partition name (for
> > example, /dev/sda1, not /dev/sda), when you use the pvcreate and
> > vgcreate commands.
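Assuming SL 4.1 or later (per the LVM2 caveat above), the full-partition-name
requirement looks like this; the volume group name "bigvg" is made up for
illustration:

```shell
# After labeling the device and creating one partition spanning it,
# hand the *partition* (not the bare device) to LVM2.
pvcreate /dev/sda1           # correct: full partition name
vgcreate bigvg /dev/sda1     # hypothetical VG name "bigvg"
# pvcreate /dev/sda          # wrong: bare device with no partition table
```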
> >
> > . The maximum size disk that can be a member of an md software RAID
> > set is 2 TB. The md RAID device itself can be larger than 2 TB.
> > Devices have been tested up to 8 TB.
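As a sketch of that restriction (device names hypothetical): four members
of at most 2 TB each can still form an md RAID5 set well above 2 TB,
since only the members are limited:

```shell
# Each member partition is at most 2 TB; the resulting /dev/md0
# itself may legitimately exceed 2 TB (tested up to 8 TB).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```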
> >
> > . Various issues with e2fsprogs that occur on devices larger than 4 TB
> > are addressed in Scientific Linux 4.1. Prior to 4.1,
> > these issues can be worked around by specifying mke2fs -b 4096 when
> > making an ext2 or ext3 filesystem. The workaround is not necessary in
> > 4.1.
> >
> > The ext2 and ext3 filesystems have an internal limit of 8 TB. Devices
> > up to this limit have been tested.
> >
> > You may want to use the mke2fs -T largefile4 command to speed up the
> > creation of large filesystems.
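Combining the two hints above, a pre-4.1 invocation might look like this
(a sketch; the partition name is the one from the earlier df output):

```shell
# 4 KB blocks work around the pre-4.1 e2fsprogs issues on large
# devices; -T largefile4 allocates one inode per 4 MB of data,
# which greatly speeds up mke2fs on multi-TB filesystems.
mke2fs -j -b 4096 -T largefile4 /dev/sda1
```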
> >
> > . The GFS filesystem is limited to 16 TB on 32-bit systems, and 8
> > exabytes (EB) with 64-bit systems. GFS filesystem
> > sizes up to 8 TB have been tested.
> >
> > . NFS partitions greater than 2 TB have been tested and are supported.
> >
> > . Scientific Linux 4.1 user space tools are compiled
> > for large file support. However, it is not possible to test every
> > program in this mode. Please file a problem report if issues arise
> > when using the tools for large file support.
> >
> > . The inn program does not function correctly with devices larger than
> > 2 TB. This will be addressed in a future release of Scientific Linux.
> >
> > ----------------------------------------------------------------------------
> >
> > So make sure you use mke2fs -b 4096 when making a filesystem greater
> > than 4 TB on 4.0 . I think the problem has to do with "wrap around",
> > which surely does not seem good.
> >
> > -Connie Sieh
> >
> > >
> > > Thanks for any help or any suggestion
> > >
> > > Jorge
> > >
> > >
> > >
>