SCIENTIFIC-LINUX-USERS Archives

November 2010

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From:    Jon Peatfield <[log in to unmask]>
Date:    Fri, 12 Nov 2010 17:46:46 +0000
On Fri, 12 Nov 2010, Diederick Stoffers wrote:

> Hi all,
>
> My lab uses a Dell Poweredge R905 server running Scientific Linux 5.4. 
> It has three 1TB disks that are combined in a RAID5 configuration to 
> create one 2TB partition. I would like to replace these with three 3TB 
> disks to create one 6TB partition in RAID5.
>
> I read online there might be issues with running 3TB disks, some people 
> have encountered an upper limit of 2.1TB for a single partition due to 
> addressing issues. However, this should not be a problem when running 
> Extensible Firmware Interface (EFI). Can anyone confirm that this will 
> work on my system with the current OS?

Sadly, sl5 will not install using EFI firmware on the one machine I tested. 
RH have a document (sorry, can't find it right now) explaining that EFI 
support in anaconda will only be added in EL6.

However, since EL5/sl5 won't boot from an md RAID-5 set, you are probably 
using a hardware RAID controller - something like a PERC/6 or H700.

Those support slicing up a single RAID-5 into several smaller 'Virtual 
Disks' (VDs) to be presented to the OS as different disks.

The BIOS firmware's 2TB restriction only applies to the 'boot disk', so it 
is easy enough to slice off a small VD to install into (using the 
old-fashioned MS-DOS disk label) and configure the rest using GPT labels, 
or indeed chop it all up into <2TB chunks if you prefer...
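Putting a GPT label on one of the big data VDs is the usual parted dance; a 
minimal sketch follows. To keep it safe to try anywhere, it runs against a 
scratch image file standing in for the real /dev/sdX (the file name and 
sizes here are made up, not taken from the box described below):

```shell
# Stand-in for a large data VD: a sparse image file.  parted operates on
# plain files just like block devices, so nothing here needs root.
truncate -s 100M disk.img

# Write a GPT label instead of the old MS-DOS one...
parted -s disk.img mklabel gpt

# ...and carve a single data partition out of it (on GPT, the first
# argument to mkpart is the partition name).
parted -s disk.img mkpart data 1MiB 99MiB

# Show the resulting label and partition table.
parted -s disk.img print
```

On a real >2TB VD you would point parted at the device node instead of an 
image file, and then mkfs the resulting partition as usual.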

e.g. we have one Dell PE2950 box with a PERC/6i and 8x750GB disks set up 
as RAID-5, presenting 3 VDs, each under 2TB:

(Output from MegaCli -LdInfo -LALL -aALL)

Adapter 0 -- Virtual Drive Information:
Virtual Disk: 0 (Target Id: 0)
Name:templ-vd0
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size:1800000MB
State: Optimal
Stripe Size: 64kB
Number Of Drives:7
Span Depth:1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Access Policy: Read/Write
Disk Cache Policy: Disk's Default
Virtual Disk: 1 (Target Id: 1)
Name:templ-vd1
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size:1800000MB
State: Optimal
Stripe Size: 64kB
Number Of Drives:7
Span Depth:1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Access Policy: Read/Write
Disk Cache Policy: Disk's Default
Virtual Disk: 2 (Target Id: 2)
Name:templ-vd2
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size:689280MB
State: Optimal
Stripe Size: 64kB
Number Of Drives:7
Span Depth:1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Access Policy: Read/Write
Disk Cache Policy: Disk's Default

The PERC VDs show up in /proc/partitions as:

$ cat /proc/partitions
major minor  #blocks  name

    8     0 1843200000 sda
    8     1     104391 sda1
    8     2 1843089255 sda2
    8    16 1843200000 sdb
    8    17 1843193646 sdb1
    8    32  705822720 sdc
    8    33  705815743 sdc1
...

Anyway, sl5 has been working on that box for about 3 years.

In fact we join most of them back together into a single LVM VG (really, 
don't ask why...)
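For completeness, joining the VDs back together is just the standard LVM 
sequence; a rough sketch only - the device names and the VG/LV names here 
are illustrative, not the actual ones on this box:

```shell
# Turn the partition on each PERC VD into an LVM physical volume.
pvcreate /dev/sda2 /dev/sdb1 /dev/sdc1

# Join the PVs into one volume group...
vgcreate data-vg /dev/sda2 /dev/sdb1 /dev/sdc1

# ...carve a logical volume spanning all the free space...
lvcreate -l 100%FREE -n data-lv data-vg

# ...and put a filesystem on it (ext3 being the EL5-era default).
mkfs.ext3 /dev/data-vg/data-lv
```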

The reason all these PERC VDs are <2TB is that I was being very, very 
conservative.

-- 
/--------------------------------------------------------------------\
| "Computers are different from telephones.  Computers do not ring." |
|       -- A. Tanenbaum, "Computer Networks", p. 32                  |
---------------------------------------------------------------------|
| Jon Peatfield, _Computer_ Officer, DAMTP,  University of Cambridge |
| Mail:  [log in to unmask]     Web:  http://www.damtp.cam.ac.uk/ |
\--------------------------------------------------------------------/
