SCIENTIFIC-LINUX-USERS Archives

October 2015

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: Lamar Owen <[log in to unmask]>
Reply-To: Lamar Owen <[log in to unmask]>
Date: Thu, 8 Oct 2015 13:03:11 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (48 lines)
On 10/07/2015 04:10 PM, Larry Linder wrote:
> It appears, at least from our experiments, that SL cannot handle more than 4
> hard disks, and this goes back to at least 5.9.  SL 6.7 identifies the disks
> as sda through sde but fails to mount the 5th disk.

I'm sure that it is not SL that cannot handle more than 4 hard disks, 
but that something else is wrong with this system.
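
If it were me, the first thing I'd check is whether the kernel itself 
sees all five disks, independent of anything in SL's userspace.  A 
minimal sketch (my own, untested on your hardware, but /sys/block is 
standard on EL kernels):

    #!/usr/bin/env python
    # List the block devices the kernel has enumerated, with sizes,
    # by reading sysfs.  If all five disks show up here, the kernel
    # sees them, and the problem is in partitioning or mounting,
    # not in any per-distro disk limit.
    import os

    SYS_BLOCK = "/sys/block"

    for dev in sorted(os.listdir(SYS_BLOCK)):
        if not dev.startswith("sd"):
            continue  # only SCSI/SATA-style disks for this check
        with open(os.path.join(SYS_BLOCK, dev, "size")) as f:
            sectors = int(f.read().strip())
        # sysfs reports size in 512-byte sectors regardless of the
        # device's physical sector size
        print("%s: %.1f GB" % (dev, sectors * 512 / 1e9))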

> Any good ideas as to where this thing went wrong or how to fix the problem?
> LVM still has the same problem when dealing with so many devices.

I have a CentOS 6 box that at one time had about 27 physical volumes on 
27 physical LUNs in one volume group, with a single logical volume 
containing a 30+TB XFS filesystem.  That was a few releases back; I've 
since consolidated those PVs into some larger LUNs (long story, but 
related to CentOS 5 on VMware ESX 3.5, ESX being limited to LUNs no 
larger than 2TB, and a set of drive upgrades in our EMC Clariion 
storage systems).  This is on fibre channel, and with multipath in place 
the last drive device was around /dev/sdav or so (blew the mind of one 
of our interns, who didn't understand that the drive after /dev/sdz is 
/dev/sdaa!).
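
For the curious, the kernel's sd naming is just bijective base-26.  A 
quick sketch of the scheme (my own illustration, not kernel code):

    def sd_name(index):
        # 0 -> sda, 25 -> sdz, 26 -> sdaa, and so on, following the
        # same letter sequence the kernel's sd driver uses
        letters = ""
        index += 1
        while index > 0:
            index, rem = divmod(index - 1, 26)
            letters = chr(ord('a') + rem) + letters
        return "sd" + letters

    # e.g. sd_name(47) == "sdav", the 48th disk device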

Likewise, another machine with CentOS 7 on it had 15 PVs on a fibre 
channel SAN until I consolidated it down to an MD RAID 1 between two 
LUNs of 11TB each.  No problems.

Back in CentOS 3 and 4 days I ran one server with a RAID5 set of eight 
160GB PATA drives (on a GSI Model 4C four-channel ISA IDE interface 
card, and, yes, it was very slow, but it did work).  The resulting MD 
RAID volume was data only; the OS was on a (don't laugh) DAC960-driven 
external SCSI box with 12 18GB 10K RPM drives and ran quite well, all 
things considered.

> It looks like we will try BSD and some other Linux editions to see if they
> have these limitations.  My guess is that the upstream vendor made the free
> RHT "cripple ware" deliberately.

Something else is going on here, and your guess is likely rather wrong.  
I don't know what exactly is going wrong, but I have several EL 5, 6, 
and 7 systems with far more than five drives that are working fine.

One of these is an old Netburst Xeon box running EL6 with six SATA 
drives in the chassis and 15 LUNs on the fibre channel.  The six SATA 
drives are on three separate controller cards.  One of the cards has two 
eSATA connectors, and I frequently use those for imaging and for backups.

I have not seen any artificial limits on the number of 'drives' (with 
fibre channel you don't think that way; you think in terms of LUNs).
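
Along the same lines, a small sketch (again my own, assuming sysfs is 
mounted as usual) that counts devices by SCSI address rather than by 
drive letter:

    import os

    # Each entry in /sys/class/scsi_device is a host:channel:target:lun
    # address, which is how fibre channel devices are really identified;
    # the sdXX letters are just the order of discovery.
    for addr in sorted(os.listdir("/sys/class/scsi_device")):
        host, channel, target, lun = addr.split(":")
        print("host %s target %s LUN %s" % (host, target, lun))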
