SCIENTIFIC-LINUX-USERS Archives

October 2015

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From: Larry Linder <[log in to unmask]>
Reply-To: Larry Linder <[log in to unmask]>
Date: Sat, 10 Oct 2015 12:57:28 -0400
Removed the 5th disk and tried an LVM install, and it did not see disks 2, 3,
and 4 - and you could not even find out whether they were mounted or not.
Nothing.
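
(For reference: a quick way to check whether the kernel itself sees all five
disks is to drop to a shell - Ctrl+Alt+F2 usually works from the installer.
A rough sketch; the device names are examples, not confirmed for this box:

    cat /proc/partitions        # every block device the kernel knows about
    dmesg | grep -i 'sd[a-e]'   # driver/controller messages for each disk
    ls -l /dev/sd?              # sda through sde should all be present

If the fifth disk never shows up here, the problem is below the installer -
cabling, controller port, or BIOS - rather than in the partitioning tools.)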

Decided to go back to the original hand layout, and when you get it done it
says that sda must have a GPT disk label.   There are no provisions to do
this, and it is not done automatically in any scheme we can find.
The scheme the guys used months ago - letting the installer create the disk
label and then bailing out - does not work.
Once you try to use LVM there is no way to get back to square one.
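
(For the record: both steps can be done by hand from a shell before
re-entering the partitioner. A rough sketch, assuming sda is the disk that
needs the label - these commands destroy whatever is on it:

    vgchange -an                    # deactivate volume groups left from the LVM attempt
    wipefs -a /dev/sda              # clear old LVM/RAID/filesystem signatures - back to square one
    parted -s /dev/sda mklabel gpt  # then write a fresh GPT disk label

In a kickstart install, "clearpart --all --initlabel" writes the disk labels
automatically.)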

This is really getting DUMB.   Right next to this box is another box with
6.2 loaded, using a manual partitioning scheme that has worked for us
forever.   We never tried to use LVM on that system.

How do you not allow a user to install the system any way he may need it?
The failure to allow the fifth disk to be installed is pretty bad.

The last try will be an install with a new motherboard and new disks, and
then we quit fooling around.

My only complaint is this: how did a bunch of kids take a nice operating
system and totally louse it up?

Larry Linder

On Friday October 9 2015 12:14 pm, Larry Linder wrote:
> On Thursday October 8 2015 1:03 pm, Lamar Owen wrote:
> > On 10/07/2015 04:10 PM, Larry Linder wrote:
> > > It appears, at least from our experiments, that SL cannot handle more
> > > than 4 hard disks, and this goes back to at least 5.9.   SL 6.7
> > > identifies the disks as sda through sde but fails to mount the 5th disk.
> >
> > I'm sure that it is not SL that cannot handle more than 4 hard disks,
> > but that something else is wrong with this system.
> >
> > > Any good ideas as to where this thing went wrong or how to fix the problem?
> > > LVM still has the same problem when dealing with so many devices.
> >
> > I have a CentOS 6 box that at one time had about 27 physical volumes on
> > 27 physical LUNs in one volume group, with a single logical volume
> > containing a 30+TB XFS filesystem.  That was a few releases back,
> > before I consolidated those PVs into some larger LUNs (long story,
> > but related to CentOS 5 on VMware ESX 3.5 and ESX being limited to no
> > larger than 2TB LUNs and to a set of drive upgrades in our EMC Clariion
> > storage systems).  This is on fibre channel, and with multipath in place
> > the last drive device was around /dev/sdav or so (blew the mind of one
> > of our interns, who didn't understand that the drive after /dev/sdz is
> > /dev/sdaa! - illustrated below, after the quoted text).
> >
> > Likewise another machine with CentOS 7 on it had 15 PV's on a fibre
> > channel SAN until I consolidated it down to a RAID 1 MD between two LUNs
> > of 11TB each.  No problems.
> >
> > Back in CentOS 3 and 4 days I ran one server with a RAID5 set on eight
> > 160GB PATA drives (a GSI Model 4C four-channel ISA IDE interface card,
> > and, yes, it was very slow, but it did work).  The resulting MDRAID
> > volume was data only; the OS was on a (don't laugh) DAC960-driven
> > external SCSI box with 12 18GB 10K RPM drives and ran quite well, all
> > things considered.
> >
> > > It looks like we will try BSD and some other Linux editions to see if
> > > they have these limitations.   My guess is that the upstream vendor made
> > > the free RHT "cripple ware" deliberately.
> >
> > Something else is going on here, and your guess is likely rather wrong.
> > I don't know what exactly is going wrong, but I have several EL 5, 6,
> > and 7 systems with far more than five drives that are working fine.
> >
> > One of these is an old Netburst Xeon box running EL6 with six SATA
> > drives in the chassis and 15 LUNs on the fibre channel.  The six SATA
> > drives are on three separate controller cards.  One of the cards has two
> > eSATA connectors, and I use those for imaging and for backups frequently.
> >
> > I have not seen any artificial limits on the number of 'drives' (with
> > fibre channel you don't think that way, you think in terms of LUNs).
>
> Thank you.
> I was sure that there had to be a common thread.  There is another system
> available that has an identical hardware suite.   We can unplug the disk
> pack and try it on another hardware set.
> Then we will know if we have a bad motherboard.
>
> Larry Linder
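
As an aside, the sd* naming Lamar describes can be shown with shell brace
expansion - after sdz the kernel simply moves on to two-letter suffixes:

    $ echo sd{a..z} sd{a..z}{a..z} | tr ' ' '\n' | sed -n '24,30p'
    sdx
    sdy
    sdz
    sdaa
    sdab
    sdac
    sdad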
