SCIENTIFIC-LINUX-USERS Archives

March 2009

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From: Miles O'Neal <[log in to unmask]>
Reply To: Miles O'Neal <[log in to unmask]>
Date: Thu, 5 Mar 2009 16:20:12 -0600

Recap: new 64-bit Intel quad-core server with an Adaptec SATA RAID
controller and 16x1TB drives.  One drive is JBOD for the OS.  The
rest are set up as RAID6 with one spare.  We've tried EL5.1 plus all
yum updates, and stock EL5.2.  We can't get /dev/sdb1 (12TB) stable
with ext2 or xfs (ext3 blows up during journal setup).
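(For what it's worth, the mkfs attempts were along the lines of:

  mkfs.ext2 /dev/sdb1
  mkfs.xfs /dev/sdb1
  mkfs.ext3 /dev/sdb1    # this is where the journal setup blows up

exact options aside.)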

So I decided to carve /dev/sdb up into a dozen partitions and use
LVM.  Initially I want one partition per LV, with a single xfs
filesystem on each.  Then, as things grow, I can add a PV (one
partition per PV) to the appropriate VG and grow the LV and
filesystem.
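Roughly, the commands I have in mind look like this (the partition,
VG, and LV names and the mount point here are just placeholders):

  # initial setup: one partition as the sole PV in a VG, one LV
  # filling the VG, with xfs on top
  pvcreate /dev/sdb6
  vgcreate vg_scratch /dev/sdb6
  lvcreate -l 100%FREE -n lv_scratch vg_scratch
  mkfs.xfs /dev/vg_scratch/lv_scratch

  # later, when it fills up: add another partition as a new PV,
  # then grow the LV and the filesystem
  pvcreate /dev/sdb7
  vgextend vg_scratch /dev/sdb7
  lvextend -l +100%FREE /dev/vg_scratch/lv_scratch
  xfs_growfs /wherever/it/is/mounted
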
Between typos and missteps, I've had to build up and tear down the
LV pieces several times.  Now I get messages such as:

  Aborting - please provide new pathname for what used to be /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0-part6
or
  Device /dev/sdb6 not found (or ignored by filtering).

I clean it all up, wipe out all the files in /etc/lvm/*/*
(including cache/.cache), and try again, but it's still broken.
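(The cleanup pass, for the record, is more or less:

  lvremove /dev/vg_scratch/lv_scratch    # whatever LVs got created
  vgremove vg_scratch
  pvremove -ff /dev/sdb6                 # and the other partitions
  rm -f /etc/lvm/backup/* /etc/lvm/archive/* /etc/lvm/cache/.cache
  vgscan
  pvscan

again with placeholder names.)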

I tried rebooting.  Still broken.

How can I fix this short of a full reinstall?

The whole LVM system feels really kludgy.  I suppose there's no
better alternative at this time?

Thanks,
Miles
