recap: new 64-bit Intel quad-core server with an Adaptec SATA RAID
controller, 16x1TB drives.  One drive is JBOD for the OS; the rest
are set up as RAID6 with one spare.  We've tried EL5.1 + all yum
updates, and stock EL5.2.  We can't get /dev/sdb1 (12TB) stable
with ext2 or xfs (ext3 blows up in the journal setup).

So I decided to carve /dev/sdb up into a dozen partitions and
use LVM.  Initially I want one PV per partition, one PV per VG,
and one LV (holding a single xfs FS) per VG.  Then as things grow
I can add another partition as a new PV to the appropriate VG and
grow the LV/FS.
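
For reference, the workflow I have in mind looks roughly like this
(VG/LV names, sizes, and the mount point are made up for illustration):

  # initial build-out: one PV, one VG, one LV, one xfs FS
  pvcreate /dev/sdb2
  vgcreate vg_data1 /dev/sdb2
  lvcreate -L 900G -n lv_data1 vg_data1
  mkfs.xfs /dev/vg_data1/lv_data1

  # later, grow by folding another partition in as a new PV
  pvcreate /dev/sdb3
  vgextend vg_data1 /dev/sdb3
  lvextend -L +900G /dev/vg_data1/lv_data1
  xfs_growfs /srv/data1    # xfs grows online, while mounted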
Between typos and missteps, I've had to build up and tear down the
LV pieces several times.  And now I get messages such as

  Aborting - please provide new pathname for what used to be /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0-part6
or
  Device /dev/sdb6 not found (or ignored by filtering).
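
My guess is that the first message means LVM's device cache still
points at the old path, and the second that the partition is being
rejected by the filter in /etc/lvm/lvm.conf, so I've been
sanity-checking with something like:

  grep filter /etc/lvm/lvm.conf    # make sure nothing rejects /dev/sdb*
  pvscan -vvv                      # extra verbosity shows filtering decisions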

Each time I clean it all up, wipe out all the files under
/etc/lvm/*/* (including cache/.cache), and try again; still broken.
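
By "clean it all up" I mean roughly the following (same illustrative
names as above):

  lvremove /dev/vg_data1/lv_data1    # LVs first,
  vgremove vg_data1                  # then the VG,
  pvremove /dev/sdb2                 # then the PVs
  rm -f /etc/lvm/archive/* /etc/lvm/backup/* /etc/lvm/cache/.cache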

I tried rebooting.  Still broken.

How can I fix this short of a full reinstall?
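
Would it be safe to just zero the stale LVM labels off the partitions
and start over?  I was thinking of something along these lines
(untested; /dev/sdb6 as the example since that's the one complaining):

  dd if=/dev/zero of=/dev/sdb6 bs=1M count=1    # wipe old label + metadata area
  pvcreate -ff /dev/sdb6                        # force a fresh PV on top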

The whole LVM system feels really kludgy.  I suppose there's
no better alternative at this time?

Thanks,
Miles