Miles O'Neal wrote:
> recap: new 64 bit Intel quadcore server with Adaptec SATA RAID
> controller, 16x1TB drives.  1 drive JBOD for OS.  The rest are
> set up as RAID6 with 1 spare.  We've tried EL5.1 + all yum updates,
> and EL5.2 stock.  We can't get /dev/sdb1 (12TB) stable with ext2
> or xfs (ext3 blows up in the journal setup).
> 
> So I decided to carve /dev/sdb up into a dozen partitions and
> use LVM.  Initially I want to use one partition per LV and make
> each of those one xfs FS.  Then as things grow I can add a PV
> (one partition per PV) into the appropriate VG and grow the LV/FS.
> Between typos and missteps, I've had to build up and tear down the
> LV pieces several times.  And now I get messages such as
> 
>   Aborting - please provide new pathname for what used to be /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0-part6
> or
>   Device /dev/sdb6 not found (or ignored by filtering).
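That second message usually means either the device filter in
/etc/lvm/lvm.conf is rejecting the partition, or the kernel is still
holding the old partition table from before you re-carved the disk.
Worth checking something like this (untested here; adjust to your setup):

```shell
# After repartitioning /dev/sdb, make the kernel reread the table,
# then rescan for physical volumes:
partprobe /dev/sdb
pvscan

# Also check the device filter in /etc/lvm/lvm.conf -- the permissive
# default accepts every block device:
#     filter = [ "a/.*/" ]
```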
> 
> I clean it all up, wipe out all the files in /etc/lvm/*/*
> (including cache/.cache), and try again, still broken.
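Deleting the files under /etc/lvm by hand leaves device-mapper state
and on-disk PV labels behind, which is probably why it stays broken.
A fuller teardown looks roughly like this -- the VG name and device
are placeholders, so double-check before wiping anything:

```shell
# Tear down in order: LVs first, then the VG, then the PV labels
# (VG "data" and /dev/sdb6 are hypothetical -- substitute your own):
lvremove -f data
vgremove -f data
pvremove -ff /dev/sdb6

# Drop any stale device-mapper maps left behind:
dmsetup remove_all

# If a PV label still lingers, zero the start of the partition
# (EL5's util-linux has no wipefs, so dd is the fallback):
dd if=/dev/zero of=/dev/sdb6 bs=512 count=2048
```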
> 
> I tried rebooting.  Still broken.
> 
> How can I fix this short of a full reinstall?
> 
> The whole LVM system feels really kludgy.  I suppose there's
> not a better alternative at this time?
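For what it's worth, the grow path you describe is the standard LVM
one. A sketch, with all of the VG, LV, mount point, and partition
names being hypothetical:

```shell
# Assuming VG "data", LV "projects" mounted at /srv/projects, and
# /dev/sdb7 as the next unused partition:
pvcreate /dev/sdb7                        # label the partition as a PV
vgextend data /dev/sdb7                   # add it to the volume group
lvextend -l +100%FREE /dev/data/projects  # grow the LV into the new space
xfs_growfs /srv/projects                  # xfs grows online while mounted
```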

How large a filesystem are you trying to create?
What blocksize are you using?
What research have you done?

> 
> Thanks,
> Miles
> 


-- 

Cheers
John

-- Advice
http://webfoot.com/advice/email.top.php
http://www.catb.org/~esr/faqs/smart-questions.html
http://support.microsoft.com/kb/555375

You cannot reply off-list:-)