SCIENTIFIC-LINUX-USERS Archives

April 2014

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From:  Jeff Siddall <[log in to unmask]>
Date:  Fri, 4 Apr 2014 13:29:42 -0400

On 04/04/2014 01:17 PM, CS_DBA wrote:
> Hi All;
>
> We have a server running Scientific Linux 6.4 with a large 4.1 TB
> RAID 10 volume.
>
> We keep all of our KVM VMs on this volume.
>
> Today we found that we are unable to access the VMs.
>
> After some checking we found errors such as the ones below in
> /var/log/messages.
>
> We unmounted the RAID volume (umount /dev/md0) thinking we could run
> some diagnostics and maybe fix it, however I'm less than up to speed
> on managing RAID volumes...
>
> Can someone help me debug this? How do I run a "check" on a RAID volume?

cat /proc/mdstat

will tell you the state of the array(s).  Run that and report back.
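
In a healthy array every member shows "U" in the status brackets; a
failed member shows "_" and is marked "(F)".  As a rough illustration
(device names and sizes here are made up, yours will differ), a
degraded RAID10 might look something like:

  md0 : active raid10 sda1[0] sdb1[1] sdc1[2] sdd1[3](F)
        4294434816 blocks 512K chunks 2 near-copies [4/3] [UUU_]

As for running a "check": assuming the usual md sysfs interface is
available on your kernel, you can start a consistency check with

  echo check > /sys/block/md0/md/sync_action

and watch its progress in /proc/mdstat, but hold off on that until we
know what state the array is actually in.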

The errors in the log imply that the drive or controller at ata5.01 
(ATA port 5, device 1) has problems.
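
If you want to pin down which physical disk that is, something like

  dmesg | grep -i 'ata5'

should show which /dev/sdX the kernel attached on that port, and
(assuming smartmontools is installed)

  smartctl -a /dev/sdX

will show its SMART health.  The sdX name is a placeholder; substitute
whatever device dmesg reports.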

The EXT4-fs error implies the underlying device (md0) cannot be read.

Remember the "0" in RAID10 means that both of the underlying RAID1 
arrays need to be working or you won't be able to use the RAID0 stripe 
across them.  Unfortunately, this also implies that one of your RAID1 
arrays is unreadable.
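
Once you have the mdstat output, something like

  mdadm --detail /dev/md0

(or mdadm --detail on each underlying RAID1 md device, if your RAID10
is actually nested RAID1+0) will tell you exactly which members have
been kicked out.  I'm assuming md0 is the array in question since
that's what you unmounted; adjust the device name if your layout is
different.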

Jeff
