SCIENTIFIC-LINUX-USERS Archives

September 2012

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

From: "Steven J. Yellin"
Date: Mon, 17 Sep 2012 00:22:53 -0700

     I believe this is how /proc/mdstat should be interpreted.
Consider, for example,

  md2 : active raid1 sda3[0]
         2096384 blocks [2/1] [U_]

The device is /dev/md2.  It is raid1, meaning partitions on two disk 
drives mirror each other.  One of the two is /dev/sda3; the other isn't 
listed because it has failed or dropped out.  If the second disk were 
working, it would probably be /dev/sdb3, and the md2 line would also 
show it as the second member of the mirror, "sdb3[1]".  The mirrored 
partitions have 2096384 blocks; mdstat counts in 1 KiB blocks, so that 
is about 2 GB.  The "[2/1]" means md2 is configured for 2 devices but 
only 1 is currently active.  The "[U_]" shows the state of each member 
in order: "U" marks a member that is up and in sync, and "_" marks one 
that is missing or failed, so a healthy md2 would show "[UU]".
     You can have a hot spare.  If the md2 line read

  md2 : active raid1 sdc3[2] sdb3[1] sda3[0]

then the mirror would still consist of sda3 and sdb3.  You can tell 
sdc3 is the spare from its "[2]": only members [0] and [1] are needed 
for the two-way mirror, and a higher-numbered device is a spare.
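
     Adding a spare is the same --add, issued while both halves of the 
mirror are already active; a sketch, again assuming the names above:

  # With md2 healthy, an added device becomes a hot spare
  mdadm /dev/md2 --add /dev/sdc3

  # If a member later fails, the spare is pulled in automatically and
  # mdstat shows it rebuilding
  cat /proc/mdstat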

Steven Yellin

On Sun, 16 Sep 2012, Todd And Margo Chester wrote:

> On 09/10/2012 12:41 PM, Jeff Siddall wrote:
>> On 09/10/2012 02:52 PM, Todd And Margo Chester wrote:
>>> On 09/10/2012 10:05 AM, Jeff Siddall wrote:
>>>> IME software RAID1 is very reliable
>>> 
>>> Have you had a software RAID failure? What was the alert?
>>> And, what did you have to do to repair it?
>> 
>> Never had a "software" failure.  I have had [too] many hardware
>> failures, and those show up with the standard MD email alerts (example
>> attached below).
>> 
>> Jeff
>> 
>> ----------
>> 
>> This is an automatically generated mail message from mdadm
>> 
>> A DegradedArray event had been detected on md device /dev/md1.
>> 
>> Faithfully yours, etc.
>> 
>> P.S. The /proc/mdstat file currently contains the following:
>> 
>> Personalities : [raid1] [raid6] [raid5] [raid4]
>> md3 : active raid1 sda5[0]
>>        371727936 blocks [2/1] [U_]
>> 
>> md0 : active raid1 sda1[0]
>>        104320 blocks [2/1] [U_]
>> 
>> md2 : active raid1 sda3[0]
>>        2096384 blocks [2/1] [U_]
>> 
>> md1 : active raid1 sda2[0]
>>        16779776 blocks [2/1] [U_]
>> 
>> unused devices: <none>
>> 
>
> Hi Jeff,
>
>   Thank you.
>
>   I do not understand what I am looking at.  All four
> entries are RAID1, meaning two drives in the array.
> But what two drives go together?
>
>   What does the "[U_]" stand for?  Up?  Should
> md1 be [D_] for down?
>
>   What does [2/1] stand for?
>
>   And, just out of curiosity, is it possible to have
> a hot spare with the above arrangement?
>
> -T
>
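
P.S.  The alert mail Jeff quoted comes from mdadm's monitor mode.  A 
minimal sketch of the usual setup (the mail address is a placeholder):

  # /etc/mdadm.conf -- where alert mail should go
  MAILADDR root@localhost

  # Run the monitor; on SL/RHEL the mdmonitor service starts this at boot
  mdadm --monitor --scan --daemonise

  # Send a TestMessage alert for each array to verify mail delivery
  mdadm --monitor --scan --test --oneshot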
