First of all, what do you mean by raid 10? Do you mean raid 1+0, or striped raid 5s, which is sometimes called raid 10, raid 5+0, or raid 50?

In either case, the drive numbers don't add up.
For raid 1+0 with hot spares you need an even number of drives in the array, plus your spares.

Also note that raid 1+0 hurts your performance unless you have 4 or more drives in the volume; with just 2 drives, classic raid 1 is faster.
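
For reference, here's a sketch of the sort of layout I mean, using mdadm's raid10 personality with the near-2 layout (on 4 drives this behaves like classic 1+0). The /dev/sdX names are placeholders; substitute your own devices:

    # 4 data drives plus 1 hot spare; n2 = near layout, 2 copies of each block
    mdadm --create /dev/md0 --level=10 --layout=n2 \
          --raid-devices=4 --spare-devices=1 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

Note the even number of raid devices; the spare just sits idle until a member fails.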



-- Sent from my HP Pre3


On Jun 1, 2013 2:09 AM, Steve Bergman wrote:

Hi,

I have a server that I'm getting ready to put into production. It has 4 SATA drives, which I've configured as a 2-drive raid1 array with 2 hot spares. I'd like the performance of raid10, but I need a little better fault tolerance, and devoting all 4 drives to raid1 seems like fault-tolerance overkill. It's come to my attention that mdadm supports a raid1e-like "raid10" mode with 3 drives, and that I have a choice of a near or far layout. A 3-drive raid10 with 1 hot spare sounds perfect: being able to survive a second drive failure once the spare has rebuilt is just what I need, and the improved performance sounds good.
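
For concreteness, the create command I have in mind looks something like this (the partition names are just illustrative):

    # 3-drive raid10, near layout with 2 copies of each block, plus 1 hot spare
    mdadm --create /dev/md0 --level=10 --layout=n2 \
          --raid-devices=3 --spare-devices=1 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

(--layout=f2 would give the far variant instead.)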

The workload is mostly read. But the main performance concern is *lots* of small random writes when I rebuild large Cobol C/ISAM files. All in all, it looks appealing enough that I'd like to reinstall using this mode. The filesystem would be ext4, as long as this doesn't introduce some compelling reason to switch to XFS. My initial inclination is to stick with the near configuration.
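
If I do go this route, I assume I'd also want to align ext4 to the array geometry at mkfs time. Something like the following, where the numbers are purely illustrative and would need to be recalculated from the actual chunk size (stride = chunk size / filesystem block size):

    # assumes a 512K chunk and 4K blocks: stride = 512/4 = 128
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0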

But I notice that raid10 is marked as experimental in the vanilla 2.6.32 kernel. And I figured I'd better run this by the list, since I have no experience with this configuration, and I'll be living with the decision for years to come.
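
I can at least confirm that support is compiled in on this box; assuming the distro ships the kernel config under /boot, something like:

    # check that the running kernel was built with the raid10 personality
    grep -i raid10 /boot/config-$(uname -r)
    modprobe raid10 && cat /proc/mdstat

(The "Personalities" line in /proc/mdstat should list raid10 once the module is loaded.)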

Thanks for any advice,
Steve Bergman