SCIENTIFIC-LINUX-USERS Archives

April 2015

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From:
Konstantin Olchanski <[log in to unmask]>
Reply To:
Konstantin Olchanski <[log in to unmask]>
Date:
Fri, 17 Apr 2015 18:59:06 -0700
Content-Type:
text/plain
Parts/Attachments:
text/plain (42 lines)
On Mon, Apr 13, 2015 at 03:28:39PM -0700, jdow wrote:
> The 3Ware RAID cards I have vastly outstrip the motherboard built in
> Intel RAID implementations for a RAID 5 setup. (I don't consider
> RAID 1 to be economically sensible for most uses.) A four disk RAID
> 5 SSD configuration can be breathtakingly fast, too.

In my experience, all RAID cards have rather limited total bandwidth.

For example:
- Consider 6TB disks; each can read data at 160 Mbytes/sec when connected to an on-board (Intel) SATA port.
- Connect 8 of them to an 8-port RAID card.
- Sure enough, through the RAID card you can read each disk at that speed.
- Now try to read all the disks at the same time (8 programs reading from 8 disks).
- What I see is a grand total of around 500-700 Mbytes/sec coming in, instead of 8*160 = 1280 Mbytes/sec.
- (I observe the same with writing instead of reading.)
- What I have measured is the RAID card's internal bandwidth (which includes the PCIe bandwidth,
  so you want at least a PCIe x4 link, preferably x8 or x16).
- Some vendors are honest and write this bandwidth down in their spec sheets (making it
  obvious that these RAID cards were designed in the days of 1TB disks that could never
  reach 100 Mbytes/sec read or write).
- Now if you put RAID6 on top of this, the rate goes down again.
- If you use hardware RAID6 instead of software RAID6, maybe you gain some bandwidth back;
  I have no data for this.
- If instead I connect all the disks to on-board ports (no RAID card), using the sadly
  discontinued ASUS Z87 mobo with 10 SATA ports (6 Intel + 4 Marvell), I do see
  total bandwidths in the 1000 Mbytes/sec range (those measurements were done with 8x4TB disks).
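The arithmetic above can be sketched as a quick back-of-the-envelope check (the numbers are the ones from my example, not measurements of any particular card):

```python
# Rough aggregate-bandwidth check for the 8-disk example above.
# Figures come straight from the text, not from a specific RAID card.

disks = 8
per_disk_mb_s = 160  # single-disk sequential read via on-board SATA

theoretical = disks * per_disk_mb_s          # what 8 disks could deliver
observed_low, observed_high = 500, 700       # totals seen through the RAID card

print(f"theoretical aggregate: {theoretical} Mbytes/sec")
print(f"observed through card: {observed_low}-{observed_high} Mbytes/sec")
print(f"efficiency: {observed_low / theoretical:.0%} to {observed_high / theoretical:.0%}")
```

So the card delivers only about 40-55% of what the disks themselves can sustain.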

All this proves is that Intel SATA ports on the Intel internal interconnect
are much faster than SATA ports behind a not-super-fast microprocessor
on a RAID card connected to the system by an x4 or x8 PCIe bus (I have not seen x16 RAID cards yet).
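For reference, the raw PCIe link is usually not the limit at these rates. A sketch using the standard per-lane spec figures (500 Mbytes/sec per lane for PCIe 2.0 after 8b/10b encoding, roughly 985 Mbytes/sec for PCIe 3.0 after 128b/130b), which are spec values, not card measurements:

```python
# Approximate usable PCIe bandwidth per link width, after line encoding.
# Per-lane numbers are the standard spec figures (assumption: no protocol overhead).

per_lane_mb_s = {"gen2": 500, "gen3": 985}

for gen, lane in per_lane_mb_s.items():
    for width in (4, 8, 16):
        print(f"PCIe {gen} x{width}: ~{lane * width} Mbytes/sec")
```

Even a gen2 x4 link (~2000 Mbytes/sec) sits well above the 500-700 Mbytes/sec totals observed, which points at the card's own processor and firmware, not the bus, as the bottleneck.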

Speaking of a theoretical 4xSSD hardware RAID5 configuration, I would be surprised
if it reached 1000 Mbytes/sec read/write speeds (assuming 500 Mbytes/sec SSDs).
Software RAID5 with 4xSSDs connected to on-board SATA (with a 4GHz CPU)
would probably get all the way to 2000 Mbytes/sec (assuming 500 Mbytes/sec SSDs).
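Those two estimates can be made explicit with a simple model (an assumption on my part, not a benchmark): the raw aggregate is N disks times the per-disk rate, while sequential RAID5 reads skip the rotating parity blocks, so a common rule-of-thumb ceiling is (N-1) times the per-disk rate:

```python
# Back-of-the-envelope RAID5 throughput ceilings for 4 x 500 Mbytes/sec SSDs.
# Simple model only: ignores stripe geometry, queueing, and CPU cost of parity.

n_disks = 4
ssd_mb_s = 500

raw_aggregate = n_disks * ssd_mb_s             # all four SSDs streaming flat out
raid5_read_ceiling = (n_disks - 1) * ssd_mb_s  # parity blocks carry no user data

print(f"raw aggregate:      {raw_aggregate} Mbytes/sec")
print(f"RAID5 read ceiling: {raid5_read_ceiling} Mbytes/sec")
```

The 2000 Mbytes/sec figure above is the raw aggregate; with parity overhead counted, something closer to 1500 Mbytes/sec of user data is the usual sequential ceiling.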

-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
