Steve, RAID 5 is typically slower for writes than RAID 1. I think it
is supposed to be just as fast for reads. But of course RAID 10
generally gives better performance than either RAID 1 or 5, for both
reads and writes. :)
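If you want to see why, the classic rule of thumb is that a small
random write costs 2 disk I/Os on RAID 1 or 10 (one per mirror) but 4
on RAID 5 (read old data, read old parity, write both back), while
reads can be spread over every spindle. Here's a back-of-the-envelope
sketch in Python; the 150 IOPS per disk is an assumed figure, not a
benchmark:

    # Rough theoretical model of small random I/O; real controllers,
    # caches, and stripe sizes will move these numbers around a lot.
    def raid_iops(level, n_disks, disk_iops=150):   # 150 IOPS/disk assumed
        reads = n_disks * disk_iops                 # reads hit all spindles
        penalty = {"raid1": 2, "raid5": 4, "raid10": 2}[level]
        writes = n_disks * disk_iops // penalty     # each write costs `penalty` I/Os
        return reads, writes

    for level, n in [("raid1", 2), ("raid5", 4), ("raid10", 4)]:
        r, w = raid_iops(level, n)
        print("%-6s on %d disks: ~%d read IOPS, ~%d write IOPS" % (level, n, r, w))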
Incidentally, I've never seen any good performance comparison for more
complicated RAID setups using more disks. Say you wanted one volume
with the fastest IO you could get. The traditional answer is, "Buy 4
of the very fastest SCSI disks you can get, and run them in RAID 10."
But, that is sort of a silly answer, because it assumes you are using
only 4 disks, but in reality those 4 15,000 RPM SCSI disks
might cost as much as 12 7,200 RPM IDE disks. So is there some more
complicated RAID configuration that would give you faster IO using
those 12 IDE disks? I'd bet there is, but I don't know.
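For what it's worth, the same rule-of-thumb arithmetic suggests I'd
win that bet. Assuming, say, ~180 random IOPS for a 15,000 RPM SCSI
disk and ~75 for a 7,200 RPM IDE disk (both invented figures), and
RAID 10 on both sides:

    # Same model, plugged into the 4-fast-vs-12-cheap question.
    # All per-disk IOPS figures are assumptions, not benchmarks.
    scsi = 4 * 180     # 4 x 15,000 RPM SCSI disks, ~180 IOPS each
    ide = 12 * 75      # 12 x 7,200 RPM IDE disks, ~75 IOPS each
    print("4 x 15k SCSI, RAID 10: ~%d read, ~%d write IOPS" % (scsi, scsi // 2))
    print("12 x 7.2k IDE, RAID 10: ~%d read, ~%d write IOPS" % (ide, ide // 2))
    # -> ~720/360 vs ~900/450: on raw spindle count the dozen slow
    #    disks win, at least until the controller or bus saturates.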
Basically, for different types of disks (size, speed, and cost), what
are the optimum RAID configurations at various trade-off
points of storage vs. speed vs. cost? If anyone's done a good study on
that anywhere, I'd like to see it.
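Until someone does, you can at least sketch the trade-off space on
paper. A toy enumeration along the same lines, where every capacity,
IOPS, and dollar figure is a made-up placeholder:

    # Toy enumeration of the storage/speed/cost trade-off space.
    # Every capacity, IOPS, and dollar figure here is a placeholder.
    disks = {"15k SCSI": dict(gb=36, iops=180, usd=400),
             "7.2k IDE": dict(gb=120, iops=75, usd=100)}
    penalty = {"raid5": 4, "raid10": 2}

    def usable_gb(level, n, gb):
        return (n - 1) * gb if level == "raid5" else (n // 2) * gb

    for kind, d in disks.items():
        for level in penalty:
            for n in (4, 12):
                cap = usable_gb(level, n, d["gb"])
                w = n * d["iops"] // penalty[level]
                print("%2d x %s %-6s: %5d GB usable, ~%4d write IOPS, $%d"
                      % (n, kind, level, cap, w, n * d["usd"]))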
Of course if you are mounting all the disks locally, there are
practical limits to how many you can stuff into one box. But once
you start talking about stand-alone storage boxes talking over a
HyperSCSI, iSCSI, or Fibre Channel SAN to your other servers, the
possibilities start looking much more open-ended...