[vdr] mdadm software raid5 arrays?
henrik-vdr at prak.org
Wed Nov 18 18:28:41 CET 2009
On Tue, Nov 17, 2009 at 03:34:59PM +0000, Steve wrote:
> Alex Betis wrote:
>> I don't record much, so I don't worry about speed.
> While there's no denying that RAID5 *at best* has a write speed
> equivalent to about 1.3x a single disk and if you're not careful with
> stride/block settings can be a lot slower, that's no worse for our
> purposes than, erm, having a single disk in the first place. And reading
> is *always* faster...
Thanks for putting some numbers out there. My estimate was more theory
than measurement.
> Example. I'm not bothered about write speed (only having 3 tuners) so I
> didn't get too carried away setting up my 3-active disk 3TB RAID5 array,
> accepting all the default values.
> Rough speed test:
> #dd if=/dev/zero of=/srv/test/delete.me bs=1M count=1024
> 1073741824 bytes (1.1 GB) copied, 13.6778 s, 78.5 MB/s
> #dd if=/srv/test/delete.me of=/dev/null bs=1M count=1024
> 1073741824 bytes (1.1 GB) copied, 1.65427 s, 649 MB/s
Depending on the amount of RAM, the cache can screw up your results
quite badly. For something a little more realistic try:
sync; dd if=/dev/zero of=foo bs=1M count=1024 conv=fsync
The first sync writes out fs cache so that you start with a
clean cache and the "conv=fsync" makes sure that "dd" doesn't
finish until it has written its data back to disk.
After the write you need to make sure that your read cache is not still
full of the data you just wrote. 649 MB/s from a 3-disk array would mean
over 215 MB/s per disk. That sounds a bit too high for spinning disks.
Try to read something different (and big) from that disk before running
the second test.
> Don't know about anyone else's setup, but if I were to record all
> streams from all tuners, there would still be I/O bandwidth left.
> Highest DVB-T channel bandwidth possible appears to be 31.668Mb/s, so
> for my 3 tuners equates to about 95Mb/s - that's less than 12 MB/s. The
> 78MB/s of my RAID5 doesn't seem to be much of an issue then.
Well, I guess DVB-S2 has higher bandwidth. (numbers anybody?)
But more importantly: the rough speed tests that you used were run under
zero I/O load.
I/O load can have some nasty effects, e.g. when the heads have to jump
back and forth between the area you are reading from and the area you
are recording to. With one read stream and several write streams you
could in theory tune the filesystem's allocation strategy so that free
areas near your read region are used for writing (though I doubt that
anybody ever implemented this strategy in a mainstream fs). But when you
are reading several streams, even caching, smart I/O schedulers, and
NCQ cannot completely mask the fact that in raid5 you basically have
one set of read/write heads.
In a raid1 setup you have two sets of heads that you can work with.
(Or more if you are willing to put in more disks.)
Basically, raid5 and raid1+0 scale differently as you add disks.
If you put more disks into a raid5 you gain
* more capacity (each additional disk counts fully) and
* more linear read performance.
If you put more disks into a raid1+0, it depends on where you put the
additional disks to work.
If you grow the _number of mirrors_ you get
* more read performance (linear and random)
* more redundancy
If you grow the _number of stripes_ you get
* more read and write performance (linear and random)
* more capacity (but only half of the additional disks count for 2-disk mirror sets)
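To make the capacity difference concrete, a quick back-of-the-envelope
sketch (the 1000 GB disk size and 4-disk count are purely illustrative):

```shell
disk_gb=1000   # illustrative disk size
n=4            # total number of disks
# raid5: one disk's worth goes to parity, every other disk counts fully.
echo "raid5,   $n disks: $(( (n - 1) * disk_gb )) GB"
# raid1+0 with 2-disk mirror sets: only half the disks count.
echo "raid1+0, $n disks: $(( n / 2 * disk_gb )) GB"
```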