[asterisk-users] Software RAID1 or Hardware RAID1 with Asterisk
Gordon Henderson
gordon+asterisk at drogon.net
Tue Aug 21 03:17:06 CDT 2007
On Tue, 21 Aug 2007, Vidura Senadeera wrote:
> Dear All,
>
> I would like to get community's feedback with regard to RAID1 ( Software or
> Hardware) implementations with asterisk.
>
> This is my setup
>
> Motherboard with SATA RAID1 support
> CENT OS 4.4
> Asterisk 1.2.19
> Libpri/zaptel latest release
> 2.8 Ghz Intel processor
> 2 80 GB SATA Hard disks
> 256 MB RAM
> digium PRI/E1 card
>
> Following are the concerns I am having
>
> I'm planning to put this asterisk server in a production environment which
> has an E1 connection to the asterisk server, approximately
> 20 concurrent calls, music on hold, and voice mail boxes.
>
> 1. If I use Software RAID, what would be the impact to my deployment?
> ( problems that I have to face with regard to the call flow )
> 2. If I use Hardware based RAID 1, what would be the impact to the system?
> 3. In your practical experience, which of the two options is the better
> solution?
With my other hat on, I build and maintain many servers with disk
capacities ranging from 80GB to over 6TB, all using Linux software RAID.
I've been using Linux s/w RAID for over 8 years now.
So with RAID-1 done in hardware, the impact to the system, CPU, etc.
should be no more (or less) than running a single SCSI or SATA drive. You
write the data over the (PCI) bus once and the hardware takes care of
writing it to both drives behind your back. Similarly for reads (where it
might read from just one drive, or alternate between the two), you only
see one transaction over the PCI bus.
You do (sometimes) need the hardware RAID controller to be supported by
Linux and this is a weak area. Some controllers just look like a standard
drive, so they are transparent to the system, but then you need to use
either the BIOS utilities to set it up in the first place, or (typically)
a Windows utility, although some controllers are now being supported by
Linux with user-land tools to manage and check the arrays.
Doing it in software requires double the PCI bandwidth for writes, but the
same as a single drive or hardware controller for reads. AIUI, the current
software RAID-1 implementation reads alternately from the two disks, and
writes to both. The overhead in terms of CPU power is minimal - write the
same block twice, and if the hardware is good, both writes can be
transferred over the PCI bus quickly, into the cache on the drives, and
the writes then take place in parallel. Performance-wise it's really no
worse than a single drive (and it's important to note that it's no better
than a single drive on reads either, despite many threads on the
linux-raid list suggesting otherwise!)
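Setting up such a mirror is a one-liner with mdadm. A sketch, assuming two
SATA disks - the device names here are examples, not taken from the poster's
machine, and --create is destructive, so double-check yours first:

```shell
# Mirror two partitions with Linux md RAID-1 (example device names).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat            # watch the initial resync progress
mkfs.ext3 /dev/md0          # then make a filesystem on the md device as usual
```

The initial resync runs in the background; the array is usable immediately.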
RAID-1 doesn't require parity calculations, so the software overhead
really is quite small (especially when you compare it to the relatively
huge times it takes to actually get the data to/from the disks)
So things that are important: Make sure the hardware to each drive is as
independent as possible. Hard to do these days as there is probably only
one SATA controller chip on the motherboard. You also need to see what
happens when a drive dies - is it going to crowbar the entire SATA chip
and block the other drive? Is the driver going to recognise the failure
quickly enough? And so on. (Some early SATA drives weren't good at this.)
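You can rehearse exactly this failure case before going into production.
A sketch using mdadm's fail/remove/add modes - md5/hdc5 here match the
layout shown further down, but adjust to your own devices:

```shell
# Simulate a drive failure on one half of a mirror, then repair it.
mdadm /dev/md5 --fail /dev/hdc5      # mark the member as failed
mdadm /dev/md5 --remove /dev/hdc5    # pull it out of the array
cat /proc/mdstat                     # shows [2/1] [U_] while degraded
mdadm /dev/md5 --add /dev/hdc5       # re-add it (or add the replacement)
```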
And the "usual" - make sure all the hardware has its own interrupts.
For the absolute maximum performance (and minimum overhead) you need a
motherboard with multiple PCI buses - put the disks on one bus, the PRI
card on another.
In terms of disk b/w needed - g711 is 64Kbit/sec, so even stored as 16-bit
WAV that's 16KB/sec per call, and 20 calls streaming to voicemail is only
some 320KB/sec. A single modern drive ought to be able to sustain 60MB/sec
reads or writes, so there is plenty of headroom, as long as asterisk is
relatively sensible about buffering disk writes/reads (which I think it is)
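As a back-of-envelope check (figures here are assumptions: g711 is
64 kbit/s on the wire, and 16-bit WAV storage at 8kHz works out to
16KB/sec per call):

```shell
# Disk bandwidth for 20 simultaneous voicemail recordings.
calls=20
kb_per_call=16                       # 16-bit 8kHz WAV, in KB/sec
total=$((calls * kb_per_call))       # total KB/sec hitting the disk
echo "${total} KB/sec"               # 320 KB/sec
drive=$((60 * 1024))                 # a ~60MB/sec drive, in KB/sec
echo "headroom: $((drive / total))x" # headroom: 192x
```

Even tripling the estimate for filesystem overhead leaves two orders of
magnitude to spare.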
So I'd say "go for it", but if possible take the time to build a custom
compiled kernel for your exact hardware, with no modules loaded other than
the Zap/TDM, etc., ones, and at the BIOS level turn off everything you
won't be using - eg. on-board sound, the 2nd network port, USB (if you're
not using it, don't enable it!) and so on.
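Besides the BIOS, you can stop unwanted drivers loading at boot from
userland. A sketch - the module names are examples, and on CentOS 4 era
systems the usual place for this is /etc/modprobe.conf:

```shell
# See what the stock kernel actually loaded, then neuter the ones you
# don't need (the "install ... /bin/true" idiom makes modprobe a no-op).
lsmod
echo 'install snd-intel8x0 /bin/true' >> /etc/modprobe.conf
echo 'install ehci-hcd /bin/true'     >> /etc/modprobe.conf
```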
And I'd also say "go for it" because I have similarly spec'd servers doing
similar tasks, also running asterisk. I won't put a server in a remote
data centre these days without it either booting off flash or using at
least RAID-1.
Remember to put your swap on RAID-1 too.
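A sketch of swap on md, matching the md2 device in the listing below -
that way a dead disk can't take out your swap and panic the box:

```shell
# Swap on a RAID-1 md device (md2 here mirrors hda2 and hdc2).
mkswap /dev/md2
swapon /dev/md2
# and make it permanent in /etc/fstab:
#   /dev/md2   swap   swap   defaults   0 0
```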
Here is one of my servers in a similar setup to yours:
$ cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 hdc1[1] hda1[0]
248896 blocks [2/2] [UU]
md2 : active raid1 hdc2[1] hda2[0]
995904 blocks [2/2] [UU]
md3 : active raid1 hdc3[1] hda3[0]
2000000 blocks [2/2] [UU]
md5 : active raid1 hdc5[1] hda5[0]
38081984 blocks [2/2] [UU]
md6 : active raid1 hdc6[1] hda6[0]
38708480 blocks [2/2] [UU]
unused devices: <none>
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 236M 38M 186M 17% /
tmpfs 249M 0 249M 0% /dev/shm
/dev/md3 1.9G 1.2G 643M 65% /usr
/dev/md5 36G 29G 5.3G 85% /var
/dev/md6 37G 30G 4.7G 87% /archive
$ cat /proc/swaps
Filename    Type        Size    Used    Priority
/dev/md2    partition   995896  326712  -1
It has 2 x 80GB IDE drives and I've partitioned them (because that's my
preference), but one thing I do is name the md (RAID) devices after the
partition numbers, so md1 is /dev/hda1 plus /dev/hdc1 and so on. Makes
life easy when changing a drive.
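With that naming convention, swapping a dead disk is mechanical. A sketch,
assuming hda died and hdc is the survivor (reverse the devices to suit):

```shell
# Clone the surviving drive's partition table onto the replacement,
# then re-add each partition to its matching md array.
sfdisk -d /dev/hdc | sfdisk /dev/hda
for n in 1 2 3 5 6; do
    mdadm /dev/md$n --add /dev/hda$n
done
cat /proc/mdstat            # watch the rebuilds run
```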
Gordon
Ps. There appears to be asterisk version 1.2.24 now, although I must have
missed the announcements from Digium about it...
Pps. Stick another 256MB of RAM in it if possible. It'll only cost pennies
and might help with buffering stuff like MoH - or even copy MoH, prompts,
etc. into a RAM disk at boot time...
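A sketch of the RAM disk idea - the path and size are guesses, so point
it at wherever your MoH files actually live:

```shell
# Serve MoH from tmpfs so playback never touches the disks.
mkdir -p /var/lib/asterisk/mohmp3.ram
mount -t tmpfs -o size=64m tmpfs /var/lib/asterisk/mohmp3.ram
cp /var/lib/asterisk/mohmp3/* /var/lib/asterisk/mohmp3.ram/
# then point musiconhold.conf at the .ram directory
```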