[Asterisk-Users] RAID affecting X100P performance...

Carlos Hernandez carlosh at linuxservices.co.nz
Wed Jul 21 21:23:39 MST 2004


I hope this one is within context for the thread.
I have been pondering for a while about building a high-availability
Asterisk cluster.
I know it'd be a matter of having a master 'service' router selecting
from a pool of Asterisk servers (at least two), and if any of them goes
down, another would be picked up, etc.

Some of the things to look at would be:
how to keep configuration files and databases constantly mirrored?
how to share the same telephony resources, or which hardware would be
necessary to support that, or whether you'd need an external switch to
do all this?
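
For the file-mirroring part, I imagine something as simple as rsync over
SSH from a cron job could be a starting point. Just a rough sketch; the
standby hostname is made up and the paths are only the usual Asterisk
locations:

    #!/bin/sh
    # Push Asterisk config and voicemail from the active node to the
    # standby. Assumes SSH keys are already exchanged between the boxes;
    # would be run from cron on the active node (e.g. every minute).
    STANDBY=asterisk-standby        # hypothetical standby hostname

    rsync -az -e ssh --delete /etc/asterisk/ ${STANDBY}:/etc/asterisk/
    rsync -az -e ssh --delete /var/spool/asterisk/voicemail/ \
        ${STANDBY}:/var/spool/asterisk/voicemail/

I doubt that's good enough for live databases or calls in flight, though,
which is part of why I'm asking.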

If anyone has experience with this, or can give pointers, it'd be much
appreciated. Yes, you're right: with Open Source tools and affordable
hardware...

Thanks,
Carlos

Question: should I have opened a new thread for this discussion?


Scott Laird wrote:

>
> On Jul 21, 2004, at 7:01 PM, Steven Critchfield wrote:
>
>> BTW, my raid card on my Dell 2450 had this output
>> nash5:/home/critch# hdparm -tT /dev/sda
>>
>> /dev/sda:
>>  Timing buffer-cache reads:   128 MB in  0.61 seconds =209.84 MB/sec
>>  Timing buffered disk reads:  64 MB in  2.52 seconds = 25.40 MB/sec
>>
>> and I don't have Xeons, let alone another 10% to give up to deal with
>> the drives. I doubt you are getting 200 MB/sec of real data movement
>> over IDE and software RAID.
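
(For what it's worth, a crude cross-check of sustained reads, without the
buffer-cache figure muddying things, is just to time a large sequential
read; the 2 GB below is only a placeholder and should be bigger than RAM:)

    # Read 2 GB straight off the array and time it; 2048 MB divided by
    # the elapsed seconds gives the sustained rate.
    time dd if=/dev/sda of=/dev/null bs=1M count=2048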
>
>
> I don't have my notes in front of me, but I was seeing almost 200 
> MB/sec reads through bonnie with a multi-GB sample size the last time 
> I tried all of this.  Admittedly, it's been a couple years, and I was 
> pushing a *lot* of drives (16, in the last incarnation).  Heck, I was 
> getting 70-80 MB/sec with software RAID-5 over 5 SCSI drives in 1999.  
> That's quite a bit better than the 25 MB/sec you're seeing.  Like I 
> said, I haven't had good experiences with any low-cost SCSI RAID 
> cards--they all run out of CPU horsepower before your drives run out 
> of throughput or your host CPU gets loaded.
>
> Not that it really matters most of the time--most real-world jobs are 
> limited by random I/O or cost, not streaming disk throughput.  
> Particularly if networks are involved.
>
>> Comparing RAID 5 to a mirror, writes should be easier on RAID 5. Write
>> 3k of data over a 4-drive RAID 5 array and you only write 4k total, and
>> the XOR is cheap and easy to do. 3k over a mirror means 6k is written.
>> So if you do a third more slow-task work, which is more of a CPU hit?
>
>
> Do the math again.  With RAID 5, writing a small amount of data with 
> a cold cache will cost 1 read from each drive plus 1 write to two 
> drives.  For 5 drives, that's a total of 7 I/Os, which is way worse 
> than the 2 you'd need for RAID 1.  As the writes get bigger than the 
> RAID block size, reads drop to 0 and you're only paying a slight 
> overhead for RAID 5, but small writes really suck with RAID 5.
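
(Just to spell the counts out: the 7 above assumes the controller re-reads
the whole stripe on a cold cache. With the common read-modify-write
shortcut the count is lower, but a small write is still pricier than on a
mirror. A rough tally, assuming a two-way mirror:)

    # I/Os per small random write, assuming read-modify-write parity updates
    raid1_ios=2               # write the block to both mirror members
    raid5_ios=$(( 2 + 2 ))    # read old data + old parity, write new data + new parity
    echo "RAID 1: $raid1_ios I/Os per small write, RAID 5: $raid5_ios"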
>
>>> For this (and a number of other reasons), you're best off avoiding RAID
>>> 5 if you care about random I/O performance.  It can be made to go fast,
>>> but you'll need to throw a lot of cash at it.  The same amount of cash
>>> will frequently get you better performance with RAID 0+1 (or 1+0,
>>> depending on how you look at things).
>>
>>
>> Truthfully, if you want cheap, forget RAID and just keep a cold spare
>> drive. You're probably going to be down for a while anyway.
>
>
> Personally, if the server doesn't have any critical data (like DNS or 
> a non-VM Asterisk server), it's easier to keep a live spare system, 
> and plan on failing over as quickly as possible.  With a bit of design 
> work, most services that don't require a lot of state are happiest 
> this way, and it lets you get away with dirt cheap servers.
>
> If you have something like a busy file server, database, mail server, 
> voicemail server, or anything with heavyweight sessions, then it's 
> time to look at RAID.
>
>> Oddly enough, there isn't much, if any, difference these days at the
>> physical level. It is just the interface and the set of specs on the
>> interface. SCSI drives will usually give you warning of their problems.
>
>
> IDE with SMART will warn, too.  The big difference is tagged 
> queueing--almost all SCSI drives support it, and almost no IDE drives 
> do.  It lets the controller hand the drive a long list of I/Os to 
> perform, and the drive can optimize the order of the reads to best fit 
> rotational and seek latency.  In some cases, this will give you 2x the 
> performance.  That, plus the fact that you can get 15k RPM SCSI drives 
> gives SCSI a substantial per-drive advantage and an unbeatable latency 
> advantage.  For me, most of the time, IDE's cost advantage wins out, 
> though.
>
>> If it is in a business setup, it should be hotswappable if you bother
>> with raid at all.
>
>
> Hot swap is nice, when it works.  For two drives, if you can afford 
> scheduled downtime and you don't have a lot of systems to babysit, 
> then it's probably overkill.  At my last job, I rolled out hundreds of 
> 1-drive hot-swap systems, just because the local help couldn't be 
> trusted to open up a case and swap drives without breaking things.  It 
> all depends on the environment.  In most of my environments today, I 
> wouldn't bother with SCSI unless I was planning on serious amounts of 
> random disk I/O.
>
>
> Scott
>


