[asterisk-dev] Asterisk scalability (was: Improve scheduler performance under high load)

Johansson Olle E oej at edvina.net
Tue Feb 17 01:54:20 CST 2009


On 16 Feb 2009, at 19:17, Steve Murphy wrote:

> On Mon, 2009-02-16 at 08:35 +0100, Johansson Olle E wrote:
>
>> Now, can anyone start a discussion on the way we handle threads?
>> If we run on a quad-core or a system with dual quad-core CPUs, we
>> have capacity for an enormous quantity of calls, with at least one
>> thread per call. Can a modern Linux/Unix thread scheduler handle
>> 10 000 threads efficiently?
>>
>> Oh, I think I just started that discussion. Looking forward to your
>> feedback!
>> /O
>
> Olle--
>
> Wow, it's been over a year since I played with chan_sip to try to
> increase the speed at which it could process incoming calls!

[... parts removed ...]

Murf,
Thanks for the very good feedback!

So we have two different issues:

  - Call rate per second
  - The maximum number of calls

They are of course related. In my own work I've focused mostly on the
second issue, testing on new dual-core and quad-core systems from IBM
(a bit different from your single-core 32-bit system...).
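
On your scheduler question: the quick test rig I use for that part is
not Asterisk code, just a sketch - it spawns 10 000 threads that each
sleep 20 ms in a loop, like RTP reader threads, and reports how late
the wakeups come back. All the numbers are invented for illustration:

/*
 * thread_stress.c - spawn N threads that sleep 20 ms in a loop,
 * like RTP reader threads, and report average wakeup lateness.
 * Build: gcc -O2 -pthread thread_stress.c -lrt -o thread_stress
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NTHREADS   10000
#define ITERATIONS 500
#define PERIOD_NS  (20 * 1000 * 1000)   /* 20 ms, one RTP frame */

static long long total_late_ns;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    struct timespec before, after, nap = { 0, PERIOD_NS };
    long long late = 0;
    int i;

    (void) arg;
    for (i = 0; i < ITERATIONS; i++) {
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&nap, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);
        late += (after.tv_sec - before.tv_sec) * 1000000000LL
                + (after.tv_nsec - before.tv_nsec) - PERIOD_NS;
    }
    pthread_mutex_lock(&lock);
    total_late_ns += late;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    static pthread_t tid[NTHREADS];
    pthread_attr_t attr;
    int i;

    /* 10 000 default 8 MB stacks would be 80 GB of address space,
     * so shrink them before creating anything. */
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);

    for (i = 0; i < NTHREADS; i++) {
        if (pthread_create(&tid[i], &attr, worker, NULL)) {
            fprintf(stderr, "thread %d failed - check ulimits\n", i);
            exit(1);
        }
    }
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("avg wakeup lateness: %lld ns\n",
           total_late_ns / (1LL * NTHREADS * ITERATIONS));
    return 0;
}

The small stacks are the important detail - 10 000 threads at the
default 8 MB each won't even fit in a 32-bit address space.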

With SIP (for some reason I did not test IAX2 ;-) ) we clearly see
that the system dies at around 1600 simultaneous channels, trashed by
all the RTP traffic going through the p2p RTP bridge. irqbalance
fails miserably at that point, and the loss of call quality makes
every call sound like the Swedish Chef. Unusable is a good
description.
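
For reference, the knob irqbalance is fighting over is just a proc
file - you can pin a NIC interrupt to one core yourself by writing a
CPU mask to /proc/irq/<n>/smp_affinity. A trivial sketch (look up the
IRQ number in /proc/interrupts, run as root):

/*
 * irqpin.c - pin an IRQ to a single CPU, bypassing irqbalance.
 * Usage: ./irqpin <irq> <cpu>   (as root)
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    char path[64];
    FILE *f;
    int irq, cpu;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <irq> <cpu>\n", argv[0]);
        return 1;
    }
    irq = atoi(argv[1]);
    cpu = atoi(argv[2]);

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    f = fopen(path, "w");
    if (!f) {
        perror(path);
        return 1;
    }
    /* The file takes a hex bitmap of allowed CPUs, one bit per core. */
    fprintf(f, "%x\n", 1 << cpu);
    fclose(f);
    return 0;
}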

We gained a bit by switching Ethernet cards and drivers and doing some
optimization, but it wasn't much - just a few more calls.

Seems like I will restart these tests for a customer in March to get
more data. My conclusion so far is the same as Stephen Uhler's at the
Atlanta AstriDevCon - the IP stack is our problem here.
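
To see why I point at the IP stack: at the kernel boundary a p2p RTP
bridge is essentially the loop below per call leg - one recvfrom()
and one sendto() per 20 ms frame, so 1600 bridged channels means on
the order of 160 000 syscalls (plus interrupts and context switches)
per second before Asterisk does any real work. Addresses and ports
are invented so the sketch compiles:

/*
 * One call leg of a p2p RTP bridge, reduced to the syscalls:
 * recvfrom() in, sendto() out, 50 times a second per direction.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in local, peer, src;
    socklen_t srclen;
    char buf[1500];
    ssize_t n;
    int sock;

    sock = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(10000);              /* example RTP port */
    if (bind(sock, (struct sockaddr *) &local, sizeof(local)) < 0) {
        perror("bind");
        return 1;
    }

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(10002);               /* example peer port */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);

    for (;;) {
        srclen = sizeof(src);
        /* one syscall in per 20 ms frame... */
        n = recvfrom(sock, buf, sizeof(buf), 0,
                     (struct sockaddr *) &src, &srclen);
        if (n <= 0)
            break;
        /* ...and one syscall out - no parsing, no jitter buffer */
        sendto(sock, buf, n, 0,
               (struct sockaddr *) &peer, sizeof(peer));
    }
    close(sock);
    return 0;
}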

The FreeSWITCH dev team claims that they can push a higher RTP load
through similar machines, which surprises me. So far I have only seen
this claim in PowerPoint slides, but if they have solved it, we
should be able to do it too.

/O


