[asterisk-dev] rtp scalability improvement...

Tim Panton tim at mexuar.com
Mon Mar 20 07:40:49 MST 2006


On 20 Mar 2006, at 13:48, Roy Sigurd Karlsbakk wrote:

>>>> PS: As for the number of packets sent through the computer:
>>>>
>>>> 50 packets per second * 400 channels * 2 in/out = 40 000 packets  
>>>> per second.
>>>>
>>>> Performance dies not because of system calls, but because of the
>>>> 20 000 interrupts/sec happening at that moment.
>>>> It's called IRQ poisoning, because an IRQ switch takes longer than
>>>> a standard task switch.
>>>> <snip/>
>>>
>>> These IRQ storms only happen on crappy network hardware.
>>> My testing was done with Intel gigabit NICs with large buffers,
>>> effectively producing < 100 interrupts per second. Kernel
>>> profiling showed the time was indeed spent in system calls.
>>>
>>
>> Yet another reason IAX trunking wins: 50 channels in a trunk means
>> 49 fewer packets, hence fewer context switches.
>>
>> Is there a halfway house here, with a kernel driver that just  
>> aggregates a number of rtp
>> packets and hands them all back in one delimited buffer?
>
> That doesn't work with the current implementation of the sip  
> jitterbuffer :(

I don't know the jitterbuffer code, but couldn't you have something
like this (I've been writing Java recently, so my C is probably wrong):

#include <netinet/in.h>   /* struct in_addr */

struct packet_tag {
   struct in_addr from;
   struct in_addr to;
   unsigned short src_port;
   unsigned short len;        /* length of this packet's payload */
};

struct packet_list {
   int pkt_count;
   struct packet_tag tags[];  /* pkt_count entries; the concatenated
                                 payloads follow the tag array (C only
                                 allows one flexible array member) */
};

Each 'read' from the rtp device gives you back a 'packet_list' of all
the 'recently' received packets (from any host) to this port. As I
said, the aim is a halfway house between mmapping the raw packets and
the current situation, with IAX trunking as a guiding light....
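
A minimal sketch of what the consumer side could look like, assuming
the layout above; the fd, the 64K buffer size and the function name
are placeholders rather than a real driver interface:

#include <unistd.h>
#include <netinet/in.h>

/* Uses struct packet_list / struct packet_tag as defined above.
   One read() returns a packet_list whose tag array is followed by
   the concatenated packet payloads. */
static void drain_rtp_device(int fd)
{
   union {
      struct packet_list list;
      char raw[65536];
   } buf;

   ssize_t n = read(fd, &buf, sizeof(buf));
   if (n <= 0)
      return;

   char *payload = (char *)&buf.list.tags[buf.list.pkt_count];

   for (int i = 0; i < buf.list.pkt_count; i++) {
      struct packet_tag *tag = &buf.list.tags[i];
      /* hand one packet (tag->from, tag->src_port, payload, tag->len)
         to that channel's jitterbuffer */
      payload += tag->len;
   }
}

One read, many packets: the syscall and context-switch cost gets
amortised over everything that arrived since the last read, much like
an IAX trunk frame does on the wire.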

Tim.


Tim Panton
tim at mexuar.com





