[asterisk-users] asterisk manager interface stability

Lee Jenkins lee at datatrakpos.com
Fri May 18 18:49:46 MST 2007


Matt Florell wrote:
> On 5/18/07, Lee Jenkins <lee at datatrakpos.com> wrote:
>> Matt Florell wrote:
>> > On 5/16/07, Lee Jenkins <lee at datatrakpos.com> wrote:
>> >> Matt Florell wrote:
>> >> > The issue has more to do with the sheer amount of data passed to
>> >> > the client from within the Asterisk application when you have
>> >> > 50-100+ clients connected to the AMI on full output mode. Running
>> >> > a system with FreePBX/Trixbox especially generates vast amounts of
>> >> > output that has to be sent over every AMI connection for every
>> >> > client. This is not trivial and can result in lockups very easily,
>> >> > although this has gotten much better since the early 1.0 versions.
>> >> >
>> >> > The new Asterisk Manager web API in 1.4 is a good step, since
>> >> > sending Actions does not require an actual Telnet connection to
>> >> > the AMI. But I think that to handle larger numbers of concurrent
>> >> > connections, a separate send-only and a separate receive-only type
>> >> > of interface should be built, where Asterisk would just output all
>> >> > AMI data to a single server-like application that would then
>> >> > broadcast it to all connected clients. This would remove the
>> >> > burden of so many connections going directly into Asterisk and
>> >> > would allow for much larger scaling of AMI-type applications that
>> >> > require real-time output of AMI events.
>> >> >
>> >>
>> >> I definitely agree here personally.  Clients could connect to this
>> >> "proxy" and subscribe to only the events that are interesting or
>> >> applicable.
>> >>
>> >> > As for how to go about doing this, I can't help you there. I did
>> >> > build a very specialized version of something like this 4 years
>> >> > ago for the astGUIclient project, called the Asterisk Central
>> >> > Queue System (ACQS). It is based on 1.0 Asterisk, but it still
>> >> > works with 1.2 and 1.4. It is limited in what it does, but it does
>> >> > scale much better than using direct AMI connections.
>> >>
>> >> I've been considering writing something like this for a project I'm
>> >> thinking about that would potentially require a high number of
>> >> concurrent clients consuming AMI services.
>> >>
>> >> From your experience, does the software that you wrote require
>> >> significant CPU to cache and then dole out the kind of volume of
>> >> messages that AMI can send?
>> >
>> > One of the great parts about moving the broadcasting of AMI events
>> > outside of the Asterisk process is that the broadcast server process
>> > can exist on a separate physical server, removing any overhead from
>> > the Asterisk server.
>> >
>> > In my experience, doing the "proxy" on the same machine uses less CPU
>> > than the same number of AMI-connected clients, and it doesn't have
>> > any of the deadlock issues that can happen with a lot of direct AMI
>> > connections.
>> >
>> > For my application (ACQS), I use MySQL as a storage engine for all of
>> > the recent events received and sent, so that they can be independently
>> > queried by any client apps that need to see them.
>> >
>> > MATT---
>> >
>>
>> Neat.  So the clients use a polling model?  Individual clients then
>> query only for events that are interesting?
>>
>> Warm Regards,
>>
>> Lee
> 
> Yes, the clients only connect to the MySQL database and can query the
> events as they need to for their display.
> 
> MATT---
> 
> 
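If I follow that model, the consuming clients never open an AMI connection 
at all; they just poll the events table on an interval for rows they have 
not seen yet and filter for the event types they care about.  A rough 
sketch of what I picture is below (Python only for illustration; the table 
name, columns, credentials and polling interval are placeholders I made up, 
not the actual ACQS schema):

# Hypothetical polling consumer for an ACQS-style event store.
# Table name, columns, and credentials are invented for illustration.
import time

import pymysql

POLL_INTERVAL = 1.0                      # seconds between polls
INTERESTING = {"Newchannel", "Hangup"}   # event types this client displays


def poll_events():
    conn = pymysql.connect(host="localhost", user="ami", password="secret",
                           database="acqs", autocommit=True)
    last_id = 0
    with conn.cursor() as cur:
        while True:
            # Only fetch rows we have not seen yet.
            cur.execute(
                "SELECT event_id, event_name, event_data FROM ami_events"
                " WHERE event_id > %s ORDER BY event_id",
                (last_id,),
            )
            for event_id, name, data in cur.fetchall():
                last_id = event_id
                if name in INTERESTING:
                    print(name, data)    # update the client display here
            time.sleep(POLL_INTERVAL)


if __name__ == "__main__":
    poll_events()

That would keep all of the consumers completely off the Asterisk box, which 
I guess is the point.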

Cool.  I hadn't thought of doing it that way.  My idea was to somehow keep 
an in-memory cache of client connections: as events are received from the 
AMI, look up a hash table in memory and forward each event only to the 
client connections that have registered interest in that event.
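In code, roughly what I have in mind is the sketch below.  EventDispatcher, 
parse_ami_block and the callback shape are names I'm inventing just for 
illustration (Python only to keep it short); the one real AMI detail it 
leans on is that events arrive as "Key: Value" blocks with an "Event:" 
header:

# Rough sketch of the in-memory "interest registry" idea.  The names here
# are made up for illustration, not an existing API.
from collections import defaultdict


class EventDispatcher:
    """Hash table of event name -> client callbacks that registered interest."""

    def __init__(self):
        self._subscribers = defaultdict(set)

    def subscribe(self, event_name, callback):
        # A connected client registers interest in one event type.
        self._subscribers[event_name].add(callback)

    def unsubscribe(self, event_name, callback):
        self._subscribers[event_name].discard(callback)

    def dispatch(self, event):
        # Called once per event block read from the single AMI connection;
        # only clients subscribed to this event name get a copy.
        for callback in self._subscribers.get(event.get("Event", ""), ()):
            callback(event)


def parse_ami_block(block):
    """Turn one 'Key: Value' block from the manager socket into a dict."""
    event = {}
    for line in block.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            event[key] = value
    return event


if __name__ == "__main__":
    dispatcher = EventDispatcher()
    # Client A only cares about Hangup events.
    dispatcher.subscribe("Hangup", lambda ev: print("client A:", ev))
    dispatcher.dispatch(parse_ami_block("Event: Hangup\r\nChannel: SIP/100-1\r\nCause: 16"))
    dispatcher.dispatch(parse_ami_block("Event: Newchannel\r\nChannel: SIP/101-2"))  # dropped

A real version would still need the socket handling, the AMI login, and 
per-connection cleanup, but the hash-table lookup on the event name is the 
core of it.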

-- 

Warm Regards,

Lee




