[asterisk-app-dev] External media to bridges

Paul Belanger paul.belanger at polybeacon.com
Wed Mar 12 15:53:45 CDT 2014


On Wed, Mar 12, 2014 at 1:02 PM, Matthew Jordan <mjordan at digium.com> wrote:
> On Tue, Mar 11, 2014 at 3:18 PM, Paul Belanger
> <paul.belanger at polybeacon.com> wrote:
>> Greetings,
>>
>> I had a chance to play a little with bridges today, mostly load
>> testing / profiling Asterisk 11 and 12.  One of the issues I had out
>> of the box was playing external audio from asterisk into a bridge.
>> For the purpose of my testing I was simply using a local channel to
>> drop audio in the bridge using MusicOnHold.
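
For context, my test setup was roughly the following (a sketch; the
context, extension, and bridge names here are invented for illustration):

    ; extensions.conf
    [media-inject]
    exten => moh,1,Answer()
     same => n,MusicOnHold()         ; this leg generates the audio

    [conf]
    exten => join,1,Answer()
     same => n,ConfBridge(loadtest)  ; test callers land in this bridge

Then, from the CLI, originate a Local channel whose ;2 leg runs the moh
extension and whose ;1 leg is dropped into the bridge:

    channel originate Local/moh@media-inject application ConfBridge loadtest
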
>>
>> Obviously not the best setup, but for a real world example I would
>> need to play audio into the bridge from a source external to Asterisk,
>> not over a channel like SIP.  How do people see this working?
>
> There are some fundamental concepts here that are going to be the same
> no matter what we do. Asterisk is Asterisk: AMI, dialplan, ARI: they
> all use the same basic building blocks. They just expose them
> differently (and in some cases, use them in a much more fun way).
>
> * Channels move media between some 'thing' and Asterisk. Asterisk, in
> this case, is either a bridge or some form of generation/termination
> in a dialplan application.
> * Bridges mix media between channels.
> * Local channels - which always come in a pair - move media between a
> bridge/dialplan application and some other bridge/dialplan
> application. They do this by having a virtual 'thing' that hands the
> media back and forth between the two channels of the pair. This is
> typically called the Local bridge - but it's just as easy to think of
> it as a little frame-relay widget that passes media in both directions.
>
> To answer your question then: it is working as intended. Bridges are
> not a source of media; they should not be a source of media; they are
> the thing that mixes and directs the media. In Asterisk, your source
> of media can either be:
>  * A device communicating over some 'real' channel
>  * A dialplan application, communicating over a Local channel
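
As an aside, for anyone doing this over ARI in 12, those building blocks
map onto the REST calls pretty directly. A rough sketch (the credentials
and the channel id are placeholders; the id would normally come from a
StasisStart event on the ARI websocket):

    import requests

    ARI = "http://localhost:8088/ari"
    AUTH = ("user", "pass")              # placeholder ari.conf user

    # The bridge only mixes; it is not a media source itself.
    bridge = requests.post(ARI + "/bridges",
                           params={"type": "mixing"}, auth=AUTH).json()

    # Media sources are channels: a 'real' channel (SIP, etc.), or the
    # ;1 leg of a Local channel whose ;2 leg runs a dialplan app such as
    # MusicOnHold(). The bridge treats both exactly the same way.
    channel_id = "1394650000.1"          # placeholder, from StasisStart
    requests.post(ARI + "/bridges/%s/addChannel" % bridge["id"],
                  params={"channel": channel_id}, auth=AUTH)
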
>
>> I know we have mentioned HTTP with cache headers in the past, but,
>> according to file, 'Local channels will also impact scaling,' so I'm
>> trying to see how we'd come up with a solution.
>
> When we say Local channels are inefficient, that's typically in
> relation to things that are much more efficient. In general, passing a
> frame through a Local channel is not *terribly* expensive - it's just
> a lot more expensive than having two "real" channels be natively
> bridged. Once the media is in the core, that extra hop isn't a whole
> lot more work. Because a softmix bridge manipulates the media, the
> media has to be in the core. Hence, the Local channel doesn't really
> add much overhead (if any) to this scenario.
>
> Since the Local channel is almost certainly not the thing impacting
> your scaling, that means that making a bridge into a media source -
> whatever that would look like - doesn't help the situation. More on
> what probably is limiting your scaling below.
>
>> All of my examples would be piping something from the OS level into
>> Asterisk using a Local channel, but that doesn't appear to be the best
>> option.  So, the next step would be some specific module compiled into
>> asterisk?
>>
>
> I think you're jumping to a conclusion about what impacted your test
> without understanding what is happening. This is one of those cases
> where profiling your scenario would be absolutely necessary to make
> any kind of strong statement, but there are some general assumptions
> that I think are safe to make about what happens when you load a
> system up with multi-party bridges.
>
> In a multi-party bridge, every frame that a channel presents to the
> softmix bridging technology has to be taken and mixed in. Say we
> have n channels participating. If each channel delivers a frame to be
> mixed, we have to take all n frames and turn them into a new frame; we
> then have to deliver that frame to all n participants. This is done by
> some mixing thread; there is a single thread that does this job. This
> is a problem that scales linearly: each participant you add is another
> channel that can send a frame and has to receive the mixed frame. At
> some point, as you add more participants, you will overload the thread
> doing the mixing to the point where it can no longer deliver frames
> fast enough to avoid audio degradation. Eventually, that thread will
> peg out a CPU.
>
> So: when you throw a large number of participants into a single
> bridge, you're going to eventually hit a max limit. The asymptotic
> complexity of visiting every participant in a container on a single
> thread of execution is O(n). Type of container doesn't matter - to
> speed that up, you need to parallelize. That's not a problem. That's
> computer science.
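
To put rough numbers on that (my own back-of-envelope, assuming the
default 20 ms mixing interval, i.e. 50 mixing passes per second): with
n = 100 participants, the single mixing thread has to ingest up to
100 x 50 = 5,000 frames per second and push out another 5,000 mixed
frames per second. Doubling the participants doubles both figures,
until that one thread runs out of CPU.
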
>
> Question: for single announcer, multiple listener, would a different
> bridging model help?
>
> Answer: somewhat. A holding bridge 'knows' that its participants will
> never give it meaningful audio, so it drops those frames. Because
> there's no gathering of media, that takes some burden off of the
> processing. The asymptotic complexity of the problem is still O(n)
> however; you've just removed a constant factor from the equation.
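
For completeness, this is roughly what that looks like over ARI in 12
(a sketch; credentials and channel ids are placeholders, and the
'announcer' role name is my reading of the ARI docs):

    import requests

    ARI = "http://localhost:8088/ari"
    AUTH = ("user", "pass")              # placeholder ari.conf user

    # Channel ids are placeholders; in practice they come from
    # StasisStart events on the ARI websocket.
    listener_id = "1394650000.2"
    announcer_id = "1394650000.3"

    # A holding bridge drops media from ordinary participants.
    bridge = requests.post(ARI + "/bridges",
                           params={"type": "holding"}, auth=AUTH).json()

    # Listeners just get added; their frames are discarded.
    requests.post(ARI + "/bridges/%s/addChannel" % bridge["id"],
                  params={"channel": listener_id}, auth=AUTH)

    # The announcer is added with the 'announcer' role, so its media is
    # the only media relayed out to everyone else.
    requests.post(ARI + "/bridges/%s/addChannel" % bridge["id"],
                  params={"channel": announcer_id, "role": "announcer"},
                  auth=AUTH)
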
>
> Question: why not multi-thread the delivery of the frames?
> (multi-threading gathering isn't super useful, since there's a single
> choke point in the mixer. You might get some parallelism by reserving
> portions of an array for each thread, then synchronizing the gathering
> threads with the mixing thread, but not a lot - and the complexity of
> all that probably isn't worthwhile.)
>
> Answer: because in most normal use cases of Asterisk, this is not the
> bottleneck. For a low number of participants, multi-threading the
> delivery is going to hurt you; see Amdahl's Law. There is another way
> to gain parallelism in this case, however, in a more generic fashion.
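
(For reference, Amdahl's Law: if only a fraction p of the work can be
parallelized across N threads, the best possible speedup is
1 / ((1 - p) + p/N); the serial mixing step caps how much extra
delivery threads could ever help.)
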
>
> Question: well, what can we do to parallelize the situation?
>
> Answer: multiple bridges.
>
> If you wanted to scale this out, you would need multiple multi-party
> bridges. Each multi-party bridge hosts some number of your
> participants; this should scale with the number of execution units on
> your hardware. Use Local channels to pass audio between the
> multi-party bridges: this is a small overhead, as (a) the media is
> already in the core, and (b) from the perspective of each multi-party
> bridge, the Local channel is just another frame source/sink. As you
> add participants, balance them between the multi-party bridges. This
> coarse grain parallelism scales the problem across whatever hardware
> you're running on, without requiring strange mechanisms that hurt
> other scenarios in the softmix code.
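
To make sure I follow, the linking would look something like the Local
channel trick above (a sketch, with invented names again):

    ; extensions.conf
    [link-bridges]
    exten => into_a,1,Answer()
     same => n,ConfBridge(bridge_a)   ; the ;2 leg of the Local pair joins A

and then the other leg gets originated straight into the second bridge
(CLI shown here; AMI or ARI originate would work the same way):

    channel originate Local/into_a@link-bridges application ConfBridge bridge_b

so each bridge just sees its Local leg as one more frame source/sink,
and audio mixed in bridge_a is heard in bridge_b and vice versa.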
>
Thanks for taking the time to reply. This matches what I was seeing in
our testing: more bridges with fewer channels worked much better than
fewer bridges with more channels.  Your point about Local channels is
right too; it seems they don't add as much overhead as I was assuming.

One thing I would be interested in figuring out is how to determine,
from Asterisk's POV, when you have too many channels in a bridge.  I
need a way to detect that and autoscale out into another bridge.
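
Something like polling ARI from the outside might work (a sketch only;
the threshold, credentials, and the 'roll over to a new bridge' policy
are all my own assumptions, not anything Asterisk provides):

    import requests

    ARI = "http://localhost:8088/ari"
    AUTH = ("user", "pass")          # placeholder ari.conf user
    MAX_PER_BRIDGE = 40              # has to come from profiling, not Asterisk

    def pick_bridge():
        """Return the id of a mixing bridge with room, creating one if needed."""
        for bridge in requests.get(ARI + "/bridges", auth=AUTH).json():
            if (bridge["bridge_type"] == "mixing"
                    and len(bridge["channels"]) < MAX_PER_BRIDGE):
                return bridge["id"]
        # Everything is full: add another bridge (and, separately, originate
        # a Local channel pair to tie it into the existing ones).
        return requests.post(ARI + "/bridges", params={"type": "mixing"},
                             auth=AUTH).json()["id"]

Channel count is only a proxy, though; the real limit is when the
mixing thread runs out of CPU, so the threshold itself would have to
come from the kind of profiling you describe.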

-- 
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belanger at polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger


