[asterisk-users] The new ConfBridge application is now in Asterisk Trunk!

Shaun Ruffell sruffell at digium.com
Mon Apr 25 11:52:46 CDT 2011


On Mon, Apr 25, 2011 at 10:07:56AM -0500, David Vossel wrote:
> On Monday, April 25, 2011 9:49:05 AM, David Backeberg wrote:
>> On Mon, Apr 25, 2011 at 10:40 AM, C. Savinovich
>>>
>>> Does this ConfBridge require a hardware timing source?
>> 
>> No, and neither does MeetMe with modern DAHDI.
>> 
>>> Will I be able to use this on any virtual server without needing
>>> special changes to the VM setup?
...
>> To mix audio, the code takes lots of audio slices and merges them with
>> an algorithm. But if the underlying cpu doesn't provide consistent,
>> reliable ticks, as potentially happens in virtualization, then good
>> luck with what's going to happen to your audio mixing.
> 
> The MeetMe dependency on DAHDI is for the audio mixing.  ConfBridge has no
> dependency on DAHDI for anything.

In my opinion, if you don't have any telephony hardware, eliminating the
dependency on DAHDI is a good thing where possible. One less thing to
install, compile, update, and worry about.

However, to clarify: DAHDI by default does not currently rely on a
consistent tick when no physical hardware timing source is installed.
What DAHDI requires is an accurate sense of wall time and the ability to
run at around 10 ms intervals. That lets it know how much audio to mix
each time the timer runs.

This change from depending on a reliable tick to having a reliable
notion of time is what was required to get DAHDI conferencing to work
consistently in virtualized environments.

Also, I'm not surprised that a purely userspace solution (again, when
not used in conjunction with any real telephony hardware) should offer
higher performance because:

a) There are two fewer userspace <-> kernel transitions for each packet
to mix.

b) I imagine each independent conference is, or can be, mixed in
parallel on SMP systems. DAHDI conferencing currently mixes all channels
in the context of either the timer or the "master_span", so with DAHDI
conferencing you are always (currently) limited by what can be done on a
single core.

c) DAHDI often performs extra SLIN <-> U/A-law translations when
conferencing. Channels queue up audio in the companded form (ulaw or
alaw). So if you have a DAHDI channel in linear mode, the audio is
companded to ulaw on write; the timer runs and transcodes the audio to
signed linear format for mixing; the mixed audio is transcoded back to
ulaw to queue on the channel; and it is transcoded back to linear when
Asterisk reads from the channel.

It is possible to improve b) and c). The intent is to eventually do so
for users who for various reasons must use MeetMe. However that work is
still out in the future.

Yes...this email is not 100% related to the ConfBridge announcement, but
someone out there might find it interesting...

Cheers,
Shaun

-- 
Shaun Ruffell
Digium, Inc. | Linux Kernel Developer
445 Jan Davis Drive NW - Huntsville, AL 35806 - USA
Check us out at: www.digium.com & www.asterisk.org


