[asterisk-dev] BugID 014021, Zap/DAHDI timers, internal timing and packetization

Dan Austin Dan_Austin at Phoenix.com
Sat Dec 13 17:56:50 CST 2008


Russell wrote:
> Dan Austin wrote:
>> If you've followed my poor description this far, my question
>> is can the /dev/[zap|dahdi]/timer interface support a different
>> clock rate for each channel that opens it?  If it cannot, then
>> bug 014021 cannot be solved in 1.4 or 1.6.0, at least not with
>> trivial changes to the code.  It might be possible to resolve
>> it in 1.6.1 using the new res_timing_* interfaces.

> Yes, you can set a rate for each open DAHDI timer.

> Take a look at res_timing_dahdi.c in Asterisk trunk or 1.6.1.
> Specifically, the function dahdi_timer_set_rate() is how you set it.

Thanks for the response.  The function ast_settimeout does the
same thing if zap/dahdi is available.
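
For anyone following along, the per-open rate boils down to a
DAHDI_TIMERCONFIG ioctl on a private open of /dev/dahdi/timer.  A
rough, untested sketch (the helper name is mine, not from the tree):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <dahdi/user.h>

    /* Each open() of /dev/dahdi/timer yields an independent timer,
     * so every channel can tick at its own rate.  The rate is given
     * in 8kHz samples per tick: 160 = 20ms, 240 = 30ms, 0 = off. */
    static int open_dahdi_timer(int samples_per_tick)
    {
            int fd = open("/dev/dahdi/timer", O_RDWR);

            if (fd < 0)
                    return -1;
            if (ioctl(fd, DAHDI_TIMERCONFIG, &samples_per_tick) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }

ast_settimeout is the channel-level wrapper around the same ioctl on
chan->timingfd; note that (if memory serves) the 1.4 implementation
disarms the timer whenever the callback argument is NULL.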

I experimented with adding calls to ast_settimeout when a channel
was allocated to set the timeout to match the framing requirements.
I then watched every call to ast_streamfile reset the timing back
to 20ms.  Even more interesting, the rate/timing calculation in
ast_streamfile would set the rate to 160 samples, then quickly to
91 or 93, then shortly after that to 0.
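
For reference, the reset appears to come straight out of the playback
path; from memory of 1.4's file.c (a paraphrase, not a verbatim
quote), the read callback re-arms the timer with each frame's own
sample count:

    /* Every frame read from the file re-arms the channel timer with
     * that frame's own duration, clobbering whatever the channel
     * driver asked for: 160 samples for 20ms slin, odd sizes like
     * 91 or 93 for formats whose frames are not a whole 20ms. */
    if (whennext != s->lasttimeout) {
            ast_settimeout(s->owner, whennext, ast_fsread_audio, s);
            s->lasttimeout = whennext;
    }
    /* ...and when the stream ends, the timer is disarmed: */
    ast_settimeout(s->owner, 0, NULL, NULL);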

It would appear that for media timing to be consistent, we need
to store the timing requirement in the channel structure so that it
can be restored if it is changed by an internal function, such as
ast_streamfile.
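
Something like this hypothetical sketch is what I have in mind (the
field and helper are mine, nothing like them exists in the tree
today):

    /* Remember the endpoint's negotiated packetization on the
     * channel so it can be re-applied after an internal function
     * (ast_streamfile, etc.) clobbers the timer. */
    struct ast_channel {
            /* ... existing members ... */
            int native_timing;  /* tick interval in samples; 240 = 30ms */
    };

    /* Re-arm the timer once the internal consumer is done with it;
     * func/data are whatever callback the channel normally uses. */
    static void restore_native_timing(struct ast_channel *chan,
                                      int (*func)(const void *),
                                      void *data)
    {
            if (chan->native_timing > 0)
                    ast_settimeout(chan, chan->native_timing,
                                   func, data);
    }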

After inserting the additional ast_settimeout calls, I added some
logging to the ast_smoother_feed/read functions and made a change
to ast_rtp_write to only read one packet at a time from the
smoother.  A test call between a 20ms and a 30ms endpoint showed
that the 20ms client's stream was not pulling frames out of the
smoother fast enough (one 20ms frame every 30ms), and we would run
out of smoother buffer.
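
For context, the smoother path in ast_rtp_write looks roughly like
this (simplified from 1.4's rtp.c); my experiment replaced the while
loop with a single read per tick:

    /* Feed the incoming frame, then drain every full packet that is
     * now available.  With mismatched framing some ticks yield one
     * packet and some yield two -- the bursts described below. */
    ast_smoother_feed(rtp->smoother, frame);
    while ((f = ast_smoother_read(rtp->smoother))) {
            ast_rtp_raw_write(rtp, f, codec);
    }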

I reverted the one-packet-at-a-time change to ast_rtp_write and noted
that the 20ms client's RTP stream went back to this pattern every
30ms: (1) 20ms packet, (2) 20ms packets, (1) 20ms packet.  Before the
changes, a similar pattern would be observed on the 30ms client's RTP
stream.
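
To put numbers on it: the writer is only ticked at the 30ms rate, so
over any 60ms window two ticks have to carry three 20ms packets'
worth of audio (60 / 20 = 3).  The smoother therefore alternates
between handing back one packet and two per tick, which is exactly
the (1)-(2)-(1) cadence above, i.e. up to 10ms of systematic jitter
on the 20ms leg.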


I have been using the internal_timing and packetization features for
close to two years and never noticed the problem.  I use Cisco phones
exclusively, and they appear to be able to handle the 10ms of consistent
jitter that exists in Asterisk when bridging a 20ms client to a 30ms
client.  Once I looked into 014021, I did a few packet captures and
confirmed the issue.

I know the issue can be solved, but I am not sure that can happen
in 1.4 or 1.6.0, since the fix will likely require changes to the
channel structure.

Dan


