[asterisk-dev] Timestamps in RTP bridged calls

Olle E. Johansson oej at edvina.net
Thu Jul 3 15:26:58 CDT 2014


On 03 Jul 2014, at 19:45, Matthew Jordan <mjordan at digium.com> wrote:

> On Wed, Jul 2, 2014 at 4:58 AM, Olle E. Johansson <oej at edvina.net> wrote:
>> Related issue:
>> 
>> https://issues.asterisk.org/jira/browse/ASTERISK-23142
>> 
>> 
>> In the big jitterbuffer patch in 2006 there was code that sets a flag on an AST_FRAME
>> indicating that it contains timestamp information. This flag is set on all incoming RTP audio frames.
>> 
>> When sending RTP we reset the timestamp to the one in the frame if this flag is set.
>> 
>> Now, if we have a call on hold this is dangerous.
>> 
>> Alice calls Bob and he answers.
>> -> we take the incoming TS and send it out to Bob in the RTP stream
>> 
>> Alice puts Bob on hold
>> -> we activate MOH and raise the TS by 160 for every RTP packet
>> 
>> Alice takes Bob off hold
>> -> we get RTP from Alice with a new timestamp and reset ours
>> 
>> This can lead to a big jump in timestamps and, in our case, led to loss of audio.
> 
> If I'm following the scenario, this could be two different problems:
> (1) A jump in the timestamp from Alice causes a jitterbuffer on either
> Alice or Bob (depending on how it was put on a channel) to view the
> audio stream as being skewed. The jitterbuffer should be reset when
> this occurs (and if it isn't, that's bad)
You should never run an Asterisk jitterbuffer in a bridged call.
But it is of course possible.
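
To put rough numbers on the timestamp jump in the scenario above, here is a small
standalone sketch (not Asterisk code; all values are made up) of what happens to the
outbound TS when we generate MOH ourselves and then snap back to Alice's TS:

    /* Standalone illustration, not Asterisk code. Assumes 8 kHz audio and
     * 20 ms packets, i.e. 160 samples per packet, with made-up TS values. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t our_ts   = 123456;   /* outbound TS, copied from Alice while bridged */
        uint32_t alice_ts = 123456;   /* TS on Alice's inbound stream */

        /* 30 seconds of MOH towards Bob: we generate our own TS, +160 per packet */
        for (int pkt = 0; pkt < 30 * 50; pkt++) {
            our_ts += 160;
        }

        /* Alice's TS is unrelated to ours by now (her clock, her pauses);
         * pretend it only advanced the equivalent of 10 seconds */
        alice_ts += 10 * 50 * 160;

        /* The RTP write path then overwrites our TS with Alice's, so Bob
         * sees a large discontinuity instead of the expected +160 */
        int32_t jump = (int32_t)(alice_ts - our_ts);
        printf("our TS before un-hold:    %u\n", our_ts);
        printf("TS after copying Alice's: %u\n", alice_ts);
        printf("jump seen by Bob:         %d samples\n", (int)jump);
        return 0;
    }
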

> (2) The outbound stream faithfully replicating the timestamp from the
> inbound side causes the jump in timestamps to be reflected across the
> bridge, confusing Bob's UA.
> 
> Problem #1 shouldn't happen so long as the inbound channel driver
> raises a HOLD, SRCUPDATE, or SRCCHANGE control frame. This should
> cause the abstract jitterbuffer to resync itself to the new stream
> (with its new timestamps). (There may be some issues with this in 11
> with func_jitterbuffer, but getting a pcap illustrating this has been
> problematic for the issues filed for it so far. I think we fixed it in
> 12 when things with the jitterbuffer got consolidated between the
> framehook provided by the function and the jitterbuffer hooks in
> bridging).
> 
> Looking at what you highlighted to be the offending code:
> 
>    if (ast_test_flag(frame, AST_FRFLAG_HAS_TIMING_INFO)) {
>        rtp->lastts = frame->ts * rate;
>    }
> 
> This code is old... as in, it goes back at least to r31159, which
> means the reason for its inclusion is going to be hard to determine.
> It may be that we thought at the time that reproducing the timestamps
> from the inbound stream was a 'good thing' if it was available... I
> don't know.
> 
>> I don't see any reason to change the TS like this in the outbound stream. It's an
>> unrelated stream and we calculate its TS on different grounds.
> 
> I would think we would either (a) pass it through when in a p2p
> bridge, or (b) generate it completely separately when we are in a core
> bridge.
> 
>> We could change the SSRC when this happens, but I already have systems complaining
>> about the way Asterisk changes the SSRC every time we detect a problem, so I don't
>> think it's a good idea (TM).
> 
> Agreed.
> 
>> I can understand why the jitter buffer for an incoming RTP stream needs to know if a
>> stream has timing info, but I don't see a reason for us to use the same timing on the
>> outbound stream.
> 
> Depends on what we mean by outbound stream :-)
> 
> In 1.8/11, the jitterbuffer provided by channel drivers and managed by
> the bridging performed in ast_generic_bridge is put on the write side
> of a channel - which could be construed as a part of the outbound
> media stream. That means we don't de-jitter jittered audio coming off
> of a channel, we de-jitter it on the channel that it is writing to in
> a two party bridge (which is odd and strange, but that is what
> happens). So as long as we mean 'outbound stream' to be the frame at
> the point in time it is handed off to the RTP implementation, then
> we're good; at that point, we're out of any jitter buffer. Up until
> the frame is written to the RTP implementation, it needs timing
> information.
This is when writing, after jitter buffers and all, I guess - just before writing to the network.
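
For clarity, a rough model of what I question at that hand-off point (names and
structure are illustrative only, not the actual res_rtp_asterisk.c code):

    /* Rough model of the write path's timestamp handling; names and layout
     * are illustrative, not the real res_rtp_asterisk.c code. */
    #include <stdio.h>

    struct fake_rtp {
        unsigned int lastts;    /* TS we put in the outgoing RTP header */
    };

    static unsigned int next_outbound_ts(struct fake_rtp *rtp,
                                         int frame_has_timing_info,
                                         unsigned int frame_ts,
                                         unsigned int samples_per_packet)
    {
        /* our own calculation: advance monotonically with what we send */
        rtp->lastts += samples_per_packet;

        /* the questioned behaviour: throw that away and mirror the TS
         * that came in on the other side of the bridge */
        if (frame_has_timing_info) {
            rtp->lastts = frame_ts;
        }
        return rtp->lastts;
    }

    int main(void)
    {
        struct fake_rtp rtp = { .lastts = 363456 };

        /* a frame from Alice carrying unrelated timing info resets our TS */
        printf("%u\n", next_outbound_ts(&rtp, 1, 203456, 160));
        /* without the override we would simply have sent 363616 */
        return 0;
    }
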
> 
> Now, the jitterbuffer put on a channel by func_jitterbuffer is a
> framehook, and is put on the read side of a channel (which makes a lot
> more sense, since the media coming off of a channel is generally where
> we think of jitter coming from).
> 
>> 
>> Does anyone see any harm in deleting the piece of code that resets the timestamp,
>> overriding the calculation of a new timestamp already done in the RTP write code?
>> 
> 
> I don't think so... what happened when you removed it? :-)
> 

Everything started working; the gateway that had problems started happily relaying audio.
As stated in a separate e-mail, we've run a lot of different tests in our testbed and have
seen no issues. We have *not* tested things like reframing from 20 ms to 30 ms, which seems
to use the same flag.

/O



