[asterisk-dev] IAX internet draft (draft-guy-iax-00)

Steve Kann stevek at stevek.com
Mon Mar 6 14:51:31 MST 2006


Tim Panton wrote:

>
> On 6 Mar 2006, at 20:01, Derek Smithies wrote:
>
>> Hi,
>>
>>>> Consequently, there are times when the timestamp (on the sending
>>>> side) has to be "tweaked" high by 3, to differentiate between two
>>>> different frames. A LAGRQ frame does not increase the oseqno at
>>>> the sending side.
>>>
>>>
>>> I don't see why you say that. I think the +3 stuff is a workaround
>>> for a specific implementation problem, not a protocol requirement.
>>> The combination of oseqno with ack'ness makes a packet unique.
>>>
>> I am sorry - I do remember reading that this +3 business was a
>> jitter buffer saviour. From the code:
>>
>> /* On a dataframe, use last value + 3 (to accommodate jitter buffer
>>    shrinking) if appropriate unless it's a genuine frame */
>> if (genuine) {
>>     /* genuine (IAX LAGRQ etc) must keep their clock-based stamps */
>>     if (ms <= p->lastsent)
>>         ms = p->lastsent + 3;
>> } else if (abs(ms - p->lastsent) <= MAX_TIMESTAMP_SKEW) {
>>     /* non-genuine frames (!?) (DTMF, CONTROL) should be pulled into
>>        the predicted stream stamps */
>>     ms = p->lastsent + 3;
>> }
>
>
> I'm none too clear on the purpose of this, but it seems to be specific to
> the way that the current jitterbuffer is implemented, not intrinsic to 
> the protocol.
>
> I'm deliberately staying away from looking too deeply at the IAXclient
> codebase; it should be possible to implement it just from the RFC
> (I couldn't, I needed some advice). Perhaps SteveK can comment?


It was a workaround for the _old_ jitterbuffer; it's not needed for the
present jitterbuffer. The old jitterbuffer was pretty simple: it just
added a variable delay to the processing of frames as they were
received. When it wanted to lower that delay, it would shrink it by
2ms with each packet it received, and a situation like this could
occur:

Initial delay would be d.

A packet would come in with timestamp t. It would be scheduled to be
processed at time t+d.
The next packet would come in with timestamp t+1. It would be scheduled
to be processed at time t+1+(d-2), or t+d-1.
The next packet would come in with timestamp t+2. It would be scheduled
to be processed at time t+2+(d-4), or t+d-2.

In this way, the frames would be processed in reverse order. If they
were DTMF frames, that would be very bad. At the time, the timestamping
on voice frames could also end up this way, and they'd be played
backwards.
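
To make the arithmetic concrete, here's a minimal sketch of the old
behaviour (this is not the actual Asterisk code; the 60ms starting
delay is an arbitrary value, and the timestamps are the t, t+1, t+2
from the example above):

    #include <stdio.h>

    int main(void)
    {
        int d = 60;              /* initial jitterbuffer delay, in ms */
        int ts[] = { 0, 1, 2 };  /* frame timestamps t, t+1, t+2 */

        for (int i = 0; i < 3; i++) {
            /* old behaviour: the delay shrinks by 2ms per packet */
            int delay = d - 2 * i;
            printf("frame ts=%d processed at %d\n", ts[i], ts[i] + delay);
        }
        return 0;
    }

This schedules the three frames at 60, 59 and 58 -- each later frame
is processed before the one that came in ahead of it.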

So, this +3 business ensured that this improper reordering wouldn't
happen: the timestamps now advance by at least 3ms per frame, while the
delay only shrinks by 2ms per frame, so the scheduled processing times
keep increasing.
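
For illustration, here's the same sketch with the +3 tweak applied,
mirroring the non-genuine branch of the code Derek quoted (the
MAX_TIMESTAMP_SKEW value here is just a stand-in, not necessarily what
Asterisk uses):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_TIMESTAMP_SKEW 640  /* stand-in value for the sketch */

    int main(void)
    {
        int d = 60;              /* initial jitterbuffer delay, in ms */
        int ts[] = { 0, 1, 2 };  /* raw frame timestamps */
        int lastsent = 0;

        for (int i = 0; i < 3; i++) {
            int ms = ts[i];
            /* pull nearby stamps onto the predicted stream,
               3ms past the last one sent */
            if (abs(ms - lastsent) <= MAX_TIMESTAMP_SKEW)
                ms = lastsent + 3;
            lastsent = ms;
            int delay = d - 2 * i;  /* the delay still shrinks by 2ms */
            printf("frame ts=%d processed at %d\n", ms, ms + delay);
        }
        return 0;
    }

The stamps come out as 3, 6, 9 and the scheduled times as 63, 64,
65 -- strictly increasing, so nothing gets reordered.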

The present jitterbuffer (in asterisk and iaxclient) always processes 
things in order.

I don't know if the spec should include this as a requirement or not:
it was really a workaround for a broken implementation -- but it would
still be needed for asterisk < 1.2, or for very old iaxclient
implementations (the new jitterbuffer was in iaxclient before it was
in asterisk).

-SteveK




