[asterisk-dev] definition of RTP jitter - potential bug in Asterisk

John Todd jtodd at digium.com
Thu Sep 3 13:37:26 CDT 2009


Klaus -
   Can you open a bug on this?  I don't want this to get lost.

   But let me ask a few questions: What does "jitter" even measure?  I
had interpreted jitter to mean the smoothed difference, in
milliseconds, between the transit delays of successive packets in a
uni-directional stream.  Why would frequency, or any codec-layer
computation, be involved in that determination?
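
For reference, RFC 3550 (Appendix A.8) computes the interarrival
jitter entirely in RTP timestamp units, roughly like this (a
paraphrase of the RFC's example code, not Asterisk's; the persistent
variables would live in a per-source structure):

    static int    last_transit;     /* previous packet's transit time */
    static double jitter_estimate;  /* running jitter, timestamp units */

    static void update_jitter(unsigned int arrival, unsigned int ts)
    {
        /* 'arrival' is the arrival time converted to RTP timestamp
           units; 'ts' is the packet's RTP timestamp. */
        int transit = (int) (arrival - ts);  /* relative transit time */
        int d = transit - last_transit;      /* change vs. last packet */
        last_transit = transit;
        if (d < 0)
            d = -d;
        /* First-order filter with gain 1/16, per the RFC. */
        jitter_estimate += (1.0 / 16.0) * ((double) d - jitter_estimate);
    }

The codec only enters through the conversion of the arrival time into
timestamp units (8000 Hz for most narrowband audio); 65536 appears
nowhere in the calculation.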

   In the RFC, section 4 describes timestamps as 64-bit numbers that
may carry NTP-synchronized time (though any time source will do, since
only the deltas matter).

   Section 5.1 describes RTP (not RTCP) timestamps in more detail,  
with RTP timestamps being 32-bit numbers that increment  
monotonically.  An excerpt:
"The clock frequency is dependent on the format of data carried as  
payload and is specified statically in the profile or payload format  
specification that defines the format, or MAY be specified dynamically  
for payload formats defined through non-RTP means. If RTP packets are  
generated periodically, the nominal sampling instant as determined  
from the sampling clock is to be used, not a reading of the system  
clock. As an example, for fixed-rate audio the timestamp clock would  
likely increment by one for each sampling period. If an audio  
application reads blocks covering 160 sampling periods from the input
device, the timestamp would be increased by 160 for each such block,
regardless of whether the block is transmitted in a packet or dropped
as silent."
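
To make that concrete: for 8000 Hz audio packetized at 20 ms, each
packet advances the RTP timestamp by 8000 * 0.020 = 160 units, and a
jitter of 1 ms corresponds to 0.001 * 8000 = 8 timestamp units.
Scaling by 65536 instead would report roughly 66 for that same
millisecond, which would explain the mismatch Klaus observed between
sent and received jitter values.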


There is more below describing RTCP as using NTP (wallclock)
timestamps, which may clear up some of this confusion.
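
For what it's worth, 65536 does legitimately appear elsewhere in RTCP:
the LSR and DLSR fields of a receiver report are expressed in units of
1/65536 seconds (the middle 32 bits of the 64-bit NTP timestamp), so
that may be where the factor crept into the jitter path.  A minimal
sketch of the usual timeval-to-NTP conversion (my own illustration,
not Asterisk's code):

    #include <sys/time.h>

    /* Convert a Unix struct timeval to the 64-bit NTP format used in
       RTCP sender reports.  2208988800 is the offset in seconds
       between the NTP epoch (1900) and the Unix epoch (1970). */
    static void timeval_to_ntp(const struct timeval *tv,
                               unsigned int *ntp_sec,
                               unsigned int *ntp_frac)
    {
        *ntp_sec = tv->tv_sec + 2208988800u;
        /* The NTP fraction counts units of 2^-32 s, so scale
           microseconds by 2^32 / 10^6 = 4294.967296. */
        *ntp_frac = (unsigned int) (tv->tv_usec * 4294.967296);
    }

The LSR/DLSR fields then take the middle 32 bits,
((ntp_sec & 0xffff) << 16) | (ntp_frac >> 16), which is where the
1/65536-second unit comes from.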

PS: I think you meant RFC3550, below.

JT


On Sep 3, 2009, at 11:58 AM, Klaus Darilion wrote:

> Looks like the same scaling is done in trunk too
> (res/res_rtp_asterisk.c)
>
>
> Klaus Darilion schrieb:
>> Hi!
>>
>> In Asterisk 1.6.2.0-beta4, the jitter is internally calculated in
>> seconds (see calc_rxstamp() in main/rtp.c).
>>
>> When the jitter is sent in RTCP reports, the jitter value is
>> multiplied by 65536 (e.g. in ast_rtcp_write_sr()).
>>
>> This value of 65536 is all over rtp.c. Also the received RTCP jitter
>> reports are scaled by 65536.
>>
>> I wonder where this 65536 comes from? RFC 3350 describes the jitter  
>> in
>> "timestamp units". The RTP timestamps are the sample times, and  
>> thus the
>>  usual timestamp unit is 1s/8000.
>>
>> Thus, I think if jitter is computed locally in seconds, the scaling
>> should use 8000 instead of 65536.
>>
>> I also observed that the sent jitter value differs considerably from
>> the incoming jitter value (e.g. tested with eyebeam and pjsip).
>>
>> Thus, I think the scaling by 65536 is wrong. If there is a reason why
>> 65536 is used, please explain.
>>
>> thanks
>> Klaus
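
One more thought: if the internal estimate really is kept in seconds,
as Klaus describes, the conversion for the RTCP jitter field would
look something like this (my sketch, not a patch; the function name
and parameters are mine):

    /* RFC 3550, section 6.4.1: the interarrival jitter field is
       expressed in timestamp units, so convert seconds using the
       payload's RTP clock rate (8000 for narrowband audio), not a
       fixed 65536. */
    static unsigned int jitter_to_rtcp_units(double jitter_seconds,
                                             unsigned int clock_rate)
    {
        return (unsigned int) (jitter_seconds * clock_rate);
    }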



---
John Todd                       email:jtodd at digium.com
Digium, Inc. | Asterisk Open Source Community Director
445 Jan Davis Drive NW -  Huntsville AL 35806  -   USA
direct: +1-256-428-6083         http://www.digium.com/





