[asterisk-users] Fwd: RTP stats explanation

Dave Platt dplatt at radagast.org
Fri May 18 12:27:01 CDT 2012


> In our app we do not forward packets immediately. After enough packets
> have been received to increase the RTP packetization time (ptime), we
> forward the message over a raw socket and set the DSCP to 10 so that
> the packets can escape the iptables rules.
> 
> From the client side the RTP stream analysis shows nearly every stream
> as problematic. Summaries for some streams are given below:
> 
> Stream 1:
> 
> Max delta = 1758.72 ms at packet no. 40506
> Max jitter = 231.07 ms. Mean jitter = 9.27 ms.
> Max skew = -2066.18 ms.
> Total RTP packets = 468 (expected 468)   Lost RTP packets = 0 (0.00%)
> Sequence errors = 0
> Duration 23.45 s (-22628 ms clock drift, corresponding to 281 Hz (-96.49%))
> 
> Stream 2:
> 
> Max delta = 1750.96 ms at packet no. 45453
> Max jitter = 230.90 ms. Mean jitter = 7.50 ms.
> Max skew = -2076.96 ms.
> Total RTP packets = 468 (expected 468)   Lost RTP packets = 0 (0.00%)
> Sequence errors = 0
> Duration 23.46 s (-22715 ms clock drift, corresponding to 253 Hz (-96.84%))
> 
> Stream 3:
> 
> Max delta = 71.47 ms at packet no. 25009
> Max jitter = 6.05 ms. Mean jitter = 2.33 ms.
> Max skew = -29.09 ms.
> Total RTP packets = 258 (expected 258)   Lost RTP packets = 0 (0.00%)
> Sequence errors = 0
> Duration 10.28 s (-10181 ms clock drift, corresponding to 76 Hz (-99.05%))
> 
> Any idea where we should look for the problem?

A maximum jitter of 230 milliseconds looks pretty horrendous to me.
This is going to cause really serious audio stuttering on the
receiving side, and/or will force the use of such a long "jitter
buffer" by the receiver that the audio will suffer from an
infuriating amount of delay.  Even a local call would sound as if
it's coming from overseas via a satellite-radio link.

I suspect it's due to a combination of two things:

(1) The fact that you are deliberately delaying the forwarding
    of the packets.  This adds latency, and if you're forwarding
    packets in batches it will also add jitter.

(2) Scheduling delays.  If your forwarding app fails to run its
    code on a very regular schedule - if, for example, it's delayed
    or preempted by a higher-priority task, or if some of its code
    is paged/swapped out due to memory pressure and has to be paged
    back in - this will also add latency and jitter.

Pushing real-time IP traffic up through the application layer like
this is going to be tricky.  You may be able to deal with issue (2)
by locking your app into memory with mlock() and setting it to run
at a "real-time" scheduling priority.

Issue (1) - well, I really think you need to avoid doing this.
Push the packets down into the kernel for retransmission as quickly
as you can.  If you need to rate-limit or rate-pace their sending,
use something like the Linux kernel's traffic-shaping features.
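The forwarding loop itself can then be as simple as one blocking
receive followed immediately by one send - no accumulation at all.
A minimal sketch (the address 192.0.2.10 and port 4000 below are
placeholders, not values from your setup):

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int in_fd  = socket(AF_INET, SOCK_DGRAM, 0);
        int out_fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in lis = {0}, dst = {0};
        char buf[2048];

        lis.sin_family      = AF_INET;
        lis.sin_addr.s_addr = htonl(INADDR_ANY);
        lis.sin_port        = htons(4000);      /* local RTP port */
        if (bind(in_fd, (struct sockaddr *)&lis, sizeof(lis)) != 0) {
            perror("bind");
            return 1;
        }

        dst.sin_family = AF_INET;
        dst.sin_port   = htons(4000);           /* remote RTP port */
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);

        for (;;) {
            ssize_t n = recv(in_fd, buf, sizeof(buf), 0);
            if (n <= 0)
                continue;
            /* Hand the packet straight back to the kernel. */
            sendto(out_fd, buf, (size_t)n, 0,
                   (struct sockaddr *)&dst, sizeof(dst));
        }
    }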

Is there other network traffic flowing to/from this particular
machine?  It's possible that other outbound traffic is saturating
network-transmit buffers somewhere - either in the kernel, or in
an "upstream" communication node such as a router or DSL modem.
If this happens, there's no guarantee that "high priority" or
"expedited delivery" packets would be given priority over
(e.g.) FTP uploads... many routers/switches/modems don't pay
attention to the class-of-service on IP packets.

To prevent this, you'd need to use traffic-shaping features on
your system to "pace" the transmission of *all* packets so that
the total transmission rate is slightly below the lowest-bandwidth
segment of your uplink.  You'd also want to use multiple queues
to give expedited-delivery packets priority over bulk-data packets.
The "Ultimate Linux traffic-shaper" page would show how to
accomplish this on a Linux system;  the same principles with
different details would apply on other operating systems.
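On the marking side (the queue setup itself is beyond the scope of this
message), your RTP socket can carry both a DSCP and a kernel-level
priority that a local queueing discipline can classify on.  A sketch -
DSCP 46 ("EF") is the conventional marking for voice, as opposed to
the 10 you're using now, and priority 6 is the highest an unprivileged
process may set:

    #include <stdio.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int mark_rtp_socket(int fd)
    {
        int tos  = 46 << 2; /* DSCP occupies the top six TOS bits */
        int prio = 6;       /* becomes skb->priority in the kernel */

        /* DSCP marking: visible to any router along the path
         * that actually honors it. */
        if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) != 0) {
            perror("setsockopt(IP_TOS)");
            return -1;
        }

        /* Local priority: usable by tc classifiers on this host. */
        if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio,
                       sizeof(prio)) != 0) {
            perror("setsockopt(SO_PRIORITY)");
            return -1;
        }
        return 0;
    }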



