[Asterisk-Dev] changing codec during call

Steve Kann stevek at stevek.com
Fri Feb 25 13:20:24 MST 2005


Jesse Kaijen wrote:

>Just had my dinner and look at all the reactions! :P
>Thanks!
>
>  
>
>>Steve Underwood wrote:
>>
>>    
>>
>>>Steve Kann wrote:
>>>
>>>      
>>>
>>>>Race Vanderdecken wrote:
>>>>
>>>>        
>>>>
>>>>>Detecting the bandwidth constraint might be possible
>>>>>using RTCP. http://www.nwfusion.com/news/tech/2003/1117techupdate.html
>>>>>
>>>>>I have not looked to see if Asterisk is using RTCP, but that would be
>>>>>the correct way to detect and control it.
>>>>> 
>>>>>
>>>>>          
>>>>>
>>>>IAX2 now supports sending all of the parameters that are described in 
>>>>the _extended_ RTCP XR stuff you quote there (the basic RTCP RR does 
>>>>not support all of this).
>>>>
>>>>But I still fail to see how you can determine from this information
>>>>alone whether reducing your bandwidth usage by 5 kbps or whatever is
>>>>going to affect the call quality in a positive or negative way.
>>>>
>>>>Certainly, it would be good network policy for us to lower our 
>>>>consumption when we see congestion (like TCP does), but it is not 
>>>>necessarily going to improve the quality of our call.
>>>>        
>>>>
>>>High packet loss rate, or even high jitter, might give us a clue that 
>>>a lower bit rate would be beneficial. 
>>>      
>>>
>>If you read my previous message on the topic, depending on the location 
>>of the congestion, it might just make things worse.
>>
>>If the congestion is a very low-bandwidth last hop, it might help.  But
>>if the congestion is on a high-bandwidth link (and caused by other
>>applications, or even line errors), it will just make things sound
>>worse.  For example, imagine that the loss is on some 45 Mbps link in
>>the middle of the internet, being caused by some network-unfriendly
>>application.  If we lower our bandwidth consumption, it's unlikely to
>>make any difference at all in the loss rate.  However, it will probably
>>make the loss _more_ noticeable to the user, and not less noticeable
>>(i.e. going from 20ms uLaw frames to 60ms highly-compressed frames will
>>probably end up causing the same loss percentage to be much more
>>noticeable, and more difficult to conceal).
>>
>>    
>>
>Well, here's the deeper problem I'm going to investigate and try to solve.
>If I can't solve the problem, or a monitor makes it worse, then that research
>is also of great value. Guessing what is better won't settle the discussion.
>Some lab tests have already shown that changing codec too often (6 times
>in 8 sec) lowers the MOS value. And the user hears some clicking (DUH!).
>Having an adaptive jitterbuffer is certainly better for the MOS than changing
>codec. But that's not what I want. I want to combine these solutions to get
>an even better result.
>The reason I like to use the IAX protocol is that the new jitterbuffer is
>based on the E-MOS algorithm (PLEASE CORRECT ME IF I'M WRONG). **see_below**
>  
>
I did read that paper, and tried implementing the algorithm as best I
could, but I found that the estimator they used for the Pareto
distribution didn't seem to work as expected, especially when clock skew
is an issue. It also seemed really inefficient: basically, for each
packet, it required you to run through a 500-member history twice,
applying some mathematical functions (exp, pow, log, etc.), and then do
several calculations to determine what the jitter and skew would be at
different loss rates. It seemed to me that it was actually much more
efficient to just calculate that directly.
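
To make the "calculate that directly" part concrete, here's a rough sketch in
C of the idea (the names and the fixed 500-entry history are just for
illustration, this is not the actual jitterbuf.c code): given the recent
relative delays, a single pass over the history tells you what late-packet
loss a candidate jitterbuffer length would have produced, with no
distribution fitting at all.

/* Hypothetical sketch: estimate the late-packet loss a given
 * jitterbuffer length would produce, by counting how many of the
 * recently observed relative delays exceed it.  No exp/pow/log,
 * no Pareto fitting -- just a direct count over the history. */
#include <stddef.h>

#define HIST_LEN 500

/* delay_hist: recent relative delays in ms (arrival time minus sender
 * timestamp, offset so the minimum is 0); n: entries in use.
 * Returns the fraction (0.0 - 1.0) of packets that would have arrived
 * too late for a jitterbuffer of 'jb_len' milliseconds. */
static double late_loss_at(const long delay_hist[HIST_LEN], size_t n, long jb_len)
{
    size_t late = 0, i;

    for (i = 0; i < n; i++) {
        if (delay_hist[i] > jb_len)
            late++;
    }
    return n ? (double)late / (double)n : 0.0;
}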

Also, the key thing the E-MOS algorithm does is adjust the
jitterbuffer such that at low jitter, you base it on an estimate of
the jitter at low loss, but at high jitter you use an estimate at
high loss. I looked at what I saw, and realized that a fairly good
approximation of that technique could be achieved by just basing it on
the history at high loss (I think I use 4%), and then adding a small
constant to that (presently, that's 2 * the frame size). The effect
seems to be more or less the same.
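
Roughly, that rule looks like the following (again just an illustrative
sketch, not the real jitterbuf.c; the 4% loss target and the 2 * frame size
padding are the values mentioned above):

/* Hypothetical sketch of the sizing rule: take the delay below which
 * roughly 96% of recent packets would have arrived in time (i.e. accept
 * about 4% late loss), and pad it with 2 * the codec frame size. */
#include <stdlib.h>
#include <string.h>

#define HIST_LEN 500

static int cmp_long(const void *a, const void *b)
{
    long la = *(const long *)a, lb = *(const long *)b;
    return (la > lb) - (la < lb);
}

/* delay_hist: recent relative delays in ms; n: entries in use;
 * frame_ms: codec frame size in ms.  Returns a target jitterbuffer
 * length in ms. */
static long jb_target_len(const long delay_hist[HIST_LEN], size_t n, long frame_ms)
{
    long sorted[HIST_LEN];
    size_t drop;

    if (n == 0)
        return 2 * frame_ms;

    memcpy(sorted, delay_hist, n * sizeof(sorted[0]));
    qsort(sorted, n, sizeof(sorted[0]), cmp_long);

    drop = (size_t)(n * 0.04);                   /* allow ~4% late packets  */
    return sorted[n - 1 - drop] + 2 * frame_ms;  /* percentile + padding    */
}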

This can be tuned further based on experience, but it seems to do what I
want most of the time. The only times it doesn't are when it gets
nonsense timestamps; those are things we need to fix (and I'm hoping
tzanger will fix the obvious issues I pointed out to him).


>Also, the JB_reports are being sent, which are quite handy for monitoring.
>
>So far the only problem I have is changing codec during a call. I hope the
>option of opening a second call isn't the solution, because making a call
>to the PSTN doesn't give you the option to 'open' a new call and close the
>other.
>
>Once I have that problem solved and can let a call change codec in a neat
>way, I will work on the monitor and ask your opinion on it quite often, I'm
>sure of that.
>  
>

Are you working in the asterisk code, or the iaxclient code?

Either way, the IAX2 protocol should certainly be able to handle this,
and iaxclient could easily be made to handle it. Asterisk may be
another issue: applications get the frames directly, and may or may not
handle translation themselves, so it may not be trivial to make them
cope with a call whose format has changed.
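
Just to illustrate the kind of thing an application would need to do (this is
a hypothetical sketch against the current channel API -- ast_waitfor(),
ast_read(), ast_frfree() -- not existing code): it has to notice that a voice
frame arrived in a different format than it negotiated at setup, and rebuild
whatever codec or translator state it keeps, instead of assuming the format
can never change.

/* Hypothetical read loop that tolerates a mid-call voice format change.
 * Not existing Asterisk code; just shows the check an application that
 * handles frames directly would need. */
#include "asterisk/channel.h"
#include "asterisk/frame.h"
#include "asterisk/logger.h"

static int read_loop(struct ast_channel *chan, int expected_fmt)
{
    struct ast_frame *f;

    while (ast_waitfor(chan, -1) >= 0) {
        f = ast_read(chan);
        if (!f)
            break;                      /* hangup */

        if (f->frametype == AST_FRAME_VOICE && f->subclass != expected_fmt) {
            ast_log(LOG_NOTICE, "Voice format changed from %d to %d mid-call\n",
                    expected_fmt, f->subclass);
            expected_fmt = f->subclass;
            /* rebuild any codec/decoder state here, or ask the core to
             * translate back with ast_set_read_format() */
        }

        /* ... normal per-frame processing ... */
        ast_frfree(f);
    }
    return 0;
}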





