[Asterisk-Dev] changing codec during call

Steve Kann stevek at stevek.com
Fri Feb 25 12:05:53 MST 2005


Steve Underwood wrote:

> Steve Kann wrote:
>
>> Race Vanderdecken wrote:
>>
>>> Detecting the bandwidth constraint might be possible
>>> using RTCP. http://www.nwfusion.com/news/tech/2003/1117techupdate.html
>>>
>>> I have not looked to see if Asterisk is using RTCP, but that would be
>>> the correct way to control and detect.
>>>  
>>>
>>
>> IAX2 now supports sending all of the parameters that are described in 
>> the _extended_ RTCP XR stuff you quote there (the basic RTCP RR does 
>> not support all of this).
>>
>> But I still fail to see how you can determine from this information 
>> alone, whether reducing your bandwidth usage by 5kbps or whatever is 
>> going to affect the call quality in a positive or negative way.
>>
>> Certainly, it would be good network policy for us to lower our 
>> consumption when we see congestion (like TCP does), but it is not 
>> necessarily going to improve the quality of our call.
>
>
> High packet loss rate, or even high jitter, might give us a clue that 
> a lower bit rate would be beneficial. 

As I said in my previous message on this topic: depending on the location 
of the congestion, it might just make things worse.

If the congestion is on a very low-bandwidth last hop, it might help.  But 
if the congestion is on a high-bandwidth link (and caused by other 
applications, or even line errors), it will just make things sound 
worse.  For example, imagine that the loss is on some 45 Mbps link in 
the middle of the Internet, being caused by some network-unfriendly 
application.  If we lower our bandwidth consumption, it's unlikely to 
make any difference at all in the loss rate.  However, it will probably 
make the loss _more_ noticeable to the user, not less.  
(i.e. going from 20 ms uLaw frames to 60 ms highly-compressed frames will 
probably cause the same loss percentage to be much more 
noticeable, and more difficult to conceal.)

> I have no idea how to meaningfully determine that a higher bit rate 
> would be harmless. Probing with the higher rate every time the loss 
> and jitter are low is quite likely to cause far too much juggling of 
> rates.


There's a lot of research out there on this for TCP 
behavior and algorithms.  If we wanted to make our protocol 
network-friendly, we would definitely slow down our rates when we saw 
loss, doing congestion avoidance like TCP does.  But I'm not sure we 
want our VoIP streams to be as nice as TCP is, because in general, 
VoIP streams are going to be considered a higher priority in the 
network.  (And surely, those RTP streams competing with IAX sure 
aren't going to slow down!)
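For what TCP-style congestion avoidance would look like applied to a voice stream, here's a rough AIMD (additive-increase, multiplicative-decrease) sketch. The rate bounds, step size, and adapt() policy are all assumptions for illustration; nothing like this exists in Asterisk or IAX2:

```python
# Hedged sketch of TCP-style AIMD rate control for a voice stream:
# halve the bitrate when loss is reported, creep it back up otherwise.
# Constants are illustrative (roughly G.729 at the low end, uLaw at the top).

MIN_KBPS, MAX_KBPS = 8, 64

def adapt(rate_kbps, loss_seen):
    """One AIMD step: multiplicative decrease on loss, additive increase otherwise."""
    if loss_seen:
        return max(MIN_KBPS, rate_kbps // 2)
    return min(MAX_KBPS, rate_kbps + 2)

rate = 64
for loss in [False, False, True, False, False, False]:
    rate = adapt(rate, loss)
print(rate)
```

This is exactly the "nice" behavior described above: one loss report drops the stream from 64 kbps to 32 kbps, and it then claws back only 2 kbps per report. RTP streams that don't do this would keep the bandwidth the IAX stream gave up, which is the objection raised in the paragraph above.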

-SteveK

More information about the asterisk-dev mailing list