[Asterisk-Dev] changing codec during call

Steve Kann stevek at stevek.com
Fri Feb 25 12:31:31 MST 2005


Steve Underwood wrote:

> Steve Kann wrote:
>
>> Steve Underwood wrote:
>>
>>> Steve Kann wrote:
>>>
>>>> Race Vanderdecken wrote:
>>>>
>>>>> Detecting the bandwidth constraint might be possible
>>>>> using RTCP. 
>>>>> http://www.nwfusion.com/news/tech/2003/1117techupdate.html
>>>>>
>>>>> I have not looked to see if Asterisk is using RTCP, but that would be
>>>>> the correct way to detect and control it.
>>>>>  
>>>>>
>>>>
>>>> IAX2 now supports sending all of the parameters that are described 
>>>> in the _extended_ RTCP XR stuff you quote there (the basic RTCP RR 
>>>> does not support all of this).
>>>>
>>>> But I still fail to see how you can determine, from this information 
>>>> alone, whether reducing your bandwidth usage by 5 kbps or whatever 
>>>> will affect the call quality positively or negatively.
>>>>
>>>> Certainly, it would be good network policy for us to lower our 
>>>> consumption when we see congestion (like TCP does), but it is not 
>>>> necessarily going to improve the quality of our call.
>>>
>>>
>>>
>>>
>>> A high packet loss rate, or even high jitter, might give us a clue 
>>> that a lower bit rate would be beneficial. 
>>
>>
>>
>> As I noted in my previous message on the topic, depending on the 
>> location of the congestion, it might just make things worse.
>>
>> If the congestion is a very low-bandwidth last hop, it might help.  
>> But if the congestion is on a high-bandwidth link (and caused by 
>> other applications, or even line errors), it will just make things 
>> sound worse.   For example, imagine that the loss is on some 45 Mbps 
>> link in the middle of the internet, being caused by some 
>> network-unfriendly application.  If we lower our bandwidth 
>> consumption, it's unlikely to make any difference at all in the loss 
>> rate.  However, it will probably make the loss _more_ noticeable to 
>> the user, not less.  (i.e. going from 20ms uLaw frames 
>> to 60ms highly-compressed frames will probably end up making the 
>> same loss percentage much more noticeable, and more difficult to 
>> conceal.)
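
(To make that concrete with rough, made-up numbers -- this is just 
back-of-the-envelope arithmetic, not anything Asterisk computes:)

    /* Illustrative only: at the same random loss rate, bigger frames
     * mean fewer but longer audio gaps, which are much harder for
     * packet-loss concealment to paper over. */
    #include <stdio.h>

    int main(void)
    {
        double loss = 0.01;            /* assume 1% packet loss */
        int frame_ms[] = { 20, 60 };   /* small uLaw vs. big compressed frames */

        for (int i = 0; i < 2; i++) {
            double pps  = 1000.0 / frame_ms[i];  /* packets per second */
            double gaps = pps * loss;            /* lost frames per second */
            printf("%2dms frames: %.1f pkt/s, %.2f gaps/s, each gap %dms\n",
                   frame_ms[i], pps, gaps, frame_ms[i]);
        }
        return 0;
    }

(Either way you lose 1% of the audio; but one 20ms gap every couple of 
seconds is barely audible, while a 60ms hole every six seconds is an 
obvious dropout.)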
>
>
> I fully realise this. That's why I said "clue" rather than something 
> like "solid indication" :-) If you could work out that the bandwidth in 
> use is likely to flood the local tributary, you could double every 
> packet, or send overlapping packets, in a fight with the other users 
> of the main channel. Evil, but possibly self-beneficial. :-)


I wouldn't do that; I'd use FEC ((3,2) or (4,5) or whatever)...
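
(For the curious, the simplest flavor of that is a (3,2)-style XOR 
parity scheme.  A toy sketch -- none of this is IAX2 code, and the 
packetization and signaling are hand-waved entirely:)

    #include <stddef.h>

    #define FRAME_LEN 160   /* e.g. 20ms of uLaw at 8kHz */

    /* For every two media frames a and b, also send their XOR as a
     * third packet.  If any one of the three is lost, XORing the two
     * survivors recovers it -- 50% overhead, versus 100% for blind
     * packet doubling, and it still survives any single loss. */
    static void fec_parity(const unsigned char *a, const unsigned char *b,
                           unsigned char *parity)
    {
        for (size_t i = 0; i < FRAME_LEN; i++)
            parity[i] = a[i] ^ b[i];
    }

    /* Recovery is the same XOR: lost = survivor ^ parity. */
    static void fec_recover(const unsigned char *survivor,
                            const unsigned char *parity,
                            unsigned char *lost)
    {
        for (size_t i = 0; i < FRAME_LEN; i++)
            lost[i] = survivor[i] ^ parity[i];
    }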

>>
>>> I have no idea how to meaningfully determine that a higher bit rate 
>>> would be harmless. Probing with the higher rate every time the loss 
>>> and jitter are low is quite likely to cause far too much juggling of 
>>> rates.
>>
>>
>>
>>
>> There's a lot of research on this kind of stuff out there for TCP 
>> behavior and algorithms.   If we wanted to make our protocol 
>> network-friendly, we would definitely slow down our rates when we saw 
>> loss, to do congestion avoidance like TCP does.  But I'm not sure we 
>> want our VoIP streams to be as nice as TCP is, because in general, 
>> VoIP streams are going to be treated as a higher priority in the 
>> network.  (And the RTP streams that compete with IAX certainly 
>> aren't going to slow down!)
>
>
> I don't think any of that TCP stuff has much meaning for streaming.

I'm not saying we should behave like TCP does (although congestion 
avoidance would be helpful even on a big link, if that link were being 
filled entirely with IAX traffic).  But if we're trying to figure out 
what jitter or packet loss means in terms of congestion through the 
internet, you definitely need to understand what TCP is doing, because 
the traffic patterns there are all determined by TCP's behavior.  
(These days, seemingly by BitTorrent-over-TCP's behavior specifically :)).
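
(For anyone who wants to experiment: the textbook shape of TCP's 
behavior is AIMD -- additive increase, multiplicative decrease.  A 
minimal sketch of applying the same rule to a codec's target bitrate; 
the constants, the 2% threshold, and the hookup to the peer's loss 
reports are all invented for illustration:)

    /* Hypothetical AIMD-style rate control, driven by whatever loss
     * statistics the peer reports (RTCP RR/XR, or IAX2's equivalents).
     * Call once per report interval. */

    #define RATE_MIN   5300    /* lowest codec mode, bps */
    #define RATE_MAX  64000    /* e.g. uLaw, bps */

    static int target_bps = RATE_MAX;

    static void rate_update(double loss_fraction)
    {
        if (loss_fraction > 0.02) {
            /* Multiplicative decrease: back off hard on congestion,
             * the way TCP halves its window. */
            target_bps /= 2;
            if (target_bps < RATE_MIN)
                target_bps = RATE_MIN;
        } else {
            /* Additive increase: probe gently for spare bandwidth. */
            target_bps += 1000;
            if (target_bps > RATE_MAX)
                target_bps = RATE_MAX;
        }
        /* The caller would then pick the codec/mode nearest target_bps --
         * which is exactly where the 20ms-vs-60ms framing tradeoff
         * above starts to matter. */
    }

(Whether you'd actually want a voice stream to be this polite is, as 
above, debatable.)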



