[asterisk-dev] channel.c locking and other

Russell Bryant russell at digium.com
Mon Apr 13 11:23:28 CDT 2009


Mark Michelson wrote:
> Russell Bryant wrote:
>> Damien Wedhorn wrote:
>>> 4. Why does __ast_queue_frame use the channel lock? Couldn't we queue and 
>>> dequeue based on a list lock? This is the biggest for me. If we used a 
>>> list lock or similar, I could remove most (if not all) chan locking in 
>>> chan_skinny. May apply to other channels as well. However, I'm not 
>>> familiar enough with all the ins and outs of channel.c (and how it 
>>> relates to all the other code) to know if this is feasible.
>> This is a valid suggestion.  You are correct that it would simplify some 
>> of the channel locking complexity in channel driver code.  A patch that 
>> implements this technique would be welcome.
>>
>> It may be a bit more complicated than it sounds when you get into the 
>> details.  This lock would also have to be used to synchronize access to 
>> the channel's timing interface, for example.  I'd have to put some more 
>> thought into this to think of all of the dirty details ...
>>
> 
> This suggestion applies to chan_local as well. Big time.

In fact, this suggestion applies to _every_ channel driver, as they all
have to deal with deadlock avoidance when queuing frames on to a
channel.  In most cases, a channel driver has a pvt lock held when it
wants to queue a frame, so it has to do some juggling to avoid a
deadlock.  A frame list lock would simplify this situation quite a bit.

> In a recent code review I posted (http://reviewboard.digium.com/r/219/), I took 
> a different approach to this problem than what is suggested here. Instead of 
> adding a list lock to the channel's frame queue, I instead added a task 
> processor to the local_pvt struct. I pushed the frame queueing operation to the 
> task processor so that it would be executed in a different thread. My idea could 
> be made more generic by adding a taskprocessor to the ast_channel struct instead 
> and changing the ast_queue_frame function to push the frame queuing execution to 
> the task processor instead. In addition, there could be API calls added for 
> pushing tasks to the channel task processor.
> 
> I'll list some pros and cons of the approaches listed here:
> 
> 1. Lock on the channel's frame queue.
>      + Does not require any extra threads
>      - As Russell suggested, this may not be as straightforward as it seems.
>      - Only would solve deadlock issues with regards to queueing frames.

Agreed.  The first con is still a little bit of an unknown.  I've had
this in the back of my mind for a few weeks and have not come up with
any show stoppers.  I think it's actually quite doable.
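For the curious, a rough sketch of what a dedicated frame-list lock could look like (hypothetical names, plain pthreads rather than the real ast_frame API). The key property is that the list lock is a leaf lock, taken last and never held while waiting on a channel or pvt lock, so a driver can queue a frame with its pvt lock held and no juggling:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical sketch: the channel's frame queue gets its own
 * mutex, which protects only the list itself and is never held
 * while acquiring any other lock (a "leaf" lock). */

struct frame {
    int kind;               /* stand-in for real frame contents */
    struct frame *next;
};

struct frame_queue {
    pthread_mutex_t lock;   /* protects head/tail only */
    struct frame *head;
    struct frame **tail;
};

static void frame_queue_init(struct frame_queue *q)
{
    pthread_mutex_init(&q->lock, NULL);
    q->head = NULL;
    q->tail = &q->head;
}

/* Safe to call with a pvt lock held: the list lock is taken last
 * and released before returning, so no ordering cycle can form. */
static void frame_queue_push(struct frame_queue *q, struct frame *f)
{
    f->next = NULL;
    pthread_mutex_lock(&q->lock);
    *q->tail = f;
    q->tail = &f->next;
    pthread_mutex_unlock(&q->lock);
}
```

The remaining hard part, as noted above, is making everything that reads the queue (including the timing interface) honor the same lock.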

> 2. Taskprocessor on the ast_channel.
>      + Allows for a generic method of asynchronously pushing tasks to a channel*.

For cases where this could be useful, let me propose an alternate solution.

There is already a scheduler context in the ast_channel struct.
However, it is not utilized as much as it could be.  I propose that we
update the ast_waitfor() and ast_read() code to process tasks in the
channel's scheduler context.  That way, you can have tasks executed
asynchronously on a channel without adding a thread to do it.
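As a rough illustration of the idea (hypothetical names, not the real ast_sched API): the channel carries a queue of deferred tasks, and the thread that services the channel drains it on each waitfor/read cycle, so tasks run asynchronously relative to the caller without any extra thread:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical names, not the real ast_sched API.  Each channel
 * carries deferred tasks; the thread already servicing the channel
 * (the one sitting in waitfor/read) runs them, so the work happens
 * asynchronously with respect to the caller but no new thread is
 * created. */

struct task {
    void (*fn)(void *data);
    void *data;
    struct task *next;
};

struct channel {
    struct task *tasks;     /* pending deferred work (LIFO here) */
};

/* Queue work for later; real code would do this under a lock. */
static void channel_defer(struct channel *chan,
                          void (*fn)(void *), void *data)
{
    struct task *t = malloc(sizeof(*t));
    t->fn = fn;
    t->data = data;
    t->next = chan->tasks;
    chan->tasks = t;
}

/* Called from the channel's own servicing loop, e.g. at the top of
 * each waitfor/read cycle. */
static void channel_run_deferred(struct channel *chan)
{
    struct task *t = chan->tasks;

    chan->tasks = NULL;
    while (t) {
        struct task *next = t->next;
        t->fn(t->data);
        free(t);
        t = next;
    }
}
```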

>      - Essentially will double the number of running threads on a system (could 
> be the killer for this suggestion)

Yeah ...

>      - Does not solve problems where we must lock a channel and perform a 
> synchronous operation**

If what you need to do is lock a channel and execute a task
synchronously, then what is the benefit of queuing it up for another
thread to do for you, when all the current thread is going to do is
sleep until the other thread is done?

> * I found this to be necessary when fixing up chan_local. The reason was that I 
> needed to grab a channel's lock for performing tasks like setting flags.

Understood.  In fact, I think something local to the local_pvt in
chan_local is required to get the desired effect that you were going
for.  If you had a taskprocessor on the ast_channel, or you used a
scheduler context on the channel, you still have to lock the channel to
queue the task, so you haven't gained anything.

> ** This could be fixed by adding an API call to the task processor system such 
> as ast_taskprocessor_push_sync. This would use either a sem_t or ast_cond_t to 
> wait until the task processor has finished executing its task. For situations 
> where we have the tech_pvt locked but need to perform a task that involves 
> locking the channel, this would work wonderfully. However, as I learned while 
> messing with chan_local, you must be careful with your locking habits if trying 
> to do something like this. For example, don't try to lock a channel from a 
> synchronous task processor callback if the thread which pushed the task already 
> has that channel locked; that is guaranteed to cause a deadlock.
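For what it's worth, a minimal pthreads sketch of what a synchronous push could look like (hypothetical names; a real taskprocessor would hand the task to a long-lived worker rather than spawn a thread per call). The caveat above still applies: the callback must not take a lock the pushing thread already holds:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical sketch of a push_sync primitive; the real
 * taskprocessor API differs.  The pushing thread sleeps on a
 * condition variable until the worker reports completion. */

struct sync_task {
    void (*fn)(void *data);
    void *data;
    pthread_mutex_t lock;
    pthread_cond_t cond;
    bool done;
};

/* Runs in the worker thread: execute the task, then wake the
 * pusher.  Per the caveat above, fn must not try to take any lock
 * the pushing thread still holds. */
static void *worker_thread(void *arg)
{
    struct sync_task *t = arg;

    t->fn(t->data);

    pthread_mutex_lock(&t->lock);
    t->done = true;
    pthread_cond_signal(&t->cond);
    pthread_mutex_unlock(&t->lock);
    return NULL;
}

/* Runs in the pushing thread: block until the task has executed. */
static void push_sync(void (*fn)(void *), void *data)
{
    struct sync_task t;
    pthread_t worker;

    t.fn = fn;
    t.data = data;
    t.done = false;
    pthread_mutex_init(&t.lock, NULL);
    pthread_cond_init(&t.cond, NULL);

    /* A real taskprocessor has a long-lived worker; spawning a
     * thread per task just keeps the sketch self-contained. */
    pthread_create(&worker, NULL, worker_thread, &t);

    pthread_mutex_lock(&t.lock);
    while (!t.done)
        pthread_cond_wait(&t.cond, &t.lock);
    pthread_mutex_unlock(&t.lock);

    pthread_join(&worker, NULL);
    pthread_cond_destroy(&t.cond);
    pthread_mutex_destroy(&t.lock);
}
```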


-- 
Russell Bryant
Digium, Inc. | Senior Software Engineer, Open Source Team Lead
445 Jan Davis Drive NW - Huntsville, AL 35806 - USA
Check us out at: www.digium.com & www.asterisk.org


