[asterisk-dev] [Code Review]: Threadpool Support

rmudgett reviewboard at asterisk.org
Mon Jan 7 14:57:25 CST 2013



> On Jan. 4, 2013, 12:48 p.m., rmudgett wrote:
> > /trunk/main/threadpool.c, lines 38-52
> > <https://reviewboard.asterisk.org/r/2242/diff/1/?file=32361#file32361line38>
> >
> >     Moving thread objects between containers can fail, and a failure can leave the thread performing the move stuck waiting for a worker thread to join.
> >     
> >     An alternative is to just have one container of threads with a thread state value.  You can then manually keep track of the number of threads in each state.
> 
> Mark Michelson wrote:
>     I took a look at the various places where threads are linked into containers. There are four places where it happens:
>     
>     1) Idle threads become active when work gets added
>     2) Active threads go idle when a worker thread has no more work to perform
>     3) Newly created threads are added to the active thread container
>     4) Active threads are moved to the zombie container when the threadpool needs to shrink
>     
>     So examining each:
>     
>     1) If linking fails, then there's no worry that we will block trying to join the thread since it is currently idle and can be shut down immediately
>     2) If linking fails, then there's no worry that we will block trying to join the thread because it has just gone idle and can be shut down immediately
>     3) If linking fails, then it's possible we could block trying to join the thread because it starts its life in the active state and could be performing work.
>     4) If linking fails, then it's possible we could block while trying to join the thread because the thread is still potentially actively pursuing work.
>     
>     So of the four, the final two are the ones to worry about.
>     
>     The big problem I have with changing this from its current model is the potential performance hit. Right now, changing an active thread to idle means grabbing any thread from the active container and adding it to the idle container. Under the single-container model, it would mean iterating over the container until an active thread is found, and the same goes for the other thread transitions. I'd also have to keep a count of the number of threads in each state, since I could no longer just get the container count.
>     
>     Consider also that linking can only fail as a result of some bad internal issues, such as not having memory to allocate a node. This means that
>     
>     I) The possibility of ao2_link() failing is very remote
>     II) When it does fail, it's indicative of wider system issues
>     
>     Given all of this, I don't want to overhaul what I've done. Instead, I'll modify the existing code to work around the two cases where I mentioned there could be a blocking issue.
>     
>     3) This is the trickier of the two problem cases. I think what I'll have to do is separate worker thread allocation from worker thread starting. This way, I can allocate the thread, link it into the active container and then start the worker thread if the linking succeeded.
>     4) This one can be easily solved by simply not removing the thread from the active container if we can't link to the zombie container. The net result is that the threadpool will not appear to have shrunk by the proper amount, but it is less catastrophic than blocking the threadpool's control thread.
> 
> Mark Michelson wrote:
>     Actually, case 1) will require a small modification as well. If we can't link the idle thread into the active container, we need to make sure not to tell the worker thread to wake up. We'll simply leave it idle instead.

Instead of using three ao2 containers, just have three lists and move the ao2 objects between the lists.  That way no list nodes need to be allocated.


- rmudgett


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviewboard.asterisk.org/r/2242/#review7612
-----------------------------------------------------------


On Dec. 11, 2012, 12:53 p.m., Mark Michelson wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviewboard.asterisk.org/r/2242/
> -----------------------------------------------------------
> 
> (Updated Dec. 11, 2012, 12:53 p.m.)
> 
> 
> Review request for Asterisk Developers.
> 
> 
> Summary
> -------
> 
> This review adds a generic threadpool for Asterisk.
> 
> The threadpool implementation here is very similar to the one implemented in Asterisk SCF. Since this is implemented in C, the "is a" semantics offered by C++ are not available, so that had to be worked around.
> 
> The threadpool, when created, creates a taskprocessor. The threadpool itself exists as the private data on a taskprocessor listener. In this way, the threadpool can react to changes on the taskprocessor by informing its threadpool listener.
> 
> The threadpool informs its listeners of various changes:
> - When a task gets pushed into the threadpool
> - When the threadpool's task list has become empty
> - When the state of the threadpool's threads has changed, such as when an active thread goes idle or an idle thread is destroyed.
> 
> The threadpool listener can react to these changes as it sees fit. This allows for different policies to be adopted by different modules.
> 
> The threadpool offers some options for automating common modes of operation. At allocation, an idle timeout can be specified so that threads which have been idle for that long are automatically removed from the threadpool. An automatic increment can also be specified so that, when tasks are added and no idle threads are available to handle them, the threadpool grows by that amount. With these options, listeners may only need to intervene in certain situations. More options can be added later if they are not too policy-specific.
> 
> This set of changes is dependent on the set of taskprocessor changes introduced in https://reviewboard.asterisk.org/r/2200 . Since these changes were developed in the same branch as the taskprocessor changes, it means that all the taskprocessor changes are also included in this review. While there are minor changes in the taskprocessor code here as compared to the code in review 2200, they are very minor changes, and so close scrutiny of the taskprocessor changes is not as important as the threadpool code itself.
> 
> 
> This addresses bug ASTERISK-20691.
>     https://issues.asterisk.org/jira/browse/ASTERISK-20691
> 
> 
> Diffs
> -----
> 
>   /trunk/include/asterisk/taskprocessor.h 377805 
>   /trunk/include/asterisk/threadpool.h PRE-CREATION 
>   /trunk/main/taskprocessor.c 377805 
>   /trunk/main/threadpool.c PRE-CREATION 
>   /trunk/tests/test_threadpool.c PRE-CREATION 
> 
> Diff: https://reviewboard.asterisk.org/r/2242/diff
> 
> 
> Testing
> -------
> 
> A suite of unit tests has been added to ensure that the threadpool works as expected. They all pass.
> 
> 
> Thanks,
> 
> Mark
> 
>
