<p>Richard Mudgett has uploaded this change for <strong>review</strong>.</p><p><a href="https://gerrit.asterisk.org/8757">View Change</a></p><pre style="font-family: monospace,monospace; white-space: pre-wrap;">res_pjsip.c: Split ast_sip_push_task_synchronous() to fit expectations.<br><br>ast_sip_push_task_synchronous() did not necessarily execute the passed-in<br>task under the specified serializer.  If the current thread was any<br>registered pjsip thread, the task would execute immediately instead of<br>under the specified serializer.  Reentrancy issues could result if the<br>task did not execute under the right serializer.<br><br>ast_sip_push_task_synchronous() originally checked whether the current<br>thread was a registered pjsip thread because of a deadlock between<br>masquerades and the channel technology's fixup callback<br>(ASTERISK_22936).  A subsequent masquerade deadlock fix (ASTERISK_24356)<br>involving call pickups avoided the original deadlock situation entirely,<br>so the PJSIP channel technology's fixup callback no longer needed to call<br>ast_sip_push_task_synchronous().<br><br>However, there are a few places where this unexpected behavior is still<br>required to avoid deadlocks.  The pjsip monitor thread executes callbacks<br>whose calls to ast_sip_push_task_synchronous() would deadlock if the task<br>were actually pushed to the specified serializer.  I ran into one dealing<br>with pubsub subscriptions, where an ao2 destructor called<br>ast_sip_push_task_synchronous().<br><br>* Split ast_sip_push_task_synchronous() into<br>ast_sip_push_task_wait_servant() and ast_sip_push_task_wait_serializer().<br>ast_sip_push_task_wait_servant() has the old behavior of<br>ast_sip_push_task_synchronous().  ast_sip_push_task_wait_serializer() has<br>the new behavior where the task is always executed by the specified<br>serializer or a picked serializer if one is not passed in.  
Both functions<br>behave the same if the current thread is not a SIP servant.<br><br>* Redirected ast_sip_push_task_synchronous() to<br>ast_sip_push_task_wait_servant() to preserve API for released branches.<br><br>ASTERISK_26806<br><br>Change-Id: Id040fa42c0e5972f4c8deef380921461d213b9f3<br>---<br>M channels/chan_pjsip.c<br>M channels/pjsip/dialplan_functions.c<br>M include/asterisk/res_pjsip.h<br>M res/res_pjsip.c<br>M res/res_pjsip/config_system.c<br>M res/res_pjsip/config_transport.c<br>M res/res_pjsip_header_funcs.c<br>M res/res_pjsip_history.c<br>M res/res_pjsip_outbound_publish.c<br>M res/res_pjsip_outbound_registration.c<br>M res/res_pjsip_pubsub.c<br>M res/res_pjsip_refer.c<br>M res/res_pjsip_transport_websocket.c<br>13 files changed, 185 insertions(+), 71 deletions(-)<br><br></pre><pre style="font-family: monospace,monospace; white-space: pre-wrap;">git pull ssh://gerrit.asterisk.org:29418/asterisk refs/changes/57/8757/1</pre><pre style="font-family: monospace,monospace; white-space: pre-wrap;">diff --git a/channels/chan_pjsip.c b/channels/chan_pjsip.c<br>index 6b26648..dde7416 100644<br>--- a/channels/chan_pjsip.c<br>+++ b/channels/chan_pjsip.c<br>@@ -718,7 +718,7 @@<br>      can occur between this thread and bridging (specifically when native bridging<br>         attempts to do direct media) */<br>    ast_channel_unlock(ast);<br>-     res = ast_sip_push_task_synchronous(session->serializer, answer, session);<br>+        res = ast_sip_push_task_wait_serializer(session->serializer, answer, session);<br>     if (res) {<br>            if (res == -1) {<br>                      ast_log(LOG_ERROR,"Cannot answer '%s': Unable to push answer task to the threadpool.\n",<br>@@ -2502,10 +2502,10 @@<br> <br>        req_data.topology = topology;<br>         req_data.dest = data;<br>-        /* Default failure value in case ast_sip_push_task_synchronous() itself fails. */<br>+    /* Default failure value in case ast_sip_push_task_wait_servant() itself fails. 
*/<br>    req_data.cause = AST_CAUSE_FAILURE;<br> <br>-       if (ast_sip_push_task_synchronous(NULL, request, &req_data)) {<br>+   if (ast_sip_push_task_wait_servant(NULL, request, &req_data)) {<br>           *cause = req_data.cause;<br>              return NULL;<br>  }<br>diff --git a/channels/pjsip/dialplan_functions.c b/channels/pjsip/dialplan_functions.c<br>index aa376f8..ce347dc 100644<br>--- a/channels/pjsip/dialplan_functions.c<br>+++ b/channels/pjsip/dialplan_functions.c<br>@@ -897,7 +897,7 @@<br>   func_args.field = args.field;<br>         func_args.buf = buf;<br>  func_args.len = len;<br>- if (ast_sip_push_task_synchronous(func_args.session->serializer, read_pjsip, &func_args)) {<br>+   if (ast_sip_push_task_wait_serializer(func_args.session->serializer, read_pjsip, &func_args)) {<br>                ast_log(LOG_WARNING, "Unable to read properties of channel %s: failed to push task\n", ast_channel_name(chan));<br>             ao2_ref(func_args.session, -1);<br>               return -1;<br>@@ -1219,7 +1219,7 @@<br>             mdata.media_type = AST_MEDIA_TYPE_VIDEO;<br>      }<br> <br>- return ast_sip_push_task_synchronous(channel->session->serializer, media_offer_write_av, &mdata);<br>+  return ast_sip_push_task_wait_serializer(channel->session->serializer, media_offer_write_av, &mdata);<br> }<br> <br> int pjsip_acf_dtmf_mode_read(struct ast_channel *chan, const char *cmd, char *data, char *buf, size_t len)<br>@@ -1390,7 +1390,7 @@<br> <br>         ast_channel_unlock(chan);<br> <br>- return ast_sip_push_task_synchronous(channel->session->serializer, dtmf_mode_refresh_cb, &rdata);<br>+  return ast_sip_push_task_wait_serializer(channel->session->serializer, dtmf_mode_refresh_cb, &rdata);<br> }<br> <br> static int refresh_write_cb(void *obj)<br>@@ -1438,5 +1438,5 @@<br>                rdata.method = AST_SIP_SESSION_REFRESH_METHOD_UPDATE;<br>         }<br> <br>- return ast_sip_push_task_synchronous(channel->session->serializer, refresh_write_cb, &rdata);<br>+      return ast_sip_push_task_wait_serializer(channel->session->serializer, refresh_write_cb, &rdata);<br> }<br>diff --git a/include/asterisk/res_pjsip.h b/include/asterisk/res_pjsip.h<br>index b01d6f5..e937018 100644<br>--- a/include/asterisk/res_pjsip.h<br>+++ b/include/asterisk/res_pjsip.h<br>@@ -1543,28 +1543,92 @@<br> int ast_sip_push_task(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data);<br> <br> /*!<br>- * \brief Push a task to SIP servants and wait for it to complete<br>+ * \brief Push a task to SIP servants and wait for it to complete.<br>  *<br>- * Like \ref ast_sip_push_task except that it blocks until the task completes.<br>+ * Like \ref ast_sip_push_task except that it blocks until the task<br>+ * completes.  If the current thread is a SIP servant thread then the<br>+ * task executes immediately.  Otherwise, the specified serializer<br>+ * executes the task and the current thread waits for it to complete.<br>  *<br>- * \warning \b Never use this function in a SIP servant thread. This can potentially<br>- * cause a deadlock. If you are in a SIP servant thread, just call your function<br>- * in-line.<br>+ * \note PJPROJECT callbacks tend to have locks already held when<br>+ * called.<br>  *<br>- * \warning \b Never hold locks that may be acquired by a SIP servant thread when<br>- * calling this function. 
Doing so may cause a deadlock if all SIP servant threads<br>- * are blocked waiting to acquire the lock while the thread holding the lock is<br>- * waiting for a free SIP servant thread.<br>+ * \warning \b Never hold locks that may be acquired by a SIP servant<br>+ * thread when calling this function.  Doing so may cause a deadlock<br>+ * if all SIP servant threads are blocked waiting to acquire the lock<br>+ * while the thread holding the lock is waiting for a free SIP servant<br>+ * thread.<br>  *<br>- * \param serializer The SIP serializer to which the task belongs. May be NULL.<br>+ * \warning \b Use of this function in an ao2 destructor callback is a<br>+ * bad idea.  You don't have control over which thread executes the<br>+ * destructor.  Attempting to shift execution to another thread with<br>+ * this function is likely to cause deadlock.<br>+ *<br>+ * \param serializer The SIP serializer to execute the task if the<br>+ * current thread is not a SIP servant.  NULL if any of the default<br>+ * serializers can be used.<br>  * \param sip_task The task to execute<br>  * \param task_data The parameter to pass to the task when it executes<br>- * \retval 0 Success<br>- * \retval -1 Failure<br>+ *<br>+ * \note The sip_task() return value may need to be distinguished from<br>+ * the failure to push the task.<br>+ *<br>+ * \return sip_task() return value on success.<br>+ * \retval -1 Failure to push the task.<br>+ */<br>+int ast_sip_push_task_wait_servant(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data);<br>+<br>+/*!<br>+ * \brief Push a task to SIP servants and wait for it to complete.<br>+ * \deprecated Replaced with ast_sip_push_task_wait_servant().<br>  */<br> int ast_sip_push_task_synchronous(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data);<br> <br> /*!<br>+ * \brief Push a task to the serializer and wait for it to complete.<br>+ *<br>+ * Like \ref ast_sip_push_task except that it blocks until the task is<br>+ * completed by the specified serializer.  If the specified serializer<br>+ * is the current thread then the task executes immediately.<br>+ *<br>+ * \note PJPROJECT callbacks tend to have locks already held when<br>+ * called.<br>+ *<br>+ * \warning \b Never hold locks that may be acquired by a SIP servant<br>+ * thread when calling this function.  Doing so may cause a deadlock<br>+ * if all SIP servant threads are blocked waiting to acquire the lock<br>+ * while the thread holding the lock is waiting for a free SIP servant<br>+ * thread for the serializer to execute in.<br>+ *<br>+ * \warning \b Never hold locks that may be acquired by the serializer<br>+ * when calling this function.  Doing so will cause a deadlock.<br>+ *<br>+ * \warning \b Never use this function in the pjsip monitor thread (It<br>+ * is a SIP servant thread).  This is likely to cause a deadlock.<br>+ *<br>+ * \warning \b Use of this function in an ao2 destructor callback is a<br>+ * bad idea.  You don't have control over which thread executes the<br>+ * destructor.  Attempting to shift execution to another thread with<br>+ * this function is likely to cause deadlock.<br>+ *<br>+ * \param serializer The SIP serializer to execute the task.  
NULL if<br>+ * any of the default serializers can be used.<br>+ * \param sip_task The task to execute<br>+ * \param task_data The parameter to pass to the task when it executes<br>+ *<br>+ * \note It is generally better to call<br>+ * ast_sip_push_task_wait_servant() if you pass NULL for the<br>+ * serializer parameter.<br>+ *<br>+ * \note The sip_task() return value may need to be distinguished from<br>+ * the failure to push the task.<br>+ *<br>+ * \return sip_task() return value on success.<br>+ * \retval -1 Failure to push the task.<br>+ */<br>+int ast_sip_push_task_wait_serializer(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data);<br>+<br>+/*!<br>  * \brief Determine if the current thread is a SIP servant thread<br>  *<br>  * \retval 0 This is not a SIP servant thread<br>diff --git a/res/res_pjsip.c b/res/res_pjsip.c<br>index 7c99297..803d93f 100644<br>--- a/res/res_pjsip.c<br>+++ b/res/res_pjsip.c<br>@@ -2743,7 +2743,7 @@<br> <br> int ast_sip_register_service(pjsip_module *module)<br> {<br>-       return ast_sip_push_task_synchronous(NULL, register_service, &module);<br>+   return ast_sip_push_task_wait_servant(NULL, register_service, &module);<br> }<br> <br> static int unregister_service(void *data)<br>@@ -2759,7 +2759,7 @@<br> <br> void ast_sip_unregister_service(pjsip_module *module)<br> {<br>- ast_sip_push_task_synchronous(NULL, unregister_service, &module);<br>+        ast_sip_push_task_wait_servant(NULL, unregister_service, &module);<br> }<br> <br> static struct ast_sip_authenticator *registered_authenticator;<br>@@ -3009,7 +3009,7 @@<br>                 return CLI_SHOWUSAGE;<br>         }<br> <br>- ast_sip_push_task_synchronous(NULL, do_cli_dump_endpt, a);<br>+   ast_sip_push_task_wait_servant(NULL, do_cli_dump_endpt, a);<br> <br>        return CLI_SUCCESS;<br> }<br>@@ -4485,21 +4485,30 @@<br>      return 0;<br> }<br> <br>+static struct ast_taskprocessor *serializer_pool_pick(void)<br>+{<br>+   struct ast_taskprocessor *serializer;<br>+<br>+     unsigned int pos;<br>+<br>+ /*<br>+    * Pick a serializer to use from the pool.<br>+    *<br>+    * Note: We don't care about any reentrancy behavior<br>+      * when incrementing serializer_pool_pos.  If it gets<br>+         * incorrectly incremented it doesn't matter.<br>+     */<br>+  pos = serializer_pool_pos++;<br>+ pos %= SERIALIZER_POOL_SIZE;<br>+ serializer = serializer_pool[pos];<br>+<br>+        return serializer;<br>+}<br>+<br> int ast_sip_push_task(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data)<br> {<br>         if (!serializer) {<br>-           unsigned int pos;<br>-<br>-         /*<br>-            * Pick a serializer to use from the pool.<br>-            *<br>-            * Note: We don't care about any reentrancy behavior<br>-              * when incrementing serializer_pool_pos.  If it gets<br>-                 * incorrectly incremented it doesn't matter.<br>-             */<br>-          pos = serializer_pool_pos++;<br>-         pos %= SERIALIZER_POOL_SIZE;<br>-         serializer = serializer_pool[pos];<br>+           serializer = serializer_pool_pick();<br>  }<br> <br>  return ast_taskprocessor_push(serializer, sip_task, task_data);<br>@@ -4523,9 +4532,8 @@<br> <br>     /*<br>     * Once we unlock std->lock after signaling, we cannot access<br>-      * std again.  The thread waiting within<br>-      * ast_sip_push_task_synchronous() is free to continue and<br>-    * release its local variable (std).<br>+  * std again.  
The thread waiting within ast_sip_push_task_wait()<br>+     * is free to continue and release its local variable (std).<br>   */<br>   ast_mutex_lock(&std->lock);<br>    std->complete = 1;<br>@@ -4535,14 +4543,10 @@<br>        return ret;<br> }<br> <br>-int ast_sip_push_task_synchronous(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data)<br>+static int ast_sip_push_task_wait(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data)<br> {<br>        /* This method is an onion */<br>         struct sync_task_data std;<br>-<br>-        if (ast_sip_thread_is_servant()) {<br>-           return sip_task(task_data);<br>-  }<br> <br>  memset(&std, 0, sizeof(std));<br>     ast_mutex_init(&std.lock);<br>@@ -4565,6 +4569,42 @@<br>        ast_mutex_destroy(&std.lock);<br>     ast_cond_destroy(&std.cond);<br>      return std.fail;<br>+}<br>+<br>+int ast_sip_push_task_wait_servant(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data)<br>+{<br>+     if (ast_sip_thread_is_servant()) {<br>+           return sip_task(task_data);<br>+  }<br>+<br>+ return ast_sip_push_task_wait(serializer, sip_task, task_data);<br>+}<br>+<br>+int ast_sip_push_task_synchronous(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data)<br>+{<br>+       return ast_sip_push_task_wait_servant(serializer, sip_task, task_data);<br>+}<br>+<br>+int ast_sip_push_task_wait_serializer(struct ast_taskprocessor *serializer, int (*sip_task)(void *), void *task_data)<br>+{<br>+   if (!serializer) {<br>+           /* Caller doesn't care which PJSIP serializer the task executes under. */<br>+                serializer = serializer_pool_pick();<br>+         if (!serializer) {<br>+                   /* No serializer picked to execute the task */<br>+                       return -1;<br>+           }<br>+    }<br>+    if (ast_taskprocessor_is_task(serializer)) {<br>+         /*<br>+            * We are the requested serializer so we must execute<br>+                 * the task now or deadlock waiting on ourself to<br>+             * execute it.<br>+                */<br>+          return sip_task(task_data);<br>+  }<br>+<br>+ return ast_sip_push_task_wait(serializer, sip_task, task_data);<br> }<br> <br> void ast_copy_pj_str(char *dest, const pj_str_t *src, size_t size)<br>@@ -5192,7 +5232,7 @@<br>     * We must wait for the reload to complete so multiple<br>         * reloads cannot happen at the same time.<br>     */<br>-  if (ast_sip_push_task_synchronous(NULL, reload_configuration_task, NULL)) {<br>+  if (ast_sip_push_task_wait_servant(NULL, reload_configuration_task, NULL)) {<br>          ast_log(LOG_WARNING, "Failed to reload PJSIP\n");<br>           return -1;<br>    }<br>@@ -5209,7 +5249,7 @@<br>      /* The thread this is called from cannot call PJSIP/PJLIB functions,<br>   * so we have to push the work to the threadpool to handle<br>     */<br>-  ast_sip_push_task_synchronous(NULL, unload_pjsip, NULL);<br>+     ast_sip_push_task_wait_servant(NULL, unload_pjsip, NULL);<br>     ast_sip_destroy_scheduler();<br>  serializer_pool_shutdown();<br>   ast_threadpool_shutdown(sip_threadpool);<br>diff --git a/res/res_pjsip/config_system.c b/res/res_pjsip/config_system.c<br>index dfd9240..ed2b5d2 100644<br>--- a/res/res_pjsip/config_system.c<br>+++ b/res/res_pjsip/config_system.c<br>@@ -282,5 +282,5 @@<br> <br> void ast_sip_initialize_dns(void)<br> {<br>-        ast_sip_push_task_synchronous(NULL, system_create_resolver_and_set_nameservers, NULL);<br>+   
    ast_sip_push_task_wait_servant(NULL, system_create_resolver_and_set_nameservers, NULL);<br> }<br>diff --git a/res/res_pjsip/config_transport.c b/res/res_pjsip/config_transport.c<br>index 15c0376..dd7c704 100644<br>--- a/res/res_pjsip/config_transport.c<br>+++ b/res/res_pjsip/config_transport.c<br>@@ -267,7 +267,7 @@<br> {<br>         struct ast_sip_transport_state *state = obj;<br> <br>-      ast_sip_push_task_synchronous(NULL, destroy_sip_transport_state, state);<br>+     ast_sip_push_task_wait_servant(NULL, destroy_sip_transport_state, state);<br> }<br> <br> /*! \brief Destructor for ast_sip_transport state information */<br>diff --git a/res/res_pjsip_header_funcs.c b/res/res_pjsip_header_funcs.c<br>index 6c0f915..798a1cd 100644<br>--- a/res/res_pjsip_header_funcs.c<br>+++ b/res/res_pjsip_header_funcs.c<br>@@ -153,7 +153,7 @@<br>     .type = "header_datastore",<br> };<br> <br>-/*! \brief Data structure used for ast_sip_push_task_synchronous  */<br>+/*! \brief Data structure used for ast_sip_push_task_wait_serializer  */<br> struct header_data {<br>        struct ast_sip_channel_pvt *channel;<br>  char *header_name;<br>@@ -480,11 +480,11 @@<br>     header_data.len = len;<br> <br>     if (!strcasecmp(args.action, "read")) {<br>-            return ast_sip_push_task_synchronous(channel->session->serializer, read_header,<br>-                                                                                         &header_data);<br>+          return ast_sip_push_task_wait_serializer(channel->session->serializer,<br>+                 read_header, &header_data);<br>       } else if (!strcasecmp(args.action, "remove")) {<br>-           return ast_sip_push_task_synchronous(channel->session->serializer, remove_header,<br>-                                                                                       &header_data);<br>+          return ast_sip_push_task_wait_serializer(channel->session->serializer,<br>+                 remove_header, &header_data);<br>     } else {<br>              ast_log(AST_LOG_ERROR,<br>                                "Unknown action '%s' is not valid, must be 'read' or 'remove'.\n",<br>@@ -539,14 +539,14 @@<br>   header_data.len = 0;<br> <br>       if (!strcasecmp(args.action, "add")) {<br>-             return ast_sip_push_task_synchronous(channel->session->serializer, add_header,<br>-                                                                                  &header_data);<br>+          return ast_sip_push_task_wait_serializer(channel->session->serializer,<br>+                 add_header, &header_data);<br>        } else if (!strcasecmp(args.action, "update")) {<br>-           return ast_sip_push_task_synchronous(channel->session->serializer, update_header,<br>-                                                                                       &header_data);<br>+          return ast_sip_push_task_wait_serializer(channel->session->serializer,<br>+                 update_header, &header_data);<br>     } else if (!strcasecmp(args.action, "remove")) {<br>-           return ast_sip_push_task_synchronous(channel->session->serializer, remove_header,<br>-                                                                                       &header_data);<br>+          return ast_sip_push_task_wait_serializer(channel->session->serializer,<br>+                 remove_header, &header_data);<br>     } else {<br>              ast_log(AST_LOG_ERROR,<br>                                "Unknown action '%s' is not valid, must be 'add', 'update', or 'remove'.\n",<br>diff 
--git a/res/res_pjsip_history.c b/res/res_pjsip_history.c<br>index ab035a2..eed06ee 100644<br>--- a/res/res_pjsip_history.c<br>+++ b/res/res_pjsip_history.c<br>@@ -1385,7 +1385,7 @@<br>    ast_cli_unregister_multiple(cli_pjsip, ARRAY_LEN(cli_pjsip));<br>         ast_sip_unregister_service(&logging_module);<br> <br>-  ast_sip_push_task_synchronous(NULL, clear_history_entries, NULL);<br>+    ast_sip_push_task_wait_servant(NULL, clear_history_entries, NULL);<br>    AST_VECTOR_FREE(&vector_history);<br> <br>      ast_pjproject_caching_pool_destroy(&cachingpool);<br>diff --git a/res/res_pjsip_outbound_publish.c b/res/res_pjsip_outbound_publish.c<br>index 8befbc1..4894e55 100644<br>--- a/res/res_pjsip_outbound_publish.c<br>+++ b/res/res_pjsip_outbound_publish.c<br>@@ -1070,7 +1070,7 @@<br>                 return NULL;<br>  }<br> <br>- if (ast_sip_push_task_synchronous(NULL, sip_outbound_publisher_init, publisher)) {<br>+   if (ast_sip_push_task_wait_servant(NULL, sip_outbound_publisher_init, publisher)) {<br>           ast_log(LOG_ERROR, "Unable to create publisher for outbound publish '%s'\n",<br>                        ast_sorcery_object_get_id(client->publish));<br>               ao2_ref(publisher, -1);<br>@@ -1514,8 +1514,8 @@<br>         */<br>   old_publish = current_state->client->publish;<br>   current_state->client->publish = publish;<br>-      if (ast_sip_push_task_synchronous(<br>-               NULL, sip_outbound_publisher_reinit_all, current_state->client->publishers)) {<br>+     if (ast_sip_push_task_wait_servant(NULL, sip_outbound_publisher_reinit_all,<br>+          current_state->client->publishers)) {<br>           /*<br>             * If the state object fails to re-initialize then swap<br>                * the old publish info back in.<br>diff --git a/res/res_pjsip_outbound_registration.c b/res/res_pjsip_outbound_registration.c<br>index 2839ecb..8a90849 100644<br>--- a/res/res_pjsip_outbound_registration.c<br>+++ b/res/res_pjsip_outbound_registration.c<br>@@ -1480,7 +1480,7 @@<br>          return -1;<br>    }<br> <br>- if (ast_sip_push_task_synchronous(new_state->client_state->serializer,<br>+ if (ast_sip_push_task_wait_serializer(new_state->client_state->serializer,<br>              sip_outbound_registration_regc_alloc, new_state)) {<br>           return -1;<br>    }<br>@@ -1850,8 +1850,7 @@<br>      struct sip_ami_outbound *ami = arg;<br> <br>        ami->registration = obj;<br>-  return ast_sip_push_task_synchronous(<br>-                NULL, ami_outbound_registration_task, ami);<br>+  return ast_sip_push_task_wait_servant(NULL, ami_outbound_registration_task, ami);<br> }<br> <br> static int ami_show_outbound_registrations(struct mansession *s,<br>diff --git a/res/res_pjsip_pubsub.c b/res/res_pjsip_pubsub.c<br>index 69c256d..9e0718f 100644<br>--- a/res/res_pjsip_pubsub.c<br>+++ b/res/res_pjsip_pubsub.c<br>@@ -1318,7 +1318,8 @@<br>   destroy_subscriptions(sub_tree->root);<br> <br>  if (sub_tree->dlg) {<br>-              ast_sip_push_task_synchronous(sub_tree->serializer, subscription_unreference_dialog, sub_tree);<br>+           ast_sip_push_task_wait_servant(sub_tree->serializer,<br>+                      subscription_unreference_dialog, sub_tree);<br>   }<br> <br>  ao2_cleanup(sub_tree->endpoint);<br>@@ -1665,7 +1666,8 @@<br>    }<br>     recreate_data.persistence = persistence;<br>      recreate_data.rdata = &rdata;<br>-    if (ast_sip_push_task_synchronous(serializer, sub_persistence_recreate, &recreate_data)) {<br>+       if 
(ast_sip_push_task_wait_serializer(serializer, sub_persistence_recreate,<br>+          &recreate_data)) {<br>                ast_log(LOG_WARNING, "Failed recreating '%s' subscription: Could not continue under distributor serializer.\n",<br>                     persistence->endpoint);<br>            ast_sorcery_delete(ast_sip_get_sorcery(), persistence);<br>diff --git a/res/res_pjsip_refer.c b/res/res_pjsip_refer.c<br>index 7d892f6..3de2514 100644<br>--- a/res/res_pjsip_refer.c<br>+++ b/res/res_pjsip_refer.c<br>@@ -316,7 +316,15 @@<br>            /* It's possible that a task is waiting to remove us already, so bump the refcount of progress so it doesn't get destroyed */<br>                 ao2_ref(progress, +1);<br>                pjsip_dlg_dec_lock(progress->dlg);<br>-                ast_sip_push_task_synchronous(progress->serializer, refer_progress_terminate, progress);<br>+          /*<br>+            * XXX We are always going to execute this inline rather than<br>+                 * in the serializer because this function is a PJPROJECT<br>+             * callback and thus has to be a SIP servant thread.<br>+          *<br>+            * The likely remedy is to push most of this function into<br>+            * refer_progress_terminate() with ast_sip_push_task().<br>+               */<br>+          ast_sip_push_task_wait_servant(progress->serializer, refer_progress_terminate, progress);<br>          pjsip_dlg_inc_lock(progress->dlg);<br>                 ao2_ref(progress, -1);<br> <br>@@ -963,7 +971,8 @@<br> <br>     invite.session = other_session;<br> <br>-   if (ast_sip_push_task_synchronous(other_session->serializer, invite_replaces, &invite)) {<br>+     if (ast_sip_push_task_wait_serializer(other_session->serializer, invite_replaces,<br>+         &invite)) {<br>               response = 481;<br>               goto inv_replace_failed;<br>      }<br>diff --git a/res/res_pjsip_transport_websocket.c b/res/res_pjsip_transport_websocket.c<br>index 974b150..6335943 100644<br>--- a/res/res_pjsip_transport_websocket.c<br>+++ b/res/res_pjsip_transport_websocket.c<br>@@ -377,7 +377,7 @@<br> <br>        create_data.ws_session = session;<br> <br>- if (ast_sip_push_task_synchronous(serializer, transport_create, &create_data)) {<br>+ if (ast_sip_push_task_wait_serializer(serializer, transport_create, &create_data)) {<br>              ast_log(LOG_ERROR, "Could not create WebSocket transport.\n");<br>              ast_taskprocessor_unreference(serializer);<br>            ast_websocket_unref(session);<br>@@ -396,13 +396,13 @@<br>          }<br> <br>          if (opcode == AST_WEBSOCKET_OPCODE_TEXT || opcode == AST_WEBSOCKET_OPCODE_BINARY) {<br>-                  ast_sip_push_task_synchronous(serializer, transport_read, &read_data);<br>+                   ast_sip_push_task_wait_serializer(serializer, transport_read, &read_data);<br>                } else if (opcode == AST_WEBSOCKET_OPCODE_CLOSE) {<br>                    break;<br>                }<br>     }<br> <br>- ast_sip_push_task_synchronous(serializer, transport_shutdown, transport);<br>+    ast_sip_push_task_wait_serializer(serializer, transport_shutdown, transport);<br> <br>      ast_taskprocessor_unreference(serializer);<br>    ast_websocket_unref(session);<br></pre><p>To view, visit <a href="https://gerrit.asterisk.org/8757">change 8757</a>. 
To unsubscribe, visit <a href="https://gerrit.asterisk.org/settings">settings</a>.</p>
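<p>A minimal sketch of how callers are expected to choose between the two new functions. It is illustrative only and not part of this change; the task and caller names below are hypothetical, while the pushed-task API calls are the ones added or documented by this patch.</p><pre style="font-family: monospace,monospace; white-space: pre-wrap;">
/*
 * Illustrative sketch only -- not part of this patch.  The task and
 * caller names are hypothetical; the pushed-task API calls are the
 * ones added or documented by this change.
 */
#include "asterisk.h"

#include "asterisk/res_pjsip.h"
#include "asterisk/res_pjsip_session.h"
#include "asterisk/logger.h"

/* Hypothetical task that touches state owned by the session serializer. */
static int refresh_session_state(void *data)
{
	struct ast_sip_session *session = data;

	/* ... manipulate per-session state here ... */
	(void) session;
	return 0;
}

static int example_caller(struct ast_sip_session *session)
{
	/*
	 * This task must run under the session's serializer, so use the
	 * serializer variant.  If the current thread already is that
	 * serializer the task runs inline; otherwise the call blocks
	 * until the serializer has executed it.
	 */
	if (ast_sip_push_task_wait_serializer(session->serializer,
			refresh_session_state, session)) {
		ast_log(LOG_WARNING, "Failed to push session refresh task\n");
		return -1;
	}

	/*
	 * Any SIP servant may execute this task, so use the servant
	 * variant.  If the current thread is already a SIP servant
	 * (e.g. a PJPROJECT callback) the task runs inline, which avoids
	 * the deadlocks described in the commit message.
	 */
	return ast_sip_push_task_wait_servant(NULL, refresh_session_state, session);
}
</pre><p>The rule of thumb from the updated doc comments: use ast_sip_push_task_wait_servant() when any SIP servant may execute the task, especially when the caller may already be on a servant thread such as a PJPROJECT callback, and use ast_sip_push_task_wait_serializer() when the task must run under a specific serializer.</p>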

<div style="display:none"> Gerrit-Project: asterisk </div>
<div style="display:none"> Gerrit-Branch: master </div>
<div style="display:none"> Gerrit-MessageType: newchange </div>
<div style="display:none"> Gerrit-Change-Id: Id040fa42c0e5972f4c8deef380921461d213b9f3 </div>
<div style="display:none"> Gerrit-Change-Number: 8757 </div>
<div style="display:none"> Gerrit-PatchSet: 1 </div>
<div style="display:none"> Gerrit-Owner: Richard Mudgett <rmudgett@digium.com> </div>