<p>Friendly Automation <strong>merged</strong> this change.</p><p><a href="https://gerrit.asterisk.org/c/asterisk/+/12871">View Change</a></p><div style="white-space:pre-wrap">Approvals:
  Kevin Harwell: Looks good to me, but someone else must approve
  George Joseph: Looks good to me, approved
  Friendly Automation: Approved for Submit

</div><pre style="font-family: monospace,monospace; white-space: pre-wrap;">func_jitterbuffer: Add audio/video sync support.<br><br>This change adds support to the JITTERBUFFER dialplan function<br>for audio and video synchronization. When enabled the RTCP SR<br>report is used to produce an NTP timestamp for both the audio and<br>video streams. Using this information the video frames are queued<br>until their NTP timestamp is equal to or behind the NTP timestamp<br>of the audio. The audio jitterbuffer acts as the leader deciding<br>when to shrink/grow the jitterbuffer when adaptive is in use. For<br>both adaptive and fixed the video buffer follows the size of the<br>audio jitterbuffer.<br><br>ASTERISK-28533<br><br>Change-Id: I3fd75160426465e6d46bb2e198c07b9d314a4492<br>---<br>A doc/CHANGES-staging/func_jitterbuffer_video.txt<br>M funcs/func_jitterbuffer.c<br>M include/asterisk/abstract_jb.h<br>M include/asterisk/rtp_engine.h<br>M main/abstract_jb.c<br>M main/rtp_engine.c<br>M res/res_rtp_asterisk.c<br>7 files changed, 237 insertions(+), 24 deletions(-)<br><br></pre><pre style="font-family: monospace,monospace; white-space: pre-wrap;"><span>diff --git a/doc/CHANGES-staging/func_jitterbuffer_video.txt b/doc/CHANGES-staging/func_jitterbuffer_video.txt</span><br><span>new file mode 100644</span><br><span>index 0000000..02f9b0b</span><br><span>--- /dev/null</span><br><span>+++ b/doc/CHANGES-staging/func_jitterbuffer_video.txt</span><br><span>@@ -0,0 +1,6 @@</span><br><span style="color: hsl(120, 100%, 40%);">+Subject: func_jitterbuffer</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+The JITTERBUFFER dialplan function now has an option to enable video synchronization</span><br><span style="color: hsl(120, 100%, 40%);">+support. 
When enabled and used with a compatible channel driver (chan_sip, chan_pjsip)</span><br><span style="color: hsl(120, 100%, 40%);">+the video is buffered according to the size of the audio jitterbuffer and is</span><br><span style="color: hsl(120, 100%, 40%);">+synchronized to the audio.</span><br><span>diff --git a/funcs/func_jitterbuffer.c b/funcs/func_jitterbuffer.c</span><br><span>index e4c68ed..9daa6ca 100644</span><br><span>--- a/funcs/func_jitterbuffer.c</span><br><span>+++ b/funcs/func_jitterbuffer.c</span><br><span>@@ -62,8 +62,9 @@</span><br><span>            </syntax></span><br><span>              <description></span><br><span>                  <para>Jitterbuffers are constructed in two different ways.</span><br><span style="color: hsl(0, 100%, 40%);">-                        The first always take three arguments: <replaceable>max_size</replaceable>,</span><br><span style="color: hsl(0, 100%, 40%);">-                 <replaceable>resync_threshold</replaceable>, and <replaceable>target_extra</replaceable>.</span><br><span style="color: hsl(120, 100%, 40%);">+                     The first always takes four arguments: <replaceable>max_size</replaceable>,</span><br><span style="color: hsl(120, 100%, 40%);">+                        <replaceable>resync_threshold</replaceable>, <replaceable>target_extra</replaceable>,</span><br><span style="color: hsl(120, 100%, 40%);">+                 and <replaceable>sync_video</replaceable>.</span><br><span>                       Alternatively, a single argument of <literal>default</literal> can be provided,</span><br><span>                  which will construct the default jitterbuffer for the given</span><br><span>                  <replaceable>jitterbuffer type</replaceable>.</para></span><br><span>@@ -76,12 +77,17 @@</span><br><span>                         <para>target_extra: This option only affects the adaptive jitterbuffer. 
It represents</span><br><span>                  the amount of time in milliseconds by which the new jitter buffer will pad its size.</span><br><span>                    Defaults to 40ms.</para></span><br><span style="color: hsl(120, 100%, 40%);">+                        <para>sync_video: This option enables video synchronization with the audio stream. It can be</span><br><span style="color: hsl(120, 100%, 40%);">+                    turned on and off. Defaults to off.</para></span><br><span>                     <example title="Fixed with defaults" language="text"></span><br><span>                      exten => 1,1,Set(JITTERBUFFER(fixed)=default)</span><br><span>                     </example></span><br><span>                     <example title="Fixed with 200ms max size" language="text"></span><br><span>                        exten => 1,1,Set(JITTERBUFFER(fixed)=200)</span><br><span>                         </example></span><br><span style="color: hsl(120, 100%, 40%);">+                      <example title="Fixed with 200ms max size and video sync support" language="text"></span><br><span style="color: hsl(120, 100%, 40%);">+                  exten => 1,1,Set(JITTERBUFFER(fixed)=200,,,yes)</span><br><span style="color: hsl(120, 100%, 40%);">+                    </example></span><br><span>                     <example title="Fixed with 200ms max size, resync threshold 1500" language="text"></span><br><span>                         exten => 1,1,Set(JITTERBUFFER(fixed)=200,1500)</span><br><span>                    </example></span><br><span>@@ -91,6 +97,9 @@</span><br><span>                         <example title="Adaptive with 200ms max size, 60ms target extra" language="text"></span><br><span>                  exten => 1,1,Set(JITTERBUFFER(adaptive)=200,,60)</span><br><span>                  </example></span><br><span style="color: hsl(120, 100%, 40%);">+                      <example title="Adaptive with 200ms max size and video sync support" 
language="text"></span><br><span style="color: hsl(120, 100%, 40%);">+                       exten => 1,1,Set(JITTERBUFFER(adaptive)=200,,,yes)</span><br><span style="color: hsl(120, 100%, 40%);">+                 </example></span><br><span>                     <example title="Set a fixed jitterbuffer with defaults; then remove it" language="text"></span><br><span>                   exten => 1,1,Set(JITTERBUFFER(fixed)=default)</span><br><span>                     exten => 1,n,Set(JITTERBUFFER(disabled)=)</span><br><span>@@ -133,6 +142,7 @@</span><br><span>                   AST_APP_ARG(max_size);</span><br><span>                       AST_APP_ARG(resync_threshold);</span><br><span>                       AST_APP_ARG(target_extra);</span><br><span style="color: hsl(120, 100%, 40%);">+                    AST_APP_ARG(sync_video);</span><br><span>             );</span><br><span> </span><br><span>               AST_STANDARD_APP_ARGS(args, parse);</span><br><span>@@ -151,6 +161,11 @@</span><br><span>                           "jbtargetextra",</span><br><span>                           args.target_extra);</span><br><span>          }</span><br><span style="color: hsl(120, 100%, 40%);">+             if (!ast_strlen_zero(args.sync_video)) {</span><br><span style="color: hsl(120, 100%, 40%);">+                      res |= ast_jb_read_conf(&jb_conf,</span><br><span style="color: hsl(120, 100%, 40%);">+                         "jbsyncvideo",</span><br><span style="color: hsl(120, 100%, 40%);">+                              args.sync_video);</span><br><span style="color: hsl(120, 100%, 40%);">+             }</span><br><span>            if (res) {</span><br><span>                   ast_log(LOG_WARNING, "Invalid jitterbuffer parameters %s\n", value);</span><br><span>               }</span><br><span>diff --git a/include/asterisk/abstract_jb.h b/include/asterisk/abstract_jb.h</span><br><span>index b300d12..3e2467d 100644</span><br><span>--- 
a/include/asterisk/abstract_jb.h</span><br><span>+++ b/include/asterisk/abstract_jb.h</span><br><span>@@ -44,7 +44,8 @@</span><br><span> enum {</span><br><span>         AST_JB_ENABLED = (1 << 0),</span><br><span>     AST_JB_FORCED =  (1 << 1),</span><br><span style="color: hsl(0, 100%, 40%);">-        AST_JB_LOG =     (1 << 2)</span><br><span style="color: hsl(120, 100%, 40%);">+       AST_JB_LOG =     (1 << 2),</span><br><span style="color: hsl(120, 100%, 40%);">+      AST_JB_SYNC_VIDEO =   (1 << 3)</span><br><span> };</span><br><span> </span><br><span> enum ast_jb_type {</span><br><span>@@ -89,6 +90,7 @@</span><br><span> #define AST_JB_CONF_TARGET_EXTRA "targetextra"</span><br><span> #define AST_JB_CONF_IMPL "impl"</span><br><span> #define AST_JB_CONF_LOG "log"</span><br><span style="color: hsl(120, 100%, 40%);">+#define AST_JB_CONF_SYNC_VIDEO "syncvideo"</span><br><span> </span><br><span> /* Hooks for the abstract jb implementation */</span><br><span> /*! \brief Create */</span><br><span>diff --git a/include/asterisk/rtp_engine.h b/include/asterisk/rtp_engine.h</span><br><span>index 57f29b4..758fad7 100644</span><br><span>--- a/include/asterisk/rtp_engine.h</span><br><span>+++ b/include/asterisk/rtp_engine.h</span><br><span>@@ -2801,6 +2801,17 @@</span><br><span> struct ast_json *ast_rtp_instance_get_stats_all_json(struct ast_rtp_instance *instance);</span><br><span> </span><br><span> /*!</span><br><span style="color: hsl(120, 100%, 40%);">+ * \brief Retrieve the sample rate of a format according to RTP specifications</span><br><span style="color: hsl(120, 100%, 40%);">+ * \since 16.7.0</span><br><span style="color: hsl(120, 100%, 40%);">+ * \since 17.1.0</span><br><span style="color: hsl(120, 100%, 40%);">+ *</span><br><span style="color: hsl(120, 100%, 40%);">+ * \param format The media format</span><br><span style="color: hsl(120, 100%, 40%);">+ *</span><br><span style="color: hsl(120, 100%, 40%);">+ * \retval The sample rate</span><br><span style="color: 
hsl(120, 100%, 40%);">+ */</span><br><span style="color: hsl(120, 100%, 40%);">+int ast_rtp_get_rate(const struct ast_format *format);</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+/*!</span><br><span>  * \since 12</span><br><span>  * \brief \ref stasis topic for RTP and RTCP related messages</span><br><span>  *</span><br><span>diff --git a/main/abstract_jb.c b/main/abstract_jb.c</span><br><span>index b375739..7549008 100644</span><br><span>--- a/main/abstract_jb.c</span><br><span>+++ b/main/abstract_jb.c</span><br><span>@@ -41,6 +41,8 @@</span><br><span> #include "asterisk/utils.h"</span><br><span> #include "asterisk/pbx.h"</span><br><span> #include "asterisk/timing.h"</span><br><span style="color: hsl(120, 100%, 40%);">+#include "asterisk/rtp_engine.h"</span><br><span style="color: hsl(120, 100%, 40%);">+#include "asterisk/format_cache.h"</span><br><span> </span><br><span> #include "asterisk/abstract_jb.h"</span><br><span> #include "fixedjitterbuf.h"</span><br><span>@@ -53,6 +55,9 @@</span><br><span>         JB_CREATED =              (1 << 2)</span><br><span> };</span><br><span> </span><br><span style="color: hsl(120, 100%, 40%);">+/*! 
The maximum size we allow the early frame buffer to get */</span><br><span style="color: hsl(120, 100%, 40%);">+#define MAXIMUM_EARLY_FRAME_COUNT 200</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span> </span><br><span> /* Implementation functions */</span><br><span> /* fixed */</span><br><span>@@ -568,6 +573,8 @@</span><br><span>               }</span><br><span>    } else if (!strcasecmp(name, AST_JB_CONF_LOG)) {</span><br><span>             ast_set2_flag(conf, ast_true(value), AST_JB_LOG);</span><br><span style="color: hsl(120, 100%, 40%);">+     } else if (!strcasecmp(name, AST_JB_CONF_SYNC_VIDEO)) {</span><br><span style="color: hsl(120, 100%, 40%);">+               ast_set2_flag(conf, ast_true(value), AST_JB_SYNC_VIDEO);</span><br><span>     } else {</span><br><span>             return -1;</span><br><span>   }</span><br><span>@@ -832,6 +839,11 @@</span><br><span> #define DEFAULT_RESYNC  1000</span><br><span> #define DEFAULT_TYPE AST_JB_FIXED</span><br><span> </span><br><span style="color: hsl(120, 100%, 40%);">+struct jb_stream_sync {</span><br><span style="color: hsl(120, 100%, 40%);">+    unsigned int timestamp;</span><br><span style="color: hsl(120, 100%, 40%);">+       struct timeval ntp;</span><br><span style="color: hsl(120, 100%, 40%);">+};</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span> struct jb_framedata {</span><br><span>   const struct ast_jb_impl *jb_impl;</span><br><span>   struct ast_jb_conf jb_conf;</span><br><span>@@ -841,11 +853,21 @@</span><br><span>  int timer_interval; /* ms between deliveries */</span><br><span>      int timer_fd;</span><br><span>        int first;</span><br><span style="color: hsl(120, 100%, 40%);">+    int audio_stream_id;</span><br><span style="color: hsl(120, 100%, 40%);">+  struct jb_stream_sync audio_stream_sync;</span><br><span style="color: hsl(120, 100%, 40%);">+      int video_stream_id;</span><br><span style="color: hsl(120, 100%, 40%);">+  struct jb_stream_sync 
video_stream_sync;</span><br><span style="color: hsl(120, 100%, 40%);">+      AST_LIST_HEAD_NOLOCK(, ast_frame) early_frames;</span><br><span style="color: hsl(120, 100%, 40%);">+       unsigned int early_frame_count;</span><br><span style="color: hsl(120, 100%, 40%);">+       struct timeval last_audio_ntp_timestamp;</span><br><span style="color: hsl(120, 100%, 40%);">+      int audio_flowing;</span><br><span>   void *jb_obj;</span><br><span> };</span><br><span> </span><br><span> static void jb_framedata_destroy(struct jb_framedata *framedata)</span><br><span> {</span><br><span style="color: hsl(120, 100%, 40%);">+      struct ast_frame *frame;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span>   if (framedata->timer) {</span><br><span>           ast_timer_close(framedata->timer);</span><br><span>                framedata->timer = NULL;</span><br><span>@@ -859,11 +881,15 @@</span><br><span>          framedata->jb_obj = NULL;</span><br><span>         }</span><br><span>    ao2_cleanup(framedata->last_format);</span><br><span style="color: hsl(120, 100%, 40%);">+       while ((frame = AST_LIST_REMOVE_HEAD(&framedata->early_frames, frame_list))) {</span><br><span style="color: hsl(120, 100%, 40%);">+         ast_frfree(frame);</span><br><span style="color: hsl(120, 100%, 40%);">+    }</span><br><span>    ast_free(framedata);</span><br><span> }</span><br><span> </span><br><span> void ast_jb_conf_default(struct ast_jb_conf *conf)</span><br><span> {</span><br><span style="color: hsl(120, 100%, 40%);">+      ast_clear_flag(conf, AST_FLAGS_ALL);</span><br><span>         conf->max_size = DEFAULT_SIZE;</span><br><span>    conf->resync_threshold = DEFAULT_RESYNC;</span><br><span>  ast_copy_string(conf->impl, "fixed", sizeof(conf->impl));</span><br><span>@@ -886,6 +912,44 @@</span><br><span>     jb_framedata_destroy((struct jb_framedata *) framedata);</span><br><span> }</span><br><span> </span><br><span style="color: hsl(120, 100%, 40%);">+static struct 
timeval jitterbuffer_frame_get_ntp_timestamp(const struct jb_stream_sync *stream_sync, const struct ast_frame *frame)</span><br><span style="color: hsl(120, 100%, 40%);">+{</span><br><span style="color: hsl(120, 100%, 40%);">+     int timestamp_diff;</span><br><span style="color: hsl(120, 100%, 40%);">+   unsigned int rate;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+  /* It's possible for us to receive frames before we receive the information allowing</span><br><span style="color: hsl(120, 100%, 40%);">+       * us to do NTP/RTP timestamp calculations. Since the information isn't available we</span><br><span style="color: hsl(120, 100%, 40%);">+       * can't generate one and give an empty timestamp.</span><br><span style="color: hsl(120, 100%, 40%);">+         */</span><br><span style="color: hsl(120, 100%, 40%);">+   if (ast_tvzero(stream_sync->ntp)) {</span><br><span style="color: hsl(120, 100%, 40%);">+                return ast_tv(0, 0);</span><br><span style="color: hsl(120, 100%, 40%);">+  }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+   /* Convert the Asterisk timestamp into an RTP timestamp, and then based on the difference we can</span><br><span style="color: hsl(120, 100%, 40%);">+       * determine how many samples are in the frame and how long has elapsed since the synchronization</span><br><span style="color: hsl(120, 100%, 40%);">+      * RTP and NTP timestamps were received giving us the NTP timestamp for this frame.</span><br><span style="color: hsl(120, 100%, 40%);">+    */</span><br><span style="color: hsl(120, 100%, 40%);">+   if (frame->frametype == AST_FRAME_VOICE) {</span><br><span style="color: hsl(120, 100%, 40%);">+         rate = ast_rtp_get_rate(frame->subclass.format);</span><br><span style="color: hsl(120, 100%, 40%);">+           timestamp_diff = (frame->ts * (rate / 1000)) - 
stream_sync->timestamp;</span><br><span style="color: hsl(120, 100%, 40%);">+  } else {</span><br><span style="color: hsl(120, 100%, 40%);">+              /* Video is special - internally we reference it as 1000 to preserve the RTP timestamp but</span><br><span style="color: hsl(120, 100%, 40%);">+             * it is actually 90000, which is why we can just directly subtract the timestamp.</span><br><span style="color: hsl(120, 100%, 40%);">+               */</span><br><span style="color: hsl(120, 100%, 40%);">+           rate = 90000;</span><br><span style="color: hsl(120, 100%, 40%);">+         timestamp_diff = frame->ts - stream_sync->timestamp;</span><br><span style="color: hsl(120, 100%, 40%);">+    }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+   if (timestamp_diff < 0) {</span><br><span style="color: hsl(120, 100%, 40%);">+          /* It's possible for us to be asked for an NTP timestamp from before our latest</span><br><span style="color: hsl(120, 100%, 40%);">+            * RTCP SR report. 
To handle this we subtract so we go back in time.</span><br><span style="color: hsl(120, 100%, 40%);">+           */</span><br><span style="color: hsl(120, 100%, 40%);">+           return ast_tvsub(stream_sync->ntp, ast_samp2tv(abs(timestamp_diff), rate));</span><br><span style="color: hsl(120, 100%, 40%);">+        } else {</span><br><span style="color: hsl(120, 100%, 40%);">+              return ast_tvadd(stream_sync->ntp, ast_samp2tv(timestamp_diff, rate));</span><br><span style="color: hsl(120, 100%, 40%);">+     }</span><br><span style="color: hsl(120, 100%, 40%);">+}</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span> static struct ast_frame *hook_event_cb(struct ast_channel *chan, struct ast_frame *frame, enum ast_framehook_event event, void *data)</span><br><span> {</span><br><span>         struct jb_framedata *framedata = data;</span><br><span>@@ -928,6 +992,77 @@</span><br><span>                return frame;</span><br><span>        }</span><br><span> </span><br><span style="color: hsl(120, 100%, 40%);">+ if (ast_test_flag(&framedata->jb_conf, AST_JB_SYNC_VIDEO)) {</span><br><span style="color: hsl(120, 100%, 40%);">+           if (frame->frametype == AST_FRAME_VOICE) {</span><br><span style="color: hsl(120, 100%, 40%);">+                 /* Store the stream identifier for the audio stream so we can associate the incoming RTCP SR</span><br><span style="color: hsl(120, 100%, 40%);">+                   * with the correct stream sync structure.</span><br><span style="color: hsl(120, 100%, 40%);">+                     */</span><br><span style="color: hsl(120, 100%, 40%);">+                   framedata->audio_stream_id = frame->stream_num;</span><br><span style="color: hsl(120, 100%, 40%);">+         } else if (frame->frametype == AST_FRAME_RTCP && frame->subclass.integer == AST_RTP_RTCP_SR) {</span><br><span style="color: hsl(120, 100%, 40%);">+                  struct ast_rtp_rtcp_report *rtcp_report = frame->data.ptr;</span><br><span 
style="color: hsl(120, 100%, 40%);">+                 struct jb_stream_sync *stream_sync = NULL;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                  /* Determine which stream this RTCP is in regards to */</span><br><span style="color: hsl(120, 100%, 40%);">+                       if (framedata->audio_stream_id == frame->stream_num) {</span><br><span style="color: hsl(120, 100%, 40%);">+                          stream_sync = &framedata->audio_stream_sync;</span><br><span style="color: hsl(120, 100%, 40%);">+                   } else if (framedata->video_stream_id == frame->stream_num) {</span><br><span style="color: hsl(120, 100%, 40%);">+                           stream_sync = &framedata->video_stream_sync;</span><br><span style="color: hsl(120, 100%, 40%);">+                   }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                   if (stream_sync) {</span><br><span style="color: hsl(120, 100%, 40%);">+                            /* Store the RTP and NTP timestamp mapping so we can derive an NTP timestamp for each frame */</span><br><span style="color: hsl(120, 100%, 40%);">+                                stream_sync->timestamp = rtcp_report->sender_information.rtp_timestamp;</span><br><span style="color: hsl(120, 100%, 40%);">+                         stream_sync->ntp = rtcp_report->sender_information.ntp_timestamp;</span><br><span style="color: hsl(120, 100%, 40%);">+                       }</span><br><span style="color: hsl(120, 100%, 40%);">+             } else if (frame->frametype == AST_FRAME_VIDEO) {</span><br><span style="color: hsl(120, 100%, 40%);">+                  /* If a video frame is late according to the audio timestamp don't stash it away, just return it.</span><br><span style="color: hsl(120, 100%, 40%);">+                  * If however it is ahead then we keep it until such time as the audio catches 
up.</span><br><span style="color: hsl(120, 100%, 40%);">+                     */</span><br><span style="color: hsl(120, 100%, 40%);">+                   struct ast_frame *jbframe;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                  framedata->video_stream_id = frame->stream_num;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                       /* If no timing information is available we can't store this away, so just let it through now */</span><br><span style="color: hsl(120, 100%, 40%);">+                  if (!ast_test_flag(frame, AST_FRFLAG_HAS_TIMING_INFO)) {</span><br><span style="color: hsl(120, 100%, 40%);">+                              return frame;</span><br><span style="color: hsl(120, 100%, 40%);">+                 }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                   /* To ensure that the video starts when the audio starts we only start allowing frames through once</span><br><span style="color: hsl(120, 100%, 40%);">+                    * audio starts flowing.</span><br><span style="color: hsl(120, 100%, 40%);">+                       */</span><br><span style="color: hsl(120, 100%, 40%);">+                   if (framedata->audio_flowing) {</span><br><span style="color: hsl(120, 100%, 40%);">+                            struct timeval video_timestamp;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                             video_timestamp = jitterbuffer_frame_get_ntp_timestamp(&framedata->video_stream_sync, frame);</span><br><span style="color: hsl(120, 100%, 40%);">+                          if (ast_tvdiff_ms(framedata->last_audio_ntp_timestamp, video_timestamp) >= 0) {</span><br><span style="color: hsl(120, 100%, 40%);">+                                 return 
frame;</span><br><span style="color: hsl(120, 100%, 40%);">+                         }</span><br><span style="color: hsl(120, 100%, 40%);">+                     }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                   /* To prevent the early frame buffer from growing uncontrolled we impose a maximum count that it can</span><br><span style="color: hsl(120, 100%, 40%);">+                   * get to. If this is reached then we drop a video frame, which should cause the receiver to ask for a</span><br><span style="color: hsl(120, 100%, 40%);">+                         * new key frame.</span><br><span style="color: hsl(120, 100%, 40%);">+                      */</span><br><span style="color: hsl(120, 100%, 40%);">+                   if (framedata->early_frame_count == MAXIMUM_EARLY_FRAME_COUNT) {</span><br><span style="color: hsl(120, 100%, 40%);">+                           jbframe = AST_LIST_REMOVE_HEAD(&framedata->early_frames, frame_list);</span><br><span style="color: hsl(120, 100%, 40%);">+                          framedata->early_frame_count--;</span><br><span style="color: hsl(120, 100%, 40%);">+                            ast_frfree(jbframe);</span><br><span style="color: hsl(120, 100%, 40%);">+                  }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                   jbframe = ast_frisolate(frame);</span><br><span style="color: hsl(120, 100%, 40%);">+                       if (!jbframe) {</span><br><span style="color: hsl(120, 100%, 40%);">+                               /* If we can't isolate the frame the safest thing we can do is return it, even if the A/V sync</span><br><span style="color: hsl(120, 100%, 40%);">+                             * may be off.</span><br><span style="color: hsl(120, 100%, 40%);">+                                 */</span><br><span style="color: hsl(120, 100%, 40%);">+                       
    return frame;</span><br><span style="color: hsl(120, 100%, 40%);">+                 }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                   AST_LIST_INSERT_TAIL(&framedata->early_frames, jbframe, frame_list);</span><br><span style="color: hsl(120, 100%, 40%);">+                   framedata->early_frame_count++;</span><br><span style="color: hsl(120, 100%, 40%);">+                    return &ast_null_frame;</span><br><span style="color: hsl(120, 100%, 40%);">+           }</span><br><span style="color: hsl(120, 100%, 40%);">+     }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span>  now_tv = ast_tvnow();</span><br><span>        now = ast_tvdiff_ms(now_tv, framedata->start_tv);</span><br><span> </span><br><span>@@ -1022,6 +1157,8 @@</span><br><span>     }</span><br><span> </span><br><span>        if (frame->frametype == AST_FRAME_CONTROL) {</span><br><span style="color: hsl(120, 100%, 40%);">+               struct ast_frame *early_frame;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span>             switch(frame->subclass.integer) {</span><br><span>                 case AST_CONTROL_HOLD:</span><br><span>               case AST_CONTROL_UNHOLD:</span><br><span>@@ -1029,12 +1166,50 @@</span><br><span>           case AST_CONTROL_SRCUPDATE:</span><br><span>          case AST_CONTROL_SRCCHANGE:</span><br><span>                  framedata->jb_impl->force_resync(framedata->jb_obj);</span><br><span style="color: hsl(120, 100%, 40%);">+                 /* Since we are resyncing go ahead and clear out the video frames too */</span><br><span style="color: hsl(120, 100%, 40%);">+                      while ((early_frame = AST_LIST_REMOVE_HEAD(&framedata->early_frames, frame_list))) {</span><br><span style="color: hsl(120, 100%, 40%);">+                           ast_frfree(early_frame);</span><br><span style="color: hsl(120, 100%, 40%);">+                      
}</span><br><span style="color: hsl(120, 100%, 40%);">+                     framedata->audio_flowing = 0;</span><br><span style="color: hsl(120, 100%, 40%);">+                      framedata->early_frame_count = 0;</span><br><span>                         break;</span><br><span>               default:</span><br><span>                     break;</span><br><span>               }</span><br><span>    }</span><br><span> </span><br><span style="color: hsl(120, 100%, 40%);">+ /* If a voice frame is being passed through see if we need to add any additional frames to it */</span><br><span style="color: hsl(120, 100%, 40%);">+      if (ast_test_flag(&framedata->jb_conf, AST_JB_SYNC_VIDEO) && frame->frametype == AST_FRAME_VOICE) {</span><br><span style="color: hsl(120, 100%, 40%);">+         AST_LIST_HEAD_NOLOCK(, ast_frame) additional_frames;</span><br><span style="color: hsl(120, 100%, 40%);">+          struct ast_frame *early_frame;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+              /* We store the last NTP timestamp for the audio given to the core so that subsequent frames which</span><br><span style="color: hsl(120, 100%, 40%);">+            * are late can be passed immediately through (this will occur for video frames which are returned here)</span><br><span style="color: hsl(120, 100%, 40%);">+               */</span><br><span style="color: hsl(120, 100%, 40%);">+           framedata->last_audio_ntp_timestamp = jitterbuffer_frame_get_ntp_timestamp(&framedata->audio_stream_sync, frame);</span><br><span style="color: hsl(120, 100%, 40%);">+           framedata->audio_flowing = 1;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+            AST_LIST_HEAD_INIT_NOLOCK(&additional_frames);</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+          
AST_LIST_TRAVERSE_SAFE_BEGIN(&framedata->early_frames, early_frame, frame_list) {</span><br><span style="color: hsl(120, 100%, 40%);">+                      struct timeval early_timestamp = jitterbuffer_frame_get_ntp_timestamp(&framedata->video_stream_sync, early_frame);</span><br><span style="color: hsl(120, 100%, 40%);">+                     int diff = ast_tvdiff_ms(framedata->last_audio_ntp_timestamp, early_timestamp);</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+                  /* If this frame is from the past we need to include it with the audio frame that is going</span><br><span style="color: hsl(120, 100%, 40%);">+                     * out.</span><br><span style="color: hsl(120, 100%, 40%);">+                        */</span><br><span style="color: hsl(120, 100%, 40%);">+                   if (diff >= 0) {</span><br><span style="color: hsl(120, 100%, 40%);">+                           AST_LIST_REMOVE_CURRENT(frame_list);</span><br><span style="color: hsl(120, 100%, 40%);">+                          framedata->early_frame_count--;</span><br><span style="color: hsl(120, 100%, 40%);">+                            AST_LIST_INSERT_TAIL(&additional_frames, early_frame, frame_list);</span><br><span style="color: hsl(120, 100%, 40%);">+                        }</span><br><span style="color: hsl(120, 100%, 40%);">+             }</span><br><span style="color: hsl(120, 100%, 40%);">+             AST_LIST_TRAVERSE_SAFE_END;</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+         /* Append any additional frames we may want to include (such as video) */</span><br><span style="color: hsl(120, 100%, 40%);">+             AST_LIST_NEXT(frame, frame_list) = AST_LIST_FIRST(&additional_frames);</span><br><span style="color: hsl(120, 100%, 40%);">+    }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span>  return frame;</span><br><span> 
}</span><br><span> </span><br><span>@@ -1066,6 +1241,9 @@</span><br><span>               return -1;</span><br><span>   }</span><br><span> </span><br><span style="color: hsl(120, 100%, 40%);">+ framedata->audio_stream_id = -1;</span><br><span style="color: hsl(120, 100%, 40%);">+   framedata->video_stream_id = -1;</span><br><span style="color: hsl(120, 100%, 40%);">+   AST_LIST_HEAD_INIT_NOLOCK(&framedata->early_frames);</span><br><span>  framedata->timer_fd = ast_timer_fd(framedata->timer);</span><br><span>  framedata->timer_interval = DEFAULT_TIMER_INTERVAL;</span><br><span>       ast_timer_set_rate(framedata->timer, 1000 / framedata->timer_interval);</span><br><span>diff --git a/main/rtp_engine.c b/main/rtp_engine.c</span><br><span>index 3403d70..9b34c3d 100644</span><br><span>--- a/main/rtp_engine.c</span><br><span>+++ b/main/rtp_engine.c</span><br><span>@@ -3956,3 +3956,12 @@</span><br><span> </span><br><span>  return ast_rtp_convert_stats_json(&stats);</span><br><span> }</span><br><span style="color: hsl(120, 100%, 40%);">+</span><br><span style="color: hsl(120, 100%, 40%);">+int ast_rtp_get_rate(const struct ast_format *format)</span><br><span style="color: hsl(120, 100%, 40%);">+{</span><br><span style="color: hsl(120, 100%, 40%);">+       /* For those wondering: due to a fluke in RFC publication, G.722 is advertised</span><br><span style="color: hsl(120, 100%, 40%);">+         * as having a sample rate of 8kHz, while implementations must know that its</span><br><span style="color: hsl(120, 100%, 40%);">+   * real rate is 16kHz. Seriously.</span><br><span style="color: hsl(120, 100%, 40%);">+      */</span><br><span style="color: hsl(120, 100%, 40%);">+        return (ast_format_cmp(format, ast_format_g722) == AST_FORMAT_CMP_EQUAL) ? 
8000 : (int)ast_format_get_sample_rate(format);</span><br><span style="color: hsl(120, 100%, 40%);">+}</span><br><span>diff --git a/res/res_rtp_asterisk.c b/res/res_rtp_asterisk.c</span><br><span>index e96223f..c65cf62 100644</span><br><span>--- a/res/res_rtp_asterisk.c</span><br><span>+++ b/res/res_rtp_asterisk.c</span><br><span>@@ -3204,15 +3204,6 @@</span><br><span>     return res;</span><br><span> }</span><br><span> </span><br><span style="color: hsl(0, 100%, 40%);">-static int rtp_get_rate(struct ast_format *format)</span><br><span style="color: hsl(0, 100%, 40%);">-{</span><br><span style="color: hsl(0, 100%, 40%);">- /* For those wondering: due to a fluke in RFC publication, G.722 is advertised</span><br><span style="color: hsl(0, 100%, 40%);">-   * as having a sample rate of 8kHz, while implementations must know that its</span><br><span style="color: hsl(0, 100%, 40%);">-     * real rate is 16kHz. Seriously.</span><br><span style="color: hsl(0, 100%, 40%);">-        */</span><br><span style="color: hsl(0, 100%, 40%);">-     return (ast_format_cmp(format, ast_format_g722) == AST_FORMAT_CMP_EQUAL) ? 
8000 : (int)ast_format_get_sample_rate(format);</span><br><span style="color: hsl(0, 100%, 40%);">-}</span><br><span style="color: hsl(0, 100%, 40%);">-</span><br><span> static unsigned int ast_rtcp_calc_interval(struct ast_rtp *rtp)</span><br><span> {</span><br><span>  unsigned int interval;</span><br><span>@@ -4096,7 +4087,7 @@</span><br><span> </span><br><span>   rtp->dtmfmute = ast_tvadd(ast_tvnow(), ast_tv(0, 500000));</span><br><span> </span><br><span style="color: hsl(0, 100%, 40%);">-       if (duration > 0 && (measured_samples = duration * rtp_get_rate(rtp->f.subclass.format) / 1000) > rtp->send_duration) {</span><br><span style="color: hsl(120, 100%, 40%);">+   if (duration > 0 && (measured_samples = duration * ast_rtp_get_rate(rtp->f.subclass.format) / 1000) > rtp->send_duration) {</span><br><span>              ast_debug(2, "Adjusting final end duration from %d to %u\n", rtp->send_duration, measured_samples);</span><br><span>             rtp->send_duration = measured_samples;</span><br><span>    }</span><br><span>@@ -4349,7 +4340,7 @@</span><br><span>            report_block->lost_count.fraction = (fraction_lost & 0xff);</span><br><span>           report_block->lost_count.packets = (lost_packets & 0xffffff);</span><br><span>                 report_block->highest_seq_no = (rtp->cycles | (rtp->lastrxseqno & 0xffff));</span><br><span style="color: hsl(0, 100%, 40%);">-                report_block->ia_jitter = (unsigned int)(rtp->rxjitter * rtp_get_rate(rtp->f.subclass.format));</span><br><span style="color: hsl(120, 100%, 40%);">+              report_block->ia_jitter = (unsigned int)(rtp->rxjitter * ast_rtp_get_rate(rtp->f.subclass.format));</span><br><span>                 report_block->lsr = rtp->rtcp->themrxlsr;</span><br><span>           /* If we haven't received an SR report, DLSR should be 0 */</span><br><span>              if (!ast_tvzero(rtp->rtcp->rxlsr)) {</span><br><span>@@ -4431,7 +4422,7 @@</span><br><span>                   ast_verbose("    
Fraction lost: %d\n", report_block->lost_count.fraction);</span><br><span>                      ast_verbose("    Cumulative loss: %u\n", report_block->lost_count.packets);</span><br><span>                     ast_verbose("    Highest seq no: %u\n", report_block->highest_seq_no);</span><br><span style="color: hsl(0, 100%, 40%);">-                     ast_verbose("    IA jitter: %.4f\n", (double)report_block->ia_jitter / rtp_get_rate(rtp->f.subclass.format));</span><br><span style="color: hsl(120, 100%, 40%);">+                 ast_verbose("    IA jitter: %.4f\n", (double)report_block->ia_jitter / ast_rtp_get_rate(rtp->f.subclass.format));</span><br><span>                    ast_verbose("    Their last SR: %u\n", report_block->lsr);</span><br><span>                      ast_verbose("    DLSR: %4.4f (sec)\n\n", (double)(report_block->dlsr / 65536.0));</span><br><span>               }</span><br><span>@@ -4684,7 +4675,7 @@</span><br><span>    int pred, mark = 0;</span><br><span>  unsigned int ms = calc_txstamp(rtp, &frame->delivery);</span><br><span>        struct ast_sockaddr remote_address = { {0,} };</span><br><span style="color: hsl(0, 100%, 40%);">-  int rate = rtp_get_rate(frame->subclass.format) / 1000;</span><br><span style="color: hsl(120, 100%, 40%);">+    int rate = ast_rtp_get_rate(frame->subclass.format) / 1000;</span><br><span>       unsigned int seqno;</span><br><span> #ifdef TEST_FRAMEWORK</span><br><span>         struct ast_rtp_engine_test *test = ast_rtp_instance_get_test(instance);</span><br><span>@@ -5204,7 +5195,7 @@</span><br><span>      double d;</span><br><span>    double dtv;</span><br><span>  double prog;</span><br><span style="color: hsl(0, 100%, 40%);">-    int rate = rtp_get_rate(rtp->f.subclass.format);</span><br><span style="color: hsl(120, 100%, 40%);">+   int rate = ast_rtp_get_rate(rtp->f.subclass.format);</span><br><span> </span><br><span>  double normdev_rxjitter_current;</span><br><span>     if ((!rtp->rxcore.tv_sec && 
!rtp->rxcore.tv_usec) || mark) {</span><br><span>@@ -5359,7 +5350,7 @@</span><br><span>                           rtp->dtmf_duration = new_duration;</span><br><span>                                rtp->resp = resp;</span><br><span>                                 f = ast_frdup(create_dtmf_frame(instance, AST_FRAME_DTMF_END, 0));</span><br><span style="color: hsl(0, 100%, 40%);">-                              f->len = ast_tvdiff_ms(ast_samp2tv(rtp->dtmf_duration, rtp_get_rate(f->subclass.format)), ast_tv(0, 0));</span><br><span style="color: hsl(120, 100%, 40%);">+                             f->len = ast_tvdiff_ms(ast_samp2tv(rtp->dtmf_duration, ast_rtp_get_rate(f->subclass.format)), ast_tv(0, 0));</span><br><span>                                rtp->resp = 0;</span><br><span>                            rtp->dtmf_duration = rtp->dtmf_timeout = 0;</span><br><span>                            AST_LIST_INSERT_TAIL(frames, f, frame_list);</span><br><span>@@ -5390,7 +5381,7 @@</span><br><span>                         if (rtp->resp && rtp->resp != resp) {</span><br><span>                          /* Another digit already began. 
End it */</span><br><span>                            f = ast_frdup(create_dtmf_frame(instance, AST_FRAME_DTMF_END, 0));</span><br><span style="color: hsl(0, 100%, 40%);">-                              f->len = ast_tvdiff_ms(ast_samp2tv(rtp->dtmf_duration, rtp_get_rate(f->subclass.format)), ast_tv(0, 0));</span><br><span style="color: hsl(120, 100%, 40%);">+                             f->len = ast_tvdiff_ms(ast_samp2tv(rtp->dtmf_duration, ast_rtp_get_rate(f->subclass.format)), ast_tv(0, 0));</span><br><span>                                rtp->resp = 0;</span><br><span>                            rtp->dtmf_duration = rtp->dtmf_timeout = 0;</span><br><span>                            AST_LIST_INSERT_TAIL(frames, f, frame_list);</span><br><span>@@ -5487,10 +5478,10 @@</span><br><span>               }</span><br><span>    } else if ((rtp->resp == resp) && !power) {</span><br><span>               f = create_dtmf_frame(instance, AST_FRAME_DTMF_END, ast_rtp_instance_get_prop(instance, AST_RTP_PROPERTY_DTMF_COMPENSATE));</span><br><span style="color: hsl(0, 100%, 40%);">-             f->samples = rtp->dtmfsamples * (rtp_get_rate(rtp->lastrxformat) / 1000);</span><br><span style="color: hsl(120, 100%, 40%);">+            f->samples = rtp->dtmfsamples * (ast_rtp_get_rate(rtp->lastrxformat) / 1000);</span><br><span>               rtp->resp = 0;</span><br><span>    } else if (rtp->resp == resp) {</span><br><span style="color: hsl(0, 100%, 40%);">-              rtp->dtmfsamples += 20 * (rtp_get_rate(rtp->lastrxformat) / 1000);</span><br><span style="color: hsl(120, 100%, 40%);">+              rtp->dtmfsamples += 20 * (ast_rtp_get_rate(rtp->lastrxformat) / 1000);</span><br><span>         }</span><br><span> </span><br><span>        rtp->dtmf_timeout = 0;</span><br><span>@@ -6229,6 +6220,7 @@</span><br><span>                    transport_rtp->f.delivery.tv_sec = 0;</span><br><span>                     transport_rtp->f.delivery.tv_usec = 0;</span><br><span>                    
transport_rtp->f.src = "RTP";</span><br><span style="color: hsl(120, 100%, 40%);">+                    transport_rtp->f.stream_num = rtp->stream_num;</span><br><span>                         f = &transport_rtp->f;</span><br><span>                        break;</span><br><span>               case AST_RTP_RTCP_RTPFB:</span><br><span>@@ -7104,7 +7096,7 @@</span><br><span>             if (rtp->resp) {</span><br><span>                  struct ast_frame *f;</span><br><span>                         f = create_dtmf_frame(instance, AST_FRAME_DTMF_END, 0);</span><br><span style="color: hsl(0, 100%, 40%);">-                 f->len = ast_tvdiff_ms(ast_samp2tv(rtp->dtmf_duration, rtp_get_rate(f->subclass.format)), ast_tv(0, 0));</span><br><span style="color: hsl(120, 100%, 40%);">+                     f->len = ast_tvdiff_ms(ast_samp2tv(rtp->dtmf_duration, ast_rtp_get_rate(f->subclass.format)), ast_tv(0, 0));</span><br><span>                        rtp->resp = 0;</span><br><span>                    rtp->dtmf_timeout = rtp->dtmf_duration = 0;</span><br><span>                    AST_LIST_INSERT_TAIL(&frames, f, frame_list);</span><br><span>@@ -7188,7 +7180,7 @@</span><br><span>            calc_rxstamp(&rtp->f.delivery, rtp, timestamp, mark);</span><br><span>                 /* Add timing data to let ast_generic_bridge() put the frame into a jitterbuf */</span><br><span>             ast_set_flag(&rtp->f, AST_FRFLAG_HAS_TIMING_INFO);</span><br><span style="color: hsl(0, 100%, 40%);">-               rtp->f.ts = timestamp / (rtp_get_rate(rtp->f.subclass.format) / 1000);</span><br><span style="color: hsl(120, 100%, 40%);">+          rtp->f.ts = timestamp / (ast_rtp_get_rate(rtp->f.subclass.format) / 1000);</span><br><span>             rtp->f.len = rtp->f.samples / ((ast_format_get_sample_rate(rtp->f.subclass.format) / 1000));</span><br><span>        } else if (ast_format_get_type(rtp->f.subclass.format) == AST_MEDIA_TYPE_VIDEO) {</span><br><span>                 /* Video -- samples is 
# of samples vs. 90000 */</span><br><span>@@ -7196,7 +7188,7 @@</span><br><span>                     rtp->lastividtimestamp = timestamp;</span><br><span>               calc_rxstamp(&rtp->f.delivery, rtp, timestamp, mark);</span><br><span>                 ast_set_flag(&rtp->f, AST_FRFLAG_HAS_TIMING_INFO);</span><br><span style="color: hsl(0, 100%, 40%);">-               rtp->f.ts = timestamp / (rtp_get_rate(rtp->f.subclass.format) / 1000);</span><br><span style="color: hsl(120, 100%, 40%);">+          rtp->f.ts = timestamp / (ast_rtp_get_rate(rtp->f.subclass.format) / 1000);</span><br><span>             rtp->f.samples = timestamp - rtp->lastividtimestamp;</span><br><span>           rtp->lastividtimestamp = timestamp;</span><br><span>               rtp->f.delivery.tv_sec = 0;</span><br><span></span><br></pre><p>To view, visit <a href="https://gerrit.asterisk.org/c/asterisk/+/12871">change 12871</a>. To unsubscribe, or for help writing mail filters, visit <a href="https://gerrit.asterisk.org/settings">settings</a>.</p><div itemscope itemtype="http://schema.org/EmailMessage"><div itemscope itemprop="action" itemtype="http://schema.org/ViewAction"><link itemprop="url" href="https://gerrit.asterisk.org/c/asterisk/+/12871"/><meta itemprop="name" content="View Change"/></div></div>

<div style="display:none"> Gerrit-Project: asterisk </div>
<div style="display:none"> Gerrit-Branch: 16 </div>
<div style="display:none"> Gerrit-Change-Id: I3fd75160426465e6d46bb2e198c07b9d314a4492 </div>
<div style="display:none"> Gerrit-Change-Number: 12871 </div>
<div style="display:none"> Gerrit-PatchSet: 3 </div>
<div style="display:none"> Gerrit-Owner: Joshua Colp <jcolp@digium.com> </div>
<div style="display:none"> Gerrit-Reviewer: Friendly Automation </div>
<div style="display:none"> Gerrit-Reviewer: George Joseph <gjoseph@digium.com> </div>
<div style="display:none"> Gerrit-Reviewer: Kevin Harwell <kharwell@digium.com> </div>
<div style="display:none"> Gerrit-MessageType: merged </div>