<html>
<head>
<base href="https://wiki.asterisk.org/wiki">
<link rel="stylesheet" href="/wiki/s/2033/1/7/_/styles/combined.css?spaceKey=AST&forWysiwyg=true" type="text/css">
</head>
<body style="background: white;" bgcolor="white" class="email-body">
<div id="pageContent">
<div id="notificationFormat">
<div class="wiki-content">
<div class="email">
<h2><a href="https://wiki.asterisk.org/wiki/display/AST/Media+Architecture+Proposal">Media Architecture Proposal</a></h2>
<div class="notificationGreySide">
<h1><a name="MediaArchitectureProposal-Introduction"></a>1. Introduction</h1>
<p>Asterisk was written from the ground up with a set of assumptions about how media is represented and negotiated. These assumptions have worked to get Asterisk where it is today, but they have also put in place a set of limitations that must be overcome before Asterisk can meet the demands of the future. While these limitations are built into the foundation of Asterisk's design, the changes required to lift them can be made. This document outlines those changes and breaks them up into a layered approach. Each section addresses a specific problem with Asterisk's current media architecture and proposes a solution, and each new section builds on the foundation laid by the previous ones to address an increasingly complex set of problems. By attacking the issue from the foundation up, it is possible to produce a complete solution that removes the current development constraints and opens Asterisk up to an entirely new set of possibilities.</p>
<h1><a name="MediaArchitectureProposal-TableofContents"></a>2. Table of Contents</h1>
<style type='text/css'>/*<![CDATA[*/
div.rbtoc1292276955726 {margin-left: 1.5em;padding: 0px;}
div.rbtoc1292276955726 ul {list-style: disc;margin-left: 0px;padding-left: 20px;}
div.rbtoc1292276955726 li {margin-left: 0px;padding-left: 0px;}
/*]]>*/</style><div class='rbtoc1292276955726'>
<ul>
<li><a href='#MediaArchitectureProposal-Introduction'>1. Introduction</a></li>
<li><a href='#MediaArchitectureProposal-TableofContents'>2. Table of Contents</a></li>
<li><a href='#MediaArchitectureProposal-ProjectRequirements'>3. Project Requirements</a></li>
<li><a href='#MediaArchitectureProposal-RepresentationofMediaFormats'>4. Representation of Media Formats</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-ProblemOverview'>4.1. Problem Overview</a></li>
<li><a href='#MediaArchitectureProposal-Introducingastformat%2CTheNewandImprovedformatt'>4.2. Introducing ast_format, The New and Improved format_t</a></li>
<li><a href='#MediaArchitectureProposal-TheAstFormatAPI'>4.3. The Ast Format API</a></li>
<li><a href='#MediaArchitectureProposal-IntroducingtheFormatAttributeStructure'>4.4. Introducing the Format Attribute Structure</a></li>
<li><a href='#MediaArchitectureProposal-TheAstFormatAttributeAPI'>4.5. The Ast Format Attribute API</a></li>
<li><a href='#MediaArchitectureProposal-TheNewFormatUniqueIdentifier'>4.6. The New Format Unique Identifier</a></li>
<li><a href='#MediaArchitectureProposal-FormatUniqueIdentifierOrganization'>4.7. Format Unique Identifier Organization</a></li>
<li><a href='#MediaArchitectureProposal-NewFormatUniqueIdChangestoframe.h'>4.8. New Format Unique Id Changes to frame.h</a></li>
<li><a href='#MediaArchitectureProposal-NewFormatRepresentationCodeExamplesandUsecases.'>4.9. New Format Representation Code Examples and Use cases.</a></li>
</ul>
<li><a href='#MediaArchitectureProposal-RepresentationofFormatCapabilities'>5. Representation of Format Capabilities</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-ProblemOverview'>5.1. Problem Overview</a></li>
<li><a href='#MediaArchitectureProposal-Introducingastcap%2CTheFormatCapabilityContainer.'>5.2. Introducing ast_cap, The Format Capability Container.</a></li>
<li><a href='#MediaArchitectureProposal-AstFormatCapabilityAPIDefined'>5.3. Ast Format Capability API Defined</a></li>
</ul>
<li><a href='#MediaArchitectureProposal-IAX2AstFormatAPICompatibility'>6. IAX2 Ast Format API Compatibility</a></li>
<li><a href='#MediaArchitectureProposal-RevisedFormatTranslation'>7. Revised Format Translation</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-ProblemOverview'>7.1. Problem Overview</a></li>
<li><a href='#MediaArchitectureProposal-BuildingTranslationPaths'>7.2. Building Translation Paths</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-ComputingTranslationCosts'>7.2.1. Computing Translation Costs</a></li>
<li><a href='#MediaArchitectureProposal-TranslationCostTable'>7.2.2. Translation Cost Table</a></li>
<li><a href='#MediaArchitectureProposal-TranslationPathExamples'>7.2.3. Translation Path Examples</a></li>
<li><a href='#MediaArchitectureProposal-TranslatorCostsDefined'>7.2.4. Translator Costs Defined</a></li>
<li><a href='#MediaArchitectureProposal-CreationofTranslationPathMatrix'>7.2.5. Creation of Translation Path Matrix</a></li>
<li><a href='#MediaArchitectureProposal-ComputingLeastCostTranslationPaths'>7.2.6. Computing Least Cost Translation Paths</a></li>
</ul>
<li><a href='#MediaArchitectureProposal-TranslatorRedundancyandFailover'>7.3. Translator Redundancy and Failover</a></li>
<li><a href='#MediaArchitectureProposal-RedefiningTheTranslatorInterface'>7.4. Redefining The Translator Interface</a></li>
</ul>
<li><a href='#MediaArchitectureProposal-HandlingMultipleMediaStreams'>8. Handling Multiple Media Streams</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-ProblemOverview'>8.1. Problem Overview</a></li>
<li><a href='#MediaArchitectureProposal-DefiningaMediaStreaminAsterisk'>8.2. Defining a Media Stream in Asterisk</a></li>
<li><a href='#MediaArchitectureProposal-Introducingastchannelstream%2CMakingSenseoutofMadness'>8.3. Introducing ast_channel_stream, Making Sense out of Madness</a></li>
<li><a href='#MediaArchitectureProposal-StreamIdentifiers'>8.4. Stream Identifiers</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-DefaultStreams'>8.4.1. Default Streams</a></li>
<li><a href='#MediaArchitectureProposal-AuxiliaryStreams'>8.4.2. Auxiliary Streams</a></li>
<li><a href='#MediaArchitectureProposal-DynamicStreams'>8.4.3. Dynamic Streams</a></li>
</ul>
<li><a href='#MediaArchitectureProposal-AstChannelStreamAPIDefined'>8.5. Ast Channel Stream API Defined</a></li>
<li><a href='#MediaArchitectureProposal-CodeChangeExamples'>8.6. Code Change Examples</a></li>
</ul>
<li><a href='#MediaArchitectureProposal-MediaFormatwithAttributesUserConfiguration'>9. Media Format with Attributes User Configuration</a></li>
<li><a href='#MediaArchitectureProposal-EnhancingFormatNegotiationDuringCallSetup'>10. Enhancing Format Negotiation During Call Setup</a></li>
<li><a href='#MediaArchitectureProposal-FormatRenegotiationAfterCallSetup'>11. Format Renegotiation After Call Setup</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-ProblemOverview'>11.1. Problem Overview</a></li>
<li><a href='#MediaArchitectureProposal-Makingastchannelmakecompatible%28%29Smarter'>11.2. Making ast_channel_make_compatible() Smarter</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-HowRenegotiationWorks'>11.2.1. How Renegotiation Works</a></li>
<li><a href='#MediaArchitectureProposal-RenegotiationwithIntermediaryTranslation'>11.2.2. Renegotiation with Intermediary Translation</a></li>
<li><a href='#MediaArchitectureProposal-RenegotiationwithnoIntermediaryTranslation'>11.2.3. Renegotiation with no Intermediary Translation</a></li>
</ul>
</ul>
<li><a href='#MediaArchitectureProposal-ImplementationPhases'>12. Implementation Phases</a></li>
<ul>
<li><a href='#MediaArchitectureProposal-Phase1%3ARearchitecthowmediaisrepresentedandhowtranslationpathsarebuilt'>12.1. Phase 1: Re-architect how media is represented and how translation paths are built</a></li>
<li><a href='#MediaArchitectureProposal-Phase2%3AExercisethefunctionalityintroducedbyformatswithattributes'>12.2. Phase 2: Exercise the functionality introduced by formats with attributes</a></li>
<li><a href='#MediaArchitectureProposal-Phase3%3AExtendAsterisktohandlemultiplemediastreams'>12.3. Phase 3: Extend Asterisk to handle multiple media streams</a></li>
<li><a href='#MediaArchitectureProposal-Phase4%3AFormatRenegotiationaftercallsetup'>12.4. Phase 4: Format Renegotiation after call setup</a></li>
</ul>
</ul></div>
<h1><a name="MediaArchitectureProposal-ProjectRequirements"></a>3. Project Requirements</h1>
<ul>
        <li>Lift the limit placed on the number of media formats Asterisk can support.</li>
        <li>Add the ability for Asterisk to represent media formats with attributes.
        <ul>
                <li>Support for SILK with attributes</li>
                <li>Support for H.264 with attributes</li>
        </ul>
        </li>
        <li>Add the ability for Asterisk to negotiate media formats with attributes.</li>
        <li>Allow translation paths to be built between all media types, not just audio.</li>
        <li>Allow translation paths to be built in a way that takes into account both media quality and translation cost for all media formats.</li>
        <li>Allow a channel to process multiple media streams, even of the same media type, with translation.</li>
        <li>Support the ability to renegotiate media formats after call setup is complete.</li>
        <li>Support the ability to pass-through media Asterisk does not yet understand.</li>
        <li>Support the ability for users to specify media formats with attributes in .conf files.</li>
</ul>
<h1><a name="MediaArchitectureProposal-RepresentationofMediaFormats"></a>4. Representation of Media Formats</h1>
<h2><a name="MediaArchitectureProposal-ProblemOverview"></a>4.1. Problem Overview</h2>
<p>One of the key problems the new media architecture must address is how to represent a media format that does not have statically defined parameters. In the past, simply defining a media format type as uLaw or g722 posed no problem, as these formats have a very specific set of parameters associated with them. For example, uLaw is always 8khz, each sample is always the exact same size, and there is really nothing more required to describe a uLaw payload other than how large it is. Everything else can be calculated because the parameters for uLaw payloads never change. Unfortunately, the assumption that media formats do not need to be defined beyond their format type has proven to be a limitation in the ability to adopt modern media formats. The problems prohibiting integration of feature complete SILK codec support into Asterisk offer a prime example of how this limitation is hindering development. SILK is an audio codec that may adjust the sample rate used in a stream based upon the capabilities of the network. Right now Asterisk assumes every media format will always use the same sample rate. Without the ability to define a format's sample rate outside of the rate hard coded at compile time, implementing SILK in Asterisk without limiting the codec's functionality is not possible.</p>
<p>In order to address this limitation, media formats will have the ability to be further defined using format specific attribute structures. These structures, along with usage examples, are outlined below.</p>
<h2><a name="MediaArchitectureProposal-Introducingastformat%2CTheNewandImprovedformatt"></a>4.2. Introducing ast_format, The New and Improved format_t</h2>
<p>The ast_format structure completely replaces format_t everywhere in the code. This new structure allows a format to be represented not only by a unique ID, but with an attribute structure as well. This means that if a channel's read format is SILK and it understands 8khz->16khz audio without the need for translation, this can now be represented using only a single format identifier. In this case the ast_format's uid would be AST_FORMAT_SILK, and the attribute structure would be configured to further define this format as having a possible dynamic sample rate between 8khz and 16khz.</p>
<p>The ast_format structure behaves slightly differently on an ast_frame than it does when representing a read or write format. On a frame, the attribute structure must be used only to further define the frame's payload. In the SILK read format example discussed above, the attribute structure represents a sample rate range the channel's read format is capable of understanding without translation; when the attribute structure is used on a frame, it must represent the precise set of parameters directly related to the media payload being transported. In the case of SILK, the attribute structure on a frame would state exactly what sample rate the payload contains.</p>
<h2><a name="MediaArchitectureProposal-TheAstFormatAPI"></a>4.3. The Ast Format API</h2>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>format.h</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*! \brief Represents a media format within Asterisk. */
struct ast_format {
        /*! The unique id representing this format from all the other formats. */
        unsigned int uid;
        /*! Attribute structure used to associate attributes with a format. */
        struct ast_format_attr *fattr;
};
/*! \brief This function is used to set an ast_format object to represent a media format
* with optional format attributes represented by format specific key value pairs.
*
* \details Example usage.
* ast_format_set(format, AST_FORMAT_ULAW); // no capability attributes are needed for ULAW
*
* ast_format_set(format, AST_FORMAT_SILK, // SILK has capability attributes.
*                AST_FORMAT_SILK_RATE, 24000,
*                AST_FORMAT_SILK_RATE, 16000,
*                AST_FORMAT_SILK_RATE, 12000,
*                AST_FORMAT_SILK_RATE, 8000,
*                AST_FORMAT_ATTR_END);
*
* \return Pointer to ast_format object.
*/
struct ast_format *ast_format_set(struct ast_format *format, format_t id, ...);
/*! \brief This function is used to set an ast_format object to represent a media format
* with optional capability attributes represented by format specific key value pairs.
*
* \details Example usage. Is this SILK format capable of 8khz?
* is_8khz = ast_format_isset(format, AST_FORMAT_SILK_CAP_RATE, 8000);
*
* \return 0, The format key value pairs are within the capabilities defined in this structure.
* \return -1, The format key value pairs are _NOT_ within the capabilities of this structure.
*/
int ast_format_isset(struct ast_format *format, ...);
/*! \brief Compare two ast_format structures
*
* \retval -1, format1 is not identical to format2.
* \retval 0, both structures are identical.
* \retval 1, same format type, and format2 is a subset of the attributes present in format1.
*/
int ast_format_cmp(struct ast_format *format1, struct ast_format *format2);
/*! \brief Find the joint format attributes of two ast_format
* structures containing the same uid and place the joint set in the
* result structure.
*
* \retval 0, joint attribute capabilities exist.
* \retval -1, no joint attribute capabilities exist.
*/
int ast_format_attr_joint(struct ast_format *format1, struct ast_format *format2, struct ast_format_attr *result);
]]></script>
</div></div>
<h2><a name="MediaArchitectureProposal-IntroducingtheFormatAttributeStructure"></a>4.4. Introducing the Format Attribute Structure</h2>
<p>The attribute structure is present on every ast_format object. This attribute structure is an opaque buffer that can be used in any way necessary by the format it represents. Since it will be necessary for Asterisk to perform a few generic operations on these attribute structures, every format requiring the use of the attribute structure must implement and register a format attribute interface with Asterisk. These registered interfaces are used by the Ast Format API, allowing attributes on an ast_format structure to be set, removed, and compared using a single set of API functions for all format types. The Ast Format API does all the work of finding the correct interface to use and calling the correct interface functions.</p>
<p>The size of the buffer in the attribute structure was determined by researching the media format with the largest number of attributes expected to be present in Asterisk 1.10. In this case the H.264 SVC draft was used, which is an expanded form of RFC 3984 allowing for some additional functionality. The attributes required by H.264 SVC are determined based upon the SDP parameters defined in the draft. The SDP parameters used by the draft do not all have fixed sizes, but it was determined that an attribute buffer of ~70 bytes will easily suffice for representing the most common use cases. In order to account for undefined future development, this buffer is initially set at 128 bytes which satisfies the current estimated attribute size requirements.</p>
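<p>To make the sizing concrete, the sketch below shows one way a format module could overlay its own attribute record on the opaque buffer, using the AST_FORMAT_ATTR_SIZE constant from the API below. This is purely illustrative; the silk_attr layout and SILK_MAX_RATES name are assumptions, not part of the proposal.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Hypothetical attribute buffer layout</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* Hypothetical record a SILK module might store inside the opaque
 * attribute buffer. The names are illustrative only. */
#define SILK_MAX_RATES 4
struct silk_attr {
        unsigned int num_rates;              /* how many rates are set */
        unsigned int rates[SILK_MAX_RATES];  /* e.g. 8000, 12000, 16000, 24000 */
};
/* Compile-time guard: the record must fit inside the 128 byte buffer. */
typedef char silk_attr_fits[(sizeof(struct silk_attr) <= AST_FORMAT_ATTR_SIZE) ? 1 : -1];
]]></script>
</div></div>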
<h2><a name="MediaArchitectureProposal-TheAstFormatAttributeAPI"></a>4.5. The Ast Format Attribute API</h2>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>format_attribute.h</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
#define AST_FORMAT_ATTR_SIZE 128
struct ast_format_attr {
        uint8_t format_attr[AST_FORMAT_ATTR_SIZE];
};
/*! \brief A format must register an attribute interface if it requires the use of the format attribute structure */
struct ast_format_attr_interface {
        /*! format type */
        format_t format;
        /*! \brief Determine if format_attr 1 is a subset of format_attr 2.
         *
         * \retval 0, structure 1 has capabilities not present in structure 2.
         * \retval 1, structures are identical.
         * \retval 2, structure 1 is a subset of the capabilities in structure 2. */
        int (* const format_attr_cmp)(struct ast_format_attr *format_attr1, struct ast_format_attr *format_attr2);
        /*! \brief Get joint attributes of same format type if they exist.
         *
         * \retval !NULL if joint attributes exist.
         * \retval NULL if no joint attributes are present.
         */
        struct ast_format_attr *(* const format_attr_get_joint)(struct ast_format_attr *format_attr1, struct ast_format_attr *format_attr2);
        /*! \brief Set format capabilities based on format specific key value pairs. */
        void (* const format_attr_set)(struct ast_format_attr *format_attr, ...);
};
/*! \brief register ast_format_attr_interface with core.
*
* \retval 0 success
* \retval -1 failure
*/
int ast_format_attr_reg_interface(struct ast_format_attr_interface *interface);
/*! \brief unregister format_attr interface with core.
*
* \retval 0 success
* \retval -1 failure
*/
int ast_format_attr_unreg_interface(struct ast_format_attr_interface *interface);
]]></script>
</div></div>
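<p>As a rough sketch of how a format module would plug into this interface, the following shows a SILK module registering its callbacks at load time, mirroring the load_module pattern used elsewhere in this proposal. The silk_attr_* helper names are assumptions; only the interface structure and registration call come from the API above.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Hypothetical SILK attribute interface registration</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* Callback implementations are elided; the names are illustrative. */
static int silk_attr_cmp(struct ast_format_attr *attr1, struct ast_format_attr *attr2);
static struct ast_format_attr *silk_attr_get_joint(struct ast_format_attr *attr1, struct ast_format_attr *attr2);
static void silk_attr_set(struct ast_format_attr *format_attr, ...);

static struct ast_format_attr_interface silk_interface = {
        .format = AST_FORMAT_SILK,
        .format_attr_cmp = silk_attr_cmp,
        .format_attr_get_joint = silk_attr_get_joint,
        .format_attr_set = silk_attr_set,
};

static int load_module(void)
{
        /* Register so the core can set, compare, and join SILK attributes
         * through the generic Ast Format API. */
        if (ast_format_attr_reg_interface(&silk_interface)) {
                return AST_MODULE_LOAD_FAILURE;
        }
        return AST_MODULE_LOAD_SUCCESS;
}
]]></script>
</div></div>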
<h2><a name="MediaArchitectureProposal-TheNewFormatUniqueIdentifier"></a>4.6. The New Format Unique Identifier</h2>
<p>Media formats in Asterisk are currently defined using a bit field, format_t, where every format is uniquely identified by a single bit. While this makes comparing media format capabilities extremely simple using bitwise operations, this representation limits the number of media formats that can be represented due to the limited size of the bit field in use. Even if a bit field could represent an infinite number of bits, this representation has no concept of how to compare format capability attributes.</p>
<p>In order to remove the limitation of the number of unique formats that can be represented the identifier will change from a single bit representation to a numeric representation. This means that #define AST_FORMAT_ULAW (1 << 0) now becomes #define AST_FORMAT_ULAW 1. By changing the way media formats are identified from a bit in a bit field to a numeric value, the limit on the number of formats that can be represented goes from 64 to 4,294,967,296. Altering this representation completely removes the ability to use bitwise operations on a bit field containing multiple media format capabilities, but since these bitwise operations lack the ability to process format attributes, they must be replaced by a more robust system anyway. The new system for computing joint media capabilities between peers hinted at here is discussed in detail in the Representation of Format Capabilities section.</p>
<h2><a name="MediaArchitectureProposal-FormatUniqueIdentifierOrganization"></a>4.7. Format Unique Identifier Organization</h2>
<p>The old system of using a single bit in a bit field to represent a single format also allows for bitmasks to be used to determine what type of media a format is categorized as. For example, there is a bitmask for determining if a format is an audio format, video format, or text format. By changing the unique id to a number the ability to use bitmasks to determine the category is no longer possible. Instead, a new convention of organizing these formats into media categories must be set in place.</p>
<p>Since the number of formats that can be represented will likely never be exhausted under the new system, formats can be uniquely identified and categorized using a scheme that sections each category off into a range of numbers. Since it is unlikely any category will ever have anywhere close to 10,000 unique formats associated with it, each category is sectioned off in increments of 10,000. For example, all audio formats will be uniquely identified in a range between 10,000 and 19,999, all video formats in a range between 20,000 and 29,999, and so on for every category. This allows each format's unique id to carry its category as well, just as the previous system's bitmasks did. Instead of using a bitmask to determine whether a format is video or audio, a function or macro can be used to do the comparison consistently across the code base.</p>
<h2><a name="MediaArchitectureProposal-NewFormatUniqueIdChangestoframe.h"></a>4.8. New Format Unique Id Changes to frame.h</h2>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>frame.h Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*OLD*/
#define AST_FORMAT_AUDIO_MASK 0xFFFF0000FFFFULL
#define AST_FORMAT_G723_1 (1ULL << 0)
#define AST_FORMAT_GSM (1ULL << 1)
#define AST_FORMAT_ULAW (1ULL << 2)
#define AST_FORMAT_ALAW (1ULL << 3)
#define AST_FORMAT_VIDEO_MASK ((((1ULL << 25)-1) & ~(AST_FORMAT_AUDIO_MASK)) | 0x7FFF000000000000ULL)
#define AST_FORMAT_H263_PLUS (1ULL << 20)
#define AST_FORMAT_MP4_VIDEO (1ULL << 22)
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>frame.h New Changes</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*NEW*/
#define AST_FORMAT_INC 10000
/* ALL FORMAT CATEGORIES */
enum ast_format_type {
        AST_FORMAT_TYPE_AUDIO = 1 * AST_FORMAT_INC,
        AST_FORMAT_TYPE_VIDEO = 2 * AST_FORMAT_INC,
        AST_FORMAT_TYPE_IMAGE = 3 * AST_FORMAT_INC,
};
enum ast_format_id {
        /* ALL AUDIO FORMATS */
        AST_FORMAT_G723_1 = 1 + AST_FORMAT_TYPE_AUDIO,
        AST_FORMAT_GSM = 2 + AST_FORMAT_TYPE_AUDIO,
        AST_FORMAT_ULAW = 3 + AST_FORMAT_TYPE_AUDIO,
        AST_FORMAT_ALAW = 4 + AST_FORMAT_TYPE_AUDIO,
        /* ALL VIDEO FORMATS */
        AST_FORMAT_H263_PLUS = 1 + AST_FORMAT_TYPE_VIDEO,
        AST_FORMAT_MP4_VIDEO = 2 + AST_FORMAT_TYPE_VIDEO,
};
/* Determine what category a format type is in */
#define AST_FORMAT_GET_TYPE(format) (((unsigned int) ((format)->uid / AST_FORMAT_INC)) * AST_FORMAT_INC)
]]></script>
</div></div>
<h2><a name="MediaArchitectureProposal-NewFormatRepresentationCodeExamplesandUsecases."></a>4.9. New Format Representation Code Examples and Use cases.</h2>
<p>This section shows example usage of the ast_format structure and how it replaces existing functionality in Asterisk. It also outlines other high-level use cases that cannot easily be represented by a code example.</p>
<p>Example 1: One to one mapping of old format_t usage to the ast_format structure and its API.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
        /* OLD: Media formats are represented by a bit in the format_t bit field. */
        format_t read_format;
        read_format = AST_FORMAT_ULAW;
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - New</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
        /* NEW: Media formats are represented using the ast_format struct. */
        struct ast_format read_format;
        ast_format_set(&read_format, AST_FORMAT_ULAW);
]]></script>
</div></div>
<p>Example 2: Set an optional format attribute structure for a SILK ast_format structure capable of a dynamic sample rate.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
struct ast_format read_format;
ast_format_set(&read_format, AST_FORMAT_SILK,
        AST_FORMAT_SILK_RATE, 24000,
        AST_FORMAT_SILK_RATE, 16000,
        AST_FORMAT_SILK_RATE, 12000,
        AST_FORMAT_SILK_RATE, 8000,
        AST_FORMAT_ATTR_END);
]]></script>
</div></div>
<p>Example 3: Set the sample rate attribute of a SILK ast_frame to represent the sample rate of the frame's payload. Then compare the format of the ast_frame with a read format to determine whether translation is required.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 3</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
struct ast_format read_format;
/* The read format is of format type SILK and can be of sample rates 8khz and 12khz */
ast_format_set(&read_format, AST_FORMAT_SILK,
        AST_FORMAT_SILK_RATE, 12000,
        AST_FORMAT_SILK_RATE, 8000,
        AST_FORMAT_ATTR_END);
/* The frame's format type is SILK and the payload is 24khz audio. */
ast_format_set(frame->subclass.format, AST_FORMAT_SILK,
        AST_FORMAT_SILK_RATE, 24000,
        AST_FORMAT_ATTR_END);
/* Comparing the frame with the read format shows that while the format types are identical,
* their attributes make them incompatible, requiring a translation path to be built. */
if (ast_format_cmp(&read_format, frame->subclass.format) < 0) {
        /* Build Translation Path.
         * This will be the outcome of this example. */
} else {
        /* Frame's format is either identical or a subset of the read_format
         * requiring no translation path. */
}
]]></script>
</div></div>
<p>Example 4. Determine if a format is of type audio.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 4 Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*OLD*/
format_t format = AST_FORMAT_ULAW;
if (format & AST_FORMAT_AUDIO_MASK) {
        /* this is of type audio */
}
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 4 New</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*NEW*/
struct ast_format format;
ast_format_set(&format, AST_FORMAT_ULAW);
if (AST_FORMAT_GET_TYPE(&format) == AST_FORMAT_TYPE_AUDIO) {
        /* this is of type audio */
}
]]></script>
</div></div>
<p>Example 5: Media format seamlessly changes parameters midstream.</p>
<p>1. A channel is defined to have a write format of SILK with the capability of understanding 8khz and 16khz without translation.<br/>
2. A stream of SILK audio ast_frames containing 16khz frame attributes begins to be written to the channel.<br/>
3. During the call the audio stream's SILK frame attributes change to 8khz.<br/>
4. ast_write() determines this change is still within the channel's write format capabilities and continues without translation.</p>
<p>Example 6: Media format changes parameters requiring translation midstream.</p>
<p>1. A channel is defined to have a write format of SILK with the capability of understanding 8khz and 16khz without translation.<br/>
2. A stream of SILK audio ast_frames containing 16khz frame attributes begins to be written to the channel.<br/>
3. During the call the audio stream's SILK frame attributes change to 24khz.<br/>
4. ast_write() determines this change is not within the bounds of the channel's write format capabilities and builds a translation path from 24khz SILK to 16khz SILK, as shown in the sketch below.</p>
<h1><a name="MediaArchitectureProposal-RepresentationofFormatCapabilities"></a>5. Representation of Format Capabilities</h1>
<h2><a name="MediaArchitectureProposal-ProblemOverview"></a>5.1. Problem Overview</h2>
<p>The new way of handling format capabilities must address two issues. First, formats are no longer represented by the format_t bit field and are replaced by the ast_format structure. This means that the old system of representing format capability sets with a bit field must be replaced as well. Second, even if we could use a bit field to represent format capability sets, the bitwise operators used to compare capabilities and calculate joint capabilities are incapable of processing the new format attribute structures. In order to handle both of these changes, an opaque capabilities container must be created to manipulate sets of ast_format structures. This container must also be coupled with an API that abstracts all the work required to compare sets of ast_formats and their internal format attributes.</p>
<h2><a name="MediaArchitectureProposal-Introducingastcap%2CTheFormatCapabilityContainer."></a>5.2. Introducing ast_cap, The Format Capability Container.</h2>
<p>The Format Capability API introduces a new container type, struct ast_cap, which acts as the opaque capabilities container discussed in the overview. Like an ao2_container holds astobj2 objects, the ast_cap container holds ast_format objects. The thing that sets the ast_cap container apart from other generic containers in Asterisk is that it is designed specifically for the purpose of comparing and manipulating sets of ast_format structures. API functions for adding/removing ast_formats, computing joint capabilities, and retrieving all capabilities for a specific media type are present. The best way to communicate the big picture for how this new container and API replaces the current architecture is by providing some examples. These examples will walk through the sections discussed so far and provide a better understanding for how the ast_format and ast_cap containers interact with each other using the new API. All the examples below take code from the existing media architecture in Asterisk and show how the new architecture replaces it.</p>
<p>Example 1: Add format capabilities to a peer.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* ---OLD: Media formats are represented by a bit in a bit field. */
format_t capabilities = 0;
capabilities |= AST_FORMAT_ULAW;
capabilities |= AST_FORMAT_GSM;
/* XXX SILK can not be set using a bit since it requires a capability
* attribute to be associated with it.
* capabilities |= AST_FORMAT_SILK;
*/
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - New</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* ---NEW: Media formats are represented using the ast_format struct and are stored in an ast_cap object.*/
struct ast_format tmp = { 0, };
ast_cap_add(capabilities, ast_format_set(&tmp, AST_FORMAT_ULAW));
ast_cap_add(capabilities, ast_format_set(&tmp, AST_FORMAT_GSM));
/* SILK media format requires the format capability attribute to be set.*/
ast_format_set(&tmp, AST_FORMAT_SILK,
        AST_FORMAT_SILK_CAP_RATE, 24000,
        AST_FORMAT_SILK_CAP_RATE, 16000,
        AST_FORMAT_SILK_CAP_RATE, 12000,
        AST_FORMAT_SILK_CAP_RATE, 8000,
        AST_FORMAT_ATTR_END);
ast_cap_add(capabilities, &tmp);
]]></script>
</div></div>
<p>Example 2: Find joint capabilities between a peer and remote endpoint.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2 - Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*---OLD: Peer and remote capabilities are bit fields, no capability attributes can be used.*/
format_t jointcapabilities = 0;
peer->capability |= (AST_FORMAT_ULAW | AST_FORMAT_GSM);
/*
* peer->capability = ULAW and GSM
*
* remote_capabilities structure is already built to contain uLaw
* remote_capability = ULAW
*
* jointcapabilities will be ULAW
*/
jointcapabilities = peer->capability & remote_capability;
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2 - New</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*---NEW: Peer and remote capabilities are ast_cap objects.*/
struct ast_cap *jointcapabilities;
struct ast_format tmp = { 0, };
ast_cap_add(peer->capability, ast_format_set(&tmp, AST_FORMAT_ULAW));
ast_cap_add(peer->capability, ast_format_set(&tmp, AST_FORMAT_GSM));
ast_format_set(&tmp, AST_FORMAT_SILK,
        AST_FORMAT_SILK_CAP_RATE, 24000,
        AST_FORMAT_SILK_CAP_RATE, 16000,
        AST_FORMAT_SILK_CAP_RATE, 12000,
        AST_FORMAT_SILK_CAP_RATE, 8000,
        AST_FORMAT_ATTR_END);
ast_cap_add(peer->capability, &tmp);
ast_format_set(&tmp, AST_FORMAT_H264,
        AST_FORMAT_H264_CAP_PACKETIZATION, 0,
        AST_FORMAT_H264_CAP_PACKETIZATION, 1,
        AST_FORMAT_H264_CAP_RES, "CIF",
        AST_FORMAT_H264_CAP_RES, "VGA",
        AST_FORMAT_ATTR_END);
ast_cap_add(peer->capability, &tmp);
/*
* peer->capability structure was just built to contain:
* SILK (rate = 24000, rate = 16000, rate = 12000, rate = 8000)
* H.264 (packetization = 0, packetization = 1, res = VGA, res = CIF)
*
* remote_capabilities structure is already built to contain:
* SILK (rate = 16000)
* H.264 (packetization = 0, res = VGA, res = SVGA)
*
* The resulting jointcapabilities object contains:
* SILK (rate = 16000)
* H.264 (packetization = 0, res = VGA)
*
* Computing of joint capabilities of formats with capability attributes is
* possible because of the format attribute interface each format requiring
* attributes must implement and register with the core.
*/
jointcapabilities = ast_cap_joint(peer->capability, remote_capability);
]]></script>
</div></div>
<p>Example 3: Separate audio, video, and text capabilities.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 3 - Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*---OLD: Separate media types are separated by a bit mask.*/
format_t video_capabilities = capabilities & AST_FORMAT_VIDEO_MASK;
format_t audio_capabilities = capabilities & AST_FORMAT_AUDIO_MASK;
format_t text_capabilities = capabilities & AST_FORMAT_TEXT_MASK;
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 3 - New</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*---NEW: Separate media types are returned on a new capabilities structure using ast_cap_get_type()*/
struct ast_cap *video = ast_cap_get_type(capabilities, AST_FORMAT_TYPE_VIDEO);
struct ast_cap *audio = ast_cap_get_type(capabilities, AST_FORMAT_TYPE_AUDIO);
struct ast_cap *text = ast_cap_get_type(capabilities, AST_FORMAT_TYPE_TEXT);
]]></script>
</div></div>
<h2><a name="MediaArchitectureProposal-AstFormatCapabilityAPIDefined"></a>5.3. Ast Format Capability API Defined</h2>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>format_capability.h</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*! Capabilities are represented by an opaque structure statically defined in format_capability.c */
struct ast_cap;
/*! \brief Allocate a new ast_cap structure.
*
* \retval ast_cap object on success.
* \retval NULL on failure.
*/
struct ast_cap *ast_cap_alloc(void);
/*! \brief Destroy an ast_cap structure.
*
* \return NULL
*/
void *ast_cap_destroy(struct ast_cap *cap);
/*! \brief Add format capability to capabilities structure. */
void ast_cap_add(struct ast_cap *cap, struct ast_format *format);
/*! \brief Remove format capability from capability structure. */
void ast_cap_remove(struct ast_cap *cap, struct ast_format *format);
/*! \brief Find if ast_format is part of the capabilities structure.
*
* \retval 1 format is found.
* \retval 0 format is not found.
*/
int ast_cap_find(struct ast_cap *cap, struct ast_format *format);
/*! \brief Get joint capability structure.
*
* \retval !NULL success
* \retval NULL failure
*/
struct ast_cap *ast_cap_joint(struct ast_cap *cap1, struct ast_cap *cap2);
/*! \brief Get all capabilities for a specific media type.
*
* \retval !NULL success
* \retval NULL failure
*/
struct ast_cap *ast_cap_get_type(struct ast_cap *cap, unsigned int mtype);
]]></script>
</div></div>
<h1><a name="MediaArchitectureProposal-IAX2AstFormatAPICompatibility"></a>6. IAX2 Ast Format API Compatibility</h1>
<p>IAX2 represents media formats the same way Asterisk currently does, using a bit field. This allows Asterisk to communicate format capabilities over IAX2 using the exact same representation Asterisk uses internally. This relationship between Asterisk and IAX2 breaks with the introduction of the ast_format and ast_cap structures, though. In order for Asterisk to maintain compatibility with IAX2, a conversion layer must exist between the previous format representation and the new one. This conversion layer will be limited to the formats defined at the moment the media format representation in Asterisk changes to use the ast_format structure. As new media formats are introduced, they must be added to this conversion layer in order to be transported over IAX2. Any media formats requiring the use of media attributes may have to be excluded from this conversion depending on their complexity. Eventually the number of media formats that can be represented in IAX2 will be exhausted. At that point it must be decided whether to accept that limitation or alter the protocol in a way that expands it to take advantage of Asterisk's new format capabilities. This proposal does not define any changes to the IAX2 protocol.</p>
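<p>To illustrate one direction of such a conversion layer, the sketch below maps legacy IAX2 capability bits onto the new unique ids and collects them in an ast_cap set. The table and function name are hypothetical; the bit values match the old frame.h definitions shown earlier.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Hypothetical IAX2 conversion layer</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* Hypothetical table mapping legacy wire bits to new unique ids. */
static const struct {
        unsigned long long old_bit;   /* legacy format_t bit on the wire */
        enum ast_format_id id;        /* new internal unique id */
} iax2_format_map[] = {
        { (1ULL << 1), AST_FORMAT_GSM },
        { (1ULL << 2), AST_FORMAT_ULAW },
        { (1ULL << 3), AST_FORMAT_ALAW },
};

/* Convert a legacy IAX2 capability bit field into an ast_cap set.
 * Formats requiring attributes are simply absent from the table. */
static void iax2_bits_to_cap(unsigned long long bits, struct ast_cap *cap)
{
        struct ast_format tmp = { 0, };
        size_t i;
        for (i = 0; i < sizeof(iax2_format_map) / sizeof(iax2_format_map[0]); i++) {
                if (bits & iax2_format_map[i].old_bit) {
                        ast_cap_add(cap, ast_format_set(&tmp, iax2_format_map[i].id));
                }
        }
}
]]></script>
</div></div>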
<h1><a name="MediaArchitectureProposal-RevisedFormatTranslation"></a>7. Revised Format Translation</h1>
<h2><a name="MediaArchitectureProposal-ProblemOverview"></a>7.1. Problem Overview</h2>
<p>There are two sets of problems that must be addressed in regards to media translation in Asterisk. The first set of problems is a ripple effect caused by the changes surrounding the new representation of media formats with attributes. Translators must gain the ability to process these attributes and build translation paths between formats requiring the use of them. The other set of problems involves the ability to translate between media types other than just audio. The current translation architecture is very audio specific. It assumes that all translators are audio format translators of some kind, and that no other media type will ever be translated. This assumption is not only within the translation code, it is also deeply rooted throughout the code base. The ability to translate between media other than audio is a concept Asterisk completely lacks at the moment.</p>
<p>This section builds upon the foundation established by the new ast_format media format representation and uses it to redefine what translators look like and how translation paths are built. After these changes are made Asterisk will still not be able to translate video or other types of media even if translation paths actually exist between them. This problem is a result of limitations set in place by the Ast Channel API. The changes required to lift that limitation are discussed in the "Handling Multiple Media Streams" section.</p>
<h2><a name="MediaArchitectureProposal-BuildingTranslationPaths"></a>7.2. Building Translation Paths</h2>
<p>The current method of calculating translation cost, using the computational time required to translate between formats, is no longer effective. When all the formats in Asterisk were 8khz audio, picking the best translation path based upon computational cost made sense. The problem with this system now is that it does not take into account the quality of the translation. It may be computationally quicker to translate from one 16khz audio format to another 16khz audio format using 8khz signed linear audio, even when 16khz signed linear is an option. Regardless of the computational costs, down sampling an audio stream unless it is absolutely necessary is a bad idea from a quality perspective. Provisions were made in the current code base to account for the down sampling issue just described, but that fix was merely a hack to sustain the current system until a more robust architecture could be put in place. The new system must be aware of quality changes between all forms of media, not just sample rate.</p>
<p>Instead of building the translation cost table based on computational complexity, the table should be built based on what kind of translation is taking place. For example, categorizing a translator as a &quot;lossless to lossy translation with a down sampling of quality&quot; gives quite a bit more information about what kind of translation is actually taking place than simply knowing the translation is between two formats and takes x amount of time to compute 1 second of sample data. As new formats are introduced, knowing how all the different translators affect media during translation allows the path builder algorithm to consistently produce the best quality available.</p>
<h3><a name="MediaArchitectureProposal-ComputingTranslationCosts"></a>7.2.1. Computing Translation Costs</h3>
<p>The new translation cost table is built on a scale between 400 and 9999. Notice that the lowest cost is 400 and the next cost after that is 600. These two numbers add up to 1000, more than the most expensive single-step cost of 975, which guarantees that a direct translation path will always take precedence over any path containing multiple translation steps. The only exception to this rule is a multiple step translation path between lossless formats of the same quality (400 + 400 = 800), which does not exist in Asterisk yet but may in the future.</p>
<p>Every one of these cost categories can be thought of as a range starting at the number listed and running up to the next category. If a format has multiple translators in any single category listed below, the costs associated with those translators should not land on the same number. Instead, each translator for a single format falling into the same cost table category should be given a weighted cost within the category's range. For example, siren17 is a 32khz audio codec with translators capable of down sampling to both 16khz signed linear and 8khz signed linear. Both of these translators fall under the [lossy -&gt; lossless] down sample category, which starts at cost 960. To make this work, the 16khz conversion would be 960 and the 8khz conversion would be 961. This gives the translator that loses the least amount of information priority over the one that loses more if a tie occurs.</p>
<p>This cost table is weighted in a way that assigns lower cost to translators with the most ideal outcome. For example, translating from a lossless format to a lossy format is always more ideal than converting a lossy format to a lossless format, translating between two lossy formats of the same quality is always more ideal than translating to a lossy format of lesser quality, and translating to a format equivalent in quality to the original is more ideal than any translation that requires some sort of re-sampling. The costs are computed based on these principles and more.</p>
<h3><a name="MediaArchitectureProposal-TranslationCostTable"></a>7.2.2. Translation Cost Table</h3>
<p>Table Terms<br/>
<b>Up Sample:</b> The original format is translated to a format capable of representing more detailed information than the original one. Examples of this term would be an audio codec being translated to a higher sample rate, a video codec being translated to a higher resolution/frame rate, or an image being translated to a higher resolution.</p>
<p><b>Down Sample:</b> The original format is translated to a format of lesser quality. Examples of this term would be an audio codec being translated to a lower sample rate, a video codec being translated to a lower resolution/frame rate, or an image being translated to a lower resolution.</p>
<p><b>Original Sampling:</b> The original format is translated to a format of similar quality with little to no loss of information. Examples of this term would be an audio codec being translated to a format equivalent in quality to the original one, a video codec being translated to a format which preserves all the original information present, or an image being translated to another format preserving the same resolution and color depth.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Translation Cost Table</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
--- Lossless Source Translation Costs
400 [lossless -> lossless] original sampling
600 [lossless -> lossy] original sampling
800 [lossless -> lossless] up sample
825 [lossless -> lossy] up sample
850 [lossless -> lossless] down sample
875 [lossless -> lossy] down sample
--- Lossy Source Translation Costs
900 [lossy -> lossless] original sampling
915 [lossy -> lossy] original sampling
930 [lossy -> lossless] up sample
945 [lossy -> lossy] up sample
960 [lossy -> lossless] down sample
975 [lossy -> lossy] down sample
]]></script>
</div></div>
<h3><a name="MediaArchitectureProposal-TranslationPathExamples"></a>7.2.3. Translation Path Examples</h3>
<p><b>Example 1:</b> Downsampling g722 to ulaw using signed linear as an intermediary step. Notice that stepping down through both 16khz and 8khz signed linear is more expensive than downsampling g722 directly to 8khz slin.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - g722 to ulaw</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
[g722->slin16->slin->ulaw] 900+850+600 = 2350
[g722->slin->ulaw] 960+600 = 1560 wins
]]></script>
</div></div>
<p><b>Example 2:</b> Direct lossy to lossy translation using ulaw to alaw. Notice how the direct path between uLaw and aLaw beats using the intermediary slin step.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2 - uLaw to aLaw</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
[ulaw->slin->alaw] 900+600 = 1500
[ulaw->alaw] 945 = 945 wins
]]></script>
</div></div>
<p><b>Example 3:</b> Complex resampling of siren14 to siren7 using g722 as an intermediary step. Notice how downsampling all the way to 8khz signed linear loses to the path that only requires downsampling to 16khz signed linear.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 3 - siren14 to siren7</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
[siren14->slin->g722->slin16->siren7] 960+825+900+600 = 3285
[siren14->slin16->g722->slin16->siren7] 960+600+900+600 = 3060 wins
]]></script>
</div></div>
<p><b>Example 4:</b> Complex resampling using siren14 to a fake 32khz lossy codec. Notice how siren14-&gt;slin16 has a cost of 960 while siren14-&gt;slin has 961. This allows translations within the same category to be weighted against each other to produce the best quality.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 4 - siren14 to fake 32khz codec</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
[siren14->slin->Fake 32khz lossy Codec] 961+825 = 1786
[siren14->slin16->Fake 32khz lossy Codec] 960+825 = 1785 wins
]]></script>
</div></div>
<h3><a name="MediaArchitectureProposal-TranslatorCostsDefined"></a>7.2.4. Translator Costs Defined</h3>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>translator.h Defining Cost Table</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*! Translator Cost Table */
/* Lossless Source Translation Costs */
/*! [lossless -> lossless] original sampling */
#define AST_TRANS_COST_LL_LL_ORIGSAMP 400
/*! [lossless -> lossy] original sampling */
#define AST_TRANS_COST_LL_LY_ORIGSAMP 600
/*! [lossless -> lossless] up sample */
#define AST_TRANS_COST_LL_LL_UPSAMP 800
/*! [lossless -> lossy] up sample */
#define AST_TRANS_COST_LL_LY_UPSAMP 825
/*! [lossless -> lossless] down sample */
#define AST_TRANS_COST_LL_LL_DOWNSAMP 850
/*! [lossless -> lossy] down sample */
#define AST_TRANS_COST_LL_LY_DOWNSAMP 875
/* Lossy Source Translation Costs */
/*! [lossy -> lossless] original sampling */
#define AST_TRANS_COST_LY_LL_ORIGSAMP 900
/*! [lossy -> lossy] original sampling */
#define AST_TRANS_COST_LY_LY_ORIGSAMP 915
/*! [lossy -> lossless] up sample */
#define AST_TRANS_COST_LY_LL_UPSAMP 930
/*! [lossy -> lossy] up sample */
#define AST_TRANS_COST_LY_LY_UPSAMP 945
/*! [lossy -> lossless] down sample */
#define AST_TRANS_COST_LY_LL_DOWNSAMP 960
/*! [lossy -> lossy] down sample */
#define AST_TRANS_COST_LY_LY_DOWNSAMP 975
]]></script>
</div></div>
<h3><a name="MediaArchitectureProposal-CreationofTranslationPathMatrix"></a>7.2.5. Creation of Translation Path Matrix</h3>
<p>Most least cost algorithms take a matrix as input. The current code's translation path matrix is represented by a two dimensional array of translation path structures. The matrix will not change structurally, but there are some complications involved. The current code accesses translation paths in the matrix using index values which represent individual formats. The index values are computed by converting the format's bit representation to a numeric value. Since the numeric representation of a format bit has to be between 1 and 64, the maximum size of the bit field in use, the numeric representation works as an index for the current two dimensional matrix. With the introduction of the ast_format structure, the conversion between a format's unique id and a matrix index value is no longer this clean. To account for this complication, a hash table mapping every format id to a matrix index value will be used.</p>
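<p>A minimal sketch of that mapping, assuming a small open-addressed table keyed on the format's unique id; the names and bucket count are illustrative, not part of the proposal.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Hypothetical uid to matrix index map</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* Hypothetical uid -> matrix index map. Indexes are handed out in
 * registration order, so the matrix stays dense even though the new
 * unique ids are sparse values such as 10003 or 20001. */
#define INDEX_BUCKETS 256
static struct {
        unsigned int uid;   /* format unique id, 0 marks an empty slot */
        int index;          /* row/column in the translation matrix */
} uid2index[INDEX_BUCKETS];
static int next_index;

static int format_to_matrix_index(unsigned int uid)
{
        unsigned int slot = uid % INDEX_BUCKETS;
        /* Linear probing resolves collisions between unrelated uids. */
        while (uid2index[slot].uid && uid2index[slot].uid != uid) {
                slot = (slot + 1) % INDEX_BUCKETS;
        }
        if (!uid2index[slot].uid) {
                uid2index[slot].uid = uid;             /* first sighting: assign */
                uid2index[slot].index = next_index++;
        }
        return uid2index[slot].index;
}
]]></script>
</div></div>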
<h3><a name="MediaArchitectureProposal-ComputingLeastCostTranslationPaths"></a>7.2.6. Computing Least Cost Translation Paths</h3>
<p>The Floyd-Warshall algorithm will be the least cost algorithm in use. At its core, the current translation path building code already uses this algorithm, but with a few layers of complexity added on top of the base algorithm to deal with translation paths between audio codecs of differing sample rates. With the introduction of the new translation cost table, this additional complexity is completely stripped away from the algorithm. The translation costs are now computed with translation quality and efficiency in mind, which abstracts those concepts away from the least cost algorithm in use.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Floyd Warshall Algorithm</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
FloydWarshall ()
        for k := 1 to n
                for i := 1 to n
                        for j := 1 to n
                                path[i][j] = min (path[i][j], path[i][k]+path[k][j]);
]]></script>
</div></div>
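<p>The same algorithm in C, with the extra bookkeeping needed to later rebuild the chosen multi-step translator chain. The via matrix and the sentinel convention are assumptions for illustration, not existing code.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Floyd Warshall in C (sketch)</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* Floyd-Warshall over the translation cost matrix. cost[i][j] starts as
 * the direct translator cost, or a large sentinel (kept below INT_MAX / 2
 * to avoid overflow) when no direct translator exists. via[i][j] records
 * the intermediate index so the translator chain can be reconstructed. */
static void build_translation_paths(int n, int cost[n][n], int via[n][n])
{
        int k, i, j;
        for (k = 0; k < n; k++) {
                for (i = 0; i < n; i++) {
                        for (j = 0; j < n; j++) {
                                if (cost[i][k] + cost[k][j] < cost[i][j]) {
                                        cost[i][j] = cost[i][k] + cost[k][j];
                                        via[i][j] = k; /* best i..j path passes through k */
                                }
                        }
                }
        }
}
]]></script>
</div></div>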
<h2><a name="MediaArchitectureProposal-TranslatorRedundancyandFailover"></a>7.3. Translator Redundancy and Failover</h2>
<p>It is possible that multiple redundant translators may exist for a single translation path. A common example of this would be a hardware translator with limited capacity coupled with a software translator. Both of these translators perform the exact same task, but the hardware translator is much faster. In this case the hardware translator would be used until it reached capacity and then it would failover to the software translator. There is however a complication involved with this. Only one of these translators can exist in the translation path matrix at a time. This means that when multiple translators with the same source and destination formats are present, some sort of priority must be used to pick which one is used. If the translator in use reaches capacity it then must deactivate itself allowing the matrix to be rebuilt in order to take advantage of the redundant translator.</p>
<p>In order to prioritize redundant translators, computational cost will be used. Formats requiring the use of redundant translators must supply a set of sample data to translate. This data is already present for most audio formats because the current architecture requires it to compute translation cost. Translation cost in the new architecture is replaced by the translation cost table, but computational cost is still important when choosing between redundant translators.</p>
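<p>As a rough sketch of this prioritization, assume a hypothetical list of redundant candidates for one source/destination pair; none of these names are part of the proposed API.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Redundant Translator Selection (illustrative sketch)</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
#include <limits.h>
#include <stddef.h>

struct ast_translator; /* defined elsewhere */

/* Illustrative sketch only: pick between redundant translators for the
 * same source/destination pair by measured computational cost. */
struct translator_candidate {
        struct ast_translator *translator;
        int comp_cost; /* measured by translating the supplied sample data */
        int active;    /* cleared when the translator reaches capacity */
};

static struct ast_translator *pick_translator(struct translator_candidate *cands, size_t count)
{
        struct ast_translator *best = NULL;
        int best_cost = INT_MAX;
        size_t i;

        for (i = 0; i < count; i++) {
                if (!cands[i].active) {
                        continue; /* already failed over; the matrix rebuild skips it */
                }
                if (cands[i].comp_cost < best_cost) {
                        best_cost = cands[i].comp_cost;
                        best = cands[i].translator;
                }
        }
        return best; /* the single translator placed in the matrix */
}
]]></script>
</div></div>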
<h2><a name="MediaArchitectureProposal-RedefiningTheTranslatorInterface"></a>7.4. Redefining The Translator Interface</h2>
<p>Translators are currently defined by a simple set of functions (constructor, destructor, framein, frameout) coupled with a source and destination media format to translate between. Not much needs to change about this interface, except that the source and destination formats must be converted to ast_format structures in all the existing code, and each translator must provide a cost value. A table will be available to guide exactly what cost value to use. In order to make any future changes to the cost table effortless, defined values will be used when assigning cost to a translator. Otherwise this interface is in great shape for the changes ahead.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Registering Translator Interface</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* each format must be declared statically now */
static struct ast_format slin16;
static struct ast_format g722;
/* each interface holds a pointer to the static formats. */
static struct ast_translator lin16tog722 = {
        .name = "lin16tog722",
        .cost = AST_TRANS_COST_LL_LY_ORIGSAMP,
        .srcfmt = &slin16,
        .dstfmt = &g722,
        .newpvt = lin16tog722_new,        /* same for both directions */
        .framein = lintog722_framein,
        .sample = slin16_sample,
        .desc_size = sizeof(struct g722_encoder_pvt),
        .buffer_samples = BUFFER_SAMPLES * 2,
        .buf_size = BUFFER_SAMPLES,
};
/* Notice the static formats are initialized before registering the translator */
static int load_module(void)
{
        int res = 0;
        ast_format_set(&slin16, AST_FORMAT_SLIN16);
        ast_format_set(&g722, AST_FORMAT_G722);
        res |= ast_register_translator(&lin16tog722);
        if (res) {
                unload_module();
                return AST_MODULE_LOAD_FAILURE;
        }        
        return AST_MODULE_LOAD_SUCCESS;
}
]]></script>
</div></div>
<h1><a name="MediaArchitectureProposal-HandlingMultipleMediaStreams"></a>8. Handling Multiple Media Streams</h1>
<h2><a name="MediaArchitectureProposal-ProblemOverview"></a>8.1. Problem Overview</h2>
<p>Asterisk was designed from the ground up around the idea of a single audio media path being passed between channels. The code that handles this media path is written in such a way that expanding it to multiple media paths is very difficult, especially for media that is not audio. Asterisk has gotten away with supporting very limited video functionality by not treating video as a media path at all. Instead of putting all media in the same media path as audio, video and other forms of media are simply passed through, similar to the way signalling is handled. In order to bring all media into the same code path as audio, several fundamental design changes must be made to the way channels represent media streams. This section discusses those changes and how they affect channel drivers and other applications requiring access to media streams.</p>
<h2><a name="MediaArchitectureProposal-DefiningaMediaStreaminAsterisk"></a>8.2. Defining a Media Stream in Asterisk</h2>
<p>The first step in improving Asterisk's ability to represent multiple media streams is to actually define what a media stream is. At the moment, a stream in Asterisk is a very abstract idea. There is no tangible representation of a stream, no stream object or structure. The closest thing to a stream representation Asterisk has now is the ast_channel structure, which is capable of representing a single set of audio tx/rx streams through a collection of disjoint elements. Let's start this discussion by breaking out the elements of the ast_channel structure that allow it to represent these streams.</p>
<p>In order for the ast_channel structure to represent a single set of audio tx/rx streams it needs the following things.</p>
<ol>
        <li><b>Read translator</b> - Translates stream on the read path going into the Asterisk Core.</li>
        <li><b>Write translator</b> - Translates stream on the write path going out to the channel driver.</li>
        <li><b>Native Format Capabilities</b> - Native capabilities the channel driver is capable of understanding without translation for this stream.</li>
        <li><b>Read Format</b> - Requested Read format after translation on the read path.</li>
        <li><b>Raw Read Format</b> - Expected read format before translation.</li>
        <li><b>Write Format</b> - Requested write format after translation on the write path.</li>
	<li><b>Raw Write Format</b> - Expected write format before translation.</li>
</ol>
<p>The combination of all these items represents everything Asterisk needs to make channels compatible with one another and to build translation paths between them for a single set of corresponding tx/rx streams. The problem with this architecture is that these disjoint elements make it impossible to replicate this functionality so that multiple tx/rx streams can exist on a single channel. In order for channels in Asterisk to gain the ability to process multiple tx/rx stream sets on a single channel, all of these stream elements must be organized into an isolated structure that can be easily replicated and manipulated. This new structure is called the <b>ast_channel_stream</b> structure and is discussed in detail in the next section.</p>
<h2><a name="MediaArchitectureProposal-Introducingastchannelstream%2CMakingSenseoutofMadness"></a>8.3. Introducing ast_channel_stream, Making Sense out of Madness</h2>
<p>The ast_channel_stream structure is made up of all the individual elements required to represent a single set of tx/rx streams on an ast_channel structure. This allows all the disjoint translators and formats on the ast_channel structure associated with the audio streams to go away, replaced by a single ast_channel_stream structure. Every place in the current code base that directly accesses any of the stream elements on a channel, such as nativeformats, readformat, and writeformat, will be replaced by a set of API functions provided by the new Ast Channel Stream API. This API contains all the common operations channel drivers and applications need to perform on a stream, such as setting the native format capabilities, initializing the read/write formats, retrieving the current read/write formats, and setting the read/write formats. By using this API, channels also gain the ability to contain more than one media stream set. This is done through the concept of stream identifiers, which is further discussed in the next section.</p>
<h2><a name="MediaArchitectureProposal-StreamIdentifiers"></a>8.4. Stream Identifiers</h2>
<p>The ast_channel_stream structure isolates the concept of tx/rx streams into a single entity, allowing channels to represent multiple streams through the use of multiple ast_channel_stream structures. Since it is prohibited for any part of Asterisk except channel.c to directly access the ast_channel_stream structures on a channel, the rest of Asterisk needs a way to access these individual streams through the Ast Channel Stream API. This introduces the concept of <b>stream identifiers</b>. Stream identifiers completely abstract the ast_channel_stream structure away from the rest of Asterisk. Every ast_channel_stream structure on a channel will have a unique stream id assigned to it. This stream id is required by every function in the Ast Channel Stream API to access and manipulate the individual streams on a channel.</p>
<p>In order to separate ast_frames belonging to separate streams, a stream id will also be present on each frame. This will involve placing a new value on the ast_frame structure to represent what stream the frame belongs to. By default the current code will not use the stream id on the ast_frame even though it will be present. This concept is discussed in more detail in the "Default Streams" section.</p>
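<p>A minimal sketch of the idea follows; the exact field name and its placement within ast_frame are not yet defined.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Stream ID on ast_frame (illustrative sketch)</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* Illustrative sketch only: ast_frame gains a stream id field. */
struct ast_frame {
        enum ast_frame_type frametype;
        enum ast_channel_stream_id stream_id; /* 0 == default stream for the frame type */
        /* ... all existing ast_frame members remain unchanged ... */
};

/* A consumer that only understands the default audio stream simply
 * filters on the id; frames for other streams pass it by untouched. */
static void handle_frame(struct ast_frame *f)
{
        if (f->frametype == AST_FRAME_VOICE && f->stream_id == 0) {
                /* 0 is the uninitialized default value and maps to the
                 * default audio stream, so legacy code keeps working
                 * without modification */
        }
}
]]></script>
</div></div>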
<p>Stream identifiers are organized into three categories. For the sake of organization and ABI compatibility, each of these categories is given a range of unique stream identifiers available to it. Separating the default streams from the auxiliary and dynamic streams also makes it much easier to filter out auxiliary and dynamic streams for applications and modules that do not support them. Once a new stream identifier is defined, its unique id must remain consistent. </p>
<ol>
	<li><b>default streams</b>: Unique id defined between 1 - 99999</li>
        <li><b>auxiliary streams</b>: Unique id defined between 100000 - 199999</li>
        <li><b>dynamic streams</b>: Unique id defined between 200000 - 299999</li>
</ol>
<h3><a name="MediaArchitectureProposal-DefaultStreams"></a>8.4.1. Default Streams</h3>
<p>Since Asterisk was designed with the concept of a single audio tx/rx stream set existing on a channel, some provisions must be made to allow for a smooth transition into the concept of multiple stream sets. This is where default streams come into play. Every ast_channel structure will contain a set of default streams associated with it, each with a predefined consistent stream id.</p>
<p><b>Default Audio Streams</b> - The first default tx/rx stream set present on every channel is the default audio stream set. This is the stream set all of Asterisk already knows about: the one that used to be made up of individual elements in the ast_channel structure before being stripped out into the ast_channel_stream structure. Every channel driver built so far already knows how to manipulate these streams, and many applications require access to them as well. All ast_frames of type AST_FRAME_VOICE with a stream id of 0 will automatically match this default stream set on a channel. Since 0 is the default initialization value for the stream id on a frame, channel drivers and applications already making use of these streams do not have to be modified.</p>
<p>It should be noted that while additional audio streams will be possible in the future, it is likely the default audio stream will be the only one that any kind of tone detection is performed on for DTMF, FAX, etc. This document does not attempt to alter this limitation in any way.</p>
<p><b>Default Video Streams</b> - It is currently impossible to do translation between two channels transmitting different video formats because the channel has no way of representing video translators. This changes with the introduction of the default video tx/rx stream set. Similar to the default audio streams, any video frame containing a stream id of 0 is automatically matched to the default video stream set on a channel.</p>
<p>As more media types are introduced, it may be beneficial to define additional default stream sets. Initially, only audio and video will be present.</p>
<h3><a name="MediaArchitectureProposal-AuxiliaryStreams"></a>8.4.2. Auxiliary Streams</h3>
<p>If a channel driver is capable of negotiating more streams than can be represented by the default rx/tx stream sets on a channel, the auxiliary media stream sets can be used. These stream sets work the exact same way as the default stream sets except they require the use of the media stream id on frames. With auxiliary streams the stream id must be present on every ast_frame created for the stream. This allows channels and applications not capable of processing auxiliary streams to filter out the frames they don't understand.</p>
<p>Since Asterisk supports multiple protocols with various capabilities, all the auxiliary streams that can be used anywhere in Asterisk must be defined at compile time. This means when a channel driver is extended to make use of a new type of auxiliary stream, that stream must be defined with a stream id that uniquely represents it across the entire code base. This is the only way to keep the different types of auxiliary streams and what they are used for consistent across all modules.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Auxiliary Stream Usage Example</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
Example 1: Chan_sip is extended to make use of up to four video and four audio streams per call. This sort of functionality has never been done before, so six new auxiliary streams must be defined for the three new video and three new audio streams.

enum ast_channel_stream_id {
        /*! Define Default Streams below */
        AST_STREAM_DEFAULT_AUDIO = 1,
        AST_STREAM_DEFAULT_VIDEO = 2,
        /*! Define Auxiliary Streams below */
        AST_STREAM_VIDEO_AUX1 = 100000,
        AST_STREAM_VIDEO_AUX2 = 100001,
        AST_STREAM_VIDEO_AUX3 = 100002,
        AST_STREAM_AUDIO_AUX1 = 100003,
        AST_STREAM_AUDIO_AUX2 = 100004,
        AST_STREAM_AUDIO_AUX3 = 100005,
};

As chan_sip receives individual stream payloads and creates ast_frames to pass into the core, each frame's stream id is marked with the ast_channel_stream_id it belongs to. Any channel driver or application that receives an audio or video frame belonging to one of these newly defined auxiliary streams and does not support it will simply ignore the frame.
]]></script>
</div></div>
<h3><a name="MediaArchitectureProposal-DynamicStreams"></a>8.4.3. Dynamic Streams</h3>
<p>It is possible that Asterisk will need the ability to pass through streams containing media it does not understand. This can only be accomplished if both the channel negotiating the unknown media type and whatever that channel is bridged to agree that they understand the unknown media type and assign it a dynamic stream id that both sides accept. This document does not define the negotiation of dynamic streams in Asterisk.</p>
<h2><a name="MediaArchitectureProposal-AstChannelStreamAPIDefined"></a>8.5. Ast Channel Stream API Defined</h2>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>channel.c additions</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*! \brief Definition of opaque channel stream structure */
struct ast_channel_stream {
        /*! represents the stream type */
        enum ast_channel_stream_id id;
        /*! translates frames on the write path out to the channel driver */
        struct ast_trans_pvt *writetrans;
        /*! translates frames on the read path into the Asterisk core */
        struct ast_trans_pvt *readtrans;
        /*! native capabilities the channel driver understands without translation */
        struct ast_cap nativeformats;
        /*! requested read format after translation */
        struct ast_format readformat;
        /*! requested write format after translation */
        struct ast_format writeformat;
        /*! expected read format before translation */
        struct ast_format rawreadformat;
        /*! expected write format before translation */
        struct ast_format rawwriteformat;
};
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>channel.h additions and changes</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/*! \brief stream identifier enumeration, present on both the ast_frame
 * and ast_channel_stream structures.
 */
enum ast_channel_stream_id {
        /*! Define all Default Streams below. */
        AST_STREAM_DEFAULT_AUDIO = 1,
        AST_STREAM_DEFAULT_VIDEO = 2,
        /*! Define Auxiliary Streams below starting at 100000
         * Example:
         * AST_STREAM_VIDEO_AUX1 = 100000,
         */
};

void ast_channel_init_write_format(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_format *format);
void ast_channel_init_read_format(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_format *format);
void ast_channel_set_native_cap(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_cap *cap);
int ast_channel_copy_readwrite_format(struct ast_channel *chan1, struct ast_channel *chan2, enum ast_channel_stream_id id);
void ast_channel_set_read_format(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_format *format);
void ast_channel_set_write_format(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_format *format);
int ast_channel_get_native_cap(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_cap *result);
int ast_channel_get_write_format(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_format *result);
int ast_channel_get_read_format(struct ast_channel *chan, enum ast_channel_stream_id id, struct ast_format *result);
]]></script>
</div></div>
<h2><a name="MediaArchitectureProposal-CodeChangeExamples"></a>8.6. Code Change Examples</h2>
<p>This section shows how the Ast Channel Stream API replaces existing usage in Asterisk.</p>
<p>Example 1: A channel driver creating a new channel and initializing the default audio stream's formats and capabilities.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
chan->nativeformats = capability;
chan->readformat = best_format;
chan->rawreadformat = best_format;
chan->writeformat = best_format;
chan->rawwriteformat = best_format;
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - New</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
ast_channel_set_native_cap(chan, AST_STREAM_DEFAULT_AUDIO, capability);
ast_channel_init_write_format(chan, AST_STREAM_DEFAULT_AUDIO, best_format);
ast_channel_init_read_format(chan, AST_STREAM_DEFAULT_AUDIO, best_format);
]]></script>
</div></div>
<p>Example 2: Setting the read format on a channel.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2 - Old</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
ast_set_read_format(chan, format);
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2 - New</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
ast_channel_set_read_format(chan, AST_STREAM_DEFAULT_AUDIO, format);
]]></script>
</div></div>
<h1><a name="MediaArchitectureProposal-MediaFormatwithAttributesUserConfiguration"></a>9. Media Format with Attributes User Configuration</h1>
<p>With the addition of media formats with attributes, users will need a way to represent these new formats and their attributes in their config files. This will be accomplished through the ability to define custom media types that can be used in the format allow/disallow fields. These custom media type definitions will live in codecs.conf. For people familiar with Asterisk's config structure, the best way to present this concept is with some examples.</p>
<p><b>Example 1</b>. SILK is capable of several different sample rates. If a peer wants to negotiate only using SILK in a narrow band format, a custom format must be created to represent this.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - codecs.conf</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
; Limit negotiation of SILK to only 8khz and 12khz.
[silk_nb]
type=silk
samplerates=8000,12000

; Limit negotiation of SILK to only 16khz and 24khz.
[silk_wb]
type=silk
samplerates=16000,24000

; Allow any SILK sample rate a device is capable of to be negotiated.
[silk_all]
type=silk
samplerates=8000,12000,16000,24000
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 1 - sip.conf</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
; Define a peer using only the narrow band custom SILK format definition.
[sip_peer]
type=friend
host=dynamic
disallow=all
allow=silk_nb
]]></script>
</div></div>
<p><b>Example 2</b>. H.264 is capable of negotiating a wide range of attributes. If specific attributes are to be negotiated, a custom format must be created to represent this.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2 - codecs.conf</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
; H.264 at vga or svga resolutions, 30 frames per second.
[h264_custom1]
type=h264
res=vga,svga
framerate=30
]]></script>
</div></div>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Example 2 - sip.conf</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
; Define a peer using the new h264_custom1 custom format type.
[sip_peer]
type=friend
host=dynamic
disallow=all
allow=ulaw
allow=h264_custom1
]]></script>
</div></div>
<p>Notice from these examples that both the SILK and H.264 custom formats are defined using fields specific to their format. Each format will define which fields are applicable to it. If there are common fields used by several different media formats, those fields should be named consistently across all the media formats that use them. Every format that allows custom media formats to be defined must be documented in codecs.conf along with all its available fields.</p>
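<p>As a hypothetical illustration of that consistent naming, if Speex (which also supports multiple sample rates) were given custom format definitions, it would reuse the same samplerates field rather than inventing its own. This example is not part of the proposal itself.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Hypothetical shared field - codecs.conf</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
; Hypothetical example only: the samplerates field is reused consistently
; by any format that supports multiple sample rates.
[speex_wb]
type=speex
samplerates=16000
]]></script>
</div></div>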
<h1><a name="MediaArchitectureProposal-EnhancingFormatNegotiationDuringCallSetup"></a>10. Enhancing Format Negotiation During Call Setup</h1>
<div class='panelMacro'><table class='warningMacro'><colgroup><col width='24'><col></colgroup><tr><td valign='top'><img src="/wiki/images/icons/emoticons/forbidden.gif" width="16" height="16" align="absmiddle" alt="" border="0"></td><td>This is an area of focus for our initial media overhaul efforts, but research into this area is still incomplete. Because of this, the design surrounding the ability to better negotiate media formats during call setup has not yet been defined. This will be addressed at a later date.</td></tr></table></div>
<h1><a name="MediaArchitectureProposal-FormatRenegotiationAfterCallSetup"></a>11. Format Renegotiation After Call Setup</h1>
<div class='panelMacro'><table class='warningMacro'><colgroup><col width='24'><col></colgroup><tr><td valign='top'><img src="/wiki/images/icons/emoticons/forbidden.gif" width="16" height="16" align="absmiddle" alt="" border="0"></td><td>Please note that this section is incomplete. A very high level approach to format renegotiation has been outlined below, but many details pertaining to exactly how this will work are not yet defined. Format renegotiation has been marked as one of the later implementation phases and the design will be completely re-evaluated and proven before implementation.</td></tr></table></div>
<h2><a name="MediaArchitectureProposal-ProblemOverview"></a>11.1. Problem Overview</h2>
<p>Whenever possible, it is better to have two bridged channels share the same media formats for their audio streams than to perform translation. Translation for audio is expensive, but translation for video is far more expensive still. Because of the computational complexity involved with translating video, the ability to renegotiate media after a call is established, in an attempt to get the device to do the translation for us, is very important. Right now Asterisk lacks this ability.</p>
<h2><a name="MediaArchitectureProposal-Makingastchannelmakecompatible%28%29Smarter"></a>11.2. Making ast_channel_make_compatible() Smarter</h2>
<p>Every time a channel is bridged with another channel, a call to ast_channel_make_compatible() is made. This function takes the two channels to be bridged as input and figures out all the translation paths and intermediate media formats that need to be set in order for the two channels to talk to each other. With protocols like SIP, it is possible to renegotiate the call parameters after call setup has taken place. By placing a feature in ast_channel_make_compatible() that can make the two channels aware of each other's native media format before translation takes place, it is possible for one side to renegotiate its session to switch to the same media format used by the other side. When this happens, Asterisk is able to avoid translation completely.</p>
<h3><a name="MediaArchitectureProposal-HowRenegotiationWorks"></a>11.2.1. How Renegotiation Works</h3>
<p>At the heart of renegotiation is the introduction of a channel option called <b>AST_OPTION_FORMAT_RENEGOTIATE</b> and a structure called <b>ast_option_renegotiate_param</b>. The ast_option_renegotiate_param structure is passed as the data for the AST_OPTION_FORMAT_RENEGOTIATE query and set actions. This structure contains a format to renegotiate for each stream renegotiation must take place on, a function pointer through which a channel tech must report the result of its renegotiation attempt, and an internal structure used to determine what action to take next after a channel tech reports that attempt.</p>
<p>On query, the ast_option_renegotiate_param structure is passed down to the channel tech pvt containing information about all the formats and streams to renegotiate. The result of a query indicates whether the channel tech is capable of attempting renegotiation with the formats provided. Queries are performed synchronously, meaning a query must be answered immediately and must never block waiting for a network transaction to take place.</p>
<p>On set, the ast_option_renegotiate_param structure is passed down to the channel tech pvt containing both the formats and streams to renegotiate and a place to report the result of the renegotiation. Renegotiation is event driven, meaning the channel tech pvt is given the renegotiation parameters and must report back the result of the renegotiation attempt at a later time. This allows the set operation to avoid blocking the bridge code by performing the renegotiation asynchronously.</p>
<p>During ast_channel_make_compatible(), if it is determined that translation is required to make two channels compatible, both channels are queried using the AST_OPTION_FORMAT_RENEGOTIATE option and ast_option_renegotiate_param structures. After the queries, if either of the two channels is capable of renegotiating, the set action is used on the best candidate to attempt the renegotiation. If the channel used for the first renegotiation attempt reports a failure, a second attempt at renegotiation may take place on the bridged channel, provided neither channel has hung up.</p>
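<p>Since the design in this section is still being evaluated, the sketch below is only illustrative. It assumes the option rides on the existing channel option mechanism (ast_channel_queryoption()/ast_channel_setoption()), and the structure fields shown are hypothetical beyond what the text above describes.</p>
<div class="code panel" style="border-style: solid;border-width: 1px;"><div class="codeHeader panelHeader" style="border-bottom-width: 1px;border-bottom-style: solid;"><b>Renegotiation Option Usage (illustrative sketch)</b></div><div class="codeContent panelContent">
<script type="syntaxhighlighter" class="toolbar: false; theme: Confluence; brush: java; gutter: false"><![CDATA[
/* Illustrative sketch only: these fields are hypothetical. */
struct ast_option_renegotiate_param {
        /* one format per stream requiring renegotiation */
        struct ast_format *formats;
        enum ast_channel_stream_id *streams;
        size_t num_streams;
        /* channel tech reports the asynchronous result here */
        void (*result)(struct ast_channel *chan, int success);
        /* internal state used to decide the next action after a report */
        void *internal;
};

static int try_renegotiation(struct ast_channel *chan,
        struct ast_option_renegotiate_param *param)
{
        int len = sizeof(*param);

        /* Query: must be answered immediately, never blocking on the network. */
        if (ast_channel_queryoption(chan, AST_OPTION_FORMAT_RENEGOTIATE, param, &len, 0)) {
                return -1; /* this channel cannot attempt renegotiation */
        }

        /* Set: kicks off the asynchronous renegotiation; the channel tech
         * reports back later through param->result. */
        return ast_channel_setoption(chan, AST_OPTION_FORMAT_RENEGOTIATE, param, sizeof(*param), 0);
}
]]></script>
</div></div>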
<h3><a name="MediaArchitectureProposal-RenegotiationwithIntermediaryTranslation"></a>11.2.2. Renegotiation with Intermediary Translation</h3>
<ul>
        <li>Make Compatible Flow of Events
        <ul>
                <li>ast_channel_make_compatible() is invoked</li>
                <li>read and write formats are different between channels for at least one stream</li>
                <li>translation paths are built for streams requiring translation</li>
                <li>query to AST_OPTION_FORMAT_RENEGOTIATE is made on both channels</li>
                <li>if a candidate for renegotiation exists, renegotiation parameters are set on the channel using AST_OPTION_FORMAT_RENEGOTIATE</li>
                <li>channels are bridged</li>
        </ul>
        </li>
        <li>Asynchronous Renegotiation Flow of Events
        <ul>
                <li>channel tech is set with renegotiation parameters using AST_OPTION_FORMAT_RENEGOTIATE</li>
                <li>channel tech attempts renegotiation and reports result to renegotiation parameter result function
                <ul>
                        <li>on SUCCESS: new format is set for renegotiated stream and translation path goes away</li>
                        <li>on FAILURE: result function attempts renegotiation with bridged channel if possible, else translation must remain</li>
                </ul>
                </ul>
                </li>
        </ul>
        </li>
</ul>
<h3><a name="MediaArchitectureProposal-RenegotiationwithnoIntermediaryTranslation"></a>11.2.3. Renegotiation with no Intermediary Translation</h3>
<ul>
        <li>Make Compatible Flow of Events
        <ul>
                <li>ast_channel_make_compatible() is invoked</li>
                <li>channel's read and write formats are different for at least one stream</li>
                <li><b>NO</b> translation path is possible to make channels compatible</li>
                <li>query to AST_OPTION_FORMAT_RENEGOTIATE is made to both channels</li>
                <li>if a best candidate for renegotiation is found, renegotiation parameters are set on the channel using AST_OPTION_FORMAT_RENEGOTIATE</li>
                <li>channel is bridged</li>
                <li>media for incompatible streams is blocked for a period of time while renegotiation takes place</li>
        </ul>
        </li>
        <li>Asynchronous Renegotiation Flow of Events
        <ul>
                <li>channel tech is set with renegotiation parameters using AST_OPTION_FORMAT_RENEGOTIATE.</li>
                <li>channel tech attempts renegotiation and reports result to renegotiation parameter result function
                <ul>
                        <li>on SUCCESS: new format is set for renegotiated stream and translation path goes away</li>
                        <li>on FAILURE: result function attempts renegotiation with bridged channel if possible</li>
                </ul>
                </li>
                <li>if renegotiation fails on both channels then, depending on the stream in question, media is either blocked indefinitely or both channels are hung up</li>
        </ul>
        </li>
</ul>
<h1><a name="MediaArchitectureProposal-ImplementationPhases"></a>12. Implementation Phases</h1>
<p>With a project of this size, it is important to break down the implementation into manageable phases. Each phase of development contains a set of steps which act as milestones. These steps must be small enough to be attainable within a one to two week period, but complete enough not to break any Asterisk functionality once they are introduced. Once a step is complete, it should be reviewed and committed into trunk. This allows progress to be made in a maintainable way.</p>
<h2><a name="MediaArchitectureProposal-Phase1%3ARearchitecthowmediaisrepresentedandhowtranslationpathsarebuilt"></a>12.1. Phase 1: Re-architect how media is represented and how translation paths are built</h2>
<p>From the user perspective, no functionality changes will be present during this phase.</p>
<ul>
        <li>Step 1
        <ul>
                <li>Define new format unique ID system using numbers rather than bits. Allow this definition to remain unused during this step except by the new APIs.</li>
                <li>Create Ast Format API + unit tests.</li>
                <li>Create Ast Capability API + unit tests.</li>
                <li>Create IAX2 Conversion layer for ast_format and ast_cap objects. Create unit tests and leave this layer inactive until conversion to new APIs takes place.</li>
        </ul>
        </li>
</ul>
<ul>
        <li>Step 2
        <ul>
                <li>Define translation cost table.</li>
                <li>Revise all translation modules to provide predefined cost values for translators.</li>
                <li>Replace current matrix algorithm with new matrix algorithm using predefined costs.</li>
                <li>Create table for mapping format ids to matrix index values. This is required once the conversion from the format bit field representation to a numeric value takes place and will allow for a smoother transition.</li>
        </ul>
        </li>
</ul>
<ul>
        <li>Step 3
        <ul>
                <li>Replace the old format unique ID system with the new system. This will temporarily break all Asterisk media functionality.</li>
                <li>Add media functionality back into Asterisk by replacing all instances of format_t with ast_format and ast_cap.</li>
                <li>Completely remove the format_t typedef.</li>
        </ul>
        </li>
</ul>
<h2><a name="MediaArchitectureProposal-Phase2%3AExercisethefunctionalityintroducedbyformatswithattributes"></a>12.2. Phase 2: Exercise the functionality introduced by formats with attributes</h2>
<p>This is done by introducing the SILK codec and allowing H.264 to be negotiated with format attributes.</p>
<ul>
        <li>Step 1
        <ul>
                <li>Define SILK format in Asterisk.</li>
                <li>Create SILK format attribute interface.</li>
                <li>Make SILK translators to and from signed linear.</li>
                <li>Add the ability to define custom media formats with attributes in user configuration.</li>
                <li>Extend the rtp mapping code to allow chan_sip to advertise SILK appropriately in SDPs.</li>
        </ul>
        </li>
</ul>
<ul>
        <li>Step 2
        <ul>
                <li>Create H.264 format attribute interface.</li>
                <li>Extend codecs.conf to allow custom H.264 format definitions.</li>
                <li>Extend chan_sip to be able to correctly advertise and negotiate H.264 with attributes in SDPs.</li>
        </ul>
        </li>
</ul>
<h2><a name="MediaArchitectureProposal-Phase3%3AExtendAsterisktohandlemultiplemediastreams"></a>12.3. Phase 3: Extend Asterisk to handle multiple media streams</h2>
<ul>
        <li>Step 1
        <ul>
                <li>Create Ast Channel Stream API</li>
                <li>Define default audio stream by replacing current audio stream formats and translators on a channel with an ast_channel_stream structure.</li>
                <li>Define default video stream by introducing a new ast_channel_stream structure used solely for negotiating the primary video stream.</li>
        </ul>
        </li>
</ul>
<ul>
        <li>Step 2
        <ul>
                <li>Add the stream id field to the ast_frame structure.</li>
                <li>Block the ability to read anything other than the default streams with all current channel drivers and applications.</li>
                <li>Introduce new ast_read functionality for reading auxiliary streams when it is explicitly requested.</li>
        </ul>
        </li>
</ul>
<ul>
        <li>Step 3
        <ul>
                <li>Exercise the new ability to build video translation paths using an FFMPEG addon translation module.</li>
        </ul>
        </li>
</ul>
<h2><a name="MediaArchitectureProposal-Phase4%3AFormatRenegotiationaftercallsetup"></a>12.4. Phase 4: Format Renegotiation after call setup</h2>
<p>Allowing calls to renegotiate their media formats after call setup is perhaps the most practical piece of functionality introduced by this project. Due to the way multiple media streams will be represented in Asterisk, the ability to represent multiple streams is a prerequisite for implementing format renegotiation correctly. That is the primary reason for pushing the introduction of this functionality back to a later phase.</p>
<ul>
        <li>Step 1
        <ul>
                <li>Re-evaluate design. Define use cases and prove concept with a set of sequence diagrams.</li>
                <li>Test interoperability of renegotiation use cases using sipp scenarios against common SIP devices.</li>
        </ul>
        </li>
</ul>
<ul>
        <li>Step 2
        <ul>
                <li>Implement core functionality changes required to detect and attempt format renegotiation with channel techs.</li>
                <li>Implement chan_sip configuration options and functionality required to allow format renegotiation triggered by the Asterisk core to occur after call setup.</li>
        </ul>
        </li>
</ul>
</div>
<div id="commentsSection" class="wiki-content pageSection">
<div style="float: right;">
<a href="https://wiki.asterisk.org/wiki/users/viewnotifications.action" class="grey">Change Notification Preferences</a>
</div>
<a href="https://wiki.asterisk.org/wiki/display/AST/Media+Architecture+Proposal">View Online</a>
|
<a href="https://wiki.asterisk.org/wiki/pages/diffpagesbyversion.action?pageId=9568381&revisedVersion=110&originalVersion=109">View Changes</a>
|
<a href="https://wiki.asterisk.org/wiki/display/AST/Media+Architecture+Proposal?showComments=true&showCommentArea=true#addcomment">Add Comment</a>
</div>
</div>
</div>
</div>
</div>
</body>
</html>