[asterisk-dev] AST_FRAME_DIGITAL

Russell Bryant russell at digium.com
Thu Sep 13 18:12:38 CDT 2007


Matthew Fredrickson wrote:
>> So, if I get it right - there is no need to introduce AST_FRAME_DIGITAL 
>> as it is already there (but named AST_FRAME_MODEM)?
> 
> Yes, basically.  Look in include/asterisk/frame.h in the Asterisk 1.4 sources.  There 
> are already subclasses defined for T38 and V150.  I'm thinking that an 
> extension to this frametype would give us what we want.  Then an 
> extension to the translator architecture so that we can make translators 
> for frames other than AST_FRAME_VOICE.
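
(For reference, the bits of include/asterisk/frame.h being referred to look
roughly like this; paraphrased from memory, not copied verbatim:)

    /* Paraphrased from include/asterisk/frame.h (Asterisk 1.4) */
    enum ast_frame_type {
            ...
            AST_FRAME_VOICE,   /* raw audio; subclass is an AST_FORMAT_* bit */
            AST_FRAME_VIDEO,   /* raw video; subclass is an AST_FORMAT_* bit */
            AST_FRAME_MODEM,   /* modem data; subclass is one of the below   */
            ...
    };

    /* Subclasses for AST_FRAME_MODEM */
    #define AST_MODEM_T38   1  /* T.38 fax-over-IP    */
    #define AST_MODEM_V150  2  /* V.150 modem-over-IP */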

I realize that this exists, but the question is whether it makes sense to do so,
when the stream itself is actually voice and video.  There is _a lot_ of code
in Asterisk that expects audio and video to be handled in a certain way, and
this is an extremely different way to approach it.  That is why I was trying to
push it in that direction.

Later in the thread, I see that there actually is a way to get the stream broken
down into raw voice and video streams by running it through an application that
creates a local channel back into Asterisk.  I didn't know this before.  It's
certainly very different from how things are done anywhere else, but it's a
clever solution.  Knowing that this piece of the puzzle is in place makes me
much more comfortable with the method being proposed, but I don't think I'm
convinced yet.  I wish I had understood all of the details of what existed at
the beginning of all of this.  :)

AST_FRAME_MODEM or DIGITAL or whatever is not going to work without a lot of
extra effort.  However, as has been suggested, creating an AST_FORMAT_H223 would
do it.  It's a hack: you would put the data in an AST_FRAME_VOICE with a
subclass of AST_FORMAT_H223.  In that case, Asterisk would happily pass it
through without transcoding it, since it would have no codec module that could
handle it.
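
To make that concrete, the read path in a channel driver would look something
like this.  This is just a sketch: AST_FORMAT_H223 does not exist today and
would need a format bit allocated in frame.h, and the function name and the
pvt's frame member are made up.

    /* Sketch only.  AST_FORMAT_H223 is hypothetical, and zt_read_h223()
     * and p->f are made-up names for illustration. */
    static struct ast_frame *zt_read_h223(struct zt_pvt *p,
                                          unsigned char *buf, int len)
    {
            struct ast_frame *f = &p->f;      /* per-channel frame storage */

            memset(f, 0, sizeof(*f));
            f->frametype = AST_FRAME_VOICE;   /* pretend it is audio...    */
            f->subclass  = AST_FORMAT_H223;   /* ...in a format no codec   */
                                              /* module knows about        */
            f->data      = buf;               /* raw H223 mux data         */
            f->datalen   = len;
            f->samples   = len;               /* 8 kHz, one byte per sample */
            f->src       = "chan_zap";

            return f;   /* the core passes it along untouched */
    }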

I don't want to keep this feature from being available to users of Asterisk, I
just want to make sure it is done the most flexible way.

Humor me for a bit longer and help me understand why one way requires a lot more
code in the channel drivers than the other.  In the proposed method, you are
reading the data, stuffing it into H223 frames, and passing those into
Asterisk.  Now, if the code that the application uses to decode and encode the
H223 data is put into a library, why is it really any more invasive to do the
decoding in the channel drivers?

ISDN H223 data -> chan_zap, put it into H223 frames -> Asterisk core

versus

ISDN H223 data -> chan_zap -> H223decoder to VOICE/VIDEO frames -> Asterisk core
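
With the decoder in a library, the second path is only a handful of extra calls
in the read path.  Something like the following, where every h223_* name is
invented for the sake of illustration (only the shape of the code matters):

    /* Sketch of decoding in the channel driver.  The h223_* demuxer API
     * is hypothetical; ast_queue_frame() copies the frame it is given,
     * so handing it a stack frame like this is fine. */
    struct h223_pdu pdu;

    h223_demux_feed(p->demux, buf, len);           /* raw ISDN data in  */

    while (h223_demux_read(p->demux, &pdu) > 0) {  /* logical chans out */
            struct ast_frame f = { 0, };

            if (pdu.kind == H223_AUDIO) {
                    f.frametype = AST_FRAME_VOICE;
                    f.subclass  = AST_FORMAT_SLINEAR; /* audio decoded to slin */
            } else {
                    f.frametype = AST_FRAME_VIDEO;
                    f.subclass  = AST_FORMAT_H263;    /* video passed through  */
            }
            f.data    = pdu.data;
            f.datalen = pdu.len;
            f.samples = pdu.samples;

            ast_queue_frame(p->owner, &f);
    }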

Also, is the stream really encoded in such a way that encoding and decoding it
is computationally expensive?  If so, that is a much better argument for
avoiding the decoding/encoding when possible, in my opinion.  Would you hit a
CPU bottleneck from this decode/encode process before you would hit a limit on
how much ISDN hardware you can put in a box?

If it is not that computationally expensive, then the code would actually end up
being a lot simpler and easier to maintain, since you could avoid having to
create the local channel back into Asterisk that carries the decoded stream when
you want to use it.

-- 
Russell Bryant
Software Engineer
Digium, Inc.


