[asterisk-dev] AST_FRAME_DIGITAL

Sergio Garcia sergio.garcia at fontventa.com
Fri Sep 14 02:04:58 CDT 2007




---------- Original Message ----------------------------------
From: Russell Bryant <russell at digium.com>
Reply-To: Asterisk Developers Mailing List <asterisk-dev at lists.digium.com>
Date:  Thu, 13 Sep 2007 18:12:38 -0500

>Matthew Fredrickson wrote:
>>> So, if I get it right - there is no need to introduce AST_FRAME_DIGITAL 
>>> as it is already there (but named AST_FRAME_MODEM)?
>> 
>> Yes, basically.  Look in include/frame.h in asterisk 1.4 sources.  There 
>> are already subclasses defined for T38 and V150.  I'm thinking that an 
>> extension to this frametype would give us what we want.  Then an 
>> extension to the translator architecture so that we can make translators 
>> for frames other than AST_FRAME_VOICE.
>
>I realize that this exists, but the question is whether it makes sense to do
>so, when the stream itself is actually voice and video.  There is _a lot_ of code
>in Asterisk that expects audio and video to be handled in a certain way, and
>this is an extremely different way to approach it.  That is why I was trying to
>push it that direction.
>
>Later on the thread, I see that there actually is a way to get the stream broken
>down into raw voice and video streams by running it through an application which
>creates a local channel back into asterisk.  I didn't know this before.  It's
>certainly very much different than how things are done anywhere else, but it's
>a clever solution.  Knowing that this piece of the puzzle is in place makes me
>much more comfortable with the method being proposed, but I don't think I'm
>convinced yet.  I wish I had understood all of the details of what existed at
>the beginning of all of this.  :)
>
>AST_FRAME_MODEM or DIGITAL or whatever is not going to work without a lot of
>extra effort.  However, as has been suggested, creating an AST_FORMAT_H223 would
>do it.  It's a hack, but you'd have to put the data in an AST_FRAME_VOICE with a
>subclass of AST_FORMAT_H223.  In that case, Asterisk would happily pass it
>through without transcoding it, since it has no codec module to handle it.
>
>I don't want to keep this feature from being available to users of Asterisk, I
>just want to make sure it is done the most flexible way.
>

So do I; excuse me if I've seemed rude at times, but getting a "no" without any
apparent reason is something that makes me quite mad.. heh..
By the way, perhaps it would be a good idea to set up some kind of online
meeting so we can communicate more fluently and avoid some misunderstandings..

>Humor me for a bit longer and help me understand why one way requires a lot more
>code in the channel drivers than the other.  In the proposed method, you are
>reading the data and stuffing them in H223 frames and passing them into
>Asterisk.  Now, if the code that the application is using to decode and encode
>the H223 data is put into a library, why is it really any more invasive to the
>channel drivers to do the decoding there?
>
>ISDN H223 data -> chan_zap, put it into H223 frames -> Asterisk core
>
>versus
>
>ISDN H223 data -> chan_zap -> H223decoder to VOICE/VIDEO frames -> Asterisk core
>

There are some problems I see with the all-in-channel approach, though I recognize it
would be a transparent and simple way of integrating this into Asterisk.
I also think we could lose some functionality, or flexibility, by doing that:
for example, you wouldn't be able to access the H.223 stream directly, and
there could be situations where you need to do something different with it
(like bridging, dumping, or something like that). You would lose that
functionality, or have to code it in each channel driver.

The main problem I see is non-technical: my library is not going to be
included (at least in the near term) in the Asterisk code. So if I put the code
in the channel drivers, I would have to maintain a lot of patches myself, and with
each new release of Asterisk I would have to check that it still works. That's
quite hard, because I don't even have an ISDN line.. heh.. So, jokes aside, from
my point of view it's a LOT simpler to be able to code everything in an application,
which is something I know won't be difficult to maintain and is very easy to plug
into Asterisk.
If a translator architecture is developed, that would be just as easy to maintain as well.


>Also, is the stream really encoded in such a way that is very much
>computationally expensive to do the encoding and decoding of the stream?  That
>is a much better argument for avoiding the decoding/encoding when possible, in
>my opinion, if that is the case.  Would you hit a CPU bottleneck from this
>decode/encode process before you would hit a limit on how much ISDN hardware you
>can put in a box?
>

No, the muxing/demuxing is not really CPU intensive; transcoding and AMR encoding/decoding
are, but you could do those in another box with the demuxed stream. Another good reason
you might want to keep it separate in another box is stability and security.
Imagine that your Asterisk PBX offers some kind of voice service and you would like
to use the same ISDN line to offer a demo video service. I don't think it's a clever
idea to include the h324m library in your production server, so you just set up another
Asterisk server to handle the video part and simply bridge the video calls to it.
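The split deployment described above could look something like this in the production box's dialplan (the hostnames, context names, and extension patterns are all hypothetical; only the Dial application is stock Asterisk):

```
; extensions.conf on the production PBX (hypothetical names throughout)
[from-isdn]
; Plain voice calls stay on this box.
exten => _1X.,1,Dial(SIP/voiceagents)
; The demo video DID is bridged untouched to a separate test box
; running the h324m code, keeping that library off the production server.
exten => 5550100,1,Dial(IAX2/videobox/${EXTEN})
```

The production server never loads the H.324M library at all; it only bridges the call, so a crash in the experimental video code cannot take the voice service down with it.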

Best regards
Sergio 


