[Asterisk-video] Re: [asterisk-dev] Video packetization proposal
Mihai Balea
mihai at hates.ms
Fri Jun 1 15:32:39 MST 2007
On Jun 1, 2007, at 6:22 PM, Sergio Garcia Murillo wrote:
> Hi Mihai, just a few comments.
>
> IMHO video packetization should be transparent to IAX (as it is in
> RTP), so I would only allow the data to be a full video packet (not
> transmit partial video packets or mix several packets in one IAX
> packet). If a codec doesn't support packetization, the specification
> should be outside the scope of this document.
Theora does not support packetization and it is, unfortunately, the
only free (as in beer and speech) video codec. I believe that if we
fail to address the issue of packetization, we stunt the development
of open source video solutions. That being said, I tried to keep
video-specific information out of the generic header and put it in
the extended portion. The video-specific flags I describe in the
document should be interpreted as a suggestion, as some codecs like
H.264 might not need them.
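
Just to make that concrete, here is a rough C sketch of the kind of
split I have in mind; the struct names, field names and widths are
made up for illustration, not the actual wire format:

    #include <stdint.h>

    /* Illustrative only: not the proposed wire layout. */
    struct iax_video_generic {
            uint16_t ts_offset;     /* ms since the first video frame of the call */
            uint16_t seqno;         /* per-stream sequence number */
    };

    /* Video-specific extension; codecs like H.264 that do not need
     * these hints can simply ignore them. */
    struct iax_video_ext {
            uint8_t end_of_frame:1; /* last fragment of this video frame */
            uint8_t frame_type:2;   /* 0 = I, 1 = P, 2 = B (suggestion only) */
            uint8_t reserved:5;
    };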
>
> As in RTP, only one bit is really needed to signal the last packet
> of a frame. You can detect packet losses and frame changes with
> sequence numbers and timestamps.
Yes, but if one extra bit makes my life as a programmer easier, I
would go for that.
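
To illustrate why the extra bit helps, here is a rough reassembly
sketch in C; the marker bit and the helper are hypothetical, not
anything from existing code:

    #include <stdint.h>
    #include <string.h>

    #define MAX_FRAME 262144

    struct reassembler {
            uint8_t buf[MAX_FRAME];
            size_t len;
            uint16_t expected_seq;
    };

    /* Returns 1 when buf holds a complete video frame. */
    static int push_fragment(struct reassembler *r, const uint8_t *data,
                             size_t n, uint16_t seq, int end_of_frame)
    {
            if (seq != r->expected_seq) {
                    /* Lost a fragment: drop the partial frame and resync. */
                    r->len = 0;
            }
            r->expected_seq = seq + 1;

            if (r->len + n <= MAX_FRAME) {
                    memcpy(r->buf + r->len, data, n);
                    r->len += n;
            }

            /* With the marker bit we know right away that the frame is
             * done; relying on sequence numbers and timestamps alone
             * means waiting for the first fragment of the next frame. */
            return end_of_frame;
    }

With the marker you can hand the frame to the decoder as soon as the
last fragment arrives, instead of holding it until the next frame
starts.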
> I, P, B frame indication is not needed, as it should be the decoder
> that is in charge of that.
Some applications benefit from knowing the type of frame without
actually decoding it. One example would be video conferencing
applications, and more specifically app_conference. When you want to
switch the video stream from one person to another, you want to do it
on a key frame, otherwise you get garbage.
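
Something along these lines, as a rough sketch (the frame_type flag
and the function are made up for illustration):

    enum frame_type { FRAME_I, FRAME_P, FRAME_B };

    struct video_switch {
            int current_speaker;
            int pending_speaker;    /* -1 when no switch is requested */
    };

    /* Forward a participant's video only if they are the active
     * source; complete a pending switch only on an I-frame so viewers
     * never start decoding mid-GOP. */
    static int should_forward(struct video_switch *vs, int speaker,
                              enum frame_type type)
    {
            if (vs->pending_speaker == speaker && type == FRAME_I) {
                    vs->current_speaker = speaker;
                    vs->pending_speaker = -1;
            }
            return speaker == vs->current_speaker;
    }

Without the frame-type hint, app_conference would have to parse (or
partially decode) the codec payload just to find the key frames.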
> The timestamp conundrum: this is quite a hard problem. As you have
> defined it in your document, the IAX field would be a timer reference
> or frame duration instead of a timestamp. If you do it that way,
> you'll have a big problem if you want to translate from RTP to IAX.
> The only way you can do it is by storing all the packets of a frame
> until you've got the first packet of the next video frame; then (and
> only then) you could calculate your "timestamp" field.
It is not the frame duration, it is the time difference (in ms)
between the time of transmission of this frame and the time of
transmission of the first frame in the call.
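
In other words, the sender can fill it in on its own, with no
look-ahead to the next frame. A minimal sketch of what I mean
(call_start is assumed to be recorded when the first video frame is
sent):

    #include <stdint.h>
    #include <sys/time.h>

    static struct timeval call_start;  /* set at the first video frame */

    /* Millisecond offset of "now" relative to the first video frame;
     * this is the value the timestamp field would carry. */
    static uint32_t video_ts_offset(void)
    {
            struct timeval now;
            gettimeofday(&now, NULL);
            return (uint32_t)((now.tv_sec - call_start.tv_sec) * 1000 +
                              (now.tv_usec - call_start.tv_usec) / 1000);
    }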
> Also, it has another problem: if a frame has more than one packet,
> are you going to set the duration on the first one, or on every one?
My proposal does not allow multiple video frames in one IAX2 frame.
RTP packetization for H.264 and Theora (at least Xiph's proposal)
does allow that, but I believe that video frames are large enough to
be transported one per IAX frame (or one across multiple IAX frames).
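
A rough sketch of what I mean by one frame per one or more IAX frames
(IAX_VIDEO_MTU and send_iax_frame() are placeholders, not a real API):

    #include <stddef.h>
    #include <stdint.h>

    #define IAX_VIDEO_MTU 1400  /* illustrative payload limit */

    /* Placeholder for whatever the stack actually provides. */
    extern void send_iax_frame(const uint8_t *data, size_t len,
                               int end_of_frame);

    /* Split one encoded video frame across as many IAX frames as
     * needed; two video frames are never packed together. */
    static void send_video_frame(const uint8_t *frame, size_t len)
    {
            while (len > IAX_VIDEO_MTU) {
                    send_iax_frame(frame, IAX_VIDEO_MTU, 0);
                    frame += IAX_VIDEO_MTU;
                    len -= IAX_VIDEO_MTU;
            }
            send_iax_frame(frame, len, 1);  /* mark the last fragment */
    }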
Mihai