[asterisk-app-dev] WebSocket Stasis Control Best Practice
Krandon
krandon.bruse at gmail.com
Wed Jun 18 14:49:26 CDT 2014
On Wednesday, June 18, 2014 at 12:16 PM, Ben Klang wrote:
> Excuse my somewhat tardy reply to this thread, but since you brought up AMD:
>
> On Jun 16, 2014, at 11:47 AM, Ben Langfeld <ben at langfeld.me (mailto:ben at langfeld.me)> wrote:
> > On Sun, Jun 15, 2014 at 9:24 PM, Krandon <krandon.bruse at gmail.com (mailto:krandon.bruse at gmail.com)> wrote:
> > > Hello Asterisk friends,
> > >
> > > I am currently interfacing with Asterisk through ARI and loving the experience so far. I have successfully originated calls and dumped them into my Stasis app. I am trying to figure out what the best way is to send a channel into an Application. The current architecture for /channels/{id}/play works well for the majority of my app, but I am running into a block figuring out how to interact with Asterisk dialplan applications.
> > >
> > > To give an example - I submit an originate to go to SIP/vendor/phoneNumber - with the other leg going to App: myStasisApp, {"soundFile":"blah"}. That works fine (with the proper quote escaping). Now my Stasis app has received the channelID to which we can do a lot of neat stuff. Say I play a sound to the user but then want to call the app WaitForSilence. What's the best way to do this? I may be misinterpreting the intended use of both Stasis and ARI - but I am curious to see what your thoughts are.
> > >
> > > Also, for the stasis app to get a list of arguments, I am passing it through as JSON. So far that is working fine - but I wanted to see if there was a better way to get a list/array of app args to Stasis.
> > >
> > > Forgive me if there is an easy solution - through digging and poking the last few days, I have not been able to find the intended use case or even a use case.
> > >
> > >
> >
> > Well, the solution for this just got added into the Asterisk 12 branch, and so it hasn't made it into a release yet. It should be coming soon in Asterisk 12.4.0.
> >
> > The TALK_DETECT [1] function enables AMI/ARI events [2] [3] [4] [5] on a channel, such that a connected ARI application receives notifications over the WebSocket when a person starts/stops talking. This lets you asynchronously 'know' when both talking/silence has occurred - obviating the need for the WaitForSilence/WaitForNoise dialplan applications. Plus, because it is asynchronous, if you decide you don't *want* to wait for silence, you don't have to!
> >
> > With a bit of manipulation, you could also construct AMD from this as well - but I'll admit that's a bit more challenging. I'd be interested in people's experiences with attempting to do that, and if an asynchronous "IS_HUMAN" detection function is needed or not.
>
> We are in the process right now of creating an application that needs asynchronous AMD. Specifically, we are implementing LumenVox’s CPA product[1] and the use case is this:
I have played around with LumenVox CPA before - it's the best (in terms of accuracy) that I've used before.
>
> * Reminder call is placed to recipient
> * Recipient answers (don’t yet know if it is a human or a machine)
> * Outgoing message begins to play
> * If a human is detected, stop playback and connect to an agent
> * If a machine is detected, keep playing back until…
> * If a beep is detected, stop and restart playback
>
>
>
You start playing the message to the user immediately, and if it's determined to be a human, then you connect them to an agent? Is the reasoning behind doing this instead of starting the call with WaitForSilence or AMD because you want the user to not have a delay when they answer the phone and when the message is played? We've found through testing Lumenvox's CPA and Sangoma's, that the super simple WaitForSilence works _pretty_ well (especially with some threshold "tweaks"), but our use case is just the guarantee of message delivery, not a possible agent/queue scenario. Regardless of what our use case scenario is vs yours, I can definitely see a problem with asynchronous use. Let me know what you find!
I could imagine if you could get the stats from the CPA in real-time to change the path of the call, that would be really neat but impossible right now. Even then, I would imagine the manager interface may be better suited (like what we used for the implementation of Sangoma's CPA).
>
> The only way to achieve this is if we can have an async speech recognizer running while simultaneously playing output, which isn’t possible with Dialplan today, and would require a specialized app even if it were implemented that way. Instead, we are hoping to have a lower-level primitive to do signals detection and playback asynchronously.
>
> In an ideal world, ARI would provide primitives for playback (file or TTS) and input (DTMF or ASR). Some more background from discussion related to our project, courtesy Ben Langfeld:
>
>
> The asynchronous example is more complex. While Adhearsion sees both the input and output components as being asynchronous, this is a fake facility provided by Punchblock to make Asterisk look like an async server when it is not. Both components are implemented atop synchronous Asterisk dialplan applications:
>
>
> For output: Playback() or MRCPSynth()
> For input: MRCPRecog()
>
>
> This means that given the simplest approach to implementation discussed above, the output would be executed, followed by the input being queued and executed once the output had completed. If we were to swap the two, not only would we now have a coordination problem where we have to queue cancellation of the output to paper over the race condition introduced by potentially being asked to stop it before we have a handle on it, we would have the same blocking problem with MRCPRecog().
>
>
> So that rules out combining one of the UniMRCP dialplan applications with the Playback() application in this fashion. There are two other remaining solutions that come to mind:
>
>
> A prompt command to combine the output and input into a single dialplan application invocation (MRCPRecog() for native file playback, SynthAndRecog() for TTS). This avoids the problem of multiple dialplan applications blocking one another, but introduces a fresh one: these applications terminate output as soon as recognition completes (or earlier if barge-in is enabled). There is no opportunity to inject logic to filter the recognition result prior to terminating the output, nor do I think this would make sense.
>
>
>
> The Asterisk Speech API (SpeechLoadGrammar(), SpeechActivateGrammar(), SpeechStart(),SpeechBackground(), etc). If SpeechBackground() this would be the obvious solution, but it unfortunately is not. SpeechBackground() actually sits in a loop, directing audio frames to the recognizer while simultaneously rendering frames of audio (the first option is a file path). The app does not return until recognition has completed, so cannot be combined with Playback(). Upon recognition completion, the output will be terminated, regardless of the recognition result, so this suffers the same problem as Rayo Prompt. It is also not possible to use any other output renderer, such as a TTS engine via MRCP.
>
>
>
> > Can we implement Asterisk/Lumenvox CPA in way to be compatible with the adhearsion-cpa controller methods API?
> >
>
>
> The problems stated above leave us with only one option: extra capability must be introduced to Asterisk in order to handle simultaneous dialplan applications, or to introduce a true async version of SpeechBackground(). The viability of this is something that must be discussed with the Asterisk project / Digium. Note that FreeSWITCH already has this capability, but would also need less invasive changes to cope with LumenVox CPA as stated above; a far more approachable task.
>
>
> In short, the adhearsion-cpa API can be honoured for the synchronous detection case trivially. It cannot be honoured for the async case, nor can any equivalent alternative be introduced, without changes to Asterisk.
>
>
>
>
> [1]: http://www.lumenvox.com/products/speech_engine/cpa.aspx
>
> /BAK/
>
> --
> Ben Klang
> Principal/Technology Strategist, Mojo Lingo
> bklang at mojolingo.com (mailto:bklang at mojolingo.com)
> +1.404.475.4841
>
> Mojo Lingo -- Voice applications that work like magic
> http://mojolingo.com (http://mojolingo.com/)
>
> Twitter: @MojoLingo
>
>
> >
> > [1] https://wiki.asterisk.org/wiki/display/AST/Asterisk+12+Function_TALK_DETECT
> > [2] https://wiki.asterisk.org/wiki/display/AST/Asterisk+12+ManagerEvent_ChannelTalkingStart
> > [3] https://wiki.asterisk.org/wiki/display/AST/Asterisk+12+ManagerEvent_ChannelTalkingStop
> > [4]https://wiki.asterisk.org/wiki/display/AST/Asterisk+12+REST+Data+Models#Asterisk12RESTDataModels-ChannelTalkingStarted
> > [5]https://wiki.asterisk.org/wiki/display/AST/Asterisk+12+REST+Data+Models#Asterisk12RESTDataModels-ChannelTalkingFinished
> >
> > Matt
> >
> > --
> _______________________________________________
> asterisk-app-dev mailing list
> asterisk-app-dev at lists.digium.com (mailto:asterisk-app-dev at lists.digium.com)
> http://lists.digium.com/cgi-bin/mailman/listinfo/asterisk-app-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.digium.com/pipermail/asterisk-app-dev/attachments/20140618/4e9a20fd/attachment-0001.html>
More information about the asterisk-app-dev
mailing list