[asterisk-speech-rec] DTMF CRM:00110073

Randal Rand RandalRand at LumenVox.com
Thu Feb 22 18:00:32 MST 2007


Hi David,

DTMF tones do not get passed to the LumenVox Speech Engine; only human
speech does.  What happens inside the Asterisk code is that as soon as
the very first DTMF tone is detected, speech recognition is turned off.
Then, when the caller has finished entering DTMF, the resulting string
of digits is returned by SPEECH_TEXT(0).  Once you have that string you
can parse it against one of the included DTMF grammars, which will
interpret the string into something meaningful - such as a phone
number, date, time, currency, etc.  For this type of interpretation you
would have to go through AGI; I don't think the dialplan can handle
that.
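
To make the flow concrete, here is a minimal dialplan sketch using the
standard Asterisk speech API applications (the grammar name, grammar
path, and prompt file here are just placeholders):

exten => s,1,SpeechCreate()                              ; allocate a speech object
exten => s,n,SpeechLoadGrammar(digits,/tmp/digits.gram)  ; placeholder grammar
exten => s,n,SpeechActivateGrammar(digits)
exten => s,n,SpeechBackground(please-enter-number,15)    ; play prompt, then listen
exten => s,n,NoOp(Result: ${SPEECH_TEXT(0)})             ; digit string if DTMF was dialed
exten => s,n,SpeechDestroy()

From there you would hand ${SPEECH_TEXT(0)} to your AGI script for the
interpretation step described above.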


Randal Rand
Speech Application Developer
LumenVox
P: 877-977-0707, just say "Randal"
F: 858-707-7072
RandalRand at LumenVox.com
www.LumenVox.com
 
Tell us what is important to you!  Take our product development survey:
http://www.LumenVox.com/survey/srDevelopment/index.asp

-----Original Message-----
From: asterisk-speech-rec-bounces at lists.digium.com
[mailto:asterisk-speech-rec-bounces at lists.digium.com] On Behalf Of David
Brazier
Sent: Thursday, February 22, 2007 2:54 PM
To: Use of speech recognition in Asterisk
Subject: RE: [asterisk-speech-rec] DTMF

I understand that Lumenvox SRE supports DTMF and voice grammars, but I'm
still not clear if the Asterisk/Lumenvox interface is designed to
support passing DTMF events to Lumenvox.

David

Joshua Colp wrote:
> David Brazier wrote:
> > Thanks for the explanation.  So this means there is no point loading a
> > DTMF grammar in the SRE?
> 
> Pretty much.

Joshua Colp wrote:
> DTMF gets returned as a regular speech result from the dtmf grammar
> with a score of 1000.
> 
> You could do a check on that to see if they entered DTMF or did speech.
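
In dialplan terms that check might look something like this (a rough
sketch; the priority labels are only illustrative):

exten => s,n,GotoIf($["${SPEECH_SCORE(0)}" = "1000"]?dtmf)  ; 1000 => DTMF result
exten => s,n,NoOp(Spoken input: ${SPEECH_TEXT(0)})
exten => s,n,Goto(done)
exten => s,n(dtmf),NoOp(Dialed digits: ${SPEECH_TEXT(0)})
exten => s,n(done),NoOp()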

Stephen Keller wrote:
> The purpose of DTMF grammars is so that once the platform (in this case
> Asterisk) recognizes DTMF, you can get semantic interpretation from the
> Engine regardless of whether the user spoke or dialed.  I.e. you can
> return the same semantic interpretation to your application regardless
> of whether your user said "One" or pressed 1.  It sounds like this is
> what you would want for your application -- this way all semantic
> interpretation is handled in grammars and not by your application.
>
> I am not sure how to pass recognized DTMF from Asterisk to the Engine
> for semantic interpretation.  Perhaps Josh or somebody else on the list
> is more familiar with this?