<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<font face="Courier New, Courier, monospace">Everybody,<br>
<br>
I've recently discovered openai/whisper and have been trying in
earnest to get it working with Asterisk for voicemail transcription
(I'm currently using the Nerd Vittles script with IBM Watson).<br>
<br>
<a class="moz-txt-link-freetext" href="https://github.com/openai/whisper">https://github.com/openai/whisper</a><br>
<br>
After spending several hours today, I've successfully integrated
my home Asterisk 16 voicemail with Whisper.<br>
<br>
I followed these instructions for setting up a Whisper API server:<br>
<br>
<a class="moz-txt-link-freetext" href="https://blog.deepgram.com/how-to-build-an-openai-whisper-api/">https://blog.deepgram.com/how-to-build-an-openai-whisper-api/</a><br>
<br>
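To give a rough idea, the server ends up looking something like the
sketch below. The route name, model size, and port are just what I
picked for testing, along the lines of that blog post:<br>
<br>
<pre>
# Minimal Whisper API server sketch (Flask + openai-whisper).
import tempfile
import whisper
from flask import Flask, request, jsonify

app = Flask(__name__)
model = whisper.load_model("base")  # uses the GPU if CUDA is available

@app.route("/whisper", methods=["POST"])
def transcribe():
    results = []
    for filename, handle in request.files.items():
        # transcribe() wants a file path, so spool the upload to disk first
        with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
            handle.save(tmp.name)
            result = model.transcribe(tmp.name)
        results.append({"filename": filename, "transcript": result["text"]})
    return jsonify(results=results)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
</pre>
<br>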
Initially, I set up a quad-core VM to test this with, but discovered
that without a dedicated card for inference it was horribly slow. So
I've set up testing on my desktop (Kubuntu 20), since I have an
nVidia GTX 1060 installed.<br>
<br>
For the integration with Asterisk, I'm using a slightly modified
version of the Nerd Vittles IBM Watson script,<br>
<br>
sendmailibm<br>
<br>
which can be found on their website:<br>
<br>
<a class="moz-txt-link-freetext" href="https://nerdvittles.com/free-asterisk-voicemail-transcription-with-ibms-stt-engine/">https://nerdvittles.com/free-asterisk-voicemail-transcription-with-ibms-stt-engine/</a><br>
<br>
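The change to that script basically amounts to posting the recorded
voicemail to the local Whisper endpoint instead of to IBM Watson and
dropping the returned text into the notification email. In Python
terms (host, port, and route matching the sketch above), the call is
roughly:<br>
<br>
<pre>
# Illustration only, not the actual script; the Watson call gets
# replaced by something equivalent to this.
import sys
import requests

def transcribe_voicemail(wav_path, api_url="http://127.0.0.1:5000/whisper"):
    with open(wav_path, "rb") as wav:
        response = requests.post(api_url, files={"voicemail": wav})
    response.raise_for_status()
    # The server above returns {"results": [{"filename": ..., "transcript": ...}]}
    return response.json()["results"][0]["transcript"]

if __name__ == "__main__":
    print(transcribe_voicemail(sys.argv[1]))
</pre>
<br>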
I will probably find a low-cost nVidia video card and set up a
standalone Linux box to handle this project.<br>
<br>
If you're interested in the details, let me know.<br>
<br>
Doug<br>
<br>
</font>
</body>
</html>