On Wed, Mar 18, 2009 at 11:06 AM, Olle E. Johansson <span dir="ltr"><<a href="mailto:oej@edvina.net">oej@edvina.net</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
18 mar 2009 kl. 17.57 skrev Gavin Henry:<br>
<div><div></div><div class="h5"><br>
> 2009/3/18 John Lange <<a href="mailto:john@johnlange.ca">john@johnlange.ca</a>>:<br>
>> On Wed, 2009-03-18 at 10:44 -0400, Mihai Balea wrote:<br>
>>> As far as I know, there is no test harness, so it's not surprising<br>
>>> that regressions pop up as often as they do.<br>
>><br>
>> I think this is the most important point. There should be a test bench<br>
>> that automatically confirms every (supported) feature is working before<br>
>> it's released. If the process were automated, it shouldn't take that<br>
>> long to run the tests.<br>
><br>
> Hi,<br>
><br>
> I think this is key and just about every open source project has a<br>
> "make test".<br>
><br>
> Take Perl and the CPAN. If your distribution doesn't have a "make test",<br>
> many people won't use your code, as they have no way of knowing<br>
> if your libs work.<br>
><br>
> I'm part of the OpenLDAP project and our Release & QA Engineer does<br>
> all this.<br>
><br>
> Does Asterisk have one/somebody?<br>
><br>
> Our Release & QA Engineer makes sure all the right patches are<br>
> applied to the<br>
> CVS branch and calls for testing. Our "make test" runs around 55<br>
> tests, but as usual<br>
> could do with many, many more. But this means we don't release if<br>
> there is a failure<br>
> and don't have any regressions.<br>
><br>
> I understand * will be hard to develop a test suite for, but we should<br>
> at least try to have a<br>
> test for every feature listed here:<br>
><br>
> <a href="http://www.asterisk.org/support/features" target="_blank">http://www.asterisk.org/support/features</a><br>
><br>
> We could use virtualisation, different architectures etc., dependent on<br>
> the feature we are testing, or even do smoke testing. I don't know, but<br>
> it would be good to know it works before it is released. We may even<br>
> need virtual devices etc.<br>
><br>
> Another great example is the Samba project:<br>
><br>
> <a href="http://build.samba.org/" target="_blank">http://build.samba.org/</a><br>
><br>
> People or companies interested in seeing their kit and/or platform<br>
> supported add a node to the build<br>
> farm and send the results in.<br>
><br>
> If the test suite is deployed like this, people desperate for testing<br>
> will add their servers.<br>
><br>
</div></div>If you search the mailing lists, you will see that this has been<br>
brought forward a large number of times. There have been efforts by<br>
multiple developers to start a test framework. The work that is in the<br>
code today is mostly murf's test suite for the AEL parser, plus a few others.<br>
<br>
Digium also runs a few test servers that run build tests, but not<br>
function tests<br>
as far as I know.</blockquote><div><br>I'll throw in a few opinions on this issue. Some of the projects mentioned<br>with nice test suites are pretty easy to run tests against: in all those<br>cases it's pure software, and you define input-output-compare<br>
tests to ensure no regressions arise.<br><br>But Asterisk is quite challenging. To do it right, I picture electro-mechanical<br>devices that will pick up a handset and dial physical analog phones, supplying<br>standard sound sequences, and recording sound into files. Maybe<br>
such a robotic phone is available for analog lines. For SIP phones, you'd want a client<br>that can be driven 'robotically', under the control of some sort of scripting language<br>or via a command-driven network interface like the manager interface, so you<br>
can script up dozens of standard calling sequences involving parking, transfers, call files,<br>with all their permutations of e.g. hookflash, call features, SIP phone buttons, etc.<br><br>You'd need sound file comparators that would judge sent-vs-received quality loss and<br>
echo level, plus dialplan scripts to run the tests and look for problems in real time. It's a<br>mind-boggling task! Any part of it is doable, but wow! Testing a program like Asterisk<br>with so many possible ins and outs will be a real challenge!<br>
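<br>To make the manager-interface idea a bit more concrete, here is a rough Python sketch of driving a test call over AMI (Asterisk's line-based TCP protocol, by default on port 5038). The host, account and dialplan names are hypothetical placeholders, and this is only a starting-point sketch, not a tested harness:<br>

```python
import socket

def build_action(action, **headers):
    """Format an AMI action: "Key: Value" lines ended by a blank line."""
    lines = ["Action: %s" % action]
    lines += ["%s: %s" % (key, value) for key, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

def originate_test_call(host, username, secret, channel, exten, context):
    """Log in to the manager interface and originate one call.
    All connection details here are hypothetical placeholders."""
    conn = socket.create_connection((host, 5038))
    try:
        conn.sendall(build_action("Login", Username=username,
                                  Secret=secret).encode())
        conn.sendall(build_action("Originate", Channel=channel, Exten=exten,
                                  Context=context, Priority=1).encode())
        # A real harness would parse the Response/Event stream here
        # rather than grabbing a single read.
        return conn.recv(4096).decode(errors="replace")
    finally:
        conn.close()
```

A harness built on something like this could originate calls into dedicated test dialplan contexts that exercise parking, transfers, call files and so on, and watch the event stream for failures.<br>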
<br>I've built dozens of large, complex commercial products that I had to maintain for years, with<br>highly critical customers in the thousands, and testing was an absolute requirement...<br><br>murf<br><br>A great project for any PBX.<br>
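<br>As a back-of-the-envelope sketch of the sound-file comparator idea above: real voice-quality scoring would use a perceptual measure such as PESQ, but even a plain sample-wise signal-to-noise ratio shows the shape of such a tool (the mono/16-bit WAV assumption is mine):<br>

```python
import math
import wave
import array

def read_samples(path):
    """Read a mono, 16-bit PCM WAV file into an array of samples."""
    with wave.open(path, "rb") as wav:
        assert wav.getnchannels() == 1 and wav.getsampwidth() == 2
        return array.array("h", wav.readframes(wav.getnframes()))

def snr_db(reference, degraded):
    """Crude quality score: SNR of the received audio against the
    reference, in dB. Higher is better; the two sample sequences are
    assumed to be time-aligned and of equal length."""
    noise = sum((r - d) ** 2 for r, d in zip(reference, degraded))
    signal = sum(r * r for r in reference)
    if noise == 0:
        return float("inf")  # bit-exact copy
    return 10.0 * math.log10(signal / noise)
```

A test run would record the far end of a call to a WAV file, then assert that `snr_db(read_samples(sent), read_samples(received))` stays above some threshold.<br>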
</div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
If you can step forward and help us build a foundation for a test<br>
suite, it would be greatly appreciated!<br>
<font color="#888888"><br>
/Olle<br>
</font><div><div></div><div class="h5"><br>
_______________________________________________<br>
--Bandwidth and Colocation Provided by <a href="http://www.api-digital.com--" target="_blank">http://www.api-digital.com--</a><br>
<br>
asterisk-dev mailing list<br>
To UNSUBSCRIBE or update options visit:<br>
<a href="http://lists.digium.com/mailman/listinfo/asterisk-dev" target="_blank">http://lists.digium.com/mailman/listinfo/asterisk-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Steve Murphy<br>ParseTree Corp<br><br>