[asterisk-dev] Test framework (changed topic)

John Todd jtodd at digium.com
Wed Mar 18 19:10:44 CDT 2009


On Mar 18, 2009, at 3:14 PM, Matt Riddell wrote:

> I think we could discuss this till we're all blue in the face.
>
> It seems that developing an architecture for something always goes
> round in circles.
>
> So, why don't we just start by writing one test for one function.
>
> Something simple to start with.
>
> So, let's start with sending manager commands and getting an
> expected response.
>
> The test would involve:
>
> 1. Create a manager.conf file
> 2. Connect to Asterisk manager (events off for this one)
> 3. Execute one application (let's say core show version)
> 4. Read response
> 5. Logoff
> 6. All output from manager should have been logged to file
> 7. We compare the expected response with the received response.
> Anything we received but didn't expect is a failure.  Anything we
> expected but didn't receive is a failure.
>
> This could be written with bash, using telnet or something, so it's
> quite simple.
>
> I realize that a framework would make it easier to develop multiple
> tests, but I'd rather see some tests actually created.
>
> If this sounds good, I can write this test - mind you, my regex
> isn't too great, and the example I picked would rely on it being
> correct.
>
> -- 
> Kind Regards,
>
> Matt Riddell
> Director

Sounds good!
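For concreteness, the seven steps above could be sketched as a shell
script along these lines.  The host, port, credentials, and the
expected.log transcript are all assumptions, not part of any real
setup:

```shell
#!/bin/sh
# Sketch of the manager test Matt describes.  Host, port, credentials,
# and expected.log are hypothetical placeholders.

HOST=${AMI_HOST:-localhost}
PORT=${AMI_PORT:-5038}
LOG=manager_test.log

# Emit one manager action: each argument becomes a "Header: value"
# line, CRLF-terminated, followed by the blank line ending the action.
ami_action() {
    for header in "$@"; do
        printf '%s\r\n' "$header"
    done
    printf '\r\n'
}

run_test() {
    # Steps 2-5: log in with events off, run one command, log off.
    {
        ami_action 'Action: Login' 'Username: testuser' \
                   'Secret: testpass' 'Events: off'
        sleep 1
        ami_action 'Action: Command' 'Command: core show version'
        sleep 1
        ami_action 'Action: Logoff'
        sleep 1
    } | nc "$HOST" "$PORT" > "$LOG"   # step 6: everything logged to file

    # Step 7: received-but-unexpected and expected-but-missing lines
    # both show up as diff output, which we treat as failure.
    if diff -u expected.log "$LOG" > diff.out 2>&1; then
        echo "PASS"
    else
        echo "FAIL (see diff.out)"
        return 1
    fi
}

# Only talk to a live Asterisk when explicitly asked to.
if [ "${1:-}" = "run" ]; then
    run_test
fi
```

Even this simple version could emit its PASS/FAIL in a fixed format,
which gets at the generalization point below.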

However, the concept of a "framework" is still valid, and maybe your
test could at least be written so that it outputs data in a way that
can be generalized later.  We kind of got into the mess we're in now
by not having a framework.  Let's put down a few things that anyone
running this test might be interested in:

  Name of test (shorthand?)
  Author of test (email address)
  Date the test was built
  Revisions of Asterisk this test should work with (list with regexp?)
  Primary target of testing (.c file name)
  Expected duration of test
  Output of test (pass/fail? text output? audio file? text file?)
  Iterative? How many iterations?
  Is this a test that should be run by itself on an otherwise
quiescent system, or can/should it be run at the same time as other
tests?
  Any OS limitations on this test? (works only with Linux, etc.)
  Any pre-requisite libraries or software required for this test?  
(python, lua, etc.)

Now, given the short list of metadata about the test provided above
(which is far from complete!), is there some way of expressing those
factors in a machine-accessible manner, such that "make test" will be
a meaningful thing to do in most circumstances?  Is XML the answer
here?  Should source code have embedded XML tags that point to the
test jig for that particular code?  This would certainly allow tighter
coupling between test harness revisions and code revisions, since I
suspect that what counts as a passing grade for things like chan_sip
will change significantly as time goes on and functionality improves.
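To make the XML question concrete, a descriptor for Matt's proposed
manager test might look something like the following.  Every element
name (and the email address) here is invented purely to show the
shape; nothing reads such a file today:

```xml
<!-- Hypothetical test descriptor; all element names are invented. -->
<test name="manager-core-show-version">
  <author>matt.riddell@example.com</author>
  <created>2009-03-18</created>
  <asterisk-versions>^1\.6\.</asterisk-versions>
  <target>main/manager.c</target>
  <duration seconds="5"/>
  <output>pass/fail</output>
  <iterations>1</iterations>
  <exclusive>no</exclusive>
  <os>any</os>
  <requires>sh, nc</requires>
</test>
```

A runner could collect these descriptors up front to decide which
tests even apply to a given revision and platform before executing
anything.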

If we start creating test jigs, there MUST be some meaningful way to  
tie all of them together, even if they're written with totally  
different scripting tools, expectations, or output formats.  The core  
"test" routines need to be able to know what to do with all the test  
events so that a summary can be presented and a single logfile can be  
maintained.

How about "make test channel sip"?  Or "make test app_meetme"?  Or
"make test channel all"?  Or "make test all"?  These all seem like
reasonable commands that would validate certain parts of Asterisk in a
limited way.
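As a sketch of how such commands might dispatch, assuming a wholly
invented layout where each test is an executable at
tests/<category>/<name>/run that exits nonzero on failure, and all
output is gathered into one summary logfile:

```shell
#!/bin/sh
# Hypothetical dispatch for "make test channel sip" and friends.
# The tests/<category>/<name>/run layout is invented for illustration.

run_tests() {
    category="${1:-all}"          # e.g. "channel", "app", or "all"
    name="${2:-*}"                # e.g. "sip", or any test if omitted
    [ "$category" = "all" ] && category="*"

    summary=test-summary.log
    : > "$summary"

    pass=0; fail=0
    # Unquoted variables so the shell expands them as globs.
    for script in tests/$category/$name/run; do
        [ -x "$script" ] || continue
        if "$script" >> "$summary" 2>&1; then
            pass=$((pass + 1))
        else
            fail=$((fail + 1))
        fi
    done
    echo "passed: $pass  failed: $fail (details in $summary)"
}

run_tests "$@"
```

The point is only that one entry script can fan out to tests written
with totally different tools, as long as each reports pass/fail
through its exit status.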

JT

---
John Todd                       email:jtodd at digium.com
Digium, Inc. | Asterisk Open Source Community Director
445 Jan Davis Drive NW -  Huntsville AL 35806  -   USA
direct: +1-256-428-6083         http://www.digium.com/
