[asterisk-dev] [Code Review]: testsuite: Fail Tokens

jrose reviewboard at asterisk.org
Wed Jan 30 15:46:44 CST 2013



> On Jan. 30, 2013, 3:17 p.m., Mark Michelson wrote:
> > This looks like a good foundation for evaluating passing results more accurately. The fun part of this is going to be to change the various test objects and test modules to use fail tokens. I suppose that's next?

That'd be a worthy assumption. I'm mostly focused on using them on tests I have in development right now rather than changing existing tests, but I've given a little thought to adding them to modules as well. For any given component it should really be a fairly simple change. Hopefully we don't unearth too many hidden bugs, but I wouldn't be surprised if adding these to all of our multi-component tests reveals some sources of false positives.
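
For illustration, a module that expects some event to occur before the test ends could end up looking something like this (just a sketch; the class name and the hook that delivers the event are made up, and only create_fail_token/remove_fail_token come from this change):

class ExpectedEventModule(object):
    """Illustrative pluggable module: the test should fail unless the
    event this module waits for actually shows up before the test ends."""

    def __init__(self, module_config, test_object):
        self.test_object = test_object
        # Register the failure up front. If nothing ever clears this
        # token, the test object reports failure and logs this message.
        self.fail_token = test_object.create_fail_token(
            "ExpectedEventModule: the expected event never arrived")

    def on_expected_event(self, event):
        # Called by whatever delivers the event to this module. Clearing
        # the token removes this module's reason to fail the test.
        self.test_object.remove_fail_token(self.fail_token)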


- jrose


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviewboard.asterisk.org/r/2302/#review7778
-----------------------------------------------------------


On Jan. 29, 2013, 4:40 p.m., jrose wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviewboard.asterisk.org/r/2302/
> -----------------------------------------------------------
> 
> (Updated Jan. 29, 2013, 4:40 p.m.)
> 
> 
> Review request for Asterisk Developers, Mark Michelson, Matt Jordan, and kmoore.
> 
> 
> Summary
> -------
> 
> I was flustered when I found out that pass/failure state is shared between all of the test modules in a test: once a single module sets a pass, you have to actively get failures in every other module for the test to fail. So I came up with this interesting little fix.
> 
> Test objects now contain a list of fail tokens. To add to this list, use the function 'create_fail_token(message)'. When called, it creates a new fail token containing a UUID and the supplied message and automatically adds it to the fail token list. It returns a reference to that fail token, which should be kept by its issuer so that it can be cleared later.
> 
> If any fail tokens remain in the list when the overall pass/failure of the test is evaluated, the test will automatically indicate failure and log the failure message that was given to create_fail_token for each remaining token.
> 
> Tokens are removed from the list with the remove_fail_token(failtoken) function, which should be passed the value returned by create_fail_token.
> 
> 
> Diffs
> -----
> 
>   /asterisk/trunk/lib/python/asterisk/TestCase.py 3617 
> 
> Diff: https://reviewboard.asterisk.org/r/2302/diff
> 
> 
> Testing
> -------
> 
> I added a few fail tokens to my callparking_timeout/comebacktoorigin_no test and observed what happened when I cleared none of them, any one of them, a subset of them, and all of them. In every case the right fail token(s) were cleared, and any remaining fail tokens caused the test to fail with the right messages logged. If no fail tokens were left over, the test passed, provided that it didn't set failure elsewhere.
> 
> 
> Thanks,
> 
> jrose
> 
>
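
For reference, the bookkeeping described in the quoted summary boils down to roughly the following. This is only an illustrative sketch, not the actual TestCase.py diff; the attribute and logger names here are guesses.

import logging
import uuid

LOGGER = logging.getLogger(__name__)


class FailTokenSketch(object):
    """Rough model of the fail token bookkeeping described above."""

    def __init__(self):
        self.fail_tokens = []
        self.passed = True

    def create_fail_token(self, message):
        """Create a token with a UUID and a message, add it to the list,
        and return it so the caller can clear it later."""
        token = {'uuid': uuid.uuid4(), 'message': message}
        self.fail_tokens.append(token)
        return token

    def remove_fail_token(self, fail_token):
        """Clear a token previously returned by create_fail_token."""
        if fail_token in self.fail_tokens:
            self.fail_tokens.remove(fail_token)

    def evaluate_passed(self):
        """Any token still in the list forces an overall failure and has
        its message logged."""
        for token in self.fail_tokens:
            LOGGER.error("Fail token not removed: %s" % token['message'])
            self.passed = False
        return self.passed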
