<html>
<body>
<div style="font-family: Verdana, Arial, Helvetica, Sans-Serif;">
<table bgcolor="#f9f3c9" width="100%" cellpadding="8" style="border: 1px #c9c399 solid;">
<tr>
<td>
This is an automatically generated e-mail. To reply, visit:
<a href="https://reviewboard.asterisk.org/r/2302/">https://reviewboard.asterisk.org/r/2302/</a>
</td>
</tr>
</table>
<br />
<blockquote style="margin-left: 1em; border-left: 2px solid #d0d0d0; padding-left: 10px;">
<p style="margin-top: 0;">On January 30th, 2013, 3:17 p.m., <b>Mark Michelson</b> wrote:</p>
<blockquote style="margin-left: 1em; border-left: 2px solid #d0d0d0; padding-left: 10px;">
<pre style="white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;">This looks like a good foundation for evaluating passing results more accurately. The fun part of this is going to be to change the various test objects and test modules to use fail tokens. I suppose that's next?</pre>
</blockquote>
</blockquote>
<pre style="white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;">That's a fair assumption. I'm mostly focused on using them in tests I have in development right now rather than on changing existing tests, but I've given a little thought to adding them to modules as well. For any given component it should be a fairly simple change. Hopefully we don't unearth too many hidden bugs, but I wouldn't be surprised if adding these to all of our multi-component tests reveals some sources of false positives.</pre>
<br />
<p>- jrose</p>
<br />
<p>On January 29th, 2013, 4:40 p.m., jrose wrote:</p>
<table bgcolor="#fefadf" width="100%" cellspacing="0" cellpadding="8" style="background-image: url('https://reviewboard.asterisk.org/media/rb/images/review_request_box_top_bg.png'); background-position: left top; background-repeat: repeat-x; border: 1px black solid;">
<tr>
<td>
<div>Review request for Asterisk Developers, Mark Michelson, Matt Jordan, and kmoore.</div>
<div>By jrose.</div>
<p style="color: grey;"><i>Updated Jan. 29, 2013, 4:40 p.m.</i></p>
<h1 style="color: #575012; font-size: 10pt; margin-top: 1.5em;">Description </h1>
<table width="100%" bgcolor="#ffffff" cellspacing="0" cellpadding="10" style="border: 1px solid #b8b5a0">
<tr>
<td>
<pre style="margin: 0; padding: 0; white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;">I was flustered when I found out that the pass/fail state is shared between all of the test modules in a test: once a single module sets a passing result, the test will only fail if some other module actively reports a failure. This change is a small fix for that.
Test objects now contain a list of fail tokens. To add to the list, call create_fail_token(message). This creates a new fail token containing a UUID and the supplied message, automatically adds it to the fail token list, and returns a reference to the token, which the issuer should keep so that the token can be cleared later.
If any fail tokens remain in the list when the overall pass/fail state of the test is evaluated, the test automatically fails, and the message supplied to create_fail_token is logged for each remaining token.
Tokens are removed from the list with the remove_fail_token(failtoken) function, which should be given the value returned by create_fail_token.</pre>
</td>
</tr>
</table>
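<p>To illustrate the bookkeeping described above, here is a minimal sketch. The real code lives in TestCase.py and is not reproduced in this e-mail, so everything here other than create_fail_token and remove_fail_token (the class name, attributes, and the evaluation hook) is an illustrative assumption:</p>
<pre style="margin: 0; padding: 0; white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;">import logging
import uuid

LOGGER = logging.getLogger(__name__)


class FailTokenSketch(object):
    """Illustrative stand-in for the fail-token handling in the test object."""

    def __init__(self):
        self.fail_tokens = []
        self.passed = True

    def create_fail_token(self, message):
        """Create a fail token, add it to the list, and return it to the caller."""
        fail_token = {'uuid': str(uuid.uuid4()), 'message': message}
        self.fail_tokens.append(fail_token)
        return fail_token

    def remove_fail_token(self, fail_token):
        """Clear a token previously returned by create_fail_token."""
        if fail_token in self.fail_tokens:
            self.fail_tokens.remove(fail_token)

    def evaluate_passed(self):
        """Any token still present at evaluation time forces a failure and logs its message."""
        for fail_token in self.fail_tokens:
            LOGGER.error('Fail token never removed: %s', fail_token['message'])
            self.passed = False
        return self.passed</pre>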
<h1 style="color: #575012; font-size: 10pt; margin-top: 1.5em;">Testing </h1>
<table width="100%" bgcolor="#ffffff" cellspacing="0" cellpadding="10" style="border: 1px solid #b8b5a0">
<tr>
<td>
<pre style="margin: 0; padding: 0; white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;">I added a few fail tokens to my callparking_timeout/comebacktoorigin_no test and observed what happened when I cleared none of them, a single one, a subset of them, and all of them. In every case the correct fail token(s) were cleared, and any remaining fail tokens caused the test to fail with the correct messages logged. When no fail tokens were left over, the test passed, provided that it had not set a failure elsewhere.</pre>
</td>
</tr>
</table>
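<p>For reference, the intended usage pattern from a test module's point of view looks roughly like the following. The module class, constructor arguments, and event callback are hypothetical; only create_fail_token and remove_fail_token come from this change:</p>
<pre style="margin: 0; padding: 0; white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;">class ComebackObserver(object):
    """Hypothetical pluggable module guarding against a silent false positive."""

    def __init__(self, module_config, test_object):
        self.test_object = test_object
        # Assume failure until the expected behavior is actually observed.
        self.fail_token = test_object.create_fail_token(
            'comebacktoorigin timeout was never observed')

    def on_expected_event(self, event):
        # The expected behavior occurred, so clear the token. Any token
        # still held when the test is evaluated will fail the whole test.
        self.test_object.remove_fail_token(self.fail_token)</pre>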
<h1 style="color: #575012; font-size: 10pt; margin-top: 1.5em;">Diffs </h1>
<ul style="margin-left: 3em; padding-left: 0;">
<li>/asterisk/trunk/lib/python/asterisk/TestCase.py <span style="color: grey">(3617)</span></li>
</ul>
<p><a href="https://reviewboard.asterisk.org/r/2302/diff/" style="margin-left: 3em;">View Diff</a></p>
</td>
</tr>
</table>
</div>
</body>
</html>