[Asterisk-code-review] runtests: Add the ability to run the test suite in a loop (testsuite[master])

Joshua Colp asteriskteam at digium.com
Mon Oct 26 13:50:48 CDT 2015


Joshua Colp has submitted this change and it was merged.

Change subject: runtests: Add the ability to run the test suite in a loop
......................................................................


runtests: Add the ability to run the test suite in a loop

This patch adds a new option to runtests.py, '--number'. This instructs the
test suite to run its set of tests the specified number of times. The results
of each iteration of the test suite are stored as a new <testsuite> node in
the resulting JUnit XML report. If a negative number is provided for the
'--number' option, the Test Suite will run continuously until told to stop.
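The per-iteration aggregation can be sketched as follows: each run of the suite appends its own <testsuite> node under a shared <testsuites> root, which is what the updated write_results_xml does. A minimal sketch (Python 3 syntax; `make_report` and `append_iteration` are illustrative helpers, not functions from runtests.py):

```python
# Sketch: aggregate results from several suite iterations into one
# JUnit-style report, one <testsuite> node per iteration.
import xml.dom.minidom

def make_report():
    # One document with a <testsuites> root for the whole session
    dom = xml.dom.minidom.getDOMImplementation()
    return dom.createDocument(None, "testsuites", None)

def append_iteration(doc, test_names, total_time):
    # Each iteration of the suite becomes its own <testsuite> node
    ts = doc.createElement("testsuite")
    doc.documentElement.appendChild(ts)
    ts.setAttribute("errors", "0")
    ts.setAttribute("tests", str(len(test_names)))
    ts.setAttribute("time", "%.2f" % total_time)
    for name in test_names:
        tc = doc.createElement("testcase")
        tc.setAttribute("name", name)
        ts.appendChild(tc)
    return ts

doc = make_report()
append_iteration(doc, ["tests/pbx/dialplan"], 1.5)
append_iteration(doc, ["tests/pbx/dialplan"], 1.7)
print(doc.toprettyxml("  "))
```

The single document is written out once, after the loop finishes, so every iteration's results survive in one report file.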

In order to tell the Test Suite to stop, signal handlers have been added for
the SIGUSR1 and SIGTERM signals.
 - If the SIGUSR1 signal is sent to the Python process, the Test Suite will
   stop executing any further tests after the current test case completes.
   Any non-executed tests are marked as skipped.
 - If the SIGTERM signal is sent to the Python process, the currently
   executing test case is marked as abandoned and failed. Any subsequent
   tests in the current run are marked as skipped.

This avoids having to stop the Test Suite with a keyboard interrupt, which
would lose all results.
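The handlers follow a common pattern: they only set module-level flags (the patch names them abandon_test and abandon_test_suite), and the test loop checks those flags between tests. A minimal sketch of the pattern (Python 3 syntax; the os.kill call simulates an external stop request for illustration):

```python
# Sketch: signal handlers set flags; the main loop polls them, so the
# suite exits cleanly instead of dying on a KeyboardInterrupt.
import os
import signal

abandon_test = False        # set by SIGTERM: fail the current test
abandon_test_suite = False  # set by SIGUSR1/SIGTERM: stop after it

def handle_usr1(sig, stack):
    global abandon_test_suite
    abandon_test_suite = True

def handle_term(sig, stack):
    global abandon_test, abandon_test_suite
    abandon_test = True
    abandon_test_suite = True

signal.signal(signal.SIGUSR1, handle_usr1)
signal.signal(signal.SIGTERM, handle_term)

# Simulate an external "stop after the current test" request
os.kill(os.getpid(), signal.SIGUSR1)

for test in ["one", "two", "three"]:
    if abandon_test_suite:
        print("skipping remaining tests")
        break
    print("running", test)
```

Doing only flag assignment inside the handlers keeps them async-signal-safe from Python's point of view; the heavier cleanup happens in the normal flow of the loop.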

Change-Id: I6c65250a45e48dc36b7c4657d4cfbe63efeb1a6d
---
M README.txt
M runtests.py
2 files changed, 153 insertions(+), 69 deletions(-)

Approvals:
  Anonymous Coward #1000019: Verified
  Joshua Colp: Looks good to me, approved
  Corey Farrell: Looks good to me, but someone else must approve



diff --git a/README.txt b/README.txt
index 4ebbd94..b747905 100644
--- a/README.txt
+++ b/README.txt
@@ -3,7 +3,7 @@
 ===                           Asterisk Test Suite                            ===
 ===                                                                          ===
 ===                         http://www.asterisk.org/                         ===
-===                  Copyright (C) 2010 - 2012, Digium, Inc.                 ===
+===                  Copyright (C) 2010 - 2015, Digium, Inc.                 ===
 ===                                                                          ===
 ================================================================================
 
@@ -15,13 +15,14 @@
         1) Introduction
         2) Test Suite System Requirements
         3) Running the Test Suite
+        4) External control of the Test Suite
 
     Writing Tests:
-        4) Test Anatomy
-        5) Test Configuration
-        6) Tests in Python
-        7) Tests in Lua
-        8) Custom Tests
+        5) Test Anatomy
+        6) Test Configuration
+        7) Tests in Python
+        8) Tests in Lua
+        9) Custom Tests
 
 --------------------------------------------------------------------------------
 --------------------------------------------------------------------------------
@@ -198,7 +199,13 @@
        ******************************************
 
 Run the tests:
-    # ./runtests.py
+    $ ./runtests.py
+
+Run multiple iterations:
+    $ ./runtests.py --number 5
+
+Run a specific test:
+    $ ./runtests.py -t tests/pbx/dialplan
 
 For more syntax information:
     $ ./runtests.py --help
@@ -226,7 +233,23 @@
 --------------------------------------------------------------------------------
 
 --------------------------------------------------------------------------------
---- 4) Test Anatomy
+--- 4) External control of the Test Suite
+--------------------------------------------------------------------------------
+
+The Test Suite can be controlled externally using the SIGUSR1 and SIGTERM
+signals.
+    - SIGUSR1 will instruct the Test Suite to stop running any further tests
+      after the currently running test completes. Any tests not executed will be
+      marked as skipped.
+    - SIGTERM will attempt to immediately stop execution of the current test,
+      marking it as failed. The Test Suite will stop running any further tests,
+      marking any test not executed as skipped.
+
+--------------------------------------------------------------------------------
+--------------------------------------------------------------------------------
+
+--------------------------------------------------------------------------------
+--- 5) Test Anatomy
 --------------------------------------------------------------------------------
 
 a) File layout
@@ -319,7 +342,7 @@
 --------------------------------------------------------------------------------
 
 --------------------------------------------------------------------------------
---- 5) Test Configuration
+--- 6) Test Configuration
 --------------------------------------------------------------------------------
 
         Test configuration lives in a file called "test-config.yaml".  The
@@ -522,7 +545,7 @@
 --------------------------------------------------------------------------------
 
 --------------------------------------------------------------------------------
---- 6) Tests in Python
+--- 7) Tests in Python
 --------------------------------------------------------------------------------
 
         There are some python modules included in lib/python/ which are intended
@@ -533,7 +556,7 @@
 --------------------------------------------------------------------------------
 
 --------------------------------------------------------------------------------
---- 7) Tests in Lua
+--- 8) Tests in Lua
 --------------------------------------------------------------------------------
 
         The asttest framework included in the asttest directory provides a lot
@@ -545,7 +568,7 @@
 --------------------------------------------------------------------------------
 
 --------------------------------------------------------------------------------
---- 8) Custom Tests
+--- 9) Custom Tests
 --------------------------------------------------------------------------------
 
         The testsuite supports automatic use of custom tests.  This feature is
diff --git a/runtests.py b/runtests.py
index 378cb70..88b0e23 100755
--- a/runtests.py
+++ b/runtests.py
@@ -19,6 +19,7 @@
 import xml.dom
 import random
 import select
+import signal
 
 try:
     import lxml.etree as ET
@@ -45,6 +46,12 @@
 
 TESTS_CONFIG = "tests.yaml"
 TEST_RESULTS = "asterisk-test-suite-report.xml"
+
+# If True, abandon the current running TestRun. Used by SIGTERM.
+abandon_test = False
+
+# If True, abandon the current running TestSuite. Used by SIGUSR1/SIGTERM.
+abandon_test_suite = False
 
 
 class TestRun:
@@ -94,10 +101,14 @@
 
             timedout = False
             try:
-                while(True):
-                    if not poll.poll(self.timeout):
-                        timedout = True
-                        p.terminate()
+                while (not abandon_test):
+                    try:
+                        if not poll.poll(self.timeout):
+                            timedout = True
+                            p.terminate()
+                    except select.error as v:
+                        if v[0] != errno.EINTR:
+                            raise
                     l = p.stdout.readline()
                     if not l:
                         break
@@ -107,13 +118,15 @@
             p.wait()
 
             # Sanitize p.returncode so it's always a boolean.
-            did_pass = (p.returncode == 0)
+            did_pass = (p.returncode == 0 and not abandon_test)
             if did_pass and not self.test_config.expect_pass:
                 self.stdout_print("Test passed but was expected to fail.")
             if not did_pass and not self.test_config.expect_pass:
                 print "Test failed as expected."
 
             self.passed = (did_pass == self.test_config.expect_pass)
+            if abandon_test:
+                self.passed = False
 
             core_dumps = self._check_for_core()
             if (len(core_dumps)):
@@ -142,9 +155,15 @@
                           "test %s (non-fatal)" % self.test_name
 
             self.__parse_run_output(self.stdout)
-            print 'Test %s %s\n' % (
-                cmd,
-                'timedout' if timedout else 'passed' if self.passed else 'failed')
+            if timedout:
+                status = 'timed out'
+            elif abandon_test:
+                status = 'was abandoned'
+            elif self.passed:
+                status = 'passed'
+            else:
+                status = 'failed'
+            print 'Test %s %s\n' % (cmd, status)
 
         else:
             print "FAILED TO EXECUTE %s, it must exist and be executable" % cmd
@@ -472,6 +491,9 @@
             (i, (self.options.timeout / 1000))
 
         for t in self.tests:
+            if abandon_test_suite:
+                break
+
             if t.can_run is False:
                 if t.test_config.skip is not None:
                     print "--> %s ... skipped '%s'" % (t.test_name, t.test_config.skip)
@@ -558,20 +580,10 @@
                 char_list.append(chr(i))
         return data.translate(None, ''.join(char_list))
 
-    def write_results_xml(self, fn, stdout=False):
-        try:
-            f = open(TEST_RESULTS, "w")
-        except IOError:
-            print "Failed to open test results output file: %s" % TEST_RESULTS
-            return
-        except:
-            print "Unexpected error: %s" % sys.exc_info()[0]
-            return
+    def write_results_xml(self, doc, root):
 
-        dom = xml.dom.getDOMImplementation()
-        doc = dom.createDocument(None, "testsuite", None)
-
-        ts = doc.documentElement
+        ts = doc.createElement("testsuite")
+        root.appendChild(ts)
         ts.setAttribute("errors", "0")
         ts.setAttribute("tests", str(self.total_count))
         ts.setAttribute("time", "%.2f" % self.total_time)
@@ -603,11 +615,30 @@
                 self.__strip_illegal_xml_chars(t.failure_message)))
             tc.appendChild(failure)
 
-        doc.writexml(f, addindent="  ", newl="\n", encoding="utf-8")
-        f.close()
 
-        if stdout:
-            print doc.toprettyxml("  ", encoding="utf-8")
+def handle_usr1(sig, stack):
+    """Handle the SIGUSR1 signal
+
+    This should instruct the running test suite to exit as soon as possible
+    """
+    global abandon_test_suite
+
+    print "SIGUSR1 received; stopping test suite after current test..."
+    abandon_test_suite = True
+
+
+def handle_term(sig, stack):
+    """Handle the SIGTERM signal
+
+    This should abandon the current running test, marking it as failed, and
+    gracefully exit the current running test suite as soon as possible
+    """
+    global abandon_test
+    global abandon_test_suite
+
+    print "SIGTERM received; abandoning current test and stopping..."
+    abandon_test = True
+    abandon_test_suite = True
 
 
 def main(argv=None):
@@ -645,6 +676,10 @@
     parser.add_option("-V", "--valgrind", action="store_true",
                       dest="valgrind", default=False,
                       help="Run Asterisk under Valgrind")
+    parser.add_option("--number", metavar="int", type=int,
+                      dest="number", default=1,
+                      help="Number of times to run the test suite. If a value of "
+                           "-1 is provided, the test suite will loop forever.")
     parser.add_option("--random-order", action="store_true",
                       dest="randomorder", default=False,
                       help="Shuffle the tests so they are run in random order")
@@ -654,7 +689,25 @@
 
     (options, args) = parser.parse_args(argv)
 
+    # Install a signal handler for USR1/TERM, and use it to bail out of running
+    # any remaining tests
+    signal.signal(signal.SIGUSR1, handle_usr1)
+    signal.signal(signal.SIGTERM, handle_term)
+
     ast_version = AsteriskVersion(options.version)
+
+    if options.list_tests or options.list_tags:
+        test_suite = TestSuite(ast_version, options)
+
+        print "Asterisk Version: %s\n" % str(ast_version)
+
+        if options.list_tests:
+            test_suite.list_tests()
+
+        if options.list_tags:
+            test_suite.list_tags()
+
+        return 0
 
     if options.timeout > 0:
         options.timeout *= 1000
@@ -664,49 +717,57 @@
         if not test.endswith('/'):
             options.tests[i] = test + '/'
 
-    test_suite = TestSuite(ast_version, options)
-
-    if options.list_tests:
-        print "Asterisk Version: %s\n" % str(ast_version)
-        test_suite.list_tests()
-        return 0
-
-    if options.list_tags:
-        test_suite.list_tags()
-        return 0
-
     if options.valgrind:
         if not ET:
             print "python lxml module not loaded, text summaries " \
                   "from valgrind will not be produced.\n"
         os.environ["VALGRIND_ENABLE"] = "true"
 
-    print "Running tests for Asterisk %s ...\n" % str(ast_version)
+    dom = xml.dom.getDOMImplementation()
+    doc = dom.createDocument(None, "testsuites", None)
 
-    test_suite.run()
+    continue_forever = True if options.number < 0 else False
+    iteration = 0
+    while ((iteration < options.number or continue_forever) and not abandon_test_suite):
 
-    test_suite.write_results_xml(TEST_RESULTS, stdout=True)
+        test_suite = TestSuite(ast_version, options)
 
-    # If exactly one test was requested, then skip the summary.
-    if len(test_suite.tests) != 1:
-        print "\n=== TEST RESULTS ===\n"
-        print "PATH: %s\n" % os.getenv("PATH")
-        for t in test_suite.tests:
-            sys.stdout.write("--> %s --- " % t.test_name)
-            if t.did_run is False:
-                print "SKIPPED"
-                for d in t.test_config.deps:
-                    print "      --> Dependency: %s -- Met: %s" % (d.name, str(d.met))
-                if options.tags:
-                    for t in t.test_config.tags:
-                        print "      --> Tag: %s -- Met: %s" % (t, str(t in options.tags))
-                continue
-            if t.passed is True:
-                print "PASSED"
-            else:
-                print "FAILED"
+        print "Running tests for Asterisk {0} (run {1})...\n".format(
+            str(ast_version).strip('\n'), iteration + 1)
+        test_suite.run()
+        test_suite.write_results_xml(doc, doc.documentElement)
 
+        # If exactly one test was requested, then skip the summary.
+        if len(test_suite.tests) != 1:
+            print "\n=== TEST RESULTS ===\n"
+            print "PATH: %s\n" % os.getenv("PATH")
+            for t in test_suite.tests:
+                sys.stdout.write("--> %s --- " % t.test_name)
+                if t.did_run is False:
+                    print "SKIPPED"
+                    for d in t.test_config.deps:
+                        print "      --> Dependency: %s -- Met: %s" % (d.name, str(d.met))
+                    if options.tags:
+                        for t in t.test_config.tags:
+                            print "      --> Tag: %s -- Met: %s" % (t, str(t in options.tags))
+                    continue
+                if t.passed is True:
+                    print "PASSED"
+                else:
+                    print "FAILED"
+
+        iteration += 1
+
+    try:
+        with open(TEST_RESULTS, "w") as f:
+            doc.writexml(f, addindent="  ", newl="\n", encoding="utf-8")
+    except IOError:
+        print "Failed to open test results output file: %s" % TEST_RESULTS
+    except:
+        print "Unexpected error: %s" % sys.exc_info()[0]
     print "\n"
+    print doc.toprettyxml("  ", encoding="utf-8")
+
     return test_suite.total_failures
 
 

-- 
To view, visit https://gerrit.asterisk.org/1529
To unsubscribe, visit https://gerrit.asterisk.org/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I6c65250a45e48dc36b7c4657d4cfbe63efeb1a6d
Gerrit-PatchSet: 5
Gerrit-Project: testsuite
Gerrit-Branch: master
Gerrit-Owner: Matt Jordan <mjordan at digium.com>
Gerrit-Reviewer: Anonymous Coward #1000019
Gerrit-Reviewer: Corey Farrell <git at cfware.com>
Gerrit-Reviewer: George Joseph <george.joseph at fairview5.com>
Gerrit-Reviewer: Joshua Colp <jcolp at digium.com>
Gerrit-Reviewer: Matt Jordan <mjordan at digium.com>



More information about the asterisk-code-review mailing list