[asterisk-commits] Automatic conversion of tests to use realtime. (testsuite[master])

SVN commits to the Asterisk project asterisk-commits at lists.digium.com
Sun Jan 31 10:13:30 CST 2016


Anonymous Coward #1000019 has submitted this change and it was merged.

Change subject: Automatic conversion of tests to use realtime.
......................................................................


Automatic conversion of tests to use realtime.

This commit adds a pluggable module that can automatically convert a test
that currently uses static configuration files into a realtime test that
uses a database.

The conversion works by maintaining a registry of config files that the
module knows how to convert into database data. When one of the known
files is encountered in a test, its configuration is converted to
database data.

In addition to the conversion, the pluggable module also writes
configuration files necessary to make the test realtime-capable. For
instance, it will write extconfig.conf, res_odbc.conf, and potentially
sorcery.conf files automatically.
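
For illustration, with the pjsip.conf converter in this change and the
default database settings, the generated files would look roughly like the
abridged sketch below; the exact contents depend on the configured DSN,
credentials, and the tables involved.

    extconfig.conf:
        [settings]
        ps_endpoints = odbc,asterisk
        ps_aors = odbc,asterisk
        ...

    res_odbc.conf:
        [asterisk]
        enabled = yes
        pre-connect = yes
        dsn = asterisk
        username = asterisk
        password = asterisk

    sorcery.conf:
        [res_pjsip]
        endpoint = realtime,ps_endpoints
        aor = realtime,ps_aors
        ...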

This commit only adds support for converting pjsip.conf to the database,
but it lays the groundwork for writing converters for other types of
files as well.
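
As a sketch of how an additional converter might be registered once written
(using the SorceryRealtimeFile class from this change; the file, section,
and table names below are purely illustrative):

    REALTIME_FILE_REGISTRY.append(SorceryRealtimeFile(
        'example_sorcery_managed.conf',
        {
            'example_sorcery_section': {
                'example_object_type': 'example_table',
            },
        }
    ))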

A later commit adds the ability to load pluggable modules globally,
thus allowing this module to be used widely, without having to update
individual test-config.yaml files on each realtime-enabled test.
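
Until then, an individual test could pull the module in through its own
test-config.yaml. A rough sketch (the test-modules wiring below follows the
testsuite's usual pluggable-module conventions, and the section name
"realtime-config" is illustrative; the database keys and their defaults come
from realtime_converter.py in this change):

    test-modules:
        modules:
            -
                config-section: realtime-config
                typename: 'realtime_converter.RealtimeConverter'

    realtime-config:
        engine: 'postgresql'
        username: 'asterisk'
        password: 'asterisk'
        host: 'localhost'
        port: '5432'
        db: 'asterisk'
        dsn: 'asterisk'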

Change-Id: I3ceaae8b8f4160a6d23f2062ce75017351c1559e
---
A lib/python/asterisk/astconfigparser.py
A lib/python/asterisk/astdicts.py
A lib/python/asterisk/realtime_converter.py
3 files changed, 1,119 insertions(+), 0 deletions(-)

Approvals:
  Anonymous Coward #1000019: Verified
  Matt Jordan: Looks good to me, approved
  Joshua Colp: Looks good to me, but someone else must approve



diff --git a/lib/python/asterisk/astconfigparser.py b/lib/python/asterisk/astconfigparser.py
new file mode 100644
index 0000000..778b17f
--- /dev/null
+++ b/lib/python/asterisk/astconfigparser.py
@@ -0,0 +1,479 @@
+"""
+Copyright (C) 2016, Digium, Inc.
+
+This program is free software, distributed under the terms of
+the GNU General Public License Version 2.
+"""
+
+import re
+import itertools
+
+from astdicts import OrderedDict
+from astdicts import MultiOrderedDict
+
+
+def merge_values(left, right, key):
+    """Merges values from right into left."""
+    if isinstance(left, list):
+        vals0 = left
+    else:  # assume dictionary
+        vals0 = left[key] if key in left else []
+    vals1 = right[key] if key in right else []
+
+    return vals0 + [i for i in vals1 if i not in vals0]
+
+###############################################################################
+
+
+class Section(MultiOrderedDict):
+    """
+    A Section is itself a MultiOrderedDict that maintains a list of
+    key/value options.  However, in the case of an Asterisk config
+    file, a section may have other default sections that it can pull
+    data from (i.e. templates).  So when an option is looked up by key,
+    it first checks the base section and, if not found, looks in the
+    added default sections. If the key is not found at that point, a
+    'KeyError' exception is raised.
+    """
+    count = 0
+
+    def __init__(self, defaults=None, templates=None):
+        MultiOrderedDict.__init__(self)
+        # track an ordered id of sections
+        Section.count += 1
+        self.id = Section.count
+        self._defaults = [] if defaults is None else defaults
+        self._templates = [] if templates is None else templates
+
+    def __cmp__(self, other):
+        """
+        Use self.id as means of determining equality
+        """
+        return cmp(self.id, other.id)
+
+    def get(self, key, from_self=True, from_templates=True,
+            from_defaults=True):
+        """
+        Get the values corresponding to a given key. The parameters to this
+        function form a hierarchy that determines priority of the search.
+        from_self takes priority over from_templates, and from_templates takes
+        priority over from_defaults.
+
+        Parameters:
+        from_self - If True, search within the given section.
+        from_templates - If True, search in this section's templates.
+        from_defaults - If True, search within this section's defaults.
+        """
+        if from_self and key in self:
+            return MultiOrderedDict.__getitem__(self, key)
+
+        if from_templates:
+            if self in self._templates:
+                return []
+            for t in self._templates:
+                try:
+                    # fail if not found on the search - doing it this way
+                    # allows template's templates to be searched.
+                    return t.get(key, True, from_templates, from_defaults)
+                except KeyError:
+                    pass
+
+        if from_defaults:
+            for d in self._defaults:
+                try:
+                    return d.get(key, True, from_templates, from_defaults)
+                except KeyError:
+                    pass
+
+        raise KeyError(key)
+
+    def __getitem__(self, key):
+        """
+        Get the value for the given key. If it is not found in 'self',
+        then check inside templates and defaults before raising a
+        KeyError exception.
+        """
+        return self.get(key)
+
+    def keys(self, self_only=False):
+        """
+        Get the keys from this section. If self_only is True, then
+        keys from this section's defaults and templates are not
+        included in the returned value
+        """
+        res = MultiOrderedDict.keys(self)
+        if self_only:
+            return res
+
+        for d in self._templates:
+            for key in d.keys():
+                if key not in res:
+                    res.append(key)
+
+        for d in self._defaults:
+            for key in d.keys():
+                if key not in res:
+                    res.append(key)
+        return res
+
+    def add_defaults(self, defaults):
+        """
+        Add a list of defaults to the section. Defaults are
+        sections such as 'general'
+        """
+        defaults.sort()
+        for i in defaults:
+            self._defaults.insert(0, i)
+
+    def add_templates(self, templates):
+        """
+        Add a list of templates to the section.
+        """
+        templates.sort()
+        for i in templates:
+            self._templates.insert(0, i)
+
+    def get_merged(self, key):
+        """Return a list of values for a given key merged from default(s)"""
+        # first merge key/values from defaults together
+        merged = []
+        for i in reversed(self._defaults):
+            if not merged:
+                merged = i
+                continue
+            merged = merge_values(merged, i, key)
+
+        for i in reversed(self._templates):
+            if not merged:
+                merged = i
+                continue
+            merged = merge_values(merged, i, key)
+
+        # then merge self in
+        return merge_values(merged, self, key)
+
+###############################################################################
+
+COMMENT = ';'
+COMMENT_START = ';--'
+COMMENT_END = '--;'
+
+DEFAULTSECT = 'general'
+
+
+def remove_comment(line, is_comment):
+    """Remove any commented elements from the line."""
+    if not line:
+        return line, is_comment
+
+    if is_comment:
+        part = line.partition(COMMENT_END)
+        if part[1]:
+            # found multi-line comment end; check the string after it
+            return remove_comment(part[2], False)
+        return "", True
+
+    part = line.partition(COMMENT_START)
+    if part[1]:
+        # found multi-line comment start; check the string before
+        # it to make sure there wasn't an eol comment in it
+        has_comment = part[0].partition(COMMENT)
+        if has_comment[1]:
+            # eol comment found; return anything before it
+            return has_comment[0], False
+
+        # check string after it to see if the comment ends
+        line, is_comment = remove_comment(part[2], True)
+        if is_comment:
+            # return possible string data before comment
+            return part[0].strip(), True
+
+        # otherwise it was an embedded comment so combine
+        return ''.join([part[0].strip(), ' ', line]).rstrip(), False
+
+    # check for eol comment
+    return line.partition(COMMENT)[0].strip(), False
+
+
+def try_include(line):
+    """
+    Checks to see if the given line is an include.  If so return the
+    included filename, otherwise None.
+    """
+
+    match = re.match(r'^#include\s*[<"]?(.*)[>"]?$', line)
+    return match.group(1) if match else None
+
+
+def try_section(line):
+    """
+    Checks to see if the given line is a section. If so return the section
+    name, otherwise return 'None'.
+    """
+    # leading spaces were stripped when checking for comments
+    if not line.startswith('['):
+        return None, False, []
+
+    section, delim, templates = line.partition(']')
+    if not templates:
+        return section[1:], False, []
+
+    # strip out the parens and parse into an array
+    templates = templates.replace('(', "").replace(')', "").split(',')
+    # go ahead and remove extra whitespace
+    templates = [i.strip() for i in templates]
+    try:
+        templates.remove('!')
+        return section[1:], True, templates
+    except ValueError:
+        return section[1:], False, templates
+
+
+def try_option(line):
+    """Parses the line as an option, returning the key/value pair."""
+    data = re.split('=>?', line)
+    # should split in two (key/val), but either way use first two elements
+    return data[0].rstrip(), data[1].lstrip()
+
+###############################################################################
+
+
+def find_dict(mdicts, key, val):
+    """
+    Given a list of multi-dicts, return the multi-dict that contains
+    the given key/value pair.
+    """
+
+    def found(d):
+        return key in d and val in d[key]
+
+    try:
+        return [d for d in mdicts if found(d)][0]
+    except IndexError:
+        raise LookupError("Dictionary not located for key = %s, value = %s"
+                          % (key, val))
+
+
+def write_dicts(config_file, mdicts):
+    """Write the contents of the mdicts to the specified config file"""
+    for section, sect_list in mdicts.iteritems():
+        # every section contains a list of dictionaries
+        for sect in sect_list:
+            config_file.write("[%s]\n" % section)
+            for key, val_list in sect.iteritems():
+                # every value is also a list
+                for v in val_list:
+                    key_val = key
+                    if v is not None:
+                        key_val += " = " + str(v)
+                        config_file.write("%s\n" % (key_val))
+            config_file.write("\n")
+
+###############################################################################
+
+
+class MultiOrderedConfigParser:
+    def __init__(self, parent=None):
+        self._parent = parent
+        self._defaults = MultiOrderedDict()
+        self._sections = MultiOrderedDict()
+        self._includes = OrderedDict()
+
+    def find_value(self, sections, key):
+        """Given a list of sections, try to find value(s) for the given key."""
+        # always start looking in the last one added
+        sections.sort(reverse=True)
+        for s in sections:
+            try:
+                # try to find in section and section's templates
+                return s.get(key, from_defaults=False)
+            except KeyError:
+                pass
+
+        # wasn't found in sections or a section's templates so check in
+        # defaults
+        for s in sections:
+            try:
+                # try to find in section's defaultsects
+                return s.get(key, from_self=False, from_templates=False)
+            except KeyError:
+                pass
+
+        raise KeyError(key)
+
+    def defaults(self):
+        return self._defaults
+
+    def default(self, key):
+        """Retrieves a list of dictionaries for a default section."""
+        return self.get_defaults(key)
+
+    def add_default(self, key, template_keys=None):
+        """
+        Adds a default section to defaults, returning the
+        default Section object.
+        """
+        if template_keys is None:
+            template_keys = []
+        return self.add_section(key, template_keys, self._defaults)
+
+    def sections(self):
+        return self._sections
+
+    def section(self, key):
+        """Retrieves a list of dictionaries for a section."""
+        return self.get_sections(key)
+
+    def get_sections(self, key, attr='_sections', searched=None):
+        """
+        Retrieve a list of sections that have values for the given key.
+        The attr parameter can be used to control what part of the parser
+        to retrieve values from.
+        """
+        if searched is None:
+            searched = []
+        if self in searched:
+            return []
+
+        sections = getattr(self, attr)
+        res = sections[key] if key in sections else []
+        searched.append(self)
+        if self._includes:
+            res.extend(list(itertools.chain(*[
+                incl.get_sections(key, attr, searched)
+                for incl in self._includes.itervalues()])))
+        if self._parent:
+            res += self._parent.get_sections(key, attr, searched)
+        return res
+
+    def get_defaults(self, key):
+        """
+        Retrieve a list of defaults that have values for the given key.
+        """
+        return self.get_sections(key, '_defaults')
+
+    def add_section(self, key, template_keys=None, mdicts=None):
+        """
+        Create a new section in the configuration. The name of the
+        new section is the 'key' parameter.
+        """
+        if template_keys is None:
+            template_keys = []
+        if mdicts is None:
+            mdicts = self._sections
+        res = Section()
+        for t in template_keys:
+            res.add_templates(self.get_defaults(t))
+        res.add_defaults(self.get_defaults(DEFAULTSECT))
+        mdicts.insert(0, key, res)
+        return res
+
+    def includes(self):
+        return self._includes
+
+    def add_include(self, filename, parser=None):
+        """
+        Add a new #include file to the configuration.
+        """
+        if filename in self._includes:
+            return self._includes[filename]
+
+        self._includes[filename] = res = \
+            MultiOrderedConfigParser(self) if parser is None else parser
+        return res
+
+    def get(self, section, key):
+        """Retrieves the list of values from a section for a key."""
+        try:
+            # search for the value in the list of sections
+            return self.find_value(self.section(section), key)
+        except KeyError:
+            pass
+
+        try:
+            # section may be a default section, so search
+            # for the value in the list of defaults
+            return self.find_value(self.default(section), key)
+        except KeyError:
+            raise LookupError("key %r not found for section %r"
+                              % (key, section))
+
+    def multi_get(self, section, key_list):
+        """
+        Retrieves the list of values from a section for a list of keys.
+        This method is intended to be used for equivalent keys. Thus, as soon
+        as any match is found for any key in the key_list, the match is
+        returned. This does not concatenate the lookups of all of the keys
+        together.
+        """
+        for i in key_list:
+            try:
+                return self.get(section, i)
+            except LookupError:
+                pass
+
+        # Making it here means all lookups failed.
+        raise LookupError("keys %r not found for section %r" %
+                          (key_list, section))
+
+    def set(self, section, key, val):
+        """Sets an option in the given section."""
+        # TODO - set in multiple sections? (for now set in first)
+        # TODO - set in both sections and defaults?
+        if section in self._sections:
+            self.section(section)[0][key] = val
+        else:
+            self.default(section)[0][key] = val
+
+    def read(self, filename, sect=None):
+        """Parse configuration information from a file"""
+        try:
+            with open(filename, 'rt') as config_file:
+                self._read(config_file, sect)
+        except IOError:
+            print "Could not open file ", filename, " for reading"
+
+    def _read(self, config_file, sect):
+        """Parse configuration information from the config_file"""
+        is_comment = False  # used for multi-lined comments
+        for line in config_file:
+            line, is_comment = remove_comment(line, is_comment)
+            if not line:
+                # line was empty or was a comment
+                continue
+
+            include_name = try_include(line)
+            if include_name:
+                parser = self.add_include(include_name)
+                parser.read(include_name, sect)
+                continue
+
+            section, is_template, templates = try_section(line)
+            if section:
+                if section == DEFAULTSECT or is_template:
+                    sect = self.add_default(section, templates)
+                else:
+                    sect = self.add_section(section, templates)
+                continue
+
+            key, val = try_option(line)
+            if sect is None:
+                raise Exception("Section not defined before assignment")
+            sect[key] = val
+
+    def write(self, config_file):
+        """Write configuration information out to a file"""
+        try:
+            for key, val in self._includes.iteritems():
+                val.write(key)
+                config_file.write('#include "%s"\n' % key)
+
+            config_file.write('\n')
+            write_dicts(config_file, self._defaults)
+            write_dicts(config_file, self._sections)
+        except AttributeError:  # config_file is a filename, not a file object
+            try:
+                with open(config_file, 'wt') as fp:
+                    self.write(fp)
+            except IOError:
+                print "Could not open file ", config_file, " for writing"
diff --git a/lib/python/asterisk/astdicts.py b/lib/python/asterisk/astdicts.py
new file mode 100644
index 0000000..ae63075
--- /dev/null
+++ b/lib/python/asterisk/astdicts.py
@@ -0,0 +1,298 @@
+# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy.
+# Passes Python2.7's test suite and incorporates all the latest updates.
+# copied from http://code.activestate.com/recipes/576693/
+
+try:
+    from thread import get_ident as _get_ident
+except ImportError:
+    from dummy_thread import get_ident as _get_ident
+
+try:
+    from _abcoll import KeysView, ValuesView, ItemsView
+except ImportError:
+    pass
+
+
+class OrderedDict(dict):
+    'Dictionary that remembers insertion order'
+    # An inherited dict maps keys to values.
+    # The inherited dict provides __getitem__, __len__, __contains__, and get.
+    # The remaining methods are order-aware.
+    # Big-O running times for all methods are the same as for regular dictionaries.
+
+    # The internal self.__map dictionary maps keys to links in a doubly linked list.
+    # The circular doubly linked list starts and ends with a sentinel element.
+    # The sentinel element never gets deleted (this simplifies the algorithm).
+    # Each link is stored as a list of length three:  [PREV, NEXT, KEY].
+
+    def __init__(self, *args, **kwds):
+        '''Initialize an ordered dictionary.  Signature is the same as for
+        regular dictionaries, but keyword arguments are not recommended
+        because their insertion order is arbitrary.
+
+        '''
+        if len(args) > 1:
+            raise TypeError('expected at most 1 arguments, got %d' % len(args))
+        try:
+            self.__root
+        except AttributeError:
+            self.__root = root = []                     # sentinel node
+            root[:] = [root, root, None]
+            self.__map = {}
+        self.__update(*args, **kwds)
+
+    def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
+        'od.__setitem__(i, y) <==> od[i]=y'
+        # Setting a new item creates a new link which goes at the end of the linked
+        # list, and the inherited dictionary is updated with the new key/value pair.
+        if key not in self:
+            root = self.__root
+            last = root[0]
+            last[1] = root[0] = self.__map[key] = [last, root, key]
+        dict_setitem(self, key, value)
+
+    def __delitem__(self, key, dict_delitem=dict.__delitem__):
+        'od.__delitem__(y) <==> del od[y]'
+        # Deleting an existing item uses self.__map to find the link which is
+        # then removed by updating the links in the predecessor and successor nodes.
+        dict_delitem(self, key)
+        link_prev, link_next, key = self.__map.pop(key)
+        link_prev[1] = link_next
+        link_next[0] = link_prev
+
+    def __iter__(self):
+        'od.__iter__() <==> iter(od)'
+        root = self.__root
+        curr = root[1]
+        while curr is not root:
+            yield curr[2]
+            curr = curr[1]
+
+    def __reversed__(self):
+        'od.__reversed__() <==> reversed(od)'
+        root = self.__root
+        curr = root[0]
+        while curr is not root:
+            yield curr[2]
+            curr = curr[0]
+
+    def clear(self):
+        'od.clear() -> None.  Remove all items from od.'
+        try:
+            for node in self.__map.itervalues():
+                del node[:]
+            root = self.__root
+            root[:] = [root, root, None]
+            self.__map.clear()
+        except AttributeError:
+            pass
+        dict.clear(self)
+
+    def popitem(self, last=True):
+        '''od.popitem() -> (k, v), return and remove a (key, value) pair.
+        Pairs are returned in LIFO order if last is true or FIFO order if false.
+
+        '''
+        if not self:
+            raise KeyError('dictionary is empty')
+        root = self.__root
+        if last:
+            link = root[0]
+            link_prev = link[0]
+            link_prev[1] = root
+            root[0] = link_prev
+        else:
+            link = root[1]
+            link_next = link[1]
+            root[1] = link_next
+            link_next[0] = root
+        key = link[2]
+        del self.__map[key]
+        value = dict.pop(self, key)
+        return key, value
+
+    # -- the following methods do not depend on the internal structure --
+
+    def keys(self):
+        'od.keys() -> list of keys in od'
+        return list(self)
+
+    def values(self):
+        'od.values() -> list of values in od'
+        return [self[key] for key in self]
+
+    def items(self):
+        'od.items() -> list of (key, value) pairs in od'
+        return [(key, self[key]) for key in self]
+
+    def iterkeys(self):
+        'od.iterkeys() -> an iterator over the keys in od'
+        return iter(self)
+
+    def itervalues(self):
+        'od.itervalues() -> an iterator over the values in od'
+        for k in self:
+            yield self[k]
+
+    def iteritems(self):
+        'od.iteritems() -> an iterator over the (key, value) items in od'
+        for k in self:
+            yield (k, self[k])
+
+    def update(*args, **kwds):
+        '''od.update(E, **F) -> None.  Update od from dict/iterable E and F.
+
+        If E is a dict instance, does:           for k in E: od[k] = E[k]
+        If E has a .keys() method, does:         for k in E.keys(): od[k] = E[k]
+        Or if E is an iterable of items, does:   for k, v in E: od[k] = v
+        In either case, this is followed by:     for k, v in F.items(): od[k] = v
+
+        '''
+        if len(args) > 2:
+            raise TypeError('update() takes at most 2 positional '
+                            'arguments (%d given)' % (len(args),))
+        elif not args:
+            raise TypeError('update() takes at least 1 argument (0 given)')
+        self = args[0]
+        # Make progressively weaker assumptions about "other"
+        other = ()
+        if len(args) == 2:
+            other = args[1]
+        if isinstance(other, dict):
+            for key in other:
+                self[key] = other[key]
+        elif hasattr(other, 'keys'):
+            for key in other.keys():
+                self[key] = other[key]
+        else:
+            for key, value in other:
+                self[key] = value
+        for key, value in kwds.items():
+            self[key] = value
+
+    __update = update  # let subclasses override update without breaking __init__
+
+    __marker = object()
+
+    def pop(self, key, default=__marker):
+        '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value.
+        If key is not found, d is returned if given, otherwise KeyError is raised.
+
+        '''
+        if key in self:
+            result = self[key]
+            del self[key]
+            return result
+        if default is self.__marker:
+            raise KeyError(key)
+        return default
+
+    def setdefault(self, key, default=None):
+        'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
+        if key in self:
+            return self[key]
+        self[key] = default
+        return default
+
+    def __repr__(self, _repr_running={}):
+        'od.__repr__() <==> repr(od)'
+        call_key = id(self), _get_ident()
+        if call_key in _repr_running:
+            return '...'
+        _repr_running[call_key] = 1
+        try:
+            if not self:
+                return '%s()' % (self.__class__.__name__,)
+            return '%s(%r)' % (self.__class__.__name__, self.items())
+        finally:
+            del _repr_running[call_key]
+
+    def __reduce__(self):
+        'Return state information for pickling'
+        items = [[k, self[k]] for k in self]
+        inst_dict = vars(self).copy()
+        for k in vars(OrderedDict()):
+            inst_dict.pop(k, None)
+        if inst_dict:
+            return (self.__class__, (items,), inst_dict)
+        return self.__class__, (items,)
+
+    def copy(self):
+        'od.copy() -> a shallow copy of od'
+        return self.__class__(self)
+
+    @classmethod
+    def fromkeys(cls, iterable, value=None):
+        '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S
+        and values equal to v (which defaults to None).
+
+        '''
+        d = cls()
+        for key in iterable:
+            d[key] = value
+        return d
+
+    def __eq__(self, other):
+        '''od.__eq__(y) <==> od==y.  Comparison to another OD is order-sensitive
+        while comparison to a regular mapping is order-insensitive.
+
+        '''
+        if isinstance(other, OrderedDict):
+            return len(self)==len(other) and self.items() == other.items()
+        return dict.__eq__(self, other)
+
+    def __ne__(self, other):
+        return not self == other
+
+    # -- the following methods are only used in Python 2.7 --
+
+    def viewkeys(self):
+        "od.viewkeys() -> a set-like object providing a view on od's keys"
+        return KeysView(self)
+
+    def viewvalues(self):
+        "od.viewvalues() -> an object providing a view on od's values"
+        return ValuesView(self)
+
+    def viewitems(self):
+        "od.viewitems() -> a set-like object providing a view on od's items"
+        return ItemsView(self)
+
+###############################################################################
+### MultiOrderedDict
+###############################################################################
+class MultiOrderedDict(OrderedDict):
+    def __init__(self, *args, **kwds):
+        OrderedDict.__init__(self, *args, **kwds)
+
+    def __setitem__(self, key, val, i=None):
+        if key not in self:
+#            print "__setitem__ key = ", key, " val = ", val
+            OrderedDict.__setitem__(
+                self, key, val if isinstance(val, list) else [val])
+            return
+#        print "inserting key = ", key, " val = ", val
+        vals = self[key]
+        if i is None:
+            i = len(vals)
+
+        if not isinstance(val, list):
+            if val not in vals:
+                vals.insert(i, val)
+        else:
+            for j in reversed(val):
+                if j not in vals:
+                    vals.insert(i, j)
+
+
+    def insert(self, i, key, val):
+        self.__setitem__(key, val, i)
+
+    def copy(self):
+        # TODO - find out why the default copy turns [] into [[]];
+        #        for now, copy manually
+        c = MultiOrderedDict()  # self.__class__(self)
+        for key, val in self.iteritems():
+            for v in val:
+                c[key] = v
+        return c
diff --git a/lib/python/asterisk/realtime_converter.py b/lib/python/asterisk/realtime_converter.py
new file mode 100644
index 0000000..52de9d3
--- /dev/null
+++ b/lib/python/asterisk/realtime_converter.py
@@ -0,0 +1,342 @@
+#!/usr/bin/env python
+"""
+Copyright (C) 2016, Digium, Inc.
+
+This program is free software, distributed under the terms of
+the GNU General Public License Version 2.
+"""
+
+import os
+from sqlalchemy import create_engine, MetaData, Table
+
+import astconfigparser
+import logging
+
+LOGGER = logging.getLogger(__name__)
+
+"""Registry of files convertable to realtime
+
+This is a list of objects that conform to the following specifications:
+    * An __init__ method that takes a filename and some object. The filename is
+      the name of the file that can be converted to realtime. The object is a
+      conversion-specific field that tells the converter how to convert objects.
+
+    * A write_configs method that takes a directory as a parameter. This method
+      is responsible for writing additional configuration information for
+      Asterisk to use when running a test in realtime mode.
+
+    * A write_db method that takes a MetaData, Engine, and Connection. This
+      method is responsible for writing data to the database for the file that
+      is being converted
+
+    * A cleanup_configs method that takes no parameters. This is responsible
+      for removing configuration information that was originally written by
+      this converter.
+
+    * A cleanup_db method that takes a MetaData, Engine, and Connection. This
+      method is responsible for deleting data from the database originally
+      written by this converter
+"""
+REALTIME_FILE_REGISTRY = []
+
+
+class ConfigFile(object):
+    """A config file to be written by a realtime converter
+
+    When the realtime converter runs, it is not known whether the
+    configuration file it wants to write already exists for the given test.
+    This class offers a simple way to ensure that the file is restored to its
+    pre-test-run state.
+    """
+
+    def __init__(self, config_dir, filename):
+        self.file = os.path.join(config_dir, filename)
+        self.orig_file_content = None
+        if os.path.exists(self.file):
+            with open(self.file, 'r') as config:
+                self.orig_file_content = config.read()
+
+    def restore(self):
+        if not self.orig_file_content:
+            os.remove(self.file)
+            return
+
+        with open(self.file, 'w') as config:
+            config.write(self.orig_file_content)
+
+
+class SorceryRealtimeFile(object):
+    """A Realtime Converter that works for sorcery-managed configuration
+
+    This converter can be used for any type of configuration file that is
+    managed by sorcery. This is because object types used by configuration files
+    and by realtime are identical when sorcery manages the configuration.
+    """
+    def __init__(self, filename, sections):
+        """Initialize the sorcery converter
+
+        Keyword Arguments:
+        filename: The name of the file to convert to database
+        sections: A dictionary of sorcery.conf sections, containing dictionaries
+                  that map object names to table names.
+        """
+        self.filename = filename
+        self.sections = sections
+        # All affected database tables in list form. Used for convenience
+        self.tables = [table for section in sections.itervalues() for table in
+                       section.itervalues()]
+        self.sorcery = None
+        self.extconfig = None
+
+    def write_configs(self, config_dir):
+        """Write configuration for sorcery.
+
+        This writes the sorcery.conf file and adds to the extconfig.conf file in
+        order to convert a file to database configuration
+
+        Keyword Arguments:
+        config_dir: The directory where Asterisk configuration can be found
+        """
+        self.sorcery = ConfigFile(config_dir, 'sorcery.conf')
+        self.extconfig = ConfigFile(config_dir, 'extconfig.conf')
+        self.write_sorcery_conf()
+        self.write_extconfig_conf()
+
+    def write_sorcery_conf(self):
+        """Write sorcery.conf file.
+
+        Appends a realtime mapping for each configured object type to the
+        sorcery.conf file recorded by write_configs().
+        """
+        with open(self.sorcery.file, 'a') as sorcery:
+            for section, items in self.sections.iteritems():
+                sorcery.write('[{0}]\n'.format(section))
+                for obj, table in items.iteritems():
+                    sorcery.write('{0} = realtime,{1}\n'.format(obj, table))
+
+    def write_extconfig_conf(self):
+        """Write extconfig.conf file.
+
+        Appends an "odbc,asterisk" mapping for each affected table to the
+        extconfig.conf file recorded by write_configs().
+        """
+        with open(self.extconfig.file, 'a') as extconfig:
+            for table in self.tables:
+                # We can assume "odbc" and "asterisk" here because we're
+                # currently only supporting the ODBC realtime engine, and
+                # "asterisk" refers to the name of the config section in
+                # res_odbc.conf.
+                extconfig.write('{0} = odbc,asterisk\n'.format(table))
+
+    def write_db(self, config_dir, meta, engine, conn):
+        """Convert file contents into database entries
+
+        Keyword Arguments:
+        config_dir: Location of file to convert
+        meta: sqlalchemy MetaData
+        engine: sqlalchemy Engine used for database management
+        conn: sqlalchemy Connection to the database
+        """
+        conf = astconfigparser.MultiOrderedConfigParser()
+        conf.read(os.path.join(config_dir, self.filename))
+        for title, sections in conf.sections().iteritems():
+            LOGGER.info("Inspecting objects with title {0}".format(title))
+            for section in sections:
+                obj_type = section.get('type')[0]
+                sorcery_section = self.find_section_for_object(obj_type)
+                if not sorcery_section:
+                    LOGGER.info("No corresponding section found for object "
+                                "type {0}".format(obj_type))
+                    continue
+                table = Table(self.sections[sorcery_section][obj_type], meta,
+                              autoload=True, autoload_with=engine)
+                vals = {'id': title}
+                for key in section.keys():
+                    if key != 'type':
+                        vals[key] = section.get(key)[0]
+
+                conn.execute(table.insert().values(**vals))
+
+    def find_section_for_object(self, obj_type):
+        """Get the sorcery.conf section a particular object type belongs to
+
+        Keyword Arguments:
+        obj_type: The object type to find the section for
+        """
+        for section, contents in self.sections.iteritems():
+            if obj_type in contents:
+                return section
+
+        return None
+
+    def cleanup_configs(self):
+        """Remove sorcery.conf file after test completes
+
+        Keyword Arguments:
+        config_dir: Location of sorcery.conf file
+        """
+        self.extconfig.restore()
+        self.sorcery.restore()
+
+    def cleanup_db(self, meta, engine, conn):
+        """Remove database entries after test completes
+
+        Keyword Arguments:
+        meta: sqlalchemy MetaData
+        engine: sqlalchemy Engine
+        conn: sqlalchemy Connection to database
+        """
+        for table_name in self.tables:
+            table = Table(table_name, meta, autoload=True,
+                          autoload_with=engine)
+            conn.execute(table.delete())
+
+
+class RealtimeConverter(object):
+    """Pluggable module used to convert configuration files to database data.
+
+    Since this uses the pluggable module framework, it can be applied to
+    individual tests if desired.
+
+    However, this module will see its highest use as configured in the global
+    test-config.yaml for the testsuite. In that case, all tests that use the
+    pluggable module framework will automatically have this module plugged in,
+    resulting in all tests running in realtime mode where possible. If only a
+    subset of tests should be run this way, tagging the desired tests is
+    recommended.
+
+    WARNING: Do not attempt to use this plugin as both a local and a global
+    pluggable module. Doing so will likely have unpredictable effects on both
+    the database and the configuration files used by the test.
+    """
+    def __init__(self, config, test_object):
+        """Initialize the converter.
+
+        Keyword Arguments:
+        config: Database configuration information
+        test_object: The test object for the particular test
+        """
+        engine = config.get('engine', 'postgresql')
+        username = config.get('username', 'asterisk')
+        password = config.get('password', 'asterisk')
+        host = config.get('host', 'localhost')
+        port = config.get('port', '5432')
+        database = config.get('db', 'asterisk')
+        dsn = config.get('dsn', 'asterisk')
+
+        # XXX This is currently a limitation we apply to automatic realtime
+        # conversion. We will only convert the first Asterisk instance to use
+        # realtime. This is because we currently only allow for the
+        # configuration of one database in the DBMS.
+        self.config_dir = os.path.join(test_object.test_name, "configs", "ast1")
+
+        test_object.register_stop_observer(self.cleanup)
+
+        self.meta = MetaData()
+        self.engine = create_engine('{0}://{1}:{2}@{3}:{4}/{5}'.format(engine,
+            username, password, host, port, database), echo=True)
+        self.conn = self.engine.connect()
+
+        self.modules_conf_exists = False
+        self.modules_conf = None
+
+        self.extconfig = ConfigFile(self.config_dir, 'extconfig.conf')
+        self.res_odbc = ConfigFile(self.config_dir, 'res_odbc.conf')
+        self.modules = ConfigFile(self.config_dir, 'modules.conf.inc')
+        self.write_extconfig_conf()
+        self.write_res_odbc_conf(dsn, username, password)
+        self.write_modules_conf()
+        for realtime_file in REALTIME_FILE_REGISTRY:
+            realtime_file.write_configs(self.config_dir)
+
+        self.write_db()
+
+    def write_extconfig_conf(self):
+        """Write the initial extconfig.conf information
+
+        This only consists of writing "[settings]" to the top of the file. The
+        rest of the contents of this file will be written by individual file
+        converters.
+        """
+        if self.extconfig.orig_file_content:
+            # Bail early. There is presumably already a [settings] section here,
+            # so there is no reason to try to do anything else.
+            return
+
+        with open(self.extconfig.file, 'w') as extconfig:
+            extconfig.write('[settings]\n')
+
+    def write_res_odbc_conf(self, dsn, username, password):
+        """Write res_odbc.conf file
+
+        This uses database configuration to set appropriate information in
+        res_odbc.conf. Individual converters should have no reason to edit this
+        file any further.
+        """
+        with open(self.res_odbc.file, 'w') as res_odbc:
+            res_odbc.write('''[asterisk]
+enabled = yes
+pre-connect = yes
+dsn = {0}
+username = {1}
+password = {2}
+'''.format(dsn, username, password))
+
+    def write_modules_conf(self):
+        """Write modules.conf.inc file"""
+        with open(self.modules.file, 'a+') as modules:
+            modules.write('preload => res_odbc.so\npreload => res_config_odbc.so\n')
+
+    def write_db(self):
+        """Tell converters to write database information"""
+        for realtime_file in REALTIME_FILE_REGISTRY:
+            realtime_file.write_db(self.config_dir, self.meta, self.engine,
+                                   self.conn)
+
+    def cleanup(self, result):
+        """Cleanup information after test has completed.
+
+        This will call into registered converters to clean themselves up, and
+        then will restore all written configuration to their pre-test state
+
+        Keyword Arguments:
+        result: Running result of stop observer callbacks.
+        """
+        for realtime_file in REALTIME_FILE_REGISTRY:
+            realtime_file.cleanup_configs()
+            realtime_file.cleanup_db(self.meta, self.engine, self.conn)
+        self.res_odbc.restore()
+        self.extconfig.restore()
+        self.modules.restore()
+        self.conn.close()
+        return result
+
+
+REALTIME_FILE_REGISTRY.append(SorceryRealtimeFile('pjsip.conf',
+    # We don't include the following object types in this dictionary:
+    # * system
+    # * transport
+    # * contact
+    # * subscription_persistence
+    # The first two don't work especially well for dynamic realtime since they
+    # are not reloadable. contact and subscription_persistence are left out
+    # because they are write-only and so there should be no configuration
+    # items for those.
+    #
+    # The table names here are the ones that the alembic scripts use.
+    {
+        'res_pjsip': {
+            'endpoint': 'ps_endpoints',
+            'aor': 'ps_aors',
+            'auth': 'ps_auths',
+            'global': 'ps_globals',
+            'domain_alias': 'ps_domain_aliases',
+        },
+        'res_pjsip_endpoint_identifier_ip': {
+            'identify': 'ps_endpoint_id_ips',
+        },
+        'res_pjsip_outbound_registration': {
+            'registration': 'ps_registrations',
+        }
+    }
+))

-- 
To view, visit https://gerrit.asterisk.org/1802
To unsubscribe, visit https://gerrit.asterisk.org/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: I3ceaae8b8f4160a6d23f2062ce75017351c1559e
Gerrit-PatchSet: 8
Gerrit-Project: testsuite
Gerrit-Branch: master
Gerrit-Owner: Mark Michelson <mmichelson at digium.com>
Gerrit-Reviewer: Anonymous Coward #1000019
Gerrit-Reviewer: Joshua Colp <jcolp at digium.com>
Gerrit-Reviewer: Kevin Harwell <kharwell at digium.com>
Gerrit-Reviewer: Mark Michelson <mmichelson at digium.com>
Gerrit-Reviewer: Matt Jordan <mjordan at digium.com>
Gerrit-Reviewer: Richard Mudgett <rmudgett at digium.com>


