[asterisk-dev] [Code Review]: SDP Media Attribute Interface + H.263/H.264 Format Attribute Modules
Paul Belanger
paul.belanger at polybeacon.com
Tue Jul 3 10:09:34 CDT 2012
On 12-07-03 10:40 AM, Matthew Jordan wrote:
>
> ----- Original Message -----
>> From: "Paul Belanger"<paul.belanger at polybeacon.com>
>> To: asterisk-dev at lists.digium.com
>> Sent: Tuesday, July 3, 2012 9:22:23 AM
>> Subject: Re: [asterisk-dev] [Code Review]: SDP Media Attribute Interface + H.263/H.264 Format Attribute Modules
>>
>> On 12-07-02 01:26 PM, Joshua Colp wrote:
>>>
>>>
>>>> On June 27, 2012, 11:19 a.m., Mark Michelson wrote:
>>>>> I'm giving a "ship it" due to the testing you said you've done.
>>>>> I'd feel better if there were unit tests added to ensure that
>>>>> SDP is parsed and generated properly.
>>>>
>>>> Paul Belanger wrote:
>>>> Agreed; I think adding unit tests before 'ship it' would also
>>>> be acceptable here.
>>>
>>> Unit tests are a separate issue, but I wanted to get eyes on this
>>> before going down that road.
>>>
>> Why separate? I'm trying to understand why we as a project don't focus
>> more on unit tests first, before writing code and merging it.
>
> I believe that is up to the developer and the review process. If reviewers
> feel that the code change is sufficiently complex and has a high degree of risk
> associated with it, then it is a valid finding against that code change to
> require unit or functional tests. It's certainly fine as well to post the tests
> on a separate review to keep the code reviews manageable.
>
And I think this is what we need to come up with: the review process.
The best-case solution is that for each new code review, our CI tool
goes out and runs our series of tests against it, even before it is
merged into trunk (or a branch). I believe leaving the testing up to a
developer before merging hasn't worked out too well for us in the past,
mostly because a developer has a specific environment setup, and
managing more than one is a waste of their time and not needed.
Or worse, developers don't know about our testing procedures. Show of
hands: how many people don't even have the testsuite checked out onto
their local system, for example? Or how many don't use
--enable-dev-mode to check for compiler errors? I know I don't always
do it.
> Answering the question of TDD - which I think you're alluding to - is much
> more complex. TDD provides a lot of benefits, but also has significant costs.
> While I personally am a firm advocate of TDD, particularly for new projects
> and - in some cases, new code - switching your software development processes to
> TDD requires significant thought and careful consideration. It also incurs a
> significant burden on open source contributors.
>
> I will say that the vast majority of new features in Asterisk have some level
> of tests, and the new features that do not have tests either do not because:
> a) The necessary tools to test them don't exist yet (see: WebSockets)
> b) The tools to test them are still under evaluation and we're working on it
> (see: res_jingle2)
> c) We're working on the tests concurrently, but we're trying to review things
> in order of priority.
>
> I fully expect the new features going into Asterisk 11 to have tests covering
> their core functionality by the time the first release is made.
>
I guess what I was trying to get at is that parsing / generating
'something' seems like a pretty simple unit test, IMO. Since we are
talking about SDP, which I think we can agree is a major part of
chan_sip, I would have thought it would be fairly straightforward to
add new tests for new features. If that is incorrect, then I believe we
need to make it easier to add new tests; if not, please move along.
In this case, tests are in the pipeline; however, because of the 'ship
it' I assumed the code was going to be merged and we would move on to
the next thing. Looks like I might have been wrong.
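To make that concrete, here is a rough sketch of what such a test could
look like with our existing unit test framework. This is illustration
only: the ast_format_sdp_parse() call and the fmtp string are my
assumptions about the interface on the review, not necessarily the
actual API.

/* Sketch only: a unit test for SDP fmtp parsing, assuming a parse
 * entry point along the lines of ast_format_sdp_parse().  Adjust to
 * whatever the format attribute interface actually exposes. */
#include "asterisk.h"
#include "asterisk/test.h"
#include "asterisk/format.h"
#include "asterisk/module.h"

AST_TEST_DEFINE(sdp_fmtp_h264_parse)
{
	struct ast_format format = { 0, };

	switch (cmd) {
	case TEST_INIT:
		info->name = "sdp_fmtp_h264_parse";
		info->category = "/main/format/";
		info->summary = "Parse an H.264 fmtp attribute";
		info->description = "Feeds a typical a=fmtp attribute string to "
			"the format attribute parser and checks that it is accepted.";
		return AST_TEST_NOT_RUN;
	case TEST_EXECUTE:
		break;
	}

	ast_format_set(&format, AST_FORMAT_H264, 0);

	/* Hypothetical call into the SDP media attribute interface */
	if (ast_format_sdp_parse(&format, "profile-level-id=42e01f; packetization-mode=1")) {
		ast_test_status_update(test, "Failed to parse a valid H.264 fmtp line\n");
		return AST_TEST_FAIL;
	}

	return AST_TEST_PASS;
}

static int load_module(void)
{
	AST_TEST_REGISTER(sdp_fmtp_h264_parse);
	return AST_MODULE_LOAD_SUCCESS;
}

static int unload_module(void)
{
	AST_TEST_UNREGISTER(sdp_fmtp_h264_parse);
	return 0;
}

AST_MODULE_INFO_STANDARD(ASTERISK_GPL_KEY, "SDP fmtp parse unit test sketch");

Something like that could live under tests/, get built when
TEST_FRAMEWORK is enabled, and be run from the CLI with
'test execute category /main/format/'.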
>> Is it a time thing or just too much work getting a box setup for unit
>> tests? Also, do people see value in having our code review tool run
>> 'tests' before said patch is merged into svn?
>>
>
> I don't see this as a function of the code review tool, so much as the
> continuous integration process. Having used such systems in the past, there
> is a lot of benefit in having the continuous integration process reject
> check-ins if some test fails; however, that would require significant tool
> integration and process changes that would require some thought.
>
Yup, I have a task to generate some notes about various code review / CI
tools and how they work together. I hope to get that up on the wiki
over the next few weeks.
--
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belanger at polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger