Recently, a “Powered by OpenNMS” customer asked Matt Brozowski, our CTO, about the OpenNMS test plan.
I thought his reply warranted a blog post.
The way OpenNMS does testing is as follows:
We have extensive automated tests that run whenever code is changed.
The list of successful tests for the 1.10 release is here (as you can see, there are 3,743 tests).
Our automated build system does not create RPMs unless all of these tests have passed; when they do, it makes a new RPM each night.
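The gating logic is simple: run the whole suite, and only package if everything passes. Here is a minimal sketch of that idea; the commands are placeholders (the real build uses the project's own tooling), and `gate_nightly_build` is a hypothetical name for illustration:

```python
import subprocess

def gate_nightly_build(test_cmd, package_cmd):
    """Run the test suite; only build the nightly RPM if every test passes.
    Both commands are placeholders for the real test and packaging steps."""
    tests = subprocess.run(test_cmd)
    if tests.returncode != 0:
        # Any failure blocks the nightly package entirely.
        return "tests failed: no RPM tonight"
    subprocess.run(package_cmd)
    return "tests passed: RPM built"

# Simulated runs ('true'/'false' stand in for the real suite):
print(gate_nightly_build(["true"], ["true"]))   # suite passes
print(gate_nightly_build(["false"], ["true"]))  # suite fails
```

The point is that packaging is downstream of testing: a red suite means no artifact at all, not an artifact with a warning.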
In addition to these tests, each build also runs a set of tests against an installed system (we call these Smoke Tests).
The Smoke Tests use automated scripts that install and configure a system, then drive the Selenium GUI emulator to validate that the GUI is functioning correctly. These tests are relatively new for us, but I'm sure we will be adding to them over the coming months.
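In the real smoke tests, Selenium drives an actual browser against the installed system; stripped to its essence, though, each check fetches a page and asserts it looks like what a working install would serve. The sketch below illustrates just that predicate step (the page markers and the `login_page_ok` helper are my own assumptions for illustration, not the project's actual checks):

```python
def login_page_ok(html: str) -> bool:
    """Return True if a fetched page looks like a working OpenNMS login
    screen. In the real smoke tests, Selenium loads the page in a browser
    (e.g. driver.get(base_url)) and inspects it; the markers below are
    assumed, not the project's real assertions."""
    return "OpenNMS" in html and "login" in html.lower()

# Validate the check against canned HTML instead of a live install:
print(login_page_ok("<title>OpenNMS Web Console</title><form id='login'>"))
print(login_page_ok("<h1>HTTP 500</h1>"))
```

A suite of such checks, one per key screen, is enough to catch the "it installs but the GUI is broken" class of regression that unit tests alone miss.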
Lastly, we release a milestone every month, and a number of community members install OpenNMS in parallel with their production systems, validate that the features they use still work with their devices, and report bugs as they come up.
After features are complete, we mark these milestones as release candidates and label them something like 1.9.9x; from this point on, only bug fixes are allowed. Next week we will release 1.9.91 as a release candidate in preparation for 1.10.
This is the list of issues that are currently considered critical for release of 1.10 as a stable release.
Not all of these issues have been validated, but these are the ones that remain to be considered.
After we release 1.9.91, any remaining issues will be moved to 1.9.92, and so on, until we get them fixed and can release 1.10.
I hope this helps give you an idea of the quality of our testing strategy.