Order of the Green Polo: Requiescat In Pace

One of the first “group chat” technologies I was ever exposed to was Internet Relay Chat (IRC). This allowed a group of people to get together in areas called “channels” to discuss pretty much anything they felt like discussing. The service had to be hosted somewhere, and for most open source projects that was Freenode.

You might have seen that recently Freenode was taken over by new management, and the policies this new management implemented didn’t sit well with most Freenode users. In the grand open source tradition, most everyone left and went to other IRC servers, most notably Libera Chat.

In May of 2002 when I became the sole maintainer of OpenNMS, there was exactly one person who was dedicated full time to the project – me. What kept me going was the community I found on IRC, in both the #opennms channel and the local Linux users group channel, #trilug.

It was the people on IRC who supported me until I could grow the business to the point of bringing on more people. I still have strong friendships with many of them.

I was reminded of those early days as we migrated #opennms to Libera Chat. At the moment there are only 12 members logged in, and most of those are olde skoool OpenNMS people. I haven’t used IRC much since we switched to Mattermost (we host a server at chat.opennms.com) and with it a “bridge” to bring IRC conversations into the main Mattermost channel. Most people moved to use Mattermost as their primary client, but of course there were a few holdouts (Hi Alex!).

While I was reminiscing, I was also reminded of the Order of the Green Polo (OGP). When David, Matt and I started The OpenNMS Group in 2004, interest in OpenNMS was growing, and there was a core of those folks on IRC who were very active in contributing to the project. I was trying to think of some way to recognize them.

At that time, business casual, at least for men, consisted of a polo shirt and khaki slacks. Vendors often gifted polo shirts with their logos/logotypes on them to clients, and a number of open source projects sold them to raise money. We sold a white one and a black one, and I thought, hey, perhaps I can pick another color and use that to identify the special contributors to OpenNMS.

Green has always been associated with OpenNMS. In network monitoring, green symbolizes that everything is awesome. We even named one of our professional services products the “Greenlight Project”. Plus I really like green as a color.

Then the question became “what shade of green?” For some reason I thought of Tiger Woods who, by this time, late 2004, had won the prestigious Masters golf tournament three times (and would again the next spring). The winner of that tournament gets a “hunter green” jacket, and so I decided that hunter green would be the color.

Around the same time I happened across an article about a British knighthood called “The Order of the Garter”. I combined the two and thus “The Order of the Green Polo” was born.

It was awesome.

People who had been active in contributing to OpenNMS became even more active when I recognized them with the OGP honor. They contributed code and helped us with supporting our community, as well as adding a lot to the direction of the project. We started having annual developer conferences called “Dev-Jam” and OGP members got to attend for free so we could spend some face to face time with each other. I considered these men in the OGP to be my brothers.

As OpenNMS grew, we looked to the OGP for recruitment. It was through the OGP that Alejandro came to the US from Venezuela and now leads our support and services team (if OpenNMS went away tomorrow, getting him and his spouse here would have made it all worth it). When you hired an OGP member, you were basically paying them to do something they wanted to do for free. Think of it as eating an ice cream sundae and finding money at the bottom.

But that growth actually led to the decline of the OGP. Once we had hired everyone who wanted a job with us, the Order had less of a role to play. Dev-Jam was open to anyone, but it was mandatory for OpenNMS employees. Not all employees were OGP even though they were full-time contributors, so there was often pressure to induct new employees into the Order. And, most importantly, as we aged many OGP members moved on to other things. Hey, it happens, and it doesn’t reflect poorly on their past contributions.

We had a special mailing list for the OGP, but instead of discussing OpenNMS governance it basically became a “happy birthday” list (speaking of which, Happy Birthday Antonio!). When OpenNMS was acquired by NantHealth, we had to merge our mail systems and in the process the OGP list was deactivated. I don’t think many people noticed.

Recently it was brought to my attention that associating OpenNMS with the Masters golf tournament through the OGP could have negative connotations. The Masters is hosted by the Augusta National Golf Club and there have been controversies around their membership policies and views on race. It was suggested that we rename the OGP to something else.

One quick solution would be to just change the shade of green to, perhaps, a “stoplight” green. But this got me to thinking that the same logic used to associate the color with racism could apply to the whole “Order of” as well, since that was based on a British knighthood which, much like Augusta, is almost entirely male. Plus the British don’t have the best track record when it comes to colonialism, etc.

I think it is time for something totally new, so I’ve decided to retire the Order of the Green Polo. The members of the OGP were all male, and as we’ve grown our company and project I’m extremely excited that we have been able to greatly improve our diversity. I would love to come up with something that can embrace everyone who has a love of OpenNMS and wants to contribute to it, be that through code, documentation, the community, &tc.

OpenNMS has changed greatly over the past two decades, and it has become harder to contribute to a project that has grown exponentially in complexity. As part of my role as the Chief Evangelist of OpenNMS, I want to change that by coming up with easier ways for people to improve the OpenNMS platform, along with a new program to recognize those who contribute (and if you want to skip that part and get right to the job thingie, we’re hiring, but don’t skip that part).

To those of you who were in the Order of the Green Polo, thank you so much for helping us make OpenNMS what it is today. I’m not sure if it would exist without you. And even without the OGP mailing list, I plan to remember your birthdays.

Open Source Contributor Agreements

I noticed a recent uptick in activity on Twitter about open source Contributor License Agreements (CLAs), mostly negative.

Twitter Post About CLAs

The above comment is from a friend of mine who has been involved in open source longer than I have, and whose opinions I respect. On this issue, however, I have to disagree.

This is definitely not the first time CLAs have been in the news. The first time I remember even hearing about them concerned MySQL. The MySQL CLA required a contributor to sign over ownership of any contribution to the project, which many thought was fine when the company was independent, but started to raise some concerns when it was acquired by Sun and then Oracle. I think this latest resurgence is the result of Elastic deciding to change their license from an open source one to something more “open source adjacent”, and a number of people have taken exception to this (note: link contains strong language).

As someone who doesn’t write much code, I think deciding to sign a CLA is up to the individual and may change from project to project. What I wanted to share is a story of why we at OpenNMS have a CLA and how we decided on one to adopt, in the hopes of explaining why a CLA can be a positive thing. I don’t think it will help with the frustrations some feel when a project changes the license out from under them, but I’m hoping it will shed some light on our reasons and thought processes.

OpenNMS was started in 1999 and I didn’t get involved until 2001 when I started work at Oculan, the commercial company behind the project. Oculan built a monitoring appliance based on OpenNMS, so while OpenNMS was offered under the GPLv2, the rest of their product had a proprietary license. They were able to do this because they owned 100% of the copyright to OpenNMS. In 2002 Oculan decided to no longer work on the project, and I was able to become the maintainer. Note that this didn’t mean that I “owned” the OpenNMS copyright. Oculan still owned the copyright but due to the terms of the license I (as well as anyone else) was free to make derivative works as long as those works adhered to the license. While the project owned the copyright to all the changes made since I took it over, there was no one copyright holder for the project as a whole.

This is fine, right? It’s open source and so everything is awesome.

Fast forward several years and we became aware of a company, funded by VCs out of Silicon Valley, that was using OpenNMS in violation of the license as a base on which to build a proprietary software application.

I can’t really express how powerless we felt about this. At the time there were, I think, five people working full time on OpenNMS. The other company had millions in VC money while we were adhering to our business model of “spend less than you earn”. We had almost no money for lawyers, and without the involvement of lawyers this wasn’t going to get resolved. One thing you learn is that while those of us in the open source world care a lot about licenses, the world at large does not. And since OpenNMS was backed by a for-profit company, there was no one to help us but ourselves (there are some limited options for license enforcement available to non-profit organizations).

We did decide to retain the services of a law firm, who immediately warned us how much “discovery” could cost. Discovery is the process of obtaining evidence in a possible lawsuit. This is one way a larger firm can fend off the legal challenges of a smaller firm – simply outspend them. It made us pretty anxious.

Once our law firm contacted the other company, the reply was that if they were using OpenNMS code, they were only using the Oculan code and thus we had no standing to bring a copyright lawsuit against them.

Now we knew this wasn’t true, because the main reason we knew this company was using OpenNMS was that a disgruntled previous employee told us about it. They alleged that this company had told their engineers to follow OpenNMS commits and integrate our changes into their product. But since much of the code was still part of the original Oculan code base, it made our job much more difficult.

One option we had was to team up with Oculan and jointly pursue a remedy against this company. The problem was that Oculan went out of business in 2004, and it took us a while to find out that the intellectual property had ended up at Raritan. We were able to work with Raritan once we found this out, but by that time the other company had also gone out of business, pretty much ending the matter.

As part of our deal with Raritan, OpenNMS was able to purchase the copyright to the OpenNMS code once owned by Oculan, granting Raritan an unlimited license to continue to use the parts of the code they had in their products. It wasn’t cheap and involved both myself and my business partner using the equity in our homes to guarantee a loan to cover the purchase, but for the first time in years most of the OpenNMS copyright was held by one organization.

This process made us think long and hard about managing copyright moving forward. While we didn’t have thousands of contributors like some projects, the number of contributors we did have was non-trivial, and we had no CLA in place. The main question was: if we were going to adopt a CLA, what should it look like? I didn’t like the idea of asking for complete ownership of contributions, as OpenNMS is a platform and while someone might want to contribute, say, a monitor to OpenNMS, they shouldn’t be prevented from contributing a similar monitor to Icinga or Zabbix.

So we asked our community, and a person named DJ Gregor suggested we adopt the Sun (now Oracle) Contributor Agreement. This agreement introduced the idea of “dual copyright”. Basically, the contributor keeps ownership of their work but grants copyright to the project as well. This was a pretty new idea at the time but seems to be common now. If you look at the CLAs for, say, Microsoft and even Elastic, you’ll see similar language, although it is more likely worded as a “copyright grant” or something other than “dual copyright”.

This idea was favorable to our community, so we adopted it as the “OpenNMS Contributor Agreement” (OCA). Now the hard work began. While most of our active contributors were able to sign the OCA, what about the inactive ones? With a project as old as OpenNMS there are a number of people who had been involved in the project but, due to either other interests or changing priorities, were no longer active. I remember going through all the contributions in our code base and systematically hunting down every contributor, no matter how small their contribution, and asking them to sign the OCA. They all did, which was nice, but it wasn’t an easy task. I remember that one contributor’s e-mail bounced, and I finally hunted them down in Ireland via LinkedIn.

Now a lot of the focus of CLAs is on code ownership, but there is a second, often more important part. Most CLAs ask the contributor to affirm that they actually own the changes they are contributing. This may seem trivial, but I think it is important. Sure, a contributor can lie, and if it turns out they contributed something they didn’t actually own, the project is still responsible for dealing with that code. But there are a number of studies showing that simply reminding someone of a moral obligation goes a long way toward reinforcing ethical behavior. When someone signs a CLA with such a clause it will at least make them think about it and reaffirm that their work is their own. If a project doesn’t want to ask for a copyright assignment or grant, it should at least ask for something like this.

While the initial process was pretty manual, managing the OCAs is now largely automated. When someone opens a pull request on our GitHub project, a check verifies that they have signed the OCA and, if not, sends them to the agreement.
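
To make the mechanics concrete, here is a minimal sketch of that kind of gate, in Java since that is the language of OpenNMS. This is not our actual integration (we use an off-the-shelf GitHub check), and the class name, the signer list, and the messages are all hypothetical.

import java.util.Set;

/**
 * Hypothetical sketch of a CLA gate for pull requests. This is not the
 * actual OpenNMS integration (that is an off-the-shelf GitHub check),
 * and in real life the signer list would be backed by a database of
 * signed OCAs.
 */
public class OcaGate {

    private final Set<String> signers;

    public OcaGate(Set<String> signers) {
        this.signers = signers;
    }

    /** Decide what to tell a pull request author. */
    public String check(String githubLogin) {
        if (signers.contains(githubLogin)) {
            return "OCA on file: the pull request may proceed";
        }
        // Instead of merging, point the contributor at the agreement.
        return "No OCA on file: please sign the agreement first";
    }

    public static void main(String[] args) {
        OcaGate gate = new OcaGate(Set.of("alice", "bob"));
        System.out.println(gate.check("alice"));   // on file
        System.out.println(gate.check("mallory")); // sent to the agreement
    }
}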

The fact that the copyright was under one organization came in handy when we changed the license. One of my favorite business models for open source software is paid hosting, and I often refer to WordPress as an example. WordPress is dead simple to install, but it does require that you have your own server, understand setting up a database, etc. If you don’t want to do that, you can pay WordPress a fee and they’ll host the product for you. It’s a way to stay pure open source yet generate revenue.

But what happens if you work on an open source project and a much bigger, much better funded company just takes your project and hosts it? I believe one of the issues facing Elastic was that Amazon was monetizing their work and they didn’t like it. Open source software is governed mainly by copyright law and if you don’t distribute a “copy” then copyright doesn’t apply. Many lawyers would claim that if I give you access to open source software via a website or an API then I’m not giving you a copy.

We dealt with this at OpenNMS, and as usual we asked our community for advice. Once again I think it was DJ who suggested we change our license to the Affero GPL (AGPLv3) which specifically extends the requirement to offer access to the code even if you only offer it as a hosted service. We were able to make this change easily because the copyright was held by one entity. Can you imagine if we had to track down every contributor over 15+ years? What if a contributor dies? Does a project have to deal with their estate or do they have to remove the contribution? It’s not easy. If there is no copyright assignment, a CLA should at least include detailed contact information in case the contributor needs to be reached in the future.

Finally, remember that open source is open source. Don’t like the AGPLv3? Well you are free to fork the last OpenNMS GPLv2 release and improve it from there. Don’t like what Elastic did with their license? Feel free to fork it.

You might have detected a theme here. We relied heavily on our community in making these decisions. The OpenNMS Group, as stewards of the OpenNMS Project, takes seriously the responsibilities to preserve the open source nature of OpenNMS, and I like to think that has earned us some trust. Having a CLA in place addresses some real business needs, and while I can understand people feeling betrayed at the actions of some companies, ultimately the choice is yours as to whether or not the benefits of being involved in a particular project outweigh the requirement to sign a contributor agreement.

The Server Room Show Podcast

A couple of weeks ago I had the pleasure to chat with Viktor Madarasz on “The Server Room Show” podcast.

The Server Room Podcast Graphic

Viktor is an IT professional with a strong interest in open source, and we had a fun and meandering conversation covering a number of topics. As usual, I talked too much, so he ended up splitting our conversation across two episodes.

You can visit his website for links to the podcast from a large variety of podcast sources, or you can listen on YouTube to part one and part two.

It was fun, and I hope to be able to chat again sometime in the future.

Note: Viktor is originally from Hungary, as was my grandfather. I tried to make getting some Túró Rudi part of my appearance on the show, but unfortunately we haven’t figured out how to get it outside of Hungary, and we all know that I’d talk about open source for free pretty much any time and any place.

Thoughts on Security and Open Source Software

Due to the recent supply-chain attack on Solarwinds products, I wanted to put down a few thoughts on the role of open source software and security. It is kind of a rambling post and I’ll probably lose all three of my readers by the end, but I found it interesting to think about how we got here in the first place.

I got my first computer, a TRS-80, as a Christmas present in 1978 from my parents.

Tarus and his TRS-80

As far as I know, these are the only known pictures of it, lifted from my high school yearbook.

Now, I know what you are thinking: Dude, looking that good how did you find the time off your social calendar to play with computers? Listen, if you love something, you make the time.

(grin)

Unlike today, I pretty much knew about all of the software that ran on that system. This was before “open source” (and before a lot of things) but since the most common programming language was BASIC, the main way to get software was to type in the program listing from a magazine or book. Thus it was “source available” at least, and that’s how I learned to type as well as being introduced to the “syntax error”. That cassette deck in the picture was the original way to store and retrieve programs, but if you were willing to spend about the same amount as the computer cost you could buy an external floppy drive. The very first program I bought on a floppy was from this little company called Microsoft, and it was their version of the Colossal Cave Adventure. Being Microsoft it came on a specially formatted floppy that tried to prevent access to the code or the ability to copy it.

And that was pretty much the way of the future, with huge fortunes being built on proprietary software. But still, for the most part you were aware of what was running on your particular system. You could trust the software that ran on your system as much as you could trust the company providing it.

Then along comes the Internet, the World Wide Web and browsers. At first, browsers didn’t do much dynamically. They would reach out and return static content, but then people started to want more from their browsing experience and along came Java applets, Flash and JavaScript. Now when you visit a website it can be hard to tell if you are getting tonight’s television listings or unknowingly mining Bitcoin. You are no longer in charge of the software that you run on your computer, and that can make it hard to make judgements about security.

I run a number of browsers on my computer but my default is Firefox. Firefox has a cool plugin called NoScript (and there are probably similar solutions for other browsers). NoScript is an extension that lets the user choose what JavaScript code is executed by the browser when visiting a page. A word of warning: the moment you install NoScript, you will break the Internet until you allow at least some JavaScript to run. It is rare to visit a site without JavaScript, and with NoScript I can audit what gets executed. I especially like this for visiting sensitive sites like banks or my health insurance provider.

Speaking of which, I just filed a grievance with Anthem. We recently switched health insurance companies and I noticed that when I go to the login page they are sending information to companies like Google, Microsoft (bing.com) and Facebook. Why?

Blocked JavaScript on the Anthem Website

I pretty much know the reason. Anthem didn’t build their own website; they probably hired a marketing company to do it, or at least part of it, and that’s just the way things are done now. You send information to those sites in order to get analytics on who is visiting your site, and while I’m fine with that when I’m thinking about buying a car, I am not okay with it coming from my insurance company or my bank. There are certain laws governing such privacy, with more coming every day, and there are consequences for violating them. They are supposed to get back to me in 30 days to let me know what they are sending, and if it is personal information, even if it is just an IP address, it could be a violation.

I bring this up in part to complain but mainly to illustrate how hard it is to be “secure” with modern software. You would think you could trust a well known insurance company to know better, but it looks like you can’t.

Which brings us back to Solarwinds.

Full disclosure: I am heavily involved in the open source network monitoring platform OpenNMS. While we don’t compete head to head with Solarwinds products (our platform is designed for people with at least a moderate amount of skill with enterprise software, while Solarwinds is more “pointy-clicky”), we have had a number of former Solarwinds users switch to our solution, so we can be considered competitors in that fashion. I don’t believe we have ever lost a deal to Solarwinds, at least not one in which our sales team was involved.

Now, I wouldn’t wish what happened to Solarwinds on my worst enemy, especially since the exploit impacted a large number of US Government sites and that does affect me personally. But I have to point out the irony of a company known for criticizing open source software, specifically on security, letting this happen to their product. Take this post from one of their forums. While I wasn’t able to find out if the author worked at Solarwinds or not, they compare open source to “eating from a dirty fork”.

Seriously.

But is open source really more secure? Yes, but in order to explain that I have to talk about types of security issues.

Security issues can be divided into “unintentional” ones, i.e. bugs, and “intentional” ones, where someone is actively trying to manipulate the software. While all but the simplest software suffers from bugs, what happened to the Solarwinds supply chain was definitely intentional.

When it comes to unintentional security issues, the main argument against open source is that since the code is available to anyone, a bad actor could exploit a security weakness and no one would know. They don’t have to tell anyone about it. There is some validity to the argument but in my experience security issues in open source code tend to be found by conscientious people who duly report them. Even with OpenNMS we have had our share of issues, and I’d like to talk about two of them.

The first comes from back in 2015, and it involved a Java serialization bug in the Apache commons library. The affected library was in use by a large number of applications, but it turns out OpenNMS was used as a reference to demonstrate the exploit. While there was nothing funny about a remote code execution vulnerability, I did find it amusing that they discovered it with OpenNMS running on Windows. Yes, you can get OpenNMS to run on Windows, but it is definitely not easy so I have to admire them for getting it to work.

I really didn’t admire them for releasing the issue without contacting us first. Sending an email to “security” at “opennms.org” gets seen by a lot of people, and we take security extremely seriously. We immediately issued a workaround (which was to make sure the firewall blocked the port that allowed the exploit) and implemented the upgraded library when it became available. One reason we didn’t see it previously is that most OpenNMS users tend to run it on Linux and it is just a good security practice to block all but needed ports via the firewall.
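
For those unfamiliar with this class of bug, the vulnerable pattern looks something like the sketch below. To be clear, this is an illustration of the pattern and not OpenNMS code, and the port number is just an example. Any service that calls readObject() on bytes from an untrusted peer runs attacker-controlled deserialization logic before the application gets a say, and the affected commons library happened to provide the “gadget” classes that turned that into remote code execution.

import java.io.ObjectInputStream;
import java.net.ServerSocket;
import java.net.Socket;

/**
 * Illustrative sketch of the vulnerable pattern behind the 2015 Java
 * serialization exploits. This is not OpenNMS code, and the port is
 * just an example.
 */
public class UnsafeListener {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(1099);
             Socket client = server.accept();
             ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
            // DANGER: a crafted byte stream can execute code during
            // readObject(), before the result is ever cast or checked.
            Object request = in.readObject();
            System.out.println("received " + request);
        }
    }
}

This is also why the firewall workaround was effective: if the attacker can never hand the listener a byte stream, the dangerous deserialization never happens, and upgrading the library removed the gadget classes the payloads depended on.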

The second one is more recent. A researcher found a JEXL vulnerability in Newts, which is a time series database project we maintain. They reached out to us first, and not only did we realize that the issue was present in Newts, it was also present in OpenNMS. The development team rapidly released a fix and we did a full disclosure, giving due credit to the reporter.
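
The JEXL case is a different flavor of the same theme: evaluating a user-supplied expression effectively hands the user a small programming language. The sketch below is illustrative only, not the actual Newts code path, and it assumes an older or permissive JEXL configuration (recent JEXL releases restrict this by default). The point is that even an innocent-looking expression can reflect its way to class metadata, which is the first step toward executing arbitrary code.

import org.apache.commons.jexl3.JexlBuilder;
import org.apache.commons.jexl3.JexlEngine;
import org.apache.commons.jexl3.MapContext;

/**
 * Illustrative sketch, not the actual Newts code path. In permissive
 * JEXL configurations an attacker-supplied "expression" is really a
 * small program.
 */
public class JexlRisk {
    public static void main(String[] args) {
        JexlEngine jexl = new JexlBuilder().create();
        // Looks harmless, but it navigates from a string literal to
        // class metadata, the first step toward a sandbox escape.
        String payload = "'hello'.getClass().getName()";
        Object result = jexl.createExpression(payload).evaluate(new MapContext());
        System.out.println(result); // prints: java.lang.String
    }
}

The general fix is to restrict what expressions are allowed to touch, or to avoid evaluating user input entirely.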

In my experience that is the more common case within open source. Someone finds the issue, either through experimentation or by examining the code, they communicate it to the maintainers and it gets fixed. The issue is then communicated to the community at large. I believe that is the main reason open source is more secure than closed source.

With respect to proprietary software, it doesn’t appear that having the code hidden really helps. I was unable to find a comprehensive list of zero-day Windows exploits, but there seem to be a lot of them. I don’t mean to imply that Windows is exceptionally buggy, but it is a huge and ubiquitous piece of software, and that complexity lends itself to bugs. Also, I’m not sure the code is truly hidden. I’m certain that someone, somewhere, outside of Microsoft has a copy of at least some of it. Since that code isn’t freely available, they probably have it for less than noble reasons, and one cannot expect them to report any security issues they find so that they can be fixed.

There seems to be this misunderstanding that proprietary code must somehow be “better” than open source code. Trust me, in my day I’ve seen some seriously crappy code sold at high prices under the banner of proprietary enterprise software. I knew of one company that wrote up a bunch of fancy bash scripts (not that there is anything wrong with fancy bash scripts) and then distributed them encrypted. The product shipped with a compiled program that would spawn a shell, decrypt the script, execute it and then kill the shell.

Also, at OpenNMS we rely heavily on unit tests. When a feature is developed, the person writing the code also creates code to “test” the feature to make sure it works. When we compile OpenNMS the tests are run to make sure the changes being made didn’t break anything that used to work. Currently we have over 8000 of these tests. I was talking about this to a person who worked for a proprietary software company, and he said, “oh, we tried that, but it was too hard.”
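
If you have never worked with unit tests, here is a toy example of the pattern. This is not an actual OpenNMS test, and the helper being tested is made up for illustration; the value is that the expectation is executable, so a later change that breaks the behavior fails the build instead of reaching users.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

/**
 * Toy unit test (JUnit 4), not from the OpenNMS code base. The helper
 * under test is hypothetical, standing in for a real feature.
 */
public class InterfaceNameTest {

    // Hypothetical feature: normalize interface names before lookup.
    static String normalize(String name) {
        return name.trim().toLowerCase();
    }

    @Test
    public void normalizeStripsWhitespaceAndCase() {
        assertEquals("eth0", normalize("  Eth0 "));
        assertEquals("lo", normalize("LO"));
    }
}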

Finally, I want to get back to that other type of security issue, the “intentional” one. To my understanding, someone was able to get access to the servers that built and distributed Solarwinds products, and they added malware that let them compromise target networks when customers upgraded their applications. Any way you look at it, it was just sloppy security, but I think the reason it went undetected for so long is that the whole proprietary process for distributing the software was limited to so few people that it was easy to miss. These kinds of attacks happen in open source projects, too; they just get caught much faster.

That is the beauty of being able to see the code. You have the choice to build your own packages if you want, and you can examine code changes to your heart’s content.

We host OpenNMS on GitHub. If you check out the code, you can run something like:

git tag --list

to see a list of release tags. As I write this the latest released version of Horizon is 26.0.1. To see what changed from 26.0.0 I can run

git log --no-merges opennms-26.0.0-1 opennms-26.0.1-1

If you want, there is even a script to run a “release report” which will give you all of the Jira issues referenced between the two versions:

git-release-report opennms-26.0.0-1 opennms-26.0.1-1

While that doesn’t guarantee the lack of malicious code, it does put the control back into your hands and the hands of many others. If something did manage to slip in, I’m sure we’d catch it long before it got released to our users.

Security is not easy, and as with many hard things the burden is eased the more people who help out. In general open source software is just naturally better at this than proprietary software.

There are only a few people on this planet who have the knowledge to review every line of code on a modern computer and understand it, and that is with the most basic software installed. You have to trust someone and for my peace of mind nothing beats the open source community and the software they create.

Prodigal Customers

Growing up in the southern United States meant Sunday mornings were spent at Sunday School. One of the stories we would study was the Parable of the Prodigal Son. A man has two sons. The younger son asks for his inheritance in advance and he goes off and squanders it. When he returns, his father throws a big celebration to welcome him back.

I never really got the point of that story, as I always identified with the older, dutiful son, so it is surprising that it took working with OpenNMS for me to understand it.

We have great customers. Since we do little marketing, before we get a customer they have to first discover OpenNMS, then investigate it to see if it meets their needs, and only then do they contact us. It means that they are self-selecting, and without exception they are incredibly smart, physically beautiful and possessed of a wit so sharp they make Ginsu knives look dull. (grin)

The first company to ever buy an OpenNMS support subscription did so in December of 2001, and this year they renewed for the 17th time. It is a wonderful testament to the work of the team that they created something to inspire such a long commitment.

That said, we do lose a few customers each year. The first one I lost was a little heartbreaking. It was a hospital in Virginia, and when I called them to see if they would renew their support subscription they told me “no”. I was a little shocked, as I was unaware of any problems and they hadn’t opened tickets in a while, and they told me that was the point. They loved OpenNMS, but it “just worked”, so they saw no value in getting support even though they were still using it.

A more common reason for losing a customer is that our “internal champion” leaves. OpenNMS is a complex and powerful tool, and it does take a while to climb the learning curve and see its full potential. If all of that knowledge is focused on one person, and that person leaves, their replacement can be overwhelmed and seek out something simpler, even if it is more expensive and less powerful.

I am always saddened when this happens, but lately we’ve been experiencing what I’m calling “Prodigal Customers”. These are customers who leave and come back.

Cartoon by Chad Essley http://www.cartoonmonkey.com

I love them, and always want to slaughter (figuratively) the fattened calf to welcome them back.

It’s hard to explain, but while it is wonderful to have someone use something you’ve created for almost two decades straight, it is almost more rewarding to have someone go and try something else and discover it doesn’t stack up. Heck, I’d love it if all our customers could try out every possible option, because those that then chose OpenNMS for their solution would truly recognize what an awesome platform it can be.

Being 100% open source, OpenNMS does not have any way to “lock in” a particular customer. You can use it with our services or without, but you always have access to the latest code. Thus choosing to use OpenNMS is a validation of the work we’ve put into it, and whether you are a long time customer, a new customer, or a “prodigal” customer, your preference to use OpenNMS makes all the work to create it worthwhile.

Welcome to 2018

I love New Year’s. Not exactly the party on New Year’s Eve, as I tend to spend it as a quiet evening with friends, but the idea of starting over and starting fresh.

It is also a good time to reflect on the year past. While 2017 was pretty tumultuous for the world at large, for OpenNMS it was a pretty good year.

Our decision to split OpenNMS into two versions is still paying off. We did three major releases of Horizon (19, 20, and 21) as well as point releases every month there wasn’t a major release, and Meridian 2017 finally came out, although later than I would have liked. Horizon users get to experience rapid advancements in power and features while Meridian users can relax knowing their system is very stable and secure.

While it is hard to pick out the best features added in 2017, I’d have to go with OpenNMS Helm and the Minion.

Helm allows you to combine and manage multiple instances of OpenNMS from a Grafana dashboard.

OpenNMS Helm

The Minion is our foray into the whole “Internet of Things” space with an application that can be installed on a small device and used to send remotely collected data to a central OpenNMS instance. Minions have minimal configuration and can be configured redundantly, yet they have the ability to collect massive amounts of monitoring data. We’re very eager to see what novel uses our users come up with for the technology (we have one customer that is “Minion-only”, i.e. they do no monitoring or collection from the central OpenNMS instance at all and instead just put two Minions at each location).

As for the OpenNMS Group, the company behind OpenNMS, we experienced modest growth but still had a record year for gross revenue. What is more exciting is that net income was also a record and several hundred percent above last year, so we are going into 2018 well positioned in our Business Plan of “Spend less than you earn”.

2018 should be exciting. The OpenNMS Drift project brings telemetry (flow) data into OpenNMS, and we are working on some exciting features regarding correlation which will probably involve new machine learning technology.

As always, these features will be available as 100% free and open source software.

Personally, I added three new countries to my list, bringing the total number of countries I’ve been in to forty. I had a great time in Estonia and Latvia, and I really enjoyed my trip to Cuba.

One last thing. If you are reading this you are probably a user of OpenNMS. If so, thank you. We are a small but dedicated group of people creating this platform and often we don’t get much feedback on who uses it and what they like about it. The fact that people do find it useful makes it worthwhile, and we wouldn’t exist without our users and clients.

So, Happy New Year, and may 2018 exceed your wildest expectations.

Update on Expensify

I recently posted a rant on how a vendor we use, Expensify, appeared to be exposing confidential data to workers with the Amazon Mechanical Turk service. In response to the general outcry, they posted a detailed explanation on their blog.

It did little to change my mind.

So apparently what happened is that they used the Mechanical Turk from 2009 to 2012, meaning that if you were a customer back then, your information was disclosed to those third-party workers. Then they stopped, supposedly using some other, similar, in-house system.

But some genius there decided that the best way for certain customers to ensure their receipts were truly private was to have them use the Mechanical Turk with their own staff. I covered that in my first post, and it is so complex it hardly registers as a solution.

Of course, they decided to test this new “solution” starting the day before the American Thanksgiving holiday. This was done using receipts from “non-paying customers”. While we pay to use the service (not for much longer), if you were trying it out for free your receipts were exposed to Mechanical Turk workers. Heh, if you aren’t paying for the product you are the product. The post goes on to talk about the security of the Mechanical Turk service, which was surprising because they went on and on about how they didn’t use it.

What really angered me was this paragraph:

The company was away with our families and trying hard to be responsive, while also making the most of a rare opportunity to be with our loved ones. Accordingly, this vacuum of information provided by the company was filled with a variety of well-intentioned but inaccurate theories that generated a bunch of compounding, exaggerated fears. As a family-friendly business we try hard to separate work life from home life, and in this case that separation came at a substantial cost.

Well, boo hoo. If you truly cared about your employees you wouldn’t start a major beta test the day before a big holiday. I spent my holiday worrying about my employees’ personal data possibly being exposed through the Expensify service. Thanks for that.

What pisses me off the most is this condescending Silicon Valley speak that their lack of transparency is somehow our fault. That our fears are just “exaggerated”. When Ryan Schaffer posted on Quora that nothing personal is included on receipts, he demonstrated a tremendous lack of understanding about something on which he should be an expert. As they turn this new leaf and try to be more transparent, I noticed he deleted his answer from the Quora question.

Smells like a cover up to me.

Look, I know that being from North Carolina I can’t possibly understand all the nuances of the brain-heavy Valley, but if Expensify truly does have a “patented, award-winning” methodology for scanning receipts, why don’t they just make that available to their customers instead of using the Turk? This long-winded defense of the Turk seems like they are protesting too much. Something doesn’t make sense here.

I’ve told my folks to stop using SmartScan and that we would move away from Expensify at the end of the year. If you use, or are planning to use, Expensify you should deeply consider whether or not this is a company you want to associate with and if they will act in your best interests.

I decided the answer was “no”.

Dougie Stevenson – The Elvis of Network Management

David messaged me yesterday that Dougie Stevenson had died.

I hadn’t seen Dougie in person in a long time, but I’d kept up with him through the very networks he, in part, helped manage. While I had heard he wasn’t in the best of health, the news of his passing hit me harder than I expected.

I can’t remember the first time I met Dougie. I do remember it was always Dougie, rarely Doug and never Douglas. While most adults might drop such a nickname, it is a reflection on his almost childlike friendliness and good nature that he kept it. I do know that I was working at a company called Strategic Technologies at the time, so this would be the mid-1990s. I was working with tools like HP OpenView, and I’d often run into Dougie at OpenView Forum events. When he decided to take a job at Predictive Systems I followed him, even though it meant commuting to DC four to five days a week.

It was at Predictive that I got to see his genius at work. With his unassuming nature and down-to-earth mannerisms it was easy to miss the mind behind them, but when it came to seriously thinking about the problems of managing networks there were few who could match his penchant for great ideas. I used to refer to him as the “Elvis” of network management.

We were both commuters then. While he had lived in many places, he called Texas home as much as I do North Carolina. We were working on a large project for Qwest near the Ballston metro stop, and after work we’d often visit the nearby Pizzeria Uno. The wait staff loved to see Dougie, and would always laugh when he referred to the cheese quesadillas appetizer as “queasy-dillies”. This was back during the first Internet bubble, around 1999, and while many of us were working hard to make our fortune, Dougie never really cared that much for money. He used to joke it would all go to his ex-wives anyway. I know he had been married but we didn’t talk too much about that aspect of his life. He’d much rather talk about the hotrod pickup truck he was always working on when he had the time. I do remember he once walked away from a small fortune over principles – that was just the kind of person he was.

I can’t remember the last time I saw Dougie, but it could have been in Austin back in 2008. I have this really bad picture I took then:

Dougie and Me

Notice he has on his OpenNMS shirt. He never failed to promote our efforts to create a truly free and open source network management platform whenever he could.

As I’ve gotten older, I wish more for time than money. Between the business and the farm I’m kept so busy that I rarely get to spend as much time with the amazing people I know, and it would have been nice to see Dougie at least once more. In any case, a small part of him lives on in the hearts and minds of those who did know him.

Though it saddens me to say it, Elvis has left the building.

Expensify and Why I Hate the Cloud

Over the weekend I found out that Expensify, a service I use for my company, outsources a feature to Amazon’s Mechanical Turk service. Expensify handles the management of business expenses, which for a company like ours can be problematic as we do a lot of travel when deploying services. The issue is that the feature, the “smart scanning” of receipts, could potentially expose confidential data to third parties. As a user of Expensify, this bothers me.

Expensify touts “SmartScan” as:

As background, SmartScan is the patented, award-winning technology that underpins our “fire and forget” design for expense management. When you get a receipt, rather than stuffing it into your pocket to dread for later, just:

1. Take your phone out of your pocket
2. SmartScan the receipt
3. Put your phone back in your pocket

What they never told us is that if their “patented, award-winning technology” can’t read your receipt, they send it to the Mechanical Turk, which in turn presents it to a human being who will interpret the receipt manually. The thing is, we have no control over who will see that information, which could be confidential. For example, when I post a receipt for an airline ticket, it may include my record locator, ticket number and itinerary, all of which are sensitive.

This apparently never occurred to the folks at Expensify. Take this Quora answer from Ryan Schaffer, listed as Expensify Director of Marketing & Strategy:

Also, its worth mentioning, they don’t see anything that can personally identify you. They see a date, merchant, and amount. Receipts, by their very nature, are intended to be thrown away and are explicitly non-sensitive. Anyone looking at a receipt is unable to tell if that receipt is from me, you, your neighbor, or someone on the other side of the world.

Wrong, wrong, wrong. It seems that Mr. Schaffer may limit his business expenses to the occasional coffee at Starbucks, but for the rest of us they are rarely that simple. For someone whose job is to perfect the handling of receipts, his view is pretty myopic.

For examples of what Expensify exposes, take a look at this tweet by Gary Pendergast.

Information Exposed by Expensify Tweet

It is also worth noting that it appears Expensify does its business on the Mechanical Turk as “Fluffy Cloud” instead of Expensify, which strikes me as a little disingenuous.

In a blog post this morning the company addressed this:

As you might imagine, doing this is easier said than done. Given the enormous scale and 24/7 nature of this task, we have agents positioned around the world to hand off this volume from timezone to timezone. Most of the US team is located in Ironwood, MI or Portland, OR (where we have offices and can train in person). Most of the international team is in Nepal or Honduras (where we work with a third-party provider to manage the on-site logistics). But regardless of the location, every single agent is bound by a confidentiality agreement, and subject to severe repercussions if that agreement is broken.

But if this were true, why are random people on Twitter announcing that they can see this data? Are they relying on the Amazon agreement with the people working as part of the Mechanical Turk? That doesn’t instill much confidence in me. But then in the same blog post they double down, and suggest that if you want extra security, you can just set up your own staff as part of the Mechanical Turk:

1. You hire a 24/7 team of human transcription agents.
         o For the fastest processing we suggest staffing three separate shifts — or daytime shifts in three different offices around the world. Otherwise your receipts might lag for many hours before getting processed.

2. They apply to Amazon Mechanical Turk for an account. Be aware that this is a surprisingly involved process, including:
         o The agent must sign up using their actual personal Amazon account. If your account doesn’t have an adequate history of purchases (each of which implies a successful credit card billing transaction and package delivery) or other activity, you will be rejected.
         o The agent must provide their full name, address, and bank account information for reimbursement. Amazon verifies this with a variety of techniques (eg, confirm that your IP is in the country you say you are, verify the bank account is owned by the name and address provided, full criminal background check), and if anything doesn’t add up, you will be rejected.
         o Rejection is final. It requires such an abundance of verifiable documentation (most notably being an active Amazon account with a long history) that you can’t just create a new account and try again.
         o There is no apparent appeals process. Accordingly, I would recommend confirming before hiring that the candidate can pass Amazon Mechanical Turk’s many strict controls because we have no ability to override their judgement.

3. You notify us of the “workerID” of each of your authorized agents.
         o Though you are not obligated to share your staff’s identity with us directly, your staff will still be obligated to follow the Expensify terms of services. Failure to comply with our terms will result in an appropriate response, starting with immediate banning by our automated systems, ranging up to our legal team subpoenaing you (or failing that, Amazon) for the identity of the agent to press charges directly.

4. We will create a “Qualification” for your “Human Intelligence Tasks” (HITs) that ensures only your agents will see your receipts.

5. Your staff will use the Amazon Mechanical Turk interface to discover and process your employee’s receipts.

That’s the solution? This is what passes for security at Expensify? Hire three shifts of employees all using verified personal Amazon accounts and then you can be sure your confidential data is kept confidential?

Wouldn’t it just be easier to create a small webapp that would present receipts to people in a company directly without going through the Mechanical Turk? Heck, why not just bounce it back out to you – it isn’t that great of a chore.

Plus, basically, if you don’t do this Expensify is saying they can’t keep your information secure.

This is what frustrates me the most about “the Cloud”. Everyone is in such a rush to deploy solutions that they just don’t think about security. Hey, it’s only receipts, right? Look what I was able to find out with just a discarded boarding pass – receipts can have much more information. And this is from a company that is supposed to be focused on dealing with expenses.

I demand two things from companies I trust with our information in the cloud: security and transparency. It looks like Expensify has neither.

I will be moving us away from Expensify. If you know of any decent solutions, let me know. Xpenditure looks pretty good, and since they are based in the EU perhaps they understand privacy a little better than they do in San Francisco.

2017 Australian Network Operators Group Conference

Back in June I was chatting with “mobius” about all things OpenNMS. He lives and works in Perth, Australia, and suggested that I do a presentation at AusNOG, the Australian Network Operators Group.

One of the things we struggle with at OpenNMS is figuring out how to make people aware it exists. My rather biased opinion is that it is awesome, but a lot of people have never heard of it. To help with that I used to attend a lot of free and open source conferences, but we’ve found out over the years that our user base tends to be more along the lines of large enterprises and network operators that might not be represented at such shows.

Imagine my surprise when I found out that there is a whole slew of NOGs (network operator groups) around the world. It seems to me that people who attend these conferences have a more immediate need for OpenNMS, and with that in mind I submitted a talk to AusNOG. I was very happy it was selected, not least because I would get to return to Australia for a third time.

AusNOG David Hughes

I wasn’t sure what to expect, but was pleasantly surprised. The event was extremely well organized and I really liked the format. Many conferences, as they become successful, respond by adding multiple tracks. This can be useful if the tracks are easy to delineate, but often you get “track bloat”, where the attendees are overwhelmed with choice and, as a presenter, you can end up with a nearly empty room if you are scheduled against a popular speaker. At AusNOG there is only one track of somewhat short, highly curated talks, which results in a very informative conference without the stress of trying to determine the best set of talks to attend.

AusNOG Program

(Note: Visit the “programme” site and click on talk titles to download the presentations)

The venue was very nice as well. Held at the Langham Hotel, the conference took place in a ballroom that held the 200+ people, with a lobby out front for socializing and a few sponsor booths. The program consisted of 90 minutes of presentations separated by a break. They alternated sets of three 30 minute talks with two 45 minute talks. I found all of the presentations interesting, but I have to admit that I spent a lot of time looking up unfamiliar acronyms. Since the attendees are network operators, Autonomous System (AS) numbers were thrown around in much the same way SNMP Private Enterprise Numbers are shared among network management geeks. Australia is also in the process of implementing its National Broadband Network (nbn™) to provide common infrastructure across the country, so of course that was the focus of a number of talks.

In the middle of each day we broke for lunch, which was pretty amazing. The Langham restaurant had a sushi section, a section for Indian food, a large buffet of your standard meat and veg, and at least three dessert sections: one with “healthy” fruit and cheese, another focused on ice cream and a chocolate fountain, and a large case full of amazing pastries and other desserts. All with a view out over the Yarra river.

AusNOG Yarra River

I really liked the format of AusNOG and suggest we adopt it for the next OpenNMS conference. The few talks that were either over my head or not really of interest were over pretty quickly, and I found myself enjoying most of them. I thought it was interesting that concepts we usually equate with managing servers are being adopted on the network side. One talk discussed topics such as running switch software in containers, while another discussed using Ansible and Salt to manage the configuration of network gear.

AusNOG Running Switch Software in Containers

I was happy to see that my talk wasn’t the only one that focused on open source. Fifteen years ago, getting large companies to adopt an open source solution was still in the evangelical stage, but now it is pretty much standard. Even Facebook presented a talk on their open source NetNORAD project, a distributed system for measuring latency and packet loss.

I did have a few favorite talks. In “The Future Is Up in the Sky” Jon Brewer discussed satellite Internet. As someone who suffered for years with a satellite network connection, it was interesting to learn what is being done in this area. I used a system with a satellite in geosynchronous orbit which, while it worked, ended up with latency on the order of a second in real-world use. It turns out that there are a number of solutions using shorter distances, with satellites in low or medium earth orbit. While they present their own challenges, this is still the most promising way to get network access to remote areas.

Another talk by Mark Nottingham discussed issues associated with the increased use of encrypted protocols and the challenges they create for network operators. While the civil libertarian in me applauds anything that makes it harder for surveillance to track users, as a network monitoring guy this can make it more difficult to track down the cause of network issues.

And this will become even more important as the network changes with the adoption of Internet of Things (IoT) devices. Another good talk discussed the issue of IoT security. Even today the main consensus is that you protect weakly secured devices with a firewall, but a number of new exploits leverage infected systems within the firewall for DDoS attacks.

AusNOG Internet of Things Vulnerabilities

I think my own talk went well, though it was hard to squeeze a good introduction to OpenNMS into 30 minutes. I did manage it – 30 minutes on the nose – but it didn’t leave time for questions. As a speaker I really liked the feedback the conference provided, in the form of a rather long report showing what the attendees thought of the talk, complete with cool graphs.

AusNOG Speaker Response

I really enjoyed this conference, both as an attendee and a speaker. While I hope to speak at more NOGs, I would much rather encourage OpenNMS users who are happy with the project to submit real-world talks on how they use the platform to their local tech groups. I think it tells a much stronger story to have someone a little less biased than myself talking about OpenNMS, plus you get to visit cool conferences like AusNOG.