On Leaving OpenNMS

It is with mixed emotions that I am letting everyone know that I’m no longer associated with The OpenNMS Group.

Two years ago I was in a bad car accident. I suffered some major injuries which required 33 nights in the hospital, five surgeries and several months in physical therapy. What was surprising is that while I had always viewed myself as somewhat indispensable to the OpenNMS Project, it got along fine without me.

Also during this time, The OpenNMS Group was acquired. For fifteen years we had survived on the business plan of “spend less money than you earn”. While that ensured the longevity of the company and the project, it didn’t leave much room for us to pursue new ideas, because we simply did not have the resources to fund them.

Since the acquisition, both the company and the project have grown substantially, and this was during a global pandemic. With OpenNMS in such a good place I began to think, for the first time in twenty years, about other options.

I started working with OpenNMS in September of 2001. I refer to my professional career before then as “Act I”, with my time at OpenNMS as “Act II”. I’m now ready to see what “Act III” has in store.

While I’m excited about the possibilities, I will miss working with the OpenNMS team. They are an amazing group of people, and it will be hard to replace the role they played in my life. I’m also eternally grateful to the OpenNMS Community, especially the guys in the Order of the Green Polo who kept the project alive when we were starting out. You are and always will be my friends.

When I was responsible for hiring at OpenNMS, I ended every offer letter with “Let’s go do great things”. I consider OpenNMS to be a “great thing” and I am eager to watch it thrive with its new investment, and I will always be proud of the small role I played in its success.

If you are doing great things and think I could contribute to your team, check out my profile on LinkedIn or Xing.

Order of the Green Polo: Requiescat In Pace

One of the first “group chat” technologies I was ever exposed to was Internet Relay Chat (IRC). This allowed a group of people to get together in areas called “channels” to discuss pretty much anything they felt like discussing. The service had to be hosted somewhere, and for most open source projects that was Freenode.

You might have seen that recently Freenode was taken over by new management, and the policies this new management implemented didn’t sit well with most Freenode users. In the grand open source tradition, most everyone left and went to other IRC servers, most notably Libera Chat.

In May of 2002 when I became the sole maintainer of OpenNMS, there was exactly one person who was dedicated full time to the project – me. What kept me going was the community I found on IRC, in both the #opennms channel and the local Linux users group channel, #trilug.

It was the people on IRC who supported me until I could grow the business to the point of bringing on more people. I still have strong friendships with many of them.

I was reminded of those early days as we migrated #opennms to Libera Chat. At the moment there are only 12 members logged in, and most of those are olde skoool OpenNMS people. I haven’t used IRC much since we switched to Mattermost (we host a server at chat.opennms.com) and set up a “bridge” to bring IRC conversations into the main Mattermost channel. Most people moved to Mattermost as their primary client, but of course there were a few holdouts (Hi Alex!).

While I was reminiscing, I was also reminded of the Order of the Green Polo (OGP). When David, Matt and I started The OpenNMS Group in 2004, interest in OpenNMS was growing, and there was a core of folks on IRC who were very active in contributing to the project. I was trying to think of some way to recognize them.

At that time, business casual, at least for men, consisted of a polo shirt and khaki slacks. Vendors often gifted polo shirts with their logos/logotypes on them to clients, and a number of open source projects sold them to raise money. We sold a white one and a black one, and I thought, hey, perhaps I can pick another color and use that to identify the special contributors to OpenNMS.

Green has always been associated with OpenNMS. In network monitoring, green symbolizes that everything is awesome. We even named one of our professional services products the “Greenlight Project”. Plus I really like green as a color.

Then the question became “what shade of green?” For some reason I thought of Tiger Woods who, by this time, late 2004, had won the prestigious Masters golf tournament three times (and would again the next spring). The winner of that tournament gets a “hunter green” jacket, and so I decided that hunter green would be the color.

Also, around that time, I came across an article about a British knighthood called “The Order of the Garter”. I combined the two and thus “The Order of the Green Polo” was born.

It was awesome.

People who had been active in contributing to OpenNMS became even more active when I recognized them with the OGP honor. They contributed code and helped us with supporting our community, as well as adding a lot to the direction of the project. We started having annual developer conferences called “Dev-Jam” and OGP members got to attend for free so we could spend some face to face time with each other. I considered these men in the OGP to be my brothers.

As OpenNMS grew, we looked to the OGP for recruitment. It was through the OGP that Alejandro came to the US from Venezuela, and he now leads our support and services team (if OpenNMS went away tomorrow, getting him and his spouse here would have made it all worth it). When you hired an OGP member, you were basically paying them to do something they wanted to do for free. Think of it as eating an ice cream sundae and finding money at the bottom.

But that growth actually led to the decline of the OGP. Once we had hired everyone who wanted a job with us, the Order had less of a role to play. Dev-Jam was open to anyone, but it was mandatory for OpenNMS employees. Not all employees were OGP members even though they were full-time contributors, so there was often pressure to induct new employees into the Order. And, most importantly, as we aged many OGP members moved on to other things. Hey, it happens, and it doesn’t reflect poorly on their past contributions.

We had a special mailing list for the OGP, but instead of discussing OpenNMS governance it basically became a “happy birthday” list (speaking of which, Happy Birthday Antonio!). When OpenNMS was acquired by NantHealth, we had to merge our mail systems and in the process the OGP list was deactivated. I don’t think many people noticed.

Recently it was brought to my attention that associating OpenNMS with the Masters golf tournament through the OGP could have negative connotations. The Masters is hosted by the Augusta National Golf Club and there have been controversies around their membership policies and views on race. It was suggested that we rename the OGP to something else.

One quick solution would be to just change the shade of green to, perhaps, a “stoplight” green. But this got me thinking that the same logic used to associate the color with racism could apply to the whole “Order of” as well, since that was based on a British knighthood which, much like Augusta, is mostly male. Plus the British don’t have the best track record when it comes to colonialism, etc.

I think it is time for something totally new, so I’ve decided to retire the Order of the Green Polo. The members of the OGP are all male, and as we’ve grown our company and project I’m extremely excited that we have been able to greatly improve our diversity. I would love to come up with something that can embrace everyone who has a love of OpenNMS and wants to contribute to it, be that through code, documentation, the community, etc.

OpenNMS has changed greatly over the past two decades, and it has become harder to contribute to a project that has grown exponentially in complexity. As part of my role as the Chief Evangelist of OpenNMS, I want to change that by coming up with easier ways for people to improve the OpenNMS platform, and I need a new program to recognize those who contribute (and if you want to skip that part and get right to the job thingie, we’re hiring, but don’t skip that part).

To those of you who were in the Order of the Green Polo, thank you so much for helping us make OpenNMS what it is today. I’m not sure if it would exist without you. And even without the OGP mailing list, I plan to remember your birthdays.

What’s Old Is New Again

Today we launched a new look for OpenNMS, a rebranding effort that has been going on for the better part of a year. It represents a lot more than just a new logo and new colors. While OpenNMS has been around for over two decades now, it is also quite different from when it started. A tremendous amount of work has gone into the project over the past couple of years, and if you looked at using it even just a short while ago you will be surprised at what has changed.

New OpenNMS Logo

One of the best analogies I can come up with to talk about the “new” OpenNMS concerns cars. I like cars, especially Mercedes, and when I was in college I usually drove an older Mercedes sedan. I enjoyed bringing them back to their former glory (and old, somewhat beaten down cars were all I could afford), and so I might start by redoing the brake system, overhauling the engine, etc.

When I would run out of money, which was often, sometimes I’d have to sell a car. Prospective buyers would often complain that the paint wasn’t perfect or there was an issue with the interior. I’d point out that you could hop in this car right now and drive it across the country and never worry about breaking down, but they seemed focused on how it looked. Cosmetics are usually the last thing you focus on during a restoration, but it tends to be the first thing people see.

This is very much like OpenNMS. For over a decade we’ve been focused on the internals of the platform, and luckily we are now in a position to focus on how it looks.

Please don’t misunderstand: application usability is important, much more important than, say, the paint job on a car, but in order to provide the best user experience we had to start by working under the hood.

For example, from the beginning OpenNMS has contained multiple “daemons” that control various aspects of the platform. Originally this was very monolithic, and thus any small change to one of them would often require restarting the whole application.

OpenNMS is now based on a Karaf runtime which provides a modular way of managing the various features within the application. It comes with a shell that gives even non-Java programmers access to both high and low level parts of the platform, and lets them make changes without restarting the whole thing. Features can be enabled and disabled on the fly, and it is easy to test the behavior of OpenNMS against a particular device without having to set up a special test environment or pore through pages of logs.
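For example, on a default Horizon installation you can reach the Karaf shell over SSH on port 8101 and manage features on the fly. This is just a sketch; the credentials will depend on your install, and the feature name below is a placeholder, not a real one:

ssh -p 8101 admin@localhost

# once connected, list the features that are currently installed
feature:list -i

# enable or disable a feature without restarting OpenNMS
feature:install some-feature-name
feature:uninstall some-feature-name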

Another great aspect of OpenNMS is that much of the internal messaging can now take place through a broker such as Kafka. Not only does this increase the stability and flexibility of the platform, it also lets users create custom consumers for the huge amounts of information OpenNMS is able to collect. For very large networks this creates the option to use that data outside of the platform itself, giving end users a high level of custom observability.
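As a quick illustration, once you have configured OpenNMS to forward events or alarms to Kafka, you can watch that stream with nothing more than the stock Kafka console consumer. The broker address and topic name here are placeholders; the actual topics depend on how you configure the Kafka features:

kafka-console-consumer.sh --bootstrap-server kafka.example.com:9092 --topic opennms-alarms --from-beginning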

The monolithic nature of OpenNMS has also been improved. The addition of “Minions” to provide monitoring at the edge of the network creates numerous monitoring solutions where there were none before. You can now reach into isolated or private networks, or seamlessly monitor the performance of applications from various locations. The “Sentinel” project allows the various processes within OpenNMS to be spread out over multiple devices with the aim of virtually unlimited scale.

APM Example World Map

And I haven’t even started on the ability of OpenNMS to monitor tremendous amounts of telemetry data and to analyze it with tools such as “Nephron” or our foray into artificial intelligence with ALEC.

So much has changed with OpenNMS, much of it recently, that it was time for that new coat of paint. It was time for people to both notice the new look of OpenNMS at the surface, and the new OpenNMS under the covers.

One thing that hasn’t changed is that OpenNMS is still 100% open source. All of these amazing features are available to anyone under an OSI approved open source license. Plus we leverage and integrate with best-in-class open source tools such as Grafana for visualization and Cassandra (using Newts) for storing time series data.

Our new logo is a stylized gyroscope. For centuries the gyroscope has represented a way to maintain orientation in the most chaotic of situations. In much the same way, OpenNMS helps you maintain the orientation of your IT infrastructure which, let’s admit it, plays a huge role in the success of your enterprise.

Where the car analogy falls apart is that while the paint job is usually the end of a restoration, this new look for OpenNMS is just the beginning of a new chapter in the history of the project. Our goal is to create a platform where monitoring just happens. We’re not there yet, but check out the latest OpenNMS and we hope you’ll agree we are getting closer.

OpenNMS Resources

Getting started with OpenNMS can be a little daunting, so I thought I’d group together some of the best places to start.

When OpenNMS began 20+ years ago, the main communication channel was a group of mailing lists. For real time interaction we added an “#opennms” IRC channel on Freenode as well. As new technology came along we eagerly adopted it: hosting forums, creating a FAQ with FAQ-o-matic, building a wiki, writing blogs, etc.

The problem became that we had too many resources. Many weren’t updated and thus might host obsolete information, and it was hard for new users to find what they wanted. So a couple of years ago we decided to focus on just two main places for community information.

We adopted Discourse to serve as our “asynchronous” communication platform. Hosted at opennms.discourse.group, it is where we are gathering all of the information that used to reside on sites like FAQs and wikis. Insofar as our community has a group memory, this is it, and we try to keep the information on this site as up to date as possible. While there is still some information left in places like our wiki, the goal is to move it all to Discourse, and thus it is a great place to start.

I also want to call your attention to “OpenNMS on the Horizon (OOH)”. This is a weekly update of everything OpenNMS, and it is a good way to keep up with all the work going on with the platform since a lot of the changes being made aren’t immediately obvious.

While we’ve been happy with Discourse, sometimes you just want to interact with someone in real time. For that we created chat.opennms.com. This is an instance of Mattermost that we host to provide a Slack-like experience for our community. It basically replaces the IRC channel, but there is also a bridge between IRC and MM so that posts are shared between the two. I am “sortova” on Mattermost.

When you create an account on our Mattermost instance you will be added to a channel called “Town Square”. Every Mattermost instance has to have a default channel, and this is ours. Note that we use Town Square as a social channel. People post things that may appeal to anyone with an interest in OpenNMS, usually something humorous. As I write this there are over 1300 people signed up on Town Square.

For OpenNMS questions you will want to join the channel “OpenNMS Discussion”. This is the main place to interact with our community, and as long as you ask smart questions you are likely to get help with any OpenNMS issues you are facing. The second most popular channel is “OpenNMS Development” for those interested in working with the code directly. The Minion and Compass applications also have their own channels.

Another channel is “Write the Docs”. Many years ago we decided to make documentation a key part of OpenNMS development. While I have never read any software documentation that couldn’t be improved, I am pretty proud of the work the documentation team has put into ours. Which brings me to yet another source of OpenNMS information: the official documentation.

Hosted at docs.opennms.org, our documentation is managed just like our application code. It is written in AsciiDoc and published using Antora. The documentation is versioned just like our Horizon releases, but usually whenever I need to look something up I go directly to the development branch. The admin guide tends to have the most useful information, but there are guides for other aspects of OpenNMS as well.

The one downside of our docs is that they tend to be more reference guides than “how-to” articles. I am hoping to correct that in the future but in the meantime I did create a series of “OpenNMS 101” videos on YouTube.

They mirror some of our in-person training classes, and while they are getting out of date I plan to update them real soon (we are in the process of getting ready for a new release with lots of changes so I don’t want to do them and have to re-do them soon after). Unfortunately YouTube doesn’t allow you to version videos so I’m going to have to figure out how to name them.

Speaking of changes, we document almost everything that changes in OpenNMS in our Jira instance at issues.opennms.org. Every code change that gets submitted should have a corresponding Jira issue, and it is also the place where our users can open bug reports and feature requests. As you might expect, if you need to open a bug report please be as detailed as possible. The first thing we will try to do is recreate the problem, so details such as the version of OpenNMS you are running, the operating system you are using and the steps to reproduce the issue are welcome.

If you would like us to add a feature, you can open a Feature Request, and if you want us to improve an existing feature you can open an Enhancement Request. Note that I think you need an account to access some of the public issues on the system. We are working to remove that requirement, as we wish to be as transparent as possible, but I don’t think we’ve gotten it to work just yet. I just attempted to visit a random issue while unauthenticated and it did load, but it was missing a lot of the information that shows up when I am logged in, such as the left menu and the Git integration. You will need an account to open or comment on issues. There is no charge to open an account, of course.

Speaking of git, there is one last resource I need to bring up: the code. We host our code on Github, and we’ve separated out many of our projects to make them easier to manage. The main OpenNMS application is under “opennms” (naturally), but other projects, such as our machine learning feature ALEC, have their own repositories.

While it was not my intent to delve into all things git in this post, I did want to point out that in the top level directory of the “opennms” project we have two scripts, makerpm.sh and makedeb.sh, that you can use to easily build your own OpenNMS packages. I have a video queued up to go over this in detail, but to build RPMs all you’ll need is a base CentOS/RHEL install and the packages “git” (of course), “expect”, “rpm-build” and “rsync”. You’ll also need a Java 8 JDK. While we run on Java 11, at the moment we don’t build using it (if you check out the latest OOH you’ll see we are working on it). Then you can run makerpm.sh and watch the magic happen. Note that the first build takes a long time because you have to download all of the maven dependencies, but subsequent builds should be faster.
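For the impatient, here is roughly what that whole process looks like on a fresh CentOS machine (the JDK package name may vary with your distribution):

sudo yum install -y git expect rpm-build rsync java-1.8.0-openjdk-devel
git clone https://github.com/OpenNMS/opennms.git
cd opennms
./makerpm.sh

When it finishes, the freshly built RPMs will be sitting on your local disk, ready to inspect or install.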

To summarize:

For normal community interaction, start with Discourse and use Mattermost for real time interaction.

For reference, check out our documentation and our YouTube channel.

For code issues, look toward our Jira instance and our Github repository.

OpenNMS is a powerful monitoring platform with a steep learning curve, but we are here to help. Our community is pretty welcoming, and we hope to see you there soon.

Open Source Contributor Agreements

I noticed a recent uptick in activity on Twitter about open source Contributor License Agreements (CLAs), mostly negative.

Twitter Post About CLAs

The above comment is from a friend of mine who has been involved in open source longer than I have, and whose opinions I respect. On this issue, however, I have to disagree.

This is definitely not the first time CLAs have been in the news. The first time I remember even hearing about them concerned MySQL. The MySQL CLA required a contributor to sign over ownership of any contribution to the project, which many thought was fine when the company was independent, but it started to raise some concerns when they were acquired by Sun and then Oracle. I think this latest resurgence is the result of Elastic deciding to change their license from an open source one to something more “open source adjacent”, and a number of people have taken exception to it (note: the link contains strong language).

As someone who doesn’t write much code, I think deciding to sign a CLA is up to the individual and may change from project to project. What I wanted to share is a story of why we at OpenNMS have a CLA and how we decided on one to adopt, in the hopes of explaining why a CLA can be a positive thing. I don’t think it will help with the frustrations some feel when a project changes the license out from under them, but I’m hoping it will shed some light on our reasons and thought processes.

OpenNMS was started in 1999, and I didn’t get involved until 2001 when I started work at Oculan, the commercial company behind the project. Oculan built a monitoring appliance based on OpenNMS, so while OpenNMS was offered under the GPLv2, the rest of their product had a proprietary license. They were able to do this because they owned 100% of the copyright to OpenNMS. In 2002 Oculan decided to no longer work on the project, and I was able to become the maintainer. Note that this didn’t mean that I “owned” the OpenNMS copyright. Oculan still owned the copyright, but due to the terms of the license I (as well as anyone else) was free to make derivative works as long as those works adhered to the license. While the project owned the copyright to all the changes made since I took it over, there was no single copyright holder for the project as a whole.

This is fine, right? It’s open source and so everything is awesome.

Fast forward several years and we became aware of a company, funded by VCs out of Silicon Valley, that was using OpenNMS in violation of the license as a base on which to build a proprietary software application.

I can’t really express how powerless we felt about this. At the time there were, I think, five people working full time on OpenNMS. The other company had millions in VC money while we were adhering to our business model of “spend less than you earn”. We had almost no money for lawyers, and without the involvement of lawyers this wasn’t going to get resolved. One thing you learn is that while those of us in the open source world care a lot about licenses, the world at large does not. And since OpenNMS was backed by a for-profit company, there was no one to help us but ourselves (there are some limited options for license enforcement available to non-profit organizations).

We did decide to retain the services of a law firm, who immediately warned us how much “discovery” could cost. Discovery is the process of obtaining evidence in a possible lawsuit. This is one way a larger firm can fend off the legal challenges of a smaller firm – simply outspend them. It made us pretty anxious.

Once our law firm contacted the other company, the reply was that if they were using OpenNMS code, they were only using the Oculan code and thus we had no standing to bring a copyright lawsuit against them.

We knew this wasn’t true, because the main reason we were aware this company was using OpenNMS at all was that a disgruntled former employee had told us about it. They alleged that this company had told their engineers to follow OpenNMS commits and integrate our changes into their product. But since much of the code was still part of the original Oculan code base, it made our job much more difficult.

One option we had was to join with Oculan and jointly pursue a remedy against this company. The problem was that Oculan went out of business in 2004, and it took us a while to find out that the intellectual property had ended up at Raritan. We were able to work with Raritan once we found this out, but by that time the other company had also gone out of business, pretty much ending the matter.

As part of our deal with Raritan, OpenNMS was able to purchase the copyright to the OpenNMS code once owned by Oculan, granting Raritan an unlimited license to continue to use the parts of the code they had in their products. It wasn’t cheap and involved both myself and my business partner using the equity in our homes to guarantee a loan to cover the purchase, but for the first time in years most of the OpenNMS copyright was held by one organization.

This process made us think long and hard about managing copyright moving forward. While we didn’t have thousands of contributors like some projects, the number of contributors we did have was non-trivial, and we had no CLA in place. The main question was: if we were going to adopt a CLA, what should it look like? I didn’t like the idea of asking for complete ownership of contributions, as OpenNMS is a platform and while someone might want to contribute, say, a monitor to OpenNMS, they shouldn’t be prevented from contributing a similar monitor to Icinga or Zabbix.

So we asked our community, and a person named DJ Gregor suggested we adopt the Sun (now Oracle) Contributor Agreement. This agreement introduced the idea of “dual copyright”: the contributor keeps ownership of their work but grants copyright to the project as well. This was a pretty new idea at the time but seems common now. If you look at the CLAs for, say, Microsoft and even Elastic, you’ll see similar language, although it is more likely worded as a “copyright grant” or something other than “dual copyright”.

This idea was favorable to our community, so we adopted it as the “OpenNMS Contributor Agreement” (OCA). Then the hard work began. While most of our active contributors were able to sign the OCA, what about the inactive ones? With a project as old as OpenNMS there are a number of people who had been involved but, due to other interests or changing priorities, were no longer active. I remember going through all the contributions in our code base and systematically hunting down every contributor, no matter how small the contribution, and asking them to sign the OCA. They all did, which was nice, but it wasn’t an easy task. I remember the e-mail address of one contributor bounced, and I finally hunted them down in Ireland via LinkedIn.

Now a lot of the focus of CLAs is on code ownership, but there is a second, often more important part. Most CLAs ask the contributor to affirm that they actually own the changes they are contributing. This may seem trivial, but I think it is important. Sure, a contributor can lie, and if it turns out they contributed something they didn’t really own the project is still responsible for dealing with that code, but a number of studies have shown that simply reminding someone of a moral obligation goes a long way toward reinforcing ethical behavior. When someone signs a CLA with such a clause it will at least make them think about it and reaffirm that their work is their own. If a project doesn’t want to ask for a copyright assignment or grant, it should at least ask for something like this.

While the initial process was manual, managing the OCAs is now pretty automated. When someone makes a pull request on our Github project, a check determines whether they have signed the OCA and, if not, sends them to the agreement.

The fact that the copyright was under one organization came in handy when we changed the license. One of my favorite business models for open source software is paid hosting, and I often refer to WordPress as an example. WordPress is dead simple to install, but it does require that you have your own server, understand setting up a database, etc. If you don’t want to do that, you can pay WordPress a fee and they’ll host the product for you. It’s a way to stay pure open source yet generate revenue.

But what happens if you work on an open source project and a much bigger, much better funded company just takes your project and hosts it? I believe one of the issues facing Elastic was that Amazon was monetizing their work and they didn’t like it. Open source software is governed mainly by copyright law and if you don’t distribute a “copy” then copyright doesn’t apply. Many lawyers would claim that if I give you access to open source software via a website or an API then I’m not giving you a copy.

We dealt with this at OpenNMS, and as usual we asked our community for advice. Once again I think it was DJ who suggested we change our license to the Affero GPL (AGPLv3) which specifically extends the requirement to offer access to the code even if you only offer it as a hosted service. We were able to make this change easily because the copyright was held by one entity. Can you imagine if we had to track down every contributor over 15+ years? What if a contributor dies? Does a project have to deal with their estate or do they have to remove the contribution? It’s not easy. If there is no copyright assignment, a CLA should at least include detailed contact information in case the contributor needs to be reached in the future.

Finally, remember that open source is open source. Don’t like the AGPLv3? Well you are free to fork the last OpenNMS GPLv2 release and improve it from there. Don’t like what Elastic did with their license? Feel free to fork it.

You might have detected a theme here. We relied heavily on our community in making these decisions. The OpenNMS Group, as stewards of the OpenNMS Project, takes seriously the responsibilities to preserve the open source nature of OpenNMS, and I like to think that has earned us some trust. Having a CLA in place addresses some real business needs, and while I can understand people feeling betrayed at the actions of some companies, ultimately the choice is yours as to whether or not the benefits of being involved in a particular project outweigh the requirement to sign a contributor agreement.

The Server Room Show Podcast

A couple of weeks ago I had the pleasure to chat with Viktor Madarasz on “The Server Room Show” podcast.

The Server Room Podcast Graphic

Viktor is an IT professional with a strong interest in open source, and we had a fun and meandering conversation covering a number of topics. As usual, I talked too much, so he ended up splitting our conversation across two episodes.

You can visit his website for links to the podcast from a large variety of podcast sources, or you can listen on YouTube to part one and part two.

It was fun, and I hope to be able to chat again sometime in the future.

Note: Viktor is originally from Hungary, as was my grandfather. I tried to make getting some Túró Rudi part of my appearing on the show, but unfortunately we haven’t figured out how to get it outside of Hungary, and we all know that I’d talk about open source for free pretty much any time and any place.

Thoughts on Security and Open Source Software

Due to the recent supply-chain attack on Solarwinds products, I wanted to put down a few thoughts on the role of open source software and security. It is kind of a rambling post and I’ll probably lose all three of my readers by the end, but I found it interesting to think about how we got here in the first place.

I got my first computer, a TRS-80, as a Christmas present in 1978 from my parents.

Tarus and his TRS-80

As far as I know, these are the only known pictures of it, lifted from my high school yearbook.

Now, I know what you are thinking: Dude, looking that good how did you find the time off your social calendar to play with computers? Listen, if you love something, you make the time.

(grin)

Unlike today, I pretty much knew about all of the software that ran on that system. This was before “open source” (and before a lot of things), but since the most common programming language was BASIC, the main way to get software was to type in the program listing from a magazine or book. Thus it was “source available” at least, and that’s how I learned to type, as well as how I was introduced to the “syntax error”. That cassette deck in the picture was the original way to store and retrieve programs, but if you were willing to spend about as much as the computer itself cost you could buy an external floppy drive. The very first program I bought on a floppy was from this little company called Microsoft, and it was their version of the Colossal Cave Adventure. Being Microsoft, it came on a specially formatted floppy that tried to prevent access to the code or the ability to copy it.

And that was pretty much the way of the future, with huge fortunes being built on proprietary software. But still, for the most part you were aware of what was running on your particular system. You could trust the software that ran on your system as much as you could trust the company providing it.

Then along comes the Internet, the World Wide Web and browsers. At first, browsers didn’t do much dynamically. They would reach out and return static content, but then people started to want more from their browsing experience and along came Java applets, Flash and JavaScript. Now when you visit a website it can be hard to tell if you are getting tonight’s television listings or unknowingly mining Bitcoin. You are no longer in charge of the software that you run on your computer, and that can make it hard to make judgements about security.

I run a number of browsers on my computer but my default is Firefox. Firefox has a cool plugin called NoScript (and there are probably similar solutions for other browsers). NoScript is an extension that lets the user choose what JavaScript code is executed by the browser when visiting a page. A word of warning: the moment you install NoScript, you will break the Internet until you allow at least some JavaScript to run. It is rare to visit a site without JavaScript, and with NoScript I can audit what gets executed. I especially like this for visiting sensitive sites like banks or my health insurance provider.

Speaking of which, I just filed a grievance with Anthem. We recently switched health insurance companies and I noticed that when I go to the login page they are sending information to companies like Google, Microsoft (bing.com) and Facebook. Why?

Blocked JavaScript on the Anthem Website

I pretty much know the reason. Anthem didn’t build their own website; they probably hired a marketing company to do it, or at least part of it, and that’s just the way things are done now. You send information to those sites in order to get analytics on who is visiting your site, and while I’m fine with that when I’m thinking about buying a car, I am not okay with it coming from my insurance company or my bank. There are certain laws governing such privacy, with more coming every day, and there are consequences for violating them. They are supposed to get back to me in 30 days to let me know what they are sending, and if it includes personal information, even if it is just an IP address, it could be a violation.

I bring this up in part to complain but mainly to illustrate how hard it is to be “secure” with modern software. You would think you could trust a well known insurance company to know better, but it looks like you can’t.

Which brings us back to Solarwinds.

Full disclosure: I am heavily involved in the open source network monitoring platform OpenNMS. While we don’t compete head to head with Solarwinds products (our platform is designed for people with at least a moderate amount of skill with enterprise software, while Solarwinds is more “pointy-clicky”), we have had a number of former Solarwinds users switch to our solution, so we can be considered competitors in that fashion. I don’t believe we have ever lost a deal to Solarwinds, at least not one in which our sales team was involved.

Now, I wouldn’t wish what happened to Solarwinds on my worst enemy, especially since the exploit impacted a large number of US Government sites, and that does affect me personally. But I have to point out the irony of a company known for criticizing open source software, specifically on security, letting this happen to their product. Take this post from one of their forums. While I wasn’t able to find out if the author worked at Solarwinds or not, they compare open source to “eating from a dirty fork”.

Seriously.

But is open source really more secure? Yes, but in order to explain that I have to talk about types of security issues.

Security issues can be divided into “unintentional”, i.e. bugs, and “intentional”, someone actively trying to manipulate the software. While all software but the most simple suffers from bugs, what happened to the Solarwinds supply chain was definitely intentional.

When it comes to unintentional security issues, the main argument against open source is that since the code is available to anyone, a bad actor could exploit a security weakness and no one would know. They don’t have to tell anyone about it. There is some validity to the argument but in my experience security issues in open source code tend to be found by conscientious people who duly report them. Even with OpenNMS we have had our share of issues, and I’d like to talk about two of them.

The first comes from back in 2015, and it involved a Java serialization bug in the Apache commons library. The affected library was in use by a large number of applications, but it turns out OpenNMS was used as a reference to demonstrate the exploit. While there was nothing funny about a remote code execution vulnerability, I did find it amusing that they discovered it with OpenNMS running on Windows. Yes, you can get OpenNMS to run on Windows, but it is definitely not easy so I have to admire them for getting it to work.

I really didn’t admire them for releasing the issue without contacting us first. Sending an email to “security” at “opennms.org” gets seen by a lot of people, and we take security extremely seriously. We immediately issued a workaround (which was to make sure the firewall blocked the port that allowed the exploit) and implemented the upgraded library when it became available. One reason we didn’t see it previously is that most OpenNMS users tend to run it on Linux, and it is just good security practice to block all but needed ports via the firewall.

The second one is more recent. A researcher found a JEXL vulnerability in Newts, which is a time series database project we maintain. They reached out to us first, and not only did we realize that the issue was present in Newts, it was also present in OpenNMS. The development team rapidly released a fix and we did a full disclosure, giving due credit to the reporter.

In my experience that is the more common case within open source. Someone finds the issue, either through experimentation or by examining the code, they communicate it to the maintainers and it gets fixed. The issue is then communicated to the community at large. I believe that is the main reason open source is more secure than closed source.

With respect to proprietary software, it doesn’t appear that having the code hidden really helps. I was unable to find a comprehensive list of zero-day Windows exploits, but there seem to be a lot of them. I don’t mean to imply that Windows is exceptionally buggy, but it is a huge and ubiquitous piece of software, and that complexity lends itself to bugs. Also, I’m not sure the code is truly hidden. I’m certain that someone, somewhere outside of Microsoft has a copy of at least some of it. Since that code isn’t freely available, they probably have it for less than noble reasons, and one cannot expect any security issues they find to be reported and fixed.

There seems to be this misunderstanding that proprietary code must somehow be “better” than open source code. Trust me, in my day I’ve seen some seriously crappy code sold at high prices under the banner of proprietary enterprise software. I knew of one company that wrote up a bunch of fancy bash scripts (not that there is anything wrong with fancy bash scripts) and then distributed them encrypted. The product shipped with a compiled program that would spawn a shell, decrypt the script, execute it and then kill the shell.

Also, at OpenNMS we rely heavily on unit tests. When a feature is developed the person writing the code also creates code to “test” the feature to make sure it works. When we compile OpenNMS the tests are run to make sure the changes being made didn’t break anything that used to work. Currently we have over 8000 of these tests. I was talking to a person about this who worked for a proprietary software company and he said, “oh, we tried that, but it was too hard.”

Finally, I want to get back to that other type of security issue, the “intentional” one. To my understanding, someone was able to get access to the servers that built and distributed Solarwinds products, and they added malware that let them compromise target networks when customers upgraded their applications. Any way you look at it, it was just sloppy security, but I think the reason it went undetected for so long is that the whole proprietary process for distributing the software was limited to so few people it was easy to miss. These kinds of attacks happen in open source projects, too; they just get caught much faster.

That is the beauty of being able to see the code. You have the choice to build your own packages if you want, and you can examine code changes to your heart’s content.

We host OpenNMS at Github. If you check out the code you could run something like:

git tag --list

to see a list of release tags. As I write this the latest released version of Horizon is 26.0.1. To see what changed from 26.0.0 I can run:

git log --no-merges opennms-26.0.0-1 opennms-26.0.1-1

If you want, there is even a script to run a “release report” which will give you all of the Jira issues referenced between the two versions:

git-release-report opennms-26.0.0-1 opennms-26.0.1-1

While that doesn’t guarantee the lack of malicious code, it does put the control back into your hands and the hands of many others. If something did manage to slip in, I’m sure we’d catch it long before it got released to our users.

Security is not easy, and as with many hard things the burden is eased the more people who help out. In general open source software is just naturally better at this than proprietary software.

There are only a few people on this planet who have the knowledge to review every line of code on a modern computer and understand it, and that is with the most basic software installed. You have to trust someone and for my peace of mind nothing beats the open source community and the software they create.

It Was Twenty Years Ago Today …

On March 30th, 2000, the OpenNMS Project was registered on Sourceforge. While the project actually started sometime in the summer of 1999, this was the first time OpenNMS code had been made public so we’ve always treated this day as the birth date of the OpenNMS project.

Wow.

OpenNMS Entry on Sourceforge

Now I wasn’t around back then. I didn’t join the project until September of 2001. When I took over the project in May of 2002 I didn’t really think I could keep it alive for twenty years.

Seriously. I wasn’t then nor am I now a Java programmer. I just had a feeling that there was something of value in OpenNMS, something worth saving, and I was willing to give it a shot. Now OpenNMS is considered indispensable at some of the world’s largest companies, and we are undergoing a period of explosive growth and change that should cement the future of OpenNMS for another twenty years.

What really kept OpenNMS alive was its community. In the beginning, when I was working from home using a slow satellite connection, OpenNMS was kept alive by people on the IRC channel, people like DJ and Mike who are still involved in the project today. A year or so later I was able to convince my business partner and good friend David to join me, and together we recruited a real Java programmer in Matt. Matt is no longer involved in the project (people leaving your project is one of the hardest things to get used to in open source) but his contributions in those early days were important. Several years after that we were joined by Ben and Jeff, who are still with us today, and through slow and steady steps the company grew alongside the project. They were followed by even more amazing people that make up the team today (I really want to name every single one of them but I’m afraid I’ll miss one and they’ll be rightfully upset).

I really can’t overstate how small a role I played in the success of OpenNMS. My only talent is getting amazing people to work with me, and then I just try to remove any obstacles that get in their way. I get some recognition as “The Mouth of OpenNMS” but most of the time I just stand on the shoulders of giants and enjoy the view.

Once Again Into the Breach – Back with Apple

After almost a decade since my divorce from Apple, I find myself back with the brand, and it is all due to the stupid watch.

TL;DR: As a proponent of free software, I grouse about the “walled garden” approach Apple takes with its products, but after a long time of not using them I find myself back in, mainly because free software missed the boat on mobile.

Back in 2011, I stopped using Apple products. This was for a variety of reasons, and for the most part I found that I could do quite well with open source alternatives.

My operating system of choice became Linux Mint. The desktop environment, Cinnamon, allowed me to get things done without getting in the way, and the Ubuntu base allowed me to easily interact with all my hardware. I got rid of my iMac and bought a workstation from System 76, and for a time things were good.

I sold my iPhone and bought an Android phone which was easier to interact with using Linux. While I didn’t have quite all of the functionality I had before, I had more than enough to do the things I needed to do.

But then I started to have issues with the privacy of my Android phone. I came across a page which displayed all of the data Google was collecting on me, including every call, every text, and every application I opened and how long I used it. Plus the stock Google phones started to ship with all of the Google Apps, many of which I didn’t use and which just took up space. While the base operating system of Android, the Android Open Source Project (AOSP), is open source, much of the software on a stock Android phone is very proprietary, with questionable motives behind gathering all of that data.

Then I started playing with different Android operating systems known as “Custom ROMs”. Since I was frequently installing the operating system on my phone I finally figured out that when Google asks “Would you like to improve your Android experience?”, and you say “yes”, that is when they start the heavy data collection. Opt-out and the phone still works, but even basic functionality such as storing your recent location searches in Google Maps goes away. Want to be able to go to a previous destination with one click? Give them all yer infos.

The Custom ROM world is a little odd. While there is nothing wrong with using software projects run by hobbyists, the level of support can be spotty at best. ROMs that at one time were heavily supported can quickly go quiet as maintainers get other interests or other handsets. For a long time I used OmniROM with a minimal install of Google Apps (with the “do not improve my Android experience” option) and it even worked with my Android Wear smartwatch from LG.

I really liked my smartwatch. It reminded me of when we started using two monitors with our desktops. Having things like notifications show up on my wrist was a lot easier to deal with than having to pull out and unlock my phone.

But all good things must come to an end. When Android Wear 2.0 came out they nerfed a lot of the functionality, requiring Android Assistant for even the most basic tasks (which of course requires the “improved” Android experience). I contacted LG and it wasn’t possible to downgrade, so I stopped wearing the watch.

Things got a little better when I discovered the CopperheadOS project. This was an effort out of Canada to create a highly secure handset based on AOSP. It was not possible (or at least very difficult) to install Google Apps on the device, so I ended up using free software from the F-Droid repository. For those times when I really needed a proprietary app I carried a second phone running stock Android. Clunky, I know, but I made it work.

Then CopperheadOS somewhat imploded. The technical lead on the project grew unhappy with the direction it was going and left in a dramatic fashion. I tried to explore other ROMs after that, but grew frustrated in that they didn’t “just work” like Copperhead did.

So I bought an iPhone X.

Apple had started to position themselves as a privacy focused company. While they still don’t encrypt information in iCloud, I use iCloud minimally so it isn’t that important to me. It didn’t take me too long to get used to iOS again, and I got an Apple Watch 3 to replace my no longer used Android Wear watch.

This was about the time the GDPR was passed in the EU, and in order to meet the disclosure requirements Apple set up a website where you could request all of the personal data they collected on you. Now I have been a modern Apple user since February of 2003 when I ordered a 12-inch Powerbook, so I expected it to be quite large.

It was 5MB, compressed.

The majority of that was a big JSON file with my health data collected from the watch. While I’m not happy that this data could be made available to third parties as it isn’t encrypted, it is a compromise I’m willing to make in order to have some health data. Now that Fitbit is owned by Google I feel way more secure with Apple holding on to it (plus I have no current plans to commit a murder).

The Apple Watch also supports contactless payments through Apple Pay. I was surprised at how addicted I became to the ease of paying for things with the watch. I was buying some medication for my dog when I noticed their unit took Apple Pay, and the vet came by and asked “Did you just Star Trek my cash register?”.

Heh.

For many months I pretty much got by with using my iPhone and Apple Watch while still using open source for everything else. Then in July of last year I was involved in a bad car accident.

In kind of an ironic twist, at the time of the accident I was back to carrying two phones. The GrapheneOS project was created by one of the founders of Copperhead and I was once again thinking of ditching my iPhone.

I spent 33 nights in the hospital, and during that time I grew very attached to my iPhone and Watch. Since I was in a C-collar it made using a laptop difficult, so I ended up interacting with the outside world via my phone. Since I slept off and on most of the day, it was nice to get alerts on my watch that I could examine with a glance and either deal with or ignore and go back to sleep.

This level of integration made me wonder how things worked now on OSX, so I started playing with a Macbook we had in the office. I liked it so much I bought an iMac, and now I’m pretty much neck deep back in the Apple ecosystem.

The first thing I discovered is that there is a ton of open source software available on OSX, and I mainly access it through the Homebrew project. For example, I recently needed the Linux “watch” command and it wasn’t available on OSX. I simply typed “brew install watch” and had it within seconds.

The next major thing that changed for me was how integrated all my devices became. I was used to my Linux desktop not interacting with my phone, or my Kodi media server being separate from my smartwatch. I didn’t realize how convenient a higher level of integration could be.

For example, for Christmas I got an Apple TV. Last night we were watching Netflix through that device and when I picked up my iPhone I noticed that I could control the playback and see information such as time elapsed and time remaining for the program. This happened automatically without the need for me to configure anything. Also, if I have to enter in text, etc. on the Apple TV, I can use the iPhone as a keyboard.

I’ve even started to get into a little bit of home automation. I bought a “smart” outlet controller that works with Homekit. Now I don’t have the “Internet of Things”, instead I have the “LAN of Things” as I block Internet access for most of my IoT-type things such as cameras. Since the Apple TV acts as a hub I can still remotely control my devices even though I can’t reach them via the Internet. All of the interaction occurs through my iCloud account, so I don’t even have to poke a hole in my firewall. I can control this device from any of my computers, my iPhone or even my watch.

It’s pretty cool.

It really sucks that the free and open source community missed the boat on mobile. The flagship mobile open source project is AOSP, and that is heavily controlled by Google. While some brave projects are producing Linux-based phones, they have a long way to go to catch up with the two main consumer options: Apple and Google. For my peace of mind I’m going with Apple.

There are a couple of things Tim Cook could do to ease my conscience about my use of Apple products. The first would be to allow us the option of having greater control of the software we install on iOS. I would like to be able to install software outside of the App Store without having to jailbreak my device. The second would be to enable encryption on all the data stored in iCloud so that it can’t be accessed by any other party than the account holder. If they are truly serious about privacy it is the logical next step. I would assume the pressure from the government will be great to prevent that, but no other company is in a better position to defy them and do it anyway.

A Low Bandwidth Camera Solution

My neighbor recently asked me for advice on security cameras. Lately when anyone asks me for tech recommendations, I just send them to The Wirecutter. However, in this case their suggestions won’t work because every option they recommend requires decent Internet access.

I live on a 21 acre farm 10 miles from the nearest gas station. I love where I live but it does suffer from a lack of Internet access options. Basically, there is satellite, which is slow, expensive and with high latency, or Centurylink DSL. I have the latter and get to bask in 10 Mbps down and about 750 Kbps up.

Envy me.

Unfortunately, with limited upstream all of The Wirecutter’s options are out. I found a bandwidth calculator that estimates a 1 megapixel camera encoding H.264 video at 24 fps would require nearly 2 Mbps in low quality and over 5 Mbps in high quality. Just not gonna happen on a 750 Kbps circuit. In addition, I have issues with sending video to some third party server. Sure, it is easy, but I’m not comfortable with it.
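The back-of-the-envelope math bears the calculator out. These numbers are rough, but assume a 1 megapixel frame at 12 bits per pixel (4:2:0 chroma subsampling) and a compression ratio in the neighborhood of 150:1 for low quality H.264:

1,000,000 pixels × 12 bits ≈ 12 Mbit per raw frame
12 Mbit × 24 fps ≈ 288 Mbps uncompressed
288 Mbps ÷ 150 ≈ 1.9 Mbps after compression

Any way you slice it, a single camera more than saturates a 750 Kbps uplink.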

I get around this by using an application called Surveillance Station that is included on my Synology DS415+. Surveillance Station supports a huge number of camera manufacturers, and all of the information is stored locally, so there is no need to send anything to “the cloud”. There is also a mobile application called DS-cam that allows you to access your live cameras and recordings remotely. Due to the aforementioned bandwidth limitations it isn’t a great experience on DSL, but it can be useful. I use it, for example, to see if a package I’m expecting has been delivered.

DS-Cam Camera App

[DS-Cam showing the current view of my driveway. Note the recording underneath the main window where you can see the red truck of the HVAC repair people leaving]

Surveillance Station is not free software, and you only get two cameras included with the application. If you want more there is a pretty hefty license fee. Still, it was useful enough to me that I paid it in order to have two more cameras on my system (for a total of four).

I have the cameras set to record on motion, and the Synology will store up to 10GB of video per camera. For cameras that stay inside I’m partial to D-Link devices, but for outdoor cameras I use Wansview, mainly due to price. Since these types of devices have been known to be easily hackable, they are only accessible on my LAN (the “LAN of things”), and as an added measure I set up firewall rules to block them from accessing the Internet unless I expressly allow it (mainly for software updates).
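What those rules look like will depend entirely on your router. As a sketch, on a Linux-based router with the cameras isolated on a hypothetical 192.168.20.0/24 segment, a single rule keeps them off the Internet while leaving LAN access alone:

# drop forwarded traffic from the camera subnet unless it stays on the local network
iptables -A FORWARD -s 192.168.20.0/24 ! -d 192.168.0.0/16 -j DROP

Temporarily removing the rule is enough to let a camera fetch a firmware update.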

To access Surveillance Station remotely, you can map the port on the Synology to an external port on your router and the communication can be encrypted using SSL. No matter how many cameras you have you only need to open the one port.

The main thing that prevented me from recommending my solution to my neighbor is that the DS415+ loaded with four drives was not inexpensive. But then it dawned on me that Synology has a number of smaller products that still support Surveillance Station. He could get one of those plus a camera like the Wansview for a little more than one of the cameras recommended by The Wirecutter.

The bargain basement choice would be the Synology DS118. It costs less than $200, though it still requires a hard drive. I use WD Red drives, which run around $50 for 1TB and $100 for 4TB. Throw in a $50 camera and you are looking at about $300 for a one camera solution.

However, if you are going to get a Synology I would strongly recommend at least a 2-bay device, like the DS218. It’s about $70 more than the DS118 and you also would need to get another hard drive, but now you will have a Network Attached Storage (NAS) solution in addition to security cameras. I’ve been extremely happy with my DS415+ and I use it to centralize all of my music, video and other data across all my devices. With two drives you can suffer the loss of one of them and still protect your data.

I won’t go into all of the features the Synology offers, but I’m happy with my purchase even though I use only a few of them.

It’s a shame that there isn’t an easy camera option that doesn’t involve sending your data off to a third party. Not only does that approach not work for a large number of people, you can never be certain what the camera vendor is going to do with your video. This solution, while not cheap, adds the usefulness of a NAS to the value of security cameras, and is worth considering if you need such things.