The Adventure Continues

Last year I wrote about parting ways with the OpenNMS Project and how I was ready for “Act III” of my professional career.

With my future being somewhat of a tabula rasa, I was a bit overwhelmed with choices, so I decided to return to my roots and dust off my consulting LLC. Soon I found myself in the financial sector helping to deploy network monitoring and observability solutions.

I was working with some pretty impressive applications and it was interesting to see the state of the art for monitoring. We’ve come a long way since SNMP. It was engaging and fun work, but all the software was proprietary and I missed the open source aspect.

Recently, Spot Callaway made me aware of an opportunity at Amazon Web Services for an open source evangelist position. Of all the things I’ve done in my career, acting as an evangelist for open source solutions was my favorite, and here was a chance to do it full time. I will admit that Amazon was not the first name that popped into my head when I thought “open source”, but the more I learned about the team and AWS’s open source initiatives, the more interested I became in the position. After I made it through their rather intense interview process and met even more of the people with whom I’ll be working, it became a job I couldn’t refuse.

So I’m happy to announce that I’m now a Principal Evangelist at AWS, reporting to David Nalley, who, in addition to being a pretty awesome boss, is also the current President of the Apache Software Foundation. OpenNMS would not have existed without software from the ASF, so it will be cool to learn more about that organization first hand.

My main role will be to work with open source companies as an advocate for them within AWS. The solutions AWS provides can help jumpstart these companies toward profitability by providing the resources they need to be successful and to affordably grow as their needs change. While I am just getting started within the organization and it will take me some time to learn the ropes, I am hoping my own experience in running an open source business will provide a unique insight into issues faced by those companies.

Exciting times, so watch this space as my open source adventures continue.

“Run-of-the-Mill Person”

I just noticed that my Wikipedia page has been deleted (the old version is still on the Internet Archive).

I’m not sure how I feel about this. When I was first made aware of its existence oh so many years ago I was both flattered and a little embarrassed, mainly because I didn’t think I rated a page on Wikipedia. But then I got to thinking that, hey, pretty much anyone should be able to have a page on Wikipedia as long as it adheres to their format guidelines. It’s not like it takes up much space, and as long as the person is verifiable as being a real person, why not?

I am certain I would have been okay with my page being deleted soon after it was created, but once you get used to having something, earned or not, there is a strong psychological reaction to having it taken away. From what I can tell the page was created in 2010, so it had been around for nearly 12 years with no one complaining.

The most hurtful thing was a comment about the deletion from EdwardX from London:

Nothing cited in the article counts towards WP:GNG, and I can find nothing better online. Run-of-the-mill person.

Really? Was the “Run-of-the-mill person” comment really necessary? (grin)

I’m still happy about what I was able to accomplish with OpenNMS and building the community around it, even if it was run-of-the-mill, and I plan to promote open source and open source companies for the remainder of my career, even if that isn’t Wikipedia-worthy.

Nineteen Years

Nineteen years ago my friend Ben talked me into starting this blog. I don’t update it as frequently any more for a variety of reasons, mainly because more people interact on social media these days and I’m not as involved in open source as I used to be, but it is still somewhat of an achievement to keep something going this long.

My “adventures” in open source started out on September 10th, 2001, when I started a new job with a company called Oculan to work on their open source monitoring platform OpenNMS. In May of 2002 I became the lead maintainer on the project, and by the time I started this blog I’d been at it for several months. Back then blogs were one of the main ways an open source project could communicate with its community.

The nearly two decades I spent with OpenNMS were definitely an adventure, and this site can serve as a record of both those successes and those struggles.

Nineteen years ago open source was very different than it is today. Today it is ubiquitous: I think it would be rare for a person to go a single day without interacting with open source software in some fashion. But back then there was still a lot of fear, uncertainty and doubt about using it, with a lot of confusion about what it meant. Most people didn’t take it seriously, often comparing it to “shareware” and never believing that it would ever be used for doing “real” things. On a side note, even in 2022 I had one person make the shareware comparison when I brought up Grafana, a project that has secured nearly US$300 million in funding.

Back then we were trying to figure out a business model for open source, and I think in many ways we still are. The main model was support and services.

You would have thought this would have been more successful than it turned out to be. Proprietary software costing hundreds of thousands if not millions of dollars would often require that you purchase a maintenance or support contract, running anywhere from 15% to 25% of the original software cost per year, just to get updates and bug fixes. You would think that people would be willing to pay that amount or less for similar software, avoiding the huge upfront purchase, but that wasn’t the case. If they didn’t have to buy support they usually wouldn’t. Plus, support doesn’t easily scale. It is hard finding qualified people to support complex software. I’d often laugh when someone would contact me offering to double our sales, because we wouldn’t have been able to handle the extra business.

One company, Red Hat, was able to pull it off and create a set of open source products people were willing to purchase at a scale that made them a multi-billion dollar organization, but I can’t think of another that was able to duplicate that success.

Luckily, the idea of “hosted” software gained popularity. One of my favorite open source projects is WordPress. You are reading this on a WordPress site, and the install was pretty easy. They talk about a “five minute” install and have done a lot to make the process simple.

However, if you aren’t up to running your own server, it might as well be a five year install process. Instead, you can go to “wordpress.com” and get a free website hosted by them and paid for by ads being shown on your site, or you can remove those ads for as little as US$4/month. One of the reasons that Grafana has been able to raise such large sums is that they, too, offer a hosted version. People are willing to pay for ease of use.

But by far the overwhelming use of open source today is as a development methodology, and the biggest open source projects tend to be those that enable other, often proprietary, applications. Two Sigma Ventures has an Open Source Index that tries to quantify the most popular open source projects, and at the moment these include Tensorflow (a machine learning framework), Kubernetes (a container orchestration platform) and of course the Linux kernel. What you don’t see are end user applications.

And that to me is a little sad. Two decades ago the terms “open source” and “free software” were often used interchangeably. After watching personal computers go from hobbyists to mainstream we also saw control of those systems move to large companies like Microsoft. The idea of free software, as in being able to take control of your technology, was extremely appealing. After watching companies spend hundreds of thousands of dollars on proprietary software and then being tied to those products, I was excited to bring an alternative that would put the power of that software back into the hands of the users. As my friend Jonathan put it, we were going to change the world.

The world did change, but not in the way we expected. The main reason is that free software really missed out on mobile computing. While desktop computers were open enough that independent software could be put on them, mobile handsets to this day are pretty locked down. While everyone points to Android as being open source, to be honest it isn’t very useful until you let Google run most of it. There was a time where almost every single piece of technology I used was open, including my phone, but I just ran out of time to keep up with it and I wanted something that just worked. Now I’m pretty firmly back into the Apple ecosystem and I’m amazed at what you can do with it, and I’m so used to just being able to get things going on the first try that I’m probably stuck forever (sigh).

I find it ironic that today’s biggest contributors to open source are also some of the biggest proprietary software companies in the world. Heck, even Red Hat is now completely owned by IBM. I’m not saying that this is necessarily a bad thing (look at all the open source software being created by nearly everyone), but it is a long way from the free software dream of twenty years ago. Even proprietary, enterprise software has started to leverage open APIs that at least give a nod to the idea of open source.

We won. Yay.

Recently some friends of mine attended a fancy, black-tie optional gala hosted by the Linux Foundation to celebrate the 30th anniversary of Linux. Most of them work for those large companies that heavily leverage open source. And while apparently a good time was had by all, I can’t help but think of, say, those developers who maintain projects like Log4j who, when there is a problem, get dumped on to fix it and probably never get invited to cool parties.

Open source is still looking for a business model. Heck, even making money providing hosted versions of your software is a risk if one of the big players decides to offer their version, as to this day it is still hard to compete with a Microsoft or an Amazon.

But this doesn’t mean I’ve given up on open source. Thanks to the Homebrew project I still use a lot of open source on my Macintosh. I’m writing this using WordPress on a server running Ubuntu through the Firefox browser. I still think there are adventures to be had, and when they happen I’ll write about them here.

Nextcloud News

I think the title of this post is a little misleading, as I don’t have any news about Nextcloud. Instead I want to talk about the News App on the Nextcloud platform, and I couldn’t think of a better one.

I rely heavily on the Nextcloud News App to keep up with what is going on with the world. News provides similar functionality to the now defunct Google Reader, but with the usual privacy bonuses you expect from Nextcloud.

Back before social networks like Facebook and Twitter were the norm, people used to communicate through blogs. Blogs provide similar functionality: people can write short or long form posts that will get published on a website and can include media such as pictures, and other people can comment and share them. Even now when I see an incredibly long thread on Twitter I just wish the author would have put it on a blog somewhere.

Blogs are great, since each one can be individually hosted without requiring a central authority to manage it all. My friend Ben got me started on my first blog (this one), which in the beginning was hosted using a program called Movable Type. When their licensing became problematic, most of us switched to WordPress, and a tremendous amount of the Web runs on WordPress even now.

Now the problem was that the frequency with which people posted to their blogs varied. Some might post once a week, and others several times an hour. Unless you wanted to go and manually refresh their pages, it was difficult to keep up.

Enter Really Simple Syndication (RSS).

RSS is, as the name implies, an easy way to summarize content on a website. Sites that support RSS publish an XML document that reflects the titles, descriptions, links, etc. of content on the site. The page is referred to as a “feed”, and RSS “readers” can aggregate the various feeds together so that a person can follow the changes on websites that interest them.
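
To make that concrete, here is a minimal sketch in Python (standard library only) that fetches a feed and prints the title and link of each item. The feed URL is just this blog’s own, which comes up again later in this post.

import urllib.request
import xml.etree.ElementTree as ET

# Any RSS feed URL will do; this one is this blog's own feed.
FEED_URL = "https://www.adventuresinoss.com/feed"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# An RSS 2.0 document is an <rss> element wrapping a <channel>,
# which holds an <item> element for each post.
channel = tree.getroot().find("channel")
print("Feed:", channel.findtext("title"))

for item in channel.findall("item"):
    # Each item carries at least a title, a link and a description.
    print("-", item.findtext("title"), "=>", item.findtext("link"))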

Google Reader was a very useful feed reader that was extremely popular, and it in turn increased the popularity of blogs. I put some of the blame for the rise of the privacy nightmare that is modern social networks on Google’s decision to kill Reader, as it made individual blogs less relevant.

Now in Google’s defense they would say just use some other service. In my case I switched to Feedly, an adequate Reader replacement. The process was made easier by the fact that most feed readers support a way to export your configuration in the Outline Processor Markup Language (OPML) format. I was able to export my Reader feeds and import them into Feedly.
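
If you’ve never looked inside one, an OPML file is just another XML document listing your subscriptions. A minimal example might look like this (every reader decorates it a little differently, so treat this as a sketch):

<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>My Feeds</title>
  </head>
  <body>
    <outline text="Adventures in Open Source" type="rss"
             xmlUrl="https://www.adventuresinoss.com/feed"/>
    <outline text="Slashdot" type="rss"
             xmlUrl="http://rss.slashdot.org/slashdot/slashdotMain"/>
  </body>
</opml>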

Feedly was free, and as they say if you aren’t paying for the product you are the product. I noticed that next to my various feed articles Feedly would display a count, which I assume reflected the number of Feedly users that were interested in or who had read that article. Then it dawned on me that Feedly could gather useful information on what people were interested in, just like Facebook, and I also assume, if they chose, they could monetize that information. Since I had a Feedly account to manage my feeds, they could track my individual interests as well.

While Feedly never gave me any reason to assign nefarious intentions to them, as a privacy advocate I wanted more control over sharing my interests, so I looked for a solution. As a Nextcloud fan I looked for an appropriate app, and found one in News.

News has been around pretty much since Nextcloud started, but I rarely hear anyone talking about its greatness (hence this post). Like most things Nextcloud it is simple to install. If you are an admin, just click on your icon in the upper right corner and select “+ Apps”. Then click on “Featured apps” in the sidebar and you should be able to enable the “News” app.

That’s it. Now in order to update your feeds you need to be using the System Cron in Nextcloud, and instructions can be found in the documentation.
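
For reference, the documented approach is to select “Cron” under the basic settings in the admin UI, then add a crontab entry for your web server user that runs Nextcloud’s cron.php every five minutes. A sketch, assuming a typical install path and the www-data user:

# Edit the crontab for the web server user (www-data on Debian/Ubuntu):
#   sudo crontab -u www-data -e
# Then add the following line, adjusting the path to your Nextcloud install:
*/5 * * * * php -f /var/www/nextcloud/cron.php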

Once you have News installed, the next challenge is to find interesting feeds to which you can subscribe. The news app will suggest several, but you can also find more on your own.

Nextcloud RSS Suggestions

It used to be pretty easy to find the feed URL. You would just look for the RSS icon and click on it for the link:

RSS Icon

But, again, when Reader died so did a lot of the interest in RSS, and finding feed URLs became more difficult. I have links to feeds at the very bottom of the right sidebar of this blog, but you’d have to scroll down quite a way to find them.

But for WordPress sites, like this one, you just add “/feed” to the site URL, such as:

https://www.adventuresinoss.com/feed

There are also some browser plugins that are supposed to help identify RSS feed links, but I haven’t used any. You can also “view source” on a website of interest and search for “rss”, since many sites advertise their feeds in a link tag in the page header, and that may help out as well.

My main use of the News App is to keep up with news, and I follow four main news sites. I like the BBC for an international take on news, CNN for a domestic take, Slashdot for tech news and WRAL for local news.

Desktop Version of News App

Just for reference, the feed links are:

BBC: http://newsrss.bbc.co.uk/rss/newsonline_uk_edition/front_page/rss.xml

CNN: http://rss.cnn.com/rss/cnn_topstories.rss

Slashdot: http://rss.slashdot.org/slashdot/slashdotMain

WRAL: http://www.wral.com/news/rss/48/

This wouldn’t be as useful if you couldn’t access it on a mobile device. Of course, you can access it via a web browser, but there are also a number of phone apps for accessing your feeds natively.

Now to my knowledge Nextcloud the company doesn’t produce a News mobile app, so the available apps are provided by third parties. I put all of my personal information into Nextcloud, and since I’m paranoid I didn’t want to put my access credentials into those apps, but I wanted the convenience of being able to read news anywhere I had a network connection. So I created a special “news” user just for News. You probably don’t need to do that, but I wanted to plant the suggestion for those who think about such things.

On my iPhone I’ve been happy with CloudNews.

iPhone Version of CloudNews App

It sometimes gets out of sync and I end up having to read everything in the browser and re-sync in CloudNews, but for the most part it’s fine.

For Android the best app I’ve used is by David Luhmer. It’s available for a small fee in the Play Store and for free on F-Droid.

Like all useful software, you don’t realize how much you depend on it until it is gone, and in the few instances I’ve had problems with News I get very anxious as I don’t know what’s going on in the world. Luckily this has been rare, and I check my news feed many times during the day to the point that I probably have a personal problem. The mobile apps mean I can read news when I’m in line at the grocery store or waiting for an appointment. And the best part is that I know my interests are kept private as I control the data.

If you are interested, I sporadically update a number of blogs, and I aggregate them here. In a somewhat ironic twist, I can’t find a feed link for the “planet” page, so you’d need to add the individual blog feeds to your reader.

Review: AT&T Cell Booster

Back in the mid-2000s I was a huge Apple fanboy, and I really, really, really wanted an iPhone. At that time it was only available from AT&T, and unfortunately the wireless coverage on that network is not very good where I live.

In 2008 a couple of things happened. Apple introduced the iPhone 3G, and AT&T introduced the 3G Microcell.

The 3G Microcell, technically a “femtocell”, is a small device that you can plug into your home network and it will leverage your Internet connection to augment wireless coverage in a small area (i.e. your house). With that I could get an iPhone and it would work at my house.

In February 3G service in the US will cease, and I thought I was going to have to do without a femtocell. Most modern phones support calling over WiFi now, but it just isn’t the same. For example, if I am trying to send an SMS and there is any signal at all from AT&T, my phone will try to use that network instead of the much stronger wireless network in my house. If I disable mobile access altogether, the SMS will send fine but then I can’t get phone calls reliably. (sigh)

I thought I was going to have to just deal with it when AT&T sent me a notice that they were going to replace my 3G Microcell with a new product called a Cell Booster.

Now a lot of people criticize AT&T for a number of good reasons, but lately they’ve really been hitting the whole “customer service” thing out of the park. The Cell Booster currently shows out of stock on their website with a cost of $229, but they sent me one for free.

AT&T Cell Booster Box

In a related story my mother-in-law, who is on our family plan, was using an older Pixel that was going to stop working with the end of 3G service (it was an LTE phone but doesn’t support “HD Voice”, which is required to make calls). So AT&T sent us a replacement Samsung S9. Pretty cool.

In any case the Cell Booster installation went pretty smoothly. I simply unplugged the existing 3G Microcell and plugged in the new device. The box included the Cell Booster, a GPS sensor, a power supply and an Ethernet cable. There were no other instructions outside of a QR code, which takes you to the appropriate app store to download the application needed to set it up.

The Booster requires a GPS lock, and they include a little “puck” connected to a fairly long wire that is supposed to allow you to get a signal even when the device is some distance away from a clear line of sight, such as away from windows. I just plugged it into the back and left it next to the unit, and it eventually got a signal, but it is also pretty much beneath a skylight.

In order to provision the Cell Booster you have to launch the mobile app and fill out a few pages of forms, which includes the serial number of the device. It has five lights on the front, and while the power light came on immediately, it did take some time for the other lights, including “Internet”, to come up. I assumed the Internet light would turn on as soon as an IP address was assigned, but that wasn’t the case. It took nearly half an hour for the first four lights to come on, and then another 15 minutes or so for the final “4G LTE” light to illuminate and the unit to start working. Almost immediately I got an SMS from AT&T saying the unit was active.

AT&T Cell Booster Lights

Speaking of IP addresses, I don’t like putting random devices on my LAN so I stuck this on my public network which only has Internet access (no LAN access). I ran nmap against it and there don’t appear to be any ports open. A traffic capture shows traffic between the Cell Booster and a 12.0.0.0 network address owned by AT&T.
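
For the curious, the check was nothing fancier than something like this, with the address being whatever your Booster picked up via DHCP (the one below is a placeholder):

# Scan all TCP ports; -Pn skips the ping check in case ICMP is filtered.
sudo nmap -Pn -p- 192.168.20.50

# Watch who the device is actually talking to.
sudo tcpdump -n host 192.168.20.50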

I do like the fact that, unlike the 3G Microcell, you do not need to specify the phone number of the handsets that can use the Cell Booster. It claims to support up to 8 at a time, and while I haven’t had anyone over who is both on the AT&T network and also not on my plan, I’m assuming it will work for them as well (I used to have to manually add phone numbers of my guests to allow them to use the 3G device).

The Cell Booster is a rebranded Nokia SS2FII. One could probably buy one outside of AT&T but without being able to provision it I doubt it would work.

So far we’ve been real happy with the Cell Booster. Calls and SMS messages work just fine, if not better than before (I have no objective way to measure it, though, so it might just be bias). If you get one, just remember that it takes a really long time to start up that first time, but after you have all five lights you should be able to forget it’s there.

Review: ProtonMail

I love e-mail. I know for many it is a bane, which has resulted in the rise of “inbox zero” and even the “#noemail” movement, but for me it is a great way to communicate.

I just went and looked, and the oldest e-mail currently in my system is from July of 1996. I used e-mail for over a decade before then, on school Unix systems and on BBS’s, but it wasn’t until the rise of IMAP in the 1990s that I was able to easily keep and move my messages from provider to provider.

That message from 1996 was off of my employer’s system. I didn’t have my own domain until two years later, in 1998, and I believe my friend Ben was the one to host my e-mail at the time.

When I started maintaining OpenNMS in 2002 I had a server at Rackspace that I was able to configure for mail. I believe the SMTP server was postfix but I can’t remember what the IMAP server was. I want to say it was dovecot but that really wasn’t available until later in 2002, so maybe UW IMAP? Cyrus was pretty big at the time but renowned for being difficult to set up.

In any case I was always a little concerned about the security of my mail messages. Back then disks were not encrypted and even the mail transport was done in the clear (this was before SSL became ubiquitous), so when OpenNMS grew to the point where we had our own server room, I set up a server for “vanity domains” that anyone in the company could use to host their e-mail and websites, etc. At least I knew the disks were behind a locked door, and now that Ben worked with us he could continue to maintain the mail server, too. (grin)

Back then I tried to get my friends to use encrypted e-mail. Pretty Good Privacy (PGP) was available since the early 1990s, and MIT used to host plugins for Outlook, which at the time was the default e-mail client for most people. But many of them, including the technically minded, didn’t want to be bothered with setting up keys, etc. It wasn’t until later when open source really took off and mail clients like Thunderbird arrived (with the Enigmail plug-in) that encrypted e-mail became more common among my friends.

In 2019 the decision was made to sell the OpenNMS Group, and since I would no longer have control over the company (and its assets) I decided I needed to move my personal domains somewhere else. I really didn’t relish the idea of running my own mail server. Spam management was always a problem, and there were a number of new protocols to help secure e-mail that were kind of a pain to set up.

The default mail hosting option for most people is GMail. Now part of Google Workspace, for a nominal fee you can have Google host your mail, and get some added services as well.

I wasn’t happy with the thought of Google having access to my e-mail, so I looked for options. To me the best one was ProtonMail.

The servers for ProtonMail are hosted in Switzerland, a neutral country not beholden to either US or EU laws. They are privacy focused, with everything stored encrypted at rest and, when possible, encrypted in transport.

They have a free tier option that I used to try out the system. Now, as an “old”, I prefer desktop mail clients. I find them easiest to use and I can also bring all of my mail into one location, and I can move messages from one provider to another. The default way to access ProtonMail is through a web client, like GMail. Unlike GMail, ProtonMail doesn’t offer a way to directly access their services through SMTP or IMAP. Instead you have to install a piece of software called the ProtonMail Bridge that will create an encrypted tunnel between your desktop computer and their servers. You can then configure your desktop mail client to connect to “localhost” on a particular port and it will act as if it were directly connected to the remote mail server.
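
As a sketch, when I set this up the Bridge listened on the loopback interface with the following defaults (the Bridge app shows your exact settings and generates a per-client password for you, so take these values as illustrative):

IMAP server: 127.0.0.1, port 1143, STARTTLS
SMTP server: 127.0.0.1, port 1025, STARTTLS
Username:    your ProtonMail address
Password:    the local password generated by the Bridge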

In my trial there were two shortcomings that immediately impacted me. As a mail power user, I use a lot of nested folders. ProtonMail does not allow you to nest folders. Second, I share some accounts with my spouse (i.e. we have a single Paypal account) and previously I was able to alias e-mail addresses to send to both of our user accounts. ProtonMail does not allow this.

For the latter I think it has to do with the fact that each mail address requires a separate key and their system must not make it easy to use two keys or to share a key. I’m not sure what the issue is with nested folders.

In any case, this wasn’t a huge deal. To overcome the nested folder issue I just added a prefix, e.g. “CORR” for “Correspondence” and “VND” for “Vendor”, to each mailbox, and then you can sort by name. And while we share a few accounts, we don’t use them enough that we couldn’t just assign each one to a particular user.


UPDATE: It turns out it is now possible to have nested folders, although it doesn’t quite work the way I would expect.

Say I want a folder called “Correspondence” and I want sub-folders for each of the people with whom I exchange e-mail. I tried the following:

So I have a folder named something like “CORR-Bill Gates”, but I’d rather have that nested under a folder entitled “Correspondence”. In my desktop mail client, if I create a folder called “Correspondence” and then drag the “CORR-Bill Gates” folder into it, I get a new folder titled “Correspondence/CORR-Bill Gates” which is not what I want.

However, I can log into the ProtonMail webUI and next to folders there is a little “+” sign.

Add Folder Menu Item
If I click on that I get a dialog that lets me add new folders, as well as add them to a parent folder.

Add Folder Dialog Box

If I create a “Correspondence” folder with no parent via the webUI and then a “Bill Gates” folder, I can parent the “Bill Gates” folder to “Correspondence” and then the folders will show up and behave as I expect in my desktop e-mail client. Note that you can only nest two levels deep. In other words if I wanted a folder structure like:

Bills -> Taxes -> Federal -> 2021

It would fail to create, but

Bills -> Taxes -> 2021-Federal

will work.


After I was satisfied with ProtonMail, I ended up buying the “Visionary” package. I pay for it in two-year chunks and it runs US$20/month. This gives me ten domains and six users, with up to 50 e-mail aliases.

Domain setup was a breeze. Assuming you have access to your domain registrar (I’m a big fan of Namecheap), all you need to do is follow the little “wizard” that will step you through the DNS entries you need to make to point your domain to ProtonMail’s servers, as well as to configure SPF, DKIM and DMARC. Allowing for DNS to update, it can be done in a few minutes, or it may take up to an hour.
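
As an illustration, the records the wizard walks you through look roughly like this. The verification value and DKIM targets are unique per domain (and ProtonMail actually asks for three DKIM CNAMEs), so treat everything below as placeholders:

; Ownership verification (value supplied by ProtonMail)
example.com.           TXT    "protonmail-verification=PLACEHOLDER"

; Mail routing
example.com.           MX     10 mail.protonmail.ch.
example.com.           MX     20 mailsec.protonmail.ch.

; SPF, DKIM and DMARC
example.com.           TXT    "v=spf1 include:_spf.protonmail.ch ~all"
protonmail._domainkey  CNAME  protonmail.domainkey.PLACEHOLDER.domains.proton.ch.
_dmarc.example.com.    TXT    "v=DMARC1; p=none"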

I thought there would be a big issue with the 50 alias limit, as I set up separate e-mails for every vendor I use. But it turns out that you only need an alias if you want to send e-mail from that address. You can set up a “catch all” address that will take any incoming e-mail that doesn’t expressly match an alias and send it to a particular user. In my case I set up a specific “catchall@” address, but it is not required.

You can also set up filters pretty easily. Here is an example of sending all e-mail sent to my “catchall” address to the “Catch All” folder.

require ["include", "environment", "variables", "relational", "comparator-i;ascii-numeric", "spamtest"];
require ["fileinto", "imap4flags"];

# Generated: Do not run this script on spam messages
if allof (environment :matches "vnd.proton.spam-threshold" "*", spamtest :value "ge" :comparator "i;ascii-numeric" "${1}") {
return;
}


/**
* @type and
* @comparator matches
*/
if allof (address :all :comparator "i;unicode-casemap" :matches ["Delivered-To"] "catchall@example.com") {
fileinto "Catch All";
}

I haven’t had the need to do anything more complicated but there are a number of examples you can build on. I had a vendor that kept sending me e-mail even though I had unsubscribed so I set up this filter:

require "reject";


if anyof (address :all :comparator "i;unicode-casemap" :is "From" ["noreply@petproconnect.com"]) {
reject "Please Delete My Account";
}

and, voilà, no more e-mail. I’ve also been happy with the ProtonMail spam detection. While it isn’t perfect it works well enough that I don’t have to deal with spam on a daily basis.

I’m up to five users and eight domains, so the Visionary plan is getting a little resource constrained, but I don’t see myself needing much more in the near future. Since I send a lot of e-mail to those other four users, I love the fact that our correspondence is automatically encrypted since all of the traffic stays on the ProtonMail servers.

As an added bonus, much of the ProtonMail software, including the iOS and Android clients, is available as open source.

While I’m very satisfied with ProtonMail, there have been a couple of negatives. As a high profile pro-privacy service it has been the target of a number of DDoS attacks. I have never experienced this problem, but as these kinds of attacks get more sophisticated and more powerful, it is always a possibility. Proton has done a great job of mitigating the possible impact, and the last big attack was back in 2018.

Another issue is that since ProtonMail is based in Switzerland, it is not above Swiss law. In a high profile case, a French dissident who used ProtonMail was tracked down via their IP address. Under Swiss law a service provider can be compelled to turn over such information if certain conditions are met. To make this more difficult, my ProtonMail subscription includes access to ProtonVPN, an easy to use VPN client that can be used to obfuscate a source IP, even from Proton.

They are also launching a number of services to better compete with GSuite, such as a calendar and ProtonDrive storage. I haven’t started using those yet but I may in the future.

In summary, if you are either tired of hosting your own mail or desire a more secure e-mail solution, I can recommend ProtonMail. I’ve been using it for a little over two years and expect to be using it for years to come.

On Leaving OpenNMS

It is with mixed emotions that I am letting everyone know that I’m no longer associated with The OpenNMS Group.

Two years ago I was in a bad car accident. I suffered some major injuries which required 33 nights in the hospital, five surgeries and several months in physical therapy. What was surprising is that while I had always viewed myself as somewhat indispensable to the OpenNMS Project, it got along fine without me.

Also during this time, The OpenNMS Group was acquired. For fifteen years we had survived on the business plan of “spend less money than you earn”. While it ensured the longevity of the company and the project, it didn’t allow much room for us to pursue ideas because we had no way to fund them. We simply did not have the resources.

Since the acquisition, both the company and the project have grown substantially, and this was during a global pandemic. With OpenNMS in such a good place I began to think, for the first time in twenty years, about other options.

I started working with OpenNMS in September of 2001. I refer to my professional career before then as “Act I”, with my time at OpenNMS as “Act II”. I’m now ready to see what “Act III” has in store.

While I’m excited about the possibilities, I will miss working with the OpenNMS team. They are an amazing group of people, and it will be hard to replace the role they played in my life. I’m also eternally grateful to the OpenNMS Community, especially the guys in the Order of the Green Polo who kept the project alive when we were starting out. You are and always will be my friends.

When I was responsible for hiring at OpenNMS, I ended every offer letter with “Let’s go do great things”. I consider OpenNMS to be a “great thing” and I am eager to watch it thrive with its new investment, and I will always be proud of the small role I played in its success.

If you are doing great things and think I could contribute to your team, check out my profile on LinkedIn or Xing.

Order of the Green Polo: Requiescat In Pace

One of the first “group chat” technologies I was ever exposed to was Internet Relay Chat (IRC). This allowed a group of people to get together in areas called “channels” to discuss pretty much anything they felt like discussing. The service had to be hosted somewhere, and for most open source projects that was Freenode.

You might have seen that recently Freenode was taken over by new management, and the policies this new management implemented didn’t sit well with most Freenode users. In the grand open source tradition, most everyone left and went to other IRC servers, most notably Libera Chat.

In May of 2002 when I became the sole maintainer of OpenNMS, there was exactly one person who was dedicated full time to the project – me. What kept me going was the community I found on IRC, in both the #opennms channel and the local Linux users group channel, #trilug.

It was the people on IRC who supported me until I could grow the business to the point of bringing on more people. I still have strong friendships with many of them.

I was reminded of those early days as we migrated #opennms to Libera Chat. At the moment there are only 12 members logged in, and most of those are olde skoool OpenNMS people. I haven’t used IRC much since we switched to Mattermost (we host a server at chat.opennms.com) and with it a “bridge” to bring IRC conversations into the main Mattermost channel. Most people moved to use Mattermost as their primary client, but of course there were a few holdouts (Hi Alex!).

While I was reminiscing, I was also reminded of the Order of the Green Polo (OGP). When David, Matt and I started The OpenNMS Group in 2004, interest in OpenNMS was growing, and there was a core of those folks on IRC who were very active in contributing to the project. I was trying to think of some way to recognize them.

At that time, business casual, at least for men, consisted of a polo shirt and khaki slacks. Vendors often gifted polo shirts with their logos/logotypes on them to clients, and a number of open source projects sold them to raise money. We sold a white one and a black one, and I thought, hey, perhaps I can pick another color and use that to identify the special contributors to OpenNMS.

Green has always been associated with OpenNMS. In network monitoring, green symbolizes that everything is awesome. We even named one of our professional services products the “Greenlight Project”. Plus I really like green as a color.

Then the question became “what shade of green?” For some reason I thought of Tiger Woods who, by this time, late 2004, had won the prestigious Masters golf tournament three times (and would again the next spring). The winner of that tournament gets a “hunter green” jacket, and so I decided that hunter green would be the color.

Also, for some unknown reason, I saw an article about a British knighthood called “The Order of the Garter”. I combined the two and thus “The Order of the Green Polo” was born.

It was awesome.

People who had been active in contributing to OpenNMS became even more active when I recognized them with the OGP honor. They contributed code and helped us with supporting our community, as well as adding a lot to the direction of the project. We started having annual developer conferences called “Dev-Jam” and OGP members got to attend for free so we could spend some face to face time with each other. I considered these men in the OGP to be my brothers.

As OpenNMS grew, we looked to the OGP for recruitment. It was through the OGP that Alejandro came to the US from Venezuela and now leads our support and services team (if OpenNMS went away tomorrow, getting him and his spouse here would have made it all worth it). When you hired an OGP member, you were basically paying them to do something they wanted to do for free. Think of it as eating an ice cream sundae and finding money at the bottom.

But that growth was actually something that led to the decline of the OGP. When we hired everyone that wanted a job with us, the role of the OGP declined. Dev-Jam was open to anyone, but it was mandatory for OpenNMS employees. Not all employees were OGP even though they were full-time contributors, so there was often pressure to induct new employees into the Order. And, most importantly, as we aged many OGP members moved on to other things. Hey, it happens, and it doesn’t reflect poorly on their past contributions.

We had a special mailing list for the OGP, but instead of discussing OpenNMS governance it basically became a “happy birthday” list (speaking of which, Happy Birthday Antonio!). When OpenNMS was acquired by NantHealth, we had to merge our mail systems and in the process the OGP list was deactivated. I don’t think many people noticed.

Recently it was brought to my attention that associating OpenNMS with the Masters golf tournament through the OGP could have negative connotations. The Masters is hosted by the Augusta National Golf Club and there have been controversies around their membership policies and views on race. It was suggested that we rename the OGP to something else.

One quick solution would be to just change the shade of green to, perhaps, a “stoplight” green. But this got me to thinking that the same logic used to associate the color with racism could apply to the whole “Order of” as well, since that was based on a British knighthood which, much like Augusta, is mainly all male. Plus the British don’t have the best track record when it comes to colonialism, etc.

I think it is time for something totally new, so I’ve decided to retire the Order of the Green Polo. The members of the OGP are all male, and I’m extremely excited that as we’ve grown our company and project we have been able to greatly improve our diversity, and I would love to come up with something that can embrace everyone who has a love of OpenNMS and wants to contribute to it, be that through code, documentation, the community, etc.

OpenNMS has changed greatly over the past two decades, and it has become harder to contribute to a project that has grown exponentially in complexity. As part of my role as the Chief Evangelist of OpenNMS, I want to change that and come up with easier ways for people to improve the OpenNMS platform, and I need to come up with a new program to recognize those who contribute (and if you want to skip that part and get right to the job thingie, we’re hiring, but don’t skip that part).

To those of you who were in the Order of the Green Polo, thank you so much for helping us make OpenNMS what it is today. I’m not sure if it would exist without you. And even without the OGP mailing list, I plan to remember your birthdays.

What’s Old Is New Again

Today we launched a new look for OpenNMS, a rebranding effort that has been going on for the better part of a year. It represents a lot more than just a new logo and new colors. While OpenNMS has been around for over two decades now, it is also quite different from when it started. A tremendous amount of work has gone into the project over the past couple of years, and if you looked at using it even just a short while ago you will be surprised at what has changed.

New OpenNMS Logo

One of the best analogies I can come up with to talk about the “new” OpenNMS concerns cars. I like cars, especially Mercedes, and when I was in college I usually drove an older Mercedes sedan. I enjoyed bringing them back to their former glory (and old, somewhat beaten down cars were all I could afford), and so I might start by redoing the brake system, overhauling the engine, etc.

When I would run out of money, which was often, sometimes I’d have to sell a car. Prospective buyers would often complain that the paint wasn’t perfect or there was an issue with the interior. I’d point out that you could hop in this car right now and drive it across the country and never worry about breaking down, but they seemed focused on how it looked. Cosmetics are usually the last thing you focus on during a restoration, but it tends to be the first thing people see.

This is very much like OpenNMS. For over a decade we’ve been focused on the internals of the platform, and luckily we are now in a position to focus on how it looks.

Please don’t misunderstand: application usability is important, much more important than, say, the paint job on a car, but in order to provide the best user experience we had to start by working under the hood.

For example, from the beginning OpenNMS has contained multiple “daemons” that control various aspects of the platform. Originally this was very monolithic, and thus any small change to one of them would often require restarting the whole application.

OpenNMS is now based on a Karaf runtime which provides a modular way of managing the various features within the application. It comes with a shell that can allow even non-Java programmers access to both high and low level parts of the platform, and to make changes without restarting the whole thing. Features can be enabled and disabled on the fly, and it is easy to test the behavior of OpenNMS against a particular device without having to set up a special test environment to pore through pages of logs.
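
For example, on a default install you can ssh into the Karaf shell and manage features on the running system. A quick sketch (the port and the admin account are the shipped defaults, and feature names vary by release, so adjust accordingly):

# Connect to the Karaf shell of a running OpenNMS instance.
ssh -p 8101 admin@localhost

# Inside the shell: see what features exist, then toggle one on the fly.
feature:list | grep -i kafka
feature:install opennms-kafka-producer
feature:uninstall opennms-kafka-producer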

Another great aspect of OpenNMS is that much of the internal messaging can now take place through a broker such as Kafka. While this increases the stability and flexibility of the platform, users can also create custom consumers for the huge amounts of information OpenNMS is able to collect. For very large networks this creates the option to use that data outside of the platform itself, giving end users a high level of custom observability.
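
As a sketch of what such a consumer could look like, here are a few lines of Python using the kafka-python package. The topic name and broker address are assumptions; use whatever your OpenNMS Kafka producer is configured to publish to:

from kafka import KafkaConsumer  # pip install kafka-python

# Illustrative topic and broker; match them to your Kafka producer settings.
consumer = KafkaConsumer(
    "opennms-events",
    bootstrap_servers=["localhost:9092"],
    auto_offset_reset="earliest",
)

for message in consumer:
    # OpenNMS publishes protobuf-encoded payloads; decode as needed.
    print(message.topic, message.offset, len(message.value), "bytes")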

The monolithic nature of OpenNMS has also been improved. The addition of “Minions” to provide monitoring at the edge of the network creates numerous monitoring solutions where there were none before. You can now reach into isolated or private networks, or monitor the performance of applications from various locations seamlessly. The “Sentinel” project allows the various processes within OpenNMS to be spread out over multiple devices with the aim of virtually unlimited scale.
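
To give a flavor of how lightweight a Minion is, pointing one at its home instance is mostly a matter of telling it its location and where the core lives. A sketch with made-up values, assuming the default ActiveMQ broker and ports:

# /opt/minion/etc/org.opennms.minion.controller.cfg (illustrative values)
location = Branch-Office
id = minion-01
http-url = https://opennms.example.com:8980/opennms
broker-url = failover:tcp://opennms.example.com:61616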

APM Example World Map

And I haven’t even started on the ability of OpenNMS to monitor tremendous amounts of telemetry data and to analyze it with tools such as “Nephron” or our foray into artificial intelligence with ALEC.

So much has changed with OpenNMS, much of it recently, that it was time for that new coat of paint. It was time for people to both notice the new look of OpenNMS at the surface, and the new OpenNMS under the covers.

One thing that hasn’t changed is that OpenNMS is still 100% open source. All of these amazing features are available to anyone under an OSI approved open source license. Plus we leverage and integrate with best-in-class open source tools such as Grafana for visualization and Cassandra (using Newts) for storing time series data.

Our new logo is a stylized gyroscope. For centuries the gyroscope has represented a way to maintain orientation in the most chaotic of situations. In much the same way, OpenNMS helps you maintain the orientation of your IT infrastructure which, let’s admit it, plays a huge role in the success of your enterprise.

Where the car analogy falls apart is that while the paint job is usually the end of a restoration, this new look for OpenNMS is just the beginning of a new chapter in the history of the project. Our goal is to create a platform where monitoring just happens. We’re not there yet, but check out the latest OpenNMS and we hope you’ll agree we are getting closer.

OpenNMS Resources

Getting started with OpenNMS can be a little daunting, so I thought I’d group together some of the best places to start.

When OpenNMS began 20+ years ago, the main communication channel was a group of mailing lists. For real time interaction we added an “#opennms” IRC channel on Freenode as well. As new technology came along we eagerly adopted it: hosting forums, creating a FAQ with FAQ-o-matic, building a wiki, writing blogs, etc.

The problem became that we had too many resources. Many weren’t updated and thus might host obsolete information, and it was hard for new users to find what they wanted. So a couple of years ago we decided to focus on just two main places for community information.

We adopted Discourse to serve as our “asynchronous” communication platform. Hosted at opennms.discourse.group the goal is to migrate all of our information that used to reside on sites like FAQs and wikis to be in one place. In as much as our community has a group memory, this is it, and we try to keep the information on this site as up to date as possible. While there is still some information left in places like our wiki, the goal is to move it all to Discourse and thus it is a great place to start.

I also want to call your attention to “OpenNMS on the Horizon (OOH)”. This is a weekly update of everything OpenNMS, and it is a good way to keep up with all the work going on with the platform since a lot of the changes being made aren’t immediately obvious.

While we’ve been happy with Discourse, sometimes you just want to interact with someone in real time. For that we created chat.opennms.com. This is an instance of Mattermost that we host to provide a Slack-like experience for our community. It basically replaces the IRC channel, but there is also a bridge between IRC and MM so that posts are shared between the two. I am “sortova” on Mattermost.

When you create an account on our Mattermost instance you will be added to a channel called “Town Square”. Every Mattermost instance has to have a default channel, and this is ours. Note that we use Town Square as a social channel. People will post things that may be of interest to anyone with an interest in OpenNMS, usually something humorous. As I write this there are over 1300 people who have signed up on Town Square.

For OpenNMS questions you will want to join the channel “OpenNMS Discussion”. This is the main place to interact with our community, and as long as you ask smart questions you are likely to get help with any OpenNMS issues you are facing. The second most popular channel is “OpenNMS Development” for those interested in working with the code directly. The Minion and Compass applications also have their own channels.

Another channel is “Write the Docs”. Many years ago we decided to make documentation a key part of OpenNMS development. While I have never read any software documentation that couldn’t be improved, I am pretty proud of the work the documentation team has put into ours. Which brings me to yet another source of OpenNMS information: the official documentation.

Hosted at docs.opennms.org, our documentation is managed just like our application code. It is written in AsciiDoc and published using Antora. The documentation is versioned just like our Horizon releases, but usually whenever I need to look something up I go directly to the development branch. The admin guide tends to have the most useful information, but there are guides for other aspects of OpenNMS as well.

The one downside of our docs is that they tend to be more reference guides than “how-to” articles. I am hoping to correct that in the future but in the meantime I did create a series of “OpenNMS 101” videos on YouTube.

They mirror some of our in-person training classes, and while they are getting out of date I plan to update them real soon (we are in the process of getting ready for a new release with lots of changes so I don’t want to do them and have to re-do them soon after). Unfortunately YouTube doesn’t allow you to version videos so I’m going to have to figure out how to name them.

Speaking of changes, we document almost everything that changes in OpenNMS in our Jira instance at issues.opennms.org. Every code change that gets submitted should have a corresponding Jira issue, and it is also a place where our users can open bug reports and feature requests. As you might expect, if you need to open a bug report please be as detailed as possible. The first thing we will try to do is recreate it, so having information such as the version of OpenNMS you are running, what operating system you are using and other steps to cause the problem are welcome.

If you would like us to add a feature, you can add a Feature Request, and if you want us to improve an existing feature you can add an Enhancement Request. Note that I think you have to have an account to access some of the public issues on the system. We are working to remove that requirement as we wish to be as transparent as possible, but I don’t think we’ve been able to get it to work just yet. I just attempted to visit a random issue and it did load but it was missing a lot of information that shows up when I go to that link while authenticated, such as the left menu and the Git Integration. You will need an account to open or comment on issues. There is no charge to open an account, of course.

Speaking of git, there is one last resource I need to bring up: the code. We host our code on Github, and we’ve separated out many of our projects to make them easier to manage. The main OpenNMS application is under “opennms” (naturally), but other projects, such as our machine learning feature ALEC, have their own repositories.

While it was not my intent to delve into all things git in this post, I did want to point out that in the top level directory of the “opennms” project we have two scripts, makerpm.sh and makedeb.sh, that you can use to easily build your own OpenNMS packages. I have a video queued up to go over this in detail, but to build RPMs all you’ll need is a base CentOS/RHEL install, and the packages “git” (of course), “expect”, “rpm-build” and “rsync”. You’ll also need a Java 8 JDK. While we run on Java 11, at the moment we don’t build using it (if you check out the latest OOH you’ll see we are working on it). Then you can run makerpm.sh and watch the magic happen. Note the first build takes a long time because you have to download all of the maven dependencies, but subsequent builds should be faster.
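
To put that all in one place, here is the rough sequence on a fresh CentOS machine (package names can drift between releases, so treat this as a sketch):

# Build dependencies mentioned above, plus a Java 8 JDK
sudo yum install -y git expect rpm-build rsync java-1.8.0-openjdk-devel

# Grab the source and kick off the build; the script reports where
# the finished RPMs land. The first run downloads all of the Maven
# dependencies, so be patient.
git clone https://github.com/OpenNMS/opennms.git
cd opennms
./makerpm.sh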

To summarize:

For normal community interaction, start with Discourse and use Mattermost for real time interaction.

For reference, check out our documentation and our YouTube channel.

For code issues, look toward our Jira instance and our Github repository.

OpenNMS is a powerful monitoring platform with a steep learning curve, but we are here to help. Our community is pretty welcoming, and we hope to see you there soon.