New Fancy Website for www.opennms.org

As some of you may have noticed, a little while ago the OpenNMS Project website got updated to a new, fancy, responsive version.

OpenNMS Platform

This was mainly the work of Ronny Trommer with a big assist from our graphic designer, Jessica.

We are so busy working on the code that we often forget how important it is to tell people about what we are doing. Most people who take the time to learn about the project realize how awesome it is, but it can be hard to get over that first hump in the learning curve.

I hope that the new site will reflect both the benefits of using OpenNMS and the work of the community behind it.

OpenNMS Meridian 2016 Released

I am woefully behind on blog posts, so please forgive the latency in posting about Meridian 2016.

As you know, early last year we split OpenNMS into two flavors: Horizon and Meridian. The goal was to create a faster release cycle for OpenNMS while still providing a stable and supportable version for those who didn’t need the latest features.

This has worked out extremely well. While there used to be eighteen months or so between major releases, we did five versions of Horizon in the same amount of time. That has led to the rapid development of such features as the Newts integration and the Business Service Monitor (BSM).

But that doesn’t mean the features in Horizon are perfect on Day One. For example, one early adopter of the Newts integration in Horizon 17 helped us find a couple of major performance issues that were corrected by the time Meridian 2016 came out.

The Meridian line is supported for three years. So, if you are using Meridian 2015 and don’t need any of the features in Meridian 2016, you don’t need to upgrade. Fixes for major performance issues, all security fixes and most of the new configurations will be backported to that release until Meridian 2018 comes out.

Compare and contrast that with Horizon: once Horizon 18 was released, all work stopped on Horizon 17. This means a much more rapid upgrade cycle; the upside is that Horizon users get to see all the new shiny features first.

Meridian 2016 is based on Horizon 17, which has been out since the beginning of the year and has been highly vetted. Users of Horizon 17 or earlier should have an easy migration path.

I’m very happy that the team has consistently delivered on both Horizon and Meridian releases. It is hoped that this new model will keep OpenNMS on the cutting edge of the network monitoring space while providing a more stable option for those with environments that require it.

Upgrading Linux Mint 17.3 to Mint 18 In Place

Okay, I thought I could wait, but I couldn’t, so yesterday I decided to do an “in place” upgrade of my office desktop from Linux Mint 17.3 to Mint 18.

It didn’t go smoothly.

First, let me stress that the Linux Mint community strongly recommends a fresh install every time you upgrade from one release to another, especially when it is from one major release to another, as with Mint 17 to Mint 18. They ask you to back up your home directory and package lists, base the system and then restore. The problem is that I often make a lot of changes to my system, which usually involves editing files in the system /etc directory, and this process doesn’t capture that.
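For reference, here is a minimal sketch of capturing those package lists, plus the /etc that a home backup misses (the file names are just examples):

# Save the package selections so they can be replayed on the new install
dpkg --get-selections > ~/package-selections.txt

# A home directory backup won't capture system config, so grab /etc too
sudo tar czf ~/etc-backup.tar.gz /etc

# After the fresh install, restore the selections with:
#   sudo dpkg --set-selections < package-selections.txt
#   sudo apt-get dselect-upgrade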

One thing I’ve always loved about Debian is the ability to upgrade in place (and often remotely) and this holds true for Debian-based distros like Ubuntu and Mint. So I was determined to try it out.

I found a couple of posts that suggested all you need to do is replace “rosa” with “sarah” in your repository file, and then do an “apt-get update” followed by an “apt-get dist-upgrade”. That doesn’t work, as I found out, because Mint 18 is based on Xenial (Ubuntu 16.04) and not Trusty (Ubuntu 14.04). Thus, you also need to replace every instance of “trusty” with “xenial” to get it to work.
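In other words, something like this (a sketch, assuming the stock Mint repository file location; the -i.bak keeps a backup copy):

# Point the Mint repos at sarah and the Ubuntu repos at xenial
sudo sed -i.bak 's/rosa/sarah/g; s/trusty/xenial/g' /etc/apt/sources.list.d/official-package-repositories.list
sudo apt-get update
sudo apt-get dist-upgrade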

Finally, once I got that working, I couldn’t get into the graphical desktop. Cinnamon wouldn’t load. It turns out Cinnamon is in a “backport” branch for some reason, so I had to add that to my repository file as well.

To save trouble for anyone else wanting to do this, here is my current /etc/apt/sources.list.d/official-package-repositories.list file:

deb http://packages.linuxmint.com sarah main upstream import backport #id:linuxmint_main
# deb http://extra.linuxmint.com sarah main #id:linuxmint_extra

deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ xenial partner

Note that I commented out the “extra” repository since one doesn’t exist for sarah yet.

The upgrade took a long time. We have a decent connection to the Internet at the office, and it still took over an hour to download the packages. There were a number of conflicts I had to deal with, but overall that part of the process was smooth.

Things seem to be working, and the system seems a little faster but that could just be me wanting it to be faster. Once again many thanks to the Mint team for making this possible.

MC Frontalot and The Doubleclicks at All Things Open

I am happy to finally be able to confirm that MC Frontalot and his band, along with The Doubleclicks, will be playing an exclusive show during the All Things Open conference in October. The OpenNMS Group, at great expense (seriously, this is like our entire marketing budget for the year), has secured these two great acts to help celebrate all things open, and All Things Open.

MC Frontalot

I first met Damian (aka Frontalot) back in 2012 when I hired him to play at the Ohio Linuxfest. I subscribe to the Chris DiBona theory that open source business should give back to the community (he once described his job as “giving money to his friends”) and thus I thought it would be cool to introduce the übernerd Frontalot to the open source world.

We hit it off and now we’ve hired him a number of times. The last time was for OSCON in 2015, where we decided to bring in the entire band. What an eye-opening experience that was. A lot of tech firms talk about “synergy” – the idea that the whole is greater than the sum of its parts – but Front with his band takes the Frontalot experience to a whole new level.

Also at the OSCON show we were able to get The Doubleclicks to open. This duo of sisters, Angela and Aubrey Webber, bring a quirky sensibility to geek culture and were the perfect opening act.

Now, I love open source conferences, but I overdid it last year. So this year I’m on a hiatus and have been to *zero* shows, but I made an exception for All Things Open. First, it’s in my home city of Raleigh, North Carolina, which is also home to Red Hat. We like to think of the area as the hotbed of open source if not its heart. Second, the conference is organized by Todd Lewis, the Nicest Man in Open Source™. He spends his life making the world a better place and it is reflected in his show. We couldn’t think of a better way to celebrate that than by bringing in some top entertainment for the attendees.

That’s right: there are only two ways to get into this show. The easiest is to register for the conference, as the conference badge is what you’ll need to get into the venue. The second way is to ask us nicely, but we’ll probably ask you to prove your dedication to free and open source software by performing a task along the lines of a Labor of Hercules, except ours will most likely be obscenely biological.

Seriously, if you care about FOSS you don’t want to miss All Things Open, so register.

If you are unfamiliar with the work of MC Frontalot, may I suggest you check out “Stoop Sale” and “Critical Hit”, or if you’re Old Skool like me, watch “It Is Pitch Dark”. His most recent album was about fairy tales (think of it as antique superhero origin stories). Check out “Start Over” or better yet the version of “Shudders” featuring the OpenNMS mascot, Ulf.

As for The Doubleclicks, you can browse most of their catalog on their website. One song that really resonates with me, especially at conferences, is “Nothing to Prove”, which I hope they’ll do at the show.

Oh, and I saved the best for last: Front has been working on a free software song. Yup, he is bringing his mastery of rhymes to bear on the conflict between “free as in beer” and “free as in liberty” and its world premiere will be, you guessed it, at All Things Open.

The show will be held at King’s Barcade, just a couple of blocks from the conference, on Wednesday night the 26th of October. You don’t want to miss it.

First Thoughts on Linux Mint 18 “Sarah”

I am a big fan of Linux Mint and I look forward to every release. This week Mint 18 “Sarah” was released. I decided to try it out on my Dell XPS 13 laptop since it is the easiest machine of mine to base, and the Mint team really hasn’t suggested an upgrade path. The one article I was able to find suggested a clean install, which is what I did.

First, I backed up my home directory, which is where most of my stuff lives, and I backed up the system /etc directory since I’m always making a change there and forgetting that I need it (usually concerning setting up the network interface as a bridge).

I then installed a fresh copy of Mint 18. Now they brag that the HiDPI support has improved (as I will grouse about later, so does everyone else) but it hasn’t. So the first thing I did was to go to Preferences -> General and set “User interface scaling” to “Double”. This worked pretty well in Mint 17 and it seems to be fine in Mint 18 too.
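For what it’s worth, the same setting should be reachable from the command line via gsettings – an assumption on my part that your Cinnamon build exposes this schema, so verify the key first:

# Check that the key exists:
#   gsettings list-keys org.cinnamon.desktop.interface | grep scaling
# Then set the UI scaling to double (a value of 1 returns it to normal)
gsettings set org.cinnamon.desktop.interface scaling-factor 2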

I then did a basic install (I used a USB dongle to connect to a wired network since I didn’t want to mess with the Broadcom drivers at this point) and chose to encrypt the entire hard drive, which is something I usually do on laptops.

I hit my first snag when I rebooted. The boot cycle would hang at the password screen to decrypt the drive. In Mint 17 the password prompt would be on top of the “LM” logo. I would type in the password and it would boot. Now the “LM” logo has five little dots under it, like the Ubuntu boot screen, and the password prompt is below that. It’s just that it won’t accept input. If I boot in recovery mode, the password prompt is from the command line and works fine.

(sigh)

This seems to be a problem introduced with Ubuntu 16.04. Well, before I dropped back down to Mint 17, I decided to try out Ubuntu 16.04 itself as well as Kubuntu. My laptop was based in any case.

I ran into the usual HiDPI problems with both of those. I really, really want to like Kubuntu but with my dense screen I can’t make out anything and thus I can’t find the option to scale it. Ubuntu’s Unity was easier as it has a little sliding scaler, but when I got it to a resolution I liked many of the icon labels were clipped, just like last time I looked at it.

(sigh)

Then it dawned on me that I could just install Mint 18 but see if encrypting only my home directory would work. It did, so for now I’m using Mint 18 without full disk encryption. The next step was to install the proprietary Broadcom driver, and then wireless worked.

Next, I edited /etc/fstab and added my backup NFS mount entry, mounted the drive and started restoring my home directory. That went smoothly, until I decided to reboot.

The laptop just hung at the boot screen.

Now there is a bug in the Dell BIOS where, if I try to boot with a USB network adapter plugged in, it erases the EFI entry for “ubuntu” and I have to go into setup and manually re-add it. Thus I was disconnecting the dongle for every reboot. On a whim I plugged it back in and the system booted. This led me to believe that there was an issue with the NFS mount in /etc/fstab, and that’s what the problem turned out to be.

The problem is that systemd likes to get its little hands into everything, so it tries to mount the volume before the wireless network is initialized. The solution is to add a special option that will cause systemd to automount the volume when it is first requested. Here is what worked:

172.20.10.5:/volume1/Backups /media/backups nfs noauto,x-systemd.automount,nouser,rsize=8192,wsize=8192,atime,rw,dev,exec,suid 0 0

The key bits are “noauto,x-systemd.automount”.
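If you want to test an entry like this without rebooting, systemd can regenerate its units on the fly. The unit name is derived from the mount path, so this sketch assumes the /media/backups path above:

# Regenerate mount units from /etc/fstab
sudo systemctl daemon-reload
# The generated automount unit should now be waiting
systemctl status media-backups.automount
# First access triggers the actual NFS mount
ls /media/backups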

With that out of the way, I added mounts for my music and my video collection. That’s when I noticed a new weirdness in Cinnamon: dual icons on the desktop. I have set the desktop option to display icons for mounted file systems and now I get two of them for each remote mount point.

Double Desktop Icons

Annoying, and I haven’t found a solution, so I just turned that option back off.

Now I was ready to play with the laptop. I’m often criticized for buying brand new hardware and expecting solid Linux support (yeah, you, Eric) but this laptop has been out for over a year. Still, the trackpad is a little wonky – the cursor tends to jump to the lower right hand corner. Mint 18 ships with a 4.4 kernel but I had been using Mint 17 with a 4.6 kernel. One of the features of 4.6 is “Dell laptop improvements” so while I was hoping 4.4 would work for me (and that the features I needed would have been backported) it isn’t so. I installed 4.6 and my trackpad problems went away.
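For the curious, installing a mainline kernel boils down to grabbing the .deb files from kernel.ubuntu.com/~kernel-ppa/mainline/ and installing them by hand. A sketch (the exact file names change with every build):

# Download the headers and image packages for v4.6 from the mainline
# directory, then install them together and reboot
sudo dpkg -i linux-headers-4.6*_all.deb linux-headers-4.6*-generic*_amd64.deb linux-image-4.6*-generic*_amd64.deb
sudo reboot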

The final issue I needed to fix concerned ssh. I use ssh-agent and keys to access a lot of my remote servers, and it wasn’t working on Mint 18. Usually this is a permissions issue, but I compared the laptop to a working configuration on my desktop and the permissions were identical.

The error I got was:

debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0

It turns out that OpenSSH 7.0 seems to require that an “IdentityFile” parameter be expressly defined. I might have been able to do this in the system-wide ssh_config, but instead I just created a ~/.ssh/config file with the line:

IdentityFile ~/.ssh/id_dsa_main

That got me farther. Now the error changed to:

debug1: Skipping ssh-dss key /home/tarus/.ssh/id_dsa_main - not in PubkeyAcceptedKeyTypes
debug1: Skipping ssh-dss key tarus@server1.sortova.com - not in PubkeyAcceptedKeyTypes

It seems the key I created back in 2001 is no longer considered secure. Since I didn’t want to go through the process of creating a new key right now, I added another line to my ~/.ssh/config file:

IdentityFile ~/.ssh/id_dsa_main
PubkeyAcceptedKeyTypes=+ssh-dss

and now it works as expected. The weird part is that you would think this would be controlled on the server side, but the failure was coming from the client and thus I had to fix it on the laptop.
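When I do get around to replacing that ancient key, the fix will look something like this (a sketch – your file names and servers will differ):

# Generate a modern Ed25519 key
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# Install the new public key on each server while the old key still works
ssh-copy-id -i ~/.ssh/id_ed25519.pub server1.sortova.com
# Once every server has it, the PubkeyAcceptedKeyTypes workaround can go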

Now that it is installed and seems to be working, I haven’t really played around with Mint 18 much, so I may have to write another post soon. I do give them props for finally updating the default desktop wallpaper. I know the old wallpaper was traditional, but man was it dated.

This was a more complex upgrade than usual, and I don’t agree that you must base your system to do it, even from major release to major release. This isn’t Fedora. It’s based on Ubuntu, which is based on Debian, and I have rarely had issues with those upgrades. Usually you just change your repositories and then do “apt-get dist-upgrade”.

But … I might wait a week or two once they approve an upgrade procedure and let other people hit the bugs first, just in case. My desktops are more important to me than my laptop.

Hats off to the Mint team. I’m pretty tied to this operating system so I’m encouraged that it keeps moving forward as quickly as it does.

The Inverter: Episode 70 – Delicious Amorphous Tech Bubble

This week the Gang of Four is down to three as Stuart is off on holiday with his daughter in New York City. The episode runs 82 minutes, and I’m seeing a trend: the shorter episodes happen when Jeremy is out. I think it is because he clutters up the whole show with facts and reasoning.

The first segment asked the question “Are we in another tech bubble, and if so, what shape is it?” Of course we are in another tech bubble, as Jeremy so deftly demonstrates by comparing a number of startups with over a billion dollars in valuation to real companies such as General Electric. They talk about a number of reasons for it, but I think they left an important one out: egos.

Look, growing up as a geek in the late 1970s and early 1980s, we didn’t get much respect. Now with the various tech bubbles and widespread adoption of technology by the masses, geeks can at least be wealthy if not popular. But I think we still harbor, deep down, a resentment of the jocks and popular kids that results in problems with self-esteem. Take Marc Andreessen as an example. By most measures he’s successful, but take a look at him. He is not a pretty man, even though that male pattern baldness does suggest a big wee-wee. I think he still has something to prove, which is why he dumps money into impossible things like uBeam, which has something like a $500MM valuation. I think a lot of the big names in Silicon Valley have such a huge fear of missing out that they drive up valuations on companies without a business model and no hope of making a profit, much less a product.

But then Microsoft bought LinkedIn for $26.2B, so what do I know.

Well, I do know the shape of the tech bubble: it’s a pear.

In the next segment the guys almost spooge all over themselves talking about the Pixel C tablet. I’ve never been a tablet guy. I have a six-inch … smartphone, and it works fine for all of my mobile stuff. If I need anything bigger, I use a Dell XPS laptop running Mint. I do own a Nexus 10 but only use it to read eBooks that come in PDF format.

But all three of them really like it, meaning that if I decide to get a new tablet I’ll seriously consider it. Bryan did mention a couple of apps I was unfamiliar with, so I’ll have to check them out.

The first is called Termux and it provides a terminal emulator (already got one) but it adds a Linux environment as well. Could be cool. The other is DroidEdit which is a text editor for Android with lots of features, similar to vim or gedit on steroids. Bryan used these during his ill-fated attempt to live in the Linux shell for 30 days.
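I haven’t tried Termux yet, but from what I can tell it provides a real apt-based userland, so something along these lines should work inside it:

# Termux ships its own repositories; the shell feels like a tiny Debian
apt update
apt install vim openssh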

Apparently the Pixel C is magnetic, with magnets so strong you can hang it on your fridge. Add a webcam and I won’t need one of these.

The third segment was on Nextcloud. I’ll give the Nextcloud guys some props for getting press. This is something like the third in-depth interview I’ve listened to in the past three weeks. If you’ve been living under a rock and don’t know that Nextcloud is a fork of ownCloud, start here. They interviewed Frank Karlitschek and Jan-Christoph Borchardt about the split and their plans.

I was hoping for more details on what caused the fork (because I’m a nosy bastard) but Jono started off with something like a 90-second leading question to Frank that pretty much handed him an explanation. I was screaming “Objection! Leading the witness!” but it didn’t help. I guess it really doesn’t matter.

I do think I’d really enjoy meeting Frank. They are dedicated to keeping Nextcloud 100% open source (like good ol’ OpenNMS). They also brought up a point that is very hard to make with large, complex open source projects. Everyone will ask “How do you compare with ownCloud?” when the better question is “How do you compare to Dropbox?” At OpenNMS we are always getting the “How are you different from Nagios?” question when the better question is “How do you compare to Tivoli or OpenView?”

The fourth segment was on the XPrize Global Learning Project. The main takeaway I got from it was that the very nature of the XPrize doesn’t lend itself to the Open Source Way. The prize amount is so high it doesn’t encourage sharing. Still, a couple of projects are trying it so I wish them all the luck.

The final “segment” is the outro where the guys usually just shoot the breeze. They mentioned that Stuart, visiting the US, is getting slammed with Brexit questions, and I do find that amusing, having traveled to the UK numerous times and been peppered with questions about stupid US politics. It’s one of the reasons I hope Donald Trump doesn’t get elected – I’m not ready to go back to claiming to be Canadian when I travel.

They also talked about fast food restaurants. I’m surprised In-N-Out Burger didn’t get a mention. From the moment a new one opens it is usually slammed at all hours. They did mention Chick-fil-A, which I used to love until I boycotted them over their political activism. There is a pretty cool article on five incredible fast food chains you shouldn’t eat at (including Chick-fil-A) and one you should but probably can’t (In-N-Out).

Overall I thought it was a solid show, although it needed more ginger. Good to see the guys getting back into form.

OpenNMS and Elasticsearch

With Horizon 18 we added support for sending OpenNMS events into Elasticsearch. Unfortunately, it only works with Elasticsearch 1.0. Elasticsearch 2.0 and higher requires Camel 17, but OpenNMS can’t use that version. I wondered why, and if you were wondering too, here is the answer from Seth:

Camel 17 has changed their OSGi metadata to only be compatible with Spring 4.1 and higher. We’re still using Spring 4.0 so that’s one problem. The second issue is that ActiveMQ’s OSGi metadata bans Spring 4.0 and higher. So currently, ActiveMQ and Camel are mutually incompatible with one another inside Karaf at any version higher than the ones that we are currently running.

The biggest issue is the ActiveMQ problem. I’ve opened this bug and it sounds like they’re going to address it in their next major release.

So there you have it.
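If you want to see the versions in play on your own instance, the embedded Karaf shell will list them. This sketch assumes the default shell on port 8101, and note that older Karaf releases spell the command “osgi:list”:

# Connect to the OpenNMS Karaf console (default credentials are admin/admin)
ssh -p 8101 admin@localhost
# Then, at the Karaf prompt:
#   bundle:list | grep -i camel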

The Inverter: Episode 69 – Bill and Ted and Jeremy and Bryan and Jono and Stuart’s Excellent Adventure

The Gang of Four decided to actually produce a regular episode of Bad Voltage for the first time in, like, a month, so I decided to resurrect this little column making fun of them.

I am actually supposed to be on vacation this week, but for me vacation means working around the farm. I was working outside when the heat index hit 108.5F, so while I was recovering from heat stroke I decided to give this week’s show a listen.

Clocking in at a healthy 75 minutes, give or take, it was an okay show, although the last fifteen minutes kind of wandered (much like most of this review).

The first segment concerned the creation of Nextcloud as a fork of ownCloud. I’ve already presented my thoughts on it from Bryan’s YouTube interview with the founders of Nextcloud, and not much new was covered here. But it was a chance for all four of them to discuss it. One of the touted benefits of the new project is the lack of a contributor agreement. I don’t find this a good thing. Note that while I wholeheartedly agree that many contributor agreements are evil, that doesn’t make them all evil. Take the OpenNMS contributor agreement. It’s pretty simple, and it protects both the contributor and the project. The most important feature, to me, is that the contributor states that they have a right to contribute the code to the project. I think that’s important, although if it were lacking or the contributor lied, the results would be the same (the infringing code would be removed from the application). It at least makes people think just a bit before sending in code.

Bryan made an offhand mention of trademarks in the same discussion, and I wasn’t sure what he meant by it. Does it mean Nextcloud won’t enforce trademarks, or that there is an easy process that allows people to freely use them? I think enforcing trademarks is extremely important for open source companies. Otherwise, someone could take your code, crap all over it, and then ship it out under the same name. At OpenNMS we had issues with this back in 2005 but luckily since then it has been pretty quiet.

While there was even more speculation, no one really knows why the Nextcloud fork happened. Some say it was that Frank Karlitschek was friends with Niels Mache of Spreed.me and wanted a partnership, but ownCloud was against it. I think we’ll never know. Another suggestion that has been made is that it had to do with the community of ownCloud vs. the investors. Jono made the statement that VCs don’t take an active role in the community, but I have to disagree. My interactions with 90% of VCs have been like an episode of Silicon Valley, and while they may not take an active role, you can expect them to say things like “These features over here will be part of our ‘enterprise’ version and not open, and make sure to hobble the ‘community’ version to drive sales, but other than that, run your community the way you want.”

One new point that was brought up was the business perception of the company. I think everyone who self-identifies as an open source fan who is using ownCloud will most likely switch to Nextcloud since that is where the developers went, but will businesses be cautious about investing in Nextcloud? The argument can be made that “who knows what will set Frank off next?” and the threat of NextNextCloud might worry some. I am not expecting this to happen (once bitten, twice shy, I bet Frank has learned a lot about what he wants out of his project) but it is a concern.

It is similar to LibreOffice. I don’t know anyone in the open source world using OpenOffice, but it is still huge outside of that world (I did a ride-along with a friend who is a police officer and was pleasantly surprised to see him bring up OpenOffice on his patrol car’s laptop).

It kind of reminds me of when Google killed Reader and then announced Keep – it seemed a bit ironic at the time. If a company can radically change or even remove a service you have come to rely on, will you trust them in the future?

The segment ended with a discussion of the early days of Ubuntu. Bryan made the claim that Ubuntu was created as an easier-to-use version of Debian, which Jono vehemently denied. He claimed the goal was to create a free, powerful desktop operating system. All I remember from those days were those kids from the United Colors of Benetton ads on the covers of the free CDs.

The next piece was Bryan reviewing the latest Dell XPS 13 laptop. My last two laptops have been XPS 13 models and I love them. They ship with Linux (which I want to encourage) and I find they provide a great Linux desktop experience.

I got my newest one last year, and the main issue I’ve had is with the trackpad. Later kernels seem to have addressed most of my problems. I also dumped the Ubuntu 14.04 that shipped with it in exchange for Linux Mint, but I’m still running mainline kernels (4.6 at the moment). I’m eager for Mint 18 to come out to see if the (rumoured) 4.4 kernel will work well (they keep backporting device driver changes) but outside of that I’ve had few problems.

Battery life is great, and the HiDPI screen is a big improvement over my old XPS 13. The main weirdness, for my model, is the location of the camera. In order to make the InfinityEdge display, they moved it to the bottom left of the screen so that the top bezel could be as thin as possible. It means people end up looking at the flabby underside of my chin instead of my face at times, but I use it so little that it doesn’t bother me much.

The third segment was about funding open source projects. It’s an eternal question: how do you pay for developers to work on free software? The guys didn’t really address it, focusing for the most part on programs that would provide some compensation for, say, travel to a conference, versus paying someone enough to make their mortgage. Stuart finally brought up that point but no real answers were offered.

The last fifteen minutes was the gang just shooting the breeze. Bryan used the term “duck fart” which apparently is a cocktail (sounds nasty, so don’t expect it on the cocktail blog). There is also, apparently, a science fiction novel called Bad Voltage that is not supposed to be that great, and the suggestion was made that the four of them should write their own version, but in the form of an “exquisite corpse” (my term, not theirs) where each would write their section independently and see what happens when it gets combined.

All in all, not a horrible show but not great, either. It is nice to have them all back together.

I’m eager to see how Bryan manages the next one, since he is spending 30 days solely in the Linux shell. How will Google Hangouts (which is what they use to make the show) work?

Curious minds want to know.

Choose the Right Thermometer

Okay, so I have a love/hate relationship with CenturyLink. CenturyLink provides a DSL circuit to my house. I love the fact that I have something resembling broadband with 10Mbps down and about 1Mbps up. Now that doesn’t even qualify as broadband according to the FCC, but it beats the heck out of the alternatives (and I am jealous of my friends with cable who have 100Mbps down or even 300Mbps).

The hate part comes from reliability, which lately has been crap. This post is actually focused on OpenNMS so I won’t go into all of my issues, but I’ve been struggling with long outages in my service.

The latest issue is a new one: packet loss. Usually the circuit is either up or completely down, but for the last three days I’ve been having issues with a large percentage of dropped packets. Of course I monitor my home network from the office OpenNMS instance, and this usually manifests itself as multiple nodeLostService events around HTTP, since I have a personal web server that I monitor.

The default ICMP monitor does not measure packet loss. As long as at least one ping reply makes it, ICMP is considered up, so the node itself remains up. OpenNMS does have a monitor for packet loss called Strafeping. It sends out 20 pings in a short amount of time and then measures how long they take to come back. So I added it to the node for my home and I saw something unusual: a consistent 19 out of 20 lost packets.

Strafeping Graph
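You can approximate what Strafeping does from the command line with a quick burst of pings (the host name below is a stand-in for my home router, and intervals under 200ms require root):

# 20 pings, 50ms apart; the summary line reports the loss percentage,
# e.g. "20 packets transmitted, 1 received, 95% packet loss"
sudo ping -c 20 -i 0.05 home.example.org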

Power cycling the DSL modem seemed to correct the problem, and a command line ping reported no lost packets, so why was I seeing such packet loss from the monitor? Was Strafeping broken?

While it is always a possibility, I didn’t think that Strafeping was broken, but I did check a number of graphs for other circuits and they looked fine. Thus it had to be something else.

This brings up a touchy subject for me: false positives. Is OpenNMS reporting false problems?

It reminds me of an event that happened when I was studying physics back in the late 1980s. I was working with some newly discovered ceramic material that exhibited superconductivity at relatively high temperatures (around 92K). That temperature can be reached using liquid nitrogen, which was relatively easy to source compared to colder liquids like liquid helium.

I needed to measure the temperature of the ceramic, but mercury (used in most common thermometers) is a solid at those temperatures, so I went to my advisor for suggestions. His first question to me was “What does a thermometer measure?”

I thought it was a trick question, so I answered “temperature” (“thermo” meaning temperature and “meter” meaning “to measure”). He replied, “Okay, smart guy, the temperature of what?”

That was harder to answer exactly, so I said vague things like the ambient environment, whatever it was next to, etc. He interrupted me and said “No, a thermometer measures one thing: the temperature of the thermometer”.

This was an important lesson, even though it seems obvious. In the case of the ceramic it meant a lot of extra steps to make sure the thermometer we were using (which was based on changes in resistance) was as close to the temperature of the material as possible.

What does that have to do with OpenNMS? Well, OpenNMS is like that thermometer. It is up to us to make sure that the way we decide to use it for monitoring is as close to our criteria as possible. A “false positive” usually indicates a problem with the method versus the tool – OpenNMS is behaving exactly as it should but we need to match it better to what we expect.

In my case I found out the router I use was limited by default to responding to one ping per second (to avoid DDoS attacks, I assume), so last night when I upped that to allow 20 pings per second, Strafeping started to work as expected (as you can see in the graph above).
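For reference, on a Linux-based router – an assumption on my part, since every vendor buries this setting somewhere different – the knob looks like this:

# icmp_ratelimit is the minimum gap in milliseconds between rate-limited
# ICMP packets; the default of 1000 allows only one per second (which
# types are limited depends on net.ipv4.icmp_ratemask)
sysctl net.ipv4.icmp_ratelimit
# Allow one every 50ms, i.e. 20 per second, so Strafeping's burst survives
sudo sysctl -w net.ipv4.icmp_ratelimit=50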

This allowed me to detect when my DSL circuit packet loss started again today. A little after 14:00 the system detected high packet loss. When this happened before, power cycling the modem seemed to fix it, so I headed home to do just that.

While I was on the way, around 15:30, the packet loss seemed to improve, but as you can see from the graph the ping times were all over the place (the line is green but there is a lot of extra “smoke” around it, indicating a variance in the response times). I proactively power cycled the modem and things settled down. The CenturyLink agent agreed to send me a new modem.

The point of this post is to stress that you need to understand how your monitoring tools actually work, and that you can often correct issues that make a monitor unusable and turn it into something useful. Choose the right thermometer.

Nextcloud, Never Stop Nexting!

It’s been a while since I’ve posted a long, navel-gazing rant about the business of open source software. I’ve been trying to focus more on our business than spending time talking about it, but yesterday an announcement was made that brought all of it back to the fore.

TL;DR: Yesterday the Nextcloud project was announced as a fork of the popular ownCloud project. It was founded by many of the core developers of ownCloud. On the same day, the US corporation behind ownCloud shut its doors, citing Nextcloud as the reason. Is this a good thing? Only time will tell, but it represents the (still) ongoing friction between open source software and traditional software business models.

I was looking over my Google+ stream yesterday when I saw a post by Bryan Lunduke announcing a special “secret” broadcast coming at 1pm (10am Pacific). As I am a Lundookie, I made a point to watch it. I missed the start of it but when I joined it turned out to be an interview with the technical team behind a new project called Nextcloud, which was for the most part the same team behind ownCloud.

Nextcloud is a fork, and in the open source world a “fork” is the nuclear option. When a project’s community becomes so divided that they can’t work things out, or they don’t want to work things out for whatever reasons, there is the option to take the code and start a new project. It always represents a failure, but sometimes it can’t be helped. The two forks I can think of offhand, Joomla from Mambo and Icinga from Nagios, both resulted in stronger projects and better software, so maybe this will happen here.

In part, I blame the fork on the VC model for financing software companies. In the traditional software model, a bunch of money is poured into a company to create software, but once that software is created the cost of reproducing it is near zero, so the business model is to sell licenses to the software to the end users in order to generate revenue in the future. This model breaks when it comes to free and open source software, since once the software is created there is no way to force the end users to pay for it.

That still doesn’t keep companies from trying. This resulted in a trend (which is dying out) called “open core” – the idea that some software is available under an open source license but certain features are kept proprietary. As Brian Prentice at Gartner pointed out, there is little difference between this and plain old proprietary software. You end up with the same lack of freedom and the same vendor lock-in.

Those of us who support free software tend to be bothered by this. Few things get me angrier than to be at a conference and have someone go “Oh, this OpenNMS looks nice – how much is the enterprise version?”. We only have the enterprise version and every bit of code we produce is available under an open source license.

Perhaps this happened at ownCloud. When one of the founders was on Bad Voltage a while back, I had this to say about the interview:

The only thing that wasn’t clear to me was the business model. The founder Frank Karlitschek states that ownCloud is not “open core” (or as we like to call it “fauxpensource”) but I’m not clear on their “enterprise” vs. “community” features. My gut tells me that they are on the side of good.

Frank seemed really to be on the side of freedom, and I could see this being a problem if the rest of the ownCloud team wasn’t so dedicated.

During the interview yesterday I asked whether Nextcloud was going to have a proprietary (or “enterprise”) version. As you can imagine, I am pretty strongly against that.

The reason I asked was this article on the new company, which stated:

There will be two editions of Nextcloud: the free of cost community edition and the paid enterprise edition. The enterprise edition will have some additional features suited for enterprise customers, but unlike ownCloud, the community and enterprise editions for Nextcloud will borrow features from each other more freely.

Frank wouldn’t commit to making all of Nextcloud open, but he does seem genuinely determined to make as much of it open as possible.

Which leads me to wonder, what’s stopping him?

It’s got to be the money guys, right? Look, nothing says that open source companies can’t make money; it’s just that you have to do it differently than you would with proprietary software. I can’t stress this enough – if your “open source” business model involves selling proprietary software, you are not an open source company.

This is one of the reasons my blood pressure goes up whenever I visit Silicon Valley. Seriously, when I watch the HBO show, to me it isn’t a comedy, it’s a documentary (and the fact that I most closely identify with the character of Erlich doesn’t make me feel all that better about myself).

I want to make things. I want to make things that last. I can remember the first true vacation I took, several years after taking over the OpenNMS project, when it had grown to the point that it didn’t need me all the time. I was so happy that it had reached that point. I want OpenNMS to be around well after I’m gone.

It seems, however, that Silicon Valley is more interested in making money than in making things. They hunt “unicorns” – startups with more than a $1 billion valuation – and frequently no one can really determine how they arrive at that valuation. They are so consumed with jargon that quite often you can’t even figure out what some of these companies do, and many of them fade in value after the IPO.

I can remember a keynote at OSCON by Mårten Mickos about Eucalyptus, and how it was “open source” but of course would have proprietary code because “well, we need to make money”. He is one of those Silicon Valley darlings who just doesn’t get open source, and it’s why we now have OpenStack.

The biggest challenge to making money in open source is educating the consumer that free software doesn’t mean free solution. Free software can be very powerful but it comes with a certain level of complexity, and to get the most out of it you have to invest in it. The companies focused on free and open source software make money by providing products that address this complexity.

Traditionally, this has been service and support. I like to say at OpenNMS we don’t sell software, we sell time. Since we do little marketing, all of our users are self-selecting (which makes them incredibly intelligent and usually quite physically beautiful) and most of them have the ability to figure out their own issues. But by working with us we can greatly shorten the time to deploy as well as make them aware of options they may not know exist.

In more recent times, there is also the option to offer open source software as a service. Take WordPress, one of my favorite examples. While I find it incredibly easy to install an instance of WordPress, if you don’t want to or if you find it difficult, you can always pay them to host it for you. Change your mind later? You can export it to an instance you control.

The market is always changing and with it there is opportunity. As OpenNMS is a network monitoring platform and the network keeps getting larger, we are focusing on moving it to OpenStack for ultimate scalability, and then coupled with our Minions we’ll have the ability to handle an “Internet of Things” amount of devices. At each point there are revenue opportunities as we can help our clients get it set up in their private cloud, or help them by letting them outsource some or all of it, such as Newts storage. The beauty is that the end user gets to own their solution and they always have the option of bringing it back in house.

None of these models involves requiring a license purchase as part of the business plan. In fact, I can foresee a time in the near future when purchasing a proprietary software product without fully exploring open source alternatives will be considered a breach of fiduciary responsibility.

And these consumers will be savvy enough to demand pure open source solutions. That is why I think Nextcloud, if they are able to focus their revenue efforts on things such as an appliance, has a better chance of success than a company like ownCloud that relies on revenue from software license sales. The fact that most of the creators have left doesn’t help them, either.

The lack of revenue from license sales makes most VCs panic, and it looks like that’s exactly what happened with the US division of ownCloud:

Unfortunately, the announcement has consequences for ownCloud, Inc. based in Lexington, MA. Our main lenders in the US have cancelled our credit. Following American law, we are forced to close the doors of ownCloud, Inc. with immediate effect and terminate the contracts of 8 employees. The ownCloud GmbH is not directly affected by this and the growth of the ownCloud Foundation will remain a key priority.

I look forward to the time in the not too distant future when the open core model is seen as being as quaint as selling software on floppy disks at the local electronics store, and I eagerly await the first release of Nextcloud.