♫ The Lunatic is on My Web ♫

The TL;DR of it is that I needed to create a new forum called OpenNMS Connect, and the software I ended up choosing is a project called Luna. So far I’ve been happy with it.

When I first started my quest for forum software a couple of months ago, I did what most geeks do and searched for it. I found a very helpful Wikipedia page (‘natch).

After dismissing the non-open source options, I started looking at the programming language each was written in. Now I know I really shouldn’t be a PHP snob (this blog is presented using PHP software) but having been burned by security issues in the past, my first inclination is to avoid it.

Now the guys in the office are trying to get me to think all “agile-ly” and so I need a “user story”. Any forum we use has to support LDAP, for which the story could be “User must be able to access forum using directory services” or, better yet, “Admin needs a central way of controlling forum access”. We implement LDAP via the FreeIPA project, and it will just be so much easier if we can add and remove people from a particular group and just have it work.

The first project I looked at was Discourse. I was especially interested in a hosted version if I could tie it into our IPA instance. Discourse is kind of the “new hotness” at the moment, but I didn’t see an easy way to implement LDAP. There is a Single Sign On (SSO) option but it would require writing our own authentication page, and it wouldn’t work if we hosted it with them anyway.

The next project that caught my eye was the eXo Platform. It’s written in Java (as is OpenNMS) and it seems to have a ton of features. Perhaps too many. In any case I put the team on it and asked them to get it working with LDAP.

They succeeded in getting LDAP authentication to work, but then hit a ton of other snags. The authenticated users couldn’t access the default /portal/intranet site no matter how often we tweaked the permissions. They could reach the /portal/meridian site but we couldn’t figure out how to change the default portal. And in all cases we couldn’t get the top menu bar to load with an LDAP user which meant you couldn’t log out, etc.

On Friday I decided to see what I could do about it. Friday was a long day.

eXo is one of those companies that produces an open source version of their software as well as a paid version. My three readers know how I feel about that business model, and it made it kind of frustrating to figure things out since I couldn’t tell if the documentation would actually apply to the “community” version. Also, to access the forums you need to register, which gets you a couple of spam-y e-mails trying to sell you on their paid version. Not too obnoxious, and I can understand why they do it, but it was a little annoying.

It can also be hard to administer. A lot of the configuration is buried in .war files. For example, in order to set the default portal mentioned above, you have to unpack portal.war, change it and repack it. In playing around with the system, I decided that while the LDAP authentication is nice, the platform itself is way overkill for what we need. It is huge: on our system it took several minutes to start up and would often spike the load even with only a few users.
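
For the curious, the repacking dance goes roughly like this. It’s a sketch from memory: the install path is illustrative, and the exact descriptor under WEB-INF/conf that names the default portal varies between eXo versions.

# stop eXo first and keep a backup of the original archive
cd /opt/exo/webapps
cp portal.war portal.war.orig
mkdir /tmp/portal && cd /tmp/portal
unzip -q /opt/exo/webapps/portal.war
# ...edit the descriptor under WEB-INF/conf/ that names the default portal...
zip -qr /opt/exo/webapps/portal.war WEB-INF   # push the changed files back into the .war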

So I spent a lot of time looking for alternatives. Unfortunately, the only option I found that had easy-to-understand LDAP integration was phpBB. When I mentioned that to the team, Jeff threw up in his mouth a little, and I wasn’t too happy about that choice either. I don’t have the same prejudices as some, but I felt that its style was a little dated and there have been some serious security issues associated with it in the past.

But for grins I installed phpBB anyway. It was rather easy to do, which made me happy, but then I noticed that it was not easy to make the forum itself private. Another user story is that “Admin requires that only authorized users see the forum”. You can make certain parts of phpBB private, but I kind of wanted the same thing as eXo – an initial log in screen you have to use before accessing the site.

Then it dawned on me that we could just put it in a directory by itself in the web root, say /forum, and then make a pretty splash page on the site with a link to it. Apache LDAP authentication is something we had already figured out and knew worked, so I could just require a valid login to access /forum.
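
As a rough sketch of what that looks like (the hostname, base DN and group below are made-up placeholders for a FreeIPA setup, and it assumes mod_ldap and mod_authnz_ldap are enabled):

<Directory "/var/www/html/forum">
    AuthType Basic
    AuthName "OpenNMS Connect"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://ipa.example.com/cn=users,cn=accounts,dc=example,dc=com?uid"
    # access is controlled centrally: membership in one IPA group decides who gets in
    Require ldap-group cn=forum-users,cn=groups,cn=accounts,dc=example,dc=com
</Directory>

That “Require ldap-group” line is what satisfies the “Admin needs a central way of controlling forum access” story: add someone to the group in FreeIPA and they can see the forum, remove them and they can’t.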

This caused another lightbulb to go off. If we are going to do it that way, then why not just put any forum we like behind an LDAP authenticated directory?

The downside would be that users would need to create a forum-specific user if they wanted to add content, but on the upside they could choose their own usernames, thus obfuscating their identities for people who work at sensitive organizations. Thus we could have an LDAP user tied to, say, obama@whitehouse.gov and their forum name could be something totally different, like “Hot Cocoa”.

Yes, I know it is dressing up a bug as a feature, but to me it did seem useful.

Then I thought, hey, let’s revisit Discourse. That turned out to be harder than it would seem.

Well, the only way to install Discourse on CentOS is as a Docker container, and at the moment it doesn’t seem to work.

The first time I tried to install it, it died complaining about lack of access to an SMTP server. Nowhere in the instructions did it say you had to modify app.yml and put in a valid mail server. In any case, I did that and restarted the install.
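
For anyone else who hits this, the mail settings live in the env section of the container’s app.yml; something along these lines, where the host, user and password are obviously placeholders:

env:
  DISCOURSE_SMTP_ADDRESS: mail.example.com      # a mail server the container can actually reach
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: discourse@example.com
  DISCOURSE_SMTP_PASSWORD: "changeme"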

At one point during the install process I got this:

-- 0:  unicorn (4.8.3) from
/var/www/discourse/vendor/bundle/ruby/2.0.0/specifications/unicorn-4.8.3.gemspec
Bundle complete! 92 Gemfile dependencies, 189 gems now installed.
Gems in the group development were not installed.
Bundled gems are installed into ./vendor/bundle.

I, [2015-04-04T04:49:47.161747 #38]  INFO -- : > cd /var/www/discourse
&& su discourse -c 'bundle exec rake db:migrate'
2015-04-04 04:49:55 UTC [339-1] discourse@discourse ERROR:  relation "users" does not exist at character 323
2015-04-04 04:49:55 UTC [339-2] discourse@discourse STATEMENT:      SELECT a.attname, format_type(a.atttypid, a.atttypmod),	                     pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod
	                FROM pg_attribute a LEFT JOIN pg_attrdef d
	                  ON a.attrelid = d.adrelid AND a.attnum = d.adnum
	               WHERE a.attrelid = '"users"'::regclass
	                 AND a.attnum > 0 AND NOT a.attisdropped
	               ORDER BY a.attnum

which a Google search says to ignore, but then a little while later the install fails with:

FAILED
--------------------
RuntimeError: cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate' failed with return #
Location of failure: /pups/lib/pups/exec_command.rb:105:in `spawn' exec failed with the params {"cd"=>"$home", "hook"=>"bundle_exec", "cmd"=>["su discourse -c 'bundle install --deployment --verbose --without test --without development'", "su discourse -c 'bundle exec rake db:migrate'", "su discourse -c 'bundle exec rake assets:precompile'"]}
68a9a49f29ad74d9ab042bcaadfb06e02ff526104fefd82039eae1588bbb6e43
FAILED TO BOOTSTRAP

on which Google is much less helpful. No matter what I did I couldn’t get past it.

This kind of brings up an issue I have with Docker. Now let’s get this out of the way: I am jealous of the Docker project. We’ve been around for 15 years and gotten little notice whereas they have become huge in a short time. It would be nice if, say, I could get up to four readers on my blog.

But I really, really, really hated how hidden this whole process was. You install software on your system and then load “magic bits” from the Internet and hope it works. I think this is great on an intranet when you need to deploy lots of the same thing, but without developing it internally first it was a little scary. When it doesn’t work it is incredibly hard to diagnose. Because the app wouldn’t build I couldn’t play with the database or really do anything, so I just uninstalled and reinstalled numerous times trying to fix it.

Plus, by running in a container, we would then need to modify nginx to use our LDAP configuration and that seems to be much harder than with Apache. I didn’t think it would be easy to just forward requests to the Docker instance, but since I couldn’t get it to work I’ll never know.
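
If I ever do get it running, my guess is that the path of least resistance would be to leave the container’s nginx alone and put Apache in front of it, something like the sketch below. It assumes the container is published on local port 8080 and that mod_proxy is enabled; the hostname, port and LDAP details are placeholders.

<VirtualHost *:80>
    ServerName connect.example.org
    # the same LDAP authentication used for /forum, applied to the whole site
    <Location "/">
        AuthType Basic
        AuthName "OpenNMS Connect"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://ipa.example.com/cn=users,cn=accounts,dc=example,dc=com?uid"
        Require valid-user
    </Location>
    # forward everything to the Discourse container
    ProxyPreserveHost On
    ProxyPass        "/" "http://127.0.0.1:8080/"
    ProxyPassReverse "/" "http://127.0.0.1:8080/"
</VirtualHost>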

By this time I said screw it, reinstalled phpBB and went home. It was about 8pm and I had been at it for 11 hours.

Well, I have a mild form of OCD, or maybe it’s just being a geek, but I couldn’t let it rest. So early this morning (as in soon after midnight) I discovered a project called Luna (an active project from the aforementioned Wikipedia page).

Luna is the next iteration of the ModernBB project, which in turn is a fork of FluxBB. It’s simple, does almost everything I could want, and was incredibly easy to install. No Docker containers, no large Java app, just some PHP that you drop in your web root. Plus the web UI is built on Bootstrap, just like OpenNMS.

In about an hour I had it running, had changed the style to match our color palette, and had fixed an issue where jQuery wasn’t getting loaded by copying it down as a local file.

OpenNMS Luna Website

The downside is that it isn’t production-ready yet. I installed 0.7 and earlier this morning they released 0.8. Jesse fixed an issue with the internal mail system, and I have a couple more issues that I’d like to see fixed, but overall I’m very happy with it. They are aiming to release 1.0 on 13 April.

And I really like their attitude and philosophy. They are self-funded and I love Yannick’s tag line of “You Can Do Anything.”

To help that I sent them 100€. (grin)

Anyway, sorry for the long post. I’ll let you know how it goes.

2014 Open Source Monitoring Conference

This year I got to return to the Open Source Monitoring Conference hosted by Netways in Nürnberg, Germany.

Netways is one of the sponsors of the Icinga project, and for many years this conference was dedicated to Nagios. It is still pretty Nagios-centric, but now it is focused more on the forks of that project than the project itself. There were presentations on Naemon and Sensu as well as Icinga, and then there are the weirdos (non-check script oriented applications) such as Zabbix and OpenNMS.

I like this conference for a number of reasons. Mainly there really isn’t any other conference dedicated to monitoring, much less one focused on open source. This one brings together pretty much the whole gang. Plus, Netways has a lot of experience in hosting conferences, so it is a nice time: well organized, good food and lots of discussion.

My trip started off with an ominous text from American Airlines telling me that my flight from RDU to DFW was delayed. While flying through DFW is out of the way, it enables me to avoid Heathrow, which is worth the extra time and effort. On the way to the airport I was told my outbound flight was delayed to the point that I wouldn’t be able to make my connection, so I called the airline to ask about options.

With the acquisition by US Airways, I had the option to fly through CLT. That would cut off several hours of the trip and let me ride on an Airbus 330. American flies mainly Boeing equipment, so I was curious to see if the Airbus was any better.

As usual with flights to Europe, you leave late in the evening and arrive early in the morning. Ulf and I settled in for the flight and I was looking forward to meeting up with Ronny when we landed.

The trip was uneventful and we met up with Ronny and took the ICE train from the airport to Nürnberg. The conference is at the Holiday Inn hotel, and with nearly 300 of us there we kind of take over the place. I did think it was funny that on my first trip there the instructions on how to get to the hotel from the train station were not very direct. I found out the reason was that the most direct route takes you by the red light district and I guess they wanted us to avoid that, although I never felt unsafe wandering around the city.

We arrived mid-afternoon and checked in with Daniela to get our badges and other information. She is one of the people who work hard to make sure all attendees have a great time.

I managed to take a short nap and get settled in, and then we met up for dinner. The food at these events is really nice, and I’m always a fan of German beer.

I excused myself after the meal due in part to jet lag and in part due to the fact that I needed to finish my presentation, and I wanted to be ready for the first real day of the conference.

The conference was started by Bernd Erk, who is sort of the master of ceremonies.

He welcomed us and covered some housekeeping issues. The party that night was to be held at a place called Terminal 90, which is actually at the airport. Last time they tried to use buses, but it became pretty hard to organize, so this time they arranged for us to take public transportation via the U-Bahn. After the introduction we then broke into two tracks and I decided to stay to hear Kris Buytaert.

I’ve known Kris through his blog for years now, but this was the first time I got to see him in person. He is probably most famous in my circles for introducing the hashtag #monitoringsucks. Since I use OpenNMS I don’t really agree, but he does raise a number of issues that make monitoring difficult, and he covered some of the methods he uses to address them.

The rest of the day saw a number of good presentations. As this conference has a large number of Germans in attendance, a little less than half of the talks are given in German, but there was always an English language track running at the same time.

One of my favorite talks from the first day was on MQTT, a protocol for monitoring the Internet of Things. It addresses how to deal with devices that might not always be on-line, and was demonstrated via software running on a Raspberry Pi. I especially liked the idea of a “last will and testament” which describes how the device should be treated if it goes offline. I’m certain we’ll be incorporating MQTT into OpenNMS in the future.
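
You can play with the “last will and testament” idea using the stock Mosquitto command line clients; the broker address and topic names below are just placeholders:

# simulate a device: stay connected with a "last will" registered at the broker
mosquitto_sub -h broker.example.org -t sensors/pi1/command \
  --will-topic sensors/pi1/status --will-payload "offline" --will-retain

# kill that process ungracefully (kill -9) and the broker publishes "offline"
# to the status topic on the device's behalf, which is what a watcher sees:
mosquitto_sub -h broker.example.org -t sensors/pi1/status -v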

Ronny and I missed the subway trip to the restaurant because I discovered a bug in my presentation configuration and it took me a little while to correct it, but I managed to get it done and we just grabbed a taxi. Even though it was at the airport, it was a nice venue and we caught up with Kris and my friend Rihards Olups from Zabbix. I first met Rihards at this conference several years ago, and he brought me a couple of presents from Latvia (he lives near Riga). I still have the magnet on my office door.

Ulf, however, wasn’t as pleased to meet them.

We had a lot of fun eating, drinking and talking. The food was good and the staff was attentive. Ulf was much happier with our waitress (so was Ronny):

While I had to call it an early night because my presentation was the first one on Thursday, a lot of people didn’t. After the restaurant closed they moved on to “Checkpoint Jenny”, which was right across the street (and under my window) from the hotel. Some were up until 6am.

Needless to say, the crowds were a little lighter for my talk. I think it went well, but next year I might focus more on why you might want to move away from check scripts to something a little more scalable. I did a really cool demo (well, in my mind) about sending events into OpenNMS to monitor the status of scripts running on remote servers, but it probably was hard to understand from a Nagios point of view.

Both Rihards and Kris made it to my talk, and Rihards once again brought gifts. I got a lot of tasty Latvian candy (which is now at the office, since my wife ordered me to get it out of the house so it wouldn’t all get eaten) as well as a bottle of Black Balsam, a liqueur local to the region.

Rihards spoke after lunch, and most people were mobile by then. I enjoyed his talk and was very impressed to learn that every version of the remote proxy ever written for Zabbix is still supported.

I had to head back to Frankfurt that evening so I could fly home on Friday (my father celebrated his 75th birthday and I didn’t want to miss it) but we did find time to get together for a beer before I left. It was cool to have people from so many different monitoring projects brought together through a love of open source.

Next year the conference is from 16-18 November. I plan to attend and I hope to spend more time in Germany that trip than I had available to me this one.

Test Driven Development

One of the things that bothers me a lot about the software industry is this idea that proprietary software is somehow safer and better written than open source software. Perhaps it is because a lot of people still view software as “magic” and since you can’t see the code, it must be more “magical”. Or perhaps it is because people assume that something you have to pay for must be better than something that is free.

I’ve worked for and with a number of proprietary software companies, so I’ve seen how the sausage is made, and in some cases you don’t want to know. Don’t get me wrong, I’ve seen well managed commercial software companies that produce solid code because in the long run solid code is better and costs less, but I’ve also seen the opposite done simply to get a product to market quickly.

With open source, at least if you expect contribution, you have to produce code that is readable. It also helps if it is well written since good programmers respect and like working with other good programmers. It’s out there for everyone to see, and that puts extra demands on its quality.

In the interest of making great code, many years ago we switched to the Spring framework, which had the benefit that we could start writing software tests. This test-driven development is one reason OpenNMS is able to stay so stable with lots of code changes and a small test team.

What’s funny is that we’ve talked to at least two other companies who started implementing test driven development but then dropped it because it was too hard. It wasn’t easy for us, either, but as of this writing we run 5496 tests every time something changes in the main OpenNMS application, and that doesn’t include all of the other branches and projects such as Newts. We use the Bamboo product from Atlassian to manage the tests so I want to take this opportunity to thank them for supporting us.

OpenNMS 14 contained some of the biggest code changes in the platform’s history, but so far it has been one of the smoothest releases yet. While most of that was due to the great team of developers we have, part of it was due to the transparency that the open source process encourages.

Commercial software could learn a thing or two from it.

Announcing OpenNMS 14 and Newts 1.0

It is with great pleasure that I can announce the release of OpenNMS 14. Yup, you heard right, OpenNMS *fourteen*.

It’s been more than 12 years since OpenNMS 1.0, so we’ve decided to pull a Java and drop the “1.” from the version numbers. Also, we are doing away with stable and development branches. The master branch has been replaced with the develop branch, which will be much more stable than development releases have been in the past, and we’ll name the next major stable release 15, followed by 16, etc. Do expect bug fix point releases as in the past, but the plan is to release more than one major release per year.

A good overview of all the new features in 14 can be found here:

https://github.com/OpenNMS/opennms/blob/release-14.0.0/WHATSNEW.md

The development team has been working almost non-stop over the last two months to make OpenNMS 14 the best and most tested version yet. A lot of things have been added, such as new topology and geographic maps, and some big things have been made better, such as linkd. Plus, oodles of little bugs have finally been closed, making the whole release seem more polished and easier to use.

Today we also released Newts 1.0, the first release in a new time series data storage library. Published under the Apache License, this technology is built on Cassandra and is aimed at meeting Big Data and Internet of Things needs by providing fast, hugely scalable and redundant data storage. You can find out more about this technology here:

http://newts.io

While not yet integrated with OpenNMS, the 1.0 release is the first step in the process. Users will have the option to replace the JRobin/RRDtool storage strategies with Newts. Since Newts stores raw data, there will be a number of options for post-processing and graphing that data that I know a number of you will find useful. Whether your data needs are simple or complex, Newts represents a way to meet them.

Feel free to check out both projects. OpenNMS 14 should be in both the yum and apt repos, and as usual I welcome feedback as to what you think about it.

OpenNMS Newts at ApacheCon Europe

Being Hungarian, I am very jealous and yet still proud that our very own Eric Evans will be presenting at ApacheCon Europe in Budapest, Hungary.

He will be talking about Newts which is a new time series data store built on top of Apache Cassandra. It will be a key part of positioning OpenNMS for the Internet of Things as well as being very useful on its own.

Eric is a dynamic and interesting speaker, so if you are attending the conference be sure to check out his talk.

And while you are there, eat a Túró Rudi or three for me.

STUIv2: Focus on the Network

One of the things that really makes me angry is when critics of open source claim that open source doesn’t innovate, despite the fact that the very business model is incredibly innovative and probably the most disruptive thing to happen to the software industry since its inception.

Another example of innovation is in the new network visualization (mapping) software coming in the next release of OpenNMS.

I have been a vocal critic of maps for years. It stems from a time when I was working at a client during the first Internet bubble and my job was pretty much to spend several hours a day moving icons into container objects on the OpenView map. It was mind-numbingly dull work that returned little value. Most experienced network and systems managers move away from maps early on, but often the bosses who tend to make the buying decisions demand it as part of any solution.

Now, I’ve seen “cool” maps so it’s not that maps aren’t cool, it’s just that they tend to require more work to make cool than they save by being useful.

That is about to change with the new OpenNMS Semantic Topology User Interface (STUI).

Before I talk about that, I should mention that OpenNMS has a map. In fact it has a number of them. The first one was built for the Carabinieri in Italy, who liked OpenNMS but wanted it to have something like OpenView’s map. It’s now called the “SVG” map, and it does its job well, as well as any map of that type can.

Then when we built the remote poller we needed a way to represent the pollers’ location geographically, and thus the “distributed” map was born. People liked the geographical representation, so we made it available to all nodes and not just remote pollers with the “geographical” map.

None of this work was really innovative, map-wise. But we started to depart from the norm with the topology map introduced in 1.12.

The topology map was novel in that it lets the user determine the topology to view. By default OpenNMS ships with two different topology APIs. One is based on Layer 2 connections discovered by the “linkd” process, and the other is based on VMware data showing the relationship between a host machine and its guest operating systems, as well as any network attached storage.

But it doesn’t have to stop there. In JunOS Space, Juniper is able to show connection data through all of its devices by using the API. Any other source of topology data and business intelligence can be added to the OpenNMS system.

However, me, the map hater, still wasn’t sold. While it is fine for smaller networks, what happens when you enter into the realm of tens of thousands of devices? We eventually see OpenNMS as being the platform for managing the Internet of Things, and any type of map we create will have to scale to huge numbers of devices.

Thus the team created the new topology map (STUIv2), available in 1.13 and coming in the next stable OpenNMS release. The key to this implementation is that you can add and remove “focus” from the map. This lets you quickly zoom in to the area of the map that is actually of interest, and then you can navigate around it quickly to both understand network outages as well as to see their impact.

While I like words, it’s probably better if you just check out the video that David created. It’s 20 minutes long and the first ten minutes cover “what has gone before” so if you are pressed for time, jump to the ten minute mark and follow it from there.

I like the fact that the video shows you the workflow from the main UI to the map, but then shows you how you can manage things from the map back to the main UI.

Note that I had nothing to do with this map. I often say that my only true talent is attracting amazing people to work with me, and this just drives that point home.

While I’m still not sold on maps, I am warming up to this one. I got goose bumps around minute 16:45 and then again at 17:30.

It’s great, innovative work and I’m excited to see what the community will do with this new tool.

Nagios News

My friend Alex in Norway sent me a link to a Slashdot story about the Nagios plugin site being taken over by Ethan Galstad’s company Nagios Enterprises. From what I’ve read about the incident, it definitely sounds like it could have been handled better, and it points out one of the main flaws with the “fauxpensource” business strategy.

I assume that at least two of my three readers are familiar with Nagios, but for the one who isn’t, Nagios is one of the most popular tools for monitoring servers, and it has been around just as long as OpenNMS (the NetSaint project, the original name of Nagios, was registered on Sourceforge in January of 2000 while OpenNMS joined in March of that year). Its popularity is mainly based on how easy it is to extend its functionality through user-written scripts, or “plugins”, plus it is written in C, which made it much easier to include in Linux distributions. A quick Google search on “nagios” just returned 1.7 million hits versus 400K for “opennms”.
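
The plugin contract itself is about as simple as it gets, which is a big part of the appeal: print one line of status and exit with 0 for OK, 1 for WARNING, 2 for CRITICAL and 3 for UNKNOWN. A toy example (the thresholds are arbitrary, and this is just to illustrate the convention, not a plugin you’d actually ship):

#!/bin/sh
# check_load_simple: report on the one-minute load average
LOAD=$(awk '{print $1}' /proc/loadavg)
if awk -v l="$LOAD" 'BEGIN { exit !(l > 8) }'; then
  echo "CRITICAL - load is $LOAD"; exit 2
elif awk -v l="$LOAD" 'BEGIN { exit !(l > 4) }'; then
  echo "WARNING - load is $LOAD"; exit 1
else
  echo "OK - load is $LOAD"; exit 0
fi
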
A few years ago Ethan decided to adopt a business model where he would hold back some of the code from the “core” project and he would charge people a commercial licensing fee to access that code (Nagios XI). If your business plan is based on a commercial software model, then your motivations toward your open source community change. In fact, that community can change, with projects such as Icinga deciding to fork Nagios rather than to continue to work within the Nagios project framework.

Enter the Nagios Plugins site.

To say that Nagios Plugins was instrumental to the success of the Nagios application would be an understatement. Back when I tracked such things, there were way more contributions of both new plugins and updates to existing plugins on that project than were given to the Nagios code itself. The plugins community is why Nagios is so popular in the first place, and it seems like they deserve some recognition for that effort.

Trademark issues within open source projects are always tricky. Over a decade ago a company in California started producing “OpenNMS for Mac”. Even though we had OpenNMS on OS X available through the fink project, it required a version of Java that wasn’t generally available to Mac users (just those in the developer program). However, that version was required to allow OpenNMS to actually work at scale, but this company decided to remove all of the code that depended on it and to release their own version. Unfortunately, they called it “OpenNMS”, which could cause a lot of market confusion. Suppose a reviewer tested that program, found it didn’t scale, and decided to pan the whole application. It would have a negative impact on the OpenNMS brand. After numerous attempts to explain this to the man responsible for the fork, I had to hire a lawyer to send a cease and desist order to get him to stop. It was not a happy experience for me. When you give your software away for free, the brand is your intellectual property and you need to protect it.

So I can understand Ethan’s desire to control the Nagios name (which is actually a little ironic since the switch from NetSaint to Nagios was done for similar reasons). He has a commercial software company to run and this site might lead people to check out the open source alternatives available. Since they are based on his product, the learning curve is not very steep and thus the cost to switch is low, and that could have a dramatic impact on his business plan and revenues.

At OpenNMS we treat our community differently. We license the OpenNMS trademark to the OpenNMS Foundation, an independent organization based in Europe that is responsible both for the annual Users Conference (coming to the UK in April) and for creating the “Ask OpenNMS” site to provide a forum for the community to provide support to each other. They own their own domains and their own servers, and outside of a small initial contribution from the OpenNMS Group, they are self funded. Last year’s conference was awesome – the weather notwithstanding (it was cold and it snowed).

OpenNMS is different from Nagios in other key ways. At its heart, Nagios is a script management tool. The user plugins are great, but they don’t really scale. Almost all of the OpenNMS “checks” are integrated into the OpenNMS code and controlled via configuration files which gives users the flexibility of a plugin but much greater performance. For those functions that can’t be handled within OpenNMS, we teach in our training classes how to use the Net-SNMP “extend” function to provide secure, remote program execution that can scale. OpenNMS is a management application platform that allows enterprises and carriers to develop their own, highly custom, management solution, but at the cost of a higher learning curve than products such as Nagios.
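
As a sketch of what that looks like (the RAID check script here is hypothetical), a single line in snmpd.conf on the remote host exposes a script’s output and exit code over SNMP, where OpenNMS can then poll it:

# /etc/snmp/snmpd.conf on the monitored machine
extend raid /usr/local/bin/check_raid.sh

# from the OpenNMS server, the results are visible in the NET-SNMP-EXTEND-MIB tables
snmpwalk -v2c -c public remote-host NET-SNMP-EXTEND-MIB::nsExtendOutput1Line
snmpwalk -v2c -c public remote-host NET-SNMP-EXTEND-MIB::nsExtendResult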

Now that doesn’t make OpenNMS “better” than Nagios – it just makes it better for certain users and not for others (usually based on size). The best management solution is the one that works for you, and luckily for Nagios users there are a plethora of similar products to choose from which use the same plugins – which I believe is at the heart of this whole kerfuffle.

The part of the story that bothers me the most is the line “large parts of our web site were copied”. If this is true, that is unfortunate, and could result in a copyright claim from the plugins site.

To me, open source is a meritocracy – the person who does the work gets the recognition. It seems like the Nagios Plugins community has done a lot of work and now some of that recognition is being taken from them. That is the main injustice here.

It looks like they have it in hand with the new “Monitoring Plugins” site. Be sure to update your bookmarks and mailing list subscriptions, and lend your support to the projects that support you the most.

Goodbye Cyanogenmod, I'll miss you

It is with some disappointment that I read of Cyanogenmod’s descent into fauxpensource. Not only does it appear that they are doing everything they can to ruin any credibility with their community, it also means that I need to find a new operating system for my Android devices.

For those who don’t know, Cyanogenmod is (or rather, was) a very popular implementation of the Android Open Source Project (AOSP). Basically, it is a recompiled version of the software Google and others distribute with their phones, but the aim of AOSP is to be as open source as possible (i.e. without a lot of proprietary add-ons). If you were to buy, say, a Google Nexus 4 and a Samsung S4, both Android phones, you would find that the user interfaces on the two are radically different.

The reason is that it is rare for a company to want to sell commodity products. If the software on Android devices were the same across them all, price becomes the main differentiator. If you are a device maker aiming to get the same margins that Apple is able to demand for its products, then you want to add something unique that isn’t available elsewhere, and it is hard to do that under the open source model. Also, the traditional way to offset costs is through deals to bundle other products into your offering. Does anyone here remember buying a retail computer with Windows installed on it? Usually the desktop would arrive full of pre-installed software, or “crapware”, that the vendors paid to have ship with the product. This happened when I bought my Galaxy S3. I tried to remove all of the cruft, such as the Yellowpages app, only to have the operating system tell me that it was a “critical” system app and couldn’t be removed.

So, within two hours of getting my phone I had root access and installed Cyanogenmod.

Now, I have struggled for over a decade to balance the desire to create free and open source software with the need to make money. I can understand the pressures that the Cyanogenmod team must have felt watching their buddies at commercial software companies making large salaries with a decent amount of job security while they toiled along with no real business model. I, too, have heard the siren song of Venture Capitalists who believe that all you need to make a lot of money is to offer some sort of “enterprise” or commercial version of your open source project.

Most of them are wrong.

I was in a meeting with a VC a few weeks ago when this came up. Now you have to realize that there has only been one “Valley-grade” success story with open source (well, that still exists as a private company), and that is Red Hat. However, most in the Valley don’t view it as a success, and I think that is mainly because it wasn’t a Valley deal. The first thing the VCs will say is that Red Hat is too small – it’s not a “real” success – when the fact is that they have a market capitalization similar to Juniper Networks (about US$10 billion). The second thing is that they’ll point out that Red Hat has “an enterprise version”. This is also not true. Red Hat sells time, just like we at OpenNMS do, through support and ease of use. If I want to, I can buy that access, take the product, remove all of the trademarked information and create an open source, feature for feature copy. This is exactly what CentOS does and why I call the measure of whether or not a company’s products are truly open source the “CentOS Test”. The main reason that the Valley has been unable to duplicate Red Hat’s success is that they always undermine it with some sort of commercial software component that removes the reason people would use it in the first place.

Take Eucalyptus for example. They tout themselves as an “open source” cloud solution, but the barriers they erected with their commercial offerings caused the creation of OpenStack – a truly open source solution that in just a few years has easily eclipsed their product. In that same VC meeting the guy asked “yeah, you’re open source, but what is the ‘secret sauce’?”. Well, the “secret sauce” is the fact that OpenNMS is open source. If I were to screw with that we’d stop being a market leader and just become one of many hundreds of commercial offerings, despite any features that make us better than them.

“But,” the open core people will exclaim, “we need to make money.”

One way to make money is to dual-license an open source project. In order to do that, one must own 100% of the copyright. This brings us to the contentious topic of copyright assignment, and Cyanogenmod seems embroiled in this issue at the moment.

I think it was MySQL that pioneered this idea. Their argument was “Sure, you can contribute to the project, but we need you to assign the copyright to the code you wrote to us. Thus, we can offer it under a license like the GPL, but if you want to pay us you can use it under another license.”

In theory this is a great idea, but there are two flaws. The first is that, as a programmer, if I were to create some code and then give away my copyright, then I no longer own what I wrote. Imagine that you wrote some code for MySQL, and, I don’t know, the company gets acquired by, say, Oracle, and you decide you’d like to work on that code for MariaDB. You can’t. You gave it away. You no longer own it.

The second flaw is that when a company makes a commercial offering, the pressure is on to add more stuff to it and leave it out of the “free” version. MySQL started down this path with offering new versions to commercial customers six months or so before releasing them under an open source license, then six months became a year, and then became never. This is exactly how Cyanogenmod hopes to pay back that $7 million investment by requiring device manufacturers to pay for features that they plan to keep out of the open source version.

OpenNMS, I think, has avoided these two traps. First, we do require copyright assignment. One main reason is that we need to be able to defend OpenNMS from people who would try to steal it. This happened a few years ago when a company was using our code in violation of the GPL. When we started legal action to make them stop, their defense was that “if” they were stealing the code, they were stealing from OpenNMS 1.0 (for which we didn’t own the copyright at the time) and thus we couldn’t defend it. David Hustace and I mortgaged our houses to acquire that copyright and were able to bring the existing OpenNMS code under one copyright holder.

The next problem to solve was future contributions. Instead of unilaterally declaring that we get sole copyright to all contributions, we actually bothered to ask our community for suggestions. DJ Gregor pointed out the Sun Contributors Agreement (now the Oracle Contributors Agreement) which introduced “dual copyright” to the software industry. In much the same way two authors can share copyright on a book, it is possible for a code author to contribute the copyright to their code to a project while retaining the rights as well. We adopted this for OpenNMS and everyone seems to be pretty happy with it.

Now the second issue, that of a dual license, is harder to address. In the case of OpenNMS it comes down to trust. Trust is very important in the open source world. When I install a pre-compiled binary I am trusting that the person who compiled it didn’t do anything evil. Mark Shuttleworth came under fire for implying that Canonical “had root” in response to some questions about Ubuntu and privacy. While the statement was a little harsh in light of the valid concerns of the community, it was also true. We, as Ubuntu users, trust Canonical not to put in any sort of backdoor into their binaries. The difference between that and commercial software, however, is that it can be verified and I have the option of compiling the code myself.

At OpenNMS we promised the community that 100% of the OpenNMS application would always be available under an open source license, and we have kept that promise. In fact, when Juniper (one of our “Powered by OpenNMS” customers) licensed the code, all the additional work they contract from us ended up in OpenNMS as well (you can actually see the code we are working on in feature branches in our git repository). This is a great way to make money and advance the project as it can be used to pay for some of the development.

This is not a plan that Cyanogenmod plans to follow, if the experience of Guillaume Lesniak is any indication.

The only reason I was interested in Cyanogenmod was the fact that it was open source. Now, the beauty of it is that open source almost always offers options. Bradley Kuhn, a person I consider a friend and whose blog post pushed my button to write this in the first place, offers up Replicant as an alternative. I hadn’t looked at that project in a while and it seems to be coming along nicely, with a lot of newly supported devices. Unfortunately my AT&T S3 isn’t one of them (they only support the international version), so I’m looking to switch to AOKP as soon as I can find the time.

It will be interesting to revisit Cyanogenmod in a year. My guess is that anyone not employed by Cyanogenmod, Inc. will flee to other projects, and Cyanogenmod, instead of being the go-to AOSP alternative, will fade into just another commercial offering. It is doubtful that Samsung will license it, since they pride themselves on in-house expertise, and Google is, well, Google. With the exception of HTC, no one else has any market share.

But, what do I know, right?

Mint with a Dash of Cinnamon

Since switching to using Linux as my primary desktop, I’m always curious as to what options are available to me besides my default go-to distro of Ubuntu.

While Ubuntu 12.04 (the LTS version) is one of the best desktop operating systems I’ve ever used, I’ve grown less enchanted with each subsequent release since then. Part of it comes from some of the choices made by the Ubuntu team (such as the tight integration with Amazon) and I can work around most of those, but I’ve had numerous stability issues with Unity that didn’t really exist in the older releases.

When Debian wheezy came out, I decided to give it a shot as a desktop operating system. I’ve used Debian as a server O/S for over a decade, but the main things that make it great for servers, the cautious nature of changes and the inherent stability, kind of suck for the desktop. I’ve discussed this with Eric, who is both a Debian user and a Debian committer, and his reply is to ask if you really need umpteen updates to Firefox, etc. I can see his point, but if I’m using, say, Gnome, having access to the latest release can have a huge impact on the user experience.

So I didn’t like wheezy as a desktop, but before going back to Ubuntu I decided to check out Fedora. It does support Gnome 3.8, but I ran into another issue that affects almost all distros outside of Ubuntu, which is the ability to easily encrypt one’s home directory.

Ubuntu, in the install process, lets you choose to encrypt your home directory. While I’m a firm believer in xkcd’s interpretation of what would happen in the case of someone wanting access to my data, I still like to take some precautions.

I don’t like whole disk encryption for a couple of reasons, namely the possibility of a performance hit but mainly the fact that I can’t remotely reboot my computer without having someone at the keyboard typing in a passphrase. I figure encrypting /home is a good compromise, especially since the key can be tied to the user’s login via pam.
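
On Ubuntu that compromise is implemented with eCryptfs, and you can set up roughly the same thing by hand on other distros; something like the following, where the username is a placeholder and package names may differ by distro:

sudo apt-get install ecryptfs-utils
sudo ecryptfs-migrate-home -u tarus      # encrypt an existing home directory
# log in as that user before rebooting to finish the migration, then record
# the randomly generated recovery passphrase somewhere safe:
ecryptfs-unwrap-passphrase

The pam_ecryptfs module is what ties the mount to the login password, which is the part that makes the whole thing feel seamless.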

I tried to get this to work on wheezy, but I found the performance was spotty and sometimes I’d log in only to find nothing in my home directory. I didn’t spend too much time on it, since I was eager to use Gnome 3.8, but was disappointed to find that Fedora didn’t allow one to easily encrypt their home directory either.

Before giving up, I decided to take a shot at Arch Linux. I’ve been hearing wonderful things about this distro at conferences, but the installation process taxed even me. It is seriously bare-bones, but that is supposed to be part of the appeal. The philosophy around Arch is to create a distro with just the things you, the user, want, and with access to the latest, greatest and, supposedly, most stable code.

It appealed to me as a great compromise between Debian and getting the latest shiny, but I couldn’t get it installed. You end up having to create your own fstab and somehow the UUIDs got screwed up and it wouldn’t boot. It also didn’t support the encryption of the home directory as an option out of the box, but I was willing to try to create it as I did under Debian if I could get it up and running. I don’t think it was impossible for me to get working; I simply ran out of play time and decided to try Arch another day.
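
For what it’s worth, the Arch install media ships a helper that is supposed to generate the fstab for you; the intended flow is roughly this, assuming the new partitions are mounted under /mnt:

genfstab -U /mnt >> /mnt/etc/fstab   # -U writes UUIDs instead of device names
blkid                                # worth cross-checking the UUIDs before rebooting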

On my way back to Ubuntu I decided to try one more distro, Linux Mint. I never made it back to Ubuntu.

Linux Mint 15 is a fork of Ubuntu 13.04. It removes some of the choices made by the Ubuntu team that raise the hackles of privacy advocates, and it introduces its own desktop manager called Cinnamon.

I quite like it.

I can’t really say what I like about it. It’s pretty, with the exception of the default desktop background (seriously Mint, yeah I know there’s history there but, sheesh) which is easily changed. The Terminal theme is one of the nicest I’ve used. There’s a pop up menu like Gnome 3, but then there’s these little dashlet thingies that let you launch things quickly, and a notifications system that is easy to access without getting in the way.

Running applications and open windows show up in a bar, like Gnome 2 or Windows, but I don’t find myself using that all that much. It is pretty easy to customize the whole thing, such as changing the location of things as well as setting hot corners.

There are a couple of issues. The menu doesn’t seem to index everything like the Dash in Unity, and I had gotten used to just typing in a few characters of a file name in order to access it. It does seem to remember files you use, so once you have accessed a particular file you can find it via the menu, but it does impact workflow not knowing if it will show up or not. The other issue is that it is still bound to Ubuntu, so they have some common bugs.

For example, I use the screenshot app a lot. Under Ubuntu 12.04, when I’d take a screenshot a dialog would appear asking me to save it. A suggested filename, based on timestamp, would be highlighted followed by the .png extension. I could just start typing and it would replace the highlighted text with what I had typed. That got broken in 12.10, so I’d have to reselect the text in order to set the filename. Not a big deal, but a little bit of a pain.

When I switched to Mint, it had the same issue. Note: in the last day or so it seems to have been fixed, since I am not seeing it as of today.

Of course, you get a lot of the Ubuntu-y goodness such as encrypted home directories out of the box with Mint, but Mint may end up being on the winning side of the Wayland vs. Mir argument, since Cinnamon isn’t tied to Mir (or Wayland for that matter).

For those of my three readers with a life, you may not be aware of either of those projects. Basically, for decades the control of graphical displays on most computer screens has been based on a protocol called X11. Under Linux that implementation is currently managed by the X.Org project, a fork of the XFree86 project that was the Linux standard for many years. The next generation display server arising out of X.Org (well, at least out of many of its developers) is called Wayland, and in the next few years one can expect it to become the default display server for most Linux distros.

Ubuntu, however, has decided to go in a different direction by launching its own project called Mir. I believe this is mainly because their goal of having a unified user interface across desktop, tablet and phone devices may not be easy to meet under Wayland. My very elementary understanding of Mir is that it allows the whole display space to be managed like one big window – easy to resize under the different screen resolutions of various devices – which differs from Wayland, but I could be making that whole part up.

I’m a huge fan of Ubuntu and I believe that those that do the work get to make the decisions, but I also believe that Wayland will have a much larger adoption base, ergo more users and developers, and will thus be more stable and more feature-rich. My own experiences with Unity’s stability on later releases indicate a trend that the first Mir releases will have some issues, and I’ve decided that I’d rather stick with something else.

For the time being that seems to be Mint with Cinnamon. Not only can I get work done using it, the underlying Ubuntu infrastructure means that I can get drivers for my laptop and still play Steam games. I still run Ubuntu 12.04 on my home desktop and laptop, but that is mainly due to lack of time to change over to Mint.

So, if you are looking for a solid Linux desktop experience, check out Mint. I am still amazed at what the free software community gifts me with every day, so my desktop of choice may change in the future, and I’ll be sure to let you know if I find anything better.

This Is Your Brain on Open Source

Last week I had to get a CT scan of my head. I asked for a copy and the hospital gave it to me on a disk.

When I mounted it on my Ubuntu desktop and tried to open the image I got an error “No Application for Opening DICOM Images”.

But what I loved was that it offered to find a program that could. Within a few minutes I had Ginkgo CADx installed and was looking at my skull.

Open Source FTW.