2014 Open Source Monitoring Conference

This year I got to return to the Open Source Monitoring Conference hosted by Netways in Nürnberg, Germany.

Netways is one of the sponsors of the Icinga project, and for many years this conference was dedicated to Nagios. It is still pretty Nagios-centric, but now it is focused more on the forks of that project than the project itself. There were presentations on Naemon and Sensu as well as Icinga, and then there are the weirdos (non-check script oriented applications) such as Zabbix and OpenNMS.

I like this conference for a number of reasons. Mainly there really isn’t any other conference dedicated to monitoring, much less one focused on open source. This one brings together pretty much the whole gang. Plus, Netways has a lot of experience in hosting conferences, so it is a nice time: well organized, good food and lots of discussion.

My trip started off with an ominous text from American Airlines telling me that my flight from RDU to DFW was delayed. While flying through DFW is out of the way, it enables me to avoid Heathrow, which is worth the extra time and effort. On the way to the airport I was told my outbound flight was delayed to the point that I wouldn’t be able to make my connection, so I called the airline to ask about options.

With the acquisition by US Airways, I had the option to fly through CLT. That would cut several hours off the trip and let me ride on an Airbus A330. American flies mainly Boeing equipment, so I was curious to see if the Airbus was any better.

As usual with flights to Europe, you leave late in the evening and arrive early in the morning. Ulf and I settled in for the flight and I was looking forward to meeting up with Ronny when we landed.

The trip was uneventful and we met up with Ronny and took the ICE train from the airport to Nürnberg. The conference is at the Holiday Inn hotel, and with nearly 300 of us there we kind of take over the place. I did think it was funny that on my first trip there the instructions on how to get to the hotel from the train station were not very direct. I found out the reason was that the most direct route takes you by the red light district and I guess they wanted us to avoid that, although I never felt unsafe wandering around the city.

We arrived mid-afternoon and checked in with Daniela to get our badges and other information. She is one of the people who work hard to make sure all attendees have a great time.

I managed to take a short nap and get settled in, and then we met up for dinner. The food at these events is really nice, and I’m always a fan of German beer.

I excused myself after the meal, due in part to jet lag and in part to the fact that I needed to finish my presentation; I wanted to be ready for the first real day of the conference.

The conference was started by Bernd Erk, who is sort of the master of ceremonies.

He welcomed us and covered some housekeeping issues. The party that night was to be held at a place called Terminal 90, which is actually at the airport. Last time they tried to use buses, but it became pretty hard to organize, so this time they arranged for us to take public transportation via the U-Bahn. After the introduction we then broke into two tracks and I decided to stay to hear Kris Buytaert.

I’ve known Kris through his blog for years now, but this was the first time I got to see him in person. He is probably most famous in my circles for introducing the hashtag #monitoringsucks. Since I use OpenNMS I don’t really agree, but he does raise a number of issues that make monitoring difficult, and he covered some of the methods he uses to address them.

The rest of the day saw a number of good presentations. As this conference has a large number of Germans in attendance, a little less than half of the talks were given in German, but there was always an English-language track running at the same time.

One of my favorite talks from the first day was on MQTT, a lightweight publish/subscribe messaging protocol well suited to the Internet of Things. It addresses how to deal with devices that might not always be on-line, and was demonstrated via software running on a Raspberry Pi. I especially liked the idea of a “last will and testament”: a message the broker publishes on a device’s behalf if that device drops off-line unexpectedly. I’m certain we’ll be incorporating MQTT into OpenNMS in the future.
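For the curious, here is roughly what the “last will and testament” looks like in code. This is just a minimal sketch using the Eclipse Paho Java client; the broker address, client ID and topic names are all made up for illustration.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;

public class LwtExample {
    public static void main(String[] args) throws MqttException {
        // Hypothetical broker URL and client ID, for illustration only.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "sensor-42");

        MqttConnectOptions options = new MqttConnectOptions();
        // The "last will and testament": if this client disappears without
        // a clean disconnect, the broker publishes this message for it.
        options.setWill("sensors/sensor-42/status",
                        "offline".getBytes(), // payload
                        1,                    // QoS 1: deliver at least once
                        true);                // retained, so late subscribers see it

        client.connect(options);

        // Announce that we are alive; a monitoring tool subscribing to the
        // status topic treats the retained "offline" will as an outage.
        client.publish("sensors/sensor-42/status",
                       "online".getBytes(), 1, true);

        // Note: a clean disconnect does NOT trigger the will.
        client.disconnect();
    }
}
```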

Ronny and I missed the subway trip to the restaurant because I discovered a bug in my presentation configuration and it took me a little while to correct it, but I managed to get it done and we just grabbed a taxi. Even though it was at the airport, it was a nice venue and we caught up with Kris and my friend Rihards Olups from Zabbix. I first met Rihards at this conference several years ago, and he brought me a couple of presents from Latvia (he lives near Riga). I still have the magnet on my office door.

Ulf, however, wasn’t as pleased to meet them.

We had a lot of fun eating, drinking and talking. The food was good and the staff was attentive. Ulf was much happier with our waitress (so was Ronny):

I had to call it an early night because my presentation was the first one on Thursday, but a lot of people didn’t. After the restaurant closed they moved to “Checkpoint Jenny” which was right across the street (and under my window) from the hotel. Some were up until 6am.

Needless to say, the crowds were a little lighter for my talk. I think it went well, but next year I might focus more on why you might want to move away from check scripts to something a little more scalable. I did a really cool demo (well, in my mind) about sending events into OpenNMS to monitor the status of scripts running on remote servers, but it probably was hard to understand from a Nagios point of view.

Both Rihards and Kris made it to my talk, and Rihards once again brought gifts. I got a lot of tasty Latvian candy (which is now in the office, my wife ordering me to get it out of the house so it won’t get eaten) as well as a bottle of Black Balsam, a liqueur local to the region.

Rihards spoke after lunch, and most people were mobile by then. I enjoyed his talk and was very impressed to learn that every version of the remote proxy ever written for Zabbix is still supported.

I had to head back to Frankfurt that evening so I could fly home on Friday (my father celebrated his 75th birthday and I didn’t want to miss it) but we did find time to get together for a beer before I left. It was cool to have people from so many different monitoring projects brought together through a love of open source.

Next year the conference is from 16-18 November. I plan to attend and I hope to spend more time in Germany that trip than I had available to me this one.

Test Driven Development

One of the things that bothers me a lot about the software industry is this idea that proprietary software is somehow safer and better written than open source software. Perhaps it is because a lot of people still view software as “magic” and since you can’t see the code, it must be more “magical”. Or perhaps it is because people assume that something you have to pay for must be better than something that is free.

I’ve worked for and with a number of proprietary software companies, so I’ve seen how the sausage is made, and in some cases you don’t want to know. Don’t get me wrong, I’ve seen well managed commercial software companies that produce solid code because in the long run solid code is better and costs less, but I’ve also seen the opposite done simply to get a product to market quickly.

With open source, at least if you expect contribution, you have to produce code that is readable. It also helps if it is well written since good programmers respect and like working with other good programmers. It’s out there for everyone to see, and that puts extra demands on its quality.

In the interest of making great code, many years ago we switched to the Spring framework, which had the benefit that we could start writing software tests. This test-driven development is one reason OpenNMS is able to stay so stable with lots of code changes and a small test team.

What’s funny is that we’ve talked to at least two other companies who started implementing test-driven development but then dropped it because it was too hard. It wasn’t easy for us, either, but as of this writing we run 5496 tests every time something changes in the main OpenNMS application, and that doesn’t include all of the other branches and projects such as Newts. We use the Bamboo product from Atlassian to manage the tests, so I want to take this opportunity to thank them for supporting us.
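To give a flavor of what one of those tests looks like, here is a made-up example in the JUnit style; the formatter under test is hypothetical and not actual OpenNMS code.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class UptimeFormatterTest {

    // A tiny, hypothetical utility of the sort such tests protect:
    // render an uptime in seconds as days/hours/minutes.
    static String format(long seconds) {
        long d = seconds / 86400;
        long h = (seconds % 86400) / 3600;
        long m = (seconds % 3600) / 60;
        return (d > 0 ? d + "d " : "") + (h > 0 ? h + "h " : "") + m + "m";
    }

    @Test
    public void formatsSecondsAsDaysHoursMinutes() {
        // 90060 seconds = 1 day, 1 hour, 1 minute
        assertEquals("1d 1h 1m", format(90060));
    }

    @Test
    public void handlesZero() {
        assertEquals("0m", format(0));
    }
}
```

Run individually a test like this is trivial, but run thousands of them on every commit and regressions get caught before a release ever ships.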

OpenNMS 14 contained some of the biggest code changes in the platform’s history but so far it has been one of the smoothest releases yet. While most of that was due to the great team of developers we have, part of it was due to the transparency that the open source process encourages.

Commercial software could learn a thing or two from it.

Announcing OpenNMS 14 and Newts 1.0

It is with great pleasure that I can announce the release of OpenNMS 14. Yup, you heard right, OpenNMS *fourteen*.

It’s been more than 12 years since OpenNMS 1.0, so we’ve decided to pull a Java and drop the “1.” from the version numbers. Also, we are doing away with stable and development branches. The master branch has been replaced with the develop branch, which will be much more stable than development releases have been in the past, and we’ll name the next major stable release 15, followed by 16, etc. Do expect bug fix point releases as in the past, but the plan is to release more than one major release per year.

A good overview of all the new features in 14 can be found here:

https://github.com/OpenNMS/opennms/blob/release-14.0.0/WHATSNEW.md

The development team has been working almost non-stop over the last two months to make OpenNMS 14 the best and most tested version yet. A lot of things have been added, such as new topology and geographic maps, and some big things have been made better, such as linkd. Plus, oodles of little bugs have finally been closed, making the whole release seem more polished and easier to use.

Today we also released Newts 1.0, the first release of a new time series data storage library. Published under the Apache License, this technology is built on Cassandra and is aimed at meeting Big Data and Internet of Things needs by providing fast, hugely scalable and redundant data storage. You can find out more about this technology here:

http://newts.io

While not yet integrated with OpenNMS, the 1.0 release is the first step in the process. Users will have the option to replace the JRobin/RRDtool storage strategies with Newts. Since Newts stores raw data, there will be a number of options for post-processing and graphing that data that I know a number of you will find useful. Whether your data needs are simple or complex, Newts represents a way to meet them.

Feel free to check out both projects. OpenNMS 14 should be in both the yum and apt repos, and as usual I welcome feedback as to what you think about it.

OpenNMS Newts at ApacheCon Europe

Being Hungarian, I am very jealous and yet still proud that our very own Eric Evans will be presenting at ApacheCon Europe in Budapest, Hungary.

He will be talking about Newts, a new time series data store built on top of Apache Cassandra. It will be a key part of positioning OpenNMS for the Internet of Things as well as being very useful on its own.

Eric is a dynamic and interesting speaker, so if you are attending the conference be sure to check out his talk.

And while you are there, eat a Túró Rudi or three for me.

STUIv2: Focus on the Network

One of the things that really makes me angry is when critics of open source claim that open source doesn’t innovate, despite the fact that the very business model is incredibly innovative and probably the most disruptive thing to happen to the software industry since its inception.

Another example of innovation is in the new network visualization (mapping) software coming in the next release of OpenNMS.

I have been a vocal critic of maps for years. It stems from a time when I was working at a client during the first Internet bubble and my job was pretty much to spend several hours a day moving icons into container objects on the OpenView map. It was mind-numbingly dull work that returned little value. Most experienced network and systems managers move away from maps early on, but often the bosses who tend to make the buying decisions demand it as part of any solution.

Now, I’ve seen “cool” maps so it’s not that maps aren’t cool, it’s just that they tend to require more work to make cool than they save by being useful.

That is about to change with the new OpenNMS Semantic Topology User Interface (STUI).

Before I talk about that, I should mention that OpenNMS has a map. In fact it has a number of them. The first one was built for the Carabinieri in Italy, who liked OpenNMS but wanted it to have something like OpenView’s map. It is now called the “SVG” map, and it does its job well, as well as any map of that type can.

Then when we built the remote poller we needed a way to represent the pollers’ location geographically, and thus the “distributed” map was born. People liked the geographical representation, so we made it available to all nodes and not just remote pollers with the “geographical” map.

None of this work was really innovative, map-wise. But we started to depart from the norm with the topology map introduced in 1.12.

The topology map was novel in that it lets the user determine the topology to view. By default OpenNMS ships with two different topology APIs. One is based on Layer 2 connections discovered by the “linkd” process, and the other is based on VMware data showing the relationship between a host machine and its guest operating systems, as well as any network attached storage.

But it doesn’t have to stop there. Juniper, for example, uses the API in Junos Space to show connection data across all of its devices. Any other source of topology data and business intelligence can be added to the OpenNMS system.

However, I, the map hater, still wasn’t sold. While it is fine for smaller networks, what happens when you enter the realm of tens of thousands of devices? We eventually see OpenNMS as being the platform for managing the Internet of Things, and any type of map we create will have to scale to huge numbers of devices.

Thus the team created the new topology map (STUIv2), available in 1.13 and coming in the next stable OpenNMS release. The key to this implementation is that you can add and remove “focus” from the map. This lets you quickly zoom in to the area of the map that is actually of interest, and then you can navigate around it quickly to both understand network outages as well as to see their impact.

While I like words, it’s probably better if you just check out the video that David created. It’s 20 minutes long and the first ten minutes cover “what has gone before” so if you are pressed for time, jump to the ten minute mark and follow it from there.

I like the fact that the video shows you the workflow from the main UI to the map, but then shows you how you can manage things from the map back to the main UI.

Note that I had nothing to do with this map. I often say that my only true talent is attracting amazing people to work with me, and this just drives that point home.

While I’m still not sold on maps, I am warming up to this one. I got goose bumps around minute 16:45 and then again at 17:30.

It’s great, innovative work and I’m excited to see what the community will do with this new tool.

Nagios News

My friend Alex in Norway sent me a link to a Slashdot story about the Nagios plugin site being taken over by Ethan Galstad’s company Nagios Enterprises. From what I’ve read about the incident, it definitely sounds like it could have been handled better, and it points out one of the main flaws with the “fauxpensource” business strategy.

I assume that at least two of my three readers are familiar with Nagios, but for the one who isn’t, Nagios is one of the most popular tools for monitoring servers, and it has been around just as long as OpenNMS (the NetSaint project, the original name of Nagios, was registered on Sourceforge in January of 2000 while OpenNMS joined in March of that year). Its popularity is mainly based on how easy it is to extend its functionality through the use of user-written scripts, or “plugins”, plus it is written in C, which made it much easier for it to be included in Linux distributions. A quick Google search on “nagios” just returned 1.7 million hits to 400K for “opennms”.

A few years ago Ethan decided to adopt a business model where he would hold back some of the code from the “core” project and charge people a commercial licensing fee to access that code (Nagios XI). If your business plan is based on a commercial software model, then your motivations toward your open source community change. In fact, that community can change, with projects such as Icinga deciding to fork Nagios rather than continue to work within the Nagios project framework.

Enter the Nagios Plugins site.

To say that Nagios Plugins was instrumental to the success of the Nagios application would be an understatement. Back when I tracked such things, there were way more contributions of both new plugins and updates to existing plugins on that project than were given to the Nagios code itself. The plugins community is why Nagios is so popular in the first place, and it seems like they deserve some recognition for that effort.

Trademark issues within open source projects are always tricky. Over a decade ago a company in California started producing “OpenNMS for Mac”. Even though we had OpenNMS on OS X available through the Fink project, it required a version of Java that wasn’t generally available to Mac users (just those in the developer program). That version of Java was required to allow OpenNMS to actually work at scale, but this company decided to remove all of the code that depended on it and to release their own version. Unfortunately, they called it “OpenNMS”, which could cause a lot of market confusion. Suppose a reviewer tested that program, found it didn’t scale, and decided to pan the whole application. It would have a negative impact on the OpenNMS brand. After numerous attempts to explain this to the man responsible for the fork, I had to hire a lawyer to send a cease and desist letter to get him to stop. It was not a happy experience for me. When you give your software away for free, the brand is your intellectual property and you need to protect it.

So I can understand Ethan’s desire to control the Nagios name (which is actually a little ironic since the switch from NetSaint to Nagios was done for similar reasons). He has a commercial software company to run and this site might lead people to check out the open source alternatives available. Since they are based on his product, the learning curve is not very steep and thus the cost to switch is low, and that could have a dramatic impact on his business plan and revenues.

At OpenNMS we treat our community differently. We license the OpenNMS trademark to the OpenNMS Foundation, an independent organization based in Europe that is responsible both for the annual Users Conference (coming to the UK in April) and for creating the “Ask OpenNMS” site, a forum where community members can support each other. They own their own domains and their own servers, and outside of a small initial contribution from the OpenNMS Group, they are self funded. Last year’s conference was awesome – the weather notwithstanding (it was cold and it snowed).

OpenNMS is different from Nagios in other key ways. At its heart, Nagios is a script management tool. The user plugins are great, but they don’t really scale. Almost all of the OpenNMS “checks” are integrated into the OpenNMS code and controlled via configuration files which gives users the flexibility of a plugin but much greater performance. For those functions that can’t be handled within OpenNMS, we teach in our training classes how to use the Net-SNMP “extend” function to provide secure, remote program execution that can scale. OpenNMS is a management application platform that allows enterprises and carriers to develop their own, highly custom, management solution, but at the cost of a higher learning curve than products such as Nagios.
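As an aside, the Net-SNMP “extend” mechanism is simple to set up. A minimal sketch, with a made-up check name and script path, would look something like this in snmpd.conf on the monitored host:

```
# /etc/snmp/snmpd.conf on the monitored host
# Expose the output and exit code of a local script via SNMP.
# "raid-status" and the script path are illustrative examples.
extend raid-status /usr/local/bin/check_raid.sh

# OpenNMS (or any SNMP manager) can then poll the results from
# NET-SNMP-EXTEND-MIB, for example:
#   snmpwalk -v 2c -c public myhost NET-SNMP-EXTEND-MIB::nsExtendOutput1Line
```

The script runs locally under snmpd, so there is no remote shell access to secure, and the polling itself rides over the same SNMP channel you are already using.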

Now that doesn’t make OpenNMS “better” than Nagios – it just makes it better for certain users and not for others (usually based on size). The best management solution is the one that works for you, and luckily for Nagios users there are a plethora of similar products to choose from which use the same plugins – which I believe is at the heart of this whole kerfuffle.

The part of the story that bothers me the most is the line “large parts of our web site were copied”. If this is true, that is unfortunate, and could result in a copyright claim from the plugins site.

To me, open source is a meritocracy – the person who does the work gets the recognition. It seems like the Nagios Plugins community has done a lot of work and now some of that recognition is being taken from them. That is the main injustice here.

It looks like they have it in hand with the new “Monitoring Plugins” site. Be sure to update your bookmarks and mailing list subscriptions, and lend your support to the projects that support you the most.

Goodbye Cyanogenmod, I'll miss you

It is with some disappointment that I read of Cyanogenmod’s descent into fauxpensource. Not only does it appear that they are doing everything they can to ruin any credibility with their community, it also means that I need to find a new operating system for my Android devices.

For those who don’t know, Cyanogenmod is, er, was a very popular implementation of the Android Open Source Project (AOSP). Basically, it is a recompiled version of the software Google and others distribute with their phones, but the aim of AOSP is to be as open source as possible (i.e. without a lot of proprietary add-ons). If you were to buy, say, a Google Nexus 4 and a Samsung S4, both Android phones, you would find that the user interfaces on the two are radically different.

The reason is that it is rare for a company to want to sell commodity products. If the software on Android devices were the same across them all, price would become the main differentiator. If you are a device maker aiming to get the same margins that Apple is able to demand for its products, then you want to add something unique that isn’t available elsewhere, and it is hard to do that under the open source model. Also, the traditional way to offset costs is through deals to bundle other products into your offering. Does anyone here remember buying a retail computer with Windows installed on it? Usually the desktop would arrive full of pre-installed software, or “crapware”, that vendors paid to have shipped with the product. This happened when I bought my Galaxy S3. I tried to remove all of the cruft, such as the Yellowpages app, only to have the operating system tell me that it was a “critical” system app and couldn’t be removed.

So, within two hours of getting my phone I had root access and installed Cyanogenmod.

Now, I have struggled for over a decade to balance the desire to create free and open source software with the need to make money. I can understand the pressures that the Cyanogenmod team must have felt watching their buddies at commercial software companies making large salaries with a decent amount of job security while they toiled along with no real business model. I, too, have heard the siren song of Venture Capitalists who believe that all you need to make a lot of money is to offer some sort of “enterprise” or commercial version of your open source project.

Most of them are wrong.

I was in a meeting with a VC a few weeks ago when this came up. Now you have to realize that there has only been one “Valley-grade” success story with open source (well, that still exists as a private company), and that is Red Hat. However, most in the Valley don’t view it as a success, and I think that is mainly because it wasn’t a Valley deal. The first thing the VCs will say is that Red Hat is too small – it’s not a “real” success – when the fact is that they have a market capitalization similar to Juniper Networks (about US$10 billion). The second thing is that they’ll point out that Red Hat has “an enterprise version”. This is also not true. Red Hat sells time, just like we at OpenNMS do, through support and ease of use. If I want to, I can buy that access, take the product, remove all of the trademarked information and create an open source, feature for feature copy. This is exactly what CentOS does and why I call the measure of whether or not a company’s products are truly open source the “CentOS Test”. The main reason that the Valley has been unable to duplicate Red Hat’s success is that they always undermine it with some sort of commercial software component that removes the reason people would use it in the first place.

Take Eucalyptus for example. They tout themselves as an “open source” cloud solution, but the barriers they erected with their commercial offerings caused the creation of OpenStack – a truly open source solution that in just a few years has easily eclipsed their product. In that same VC meeting the guy asked “yeah, you’re open source, but what is the ‘secret sauce’?”. Well, the “secret sauce” is the fact that OpenNMS is open source. If I were to screw with that we’d stop being a market leader and just become one of many hundreds of commercial offerings, despite any features that make us better than them.

“But,” the open core people will exclaim, “we need to make money.”

One way to make money is to dual-license an open source project. In order to do that, one must own 100% of the copyright. This brings us to the contentious topic of copyright assignment, and Cyanogenmod seems embroiled in this issue at the moment.

I think it was MySQL that pioneered this idea. Their argument was “Sure, you can contribute to the project, but we need you to assign the copyright to the code you wrote to us. Thus, we can offer it under a license like the GPL, but if you want to pay us you can use it under another license.”

In theory this is a great idea, but there are two flaws. The first is that, as a programmer, if I were to create some code and then give away my copyright, then I no longer own what I wrote. Imagine that you wrote some code for MySQL, and, I don’t know, the company gets acquired by, say, Oracle, and you decide you’d like to work on that code for MariaDB. You can’t. You gave it away. You no longer own it.

The second flaw is that when a company makes a commercial offering, the pressure is on to add more stuff to it and leave it out of the “free” version. MySQL started down this path by offering new versions to commercial customers six months or so before releasing them under an open source license; then six months became a year, and then it became never. This is exactly how Cyanogenmod hopes to pay back that $7 million investment: by requiring device manufacturers to pay for features that they plan to keep out of the open source version.

OpenNMS, I think, has avoided these two traps. First, we do require copyright assignment. One main reason is that we need to be able to defend OpenNMS from people who would try to steal it. This happened a few years ago when a company was using our code in violation of the GPL. When we started legal action to make them stop, their defense was that “if” they were stealing the code, they were stealing from OpenNMS 1.0 (to which, at the time, we didn’t own the copyright) and thus we couldn’t defend it. David Hustace and I mortgaged our houses to acquire that copyright and were able to bring the existing OpenNMS code under one copyright holder.

The next problem to solve was future contributions. Instead of unilaterally declaring that we get sole copyright to all contributions, we actually bothered to ask our community for suggestions. DJ Gregor pointed out the Sun Contributors Agreement (now the Oracle Contributors Agreement) which introduced “dual copyright” to the software industry. In much the same way two authors can share copyright on a book, it is possible for a code author to contribute the copyright to their code to a project while retaining the rights as well. We adopted this for OpenNMS and everyone seems to be pretty happy with it.

Now the second issue, that of a dual license, is harder to address. In the case of OpenNMS it comes down to trust. Trust is very important in the open source world. When I install a pre-compiled binary I am trusting that the person who compiled it didn’t do anything evil. Mark Shuttleworth came under fire for implying that Canonical “had root” in response to some questions about Ubuntu and privacy. While the statement was a little harsh in light of the valid concerns of the community, it was also true. We, as Ubuntu users, trust Canonical not to put in any sort of backdoor into their binaries. The difference between that and commercial software, however, is that it can be verified and I have the option of compiling the code myself.

At OpenNMS we promised the community that 100% of the OpenNMS application would always be available under an open source license, and we have kept that promise. In fact, when Juniper (one of our “Powered by OpenNMS” customers) licensed the code, all the additional work they contract from us ended up in OpenNMS as well (you can actually see the code we are working on in feature branches in our git repository). This is a great way to make money and advance the project as it can be used to pay for some of the development.

This is not a plan that Cyanogenmod plans to follow, if the experience of Guillaume Lesniak is any indication.

The only reason I was interested in Cyanogenmod was the fact that it was open source. Now, the beauty of it is that open source almost always offers options. Bradley Kuhn, a person I consider a friend and whose blog post pushed my button to write this in the first place, offers up Replicant as an alternative. I hadn’t looked at that project in a while and it seems to be coming along nicely, with a lot of newly supported devices. Unfortunately my AT&T S3 isn’t one of them (they only support the international version), so I’m looking to switch to AOKP as soon as I can find the time.

It will be interesting to revisit Cyanogenmod in a year. My guess is that anyone not employed by Cyanogenmod, Inc. will flee to other projects, and Cyanogenmod, instead of being the go-to AOSP alternative, will fade into just another commercial offering. It is doubtful that Samsung will license it, since they pride themselves on in-house expertise, and Google is, well, Google. With the exception of HTC, no one else has any market share.

But, what do I know, right?

Mint with a Dash of Cinnamon

Since switching to using Linux as my primary desktop, I’m always curious as to what options are available to me besides my default go-to distro of Ubuntu.

While Ubuntu 12.04 (the LTS version) is one of the best desktop operating systems I’ve ever used, I’ve grown less enchanted with each subsequent release since then. Part of it comes from some of the choices made by the Ubuntu team (such as the tight integration with Amazon) and I can work around most of those, but I’ve had numerous stability issues with Unity that didn’t really exist in the older releases.

When Debian wheezy came out, I decided to give it a shot as a desktop operating system. I’ve used Debian as a server O/S for over a decade, but the main things that make it great for servers, the cautious nature of changes and inherent stability, kind of suck for the desktop. I’ve discussed this with Eric, who is both a Debian user and a Debian committer, and his reply is to ask if you really need to have umpteen updates to Firefox, etc. I can see his point, but if I’m using, say, Gnome, having access to the latest release can have a huge impact on the user experience.

So I didn’t like wheezy as a desktop, but before going back to Ubuntu I decided to check out Fedora. It does support Gnome 3.8, but I ran into another issue that affects almost all distros outside of Ubuntu: the lack of an easy way to encrypt one’s home directory.

Ubuntu, in the install process, lets you choose to encrypt your home directory. While I’m a firm believer in xkcd’s interpretation of what would happen in the case of someone wanting access to my data, I still like to take some precautions.

I don’t like whole disk encryption for a couple of reasons, namely the possibility of a performance hit but mainly the fact that I can’t remotely reboot my computer without having someone at the keyboard typing in a passphrase. I figure encrypting /home is a good compromise, especially since the key can be tied to the user’s login via pam.
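On distros that don’t offer this in the installer, you can usually retrofit it with eCryptfs, which is what Ubuntu uses under the hood. A rough sketch (the username is made up, and you need to run this from a second admin account while the target user is logged out):

```
# Install the eCryptfs user-space tools (Debian/Ubuntu package name).
sudo apt-get install ecryptfs-utils

# Encrypt an existing home directory in place; this also wires up the
# PAM module so the directory is unlocked automatically at login.
sudo ecryptfs-migrate-home -u alice

# After logging in as that user, print the randomly generated mount
# passphrase and store it somewhere safe for recovery.
ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
```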

I tried to get this to work on wheezy, but I found the performance was spotty and sometimes I’d log in only to find nothing in my home directory. I didn’t spend too much time on it, since I was eager to use Gnome 3.8, but was disappointed to find that Fedora didn’t allow one to easily encrypt their home directory either.

Before giving up, I decided to take a shot at Arch Linux. I’ve been hearing wonderful things about this distro at conferences, but the installation process taxed even me. It is seriously bare-bones, but that is supposed to be part of the appeal. The philosophy around Arch is to create a distro with just the things you, the user, want, and with access to the latest, greatest and, supposedly, most stable code.

It appealed to me as a great compromise between Debian and getting the latest shiny, but I couldn’t get it installed. You end up having to create your own fstab, and somehow the UUIDs got screwed up and it wouldn’t boot. It also didn’t support encryption of the home directory as an option out of the box, but I was willing to try to create it as I did under Debian if I could get it up and running. I don’t think it was impossible for me to get working; I simply ran out of play time and decided to try Arch another day.

On my way back to Ubuntu I decided to try one more distro, Linux Mint. I never made it back to Ubuntu.

Linux Mint 15 is a fork of Ubuntu 13.04. It removes some of the choices made by the Ubuntu team that raise the hackles of privacy advocates, and it introduces its own desktop manager called Cinnamon.

I quite like it.

I can’t really say what I like about it. It’s pretty, with the exception of the default desktop background (seriously Mint, yeah I know there’s history there but, sheesh) which is easily changed. The Terminal theme is one of the nicest I’ve used. There’s a pop up menu like Gnome 3, but then there’s these little dashlet thingies that let you launch things quickly, and a notifications system that is easy to access without getting in the way.

Running applications and open windows show up in a bar, like Gnome 2 or Windows, but I don’t find myself using that all that much. It is pretty easy to customize the whole thing, such as changing the location of things as well as setting hot corners.

There are a couple of issues. The menu doesn’t seem to index everything the way the Dash in Unity does, and I had gotten used to just typing in a few characters of a file name in order to access it. It does seem to remember files you use, so once you have accessed a particular file you can find it via the menu, but not knowing whether it will show up does impact workflow. The other issue is that it is still bound to Ubuntu, so they share some common bugs.

For example, I use the screenshot app a lot. Under Ubuntu 12.04, when I’d take a screenshot a dialog would appear asking me to save it. A suggested filename, based on timestamp, would be highlighted followed by the .png extension. I could just start typing and it would replace the highlighted text with what I had typed. That got broken in 12.10, so I’d have to reselect the text in order to set the filename. Not a big deal, but a little bit of a pain.

When I switched to Mint, it had the same issue. Note: in the last day or so it seems to have been fixed, since I am not seeing it as of today.

Of course, you get a lot of the Ubuntu-y goodness such as encrypted home directories out of the box with Mint, but Mint may end up being on the winning side of the Wayland vs. Mir argument, since Cinnamon isn’t tied to Mir (or Wayland for that matter).

For those of my three readers with a life, you may not be aware of either of those projects. Basically, for decades the control of graphical displays on most computer screens has been based on a protocol called X11. Under Linux that implementation is currently managed by the X.Org project, a fork of the XFree86 project that was the Linux standard for many years. The next-generation display server arising out of X.Org (well, at least from many of its developers) is called Wayland, and in the next few years one can expect it to become the default display server for most Linux distros.

Ubuntu, however, has decided to go in a different direction by launching its own project called Mir. I believe this is mainly because their goal of having a unified user interface across desktop, tablet and phone devices may not be easy to meet under Wayland. My very elementary understanding of Mir is that it allows the whole display space to be managed like one big window – easy to resize under the different screen resolutions of various devices – which differs from Wayland, but I could be making that whole part up.

I’m a huge fan of Ubuntu and I believe that those that do the work get to make the decisions, but I also believe that Wayland will have a much larger adoption base, ergo more users and developers, and will thus be more stable and more feature-rich. My own experiences with Unity’s stability on later releases indicate a trend that the first Mir releases will have some issues, and I’ve decided that I’d rather stick with something else.

For the time being that seems to be Mint with Cinnamon. Not only can I get work done using it, the underlying Ubuntu infrastructure means that I can get drivers for my laptop and still play Steam games. I still run Ubuntu 12.04 on my home desktop and laptop, but that is mainly due to lack of time to change over to Mint.

So, if you are looking for a solid Linux desktop experience, check out Mint. I am still amazed at what the free software community gifts me with every day, so my desktop of choice may change in the future, and I’ll be sure to let you know if I find anything better.

This Is Your Brain on Open Source

Last week I had to get a CT scan of my head. I asked for a copy and the hospital gave it to me on a disk.

When I mounted it on my Ubuntu desktop and tried to open the image I got an error “No Application for Opening DICOM Images”.

But what I loved was that it offered to find a program that could. Within minutes I had Ginkgo CADx installed and was looking at my skull.

Open Source FTW.

The Meritocracy

I’ve been following the recent kerfuffle between Richard Stallman and Canonical over the new Amazon search feature in 12.10, and while I should probably leave well enough alone, I wanted to add a few things to the discussion.

I do respect Richard Stallman for the work he’s done to promote free software, but I get a little tired of his decision to be the final arbiter on where to draw the line. For example, he does walk the walk and uses a netbook as his primary machine because it has an open BIOS. All well and good. But what about the machines that built that netbook? Was their control code open? What about the website he ordered it from, or the person he talked to to place that order? Did they use free software? What about the logistics company that shipped it to him? Was their software 100% free? The reality is that at the moment there simply isn’t enough free software in the supply/services chain to have a totally free experience, and we can’t get there just by wishing it so. It will have to happen in steps, and those steps will involve the free software community working closely with the closed software community.

Thus going after someone like Canonical and calling what they are doing spying actually hurts the promotion of free software. What they are doing is a huge step in the right direction.

Having run a business based on free and open source software for a decade, I am, as you can imagine, a big fan of it. Last year, for a variety of reasons, I decided to make the jump to using a desktop based on Linux. I tried a number of options, but the one that worked for me, the one that “stuck”, was Ubuntu. Using it just comes naturally, and I’ve been using it for so long now that other desktops seem foreign.

I don’t pretend to speak for Mark Shuttleworth, but one of his goals with Ubuntu seems to be to make a desktop operating system that is stable, attractive and easy to use. I think that with Ubuntu they are close to that goal. It works for me. It also works for enough other people that when Valve started working on a Linux port of their Steam client, they chose Ubuntu. When Dell wanted to ship a laptop with Linux, they shipped it with Ubuntu. (I got one, review coming soon)

The Linux desktop world is so fragmented, and represents such a small percentage of potential sales, that until Ubuntu came along there weren’t enough people using the Linux desktop to make it worth writing native clients for it. It took people like Canonical and Shuttleworth to make decisions and choices that enabled this to happen.

Now purists will point out that products like Steam aren’t open source. True, but that doesn’t prevent me from wanting to use them alongside all of the other wonderful stuff I now use that *is* open source. In much the same way that Apple switched to Intel to make the transition from Windows easier, Ubuntu is making the transition to an open source desktop easier. And with more developers writing to the Linux desktop, that can only increase the proliferation of software for it.

And despite all of the outcry, Ubuntu is still open source. Should I dislike something or want to change it, I have that ability. But this brings up my biggest frustration with the free and open software community – there are those within it who think it is someone else’s job to implement their desires.

Take this Amazon thing, for example. I don’t like it simply because I don’t want to have to add any latency to my searches in Dash, so I turn it off. If the off button didn’t exist, I would have the ability to check out the code that implements that feature, remove it, recompile it and install it. Heck, with the proliferation of git these days the process is even simpler, as I could track my changes along with master.
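For what it’s worth, on a Debian-based system the mechanics of that are pretty approachable. A rough sketch, assuming the feature lives in the shopping lens package from 12.10 (adjust the package name for your release):

```
# Grab the source and build dependencies for the feature in question
# (package name assumed from Ubuntu 12.10).
apt-get source unity-lens-shopping
sudo apt-get build-dep unity-lens-shopping

# ... edit the code to taste ...

# Rebuild an unsigned package and install the modified version.
cd unity-lens-shopping-*
debuild -us -uc
sudo dpkg -i ../unity-lens-shopping_*.deb
```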

Yet that does involve something I like to call “work”. Free software doesn’t mean free solution. It is a two way street. You don’t like something? Change it. Ubuntu itself is based on Debian, and Linux Mint is based on Ubuntu. But someone had to do the work to change Debian into Ubuntu, just like someone had to do the work to make Ubuntu into Linux Mint.

It’s what free software is all about.

So it makes me a little unhappy when Stallman refers to the Amazon lookup feature as “spyware”. It’s loaded language meant to get a reaction from his core followers, in much the same way a liberal politician would approach immigration with “let’s open all borders” and a conservative would say “let’s build a wall and throw ’em over it”. The real solution is somewhere in the middle.

This doesn’t mean that users of free software don’t get any say. Feedback is a vital component of any community. I believe when the Amazon feature was introduced in the beta, there wasn’t a way to turn it off. Feedback from the community got the off button added. When questions were raised about trusting Ubuntu with our search results, Shuttleworth replied “We have root”. Not the most diplomatic response, but he made his point that we already trust Ubuntu when we install their libraries on our machines, and compared to that, search results are a minor thing.

If I were truly paranoid, I’d probably run something like Gentoo where the code is built from source each time. But what’s funny is that if I did switch to Gentoo, it would be because I used Ubuntu as the gateway drug to a free desktop.

My final point is that open source software is the ultimate meritocracy. Those who do the work get the most influence. Shuttleworth spent millions to create Ubuntu, so he gets a lot of say in it. Clement Lefebvre founded Mint, so his opinion matters in that community. I think we owe a huge debt to Richard Stallman for his past efforts, but lately I think he is doing more harm than good. And maybe I’m feeding the troll by even bringing it up.

All I know for certain is that I am using way more free software than I was using a year ago, and that is due in large part to Canonical. It was also a lot of work to make the switch, but I had help from like-minded people on the Internet, and isn’t that what open source is truly all about?