OSMC 2018 – Day -1

The Open Source Monitoring Conference (OSMC), held in Nürnberg, Germany each year, brings together pretty much everyone who is anyone in the free and open source monitoring space. I really look forward to attending, and so do a number of other people at OpenNMS, but this year I won the privilege, so go me.

The conference is a lot of fun, which must be the reason for the hell trip to get here this year. Karma must be trying to bring things into balance.

As an American Airlines frequent flier whose home airport is RDU, most of my trips to Europe involve Heathrow airport (American has a direct flight from RDU to LHR that I’ve taken more times than I can count).

I hate that airport with the core of my being, and try to avoid it whenever possible. While I could have taken a flight from LHR directly to Nürnberg on British Airways, I decided to fly to Philadelphia and take a direct American flight to Munich. It is just about two hours by train from MUC to Nürnberg Hbf and I like trains, so combine that with getting to skip LHR and it is a win/win.

But it was not to be.

I got to the airport and watched as my flight to PHL got delayed further and further. Chris, at the Admiral’s Club desk, was able to re-route me, but that meant a flight through Heathrow (sigh). Also, the Heathrow flight left five hours later than my flight to Philadelphia, and I ended up waiting it out at the airport (Andrea had dropped me off and I didn’t want to ask her to drive all the way back to get me just for a couple of hours).

Because of the length of this trip I had to check a bag, and I had a lot of trepidation that my bag would not be re-routed properly. Chris even mentioned that American had actually put it on the Philadelphia flight but he had managed to get it removed and put on the England flight, and American’s website showed it loaded on the plane.

That also turns out to be the last record American has on my bag, at least on the website I can access.

American Tracking Website

The flight to London was uneventful. American planes tend to land at Terminal 3 and most British Airways planes take off from Terminal 5, so you have to make your way down a series of long corridors and take a bus to the other terminal. Then you have to go through security, which is usually when my problems begin.

I wear contact lenses, and since my eyes tend to react negatively to the preservatives found in saline solution I use a special, preservative-free brand of saline. Unfortunately, it is only available in 118ml bottles. As most frequent fliers know, the limit for the size of liquid containers for carry-on baggage is 100ml, although the security people rarely notice the difference. When they do I usually just explain that I need it for my eyes and I’m allowed to bring it with me. That is, everywhere except Heathrow airport. Due to the preservative-free nature of the saline I can’t move it to another container for fear of contamination.

Back in 2011 was the first time my saline was ever confiscated at Heathrow. Since then I’ve carried a doctor’s note stating that it is “medically necessary”, but even then I had it confiscated again a few years later at LHR because the screener didn’t like the fact that my note was almost a year old. That said, I’ve gone through that airport many times with no one noticing the slightly larger size of my saline bottle, but on this trip it was not to be.

When your carry on items get tagged for screening at Heathrow’s Terminal 5, you kind of wait in a little mob of people for the one person to methodically go through your stuff. Since I had several hours between flights it was no big deal for me, but it is still very annoying. Of course when the screener got to my items he was all excited that he had stopped the terrorist plot of the century by discovering my saline bottle was 18ml over the limit, and he truly seemed disappointed when I produced my doctor’s note, freshly updated as of August of this year.

Screeners at Heathrow are not imbued with much decision making ability, so he literally had to take my note and bottle to a supervisor to get it approved. I was then allowed to take it with me, but I couldn’t help thinking that the terrorists had won.

The rest of my stay at the world’s worst airport was without incident, and I squeezed into my window seat on the completely full A319 to head to Munich.

Once we landed I breezed through immigration (Germans run their airports a bit more efficiently than the British) and waited for my bag. And waited. And waited.

When I realized it wouldn’t be arriving with me, I went to look for a BA representative. The sign said to find them at the “Lost and Found” kiosk, but the only two kiosks in the rather small baggage area were not staffed. I eventually left the baggage area and made my way to the main BA desk, where I managed to meet Norbert. After another 15 minutes or so, Norbert brought me a form to fill out and promised that I would receive an e-mail and a text message with a “file number” to track the status of my bag.

I then found the S-Bahn train which would take me to the Munich Hauptbahnhof where I would get my next train to Nürnberg.

I had made a reservation for the train to ensure I had a seat, but of course that was on the 09:55 train which I would have taken had I been on the PHL flight. I changed that to a 15:00 train when I was rerouted, and apparently one change is all you get with Deutsche Bahn, but Ronny had suggested I buy a “Flexpreis” ticket so I could take any train from Munich to Nürnberg that I wanted. I saw there were a number of “Inter-City Express (ICE)” trains available, so I figured I would just hop on the first one I found.

When I got to the station I saw that a train was leaving from Platform (Gleis) 20 at 15:28. It was now 15:30 so I ran and boarded just before it pulled out of the station.

It was the wrong train.

Well, not exactly. There are a number of types of trains you can take. The fastest are the ICE trains that run non-stop between major cities, but there are also “Inter-City (IC)” trains that make more stops. I had managed to get on a “Regionalbahn (RB)” train, which makes many, many stops, turning my one-hour trip into three.

(sigh)

The man who took my ticket was sympathetic, and told me to get off at Ingolstadt and switch to an ICE train. I was chatting on Mattermost with Ronny most of this time, and he was able to verify the proper train and platform I needed to take. That train was packed, but I ended up sitting with some lovely people who didn’t mind chatting with me in English (I so love visiting Germany for this reason).

So, about seven hours later than I had planned I arrived at my hotel, still sans luggage. After getting something to eat I started the long process of trying to locate my bag.

I started on Twitter. Both the people at American and British Airways asked me to DM them. The AA folks said I needed to talk with the BA folks and the BA folks still have yet to reply to me. Seriously BA, don’t reach out to me if you don’t plan to do anything. It sets up expectations you apparently can’t meet.

Speaking of not doing anything, my main issue was that I needed a “file reference” in order to track my lost bag, but despite Norbert’s promise I never received a text or e-mail with that information. I ended up calling American, and the woman there was able to tell me that she showed the bag was in the hands of BA at LHR. That was at least a start, so she transferred me to BA customer support, who in turn transferred me to BA delayed baggage, who told me I needed to contact American.

(sigh)

As calmly as I could, I reiterated that I started there, and then the BA agent suggested I visit a particular website and complete a form (similar to the one I did for Norbert I assume) to get my “file reference”. After making sure I had the right URL I ended the call and started the process.

I hit the first snag when trying to enter in my tag number. As you can see from the screenshot above, my tag number starts with “600” and is ten digits long. The website expected a tag number that started with “BA” followed by six digits, so my AA tag was not going to work.

BA Tracking Website - wrong number

But at least this website had a different number to call, so I called it and explained my situation once again. This agent told me that I should have a different tag number, and after looking around my ticket I did find one in the format they were after, except starting with “AA” instead of “BA”. Of course, when I entered that in I got an error.

BA Tracking Website - error

After I explained that to the agent I remained on the phone for about 30 minutes until he was able to, finally, give me a file reference number. At this point I was very tired, so I wrote it down and figured I would call it a night and go to sleep.

But I couldn’t sleep, so I tried to enter that number into the BA delayed bag website. It said it was invalid.

(sigh)

Then I got a hint of inspiration and decided to enter in my first name as my last, and voila! I had a missing bag record.

BA Tracking Website - missing bag

That site said they had found my bag (the agent on the phone had told me it was being “traced”) and it also asked me to enter in some more information about it, such as the brand of the manufacturer.

BA Tracking Website - information required

Of course when I tried to do that, I got an error.

BA Tracking Website - system error

Way to go there, British Airways.

Anyway, at that point I could sleep. As I write this the next morning nothing has been updated since 18:31 last night, but I hold out hope that my bag will arrive today. I travel a lot so I have a change of clothes with me along with all the toiletries I need to not offend the other conference attendees (well, at least with my hygiene), but I can’t help but be soured on the whole experience.

This year I have spent nearly US$20,000 with American Airlines (they track that for me on their website). I paid them for this ticket and they really could have been more helpful instead of just washing their hands and pointing their fingers at BA. British Airways used to be one of the best airlines on the planet, but lately they seem to have turned into Ryanair, just without that airline’s level of service. The security breach that exposed the personal information of their customers, stories like this recent issue with a flight from Orlando, and my own experience this trip have really put me off flying them ever again.

Just a hint BA – from a customer service perspective – when it comes to finding a missing bag all we really want (well, besides the bag) is for someone to tell us they know where it is and when we can expect to get it. The fact that I had to spend several hours after a long trip to get something approximating that information is a failure on your part, and you will lose some if not all of my future business because of it.

I also made the decision to further curtail my travel in 2019, because frankly I’m getting too old for this crap.

So, I’m now off to shower and to get into my last set of clean clothes. Here’s hoping my bag arrives today so I can relax and enjoy the magic that is the OSMC.

CarbonROM Install on Pixel XL (marlin)

I am still playing around with alternate ROMs for Android devices, and I recently came across CarbonROM. I had some issues getting it installed (more due to me than the ROM itself) and so I thought I’d post my steps here.

I was looking for a ROM that focused on stability and security, and Carbon seems to fit the bill.

While I have a lot of experience playing with ROMs, I hadn’t really done it on handsets with “Seamless Update”. In this case there are two “slots”, Slot A and Slot B, and this can cause a challenge when installing a new operating system. This procedure worked for me (with help from Christian Oder via the CarbonROM community on Google+).

  1. Install latest 8.1 Factory Image

    This may not be required, but since I ran into issues I went ahead and installed the latest “oreo” factory image. I had already upgraded the phone to Android 9 (pie) and thought that might have caused the problems I was having, but I don’t think that was the case.

  2. Unlock the bootloader

    This is not meant to be a tutorial on installing alternative ROMs, but basically you go to Settings -> System and locate the build number. Tap on it a number of times until you have enabled “developer mode”, then go to the developer options, allow OEM unlocking and enable USB debugging so you can access the device over USB. Then boot into the bootloader, run “fastboot flashing unlock” and follow the prompts on the screen (see the command sketch after these steps).

  3. Boot to TWRP using image

    In order to install an alternative ROM it helps to have a better Recovery than stock. I really like TWRP and pretty much just followed the instructions. Using the Android Debug Bridge (adb) you reboot into the bootloader and then boot TWRP from an image file using fastboot.

  4. Install TWRP zip

    Once you are running TWRP, install it into the boot partition from the .zip file. Use “adb push” to put the .zip file on the /sdcard/ partition.

  5. Reboot to Recovery (to make sure TWRP still works)
  6. Factory reset and erase /system

    Go to “Wipe” and do a factory reset, and then “Advanced Wipe” to nuke the system partition.

    You will also want to erase user data at this point. Once I got Carbon to boot it still asked me for a password which I assumed was the one I set up in the original factory install (you have to get into the factory image to unlock the bootloader). I went back and erased all of the user data and that did what I expected, so you might want to do this at this step.

  7. Install Carbon

    Use “adb push” to send the latest Carbon zip file to the /sdcard/. Install using TWRP.

    This is the point where my issues started. The next step is to reboot back into recovery. You have to do this so that the other Slot gets overwritten with the new operating system. However, with the Carbon install TWRP was overwritten and that hung the device when I tried to reboot into recovery, so

  8. Re-install TWRP

    Use “adb push” to load the TWRP .zip file again and install it while you are still in TWRP, then

  9. Reboot to recovery

    This should get Carbon all happy on your device as it will be copied over into the other Slot. If you try to boot into the system before doing this bad things will happen. (grin)

  10. Install GApps (optional)

    Now, if you want Google applications you need to install a GApps package. I like Open GApps and so I installed the “pico” package. One thing I am experimenting with here is seeing if I can use a minimal amount of Google software without giving Google my entire digital life. The pico package includes just enough to run the Google Play Store.

    This is optional, and if you just want to run, say, F-Droid apps, you can skip this step, but note I’ve been told that you can’t add GApps later, so if you want it, install it now.

  11. Reboot into the System

If everything went well, you should see the Carbon boot screen and eventually get dropped into the “Welcome to Android” Google sign up wizard. Follow the prompts (I turn off almost everything but location services) and then you should be running CarbonROM with a minimal amount of Google-ness.
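For reference, here is a rough sketch of the shell side of the steps above. The file names are just placeholders for whatever TWRP, Carbon and Open GApps builds you actually downloaded, and the zip installs themselves happen inside the TWRP interface rather than from the shell:

   # unlock the bootloader (step 2; this wipes the device)
   adb reboot bootloader
   fastboot flashing unlock

   # boot TWRP from an image file without flashing it (step 3)
   fastboot boot twrp-3.x-marlin.img

   # push the zips to the device; each is then installed from the TWRP menu (steps 4, 7, 8 and 10)
   adb push twrp-installer-marlin.zip /sdcard/
   adb push carbon-cr-marlin.zip /sdcard/
   adb push open_gapps-arm64-8.1-pico.zip /sdcard/

   # reboot back into recovery between installs (steps 5 and 9)
   adb reboot recovery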

The first thing I tried out was “Pokémon Go”. Due to people cheating by spoofing their GPS coordinates, Pokémon Go leverages features of Android to detect if people are running an altered operating system. I’ve found that on some ROMs the application will not work. It worked fine on Carbon and so I’m hoping I can add just a few more “Google” things, like Maps, and then use F-Droid for everything else.

Note that I didn’t “root” my operating system. When you boot into TWRP you can access the entire device with root privileges so I never feel the need to have root while I’m running the device. Seems to be a good security practice and it also allows me to still run Pokémon Go.

Many thanks to the CarbonROM team for working on this. I’m eager to see how soon security updates are released as well as what they do with Android 9, but it looks promising.

UKNOF41

I love tech conferences, especially when I get to be a speaker. Nothing makes me happier than to be given a platform to run my mouth.

For the last year or so I’ve been attending various Network Operators Group (NOG) meetings, and I recently got the opportunity to speak at the UK version, which they refer to as a Network Operators Forum (UKNOF). It was a lot of fun, so I thought I’d share what I learned.

UKNOF41 was held in Edinburgh, Scotland. I’d never been to Scotland before and I was looking forward to the visit, but Hurricane Florence required me to return home early. I ended up spending more time in planes and airports than I did in that city, and totally missed out on both haggis and whisky (although I did drink an Irn-Bru). I arrived Monday afternoon and met up with Dr. Craig Gallen, the OpenNMS Project representative in the UK. We had a nice dinner and then got ready for the meeting on Tuesday.

Like most NOG/NOF events, the day consisted of one track and a series of presentations of interest to network operators. I really like this format. The presentations tend to be relatively short and focused, and this exposes you to concepts you might have missed if there were multiple tracks.

UKNOF is extremely well organized, particularly from a speaker’s point of view. There was a ton of information on what to expect and how to present your slides, and everything was run from a single laptop. While this did mean your slides were due early (instead of, say, being written on the plane or train to the conference) it did make the day flow smoothly. The sessions were recorded, and I’ll include links to the presentations and the videos in the descriptions below.

UKNOF41 - Keith Mitchell

The 41st UKNOF was held at the Edinburgh International Conference Centre, located in a newer section of the city, and it was a pretty comfortable facility in which to hold a conference. Keith Mitchell kicked off the day with the usual overview of the schedule and events (slides), and then we got right into the talks.

UKNOF41 - Kurtis Lindqvist

The first talk was from Kurtis Lindqvist who works for a service provider called LINX (video|slides). LINX deployed a fairly new technology called EVPN (Ethernet VPN). EVPN is “a multi-tenant BGP-based control plane for layer-2 (bridging) and layer-3 (routing) VPNs. It’s the unifying L2+L3 equivalent of the traditional L3-only MPLS/VPN control plane.” I can’t say that I understood 100% of this talk, but the gist is that EVPN allows for better use of available network resources, which allowed LINX to lower its prices considerably.

UKNOF41 - Neil McRae

The next talk was from Neil McRae from BT (video|slides). While this was my first UKNOF I quickly identified Mr. McRae as someone who is probably very involved with the organization as people seemed to know him. I’m not sure if this was in a good way or a bad way (grin), probably a mixture of both, because being a representative from such a large incumbent as BT is bound to attract attention and commentary.

I found this talk pretty interesting. It was about securing future networks using quantum key distribution. Current encryption, such as TLS, is based on public-key cryptography. The security of public-key cryptography is predicated on the idea that it is difficult to factor large numbers. However, quantum computing promises several orders of magnitude more performance than traditional binary systems, and the fear is that at some point in the future the mathematically complex operations that make things like TLS work will become trivial. This presentation talked about some of the experiments that BT has been undertaking with quantum cryptography. While I don’t think this is going to be an issue in the next year or even the next decade, assuming I stay healthy I expect it to be an issue in my lifetime. It is good to know that people are working on solving it.

At this point in time I would like to offer one minor criticism. Both of the presenters thus far were obviously using a slide deck created for a purpose other than UKNOF. I don’t have a huge problem with that, but it did bother me a little. As a speaker I always consider the opportunity to speak to be a privilege. While I joke about writing the slides on the way to the conference, I do put a lot of time into my presentations, and even if I am using some material from other decks I make sure to customize it for that particular conference. Ultimately what is important is the content and not the deck itself and perhaps UKNOF is a little more casual than other such meetings, but it still struck me as, well, rude, to skim through a whole bunch of slides to fit the time slot and the audience.

UKNOF41 - Julian Palmer

After a break the next presentation was from Julian Palmer of Corero (video|slides). Corero is a DDOS protection and mitigation company, which I assume means they compete with companies such as Cloudflare. I am always fascinated by the actions of those trying to break into networks and those trying to defend them, so I really enjoyed this presentation. It was interesting to see how much larger the DDOS attacks have grown over time and even more surprising to see how network providers can deal with them.

UKNOF41 - Stuart Clark

This was followed by Stuart Clark from Cisco Devnet giving a talk on using “DevOps” technologies with respect to network configurations (video|slides). This is a theme I’ve seen at a number of NOG conferences: let’s leverage configuration management tools designed for servers and apply them to networking gear. It makes sense, and it is interesting to note that the underlying technologies of servers and network gear have become so similar that using these tools actually works. I can remember a time when accessing network gear required proprietary software running on Solaris or HP-UX. Now with Linux (and Linux-like) operating systems underpinning almost everything, it has become easier to extend tools like Ansible to work on routers as well as servers.

It was my turn after Mr. Clark spoke. My presentation covered some of the new stuff we have released in OpenNMS, specifically things like the Minion and Drift, as well as a few of the newer things on which we are actively working (video|slides). I’m not sure how well it was received, but a number of people came up to me afterward and said they enjoyed it. During the question and answer session Mr. McRae did state something that bothered me. He said, basically, that the goal of network monitoring should be to get rid of people. I keep hearing that, especially from large companies, but I have to disagree. Technology is moving too fast to ever get rid of people. In just half a day I was introduced to technologies such as EVPN and quantum key distribution, not to mention dealing with the ever-morphing realm of DDOS attacks, and there is just no way monitoring software will ever evolve fast enough to cover everything new just to get rid of people.

Instead, we should be focusing on enabling those people in monitoring to be able to do a great job. Eliminate the drudgery and give them the tools they need to deal with the constant changes in the networking space. I think it is a reasonable goal to use tools to reduce the need to hire more and more people for monitoring, but getting rid of them altogether does not seem likely, nor should we focus on it.

I was the last presentation before lunch (so I finished on time, ‘natch).

UKNOF41 - Chris Russell

The second half of the conference began with a presentation by Chris Russell (video|slides). The title was “Deploying an Atlas Probe (the Hard Way)”, which is kind of funny. RIPE NCC is the Regional Internet Registry for Europe, and they have a program for deploying hardware probes to measure network performance. What’s funny is that you just plug them in. Done. While this presentation did include discussion of deploying an Atlas probe, it was more about splitting out a network and converting it to IPv6. IPv6 is the future (it is supported by OpenNMS) but in my experience organizations are migrating from IPv4 very slowly (the word “glacier” comes to mind). Sometimes it takes a strong use case to justify the trouble, and this presentation was an excellent case study of why to do it and what pitfalls to expect.

UKNOF41 - Andrew Ingram

Speaking of splitting out networks, the next presentation dealt with a similar situation. Presented by Andrew Ingram from High Tide Consulting, his session covered a company that acquired another company, then almost immediately spun it back out (video|slides). He was brought in to handle the challenges of a partially combined network that needed to be separated in a very short amount of time with minimal downtime.

I sat next to Mr. Ingram for most of the conference and learned this was his first time presenting. I thought he did a great job. He sent me a note after the conference saying he has “managed to get OpenNMS up and running in Azure with an NSG (Network Security Gateway) running in front for security and a Minion running on site. It all seams to be working very nicely.”

Cool.

UKNOF41 - Sara Dickinson

The following presentation would have to be my favorite of the day. Given by Sara Dickinson of Sinodun IT, it talked about ways to secure DNS traffic (video|slides).

The Internet wouldn’t work without DNS. It translates domain names into addresses, yet in most cases that traffic is sent in the clear, and that metadata can be an issue with respect to privacy. Do you think Google runs two of the most popular DNS servers out of the goodness of their heart? Nope, they can use that data to track what people are doing on the network. What’s worse, every network provider on the path between you and your DNS server can see what you are doing. DNS is also an attack vector as well as a tool for censorship: traffic can be “spoofed” to send users to the wrong server, and it can be blocked to prevent users from accessing specific sites.

To solve this, one answer is to encrypt that traffic, and Ms. Dickinson talked about a couple of options: DoT (DNS over TLS) and DoH (DNS over HTTPS).

The first one seems like such a no-brainer that I’m surprised it took me so long to deploy it. DoT encrypts the traffic between you and your DNS server. Now, you still have to trust your DNS provider, but this prevents passive surveillance of DNS traffic. I use a pfSense router at home and decided to set up DoT to the Quad9 servers. It was pretty simple. Of all of the major free DNS providers, Quad9 seems to have the strongest privacy policy.
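As a rough sketch of what that setup involves (the Quad9 addresses are the standard ones, and the exact option names may vary with your unbound version), the custom options for pfSense’s DNS Resolver look something like this; newer pfSense releases also expose a similar checkbox for TLS-encrypted outgoing queries in the GUI:

   server:
     # send all upstream (forwarded) queries over TLS
     ssl-upstream: yes

   forward-zone:
     name: "."
     # Quad9 resolvers listening for DNS over TLS on port 853
     forward-addr: 9.9.9.9@853
     forward-addr: 149.112.112.112@853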

The second protocol, DoH, is DNS straight from the browser. Instead of using a dedicated port, it rides over an existing HTTPS connection. You can’t block it without blocking all HTTPS traffic, and you can’t see the DNS queries separately from normal browsing. You still have to deal with privacy issues, since the domain name has to be resolved somewhere and the DoH provider will get header information, such as User-Agent, along with the query, so there are tradeoffs.
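If you want to see DoH in action, Cloudflare’s resolver offers a JSON flavor that is easy to poke at with curl (browsers use the binary wire format from RFC 8484, but the idea is the same):

   # ask Cloudflare's DoH endpoint for the A record of example.com
   curl -s -H 'accept: application/dns-json' \
     'https://cloudflare-dns.com/dns-query?name=example.com&type=A'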

While I learned a lot at UKNOF this has been the only thing I’ve actually implemented.

After a break we entered into the all too common “regulatory” section of the conference. Governments are adding more and more restrictions and requirements for network operators and these NOG meetings are often a good forum for talking about them.

UKNOF41 - Jonathan Langley

Jonathan Langley from the Information Commissioner’s Office (ICO) gave a talk on the Network and Information Systems Directive (NIS) (video|slides). NIS includes a number of requirements including things such as incident reporting. I thought it was interesting that NIS is an EU directive and the UK is leaving the EU, although it was stressed that NIS will apply post-Brexit. While there were a lot of regulations and procedures, it wasn’t as onerous as, say, TICSA in New Zealand.

UKNOF41 - Huw Saunders

This was followed by another regulatory presentation by Huw Saunders from The Office of Communications (Ofcom) (video|slides). This was fairly short and dealt primarily with Ofcom’s role in NIS.

UKNOF41 - Askar Sheibani

Askar Sheibani presented an introduction to the UK Fibre Connectivity Forum (video|slides). This is a trade organization that wants to deploy fiber connectivity to every commercial and residential building in the country. My understanding is that it will help facilitate such deployments among the various stakeholders.

UKNOF41 - David Johnston

The next-to-last presentation struck a chord with me. Given by David Johnston, it talked about the progress the community of Balquhidder in rural Scotland is making in deploying its own Internet infrastructure (video|slides). I live in rural North Carolina, USA, and even though the golf course community one mile from my house has 300 Mbps service from Spectrum, I’m stuck with an unreliable DSL connection from CenturyLink, which, when it works, is a little over 10 Mbps. Laws in North Carolina currently make it illegal for a municipality to provide broadband service to its citizens, but should that law get overturned I’ve thought about trying to spearhead some sort of grassroots service here. It was interesting to learn how they are doing it in rural Scotland.

UKNOF41 - Charlie Boisseau

The final presentation was funny. Given by Charlie Boisseau, it was about “Layer 0” or “The Dirty Layer” (video|slides). It covered how cable and fiber are deployed in the UK. The access chambers for conduit have covers that state the names of the organizations that own them, and with mergers, acquisitions and bankruptcies those change (but the covers do not). While I was completely lost, the rest of the crowd had fun guessing the progression of one company to another. Anyone in the UK can deploy their own network infrastructure, but it isn’t exactly cheap, and the requirements were covered in the talk.

After the conference they served beer and snacks, and then I headed back to the hotel to get ready for my early morning flight home.

I had a lot of fun at UKNOF and look forward to returning some day. If you are a network provider in the UK it is worth it to attend. They hold two meetings a year, with one always being in London, so there is a good chance one will come near you at some point in time.

Meridian 2018

It is hard to believe that our first release of OpenNMS Meridian was over three years ago.

Meridian Logo

We were struggling with trying to balance the needs of a support organization with the open source desire to “release early, release often”. How do you balance wanting to be as cutting edge as possible with supporting customers who really need a stable platform? We did have a “development” release, but no one really used it.

Our answer was to model OpenNMS on Red Hat, the most successful open source company in existence. While Red Hat has hundreds of products, their main offering is Red Hat Enterprise Linux (RHEL). This is derived, in large part, from the Fedora Linux distribution. New things hit Fedora first and, once vetted, make their way into RHEL.

We decided to do the same thing with OpenNMS. OpenNMS was split into two main branches: Horizon and Meridian. Horizon was the Fedora equivalent, while Meridian was modeled on RHEL.

This has been very successful. Where we used to average a new major OpenNMS release every 18 months, we now do three or four Horizon releases per year. Tons of new features are hitting Horizon, from the ability to deal with telemetry data and new correlation features that condense alarms into “situations” based on unsupervised machine learning, to the first steps toward a microservices architecture.

We do our best to release code as production-ready as possible. Our users are very creative and use OpenNMS in unique ways. By offering up rapid Horizon releases it allows us to find and fix issues quickly and work out how to best implement new functionality.

But what about our users who are more interested in stability than the “new shiny”? They needed a system that was rock solid and easy to maintain. That’s why we created Meridian. Meridian lags Horizon on features but by the time a feature hits Meridian, it has been tested thoroughly and can immediately be deployed into production.

There is one major Meridian release a year, with usually three or four point updates. Anyone who has ever upgraded OpenNMS understands that dealing with configuration file changes can be problematic. With Meridian, moving from one point release to another rarely changes configuration, so upgrades can happen in minutes and users can rest assured that their systems are up to date and secure. Each Meridian release is supported for three years.

There is a cost associated with using Meridian. Similar to RHEL, it is offered as a subscription. While still 100% open source, you pay a fee to access the update servers, and the idea is that you are paying for the effort it takes to refine Horizon into Meridian and get the most stable version of OpenNMS possible. We are so convinced that Meridian is worth it that it is available without having to buy a support contract. Meridian users get access to OpenNMS Connect, which is a forum for asking questions about using Meridian.

It seems like it was just yesterday that we did this but it has now been over three years. That means support will sunset on Meridian 2015 at the end of the year. Never fear, the latest releases are just as stable and even more feature rich.

The main feature in Meridian 2018 is support for the OpenNMS Minion. The Minion is a stateless application that allows for remote distribution of OpenNMS functionality. For example, I used to run an OpenNMS instance at my house to monitor my devices. Now I just have a Minion. Even though my network is not reachable from our production OpenNMS instance, the Minion allows me to test service availability, as well as collect data and traps, and then forward them on to the main application. The Minion itself is stateless – it connects to a messaging broker on the OpenNMS server in order to get its list of tasks.

A Minion is defined by its “Location”. You can have multiple Minions for a given location and they will access the broker via a “competitive consumer queue”. This way if a particular Minion goes down, there can be another to do the work. By default OpenNMS ships with ActiveMQ as the broker, but it is also possible to use an external Kafka instance as well. Kafka can be clustered for both load balancing and reliability, and the combination of a Kafka cluster and multiple Minions can make the number of devices OpenNMS monitors virtually limitless (we are working on a proof of concept for one user with over 8 million discrete devices).
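As an illustration (the host names and location below are made up, and the path assumes a default package install), the Minion’s controller configuration is only a handful of lines pointing it at its location and its OpenNMS instance:

   # /opt/minion/etc/org.opennms.minion.controller.cfg
   location = HOME-NETWORK
   id = 00000000-0000-0000-0000-000000000001
   # REST endpoint of the central OpenNMS instance
   http-url = http://opennms.example.com:8980/opennms
   # message broker the Minion pulls its work from (ActiveMQ by default)
   broker-url = failover:tcp://opennms.example.com:61616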

There are a number of other features in Meridian 2018, so check out the release notes for more details. It is an exciting addition to the OpenNMS product line.

The Technology Choice Struggles of a Freetard

TL;DR: With the demise of CopperheadOS, I’ve had to struggle to find a new mobile operating system. With the choices coming down to Google or Apple, I decided to return to Apple and I bought an iPhone. Learning quickly that it is very hard to manage the iPhone under Linux, I also decided to switch to a Macbook Pro. A month later and after a business trip with the laptop, I am returning to Linux as my primary operating system.

This is a rather long post that I doubt will interest even one of my three readers, but as I expect a small subset of the population agonizes over technology choices as much as I do, perhaps someone will find it useful.

Back in 2011 I decided to stop using Apple gear and switch to running as much free software as possible. It was difficult, but I managed to switch almost all of my technology to open, if not always free, options. The hardest part was mobile.

For years people have been trumpeting each new year as “The Year of the Linux Desktop”. The problem is that more and more people are doing without a desktop entirely, and instead interact via mobile devices, so it is becoming more like “The Year of the Free Buggy Whip”. The broader free and open source community totally missed the boat when it came to mobile.

Seriously, where is the “Linux” of mobile? We don’t have it. Our choices are pretty much limited to Apple and Google.

Apple is pretty straightforward. They control the whole experience so you buy devices from them and you are allowed to run the software they let you. The freetard in me chafes at these limitations.

So that leaves Android. The problem with Android is that it is pretty much Google. Almost all of the Android Open Source Project (AOSP) derivatives rely on Google for both security updates and device drivers (which are rarely open). They start from a platform over which they have little control, unlike Linux.

Google is becoming more and more intrusive when it comes to surveillance. When you first sign in you are asked “Do you want to improve your Android experience?” Well, who doesn’t, but what I failed to realize is that if you turn that on (it is on by default) you end up sending pretty much everything you do to Google: every app you open and how long you use it, every phone call you take and every text you send, in addition to every link you visit. Turn it off and your experience is greatly limited. For example, Google Maps won’t store your recent searches unless that feature is turned on. Recently I was in a private Google Hangout when the other person pasted a link. Although the link showed up normally in the chat window, the URL itself first went through Google when you clicked on it. Seriously? Google needs to track your activity down to the level of a link in a private Hangout?

But, Android is open source, unlike iOS, so for years I focused my mobile platform on Android but using alternative versions, often called “custom ROMs”.

Running custom ROMs is not for the faint of heart. Probably the most famous was CyanogenMod, but unfortunately that organization imploded spectacularly (but lives on in LineageOS). While I originally ran CyanogenMod, I found a really nice solution and community in OmniROM. In addition to the O/S, you need to install Google applications (GApps) separately, and projects like Open GApps let you control exactly what you install. I really liked that, and it worked well for awhile.

But there are two main issues with custom ROMs. The first is that almost all of them are volunteer efforts, so the attention level of any one maintainer can vary greatly. They don’t have huge test organizations, and the number of handsets supported can be limited. Find a good ROM with an active maintainer for your handset and you’re golden, but you can be in for a world of disappointment if not.

The second is that Google is getting more and more aggressive about having their applications run on these operating systems. Certain apps won’t run well (or run at all) if the underlying operating system isn’t “Google Approved”.

Thus I started running into problems. All of my older handsets are no longer being maintained (farewell Nexus 6) and OmniROM doesn’t support the Pixel (sailfish) or Pixel XL (marlin) which were released two years ago, so that option is out for me. I also like to play games like Pokémon Go, but it started behaving badly (or not running) on devices that weren’t vanilla Google.

I thought I had found a solution in CopperheadOS. This is (was) an organization out of Canada that made an extremely hardened version of Android. Unlike most custom ROMs where you replace the recovery partition or enable root access, Copperhead took the opposite approach and provided a very locked down, security conscious operating system. You would think this would be in opposition to free software, but it turns out their default software repository was F-Droid, which only features open source software, and in fact it was impossible to run the Google Play Store on the device (you allow Google the right to install any software they want without explicit permission when you use GApps and this went against the Copperhead philosophy).

This appealed to me, so I decided to try it out. I found I could do over 90% of what I needed to do without Google, and for things like Pokémon Go, I just got a second phone running stock Google (with a lot of the surveillance features turned off). So, my personal information lived on my Copperhead phone, and my “toy” phone let me do things like use Google Maps and call a Lyft.

Carrying two handsets wasn’t optimal, but I got used to it, and I found myself using the “Google” phone less and less. I loved the fact that security updates often hit my Copperhead phone a day or two before my Google phone, and I slept soundly knowing that my personal data was about as secure as I could make it (and still actually use a mobile device).

Then came June and the apparent demise of Copperhead (thanks Bryan Lunduke, for telling me about this and ruining my life, again). I needed to find another mobile solution.

About this time, privacy had come to the forefront with the impending implementation of the GDPR in Europe. The amount and level of surveillance being done by Google became even more transparent. There was a high profile study done in Norway that showed not only were companies like Google impacting your privacy, they were being pretty sneaky about it. The study also called out Facebook and Microsoft.

Surprisingly absent from that article was Apple. In fact, the news out of Apple-land was pretty positive. Due to the GDPR Apple made it possible for European users to download all of the tracking data Apple had on a given user and it was rather minuscule. Since Apple makes money on hardware, its business model makes it much more privacy friendly, even if it isn’t exactly a freetard’s best friend.

So I bought an iPhone.

A lot had changed in seven years. The iPhone is much more powerful but it is also a lot less intuitive. Even now I prefer the Android interface to iOS, but I didn’t find the transition too difficult.

No, the difficult part was trying to use the iPhone with Linux. While I found ways to mount the iPhone to my Linux desktop, you can’t manage music without iTunes, and iTunes doesn’t run natively on Linux.

(sigh)

Well, in for a penny, in for a pound. We had a spare 2017 13-inch Macbook Pro at the office, so I conscripted it to be my new laptop/desktop. Remember that the last Apple O/S I used regularly was Snow Leopard, so there was a second learning curve to climb.

Part of it was real easy. Free software on OSX has come a long way, so I simply installed Thunderbird, moved my profile over, and I was in business for e-mail. Similarly, Firefox was up and running with an install and a sync. The wonderful Homebrew project brought most of the rest of the stuff I needed to OSX land.

But I wasn’t super happy with the interface. I’ve tried a large number of desktop environments, and for my needs Cinnamon on Linux Mint works best. Little things about the OSX desktop just seemed to get in the way.

For example, I use a little tool called “onmsblink” that takes a ThingM blink1 USB dongle and changes its color based on the highest current alarm in my OpenNMS system. I launch it from the command line, but because it is Java it shows up in the dock and I can’t make it go away. Also, I’m used to clicking on an icon, say the Finder, and having a new window pop up. In OSX, it brings all open windows to the front, even if it is in another workspace. Is this “wrong” behavior? I don’t think so, but it is different for me and it interrupts my workflow.

Speaking of different, I’m also stuck with using a number of apps where I used to use one. I use the tool gscan2pdf constantly to scan in paper so I can shred and dispose of it. I have two scanners, a Brother ADS-3000N with the document feeder (works amazingly well under Linux) and a Canon LiDE 210 flatbed scanner. On OSX I ended up loading in two separate vendor-supplied applications to use them, and both of them feel really cluttered.

Plus, you would think an ecosystem like iOS would have a real mail client. One of the best mobile apps ever is K9 Mail, and I really miss it. I finally settled on Altamail, which has a yearly subscription but it was the only app that would easily handle nested folders. For example, I have a Customer folder with over 3000 subfolders. I can’t be scrolling through that on a mobile device. I don’t like it all that much, but it is the only option I could find.

Then there’s iTunes. Man, I used to think iTunes was a pig and now it is much, much worse. It took me longer than I would expect to get back to the interface I wanted (specifically, Songs with Browser View enabled). And, since I was playing around with a number of iTunes libraries, I ended up having to wipe the music off of my iPhone a couple of times since Apple won’t let you sync one device to more than one library.

There are some good things about iTunes, I specifically like the way you can sync playlists, but I’ve been happier with my free music managers.

One app I really do like on OSX is iMessage. I am not a good typist on mobile devices, and being able to send and respond to a text from the desktop is awesome. And nobody comes close to making a trackpad that works as well as those on Apple laptops.

And thus I became an Apple laptop guy. Before, I used two desktops, pretty much identical, one at home and one at the office, with my laptop reserved for trips. Now I had to make sure I brought my laptop between both places (no laptop “drive of shame” so far). It was nice to have all of my information in one place, but the downside is that I did have all of my information in one place, and it made the possible loss of my laptop that much worse.

I had resigned myself to being an Apple guy from here on out, but then I went on a business trip to Seattle where I used the laptop for several days and it was then I decided that I just couldn’t continue to use it.

The main issue that soured me was the keyboard. This was a 2017 model with one of those fancy “touch bar” thingies. Now everyone thinks that Apple is a great innovator, and in many cases they are, but the touch bar is something other companies have tried and discarded. I returned a Lenovo X1 Carbon laptop back in early 2014 that had one and they removed the feature from future editions. I use that top row of keys. I like having an escape key I can feel, and having real function keys is useful for things like games. Plus it is a lot easier to change the volume with an “up” or “down” key versus having to click on the volume icon and then use a slider.

But that wasn’t a deal breaker. When the “2” key started sticking, sometimes printing a character, sometimes printing many characters with one key press, and finally often not printing anything at all, I got discouraged, nay, depressed.

The issues with this generation of Apple keyboards are well known, but as I rarely use the keyboard on the laptop itself (I’m almost always connected to an external monitor and keyboard) I couldn’t believe it would get dirty enough to exhibit the issue that fast. Plus, the keyboard even when working just isn’t that good. I really miss the keyboard I had on my Powerbook.

This weekend when I got back home I decided to go back to Linux. I dragged my desktop out of the closet, booted it up, and decided to bring it up to date. During my hiatus a new version of Mint had been released, Mint 19, so I upgraded.

Man, that is one beautiful desktop. Seriously, I can’t remember using a nicer looking desktop environment on any platform. The tweaks the Mint team has made to Cinnamon have moved it from great to outstanding.

Please note that this is from my perspective. If you aren’t using Mint that doesn’t mean you suck or that your choices are wrong. The one thing I love most about the Linux desktop is that there exists a flavor for almost every taste and need.

It was as easy to move back to Mint from OSX as it was to move from it in the first place, so it has only cost me a few hours of time, mainly waiting for the upgrade to download on my slow connection at home. I also installed a fresh copy on my fifth generation Dell XPS 13 and was pleasantly surprised at how much better the new trackpad driver, libinput, works. That was the main complaint I had about my Linux laptop, and I’m eager to try it out when I am next on the road.

Moving back to Linux made me question my mobile O/S choice one more time. Searching around it looks like it is currently possible to run Pokémon Go on a custom ROM as long as it is not rooted, so I downloaded TWRP and LineageOS for my Pixel XL, as well as the “pico” version of Open GApps. I was thinking I could get back to, basically, my Copperhead environment with a minimal amount of Google and be happy.

Lineage Install Error

Bam, right out the door my phone started screaming about the phone driver not working. The memory of issues I experienced running alternative ROMs came flooding back, and I simply restored the Pixel to factory and decided to stay with my iPhone.

I feel much happier that I’ve gone back to Linux, at least part of the way. It should make it easier to go free on mobile as soon as the technology catches up. I’m very eagerly following the work of the /e/ foundation but as of yet they haven’t released any code. Plus it looks like they are going for an all-out Google replacement. I’m pretty happy running my own mail server and Nextcloud instances, so I’m more interested in a secure mobile device that can run apps from F-Droid versus a whole ecosystem replacement. Purism is also coming out with a phone. I really like the philosophy behind that company, but I’ve been stung by enough Kickstarters that I’m taking a wait and see attitude.

The problem with free and open source mobile will be the apps. As I mentioned, I was able to do 90% of what I needed using F-Droid, which bodes well for the /e/ solution but not so much for the Purism one, and both will face challenges with adoption.

Until then, feel free to Facetime me and check out my growing collection of chins.

Dealing with Docker Interfaces

We run a lot of instances of OpenNMS (‘natch) and lately we’ve seen issues with disk space being used up faster than expected.

We tracked the issue down to Docker. If Docker is running on a machine, SNMP will discover a Docker interface, usually labelled “docker0”. When that instance is stopped and restarted, or another Docker instance is created, another interface will be created. This will create a lot of RRD files of limited usefulness, so here is how to address it.

First, we want to tell OpenNMS not to discover those interfaces in the first place. This is done using a “policy” in the foreign source definition for the devices in question. Here is what it looks like in the webUI:

Skip Docker Interfaces Policy

The “SNMP Interface Policy” will match on various fields in the snmpinterface table in the database, including ifDescr. The regular expression will match any ifDescr that starts with the string “docker”, and the matching interface will not be persisted (added) to the database. This policy has only one parameter, so either “Match All Parameters” or “Match Any Parameter” will work.

If you want to use the command line, or have a lot of custom foreign source definitions, you can paste this into the proper file:

   <policies>
      <policy name="Ignore Docker interfaces" class="org.opennms.netmgt.provision.persist.policies.MatchingSnmpInterfacePolicy">
         <parameter key="action" value="DO_NOT_PERSIST"/>
         <parameter key="ifDescr" value="~^docker.*$"/>
         <parameter key="matchBehavior" value="ALL_PARAMETERS"/>
      </policy>
   </policies>

This will not deal with any existing interfaces, however. For that there are two steps: delete the interfaces from the database and delete them from the file system.
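If you want to sanity-check how many interfaces are involved before deleting anything, the same match can be run as a count first:

   select count(*) from snmpinterface where snmpifdescr like 'docker%';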

For the database, with OpenNMS stopped, access PostgreSQL (usually with psql -U opennms opennms) and run:

delete from ipinterface where snmpinterfaceid in (select id from snmpinterface where snmpifdescr like 'docker%');

and restart OpenNMS.

For the filesystem, navigate to where your RRDs are stored (usually /opt/opennms/share/rrd/snmp) and run:

find . -type d -name "docker*" -exec rm -r {} \;
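One note on that command: because find will try to descend into each directory right after removing it, you may see harmless “No such file or directory” warnings. Adding -prune keeps find from descending into the matched directories and avoids the noise:

   # same cleanup, but without the warnings about already-removed directories
   find . -type d -name "docker*" -prune -exec rm -r {} \;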

That should get rid of existing Docker interfaces, free up disk space and prevent new Docker interfaces from being discovered.

Open Source is Still Dead

Last week I attended the 20th O’Reilly Open Source Conference (OSCON), held this year in Portland, Oregon.

OSCON 20th Anniversary Sign

This is the premier open source conference in the US, if not the world, and it is rather well run. It is equal to if not better than a lot of proprietary technology conferences I’ve attended, perhaps because it is pretty much a proprietary software conference in itself. I found it a little ironic that the Wednesday morning keynotes started off with a short, grainy video clip where an open source geek shouts out “We’re starting a revolution!”.

I tried to find the source of that quote, and I thought it came from the documentary “Revolution OS“. That movie chronicles the early days of open source software in which the stated goal was to take back software from large companies like Microsoft. There is a famous quote by Eric S. Raymond where he replies to a person from Microsoft with the words “I’m your worst nightmare.” Microsoft is now a major sponsor of OSCON.

When I attended OSCON in 2014 I asked the question “Is Open Source Dead?” Obviously the open source development model has never been more alive, but I was thinking back to my early involvement with open source where the idea was to move control of software out of the hands of big companies like IBM and Microsoft and into the hands of the users. Back then the terms “open source” and “free software” were synonymous. It was obvious that open source operating systems, mainly Linux, would rule the world of servers, so the focus was on the desktop. No one in open source predicted the impact of mobile, and by extension, the “cloud”. Open source today is no more than a development model used mostly to help create proprietary software, usually provided as a subscription or a service over the network. I mean, it makes sense. Companies like Google, Facebook and Amazon wouldn’t exist today if it wasn’t for Linux. If they had to pay a license to Microsoft or Sun (now Oracle) for every server they deployed their business models simply wouldn’t work, and the use of open source for building the infrastructure for applications simply makes sense.

Please note that I am not trying to make any sort of value judgement. I am still a big proponent of free software, and there are companies like Red Hat, OpenNMS and Nextcloud that try to honor the original intention of open source. All of us, open and proprietary, benefit from the large amount of quality open source software being created these days. But I do mourn the end of open source as I knew it. It used to be that open source software was published with “restrictive” licenses like the GPL, whereas now the trend is to move to “permissive” licenses like the MIT or Apache licenses. This allows for the commercialization of open source software, which in turn creates an incentive for large software companies to get involved.

This trend was seen throughout OSCON. The “diamond” sponsors were companies like IBM, Microsoft, Amazon and Google. The main buzzword was “Kubernetes” (or “K8s” if you’re one of the cool kids) which is an open source orchestration layer for managing containers. Almost all of the expo companies were cloud companies that either used open source software to provide a platform for their applications or to create open source agents that would feed back to their proprietary cloud back-end.

I attended my first OSCON in 2009 as a speaker, and I was a speaker for several years after that. My talks were always well-attended, but then for several years none of my paper submissions were accepted. I thought I had pissed off one or more of the organizers (it happens) but perhaps my thoughts on open source software had just become outdated.

I still like going to the conference, even though I no longer attempt to submit a talk. When I used to speak I found I spent most of my time on the Expo floor so now I just try to schedule other business during the week of OSCON and I get a free “Expo only” pass. You also get access to the keynotes, so I was sure to be in attendance as the conference officially started.

OSCON Badge

My favorite keynote was the first one, by Suz Hinton from Microsoft. She is known for doing live coding on the streaming platform Twitch, and she did a live demonstration for her keynote. She used an Arduino to control a light sensor and a servo. When she covered the sensor, the servo would move and “wave” at the OSCON audience. It was a little hard to fight the cognitive dissonance of a Microsoft employee using a Mac to program an open hardware device, but it was definitely entertaining.

OSCON Suz Hinton

My second favorite talk was by Camille Eddy. As interactions between computers and humans become more automated, a number of biases are starting to appear. Google image search had a problem where it would label pictures of black people as “gorillas”. An African-American researcher at MIT named Joy Buolamwini found that a robot recognized her better if she wore a white mask. Microsoft had an infamous experiment where it created a Twitter bot named “Tay” that within 24 hours was making racist posts. While not directly related to open source, a focus on an issue that affects the user community is very much in the vein of classic open source philosophy.

OSCON Camille Eddy

The other keynotes were from Huawei, IBM and Amazon (when you are a diamond sponsor you get a keynote) and they focused more on how those large software companies were using the open source development model to, well, offset the cost of development.

OSCON Tim O'Reilly

The Wednesday keynotes closed with Tim O’Reilly, who talked about “Open Source and Open Standards in the Age of Cloud AI”. It kind of cemented the theme for me that open source had changed, and that the idea is now much more about tools development and open APIs than about creating user-owned software.

OSCON Expo Floor

The rest of my time was spent wandering the Expo floor. OSCON offers space to traditional open source projects in an area I usually refer to as the “Geek Ghetto”. This year it was split across either side of the main area, and I got to spend some time chatting with people from the Software Freedom Conservancy and the Document Foundation, among others.

OSCON Geek Ghetto

I enjoyed the conference, even if it was a little bittersweet. Portland is a cool town and the people around OSCON are cool as well. If I can combine the trip with other business, expect to find me there next year, wandering the Expo floor.

#Prodigal Customers

Growing up in the southern United States meant Sunday mornings were spent at Sunday School. One of the stories we would study was the Parable of the Prodigal Son. A man has two sons. The younger son asks for his inheritance in advance and he goes off and squanders it. When he returns, his father throws a big celebration to welcome him back.

I never really got the point of that story, as I always identified with the older, dutiful son, so it is surprising that it took working with OpenNMS for me to understand it.

We have great customers. Since we do little marketing, before we get a customer they have to first discover OpenNMS, then investigate it to see if it meets their needs, and only then do they contact us. It means that they are self-selecting, and without exception they are incredibly smart, physically beautiful and possessed of a wit so sharp they make Ginsu knives look dull. (grin)

The first company to ever buy an OpenNMS support subscription did so in December of 2001, and this year they renewed for the 17th time. It is a wonderful testament to the work of the team that they created something to inspire such a long commitment.

That said, we do lose a few customers each year. The first one I lost was a little heartbreaking. It was a hospital in Virginia, and when I called them to see if they would renew their support subscription they told me “no”. I was a little shocked, as I was unaware of any problems and they hadn’t opened tickets in a while, and they told me that was the point. They loved OpenNMS, but it “just worked”, so they saw no value in paying for support even though they were still using it.

A more common case for us losing a customer is that our “internal champion” leaves. OpenNMS is a complex and powerful tool, and it does take a while to climb the learning curve to see its full potential. If all of that knowledge is concentrated in one person, and that person leaves, their replacement can be overwhelmed and seek out something simpler, even if it is more expensive and less powerful.

I am always saddened when this happens, but lately we’ve been experiencing what I’m calling “Prodigal Customers”. These are customers who leave and come back.

Cartoon by Chad Essley http://www.cartoonmonkey.com

I love them, and always want to slaughter (figuratively) the fattened calf to welcome them back.

It’s hard to explain, but while it is wonderful to have someone use something you’ve created for almost two decades straight, it is almost more rewarding to have someone go and try something else and discover it doesn’t stack up. Heck, I’d love it if all our customers could try out every possible option, because those that then chose OpenNMS for their solution would truly recognize what an awesome platform it can be.

Being 100% open source, OpenNMS does not have any way to “lock in” a particular customer. You can use it with our services or without, but you always have access to the latest code. Thus choosing to use OpenNMS is a validation of the work we’ve put into it, and whether you are a long time customer, a new customer, or a “prodigal” customer, your preference to use OpenNMS makes all the work to create it worthwhile.

#2018 New Zealand Network Operators Group (NZNOG)

One thing that all open source projects struggle with is getting users. Most people in IT and software are overwhelmed with a plethora of information and options, and matching the right material to the right audience is a non-trivial problem.

Last year my friend Chris suggested that I speak at a Network Operators Group (NOG) meeting, specifically AusNOG. It was a lot of fun and I felt very comfortable among that crowd, so I decided to reach out to more NOGs to see if they would be interested in learning about OpenNMS.

The thing I like the most about NOGs is that they value getting things done above all else. While “getting things done” is still important with the free and open source crowd, there seems to be more philosophy and tribalism at those shows. “Oh, that’s written in PHP, it must suck” etc. As a “freetard” I live for the philosophical and social justice aspects of the community, but from a business standpoint it doesn’t translate well into paying customers.

At NOGs the questions are way more business-focused. Does it work? Is it supported? What does it cost? While I’m admittedly biased toward OpenNMS and its open source nature, the main reason I keep promoting it is that it just makes solid business sense for many companies to use it instead of their current solution.

Plus, these folks are pretty smart and entertaining while dispensing solid advice and knowledge.

Anyway, with that preamble, at AusNOG I learned about the New Zealand NOG (NZNOG) and submitted a talk. It got accepted and I found myself in Queenstown.

NZNOG Scenery

The main conference was spread out over two days, and like AusNOG it consisted of 30 to 45 minute talks in one track.

While I know it won’t work for a lot of conferences, I really like the “one track” format. It exposes me to things I wouldn’t have gone to otherwise, and if there is something I am simply not interested in learning about I can use that time to catch up on work or participate in the hallway track.

NZNOG Clare Curran

The conference started with a presentation by the Honorable Clare Curran, a newly minted Member of Parliament (they recently held elections in New Zealand). I’m slowly seeing politicians getting more involved in information technology conferences, which I think is a good thing, and I can only hope it continues. She spoke about a number of issues the government is facing with respect to communications technology.

Several things bother me about the US government, but one big one is the lack of understanding of the importance of access to the Internet at broadband speeds. Curran stated that “lack of reliable high-speed network access is a new measure of poverty”. Later in the day John Greenhough spoke on New Zealand’s Ultra-Fast Broadband (UFB) project, where on one slide broadband was defined as 20Mbps download speed.

NZNOG John Greenhough

Where I live in the US I am lucky to get 10Mbps and many of my neighbors are worse off, yet the government is ceding more of the decision-making process about where to build out new infrastructure to the telecommunications companies, which have zero incentive to improve my service. It’s wonderful to see a government realize the benefits of a connected populace and take steps to make it happen.

Because we all need Netflix, right? (grin)

There was a cool talk about how Netflix works, and I didn’t realize that they work with communications providers to deploy geographically distributed, low-latency solutions. This is done by supplying providers with caching content servers so that customers can access Netflix content while minimizing the need for lots of traffic over expensive backhaul links.

NZNOG Netflix RRD

I did find it cool that one of the bandwidth graphs presented was obviously done using RRDtool. I don’t know if they collected the data themselves or used something like OpenNMS, but I hope it was the latter.

With this push for ubiquitous network access come other concerns. New Zealand has a law called TICSA, the Telecommunications (Interception Capability and Security) Act, which requires network providers to intercept and store network traffic data for use by law enforcement.

NZNOG Lawful Intercept

I thought the requirements were pretty onerous, but I was told that the NZ government did set aside some funds to help providers with deploying solutions for collecting and storing this data (but I doubt it can cover the whole cost, especially over time). The new OpenNMS Drift telemetry project might be able to help with this.

NZNOG Aftab Siddiqui

There were a couple of talks I had seen in some form at AusNOG. The ever entertaining Aftab Siddiqui talked about MANRS (Mutually Agreed Norms for Routing Security) but unlike in Australia he was hard pressed to find good examples of violations. Part of that could be that New Zealand is much smaller than Australia, but I’m giving the NZ operators the credit for just doing a good job.

NZNOG NetNORAD

The Facebook folks were back to talk about their NetNORAD project. While I have a personal reluctance to deploy agents, there really isn’t a way to measure latency at the detail they want without them. I think it would be cool to be able to gather and manage the data created by this project under OpenNMS.
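
Just to illustrate the general idea (this is my own sketch, not anything from NetNORAD itself; the port number and payload format are made up), an agent-based latency probe can be quite small: a tiny UDP echo responder on every host, and a prober that timestamps each packet.

```python
# A rough sketch of the idea behind active probing agents: each host runs a
# small UDP responder, and probers timestamp packets to measure round-trip
# latency and loss. The port number and payload format are placeholders.
import socket
import time

PROBE_PORT = 31338  # placeholder port


def responder(port: int = PROBE_PORT) -> None:
    """Echo responder an agent would run on every host."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)


def probe(target: str, port: int = PROBE_PORT, count: int = 10,
          timeout: float = 1.0) -> None:
    """Send UDP probes to a responder and report loss and average RTT."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts = []
    for seq in range(count):
        start = time.monotonic()
        sock.sendto(f"probe-{seq}".encode(), (target, port))
        try:
            sock.recvfrom(2048)
            rtts.append((time.monotonic() - start) * 1000.0)
        except socket.timeout:
            pass  # no reply within the timeout counts as loss
    loss = 100.0 * (count - len(rtts)) / count
    if rtts:
        print(f"{target}: {loss:.0f}% loss, avg {sum(rtts) / len(rtts):.2f} ms")
    else:
        print(f"{target}: {loss:.0f}% loss, no replies")
```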

NZNOG Geoff Huston

What I like most about these NOG meetings is that I always learn something cool, and this one was no different. Geoff Huston gave a humorous talk on DNSSEC and handling DNS-based DDoS attacks. While I was somewhat familiar with DNSSEC, I was unaware of the NSEC part of it.

Most DNS DDoS attacks work by asking for non-existent names, and the overhead of processing those queries is what causes the denial of service. The names are usually randomly generated subdomains, such as jeff123.example.com, jeff234.example.com, etc. Since the DNS server doesn’t have the name in its cache, it has to ask another DNS server, which in turn won’t have it either, because it doesn’t exist.

The NSEC part of DNSSEC, when responding to a request for a non-existent name, also returns the next valid name in the zone. In the example above, if I ask for jeff123.example.com, the example.com DNS server can reply that the name doesn’t exist and, in addition, that the next valid name is www.example.com. If implemented correctly, the original DNS server should then never query for jeff234.example.com, since it knows it, too, doesn’t exist.
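
For the curious, here is a rough sketch of what that looks like on the wire, using the Python dnspython library (my own example, not something from Geoff’s talk; the jeff123.example.com name and the public resolver address are just placeholders).

```python
# A minimal sketch using the dnspython library (assumes the target zone is
# DNSSEC-signed). Query a non-existent name and print the NSEC/NSEC3 records
# that prove the range of names that do not exist.
import dns.message
import dns.query
import dns.rcode
import dns.rdatatype

# Placeholder name and public resolver address.
query = dns.message.make_query("jeff123.example.com", dns.rdatatype.A,
                               want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

print(dns.rcode.to_text(response.rcode()))  # expect NXDOMAIN

# The authority section of the NXDOMAIN answer carries the proof of
# non-existence, which a validating resolver can cache and reuse.
for rrset in response.authority:
    if rrset.rdtype in (dns.rdatatype.NSEC, dns.rdatatype.NSEC3):
        print(rrset)
```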

Pretty nifty.

NZNOG Rada Stanic

One talk I was eagerly awaiting was from Rada Stanic at Cisco. She also spoke at AusNOG, but I had to leave early and missed it. While she disrespected SNMP a little more than I liked (grin), her talk was on implementing newer streaming telemetry protocols for monitoring, typically delivered over transports such as gRPC. OpenNMS Drift will add this functionality to the platform. Our experience so far is that the device vendors’ implementations of these telemetry protocols leave something to be desired, but the approach does show promise.

NZNOG Ulf

It was nice being in New Zealand again, and our mascot Ulf seemed to be popular with the locals. Can’t imagine why.

#2018 Linuxconf Australia Sysadmin Miniconf

I just wanted to put up a quick post on my trip to Linuxconf Australia (LCA) being held this week in Sydney.

First, a little background. I’ve been curtailing my participation in free and open source software conferences for the last couple of years. It’s not that I don’t like them, quite the opposite, but my travel is funded by The OpenNMS Group and we just don’t get many customers from those shows. A lot of people are into FOSS for the “free” (as in gratis) aspect.

Contrast that with telcos and network operators, who tend to have the opposite viewpoint (if they aren’t spending a ton of money then they must be doing it wrong), and you can see why I’ve been spending more of my time focusing on that market.

Anyway, we have recently signed up a new partner in Australia, R-Group International, to help us work with clients in the Pacific Rim countries, and I wanted to come out to Perth and do some training with their team. Chris Markovic, their Technical Director (and “mobius” on the OpenNMS chat server), suggested I come out the week after LCA, so I asked the LCA team if they had room on their program for me to talk about OpenNMS. They offered me a spot on their Sysadmin Miniconf day.

Linuxconf Australia Sign

The conference is being held at the University of Technology, Sydney (UTS) and I have to say the conference hall for the Sysadmin track was one of the coolest, ever.

Linuxconf Australia - UTS Lecture Hall

The organizers grouped three presentations together dealing with monitoring: one on Icinga 2, one from Nagios and mine on OpenNMS. While I don’t know much about Icinga, I do know the people who maintain it and they are awesome. One might think OpenNMS would have an antagonistic relationship with other FOSS monitoring projects, but as long as they are pure FOSS (like Icinga and Zabbix) we tend to get along rather well. Plus I’m jealous that Icinga is used on the ISS.

Linuxconf Australia Icinga2 Talk

I think my talk went well. I only had 15 minutes, and for once I think I was a few seconds under that limit. While it wasn’t live-streamed it was up on YouTube very quickly, and you can watch it if you want.

I had to leave LCA to head to the New Zealand Network Operators Group (NZNOG) meeting, so I missed the main conference, but I am grateful the organizers gave me the opportunity to speak and I hope to return in the future.

Linuxconf Australia During a Break