#OSMC 2018 – Day 3: Hackathon

For several years now the OSMC has been extended by one day in the form of a “hackathon”. As I do not consider myself a developer I usually skip this day, but since I wanted to spend more time with Ronny Trommer and to explore the OpenNMS MQTT plugin, I decided to attend this year.

I’m glad I did, especially because the table where we sat was also home to Dave Kempe, and he brought Tim Tams from Australia:

OSMC 2018 Tim Tams

Yum.

You can find them in the US on occasion, but they aren’t as good.

I have been hearing about MQTT for several years now. According to Wikipedia, MQTT (Message Queuing Telemetry Transport) is a messaging protocol designed for connections with remote locations where a “small code footprint” is required or the network bandwidth is limited, thus making it useful for IoT devices.
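MQTT messages are published to slash-separated topics (like Home/Floor1/Kitchen/Gas_Sensor), and subscribers match them with wildcards: + matches exactly one level, # matches everything from that point down. Just to illustrate the matching rules (brokers like Mosquitto implement this internally; this bash sketch is mine, not part of any MQTT tooling):

```shell
#!/bin/bash
# Toy MQTT topic-filter matcher: '+' matches one level, '#' the remainder.
topic_matches() {
  local filter="$1" topic="$2"
  local -a f t
  IFS='/' read -r -a f <<< "$filter"
  IFS='/' read -r -a t <<< "$topic"
  local i
  for ((i = 0; i < ${#f[@]}; i++)); do
    case "${f[i]}" in
      '#') return 0 ;;                         # multi-level wildcard: match the rest
      '+') [ "$i" -lt "${#t[@]}" ] || return 1 ;; # single-level wildcard: a level must exist
      *)   [ "${f[i]}" = "${t[i]:-}" ] || return 1 ;;
    esac
  done
  [ "${#f[@]}" -eq "${#t[@]}" ]                # no wildcard: level counts must agree
}

topic_matches 'iot/#' 'iot/timtam/event/kitchen' && echo matched  # prints "matched"
```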

Dr. Craig Gallen has been working on a plugin to allow OpenNMS to consume MQTT messages, and I was eager to try it out. First, we needed an MQTT broker.

I found that the OpenHAB project supports an MQTT broker called Mosquitto, so we decided to go with that. This immediately created a discussion about the differences between OpenHAB and Home Assistant, the latter being a favorite of Dave’s. They looked comparable, but we decided to stick with OpenHAB because a) I already had an instance installed on a Raspberry Pi, and b) it is written in Java, which is probably why others prefer Home Assistant.

Ronny worked on getting the MQTT plugin installed while I created a dummy sensor in OpenHAB called “Gas”.

OSMC 2018 Hackathon

This involved creating a “sitemap” in /etc/openhab2:

sitemap opennms label="My home automation" {
    Frame label="Date" {
        Text item=Date
    }
    Frame label="Gas" {
        Text item=mqtt_kitchen_gas icon="gas"
    }
}

and then an item that we could manipulate with MQTT:

Number mqtt_kitchen_gas "Gas Level [%.1f]" {mqtt="<[mosquitto:Home/Floor1/Kitchen/Gas_Sensor:state:default]"}

Ronny then installed the MQTT plugin and added the following to its configuration to connect to our Mosquitto broker on OpenHAB:

<mqttclients>
  <client clientinstanceid="client1">
    <brokerurl>tcp://172.20.11.8:1883</brokerurl>
    <clientid>opennms</clientid>
    <connectionretryinterval>3000</connectionretryinterval>
    <clientconnectionmaxwait>20000</clientconnectionmaxwait>
    <topiclist>
      <topic qos="0" topic="iot/#"/>
    </topiclist>
    <username>openhabian</username>
    <password>openhabian</password>
  </client>
</mqttclients>

Now that we had a connection between our OpenHAB Mosquitto broker and OpenNMS, we could try to send information. The MQTT plugin handles both event information and data collection. To test both we used the mosquitto_pub command on the CLI.

For an event one can use something like this:

#!/bin/bash
mosquitto_pub -u openhabian --pw openhabian -t "iot/timtam" -m "{ \"name\": \"6114163\",  \"sensordatavalues\": [ { \"value_type\": \"Gas\", \"value\": \"$RANDOM\"  } ] }"

On the OpenNMS side you need to configure the MQTT plugin to look for it:

<messageEventParsers>
  <messageEventParser foreignSource="$topicLevels[5]" payloadType="JSON" compression="UNCOMPRESSED">
    <subscriptionTopics>
      <topic>iot/timtam/event/kitchen/mysensor/doorlock</topic>
    </subscriptionTopics>

    <xml-groups xmlns="http://xmlns.opennms.org/xsd/config/xml-datacollection">
      <xml-group name="timtam-mqtt-lab" resource-type="sensors" resource-xpath="/" key-xpath="@name">
        <xml-object name="instanceResourceID" type="string" xpath="@name"/>
        <xml-object name="gas" type="gauge" xpath="sensordatavalues[@value_type='Gas']/value"/>
      </xml-group>
    </xml-groups>
    <ueiRoot>uei.opennms.org/plugin/MqttReceiver/timtam/kitchen</ueiRoot>
  </messageEventParser>
</messageEventParsers>
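A note on the foreignSource="$topicLevels[5]" expression above: the plugin derives the foreign source from one level of the incoming topic. To see which index lands on which level you can just split the topic yourself (the splitting here is mine, for illustration; whether the plugin counts levels from 0 or 1 is worth confirming in its documentation):

```shell
#!/bin/bash
# Split an MQTT topic on '/' and print each level with its index.
topic='iot/timtam/event/kitchen/mysensor/doorlock'
IFS='/' read -r -a levels <<< "$topic"
for i in "${!levels[@]}"; do
  echo "level $i: ${levels[i]}"
done
# level 0: iot, level 1: timtam, ... level 5: doorlock
```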

Note how Ronny worked our Tim Tam obsession into the configuration.

To make this useful, you would want to configure an event definition for the event with the Unique Event Identifier (UEI) of uei.opennms.org/plugin/MqttReceiver/timtam/kitchen:

<events xmlns="http://xmlns.opennms.org/xsd/eventconf">
  <event>
    <uei>uei.opennms.org/plugin/MqttReceiver/timtam/kitchen</uei>
    <event-label>MQTT: Timtam kitchen lab event</event-label>
    <descr>This is our Timtam kitchen lab event</descr>
    <logmsg dest="logndisplay">
      All the parameters: %parm[all]%
    </logmsg>
    <severity>Normal</severity>
    <alarm-data reduction-key="%uei%:%dpname%:%nodeid%:%interface%:%service%" alarm-type="1" auto-clean="false"/>
  </event>
</events>

Once we had that working, the next step was to use the MQTT plugin to collect performance data from the messages. We used this script:

#!/bin/bash
# Publish a random gas reading every ten seconds.
while true
do
  mosquitto_pub -u openhabian --pw openhabian -t "Home/Floor1/Kitchen/Gas_Sensor" -m "{ \"name\": \"6114163\",  \"sensordatavalues\": [ { \"value_type\": \"Gas\", \"value\": \"$RANDOM\"  } ] }"
  sleep 10
done

This will create a message including a random number every ten seconds.

To have OpenNMS look for it, the MQTT configuration is:

<messageDataParsers>
  <messageDataParser foreignSource="$topicLevels[5]" payloadType="JSON" compression="UNCOMPRESSED">
    <subscriptionTopics>
      <topic>iot/timtam/metric/kitchen/mysensor/gas</topic>
    </subscriptionTopics>
    <xml-groups xmlns="http://xmlns.opennms.org/xsd/config/xml-datacollection">
      <xml-group name="timtam-kitchen-sensor" resource-type="sensors" resource-xpath="/" key-xpath="@name">
        <xml-object name="instanceResourceID" type="string" xpath="@name" />
<xml-object name="gas" type="gauge" xpath="sensordatavalues[@value_type='Gas']/value"/>
      </xml-group>
    </xml-groups>
    <xmlRrd step="10">
      <rra>RRA:AVERAGE:0.5:1:20160</rra>
      <rra>RRA:AVERAGE:0.5:12:14880</rra>
      <rra>RRA:AVERAGE:0.5:288:3660</rra>
      <rra>RRA:MAX:0.5:288:3660</rra>
      <rra>RRA:MIN:0.5:288:3660</rra>
    </xmlRrd>
  </messageDataParser>
</messageDataParsers>

This will store the values in an RRD file which can then be graphed within OpenNMS or through Grafana with the Helm plugin.
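Each <rra> line above defines one archive, and its reach is simple arithmetic: retention = step × steps-per-row × rows. A quick check of those archives (plain bash, nothing RRD-specific):

```shell
#!/bin/bash
# Retention of each RRA from the config above: step (10s) * steps-per-row * rows.
step=10
for rra in AVERAGE:0.5:1:20160 AVERAGE:0.5:12:14880 AVERAGE:0.5:288:3660; do
  IFS=':' read -r cf xff steps rows <<< "$rra"
  row_seconds=$(( step * steps ))
  days=$(( row_seconds * rows / 86400 ))
  echo "RRA:$rra -> one $cf row per ${row_seconds}s, roughly $days day(s) kept"
done
```

So the 1-step archive holds a bit over two days of 10-second samples, while the 288-step archives keep 48-minute consolidations for about four months.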

It was pretty straightforward to get the OpenNMS MQTT plugin working. While I’ve focused mainly on what was accomplished, it was a lot of fun chatting with others at our table and in the room. As usual, Netways did a great job with organization and I think everyone had fun.

Plus, I got to be reminded of all the amazing stuff being done by the OpenNMS team, and how the view is great up here while standing on the shoulders of giants like Ronny and Craig.

#OSMC 2018 – Day 2

Despite how long the Tuesday night festivities lasted, quite a few people managed to make the first presentation on Wednesday morning. I’m old so I had gone to bed fairly early and was able to see “Make IT Monitoring Ready for Cloud-native Systems” bright and early.

OSMC 2018 RealOpInsight

This presentation focused on a project called RealOpInsight. This seems to be a sort of “Manager of Managers” for multiple monitoring applications, and I didn’t really see a “cloud-native” focus in the presentation. It is open-source so if you find yourself running many instances of disparate monitoring platforms you may find RealOpInsight useful.

This was followed by a presentation from Uber.

OSMC 2018 Uber

One can imagine the number of metrics an organization like Uber collects (and I did refrain from making snarky comments like “what database do you use to track celebrities?” and “where do you count the number of assaults by Uber drivers?”). Rob Skillington seemed pretty cool and I didn’t want to put him on the spot.

Uber used to use Cassandra, which is a storage option for OpenNMS, but they found when they hit around 80,000 metrics per second the system couldn’t keep up (one of the largest OpenNMS deployments is 20,000 metrics/sec so 80K is a lot). Their answer was to create a new storage system called M3DB. While it seems pretty impressive, I did ask some questions about how mature it was because at OpenNMS we are always looking out for ways to make things easier for our users, and Rob admitted that while it works well for Uber it needs some work to be generally useful, which is why they open-sourced it. We’ll keep an eye on it.

The next time slot was the “German only” one I mentioned in my last post, so I engaged in the hallway track until lunch.

OSMC 2018 Rihards Olups

It was lovely to see Rihards Olups again. We met at the first OSMC I attended when he was part of the “Latvian Army” at Zabbix. He gave an entertaining talk on dealing with the alerts from your monitoring system, and he ended with the tag line “Make Alerts Meaningful Again (MAMA)”. Seems like a perfect slogan for a ball cap, preferably in red.

OSMC 2018 Dave Kempe

Another delightful human being I got to see was Dave Kempe, who came all the way from Sydney. While we had met at a prior OSMC, this conference we ended up spending a lot more time together (he was in the Prometheus training as well as the Thursday Hackathon). He gave a talk on being a monitoring consultant, and it was interesting to compare his experiences with my own (they were similar).

For most people the conference ended on Wednesday. I said goodbye to people like Peter Eckel and looked forward to the next OSMC so I could see them again.

Speaking of the next OSMC, we are going to be doing OpenNMS training on that first day, November 4th, so save the date. It is the least we could do since they went to the trouble to advertise OpenNMS Horizon® on all their posters (grin).

OSMC 2018 Horizon

Ronny and I were hanging around for the Hackathon on Thursday, and for those attendees there was a nice dinner at a local restaurant called Tapasitos. It was fun to spend more time with the OSMC gang and to get ready for our last day at the conference.

OSMC 2018 Tapasitos

#OSMC 2018 – Day 1

The 2018 Open Source Monitoring Conference officially got started on Tuesday. This was my fifth OSMC (based on the number of stars on my badge), although I am happy to have been at the very first OSMC conference with that name.

As usual our host and Master of Ceremonies Bernd Erk started off the festivities.

OSMC 2018 Welcome

This year there were three tracks of talks. Usually there are two, and I’m not sure how I feel about more tracks. Recently I have been attending Network Operator Group (NOG) meetings, which are usually one or two days long but have only one track. I like that, as I get exposed to things I normally wouldn’t. One of my favorite open source conferences, All Things Open, has gotten so large that it is unpleasant to navigate the schedule.

In the case of the OSMC, having three tracks was okay, but I still liked the two track format better. One presentation was always in English, although one of the first things Bernd mentioned in his welcome was that Mike Julian was unable to make it for his talk on Wednesday and thus that time slot only had two German language talks.

If they seem interesting I’ll sit in on the German talks, especially if Ronny is there to translate. I am very interested in open source home automation (well, more on the monitoring side than, say, turning lights on and off) so I went to the OpenHAB talk by Marianne Spiller.

OSMC 2018 OpenHAB

I found out that there are mainly two camps in this space: OpenHAB and Home Assistant. The former is written in Java, which seems to attract some Java hate, but since I was going to use OpenHAB for our MQTT Hackathon on Thursday I thought I would listen in.

OSMC 2018 Custom MIB

I also went to a talk on using a Python library for instrumenting your own SNMP MIB by Pieter Hollants. We have a drink vending machine that I monitor with OpenNMS. Currently I just output the values to a text file and scrape them via HTTP, but I’d like to propose a formal MIB structure and implement it via SNMP. Pieter’s work looks promising and now I just have to find time to play with it.

Just after lunch I got a call that my luggage had arrived at the hotel. Just in time because otherwise I was going to have to do my talk in the Icinga shirt Bernd gave me. Can’t have that (grin).

My talk was lightly attended, but the people who did come seemed to enjoy it. It was one of the better presentations I’ve created lately, and the first comment was that the talk was much better than the title suggested. I was trying to be funny when I used “OpenNMS Geschäftsbericht” (OpenNMS Annual Report) in my submission. It’s funny because I speak very little German, although it was accurate since I was there to present on all of the cool stuff that has happened with OpenNMS in the past year. It was recorded so I’ll post a link once the videos are available.

In contrast, Bernd’s talk on the current state of Icinga was standing room only.

OSMC 2018 State of Icinga

The OSMC has its roots in Nagios and its fork Icinga, and most people who come to the OSMC are there for Icinga information. It is easy to see why this talk was so popular (even though it was basically “Icinga Geschäftsbericht” – sniff). The cool demo was an integration Bernd did using IBM’s Node-RED, Telegram and an Apple Watch, but unfortunately it didn’t work. I’m hoping we can work up an Apple Watch/OpenNMS integration by next year’s conference (it should be possible to add hooks to the Watch from the iOS version of Compass).

The evening event was held at a place called Loftwerk. It was some distance from the conference so a number of buses were chartered to take us there. It was fun if a bit loud.

OSMC 2018 Loftwerk

OSMC celebrations are known to last into the night. The bar across the street from the conference hotel (which I believe has changed hands at least three times in the lifetime of the OSMC) becomes “Checkpoint Jenny” once the main party ends and can go on until nearly dawn, which is why I like to speak on the first day.

#OSMC 2018 – Day 0: Prometheus Training

To most people, monitoring is not exciting, but it seems lately that the most exciting thing in monitoring is the Prometheus project. As a project endorsed by the Cloud Native Computing Foundation, Prometheus is getting a lot of attention, especially in the realm of cloud applications and things like monitoring Kubernetes.

At this year’s Open Source Monitoring Conference they offered a one day training course, so I decided to take it to see what all the fuss was about. I apologize in advance that a lot of this post will be comparing Prometheus to OpenNMS, but in case you haven’t guessed I’m biased (and a bit jealous of all the attention Prometheus is getting).

The class was taught by Julien Pivotto who is both a Prometheus user and a decent instructor. The environment consisted of 15 students with laptops set up on a private network to give us something to monitor.

Prometheus is written in Go (I’m never sure if I should call it “Go” or if I need to say “Golang”) which makes it compact and fast. We installed it on our systems by downloading a tarball and simply executing the application.

Like most applications written in the last decade, the user interface is accessed via a browser. The first thing you notice is that the UI is incredibly minimal. At OpenNMS we get a lot of criticism of our UI, but the Prometheus interface is one step above the Google home page. The main use of the web page is for querying collected metrics, and a lot of the configuration is done by editing YAML files from the command line.
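To give a flavor of those queries, an expression like the following (PromQL; the metric name assumes the node_exporter is present) returns the per-second rate of non-idle CPU time over the last five minutes:

```
rate(node_cpu_seconds_total{mode!="idle"}[5m])
```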

Once Prometheus was installed and running, the first thing we looked at was monitoring Prometheus itself. There is no real magic here. Metrics are exposed via a web page that simply lists the variables available and their values. The application will collect all of the values it finds and store them in a time series database called simply the TSDB.

The idea of exposing metrics on a web page is not new. Over a decade ago we at OpenNMS were approached by a company that wanted us to help them create an SNMP agent for their application. We asked them why they needed SNMP and found they just wanted to expose various metrics about their app to monitor its performance. Since it ran on a Linux system with an embedded web server, we suggested that they just write the values to a file, put that in the webroot, and we would use the HTTP Collector to retrieve and store them.

The main difference between that method and Prometheus is that the latter expects the data to be presented in a particular format, whereas the OpenNMS method was more free-form. Prometheus will also collect all values presented without extra configuration, whereas you’ll need to define the values of interest within OpenNMS.
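For reference, that expected format is the Prometheus text exposition format, which looks something like this (a hand-typed sample with invented values; the metric names are of the kind node_exporter exposes):

```
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.27
# HELP node_network_receive_bytes_total Network device statistic receive_bytes.
# TYPE node_network_receive_bytes_total counter
node_network_receive_bytes_total{device="eth0"} 6.4e+07
```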

In Prometheus there is no real auto-discovery of devices. You edit a file in which you create a “job” (in our case the job was called “Prometheus”) and then you add “targets” based on IP address and port. As we learned in the class, each different source of metrics usually has its own port: Prometheus exposes its own stats on port 9090, node data is exposed on port 9100 via the node_exporter, etc. When there is an issue, this can be reflected in the status of the job. For example, if we added all 15 Prometheus instances to the job “Prometheus” and one of them went down, then the job itself would show as degraded.
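The job and target definitions live in prometheus.yml. A minimal sketch (the job names and addresses here are invented for illustration):

```yaml
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node
    static_configs:
      - targets: ['172.20.11.8:9100', '172.20.11.9:9100']
```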

After we got Prometheus running, we installed Grafana to make it easier to display the metrics that Prometheus was capturing. This is a common practice these days and a good move since more and more people are becoming familiar with it. OpenNMS was the first third-party datasource created for Grafana, and the Helm application brings bidirectional functionality for managing OpenNMS alarms and displaying collected data.

After that we explored various “components” for Prometheus. While a number of applications expose their data in a format that Prometheus can consume, there are also components you can install, such as the node_exporter, which exposes server-related metrics and provides data that isn’t otherwise natively available.

The rest of the class was spent extending the application and playing with various use cases. You can use the “alertmanager” component to trigger various actions based on the status of metrics within the system.

One thing I wish we could have covered was the “push” aspect of Prometheus. Modern monitoring is moving from a “pull” model (i.e. SNMP) to a “push” model where applications simply stream data into the monitoring system. OpenNMS supports this type of monitoring through the telemetryd feature, and it would be interesting to see if we could become a sink for the Prometheus push format.

Overall I enjoyed the class but I fail to see what all the fuss is about. It’s nice that developers are exposing their data via specially formatted web pages, but OpenNMS has had the ability to collect data from web pages for over a decade, and I’m eager to see if I can get the XML/JSON collector to work with the native format of Prometheus. Please don’t hate on me if you really like Prometheus – it is 100% open source and if it works for you then great – but for something to manage your entire network (including physical servers and especially networking equipment like routers and switches) you will probably need to use something else.

[Note: Julien reached out to me and asked that I mention the SNMP_Exporter which is how Prometheus gathers data from devices like routers and switches. It works well for them and they are actively using it.]

#OSMC 2018 – Day -1

The annual Open Source Monitoring Conference (OSMC) held in Nürnberg, Germany each year brings together pretty much everyone who is anyone in the free and open source monitoring space. I really look forward to attending, and so do a number of other people at OpenNMS, but this year I won the privilege, so go me.

The conference is a lot of fun, which must be the reason for the hell trip to get here this year. Karma must be trying to bring things into balance.

As an American Airlines frequent flier whose home airport is RDU, most of my trips to Europe involve Heathrow airport (American has a direct flight from RDU to LHR that I’ve taken more times than I can count).

I hate that airport with the core of my being, and try to avoid it whenever possible. While I could have taken a flight from LHR directly to Nürnberg on British Airways, I decided to fly to Philadelphia and take a direct American flight to Munich. It is just about two hours by train from MUC to Nürnberg Hbf and I like trains, so combine that with getting to skip LHR and it is a win/win.

But it was not to be.

I got to the airport and watched as my flight to PHL got delayed further and further. Chris, at the Admiral’s Club desk, was able to re-route me, but that meant a flight through Heathrow (sigh). Also, the Heathrow flight left five hours later than my flight to Philadelphia, and I ended up waiting it out at the airport (Andrea had dropped me off and I didn’t want to ask her to drive all the way back to get me just for a couple of hours).

Because of the length of this trip I had to check a bag, and I had a lot of trepidation that my bag would not be re-routed properly. Chris even mentioned that American had actually put it on the Philadelphia flight but he had managed to get it removed and put on the England flight, and American’s website showed it loaded on the plane.

That also turns out to be the last record American has on my bag, at least on the website I can access.

American Tracking Website

The flight to London was uneventful. American planes tend to land at Terminal 3 and most other British Airways planes take off from Terminal 5, so you have to make your way down a series of long corridors and take a bus to the other terminal. Then you have to go through security, which is usually when my problems begin.

I wear contact lenses, and since my eyes tend to react negatively to the preservatives found in saline solution I use a special, preservative-free brand of saline. Unfortunately, it is only available in 118ml bottles. As most frequent fliers know, the limit for the size of liquid containers for carry on baggage is 100ml, although the security people rarely notice the difference. When they do I usually just explain that I need it for my eyes and I’m allowed to bring it with me. That is, everywhere except Heathrow airport. Due to the preservative-free nature of the saline I can’t move it to another container for fear of contamination.

Back in 2011 was the first time that my saline was ever confiscated at Heathrow. Since then I’ve carried a doctor’s note stating that it is “medically necessary”, but even then I had it confiscated once, a few years later at LHR, because the screener didn’t like the fact that my note was almost a year old. That said, I have gone through that airport many times with no one noticing the slightly larger size of my saline bottle, but on this trip it was not to be.

When your carry on items get tagged for screening at Heathrow’s Terminal 5, you kind of wait in a little mob of people for the one person to methodically go through your stuff. Since I had several hours between flights it was no big deal for me, but it is still very annoying. Of course when the screener got to my items he was all excited that he had stopped the terrorist plot of the century by discovering my saline bottle was 18ml over the limit, and he truly seemed disappointed when I produced my doctor’s note, freshly updated as of August of this year.

Screeners at Heathrow are not imbued with much decision making ability, so he literally had to take my note and bottle to a supervisor to get it approved. I was then allowed to take it with me, but I couldn’t help thinking that the terrorists had won.

The rest of my stay at the world’s worst airport was without incident, and I squeezed into my window seat on the completely full A319 to head to Munich.

Once we landed I breezed through immigration (Germans run their airports a bit more efficiently than the British) and waited for my bag. And waited. And waited.

When I realized it wouldn’t be arriving with me, I went to look for a BA representative. The sign said to find them at the “Lost and Found” kiosk, but the only two kiosks in the rather small baggage area were not staffed. I eventually left the baggage area and made my way to the main BA desk, where I managed to meet Norbert. After another 15 minutes or so, Norbert brought me a form to fill out and promised that I would receive an e-mail and a text message with a “file number” to track the status of my bag.

I then found the S-Bahn train which would take me to the Munich Hauptbahnhof where I would get my next train to Nürnberg.

I had made a reservation for the train to ensure I had a seat, but of course that was on the 09:55 train which I would have taken had I been on the PHL flight. I changed that to a 15:00 train when I was rerouted, and apparently one change is all you get with Deutsche Bahn, but Ronny had suggested I buy a “Flexpreis” ticket so I could take any train from Munich to Nürnberg that I wanted. I saw there were a number of “Inter-City Express (ICE)” trains available, so I figured I would just hop on the first one I found.

When I got to the station I saw that a train was leaving from Platform (Gleis) 20 at 15:28. It was now 15:30 so I ran and boarded just before it pulled out of the station.

It was the wrong train.

Well, not exactly. There are a number of types of trains you can take. The fastest are the ICE trains that run non-stop between major cities, but there are also “Inter-City (IC)” trains that make more stops. I had managed to get on a “Regionalbahn (RB)” train, which makes many, many stops, turning my one-hour trip into three.

(sigh)

The man who took my ticket was sympathetic, and told me to get off at Ingolstadt and switch to an ICE train. I was chatting on Mattermost with Ronny most of this time, and he was able to verify the proper train and platform I needed to take. That train was packed, but I ended up sitting with some lovely people who didn’t mind chatting with me in English (I so love visiting Germany for this reason).

So, about seven hours later than I had planned I arrived at my hotel, still sans luggage. After getting something to eat I started the long process of trying to locate my bag.

I started on Twitter. Both the people at American and British Airways asked me to DM them. The AA folks said I needed to talk with the BA folks and the BA folks still have yet to reply to me. Seriously BA, don’t reach out to me if you don’t plan to do anything. It sets up expectations you apparently can’t meet.

Speaking of not doing anything, my main issue was that I need a “file reference” in order to track my lost bag, but despite Norbert’s promise I never received a text or e-mail with that information. I ended up calling American, and the woman there was able to tell me that she showed the bag was in the hands of BA at LHR. That was at least a start, so she transferred me to BA customer support, who in turn transferred me to BA delayed baggage, who told me I needed to contact American.

(sigh)

As calmly as I could, I reiterated that I started there, and then the BA agent suggested I visit a particular website and complete a form (similar to the one I did for Norbert I assume) to get my “file reference”. After making sure I had the right URL I ended the call and started the process.

I hit the first snag when trying to enter in my tag number. As you can see from the screenshot above, my tag number starts with “600” and is ten digits long. The website expected a tag number that started with “BA” followed by six digits, so my AA tag was not going to work.

BA Tracking Website - wrong number

But at least this website had a different number to call, so I called it and explained my situation once again. This agent told me that I should have a different tag number, and after looking around my ticket I did find one in the format they were after, except starting with “AA” instead of “BA”. Of course, when I entered that in I got an error.

BA Tracking Website - error

After I explained that to the agent I remained on the phone for about 30 minutes until he was able to, finally, give me a file reference number. At this point I was very tired, so I wrote it down and figured I would call it a night and go to sleep.

But I couldn’t sleep, so I tried to enter that number into the BA delayed bag website. It said it was invalid.

(sigh)

Then I got a hint of inspiration and decided to enter in my first name as my last, and voila! I had a missing bag record.

BA Tracking Website - missing bag

That site said they had found my bag (the agent on the phone had told me it was being “traced”) and it also asked me to enter in some more information about it, such as the brand of the manufacturer.

BA Tracking Website - information required

Of course when I tried to do that, I got an error.

BA Tracking Website - system error

Way to go there, British Airways.

Anyway, at that point I could sleep. As I write this the next morning nothing has been updated since 18:31 last night, but I hold out hope that my bag will arrive today. I travel a lot so I have a change of clothes with me along with all the toiletries I need to not offend the other conference attendees (well, at least with my hygiene), but I can’t help but be soured on the whole experience.

This year I have spent nearly US$20,000 with American Airlines (they track that for me on their website). I paid them for this ticket and they really could have been more helpful instead of just washing their hands and pointing their fingers at BA. British Airways used to be one of the best airlines on the planet, but lately they seem to have turned into Ryanair, but without that airline’s level of service. The security breach that exposed the personal information of their customers, stories like this recent issue with a flight from Orlando, and my own experience this trip have really put me off flying them ever again.

Just a hint BA – from a customer service perspective – when it comes to finding a missing bag all we really want (well, besides the bag) is for someone to tell us they know where it is and when we can expect to get it. The fact that I had to spend several hours after a long trip to get something approximating that information is a failure on your part, and you will lose some if not all of my future business because of it.

I also made the decision to further curtail my travel in 2019, because frankly I’m getting too old for this crap.

So, I’m now off to shower and to get into my last set of clean clothes. Here’s hoping my bag arrives today so I can relax and enjoy the magic that is the OSMC.

#UKNOF41

I love tech conferences, especially when I get to be a speaker. Nothing makes me happier than to be given a platform to run my mouth.

For the last year or so I’ve been attending various Network Operators Group (NOG) meetings, and I recently got the opportunity to speak at the UK version, which they refer to as a Network Operators Forum (UKNOF). It was a lot of fun, so I thought I’d share what I learned.

UKNOF41 was held in Edinburgh, Scotland. I’d never been to Scotland before and I was looking forward to the visit, but Hurricane Florence required me to return home early. I ended up spending more time in planes and airports than I did in that city, and totally missed out on both haggis and whisky (although I did drink an Irn-Bru). I arrived Monday afternoon and met up with Dr. Craig Gallen, the OpenNMS Project representative in the UK. We had a nice dinner and then got ready for the meeting on Tuesday.

Like most NOG/NOF events, the day consisted of one track and a series of presentations of interest to network operators. I really like this format. The presentations tend to be relatively short and focused, and this exposes you to concepts you might have missed if there were multiple tracks.

UKNOF is extremely well organized, particularly from a speaker’s point of view. There was a ton of information on what to expect and how to present your slides, and everything was run from a single laptop. While this did mean your slides were due early (instead of, say, being written on the plane or train to the conference) it did make the day flow smoothly. The sessions were recorded, and I’ll include links to the presentations and the videos in the descriptions below.

UKNOF41 - Keith Mitchell

The 41st UKNOF was held at the Edinburgh International Conference Centre, located in a newer section of the city, and it was a pretty comfortable facility in which to hold a conference. Keith Mitchell kicked off the day with the usual overview of the schedule and events (slides), and then we got right into the talks.

UKNOF41 - Kurtis Lindqvist

The first talk was from Kurtis Lindqvist, who works for LINX, the London Internet Exchange (video|slides). LINX deployed a fairly new technology called EVPN (Ethernet VPN). EVPN is “a multi-tenant BGP-based control plane for layer-2 (bridging) and layer-3 (routing) VPNs. It’s the unifying L2+L3 equivalent of the traditional L3-only MPLS/VPN control plane.” I can’t say that I understood 100% of this talk, but the gist is that EVPN allows for better use of available network resources, which allowed LINX to lower its prices considerably.

UKNOF41 - Neil McRae

The next talk was from Neil McRae from BT (video|slides). While this was my first UKNOF I quickly identified Mr. McRae as someone who is probably very involved with the organization as people seemed to know him. I’m not sure if this was in a good way or a bad way (grin), probably a mixture of both, because being a representative from such a large incumbent as BT is bound to attract attention and commentary.

I found this talk pretty interesting. It was about securing future networks using quantum key distribution. Current encryption, such as TLS, is based on public-key cryptography. The security of public-key cryptography is predicated on the idea that it is difficult to factor large numbers. However, quantum computing promises several orders of magnitude more performance than traditional binary systems, and the fear is that at some point in the future the mathematically complex operations that make things like TLS work will become trivial. This presentation talked about some of the experiments that BT has been undertaking with quantum cryptography. While I don’t think this is going to be an issue in the next year or even the next decade, assuming I stay healthy I expect it to be an issue in my lifetime. It is good to know that people are working on solving it.

At this point I’d like to offer one minor criticism. Both of the presenters thus far were obviously using slide decks created for a purpose other than UKNOF. I don’t have a huge problem with that, but it did bother me a little. As a speaker I always consider the opportunity to speak to be a privilege. While I joke about writing the slides on the way to the conference, I do put a lot of time into my presentations, and even if I am reusing material from other decks I make sure to customize it for that particular conference. Ultimately what is important is the content and not the deck itself, and perhaps UKNOF is a little more casual than other such meetings, but it still struck me as, well, rude to skim through a whole bunch of slides to fit the time slot and the audience.

UKNOF41 - Julian Palmer

After a break the next presentation was from Julian Palmer of Corero (video|slides). Corero is a DDoS protection and mitigation company, which I assume means they compete with companies such as Cloudflare. I am always fascinated by the actions of those trying to break into networks and those trying to defend them, so I really enjoyed this presentation. It was interesting to see how much larger DDoS attacks have grown over time and even more surprising to see how network providers can deal with them.

UKNOF41 - Stuart Clark

This was followed by Stuart Clark from Cisco DevNet giving a talk on using “DevOps” technologies for network configuration (video|slides). This is a theme I’ve seen at a number of NOG conferences: let’s take the configuration management tools designed for servers and apply them to networking gear. It makes sense, and it is interesting to note that the underlying technologies of the two have become so similar that using these tools actually works. I can remember a time when accessing network gear required proprietary software running on Solaris or HP-UX. Now, with Linux (and Linux-like) operating systems underpinning almost everything, it has become much easier to adapt a tool like Ansible to work on routers as well as servers.
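To make that concrete, here is a minimal sketch of what such a playbook might look like. Everything here is illustrative rather than from the talk: the `edge_routers` inventory group and the addresses are made up, and it assumes the `cisco.ios` and `ansible.netcommon` collections are installed.

```yaml
# Hypothetical playbook: manage router config with the same tool used for servers.
# The edge_routers group and NTP addresses are illustrative assumptions.
- name: Ensure NTP servers on edge routers
  hosts: edge_routers
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Push NTP configuration lines
      cisco.ios.ios_config:
        lines:
          - ntp server 192.0.2.1
          - ntp server 192.0.2.2
```

The point of the pattern is that the router is treated like any other host in inventory, with idempotent config pushes instead of hand-typed CLI sessions.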

It was my turn after Mr. Clark spoke. My presentation covered some of the new stuff we have released in OpenNMS, specifically things like the Minion and Drift, as well as a few of the newer things on which we are actively working (video|slides). I’m not sure how well it was received, but a number of people came up to me afterward and said they enjoyed it. During the question and answer session Mr. McRae did state something that bothered me. He said, basically, that the goal of network monitoring should be to get rid of people. I keep hearing that, especially from large companies, but I have to disagree. Technology is moving too fast to ever get rid of people. In just half a day I was introduced to technologies such as EVPN and quantum key distribution, not to mention the ever-morphing realm of DDoS attacks, and there is just no way monitoring software will ever evolve fast enough to cover everything new just to get rid of people.

Instead, we should be focusing on enabling the people who do monitoring to do a great job. Eliminate the drudgery and give them the tools they need to deal with the constant changes in the networking space. I think it is a reasonable goal to use tools to reduce the need to hire more and more people for monitoring, but getting rid of them altogether does not seem likely, nor should we focus on it.

I was the last presentation before lunch (so I finished on time, ‘natch).

UKNOF41 - Chris Russell

The second half of the conference began with a presentation by Chris Russell (video|slides). The title was “Deploying an Atlas Probe (the Hard Way)”, which is kind of funny. RIPE NCC is the Regional Internet Registry for Europe, and it has a program for deploying hardware probes to measure network performance. What’s funny is that you just plug them in. Done. While this presentation did include deploying an Atlas probe, it was really about splitting out a network and converting it to IPv6. IPv6 is the future (it is supported by OpenNMS), but in my experience organizations are migrating from IPv4 very slowly (the word “glacier” comes to mind). Sometimes it takes a strong use case to justify the trouble, and this presentation was an excellent case study of why to do it and the pitfalls involved.

UKNOF41 - Andrew Ingram

Speaking of splitting out networks, the next presentation dealt with a similar situation. Andrew Ingram from High Tide Consulting presented on a company that acquired another company, then almost immediately spun it back out (video|slides). He was brought in to handle the challenge of separating a partially combined network in a very short amount of time with minimal downtime.

I sat next to Mr. Ingram for most of the conference and learned this was his first time presenting. I thought he did a great job. He sent me a note after the conference saying he has “managed to get OpenNMS up and running in Azure with an NSG (Network Security Gateway) running in front for security and a Minion running on site. It all seams to be working very nicely.”

Cool.

UKNOF41 - Sara Dickinson

The following presentation would have to be my favorite of the day. Given by Sara Dickinson of Sinodun IT, it talked about ways to secure DNS traffic (video|slides).

The Internet wouldn’t work without DNS. It translates domain names into addresses, yet in most cases that traffic is sent in the clear, and that metadata is an issue with respect to privacy. Do you think Google runs two of the most popular DNS servers out of the goodness of its heart? Nope: it can use that data to track what people are doing on the network. What’s worse, every network provider on the path between you and your DNS server can see what you are doing. DNS is also an attack vector and a tool for censorship. DNS traffic can be “spoofed” to send users to the wrong server, and it can be blocked to prevent users from accessing specific sites.

To solve this, one answer is to encrypt that traffic, and Ms. Dickinson talked about a couple of options: DoT (DNS over TLS) and DoH (DNS over HTTPS).

The first one seems like such a no-brainer that I’m surprised it took me so long to deploy it. DoT encrypts the traffic between you and your DNS server. Now, you still have to trust your DNS provider, but this prevents passive surveillance of DNS traffic. I use a pfSense router at home and decided to set up DoT to the Quad9 servers. It was pretty simple. Of all of the major free DNS providers, Quad9 seems to have the strongest privacy policy.
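For those curious, pfSense’s DNS Resolver is Unbound under the hood, and DoT forwarding boils down to a short configuration fragment along these lines. This is a sketch of the effective Unbound config, not the pfSense UI itself (there it is a couple of checkboxes plus custom options), and the CA bundle path varies by system:

```
server:
  # validate the forwarder's TLS certificate against a CA bundle
  tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
  name: "."                          # forward all queries
  forward-tls-upstream: yes          # use DNS over TLS (port 853)
  forward-addr: 9.9.9.9@853#dns.quad9.net
  forward-addr: 149.112.112.112@853#dns.quad9.net
```

The name after the `#` tells Unbound which hostname to expect in the upstream server’s certificate, which is what prevents an on-path attacker from impersonating the resolver.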

The second protocol, DoH, is DNS straight from the browser. Instead of using a specific port, it can use an existing HTTPS connection. You can’t block it because if you do you’ll block all HTTPS traffic, and you can’t see the traffic separately from normal browsing. You still have to deal with privacy issues since that domain name has to be resolved somewhere and they will get header information, such as User-Agent, from the query, so there are tradeoffs.
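To make the mechanics concrete, here is a short Python sketch of how a DoH GET request is formed per RFC 8484: the query is packed as an ordinary binary DNS message, base64url-encoded without padding, and tacked onto the resolver’s HTTPS URL. Quad9’s endpoint is used purely as an example, and this only builds the URL rather than performing the lookup.

```python
import base64
import struct

def build_doh_url(name: str, qtype: int = 1) -> str:
    """Encode a DNS query for `name` as an RFC 8484 DoH GET URL (QTYPE 1 = A)."""
    # DNS header: ID=0 (RFC 8484 suggests 0 to aid HTTP caching),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    # base64url without "=" padding, per RFC 8484.
    dns = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"https://dns.quad9.net/dns-query?dns={dns}"

print(build_doh_url("www.example.com"))
```

Fetching that URL over an existing HTTPS connection is what makes the query indistinguishable from other web traffic on the wire.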

While I learned a lot at UKNOF this has been the only thing I’ve actually implemented.

After a break we entered the all-too-common “regulatory” section of the conference. Governments are adding more and more restrictions and requirements for network operators, and these NOG meetings are often a good forum for talking about them.

UKNOF41 - Jonathan Langley

Jonathan Langley from the Information Commissioner’s Office (ICO) gave a talk on the Network and Information Systems Directive (NIS) (video|slides). NIS imposes a number of requirements, including things like incident reporting. I thought it was interesting that NIS is an EU directive and the UK is leaving the EU, although it was stressed that NIS will apply post-Brexit. While there were a lot of regulations and procedures, it wasn’t as onerous as, say, TICSA in New Zealand.

UKNOF41 - Huw Saunders

This was followed by another regulatory presentation by Huw Saunders from The Office of Communications (Ofcom) (video|slides). This was fairly short and dealt primarily with Ofcom’s role in NIS.

UKNOF41 - Askar Sheibani

Askar Sheibani presented an introduction to the UK Fibre Connectivity Forum (video|slides). This is a trade organization that wants to deploy fiber connectivity to every commercial and residential building in the country. My understanding is that it will help facilitate such deployments among the various stakeholders.

UKNOF41 - David Johnston

The next-to-last presentation struck a chord with me. Given by David Johnston, it talked about the progress the community of Balquhidder in rural Scotland is making in deploying its own Internet infrastructure (video|slides). I live in rural North Carolina, USA, and even though the golf course community one mile from my house has 300 Mbps service from Spectrum, I’m stuck with an unreliable DSL connection from CenturyLink which, when it works, delivers a little over 10 Mbps. Laws in North Carolina currently make it illegal for a municipality to provide broadband service to its citizens, but should that law get overturned I’ve thought about trying to spearhead some sort of grassroots service here. It was interesting to learn how they are doing it in rural Scotland.

UKNOF41 - Charlie Boisseau

The final presentation was funny. Given by Charlie Boisseau, it was about “Layer 0” or “The Dirty Layer” (video|slides). It covered how cable and fiber are deployed in the UK. The access chambers for conduit have covers that state the names of the organizations that own them, and with mergers, acquisitions and bankruptcies those change (but the covers do not). While I was completely lost, the rest of the crowd had fun guessing the progression of one company to another. Anyone in the UK can deploy their own network infrastructure, but it isn’t exactly cheap, and the requirements were covered in the talk.

After the conference they served beer and snacks, and then I headed back to the hotel to get ready for my early morning flight home.

I had a lot of fun at UKNOF and look forward to returning some day. If you are a network provider in the UK it is worth it to attend. They hold two meetings a year, with one always being in London, so there is a good chance one will come near you at some point in time.

Open Source is Still Dead

Last week I attended the 20th O’Reilly Open Source Conference (OSCON), held this year in Portland, Oregon.

OSCON 20th Anniversary Sign

This is the premier open source conference in the US, if not the world, and it is rather well run. It is equal to, if not better than, a lot of proprietary technology conferences I’ve attended, perhaps because it is pretty much a proprietary software conference in itself. I found it a little ironic that the Wednesday morning keynotes started off with a short, grainy video clip in which an open source geek shouts out “We’re starting a revolution!”.

I tried to find the source of that quote, and I thought it came from the documentary “Revolution OS”. That movie chronicles the early days of open source software, when the stated goal was to take back software from large companies like Microsoft. There is a famous quote by Eric S. Raymond in which he replies to a person from Microsoft with the words “I’m your worst nightmare.” Microsoft is now a major sponsor of OSCON.

When I attended OSCON in 2014 I asked the question “Is Open Source Dead?” Obviously the open source development model has never been more alive, but I was thinking back to my early involvement with open source where the idea was to move control of software out of the hands of big companies like IBM and Microsoft and into the hands of the users. Back then the terms “open source” and “free software” were synonymous. It was obvious that open source operating systems, mainly Linux, would rule the world of servers, so the focus was on the desktop. No one in open source predicted the impact of mobile, and by extension, the “cloud”. Open source today is no more than a development model used mostly to help create proprietary software, usually provided as a subscription or a service over the network. I mean, it makes sense. Companies like Google, Facebook and Amazon wouldn’t exist today if it wasn’t for Linux. If they had to pay a license to Microsoft or Sun (now Oracle) for every server they deployed their business models simply wouldn’t work, and the use of open source for building the infrastructure for applications simply makes sense.

Please note that I am not trying to make any sort of value judgement. I am still a big proponent of free software, and there are companies like Red Hat, OpenNMS and Nextcloud that try to honor the original intention of open source. All of us, open and proprietary, benefit from the large amount of quality open source software being created these days. But I do mourn the end of open source as I knew it. It used to be that open source software was published with “restrictive” licenses like the GPL, whereas now the trend is to move to “permissive” licenses like the MIT or Apache licenses. This allows for the commercialization of open source software, which in turn creates an incentive for large software companies to get involved.

This trend was seen throughout OSCON. The “diamond” sponsors were companies like IBM, Microsoft, Amazon and Google. The main buzzword was “Kubernetes” (or “K8s” if you’re one of the cool kids) which is an open source orchestration layer for managing containers. Almost all of the expo companies were cloud companies that either used open source software to provide a platform for their applications or to create open source agents that would feed back to their proprietary cloud back-end.

I attended my first OSCON in 2009 as a speaker, and I was a speaker for several years after that. My talks were always well-attended, but then for several years none of my paper submissions were accepted. I thought I had pissed off one or more of the organizers (it happens) but perhaps my thoughts on open source software had just become outdated.

I still like going to the conference, even though I no longer attempt to submit a talk. When I used to speak I found I spent most of my time on the Expo floor so now I just try to schedule other business during the week of OSCON and I get a free “Expo only” pass. You also get access to the keynotes, so I was sure to be in attendance as the conference officially started.

OSCON Badge

My favorite keynote was the first one, by Suz Hinton from Microsoft. She is known for doing live coding on the streaming platform Twitch, and she did a live demonstration for her keynote. She used an Arduino to control a light sensor and a servo. When she covered the sensor, the servo would move and “wave” at the OSCON audience. It was a little hard to fight the cognitive dissonance of a Microsoft employee using a Mac to program an open hardware device, but it was definitely entertaining.

OSCON Suz Hinton

My second favorite talk was by Camille Eddy. As interactions between computers and humans become more automated, a number of biases are starting to appear. Google image search had a problem where it would label pictures of black people as “gorillas”. An African-American researcher at MIT named Joy Buolamwini found that a robot recognized her better if she wore a white mask. Microsoft had an infamous experiment where it created a Twitter bot named “Tay” that within 24 hours was making racist posts. While not directly related to open source, a focus on an issue that affects the user community is very much in the vein of classic open source philosophy.

OSCON Camille Eddy

The other keynotes were from Huawei, IBM and Amazon (when you are a diamond sponsor you get a keynote) and they focused more on how those large software companies were using the open source development model to, well, offset the cost of development.

OSCON Tim O'Reilly

The Wednesday keynotes closed with Tim O’Reilly who talked about “Open Source and Open Standards in the Age of Cloud AI”. It kind of cemented the theme for me that open source had changed, and the idea is now much more about tools development and open APIs than in creating user-owned software.

OSCON Expo Floor

The rest of my time was spent wandering the Expo floor. OSCON offers space to traditional open source projects which I usually refer to as the “Geek Ghetto”. This year it was split to be on either side of the main area, and I got to spend some time chatting with people from the Software Freedom Conservancy and the Document Foundation, among others.

OSCON Geek Ghetto

I enjoyed the conference, even if it was a little bittersweet. Portland is a cool town and the people around OSCON are cool as well. If I can combine the trip with other business, expect to find me there next year, wandering the Expo floor.

2018 New Zealand Network Operators Group (NZNOG)

One thing that all open source projects struggle with is getting users. Most people in IT and software are overwhelmed with a plethora of information and options, and matching the right material to the right audience is a non-trivial problem.

Last year my friend Chris suggested that I speak at a Network Operators Group (NOG) meeting, specifically AusNOG. It was a lot of fun, and I felt very comfortable among this crowd, so I decided to reach out to more NOGs to see if they would be interested in learning about OpenNMS.

The thing I like the most about NOGs is that they value getting things done above all else. While “getting things done” is still important with the free and open source crowd, there seems to be more philosophy and tribalism at those shows. “Oh, that’s written in PHP, it must suck” etc. As a “freetard” I live for the philosophical and social justice aspects of the community, but from a business standpoint it doesn’t translate well into paying customers.

At NOGs the questions are way more business-focused. Does it work? Is it supported? What does it cost? While I’m admittedly biased toward OpenNMS and its open source nature, the main reason I keep promoting it is that it just makes solid business sense for many companies to use it instead of their current solution.

Plus, these folks are pretty smart and entertaining while dispensing solid advice and knowledge.

Anyway, with that preamble out of the way: at AusNOG I learned about the New Zealand NOG (NZNOG) and submitted a talk. It got accepted, and I found myself in Queenstown.

NZNOG Scenery

The main conference was spread out over two days, and like AusNOG it consisted of 30 to 45 minute talks in one track.

While I know it won’t work for a lot of conferences, I really like the “one track” format. It exposes me to things I wouldn’t have gone to otherwise, and if there is something I am simply not interested in learning about I can use that time to catch up on work or participate in the hallway track.

NZNOG Clare Curran

The conference started with a presentation by the Honorable Clare Curran, a newly minted Member of Parliament (they recently held elections in New Zealand). I’m slowly seeing politicians getting more involved in information technology conferences, which I think is a good thing, and I can only hope it continues. She spoke about a number of issues the government is facing with respect to communications technology.

Several things bother me about the US government, but one big one is the lack of understanding of the importance of access to the Internet at broadband speeds. Curran stated that “lack of reliable high-speed network access is a new measure of poverty”. Later in the day John Greenhough spoke on New Zealand’s Ultra-Fast Broadband (UFB) project, where one slide defined broadband as a 20 Mbps download speed.

NZNOG John Greenhough

Where I live in the US I am lucky to get 10 Mbps, and many of my neighbors are worse off, yet the government is ceding more of the decision-making process about where to build out new infrastructure to the telecommunications companies, which have zero incentive to improve my service. It’s wonderful to see a government realize the benefits of a connected populace and take steps to make it happen.

Because we all need Netflix, right? (grin)

There was a cool talk about how Netflix works. I didn’t realize that they work with communications providers to deliver content with low latency from geographically distributed locations. This is done by supplying providers with caching content servers so that customers can access Netflix content while minimizing the need for lots of traffic over expensive backhaul links.

NZNOG Netflix RRD

I did find it cool that one of the bandwidth graphs presented was obviously done using RRDtool. I don’t know if they collected the data themselves or used something like OpenNMS, but I hope it was the latter.

With this push for ubiquitous network access comes other concerns. New Zealand has a law called TICSA that requires network providers to intercept and store network traffic data for use by law enforcement.

NZNOG Lawful Intercept

I thought the requirements were pretty onerous, but I was told that the NZ government did set aside some funds to help providers with deploying solutions for collecting and storing this data (but I doubt it can cover the whole cost, especially over time). The new OpenNMS Drift telemetry project might be able to help with this.

NZNOG Aftab Siddiqui

There were a couple of talks I had seen in some form at AusNOG. The ever entertaining Aftab Siddiqui talked about MANRS (Mutually Agreed Norms for Routing Security) but unlike in Australia he was hard pressed to find good examples of violations. Part of that could be that New Zealand is much smaller than Australia, but I’m giving the NZ operators the credit for just doing a good job.

NZNOG NetNORAD

The Facebook folks were back to talk about their NetNORAD project. While I have a personal reluctance to deploy agents, there really isn’t a way to measure latency at the detail they want without them. I think it would be cool to be able to gather and manage the data created by this project under OpenNMS.

NZNOG Geoff Huston

What I like most about these NOG meetings is that I always learn something cool, and this one was no different. Geoff Huston gave a humorous talk on DNSSEC and handling DNS-based DDoS attacks. While I was somewhat familiar with DNSSEC, I was unaware of the NSEC part of it.

Most DNS DDoS attacks work by asking for non-existent domains, and the overhead in processing them is what causes the denial of service. The domain name is usually randomly generated, such as jeff123.example.com, jeff234.example.com, etc. If the DNS server doesn’t have the name in its cache, it will have to ask another DNS server, which in turn won’t have the name, as it doesn’t exist.

The NSEC part of DNSSEC, when responding to a request for a non-existent name, returns the next valid name in the zone. In the example above, if I ask for jeff123.example.com, the example.com DNS server can reply that the name doesn’t exist and, in addition, that the next valid name is www.example.com. If implemented correctly, the original DNS server should then never query for jeff234.example.com, since it already knows it, too, doesn’t exist.
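The caching logic can be sketched in a few lines of Python. This is a simplification: plain string ordering stands in for DNSSEC’s canonical name ordering, and real NSEC records also carry type bitmaps and signatures, but the idea of denying a whole range of names from one cached answer is the same.

```python
import bisect

# Names that actually exist in the zone, kept in sorted order
# (string order is a stand-in for DNSSEC canonical ordering).
zone = sorted(["example.com", "mail.example.com", "www.example.com"])

def nsec_range(qname):
    """Return the (previous, next) existing names proving qname's absence."""
    i = bisect.bisect_left(zone, qname)
    if i < len(zone) and zone[i] == qname:
        return None  # name exists; no denial needed
    prev_name = zone[i - 1] if i > 0 else zone[-1]  # NSEC chains wrap around
    next_name = zone[i] if i < len(zone) else zone[0]
    return prev_name, next_name

def covered(cached_range, qname):
    """True if a cached NSEC range already proves qname doesn't exist."""
    lo, hi = cached_range
    return lo < qname < hi

# One query for a random name yields a range the resolver can cache...
cached = nsec_range("jeff123.example.com")
print(cached)  # → ('example.com', 'mail.example.com')

# ...and the next random name is denied locally, with no upstream query.
print(covered(cached, "jeff234.example.com"))  # → True
```

This "aggressive" use of cached NSEC answers is what takes the sting out of random-subdomain attacks: the authoritative server answers once, and the resolver absorbs the rest.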

Pretty nifty.

NZNOG Rada Stanic

One talk I was eagerly awaiting was from Rada Stanic of Cisco. She also spoke at AusNOG, but I had to leave early and missed it. While she disrespected SNMP a little more than I liked (grin), her talk was on implementing new telemetry-based monitoring protocols, such as streaming telemetry over gRPC. OpenNMS Drift will add this functionality to the platform. Our experience so far is that device vendors’ implementations of the telemetry protocols leave something to be desired, but the approach does show promise.

NZNOG Ulf

It was nice being in New Zealand again, and our mascot Ulf seemed to be popular with the locals. Can’t imagine why.

2018 Linuxconf Australia Sysadmin Miniconf

I just wanted to put up a quick post on my trip to Linuxconf Australia (LCA) being held this week in Sydney.

First, a little background. I’ve been curtailing my participation in free and open source software conferences for the last couple of years. It’s not that I don’t like them, quite the opposite, but my travel is funded by The OpenNMS Group and we just don’t get many customers from those shows. A lot of people are into FOSS for the “free” (as in gratis) aspect.

Contrast that with telcos and network operators who tend to have the opposite viewpoint, if they aren’t spending a ton of money then they must be doing it wrong, and you can see why I’ve been spending more of my time focusing on that market.

Anyway, we have recently signed up a new partner in Australia, R-Group International, to help us work with clients in Pacific Rim countries, and I wanted to come out to Perth and do some training with their team. Chris Markovic, their Technical Director as well as “mobius” on the OpenNMS chat server, suggested I come out the week after LCA, so I asked the LCA team if they had room on their program for me to talk about OpenNMS. They offered me a spot on their Sysadmin Miniconf day.

Linuxconf Australia Sign

The conference is being held at the University of Technology, Sydney (UTS) and I have to say the conference hall for the Sysadmin track was one of the coolest, ever.

Linuxconf Australia - UTS Lecture Hall

The organizers grouped three presentations together dealing with monitoring: one on Icinga 2, one from Nagios and mine on OpenNMS. While I don’t know much about Icinga, I do know the people who maintain it and they are awesome. One might think OpenNMS would have an antagonistic relationship with other FOSS monitoring projects, but as long as they are pure FOSS (like Icinga and Zabbix) we tend to get along rather well. Plus I’m jealous that Icinga is used on the ISS.

Linuxconf Australia Icinga2 Talk

I think my talk went well. I only had 15 minutes, and for once I think I was a few seconds under that limit. While it wasn’t live-streamed it was up on YouTube very quickly, and you can watch it if you want.

I had to leave LCA to head to the New Zealand Network Operators Group (NZNOG) meeting, so I missed the main conference, but I am grateful the organizers gave me the opportunity to speak and I hope to return in the future.

Linuxconf Australia During a Break

Conferences: Australia, New Zealand and Senegal

Just a quick note to mention some conferences I will be attending. If you happen to be there as well, I would love the opportunity to meet face to face.

Next week I’ll be in Sydney, Australia, for linux.conf.au. I’ll only be able to attend for the first two “miniconf” days, and I’ll be doing a short introduction to OpenNMS on Tuesday as part of the Systems Administration Miniconf.

Then I’m off to Queenstown, New Zealand for the New Zealand Network Operators Group (NZNOG) conference. I will be the first presenter on Friday at 09:00, talking about, you guessed it, OpenNMS.

The week after that I will be back in Australia, this time on the other side in Perth, working with our new Asia-Pacific OpenNMS partner R-Group International. We are excited to have such a great partner bringing services and support for OpenNMS to organizations in that hemisphere. Being roughly 12 hours out from our home office in North Carolina, USA, can make communication a little difficult, so it will be nice to be able to help users in (roughly) their own timezone.

Plus, I hope to learn about cricket.

Finally, I’m excited that I’ve been asked to do a one day tutorial at this year’s African Network Operators Group (AfNOG) in Dakar, Senegal, this spring. The schedule is still being decided but I’m eager to visit Africa (I’ve never been) and to meet up with OpenNMS users (and make some new ones) in that part of the world.

I’ll be posting a lot more about all of these trips in the near future, and hope to see you at at least one of these events.