2022 Open Source Summit – Day 3

Thursday at the Open Source Summit started as usual at the keynotes.

Picture of Robin Bender Ginn on stage

Robin Bender Ginn opened today’s session with a brief introduction and then we jumped into the first session by Matt Butcher of Fermyon.

Picture of Matt Butcher on stage

I’ve enjoyed these keynotes so far, but to be honest nothing has made me go “wow!” as much as this presentation by Fermyon. I felt like I was witnessing a paradigm shift in the way we provide services over the network.

To digress quite a bit, I’ve never been happy with the term “cloud”. There is an oft-repeated story that the cloud got its name because the Visio icon for the Internet was a cloud (it isn’t true), but I’ve always preferred the term “utility computing”. To me, cloud services should be similar to other utilities such as electricity and water, where you are billed based on how much you use.

Up until this point, however, instead of buying just electricity it has been more like you are borrowing someone else’s generator. You still have to pay for infrastructure.

Enter “serverless“. While there are many definitions of serverless, the idea is that when you are not using a resource your cost should be zero. I like this definition because, of course, there have to be servers somewhere, but under the utility model you shouldn’t be paying for them if you aren’t using them. This is even better than normal utilities because, for example, my electricity bill includes fees for things such as the meter and even if I don’t use a single watt I still have to pay for something.

Getting back to the topic at hand, the main challenge with serverless is how do you spin up a resource fast enough to be responsive to a request without having to expend resources when it is quiescent? Containers can take seconds to initialize and VMs much longer.

Fermyon hopes to address this by applying WebAssembly to microservices. WebAssembly (Wasm) was created to allow high-performance applications, written in languages other than JavaScript, to be served via web pages, although as Fermyon went on to demonstrate, this is not its only use.

The presentation used a game called Finicky Whiskers to demonstrate the potential. Slats the cat is a very finicky eater. Sometimes she wants beef, sometimes chicken, sometimes fish and sometimes vegetables. When the game starts, Slats shows you an icon representing the food she wants, and you have to tap or click on the right icon in order to feed her. After a short time, Slats changes her choice and you have to switch icons. You have 30 seconds to feed her as many correct treats as possible.

Slide showing infrastructure for Finicky Whiskers: 7 microservices, Redis in a container, Nomad cluster on AWS, Fermyon

Okay, so I doubt it will have the same impact on game culture as Doom, but they were able to implement it using only seven microservices, all in Wasm. There is a detailed description on their blog, but I liked the fact that it was language agnostic. For example, the microservice that controls the session was written in Ruby, while the one that keeps track of the tally was written in Rust. The cool part is that these services can be spun up on the order of a millisecond or less, and the whole demo runs on three t2.small AWS instances.

This is the first implementation I’ve seen that really delivers on the promise of serverless, and I’m excited to see where it will go. But don’t let me put words into their mouth, as they have a blog post on Fermyon and serverless that explains it better than I could.
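For a sense of how lightweight the developer workflow is, Fermyon’s Spin tooling scaffolds a Wasm microservice from a template, compiles it and serves it locally. The commands below are a rough sketch from memory (template names and defaults vary between Spin releases):

spin new http-rust hello-wasm   # scaffold a Rust-based HTTP component
cd hello-wasm
spin build                      # compile the handler to WebAssembly
spin up                         # serve it locally (defaults to port 3000)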

Picture of Carl Meadows on stage

The next presentation was on OpenSearch by Carl Meadows, a Director at AWS.

Note: Full disclosure, I am an AWS employee and this post is a personal account that has not been endorsed or reviewed by my employer.

OpenSearch is an open source (Apache 2.0 licensed) set of technologies for storing large amounts of text that can then be searched and visualized in near real time. Its main use case is making sense of streaming data that you might get from, say, log files or other types of telemetry. It uses the Apache Lucene search engine, and the latest version is based on Lucene 9.1.
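To give a tiny, concrete illustration of that workflow, the usual pattern is to push documents in over the REST API and query them back almost immediately. Assuming a local OpenSearch instance on the default port 9200, with made-up index and field names:

# index a log line (hypothetical index and fields)
curl -XPUT 'http://localhost:9200/logs/_doc/1' \
  -H 'Content-Type: application/json' \
  -d '{"timestamp": "2022-06-23T09:00:00Z", "message": "disk space low on web01"}'

# search for it moments later
curl 'http://localhost:9200/logs/_search?q=message:disk'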

One of the best ways to encourage adoption of an open source solution is to have it integrate with other applications. With OpenSearch this has traditionally been done using plugins, but there is an initiative underway to create an “extension” framework.

Plugins have a number of shortcomings, especially in that they tend to be tightly coupled to a particular version of OpenSearch, so if a new version comes out your existing plugins may not be compatible until they, too, are upgraded. I run into this with a number of applications I use such as Grafana and it can be annoying.

The idea behind extensions is to provide an SDK and API that are much more resistant to changes in OpenSearch so that important integrations are decoupled from the main OpenSearch application. This also provides an extra layer of security as these extensions will be more isolated from the main code.

I found this encouraging. It takes time to build a community around an open source project, and one of the best ways to do it is to provide easy methods for getting involved; extensions are a step in the right direction. In addition, OpenSearch has decided not to require a Contributor License Agreement (CLA) for contributions. While I have strong opinions on CLAs, this should make contributing more welcoming for developers who don’t like them.

Picture of Taylor Dolezal on stage

The next speaker was Taylor Dolezal from the Cloud Native Computing Foundation (CNCF). I liked him from the start, mainly because he posted a picture of his dog:

Slide of a white background with the head and sad eyes of a cute black dog

and it looks a lot like one of my dogs:

Picture of the head of my black Doberman named Kali

Outside of having a cool dog, Dolezal has a cool job and talked about building community within the CNCF. Just saying “hey, here’s some open source code” doesn’t mean that qualified people will give up nights and weekends to work on your project, and his experiences can be applied to other projects as well.

The final keynote was from Chris Wright of Red Hat and talked about open source in automobiles.

Picture of Chris Wright on stage

A while ago I actually applied for a job with Red Hat to build a community around their automotive vertical (I didn’t get it). I really like cars and I thought that combining that with open source would be a dream job (plus I wanted the access). We are on the cusp of a sea change in automobiles as the internal combustion engine gives way to electric motors. Almost all manufacturers have announced the end of production for ICEs, and electric cars are much more focused on software. Wright showed a quote predicting that automobile companies will need more than four times the software-focused talent they have now.

A slide with a quote stating that automobile companies will need more than four times the software talent they have now

I think this is going to be a challenge, as the automobile industry is locked into 100+ years of “this is the way we’ve always done it”. For example, in many states it is still illegal to sell cars outside of a dealership. When it comes to technology, these companies have recently been focused on locking their customers into high-margin proprietary features (think navigation) and only recently have they realized that they need to be more open, such as supporting Android Auto or CarPlay. As open source has disrupted most other areas of technology, I expect it to do the same for the automobile industry. It is just going to take some time.

I actually found some time to explore a bit of Austin outside the conference venue. Well, to be honest, I went looking for a place to grab lunch and all the restaurants near the hotel were packed, so I decided to walk further out.

Picture of the wide Brazos river from under the Congress Avenue bridge

The Brazos River flows through Austin, and so I decided to take a walk on the paths beside it. The river plays a role in the latest Neal Stephenson novel called Termination Shock. I really enjoyed reading it and, spoiler alert, it does actually have an ending (fans of Stephenson’s work will know what I’m talking about).

I walked under the Congress Avenue bridge, which I learned was home to the largest urban bat colony in the world. I heard mention at the conference of “going to watch the bats” and now I had context.

A sign stating that drones were not permitted to fly near the bat colony under the Congress Avenue bridge

Back at the Sponsor Showcase I made my way over to the Fermyon booth where I spent a lot of time talking with Mikkel Mørk Hegnhøj. When I asked if they had any referenceable customers he laughed, as they have only been around for a very short amount of time. He did tell me that in addition to the cat game they had a project called Bartholomew that is a CMS built on Fermyon and Wasm, and that was what they were using for their own website.

Picture of the Fermyon booth with people clustered around

If you think about it, it makes sense, as a web server is, at its heart, a fileserver, and those already run well as a microservice.

They had a couple of devices up so that people could play Finicky Whiskers, and if you got a score of 100 or more you could get a T-shirt. I am trying to simplify my life which includes minimizing the amount of stuff I have, but their T-shirts were so cool I just had to take one when Mikkel offered.

Note that when I got back to my room and actually played the game, I came up short.

A screenshot of my Finicky Whiskers score of 99

The Showcase closed around 4pm and a lot of the sponsors were eager to head out, but air travel disruptions affected a lot of them. I’m staying around until Saturday and so far so good on my flights. I’m happy to be traveling again but I can’t say I’m enjoying this travel anxiety.

[Note: I overcame my habit of sitting toward the back and off to the side, so the quality of the speaker pictures has improved greatly.]

On Leaving OpenNMS

It is with mixed emotions that I am letting everyone know that I’m no longer associated with The OpenNMS Group.

Two years ago I was in a bad car accident. I suffered some major injuries which required 33 nights in the hospital, five surgeries and several months in physical therapy. What was surprising is that while I had always viewed myself as somewhat indispensable to the OpenNMS Project, it got along fine without me.

Also during this time, The OpenNMS Group was acquired. For fifteen years we had survived on the business plan of “spend less money than you earn”. While it ensured the longevity of the company and the project, it didn’t allow much room for us to pursue ideas because we had no way to fund them. We simply did not have the resources.

Since the acquisition, both the company and the project have grown substantially, and this was during a global pandemic. With OpenNMS in such a good place I began to think, for the first time in twenty years, about other options.

I started working with OpenNMS in September of 2001. I refer to my professional career before then as “Act I”, with my time at OpenNMS as “Act II”. I’m now ready to see what “Act III” has in store.

While I’m excited about the possibilities, I will miss working with the OpenNMS team. They are an amazing group of people, and it will be hard to replace the role they played in my life. I’m also eternally grateful to the OpenNMS Community, especially the guys in the Order of the Green Polo who kept the project alive when we were starting out. You are and always will be my friends.

When I was responsible for hiring at OpenNMS, I ended every offer letter with “Let’s go do great things”. I consider OpenNMS to be a “great thing” and I am eager to watch it thrive with its new investment, and I will always be proud of the small role I played in its success.

If you are doing great things and think I could contribute to your team, check out my profile on LinkedIn or Xing.

Review: Serval WS Laptop by System76

TL;DR: When I found myself in the market for a beefy laptop, I immediately ordered the Serval WS from System76. I had always had a great experience dealing with them, but times have changed. It has been sent back.

I can’t remember the first time I heard about the Serval laptop by System76. In a world where laptops were getting smaller and thinner, they were producing a monster of a rig. Weighing ten pounds without the power brick, the goal was to squeeze a high performance desktop into a (somewhat) portable form factor.

I never thought I’d need one, as I tend to use desktops most of the time (including a Wild Dog Pro at the office), and for travel I want a light laptop since it pretty much just serves as a terminal and I keep minimal information on it.

Recently we’ve been experimenting with office layouts, and our latest configuration has me trading my office for a desk with the rest of the team, and I needed something that could be moved in case I need to get on a call, record a video or get some extra privacy.

Heh, I thought, perhaps I could use the Serval after all.

I like voting for open source with my wallet. My last two laptops have been Dell “Sputnik” systems (2nd gen and 5th gen) since I wanted to support Dell shipping Linux systems, and when we decided back in 2015 that the iMacs we used for training needed to be replaced, I ordered six Sable Touch “all in one” systems from System76. The ordering process was smooth as silk and the devices were awesome. We still get compliments from our students.

A year later when my HP desktop died, I bought the aforementioned Wild Dog Pro. Again, customer service to match if not rival the best in the business, and I was extremely happy with my new computer.

Jump forward to the present. Since I was in the market for a “luggable” system, performance was more important than size or weight, so I ordered a loaded Serval WS, complete with the latest Intel i9 processor, 64GB of speedy RAM, NVidia 1080 graphics card, and oodles of disk space. Bwah ha ha.

When it showed up, even I was surprised at how big it was.

Serval WS and Brick

Here you can see it in comparison to an old Apple keyboard. It was solidly built, and I was eager to plug it in and turn it on.

Serval WS

The screen was really bright, even with my office brightly lit at the time. You can see from the picture that it was big enough to contain a full-sized keyboard and a numeric keypad. This didn’t really matter much to me as I was planning on using it with an awesome external monitor and keyboard, but it was a nice touch. I still like having a second screen since we rely heavily on Mattermost and I always like to keep a window in view, and I figured I could use the laptop screen for that.

I had ordered the system with Ubuntu installed. My current favorite operating system is Linux Mint but I decided to play with Ubuntu for a little bit. This was my first experience with Ubuntu post Unity and I must say, I really liked it. Kind of made me eager to try out Pop!_OS which is the System76 distro based on Ubuntu.

When installing Mint I discovered that I had made a small mistake when placing my Serval order. I meant to use the 2TB drive as the primary, leaving the 1TB drive for TimeShift backups, but I had reversed them. No real issue, as I was able to install Mint on the 2TB drive just fine after some creative partition manipulation.

Everything was cool until late afternoon, when the sun went away. I was rebooting the system and found myself looking at a blank screen (for some reason the screen stays blank for a minute or so after powering on the laptop, which I assume is due to it having 64GB of RAM). There was a tremendous amount of “bleed” around the edges of the LCD.

Serval WS LCD Bleed

Damn.

Although it probably wouldn’t have impacted me much in day to day use, especially with an external monitor, I would know about it, and as I’m somewhere on the OCD spectrum it would bother me. Plus I paid a lot of money for this system and want it to be as close to perfect as possible.

For those of you who don’t know, the liquid crystals in an LCD emit no light of their own; the illumination comes from a backlight behind the panel (historically fluorescent, these days usually LED). If there are issues with the way the panel is constructed, this light can “bleed” around the edges and degrade the display quality (it is also why it is hard to get truly black images on LCDs, and it fuels a lot of the excitement around OLED technology).

I’ve had issues with this before on laptops but nothing this bad. Not to worry, I have System76 to rely on, along with their superlative customer service.

I called the number and soon I was speaking with a support technician. When I described the problem they opened a ticket and asked me to send in a picture. I did and then waited for a response.

And waited.

And waited.

I commented on the ticket.

And I continued to wait.

The next day I waited a bit (Denver is two hours behind where I live) but when I got no response I decided, well, I’ll just return the thing. I called to get an RMA number but this time I wasn’t connected with a person and was asked to leave a message. I did, and I should note that I never got that return call.

At this point I’m frustrated, so I decided an angry tweet was in order. That got a response to my ticket, where they offered to send me a new unit.

Yay, here was a spark of the customer service I was used to getting. I’ve noticed a number of tech companies are willing to deal with defective equipment by sending out a new unit before the old unit is returned. In this day and age of instant gratification it is pretty awesome.

I wrote back that I was willing to try another unit, but asked if it would be possible to install Pop!_OS on the 2TB drive of the new unit, so I could try it out of the box and know that all of the System76-specific packages were installed.

A little while later I got a reply that it wouldn’t be possible to install it on the 2TB drive, so I would end up having to reinstall in any case.

(sigh)

When I complained on Twitter I was told “Sorry to hear this, you’ll receive a phone call before EOD to discuss your case.” I worked until 8pm that night with no phone call, so I just decided to return the thing.

Of course, this would be at my expense and the RMA instructions were strict about requiring shipping insurance: “System76 cannot refund your purchase if the machine arrives damaged. For this reason, it is urgent that you insure your package”. The total cost was well over $100.

So I’m out a chunk of change and I’ve lost faith in a vendor of which I was extremely fond. This is a shame since they are doing some cool things such as building computers in the United States, but since they’ve lost sight of what made them great in the first place I have doubts about their continued success.

In any case, I ordered a Dell Precision 5530, which is one of the models available with Ubuntu. Much smaller and not as powerful as the Serval WS, it is also not as expensive. I’ll post a review in a couple of weeks when I get it.

Help Get OpenNMS Packaged by Bitnami

As someone who has used OpenNMS for, well, many years, I think it is a breeze to get installed. Simply add the repository to your server, install the package(s), run the installer and start it.
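On a Debian-based system the whole process boils down to something like the following sketch (the repository URL and release name change over time, so check the official install guide rather than copying this verbatim):

# add the OpenNMS apt repository and signing key per the current install guide, then:
sudo apt-get update
sudo apt-get install opennms
sudo /usr/share/opennms/bin/install -dis   # initialize the database and libraries
sudo systemctl start opennms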

However, there are a number of new users who still have issues getting it installed. This problem is not limited to OpenNMS; it crops up across a number of open source projects.

Enter Bitnami. Bitnami is a project to package applications to make them easier to install: natively, in the cloud, in a container or as a virtual machine. Ronny pointed out that OpenNMS is listed in their “wishlist” section, and if we can get enough votes, perhaps they will add it to their stack.

Bitnami also happens to have a great team, led in part by the ever-amazing Erica Brescia. As I write this we have fewer than 50 votes, with the current leaders being over 1200, so there is a long way to go. I’d appreciate your support, and once you vote you get a second chance to vote again via the socials.

Bitnami OpenNMS Wishlist

Thanks, and thanks to Bitnami for the opportunity.

OpenNMS Meridian 2016 Released

I am woefully behind on blog posts, so please forgive the latency in posting about Meridian 2016.

As you know, early last year we split OpenNMS into two flavors: Horizon and Meridian. The goal was to create a faster release cycle for OpenNMS while still providing a stable and supportable version for those who didn’t need the latest features.

This has worked out extremely well. While there used to be eighteen months or so between major releases, we did five versions of Horizon in the same amount of time. That has led to the rapid development of such features as the Newts integration and the Business Service Monitor (BSM).

But that doesn’t mean the features in Horizon are perfect on Day One. For example, one early adopter of the Newts integration in Horizon 17 helped us find a couple of major performance issues that were corrected by the time Meridian 2016 came out.

The Meridian line is supported for three years. So, if you are using Meridian 2015 and don’t need any of the features in Meridian 2016, you don’t need to upgrade. Fixes for major performance issues, all security fixes and most of the new configurations will be backported to that release until Meridian 2018 comes out.

Compare and contrast that with Horizon: once Horizon 18 was released, all work stopped on Horizon 17. This means a much more rapid upgrade cycle, the upside being that Horizon users get to see all the shiny new features first.

Meridian 2016 is based on Horizon 17, which has been out since the beginning of the year and has been highly vetted. Users of Horizon 17 or earlier should have an easy migration path.

I’m very happy that the team has consistently delivered on both Horizon and Meridian releases. The hope is that this new model will keep OpenNMS on the cutting edge of the network monitoring space while providing a more stable option for those with environments that require it.

Upgrading Linux Mint 17.3 to Mint 18 In Place

Okay, I thought I could wait, but I couldn’t, so yesterday I decided to do an “in place” upgrade of my office desktop from Linux Mint 17.3 to Mint 18.

It didn’t go smoothly.

First, let me stress that the Linux Mint community strongly recommends a fresh install every time you upgrade from one release to another, and especially from one major release to another, e.g. Mint 17 to Mint 18. They ask you to back up your home directory and package lists, base the system and then restore. The problem is that I often make a lot of changes to my system, which usually involves editing files in the system /etc directory, and this approach doesn’t capture that.

One thing I’ve always loved about Debian is the ability to upgrade in place (and often remotely) and this holds true for Debian-based distros like Ubuntu and Mint. So I was determined to try it out.

I found a couple of posts suggesting that all you need to do is replace “rosa” with “sarah” in your repository file, and then do an “apt-get update” followed by an “apt-get dist-upgrade”. That doesn’t work, as I found out, because Mint 18 is based on Xenial (Ubuntu 16.04) and not Trusty (Ubuntu 14.04). Thus, you also need to replace every instance of “trusty” with “xenial” to get it to work.
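In practice the edit looks roughly like this (back up the file first; the exact file name may differ on your system, see below):

sudo sed -i 's/rosa/sarah/g; s/trusty/xenial/g' \
  /etc/apt/sources.list.d/official-package-repositories.list
sudo apt-get update
sudo apt-get dist-upgrade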

Finally, once I got that working, I couldn’t get into the graphical desktop. Cinnamon wouldn’t load. It turns out Cinnamon is in a “backport” branch for some reason, so I had to add that to my repository file as well.

To save trouble for anyone else wanting to do this, here is my current /etc/apt/sources.list.d/official-package-repositories.list file:

deb http://packages.linuxmint.com sarah main upstream import backport #id:linuxmint_main
# deb http://extra.linuxmint.com sarah main #id:linuxmint_extra

deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ xenial partner

Note that I commented out the “extra” repository since one doesn’t exist for sarah yet.

The upgrade took a long time. We have a decent connection to the Internet at the office, and it still took over an hour to download the packages. There were a number of conflicts I had to deal with, but overall that part of the process was smooth.

Things seem to be working, and the system seems a little faster but that could just be me wanting it to be faster. Once again many thanks to the Mint team for making this possible.

OpenNMS Horizon 17 Released

I am extremely happy to announce the availability of OpenNMS Horizon 17. This marks the fourth major release of OpenNMS in a little over a year, and I’m extremely proud of the team for moving the project so far forward so quickly.

There is a lot in this release. One of the major things is support for a new storage backend based on the Newts project. This will enable OpenNMS to store basically unlimited amounts of time-series data. The only thing missing, which should be completed soon, is a way to convert all of your old RRD-based data to Newts. Since it will take people a while to get a Newts/Cassandra instance set up, we didn’t want to hold the rest of the release until this was done. If you are installing OpenNMS from scratch and don’t have any legacy data, the Newts integration is ready to go now.
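For those eager to try it, switching the time-series strategy is (roughly) a matter of pointing OpenNMS at a Cassandra cluster in opennms.properties. The property names below are from memory, so verify them against the Horizon 17 documentation before relying on them:

# opennms.properties (sketch; check the docs for the authoritative property names)
org.opennms.timeseries.strategy=newts
org.opennms.newts.config.hostname=cassandra.example.com
org.opennms.newts.config.port=9042
org.opennms.newts.config.keyspace=newts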

The team is also making great strides in improving the documentation; you’ll find a better version of the Release Notes there.

Horizon 17 will form the basis for Meridian 2016, which we expect in early spring. The next Horizon release will contain the completed Minion functionality, which adds the ability to distribute OpenNMS so that, along with Newts, OpenNMS will have nearly limitless scalability.

Not bad for a free software product, eh? Remember you can always play with the latest and greatest of any OpenNMS development branch just by installing the desired repository.

Anyway, enjoy, and I’ll be sure to post when the RRD converter is available.

Bug

  • [NMS-5613] – odd index "ifservicves_ipinterfaceid_idx" in database – typo?
  • [NMS-5946] – JMX Config Tool CLI is not packaged correctly
  • [NMS-6012] – Statsd randomly looks for storeByForeignSource rrds
  • [NMS-6478] – 'Overall Service Availability' bad info in case of nodeDown / nodeUp transition
  • [NMS-6493] – Running online report "Response Time Summary for node" produces Unexpected Error
  • [NMS-6555] – Outdated Quartz URL in provisiond-configuration.xml file
  • [NMS-6803] – Not evaluating threshold for data collected by HttpCollector
  • [NMS-6927] – test failure: org.opennms.web.alarm.filter.AlarmRepositoryFilterTest
  • [NMS-6942] – test failure: org.opennms.web.svclayer.DefaultOutageServiceIntegrationTest
  • [NMS-6944] – When building the "Early Morning Report" I get a "null" dataset argument Exception.
  • [NMS-7000] – Early Morning Report will not run correctly without any nodes in OpenNMS
  • [NMS-7001] – Availability by node report needs a "No Data for Report" Section
  • [NMS-7024] – Event Translator cant translate events with update-field data present
  • [NMS-7095] – Topology Map does not show selected focus in IE
  • [NMS-7254] – MigratorTest fails on two of the 3 tests.
  • [NMS-7407] – Inconsistent naming in Admin/System Information
  • [NMS-7411] – Fonts are too small in link detail page
  • [NMS-7417] – Fix header and list layout glitches in the WebUI
  • [NMS-7459] – Dashboard node status shows wrong service count
  • [NMS-7516] – XML Collector is not working as expected for node-level resources
  • [NMS-7600] – build failure in opennms-doc/guide-doc on FreeBSD
  • [NMS-7649] – etc folder still contains references to capsd
  • [NMS-7667] – Vaadin dashboard meaning of yellow in the surveillance view
  • [NMS-7679] – Audiocodes.events.xml overrides RMON.events.xml
  • [NMS-7680] – JMX Configuration Generator admin page fails
  • [NMS-7693] – Example Drools rules imports incorrect classes
  • [NMS-7695] – Logging not initialized but used on Drools Rule files.
  • [NMS-7702] – Problems on graphs for 10 gigabit interface
  • [NMS-7703] – Database Report – Statement correction
  • [NMS-7709] – Building OpenNMS results in a NullPointerException on module "container/features"
  • [NMS-7723] – PSQLException: column "nodeid" does not exist when using manage/unmanage services
  • [NMS-7728] – Add support for jrrd2
  • [NMS-7729] – Log messages for the Correlation Engine appear in manager.log
  • [NMS-7736] – bug in EventBuilder method setParam()
  • [NMS-7739] – Unit tests fail for loading data collection
  • [NMS-7748] – SeleniumMonitor with PhantomJS driver needs gson JAR
  • [NMS-7750] – Cannot edit some Asset Info fields
  • [NMS-7755] – c.m.v.a.ThreadPoolAsynchronousRunner: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@59804d53 — APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
  • [NMS-7762] – noSuchObject duplicates links on topology map
  • [NMS-7764] – Error when you drop sequence vulnnxtid
  • [NMS-7766] – Incorrect unit divisor in LM-SENSORS-MIB graph definitions
  • [NMS-7770] – HttpRemotingContextTest is an integration test and needs to be renamed as such
  • [NMS-7771] – Fix unit tests to run also on non-US locale systems.
  • [NMS-7772] – JMX Configuration Generator (webUI) is not working anymore
  • [NMS-7777] – node detail page failure
  • [NMS-7778] – Measurements ReST API broken in develop (CXF)
  • [NMS-7785] – OSGi-based Web Modules Not Accessible
  • [NMS-7791] – OSGi-based web applications are unaccesible
  • [NMS-7794] – Cannot load events page in 17
  • [NMS-7802] – JSON Serialization Broken in REST API (CXF)
  • [NMS-7814] – Queued RRD updates are no longer promoted when rendering graphs
  • [NMS-7816] – The DataCollectionConfigDao returns all resource types, even if they are not used in any data collection package.
  • [NMS-7818] – Measurements ReST API Fails on strafeping
  • [NMS-7819] – Requesting IPv6 resources on measurements rest endpoint fails
  • [NMS-7822] – Remove Access Point Monitor service from service configuration
  • [NMS-7824] – The reload config for Collectd might throws a ConcurrentModificationException
  • [NMS-7826] – Exception in Vacuumd because of location monitor changes
  • [NMS-7828] – NPE on "manage and unmanage services and interfaces"
  • [NMS-7834] – Smoke tests failing because OSGi features fail to install: "The framework has been shutdown"
  • [NMS-7835] – "No session" error during startup in EnhancedLinkdTopologyProvider
  • [NMS-7836] – KIE API JAR missing from packages
  • [NMS-7839] – Counter variables reported as strings (like Net-SNMP extent) are not stored properly when using RRDtool
  • [NMS-7844] – Some database reports are broken (ResponseTimeSummary, etc.)
  • [NMS-7845] – New Provisioning UI: 401 Error when creating a new requisition
  • [NMS-7847] – Graph results page broken when zooming
  • [NMS-7848] – Parameter descriptions are not shown anymore
  • [NMS-7852] – UnsupportedOperationException when using the JMXSecureCollector
  • [NMS-7855] – distributed details page broken
  • [NMS-7856] – Default log4j2.xml has duplicate syslogd appender, missing statsd entries
  • [NMS-7857] – Cisco Packets In/Out legend label wrong
  • [NMS-7858] – Enlinkd CDP code fails to parse hex-encoded IP address string
  • [NMS-7861] – IpNetToMedia Hibernate exception in enlinkd.log
  • [NMS-7867] – Duplicate Drools engines can be registered during Spring context refresh()
  • [NMS-7870] – PageSequenceMonitor broken in remote poller
  • [NMS-7874] – The remote poller doesn't write to the log file when running in headless mode
  • [NMS-7875] – Distributed response times are broken
  • [NMS-7877] – HttpClient ignores socket timeout
  • [NMS-7884] – RTC Ops Board category links are broken
  • [NMS-7890] – Remedy Integration: the custom code added to the Alarm Detail Page is gone.
  • [NMS-7893] – LazyInitializationException when querying the Measurements API
  • [NMS-7897] – Statsd PDF export gives class not found exception
  • [NMS-7899] – Deadlocks on Demo
  • [NMS-7900] – JMX Configgenerator Web UI throws NPE when navigating to 2nd page.
  • [NMS-7901] – Incorrect Fortinet System Disk Graph Definition
  • [NMS-7902] – Pages that contain many Backshift graphs are slow to render
  • [NMS-7907] – The default location for the JRRD2 JAR in rrd-configuration.properties is wrong.
  • [NMS-7909] – Missing dependency on the rrdtool RPM installed through yum.postgresql.org
  • [NMS-7917] – Alarm detail filters get mixed up on the ops board
  • [NMS-7921] – Startup fails with Syslogd enabled
  • [NMS-7926] – FasterFilesystemForeignSourceRepository is not working as expected
  • [NMS-7930] – Heat map ReST services just produce JSON output
  • [NMS-7935] – ClassNotFoundException JRrd2Exception
  • [NMS-7939] – HeatMap ReST Xml output fails
  • [NMS-7942] – Apache CXF brakes the ReST URLs for nodes and requisitions (because of service-list-path)
  • [NMS-7944] – Jersey 1.14 and 1.5 jars mixed in lib with Jersey 1.19
  • [NMS-7945] – Incorrect attribute types in cassandra21x data collection package
  • [NMS-7948] – Bad substitution in JMS alarm northbounder component-dao wiring
  • [NMS-7959] – Bouncycastle JARs break large-key crypto operations
  • [NMS-7962] – Missing graphs in Vaadian dashboard when storeByFs=true
  • [NMS-7963] – JSoup doesn't properly parse encoded HTML character which confuses the XML Collector
  • [NMS-7964] – MBean attribute names are restricted to a specifix max length
  • [NMS-7968] – Auto-discover is completely broken – Handling newSuspect events throws an exception
  • [NMS-7969] – JMS alarm northbounder always indicates message sent
  • [NMS-7972] – Querying the ReST API for alarms using an invalid alarmId returns HTTP 200
  • [NMS-7974] – The ICMP monitor can fail, even if valid responses are received before the timeout
  • [NMS-7977] – JMX Configuration Generation misbehavior on validation error
  • [NMS-7981] – The ReST API code throws exceptions that turns into HTTP 500 for things that should be HTTP 400 (Bad Request)
  • [NMS-7985] – New servers in install guide
  • [NMS-7997] – Background of notifications bell icon is too dark
  • [NMS-7998] – Provisiond default setting does not allow to delete monitoring entities
  • [NMS-7999] – Upgrade to commons-collections 3.2.2
  • [NMS-8001] – NPE in JMXDetector
  • [NMS-8004] – Iplike could not be installed following install guide

Enhancement

  • [NMS-1488] – Add option to the <service> element in poller-configuration.xml to specify service-specific RRD settings
  • [NMS-1910] – Additional storeByGroup capabilities
  • [NMS-2362] – Infoblox events file
  • [NMS-3479] – Adding SNMP traps for Raytheon NXU-2A
  • [NMS-4008] – Add A10 AX load balancer trap events
  • [NMS-4364] – Interactive JMX data collection configuration UI
  • [NMS-5016] – Add Force10 Event/Traps
  • [NMS-5071] – Event definition for Juniper screening SNMP traps
  • [NMS-5272] – events definiton file for DSVIEW-TRAP-MIB
  • [NMS-5397] – Trap definition files for Evertz Multiframe and Modules
  • [NMS-5398] – Trap and data collection definitions for Ceragon FibeAir 1500
  • [NMS-5791] – New (additional) event file for NetApp filer
  • [NMS-6770] – New Fortinet datacollection / graph definition
  • [NMS-7108] – DefaultResourceDao should use RRD-API to find resources
  • [NMS-7131] – MIB support for Zertico environment sensors
  • [NMS-7191] – Implement "integration with OTRS-3.1+" feature
  • [NMS-7258] – Unit tests should be able to run successfully from the start of a compile.
  • [NMS-7404] – Create a detector for XMP
  • [NMS-7520] – Remove linkd
  • [NMS-7553] – Add Juniper SRX flow performance monitoring and default thresholds
  • [NMS-7614] – Enable real SSO via Kerberos (SPNEGO) and LDAP
  • [NMS-7618] – Create opennms.properties option to make dashboard the landing page
  • [NMS-7689] – Get rid of servicemap and servermap database tables
  • [NMS-7700] – Add support for Javascript-based graphs
  • [NMS-7722] – Dell Equallogic Events
  • [NMS-7768] – Persist the CdpGlobalDeviceIdFormat
  • [NMS-7798] – Add Sonicwall Firewall Events
  • [NMS-7805] – JMS Alarm Northbounder
  • [NMS-7821] – DNS Resolution against non-local resolver
  • [NMS-7868] – Recognize Cisco ASA5580-20 for SNMP data collection
  • [NMS-7949] – Promote Compass app when mobile browser detected
  • [NMS-7986] – Document how to configure RRDtool in OpenNMS

Story

  • [NMS-7711] – nodeSource[] resource ids only work when storeByFs is enabled
  • [NMS-7894] – Flatten and improve web app style
  • [NMS-7929] – Document HeatMap ReST services
  • [NMS-7940] – Cleanup docs modules

2015 Dev-Jam: Day Four

Since I sincerely doubt even my loyal readers get to the bottom of my long posts, I figured I’d start this one with the group picture:

Dev-Jam Group Picture

That antenna behind Goldy’s head is part of Jonathan’s project to use OpenNMS to collect and aggregate FunCube data from around the world. Can I get an “Internet of Things“? (grin)

There is this myth that just by making your software open source, thousands of qualified developers will give up their spare time to work on your project. While there are certainly projects with lots of developers, I am humbled by the fact that we have 30-40 hard core people involved with OpenNMS.

Unless you’ve gone through this, it is hard to understand. At one time, OpenNMS was pretty much me in my attic and an IRC channel. Luckily for the project that didn’t last long. My one true talent is getting amazing people to work with me. Then all I have to do is create an environment where they can be awesome.

It’s why I love Dev-Jam.

I also love pizza. Chris at Papa Johns was kind enough to send us some free pie for dinner:

Dev-Jam Pizza

Today we spent time talking about documentation. Documentation tends to be the weak point of a lot of software, and open source software in particular. The Arch Linux people do about the best job I’ve seen, but even then it is hard to keep everything current. For over a year now a group of people has been working very hard to improve the documentation for OpenNMS, and the new documentation site is most excellent.

It does take a little time to understand the navigation. The documentation is included in the source and managed on GitHub, so there is a new version for each release. But just as an example, check out the Administrator’s Guide for 16.0.2.

Written in AsciiDoc, it is now the best place for accurate information on how to use the software. We also want to extend a special thanks to the Atom project for creating the editor used to create it.

One of the things we discussed was how to deal with the wiki and the .org website. It’s not practical to duplicate the AsciiDoc information on the wiki, so the plan is to include the relevant part from the documentation in something like an iframe and use the wiki more for user stories. The “talk” page can then be used for suggestions on improving the documentation, and once those suggestions are merged they can be removed.

I had suggested that we make the wiki page the default landing page for the .org site, but Markus pointed out that the landing page should be more about “why to use OpenNMS” versus “how”. I had to agree, as we need to do a better job of marketing the software. My friend Waleed pointed out this weakness on Twitter:

Twitter Comment 1 from Waleed

Twitter Comment 2 from Waleed

To better educate folks about why OpenNMS is so amazing, we are considering merging the .com and .org sites and using the .com WordPress instance for the “why you should use OpenNMS” with a very obvious link to the wiki so people can learn how to use OpenNMS. Part of me has always wanted to keep the project and commercial aspects of OpenNMS separate, but it then becomes really hard to maintain both sites.

In case you haven’t guessed, we do spend a lot of time thinking about stuff like this. (grin)

Dev-Jam Thursday

A lot of other cool stuff got done on Thursday. DJ announced that he had separated the unit tests in OpenNMS (for features) from the integration tests (for regression). OpenNMS has nearly 7,000 JUnit tests (and growing). They are the main way we ensure that nothing breaks as we work to add new things to the software, but with so many tests it can take a really long time to see whether your commit worked or not. This change should make things easier for the developers.
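In a typical Maven project that kind of split looks something like the commands below; these are generic Maven conventions (surefire for unit tests, failsafe for integration tests), not OpenNMS-specific flags:

mvn test      # run only the fast unit tests bound to surefire
mvn verify    # additionally run the integration tests bound to failsafe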

It’s hard to believe that Dev-Jam is almost over. Luckily, it sets the stage for the next year’s worth of work. Since our goal is nothing less than making OpenNMS the de facto network management platform of choice, there is a lot of work to be done.

Linux on the Dell M3800

I am way behind on a number of tech reviews, but I’m hoping to catch up soon. Please bear with me.

Earlier in the year I had a disappointing experience with the new Dell XPS 13 and Linux. It was especially hard because I really loved that hardware – not since my first Powerbook have I felt such an attachment to a laptop. I was happy to learn later that there were kernel-level issues with the hardware that had to be addressed, so it wasn’t just my lack of ability in dealing with Linux.

While that story is not over, I did send it back and decided to check out Dell’s other Ubuntu offering, the powerful M3800.

I dutifully placed an order for the Ubuntu version of the laptop, and since it is much larger than the XPS 13 there were more options. I liked the fact that I could get an SSD as well as a standard HDD, so I chose the 256GB SSD option and a 1TB HDD. I travel a lot and thought it would be cool to carry more media with me while still having a fast primary drive.

The order process was pretty painless. Still not as streamlined as the Apple Store, but not too bad.

Then I waited.

My expected arrival date kept slipping. This went on for several weeks until I got an e-mail that, due to a misconfiguration, my order was canceled.

What?

Considering that the website pretty much walks you through the ordering process and indicates any kind of impossible combination (such as a larger battery and an extra drive, since they can’t both occupy the same space) I was confused and a little torqued off.

After a few days to calm down, I decided to retry the process. It turns out that the “misconfiguration” was due to the extra drive, which was surprising. Order it with Windows? No problem. Check Ubuntu and it fails.

Grrr.

I did some investigation and was led to believe that the M3800 Ubuntu version ships with a vanilla 14.04 install. So I decided to pay the Microsoft tax and order the hardware I wanted, and then to base it and install Ubuntu.

This time the process was much smoother, and the laptop even arrived about a week earlier than they told me it would. It was a pleasant surprise.

The shipping box was a bit dinged up:

Dell M3800 Unboxing - Picture 1

But they did a good job of protecting the actual laptop box:

Dell M3800 Unboxing - Picture 2

There were actually two boxes, one holding the laptop and one holding accessories:

Dell M3800 Unboxing - Picture 3

All in all, it was a decent unboxing experience:

Dell M3800 Unboxing - Picture 4

The accessories included the power brick, a restore USB stick and a USB Ethernet adapter.

Dell M3800 Unboxing - Accesories

I liked having the Ethernet adapter since I’ve found installing Linux on a laptop works best when wired. While a lot of modern distros ship the proprietary wireless drivers needed, many times they aren’t enabled during install.

I got the HiDPI touchscreen option (3840×2160) and so I decided to install Linux Mint on it. I figured that if Ubuntu worked on it, I should be able to get Mint to work, and I prefer Cinnamon to Unity; plus Mint handles HiDPI screens much better than Ubuntu. Ubuntu has a scaling factor but it doesn’t really apply across the board, so you end up seeing things like clipped text under icons, and sometimes the selection boxes can be very small. I believe Mint does what Apple does and just doubles everything (i.e. represents each logical pixel with four physical pixels instead of one).
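If you want to force that doubling yourself, Cinnamon exposes an integer scaling factor through gsettings. The schema key below is from memory, so double-check it on your version before relying on it:

# double everything on a HiDPI panel (verify the key on your Cinnamon version)
gsettings set org.cinnamon.desktop.interface scaling-factor 2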

This system is screamingly fast (I got the Quad Core 3.3GHz CPU and 16GB of RAM) and the display is solid, but as someone who uses desktops primarily, I wasn’t used to using such a large laptop (although it was quite thin).

Dell M3800 - Mint Login Screen

Mint worked pretty well, but there was a frustrating issue with the clickpad. Sometimes I was unable to select a piece of text on the first try. On a second (or sometimes third attempt) it worked fine, but I could never get the behavior to go away entirely. I have found hints on the Intertoobz that suggest it is a known issue with Cinnamon, so perhaps it will be addressed in 17.2.

My main issue with the unit, outside of the size, was the battery life. I could sit and watch the battery percentage drop, about one percent a minute. This was in light duty mode, such as writing e-mails and browsing the web. While 100 minutes of battery life isn’t terrible, it is less than half of what I am used to.

I took a guess that part of the problem could be in the weird hybrid video controller setup they use. There is both an NVidia card and an Intel card in the unit. I installed bumblebee and that seemed to help some, but it didn’t make the power issue go away.
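For reference, the bumblebee setup on Ubuntu-based distros of that era looked roughly like this (package names shifted a bit between releases, so treat it as a sketch):

sudo apt-get install bumblebee bumblebee-nvidia primus
sudo systemctl enable bumblebeed
optirun glxgears   # run a test program on the NVidia card to confirm it works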

[Note: as an aside, many thanks to Arch for having such amazing documentation]

Dell M3800 - Mint desktop

Overall, if I had been looking for a laptop to replace my desktops, I would have tried to stick with it longer. But the size coupled with the battery issues made me send it back. I was still in love with the XPS 13, so I decided to just wait until they supported Linux on it.

Oregon

I really like visiting the state of Oregon, from Portland all the way down to Roseburg. While I haven’t been to the eastern side, the western side is beautiful, and there is also a lot of OpenNMS history there. The City of Portland uses OpenNMS, as does Earthlink Business Solutions (currently across the river in Vancouver, WA, but with a data center in Portland). Ken Eshelby, who works for the State of Oregon, has one of the most amazing installations of OpenNMS I’ve ever seen (he takes my definition of OpenNMS as a platform very seriously).

But my heart always resides with Oregon State University down in Corvallis. When I first started out on my own back in 2002, OSU was the second organization to purchase a Greenlight project. They’ve been using it for ten years, and last week they invited me back to do a “Tune My OpenNMS” project in order to get them on the latest version and to show them ways to maximize how they are using the tool.

The “Tune My OpenNMS” project (once called “Pimp My OpenNMS” until a client warned that he could never submit a purchase request with the word “pimp” in it) is three days long, so I decided to come in a day early to visit friends down in the People’s Republic of Eugene. Sunday we sat outside drinking microbrews and feasting on salmon caught off the coast, and on Monday I got to go hiking in the Tamolitch Valley.

The area has both old and new growth forests, and there are some pretty extensive lava flows. Here’s a picture of my friend Kate where you can see the lava starting to be overgrown with brush.

The path we took followed the McKenzie River, which made me wish I had brought along my waders and fly rod, but since I didn’t I had to content myself with just pointing out those places where the monster trout would live.

Our destination was a place called The Blue Pool. This is a small pool that contains (so I learned) a lot of dissolved silica. While the silica shows up as white on the tree trunks and rocks downstream, in the still water of the pool it appears as an incredibly rich blue (as it scatters blue light). There were once falls on one side of the pool, but a lava flow covered them up. Water still flows under the rock and emerges below the surface of the pool, which is why they call it a “dry” falls.

I said goodbye and headed up to Corvallis Monday night. On Tuesday I met with Joel Burks at OSU and we got down to business migrating their existing 1.6.7 install to 1.10.5. For lunch we met Bill Ayres (OGP) who was one of the leaders of the OSU OpenNMS project until he retired, and the three of us had a lot of fun.

One night we drove out to Newport to eat seafood at Local Ocean. It was amazing.

I also got to see the controversial mural located downtown that illustrates atrocities committed by Chinese soldiers against Tibetan monks.

I thought it was quite the coincidence that I was in Corvallis when a news story broke that wasn’t about OSU athletic scores.

I didn’t get a paper accepted at OSCON this year so this was my only trip to the area, and I had missed it. I hope to return soon.