2024 FOSS Backstage

I was a speaker at this year’s FOSS Backstage conference, which was held 4-5 March in Berlin, Germany. FOSS Backstage is a gathering dedicated to the “non-code” aspects of open source, and I was surprised I had not heard of it before this year. This was the sixth edition of the event, and it was held in a new location due to growing attendance.

TL;DR: I really, really enjoyed this conference, to the point where it is in contention to be my favorite one of the year. The attendees were knowledgeable and friendly, the conference was well run, and it was not so big that I felt I was missing out due to there being too many tracks. I hope I am able to attend again next year.

This was my first time in Berlin, and although I have been to Germany on numerous other trips, for some reason I have never made it to this historic city. It does have a reputation for being a center for hacker culture in Europe, hosting the annual Chaos Communication Congress, and several of my friends who were at FOSS Backstage told me they were in Berlin quite often.

The event was held at a co-working space and we had access to a lobby, a large auditorium, and then downstairs there were two smaller rooms: Wintergarten and a “workshop” room that was used mainly for remote speakers. Each day started off with a keynote in the auditorium followed by breakout sessions of two or three tracks across the three available rooms.

Monday’s keynote (video) was by Melanie Rieback, who is the CEO and Co-founder of a computer security company called Radically Open Security, a not-for-profit security consulting firm. Her company donates 90% of net profits to charity, and they openly share the tools and techniques they use so that their clients can learn to audit their security on their own.

As someone who spends way too much time focused on open source business models, it was encouraging to see a company like Radically Open Security succeed and thrive. But I wasn’t sure I bought into the premise that all open source companies should be like hers. Consulting firms have a particular business model that is similar to those used by accounting firms, management firms or other service firms such as plumbers or HVAC repair. Software companies have a much different model. For example, I am writing this using WordPress. I didn’t have to pay someone to show me how to use it. If WordPress wants to continue to produce software they need to make money in a different fashion, such as in their case with hosting services and running a marketplace. Those products require capital to create, and since that often can’t be bootstrapped, this means they have investors, investors who will one day expect a return.

Now it is easy to find examples of where investors, specifically venture capitalists, did bad things, but we can’t rule out the model entirely. If you use a Linux desktop, most likely you are using software that companies like Red Hat and Canonical helped to create. Both of those companies are for-profit and have (or in the case of Red Hat, had) investors. The Linux desktop would simply not exist in its current form without them.

The keynote did, however, make me think, which is one of the main reasons I come to conferences.

[Note: I used WordPress as an example because it was handy, and we can discuss the current concerns about selling data for GenAI use another time (grin)]

After the keynote the breakout sessions started, and I headed downstairs to hear Dawn Foster talk about understanding the viability of open source projects (video). Dr. Foster is the Director of Data Science for CHAOSS, a Linux Foundation project for Community Health Analytics in Open Source Software.

Open source viability isn’t something a lot of people think about. Many of us just kind of assume that a piece of open source software will just always be there and always be kept up to date and secure. This can be a dangerous assumption, as illustrated by a famous xkcd comic that I saw no less than three times during the two-day conference.

In my $JOB we often use data analytics to examine the health of a project. In addition to metrics such as number of releases, bugs and pull requests, we also look at something called the Pony Factor and the Elephant Factor.

I’ve been using the term Pony Factor for two years now and while I can trace its origin I’m not sure how it got its name. To calculate it, simply rank all the contributors to an open source project by the number of contributions (PRs, lines of code, whatever you think is best) and then start counting from the largest to smallest until you get to 50% of all contributions (usually over a period of time). For example, if for a given month you have 20 contributions and the largest contributor was responsible for 6 and the second largest for 5 you would have a Pony Factor of 2, since the sum of 6 and 5 is 11 which is more than 50% of 20. It is similar to the Bus Factor, which is a little more grisly in that it counts the number of contributors who could get hit by a bus before the project becomes non-viable. People leave open source projects for a number of reasons (and I am thankful that it is rarely of the “hit by bus” type) and if you depend on that project you have a vested interest in its health.

The Elephant Factor is similar, except you count the number of organizations that contribute to a project. In the example above, if the two contributors both worked for the same company, then the project’s Elephant Factor would be 1 (the number of organizations responsible for at least 50% of the project’s contributions). While we often assume that open source software is a pure meritocracy based on the community that creates it, a low Elephant Factor means that the project is controlled by a small number of parties. This isn’t intrinsically a bad thing, but it could result in the interests of those organizations outweighing the interests of the project as a whole.
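Both factors boil down to the same calculation, just grouped differently. As a sketch (all the names and numbers below are invented for illustration), it might look like:

```python
# Sketch: compute Pony and Elephant Factors from contribution counts.
# All contributor and organization names here are made up.

def smallest_group_covering_half(counts):
    """Count how many of the largest contributors it takes to
    reach at least 50% of all contributions."""
    total = sum(counts.values())
    running = 0
    for i, n in enumerate(sorted(counts.values(), reverse=True), start=1):
        running += n
        if running * 2 >= total:
            return i
    return len(counts)

# Contributions per contributor over some time window (20 total)
by_contributor = {"alice": 6, "bob": 5, "carol": 3, "dave": 3, "erin": 2, "frank": 1}
pony = smallest_group_covering_half(by_contributor)  # 6 + 5 = 11 of 20, so 2

# The same contributions rolled up by employer
by_org = {"acme": 11, "initech": 6, "unaffiliated": 3}
elephant = smallest_group_covering_half(by_org)  # 11 of 20, so 1
```

This mirrors the worked example above: the top two contributors cover 11 of 20 contributions (Pony Factor of 2), and if both work for the same company, one organization covers more than half (Elephant Factor of 1).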

This was just part of what Dawn covered, as there are other metrics to consider, and she didn’t go into detail about what you can do when open source projects that are important to you have low Pony/Elephant factors, but I found the presentation very interesting.

As a nice segue from Dawn’s talk, the next presentation I attended was on how to change the governance model of an existing open source project (video), given by Ruth Cheesley. Ruth is the project lead for Mautic, and they faced an issue when their project, which had an Elephant Factor of 1, found out that the company behind it was no longer going to support development.

Now I want to admit upfront that I will get some of the details of this story wrong, but it is my understanding that Mautic, which does marketing automation, was originally sponsored by Acquia, the company behind the Drupal content management system and other projects. When Acquia decided to step back from the project, those involved had to either pivot to a different governance model or it would die.

There is a myth about open source that simply releasing code under an OSI-approved license means that hundreds of qualified people will give up their nights and weekends to work on it. Creating a sustainable open source community takes a lot of effort, and one of the main tools for building that community is the governance model. No one wants to be a part of a community where they feel their contributions aren’t appreciated and their opinions are not heard, and fewer still want to work in an environment that can be overly aggressive or even hostile. Ruth talked about the path that her project took and how it directly impacted the success of Mautic.

The next session I attended was a panel discussion on open source being used in the transportation, energy, automotive and public sector realms (video). I’m not a big fan of panel discussions, and I was also surprised to see that all the panelists were men (making this a “manel”). FOSS Backstage did a really good job of promoting diversity in other aspects of the conference (and the moderator was female, but I don’t think that avoids the “manel” definition). It was cool to add another data point that “open source is everywhere” and it was interesting to see where each of the panelists were in their “open source journey”. A couple of them seemed to have a good understanding of what it gets them but some others were obviously in the “what the heck have we gotten ourselves into” phase. I did get introduced to Wolfgang Gehring for the first time. Wolfgang works for Mercedes, and I’m an unabashed Mercedes fan. I’ve owned at least one car from most of the humanly affordable brands in my lifetime, and six of them have been Mercedes (I’ve also owned four Ford trucks). While Wolfgang obviously knows his stuff when it comes to open source, I don’t think he can score me F1 tickets. (sniff)

After the lunch break I also revisited Wolfgang and his presentation on Inner Source (video). Most people understand that open source is code that you can see, modify and share. Most open source licenses are based on copyright law, so they don’t apply until you actually “share” the code with a third party. What happens if you want to use open source solely within an organization? In the case of large organizations you might have a number of disparate groups all working on similar projects, and there can be advantages to organizing them. Not only can they share code, they can also share experiences and build a community, albeit an internal one, to maximize the value of the open source solutions they use.

The next three sessions I attended represented kind of a “mini” AWS track. The first one featured Spot Callaway. Spot is something of a savant when it comes to thinking up fun ways to get people involved in open source communities. I’ve known Spot for over two decades and I’ve worked on his team for the last two years, and it is amazing to watch his mind at work. His talk charted a history of his involvement in coming up with cool and weird ways to engage with people in open source (video), and I was around for some of his efforts so I can attest to their effectiveness (and that they were, indeed, fun).

The second session was by Kyle Davis. I had only met him once before seeing him again in Berlin, and this was the first time I’d seen him speak. His topic was on the importance of how you write when it comes to open source projects (video). Now, AWS is very much a document-driven culture, and my own writing style has changed in the two years I’ve been there (goodbye passive voice). Kyle’s presentation talked about considerations you should make when communicating to an open source community. Realizing that your information may be read by people from different cultures and where, say, English isn’t a first language can go a long way toward making your project feel more inclusive.

Rich Bowen presented the third session, and he discussed how to talk to management about open source (video). My favorite part of this talk is when he posted a disclaimer that while many managers don’t understand open source, his does (our boss, David Nalley, is currently the President of the Apache Software Foundation).

I made up this graphic which I will use every time I need a disclaimer in the future.

The last session presenter was Carmen Delgado who talked about getting new contributors involved in open source (video). She examined three such programs (Outreachy, Google Summer of Code, and CANOSP) and compared and contrasted these different “flavors” of programs to encourage open source involvement.

Monday’s presentations ended with a set of “lightning” talks. I’ve always wanted to do one of these – a short talk of no more than five minutes in length (there was a timer) – but my friends point out that I can’t introduce myself in five minutes, much less give a whole, useful talk.

Two talks stood out for me. In the first one the speaker brought her young daughter (in a Pokémon top, ‘natch) and it really made me glad to see people getting into open source at a young age.

I also liked Miriam Seyffarth’s presentation on the Open Source Business Alliance. I was happy to see both MariaDB and the fine folks at Netways are involved.

Tuesday started with a remote keynote (video) given by Giacomo Tenaglia from CERN. As a physics nerd I’ve always wanted to visit CERN. I know a few people who work there but I have not been able to schedule a trip. I was surprised to learn that the CERN Open Source Programs Office (OSPO) is less than a year old. Considering the sheer amount of open source software used by academia and research I would have expected it to be much older.

The next talk was definitely the worst of the bunch (video), but I had to attend since I was giving it (grin). As this blog can attest, I’ve been working in open source for a long time, and I’ve spent way too much of it thinking about open source business models. There are a number of them, but my talk comes to the conclusion that the best way to create a vibrant open source business is to offer a managed (hosted) version of the software you create. I’ve found that when it comes to open source, people are willing to pay for three things: simplicity, security and stability. If you can offer a service that makes open source easier to use, more secure and/or more stable, you can create a business that can scale.

I took a picture just before I started and I was humbled that by the time I was finished the room was packed. Attention is in great demand these days and I really appreciated folks giving me theirs.

I then attended a talk by Celeste Horgan on growing a maintainer community (video). While I have written computer code I do not consider myself a developer, yet I felt that I was able to bring value to the open source project I worked on for decades. This session covered how to get non-coders to contribute and how to manage a project as it grows.

Brian Proffitt gave a talk, quite relevant to my current role, on the difficulties of measuring the return on investment (ROI) of being involved in open source events (video). While I almost always assume engagement with open source communities will generate positive value, how do you put a dollar figure on it? For example, event sponsorship usually gets you a booth space in whatever exhibit hall or expo the event is hosting. When I was at OpenNMS we would sometimes decline the booth because while we wanted to financially sponsor the conference, we couldn’t afford to do that and host a booth. There are a lot of tangible expenses associated with booth duty, such as swag and the travel expenses for the people working in it, as well as intangible expenses such as opportunity cost. For commercial enterprises attendance at an event can be measured in things like how many orders or leads were generated. That doesn’t usually happen at open source events. It turns out that it isn’t an easy question to answer but Brian had some suggestions.

For most of the conference there were two “in person” tracks and one remote track. The only remote track I attended was a talk given by Ashley Sametz comparing outlaw biker gangs to open source communities. Ashley is amazing (she used to work on our team before pursuing a career in law enforcement) and I really enjoyed her talk. Both communities are tightly knit, have their own jargon and different ways of attracting people to the group.

While I wouldn’t have called them part of an “outlaw motorcycle gang”, many years ago I got to meet a bunch of people in a motorcycle club. It was just before Daytona Bike Week and a lot of people were riding down to Florida. North Carolina is a good stopping point about midway into the trip. While it was explained to me and my friend David that “Mama is having a few folks over for a cookout, you two should come” we were a little surprised to find out that all those bikes we saw on the way there were also coming. There were at least 100 bikes, many cars, and one cab from a semi-tractor trailer. It was amazing. If you have ever seen the first part of the movie Mask that is just what it was like. And yes I could rock the John Oates perm back then.

That session ended up being the last session I attended that day. I spent the rest of the conference in the hallway track and got to meet a lot of really interesting people.

That evening I did manage to get Jägerschnitzel, which was on my list of things to do while in Germany. Missed the Döner kebab, however.

I found FOSS Backstage to be well worth attending. I wish I’d known about it earlier, so perhaps this post will get more people interested in attending next year. Open source is so much more than code and it was nice to see an event focused on the non-code aspects of it.

Using rclone to Sync Data to the Cloud

I am working hard to digitize my life. Last year I moved for the first time in 24 years and I realized I have way too much stuff. A lot of that stuff was paper, in the form of books and files, so I’ve been busy trying to get digital copies of all of it. Also, a lot of my life was already digital. I have e-mails starting in 1998 and a lot of my pictures were taken with a digital camera.

TL;DR: This is a tutorial for using the open source rclone command line tool to securely synchronize files to a cloud storage provider, in this case Backblaze. It is based on MacOS but should work in a similar fashion on other operating systems.

That brings up the issue of backups. A friend of mine was the victim of a home robbery, and while they took a number of expensive things the most expensive was his archive of photos. It was irreplaceable. This has made me paranoid about backing up my data. I have about 500GB of “must save” data and around 7TB of “would be nice to save” data.

At my old house the best option I had for network access was DSL. It was usable for downstream but upstream was limited to about 640kbps. At that rate I might be able to backup my data – once.

I can remember in college we were given a test question about moving a large amount of data across the United States. The best answer was to put a physical drive in a FedEx box and overnight it there. So in that vein my backup strategy was to buy three Western Digital MyBooks. I created a script to rsync my data to the external drives. One I kept in a fire safe at the house. It wasn’t guaranteed to survive a hot fire in there (fire safes are rated to protect paper, which tolerates much higher temperatures than a hard drive can) but there was always a chance it might, depending on where the fire was hottest. I took the other two drives and stored one at my father’s house and the other at a friend’s house. Periodically I’d take out the drive from the safe, rsync it, and switch it with one of the remote drives. I’d then rsync that drive and put it back in the safe.

It didn’t keep my data perfectly current, but it would mitigate any major loss.

At my new house I have gigabit fiber. It has symmetrical upload and download speeds so my ability to upload data is much, much better. I figured it was time to choose a cloud storage provider and set up a much more robust way of backing up my data.

I should stress that when I use the term “backup” I really mean “sync”. I run MacOS and I use the built-in Time Machine app for backups. The term “backup” in this case means keeping multiple copies of files, so not only is your data safe, but if you happen to screw up a file you can also go back and get a previous version.

Since my offsite “backup” strategy is just about dealing with a catastrophic data loss, I don’t care about multiple versions of files. I’m happy just having the latest one available in case I need to retrieve it. So it is more about synchronizing my current data with the remote copy.

The first thing I had to do was choose a cloud storage provider. Now as my three readers already know I am not a smart person, but I surround myself with people who are. I asked around and several people recommended Backblaze, so I decided to start out with that service.

Now I am also a little paranoid about privacy, so anything I send to the cloud I want to be encrypted. Furthermore, I want to be in control of the encryption keys. Backblaze can encrypt your data but they help you manage the keys, and while I think that is fine for many people it isn’t for me.

I went in search of a solution that both supported Backblaze and contained strong encryption. I have a Synology NAS which contains an application called “Cloud Sync” and while that did both things I wasn’t happy that while the body of the file was encrypted, the file names were not. If someone came across a file called WhereIBuriedTheMoney.txt it could raise some eyebrows and bring unwanted attention. (grin)

Open source to the rescue. In trying to find a solution I came across rclone, an MIT licensed command-line tool that lets you copy and sync data to a large number of cloud providers, including Backblaze. Furthermore, it is installable on MacOS using the very awesome Homebrew project, so getting it on my Mac was as easy as

$ brew install rclone

However, as with most open source tools, free software does not mean a free solution, so I did have a small learning curve to climb. I wanted to share what I learned in case others find it useful.

Once rclone is installed it needs to be configured. Run

$ rclone config

to access a script to help with that. In rclone syntax a cloud provider, or a particular bucket at a cloud provider, is called a “remote”. When you run the configurator for the first time you’ll get the following menu:

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Select “n” to set up a new remote, and it will ask you to name it. Choose something descriptive but keep in mind you will use this on the command line so you may want to choose something that isn’t very long.

Enter name for new remote.
name> BBBackup

The next option in the configurator will ask you to choose your cloud storage provider. Many are specific commercial providers, such as Backblaze B2, Amazon S3, and Proton Drive, but some are generic, such as Samba (SMB) and WebDav.

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
   \ (s3)
 6 / Backblaze B2
   \ (b2)

...

I chose “6” for Backblaze.

At this point in time you’ll need to set up the storage on the provider side, and then access it using an application key.

Log in to your Backblaze account. If you want to try it out note that you don’t need any kind of credit card to get started. They will limit you to 10GB (and I don’t know how long it stays around) but if you want to play with it before deciding just remember you can.

Go to Buckets in the menu and click on Create a Bucket.

Note that you can choose to have Backblaze encrypt your data, but since I’m going to do that with rclone I left it disabled.

Once you have your bucket you need to create an application key. Click on Application Keys in the menu and choose Add a New Application Key.

Now one annoying issue with Backblaze is that all bucket names have to be unique across the entire system, so “rcloneBucket” and “Media1” etc. have already been taken. Since I’m just using this as an example it was fine for the screenshot, but note that when I add an application key I usually limit it to a particular bucket. When you click on the dropdown it will list available buckets.

Once you create a new key, Backblaze will display the keyID, the keyName and the applicationKey values on the screen. Copy them somewhere safe because you won’t be able to get them back. If you lose them you can always create a new key, but you can’t modify a key once it has been created.

Now with your new keyID, return to the rclone configuration:

Option account.
Account ID or Application Key ID.
Enter a value.
account> xxxxxxxxxxxxxxxxxxxxxxxx

Option key.
Application Key.
Enter a value.
key> xxxxxxxxxxxxxxxxxxxxxxxxxx

This will allow rclone to connect to the remote cloud storage. Finally, rclone will ask you a couple of questions. I just choose the defaults:

Option hard_delete.
Permanently delete files on remote removal, otherwise hide files.
Enter a boolean value (true or false). Press Enter for the default (false).
hard_delete>

Edit advanced config?
y) Yes
n) No (default)
y/n>

The last step is to confirm your remote configuration. Note that you can always go back and change it later if you want.

Configuration complete.
Options:
- type: b2
- account: xxxxxxxxxxxxxxxxxxxxxx
- key: xxxxxxxxxxxxxxxxxxxxxxxxxx
Keep this "BBBackup" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

At this point in time, quit out of the configurator for a moment.

You may have realized that we have done nothing with respect to encryption. That is because we need to add a wrapper service around our Backblaze remote to make this work (this is that there learning curve thing I mentioned earlier).

While I don’t know if this is true or not, it was recommended that you not put encrypted files in the root of your bucket. I can’t really see why it would hurt, but just in case we should put a folder in the bucket at which we can then point the encrypted remote. With Backblaze you can use the webUI or you can just use rclone. I recommend the latter since it is a good test to make sure everything is working. On the command line type:

$ rclone mkdir BBBackup:rcloneBackup/Backup

2024/01/23 14:13:25 NOTICE: B2 bucket rcloneBackup path Backup: Warning: running mkdir on a remote which can't have empty directories does nothing

To test that it worked you can look at the WebUI and click on Browse Files, or you can test it from the command line as well:

$ rclone lsf BBBackup:rcloneBackup/
Backup/

Another little annoying thing about Backblaze is that the File Browser in the webUI isn’t in real time, so if you do choose that method note that it may take several minutes for the directory (and later any files you send) to show up.

Okay, now we just have one more step. We have to create the encrypted remote, so go back into the configurator:

$ rclone config

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Enter name for new remote.
name> crypt

Just like last time, choose a name that you will be comfortable typing on the command line. This is the main remote you will be using with rclone from here on out. Next we have to choose the storage type:

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)

...

14 / Encrypt/Decrypt a remote
   \ (crypt)
15 / Enterprise File Fabric
   \ (filefabric)
16 / FTP
   \ (ftp)
17 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
18 / Google Drive
   \ (drive)

...

Storage> crypt

You can type the number (currently 14) or just type “crypt” to choose this storage type. Next we have to point this new remote at the first one we created:

Option remote.
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a value.
remote> BBBackup:rcloneBackup/Backup

Note that it contains the name of the remote (BBBackup), the name of the bucket (rcloneBackup), and the name of the directory we created (Backup). Now for the fun part:

Option filename_encryption.
How to encrypt the filenames.
Choose a number from below, or type in your own string value.
Press Enter for the default (standard).
   / Encrypt the filenames.
 1 | See the docs for the details.
   \ (standard)
 2 / Very simple filename obfuscation.
   \ (obfuscate)
   / Don't encrypt the file names.
 3 | Adds a ".bin", or "suffix" extension only.
   \ (off)
filename_encryption>

This is the bit where you get to solve the filename problem I mentioned above. I always choose the default, which is “standard”. Next you get to encrypt the directory names as well:

Option directory_name_encryption.
Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (true).
 1 / Encrypt directory names.
   \ (true)
 2 / Don't encrypt directory names, leave them intact.
   \ (false)
directory_name_encryption>

I chose the default of “true” here as well. Look, I don’t expect to ever become the subject of an in-depth digital forensics investigation, but the less information out there the better. Should Backblaze ever get a subpoena to let someone browse through my files on their system, I want to minimize what they can find.

Finally, we have to choose a passphrase:

Option password.
Password or pass phrase for encryption.
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:

Option password2.
Password or pass phrase for salt.
Optional but recommended.
Should be different to the previous password.
Choose an alternative below. Press Enter for the default (n).
y) Yes, type in my own password
g) Generate random password
n) No, leave this optional password blank (default)
y/g/n>

Now, unlike your application key ID and password, these passwords you need to remember. If you lose them then you will not be able to get access to your data. I did not choose a salt password but it does appear to be recommended. Now we are almost done:

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: crypt
- remote: BBBackup:rcloneBackup/Backup
- password: *** ENCRYPTED ***
Keep this "crypt" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Now your remote is ready to use. Note that when using a remote with encrypted files and directories do not use the Backblaze webUI to create folders underneath your root or rclone won’t recognize them.

I bring this up because there is one frustrating thing with rclone. If I want to copy a directory to the cloud storage remote it copies the contents of the directory and not the directory itself. For example, if I type on the command line:

$ cp -r Music /Media

it will create a “Music” directory under the “Media” directory. But if I type:

$ rclone copy Music crypt:Media

it will copy the contents of the Music directory into the root of the Media directory. To get the outcome I want I need to run:

$ rclone mkdir crypt:Media/Music

$ rclone copy Music crypt:Media/Music

Make sense?

While rclone has a lot of commands, the ones I have used are “mkdir” and “rmdir” (just like on a regular command line) and “copy” and “sync”. I use “copy” for the initial transfer and then “sync” for subsequent updates.

Now all I have to do for cloud synchronization is set up a crontab to run these commands on occasion (I set mine up for once a day).
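As a sketch, the crontab entry might look something like this (the source path and log file name are invented for illustration, and the rclone path assumes a Homebrew install on an Intel Mac; on Apple Silicon Homebrew lives under /opt/homebrew instead):

```
# Sync my local Media directory to the encrypted remote every night at 2:00 AM.
# Cron needs the full path to rclone since it runs with a minimal environment.
0 2 * * * /usr/local/bin/rclone sync /Users/me/Media crypt:Media --log-file /Users/me/rclone-sync.log
```

Logging to a file is optional, but it is handy for confirming the nightly run actually happened.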

I can check that the encryption is working by using the Backblaze webUI. First I see the folder I created to hold my encrypted files:

But the directories in that folder have names that sound like I’m trying to summon Cthulhu:

As you can see from this graph, I was really eager to upload stuff once I got this working:

and on the first day I sent up nearly 400GB of files. Backblaze B2 pricing is currently $6/TB/month, and this seems about right:

I have since doubled my storage so it should run about 20 cents a day. Note that downloading your data is free up to three times the amount of data stored. In other words, you could download all of the data you have in B2 three times in a given month and not incur fees. Since I am using this simply for catastrophic data recovery I shouldn’t have to worry about egress fees.
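The arithmetic behind that estimate is easy to check. As a sketch, assuming roughly 0.8TB stored at $6/TB/month over a 30-day month:

```shell
# Daily cost: storage (TB) x price ($/TB/month) / days in the month.
awk 'BEGIN { printf "$%.2f/day\n", 0.8 * 6 / 30 }'
# prints "$0.16/day"
```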

I am absolutely delighted to have this working and extremely impressed with rclone. For my needs open source once again outshines commercial offerings. And remember if you have other preferences for cloud storage providers you have a large range of choices, and the installation should be similar to the one I did here.

2023 Percona Live – Day 1

The first day of sessions at Percona Live saw me recovered from the food poisoning I experienced on Monday. It was a miserable experience but I’m happy that it didn’t last very long.

Whenever I go to a conference I always like the opening keynotes as they tend to set the tone for the rest of the event. The room in which the keynotes were held was dominated by a large screen featuring the new Percona logo.

The show was opened by Dave Stokes who, like me, is a technology evangelist.

He welcomed us all to the conference and covered the usual housekeeping notes before turning the stage over to Ann Schlemmer, who is the new CEO of Percona.

Schlemmer took over as CEO from founder Peter Zaitsev last autumn, and she seems to have settled into her new role pretty well.

One of the topics she covered was the new Percona Logo.

While I can’t do the description justice, it represents mountains, which refer both to the bedrock on which Percona solutions are built and to the challenges people sometimes have to overcome when working in IT (think climbing the mountain). The sun represents shining light into dark places, and it also kind of looks like a “P” (while the mountains themselves look like the “A” in the name).

At least that is what I took away from it. (grin)

I asked her later if they designed it in-house or if they hired an outside firm and she told me they did it themselves. Either way I like it and think they did a good job.

She was followed by Peter Zaitsev, one of the two Founders of Percona.

I first met Peter at this year’s FOSDEM back in February. When I found out he lived near me I invited him to lunch and we had a great discussion of open source business models and open source in general. As someone who once ran an open source services company, I identify strongly with his business, although he has been more successful than I was.

He is also known for not holding back when he has a strong opinion, and as part of his talk on the state of open source relational databases he leveled some criticism on AWS, who is also my employer.

Note: These thoughts on my personal blog are mine and mine alone, and may or may not align with my employer, Amazon Web Services.

One of the reasons I joined AWS was to take on the challenge of changing both the perception and processes by which Amazon interacts with open source communities. I’m part of a wonderful team and I think we have made progress toward that goal, so while I won’t either agree or disagree with Peter’s statements, my hope is to earn enough trust that there will be no need to have this as a topic in future conversations.

Peter ended his presentation by bringing up Ann and officially passing the torch by gifting her with a dartboard with his face on it, to be used whenever she might feel the need.

It took a couple of tries before the dart stuck, mainly because Peter had kept the dart in his back pocket and forgot to take off the safety cover on the sharp tip.

The next keynote speaker, Rachel Stephens, was new to me, although I’ve known about the company she works for, Redmonk, for a long time.

Redmonk is an analyst firm focused on software development, and she had my attention by basing her presentation on The Princess Bride, one of my favorite movies. It is very quotable, and she had slides like this one:

She also had a slide where she used the term “fauxpen” source:

Back in 2009 I hosted a party in which I was trying to explain open source vs. open core to a non-technical friend of mine. He replied “oh, so it’s fauxpen source”. I immediately registered the domain name (although I no longer own it as it was sold when I sold my company). I did a search back then and I could find no other references to the term, but I’ve seen it a number of times since. I like to think I had some part in popularizing it but it is clever enough that I’m certain others came up with it, too.

After the keynotes the individual sessions began, and since I’m not a DBA a lot of them are over my head. I did go to the one by Jignesh Shah, who is the General Manager of open source databases for Amazon RDS.

Jignesh gave a “state of” talk on the AWS offerings in this space, and also announced that the “trusted language extensions” feature for PostgreSQL that was introduced last autumn now supports Rust.

As I understand it, trusted language extensions give cloud providers a way to allow their users to extend the functionality of the database without introducing security concerns. There is a limit to what languages can be used, however, due to the fact that there may be no way to “sandbox” the extension from being able to access, say, the memory used by the database. The C language was not supported for that reason.

Support for Rust allows end users to create powerful extensions in a language similar to C but with memory protections.

After his talk I spent some time wandering around the sponsor showcase. This is not a large conference, probably around 300 people, and so the “expo” is simply a hallway with booths along one wall. I actually like this because it facilitates easier interactions between attendees and sponsors.

The “premium” sponsors (AWS, Microsoft and Percona) had slightly larger booths on one end of the hallway.

Jignesh had brought along a number of AWS subject matter experts, and there was a lot of activity at the booth, as it gave folks a way to ask questions and get answers from the people best able to provide them.

One last note on Day 1 is that lunch was an actual buffet and not the boxed lunch you often get at such conferences.

While I live in North Carolina and have almost sacred opinions on pork barbecue and cornbread, it was pretty good. The only complaint I would make is that the baked beans were not labeled well since, as is common in the South, they appeared to include small pieces of pork. As many of the attendees are vegetarian, it would have been nice to either offer them without meat or make it clear that meat was included.

I was just happy that the food was good and didn’t result in the same issues I experienced after lunch on Monday (grin)

Obligatory 20 Year Blog Post

Not to misquote the Beatles, but it was 20 years ago today that I posted my first entry to this blog.

By 2003 blogs were pretty popular, so I was somewhat late to the game. My friend Ben Reed had a blog that he used kind of like a proto-Twitter, where he would post many times during the day on what he was doing, which at the time focused on porting KDE to Mac OS X. Back then a lot of open source projects used blogs as a communication platform, and since I was maintaining an open source project I figured I should start one. He used Movable Type as his blogging software, so I did as well.

Movable Type was very popular back then, but when they moved their licensing to a more proprietary model, people were turned off and migrated to WordPress. I find it delightfully ironic that WordPress, which is open source, now forms the basis for around 40% of all websites, whereas most people these days have probably never heard of Movable Type.

If there happen to be any younger readers here, blogs twenty years ago were like podcasts today: practically everyone had one. Also like podcasts, most were sporadically updated, which is why Really Simple Syndication (RSS) became important. RSS is a format that lets you find out when websites are updated. Using a “news reader” like Google Reader, you could aggregate all the websites you were interested in following into one application. It was pretty cool.

But then along came social media sites and what people used to post on blogs they started posting there instead of on their own sites. Even with a lot of hosting options, running a blog is incrementally harder than posting to, say, Facebook. In 2013 Google killed Reader which pretty much ended blogging (although I still use RSS and find that the open source Nextcloud News is a great Reader replacement).

But I’m old and stubborn so I kept blogging. In fact I think I have something like five or six blogs that I update periodically. I use another blogging technology called a “planet” to aggregate all of those blogs so my three readers can easily keep up with what I’m doing.

Another thing that social media brought about was this idea of engagement. People still look at metrics such as number of followers as an indication of how far a particular post reached, and even when I started this thing folks would brag about their stats. As a contrarian I took the opposite approach and decided that I’d be happy if just three people read my posts. I got a chuckle the first time someone came up to me and said “hey, I’m one of your three readers”. Made the whole thing much more personal.

And to me blogging is personal. I love to write and the best way to become a better writer is to do it. A lot. I really wish I had more time to post but between my job (which involves a lot of writing) and the farm it is hard to find the time. As someone who loves the culture around open source software, sharing is key and I hope some of the stuff I’ve posted here has helped someone else as so many other blogs have helped me.

That’s about it for this update. I would promise that I’ll post more often and with better content in the future but I don’t like to lie (grin), and in any case thanks for reading.

2022 Open Source Summit – Day 3

Thursday at the Open Source Summit started as usual at the keynotes.

Picture of Robin Bender Ginn on stage

Robin Bender Ginn opened today’s session with a brief introduction and then we jumped into the first session by Matt Butcher of Fermyon.

Picture of Matt Butcher on stage

I’ve enjoyed these keynotes so far, but to be honest nothing has made me go “wow!” as much as this presentation by Fermyon. I felt like I was witnessing a paradigm shift in the way we provide services over the network.

To digress quite a bit, I’ve never been happy with the term “cloud”. One oft-repeated story is that the cloud got its name from the Visio icon for the Internet, which was a cloud (it’s not true). I’ve always preferred the term “utility computing”: to me cloud services should be similar to other utilities such as electricity and water, where you are billed based on how much you use.

Up until this point, however, instead of buying just electricity it has been more like you are borrowing someone else’s generator. You still have to pay for infrastructure.

Enter “serverless”. While there are many definitions of serverless, the idea is that when you are not using a resource your cost should be zero. I like this definition because, of course, there have to be servers somewhere, but under the utility model you shouldn’t be paying for them if you aren’t using them. This is even better than normal utilities because, for example, my electricity bill includes fees for things such as the meter, and even if I don’t use a single watt I still have to pay for something.

Getting back to the topic at hand, the main challenge with serverless is how do you spin up a resource fast enough to be responsive to a request without having to expend resources when it is quiescent? Containers can take seconds to initialize and VMs much longer.

Fermyon hopes to address this by applying WebAssembly to microservices. WebAssembly (Wasm) was created to allow high performance applications, written in languages other than JavaScript, to be served via web pages, although as Fermyon went on to demonstrate this is not its only use.

The presentation used a game called Finicky Whiskers to demonstrate the potential. Slats the cat is a very finicky eater. Sometimes she wants beef, sometimes chicken, sometimes fish and sometimes vegetables. When the game starts Slats will show you an icon representing the food she wants, and you have to tap or click on the right icon to feed her. After a short time, Slats will change her choice and you have to switch icons. You have 30 seconds to feed her as many correct treats as possible.

Slide showing infrastructure for Finicky Whiskers: 7 microservices, Redis in a container, Nomad cluster on AWS, Fermyon

Okay, so I doubt it will have the same impact on game culture as Doom, but they were able to implement it using only seven microservices, all in Wasm. There is a detailed description on their blog, but I liked the fact that it was language agnostic. For example, the microservice that controls the session was written in Ruby, but the one that keeps track of the tally was written in Rust. The cool part is that these services can be spun up on the order of a millisecond or less, and the whole demo runs on three t2.small AWS instances.

This is the first implementation I’ve seen that really delivers on the promise of serverless, and I’m excited to see where it will go. But don’t let me put words into their mouth, as they have a blog post on Fermyon and serverless that explains it better than I could.

Picture of Carl Meadows on stage

The next presentation was on OpenSearch by Carl Meadows, a Director at AWS.

Note: Full disclosure, I am an AWS employee and this post is a personal account that has not been endorsed or reviewed by my employer.

OpenSearch is an open source (Apache 2.0 licensed) set of technologies for storing large amounts of text that can then be searched and visualized in near real time. Its main use case is making sense of streaming data that you might get from, say, log files or other types of telemetry. It uses the Apache Lucene search engine, and the latest version is based on Lucene 9.1.

One of the best ways to encourage adoption of an open source solution is by having it integrate with other applications. With OpenSearch this has traditionally been done using plugins, but there is an initiative underway to create an “extension” framework.

Plugins have a number of shortcomings, especially in that they tend to be tightly coupled to a particular version of OpenSearch, so if a new version comes out your existing plugins may not be compatible until they, too, are upgraded. I run into this with a number of applications I use such as Grafana and it can be annoying.

The idea behind extensions is to provide an SDK and API that are much more resistant to changes in OpenSearch so that important integrations are decoupled from the main OpenSearch application. This also provides an extra layer of security as these extensions will be more isolated from the main code.

I found this encouraging. It takes time to build a community around an open source project, but one of the best ways to do it is to provide easy methods to get involved, and extensions are a step in the right direction. In addition, OpenSearch has decided not to require a Contributor License Agreement (CLA) for contributions. While I have strong opinions on CLAs, this should make contributing more appealing to developers who don’t like them.

Picture of Taylor Dolezal on stage

The next speaker was Taylor Dolezal from the Cloud Native Computing Foundation (CNCF). I liked him from the start, mainly because he posted a picture of his dog:

Slide of a white background with the head and sad eyes of a cute black dog

and it looks a lot like one of my dogs:

Picture of the head of my black Doberman named Kali

Outside of having a cool dog, Dolezal has a cool job and talked about building community within the CNCF. Just saying “hey, here’s some open source code” doesn’t mean that qualified people will give up nights and weekends to work on your project, and his experiences can be applied to other projects as well.

The final keynote was from Chris Wright of Red Hat and talked about open source in automobiles.

Picture of Chris Wright on stage

A while ago I actually applied for a job with Red Hat to build a community around their automotive vertical (I didn’t get it). I really like cars and I thought that combining that with open source would be a dream job (plus I wanted the access). We are on the cusp of a sea change with automobiles as the internal combustion engine gives way to electric motors. Almost all manufacturers have announced the end of production for ICEs, and electric cars are much more software focused. Wright showed a quote predicting that automobile companies will need four times the software-focused talent they have now.

A slide with a quote stating that automobile companies will need more than four times the software talent they have now

I think this is going to be a challenge, as the automobile industry is locked into 100+ years of “this is the way we’ve always done it”. For example, in many states it is still illegal to sell cars outside of a dealership. When it comes to technology, these companies have recently been focused on locking their customers into high-margin proprietary features (think navigation) and only recently have they realized that they need to be more open, such as supporting Android Auto or CarPlay. As open source has disrupted most other areas of technology, I expect it to do the same for the automobile industry. It is just going to take some time.

I actually found some time to explore a bit of Austin outside the conference venue. Well, to be honest, I went looking for a place to grab lunch and all the restaurants near the hotel were packed, so I decided to walk further out.

Picture of the wide Colorado River from under the Congress Avenue bridge

The Colorado River flows through Austin, and so I decided to take a walk on the paths beside it. Texas rivers figure in the latest Neal Stephenson novel, Termination Shock. I really enjoyed reading it and, spoiler alert, it does actually have an ending (fans of Stephenson’s work will know what I’m talking about).

I walked under the Congress Avenue bridge, which I learned was home to the largest urban bat colony in the world. I heard mention at the conference of “going to watch the bats” and now I had context.

A sign stating that drones were not permitted to fly near the bat colony under the Congress Avenue bridge

Back at the Sponsor Showcase I made my way over to the Fermyon booth where I spent a lot of time talking with Mikkel Mørk Hegnhøj. When I asked if they had any referenceable customers he laughed, as they have only been around for a very short amount of time. He did tell me that in addition to the cat game they had a project called Bartholomew that is a CMS built on Fermyon and Wasm, and that was what they were using for their own website.

Picture of the Fermyon booth with people clustered around

If you think about it, it makes sense, as a web server is, at its heart, a fileserver, and those already run well as a microservice.

They had a couple of devices up so that people could play Finicky Whiskers, and if you got a score of 100 or more you could get a T-shirt. I am trying to simplify my life which includes minimizing the amount of stuff I have, but their T-shirts were so cool I just had to take one when Mikkel offered.

Note that when I got back to my room and actually played the game, I came up short.

A screenshot of my Finicky Whiskers score of 99

The Showcase closed around 4pm and a lot of the sponsors were eager to head out, but air travel disruptions affected a lot of them. I’m staying around until Saturday and so far so good on my flights. I’m happy to be traveling again but I can’t say I’m enjoying this travel anxiety.

[Note: I overcame my habit of sitting toward the back and off to the side, so the quality of the speaker pictures has improved greatly.]

On Leaving OpenNMS

It is with mixed emotions that I am letting everyone know that I’m no longer associated with The OpenNMS Group.

Two years ago I was in a bad car accident. I suffered some major injuries which required 33 nights in the hospital, five surgeries and several months in physical therapy. What was surprising is that while I had always viewed myself as somewhat indispensable to the OpenNMS Project, it got along fine without me.

Also during this time, The OpenNMS Group was acquired. For fifteen years we had survived on the business plan of “spend less money than you earn”. While it ensured the longevity of the company and the project, it didn’t allow much room for us to pursue ideas because we had no way to fund them. We simply did not have the resources.

Since the acquisition, both the company and the project have grown substantially, and this was during a global pandemic. With OpenNMS in such a good place I began to think, for the first time in twenty years, about other options.

I started working with OpenNMS in September of 2001. I refer to my professional career before then as “Act I”, with my time at OpenNMS as “Act II”. I’m now ready to see what “Act III” has in store.

While I’m excited about the possibilities, I will miss working with the OpenNMS team. They are an amazing group of people, and it will be hard to replace the role they played in my life. I’m also eternally grateful to the OpenNMS Community, especially the guys in the Order of the Green Polo who kept the project alive when we were starting out. You are and always will be my friends.

When I was responsible for hiring at OpenNMS, I ended every offer letter with “Let’s go do great things”. I consider OpenNMS to be a “great thing” and I am eager to watch it thrive with its new investment, and I will always be proud of the small role I played in its success.

If you are doing great things and think I could contribute to your team, check out my profile on LinkedIn or Xing.

Review: Serval WS Laptop by System76

TL;DR; When I found myself in the market for a beefy laptop, I immediately ordered the Serval WS from System76. I had always had a great experience dealing with them, but times have changed. It has been sent back.

I can’t remember the first time I heard about the Serval laptop by System76. In a world where laptops were getting smaller and thinner, they were producing a monster of a rig. Weighing ten pounds without the power brick, it aimed to squeeze a high performance desktop into a (somewhat) portable form factor.

I never thought I’d need one, as I tend to use desktops most of the time (including a Wild Dog Pro at the office) and I want a light laptop for travel as it pretty much just serves as a terminal and I keep minimal information on it.

Recently we’ve been experimenting with office layouts, and our latest configuration has me trading my office for a desk with the rest of the team, and I needed something that could be moved in case I need to get on a call, record a video or get some extra privacy.

Heh, I thought, perhaps I could use the Serval after all.

I like voting for open source with my wallet. My last two laptops have been Dell “Sputnik” systems (2nd gen and 5th gen) since I wanted to support Dell shipping Linux systems, and when we decided back in 2015 that the iMacs we used for training needed to be replaced, I ordered six Sable Touch “all in one” systems from System76. The ordering process was smooth as silk and the devices were awesome. We still get compliments from our students.

A year later when my HP desktop died, I bought the aforementioned Wild Dog Pro. Again, customer service to match if not rival the best in the business, and I was extremely happy with my new computer.

Jump forward to the present. Since I was in the market for a “luggable” system, performance was more important than size or weight, so I ordered a loaded Serval WS, complete with the latest Intel i9 processor, 64GB of speedy RAM, NVidia 1080 graphics card, and oodles of disk space. Bwah ha ha.

When it showed up, even I was surprised at how big it was.

Serval WS and Brick

Here you can see it in comparison to an old Apple keyboard. It was solidly built, and I was eager to plug it in and turn it on.

Serval WS

The screen was really bright, even in my bright office at the time. You can see from the picture that the laptop was big enough to contain a full-sized keyboard and a numeric keypad. This didn’t really matter much to me, as I was planning on using it with an awesome external monitor and keyboard, but it was a nice touch. I still like having a second screen since we rely heavily on Mattermost and always like to keep a window in view, so I figured I could use the laptop screen for that.

I had ordered the system with Ubuntu installed. My current favorite operating system is Linux Mint but I decided to play with Ubuntu for a little bit. This was my first experience with Ubuntu post Unity and I must say, I really liked it. Kind of made me eager to try out Pop!_OS which is the System76 distro based on Ubuntu.

When installing Mint I discovered that I made a small mistake when placing my Serval order. I meant to use a 2TB drive as the primary leaving a 1TB drive for use by TimeShift for backup. I reversed them. No real issue, as I was able to install Mint on the 2TB drive just fine after some creative partition manipulation.

Everything was cool until late afternoon when the sun went away. I was rebooting the system and found myself looking at a blank screen (for some reason the screen stays blank for a minute or so after powering on the laptop, I assume due to it having 64GB of RAM). There was a tremendous amount of “bleed” around the edges of the LCD.

Serval WS LCD Bleed

Damn.

Although it probably wouldn’t have impacted me much in day to day use, especially with an external monitor, I would know about it, and as I’m somewhere on the OCD spectrum it would bother me. Plus I paid a lot of money for this system and want it to be as close to perfect as possible.

For those of you who don’t know, the liquid crystals in LCD displays emit no light of their own; they are usually illuminated by a backlight (historically fluorescent, now commonly LED). If there are issues with the way the LCD panel is constructed, this light can “bleed” around the edges and degrade the display quality (it is also why it is hard to get really black images on LCD displays, which is fueling a lot of the excitement around OLED technology).

I’ve had issues with this before on laptops but nothing this bad. Not to worry, I have System76 to rely on, along with their superlative customer service.

I called the number and soon I was speaking with a support technician. When I described the problem they opened a ticket and asked me to send in a picture. I did and then waited for a response.

And waited.

And waited.

I commented on the ticket.

And I continued to wait.

The next day I waited a bit (Denver is two hours behind where I live) but when I got no response I decided, well, I’ll just return the thing. I called to get an RMA number but this time I wasn’t connected with a person and was asked to leave a message. I did, and I should note that I never got that return call.

At this point I’m frustrated, so I decided an angry tweet was in order. That got a response to my ticket, where they offered to send me a new unit.

Yay, here was a spark of the customer service I was used to getting. I’ve noticed a number of tech companies are willing to deal with defective equipment by sending out a new unit before the old unit is returned. In this day and age of instant gratification it is pretty awesome.

I wrote back that I was willing to try another unit, and asked if it would be possible to put Pop!_OS on the 2TB drive of the new unit so that I could try it out of the box and know that all of the System76-specific packages were installed.

A little while later I got a reply that it wouldn’t be possible to install it on the 2TB drive, so I would end up having to reinstall in any case.

(sigh)

When I complained on Twitter I was told “Sorry to hear this, you’ll receive a phone call before EOD to discuss your case.” I worked until 8pm that night with no phone call, so I just decided to return the thing.

Of course, this would be at my expense and the RMA instructions were strict about requiring shipping insurance: “System76 cannot refund your purchase if the machine arrives damaged. For this reason, it is urgent that you insure your package”. The total cost was well over $100.

So I’m out a chunk of change and I’ve lost faith in a vendor of which I was extremely fond. This is a shame since they are doing some cool things such as building computers in the United States, but since they’ve lost sight of what made them great in the first place I have doubts about their continued success.

In any case, I ordered a Dell Precision 5530, which is one of the models available with Ubuntu. Much smaller and not as powerful as the Serval WS, it is also not as expensive. I’ll post a review in a couple of weeks when I get it.

Help Get OpenNMS Packaged by Bitnami

As someone who has used OpenNMS for, well, many years, I think it is a breeze to get installed. Simply add the repository to your server, install the package(s), run the installer and start it.

However, there are a number of new users who still have issues getting it installed. This problem is not limited to OpenNMS; it exists across a number of open source projects.

Enter Bitnami. Bitnami is a project to package applications to make them easier to install: natively, in the cloud, in a container or as a virtual machine. Ronny pointed out that OpenNMS is listed in their “wishlist” section, and if we can get enough votes, perhaps they will add it to their stack.

Bitnami also happens to have a great team, led in part by the ever amazing Erica Brescia. As I write this we have fewer than 50 votes, with the current leaders at over 1200, so there is a long way to go. I’d appreciate your support, and once you vote you get a second chance to vote again via the socials.

Bitnami OpenNMS Wishlist

Thanks, and thanks to Bitnami for the opportunity.

OpenNMS Meridian 2016 Released

I am woefully behind on blog posts, so please forgive the latency in posting about Meridian 2016.

As you know, early last year we split OpenNMS into two flavors: Horizon and Meridian. The goal was to create a faster release cycle for OpenNMS while still providing a stable and supportable version for those who didn’t need the latest features.

This has worked out extremely well. While there used to be eighteen months or so between major releases, we did five versions of Horizon in the same amount of time. That has led to the rapid development of such features as the Newts integration and the Business Service Monitor (BSM).

But that doesn’t mean the features in Horizon are perfect on Day One. For example, one early adopter of the Newts integration in Horizon 17 helped us find a couple of major performance issues that were corrected by the time Meridian 2016 came out.

The Meridian line is supported for three years. So, if you are using Meridian 2015 and don’t need any of the features in Meridian 2016, you don’t need to upgrade. Fixes for major performance issues and all security issues, as well as most of the new configurations, will be backported to that release until Meridian 2018 comes out.

Compare and contrast that with Horizon: once Horizon 18 was released, all work stopped on Horizon 17. This means a much more rapid upgrade cycle. The upside is that Horizon users get to see all the new shiny features first.

Meridian 2016 is based on Horizon 17, which has been out since the beginning of the year and has been highly vetted. Users of Horizon 17 or earlier should have an easy migration path.

I’m very happy that the team has consistently delivered on both Horizon and Meridian releases. My hope is that this new model will keep OpenNMS on the cutting edge of the network monitoring space while providing a more stable option for those with environments that require it.

Upgrading Linux Mint 17.3 to Mint 18 In Place

Okay, I thought I could wait, but I couldn’t, so yesterday I decided to do an “in place” upgrade of my office desktop from Linux Mint 17.3 to Mint 18.

It didn’t go smoothly.

First, let me stress that the Linux Mint community strongly recommends a fresh install every time you upgrade from one release to another, and especially when it is from one major release, like Mint 17, to another, i.e. Mint 18. They ask you to back up your home directory and package lists, reinstall the base system, and then restore. The problem is that I often make a lot of changes to my system, which usually involves editing files under the system /etc directory, and this procedure doesn’t capture those changes.
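For what it’s worth, here is a minimal sketch of how those /etc changes and the package list could be captured before a fresh install. This is my own ad hoc approach, not an official Mint procedure, and the file names are just illustrative:

```shell
# Snapshot the whole /etc tree so local configuration edits survive
# a reinstall (file name is arbitrary; pick whatever suits you)
sudo tar czf ~/etc-backup-$(date +%F).tar.gz /etc

# Save the list of installed packages so it can be replayed later
# with "dpkg --set-selections" followed by "apt-get dselect-upgrade"
dpkg --get-selections > ~/package-selections.txt
```

After the fresh install you would restore your home directory, selectively copy files back out of the /etc tarball, and reinstall packages from the saved list.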

One thing I’ve always loved about Debian is the ability to upgrade in place (and often remotely) and this holds true for Debian-based distros like Ubuntu and Mint. So I was determined to try it out.

I found a couple of posts that suggested all you need to do is replace “rosa” with “sarah” in your repository file, and then do an “apt-get update” followed by an “apt-get dist-upgrade”. That doesn’t work, as I found out, because Mint 18 is based on Xenial (Ubuntu 16.04) and not Trusty (Ubuntu 14.04). Thus, you also need to replace every instance of “trusty” with “xenial” to get it to work.
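The substitutions can be scripted rather than done by hand. Here is a sketch of what that looks like, assuming the stock Mint repository file lives at /etc/apt/sources.list.d/official-package-repositories.list (that path was correct on my system, but check yours first):

```shell
REPO=/etc/apt/sources.list.d/official-package-repositories.list

# Keep a copy of the original in case the upgrade goes sideways
sudo cp "$REPO" "$REPO.bak"

# Retarget the Mint repos at "sarah" and the Ubuntu repos at "xenial"
sudo sed -i -e 's/rosa/sarah/g' -e 's/trusty/xenial/g' "$REPO"

# Refresh the package index and perform the full upgrade
sudo apt-get update
sudo apt-get dist-upgrade
```

Note that the sed replacement is a blunt instrument: it rewrites every occurrence of those strings in the file, comments included, which was fine for my purposes.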

Finally, once I got that working, I couldn’t get into the graphical desktop. Cinnamon wouldn’t load. It turns out Cinnamon is in a “backport” branch for some reason, so I had to add that to my repository file as well.

To save trouble for anyone else wanting to do this, here is my current /etc/apt/sources.list.d/official-package-repositories.list file:

deb http://packages.linuxmint.com sarah main upstream import backport #id:linuxmint_main
# deb http://extra.linuxmint.com sarah main #id:linuxmint_extra

deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ xenial partner

Note that I commented out the “extra” repository since one doesn’t exist for sarah yet.

The upgrade took a long time. We have a decent connection to the Internet at the office, and it still took over an hour to download the packages. There were a number of conflicts I had to deal with, but overall that part of the process was smooth.

Things seem to be working, and the system seems a little faster but that could just be me wanting it to be faster. Once again many thanks to the Mint team for making this possible.