Computer Nostalgia

Two stories this week caught me up in a little bit of old computer nostalgia. The first was that chip manufacturer Zilog was ending production of the Z80 CPU chip, and the second was that NASA managed to restore communications with the Voyager 1 spacecraft.

Now to be honest I didn’t know that Zilog still made the Z80, and I was impressed it had such a long run. My very first computer was a Radio Shack TRS-80 that I got for Christmas in 1978, when I was 12 years old. At a price of $599 (about $3k today) it was an expensive present, but my father saw it as an investment in my future. I spent a lot of time on that machine, and I thought it was cool that “TRS” were the consonants in my first name.

Compared to the simplest computers in use today it is a dinosaur. The Z80 had a 1.78MHz clock (that’s “mega” hertz) and could only address 64kB of memory. The operating system and programming language (BASIC) were stored in ROM, leaving the rest of the address space for RAM. My initial machine came with 4kB of RAM, and by March of 1979 I had written my first program that wouldn’t fit within that limit.

I upgraded it to “Level II” ROM (which ate up 16kB) and 48kB of RAM to max out the machine, and I eventually added two floppy disk drives.

Back in the day, finding information on computers, especially in my rural North Carolina town, was difficult. I got a lot of it from subscribing to computer magazines, which is how I learned about assembly language. I couldn’t afford an actual assembler, so I would hand-assemble my code and enter it into the system by typing in pairs of hexadecimal digits. What ties this whole experience to Voyager 1 is that what the team at NASA did to restore communications was similar to things I did with the TRS-80.

Another thing I couldn’t afford was a printer. Dot matrix printers at the time ran about $2k, or about $9500 in today’s currency (you could buy a new Toyota Corolla for less than $4k). My Dad, however, worked for General Electric and they had just discontinued the TermiNet 300 – a beast of a machine – and he managed to get one cheap. The way it printed was crazy. It had a bank of 118 “hammers” that faced the paper. In between the hammers and the paper was a band containing little metal “fingers” oriented vertically, and on each finger was one of the characters the machine could print. The band rotated around at high speed, and when the machine wanted to produce a character, the ink ribbon would pop up and a hammer would hit the specific finger for that character as it flew by. That it could do this at 30 characters a second was amazing, and loud.

The problem was that it didn’t have a parallel interface and was instead serial, so I had to buy an RS-232C board and write a printer driver. It was then that I kind of fell in love with the Z80 instruction set. Note that I had to come up with each instruction, convert it to the proper hex code and then “POKE” it into memory (I could then “PEEK” to make sure it worked). Luckily printer drivers aren’t that complicated (take a byte as input, send it to the interface as output). It worked, and I made a lot of extra money typing term papers for other students (the advantage of the TermiNet was its typewriter-quality output).

I can only imagine the NASA engineers sitting around doing the same thing to fix Voyager (hand assembling code, not typing term papers).

Modern computing today is so abstracted from what I learned. Now you can just ask GenAI to write a program for you. I’m not one of those old guys who thinks what I went through was better, but it was nice to see some of those techniques prove useful today.

And I should point out that Voyager 2 is working just fine, which goes to show that you should never buy the first release of a technology product (grin).

2024 FOSS Backstage

I was a speaker at this year’s FOSS Backstage conference, which was held 4-5 March in Berlin, Germany. FOSS Backstage is a gathering dedicated to the “non-code” aspects of open source, and I was surprised I had not heard of it before this year. This was the sixth edition of the event, and it was held in a new location due to growing attendance.

TL;DR: I really, really enjoyed this conference, to the point where it is in contention to be my favorite one of the year. The attendees were knowledgeable and friendly, the conference was well run, and it was not so big that I felt I was missing out due to there being too many tracks. I hope I am able to attend again next year.

This was my first time in Berlin, and although I have been to Germany on numerous other trips, for some reason I have never made it to this historic city. It does have a reputation for being a center for hacker culture in Europe, hosting the annual Chaos Communication Congress, and several of my friends who were at FOSS Backstage told me they were in Berlin quite often.

The event was held at a co-working space and we had access to a lobby, a large auditorium, and then downstairs there were two smaller rooms: Wintergarten and a “workshop” room that was used mainly for remote speakers. Each day started off with a keynote in the auditorium followed by breakout sessions of two or three tracks across the three available rooms.

Monday’s keynote (video) was by Melanie Rieback, CEO and co-founder of Radically Open Security, a not-for-profit computer security consulting firm. Her company donates 90% of net profits to charity, and they openly share the tools and techniques they use so that their clients can learn to audit their security on their own.

As someone who spends way too much time focused on open source business models, it was encouraging to see a company like Radically Open Security succeed and thrive. But I wasn’t sure I bought in to the premise that all open source companies should be like hers. Consulting firms have a particular business model that is similar to those used by accounting firms, management firms or other service firms such as plumbers or HVAC repair. Software companies have a much different model. For example, I am writing this using WordPress. I didn’t have to pay someone to show me how to use it. If WordPress wants to continue to produce software they need to make money in a different fashion, such as in their case with hosting services and running a marketplace. Those products require capital to create, and since that often can’t be bootstrapped, this means they have investors, investors who will one day expect a return.

Now it is easy to find examples of where investors, specifically venture capitalists, did bad things, but we can’t rule out the model entirely. If you use a Linux desktop, most likely you are using software that companies like Red Hat and Canonical helped to create. Both of those companies are for-profit and have (or in the case of Red Hat, had) investors. The Linux desktop would simply not exist in its current form without them.

The keynote did, however, make me think, which is one of the main reasons I come to conferences.

[Note: I used WordPress as an example because it was handy, and we can discuss the current concerns about selling data for GenAI use another time (grin)]

After the keynote the breakout sessions started, and I headed downstairs to hear Dawn Foster talk about understanding the viability of open source projects (video). Dr. Foster is the Director of Data Science for CHAOSS, a Linux Foundation project for Community Health Analytics in Open Source Software.

Open source viability isn’t something a lot of people think about. Many of us just kind of assume that a piece of open source software will always be there and always be kept up to date and secure. This can be a dangerous assumption, as illustrated by a famous xkcd comic that I saw no fewer than three times during the two-day conference.

In my $JOB we often use data analytics to examine the health of a project. In addition to metrics such as number of releases, bugs and pull requests, we also look at something called the Pony Factor and the Elephant Factor.

I’ve been using the term Pony Factor for two years now and while I can trace its origin I’m not sure how it got its name. To calculate it, simply rank all the contributors to an open source project by the number of contributions (PRs, lines of code, whatever you think is best) and then count, from largest to smallest, until the running total exceeds 50% of all contributions (usually over a period of time). For example, if for a given month you have 20 contributions and the largest contributor was responsible for 6 and the second largest for 5, you would have a Pony Factor of 2, since the sum of 6 and 5 is 11, which is more than 50% of 20. It is similar to the Bus Factor, which is a little more grisly in that it counts the number of contributors who could get hit by a bus before the project becomes non-viable. People leave open source projects for a number of reasons (and I am thankful that it is rarely of the “hit by bus” type), and if you depend on a project you have a vested interest in its health.

The Elephant Factor is similar, except you count the number of organizations that contribute to a project. In the example above, if the two contributors both worked for the same company, then the project’s Elephant Factor would be 1 (the number of organizations responsible for at least 50% of the project’s contributions). While we often assume that open source software is a pure meritocracy based on the community that creates it, a low Elephant Factor means that the project is controlled by a small number of parties. This isn’t intrinsically a bad thing, but it could result in the interests of those organizations outweighing the interests of the project as a whole.
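Both calculations can be sketched on the command line. This is just a toy illustration with made-up contribution counts (matching the example above); in practice you would pull the numbers from git history or the CHAOSS tooling, and the email-domain trick is only a rough stand-in for real organization data:

```shell
# Pony Factor: count top contributors until their running total
# exceeds 50% of all contributions. The sample data is hypothetical.
printf 'alice 6\nbob 5\ncarol 4\ndave 3\neve 2\n' |
  sort -k2,2 -rn |
  awk '{ total += $2; n[NR] = $2 }
       END {
         half = total / 2
         for (i = 1; i <= NR; i++) {
           sum += n[i]
           if (sum > half) { print "Pony Factor: " i; exit }
         }
       }'
# prints "Pony Factor: 2"

# Elephant Factor: the same idea, but grouped by organization,
# here approximated by each contributor's email domain.
printf 'alice@acme.example\nbob@acme.example\nbob@acme.example\ncarol@other.example\n' |
  sed 's/.*@//' | sort | uniq -c | sort -rn |
  awk '{ total += $1; n[NR] = $1 }
       END {
         half = total / 2
         for (i = 1; i <= NR; i++) {
           sum += n[i]
           if (sum > half) { print "Elephant Factor: " i; exit }
         }
       }'
# prints "Elephant Factor: 1"
```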

This was just part of what Dawn covered, as there are other metrics to consider, and she didn’t go into detail about what you can do when open source projects that are important to you have low Pony/Elephant factors, but I found the presentation very interesting.

As a nice segue from Dawn’s talk, the next presentation I attended was on how to change the governance model of an existing open source project (video), given by Ruth Cheesley. Ruth is the project lead for Mautic, and they faced an issue when their project, which had an Elephant Factor of 1, found out that the company behind it was no longer going to support development.

Now I want to admit upfront that I will get some of the details of this story wrong, but it is my understanding that Mautic, which does marketing automation, was originally sponsored by Acquia, the company behind the Drupal content management system and other projects. When Acquia decided to step back from the project, those involved had to either pivot to a different governance model or it would die.

There is the myth about open source that simply releasing code under an OSI-approved license means that hundreds of qualified people will give up their nights and weekends to work on it. Creating a sustainable open source community takes a lot of effort, and one of the main tools for building that community is the governance model. No one wants to be a part of a community where they feel their contributions aren’t appreciated and their opinions are not heard, and fewer still want to work in an environment that can be overly aggressive or even hostile. Ruth talked about the path that her project took and how it directly impacted the success of Mautic.

The next session I attended was a panel discussion on open source being used in the transportation, energy, automotive and public sector realms (video). I’m not a big fan of panel discussions, and I was also surprised to see that all the panelists were men (making this a “manel”). FOSS Backstage did a really good job of promoting diversity in other aspects of the conference (and the moderator was female, but I don’t think that avoids the “manel” definition). It was cool to add another data point that “open source is everywhere” and it was interesting to see where each of the panelists were in their “open source journey”. A couple of them seemed to have a good understanding of what it gets them but some others were obviously in the “what the heck have we gotten ourselves into” phase. I did get introduced to Wolfgang Gehring for the first time. Wolfgang works for Mercedes, and I’m an unabashed Mercedes fan. I’ve owned at least one car from most of the humanly affordable brands in my lifetime, and six of them have been Mercedes (I’ve also owned four Ford trucks). While Wolfgang obviously knows his stuff when it comes to open source, I don’t think he can score me F1 tickets. (sniff)

After the lunch break I also revisited Wolfgang and his presentation on Inner Source (video). Most people understand that open source is code that you can see, modify and share. Most open source licenses are based on copyright law, so they don’t apply until you actually “share” the code with a third party. What happens if you want to use open source solely within an organization? In the case of large organizations you might have a number of disparate groups all working on similar projects, and there can be advantages to organizing them. Not only can they share code, they can also share experiences and build a community, albeit an internal one, to maximize the value of the open source solutions they use.

The next three sessions I attended represented kind of a “mini” AWS track. The first one featured Spot Callaway. Spot is something of a savant when it comes to thinking up fun ways to get people involved in open source communities. I’ve known Spot for over two decades and I’ve worked on his team for the last two years, and it is amazing to watch his mind at work. His talk charted a history of his involvement in coming up with cool and weird ways to engage with people in open source (video), and I was around for some of his efforts so I can attest to its effectiveness (and that it was, indeed, fun).

The second session was by Kyle Davis. I had only met him once before seeing him again in Berlin, and this was the first time I’d seen him speak. His topic was the importance of how you write when it comes to open source projects (video). Now, AWS is very much a document-driven culture, and my own writing style has changed in the two years I’ve been there (goodbye passive voice). Kyle’s presentation talked about considerations you should make when communicating to an open source community. Realizing that your information may be read by people from different cultures and where, say, English isn’t a first language can go a long way toward making your project feel more inclusive.

Rich Bowen presented the third session, and he discussed how to talk to management about open source (video). My favorite part of this talk is when he posted a disclaimer that while many managers don’t understand open source, his does (our boss, David Nalley, is currently the President of the Apache Software Foundation).

I made up this graphic which I will use every time I need a disclaimer in the future.

The last session presenter was Carmen Delgado who talked about getting new contributors involved in open source (video). She examined three such programs: Outreachy, Google Summer of Code, and CANOSP and compared and contrasted these three different “flavors” of programs to encourage open source involvement.

Monday’s presentations ended with a set of “lightning” talks. I’ve always wanted to do one of these – a short talk of no more than five minutes in length (there was a timer) – but my friends point out that I can’t introduce myself in five minutes, much less give a whole, useful talk.

Two talks stood out for me. In the first one the speaker brought her young daughter (in a Pokémon top, ‘natch) and it really made me glad to see people getting into open source at a young age.

I also liked Miriam Seyffarth’s presentation on the Open Source Business Alliance. I was happy to see both MariaDB and the fine folks at Netways are involved.

Tuesday started with a remote keynote (video) given by Giacomo Tenaglia from CERN. As a physics nerd I’ve always wanted to visit CERN. I know a few people who work there but I have not been able to schedule a trip. I was surprised to learn that the CERN Open Source Program Office (OSPO) is less than a year old. Considering the sheer amount of open source software used by academia and research I would have expected it to be much older.

The next talk was definitely the worst of the bunch (video), but I had to attend since I was giving it (grin). As this blog can attest, I’ve been working in open source for a long time, and I’ve spent way too much of it thinking about open source business models. There are a number of them, but my talk comes to the conclusion that the best way to create a vibrant open source business is to offer a managed (hosted) version of the software you create. I’ve found that when it comes to open source, people are willing to pay for three things: simplicity, security and stability. If you can offer a service that makes open source easier to use, more secure and/or more stable, you can create a business that can scale.

I took a picture just before I started and I was humbled that by the time I was finished the room was packed. Attention is in great demand these days and I really appreciated folks giving me theirs.

I then attended a talk by Celeste Horgan on growing a maintainer community (video). While I have written computer code I do not consider myself a developer, yet I felt that I was able to bring value to the open source project I worked on for decades. This session covered how to get non-coders to contribute and how to manage a project as it grows.

Brian Proffitt gave a talk, very relevant to my current role, on the difficulties of measuring the return on investment (ROI) of being involved in open source events (video). While I almost always assume engagement with open source communities will generate positive value, how do you put a dollar figure on it? For example, event sponsorship usually gets you a booth space in whatever exhibit hall or expo the event is hosting. When I was at OpenNMS we would sometimes decline the booth because while we wanted to financially sponsor the conference, we couldn’t afford to do that and host a booth. There are a lot of tangible expenses associated with booth duty, such as swag and the travel expenses for the people working in it, as well as intangible expenses such as opportunity cost. For commercial enterprises attendance at an event can be measured in things like how many orders or leads were generated. That doesn’t usually happen at open source events. It turns out that it isn’t an easy question to answer, but Brian had some suggestions.

For most of the conference there were two “in person” tracks and one remote track. The only remote track I attended was a talk given by Ashley Sametz comparing outlaw biker gangs to open source communities. Ashley is amazing (she used to work on our team before pursuing a career in law enforcement) and I really enjoyed her talk. Both communities are tightly knit, have their own jargon and different ways of attracting people to the group.

While I wouldn’t have called them part of an “outlaw motorcycle gang”, many years ago I got to meet a bunch of people in a motorcycle club. It was just before Daytona Bike Week and a lot of people were riding down to Florida. North Carolina is a good stopping point about midway into the trip. While it was explained to me and my friend David that “Mama is having a few folks over for a cookout, you two should come” we were a little surprised to find out that all those bikes we saw on the way there were also coming. There were at least 100 bikes, many cars, and one cab from a semi-tractor trailer. It was amazing. If you have ever seen the first part of the movie Mask that is just what it was like. And yes I could rock the John Oates perm back then.

That session ended up being the last session I attended that day. I spent the rest of the conference in the hallway track and got to meet a lot of really interesting people.

That evening I did manage to get Jägerschnitzel, which was on my list of things to do while in Germany. Missed the Döner kebab, however.

I found FOSS Backstage to be well worth attending. I wish I’d known about it earlier, so perhaps this post will get more people interested in attending next year. Open source is so much more than code and it was nice to see an event focused on the non-code aspects of it.

This Blog Can Now Vote

It’s hard to believe but this blog is now 21 years old, having started back on this day in 2003. In the beginning I only had the one reader, but now I’m up to a whole three readers! I’m hoping by the time it turns 30 I can get to four.

I am writing this from the French Quarter in New Orleans, where I am attending a meeting. I was trying to think about a topic for this auspicious anniversary, and my go-to was to complain, once again, about the death of Google Reader also killing blogs, but instead I thought I’d just mention a few of the events I’ll be attending in the first part of 2024.

Next week I’ll be heading to Berlin for FOSS Backstage. I’ve been to Germany many times and love visiting that country, and this will be my first visit to Berlin so I am looking forward to it. I’m speaking on open source business models, which is a topic I’m passionate about. It should be a lot of fun.

April will find me in Seattle for the Open Source Summit. I won’t be speaking, but I will be in the AWS booth and would love to meet up if you happen to be attending as well.

Finally, in May I’ll be in Paris for the inaugural Open Source Founders Summit. If you run a commercial open source company, or are thinking about starting an open source company, consider applying to attend this conference. Emily Omier is bringing together pretty much the brain-trust when it comes to open source business (and me!) and it promises to be a great discussion on how to make money with open source while remaining true to its values.

Using rclone to Sync Data to the Cloud

I am working hard to digitize my life. Last year I moved for the first time in 24 years and I realized I have way too much stuff. A lot of that stuff was paper, in the form of books and files, so I’ve been busy trying to get digital copies of all of it. Also, a lot of my life was already digital. I have e-mails starting in 1998 and a lot of my pictures were taken with a digital camera.

TL;DR: This is a tutorial for using the open source rclone command line tool to securely synchronize files to a cloud storage provider, in this case Backblaze. It is based on macOS but should work in a similar fashion on other operating systems.

That brings up the issue of backups. A friend of mine was the victim of a home robbery, and while the thieves took a number of expensive things the most costly was his archive of photos. It was irreplaceable. This has made me paranoid about backing up my data. I have about 500GB of “must save” data and around 7TB of “would be nice to save” data.

At my old house the best option I had for network access was DSL. It was usable for downstream but upstream was limited to about 640kbps. At that rate I might be able to back up my data – once.

I can remember in college we were given a test question about moving a large amount of data across the United States. The best answer was to put a physical drive in a FedEx box and overnight it. So in that vein my backup strategy was to buy three Western Digital MyBooks. I created a script to rsync my data to the external drives. One I kept in a fire safe at the house. It wasn’t guaranteed to survive a hot fire in there (fire safes are rated to protect paper, which can withstand much higher temperatures than a hard drive can) but there was always a chance it might, depending on where the fire was hottest. I took the other two drives and stored one at my father’s house and the other at a friend’s house. Periodically I’d take out the drive from the safe, rsync it, and swap it with one of the remote drives. I’d then rsync that drive and put it back in the safe.

It didn’t keep my data perfectly current, but it would mitigate any major loss.

At my new house I have gigabit fiber. It has symmetric upload and download speeds, so my ability to upload data is much, much better. I figured it was time to choose a cloud storage provider and set up a much more robust way of backing up my data.

I should stress that when I use the term “backup” I really mean “sync”. I run macOS and I use the built-in Time Machine app for backups. The term “backup” in this case means keeping multiple versions of files, so not only is your data safe, if you happen to screw up a file you can go back and get a previous version.

Since my offsite “backup” strategy is just about dealing with a catastrophic data loss, I don’t care about multiple versions of files. I’m happy just having the latest one available in case I need to retrieve it. So it is more a matter of synchronizing my current data with the remote copy.

The first thing I had to do was choose a cloud storage provider. Now as my three readers already know I am not a smart person, but I surround myself with people who are. I asked around and several people recommended Backblaze, so I decided to start out with that service.

Now I am also a little paranoid about privacy, so anything I send to the cloud I want to be encrypted. Furthermore, I want to be in control of the encryption keys. Backblaze can encrypt your data but they help you manage the keys, and while I think that is fine for many people it isn’t for me.

I went in search of a solution that both supported Backblaze and contained strong encryption. I have a Synology NAS which contains an application called “Cloud Sync” and while that did both things I wasn’t happy that while the body of the file was encrypted, the file names were not. If someone came across a file called WhereIBuriedTheMoney.txt it could raise some eyebrows and bring unwanted attention. (grin)

Open source to the rescue. In trying to find a solution I came across rclone, an MIT licensed command-line tool that lets you copy and sync data to a large number of cloud providers, including Backblaze. Furthermore, it is installable on macOS using the very awesome Homebrew project, so getting it on my Mac was as easy as

$ brew install rclone

However, as with most open source tools, free software does not mean a free solution, so I did have a small learning curve to climb. I wanted to share what I learned in case others find it useful.

Once rclone is installed it needs to be configured. Run

$ rclone config

to access a script to help with that. In rclone syntax a cloud provider, or a particular bucket at a cloud provider, is called a “remote”. When you run the configurator for the first time you’ll get the following menu:

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Select “n” to set up a new remote, and it will ask you to name it. Choose something descriptive, but keep in mind that you will type this name on the command line, so you may not want it to be very long.

Enter name for new remote.
name> BBBackup

The next option in the configurator will ask you to choose your cloud storage provider. Many are specific commercial providers, such as Backblaze B2, Amazon S3, and Proton Drive, but some are generic, such as Samba (SMB) and WebDAV.

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
   \ (s3)
 6 / Backblaze B2
   \ (b2)


I chose “6” for Backblaze.

At this point you’ll need to set up the storage on the provider side, and then access it using an application key.

Log in to your Backblaze account. If you want to try it out, note that you don’t need a credit card to get started. They will limit you to 10GB (and I don’t know how long that offer lasts), but you can play with the service before deciding.

Go to Buckets in the menu and click on Create a Bucket.

Note that you can choose to have Backblaze encrypt your data, but since I’m going to do that with rclone I left it disabled.

Once you have your bucket you need to create an application key. Click on Application Keys in the menu and choose Add a New Application Key.

Now one annoying issue with Backblaze is that all bucket names have to be unique across the entire system, so “rcloneBucket” and “Media1” etc. have already been taken. Since I’m just using this as an example it was fine for the screenshot, but note that when I add an application key I usually limit it to a particular bucket. When you click on the dropdown it will list available buckets.

Once you create a new key, Backblaze will display the keyID, the keyName and the applicationKey values on the screen. Copy them somewhere safe because you won’t be able to get them back. If you lose them you can always create a new key, but you can’t modify a key once it has been created.

Now with your new keyID, return to the rclone configuration:

Option account.
Account ID or Application Key ID.
Enter a value.
account> xxxxxxxxxxxxxxxxxxxxxxxx

Option key.
Application Key.
Enter a value.
key> xxxxxxxxxxxxxxxxxxxxxxxxxx

This will allow rclone to connect to the remote cloud storage. Finally, rclone will ask you a couple of questions. I just chose the defaults:

Option hard_delete.
Permanently delete files on remote removal, otherwise hide files.
Enter a boolean value (true or false). Press Enter for the default (false).

Edit advanced config?
y) Yes
n) No (default)

The last step is to confirm your remote configuration. Note that you can always go back and change it later if you want.

Configuration complete.
- type: b2
- account: xxxxxxxxxxxxxxxxxxxxxx
- key: xxxxxxxxxxxxxxxxxxxxxxxxxx
Keep this "BBBackup" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

At this point, quit out of the configurator for a moment.

You may have realized that we have done nothing with respect to encryption. That is because we need to add a wrapper remote around our Backblaze remote to make this work (this is that there learning curve thing I mentioned earlier).

While I don’t know if this is true or not, it was recommended that you not put encrypted files in the root of your bucket. I can’t really see why it would hurt, but just in case we should create a folder in the bucket at which we can then point the encrypted remote. With Backblaze you can use the webUI or you can just use rclone. I recommend the latter since it is a good test to make sure everything is working. On the command line type:

$ rclone mkdir BBBackup:rcloneBackup/Backup

2024/01/23 14:13:25 NOTICE: B2 bucket rcloneBackup path Backup: Warning: running mkdir on a remote which can't have empty directories does nothing

To test that it worked you can look at the WebUI and click on Browse Files, or you can test it from the command line as well:

$ rclone lsf BBBackup:rcloneBackup/

Another little annoying thing about Backblaze is that the File Browser in the webUI isn’t in real time, so if you do choose that method note that it may take several minutes for the directory (and later any files you send) to show up.

Okay, now we just have one more step. We have to create the encrypted remote, so go back into the configurator:

$ rclone config

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Enter name for new remote.
name> crypt

Just like last time, choose a name that you will be comfortable typing on the command line. This is the main remote you will be using with rclone from here on out. Next, we have to choose the storage type:

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)


14 / Encrypt/Decrypt a remote
   \ (crypt)
15 / Enterprise File Fabric
   \ (filefabric)
16 / FTP
   \ (ftp)
17 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
18 / Google Drive
   \ (drive)


Storage> crypt

You can type the number (currently 14) or just type “crypt” to choose this storage type. Next we have to point this new remote at the first one we created:

Option remote.
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a value.
remote> BBBackup:rcloneBackup/Backup

Note that it contains the name of the remote (BBBackup), the name of the bucket (rcloneBackup), and the name of the directory we created (Backup). Now for the fun part:

Option filename_encryption.
How to encrypt the filenames.
Choose a number from below, or type in your own string value.
Press Enter for the default (standard).
   / Encrypt the filenames.
 1 | See the docs for the details.
   \ (standard)
 2 / Very simple filename obfuscation.
   \ (obfuscate)
   / Don't encrypt the file names.
 3 | Adds a ".bin", or "suffix" extension only.
   \ (off)

This is the bit where you get to solve the filename problem I mentioned above. I always choose the default, which is “standard”. Next you get to encrypt the directory names as well:

Option directory_name_encryption.
Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (true).
 1 / Encrypt directory names.
   \ (true)
 2 / Don't encrypt directory names, leave them intact.
   \ (false)

I choose the default of “true” here as well. Look, I don’t expect to ever become the subject of an in-depth digital forensics investigation, but the less information out there the better. Should Backblaze ever get a subpoena to let someone browse through my files on their system, I want to minimize what they can find.

Finally, we have to choose a passphrase:

Option password.
Password or pass phrase for encryption.
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
Confirm the password:

Option password2.
Password or pass phrase for salt.
Optional but recommended.
Should be different to the previous password.
Choose an alternative below. Press Enter for the default (n).
y) Yes, type in my own password
g) Generate random password
n) No, leave this optional password blank (default)

Now, unlike your application key ID and password, these passwords you need to remember. If you lose them, you will not be able to get access to your data. I did not choose a salt password, but it does appear to be recommended. Now we are almost done:

Edit advanced config?
y) Yes
n) No (default)

Configuration complete.
- type: crypt
- remote: BBBackup:rcloneBackup/Backup
- password: *** ENCRYPTED ***
Keep this "crypt" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Now your remote is ready to use. Note that when using a remote with encrypted files and directories, do not use the Backblaze webUI to create folders underneath your root, or rclone won’t recognize them.

I bring this up because there is one frustrating thing with rclone. If I want to copy a directory to the cloud storage remote it copies the contents of the directory and not the directory itself. For example, if I type on the command line:

$ cp -r Music Media

it will create a “Music” directory under the “Media” directory. But if I type:

$ rclone copy Music crypt:Media

it will copy the contents of the Music directory into the root of the Media directory. To get the outcome I want I need to run:

$ rclone mkdir crypt:Media/Music

$ rclone copy Music crypt:Media/Music

Make sense?
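If it helps to see that distinction locally, here is a quick sketch using plain cp in a throwaway directory (the file and directory names are just examples). rclone’s copy behaves like cp’s trailing `/.` form, which copies a directory’s contents rather than the directory itself:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/Music" "$tmp/Media1" "$tmp/Media2"
touch "$tmp/Music/song.mp3"

# Copies the directory itself: ends up as Media1/Music/song.mp3
cp -r "$tmp/Music" "$tmp/Media1"

# Copies only the *contents* (what rclone copy does): Media2/song.mp3
cp -r "$tmp/Music/." "$tmp/Media2"

ls "$tmp/Media1/Music" "$tmp/Media2"
# clean up when done: rm -rf "$tmp"
```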

While rclone has a lot of commands, the ones I have used are “mkdir” and “rmdir” (just like on a regular command line) and “copy” and “sync”. I use “copy” for the initial transfer and then “sync” for subsequent updates.

Now all I have to do for cloud synchronization is set up a crontab to run these commands on occasion (I set mine up for once a day).
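Here is a minimal sketch of the kind of script I point cron at. The script name, the directories, and the schedule are just examples you would adjust for your own setup; the remote name "crypt" matches the one created above:

```shell
#!/bin/sh
# nightly-sync.sh -- push local directories to the encrypted remote.
# Directory paths and the script name are examples; adjust for your setup.

# Sync one directory; "sync" makes the destination match the source,
# deleting remote files that no longer exist locally.
sync_dir() {
    rclone sync "$1" "crypt:Media/$(basename "$1")"
}

main() {
    for dir in /home/user/Music /home/user/Photos; do
        sync_dir "$dir"
    done
}

# Run only when executed as a script, so the functions can be sourced elsewhere
if [ "$(basename "$0")" = "nightly-sync.sh" ]; then
    main
fi
```

A crontab entry like `0 3 * * * /home/user/bin/nightly-sync.sh` would then run it at 3 a.m. every day.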

I can check that the encryption is working by using the Backblaze webUI. First I see the folder I created to hold my encrypted files:

But the directories in that folder have names that sound like I’m trying to summon Cthulhu:

As you can see from this graph, I was real eager to upload stuff when I got this working:

and on the first day I sent up nearly 400GB of files. Backblaze B2 pricing is currently $6/TB/month, and this seems about right:

I have since doubled my storage so it should run about 20 cents a day. Note that downloading your data is free up to three times the amount of data stored. In other words, you could download all of the data you have in B2 three times in a given month and not incur fees. Since I am using this simply for catastrophic data recovery I shouldn’t have to worry about egress fees.
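For what it’s worth, the arithmetic behind that estimate (assuming roughly 1 TB stored at the $6/TB/month rate, over a 30-day month) checks out:

```shell
# Daily cost in cents for ~1 TB at $6/TB/month over a 30-day month
cents_per_day=$(awk 'BEGIN { printf "%.0f", (1 * 6 * 100) / 30 }')
echo "${cents_per_day} cents/day"    # prints "20 cents/day"
```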

I am absolutely delighted to have this working and extremely impressed with rclone. For my needs, open source once again outshines the commercial offerings. And remember, if you prefer another cloud storage provider, you have a large range of choices, and the setup should be similar to the one I did here.

2023 Percona Live – Day 2

The final day of Percona Live started off with a keynote and a panel discussion.

As on the day before, Dave Stokes started things off with some housekeeping notes.

After that he introduced Percona’s Chief Technical Officer and co-founder, Vadim Tkachenko, who presented a roadmap for Percona’s products.

I am always interested in the customer angle for any product, so after Vadim finished he joined Michael Coburn, Principal Architect at Percona, and Ernie Souhrada, a database engineer from Pinterest, for a “fireside chat”. Any discussion of a technical solution can be enriched by talking to end users, and Souhrada, as you might expect, was very bullish on Percona but was also able to tell us about some issues they encountered and how they were resolved.

After this opening presentation I spent my time in the sponsor showcase and talked to a number of people. While this conference is pretty specialized, people were enjoying it and seemed to be getting a lot out of the sessions.

In the afternoon I went to a second session by AWS, this one focused on troubleshooting issues with MySQL on Amazon RDS.

Jignesh Shah kicked it off by discussing some of the monitoring tools one gets with Amazon RDS, which include gathering metrics on the instance, the operating system and, of course, the database.

He then turned it over to Raluca Constantin, a database engineer who really knows her stuff.

She went over four different scenarios that she had encountered in the past with MySQL along with step by step instructions on how they were corrected.

The first scenario involved a problem with upgrading from MySQL 5.7 to MySQL 8. In some cases the table names for some system tables would have case differences. This would cause upgrades to hang.

The easiest fix was to run a query before the upgrade to see if these differences existed; if so, the table names could be modified in the existing database to make sure the upgrade didn’t fail. However, the first version of that query took over nine minutes to complete, and Raluca went through the logic of improving it until it ran in seconds.

The second scenario involved detecting locks. Locks occur when the database is executing an action that requires exclusive access to, say, a table. If that action takes a long time, performance of the database can degrade. There are tools, such as Percona Monitoring and Management (PMM), that can detect when this happens, and she also showed how one can modify some system parameters so that actions that cause locks will fail if they exceed a specified timeout.

At this point I had to leave to meet some other AWS folks across town, which was disappointing since I really liked how Raluca was presenting these topics. I hope to be able to see her speak again in the near future.

While this was pretty much the end of my Percona Live experience, I did discover that there was another conference going on this week called Glue.

I was pretty certain I saw Matt Butcher from Fermyon in the hotel, but didn’t want to bother him. Fermyon’s technology is a topic for a future post.

By the time I got back to the Marriott later that night, all of the conference stuff had been cleaned up. Overall I was pretty pleased with the venue with the exception that it is in a generic business park and really didn’t show off what Denver has to offer.

It was a pretty intense week (and I had to get up early for my flight home) so I went to bed, but I’m happy I came. I got to see some friends and make some new ones, which is one of the best things about in-person conferences. That said I’m looking forward to being at home for a bit.

2023 Percona Live – Day 1

The first day of sessions at Percona Live saw me recovered from the food poisoning I experienced on Monday. It was a miserable experience but I’m happy that it didn’t last very long.

Whenever I go to a conference I always like the opening keynotes as they tend to set the tone for the rest of the event. The room in which the keynotes were held was dominated by a large screen featuring the new Percona logo.

The show was opened by Dave Stokes who, like me, is a technology evangelist.

He welcomed us all to the conference and covered the usual housekeeping notes before turning the stage over to Ann Schlemmer, who is the new CEO of Percona.

Schlemmer took over as CEO from founder Peter Zaitsev last autumn, and she seems to have settled into her new role pretty well.

One of the topics she covered was the new Percona Logo.

While I can’t do the description justice, it represents mountains, which refer both to the bedrock on which Percona solutions are built and to the challenges people sometimes have to overcome when working in IT (think climbing the mountain). The sun represents the shining of light into dark places, as well as kind of looking like a “P” (while the mountains themselves look like the “A” in the name).

At least that is what I took away from it. (grin)

I asked her later if they designed it in-house or if they hired an outside firm and she told me they did it themselves. Either way I like it and think they did a good job.

She was followed by Peter Zaitsev, one of the two Founders of Percona.

I first met Peter at this year’s FOSDEM back in February. When I found out he lived near me I invited him to lunch and we had a great discussion of open source business models and open source in general. As someone who once ran an open source services company, I identify strongly with his business, although he has been more successful than I was.

He is also known for not holding back when he has a strong opinion, and as part of his talk on the state of open source relational databases he leveled some criticism on AWS, who is also my employer.

Note: These thoughts on my personal blog are mine and mine alone, and may or may not align with my employer, Amazon Web Services.

One of the reasons I joined AWS was to take on the challenge of changing both the perception and processes by which Amazon interacts with open source communities. I’m part of a wonderful team and I think we have made progress toward that goal, so while I won’t either agree or disagree with Peter’s statements, my hope is to earn enough trust that there will be no need to have this as a topic in future conversations.

Peter ended his presentation by bringing up Ann and officially passing the torch by gifting her with a dartboard with his face on it, to be used whenever she might feel the need.

It took a couple of tries before the dart stuck, mainly because Peter had kept the dart in his back pocket and forgot to take off the safety cover on the sharp tip.

The next keynote speaker, Rachel Stephens, was new to me, although I’ve known about the company she works for, Redmonk, for a long time.

Redmonk is an analyst firm focused on software development, and she had my attention by basing her presentation on The Princess Bride, one of my favorite movies. It is very quotable, and she had slides like this one:

She also had a slide where she used the term “fauxpen” source:

Back in 2009 I hosted a party in which I was trying to explain open source vs. open core to a non-technical friend of mine. He replied “oh, so it’s fauxpen source”. I immediately registered the domain name (although I no longer own it as it was sold when I sold my company). I did a search back then and I could find no other references to the term, but I’ve seen it a number of times since. I like to think I had some part in popularizing it but it is clever enough that I’m certain others came up with it, too.

After the keynotes the individual sessions began, and since I’m not a DBA a lot of them were over my head. I did go to the one by Jignesh Shah, who is the General Manager of open source databases for Amazon RDS.

Jignesh gave a “state of” talk on the AWS offerings in this space, and also announced that the “trusted language extensions” feature for PostgreSQL that was introduced last autumn now supports Rust.

As I understand it, trusted language extensions give cloud providers a way to allow their users to extend the functionality of the database without introducing security concerns. There is a limit to which languages can be used, however, because there may be no way to “sandbox” an extension to keep it from accessing, say, the memory used by the database. The C language was not supported for that reason.

Supporting Rust allows end users to create powerful extensions in a language similar to C but with memory protections.

After his talk I spent some time wandering around the sponsor showcase. This is not a large conference, probably around 300 people, and so the “expo” is simply a hallway with booths along one wall. I actually like this because it facilitates easier interactions between attendees and sponsors.

The “premium” sponsors (AWS, Microsoft and Percona) had slightly larger booths on one end of the hallway.

Jignesh had brought along a number of AWS subject matter experts, and there was a lot of activity at the booth, as it provided a way for folks to ask questions and get answers from the people best able to provide them.

One last note on Day 1 is that lunch was an actual buffet and not the boxed lunch you often get at such conferences.

While I live in North Carolina and have almost sacred opinions on pork barbecue and cornbread, the food was pretty good. The only complaint I would make is that the baked beans were not labeled well since, as is common in the South, they appeared to include small pieces of pork. As many of the attendees are vegetarian, it would have been nice to either offer them without meat or make it clear that meat was included.

I was just happy that it was good and didn’t result in the same issues I experienced after lunch on Monday. (grin)

2023 Percona Live – Day 0

This week I am in Denver attending the Percona Live conference. As part of my job I tend to focus on relational databases, especially open source relational databases, and Percona Live has become the conference that focuses on that topic. This is my first time attending.

Back when I was mainly focused on monitoring, I would go to an open source monitoring conference in Nürnberg, Germany, organized by Netways. Netways, like Percona, is a software, consulting and support company based around open source solutions, and while both companies have their own products, they welcome other projects and companies in the same space to their conference. So far Percona Live has a similar vibe to OSMC (which is a good thing).

Like many conferences the first day is devoted to tutorials. Since I don’t work directly with databases I didn’t sign up for any, but I was able to register and get my badge.

I did meet up with Vicențiu Ciorbaru from the MariaDB Foundation, and we had a nice, long lunch at a nearby restaurant. I am a big MariaDB fan and it was great to see him again.

Unfortunately, just after that lunch I came down with a case of food poisoning. I’m not sure if it was from that local bar or from the breakfast I ate in the hotel lounge, but after I threw up my spleen I curled up into a ball and slept for 15 hours. Fortunately, I now do feel much better and I’m eager for the main conference to start.

SCaLE 20x – Day 3

The final day of SCaLE 20x was bittersweet, as I was eager to see more presentations but not ready for it to be over.

Dr. Kitty Yeung

The opening keynote was given by Dr. Kitty Yeung. Dr. Yeung is one of those amazing people who makes me feel completely inadequate. A graduate of Cambridge and Harvard, she has worked in fields as varied as fashion and quantum computing. She is also an artist, and most of her slides were ones she created herself.

She did also use a really cool graphic of world population that would make Tufte proud.

World Population Graphic

A lot of her current work centers on the intersection of technology and fashion. Now I am the least fashionable person alive. Seriously, when I’m not in front of customers I wear the same clothes every day: a black, heavy-weight pocket T-shirt and Levis blue jeans.

I have often thought if I ever did start another company one option would be to create modern tech for older people. Now some people may say that products from companies like Apple are easy to use, but as someone who is often around people in their 80s I know this isn’t true for them. There should be a market for very simple, but powerful, tools aimed at people in this age group. I keep thinking of the Yayagram machine I saw a few years ago as an example.

Dr. Yeung’s work on integrating tech and fashion could be a great interface for these products.

Shifting gears a bit, the next presentation I attended was by Don Marti on privacy.

Don Marti

While it is hard for an individual to balance privacy and convenience in today’s surveillance economy, there are some steps you can take to minimize what personal information you share. I take a number of steps to increase my privacy while on the Internet and this talk gave me a few more tools to use.

One of the things I love about SCaLE is that they usually have an amazing closing keynote. It is cool because you get to end the conference on a high note, and as a speaker it is always nice to have something to keep people from leaving early on the last day.

This year’s keynote was no exception and featured Ken Thompson, one of the founders of Unix and the creator of the Go programming language.

Before he spoke, Ilan Rabinovich gave some closing remarks reflecting on 20 years of SCaLE (which I learned started out as an umbrella conference for Southern California area Linux User Groups).

SCaLE Founders

You can see a much younger Ilan as well as the still very tall Gareth Greenaway in that picture from SCaLE 1x. As someone who has been working in open source for over two decades, it just doesn’t feel that long to me, so it was cool to reflect on all that has happened.

Ken Thompson with a picture of him and his siblings

Two decades pales in comparison to the experience of Ken Thompson. He was hired by Bell Labs the year I was born.

He gave us some of the history of his time there and walked us through the creation of what was probably the ur-archive of digital music. In the before times, back when mp3 encoding came out and people worked in offices, some of us would bring in our compact disc collections, rip them and place them in a common archive. Ken’s project pre-dated mp3s and started out as a quest to collect all the Billboard hit songs from 1957. As someone with mild OCD issues, I felt seen when he talked about how that expanded to collecting all the songs (grin).

Of course, digital content isn’t useful unless you can access it, so he modified a Wurlitzer jukebox with a couple of iPads to provide a cool interface, and then, because he is awesome, he bought a refurbished player piano with a MIDI interface so you could trigger that from the same device.

So the best way to sum up Sunday at SCaLE is that you are a lazy bum compared to folks like Dr. Yeung, Ilan and his team, and Ken Thompson, who apparently thinks about making a space shuttle out of discarded household appliances while you are watching re-runs of The Big Bang Theory.


Hats off to the whole SCaLE team for another great conference, and I’m so happy that it was back in Pasadena. I am already looking forward to next year.

SCaLE 20x – Day 2

I got up fairly early on Saturday and went through my presentation one final time. When working on a new talk there is a point where the feeling I get when thinking about having to present it goes from anxiety to eagerness and that happened this morning, so I felt ready to go.

The conference started off with a keynote by Arun Gupta, who is a VP at Intel focused on open ecosystems.

Arun Gupta Keynote

His talk was about using open source cultural best practices within an organization, and he used specific examples of how that was being done at Intel. It was the first time I had seen the abbreviation “CW*2”, which stands for the Zen quote “Chop wood, carry water”. While that phrase has a lot of different meanings, when applied to open source it references the idea that as a member of an open source community one should focus not only on the high-profile aspects of the project but also on the more mundane ones that actually keep the project alive.

After the keynote it was time for my presentation. I was originally scheduled to speak on Sunday morning but due to a conflict I got a spot on Saturday. I was grateful as I like to get my responsibilities out of the way so I can enjoy the rest of the weekend without worrying about them.

Me at the end of my presentation (image yoinked from Zoe Steinkamp’s LinkedIn feed)

I did a talk on open source business models and how things have changed in the past decade or so. My “hook” was to do the presentation in the format of an old school text adventure.

It was fun (and yes, there was a grue reference). It seemed to go over well with the audience and there were a number of great questions afterward.

With that over I decided to walk down the road to grab lunch when I ran into Gareth Greenaway. Gareth was one of the original organizers of SCaLE and it was cool to be able to catch up. He is currently doing some amazing things over at Salt.

SCaLE always has a wonderful hallway track and I also got to see John Willis. I had not seen him in years although we used to cross paths much more frequently and it was nice to be able to catch up. He is a co-author on a new book called “Investments Unlimited” which chronicles the DevOps journey of a financial institution.

I also had some time to wander around the Expo floor. I try to minimize the amount of swag I bring home but I’ve started to collect those little enamel pins that some people give out.

Enamel pins on my backpack

The AlmaLinux pin was given to me by the amazing benny Vasquez, who was spreading the word about their project, which helps fill the gap left by the CentOS project migrating to CentOS Stream.

Me and benny Vasquez

This year I spent a lot more time in sessions than I normally do as they were just so good. Many times I found myself having to decide between three or more talks that occurred at the same time.

One that I didn’t want to miss was given by Zoe Steinkamp on using InfluxDB to monitor the health of plants.

Zoe Steinkamp

I spent much of my professional career in observability and monitoring so I have a soft spot for unique applications of the technology. Zoe uses sensors to feed information about humidity, sunlight, etc. from her houseplants into InfluxDB so that she can use that information to maintain them in the best of health. My spouse keeps koi and I do something similar to monitor water temperature.

The next presentation I attended was on the Fediverse. Now I have never been much of a social media person, and last year I deleted my Twitter account which left LinkedIn as my only mainstream service. I do have a Mastodon account and with the recent migration of a lot of people to the platform I do find it useful, although I don’t spend nearly as much time on it as I did Twitter. I think it has a lot of potential, however, and what it really needs is that killer app to make it easier to use.

Bob Murphy presents on the Fediverse

Bob Murphy did a great talk on how the Fediverse is not Mastodon, and he introduced me to a number of other services that use ActivityPub, which is the underlying protocol. For example, there are sites that focus on image as well as video sharing, not just microblogging. Speaking of blogging, Automattic (the company behind WordPress) announced that they acquired the makers of an ActivityPub plugin to bring the technology in-house and it seems like they plan to make it a core part of their app.

The final talk I attended was given by Michael Coté. I’ve known Coté for over two decades back when he lived in Texas and it was nice to see him again (he’s living over in Yurrip these days).

Coté on Developer Platforms

As usual, he provided some great insights on what he is calling “platform engineering” (think DevOps mashed up with SRE).

After the talks were over I met up with some friends for dinner. Now I am a fan of the television series The Big Bang Theory. It is set at Caltech which is located in Pasadena, and there is even a street named “The Big Bang Theory Way” (my picture of the street sign didn’t come out, unfortunately). During the weekend I kept hearing people talk about a place called “Lucky Baldwins”. I thought it was a joke since the character of Sheldon in the TV show makes a reference to the place in an episode called “The Irish Pub Formulation” but it turns out it exists.

Lucky Baldwins

We stopped there for a drink and ended up staying for dinner. It was a nice ending to a busy day.

SCaLE 20x – Day 1

I spent Friday morning practicing and working on my presentation, but managed to make it over to the conference just before lunch.

SCaLE 20x Sign

I was really impressed with the “steampunk” graphics for this year’s show. They were cool.

Check-in, as usual with SCaLE, was a breeze. They have automated most of it: you walk up to a bank of computers, choose one, enter your registration information, and your badge gets printed. I believe you could purchase a registration through the system as well.

Then you walk down to a table to get your conference bag, badge holder and lanyard.

After wandering around for a bit I went down the street to meet up with Aaron Leung. While I love many things about being able to work remotely, I do miss meeting people in person and especially people I work with at AWS. Aaron happens to live in LA and he was kind enough to come out to see me and we had a great lunch.

Having SCaLE back in Pasadena was awesome. Not only is the convention center nice, it is really close to a ton of restaurants so you have a bunch of options for dining. The only downside was that it was raining (you can see the folks with the umbrellas above). When I had to go outside it wasn’t bad – more of a mist – and it was strange to have rain in LA. It did make the hills very green, however, and quite the departure from the usual tan.

After our long lunch I worked some more on my presentation, and then headed back over to the conference. The Expo floor was open so I spent some time wandering around and looking at the booths.

SCaLE Expo Floor

Toward the end of the afternoon I went to see Bryan Cantrill speak on his new company, Oxide Computer.

Bryan Cantrill and The Forgotten Operator

The “forgotten” operator in the title refers to people tasked with running on-premises data centers. Now I’ve been in a number of data centers, and they were all as he described: racks upon racks of 1U and 2U servers arranged in rows, some with “hot” and “cold” aisles, and each server with a pair of power supplies and lots and lots of cabling.

I have never been inside a Google or Amazon data center, but I’ve always imagined it to be more along the lines of the one Javier Bardem’s character set up in Skyfall.

Picture of a data center from the James Bond film Skyfall.

In these days of the “cloud”, compute is divorced from storage and so a lot of the hardware in an old school 1U rack mount machine is unnecessary. Plus there is the antiquated idea of having separate power supplies for each board in the rack. Computers run on DC power, so why not just supply it directly from a central source vs. individually? I started my professional career working for phone companies and everything was DC (many central offices had a huge room in the basement with chemical batteries – and, yes, it did smell).

When I started my own company 20+ years ago I had two Supermicro 1U machines and when I turned them on they were each louder than a vacuum cleaner. Bryan told us that their racks are whisper-quiet (well, once they are powered on and the fans on the rectifiers spool down).

I’m oversimplifying, but that is the basic idea behind Oxide. They want to supply cloud-grade computing gear to enterprises and break the old paradigm of what a data center should look like. Users can still leverage cloud technologies like Kubernetes but on their own gear. It still doesn’t solve the need to have people who understand the technology on staff, but it was exciting in any case.

Friday evening featured a series of lightning talks called “Upscale”. It was hosted by Jason Hibbets and Hannah Anderson and sponsored by

Upscale Presenters and Participants

Lightning talks are five-minute presentations consisting of a set number of slides that advance automatically. I’ve never given one, and once when I mentioned that I thought they were cool, it was pointed out that I can’t introduce myself in five minutes, much less give a talk. (grin)

I was impressed with the presentations. One fact that stuck out: the term “open source” as formalized by the Open Source Initiative is now 25 years old. Wow.

After Upscale a group of us went down the street for dinner and drinks. I can’t emphasize enough how much I miss the face-to-face aspect of in-person conferences, and I hope we can continue to have them safely.