Using rclone to Sync Data to the Cloud

I am working hard to digitize my life. Last year I moved for the first time in 24 years and I realized I have way too much stuff. A lot of that stuff was paper, in the form of books and files, so I’ve been busy trying to get digital copies of all of it. Also, a lot of my life was already digital. I have e-mails starting in 1998 and a lot of my pictures were taken with a digital camera.

TL;DR: This is a tutorial for using the open source rclone command line tool to securely synchronize files to a cloud storage provider, in this case Backblaze. It is based on macOS but should work in a similar fashion on other operating systems.

That brings up the issue of backups. A friend of mine was the victim of a home robbery, and while the thieves took a number of expensive things, the most valuable was his archive of photos. It was irreplaceable. This has made me paranoid about backing up my data. I have about 500GB of must-save data and around 7TB of “would be nice to save” data.

At my old house the best option I had for network access was DSL. It was usable for downstream, but upstream was limited to about 640kbps. At that rate I might be able to back up my data – once.

I can remember in college we were given a test question about moving a large amount of data across the United States. The best answer was to put a physical drive in a FedEx box and overnight it there. So in that vein, my backup strategy was to buy three Western Digital MyBooks. I created a script to rsync my data to the external drives. One I kept in a fire safe at the house. It wasn’t guaranteed to survive a hot fire in there (fire safes are rated to protect paper, which burns at a much higher temperature than the point at which hard drives fail), but there was always a chance it might, depending on where the fire was hottest. I took the other two drives and stored one at my father’s house and the other at a friend’s house. Periodically I’d take the drive out of the safe, rsync it, and swap it with one of the remote drives. I’d then rsync that drive and put it back in the safe.

It didn’t keep my data perfectly current, but it would mitigate any major loss.

At my new house I have gigabit fiber. It has symmetrical upload and download speeds, so my ability to upload data is much, much better. I figured it was time to choose a cloud storage provider and set up a much more robust way of backing up my data.

I should stress that when I use the term “backup” I really mean “sync”. I run macOS and I use the built-in Time Machine app for backups. The term “backup” in this case means keeping multiple copies of files, so not only is your data safe, but if you happen to screw up a file you can go back and get a previous version.

Since my offsite “backup” strategy is just about dealing with catastrophic data loss, I don’t care about multiple versions of files. I’m happy just having the latest one available in case I need to retrieve it. So it is more a matter of synchronizing my current data with the remote copy.

The first thing I had to do was choose a cloud storage provider. Now as my three readers already know I am not a smart person, but I surround myself with people who are. I asked around and several people recommended Backblaze, so I decided to start out with that service.

Now I am also a little paranoid about privacy, so anything I send to the cloud I want to be encrypted. Furthermore, I want to be in control of the encryption keys. Backblaze can encrypt your data but they help you manage the keys, and while I think that is fine for many people it isn’t for me.

I went in search of a solution that both supported Backblaze and contained strong encryption. I have a Synology NAS which contains an application called “Cloud Sync” and while that did both things I wasn’t happy that while the body of the file was encrypted, the file names were not. If someone came across a file called WhereIBuriedTheMoney.txt it could raise some eyebrows and bring unwanted attention. (grin)

Open source to the rescue. In trying to find a solution I came across rclone, an MIT licensed command-line tool that lets you copy and sync data to a large number of cloud providers, including Backblaze. Furthermore, it is installable on macOS using the very awesome Homebrew project, so getting it on my Mac was as easy as

$ brew install rclone

However, as with most open source tools, free software does not mean a free solution, so I did have a small learning curve to climb. I wanted to share what I learned in case others find it useful.

Once rclone is installed it needs to be configured. Run

$ rclone config

to access a script to help with that. In rclone syntax a cloud provider, or a particular bucket at a cloud provider, is called a “remote”. When you run the configurator for the first time you’ll get the following menu:

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Select “n” to set up a new remote, and it will ask you to name it. Choose something descriptive but keep in mind you will use this on the command line so you may want to choose something that isn’t very long.

Enter name for new remote.
name> BBBackup

The next option in the configurator will ask you to choose your cloud storage provider. Many are specific commercial providers, such as Backblaze B2, Amazon S3, and Proton Drive, but some are generic, such as Samba (SMB) and WebDav.

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
   \ (s3)
 6 / Backblaze B2
   \ (b2)

...

I chose “6” for Backblaze.

At this point you’ll need to set up the storage on the provider side, and then access it using an application key.

Log in to your Backblaze account. If you want to try it out, note that you don’t need a credit card to get started. They limit free accounts to 10GB (and I don’t know how long the trial lasts), but you can play with the service before deciding.

Go to Buckets in the menu and click on Create a Bucket.

Note that you can choose to have Backblaze encrypt your data, but since I’m going to do that with rclone I left it disabled.

Once you have your bucket you need to create an application key. Click on Application Keys in the menu and choose Add a New Application Key.

Now one annoying issue with Backblaze is that all bucket names have to be unique across the entire system, so “rcloneBucket” and “Media1” etc. have already been taken. Since I’m just using this as an example it was fine for the screenshot, but note that when I add an application key I usually limit it to a particular bucket. When you click on the dropdown it will list your available buckets.

Once you create a new key, Backblaze will display the keyID, the keyName and the applicationKey values on the screen. Copy them somewhere safe, because you won’t be able to see them again. If you lose them you can always create a new key, but you can’t modify a key once it has been created.

Now with your new keyID, return to the rclone configuration:

Option account.
Account ID or Application Key ID.
Enter a value.
account> xxxxxxxxxxxxxxxxxxxxxxxx

Option key.
Application Key.
Enter a value.
key> xxxxxxxxxxxxxxxxxxxxxxxxxx

This will allow rclone to connect to the remote cloud storage. Finally, rclone will ask you a couple of questions. I just chose the defaults:

Option hard_delete.
Permanently delete files on remote removal, otherwise hide files.
Enter a boolean value (true or false). Press Enter for the default (false).
hard_delete>

Edit advanced config?
y) Yes
n) No (default)
y/n>

The last step is to confirm your remote configuration. Note that you can always go back and change it later if you want.

Configuration complete.
Options:
- type: b2
- account: xxxxxxxxxxxxxxxxxxxxxx
- key: xxxxxxxxxxxxxxxxxxxxxxxxxx
Keep this "BBBackup" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

At this point, quit out of the configurator for a moment.

You may have realized that we have done nothing with respect to encryption. That is because we need to add a wrapper remote around our Backblaze remote to make this work (this is that there learning curve thing I mentioned earlier).

While I don’t know if this is true or not, it was recommended to me that you not put encrypted files in the root of your bucket. I can’t really see why it would hurt, but just in case we’ll create a folder in the bucket at which we can then point the encrypted remote. With Backblaze you can use the webUI, or you can just use rclone. I recommend the latter, since it is a good test to make sure everything is working. On the command line, type:

$ rclone mkdir BBBackup:rcloneBackup/Backup

2024/01/23 14:13:25 NOTICE: B2 bucket rcloneBackup path Backup: Warning: running mkdir on a remote which can't have empty directories does nothing

To test that it worked you can look at the WebUI and click on Browse Files, or you can test it from the command line as well:

$ rclone lsf BBBackup:rcloneBackup/
Backup/

Another little annoying thing about Backblaze is that the File Browser in the webUI isn’t in real time, so if you do choose that method note that it may take several minutes for the directory (and later any files you send) to show up.

Okay, now we just have one more step. We have to create the encrypted remote, so go back into the configurator:

$ rclone config

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Enter name for new remote.
name> crypt

Just like last time, choose a name that you will be comfortable typing on the command line. This is the main remote you will be using with rclone from here on out. Next we have to choose the storage type:

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)

...

14 / Encrypt/Decrypt a remote
   \ (crypt)
15 / Enterprise File Fabric
   \ (filefabric)
16 / FTP
   \ (ftp)
17 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
18 / Google Drive
   \ (drive)

...

Storage> crypt

You can type the number (currently 14) or just type “crypt” to choose this storage type. Next we have to point this new remote at the first one we created:

Option remote.
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a value.
remote> BBBackup:rcloneBackup/Backup

Note that it contains the name of the remote (BBBackup), the name of the bucket (rcloneBackup), and the name of the directory we created (Backup). Now for the fun part:

Option filename_encryption.
How to encrypt the filenames.
Choose a number from below, or type in your own string value.
Press Enter for the default (standard).
   / Encrypt the filenames.
 1 | See the docs for the details.
   \ (standard)
 2 / Very simple filename obfuscation.
   \ (obfuscate)
   / Don't encrypt the file names.
 3 | Adds a ".bin", or "suffix" extension only.
   \ (off)
filename_encryption>

This is the bit where you get to solve the filename problem I mentioned above. I always choose the default, which is “standard”. Next you get to encrypt the directory names as well:

Option directory_name_encryption.
Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (true).
 1 / Encrypt directory names.
   \ (true)
 2 / Don't encrypt directory names, leave them intact.
   \ (false)
directory_name_encryption>

I choose the default of “true” here as well. Look, I don’t expect to ever become the subject of an in-depth digital forensics investigation, but the less information out there the better. Should Backblaze ever get a subpoena to let someone browse through my files on their system, I want to minimize what they can find.

Finally, we have to choose a passphrase:

Option password.
Password or pass phrase for encryption.
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:

Option password2.
Password or pass phrase for salt.
Optional but recommended.
Should be different to the previous password.
Choose an alternative below. Press Enter for the default (n).
y) Yes, type in my own password
g) Generate random password
n) No, leave this optional password blank (default)
y/g/n>

Now, unlike your application key ID and password, these passwords you need to remember. If you lose them you will not be able to access your data. I did not choose a salt password, but it does appear to be recommended. Now we are almost done:

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: crypt
- remote: BBBackup:rcloneBackup/Backup
- password: *** ENCRYPTED ***
Keep this "crypt" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Now your remote is ready to use. Note that when using a remote with encrypted files and directories do not use the Backblaze webUI to create folders underneath your root or rclone won’t recognize them.

I bring this up because there is one frustrating thing with rclone. If I want to copy a directory to the cloud storage remote it copies the contents of the directory and not the directory itself. For example, if I type on the command line:

$ cp -r Music /Media

it will create a “Music” directory under the “Media” directory. But if I type:

$ rclone copy Music crypt:Media

it will copy the contents of the Music directory into the root of the Media directory. To get the outcome I want I need to run:

$ rclone mkdir crypt:Media/Music

$ rclone copy Music crypt:Media/Music

Make sense?

While rclone has a lot of commands, the ones I have used are “mkdir” and “rmdir” (just like on a regular command line) and “copy” and “sync”. I use “copy” for the initial transfer and then “sync” for subsequent updates.
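For example, uploading my Music directory for the first time and then keeping it current might look like this (the paths and remote names follow the examples above):

```shell
# Initial transfer: "copy" adds and updates files but never deletes
# anything on the destination.
rclone mkdir crypt:Media/Music
rclone copy Music crypt:Media/Music

# Subsequent updates: "sync" makes the destination match the source,
# deleting remote files that no longer exist locally.
rclone sync Music crypt:Media/Music
```

That deletion behavior is why I use "copy" first and "sync" only once I trust that the paths are right.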

Now all I have to do for cloud synchronization is set up a crontab to run these commands on occasion (I set mine up for once a day).
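A minimal crontab entry along those lines might look like the following. The paths here are illustrative; note that Homebrew installs rclone under /opt/homebrew/bin on Apple Silicon Macs and /usr/local/bin on Intel ones, and cron jobs should use the full path:

```shell
# m h dom mon dow  command
# Sync the Music directory to the encrypted remote every day at 2:00 AM,
# logging to a file so failures are easy to spot.
0 2 * * * /opt/homebrew/bin/rclone sync /Volumes/Media/Music crypt:Media/Music --log-file=/Users/me/rclone.log
```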

I can check that the encryption is working by using the Backblaze webUI. First I see the folder I created to hold my encrypted files:

But the directories in that folder have names that sound like I’m trying to summon Cthulhu:
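You can see the same effect from the command line. Listing through the crypt remote shows the real names, while listing the same path through the underlying B2 remote shows the encrypted ones:

```shell
# Decrypted view, as rclone presents it through the crypt remote:
rclone lsf crypt:

# Raw view of the same data in the bucket; file and directory
# names come back as encrypted strings.
rclone lsf BBBackup:rcloneBackup/Backup/
```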

As you can see from this graph, I was real eager to upload stuff when I got this working:

and on the first day I sent up nearly 400GB of files. Backblaze B2 pricing is currently $6/TB/month, and this seems about right:

I have since doubled my storage so it should run about 20 cents a day. Note that downloading your data is free up to three times the amount of data stored. In other words, you could download all of the data you have in B2 three times in a given month and not incur fees. Since I am using this simply for catastrophic data recovery I shouldn’t have to worry about egress fees.

I am absolutely delighted to have this working and extremely impressed with rclone. For my needs open source once again outshines commercial offerings. And remember if you have other preferences for cloud storage providers you have a large range of choices, and the installation should be similar to the one I did here.

Nextcloud News

I think the title of this post is a little misleading, as I don’t have any news about Nextcloud. Instead I want to talk about the News App on the Nextcloud platform, but I couldn’t think of a better title.

I rely heavily on the Nextcloud News App to keep up with what is going on with the world. News provides similar functionality to the now defunct Google Reader, but with the usual privacy bonuses you expect from Nextcloud.

Back before social networks like Facebook and Twitter were the norm, people used to communicate through blogs. Blogs provide similar functionality: people can write short or long form posts that get published on a website and can include media such as pictures, and other people can comment on and share them. Even now, when I see an incredibly long thread on Twitter, I just wish the author had put it on a blog somewhere.

Blogs are great, since each one can be individually hosted without requiring a central authority to manage it all. My friend Ben got me started on my first blog (this one), which in the beginning was hosted using a program called Movable Type. When their licensing became problematic, most of us switched to WordPress, and a tremendous amount of the Web runs on WordPress even now.

Now the problem was that the frequency with which people posted to their blogs varied. Some might post once a week, others several times an hour. Unless you wanted to go and manually refresh their pages, it was difficult to keep up.

Enter Really Simple Syndication (RSS).

RSS is, as the name implies, an easy way to summarize content on a website. Sites that support RSS craft a generic XML document that reflects titles, descriptions, links, etc. to content on the site. The page is referred to as a “feed” and RSS “readers” can aggregate the various feeds together so that a person can follow the changes on websites that interest them.

Google Reader was a very useful feed reader that was extremely popular, and it in turn increased the popularity of blogs. I put some of the blame for the rise of the privacy nightmare of modern social networks on Google’s decision to kill Reader, as it made individual blogs less relevant.

Now in Google’s defense they would say just use some other service. In my case I switched to Feedly, an adequate Reader replacement. The process was made easier by the fact that most feed readers support a way to export your configuration in the Outline Processor Markup Language (OPML) format. I was able to export my Reader feeds and import them into Feedly.

Feedly was free, and as they say, if you aren’t paying for the product, you are the product. I noticed that next to my various feed articles Feedly would display a count, which I assume reflected the number of Feedly users who were interested in or had read that article. Then it dawned on me that Feedly could gather useful information on what people were interested in, just like Facebook, and I also assume that, if they chose, they could monetize that information. Since I had a Feedly account to manage my feeds, they could track my individual interests as well.

While Feedly never gave me any reason to assign nefarious intentions to them, as a privacy advocate I wanted more control over sharing my interests, so I looked for a solution. As a Nextcloud fan I looked for an appropriate app, and found one in News.

News has been around pretty much since Nextcloud started, but I rarely hear anyone talking about its greatness (hence this post). Like most things Nextcloud it is simple to install. If you are an admin, just click on your icon in the upper right corner and select “+ Apps”. Then click on “Featured apps” in the sidebar and you should be able to enable the “News” app.

That’s it. Now in order to update your feeds you need to be using the System Cron in Nextcloud, and instructions can be found in the documentation.

Once you have News installed, the next challenge is to find interesting feeds to which you can subscribe. The news app will suggest several, but you can also find more on your own.

Nextcloud RSS Suggestions

It used to be pretty easy to find the feed URL. You would just look for the RSS icon and click on it for the link:

RSS Icon

But, again, when Reader died so did a lot of the interest in RSS, and finding feed URLs became more difficult. I have links to feeds at the very bottom of the right sidebar of this blog, but you’d have to scroll down quite a way to find them.

But for WordPress sites, like this one, you just add “/feed” to the site URL, such as:

https://www.adventuresinoss.com/feed

There are also some browser plugins that are supposed to help identify RSS feed links, but I haven’t used any. You can also “view source” on a website of interest and search for “rss”, which may help as well.
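If you are comfortable at a command line, that “view source” trick can be automated with curl and grep. The URL here is just this blog as an example; feed links are usually advertised in the page’s HTML head:

```shell
# Fetch the page and pull out any <link> tags advertising an RSS feed.
curl -s https://www.adventuresinoss.com/ | grep -io '<link[^>]*rss[^>]*>'
```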

My main use of the News App is to keep up with news, and I follow four main news sites. I like the BBC for an international take on news, CNN for a domestic take, Slashdot for tech news and WRAL for local news.

Desktop Version of News App

Just for reference, the feed links are:

BBC: http://newsrss.bbc.co.uk/rss/newsonline_uk_edition/front_page/rss.xml

CNN: http://rss.cnn.com/rss/cnn_topstories.rss

Slashdot: http://rss.slashdot.org/slashdot/slashdotMain

WRAL: http://www.wral.com/news/rss/48/

This wouldn’t be as useful if you couldn’t access it on a mobile device. Of course, you can access it via a web browser, but there exist a number of phone apps for accessing your feeds in a native app.

Now to my knowledge Nextcloud the company doesn’t produce a News mobile app, so the available apps are provided by third parties. I put all of my personal information into Nextcloud, and since I’m paranoid I didn’t want to put my access credentials into those apps but I wanted the convenience of being able to read news anywhere I had a network connection. So I created a special “news” user just for News. You probably don’t need to do that but I wanted to plant the suggestion for those who think about such things.

On my iPhone I’ve been happy with CloudNews.

iPhone Version of CloudNews App

It sometimes gets out of sync and I end up having to read everything in the browser and re-sync in CloudNews, but for the most part it’s fine.

For Android the best app I’ve used is by David Luhmer. It’s available for a small fee in the Play Store and for free on F-Droid.

Like all useful software, you don’t realize how much you depend on it until it is gone, and in the few instances I’ve had problems with News I get very anxious as I don’t know what’s going on in the world. Luckily this has been rare, and I check my news feed many times during the day to the point that I probably have a personal problem. The mobile apps mean I can read news when I’m in line at the grocery store or waiting for an appointment. And the best part is that I know my interests are kept private as I control the data.

If you are interested, I sporadically update a number of blogs, and I aggregate them here. In a somewhat ironic twist, I can’t find a feed link for the “planet” page, so you’d need to add the individual blog feeds to your reader.

Review: AT&T Cell Booster

Back in the mid-2000s I was a huge Apple fanboy, and I really, really, really wanted an iPhone. At that time it was only available from AT&T, and unfortunately the wireless coverage on that network is not very good where I live.

In 2008 a couple of things happened. Apple introduced the iPhone 3G, and AT&T introduced the 3G Microcell.

The 3G Microcell, technically a “femtocell”, is a small device that you can plug into your home network and it will leverage your Internet connection to augment wireless coverage in a small area (i.e. your house). With that I could get an iPhone and it would work at my house.

In February, 3G service in the US will cease, and I thought I was going to have to do without a femtocell. Most modern phones support calling over WiFi now, but it just isn’t the same. For example, if I am trying to send an SMS and there is any signal at all from AT&T, my phone will try to use that network instead of the much stronger WiFi network in my house. If I disable mobile access altogether, the SMS will send fine, but then I can’t get phone calls reliably. (sigh)

I thought I was going to have to just deal with it when AT&T sent me a notice that they were going to replace my 3G Microcell with a new product called a Cell Booster.

Now a lot of people criticize AT&T for a number of good reasons, but lately they’ve really been hitting the whole “customer service” thing out of the park. The Cell Booster currently shows out of stock on their website with a cost of $229, but they sent me one for free.

AT&T Cell Booster Box

In a related story, my mother-in-law, who is on our family plan, was using an older Pixel that was going to stop working with the end of 3G service (it was an LTE phone but didn’t support “HD Voice”, which is required to make calls). So AT&T sent us a replacement Samsung S9. Pretty cool.

In any case, the Cell Booster installation went pretty smoothly. I simply unplugged the existing 3G Microcell and plugged in the new device. The box included the Cell Booster, a GPS sensor, a power supply and an Ethernet cable. There were no instructions other than a QR code that takes you to the appropriate app store to download the application needed to set it up.

The Booster requires a GPS lock, and they include a little “puck” connected to a fairly long wire, so that you can get a signal even when the unit itself is some distance from a clear view of the sky, such as away from windows. I just plugged it in to the back and left it next to the unit, and it eventually got a lock, but the unit also sits pretty much beneath a skylight.

In order to provision the Cell Booster you have to launch the mobile app and fill out a few pages of forms, which includes the serial number of the device. It has five lights on the front, and while the power light came on immediately, it did take some time for the other lights, including “Internet”, to come up. I assumed the Internet light would turn on as soon as an IP address was assigned, but that wasn’t the case. It took nearly half an hour for the first four lights to come on, and then another 15 minutes or so for the final “4G LTE” light to illuminate and the unit to start working. Almost immediately I got an SMS from AT&T saying the unit was active.

AT&T Cell Booster Lights

Speaking of IP addresses, I don’t like putting random devices on my LAN, so I stuck this on my public network, which only has Internet access (no LAN access). I ran nmap against it and there don’t appear to be any ports open. A traffic capture shows traffic between the Cell Booster and an address on a 12.0.0.0/8 network owned by AT&T.
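For the curious, the check was nothing fancy; something along these lines, where the address is whatever your router handed the booster (mine is made up here) and the interface name varies by machine:

```shell
# Scan all TCP ports on the booster without pinging it first.
nmap -Pn -p- 192.168.20.50

# Capture its traffic to see where it phones home.
sudo tcpdump -n -i en0 host 192.168.20.50
```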

I do like the fact that, unlike the 3G Microcell, you do not need to specify the phone number of the handsets that can use the Cell Booster. It claims to support up to 8 at a time, and while I haven’t had anyone over who is both on the AT&T network and also not on my plan, I’m assuming it will work for them as well (I used to have to manually add phone numbers of my guests to allow them to use the 3G device).

The Cell Booster is a rebranded Nokia SS2FII. One could probably buy one outside of AT&T but without being able to provision it I doubt it would work.

So far we’ve been real happy with the Cell Booster. Calls and SMS messages work just fine, if not better than before (I have no objective way to measure it, though, so it might just be bias). If you get one, just remember that it takes a really long time to start up that first time, but after you have all five lights you should be able to forget it’s there.

Review: ProtonMail

I love e-mail. I know for many it is a bane, which has resulted in the rise of “inbox zero” and even the “#noemail” movement, but for me it is a great way to communicate.

I just went and looked, and the oldest e-mail currently in my system is from July of 1996. I used e-mail for over a decade before then, on school Unix systems and on BBS’s, but it wasn’t until the rise of IMAP in the 1990s that I was able to easily keep and move my messages from provider to provider.

That message from 1996 was off of my employer’s system. I didn’t have my own domain until two years later, in 1998, and I believe my friend Ben was the one to host my e-mail at the time.

When I started maintaining OpenNMS in 2002 I had a server at Rackspace that I was able to configure for mail. I believe the SMTP server was postfix, but I can’t remember what the IMAP server was. I want to say it was dovecot, but that really wasn’t available until later in 2002, so maybe UW IMAP? Cyrus was pretty big at the time but renowned for being difficult to set up.

In any case I was always a little concerned about the security of my mail messages. Back then disks were not encrypted and even the mail transport was done in the clear (this was before SSL became ubiquitous), so when OpenNMS grew to the point where we had our own server room, I set up a server for “vanity domains” that anyone in the company could use to host their e-mail and websites, etc. At least I knew the disks were behind a locked door, and now that Ben worked with us he could continue to maintain the mail server, too. (grin)

Back then I tried to get my friends to use encrypted e-mail. Pretty Good Privacy (PGP) had been available since the early 1990s, and MIT used to host plugins for Outlook, which at the time was the default e-mail client for most people. But many of my friends, including the technically minded, didn’t want to be bothered with setting up keys, etc. It wasn’t until later, when open source really took off and mail clients like Thunderbird arrived (with the Enigmail plug-in), that encrypted e-mail became more common in my circle.

In 2019 the decision was made to sell the OpenNMS Group, and since I would no longer have control over the company (and its assets) I decided I needed to move my personal domains somewhere else. I really didn’t relish the idea of running my own mail server. Spam management was always a problem, and there were a number of new protocols to help secure e-mail that were kind of a pain to set up.

The default mail hosting option for most people is GMail. Now part of Google Workspace, for a nominal fee you can have Google host your mail, and get some added services as well.

I wasn’t happy with the thought of Google having access to my e-mail, so I looked for options. To me the best one was ProtonMail.

The servers for ProtonMail are hosted in Switzerland, a neutral country not beholden to either US or EU laws. They are privacy focused, with everything stored encrypted at rest and, when possible, encrypted in transport.

They have a free tier option that I used to try out the system. Now, as an “old”, I prefer desktop mail clients. I find them easiest to use and I can also bring all of my mail into one location, and I can move messages from one provider to another. The default way to access ProtonMail is through a web client, like GMail. Unlike GMail, ProtonMail doesn’t offer a way to directly access their services through SMTP or IMAP. Instead you have to install a piece of software called the ProtonMail Bridge that will create an encrypted tunnel between your desktop computer and their servers. You can then configure your desktop mail client to connect to “localhost” on a particular port and it will act as if it were directly connected to the remote mail server.
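Before pointing a desktop client at the Bridge, you can sanity-check that it is listening locally. The port numbers below are what I understand the Bridge’s defaults to be; the Bridge window shows you the actual values and credentials to use:

```shell
# The Bridge exposes IMAP and SMTP endpoints on the loopback interface.
nc -z 127.0.0.1 1143 && echo "IMAP bridge is listening"
nc -z 127.0.0.1 1025 && echo "SMTP bridge is listening"
```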

In my trial there were two shortcomings that immediately impacted me. As a mail power user, I use a lot of nested folders. ProtonMail does not allow you to nest folders. Second, I share some accounts with my spouse (i.e. we have a single Paypal account) and previously I was able to alias e-mail addresses to send to both of our user accounts. ProtonMail does not allow this.

For the latter I think it has to do with the fact that each mail address requires a separate key and their system must not make it easy to use two keys or to share a key. I’m not sure what the issue is with nested folders.

In any case, this wasn’t a huge deal. To overcome the nested folder issue I just added a prefix, e.g. “CORR” for “Correspondence” and “VND” for “Vendor”, to each mailbox, and then you can sort on name. And while we share a few accounts, we don’t use them enough that we couldn’t just assign each one to a particular user.


UPDATE: It turns out it is now possible to have nested folders, although it doesn’t quite work the way I would expect.

Say I want a folder called “Correspondence” and I want sub-folders for each of the people with whom I exchange e-mail. I tried the following:

So I have a folder named something like “CORR-Bill Gates”, but I’d rather have that nested under a folder entitled “Correspondence”. In my desktop mail client, if I create a folder called “Correspondence” and then drag the “CORR-Bill Gates” folder into it, I get a new folder titled “Correspondence/CORR-Bill Gates” which is not what I want.

However, I can log into the ProtonMail webUI and next to folders there is a little “+” sign.

Add Folder Menu Item
If I click on that I get a dialog that lets me add new folders, as well as to add them to a parent folder.

Add Folder Dialog Box

If I create a “Correspondence” folder with no parent via the webUI and then a “Bill Gates” folder, I can parent the “Bill Gates” folder to “Correspondence” and then the folders will show up and behave as I expect in my desktop e-mail client. Note that you can only nest two levels deep. In other words if I wanted a folder structure like:

Bills -> Taxes -> Federal -> 2021

It would fail to create, but

Bills -> Taxes -> 2021-Federal

will work.


After I was satisfied with ProtonMail, I ended up buying the “Visionary” package. I pay for it in two-year chunks and it runs US$20/month. This gives me ten domains and six users, with up to 50 e-mail aliases.

Domain set up was a breeze. Assuming you have access to your domain registrar (I’m a big fan of Namecheap) all you need to do is follow the little “wizard” that will step you through the DNS entries you need to make to point your domain to ProtonMail’s servers as well as to configure SPF, DKIM and DMARC. Allowing for DNS propagation, the whole process can take anywhere from a few minutes to an hour.
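For the curious, the entries you end up adding look roughly like the sketch below. The verification string, DKIM target and domain are placeholders I made up for illustration – use exactly the values the ProtonMail wizard shows you for your own domain:

```
; Domain verification and mail routing
example.com.           TXT    "protonmail-verification=0123456789abcdef"
example.com.           MX     10 mail.protonmail.ch.
example.com.           MX     20 mailsec.protonmail.ch.

; SPF: authorize Proton's servers to send mail for the domain
example.com.           TXT    "v=spf1 include:_spf.protonmail.ch ~all"

; DKIM: signing key management delegated to Proton
protonmail._domainkey  CNAME  protonmail.domainkey.dXxxxx.domains.proton.ch.

; DMARC: tell receivers what to do with mail that fails SPF/DKIM
_dmarc                 TXT    "v=DMARC1; p=quarantine"
```

The wizard checks each record as you go, so you know when the DNS changes have propagated.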

I thought there would be a big issue with the 50 alias limit, as I set up separate e-mails for every vendor I use. But it turns out that you only need an alias if you want to send e-mail from that address. You can set up a “catch all” address that will take any incoming e-mail that doesn’t expressly match an alias and send it to a particular user. In my case I set up a specific “catchall@” address but it is not required.

You can also set up filters pretty easily. Here is an example of sending all e-mail sent to my “catchall” address to the “Catch All” folder.

require ["include", "environment", "variables", "relational", "comparator-i;ascii-numeric", "spamtest"];
require ["fileinto", "imap4flags"];

# Generated: Do not run this script on spam messages
if allof (environment :matches "vnd.proton.spam-threshold" "*", spamtest :value "ge" :comparator "i;ascii-numeric" "${1}") {
    return;
}


/**
* @type and
* @comparator matches
*/
if allof (address :all :comparator "i;unicode-casemap" :matches ["Delivered-To"] "catchall@example.com") {
    fileinto "Catch All";
}

I haven’t had the need to do anything more complicated but there are a number of examples you can build on. I had a vendor that kept sending me e-mail even though I had unsubscribed so I set up this filter:

require "reject";


if anyof (address :all :comparator "i;unicode-casemap" :is "From" ["noreply@petproconnect.com"]) {
    reject "Please Delete My Account";
}

and, voilà, no more e-mail. I’ve also been happy with the ProtonMail spam detection. While it isn’t perfect it works well enough that I don’t have to deal with spam on a daily basis.

I’m up to five users and eight domains, so the Visionary plan is getting a little resource constrained, but I don’t see myself needing much more in the near future. Since I send a lot of e-mail to those other four users, I love the fact that our correspondence is automatically encrypted since all of the traffic stays on the ProtonMail servers.

As an added bonus, much of the ProtonMail software, including the iOS and Android clients, are available as open source.

While I’m very satisfied with ProtonMail, there have been a couple of negatives. As a high profile pro-privacy service it has been the target of a number of DDOS attacks. I have never experienced this problem but as these kinds of attacks get more sophisticated and more powerful, it is always a possibility. Proton has done a great job at mitigating possible impact and the last big attack was back in 2018.

Another issue is that since ProtonMail is in Switzerland, they are not above Swiss law. In a high profile case, a French dissident who used ProtonMail was tracked down via their IP address. Under Swiss law a service provider can be compelled to turn over such information if certain conditions are met. In order to make this more difficult, my ProtonMail subscription includes access to ProtonVPN, an easy to use VPN client that can be used to obfuscate a source IP, even from Proton.

They are also launching a number of services to better compete with GSuite, such as a calendar and ProtonDrive storage. I haven’t started using those yet but I may in the future.

In summary, if you are either tired of hosting your own mail or desire a more secure e-mail solution, I can recommend ProtonMail. I’ve been using it for a little over two years and expect to be using it for years to come.

A Low Bandwidth Camera Solution

My neighbor recently asked me for advice on security cameras. Lately when anyone asks me for tech recommendations, I just send them to The Wirecutter. However, in this case their suggestions won’t work because every option they recommend requires decent Internet access.

I live on a 21 acre farm 10 miles from the nearest gas station. I love where I live but it does suffer from a lack of Internet access options. Basically, there is satellite, which is slow, expensive and with high latency, or Centurylink DSL. I have the latter and get to bask in 10 Mbps down and about 750 Kbps up.

Envy me.

Unfortunately, with limited upstream all of The Wirecutter’s options are out. I found a bandwidth calculator that estimates a 1 megapixel camera encoding video using H.264 at 24 fps in low quality would still require nearly 2 Mbps and over 5 Mbps for high quality. Just not gonna happen with a 750 Kbps circuit. In addition, I have issues sending video to some third party server. Sure, it is easy but I’m not comfortable with it.
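The arithmetic behind that calculator is easy enough to sketch: bitrate is roughly pixels × frames per second × bits per pixel. The bits-per-pixel factors below are my own rough assumptions for “low” and “high” quality H.264, not figures from the calculator itself:

```shell
# Back-of-the-envelope H.264 bandwidth estimate.
# bitrate (Mbps) = width * height * fps * bits_per_pixel / 1,000,000

bitrate_mbps() {  # usage: bitrate_mbps WIDTH HEIGHT FPS BITS_PER_PIXEL
    awk -v w="$1" -v h="$2" -v f="$3" -v b="$4" \
        'BEGIN { printf "%.1f", w * h * f * b / 1000000 }'
}

# A "1 megapixel" camera at 1280x720 and 24 fps:
echo "low quality:  $(bitrate_mbps 1280 720 24 0.09) Mbps"
echo "high quality: $(bitrate_mbps 1280 720 24 0.24) Mbps"
```

Either way, the result swamps a 750 Kbps upstream several times over, which is the whole problem.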

I get around this by using an application called Surveillance Station that is included on my Synology DS415+. Surveillance Station supports a huge number of camera manufacturers and all of the information is stored locally, so no need to send information to “the cloud”. There is also an available mobile application called DS-cam that allows you to access your live cameras and recordings remotely. Due to the aforementioned bandwidth limitations, it isn’t a great experience on DSL but it can be useful. I use it, for example, to see if a package I’m expecting has been delivered.

DS-Cam Camera App

[DS-Cam showing the current view of my driveway. Note the recording underneath the main window where you can see the red truck of the HVAC repair people leaving]

Surveillance Station is not free software, and you only get two cameras included with the application. If you want more there is a pretty hefty license fee. Still, it was useful enough to me that I paid it in order to have two more cameras on my system (for a total of four).

I have the cameras set to record on motion, and it will store up to 10GB of video, per camera, on the Synology. For cameras that stay inside I’m partial to D-Link devices, but for outdoor cameras I use Wansview mainly due to price. Since these types of devices have been known to be easily hackable, they are only accessible on my LAN (the “LAN of things”) and as an added measure I set up firewall rules to block them from accessing the Internet unless I expressly allow it (mainly for software updates).
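The firewall rules themselves are simple. On a Linux-based router the idea looks something like the sketch below; the interface names and camera subnet are assumptions for illustration, not my actual network layout:

```shell
# Assumptions: cameras live on 192.168.50.0/24, the LAN-facing interface
# is br0, and the WAN-facing interface is eth0. Adjust to your network.

# Let the cameras talk to hosts on the LAN (the Synology, desktops, etc.)
iptables -A FORWARD -s 192.168.50.0/24 -o br0 -j ACCEPT

# Drop anything from the camera subnet headed out to the Internet
iptables -A FORWARD -s 192.168.50.0/24 -o eth0 -j DROP

# When a camera needs a firmware update, temporarily insert an allow rule
# ahead of the DROP for just that device, then remove it when done:
#   iptables -I FORWARD -s 192.168.50.10 -o eth0 -j ACCEPT
#   iptables -D FORWARD -s 192.168.50.10 -o eth0 -j ACCEPT
```

The cameras still work fine with Surveillance Station, since all of that traffic stays on the LAN.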

To access Surveillance Station remotely, you can map the port on the Synology to an external port on your router and the communication can be encrypted using SSL. No matter how many cameras you have you only need to open the one port.

The main thing that prevented me from recommending my solution to my neighbor is that the DS415+ loaded with four drives was not inexpensive. But then it dawned on me that Synology has a number of smaller products that still support Surveillance Station. He could get one of those plus a camera like the Wansview for a little more than one of the cameras recommended by The Wirecutter.

The bargain basement choice would be the Synology DS118. It costs less than $200, though you still need to add a hard drive. I use WD Red drives, which run around $50 for 1TB and $100 for 4TB. Throw in a $50 camera and you are looking at about $300 for a one camera solution.

However, if you are going to get a Synology I would strongly recommend at least a 2-bay device, like the DS218. It’s about $70 more than the DS118 and you also would need to get another hard drive, but now you will have a Network Attached Storage (NAS) solution in addition to security cameras. I’ve been extremely happy with my DS415+ and I use it to centralize all of my music, video and other data across all my devices. With two drives you can suffer the loss of one of them and still protect your data.

I won’t go into all of the features the Synology offers, but I’m happy with my purchase even though I use only a few of them.

It’s a shame that there isn’t an easy camera option that doesn’t involve sending your data off to a third party. Not only does that solution not work for a large number of people, but you can also never be certain what the camera vendor is going to do with your video. This solution, while not cheap, does add the usefulness of a NAS with the value of security cameras, and is worth considering if you need such things.

Meeting Owl

One of the cool things I get to do working at OpenNMS is to visit customer sites. It is always fun to visit our clients and to work with them to get the most out of the application.

But over the last year I’ve seen a decline in requests for on-site work. This is odd because general interest in OpenNMS is way up, and it finally dawned on me why – fewer and fewer people work in an office.

For example, we work with a large bank in Chicago. However, their monitoring guy moved to Seattle. Rather than lose a great employee, they let him work from home. When I went out for a few days of consulting, we ended up finding a co-working space in which to meet.

Even within our own organization we are distributed. There is the main office in Apex, NC, our Canadian branch in Ottawa, Ontario, our IT guy in Connecticut and our team in Europe (spread out across Germany, Italy and the UK). We tend to communicate via some form of video chat, but that can be a pain if a lot of people are in one room on one end of the conference.

When I was visiting our partner in Australia, R-Group, I got to use this really cool setup they have using Polycom equipment. Video consisted of two cameras. One didn’t move and was focused on the whole room, but the other would move and zoom in on whoever was talking. The view would switch depending on the situation. It really improved the video conferencing experience.

I checked into it when I got back to the United States, and unfortunately it looked real expensive, way more than I could afford to pay. However, in my research I came across something called a Meeting Owl. We bought one for the Apex office and it worked out so well we got another one for Ottawa.

The Meeting Owl consists of a cylindrical speaker topped with a 360° camera. It attaches to any device that can accept a USB camera input. The picture displays a band across the top that shows the whole panorama, but then software “zooms” in on the person talking. The bottom of the screen will split to show up to three people (the last three people who have spoken).

It’s a great solution at a good price, but it had one problem. In the usual setup, the Owl is placed in the center of the conference table, and usually there is a monitor on one side. When the people at the table are listening to someone remote (either via their camera or another Owl), the people seated along the sides end up looking at the large monitor. This means the Owl is pretty much showing everyone’s ear.

It bothers me.

Now, the perfect solution would be to augment the Owl to project a picture as a hologram above the unit so that people could both see the remote person as well as look at the Owl’s camera at the same time.

Barring that, I decided to come up with another solution.

Looking on Amazon I found an inexpensive HDMI signal splitter. This unit will take one HDMI input and split it into four duplicate outputs. I then bought three small 1080p monitors (I wanted the resolution to match the 1080p main screen we already had) which I could then place around the Owl. I set the Owl on the splitter to give it a little height.

Meeting Owl with Three Monitors

Now when someone remote, such as Antonio, is talking, we can look at the small monitors on the table instead of the big one on the side wall. I found that three does a pretty good job of giving everyone a decent view, and if someone is presenting their screen everyone can look at the big monitor in order to make out detail.

Meeting Owl in Call

We tried it this morning and it worked out great. Just thought I’d share in case anyone else is looking for a similar solution.

2017 Europe: Three SIM card

Just a short post to praise the Three SIM card I bought in the UK several years ago.

I tend to buy unlocked phones and so when I travel I like to get a local SIM card, mainly for data. For this trip this was going to prove difficult, as I’m visiting five countries in nine days.

One thing I like about my Three SIM is that it never gets disabled. As long as I have a balance I have never had a problem, although I do travel enough that I end up using it at least once every six months or so. I am not able to top up the card on the Three website since I don’t have a UK credit card, so I simply use Mobiletopup.co.uk to get a £20 voucher from Paypal. Using that I just buy an “All in One 20” add-on which gives me 12GB of network access, 300 minutes and 3000 SMS messages – way more than I need. I turn that on before I leave the US and my phone works when I land.

What’s wonderful about it is that the plan is valid in any EU country. So far this trip I’ve used it in London, Helsinki and Tallinn, and I expect it to work in Riga and Brussels. I have yet to experience any network issues, although I have not moved far outside of major metropolitan areas.

I have no idea if Brexit will change this plan, but I sincerely hope not. So much of the technology I use in my life comes with headaches that I am grateful when things just work. Thanks Three.

Review: Copperhead OS

A few weeks ago I found an article in my news feed about a Tor phone, and it introduced me to Copperhead OS. This is an extremely hardened version of the Android Open Source Project (AOSP) designed for both security and privacy. It has become my default mobile OS so I thought I’d write about my experiences with it.

TL;DR: Copperhead OS is not for everyone. Due to its focus on security it is not easy to install any software that relies on Google Services, which is quite a bit. But if you are concerned with security and privacy, it offers a very stable and up to date operating system. The downside is that I am not able to totally divorce myself from Google, so I’ve taken to carrying two phones: one with Copperhead and one with stock Android for my “Googly” things. What we really need is a way to run a hypervisor on mobile device hardware. That way I could put all of my personal stuff on a Copperhead and the stuff I want to share with Google in a VM.

I pride myself, to the point of being somewhat smug, on using free software for most of my technology needs – or so I thought. My desktops, laptop, servers, router, DVR and even my weather station all use free and open source software, and I run OmniROM (an AOSP implementation) on my phone. I also “sandbox” my Google stuff – I only use Chrome for accessing Google web apps and I keep everything else separate (no sharing of my contacts and calendar, for example). So, I was unpleasantly surprised at how much I relied on proprietary software for my handy (short for “hand terminal” or what most people call a “mobile phone”, but I rarely use the “phone” features of it so it seems like a misnomer).

But first a little back story. I was sitting on the toilet playing on my mobile device (“playing on my handy” seemed a little rude here) when I came across a page that showed me all of the stuff Google was tracking about my mobile usage. It was a lot, and let’s just say any bathroom issues I was having were promptly solved. They were tracking every call and text I made, which apps I opened, as well as my location. I knew about the last one since I do play games like Ingress and Pokémon Go that track you, but the others surprised me. I was able to turn those off (supposedly) but it was still a bit shocking.

Of course, I had “opted in” to all of that when I signed in to my handy for the first time. When you allow Google to backup your device data, you allow them to record your passwords and call history.

Google Backup Terms

If you opt in to help “improve your Android experience”, you allow them to track your app usage.

Google App Terms

And most importantly, by using your Google account you allow them to install software automatically (i.e. without your explicit permission).

Google Upgrade Terms

Note that this was on a phone running OmniROM, and not stock Google, but it still looks like you have to give Google a lot of control over your handy if you want to use a Google account.

Copperhead OS is extremely focused on security, which implies the ability to audit as much software on the device as possible, as well as to control when and what gets updated. This led them to remove Google Play Services from the ROM entirely. Instead, they set up F-Droid as the trusted repository. All the software in F-Droid is open source, and in fact all of the binaries are built by the F-Droid team and not the developer. Now, of course, someone on that team could be compromised and put malicious software into the repo, but you’ve got to trust somebody or you will spend your entire life doing code reviews and compiling.

Copperhead only runs on a small subset of devices: the Nexus 6P, the Nexus 5X and the Nexus 9 WiFi edition. This is because they support secure boot which prevents malicious code from modifying the operating system. Now, I happened to have a 6P, so I figured I would try it out.

The first hurdle was understanding their terminology. On the download page they refer to a “factory” image, which I initially took to mean the original stock image from Google. What they mean is an image that you can use for a base install. If you flash your handy as often as I do, you have probably come across the process for restoring it to stock. You install the Android SDK and then download a “factory” image from Google. You then expand it (after checking the hash, of course) and run a “flash-all” script. This will replace all the data on your device, including a custom recovery like TWRP, and you’ll be ready to run Copperhead. Note that I left off some steps such as unlocking and then re-locking the bootloader, but their instructions are easy to follow.
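Condensed, the process looks something like the sketch below. The image filename is made up, and I’ve left out the bootloader unlock/re-lock steps; follow Copperhead’s own instructions for the real thing:

```shell
# Assumes the Android SDK platform-tools (adb/fastboot) are on your PATH
# and the bootloader is already unlocked. "angler" is the Nexus 6P codename;
# the filename here is a placeholder for whatever Copperhead publishes.

# Verify the download against the published hash before flashing anything
sha256sum angler-factory-copperheados.tar.xz

# Unpack the factory image and run the flash-all script
tar xvf angler-factory-copperheados.tar.xz
cd angler-*/
adb reboot bootloader
./flash-all.sh   # wipes the device, including any custom recovery like TWRP
```

After the script finishes and you re-lock the bootloader, the device boots straight into Copperhead.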

The first thing you notice is that there isn’t the usual “set up your Google account” steps, because, of course, you can’t use your Google account on Copperhead. Outside of missing Google Apps, the device has a very stock Android feel, including the immovable search bar and the default desktop background.

This is when reality began to set in as I started to realize exactly how much proprietary software I used to make my handy useful.

The first app I needed to install was the Nova Launcher. This is a great Launcher replacement that gives you a tremendous amount of control over the desktop. I looked around F-Droid for replacement launchers, and they either didn’t do what I wanted them to do, or they haven’t been updated in a couple of years.

Then it dawned on me – why don’t I just copy over the apk?

When you install a package from Google Play, it usually gets copied into the /data/app directory. Using the adb shell and the adb pull commands from the SDK, I was able to grab the Nova Launcher software off of my Nexus 6 (which was running OmniROM) and copy it over to the 6P. Using the very awesome Amaze file explorer, you just navigate to the apk and open it. Now, of course, since this file didn’t come from a trusted repository you have to go under Security and turn on the “unknown sources” option (and be sure to turn it back off when you are done). I was very happy to see that it runs just fine without Google Services, and I was able to get rid of the search bar and make other tweaks.
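The copy itself is just a couple of adb commands. The package path below is illustrative – it varies by device and Android version, so use `pm path` to find the real location:

```shell
# On the old phone: find where the installed apk lives
# (com.teslacoilsw.launcher is Nova Launcher's package name)
adb shell pm path com.teslacoilsw.launcher

# Pull it to the desktop, using the path pm reported
adb pull /data/app/com.teslacoilsw.launcher-1/base.apk nova-launcher.apk

# On the new phone: either open the apk from a file manager like Amaze,
# or install it directly over USB
adb install nova-launcher.apk
```

Either way, remember this only works for apps that don’t depend on Google Services at runtime.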

Then I focused on installing the open source apps I do use, such as K-9 Mail and Wikipedia, both of which exist in F-Droid. I had been using the MX Player app for watching videos, pretty much out of habit, but it was easy to replace with the VLC app from F-Droid.

I really like the Poweramp music player, with the exception that it periodically checks in with the Play store to make sure your license is valid. Unfortunately, this has happened to me twice when I was in an airplane over the ocean, and the lack of network access meant I couldn’t listen to music. I was eager to replace it, but the default Music app that ships with Copperhead is kind of lame. It does a good job playing music, but the interface is hard to navigate. The “black on gray” color scheme is very hard to read.

Default Music Player Screenshot

So I replaced it with the entirely capable Timber app from F-Droid.

Timber Music Player Screenshot

Another thing I needed to replace was Feedly. I’m old, so I still get most of my news directly from websites via RSS feeds and not social media. I used to use Google Reader, and when that went away I switched to Feedly. It worked fine, but I bristled at the fact that it tracked my reading habits. Next to each article would be a number representing the number of people who clicked on it to read it, so at a minimum they were tracking that. I investigated a couple of open source replacements when I was pleasantly surprised to discover that Nextcloud has a built in News service. We have had a really good experience with Nextcloud over the last couple of months, and it was pretty easy to add the news service to our instance. Using OPML I was able to export my numerous feeds from Feedly into Nextcloud, and that was probably the easiest part of this transition. On the handy I used an F-Droid app called OCReader which works well.

There were still some things I was missing. For example, when I travel overseas I keep in touch with my bride using Skype (which is way cheaper than using the phone) so I wanted to have Skype on this device. It turns out that it is in the Amazon App Store, so I installed that and was able to get things like Skype and the eBay and IMDB apps (as well as Bridge Baron, which I like a lot). Note that you still have to allow unknown sources since the Amazon repository is not trusted, and remember to set it back when you are done.

This still left a handful of apps I wanted, and based on my success with the Nova Launcher I just tried to install them from apks. Surprisingly, most of them worked, although a couple would complain about Google Services being missing. I think background notifications are the main reason they use Google Services, so if you can live without those you can get by just fine.

One app that wouldn’t work was Signal, which was very surprising since they seem to be focused on privacy and security. Instead, the default messenger is an app called Silence, which is a Signal fork. It works well, but it isn’t in the Play store (at least in the US due to a silly trademark issue that hasn’t been fixed) and no one I know uses it so it kind of defeats the purpose of secure messaging. Luckily, I discovered that the Copperhead gang has published their own fork called Noise, which removes the Googly bits but still works with the rest of the Signal infrastructure, so I have been using it as my default client with no issues. Note that it is in the F-Droid app but doesn’t show up on the F-Droid website for some reason.

For other apps such as Google+ and Yelp, I rediscovered the world wide web. Yes, browsers still work, and the web pages for these sites are pretty close to matching the functionality of the native app.

There are still some things for which there is no open source replacement: Google Maps, for example. Yes, I know, by using Google Maps I am sharing my location with Google, but the traffic data is just so good that it has saved literally hours of my life by directing me around accidents and other traffic jams. OpenStreetMap is okay and works great offline, but it doesn’t know where the OpenNMS office is located (I need to fix that) and without traffic it is a lot less useful. There is also the fact that I do like to play games like Ingress and Pokémon Go, and I have some movies and other content on Google servers.

I also lost Android Wear. I really enjoy my LG Urbane but it won’t work without Google Services. I have been playing with AsteroidOS which shows a lot of promise, but it isn’t quite there yet.

Note that Compass by OpenNMS is not yet available in F-Droid. We use Apache Cordova to build it and that is not (yet) supported by the F-Droid team. We do post the apks on Github.

To deal with my desire for privacy and my desire to use some Google software, I decided to carry two phones.

On the Nexus 6P I run Copperhead and it has all of my personal stuff on it: calendar, contacts, e-mail, etc. On the Nexus 6 I am running stock Google with all my Googly bits, including maps. I still lock down what I share with Google, but I feel a lot more confident that I won’t accidentally sync the rest of my life with them.

It sucks carrying two phones. With the processors and memory in modern devices I’m surprised that no one has come up with a hypervisor technology that would let me run Copperhead as my base OS and stock Google in a VM. Well, not really surprised since there isn’t a commercial motivation for it. Apple doesn’t have a reason to let other software on its products, and Google would be shooting itself in the foot since its business model involves collecting data on everything. I do think it will happen, however. The use case involves corporations, especially those involved in privacy sensitive fields such as health care. Wouldn’t it be cool to have a locked down “business” VM that is separate from a “personal” VM with your Facebook, games and private stuff on it?

As for the Copperhead experience itself, it is pretty solid. I had a couple of issues where DNS would stop working, but those seem to have been resolved, and lately it has been rock solid except for one instance when I lost cellular data. I tried resetting the APN, which didn’t help, but after a reboot it started working again. Odd. Overall it is probably the most stable ROM I’ve run, but part of that could be due to how vanilla it is.

Copperhead is mainly concerned with security and not extending the Android experience. For example, one feature I love about the OmniROM version of the Alarm app is the ability to set an action on “shake”. For example, I set it to “shake to dismiss” so when my alarm goes off I can just reach over, shake the phone, and go back to bed. That is missing from the stock ROM (but included in AOSP) and thus it is missing from Copperhead. The upside is that Copperhead is extremely fast with updates, especially security updates.

The biggest shortcoming is the keyboard. I’ve grown used to “gesture” typing using the Google keyboard, but that is missing from the AOSP keyboard and no free third party keyboards have it either. I asked the Copperhead guys about it and got this reply:

If the open-source community makes a better keyboard than AOSP Keyboard, we’ll switch to it. Right now it’s still the best option. There’s no choice available with gesture typing, let alone parity with the usability of the built-in keyboard. Copperhead isn’t going to be developing a keyboard. It’s totally out of scope for the project.

So, not a show stopper, but if anyone is looking to make a name for themselves in the AOSP world, a new keyboard would be welcome.

To further increase security, there is a suggestion to create a strong two-factor authentication mechanism. The 6P has a fingerprint sensor, but I don’t use it because I don’t believe that your fingerprint is a good way to secure your device (it is pretty easy to coerce you to unlock your handy if all someone has to do is hold you down and force your finger onto a sensor). However, requiring both a fingerprint and a PIN would be really secure, as the best security combines something you are (a fingerprint) with something you know (a PIN).

So here was my desktop on OmniROM:

Old Phone Desktop

and here is my current desktop:

New Phone Desktop

Not much different, and while I’ve given up a few things I’ve also discovered OCReader and Nextcloud News, plus the Amaze file manager.

But the biggest thing I’ve gained is peace of mind. I want to point out that it is possible to run other ROMs, such as OmniROM, without Google Services, but they aren’t quite as focused on security as Copperhead. Many thanks to the Copperhead team for doing this, and if you don’t want to go through all the work I did, you can buy a supported device directly from them.

Review: X-Arcade Gaming Cabinet

Last year I wanted to do something special for the team to commemorate the release of OpenNMS Meridian.

Since all the cool kids in Silicon Valley have access to a classic arcade machine, I decided to buy one for the office. I did a lot of research and I settled on one from X-Arcade.

X-Arcade Machine

The main reasons were that it looked well-made but it also included all of my favorites, including Pac-Man, Galaga and Tempest.

X-Arcade Games

The final piece that sold me on it was the ability to add your own graphics. I went to Jessica, our Graphic Designer, and she put together this wonderful graphic done in the classic eight-bit “platformer” style and featuring all the employees.

X-Arcade Graphic

Ulf took the role of Donkey Kong, and here is the picture meant to represent me:

X-Arcade Tarus

The “Tank Stick” controls are solid and responsive, although I did end up adding a spinner since none of the controls really worked for Tempest.

When you order one of these things, they stress that you need to make sure it arrives safely. Seriously, like four times, in big bold letters, they state you should check the machine on delivery.

I was going to be out of town when it arrived, so I made sure to tell the person checking in the delivery to make sure it was okay (i.e. take it out of the box).

They didn’t (the box looked “fine”) and so we ended up with this:

X-Arcade Cracked Top

(sigh)

Outside of that, everything arrived in working order. You get a small Dell desktop running Windows with the software pre-installed, but you also get CDs with all the games that are included with the system. It’s a little bit of a pain to set up since the instructions are a little vague, but after about an hour or so I had it up and running.

Anyway, it is real fun to play. It supports MAME games, Sega games, Atari 2600 games and even that short-lived laserdisc franchise “Dragon’s Lair”. You can copy other games to the system if you have them, although scrolling through the menu can get a bit tiring if you have a long list of titles.

We had an issue with the CRT about 11 months after buying the system. I came back from a business trip to find the thing dark (it never goes dark; even if the computer is hung for some reason you’ll still see a “no signal” graphic on the monitor). Turns out the CRT had died, but they sent us a replacement under warranty, hassle free. It took about an hour to replace (those instructions were pretty detailed) and it worked better than ever afterward.

This motivated me to consider fixing the top. When we had the system apart to replace the monitor, I noticed that the top was a) the only thing broken and b) held on with eight screws. I contacted them about a replacement piece and to my surprise it arrived two days later – no charge.

The only issue I have remaining with the system is the fact that it is Windows-based. This seems to be the perfect application for a small solid-state Linux box, but I haven’t had the time to investigate a migration. Instead I just turned off or removed as much software as I could (all the Dell Update stuff kept popping up in the middle of playing a game) and so far so good.

I am very happy with the product and extremely happy with the company behind it. If you are in the market for such a cabinet, please check them out.

First Thoughts on Linux Mint 18 “Sarah”

I am a big fan of Linux Mint and I look forward to every release. This week Mint 18 “Sarah” was released. I decided to try it out on my Dell XPS 13 laptop since it is the easiest machine of mine to base (i.e. wipe and reinstall), and they really haven’t suggested an upgrade path. The one article I was able to find suggested a clean install, which is what I did.

First, I backed up my home directory, which is where most of my stuff lives, and I backed up the system /etc directory since I’m always making a change there and forgetting that I need it (usually concerning setting up the network interface as a bridge).

I then installed a fresh copy of Mint 18. Now they brag that the HiDPI support has improved (everyone does, as I will grouse about later) but it hasn’t. So the first thing I did was to go to Preferences -> General and set “User interface scaling” to “Double”. This worked pretty well in Mint 17 and it seems to be fine in Mint 18 too.

I then did a basic install (I used a USB dongle to connect to a wired network since I didn’t want to mess with the Broadcom drivers at this point) and chose to encrypt the entire hard drive, which is something I usually do on laptops.

I hit my first snag when I rebooted. The boot cycle would hang at the password screen to decrypt the drive. In Mint 17 the password prompt would be on top of the “LM” logo. I would type in the password and it would boot. Now the “LM” logo has five little dots under it, like the Ubuntu boot screen, and the password prompt is below that. It’s just that it won’t accept input. If I boot in recovery mode, the password prompt is from the command line and works fine.

(sigh)

This seems to be a problem introduced with Ubuntu 16.04. Well, before I dropped back down to Mint 17 I decided to try out Ubuntu 16.04 itself as well as Kubuntu. My laptop was based in any case.

I ran into the usual HiDPI problems with both of those. I really, really want to like Kubuntu but with my dense screen I can’t make out anything and thus I can’t find the option to scale it. Ubuntu’s Unity was easier as it has a little sliding scaler, but when I got it to a resolution I liked many of the icon labels were clipped, just like last time I looked at it.

(sigh)

Then it dawned on me that I could just install Mint 18 but see if encrypting just my home directory would work. It did, so for now I’m using Mint 18 without full disk encryption. The next step was to install the proprietary Broadcom driver and then wireless worked.

Next, I edited /etc/fstab and added my backup NFS mount entry, mounted the drive and started restoring my home directory. That went smoothly, until I decided to reboot.

The laptop just hung at the boot screen.

Now there is a bug in Dell BIOS that if I try to boot with a USB network adapter plugged in, it erases the EFI entry for “ubuntu” and I have to go into setup and manually re-add it. Thus I was disconnecting the dongle for every reboot. On a whim I plugged it back in and the system booted. This led me to believe that there was an issue with the NFS mount in /etc/fstab, and that’s what the problem turned out to be.

The problem is that systemd likes to get its little hands into everything, so it tries to mount the volume before the wireless network is initialized. The solution is to add a special option that will cause systemd to automount the volume when it is first requested. Here is what worked:

172.20.10.5:/volume1/Backups /media/backups nfs noauto,x-systemd.automount,nouser,rsize=8192,wsize=8192,atime,rw,dev,exec,suid 0 0

The key bits are “noauto,x-systemd.automount”.
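Since a bad line in /etc/fstab is exactly what hung the boot, it’s worth a quick sanity check before rebooting. Here’s a sketch that just parses a copy of the entry rather than touching the real file:

```shell
# Sanity-check the fstab entry before rebooting. This is the line from
# above, with both trailing fstab fields (dump and pass) included.
line='172.20.10.5:/volume1/Backups /media/backups nfs noauto,x-systemd.automount,nouser,rsize=8192,wsize=8192,atime,rw,dev,exec,suid 0 0'

# Field 4 holds the mount options; without noauto + x-systemd.automount,
# systemd tries to mount at boot, before the wireless network is up.
opts=$(echo "$line" | awk '{print $4}')
case "$opts" in
  *noauto*x-systemd.automount*) result="automount options present" ;;
  *) result="missing noauto/x-systemd.automount" ;;
esac
echo "$result"

# After editing the real /etc/fstab, regenerate systemd's mount units:
#   sudo systemctl daemon-reload
# The first access ("ls /media/backups") then triggers the NFS mount.
```

With those options in place, systemd creates an automount unit for the path and defers the actual NFS mount until something touches the directory, so boot no longer depends on the network being up.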

With that out of the way, I added mounts for my music and my video collection. That’s when I noticed a new weirdness in Cinnamon: dual icons on the desktop. I have set the desktop option to display icons for mounted file systems and now I get two of them for each remote mount point.

Double Desktop Icons

Annoying and I haven’t found a solution, so I just turned that option back off.

Now I was ready to play with the laptop. I’m often criticized for buying brand new hardware and expecting solid Linux support (yeah, you, Eric) but this laptop has been out for over a year. Still, the trackpad was a little wonky – the cursor tended to jump to the lower right-hand corner. Mint 18 ships with a 4.4 kernel but I had been using Mint 17 with a 4.6 kernel. One of the features of 4.6 is “Dell laptop improvements”, so while I was hoping 4.4 would work for me (and that the features I needed had been backported), it wasn’t so. I installed 4.6 and my trackpad problems went away.

The final issue I needed to fix concerned ssh. I use ssh-agent and keys to access a lot of my remote servers, and it wasn’t working on Mint 18. Usually this is a permissions issue, but I compared the laptop to a working configuration on my desktop and the permissions were identical.

The error I got was:

debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0

It turns out that OpenSSH 7.0 seems to require that an “IdentityFile” parameter be expressly defined. I might have been able to do this in the system-wide ssh_config, but instead I just created a ~/.ssh/config file with the line:

IdentityFile ~/.ssh/id_dsa_main

That got me farther. Now the error changed to:

debug1: Skipping ssh-dss key /home/tarus/.ssh/id_dsa_main - not in PubkeyAcceptedKeyTypes
debug1: Skipping ssh-dss key tarus@server1.sortova.com - not in PubkeyAcceptedKeyTypes

It seems the key I created back in 2001 is no longer considered secure. Since I didn’t want to go through the process of creating a new key right now, I added another line to my ~/.ssh/config file:

IdentityFile ~/.ssh/id_dsa_main
PubkeyAcceptedKeyTypes=+ssh-dss

and now it works as expected. The weird part is that you would think this would be controlled on the server side, but the failure was coming from the client and thus I had to fix it on the laptop.
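Of course, the PubkeyAcceptedKeyTypes line is just a workaround; the real fix is to retire the old DSA key, since OpenSSH deprecated ssh-dss for a reason. A sketch of generating a modern Ed25519 replacement (the filename and comment are examples, not my actual setup):

```shell
# Generate a new Ed25519 keypair. -N '' sets an empty passphrase for
# this example only; use a real passphrase (plus ssh-agent) in practice.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519_main" -N '' -C 'example-key'

# Point the ssh client at the new key instead of the old DSA one:
cat >> "$HOME/.ssh/config" <<'EOF'
IdentityFile ~/.ssh/id_ed25519_main
EOF

# Then install the public key on each server, e.g.:
#   ssh-copy-id -i ~/.ssh/id_ed25519_main.pub server1.sortova.com
# and drop the PubkeyAcceptedKeyTypes workaround once logins succeed.
```

Once the new key is on every server, the ~/.ssh/config file shrinks back to a single IdentityFile line and no compatibility options.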

It is installed and seems to be working, but I haven’t really played around with Mint 18 much yet, so I may have to write another post soon. I do give them props for finally updating the default desktop wallpaper. I know the old wallpaper was traditional, but man was it dated.

This was a more complex upgrade than usual, and I don’t agree that you must base your system to do it, even from major release to major release. This isn’t Fedora. It’s based on Ubuntu, which is based on Debian, and I have rarely had issues with those upgrades. Usually you just change your repositories and then run “apt-get dist-upgrade”.
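For reference, the Debian-style path looks roughly like this. A sketch only: the codenames are real (Mint 17.3 “rosa” on Ubuntu “trusty”, Mint 18 “sarah” on “xenial”), but I’m demonstrating the edit on a scratch copy rather than the live sources list.

```shell
# The real file is /etc/apt/sources.list.d/official-package-repositories.list;
# this works on a temporary copy with representative contents.
src=$(mktemp)
cat > "$src" <<'EOF'
deb http://packages.linuxmint.com rosa main upstream import
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe
EOF

# Swap the old codenames for the new ones...
sed -i 's/rosa/sarah/g; s/trusty/xenial/g' "$src"
cat "$src"

# ...then, against the real file, fetch new package lists and upgrade:
#   sudo apt-get update
#   sudo apt-get dist-upgrade
```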

But … for my other machines I might wait a week or two after they approve an upgrade procedure, and let other people hit the bugs first, just in case. My desktops are more important to me than my laptop.

Hats off to the Mint team. I’m pretty tied to this operating system so I’m encouraged that it keeps moving forward as quickly as it does.