Fun with Networks

This is a long post about overcoming some challenges I had with a recent network install. It should have been pretty straightforward but instead I hit a couple of speed bumps that caused it to take much longer than I expected.

Last year I moved for the first time in 24 years, and the new place presented some challenges. One big upside is that it has gigabit fiber, which is a massive improvement over my last place.

The last place, however, had a crawl space and I was able to run Ethernet cable pretty much everywhere it was needed. The new place is a post and beam house (think “ski lodge”) which doesn’t lend itself well to pulling cable, so I needed a wireless solution.

I used to use Ubiquiti gear, and it is quite nice, but it had a couple of downsides. You had to install controller software on a personal device for ease of configuration, and I soured on the company a bit over how it dealt with GPL compliance issues and how it disclosed the scale of a security incident. I wanted to check out other options for the new place.

I looked on Wirecutter for their mesh network recommendations and they suggested the ASUS ZenWiFi product for gigabit fiber networks, and I ended up buying several nodes. You can connect each node via cable if you want, but there is also a dedicated 5 GHz radio for wireless backhaul between the nodes, which is what I needed.

There were a couple of issues with the stock firmware, specifically that it didn’t support dynamic DNS for my registrar (the awesome Namecheap) and it also didn’t support SNMP, which is a must have for me. Luckily I found the Merlin project, specifically the gnuton group which makes an open source firmware (with binary blobs for the proprietary bits) for my device with the features I need and more.
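
If you are wondering what having SNMP on the router buys you, a quick sanity check is to walk the system tree from another machine on the LAN once the daemon is enabled. This is just a sketch; the community string and address below are placeholders for whatever you configure in the firmware's SNMP settings:

$ snmpwalk -v2c -c public 192.168.1.1 system

If the daemon is running you should get back sysDescr, sysUpTime and friends, and from there you can point your monitoring tool of choice at it.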

[Note: As I write this there is a CVE that ASUS has addressed that has not been patched in gnuton. It’s frustrating, as I’ve had to close down NAT for now and the fix is taking some time. I tried to join the team’s Discord channel but you have to be approved, and my approval has been in the queue for a week now (sigh). Still love the project, though.]

Anyway, while I had some stability issues initially (I am still a monitoring nerd so I was constantly observing the state of the network) those seem to have gone away in the last few months and the system has been rock-solid.

The new farm is nice but there were no outbuildings, so we ended up building a barn. The barn is about 200m from the house, and I wanted to get internet access out there. We live in a dead zone for mobile wireless access so if you are in the barn you are basically cut off, and it would be nice to be able to connect to things like cameras and smart switches.

For a variety of reasons running a cable wasn’t an option, so I started looking for some sort of wireless bridge. I found a great article on Ars Technica on just such a solution, and ended up going with their recommendation of the TP-Link CPE-210, 2.4 GHz version. It has a maximum throughput of 100Mbps but that is more than sufficient for the barn.

I bought these back in December and they were $40 each (you’ll need two) but I just recently got around to installing them.

You configure one as the access point (AP) and one as the client. It doesn’t really matter which is which but I put the access point on the house side. Note that the “quick setup” has an option called “bridge mode” which is exactly what I want to create, a bridge, but that means something different in TP-Link-speak so stick with the AP/Client configuration.

I plugged the first unit into my old Netgear GS110TP switch, but even though that switch has PoE ports it was not able to drive the CPE-210 (the CPE-210 wants passive PoE rather than the standard 802.3af the switch supplies), so I ended up using the included PoE injector. I simply followed the instructions: I set the IP addresses on each unit to ones I can access from my LAN, I created a separate wireless network for the bridge, and with the units sitting about a meter apart I was able to plug a laptop in on the client side and get internet access.

Now I wanted to be able to extend my mesh network out to the barn, so I bought another ASUS node. The one I got was actually version 2 of the hardware and even though it has the same model number (AX-6600) the software is different enough that gnuton doesn’t support it. From what I can tell this shouldn’t make a difference in the mesh since it will just be a remote node, but I made sure to update the ASUS firmware in any case. The software has a totally different packaging system, which just seems weird to me since the model number didn’t change. I plugged it in and made sure it was connected to my AiMesh network.

I was worried that alignment might be a problem, so I bought a powerful little laser pointer and figured I could use that to help align the radios as I could aim it from one to another.

I had assumed the TP-Link hardware could be directly mounted on the wall, but it is designed to be tie-wrapped to a pole. I don’t have any poles, so it was once again off to Amazon to buy two small masts that I could use for the installation.

Now with everything ready, I started the install. I placed the AP right outside of my office window, which has a clear line of sight to the hayloft window of the barn.

I mounted the Client unit on the side of the barn window, and ran the Ethernet cable down through the ceiling into the tack room.

The tack room is climate controlled and I figured it would be the best place for electronics. The Ethernet cable went into the PoE injector, and I plugged the ASUS ZenWiFi node into the LAN port on the same device. I then crossed my fingers and went back to my desk to test everything.

Yay! I could ping the AP, the Client and the ASUS node. Preparation has its benefits.

Or does it?

When I went back to the barn to test out the connection, I could connect to the wireless SSID just fine, but then it would immediately drop. The light on the unit, which should be white, was glowing yellow indicating an issue. I ended up taking the unit back to the house to troubleshoot.

It took me about 90 minutes to figure out the issue. The ASUS device has four ports on the back: three switched LAN ports and a single WAN port. On the primary unit the WAN port is used to connect to the gigabit fiber device provided by my ISP. What I didn’t realize is that you also use the WAN port on a remote node if you want a cabled backhaul instead of the 5GHz wireless network. While my bridge isn’t a wired connection per se, it behaves like one, so I needed to plug it into the blue WAN port instead of the yellow LAN port I had been using. The difference is about a centimeter, yet it cost me an hour and a half.

Everything is a physical layer problem. (sigh)

Once I did that I was golden. I used the laser pointer to make sure I was as aligned as I could be, but it really wasn’t necessary as these devices are meant to span kilometers. My 0.2km run was nothing, and the connection doesn’t require a lot of accuracy. I did a speed test and got really close to the 100Mbps promised.

So I was golden, right?

Nope.

I live somewhere so remote that I have an open guest network. There is no way my neighbors would ever come close enough to pick up my signal, and I think it is easier to tell visitors to just connect without having to give them a password. Most modern phones support calling over WiFi, and with mobile wireless service being so weak here my guests can get access easily.

We often have people in the barn so I wanted to make sure that they could have access as well, but when I tested it out the guest network wouldn’t work. You could connect to the network but it would never assign an IP address.

(sigh)

More research revealed that ASUS uses VLAN tagging to route guest traffic over the network. Something along the way must have been gobbling up those packets.
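
One way to check a theory like that (a sketch, not exactly what I ran; the interface name is a placeholder for whatever your machine calls its wired port) is to capture traffic on the far side of the bridge and look for 802.1Q tags:

$ sudo tcpdump -i eth0 -e -nn vlan
# -e prints the link-level header, so tagged frames show up with "vlan <id>"
# if the guest VLAN's tags never make it past a given hop, that hop is the culprit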

I found what looked like a promising post covering just that issue with the CPE-210, but changing the configuration didn’t work for me.

Finally it dawned on me what the problem must be. Remember that old Netgear switch I mentioned above? I had plugged the bridge into that switch instead of directly into the ASUS. I did this because I thought I could drive it off the switch without using a PoE injector. When I swapped cables around to connect the bridge directly to the AiMesh node, everything started working as it should.

Success! I guess the switch was mangling or dropping the VLAN-tagged frames just enough to cause the guest network to fail.

If at least one of my three readers has made it this far, I want to point out that several things here made it difficult to pinpoint the problem. When I initially brought up the bridge I could ping the ASUS remote node reliably, so it was hard to diagnose that it was plugged into the wrong port. When the Netgear switch was causing issues with the guest network, the main, password protected SSID worked fine. Had either of these things not worked I doubt it would have taken me so long to figure out the issues.

I am very happy with how it all turned out. I was able to connect the Gree mini-split HVAC unit in the tack room to the network for remote control, and I added TP-Link “Kasa” smart switches so we could turn the stall fans on and off from HomeKit. I’m sure the next time we have people out here working they will appreciate the access.

Anyway, hope this proves helpful for someone else, and be sure to check out the CPE-210 if you have a similar need.

Using rclone to Sync Data to the Cloud

I am working hard to digitize my life. Last year I moved for the first time in 24 years and I realized I have way too much stuff. A lot of that stuff was paper, in the form of books and files, so I’ve been busy trying to get digital copies of all of it. Also, a lot of my life was already digital: I have e-mails starting in 1998, and a lot of my pictures were taken with a digital camera.

TL;DR: This is a tutorial for using the open source rclone command line tool to securely synchronize files to a cloud storage provider, in this case Backblaze. It is based on MacOS but should work in a similar fashion on other operating systems.

That brings up the issue of backups. A friend of mine was the victim of a home robbery, and while the thieves took a number of expensive things, the biggest loss was his archive of photos. It was irreplaceable. This has made me paranoid about backing up my data. I have about 500GB of must-save data and around 7TB of “would be nice to save” data.

At my old house the best option I had for network access was DSL. It was usable for downstream but upstream was limited to about 640kbps. At that rate I might be able to back up my data – once.

I can remember in college we were given a test question about moving a large amount of data across the United States. The best answer was to put a physical drive in a FedEx box and overnight it there. So in that vein my backup strategy was to buy three Western Digital MyBooks. I created a script to rsync my data to the external drives. One I kept in a fire safe at the house. It wasn’t guaranteed to survive a hot fire in there (fire safes are rated to protect paper, which can withstand much higher temperatures than a hard drive) but there was always a chance it might, depending on where the fire was hottest. I took the other two drives and stored one at my father’s house and the other at a friend’s house. Periodically I’d take the drive out of the safe, rsync it, and swap it with one of the remote drives. I’d then rsync that drive and put it back in the safe.
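
I won’t reproduce the script here, but the heart of it was a single rsync command. Something along these lines (the paths are made up for the example) is all it takes:

$ rsync -avh --delete /Volumes/Data/ /Volumes/MyBook/Data/
# -a preserves permissions and timestamps, -v is verbose, -h shows human-readable sizes
# --delete removes files from the destination that no longer exist at the source,
# keeping the external drive an exact mirror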

It didn’t keep my data perfectly current, but it would mitigate any major loss.

At my new house I have gigabit fiber. It has symmetrical upload and download speeds, so my ability to upload data is much, much better. I figured it was time to choose a cloud storage provider and set up a much more robust way of backing up my data.

I should stress that when I use the term “backup” I really mean “sync”. I run MacOS and I use the built-in Time Machine app for backups. The term “backup” in this case means keeping multiple copies of files, so not only is your data safe, if you happen to screw up a file you can go back and get a previous version.

Since my offsite “backup” strategy is just about dealing with a catastrophic data loss, I don’t care about multiple versions of files. I’m happy just having the latest one available in case I need to retrieve it. So it is more of synchronizing my current data with the remote copy.

The first thing I had to do was choose a cloud storage provider. Now as my three readers already know I am not a smart person, but I surround myself with people who are. I asked around and several people recommended Backblaze, so I decided to start out with that service.

Now I am also a little paranoid about privacy, so anything I send to the cloud I want to be encrypted. Furthermore, I want to be in control of the encryption keys. Backblaze can encrypt your data but they help you manage the keys, and while I think that is fine for many people it isn’t for me.

I went in search of a solution that both supported Backblaze and contained strong encryption. I have a Synology NAS which contains an application called “Cloud Sync” and while that did both things I wasn’t happy that while the body of the file was encrypted, the file names were not. If someone came across a file called WhereIBuriedTheMoney.txt it could raise some eyebrows and bring unwanted attention. (grin)

Open source to the rescue. In trying to find a solution I came across rclone, an MIT licensed command-line tool that lets you copy and sync data to a large number of cloud providers, including Backblaze. Furthermore, it is installable on MacOS using the very awesome Homebrew project, so getting it on my Mac was as easy as

$ brew install rclone

However, as is often the case with open source tools, free software does not mean a free solution, so I did have a small learning curve to climb. I wanted to share what I learned in case others find it useful.

Once rclone is installed it needs to be configured. Run

$ rclone config

to access a script to help with that. In rclone syntax a cloud provider, or a particular bucket at a cloud provider, is called a “remote”. When you run the configurator for the first time you’ll get the following menu:

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Select “n” to set up a new remote, and it will ask you to name it. Choose something descriptive but keep in mind you will use this on the command line so you may want to choose something that isn’t very long.

Enter name for new remote.
name> BBBackup

The next option in the configurator will ask you to choose your cloud storage provider. Many are specific commercial providers, such as Backblaze B2, Amazon S3, and Proton Drive, but some are generic, such as Samba (SMB) and WebDav.

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon Drive
   \ (amazon cloud drive)
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
   \ (s3)
 6 / Backblaze B2
   \ (b2)

...

I chose “6” for Backblaze.

At this point in time you’ll need to set up the storage on the provider side, and then access it using an application key.

Log in to your Backblaze account. If you want to try it out, note that you don’t need any kind of credit card to get started. They will limit you to 10GB (and I don’t know how long that stays around), but you can play with it before deciding to commit.

Go to Buckets in the menu and click on Create a Bucket.

Note that you can choose to have Backblaze encrypt your data, but since I’m going to do that with rclone I left it disabled.

Once you have your bucket you need to create an application key. Click on Application Keys in the menu and choose Add a New Application Key.

Now one annoying issue with Backblaze is that all bucket names have to be unique across the entire system, so “rcloneBucket” and “Media1” etc. have already been taken. Since I’m just using this as an example it was fine for the screenshot, but note that when I add an application key I usually limit it to a particular bucket. When you click on the dropdown it will list your available buckets.

Once you create a new key, Backblaze will display the keyID, the keyName and the applicationKey values on the screen. Copy them somewhere safe because you won’t be able to get them back. If you lose them you can always create a new key, but you can’t modify a key once it has been created.

Now with your new keyID, return to the rclone configuration:

Option account.
Account ID or Application Key ID.
Enter a value.
account> xxxxxxxxxxxxxxxxxxxxxxxx

Option key.
Application Key.
Enter a value.
key> xxxxxxxxxxxxxxxxxxxxxxxxxx

This will allow rclone to connect to the remote cloud storage. Finally, rclone will ask you a couple of questions. I just chose the defaults:

Option hard_delete.
Permanently delete files on remote removal, otherwise hide files.
Enter a boolean value (true or false). Press Enter for the default (false).
hard_delete>

Edit advanced config?
y) Yes
n) No (default)
y/n>

The last step is to confirm your remote configuration. Note that you can always go back and change it later if you want.

Configuration complete.
Options:
- type: b2
- account: xxxxxxxxxxxxxxxxxxxxxx
- key: xxxxxxxxxxxxxxxxxxxxxxxxxx
Keep this "BBBackup" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

At this point in time, quit out of the configurator for a moment.

You may have realized that we have done nothing with respect to encryption. That is because we need to add a wrapper service around our Backblaze remote to make this work (this is that there learning curve thing I mentioned earlier).

While I don’t know if this is true or not, it was recommended that you not put encrypted files in the root of your bucket. I can’t really see why it would hurt, but just in case we should put a folder in the bucket at which we can then point the encrypted remote. With Backblaze you can use the webUI or you can just use rclone. I recommend the latter since it is a good test to make sure everything is working. On the command line type:

$ rclone mkdir BBBackup:rcloneBackup/Backup

2024/01/23 14:13:25 NOTICE: B2 bucket rcloneBackup path Backup: Warning: running mkdir on a remote which can't have empty directories does nothing

To test that it worked you can look at the WebUI and click on Browse Files, or you can test it from the command line as well:

$ rclone lsf BBBackup:rcloneBackup/
Backup/

Another little annoying thing about Backblaze is that the File Browser in the webUI isn’t in real time, so if you do choose that method note that it may take several minutes for the directory (and later any files you send) to show up.

Okay, now we just have one more step. We have to create the encrypted remote, so go back into the configurator:

$ rclone config

Current remotes:

Name                 Type
====                 ====
BBBackup             b2

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Enter name for new remote.
name> crypt

Just like last time, choose a name that you will be comfortable typing on the command line. This is the main remote you will be using with rclone from here on out. Next we have to choose the storage type:

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)

...

14 / Encrypt/Decrypt a remote
   \ (crypt)
15 / Enterprise File Fabric
   \ (filefabric)
16 / FTP
   \ (ftp)
17 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
18 / Google Drive
   \ (drive)

...

Storage> crypt

You can type the number (currently 14) or just type “crypt” to choose this storage type. Next we have to point this new remote at the first one we created:

Option remote.
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a value.
remote> BBBackup:rcloneBackup/Backup

Note that it contains the name of the remote (BBBackup), the name of the bucket (rcloneBackup), and the name of the directory we created (Backup). Now for the fun part:

Option filename_encryption.
How to encrypt the filenames.
Choose a number from below, or type in your own string value.
Press Enter for the default (standard).
   / Encrypt the filenames.
 1 | See the docs for the details.
   \ (standard)
 2 / Very simple filename obfuscation.
   \ (obfuscate)
   / Don't encrypt the file names.
 3 | Adds a ".bin", or "suffix" extension only.
   \ (off)
filename_encryption>

This is the bit where you get to solve the filename problem I mentioned above. I always choose the default, which is “standard”. Next you get to encrypt the directory names as well:

Option directory_name_encryption.
Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (true).
 1 / Encrypt directory names.
   \ (true)
 2 / Don't encrypt directory names, leave them intact.
   \ (false)
directory_name_encryption>

I choose the default of “true” here as well. Look, I don’t expect to ever become the subject of an in-depth digital forensics investigation, but the less information out there the better. Should Backblaze ever get a subpoena to let someone browse through my files on their system, I want to minimize what they can find.

Finally, we have to choose a passphrase:

Option password.
Password or pass phrase for encryption.
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:

Option password2.
Password or pass phrase for salt.
Optional but recommended.
Should be different to the previous password.
Choose an alternative below. Press Enter for the default (n).
y) Yes, type in my own password
g) Generate random password
n) No, leave this optional password blank (default)
y/g/n>

Now, unlike your application key ID and password, these passwords you need to remember. If you lose them you will not be able to get access to your data. I did not choose a salt password, but it does appear to be recommended. Now we are almost done:

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: crypt
- remote: BBBackup:rcloneBackup/Backup
- password: *** ENCRYPTED ***
Keep this "cryptMedia" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Now your remote is ready to use. Note that when using a remote with encrypted files and directories do not use the Backblaze webUI to create folders underneath your root or rclone won’t recognize them.

I bring this up because there is one frustrating thing with rclone. If I want to copy a directory to the cloud storage remote it copies the contents of the directory and not the directory itself. For example, if I type on the command line:

$ cp -r Music /Media

it will create a “Music” directory under the “Media” directory. But if I type:

$ rclone copy Music crypt:Media

it will copy the contents of the Music directory into the root of the Media directory. To get the outcome I want I need to run:

$ rclone mkdir crypt:Media/Music

$ rclone copy Music crypt:Media/Music

Make sense?

While rclone has a lot of commands, the ones I have used are “mkdir” and “rmdir” (just like on a regular command line) and “copy” and “sync”. I use “copy” for the initial transfer and then “sync” for subsequent updates.

Now all I have to do for cloud synchronization is set up a crontab to run these commands on occasion (I set mine up for once a day).
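
As a sketch, a nightly crontab entry might look like the following. The time, paths and log location are all placeholders (use “which rclone” to see where Homebrew put the binary):

# run the sync every night at 2:30 am and keep a log of what happened
30 2 * * * /usr/local/bin/rclone sync /Volumes/Data/Music crypt:Media/Music --log-file /Users/me/rclone.log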

I can check that the encryption is working by using the Backblaze webUI. First I see the folder I created to hold my encrypted files:

But the directories in that folder have names that sound like I’m trying to summon Cthulhu:

As you can see from this graph, I was real eager to upload stuff when I got this working:

and on the first day I sent up nearly 400GB of files. Backblaze B2 pricing is currently $6/TB/month, and this seems about right:

I have since doubled my storage so it should run about 20 cents a day. Note that downloading your data is free up to three times the amount of data stored. In other words, you could download all of the data you have in B2 three times in a given month and not incur fees. Since I am using this simply for catastrophic data recovery I shouldn’t have to worry about egress fees.

I am absolutely delighted to have this working and extremely impressed with rclone. For my needs open source once again outshines commercial offerings. And remember if you have other preferences for cloud storage providers you have a large range of choices, and the installation should be similar to the one I did here.

Nextcloud News

I think the title of this post is a little misleading, as I don’t have any news about Nextcloud. Instead I want to talk about the News App on the Nextcloud platform, and I couldn’t think of a better title.

I rely heavily on the Nextcloud News App to keep up with what is going on with the world. News provides similar functionality to the now defunct Google Reader, but with the usual privacy bonuses you expect from Nextcloud.

Back before social networks like Facebook and Twitter were the norm, people used to communicate through blogs. Blogs provide similar functionality: people can write short or long form posts that will get published on a website and can include media such as pictures, and other people can comment and share them. Even now when I see an incredibly long thread on Twitter I just wish the author would have put it on a blog somewhere.

Blogs are great, since each one can be individually hosted without requiring a central authority to manage it all. My friend Ben got me started on my first blog (this one), which in the beginning was hosted using a program called Movable Type. When their licensing became problematic, most of us switched to WordPress, and a tremendous amount of the Web runs on WordPress even now.

Now the problem was that the frequency with which people posted to their blogs varied. Some might post once a week, and others several times an hour. Unless you wanted to go and manually refresh their pages, it was difficult to keep up.

Enter Really Simple Syndication (RSS).

RSS is, as the name implies, an easy way to summarize content on a website. Sites that support RSS craft a generic XML document that reflects titles, descriptions, links, etc. to content on the site. The page is referred to as a “feed” and RSS “readers” can aggregate the various feeds together so that a person can follow the changes on websites that interest them.

Google Reader was a very useful feed reader that was extremely popular, and it in turn increased the popularity of blogs. I put some of the blame for the rise of the privacy nightmare that is modern social networks on Google’s decision to kill Reader, as it made individual blogs less relevant.

Now in Google’s defense they would say just use some other service. In my case I switched to Feedly, an adequate Reader replacement. The process was made easier by the fact that most feed readers support a way to export your configuration in the Outline Processor Markup Language (OPML) format. I was able to export my Reader feeds and import them into Feedly.

Feedly was free, and as they say if you aren’t paying for the product you are the product. I noticed that next to my various feed articles Feedly would display a count, which I assume reflected the number of Feedly users that were interested in or who had read that article. Then it dawned on me that Feedly could gather useful information on what people were interested in, just like Facebook, and I also assume, if they chose, they could monetize that information. Since I had a Feedly account to manage my feeds, they could track my individual interests as well.

While Feedly never gave me any reason to assign nefarious intentions to them, as a privacy advocate I wanted more control over sharing my interests, so I looked for a solution. As a Nextcloud fan I looked for an appropriate app, and found one in News.

News has been around pretty much since Nextcloud started, but I rarely hear anyone talking about its greatness (hence this post). Like most things Nextcloud it is simple to install. If you are an admin, just click on your icon in the upper right corner and select “+ Apps”. Then click on “Featured apps” in the sidebar and you should be able to enable the “News” app.

That’s it. Now in order to update your feeds you need to be using the System Cron in Nextcloud, and instructions can be found in the documentation.
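
For reference, switching to the System Cron means selecting “Cron” under the background jobs section of the administration settings and adding a crontab entry for your web server user. A typical entry (the user and install path here are assumptions and will vary with your setup) looks like:

$ sudo crontab -u www-data -e

*/5 * * * * php -f /var/www/nextcloud/cron.php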

Once you have News installed, the next challenge is to find interesting feeds to which you can subscribe. The news app will suggest several, but you can also find more on your own.

Nextcloud RSS Suggestions

It used to be pretty easy to find the feed URL. You would just look for the RSS icon and click on it for the link:

RSS Icon

But, again, when Reader died so did a lot of the interest in RSS, and finding feed URLs became more difficult. I have links to feeds at the very bottom of the right sidebar of this blog, but you’d have to scroll down quite a way to find them.

But for WordPress sites, like this one, you just add “/feed” to the site URL, such as:

https://www.adventuresinoss.com/feed

There are also some browser plugins that are supposed to help identify RSS feed links, but I haven’t used any. You can also “view source” on a website of interest and search for “rss”, and that may help out as well.
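
If you’d rather not read through the page source by hand, most sites advertise their feed in a <link> tag in the page header, so a quick one-liner (just a sketch) will usually surface it:

$ curl -s https://www.adventuresinoss.com/ | grep -io '<link[^>]*rss[^>]*>'
# look at the href in the matching tag; on WordPress sites it typically ends in /feed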

My main use of the News App is to keep up with news, and I follow four main news sites. I like the BBC for an international take on news, CNN for a domestic take, Slashdot for tech news and WRAL for local news.

Desktop Version of News App

Just for reference, the feed links are:

BBC: http://newsrss.bbc.co.uk/rss/newsonline_uk_edition/front_page/rss.xml

CNN: http://rss.cnn.com/rss/cnn_topstories.rss

Slashdot: http://rss.slashdot.org/slashdot/slashdotMain

WRAL: http://www.wral.com/news/rss/48/

This wouldn’t be as useful if you couldn’t access it on a mobile device. Of course, you can access it via a web browser, but there exist a number of phone apps for accessing your feeds in a native app.

Now to my knowledge Nextcloud the company doesn’t produce a News mobile app, so the available apps are provided by third parties. I put all of my personal information into Nextcloud, and since I’m paranoid I didn’t want to put my access credentials into those apps but I wanted the convenience of being able to read news anywhere I had a network connection. So I created a special “news” user just for News. You probably don’t need to do that but I wanted to plant the suggestion for those who think about such things.

On my iPhone I’ve been happy with CloudNews.

iPhone Version of CloudNews App

It sometimes gets out of sync and I end up having to read everything in the browser and re-sync in CloudNews, but for the most part it’s fine.

For Android the best app I’ve used is by David Luhmer. It’s available for a small fee in the Play Store and for free on F-Droid.

Like all useful software, you don’t realize how much you depend on it until it is gone, and in the few instances I’ve had problems with News I get very anxious as I don’t know what’s going on in the world. Luckily this has been rare, and I check my news feed many times during the day to the point that I probably have a personal problem. The mobile apps mean I can read news when I’m in line at the grocery store or waiting for an appointment. And the best part is that I know my interests are kept private as I control the data.

If you are interested, I sporadically update a number of blogs, and I aggregate them here. In a somewhat ironic twist, I can’t find a feed link for the “planet” page, so you’d need to add the individual blog feeds to your reader.

Review: AT&T Cell Booster

Back in the mid-2000s I was a huge Apple fanboy, and I really, really, really wanted an iPhone. At that time it was only available from AT&T, and unfortunately the wireless coverage on that network is not very good where I live.

In 2008 a couple of things happened. Apple introduced the iPhone 3G, and AT&T introduced the 3G Microcell.

The 3G Microcell, technically a “femtocell”, is a small device that you can plug into your home network; it leverages your Internet connection to augment wireless coverage in a small area (i.e. your house). With that I could get an iPhone and it would work at my house.

In February 3G service in the US will cease, and I thought I was going to have to do without a femtocell. Most modern phones support calling over WiFi now, but it just isn’t the same. For example, if I am trying to send an SMS and there is any signal at all from AT&T, my phone will try to use that network instead of the much stronger wireless network in my house. If I disable mobile access altogether, the SMS will send fine but then I can’t get phone calls reliably. (sigh)

I thought I was going to have to just deal with it when AT&T sent me a notice that they were going to replace my 3G Microcell with a new product called a Cell Booster.

Now a lot of people criticize AT&T for a number of good reasons, but lately they’ve really been hitting the whole “customer service” thing out of the park. The Cell Booster currently shows out of stock on their website with a cost of $229, but they sent me one for free.

AT&T Cell Booster Box

In a related story my mother-in-law, who is on our family plan, was using an older Pixel that was going to stop working with the end of 3G service (it was an LTE phone but doesn’t support “HD Voice”, which is required to make calls). So AT&T sent us a replacement Samsung S9. Pretty cool.

In any case the Cell Booster installation went pretty smoothly. I simply unplugged the existing 3G Microcell and plugged in the new device. The box included the Cell Booster, a GPS sensor, a power supply and an Ethernet cable. There were no other instructions outside of a QR code that takes you to the appropriate app store to download the application needed to set it up.

The Booster requires a GPS lock, and they include a little “puck” connected to a fairly long wire that is supposed to allow one to get a signal even when the device is some distance away from a clear line of sight, such as away from windows. I just plugged it in to the back and left it next to the unit and it eventually got a signal, but it is also pretty much beneath a skylight.

In order to provision the Cell Booster you have to launch the mobile app and fill out a few pages of forms, which includes the serial number of the device. It has five lights on the front, and while the power light came on immediately, it did take some time for the other lights, including “Internet”, to come up. I assumed the Internet light would turn on as soon as an IP address was assigned, but that wasn’t the case. It took nearly half an hour for the first four lights to come on, and then another 15 minutes or so for the final “4G LTE” light to illuminate and the unit to start working. Almost immediately I got an SMS from AT&T saying the unit was active.

AT&T Cell Booster Lights

Speaking of IP addresses, I don’t like putting random devices on my LAN, so I stuck this on my public network, which only has Internet access (no LAN access). I ran nmap against it and there don’t appear to be any ports open. A traffic capture shows traffic between the Cell Booster and an address on the 12.0.0.0/8 network owned by AT&T.
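
For anyone wanting to do the same, something along these lines (the address is a placeholder for whatever your router assigned to the Booster) is enough to check for open ports:

$ sudo nmap -sS -Pn 192.168.2.50
# -Pn skips host discovery in case the device doesn't answer pings
# -sS does a TCP SYN scan of the 1,000 most common ports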

I do like the fact that, unlike the 3G Microcell, you do not need to specify the phone number of the handsets that can use the Cell Booster. It claims to support up to 8 at a time, and while I haven’t had anyone over who is both on the AT&T network and also not on my plan, I’m assuming it will work for them as well (I used to have to manually add phone numbers of my guests to allow them to use the 3G device).

The Cell Booster is a rebranded Nokia SS2FII. One could probably buy one outside of AT&T but without being able to provision it I doubt it would work.

So far we’ve been real happy with the Cell Booster. Calls and SMS messages work just fine, if not better than before (I have no objective way to measure it, though, so it might just be bias). If you get one, just remember that it takes a really long time to start up that first time, but after you have all five lights you should be able to forget it’s there.

Review: ProtonMail

I love e-mail. I know for many it is a bane, which has resulted in the rise of “inbox zero” and even the “#noemail” movement, but for me it is a great way to communicate.

I just went and looked, and the oldest e-mail currently in my system is from July of 1996. I used e-mail for over a decade before then, on school Unix systems and on BBS’s, but it wasn’t until the rise of IMAP in the 1990s that I was able to easily keep and move my messages from provider to provider.

That message from 1996 was off of my employer’s system. I didn’t have my own domain until two years later, in 1998, and I believe my friend Ben was the one to host my e-mail at the time.

When I started maintaining OpenNMS in 2002 I had a server at Rackspace that I was able to configure for mail. I believe the SMTP server was postfix but I can’t remember what the IMAP server was. I want to say it was dovecot but that really wasn’t available until later in 2002, so maybe UW IMAP? Cyrus was pretty big at the time but renowned for being difficult to set up.

In any case I was always a little concerned about the security of my mail messages. Back then disks were not encrypted and even the mail transport was done in the clear (this was before SSL became ubiquitous), so when OpenNMS grew to the point where we had our own server room, I set up a server for “vanity domains” that anyone in the company could use to host their e-mail and websites, etc. At least I knew the disks were behind a locked door, and now that Ben worked with us he could continue to maintain the mail server, too. (grin)

Back then I tried to get my friends to use encrypted e-mail. Pretty Good Privacy (PGP) was available since the early 1990s, and MIT used to host plugins for Outlook, which at the time was the default e-mail client for most people. But many of them, including the technically minded, didn’t want to be bothered with setting up keys, etc. It wasn’t until later when open source really took off and mail clients like Thunderbird arrived (with the Enigmail plug-in) that encrypted e-mail became more common among my friends.

In 2019 the decision was made to sell the OpenNMS Group, and since I would no longer have control over the company (and its assets) I decided I needed to move my personal domains somewhere else. I really didn’t relish the idea of running my own mail server. Spam management was always a problem, and there were a number of new protocols to help secure e-mail that were kind of a pain to set up.

The default mail hosting option for most people is GMail. Now part of Google Workspace, for a nominal fee you can have Google host your mail, and get some added services as well.

I wasn’t happy with the thought of Google having access to my e-mail, so I looked for options. To me the best one was ProtonMail.

The servers for ProtonMail are hosted in Switzerland, a neutral country not beholden to either US or EU laws. They are privacy focused, with everything stored encrypted at rest and, when possible, encrypted in transport.

They have a free tier option that I used to try out the system. Now, as an “old”, I prefer desktop mail clients. I find them easiest to use and I can also bring all of my mail into one location, and I can move messages from one provider to another. The default way to access ProtonMail is through a web client, like GMail. Unlike GMail, ProtonMail doesn’t offer a way to directly access their services through SMTP or IMAP. Instead you have to install a piece of software called the ProtonMail Bridge that will create an encrypted tunnel between your desktop computer and their servers. You can then configure your desktop mail client to connect to “localhost” on a particular port and it will act as if it were directly connected to the remote mail server.
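
To give a feel for it, the settings you end up entering in the desktop client look something like the following. These are the Bridge’s typical defaults as I understand them; the Bridge itself shows you the exact host, port and generated password to use, so go by what it displays:

IMAP: 127.0.0.1, port 1143, STARTTLS
SMTP: 127.0.0.1, port 1025, STARTTLS
Username: your ProtonMail address
Password: the per-account password the Bridge generates (not your ProtonMail password)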

In my trial there were two shortcomings that immediately impacted me. As a mail power user, I use a lot of nested folders. ProtonMail does not allow you to nest folders. Second, I share some accounts with my spouse (i.e. we have a single Paypal account) and previously I was able to alias e-mail addresses to send to both of our user accounts. ProtonMail does not allow this.

For the latter I think it has to do with the fact that each mail address requires a separate key and their system must not make it easy to use two keys or to share a key. I’m not sure what the issue is with nested folders.

In any case, this wasn’t a huge deal. To overcome the nested folder issue I just added a prefix, i.e. “CORR” for “Correspondence” and “VND” for “Vendor”, to each mailbox, and then you can sort on name. And while we share a few accounts we don’t use them enough that we couldn’t just assign it to a particular user.


UPDATE: It turns out it is now possible to have nested folders, although it doesn’t quite work the way I would expect.

Say I want a folder called “Correspondence” and I want sub-folders for each of the people with whom I exchange e-mail. I tried the following:

So I have a folder named something like “CORR-Bill Gates”, but I’d rather have that nested under a folder entitled “Correspondence”. In my desktop mail client, if I create a folder called “Correspondence” and then drag the “CORR-Bill Gates” folder into it, I get a new folder titled “Correspondence/CORR-Bill Gates” which is not what I want.

However, I can log into the ProtonMail webUI and next to folders there is a little “+” sign.

Add Folder Menu Item
If I click on that I get a dialog that lets me add new folders, as well as to add them to a parent folder.

Add Folder Dialog Box

If I create a “Correspondence” folder with no parent via the webUI and then a “Bill Gates” folder, I can parent the “Bill Gates” folder to “Correspondence” and then the folders will show up and behave as I expect in my desktop e-mail client. Note that you can only nest two levels deep. In other words if I wanted a folder structure like:

Bills -> Taxes -> Federal -> 2021

It would fail to create, but

Bills -> Taxes -> 2021-Federal

will work.


After I was satisfied with ProtonMail, I ended up buying the “Visionary” package. I pay for it in two-year chunks and it runs US$20/month. This gives me ten domains and six users, with up to 50 e-mail aliases.

Domain setup was a breeze. Assuming you have access to your domain registrar (I’m a big fan of Namecheap), all you need to do is follow the little “wizard” that will step you through the DNS entries you need to make to point your domain to ProtonMail’s servers as well as to configure SPF, DKIM and DMARC. Allowing for the DNS to update, it can be done in a few minutes or it may take up to an hour.
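
To give a rough idea of what the wizard has you add (illustrative values only; the DKIM CNAME targets and the domain verification record are unique to your domain and are displayed in the wizard), the records look something like:

example.com.                         TXT    "v=spf1 include:_spf.protonmail.ch ~all"
_dmarc.example.com.                  TXT    "v=DMARC1; p=none"
protonmail._domainkey.example.com.   CNAME  (per-domain value from the wizard)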

I thought there would be a big issue with the 50 alias limit, as I set up separate e-mail addresses for every vendor I use. But it turns out that you only need an alias if you want to send e-mail from that address. You can set up a “catch all” address that will take any incoming e-mail that doesn’t expressly match an alias and send it to a particular user. In my case I set up a specific “catchall@” address but it is not required.

You can also set up filters pretty easily. Here is an example of sending all e-mail sent to my “catchall” address to the “Catch All” folder.

require ["include", "environment", "variables", "relational", "comparator-i;ascii-numeric", "spamtest"];
require ["fileinto", "imap4flags"];

# Generated: Do not run this script on spam messages
if allof (environment :matches "vnd.proton.spam-threshold" "*", spamtest :value "ge" :comparator "i;ascii-numeric" "${1}") {
return;
}


/**
* @type and
* @comparator matches
*/
if allof (address :all :comparator "i;unicode-casemap" :matches ["Delivered-To"] "catchall@example.com") {
fileinto "Catch All";
}

I haven’t had the need to do anything more complicated but there are a number of examples you can build on. I had a vendor that kept sending me e-mail even though I had unsubscribed so I set up this filter:

require "reject";


if anyof (address :all :comparator "i;unicode-casemap" :is "From" ["noreply@petproconnect.com"]) {
reject "Please Delete My Account";
}

and, voilà, no more e-mail. I’ve also been happy with the ProtonMail spam detection. While it isn’t perfect it works well enough that I don’t have to deal with spam on a daily basis.

I’m up to five users and eight domains, so the Visionary plan is getting a little resource constrained, but I don’t see myself needing much more in the near future. Since I send a lot of e-mail to those other four users, I love the fact that our correspondence is automatically encrypted since all of the traffic stays on the ProtonMail servers.

As an added bonus, much of the ProtonMail software, including the iOS and Android clients, is available as open source.

While I’m very satisfied with ProtonMail, there have been a couple of negatives. As a high profile pro-privacy service it has been the target of a number of DDOS attacks. I have never experienced this problem but as these kinds of attacks get more sophisticated and more powerful, it is always a possibility. Proton has done a great job at mitigating possible impact and the last big attack was back in 2018.

Another issue is that since ProtonMail is in Switzerland, they are not above Swiss law. In a high profile case a French dissident who used ProtonMail was tracked down via their IP address. Under Swiss law a service provider can be compelled to turn over such information if certain conditions are met. To make this more difficult, my ProtonMail subscription includes access to ProtonVPN, an easy to use VPN client that can be used to obfuscate a source IP, even from Proton.

They are also launching a number of services to better compete with GSuite, such as a calendar and ProtonDrive storage. I haven’t started using those yet but I may in the future.

In summary, if you are either tired of hosting your own mail or desire a more secure e-mail solution, I can recommend ProtonMail. I’ve been using it for a little over two years and expect to be using it for years to come.

A Low Bandwidth Camera Solution

My neighbor recently asked me for advice on security cameras. Lately when anyone asks me for tech recommendations, I just send them to The Wirecutter. However, in this case their suggestions won’t work because every option they recommend requires decent Internet access.

I live on a 21 acre farm 10 miles from the nearest gas station. I love where I live but it does suffer from a lack of Internet access options. Basically, there is satellite, which is slow, expensive and saddled with high latency, or CenturyLink DSL. I have the latter and get to bask in 10 Mbps down and about 750 Kbps up.

Envy me.

Unfortunately, with limited upstream all of The Wirecutter’s options are out. I found a bandwidth calculator that estimates a 1 megapixel camera encoding video using H.264 at 24 fps in low quality would still require nearly 2 Mbps and over 5 Mbps for high quality. Just not gonna happen with a 750 Kbps circuit. In addition, I have issues sending video to some third party server. Sure, it is easy but I’m not comfortable with it.
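
The estimate passes a back-of-the-envelope check. Assuming a rule of thumb of roughly 0.1 bits per pixel per frame for low-quality H.264 (that factor is my assumption, not the calculator’s):

$ echo $(( 1280 * 720 * 24 / 10 ))
2211840
# ~0.9 megapixels x 24 frames/sec x ~0.1 bits per pixel ≈ 2.2 Mbps, roughly triple my upstream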

I get around this by using an application called Surveillance Station that is included on my Synology DS415+. Surveillance Station supports a huge number of camera manufacturers, and all of the information is stored locally, so there is no need to send anything to “the cloud”. There is also a mobile application called DS-cam that lets you access your live cameras and recordings remotely. Due to the aforementioned bandwidth limitations, it isn’t a great experience on DSL, but it can be useful. I use it, for example, to see if a package I’m expecting has been delivered.

DS-Cam Camera App

[DS-Cam showing the current view of my driveway. Note the recording underneath the main window where you can see the red truck of the HVAC repair people leaving]

Surveillance Station is not free software, and you only get two cameras included with the application. If you want more there is a pretty hefty license fee. Still, it was useful enough to me that I paid it in order to have two more cameras on my system (for a total of four).

I have the cameras set to record on motion, and it will store up to 10GB of video, per camera, on the Synology. For cameras that stay inside I’m partial to D-Link devices, but for outdoor cameras I use Wansview mainly due to price. Since these types of devices have been known to be easily hackable, they are only accessible on my LAN (the “LAN of things”) and as an added measure I set up firewall rules to block them from accessing the Internet unless I expressly allow it (mainly for software updates).
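
The exact rules depend on the router, but on anything Linux-based the idea is a couple of iptables entries along these lines (the camera address and WAN interface name are placeholders):

$ iptables -I FORWARD -s 192.168.1.50 -o ppp0 -j DROP
# drop anything the camera at 192.168.1.50 tries to send out the WAN interface;
# remove the rule (or insert an ACCEPT ahead of it) temporarily when a firmware update is needed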

To access Surveillance Station remotely, you can map the port on the Synology to an external port on your router and the communication can be encrypted using SSL. No matter how many cameras you have you only need to open the one port.

The main thing that prevented me from recommending my solution to my neighbor is that the DS415+ loaded with four drives was not inexpensive. But then it dawned on me that Synology has a number of smaller products that still support Surveillance Station. He could get one of those plus a camera like the Wansview for a little more than one of the cameras recommended by The Wirecutter.

The bargain basement choice would be the Synology DS118. It costs less than $200, though it still requires a hard drive. I use WD Red drives, which run around $50 for 1TB and $100 for 4TB. Throw in a $50 camera and you are looking at about $300 for a one camera solution.

However, if you are going to get a Synology I would strongly recommend at least a 2-bay device, like the DS218. It’s about $70 more than the DS118 and you also would need to get another hard drive, but now you will have a Network Attached Storage (NAS) solution in addition to security cameras. I’ve been extremely happy with my DS415+ and I use it to centralize all of my music, video and other data across all my devices. With two drives you can suffer the loss of one of them and still protect your data.

I won’t go into all of the features the Synology offers, but I’m happy with my purchase, and I use only a few of them.

It’s a shame that there isn’t an easy camera option that doesn’t involve sending your data off to a third party. Not only does that solution not work for a large number of people, you can never be certain what the camera vendor is going to do with your video. This solution, while not cheap, does add the usefulness of a NAS with the value of security cameras, and is worth considering if you need such things.

Meeting Owl

One of the cool things I get to do working at OpenNMS is to visit customer sites. It is always fun to visit our clients and to work with them to get the most out of the application.

But over the last year I’ve seen a decline in requests for on-site work. This is odd because general interest in OpenNMS is way up, and it finally dawned on me why – fewer and fewer people work in an office.

For example, we work with a large bank in Chicago. However, their monitoring guy moved to Seattle. Rather than lose a great employee, they let him work from home. When I went out for a few days of consulting, we ended up finding a co-working space in which to meet.

Even within our own organization we are distributed. There is the main office in Apex, NC, our Canadian branch in Ottawa, Ontario, our IT guy in Connecticut and our team in Europe (spread out across Germany, Italy and the UK). We tend to communicate via some form of video chat, but that can be a pain if a lot of people are in one room on one end of the conference.

When I was visiting our partner in Australia, R-Group, I got to use this really cool setup they have using Polycom equipment. Video consisted of two cameras. One didn’t move and was focused on the whole room, but the other would move and zoom in on whoever was talking. The view would switch depending on the situation. It really improved the video conferencing experience.

I checked into it when I got back to the United States, and unfortunately it looked real expensive, way more than I could afford to pay. However, in my research I came across something called a Meeting Owl. We bought one for the Apex office and it worked out so well we got another one for Ottawa.

The Meeting Owl consists of a cylindrical speaker topped with a 360° camera. It attaches to any device that can accept a USB camera input. The picture displays a band across the top that shows the whole panorama, but then software “zooms” in on the person talking. The bottom of the screen will split to show up to three people (the last three people who have spoken).

It’s a great solution at a good price, but it had one problem. In the usual setup, the Owl is placed in the center of the conference table, and usually there is a monitor on one side. When the people at the table are listening to someone remote (either via their camera or another Owl), the people seated along the sides end up looking at the large monitor. This means the Owl is pretty much showing everyone’s ear.

It bothers me.

Now, the perfect solution would be to augment the Owl to project a picture as a hologram above the unit so that people could both see the remote person as well as look at the Owl’s camera at the same time.

Barring that, I decided to come up with another solution.

Looking on Amazon I found an inexpensive HDMI signal splitter. This unit will take one HDMI input and split it into four duplicate outputs. I then bought three small 1080p monitors (I wanted the resolution to match the 1080p main screen we already had) which I could then place around the Owl. I set the Owl on the splitter to give it a little height.

Meeting Owl with Three Monitors

Now when someone remote, such as Antonio, is talking, we can look at the small monitors on the table instead of the big one on the side wall. I found that three does a pretty good job of giving everyone a decent view, and if someone is presenting their screen everyone can look at the big monitor in order to make out detail.

Meeting Owl in Call

We tried it this morning and it worked out great. Just thought I’d share in case anyone else is looking for a similar solution.

2017 Europe: Three SIM card

Just a short post to praise the Three SIM card I bought in the UK several years ago.

I tend to buy unlocked phones and so when I travel I like to get a local SIM card, mainly for data. For this trip this was going to prove difficult, as I’m visiting five countries in nine days.

One thing I like about my Three SIM is that it never gets disabled. As long as I have a balance I have never had a problem, although I do travel enough that I end up using it at least once every six months or so. I am not able to top up the card on the Three website since I don’t have a UK credit card, so I simply use Mobiletopup.co.uk to get a £20 voucher from Paypal. Using that I just buy an “All in One 20” add-on which gives me 12GB of network access, 300 minutes and 3000 SMS messages – way more than I need. I turn that on before I leave the US and my phone works when I land.

What’s wonderful about it is that the plan is valid in any EU country. So far this trip I’ve used it in London, Helsinki and Tallinn, and I expect it to work in Riga and Brussels. I have yet to experience any network issues, although I have not moved far outside of major metropolitan areas.

I have no idea if Brexit will change this plan, but I sincerely hope not. So much of the technology I use in my life comes with headaches that I am grateful when things just work. Thanks Three.

Review: Copperhead OS

A few weeks ago I found an article in my news feed about a Tor phone, and it introduced me to Copperhead OS. This is an extremely hardened version of the Android Open Source Project (AOSP) designed for both security and privacy. It has become my default mobile OS so I thought I’d write about my experiences with it.

TL;DR: Copperhead OS is not for everyone. Due to its focus on security it is not easy to install any software that relies on Google Services, which is quite a bit of it. But if you are concerned with security and privacy, it offers a very stable and up to date operating system. The downside is that I am not able to totally divorce myself from Google, so I’ve taken to carrying two phones: one with Copperhead and one with stock Android for my “Googly” things. What we really need is a way to run a hypervisor on mobile device hardware. That way I could put all of my personal stuff on Copperhead and the stuff I want to share with Google in a VM.

I pride myself, to the point of being somewhat smug, on the fact that I use free software for most of my technology needs – or so I thought. My desktops, laptop, servers, router, DVR and even my weather station all use free and open source software, and I run OmniROM (an AOSP implementation) on my phone. I also “sandbox” my Google stuff – I only use Chrome for accessing Google web apps and I keep everything else separate (no sharing of my contacts and calendar, for example). So I was unpleasantly surprised at how much I relied on proprietary software for my handy (short for “hand terminal”, or what most people call a “mobile phone”, but I rarely use the “phone” features of it so that seems like a misnomer).

But first a little back story. I was sitting on the toilet playing on my mobile device (“playing on my handy” seemed a little rude here) when I came across a page that showed me all of the stuff Google was tracking about my mobile usage. It was a lot, and let’s just say any bathroom issues I was having were promptly solved. They were tracking every call and text I made, which apps I opened, as well as my location. I knew about the last one since I do play games like Ingress and Pokémon Go that track you, but the others surprised me. I was able to turn those off (supposedly) but it was still a bit shocking.

Of course, I had “opted in” to all of that when I signed in to my handy for the first time. When you allow Google to back up your device data, you allow them to record your passwords and call history.

Google Backup Terms

If you opt in to help “improve your Android experience”, you allow them to track your app usage.

Google App Terms

And most importantly, by using your Google account you allow them to install software automatically (i.e. without your explicit permission).

Google Upgrade Terms

Note that this was on a phone running OmniROM, and not stock Google, but it still looks like you have to give Google a lot of control over your handy if you want to use a Google account.

Copperhead OS is extremely focused on security, which implies the ability to audit as much software on the device as possible, as well as to control when and what gets updated. This led them to remove Google Play Services from the ROM entirely. Instead, they set up F-Droid as the trusted repository. All the software in F-Droid is open source, and in fact all of the binaries are built by the F-Droid team and not the developer. Now, of course, someone on that team could be compromised and put malicious software into the repo, but you’ve got to trust somebody or you will spend your entire life doing code reviews and compiling.

Copperhead only runs on a small subset of devices: the Nexus 6P, the Nexus 5X and the Nexus 9 WiFi edition. This is because they support secure boot which prevents malicious code from modifying the operating system. Now, I happened to have a 6P, so I figured I would try it out.

The first hurdle was understanding their terminology. On the download page they refer to a “factory” image, which I initially took to mean the original stock image from Google. What they mean is an image that you can use for a base install. If you flash your handy as often as I do, you have probably come across the process for restoring it to stock. You install the Android SDK and then download a “factory” image from Google. You then expand it (after checking the hash, of course) and run a “flash-all” script. This will replace all the data on your device, including a custom recovery like TWRP, and you’ll be ready to run Copperhead. Note that I left off some steps such as unlocking and then re-locking the bootloader, but their instructions are easy to follow.
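
If it helps to see the shape of that process, here is a rough sketch of it as a small Python wrapper around the SDK tools. Treat it as an illustration under my assumptions (adb and fastboot on the PATH, the factory image already verified and unpacked) rather than a copy of their official instructions, since the exact commands vary by device and release.

    # Rough sketch only: assumes adb and fastboot from the Android SDK are on the
    # PATH and that the downloaded "factory" image has been verified and unpacked
    # into the current directory.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("adb", "reboot", "bootloader")     # drop the handy into fastboot mode
    run("fastboot", "flashing", "unlock")  # unlock the bootloader (this wipes the device)
    run("./flash-all.sh")                  # the script shipped inside the factory image
    # re-locking the bootloader afterward is a separate step covered in their instructions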

The first thing you notice is that the usual “set up your Google account” steps are missing because, of course, you can’t use your Google account on Copperhead. Outside of the missing Google Apps, the device has a very stock Android feel, including the immovable search bar and the default desktop background.

This is when reality began to set in as I started to realize exactly how much proprietary software I used to make my handy useful.

The first app I needed to install was the Nova Launcher. This is a great launcher replacement that gives you a tremendous amount of control over the desktop. I looked around F-Droid for replacement launchers, and they either didn’t do what I wanted them to do or hadn’t been updated in a couple of years.

Then it dawned on me – why don’t I just copy over the apk?

When you install a package from Google Play, it usually gets copied into the /data/app directory. Using the adb shell and adb pull commands from the SDK, I was able to grab the Nova Launcher software off of my Nexus 6 (which was running OmniROM) and copy it over to the 6P. Using the very awesome Amaze file explorer, you just navigate to the apk and open it. Now, of course, since this file didn’t come from a trusted repository you have to go under Security and allow “unknown sources” (and be sure to turn that back off when you are done). I was very happy to see that it runs just fine without Google Services, and I was able to get rid of the search bar and make other tweaks.
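
For anyone who would rather script that copy than poke around with a file explorer, it boils down to a few adb commands. Here is a minimal sketch under some assumptions: USB debugging is enabled on both handys, the serials are whatever adb devices reports, and the package name shown is the one I believe Nova Launcher uses.

    # Sketch of copying an apk between two devices over adb. The serials and the
    # package name are illustrative assumptions, not something Copperhead documents.
    import subprocess

    def adb(serial, *args):
        result = subprocess.run(["adb", "-s", serial, *args],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    old_phone = "NEXUS6_SERIAL"   # taken from `adb devices`
    new_phone = "NEXUS6P_SERIAL"

    # Ask the old phone where the apk lives; pm path prints "package:/data/app/.../base.apk".
    apk_path = adb(old_phone, "shell", "pm", "path", "com.teslacoilsw.launcher")
    apk_path = apk_path.replace("package:", "")

    adb(old_phone, "pull", apk_path, "nova.apk")   # grab the apk off the old handy
    adb(new_phone, "install", "nova.apk")          # sideload it onto the Copperhead device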

Then I focused on installing the open source apps I do use, such as K-9 Mail and Wikipedia, both of which exist in F-Droid. I had been using the MX Player app for watching videos, pretty much out of habit, but it was easy to replace with the VLC app from F-Droid.

I really like the Poweramp music player, with the exception that it periodically checks in with the Play store to make sure your license is valid. Unfortunately, this has happened to me twice when I was in an airplane over the ocean, and the lack of network access meant I couldn’t listen to music. I was eager to replace it, but the default Music app that ships with Copperhead is kind of lame. It does a good job playing music, but the interface is hard to navigate. The “black on gray” color scheme is very hard to read.

Default Music Player Screenshot

So I replaced it with the entirely capable Timber app from F-Droid.

Timber Music Player Screenshot

Another thing I needed to replace was Feedly. I’m old, so I still get most of my news directly from websites via RSS feeds and not social media. I used to use Google Reader, and when that went away I switched to Feedly. It worked fine, but I bristled at the fact that it tracked my reading habits. Next to each article would be a number representing how many people had clicked on it to read it, so at a minimum they were tracking that. I was investigating a couple of open source replacements when I was pleasantly surprised to discover that Nextcloud has a built-in News service. We have had a really good experience with Nextcloud over the last couple of months, and it was pretty easy to add the News service to our instance. Using OPML I was able to export my numerous feeds from Feedly into Nextcloud, and that was probably the easiest part of this transition. On the handy I use an F-Droid app called OCReader, which works well.
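
Before importing, it is worth peeking inside the OPML export to make sure every feed actually made it out of Feedly. A few lines of Python will list them; the file name is just an example, and this assumes the usual OPML layout where each feed is an outline element with an xmlUrl attribute.

    # List the feeds in an OPML export (file name is an example).
    import xml.etree.ElementTree as ET

    tree = ET.parse("feedly.opml")
    for outline in tree.iter("outline"):
        url = outline.get("xmlUrl")        # folder entries have no xmlUrl attribute
        if url:
            print(outline.get("title") or outline.get("text"), url)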

There were still some things I was missing. For example, when I travel overseas I keep in touch with my bride using Skype (which is way cheaper than using the phone) so I wanted to have Skype on this device. It turns out that it is in the Amazon App Store, so I installed that and was able to get things like Skype and the eBay and IMDB apps (as well as Bridge Baron, which I like a lot). Note that you still have to allow unknown sources since the Amazon repository is not trusted, and remember to set it back when you are done.

This still left a handful of apps I wanted, and based on my success with the Nova Launcher I just tried to install them from apks. Surprisingly, most of them worked, although a couple would complain about Google Services being missing. I think background notifications are the main reason apps use Google Services, so if you can live without those you can get by just fine.

One app that wouldn’t work was Signal, which was very surprising since they seem to be focused on privacy and security. Instead, the default messenger is an app called Silence, which is a Signal fork. It works well, but it isn’t in the Play store (at least in the US due to a silly trademark issue that hasn’t been fixed) and no one I know uses it so it kind of defeats the purpose of secure messaging. Luckily, I discovered that the Copperhead gang has published their own fork called Noise, which removes the Googly bits but still works with the rest of the Signal infrastructure, so I have been using it as my default client with no issues. Note that it is in the F-Droid app but doesn’t show up on the F-Droid website for some reason.

For other apps such as Google+ and Yelp, I rediscovered the world wide web. Yes, browsers still work, and the web pages for these sites are pretty close to matching the functionality of the native app.

There are still some things for which there is no open source replacement: Google Maps, for example. Yes, I know, by using Google Maps I am sharing my location with Google, but the traffic data is just so good that it has saved literally hours of my life by directing me around accidents and other traffic jams. OpenStreetMap is okay and works great offline, but it doesn’t know where the OpenNMS office is located (I need to fix that) and without traffic it is a lot less useful. There is also the fact that I do like to play games like Ingress and Pokémon Go, and I have some movies and other content on Google servers.

I also lost Android Wear. I really enjoy my LG Urbane but it won’t work without Google Services. I have been playing with AsteroidOS which shows a lot of promise, but it isn’t quite there yet.

Note that Compass by OpenNMS is not yet available in F-Droid. We use Apache Cordova to build it and that is not (yet) supported by the F-Droid team. We do post the apks on Github.

To deal with my desire for privacy and my desire to use some Google software, I decided to carry two phones.

On the Nexus 6P I run Copperhead and it has all of my personal stuff on it: calendar, contacts, e-mail, etc. On the Nexus 6 I am running stock Google with all my Googly bits, including maps. I still lock down what I share with Google, but I feel a lot more confident that I won’t accidentally sync the rest of my life with them.

It sucks carrying two phones. With the processors and memory in modern devices I’m surprised that no one has come up with a hypervisor technology that would let me run Copperhead as my base OS and stock Google in a VM. Well, not really surprised, since there isn’t a commercial motivation for it. Apple doesn’t have a reason to let other software on its products, and Google would be shooting itself in the foot since its business model involves collecting data on everything. I do think it will happen, however. The use case involves corporations, especially those in privacy sensitive fields such as health care. Wouldn’t it be cool to have a locked-down “business” VM that is separate from a “personal” VM with your Facebook, games and private stuff on it?

As for the Copperhead experience itself, it is pretty solid. I had a couple of issues where DNS would stop working, but those seem to have been resolved, and lately it has been rock solid except for one instance when I lost cellular data. I tried resetting the APN, which didn’t help, but after a reboot it started working again. Odd. Overall it is probably the most stable ROM I’ve run, but part of that could be due to how vanilla it is.

Copperhead is mainly concerned with security, not with extending the Android experience. For example, one feature I love about the OmniROM version of the Alarm app is the ability to set an action on “shake”: I set it to “shake to dismiss” so when my alarm goes off I can just reach over, shake the phone, and go back to bed. That is an OmniROM addition rather than part of AOSP, and thus it is missing from Copperhead. The upside is that Copperhead is extremely fast with updates, especially security updates.

The biggest shortcoming is the keyboard. I’ve grown used to “gesture” typing using the Google keyboard, but that is missing from the AOSP keyboard and no free third party keyboards have it either. I asked the Copperhead guys about it and got this reply:

If the open-source community makes a better keyboard than AOSP Keyboard, we’ll switch to it. Right now it’s still the best option. There’s no choice available with gesture typing, let alone parity with the usability of the built-in keyboard. Copperhead isn’t going to be developing a keyboard. It’s totally out of scope for the project.

So, not a show stopper, but if anyone is looking to make a name for themselves in the AOSP world, a new keyboard would be welcome.

To further increase security, there is a suggestion to create a strong two-factor authentication mechanism. The 6P has a fingerprint sensor, but I don’t use it because I don’t believe a fingerprint alone is a good way to secure your device (it is pretty easy to coerce you into unlocking your handy if all someone has to do is hold you down and force your finger onto the sensor). However, requiring both a fingerprint and a PIN would be really secure, as the best security combines something you are (a fingerprint) with something you know (a PIN).

So here was my desktop on OmniROM:

Old Phone Desktop

and here is my current desktop:

New Phone Desktop

Not much different, and while I’ve given up a few things I’ve also discovered OCReader and Nextcloud News, plus the Amaze file manager.

But the biggest thing I’ve gained is peace of mind. I want to point out that it is possible to run other ROMs, such as OmniROM, without Google Services, but they aren’t quite as focused on security as Copperhead. Many thanks to the Copperhead team for doing this, and if you don’t want to go through all the work I did, you can buy a supported device directly from them.

Review: X-Arcade Gaming Cabinet

Last year I wanted to do something special for the team to commemorate the release of OpenNMS Meridian.

Since all the cool kids in Silicon Valley have access to a classic arcade machine, I decided to buy one for the office. I did a lot of research and I settled on one from X-Arcade.

X-Arcade Machine

The main reasons were that it looked well made and it included all of my favorite games, such as Pac-Man, Galaga and Tempest.

X-Arcade Games

The final piece that sold me on it was the ability to add your own graphics. I went to Jessica, our Graphic Designer, and she put together this wonderful graphic done in the classic eight-bit “platformer” style and featuring all the employees.

X-Arcade Graphic

Ulf took the role of Donkey Kong, and here is the picture meant to represent me:

X-Arcade Tarus

The “Tank Stick” controls are solid and responsive, although I did end up adding a spinner since none of the controls really worked for Tempest.

When you order one of these things, they stress that you need to make sure it arrives safely. Seriously, like four times, in big bold letters, they state you should check the machine on delivery.

I was going to be out of town when it arrived, so I made sure to tell the person checking in the delivery to make sure it was okay (i.e. take it out of the box).

They didn’t (the box looked “fine”) and so we ended up with this:

X-Arcade Cracked Top

(sigh)

Outside of that, everything arrived in working order. You get a small Dell desktop running Windows with the software pre-installed, but you also get CDs with all the games that are included with the system. It’s a bit of a pain to set up since the instructions are somewhat vague, but after about an hour or so I had it up and running.

Anyway, it is really fun to play. It supports MAME games, Sega games, Atari 2600 games and even that short-lived laserdisc franchise “Dragon’s Lair”. You can copy other games to the system if you have them, although scrolling through the menu can get a bit tiring if you have a long list of titles.

We had an issue with the CRT about 11 months after buying the system. I came back from a business trip to find the thing dark (it should never go dark; even if the computer hangs you’ll still see a “no signal” graphic on the monitor). It turns out the CRT had died, but they sent us a replacement under warranty, hassle free. It took about an hour to replace (those instructions were pretty detailed) and it worked better than ever afterward.

This motivated me to consider fixing the top. When we had the system apart to replace the monitor, I noticed that the top was a) the only thing broken and b) held on with eight screws. I contacted them about a replacement piece and to my surprise it arrived two days later – no charge.

The only issue I have remaining with the system is the fact that it is Windows-based. This seems to be the perfect application for a small solid-state Linux box, but I haven’t had the time to investigate a migration. Instead I just turned off or removed as much software as I could (all the Dell Update stuff kept popping up in the middle of playing a game) and so far so good.

I am very happy with the product and extremely happy with the company behind it. If you are in the market for such a cabinet, please check them out.