2022 Open Source Summit – Day 0

Monday was a travel day, but it was notable as it was the first time I have been in an airport since August. I fly out of RDU, and the biggest change was that they now have the “Star Trek” x-ray machines to scan carry-on luggage. While I panicked for a second when I downloaded my boarding pass and didn’t see the TSA Precheck logo, I was able to get that sorted out, so going through security was pretty easy.

The restrictions on masks for air travel have been lifted, but I wore mine, as did about 10% of the other travelers. Even though I’ve had four shots and a breakthrough case of COVID, I interact with a lot of older people, and since I’ll be around more people at the Open Source Summit than I have been in years, I figured I’d wear mine throughout the trip.

And while they aren’t N95s, being a car nut I tried out these masks from K&N Engineering, a company known for high-end air filtration for performance vehicles, and you almost don’t realize you are wearing one.

Anyway, I made my way to the Admiral’s Club and was pleasantly surprised to see it wasn’t very crowded. It was nice to have the membership (it comes with my credit card) as my flight to Charlotte was delayed over 90 minutes. I wasn’t too worried since I had a long layover before heading to Austin, so I was a lot less stressed than many of my fellow travelers.

The flight to Austin left on time and landed early, but we got hit with the curse that our gate wasn’t available, so we sat on the tarmac for 45 minutes and got in 30 minutes late.

Not that I’m complaining. Seriously, according to my phone the trip from my home to Austin by car is 19 hours. From the moment I left my home until we landed was more like 8 hours, and most of that was enjoyable. I always have to remind myself of this wonderful clip by Louis C.K. which kind of sums up the amazing world in which we live, where every time we fly we should be saying to ourselves “I’m in a chair in the sky!”

I checked in at the hotel and then we headed back out in our rented minivan to get the last member of our team, and then we drove about 45 minutes outside of Austin to this barbecue joint called the Salt Lick in Driftwood, Texas. It was wonderful, and I was told we owed this experience to a recommendation years ago from Mark Hinkle, so thanks Mark!

A white van in the parking lot of the Salt Lick barbecue restaurant

You can’t really tell a good barbecue restaurant by its looks (although shabbier tends to be better) so much as by the smell. When you get out of your vehicle your nose is assaulted by the most wonderful smell, and you might be drawn to the entrance so quickly that you miss the TARDIS.

A British Police box that looks like the TARDIS from Doctor Who in the parking lot of the Salt Lick barbecue restaurant

We sat at a big picnic table and ordered family style, which meant all-you-can-eat meat, slaw, baked beans, bread, pickles and potato salad. I was in such a food coma by the end that I forgot to take a picture of the cobbler.

A table full of food at the Salt Lick barbecue restaurant

I tried not to fall asleep on the ride back to Austin (I wasn’t driving) but it was a great start to what I hope is a wonderful week.

2022 Open Source Summit North America

Next week I’ll be attending my first conference in nearly three years. My last one turned out to be the very last OSCON back in 2019. Soon after that I was in a bad car accident that laid me up for many months and then COVID happened.

Open Source Summit Logo Showing Member Conferences

I am both eager and anxious. Even having four vaccine shots and one breakthrough case I still feel a little exposed around large groups of people, but the precautions outlined in the “Health and Safety” section of the conference website are pretty robust and I am eager to see folks face-to-face (or mask-to-mask) once again.

The Linux Foundation’s Open Source Summit used to be known as LinuxCon, and now it is an umbrella title for a number of conferences around open source, all of which look cool. My new employer, AWS, is a platinum sponsor and will also have a booth (I am not on booth duty this trip, but I’ll be around). I am looking forward to meeting in person many of my teammates whom I’ve only seen via video, seeing old friends I haven’t seen in years, and making a bunch of new ones.

Of course, we would have to have a conference in Austin during a heat wave. I was thinking about never leaving the conference venue but then I remembered … barbecue.

If you are going and would like to say “hi” drop me a note on Twitter or LinkedIn or send an e-mail to tarus at tarus dot io.

In Pursuit of Quality Interactions

Recently my friend Jonathan had a birthday, and I sent him a short note with best wishes for the day and to let him know I was thinking about him.

In his reply he included the following paragraph:

[I] was reminded of your comment about a sparsely attended OUCE conference at Southampton one year. You said something along the lines of that it didn’t matter, that you would try to make it the best experience you could for everyone there. That stuck with me. It’s been one of my mantras ever since then.

I can remember talking about that, although I also remember I was very ill during most of that conference and spent a lot of time curled up in my room.

Putting on conferences can be a challenge. You don’t know how many people will show up, but you have to plan months in advance in order to secure a venue. Frequently we could use information about the previous conference to approximate the next one, but quite often there were a number of new variables that were hard to measure. In this case, moving the conference from Germany, near Frankfurt, to Southampton in the UK resulted in a lot fewer people coming than we expected.

It is easy to get discouraged when this happens. I have given presentations in full rooms where people were standing in the back and around the edges, and I have given presentations to three people in a large, otherwise empty room. In both cases I do my best to be engaging and to meet the expectations of those people who were kind enough to give me their attention.

I think this is important to remember, especially in our open source communities. I don’t think it is easy to predict from first impressions which people will become future leaders, so investing a little of your attention in as many people as possible can reap large rewards. I can remember when I started in open source I’d sometimes get long e-mails from people touting how great they were, inevitably followed by a long list of things I needed to do to make my project successful. Other times I’d get a rather timid e-mail from someone wanting to contribute, along with some well written documentation or a nice little patch or feature, and I valued those much more.

I can remember at another OUCE we ended up staying at a hotel outside of Fulda because another convention (I think involving public service vehicles like fire trucks and ambulances) was in town at the same time. There was a van that would pick us up and take us into town each morning, and on one day a man named Ian joined me for the ride. He was complaining about how his boss made him come to the conference and he was very unhappy about being there. I took that as a challenge and spent some extra time with him, and by the end of the event he had become one of the project’s biggest cheerleaders.

Or maybe it was just the Jägermeister.

In the book Zen and the Art of Motorcycle Maintenance the author Robert Pirsig demonstrates a correlation between “attention” and “quality”. In today’s world I often find it hard to focus my attention on any one thing at a time, and it is something I should improve. But I do manage to put a lot of attention into person-to-person interactions, and that has been very valuable over the years.

In any case I was touched that Jonathan remembered that from our conversation, and it helps to be reminded. It also motivated me to write this blog post (grin).

AWS: Impressions So Far

When I announced that I had joined AWS, at least two of my three readers reached out with questions so I thought I’d post an update on my onboarding process and impressions so far.

One change you can expect is that when I talk about my job on this blog, I’m going to add the following disclaimer:

Note: Everything expressed here represents my own thoughts and opinions and I am not speaking for my employer Amazon Web Services.

Back when I owned the company I worked for, I had more control over what I could share publicly. While I am very excited to be working for AWS and may, at some time in the future, speak on their behalf, this is not one of those times.

A number of people joked about me joining the “dark side”. My friend Talal even commented on my LinkedIn post with the complete “pitch speech” Darth Vader made to Luke Skywalker in Empire. While I got the joke, I’d always had a pretty positive opinion of Amazon, gained mainly through being a retail customer.

I recently traced what I think was my first interaction with Amazon back to a book purchase made in December of 1997. In the nearly 25 years I’ve been shopping there I can think of only two times I was disappointed with their customer service (both involving returns) and numerous times when Amazon exceeded my expectations. For example, I once spent around $70 on two kits used to clean high performance automotive air filters. One of them leaked in shipment, and I asked if I could return it. They told me to keep both and refunded the whole $70, even after I protested that I’d be happy with half that.

It was this focus on customer service that attracted me to the possibility of working with Amazon. When I was at OpenNMS I crafted a mission statement that read “Help Customers. Have Fun. Make Money”. I thought I came up with it on my own but I may have gotten inspiration from a Dilbert cartoon, although I changed the order to put the focus on customers. I always put a high value on customer satisfaction.

I have also been a staunch, and I’ll admit, opinionated, proponent of free and open source software and nearly 20 years of those opinions are available on this blog. Despite that, AWS still wanted to talk to me, and as I went through the interview process I really warmed to the idea of working on open source at AWS.

Just before I started I received a note from the onboarding specialist with links to content related to Amazon’s “peculiar” culture. When I read the e-mail I was pretty certain they meant “particular”, as “particular” implies “specific” and “peculiar” implies “strange”. Nope, peculiar is the word they meant to use and I’m starting to understand why. They are so laser-focused on customer satisfaction that their methods can seem strange to people used to working in other companies.

As you can imagine with a company that has around 1.6 million employees, they have the onboarding process down to a science. My laptop and supporting equipment showed up before my start date, and with few problems I was able to get on the network and access Amazon resources. These last two weeks have been packed with meeting people, attending virtual classes with other new hires, and going through a lot of online training. One concept they introduce early on is the idea of “working backwards”. At Amazon, everything starts from the customer and you work backwards from there. After having this drilled into my head in one of the online courses it was funny to watch a video of Jeff Bezos during an All Hands meeting where someone asks if the “working backwards” process is optional.

Based on my previous experience with large companies I was certain of the answer: no, working backwards is not optional. Period.

But that wasn’t what he said. He said it wasn’t optional unless you can come up with something better. I know it is kind of a subtle distinction but it really resonated with me, as it drove home the fact that at Amazon no process is really written in stone. Everything is open to change if it can be improved. As I learn more about Amazon I’ve found that there are many “tenets”, or core principles, and every one of them is presented in the context that these exist until something better is discovered, and there seem to be a lot of processes in place to suggest those improvements at all levels of the company.

If there is anything that isn’t open to change, it is the goal of becoming the world’s most customer-centric company. While a lot of companies claim to be focused on their customers without many specifics, at Amazon this is defined as having low prices, a large selection and a great customer experience. Everything else is secondary.

I bring this up because it is key to understanding Amazon as a company. To get back to my area of expertise, open source, quite frequently open source involvement is measured by things such as the number of commits, lines of code committed, number of projects sponsored and number of contributors. That is all well and good, but seen through the lens of customer satisfaction those numbers mean nothing on their own, so they aren’t the measures that matter at Amazon. Amazon approaches open source as “how can our involvement improve the experience of our customers?”

(Again, please remember that is my personal opinion based on my short tenure at AWS and doesn’t constitute any formal policy or position)

Note that with respect to open source at AWS, “customer” can refer to both end users of software who want an easy and affordable way to leverage open source solutions as well as open source projects and companies themselves. My focus will be on the latter and I’m very eager to begin working with all of these cool organizations creating wonderful open source solutions.

This focus may not greatly increase those metrics mentioned above, but it is hoped that it will greatly increase customer satisfaction.

So, overall, I’m very happy with my decision to come to AWS. I grew up in North Carolina where the State motto is Esse Quam Videri, which is Latin for “to be rather than to seem”. My personal goal is to see AWS considered both a leader and an invaluable partner for open source companies and projects. I realize that won’t happen overnight and I welcome suggestions on how to reach that goal. In any case it looks like it is going to be a lot of fun.


Creating Strong Passwords

For obvious reasons I’ve been creating some new passwords lately, and I wanted to share my method for creating strong passwords that are easy to remember yet hard to guess.

Of course, Randall Munroe set the bar with this comic:

xkcd Password Strength comic

It does make a lot of sense, but the method has its critics. Attackers can and do use random word generators that can break such passwords more quickly, even with common substitutions like “3” for “e”.

There is also a good argument to be made that we should all be using password managers that generate long random passwords and not really creating passwords at all.

Then there is the very good idea of using two factor authentication, but that tends to augment passwords more than replace them.

In normal life you have to have at least a few passwords memorized, such as the one to get into your device and one to get into your password manager, so I thought I’d share my technique.

I like music, and I tend to listen to pretty obscure artists. What I do is to think of a random lyric from a song I like and then convert that into a password.

For example, right now I’m listening to the album Wet Tennis by Sofi Tukker. The track that gives me the biggest earworm is “Original Sin” which opens with the lyric:

So I think you’ve got
Something wrong with you
Something’s not right with me, too
It’s not right with me

If I were going to turn that into a password, I would come up with something like:

sItUgswwysnrwm,2inrwm

Looks pretty random, and contains lower case and upper case letters, a number and a special character. At 21 characters it isn’t quite as long as “correcthorsebatterystaple” but you can always add more words from the lyrics if needed.
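
If you want to see the idea spelled out, here is a minimal sketch in Python. The substitution map is my own guess at the rules I used above (keep “I” capitalized, “you’ve” becomes “U”, “too” becomes “2”, and trailing punctuation like the comma is kept); adjust it to whatever mnemonics you will actually remember.

# A minimal sketch of the lyric-to-password scheme described above.
# The substitution rules are illustrative guesses, not a standard.

SUBSTITUTIONS = {
    "i": "I",        # keep the pronoun "I" capitalized
    "you've": "U",   # "you" sounds like the letter U
    "too": "2",      # "too" sounds like the number 2
}

def lyric_to_password(lyric):
    parts = []
    for word in lyric.split():
        # Peel off trailing punctuation (such as the comma in "me,")
        # so it survives into the password.
        trailing = ""
        while word and not word[-1].isalnum():
            trailing = word[-1] + trailing
            word = word[:-1]
        parts.append(SUBSTITUTIONS.get(word.lower(), word[0].lower()) + trailing)
    return "".join(parts)

lyric = ("So I think you've got "
         "Something wrong with you "
         "Something's not right with me, too "
         "It's not right with me")
print(lyric_to_password(lyric))  # prints: sItUgswwysnrwm,2inrwm

Adding more lines from the song just makes the password longer.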

Just thought I’d throw this out there as it works for me. The only thing I have to remember is not to hum the song while logging in.

The Adventure Continues

Last year I wrote about parting ways with the OpenNMS Project and how I was ready for “Act III” of my professional career.

With my future being somewhat of a tabula rasa, I was a bit overwhelmed with choices, so I decided to return to my roots and dust off my consulting LLC. Soon I found myself in the financial sector helping to deploy network monitoring and observability solutions.

I was working with some pretty impressive applications and it was interesting to see the state of the art for monitoring. We’ve come a long way since SNMP. It was engaging and fun work, but all the software was proprietary and I missed the open source aspect.

Recently, Spot Callaway made me aware of an opportunity at Amazon Web Services for an open source evangelist position. Of all the things I’ve done in my career, acting as an evangelist for open source solutions was my favorite thing to do, and here was a chance to do it full time. I will admit that Amazon was not the first name that popped into my head when I thought “open source”, but the more I learned about the team and AWS’s open source initiatives, the more interested I became in the position. After I made it through their rather intense interview process and met even more people with whom I’ll be working, it became a job I couldn’t refuse.

So I’m happy to announce that I’m now a Principal Evangelist at AWS, reporting to David Nalley, who, in addition to being a pretty awesome boss, is the current President of the Apache Software Foundation. OpenNMS would not have existed without software from the ASF, and it will be cool to learn more about that organization first hand.

My main role will be to work with open source companies as an advocate for them within AWS. The solutions AWS provides can help jumpstart these companies toward profitability by providing the resources they need to be successful and to affordably grow as their needs change. While I am just getting started within the organization and it will take me some time to learn the ropes, I am hoping my own experience in running an open source business will provide a unique insight into issues faced by those companies.

Exciting times, so watch this space as my open source adventures continue.

“Run-of-the-Mill Person”

I just noticed that my Wikipedia page has been deleted (the old version is still on the Internet Archive).

I’m not sure how I feel about this. When I was first made aware of its existence oh so many years ago I was both flattered and a little embarrassed, mainly because I didn’t think I rated a page on Wikipedia. But then I got to thinking that, hey, pretty much anyone should be able to have a page on Wikipedia as long as it adheres to their format guidelines. It’s not like it takes up much space, and as long as the person is verifiable as being a real person, why not?

I am certain I would have been okay with my page being deleted soon after it was created, but once you get used to having something, earned or not, there is a strong psychological reaction to having it taken away. From what I can tell the page was created in 2010, so it had been around for nearly 12 years with no one complaining.

The most hurtful thing was a comment about the deletion from EdwardX of London:

Nothing cited in the article counts towards WP:GNG, and I can find nothing better online. Run-of-the-mill person.

Really? Was the “Run-of-the-mill person” comment really necessary? (grin)

I’m still happy about what I was able to accomplish with OpenNMS and building the community around it, even if it was run-of-the-mill, and I plan to promote open source and open source companies for the remainder of my career, even if that isn’t Wikipedia-worthy.

Nineteen Years

Nineteen years ago my friend Ben talked me into starting this blog. I don’t update it as frequently any more for a variety of reasons, mainly because more people interact on social media these days and I’m not as involved in open source as I used to be, but it is still somewhat of an achievement to keep something going this long.

My “adventures” in open source started out on September 10th, 2001, when I started a new job with a company called Oculan to work on their open source monitoring platform OpenNMS. In May of 2002 I became the lead maintainer on the project, and by the time I started this blog I’d been at it for several months. Back then blogs were one of the main ways an open source project could communicate with its community.

The nearly two decades I spent with OpenNMS were definitely an adventure, and this site can serve as a record of both those successes and those struggles.

Nineteen years ago open source was very different than it is today. Today it is ubiquitous: I think it would be rare for a person to go a single day without interacting with open source software in some fashion. But back then there was still a lot of fear, uncertainty and doubt about using it, along with a lot of confusion about what it meant. Most people didn’t take it seriously, often comparing it to “shareware” and never believing that it would ever be used for doing “real” things. On a side note, even in 2022 I had one person make the shareware comparison when I brought up Grafana, a project that has secured nearly US$300 million in funding.

Back then we were trying to figure out a business model for open source, and I think in many ways we still are. The main model was support and services.

You would have thought this would have been more successful than it turned out to be. Proprietary software costing hundreds of thousands if not millions of dollars would often require that you purchase a maintenance or support contract running anywhere from 15% to 25% of the original software cost per year just to get updates and bug fixes. You would think that people would be willing to pay that amount or less for similar software, avoiding the huge upfront purchase, but that wasn’t the case. If they didn’t have to buy support they usually wouldn’t. Plus, support doesn’t easily scale. It is hard to find qualified people to support complex software. I’d often laugh when someone would contact me offering to double our sales, because we wouldn’t have been able to handle the extra business.

One company, Red Hat, was able to pull it off and create a set of open source products people were willing to purchase at a scale that made them a multi-billion dollar organization, but I can’t think of another that was able to duplicate that success.

Luckily, the idea of “hosted” software gained popularity. One of my favorite open source projects is WordPress. You are reading this on a WordPress site, and the install was pretty easy. They talk about a “five minute” install and have done a lot to make the process simple.

However, if you aren’t up to running your own server, it might as well be a five year install process. Instead, you can go to “wordpress.com” and get a free website hosted by them and paid for by ads being shown on your site, or you can remove those ads for as little as US$4/month. One of the reasons that Grafana has been able to raise such large sums is that they, too, offer a hosted version. People are willing to pay for ease of use.

But by far the overwhelming use of open source today is as a development methodology, and the biggest open source projects tend to be those that enable other, often proprietary, applications. Two Sigma Ventures has an Open Source Index that tries to quantify the most popular open source projects, and at the moment these include TensorFlow (a machine learning framework), Kubernetes (a container orchestration platform) and of course the Linux kernel. What you don’t see are end user applications.

And that to me is a little sad. Two decades ago the terms “open source” and “free software” were often used interchangeably. After watching personal computers go from hobbyists to mainstream we also saw control of those systems move to large companies like Microsoft. The idea of free software, as in being able to take control of your technology, was extremely appealing. After watching companies spend hundreds of thousands of dollars on proprietary software and then being tied to those products, I was excited to bring an alternative that would put the power of that software back into the hands of the users. As my friend Jonathan put it, we were going to change the world.

The world did change, but not in the way we expected. The main reason is that free software really missed out on mobile computing. While desktop computers were open enough that independent software could be put on them, mobile handsets to this day are pretty locked down. While everyone points to Android as being open source, to be honest it isn’t very useful until you let Google run most of it. There was a time where almost every single piece of technology I used was open, including my phone, but I just ran out of time to keep up with it and I wanted something that just worked. Now I’m pretty firmly back into the Apple ecosystem and I’m amazed at what you can do with it, and I’m so used to just being able to get things going on the first try that I’m probably stuck forever (sigh).

I find it ironic that today’s biggest contributors to open source are also some of the biggest proprietary software companies in the world. Heck, even Red Hat is now completely owned by IBM. I’m not saying that this is necessarily a bad thing, look at all the open source software being created by nearly everyone, but it is a long way from the free software dream of twenty years ago. Even proprietary, enterprise software has started to leverage open APIs that at least give a nod to the idea of open source.

We won. Yay.

Recently some friends of mine attended a fancy, black-tie optional gala hosted by the Linux Foundation to celebrate the 30th anniversary of Linux. Most of them work for those large companies that heavily leverage open source. And while apparently a good time was had by all, I can’t help but think of, say, those developers who maintain projects like Log4j who, when there is a problem, get dumped on to fix it and probably never get invited to cool parties.

Open source is still looking for a business model. Heck, even making money providing hosted versions of your software is a risk if one of the big players decides to offer their version, as to this day it is still hard to compete with a Microsoft or an Amazon.

But this doesn’t mean I’ve given up on open source. Thanks to the Homebrew project I still use a lot of open source on my Macintosh. I’m writing this using WordPress on a server running Ubuntu through the Firefox browser. I still think there are adventures to be had, and when they happen I’ll write about them here.

Nextcloud News

I think the title of this post is a little misleading, as I don’t have any news about Nextcloud. Instead I want to talk about the News App on the Nextcloud platform, and I couldn’t think of a better title.

I rely heavily on the Nextcloud News App to keep up with what is going on with the world. News provides similar functionality to the now defunct Google Reader, but with the usual privacy bonuses you expect from Nextcloud.

Back before social networks like Facebook and Twitter were the norm, people used to communicate through blogs. Blogs provide similar functionality: people can write short or long form posts that will get published on a website and can include media such as pictures, and other people can comment and share them. Even now when I see an incredibly long thread on Twitter I just wish the author would have put it on a blog somewhere.

Blogs are great, since each one can be individually hosted without requiring a central authority to manage it all. My friend Ben got me started on my first blog (this one), which in the beginning was hosted using a program called Movable Type. When their licensing became problematic, most of us switched to WordPress, and a tremendous amount of the Web runs on WordPress even now.

Now the problem was that the frequency with which people posted to their blogs varied. Some might post once a week, and others several times an hour. Unless you wanted to go and manually refresh their pages, it was difficult to keep up.

Enter Really Simple Syndication (RSS).

RSS is, as the name implies, an easy way to summarize content on a website. Sites that support RSS craft a generic XML document that reflects titles, descriptions, links, etc. to content on the site. The page is referred to as a “feed” and RSS “readers” can aggregate the various feeds together so that a person can follow the changes on websites that interest them.
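
To make that concrete, here is a minimal sketch in Python of what a feed document looks like and how a reader pulls the title and link out of each item. The feed contents here are invented for illustration.

import xml.etree.ElementTree as ET

# An invented, minimal RSS 2.0 feed: a channel with one item.
FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Adventures in Open Source</title>
    <link>https://www.adventuresinoss.com</link>
    <description>An example feed</description>
    <item>
      <title>An example post</title>
      <link>https://www.adventuresinoss.com/example-post</link>
      <description>A short summary of the post.</description>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    # A reader aggregates these titles and links across many feeds.
    print(item.findtext("title"), "->", item.findtext("link"))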

Google Reader was a very useful feed reader that was extremely popular, and it in turn increased the popularity of blogs. I put some of the blame for the rise of the privacy nightmare of modern social networks on Google’s decision to kill Reader, as it made individual blogs less relevant.

Now in Google’s defense they would say just use some other service. In my case I switched to Feedly, an adequate Reader replacement. The process was made easier by the fact that most feed readers support a way to export your configuration in the Outline Processor Markup Language (OPML) format. I was able to export my Reader feeds and import them into Feedly.
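
An OPML file is just another XML document listing your subscriptions, which is why moving between readers is so painless. Here is a minimal sketch of pulling the feed URLs out of an export (the file name and its contents are hypothetical):

import xml.etree.ElementTree as ET

# "subscriptions.opml" is a hypothetical export from a feed reader.
tree = ET.parse("subscriptions.opml")
for outline in tree.iter("outline"):
    # Feed entries carry the feed address in the xmlUrl attribute;
    # folder entries don't, so skip those.
    feed_url = outline.get("xmlUrl")
    if feed_url:
        print(outline.get("text"), "->", feed_url)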

Feedly was free, and as they say if you aren’t paying for the product you are the product. I noticed that next to my various feed articles Feedly would display a count, which I assume reflected the number of Feedly users that were interested in or who had read that article. Then it dawned on me that Feedly could gather useful information on what people were interested in, just like Facebook, and I also assume, if they chose, they could monetize that information. Since I had a Feedly account to manage my feeds, they could track my individual interests as well.

While Feedly never gave me any reason to assign nefarious intentions to them, as a privacy advocate I wanted more control over sharing my interests, so I looked for a solution. As a Nextcloud fan I looked for an appropriate app, and found one in News.

News has been around pretty much since Nextcloud started, but I rarely hear anyone talking about its greatness (hence this post). Like most things Nextcloud it is simple to install. If you are an admin, just click on your icon in the upper right corner and select “+ Apps”. Then click on “Featured apps” in the sidebar and you should be able to enable the “News” app.

That’s it. Now in order to update your feeds you need to be using the System Cron in Nextcloud, and instructions can be found in the documentation.

Once you have News installed, the next challenge is to find interesting feeds to which you can subscribe. The news app will suggest several, but you can also find more on your own.

Nextcloud RSS Suggestions

It used to be pretty easy to find the feed URL. You would just look for the RSS icon and click on it for the link:

RSS Icon

But, again, when Reader died so did a lot of the interest in RSS, and finding feed URLs became more difficult. I have links to feeds at the very bottom of the right sidebar of this blog, but you’d have to scroll down quite a way to find them.

But for WordPress sites, like this one, you just add “/feed” to the site URL, such as:

https://www.adventuresinoss.com/feed

There are also some browser plugins that are supposed to help identify RSS feed links, but I haven’t used any. You can also “view source” on a website of interest and search for “rss”, and that may help as well.
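
Most sites that offer a feed advertise it with a <link> tag in the page header, which is what that “view source” search is really finding. Here is a minimal sketch that automates the search, using this blog as the example (standard library only):

from html.parser import HTMLParser
from urllib.request import urlopen

class FeedFinder(HTMLParser):
    # Print any <link> tag that advertises an RSS or Atom feed.
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("type") or "").endswith(("rss+xml", "atom+xml")):
            print(a.get("title", "feed"), "->", a.get("href"))

page = urlopen("https://www.adventuresinoss.com").read().decode("utf-8", "ignore")
FeedFinder().feed(page)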

My main use of the News App is to keep up with news, and I follow four main news sites. I like the BBC for an international take on news, CNN for a domestic take, Slashdot for tech news and WRAL for local news.

Desktop Version of News App

Just for reference, the feed links are:

BBC: http://newsrss.bbc.co.uk/rss/newsonline_uk_edition/front_page/rss.xml

CNN: http://rss.cnn.com/rss/cnn_topstories.rss

Slashdot: http://rss.slashdot.org/slashdot/slashdotMain

WRAL: http://www.wral.com/news/rss/48/

This wouldn’t be as useful if you couldn’t access it on a mobile device. Of course, you can access it via a web browser, but there are also a number of phone apps for accessing your feeds natively.

Now to my knowledge Nextcloud the company doesn’t produce a News mobile app, so the available apps are provided by third parties. I put all of my personal information into Nextcloud, and since I’m paranoid I didn’t want to put my access credentials into those apps but I wanted the convenience of being able to read news anywhere I had a network connection. So I created a special “news” user just for News. You probably don’t need to do that but I wanted to plant the suggestion for those who think about such things.

On my iPhone I’ve been happy with CloudNews.

iPhone Version of CloudNews App

It sometimes gets out of sync and I end up having to read everything in the browser and re-sync in CloudNews, but for the most part it’s fine.

For Android the best app I’ve used is by David Luhmer. It’s available for a small fee in the Play Store and for free on F-Droid.

Like all useful software, you don’t realize how much you depend on it until it is gone, and in the few instances I’ve had problems with News I get very anxious as I don’t know what’s going on in the world. Luckily this has been rare, and I check my news feed many times during the day to the point that I probably have a personal problem. The mobile apps mean I can read news when I’m in line at the grocery store or waiting for an appointment. And the best part is that I know my interests are kept private as I control the data.

If you are interested, I sporadically update a number of blogs, and I aggregate them here. In a somewhat ironic twist, I can’t find a feed link for the “planet” page, so you’d need to add the individual blog feeds to your reader.

Review: AT&T Cell Booster

Back in the mid-2000s I was a huge Apple fanboy, and I really, really, really wanted an iPhone. At that time it was only available from AT&T, and unfortunately the wireless coverage on that network is not very good where I live.

In 2008 a couple of things happened. Apple introduced the iPhone 3G, and AT&T introduced the 3G Microcell.

The 3G Microcell, technically a “femtocell”, is a small device that you can plug into your home network, and it will leverage your Internet connection to augment wireless coverage in a small area (i.e. your house). With that I could get an iPhone and it would work at my house.

In February 3G service in the US will cease, and I thought I was going to have to do without a femtocell. Most modern phones support calling over WiFi now, but it just isn’t the same. For example, if I am trying to send an SMS and there is any signal at all from AT&T, my phone will try to use that network instead of the much stronger wireless network in my house. If I disable mobile access altogether, the SMS will send fine but then I can’t get phone calls reliably. (sigh)

I thought I was going to have to just deal with it when AT&T sent me a notice that they were going to replace my 3G Microcell with a new product called a Cell Booster.

Now a lot of people criticize AT&T for a number of good reasons, but lately they’ve really been hitting the whole “customer service” thing out of the park. The Cell Booster currently shows out of stock on their website with a cost of $229, but they sent me one for free.

AT&T Cell Booster Box

In a related story, my mother-in-law, who is on our family plan, was using an older Pixel that was going to stop working with the end of 3G service (it was an LTE phone but didn’t support “HD Voice”, which is required to make calls). So AT&T sent us a replacement Samsung S9. Pretty cool.

In any case the Cell Booster installation went pretty smoothly. I simply unplugged the existing 3G Microcell and plugged in the new device. The box included the Cell Booster, a GPS sensor, a power supply and an Ethernet cable. There were no other instructions outside of a QR code that takes you to the appropriate app store to download the application needed to set it up.

The Booster requires a GPS lock, and they include a little “puck” connected to a fairly long wire that is supposed to allow you to get a signal even when the device is some distance away from a clear line of sight, such as away from windows. I just plugged it into the back and left it next to the unit, and it eventually got a signal, but it is also pretty much beneath a skylight.

In order to provision the Cell Booster you have to launch the mobile app and fill out a few pages of forms, which include the serial number of the device. It has five lights on the front, and while the power light came on immediately, it did take some time for the other lights, including “Internet”, to come up. I assumed the Internet light would turn on as soon as an IP address was assigned, but that wasn’t the case. It took nearly half an hour for the first four lights to come on, and then another 15 minutes or so for the final “4G LTE” light to illuminate and the unit to start working. Almost immediately I got an SMS from AT&T saying the unit was active.

AT&T Cell Booster Lights

Speaking of IP addresses, I don’t like putting random devices on my LAN, so I stuck this on my public network, which only has Internet access (no LAN access). I ran nmap against it and there don’t appear to be any ports open. A traffic capture shows traffic between the Cell Booster and an address on the 12.0.0.0 network owned by AT&T.
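
If you are curious and don’t have nmap handy, the same sanity check can be sketched in a few lines of Python: try TCP connections to a handful of common ports and report any that accept. The target address here is hypothetical; use whatever your network assigns the Booster.

import socket

TARGET = "192.168.2.50"  # hypothetical Cell Booster address
COMMON_PORTS = [22, 23, 53, 80, 443, 8080]

for port in COMMON_PORTS:
    try:
        # A completed connection means something is listening.
        with socket.create_connection((TARGET, port), timeout=1):
            print(f"port {port} open")
    except OSError:
        pass  # closed or filtered; nothing listening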

I do like the fact that, unlike the 3G Microcell, you do not need to specify the phone number of the handsets that can use the Cell Booster. It claims to support up to 8 at a time, and while I haven’t had anyone over who is both on the AT&T network and also not on my plan, I’m assuming it will work for them as well (I used to have to manually add phone numbers of my guests to allow them to use the 3G device).

The Cell Booster is a rebranded Nokia SS2FII. One could probably buy one outside of AT&T but without being able to provision it I doubt it would work.

So far we’ve been real happy with the Cell Booster. Calls and SMS messages work just fine, if not better than before (I have no objective way to measure it, though, so it might just be bias). If you get one, just remember that it takes a really long time to start up that first time, but after you have all five lights you should be able to forget it’s there.