I was recently at a client site where I met a man named Jeremy Ford. He’s sharp as a knife and even though, at the time, he was new to OpenNMS, he had already hacked a few neat things into the system (open source FTW).
One of those was the addition of a weathermap to the OpenNMS home page. He has graciously put the code up on GitHub.
The code is a script that will generate a JSP file in the OpenNMS “includes” directory. All you have to do then is add a reference to it in the main index.jsp file.
For those of you who don’t know or who have never poked around, under the $OPENNMS_HOME directory should be a directory called jetty-webapps. That is the web root directory for the Jetty servlet container that ships with OpenNMS.
Under that directory you’ll find a subdirectory for opennms. When you surf to http://[my OpenNMS Server]:8980/opennms, that is the directory you are visiting. In it is an index.jsp file that serves as the main page.
If you are familiar with HTML, the JSP file is very similar. It can contain references to Java code, but a lot of it is straight HTML. The file is kept simple on purpose, with each of the three columns on the main page indicated by comments. The part you will need to change is the third column:
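As a rough sketch of what that change looks like (the exact markup is an assumption on my part; weather.jsp is the file Jeremy’s script generates):

```jsp
<!-- hypothetical include; drop this into the third-column section of index.jsp -->
<jsp:include page="includes/weather.jsp" flush="false" />
```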
Feel free to look around. If you ever wanted to rearrange the OpenNMS Home page, this is a good place to start.
Now, I used to like poking around with these files since they would update automatically, but later versions of OpenNMS (which contain later versions of Jetty) seem to require a restart. If you get an error, restart OpenNMS and see if it goes away.
Now the weather.jsp file gets generated by Jeremy’s Python script. In order to get that to work you’ll need to do two things. The most important is to get an API key from Weather Underground. It is a pretty easy process, but be aware that you can only do 500 queries a day without paying. The second thing you’ll need to do is edit the three URLs in the script and change the location. It is currently set to “CA/San_Francisco” but I was able to change it to “NC/Pittsboro” and it “just worked”.
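If you’d rather not edit the three URLs by hand, a one-line sed does it. Here’s a self-contained sketch: weather.py is a stand-in name for Jeremy’s script, and the URL shape is from my memory of the old Weather Underground API, so treat both as assumptions.

```shell
# Create a sample line like the ones in the script (stand-in for the real file):
echo 'url = "http://api.wunderground.com/api/KEY/conditions/q/CA/San_Francisco.json"' > weather.py
# Rewrite the location in place; the real script has three such URLs, and the
# g flag catches all of them:
sed -i 's|CA/San_Francisco|NC/Pittsboro|g' weather.py
cat weather.py   # the URL now points at NC/Pittsboro
```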
Finally, you’ll need to set the script up to run via cron. I’m not sure how frequently Weather Underground updates the data, but a 10 minute interval seems to work well. That’s only 144 queries a day, so you could easily double it and still be within your limit.
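A crontab entry for that looks something like this (the path to the script is my assumption; adjust it to wherever you installed Jeremy’s code):

```
# Regenerate weather.jsp every 10 minutes
*/10 * * * * /opt/opennms/contrib/weather.py >/dev/null 2>&1
```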
[IMPORTANT UPDATE: Jeremy pointed out that the script actually does three queries, not just one, so instead of doing 144 queries a day, it’s 432. Still leaves some room with 10 minute queries but you don’t want to increase the frequency too much.]
Thanks to Jeremy for taking the time to share this. Remember, once you get it working, if you upgrade OpenNMS you’ll need to edit index.jsp and add it back, but that should be the only change needed.
Once again we will descend on the campus of the University of Minnesota for a week of fun, fellowship and hacking on OpenNMS and all things open source.
Anyone is welcome to attend, although I must stress that this is aimed at developers and it is highly unstructured. Despite that, we get a ton of things done and have a lot of fun doing it (and I’m not just saying that, there’s videos).
We stay at Yudof Hall on campus, and while that can scare older folks, I want to point out the accommodation is quite nice and I’ve been told they have recently refurbished the dorm. If you want to stay on campus the cost is US$1500 for the week which includes all meals.
If you prefer hotels, there are several nearby, and you can come to the conference for US$800.
Registration is now open and space is limited. If you think you want to come but aren’t sure, let me know and I’ll try to save you a space. We’ve sold out the last two years.
Oh, sponsorships are available as well for $2500. You will help us bring someone deserving to Dev Jam who wouldn’t ordinarily get to attend, and you’ll get your logo and link on www.opennms.org for a year.
For the last few days it has been hard to remain true to my free and open source roots. I guess I’ve been spoiled lately with almost everything I try out “just working”, but it wasn’t so with my upgrade to OmniROM 6.0 on my Nexus 6 (shamu).
I’ve been a big fan of OmniROM since it came out, and I base my phone purchases on what handsets are officially supported. While I tend not to rush to upgrade to the latest and greatest, once the official nightlies switched to Android “Marshmallow” I decided to make the jump.
Now there are a couple of tools that I can’t live without when playing with my phone. They are the Team Win Recovery Project (TWRP) and Titanium Backup. The first lets you create easy-to-restore complete backups and the second allows you to restore application state even if you factory reset your device, which I had to do.
[NOTE: I should also mention that I rely on Chainfire’s SuperSU for root. It took me a while to find a link for it I trust.]
When I tried the first 6.0 nightlies, all I did was sideload the ROM, wipe the caches, and reboot. I liked the new “OMNI” splash screen but once the phone booted, the error “Unfortunately process com.android.phone has stopped” popped up and couldn’t be cleared. Some investigation suggested a factory reset would fix the issue, but since I didn’t want to go through the hassle of restoring all of my applications I decided to just restore OmniROM 5.1 and wait to see if a later build would fix it.
Well, this weekend we got a dose of winter weather and I ended up home bound for several days, so I decided to give it another shot. I sideloaded the latest 6.0 nightly and sure enough, the same error occurred. So I did a factory reset and, voilà, the problem went away.
Now all I had to do was reload all 100+ apps. (sigh)
I started by installing the “pico” GApps package from Open GApps and in case you were wondering, the Nexus 6 uses a 32-bit ARM processor.
I guess I really shouldn’t complain, as doing a fresh install once in a while can clean out a bunch of cruft that I’ve installed over the past year or so, but I’ve come to expect OmniROM upgrades to be pretty easy.
One of the first things I installed from the Play store was the “K-9 Mail” application. Unfortunately, it kept having problems connecting to my personal mail server (the work one was fine). The sync would error with “SocketTimeoutException: fai”. So I rebooted back to Omni 5.1 and things seemed to work okay (although I did see that error when trying to sync some of the folders). Back I went to 6.0 (see where TWRP would come in handy here?) and I noticed that when I disabled Wi-Fi, it worked fine.
As I was trying to sleep last night it hit me – I bet it has something to do with IPv6. We use true IPv6 at the office, but not to our external corporate mail server, which would explain why a server in the office would fail but the other one would work. At home I’m on Centurylink DSL and they don’t offer it (well, they offer 6rd, which is IPv6 encapsulated over IPv4, but not only is it not “true” IPv6, you have to pay extra for a static IP to get it to work). I use a Hurricane Electric tunnel, and apparently Marshmallow uses a different IPv6 stack and thus has issues trying to retrieve data from my mail server over that protocol.
I tried turning off IPv6 on Android. It’s not easy and I couldn’t get any of the suggestions to work. Then I found a post that suggested it was the MTU, so I reduced the MTU to 1280 and still no love.
So I turned off the HE tunnel. Bam! K-9 started working fine.
For now I’ve just decided to leave IPv6 off. While I think we need to migrate there sooner rather than later, there is nothing I absolutely have to have IPv6 for at the moment and I think as bandwidth increases, having to tunnel will start to cause performance issues. Normal traffic, such as using rsync, seems to be faster without IPv6.
That experience cost me about two days, but at the moment I’m running the latest OmniROM and I’m pretty happy with it. The one open issue I have is that the AOSP keyboard crashes if you try to swipe (gesture type) but I just installed the Google Keyboard and now it works without issue.
I have to say that there were some moments when I was very close to installing the Google factory image back on my Nexus 6. It’s funny, but the ability to shake the phone to dismiss an alarm is kind of a critical feature for me. Since the last time I checked it wasn’t an available option on the Google ROM, I was willing to stick it out a little longer and figure out my issues with OmniROM.
So, yes, the gang from OpenNMS will be at the SCaLE conference this weekend (I will not be there, unfortunately, due to a self-imposed conference hiatus this year). It should be a great time, and we are happy to be a Gold Sponsor.
But this post is not about that. This is about how Horizon 17 and data collection can scale. You can come by the booth at SCaLE and learn more about it, but here is the overview.
When OpenNMS first started, we leveraged the great application RRDTool for storing performance data. When we discovered a Java port called JRobin, OpenNMS was modified to support that storage strategy as well.
Using a Round Robin database has a number of advantages. First, it’s compact. Once the file containing the RRD database is created, it never grows. Second, we used RRDTool to also graph the data.
However, there were problems. Many users had a need to store the raw collected data. RRDTool uses consolidation functions to store a time-series average. But the biggest issue was that writing lots of files required really fast hard drives. The more data you wanted to store, the greater your investment in disk arrays. Ultimately, you would hit a wall, which would require you to either reduce your data collection or partition out the data across multiple systems.
No more. With Horizon 17 OpenNMS fully supports a time-series database called Newts. Newts is built on Cassandra, and even a small Cassandra cluster can handle tens of thousands of inserts a second. Need more performance? Just add more nodes. Works across geographically distributed systems as well, so you get built-in high availability (something that was very difficult with RRDTool).
Just before Christmas I got to visit a customer on the Eastern Shore of Maryland. You wouldn’t think that location would be a hotbed of technical excellence, but it is rare that I get to work with such a quick team.
They brought me up for a “Getting to Know You” project. This is a two-day engagement where we get to kick the tires on OpenNMS to see if it is a good fit. They had been using Zenoss Core (the free version) and they hit a wall. The features they wanted were all in the “enterprise” paid version and the free version just wouldn’t meet their needs. OpenNMS did, and being truly open source it fit their philosophy (and budget) much better.
This was a fun trip for me because they had already done most of the work. They had OpenNMS installed and monitoring their network, and they just needed me to help out on some interesting use cases.
One of their issues was the need to store a lot of performance data, and since I was eager to play with the Newts integration we decided to test it out.
In order to enable Newts, first you need a Cassandra cluster. It turns out that ScyllaDB works as well (more on that a bit later). If you are looking at the Newts website you can ignore the instructions on installing it, as it is built directly into OpenNMS.
Another thing built in to OpenNMS is a new graphing library called Backshift. Since OpenNMS relied on RRDTool for graphing, a new data visualization tool was needed. Backshift leverages the RRDTool graphing syntax so your pre-defined graphs will work automatically. Note that some options, such as CANVAS colors, have not been implemented yet.
To switch to Newts, in the opennms.properties file you’ll find a section:
```
###### Time Series Strategy ####
# Use this property to set the strategy used to persist and retrieve time series metrics:
# Supported values are:
#   rrd (default)
```
Note: “rrd” strategy can refer to either JRobin or RRDTool, with JRobin as the default. This is set in rrd-configuration.properties.
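Since Newts is enabled through the same property, the switch itself is a one-line change:

```properties
org.opennms.timeseries.strategy=newts
```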
The next section determines what will render the graphs.
```
###### Graphing #####
# Use this property to set the graph rendering engine type. If set to 'auto', attempt
# to choose the appropriate backend depending on org.opennms.timeseries.strategy above.
# Supported values are:
#   auto (default)
```
If you are using Newts, the “auto” setting will utilize Backshift, but this is also where you could set Backshift as the renderer even if you want to use an RRD strategy. You should try it out. It’s cool.
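If I’m remembering the property name correctly (treat it as an assumption and check the comments in your opennms.properties), forcing Backshift looks like:

```properties
# 'auto' picks Backshift when the Newts strategy is enabled; setting it
# explicitly uses Backshift even with an RRD strategy.
org.opennms.web.graphs.engine=backshift
```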
Finally, we come to the settings for Newts:
```
###### Newts #####
# Use these properties to configure persistence using Newts
# Note that Newts must be enabled using the 'org.opennms.timeseries.strategy' property
# for these to take effect.
```
There are a lot of settings and most of those are described in the documentation, but in this case I wanted to demonstrate that you can point OpenNMS to multiple Cassandra instances. You can also set different keyspace names which allows multiple instances of OpenNMS to talk to the same Cassandra cluster and not share data.
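As a sketch of what that looks like (the hostnames and keyspace are placeholders, and the property names are the Horizon 17 defaults as I remember them, so verify against your file):

```properties
# Comma-separated list of Cassandra nodes
org.opennms.newts.config.hostname=cassandra1.example.com,cassandra2.example.com
org.opennms.newts.config.port=9042
# Use a distinct keyspace per OpenNMS instance sharing the cluster
org.opennms.newts.config.keyspace=newts
```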
From the “fine” documentation, they also recommend that you store the data based on the foreign source by setting this variable:
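Assuming I’m remembering the property name correctly (check it against your opennms.properties):

```properties
org.opennms.rrd.storeByForeignSource=true
```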
I would recommend this if you are using provisiond and requisitions. If you are currently doing auto-discovery, then it may be better to reference it by nodeid, which is the default.
I want to point out two other values that will need to be increased from the defaults: org.opennms.newts.config.ring_buffer_size and org.opennms.newts.config.cache.max_entries. For this system they were both set to 1048576. The ring buffer is especially important since, should it fill up, samples will be discarded.
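In other words (these two property names come straight from the documentation; 1048576 is just the value we used on this particular system):

```properties
org.opennms.newts.config.ring_buffer_size=1048576
org.opennms.newts.config.cache.max_entries=1048576
```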
So, how did it go? Well, after fixing a bug with the ring buffer, everything went well. That bug is one reason that features like this aren’t immediately included in Meridian. Luckily we were working with a client who was willing to let us investigate and correct the issue. By the time it hits Meridian 2016, it will be completely ready for production.
If you enable the OpenNMS-JVM service on your OpenNMS node, the system will automatically collect Newts performance data (assuming Newts is enabled). OpenNMS will also collect performance data from the Cassandra cluster, including both general Cassandra metrics and Newts-specific ones.
This system is connected to a two node Cassandra cluster and managing 3.8K inserts/sec.
If I’m doing the math correctly, since we collect values once every 300 seconds (5 minutes) by default, that’s 1.15 million data points, and the system isn’t even working hard.
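The back-of-the-envelope arithmetic:

```shell
# 3.8K inserts/sec sustained over one 300-second (5-minute) collection interval:
echo $((3800 * 300))   # 1140000, i.e. roughly 1.15 million data points per cycle
```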
OpenNMS will also collect ring buffer information, and I took a screen shot to demonstrate Backshift, which displays each data point as you mouse over it.
Horizon 17 ships with a load testing program. For this cluster:
so there is plenty of room to grow. Need something faster? Just add more nodes. Or, you can switch to ScyllaDB, which is a port of Cassandra written in C++. When run against a four node ScyllaDB cluster the results were:
Unfortunately I do not have statistics for a four node Cassandra cluster to compare it directly with ScyllaDB.
Of course the Newts data directly fits in with the OpenNMS Grafana integration.
Which brings me to one downside of this storage strategy. It’s fast, which means it isn’t compact. On this system the disk space is growing at about 4GB/day, which would be 1.5TB/year.
If you consider that the data is replicated across Cassandra nodes, you would need that amount of space on each one. Since multi-terabyte drives are pretty common, this shouldn’t be a problem, but be sure to ask yourself if all the data you are collecting is really necessary. Just because you can collect the data doesn’t mean you should.
OpenNMS is finally at the point where storing performance data is no longer an issue. You are more likely to hit limits with the collector, which in part is going to be driven by the speed of the network. I’ve been in large data centers with hundreds of thousands of interfaces, all with sub-millisecond latency. On that network, OpenNMS could collect hundreds of millions of data points. On a network with lots of remote equipment, however, timeouts and delays will limit how much data OpenNMS can collect.
But with a little creativity, even that goes away. Think about it – with a common, decentralized data storage system like Cassandra, you could have multiple OpenNMS instances all talking to the same data store. If you have them share a common database, you can use collectd filters to spread data collection out over any number of machines. While this would take planning, it is doable today.
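For example, in collectd-configuration.xml each OpenNMS instance could be given a package whose filter matches only its slice of the address space. This is just a sketch (the package and service names are placeholders), but it shows the shape of the idea:

```xml
<!-- Instance A collects only the lower half of 10.0.0.0/16;
     a second instance would use a filter like 10.0.128-255.* -->
<package name="instance-a">
  <filter>IPADDR IPLIKE 10.0.0-127.*</filter>
  <service name="SNMP" interval="300000" user-defined="false" status="on">
    <parameter key="collection" value="default"/>
  </service>
</package>
```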
What about tomorrow? Well, Horizon 18 will introduce the OpenNMS Minion code. Minions will allow OpenNMS to scale horizontally and can be managed directly from OpenNMS – no configuration tricks needed. This will truly position OpenNMS for the Internet of Things.
Seth recently pointed me to an interesting article by Gregory Brown discussing a “death spiral” often faced by software projects when issues and feature requests start to outpace the ability to close them.
Now Seth is pretty much in charge of our Jira instance, which is key to tracking the progress of OpenNMS software development. He decided to look at our record:
[UPDATE: Logged into Jira to get a lot more issues on the graph]
Not bad, not bad at all.
A lot of our ability to keep up with issues comes from our project’s investment in using the tool. It is very easy to let things slide, resulting in the first graph above and causing a project to possibly declare “issue bankruptcy”. Since all of this information is public for OpenNMS, it is important to keep it up to date, and while we never have enough time for all the things we need to do, we make time for this.
I think it speaks volumes for Seth and the rest of the team that OpenNMS issues are managed so well. In part it comes naturally from “the open source way”, since projects should be as transparent as possible, and managing issues is a key part of that.
Let’s hope the Intro is not an indication of things to come. Worst … intro … ever. Seriously, just jump to the 3 minute mark. You’ll be glad you did.
Okay, brand new year and that means predictions, where I predict that Jeremy will once again win. Yes, his entries aren’t all that strong, but he always wins.
The way the game works is that each member of the BV team must make two predictions, with bonus predictions available as well.
This is the year that some sort of Artificial Intelligence (AI) or Virtual Reality (VR) device goes mainstream. I’m not sure if Mycroft or Echo counts as an AI device, but after playing with the Samsung Gear VR he made the prediction that VR would really take off this year. He specifically stated that the device in question would not be the Oculus Rift.
Apple will have a down year, meaning that gross revenues will be lower this year than in 2015. Hrm, I’ve been thinking this might happen but I’m not sure this is the year. In the show they brought up the prospect of Apple making a television, and if that happens I would expect enough fans to rush out and buy it that Apple’s revenues would increase considerably. But without a new product line, I think there is a good chance this could happen.
Bonus: a device with a bendable display will become popular. There are devices out there with bendable displays, but nothing much outside of CES. We’ll see.
Canonical pulls out of the phone/tablet business. While the Ubuntu phone hasn’t been a huge success, it is the vehicle for exploring the idea of turning a handset-sized device into the only computer you use (i.e. you connect it up to a keyboard and screen to make a “desktop”). I can’t really see Shuttleworth giving this up, but in a mobile market that is pretty much owned by Apple and Android, this probably makes good business sense.
In a repeat from last year, Bryan predicts that ChromeOS will run Android apps natively, i.e. any app you can get from the Google Play store will run on Chrome without any special tricks. Is the second time the charm?
Bonus: Wayland will not ship as the default replacement for X on any major distro. Probably a safe bet.
The VR Project Morpheus on Playstation will be more popular than Oculus Rift. Another VR prediction, and it is hard to argue with his logic. Sony already has a large user base with its Playstation 4 console, and if this product can actually make it to market with a decent price point, you can expect a lot of adoption. Contrast that to the Oculus Rift, whose user base is still unknown, plus an estimated price tag of US$600 and the need for a high end graphics computer, and Morpheus has a strong chance to own the market. Making it to market and the overall user experience will still determine if this is a winner or a dud.
Part of Canonical will be sold off. Considering that Canonical has a number of branches, from its mobile division to the desktop and the cloud, the company might be stretched a little thin trying to focus on all of them. Plus, Shuttleworth has been bankrolling this endeavor for a while now and he may want to cash some of it out. Moving the cloud part of the company to a separate entity makes the most sense, but I’m not feeling that this will happen this year.
Bonus: a crowdfunding campaign will pass US$200MM. The current record crowdfunding campaign is for the video game Star Citizen, which has passed US$100MM, so Jono is betting that something will come along that is twice as successful. As I’ve started to sour on crowdfunding, as have others I know, it would have to be something pretty spectacular.
People will stop carrying cash. Well, duh. It is rare that I have more than a couple of dollars on me at any time. Now, this is different when I travel, but around town I pay for everything with a credit card. I get the one bill every month and I can track my purchases. Heck, even my favorite BBQ joint takes cards now (despite what Google says). Not sure how they will score this one.
Microsoft will open source the Microsoft Edge browser. Hrm – Microsoft has been embracing open source more and more lately, so this isn’t out of the realm of possibility. If I were a betting man I’d bet against it, but it could happen.
Bonus: he was going to originally bet that Canonical would get out of the phone business, but since Bryan beat him to it he went with smaller phones would outsell larger phones in 2016. It’s going to be hard to measure, but he gets this right if phones 5 inches and smaller move more units than phones bigger than that. I don’t know – I love my Nexus 6 and I think once you get used to a larger phone it is hard to go back, but we’ll see.
The gang seemed pretty much in agreement this year. No one joined me in the prediction that a large “cloud” vendor would have a significant security issue, but both Jono and Jeremy mentioned VR.
The next segment was on a product called the “Coin”. This is a device that is supposed to replace all of the credit cards in your wallet. Intriguing, but it has one serious flaw – it doesn’t work everywhere. If you can’t be sure it will work, then you end up having to carry some spare cards, and that defeats the whole purpose. Coin’s website “onlycoin.com” seems to imply that Coin is the only thing you need, but even they admit there are problems.
It also doesn’t seem to support some of the newer technologies, such as “Chip and PIN” (which isn’t exactly new). This means that Coin is probably dead on arrival. Jeremy brought up a competitor called Plastc, but that product isn’t out yet, so the fact that Coin is shipping gives it an advantage.
I don’t carry that many cards to begin with, so I have little interest in this. I’d rather see NFC pay technologies take off since I usually have my phone with me. I need more help with my “rewards” cards such as for grocery stores, and there are already apps for that, like Stocard. I don’t see either of these things taking off, but I give the edge to Plastc over Coin.
Note: Stocard is pretty awesome. It is dead easy to add cards and they have an Android Wear integration so I don’t even need to take the phone out of my pocket.
The last segment was an interview with Jorge Castro (the guy from Canonical’s Juju project and not the actor from Lost). Juju is an “orchestration” application, and while focused on the Cloud I can’t help but group it with Chef, Puppet and Ansible (a friend of mine who used to work on Juju just moved to Ansible). Chef has “recipes” and Juju has “charms”.
I don’t do this level of system administration (we are leaning toward using Ansible at OpenNMS just ’cause I love Red Hat) thus much of the discussion was lost on me (lost, get it?). I couldn’t help but think of my favorite naming scheme, however, which comes from the now defunct Sorcerer Linux distribution. In it, software packages were called “spells” and you would install applications using the command “cast”. The repository of all the software packages was called the “grimoire”.
The show closed with a reminder that the next BV would be Live Voltage at the SCaLE conference. I’ve seen these guys get wound up in front of 50 people, so I can’t imagine what will happen in front of nearly 1000 people. They have lots of prizes to give away as well, so be there. I can’t make it but I hope there is a live stream and a Twitter feed like the last Live Voltage show so I can at least follow along. I can’t promise it will be good, but I can promise it will be memorable.
So, overall not a great show but not bad. I don’t like the title, and if you listen to the Outro you might agree with me that “Huge Bag Full of Nickels” would have been a better one.
Just a quick note that the annual LinuxQuestions “Member’s Choice” poll is out. While I don’t believe OpenNMS is known to many of the members of that site, if you feel like showing it a little love, please register and vote.
I’m supposed to be on vacation today. My 50th birthday is coming up and I’m taking some time off to celebrate and reflect. But Jan Wildeboer posted a link to a critical article about a recent Paul Graham essay, and it touched a nerve. I wanted to write down a few thoughts about it while they were fresh.
In the essay, Graham boasts about increasing income inequality. It’s the new version of “greed is good”. He proposes that the best method for modeling democracy is that of the startup. I can’t agree with that.
Look, I work at a ten-year-old startup, but that isn’t what Graham means. He means the Silicon Valley startup which follows this basic model:
1) Come up with an idea
2) Get some rich people to give you money to pursue the idea
If you get past Step 2, this is considered “a success” because if a rich guy wants to give you money your idea must be good, right?
3) Burn through that money as fast as you can in search of turning your idea into something people will watch, download, share or buy
4) Run out of money
5) Get more money
6) Go back to step 4, eroding your share of the idea until the rich people own it
Success is then measured by an acquisition or IPO. Failure is that you can’t get past step 5 at some point.
I can’t remember who told me this, so I do apologize for not being able to credit you, but it was pointed out to me that a lot of startups tend to hit the US$5MM revenue mark and then stall. The reason, she said (and I do believe it was a she) was that startups are aimed at the culture of Silicon Valley, and quite frequently an idea that works in the Valley doesn’t work elsewhere.
The Valley consists mainly of young, white and Asian males. I’ve spent a lot of time in the Valley, and while I’ve met a lot of amazing people, I’ve met an equal number of assholes. The latter seemed to measure value strictly on wealth, and they pursue money above all else (“go big or go home”). Look, I think money is great, it can provide options and security, but the sole pursuit of money is not a good way to live. If I have any wisdom to impart after 50 years it would be to buy experiences, not things. The former will last a lot longer.
And this shameless pursuit of money, in both the Valley and on Wall Street, is creating a huge wealth inequality. From what I could find on the web, the average software engineer in the Valley makes around US$150K. Meanwhile, for the same year the average household income was a little over US$50K, about a third of that, and often with more than one person working.
People will defend those salaries because they say they are valuable, but if we are talking about a startup-driven economy, most startups both lose money and eventually fail. So I’m not sure it can be defended on value creation. Plus, as the wealth gap gets larger and larger, there is a real, non-zero chance of a whole lot of people with baseball bats storming those gated communities.
When I was younger and took my first Spanish class, the teacher told us that many countries in South and Central America, where Spanish is spoken, had turbulent political histories. She explained that it was often due to wealth inequality. When you have a small but significant group of rich people and a whole lot of poor people, those at the “top” don’t tend to stay there. She then pointed to the US and its large middle class, and argued that it was one of the reasons we’ve been around for 200+ years.
Also, back in the “old days”, if you asked a kid to list jobs you’d get things like teacher, policeman, doctor, janitor, nurse, mailman, lawyer, baker, fireman and, my favorite, astronaut.
Those are wonderful, productive roles in society. Sure, the doctor and lawyer made more money, but we didn’t look down on the janitor (I can remember really liking the janitor at our elementary school and thinking he was so nice to keep our school clean). But somewhere in the last ten to twenty years we seem to have lost our way as a culture, and now we look down on a lot of these jobs. The message seems to be “be scared and buy shit” and success is measured on how much shit you can buy.
It’s not sustainable. In finance the idea of “grow, grow, grow!” is considered the goal. In nature it’s called “cancer”.
This is one reason I love my job. At OpenNMS our business plan is simple: spend less than you earn. The mission statement is: help customers, have fun, make money.
A lot of that comes from the fact that we base our business around open source software. One of the traditional methods for securing profit in the software industry, especially the Valley, is to lock your customers into your products so they both become reliant on them and are unable to easily switch. Then you can increase your prices and … profit!
In order to do this, you have to have a lot of secrets. Your code has to be secret, your product roadmap needs to be secret, and you have to spend a lot of money on engineering talent because you have to find highly skilled specialists to work in such an environment.
Contrast that to open source. Everything is transparent. The code is out there. The roadmap is out there. This week is the CES show in Las Vegas where products will be “unveiled”. We don’t unveil anything – you can follow the development branches in our git repository in real time. While I am lucky to work with highly skilled people, they found OpenNMS, not the other way around, because they had something to offer. Our customers pay us a fair rate for our work because if it isn’t worth it to them, they don’t have to buy it.
This has allowed OpenNMS to survive and, yes, grow, over the last decade while a number of startups have come and gone.
This transparency is important to the “open source way”. It promotes both community and participation, and it is truly a meritocracy, unlike much of the Valley. In the Valley, value is measured more by how much money you make and who you know. In open source, it is based on what you get done and how well you advance the project.
[Note: just to be fair, I know a number of very talented people in the Valley who are worth every penny they make. But I know way more people who, in no way, earn their exorbitant salaries]
Another comment that triggered this post was a tweet by John Cleese about a quote from Charlie Mayfield, the Chairman of the John Lewis Partnership, a huge retail concern in the UK. He said “… maximisation of profit is not our goal. We aim to make sufficient profit.”
What a novel idea.
I’m sure my comments will be easily dismissed by many as just the ranting of an old fart, similar to “get off my lawn”. But I have always wished for OpenNMS to be, above all else, something that lasts – something that survives me and something that provides value long after I’m gone. Would I like more money? Of course I would, but for longevity the focus must be on creating value and providing a great experience for those who work on the project, and the money will come.
The last Bad Voltage of 2015 is a long one. Bryan is out sick, which is surprising since he only misses the shows with which I’m involved, so I guess he was really sick this time.
Since the first BV episode of the year includes predictions, the last one of the year is used to measure how well the guys did, and this was the topic of the first part of the program.
Aq predicted that mobile phone payments via NFC (such as Apple Pay and Android Pay) would increase greatly. They did, but by more than an order of magnitude beyond the amount he predicted. I’m not sure why he didn’t get credit for this one since he was correct, he just missed a zero at the end. He also predicted that Steam game consoles would be a big success. One of the issues with measuring these predictions is that it is hard to get verifiable numbers, but they all agreed that had Steam shipped a million consoles they would have mentioned it.
His “extry credit” prediction was that Canonical would get bought. They didn’t, so Aq didn’t do so well overall.
Then they moved on to Jono. He predicted there would be a large migration away from traditional sources of video, such as cable television and satellite, to streaming services such as Netflix and Hulu. This was again hard to verify (remember the quote that there are lies, damned lies and statistics). I think one of the reasons is that, especially in the case of cable, the vendors bundle so much together that it is usually cheaper to get television included as part of a package instead of just going Internet-only. Considering how many people talk about shows that are only available via streaming services and how clients for those services are now ubiquitous in televisions, it seems to be a safe bet that people are spending more of their time watching those services, at the cost of traditional shows, but it is very hard to measure with any level of objectivity.
Speaking of televisions, Jono also predicted a surge in 4K televisions to the point that they would be available for $500 or less. I haven’t seen it. The content is just not there yet, and while, yes, you can buy a 4K TV on Amazon for less than US$500, no one who really cared about the quality of that picture would buy one. The best 4K TV recommended by Wirecutter is still nearly US$1600.
So I don’t think he should get credit for that one.
His extra prediction was a large increase in “connected homes”. This was vague enough to be impossible to measure, but with products like those from Nest becoming more popular, it seems inevitable. I think there was definitely a jump in 2015, but then again going from nearly zero to only a handful would still be a huge increase, percentage-wise. I think it will be some time before a majority of homes in the US are “connected” in an Internet of Things fashion.
Jeremy’s predictions were next. He predicted that laptop and desktop computer sales would actually go up after years of decline, and while the rate of decline slowed, this was a miss.
The guys gave him his second one, which was that wireless charging for portable devices would become the norm (with a notable exception in Apple). While I’m charging my Nexus 6 right now on a TYLT charger, the latest generation of Nexus phones do not support wireless charging, and with the introduction of USB-C and “fast charging” I think wireless charging has peaked. Still, he got credit for it, so I think Aq should get credit for his mobile payments prediction.
Jeremy had two bonus predictions. One was that the markets would see both a peak in the NASDAQ index (which happened) and a correction of more than 10% (which also happened). His prediction of an Uber IPO did not happen, however.
Bryan wasn’t around to defend his predictions, but his first was the opposite of Aq’s: where Aq predicted that Steam consoles would be a huge success, Bryan predicted they would ship zero units. That didn’t happen, of course.
He also predicted that Ubuntu phone sales would be minor compared to other “open source” handset units such as those from Jolla. While no one would claim the Ubuntu phone was a runaway success, from what can be guessed from various sales figures, it seems to have sold about as well as those alternatives.
Finally, his bonus prediction was that ChromeOS would be able to run all Android apps natively. That, too, didn’t happen. It would have been interesting to hear his analysis of his performance, but he was pretty blunt in that he totally expected to lose.
So, Jeremy wins.
The second segment was a bit heady even for these guys. It concerns an announcement that the Linux Foundation will promote the creation of “block chain” tools.
Now, I kind of think I have my brain around block chains, but don’t expect me to explain them. The technique was invented as part of the Bitcoin protocol, and it is a type of ledger database that can confirm transactions and resist tampering. This can be useful, since it provides a very distributed and public way of running a list of transactions, but there is no requirement that the block chains themselves be made public.
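For the curious, the core idea is simpler than the hype suggests: each block stores a hash of the previous block, so changing any earlier entry invalidates everything after it. Here is a toy sketch in Python of that hash-chaining property. To be clear, this is my own illustration and nowhere near Bitcoin’s actual protocol (no proof of work, no distributed consensus, and the transaction strings are made up):

```python
import hashlib
import json

def block_hash(transactions, prev_hash):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps({"transactions": transactions,
                          "prev_hash": prev_hash}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    """Build a block whose stored hash covers its contents and its link."""
    return {"transactions": transactions,
            "prev_hash": prev_hash,
            "hash": block_hash(transactions, prev_hash)}

def chain_is_valid(chain):
    """Recompute every hash and verify each block links to its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["transactions"], block["prev_hash"]):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny three-block ledger (the "genesis" block links to all zeros).
chain = [make_block(["alice pays bob 5"], "0" * 64)]
chain.append(make_block(["bob pays carol 2"], chain[-1]["hash"]))
chain.append(make_block(["carol pays dave 1"], chain[-1]["hash"]))
print(chain_is_valid(chain))   # True

# Tampering with an earlier block breaks every link after it.
chain[0]["transactions"] = ["alice pays bob 500"]
print(chain_is_valid(chain))   # False
```

That tamper-evidence is the whole trick: you can hand the ledger to anyone and they can verify, block by block, that history hasn’t been rewritten.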
The idea is that we could promote this for use in, say, banking, and it could both improve speed and reliability.
I’m not sure it made a great topic for the show, however. This is esoteric stuff, and for once there were a lot of pregnant pauses in the discussion. I think the overall consensus was that this is a Good Thing™ but that in practical use the data won’t be very open.
The next segment was a review of the Titan USB cable – a hardened USB cable built to resist damage. While not bad for a last minute substitution since Bryan was unavailable to do his originally scheduled review, I thought the discussion went on way too long on an already long show. TL;DR: break a lot of USB cables? You might want to check this out. No? Don’t worry about it.
While the cable part of the Titan is well protected, the connector ends, a common source for failure, aren’t much different from a normal cable. Considering the cost, if you only damage a cable occasionally, it probably isn’t worth it to get a Titan.
At least it wasn’t about that $500 gold HDMI cable. The thing I love about digital is that it pretty much works or doesn’t work. I used to agonize over analog speaker cable, but cable quality is considerably less important in a purely digital realm.
The final segment concerned an apparent conflict of interest around the Linux Foundation’s role in the lawsuit between the Software Freedom Conservancy and VMware concerning GPL violations. There are a lot of corporate interests involved with the Linux Foundation, and the general question is whether the Foundation is more concerned with protecting those interests than with software freedom.
My own experience with GPL enforcement is that it is a shit job. Many people think that if the software is “free” they should be able to do whatever they want with it, and so they don’t understand the problem when some third party decides to commercialize that hard work.
Next, discovery is a pain. If you can see the code, it is somewhat easy to determine whether it is the same as or different from another piece of code, but the problem with GPL enforcement is that the code in question is usually closed. Discovery costs a lot of money as well, and money is not something a lot of open source projects have in abundance.
Finally, even if you have a case, getting a judge that can understand the nuances of the issue is harder still. Without such an understanding, it is both hard to win the case as well as to get damages. Even if you succeed, the remedy might just mean open sourcing part of the infringing code with no monetary damages.
When you look at it, pursuing a GPL violation is a thankless job that most projects can’t even consider. But it is incredibly important to the future of free software that those who create it have the power to determine under what conditions their work can be used. It is why we donate to the Software Freedom Conservancy. They are fighting the good fight, in very much a David and Goliath scenario, for the rights of everyone involved with free software. There are not many people up to that task.
For example, it appears that the car manufacturer Tesla is in violation of the GPL. Tesla is popular and well funded. There are very few people, especially those in the technology industry, who wouldn’t want to own a Tesla. So, do you want to sue them? First, they will bury you in legal procedures that will drain what little funds you have. Next, people will be mad at you for “attacking” such a cool company. Third, your chance for success is slim.
Now I don’t have any experience with the Linux Foundation. I don’t know anyone there and I’ve never been to their conferences. I think they can play an important role in acting as a bridge between traditional corporations and the free and open source software community. It seems to me that they are at a crossroads, however. If they allow large companies like VMware to control the message, then they will eventually become just another irrelevant mouthpiece for the commercial software industry. Yes, that stand may cost them contributions in the near term, but if they truly want to represent this wonderful environment that has grown up around Linux, they have to do it.
I just went and looked up the compensation of the officers of the Linux Foundation. This is an organization with income around US$23MM per year (in 2014). The Executive Director makes about US$500K per year, the COO a little more than that, and there are a number of people making north of US$200K. In fact, of the roughly US$7.5MM salary expense, a third of that went to eight people. Considering that much of the Linux Foundation income comes from corporate donations, I think these eight would have a strong incentive to act in a way to protect those donations, even at the expense of Linux and open source as a whole.
Let’s compare that to the Software Freedom Conservancy. For the same time period they had about US$868K in total revenue, so about 1/30th of that of the Linux Foundation. They only have one listed employee, Bradley Kuhn, with a reasonable salary of US$91K a year (with total compensation a little north of US$110K).
Who would you trust with defending your rights concerning free software? Eight people who together make more than US$2.5MM a year from corporate sponsors or one guy who makes US$100K?
It’s funny, I wasn’t very upset about this segment when I listened to it, but now that I’m investigating it more, it is starting to piss me off. I expect someone in the Valley to defend those high salaries for the Linux Foundation as part of doing business in that area, so I looked up a similar organization, the Wikimedia Foundation. It is twice the size of the Linux Foundation, yet its Executive Director makes around US$200K/year.
I’m going to stop now since I’ll probably write something I’ll regret. For full disclosure I want to state that I’ve known Bradley Kuhn for several years, and even though we tend to disagree on almost everything, I consider him a friend. I also know that Karen Sandler joined the Software Freedom Conservancy in a paid role in 2015, so their salary expenses will go up, but I’d bet my life that she isn’t making US$500K/year. Finally, remember that if you shop at Amazon you can go to smile.amazon.com and choose a charity to have a small portion of your purchase donated to it. I send mine to, you guessed it, the Software Freedom Conservancy.
Getting back to Bad Voltage, the show ended with a reminder that the “best Live Voltage show ever” will happen at the end of the month at the Southern California Linux Expo conference in Pasadena. You should be there.
Since the next show will be about predictions for 2016, I’m going to throw my two into the ring.
First, a well known cloud service will experience a large security breach that will make national headlines. I won’t point out possible targets for fear of getting sued, but it has to happen eventually and I pick this to be the year.
Second, by Christmas, consumer virtual reality will be the “it” gift. We’re not there yet, but I got to play with a Samsung Gear VR headset over the holidays and I was impressed. It is a more polished version of Google Cardboard although still based on a phone, and it is developed by Oculus, the current leaders in this type of technology.
While the resolution isn’t great yet, the potential is staggering. I watched demos that included a “fly along” with the Blue Angels, and although the resolution reminded me of early editions of Microsoft’s Flight Simulator, it was cool, if a little nauseating.
There was a Myst-like game called “Lands End” that was also enjoyable, although once again the low resolution detracted from the experience.
Then I played Anshar Wars. It was a near perfect VR experience. In this first-person space shooter, you fly around and dogfight with the bad guys while dodging asteroids and picking up power-ups. No headaches, no complaints about resolution, it was something I could have played for hours. Note that it helped to be in a swivel chair ’cause you swing around a lot.
So those are my predictions. Since I doubt I’ll have the stamina to keep up with these posts, I’ll probably never revisit them, but the chances will improve if I’m right.