Killing Open Source

NOTE: As usual, this blog expresses my opinions and not those of my employer

I first got exposed to modern open source in the late 1990s, and I think it was via Red Hat Linux version 5, or perhaps version 6 (note: this was before Red Hat Enterprise Linux).

I say “modern” open source because in the late 1970s/early 1980s a lot of computer code was shared via hobbyist magazines. While not expressly open source, you could see the code (’natch) and obviously modify it. I was too young to worry about whether or not I had the freedom to share those modifications, and as the only young person in my small town with a computer there weren’t any people to share it with, if I could.

Using Red Hat, I saw the potential. To me it was just a matter of time before I could replace all of my tech with open source alternatives, and that it would just keep getting better year after year.

What I never expected was a coordinated effort to kill open source outright.

If this were a straightforward murder, I think we would all see it coming and could react accordingly, but the situation is more like a death by a thousand cuts. Here is a breakdown of the various areas in which open source is being assaulted.

Legal Challenges

One would think that publishing free software under an open source license with “no warranty, express or implied” would be enough to settle any legal issues with using that code, but that would be wrong.

The first main attack came from Europe, of all places. The Cyber Resilience Act (CRA), proposed in 2022, requires producers of all products with “digital elements” to provide incident reports and automatic security updates or face stiff financial penalties. To me, the biggest blind spot in the CRA is the assumption that all software is produced by large commercial companies that will have the ability to meet the demands imposed by this legislation, when that is simply not true, especially in open source.

Due to heavy lobbying, the CRA was amended to add an “exclusion of open source projects, communities, foundations, and their development and package distribution platforms”. This is great, but it leaves out organizations that produce open source software and also try to commercialize it, as well as those that redistribute open source code as part of their business model. To succeed, open source must have viable business models, yet not every project fits within a foundation or non-profit, and many are too small to spend their limited resources trying to meet these requirements.

But at least there is some sort of “carve out” for open source. Prior to those amendments, the CRA would have had a chilling effect on open source development, and there is no guarantee that something similar, without open source exceptions, won’t be added later.

Speaking of later, in 2025 the US state of California unanimously passed the California Digital Age Assurance Act, which requires anyone who provides an “operating system” to perform age verification, with no open source exception. Interestingly, the law was reportedly pushed heavily by Meta, which didn’t want to have to implement its own age verification.

This follows legislation introduced in the UK in 2023 called the Online Safety Act. It requires age verification, but at the application layer. One can only assume that Meta didn’t want to have to deal with something similar in the US, so they just pushed it down the stack.

“Safety”, especially when it comes to protecting children, is a frequent battle-cry of companies with a vested interest in censorship in general, or in raising the bar so high that only the current, entrenched players can participate.

How popular open source operating systems like Fedora and Ubuntu will deal with this is unknown. Some projects, such as GrapheneOS, have stated that they are not going to comply. While I applaud this stance, I am not sure how it will affect the recently announced deal under which Motorola would provide handsets with GrapheneOS pre-installed.

I am not a lawyer, and I’m not sure how this is going to play out. I could see, at least in the US, a First Amendment claim that these regulations stifle expression. Outlawing open source outright would likely fail, but by pushing open source development out of the private and commercial sectors and into foundations, these laws ensure that only the large companies that consume and commercialize open source will realize the full benefit of permissively licensed code, stifling competition by denying smaller companies a foothold in the market.

Restrictions on Hardware

Open source grew out of the idea of general purpose computing. It’s the idea that computers aren’t designed, out of the box, for a single purpose, and that a single device is able to run different software in order to perform different tasks. Open source puts the power to make that hardware do whatever the user wants in the hands of the user, and not the organization that built the hardware.

When Linux was introduced, most consumer-grade computers ran Windows. If you had a Windows machine, it was possible to install Linux. There were no barriers to doing this. That didn’t mean it was easy, but there was nothing in the hardware to expressly forbid the installation of open source software.

But the operating system is not the only software in a computer. In order to run the OS you need a basic input/output system (BIOS), and most of that code is proprietary. There have been a number of efforts to create open source BIOS replacements, such as coreboot, with varying levels of success.

In most modern hardware, the BIOS has been replaced by the Unified Extensible Firmware Interface (UEFI). UEFI also introduced the idea of “Secure Boot”, which means that only code that has been digitally signed will be run. Luckily, the technology allowed most open source operating systems to run under Secure Boot, although there was a bit of a delay, as only Windows was supported at launch.

The news is not so good for mobile devices. While Apple devices are locked down by design, Android-based devices should be able to run alternate software builds. However, one of the leading handset manufacturers, Samsung, recently removed the ability to “unlock” its devices in order to install custom software, and other manufacturers may follow suit.

While not directly hardware related, Google has announced that, starting in September, adding software to Android devices will require a new verification process, controlled by Google. So if there is a particular piece of software you want to use, the developer of that software will need to register. This will limit the software available, as some creators may not want to go through the process, and others, say those building applications for tracking illegal government activity, may need to remain anonymous.

It is not hard to imagine the industry getting emboldened by such moves and making it even harder to install open source on hardware. But then again, the point may be moot if current trends in the price of computer hardware continue.

According to PCPartPicker, in autumn of last year DDR5-6000 (2x32GB) memory cost a little over $200 USD. Now that price is over $1000 USD. There are similar trends for storage. This will price the more powerful devices out of the budgets of a lot of people. General purpose computing used to follow Moore’s Law, getting cheaper year over year. If these prices hold, more people will be forced to use software as a service rather than run their own instances, which in turn will reduce the use of open source software.

Of course those price hikes are driven by …

Generative AI

The impact of GenAI on open source is still being assessed, but the trend that is most worrisome to me is this idea of “clean room” replacements of open source applications.

The idea is simple: instead of using an open source project and having to abide by its license, why not use GenAI to replicate the functionality, producing a replacement free of licensing restrictions?

There are several issues with this. The first is that GenAI code assistants would never have been able to exist without being trained on the large corpus of available open source code. To then use this technology to get around licensing seems morally, if not legally, questionable.

In my past life I was involved in some “clean room” development, but that required hiring coders with zero exposure to the original code, keeping them separate from any influence by the original coders, and having them recreate the functionality, feature by feature.

Without a detailed list of the training data it is not possible to know if these models were trained on the original open source code, but the probability is high, at least for the more popular projects. Having “seen” the code means that you aren’t in a “clean room” environment.

The second issue, which seems to be getting glossed over, is that US courts have held that copyright is reserved for works created by humans. While I am not aware of a case that has established precedent for this with respect to computer code specifically, the current legal discourse indicates that AI-generated code cannot be protected by copyright. With prominent proponents of AI code claiming that they ship it to production without any review, much less substantial changes, I’m curious to see how this plays out. Perhaps licenses will become moot as GenAI use takes off and everything becomes public domain.

One must remember that open source is defined by copyright law. If a work cannot be covered by copyright, then open source licenses, both permissive and restrictive, can no longer be enforced.

In addition, there are the issues facing open source projects around GenAI-created code being submitted via pull requests. Projects are getting overwhelmed, especially when the code has obviously not been reviewed by a human being. This taxes the already limited resources of most open source projects and will hasten maintainer burnout.

Look, I am not anti-GenAI. The only two repositories in my GitHub account were coded with the help of GenAI. But it is a tool, not a replacement for skilled developers who have spent their lives honing their craft. If all you have is the GenAI hammer, then everything looks like a nail.

What’s Next?

I’m not sure how this will go, but I expect to see more and more restrictions on accessing resources on the Internet. Run your own mail server? Get ready to have your mail blocked by the big providers like Gmail. Access information via a browser? Get ready to be forced to use an app instead or lose functionality. Run Linux on the desktop? Well, that’s not “secure” so expect to be blocked. Do you want to put different firmware on hardware you own? Expect your device to be bricked.

Summary

I used to think that the main threat to open source was apathy. Most people would simply not care that they had no control over the code they were using.

While that still plays a role, I am seeing a concerted effort to make open source development legally problematic, to make general purpose computer hardware harder to purchase and use, and to make open source software irrelevant through vibe-coded alternatives.

I don’t have any solutions, but maybe pointing out that these challenges exist will help find one.