Some developers are fouling up open-source software

Whether from ethical concerns, a desire for more money, or simple obnoxiousness, a handful of developers are ruining open-source for everyone.


One of the most amazing things about open-source isn't that it produces great software. It's that so many developers put their egos aside to create great programs with the help of others. Now, however, a handful of programmers are putting their own concerns ahead of the good of the many and potentially wrecking open-source software for everyone.

For example, Brandon Nozaki Miller, an npm package maintainer known on GitHub as RIAEvangelist, wrote and published an open-source npm package called peacenotwar. It did little but print a message for peace to desktops. So far, so harmless. 

Miller then inserted malicious code into the package to overwrite users' filesystems if their computer had a Russian or Belarusian IP address. He then added it as a dependency to his popular node-ipc program, and the result was instant chaos. Numerous servers and PCs went down as they updated to the newest code and had their drives wiped. 
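Incidents like this are a reminder that accepting semver ranges means silently pulling in whatever a maintainer publishes next. One partial defense is pinning exact dependency versions in package.json, so an update like this has to be opted into. A minimal sketch (the version number shown is illustrative, not an endorsement of any particular release):

```json
{
  "dependencies": {
    "node-ipc": "9.2.1"
  }
}
```

With an exact version (no `^` or `~` prefix) and a committed lockfile, `npm install` keeps resolving the same release until a human deliberately changes it. Pinning doesn't remove the risk, but it turns a silent supply-chain update into a reviewable diff.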

Miller's defense, "This is all public, documented, licensed and open source," doesn't hold up. 

Liran Tal, the Snyk researcher who uncovered the problem, said, "Even if the deliberate and dangerous act [is] perceived by some as a legitimate act of protest, how does that reflect on the maintainer's future reputation and stake in the developer community? Would this maintainer ever be trusted again to not follow up on future acts in such or even more aggressive actions for any projects they participate in?" 

Miller is not a random crank. He's produced a lot of good code, such as node-ipc and Node HTTP Server. But can you trust any of his code to not be malicious? While he describes it as "not malware, [but] protestware which is fully documented," others venomously disagree. 

As one GitHub programmer wrote, "What's going to happen with this is that security teams in Western corporations that have absolutely nothing to do with Russia or politics are going to start seeing free and open-source software as an avenue for supply chain attacks (which this totally is) and simply start banning free and open-source software -- all free and open-source software -- within their companies." 

As another GitHub developer with the handle nm17 wrote, "The trust factor of open source, which was based on the good will of the developers is now practically gone, and now, more and more people are realizing that one day, their library/application can possibly be exploited to do/say whatever some random dev on the internet thought 'was the right thing they to do.'"

Both make valid points. When you can't use source code unless you agree with the political stance of its maker, how can you use it with confidence? 

Miller's heart may be in the right place -- Slava Ukraini! -- but is open-source software infected with a malicious payload the right way to protest Russia's invasion of Ukraine? No, it's not. 

The open-source method only works because we trust each other. When that trust is broken, no matter for what cause, open-source's fundamental framework is broken with it. As Greg Kroah-Hartman, the Linux kernel maintainer for the stable branch, said when students from the University of Minnesota deliberately tried to insert bad code into the Linux kernel for an experiment in 2021: "What they are doing is intentional malicious behavior and is not acceptable and totally unethical."

People have long argued that open-source licenses should include ethical provisions as well. For example, 2009's Exception General Public License (eGPL), a revision of the GPLv2, tried to forbid "exceptions," such as military users and suppliers, from using its code. It failed. Other licenses, such as the JSON license with its sweetly naive "the software shall be used for good, not evil" clause, are still around, but no one enforces them.

More recently, activist and software developer Coraline Ada Ehmke introduced an open-source license that requires its users to act morally. Specifically, her Hippocratic License adds to the MIT open-source license a clause stating: 

"The software may not be used by individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups in violation of the United Nations Universal Declaration of Human Rights."

Sounds good, but it's not open source. You see, open-source is in and of itself an ethical position. Its ethics are contained in the Free Software Foundation's (FSF) Four Essential Freedoms. These are the foundation for all open-source licenses and their core philosophy. As open-source legal expert and Columbia law professor Eben Moglen said at the time, ethical licenses can't be free software or open-source licenses:

"Freedom zero, the right to run the program for any purpose, comes first in the four freedoms because if users do not have that right with respect to computer programs they run, they ultimately do not have any rights in those programs at all.  Efforts to give permission only for good uses, or to prohibit bad ones in the eyes of the licensor, violate the requirement to protect freedom zero." 

In other words, if your code can't be used for any purpose, it isn't truly open-source. 

Another, more pragmatic, argument against forbidding one group from using open-source software is that blocking on something such as an IP address is a very broad brush. Florian Roth, Head of Research at the security company Nextron Systems, considered "disabling my free tools on systems with certain language and time zone settings," but finally decided not to. Why? Because by doing so, "we would also disable the tools on systems of critics and freethinkers that condemn the actions of their governments." 

Unfortunately, it's not just people trying to use open-source for what they see as a higher ethical purpose who are causing trouble for open-source software. 

Earlier this year, JavaScript developer Marak Squires deliberately sabotaged his obscure but vitally important open-source JavaScript libraries 'colors.js' and 'faker.js'. The result? Tens of thousands of JavaScript programs blew up.

Why? It's still not entirely clear, but in a since-deleted GitHub post, Squires wrote, "Respectfully, I am no longer going to support Fortune 500s ( and other smaller-sized companies ) with my free work. There isn't much else to say. Take this as an opportunity to send me a six-figure yearly contract or fork the project and have someone else work on it." As you might imagine, this attempt to blackmail his way to a paycheck didn't work out so well for him. 

And then there are people who deliberately put malware into their open-source code for fun and profit. For example, the DevOps security firm JFrog discovered 17 new malicious JavaScript packages in the npm repository that deliberately attack and steal a user's Discord tokens. These tokens can then be used to take over accounts on Discord, the communications and digital distribution platform.

Besides creating new malicious open-source programs that look innocent and helpful, other attackers are taking old, abandoned software and rewriting it to include cryptocurrency-stealing backdoors. One such program was event-stream, which had malicious code inserted into it to steal bitcoin wallets and transfer their balances to a server in Kuala Lumpur. There have been several similar episodes over the years.

With each such move, faith in open-source software is worn down. Since open-source is absolutely vital to the modern world, this is a lousy trend. 

What can we do about it? Well, for one thing, we should consider very carefully indeed when, if ever, we should block the use of open-source code. 

More practically, we must start adopting the Linux Foundation's Software Package Data Exchange (SPDX) standard and software bills of materials (SBOMs). Together, these will tell us exactly what code we're using in our programs and where it comes from. Then we'll be much better able to make informed decisions.
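For a sense of what that metadata looks like, here is a minimal hand-written SBOM fragment in SPDX's tag-value format; the package name, version, and download URL are illustrative, not a statement about any vetted release:

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom

PackageName: node-ipc
SPDXID: SPDXRef-Package-node-ipc
PackageVersion: 9.2.1
PackageDownloadLocation: https://registry.npmjs.org/node-ipc/-/node-ipc-9.2.1.tgz
PackageLicenseConcluded: MIT
```

Each package entry records where the code came from and under what license -- exactly the information a security team needs when a dependency like node-ipc makes the news.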

Today, all too often, people use open-source code without knowing exactly what they're running or checking it for problems. They assume all's well with it. That's never been a smart assumption. Today, it's downright foolish. 

Even with all these recent changes, open-source is still better and safer than the black-box proprietary software alternatives. But we must check and verify code instead of blindly trusting it. It's the only smart thing to do going forward.
