In Tom Espiner's story about former White House cyber-security adviser Howard Schmidt and liability for software flaws, a security representative of the BCS (British Computer Society) said that Schmidt had suggested that software developers be held personally accountable for the software they write. The article, mistakenly titled "Expert: Hold developers liable for flaws," should have used the term "accountable" instead of "liable" and will be corrected. Although the BCS doesn't endorse that extreme level of accountability for individual developers, it does think the software companies they work for should be held responsible. The word "liability" wasn't used here, but we should be careful in our choice of words. While I'm always in favor of accountability, liability is a slippery slope that we should not entertain.
Many advocates in the Open Source community favor software liability lawsuits because they see them as a way of bringing down Closed Source software companies, which sell a product and can therefore be sued. The problem with that is: where does this slippery slope end? If a developer gives away his or her software for free and a user is hacked because of a security vulnerability in that free software, does the fact that the software was free protect the developer from legal liability once we've set a legal precedent for software malpractice? There was even a case where a medical doctor was sued for malpractice because a man needing emergency medical care died under his voluntary care outside of the hospital. Medical malpractice has already crippled the medical industry with multi-million and even billion-dollar lawsuits, and doctors are leaving the profession in droves because of skyrocketing malpractice insurance premiums. Is this really the fate we want for the software industry?
I've always favored responsible disclosure and accountability for software companies, but that accountability should be a reasonable agreement between the software maker and the consumer, be it an individual or a company. I'm talking about a set of guidelines, such as the ones below, that would aim to avoid litigation if everyone does their job.
- Software makers who sign on to this level of agreement promise to patch all independently confirmed critical security flaws in the software they sell or distribute in a timely manner.
- Security researchers who find bugs must notify the software maker in advance of any public disclosure and give adequate time for patch creation, internal testing, the release cycle, the customer test cycle, and customer deployment. This means up to 30 days for the software maker to write and test the patch. After the patch is written, allow 0 to 30 days for it to make the next monthly patch release. Once the patch is released, allow 30 days for businesses and consumers to test and deploy it. The total grace period would therefore vary from 60 to 90 days, depending on where the patch lands in the monthly release cycle. Zotob struck because security researchers publicly released the exploit code within 24 hours of the patch release, and customers were caught unprepared as the worm ravaged Windows 2000 computers. As far as I'm concerned, this sort of behavior should be illegal in any civilized nation.
- If the software maker misses its SLA (Service Level Agreement), it should refund all licensing and support fees ever collected if any exploit occurs during the tardy period.
- Researchers who publicly disclose exploits before the grace period expires should be legally liable. On the other hand, researchers who follow these guidelines and release exploit code only after the grace period should be immune from legal bullying. I've personally known researchers who feared even talking about a vulnerability because they were afraid of being sued by the big bad software company.
- Any exploit disclosed after the 60- to 90-day grace period falls on the shoulders of the users themselves if they didn't apply the patch that was readily available. This means the company or organization that failed to apply the security patch that led to a compromise of its customers' data should be the one held liable.
- Now don't get me wrong; ultimate liability should fall on the perpetrators of the crime itself, and if we ever get our hands on them, I say lock them up and throw away the key. From a realistic standpoint, though, we usually don't catch them, and we as consumers have a right to a certain level of protection from the overseers of our data. But the difference between civil liability and criminal liability should be clear.
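The grace-period arithmetic in the guidelines above (up to 30 days to write and test the patch, 0 to 30 days until the next monthly release, then 30 days for customer deployment) can be sketched as a small calculation. This is only an illustration of the timeline, not part of any real policy; the function names and the fixed day-of-month for the patch release are assumptions for the example.

```python
from datetime import date, timedelta

PATCH_DEV_DAYS = 30  # up to 30 days for the vendor to write and test the patch
DEPLOY_DAYS = 30     # 30 days for customers to test and deploy the released patch

def next_release_on_or_after(d: date, release_day: int = 10) -> date:
    """Next monthly patch release, assumed here to fall on a fixed day of the month."""
    if d.day <= release_day:
        return d.replace(day=release_day)
    # Roll over to the release day of the following month.
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    return date(year, month, release_day)

def earliest_disclosure(report_date: date, release_day: int = 10) -> date:
    """Earliest date a researcher could publish under the timeline sketched above."""
    patch_ready = report_date + timedelta(days=PATCH_DEV_DAYS)
    released = next_release_on_or_after(patch_ready, release_day)
    return released + timedelta(days=DEPLOY_DAYS)

# Example: a flaw reported June 1, with a monthly patch release on the 10th.
reported = date(2005, 6, 1)
disclosure = earliest_disclosure(reported)
grace_days = (disclosure - reported).days
print(disclosure, grace_days)  # the grace period always lands between 60 and 90 days
```

Depending on how close the finished patch lands to the next monthly release date, the total window stretches or shrinks between the 60- and 90-day bounds described above.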
In my opinion, these are very reasonable guidelines. Any software company or individual who sells or distributes free software should have the option to adopt these guidelines or not. However, any software maker who refuses to adopt them should automatically be barred from consideration in any software purchase by any company or organization responsible for sensitive data. What this means is that anyone is still free to write bad software and anyone is still free to buy bad software; they just shouldn't expect to be compliant with anything like PCI, HIPAA, or SOX, nor should they expect sympathy from a jury.