Ivan Ristic posted a story today on his blog highlighting some changes set to go into effect in England sometime this year. The changes to the Computer Misuse Act (CMA) would appear to put security researchers and consultants in the UK at risk of being considered criminals. Ristic quotes the key proposed additions below:
3A Making, supplying or obtaining articles for use in offence under section 1 or 3
- A person is guilty of an offence if he makes, adapts, supplies or offers to supply any article intending it to be used to commit, or to assist in the commission of, an offence under section 1 or 3.
- A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under section 1 or 3.
- A person is guilty of an offence if he obtains any article with a view to its being supplied for use to commit, or to assist in the commission of, an offence under section 1 or 3.
- In this section “article” includes any program or data held in electronic form.
- A person guilty of an offence under this section shall be liable—
  - on summary conviction in England and Wales, to imprisonment for a term not exceeding 12 months or to a fine not exceeding the statutory maximum or to both;
  - on summary conviction in Scotland, to imprisonment for a term not exceeding six months or to a fine not exceeding the statutory maximum or to both;
  - on conviction on indictment, to imprisonment for a term not exceeding two years or to a fine or to both.
The main issue is the ambiguity of the word "likely" in "[...] likely to be used to commit, or to assist in the commission of, an offence [...]", which effectively criminalises a large number of security professionals who are just doing their jobs.
I think we all know that the tools a security researcher/consultant uses are, for the most part, the same as those hackers use (likely minus some ninja scripts and tools that some hackers keep private). Security research and consulting is my full-time job; however, I also consider myself a hacker. The only time I'm doing anything malicious is when I'm playing pranks on friends (mostly Mike Wood). To me, what defines someone as a hacker is their mindset, and I think most people understand that these days. Governments, apparently, are just a few years behind the curve, and that is dangerous to all of us in this profession.
It seems like every law that comes out related to computer security is either so vague it loses its bite (see PCI), or so broad it allows people who aren't guilty of anything evil to be implicated and treated as criminals (see my recent post on the new laws to crack down on child pornography). It's a scary world we live in. I'm not big into politics, but as a US citizen this is a disturbing trend to me. I wonder if the US is beginning to consider similarly non-sensical (I don't even think that's a real word, that's how fired up I am right now) laws.
As Ristic mentions:
A much bigger problem is that the new law leaves too much to interpretation. The risk is just too high: do you want to be in a position to defend your actions in front of a jury that will almost certainly fail to understand the subject matter? Even if you are successful in your defence, such an event will require significant financial resources, disrupt your life, cause you and your family endless pain, and most certainly kill your career.
I mean, how do you even go about hiring a lawyer if you're accused of something like this? I'd actually feel more comfortable representing myself with my very limited knowledge of the law than I would hiring an attorney with a very limited knowledge of computers.
Further, Ristic mentions the possible outcomes of this law coming into effect:
Possession is not likely to be criminalised (from the Guidance: "[...] does not criminalise possession per se unless an intent to use it to commit one of the other offences in section 1 or 3 CMA can be shown."), so it will probably still be safe to research computer security in private, but exchanging information with others might become dangerous. With the threat of prosecution hanging over their heads, most people in the UK are likely to stop publicly discussing what they know.
Full disclosure, no matter what you think of it, will be criminalised, but it won't go away. Those who believe in it will continue to release vulnerability information, but they will likely take precautions to keep their identities secret.
Tool authors will have a choice to make. If they don't change their distribution practices they will risk becoming a target of investigation and, possibly, prosecution. The Guidance seems to imply the safe way to distribute the tools is via a vetted list of computer security professionals. This is not feasible for most tool writers, as they cannot afford the overhead of such a process. On top of that, even if such practices are followed, there is still no guarantee that you won't be prosecuted. Each case will be reviewed on its own merits. Thus the alternatives, ending further development or moving the tools underground, seem far more likely.
So, great idea: let's just go ahead and nab all those evil security researchers and consultants who keep trying to make our systems more secure, since they're public enough to be caught. Then we'll completely miss all the underground hackers who are actually doing the evil deeds, because we have neither the time, the people, nor the skill to catch them.
Way to think it through guys. Well played.