Australia remains on the Enemies of the Internet watch list. Sure, we're no Syria or North Korea, but we deserve to be watched. Mandatory internet censorship is still government policy, and the "voluntary" filtering based on Interpol's blacklist is being propped up by dodgy statistics.
The enemies list was published by Reporters Without Borders on 12 March, the World Day Against Cyber-Censorship, and highlights nations that restrict online freedom of expression.
Those listed as internet enemies are the usual suspects, including Iran, China, North Korea and Cuba. Those flagged as "under surveillance", like Australia, include "supposedly democratic countries [which have] continued to set a bad example by yielding to the temptation to prioritise security over other concerns and by adopting disproportionate measures to protect copyright".
Reporters Without Borders points to Labor's ill-defined mandatory internet "filtering" policy, which hasn't gone away; it's just on the back burner.
The ALP's National Platform — the document that defines party policy until it's overturned at its national conference or Labor is voted out of office and a new government changes everything — still sets out in chapter 11 the entire policy, such as it is:
"Labor supports the National Classification Code, which classifies content against the standards of morality, decency and propriety accepted by reasonable adults. The principles of classification should apply on a platform neutral basis. Labor recognises the necessity of an independent and accountable review process for the list of URLs to be blocked by mandatory filtering. Labor believes mandatory ISP level filtering should be limited to Refused Classification content according to the National Classification Code. Labor does not support the introduction of mandatory ISP filtering that would lead to significant degradation of network speeds."
Widely recognised problems with the definition of Refused Classification (RC) and scope creep over time led to mandatory filtering being put on hold in July 2010 while the Australian Law Reform Commission (ALRC) conducted a review of content classification across all media.
The ALRC's final report was published on 1 March. It recommended that RC should be renamed "Prohibited" and be defined more narrowly. In particular, the government should review the ban on the depiction of sexual fetishes in films and "detailed instruction in the use of proscribed drugs", and consider limiting the ban on content that "promotes, incites or instructs in matters of crime" to "serious crime".
On the surface that sounds like it'll fix the problem. But note the weasel words: recommend, should, review, consider.
New laws would still have to be drafted and make it through parliament.
A government is under no obligation to follow ALRC recommendations. Recommendations on classification have been ignored in the past, when a porn-viewing session shocked conservative MPs with a variety of non-missionary-position recreations.
A government might well cave in to pressure from ultra-conservative groups like the Australian Christian Lobby (ACL) — a group that already boasts of its frequent closed-door meetings with Communications Minister Senator Stephen Conroy.
The new Prohibited category could even end up being more restrictive than RC. Nothing is guaranteed.
Consider also the recommendation that "content providers must not sell, screen, provide online, or otherwise distribute [my emphasis] Prohibited content", including "unclassified content that, if classified, would be likely to be classified Prohibited".
Couldn't this provide a legal mechanism for the comprehensive monitoring of otherwise private forums and even personal communication?
Meanwhile, without any public policy debate, but with plenty more of those closed-door meetings, some internet service providers — currently Telstra, Optus and CyberOne in Canberra with more to come — have started "voluntary" filtering using Interpol's blacklist of child exploitation material.
Let that completely undemocratic process slide for now. Focus on that Interpol list and its implementation.
Again, on the surface it sounds good. The Interpol list is billed as a "worst of" list: depictions of real children under the age of 13, or who appear to be, in sexually exploitative situations ranked as "severe", reviewed by two or more independent law enforcement agencies in different countries. All good.
But when a site is added to the blacklist, the entire domain is blocked. One dodgy video among legitimate millions on a social media site can lead to the entire service being blacked out.
That's not a problem for established operations like YouTube and Facebook, who already have reasonably efficient take-down procedures. But what about start-ups, smaller players or otherwise legitimate sites in countries without good Interpol liaison? What about Queensland dentists who don't understand any of this?
"If a hosting provider is a bit recalcitrant about taking content down, and they leave content that is known to be inappropriate up, they will then be put on the list, and amazingly, within minutes the actual content will be removed," Mark Riley, chief technology officer at filter vendor ContentKeeper, said last year. "Having that amount of leverage over a hosting provider seems to have very, very positive outcomes."
Sure, but is that really due process?
More recently we've seen news stories claiming that at Telstra alone 84,000 attempts to access child exploitation material have been blocked, figures presented as proof of the extent of child exploitation. A decline in the numbers over time has been explained as the filter working to dissuade paedophiles.
According to the documents used by the Australian Federal Police, that number is "purely a count of redirection triggers". A connection, for whatever reason, was redirected to the Interpol block page.
But was the user knowingly trying to access child exploitation material, or some other content on the site? In other words, did they have criminal intent?
What proportion of these "redirection triggers" are completely innocent? How many of them represent unique users? How many of them are even human, rather than automated tools like web indexing robots or even hackers scanning the network?
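The gap between a raw trigger count and a meaningful figure is easy to demonstrate. Here's a minimal sketch, with an entirely invented log and crude user-agent heuristics (no real filter data was available for this piece), showing how the same traffic yields very different numbers depending on what you count:

```python
# Illustrative only: a toy "redirection trigger" log.
# Each entry is (source_ip, user_agent) for one redirect to the block page.
log = [
    ("203.0.113.5", "Mozilla/5.0"),      # one person...
    ("203.0.113.5", "Mozilla/5.0"),      # ...reloading the same page
    ("198.51.100.9", "Googlebot/2.1"),   # a web-indexing robot
    ("192.0.2.77", "masscan/1.3"),       # a network scanner
    ("203.0.113.80", "Mozilla/5.0"),     # another person, intent unknown
]

# Crude markers for automated clients (an assumption for this sketch).
BOT_MARKERS = ("bot", "crawler", "spider", "scan")

raw_triggers = len(log)
human_hits = [(ip, ua) for ip, ua in log
              if not any(m in ua.lower() for m in BOT_MARKERS)]
unique_humans = len({ip for ip, _ in human_hits})

print(raw_triggers)   # 5 "blocked attempts" reported...
print(unique_humans)  # ...but only 2 distinct non-bot sources
```

Five triggers become two plausible humans, and even those two tell us nothing about what they were actually trying to reach, or why.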
We simply don't know what these numbers mean, yet they're being waved at politicians as proof that the system works.
So there we have it. An ill-defined mandatory censorship system is still official government policy, we don't know how ongoing policy reviews might turn out, and, after zero democratic process, a system has been installed that by design is almost guaranteed to cause collateral damage.
The Reporters Without Borders report may be larded with words like "repressive" and "fraught with danger", but it certainly has a point. No one in government — or opposition for that matter, apart from the Greens — seems to be backing up their lip service to the importance of human rights with actual action.
Australia deserves to be watched.