A friend of mine who works in IT passed on some surprising news the other day.
According to the ContentKeeper software used by his current bosses to police employee net access, my personal Web site, Gusworld, was deemed unsuitable for staff to visit.
My site allegedly fell into two banned categories. While I had no argument with it being classified as entertainment, I was just a tad surprised to learn that it was also branded as 'crime/terrorism'. For a site whose most popular features are information about ABBA and Kirsty MacColl, this was an odd classification, to say the least.
ContentKeeper's site doesn't include a handy "your filtering system is rubbish" link, so I sent off a general e-mail query asking what the basis for this decision was. I received a very prompt reply from the company's managing director David Wigley -- something I doubt would have happened if I wasn't a journalist.
Apparently, the fact that I've written articles in the past about Internet security "made the job of the AI engines more difficult than usual due to numerous mention[s] of trigger words and phrases". Following my complaint, the site was reclassified into a more innocuous category.
Whether anyone from a well-known financial institution can actually visit my site matters not a jot in the grand scheme of things, but this experience reminded me of the profound limitations of such censorship systems.
Firstly, their categories are a touch on the broad side. Secondly, their reliance on automated processing means that they're going to make a whole bunch of mistakes. But the most disturbing aspect is that you have no way of knowing you've been blocked by a given system unless someone else happens to tell you.
Companies that use this software are essentially trusting a third party not only to block inappropriate content, but also not to inadvertently block information that might be useful.
In an era when information is ever more critical, that's a pretty big risk.