A close election race, a social media angle, and an allegation of fraudulent behaviour all in the one story? No wonder the claims that Australia's opposition leader and probable next prime minister Tony Abbott had bought fake Facebook fans, "Like" clicks, and Twitter followers made global news.
Whether Abbott's campaign team or independent supporters have actually done any fakery, or whether the other parties have, are questions for the political media. Personally, I think it's all a sideshow. A candidate having more Facebook likes or Twitter followers isn't something that's likely to persuade anyone to change their vote. As I've written previously about Klout and Kred scores, they're all what Will Scully-Power calls "vanity metrics".
Still, such vanity metrics are cheap and easy to fake, and cheap and easy to report upon. Even the dullest of dullards of the Canberra press gallery can manage to grasp a crayon and scrawl a yarn about who has the bigger number in some arbitrary and ultimately pointless metric. Yawn.
Here's something a little more interesting.
On Saturday, political tragics noticed a sequence of identical tweets from seven different Twitter accounts.
"The Greens Party new election ad is very ordinary. I'd be hiring a new creative director if I was the Greens. #AusVotes #AusPol #Greens," they said, illiterately oblivious to the existence of the subjunctive mood.
The original tweet must've been created by a human, and other humans may have passed it on, but later accounts in the chain look suspiciously fake.
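Spotting this kind of chain doesn't take sophisticated analysis. As a rough illustration (a toy sketch with made-up sample data, not any tool actually used to find these accounts), you can simply group tweets by their text and flag any message posted verbatim by several distinct accounts:

```python
from collections import defaultdict

# Hypothetical sample data: (account, tweet_text) pairs, as a simple
# monitor might collect them from a public timeline or search feed.
tweets = [
    ("@voter_one", "The Greens Party new election ad is very ordinary."),
    ("@_johnny_p_Dc", "The Greens Party new election ad is very ordinary."),
    ("@rnd_acct_77", "The Greens Party new election ad is very ordinary."),
    ("@normal_user", "Looking forward to the debate tonight."),
]

def find_clone_chains(tweets, threshold=3):
    """Return tweet texts posted verbatim by at least `threshold`
    distinct accounts -- a crude clonebot signal."""
    by_text = defaultdict(set)
    for account, text in tweets:
        by_text[text.strip().lower()].add(account)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= threshold}

suspicious = find_clone_chains(tweets)
```

A real monitor would also catch near-duplicates with trivial rewording, but even this exact-match version would have flagged Saturday's seven identical tweets.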
The profile of Benjamin Erb (@_johnny_p_Dc), for example, says he's a "Registered patent attorney and representing clients ranging from startup ventures to Fortune 500 corporations in all aspects of intellectual property" in Kaneohe, Hawaii, despite his tweets saying he's in South Carolina. Pairing one set of name elements in the real-name field with utterly different ones, plus a few random characters, in the username is not something patent attorneys do.
Such accounts are algorithmically generated. So are their generic and slightly odd tweets, such as "Between school, debate, work, home, a broken back, and mixed emotions, i just dont have any time for myself", intended to camouflage their spambot nature.
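That name-versus-handle mismatch is itself a checkable signal. Here's a minimal sketch of the heuristic (my own illustration, using Erb as the worked example and a hypothetical genuine account for contrast):

```python
import re

def name_matches_handle(real_name, handle):
    """Crude spambot heuristic: does any element of the display name
    appear in the @handle? Genuine users tend to reuse their name
    elements; generated accounts often pair unrelated names with a
    few random characters."""
    handle_clean = re.sub(r"[^a-z]", "", handle.lower())
    name_tokens = [t.lower() for t in re.findall(r"[A-Za-z]+", real_name)]
    return any(t in handle_clean for t in name_tokens if len(t) >= 3)

# "Benjamin Erb" vs "@_johnny_p_Dc": no shared name element -> suspicious.
name_matches_handle("Benjamin Erb", "@_johnny_p_Dc")  # False
name_matches_handle("Jane Smith", "@jane_smith_83")   # True
```

One weak signal proves nothing on its own, of course, but combined with the identical-tweet pattern it's damning.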
Anyone could be behind these tweets. A weakened Greens party would benefit every other party. The tweets could even be from a disgruntled Greens member unhappy with the party's campaigning, or a creative director looking for work.
What makes Erb and his clones interesting and, in fact, hilarious is how stupid they are.
A handful of obviously fake Twitter accounts all spouting the same message? This is only half a notch above the email I received from a certain Jessica Goldman last week:
"Cool Belarusian girls love to correspond with foreign people =))! For almost all of my neighbors have already created a family on Ukrainian women, you do not how much they imagine true =) Now you can chat with them at all times, they are always happy to foreigners. Do not miss this chance to all for free!"
Well, I'm convinced.
Actually, our phantom tweeter is half a notch below the alleged Jessica Goldman. At least the Jessica clones can alter their messages. "Merry Ukrainian girls are ready to chat with foreign people!", "Merry Ukrainian girls love to chat with foreigners!", "Angelic Ukrainian women love to correspond with foreigners!", and other such sophisticated variations.
While the idea of Twitter clonebots might seem advanced to mainstream political operatives, chat bots ("bot" being short for "robot", in case you've just beamed in) are basic technology by internet standards — and ancient, hailing from the latter years of last century.
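Just how basic? The core of a keyword-matching chat bot — the technique behind most of those 1990s bots and plenty of spambot chatter since — fits in a dozen lines. This is a toy sketch of the general idea, not any particular bot's actual design:

```python
# Keyword-matching responder: scan the message for known triggers
# and return a canned reply. That's the whole trick.
RULES = [
    ("hello", "Hi there! How can I help?"),
    ("election", "Have you seen the latest campaign ad?"),
    ("bye", "Goodbye!"),
]

def respond(message):
    msg = message.lower()
    for keyword, reply in RULES:
        if keyword in msg:
            return reply
    return "Sorry, I didn't catch that."
```

Everything beyond this — state, timing, varied phrasing — is refinement, not rocket science.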
It's now almost 20 years since the bot-assisted Internet Relay Chat (IRC) wars of 1993. Robey Pointer first wrote his highly modular Eggdrop IRC bot around December 1993. Eggdrop's initial role was to defend the #gayteen chat channel from attackers. Other IRC channels soon adopted it. Eggdrop is still popular today, managing IRC channels and doing everything from running games to sharing files.
John Canavan's 2005 paper The Evolution of Malicious IRC Bots (PDF) is a good overview of another thread of bot development. We're looking at something far more complex than the Erbclones' dumb parroting.
Heck, even the National Australia Bank was trialling chat bots for customer service in 2008. Five years ago. Are we surprised that our political class is so far behind the pace? No, we are not.
So let's think ahead a few years, to the campaigns of Pyne 2019 in Australia, or Weiner 2020 in the US, or Robinson 2020 in the UK.
You can buy fake Twitter followers today for just $70 per 10,000 with a "100 percent retention guarantee". You can buy malware today for just $250 that's unique to you and guaranteed to remain undetected by the top 10 anti-malware products. You can buy enough computing time today to produce malicious software that can bypass a vendor's digital certificate verification process, for just $250,000.
We already have personalised malware today, too, "handcrafted lovingly by emotionless robots out on the web", as Sophos senior technology consultant Sean Richmond put it recently. It automatically configures itself to match the vulnerabilities in your system and wriggles its way in.
Richmond said we should think of malware less like the threat of a few big sharks, and more like the ever-present threat of a few million disease-laden mosquitoes. There are so many of them, and it's so easy to overlook a hole in our net.
The major political parties already have voter databases, and can cross-match them to all manner of commercial information about our lives and preferences. I've already described the potential there for highly manipulative media to be created just for us.
Now watch these cognitive robots from the US Naval Research Laboratory. They can understand human speech, make inferences, volunteer potentially useful information, and even play hide and seek. Their voice synthesis is pretty good, too. And those videos were made a few years ago. I reckon that a few years from now, they could make the robots speak in any voice they wanted. Perhaps that's already possible.
So project all of that stuff forward two election cycles.
Computing power goes up. Malware prices drop. I think we can safely assume that we will have moved well beyond fake Twitter accounts broadcasting identical messages.
We'll have artificial social media identities that know who we are, where we live, how we intend to vote, and what political issues press our buttons. And they'll know how to break through our computer's defences, pop up, and start a conversation with us.
We'll be plagued by digital swarms of whining Pynedroids, slimy Weinerbots, and worse.
It's a frightening scenario.
Will politicians resort to these tactics? You bet.
Given that even in Australia's relatively problem-free democratic processes, we already see plenty of dodgy behaviour — ranging from campaign workers dressing as opposing parties while handing out their own party's how-to-vote cards to anonymous racist or defamatory leaflets — political candidates will inevitably swarm to use these new digital weapons like flies to a day-old corpse.