
The lesson of Microsoft's Tay AI chatbot: Experiments are hard (but worth it)

Microsoft's Tay bot is a perfect example of the tension between security, experimentation, permission and forgiveness.
Written by Mary Branscombe, Contributor
How do tech companies experiment (like with the Tay.ai bot above) without getting caught out?

Image: Microsoft/Twitter

The internet doesn't innately have a sense of place; Facebook, Twitter and other online services can feel like the water cooler at work or your local bar, which makes it harder to replicate the real-world notion that what's appropriate there isn't appropriate here.

Marry that with the attitude that what's not forbidden is permitted - or that anything you can find a way to do, you're allowed to do, which goes back to Grace Hopper's view that anything not nailed down can be requisitioned (and if you can pry it up, it's not really nailed down) - and it turns into the Silicon Valley attitude of 'ask for forgiveness rather than permission'.

When broken processes or draconian security policies at work make it hard to get anything done, that attitude is what drives people to pull out USB sticks or turn to cloud storage, no matter how many company policies they break, because they care about getting the job done. It's also what tempts people to goad a chatbot into spewing the unacceptable, which is exactly what happened with Microsoft's Tay.ai experiment.

It can seem a fine line between being empowered to act and feeling entitled to behave badly. What separates the two are intent and morality, and intent and morality aren't questions you can tackle with technology. As I say whenever Google comes out with another version of Google Plus, there is no algorithm for social.

The positive side of acting first and asking later is an attitude Microsoft has been trying to learn. You can see it in Microsoft putting the Outlook name on the Acompli email client even though it still cached encrypted email credentials (securely) in the Amazon cloud rather than Azure; IT admins hadn't complained when users ran Acompli, but the exact same popular features raised howls of outrage once they came from Microsoft, because of what those admins expected from anything carrying the Outlook brand. You can see it in the Insider Programs for Windows 10 and Windows 10 Mobile; the path to fast, stable development is moving fast and fixing the problems that causes, but you have to be willing to cause problems rather than waiting until everything is perfect (and it's never perfect).

You can even see it in the barely-official but virally popular ninja cat meme; ninja cat riding a flaming unicorn, or jumping over Left Shark on bacon skis, was doing the rounds on community-printed stickers long before the Microsoft branding team allowed ninja cat to show up on mugs and T-shirts in the internal company store.

That sort of feistiness is part of Microsoft growing out of behaviours that served it well in the past but put it on the back foot against Google, Facebook and hundreds of startups. Part of it was the aftermath of the antitrust case and consent decree, which inculcated a culture of not integrating too much into a Microsoft platform. Part of it was listening to customers, which works best when there isn't an inflection point in technology underway. Microsoft's enterprise customers would tell the company they couldn't digest new versions of software every year; every two or three years, or five, or seven, or even every decade was as often as they could cope with change. As late as 2011, according to 'father of SharePoint' Jeff Teper, the Office team was waiting for customers to tell them when it was time to move to the cloud - but those customers weren't even considering it. After one meeting, he realised "our customers were going to be a trailing indicator on the market, that the IT people in the room were not going to tell us when the market had turned, they were going to tell us after it turned."

Another part is that Microsoft hasn't been good at predicting what customers would do with its software. That's true across the technology industry, and sometimes it's a good thing. Wi-Fi wasn't designed for ubiquitous connectivity; the street, as William Gibson famously wrote in Burning Chrome, finds its own uses for things.

But just as BlackBerry listened to admins asking for ever more controls to turn off new features, until BlackBerry users came to believe their handsets simply didn't have any cool features, the Microsoft team that created Group Policy never expected admins to use every available policy to lock PCs down until they became slow and unpleasant to use.

The customer isn't always right. Human nature isn't always benevolent. Consequences are rarely intended. You need to ask 'what's the worst that could happen?' and attack your own technology. Better yet, you need a Red Team with really suspicious minds and cynical attitudes to do it for you, because they won't be limited by the assumptions of the people who built the technology. And involve the social science researchers who study how people use social media in their lives.

And then remember that you can't protect everything and that you have to experiment unless you want technology to fossilize, so do the risk analysis. Ask what's the worst that could happen, work out how you'll try to mitigate the risk - and go ahead and experiment responsibly.
