The ethics lessons will continue until morality improves

Developer conferences like Build are exactly the right place to talk about the responsibility of building AI.
Written by Mary Branscombe, Contributor

Instead of leaping into the news that good old-fashioned Windows desktop applications can include pieces of the 'modern' Windows APIs that were once reserved for UWP Store apps, or saying that any company can buy the 3D infrared camera from the HoloLens to build into their own devices, or even that Notepad can handle Linux line endings, CEO Satya Nadella opened Build by asking developers to be a bit more responsible as they change the world. Don't move fast and break things, as it were.

But 8:30am on a Monday morning might not be the best time for philosophy, or asking developers to choose between 1984 and a Brave New World of consumerism as possible futures. Microsoft was somewhat constrained by having to work around the Google I/O keynote on Tuesday, and chose to start as early as possible on Monday and to cram in as many announcements as it could on the first day.

There were many more announcements dotted throughout the conference, from Azure Event Hubs supporting Apache Kafka for pulling in vast amounts of real-time data at speed rather than relying on Microsoft's own Hadoop-based HDInsight, to a container version of Windows Server and even Windows 10 with no UI -- for testing applications, though it will undoubtedly be mistaken for a way to do VDI.

Build was crammed with new things and experiments; walking out as the event was ending we saw cricket bats with stick-on antennas to report the speed and strength of a batter's stroke to an Azure Sphere microcontroller, and another Azure Sphere nestling inside the case of an Altair 8800 running Altair BASIC and collecting temperature readings when the front panel switches were toggled on and off.

[Image: replica Altair 8800] The Altair 8800 was the first machine Microsoft created developer tools for.

[Image: Altair replica running on an Azure Sphere] Inside this Altair 8800 replica is an Azure Sphere.

So, why didn't Build start with that? For exactly the same reason that reactions to Google Duplex have been so divided: because technology powered by AI has the potential to make our lives far, far better -- or far, far more unbearable.

Microsoft showed a meeting room camera system that recognised people walking into the room, greeted them by name, and transcribed every word they said -- even if their deafness made them a little harder to understand. That deaf team member could join in at an equal level with everyone else, and so could remote colleagues. Everyone got a list of what they had said they were going to do, delivered to their to-do lists. Empowering and convenient -- exactly the kind of system that the $25 million AI for Accessibility grant programme Nadella announced is intended to create. The same system in a railway station in a country with an authoritarian government, or even left on in an HR meeting room where someone is trying to report an abusive boss, would be deeply worrying.

Google showed its Duplex assistant phoning a restaurant and sounding enough like a human to be treated like a real customer. That's fantastic for the deaf, or for those with social anxiety or just a very strong accent; making it obvious a bot is calling would likely get them much worse service. It's terrible if scammers and robocallers get hold of it, or if Google just uses it to call every business in the world five times a day to ask when they close or if the special of the day has sold out.

In both cases, it's the same technology -- the difference is what developers do with it: what controls there are for preventing abuse, how clearly the designers think about who will use the system, what the context is, how it could go wrong, and how it could be abused. Automation takes something that was merely inefficient and makes it unbearable, because of the unavoidable, inhuman scale. It doesn't even matter if the technology works perfectly every time, because unless the systems built around it look for nuance and exceptions and false positives, the results are going to be applied -- right or wrong.

Those designers and developers have a responsibility to think about all this and to consider the impact of what they build. Until it's clear that those ethical issues are front and centre, technology conferences need to start with the wake-up call we all still need, so that we get this right before people get hurt. That has to come before all the cool new toys that get the applause.
