
Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech

The transformative potential of algorithms means that developers are now expected to think about the ethics of technology -- and that wasn't part of the job description.
Written by Daphne Leprince-Ringuet, Contributor

The tech industry is entering a new age, one in which innovation has to be done responsibly. "It's very novel," says Michael Kearns, a professor at the University of Pennsylvania specialising in machine learning and AI. "The tech industry to date has largely been amoral (but not immoral). Now we're seeing the need to deliberately consider ethical issues throughout the entire tech development pipeline. I do think this is a new era." 

AI technology is now used to inform high-impact decisions, from court rulings and recruitment processes to profiling suspected criminals and allocating welfare benefits. Such algorithms should be able to make decisions faster and better -- assuming they are built well. But the world is increasingly realising that the datasets used to train these systems often contain racial, gender or ideological biases, which -- as per the saying "garbage in, garbage out" -- lead to unfair and discriminatory decisions. Developers might once have believed their code was neutral, but real-world examples are showing that AI can cause real-world problems, whether because of the code itself, the data used to train it, or even the very idea of the application.
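To make the "garbage in, garbage out" mechanism concrete, here is a minimal, hypothetical sketch -- not drawn from any system mentioned in this article. A classifier is trained on invented historical hiring data in which one group was favoured, and it duly reproduces the gap; the group labels, numbers and the demographic-parity measure are all illustrative assumptions.

```python
# Hypothetical illustration of "garbage in, garbage out": historical hiring
# labels favour group A, and a classifier trained on them reproduces the gap.
# All names and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)       # 0 = group A, 1 = group B
skill = rng.normal(0, 1, size=n)         # the legitimate signal

# Historical labels: equal skill, but group B was hired less often (the "garbage in").
hired = (skill + rng.normal(0, 0.5, size=n) - 0.8 * group) > 0

X = np.column_stack([skill, group])      # group membership leaks into the features
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Demographic-parity gap: difference in predicted hiring rates between groups.
rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"Predicted hiring rate, group A: {rate_a:.2f}, group B: {rate_b:.2f}")
print(f"Demographic-parity gap: {rate_a - rate_b:.2f}")   # the "garbage out"
```

In toy setups like this, simply dropping the group column rarely closes the gap, because other features can act as proxies for it, which is part of why the guidelines discussed below push ethical considerations back to the design stage.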

From Amazon's recruitment engine penalising résumés that included the word 'women's', to UK police profiling suspected criminals based on criteria indirectly linked to their racial background, the shortcomings of algorithms have given human rights groups reason enough to worry. What's more, algorithmic bias is only one part of the problem: the 'ethics of AI' picture is a multifaceted one.

To mitigate the unwelcome consequences of AI systems, governments around the world have been working on drafts, guidelines and frameworks designed to inform developers and help them come up with algorithms that are respectful of human rights. 


The EU recently released a strategy for AI that "puts people first" with "trustworthy technology". Chinese scientists last year published the Beijing AI Principles, written in partnership with the government, which focus on respect for human rights.

This year in the US, ten principles were proposed to build public trust in AI; a few months earlier, the Department of Defense released draft guidelines on the deployment of AI in warfare, insisting on principles ranging from responsibility to reliability. The UK's Office for AI similarly details the principles that AI should follow, and the government has also published a Data Ethics Framework with guidelines on how to use data for AI.

To date, most guidelines have received a positive, albeit measured, response from experts, who have more often than not stressed that the proposed rules lacked substance. A recent report from an independent committee in the UK, in fact, found that there is an "urgent need" for practical guidance and enforceable regulation from the government when it comes to deploying AI in the public sector. 

Despite the blizzard of publications, Christian de Vartavan, an expert on the UK's All-Party Parliamentary Group (APPG) on AI, tells ZDNet that governments are struggling to stay on top of the subject: "That's because it is still so early, and the technology develops so fast, that we are always behind. There is always a new discovery, and it's impossible to keep up."

What practically all the frameworks released so far do is point to the general values that developers should keep in mind when programming an AI system. Often based on human rights, these values typically include fairness and the absence of discrimination, transparency in the algorithm's decision-making, and holding the human creator accountable for their invention.

Crucially, most guidelines also insist that thought be given to the ethical implications of the technology from the very first stage of conceptualising a new tool, and all the way through its implementation and commercialisation. 

This principle of 'ethics by design' goes hand in hand with that of responsibility and can be translated, roughly, as: 'coders be warned'. In other words, it's now on developers and their teams to make sure that their program doesn't harm users. And the only way to make sure it doesn't is to make the AI ethical from day one.

The trouble with the concept of ethics by design is that tech wasn't necessarily designed for ethics. "This is clearly well-meaning, but likely not realistic," says Ben Zhao, professor of computer science at the University of Chicago. "While some factors like bias can be considered at lower levels of design, much of AI is ethically agnostic."

Big tech companies are waking up to the problem and investing time and money in AI ethics. Google's CEO Sundar Pichai has spoken about the company's commitment to ethical AI and the search giant has published an open-source tool to test AI for fairness. The company's Privacy Sandbox is an open web technology that allows advertisers to show targeted ads without having access to users' personal details. 

Google even had a go at creating an ethics committee: the Advanced Technology External Advisory Council (ATEAC), set up last year to debate the ethical implications of AI. For all the company's goodwill, however, ATEAC was shut down just a few weeks after it launched.

Pichai is not the only one advocating for greater ethics: most tech giants, in fact, have joined the party. Apple, Microsoft, Facebook, Amazon -- to name but a few -- have all vowed in one way or another to respect human rights when deploying AI systems.

Big tech might be slightly too late, according to some experts. "The current trend of Silicon Valley corporations deciding to empower ethics owners can be traced to a series of crises that have embroiled the industry in recent years," note researchers at Data & Society in a paper on "corporate logics".

Google's Project Maven flop is one example of the complexities involved. Two years ago, the search giant considered selling AI software to improve drone video analysis for the US Department of Defense. The talks promptly led to 4,000 staff petitioning for Google to pull out of the deal, and a dozen employees walking out, because they objected to their work potentially being used in such a way.

As a result, Google announced that it wouldn't renew its contract with the Pentagon, and published a set of principles stating that it would not design or deploy AI for weapons or any other technologies whose purpose is to harm people.

But the University of Chicago's Ben Zhao feels we should also cut the tech industry some slack. "It certainly has seemed that big tech has been fixing the ethical mess they have created after the damage has been done," he concedes, "but I don't believe the damage is always intentional. Rather, it is due to a lack of awareness of the potential risks involved in technology deployed at scales we have never seen before."

Arguably, Google could have foreseen the unintended consequences of selling an AI system to the Pentagon. But the average coder, who designs, say, an object-recognition algorithm, is not trained to think about all the potential misuses that their technology could lead to, should the tool fall into malicious hands. 

Anticipating such consequences, and taking responsibility for them, is a pretty novel requirement for most of the industry. Zhao is adamant that the concept is "fairly new" to Silicon Valley, and that ethics have rarely been a focus in the past.

He is not the only one to think so. "Think about the people who are behind new AI systems. They are tech guys, coders -- many of them have no background in philosophy," says de Vartavan. Although the majority of developers are keen to program systems for the greater good, they are likely to have no relevant training or expertise when it comes to incorporating ethics into their work.

Perhaps as a result, technology firms have actively come forward to ask public bodies to take action and provide stronger guidelines in the field of ethics. Sundar Pichai has insisted that government rules on AI should complement the principles published by companies, while Microsoft's president Brad Smith has repeatedly called for laws to regulate facial recognition technology.

Law-making remains a delicate craft, and balancing the need for rules with the risk of stifling innovation is a thorny task. The White House, for its part, has made the US government's position clear: the Trump administration favours "light-touch" rules that won't "needlessly hamper AI innovation".

The University of Pennsylvania's Michael Kearns similarly leans towards "a combination of algorithmic regulation and industry self-discipline". 


De Vartavan argues that companies need to be clearer about the decisions they make with their code.

"What the government should insist on is that companies start with thinking and defining the sort of values and choices they want to put into their algorithms, and then explain exactly what they are". From there, users will be able to make an informed choice as to whether or not to use the tool, he says.

De Vartavan is confident that AI developers are on track to start designing ethics into their inventions "from the ground up". The technology industry, it would seem, is slowly starting to realise that it doesn't exist in a vacuum; and that its inextricable links to philosophy and morality cannot be avoided.

Earlier this year, in fact, IBM and Microsoft joined up with an unexpected third player in a call for ethical AI. None other than Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, added his signature to the bottom of the document, which calls for human-centred technology and an "algor-ethical" vision across the industry.

Rarely has the metaphysics of tech been more real. And as AI gains ever more importance in our everyday lives, it increasingly looks like the technology industry is embarking on a new chapter -- one where a computer engineering degree mixes well with a philosophy textbook.
