AI and big data vs ethics: How to make sure your artificial intelligence project is heading the right way

Making the right moral decisions about your AI projects might be just as hard as getting the technology to work.
Written by Mark Samuels, Contributor

The excitement around any new technology comes with a side order of fears about how that system or service will affect people. In key areas like cloud computing and social media, it often feels as if the regulators are having to play catch-up with the tech firms that create these innovations and the businesses that exploit them.

Yet it is in the area of artificial intelligence (AI) that these fears are perhaps greater than anywhere else. Some experts believe AI will not simply be another technology that people use, but one that could replace human decision-making at work and at home. So, how can businesses reduce those fears and create AI systems that exploit big data ethically?

Anastasia Dedyukhina recognises that, while the pace at which organisations are embracing AI continues to quicken, definitions around the technology remain murky. The customers who will be most affected by AI have little understanding of how that impact will play out, says Dedyukhina, who is founder of consultancy Consciously Digital.

"More decisions are being taken for us by machines, anything from how much you pay for health insurance to who your life partner should be," she says. "As technology is affecting more elements of our lives, we should all have a say in this – we need to make sure people understand the consequences of AI, what it is and how people are different from computers."

Dedyukhina – who joined a group of experts to discuss ethics in AI at the recent Big Data World event in London – says businesses must help develop this better understanding of AI. Executives running AI projects should always consider why they are amassing information and how their customers are affected by this process.

"The ethical way to collect data is to do it in a way that actually improves the customer experience and to explain to them why you are collecting this information," she says. "The next step beyond that is to make it easy for your customers to opt out of data collection if they want to. Don't make it so complicated. Give control back to the customer."

That sentiment resonates with Adrian Baker, policy manager for AI at the British Heart Foundation (BHF), who points to evidence from an inquiry into AI in healthcare that the BHF is currently running. Baker says patients feel very differently about sharing data than the general public does – perhaps because patients have to weigh how their information is being used to achieve the right health outcomes.

"The research highlights how everyone involved in the use of AI and big data must have wider discussions about the outcome you're looking for, such as better health, and then work backwards to issues like data sharing and information security. You should always start with the outcome," he says.

Baker suggests that business leaders who want to focus on the right objectives for AI and data should consider establishing a public ethics board. Just as companies have executive boards to make business decisions, these ethics panels can help organisations using emerging technology to make publicly minded ones.

"We know some tech companies, like Deep Mind, already do this," says Baker. "Don't assume that you know what the public wants or that the market research you conduct into public opinions is correct. You need to actually have an ethics panel, and discuss what the issues are and what the needs of the public really are."

While such ethics boards can help highlight issues of concern, their contribution to any debate around AI and the use of data will be directly related to the quality of people who sit on the panel. Baker says any ethics board must include a diverse mix of people and experiences.

Where possible, companies should publish the findings of these ethics boards to help encourage public debate and to shape future policy on data use. "The greater the potential impact of AI, the more important the creation of an ethics board – it's crucial, for example, in an area like autonomous vehicles," says Baker.

Bertie Müller, senior lecturer in computer science at the University of Swansea, also agrees that public awareness is critical to the ethical development of emerging technology. "We clearly see how AI can create benefits, but we need to trust it," says Müller, whose research explores how organisations can create transparency around automation.

As decision-making systems gain ever more autonomy, businesses should re-evaluate their ethics again and again, says Müller. Executives must not complacently assume that data-use policies approved at the time of technology deployment will stand the test of time.

"We don't know how an AI system is going to evolve and how it will influence future decision-making," he says. "You need to have something a bit like an ethics MOT, where your ethics board considers the products you produce, and which takes place annually, but possibly monthly or less in the case of critical applications. Approval must be a continual process."

Just as ethics boards can help ensure systems and data use remain ethical, so the panels themselves must also be held up to scrutiny. Businesses that work to establish an ethics board must be careful to ensure the panel does not simply become a rubber-stamping exercise that satisfies legislators and protects shareholder interests.

Size matters, too. While larger organisations might be able to justify the cost of establishing and running an ethics board, smaller organisations might be reluctant to commit. What matters is that any organisation – regardless of size – assesses the potential impact of AI and ensures the business has, wherever possible, considered the objectives of collecting and exploiting customer data.

Governance is another potential sticking point when it comes to assessing the impact of AI. There is no accepted global standard for ethics around AI and data use; ethics are a set of accepted morals that vary between cultures. Ideally, the AI systems that organisations create will adapt to the cultures in which they're adopted and used.

The good news, suggests the BHF's Baker, is that progress towards standards is being made. The Centre for Data Ethics and Innovation (CDEI), which will analyse the opportunities and risks of data-driven technology, published its 2019/20 Work Programme and two-year strategy last week, setting out priorities and ways of working for its first year. And work in key sectors – such as the health service in the UK – illustrates how organisations can bake ethics into their AI developments.

"The NHS is creating frameworks, guidance and standards that are reviewed and updated every six months," says Baker. "And I think this might be the way to go – to provide a framework or a set of standards that is agile enough to keep up with, not only the pace of technological development, but also the public's view."
