Report finds startling lack of interest in ethical, responsible use of AI among business leaders

Just 6% of respondents said they ensure AI is used ethically and responsibly by making development teams diverse.
Written by Jonathan Greig, Contributor

A new report from FICO and Corinium has found that many companies are deploying various forms of AI throughout their businesses with little consideration for the ethical problems that may result.

The last decade has produced hundreds of examples of disastrous corporate AI deployments, from facial recognition systems unable to discern darker-skinned faces, to healthcare apps that discriminate against African American patients, to recidivism calculators used by courts that skew against certain races.

Despite these examples, FICO's State of Responsible AI report shows business leaders are putting little effort into ensuring that the AI systems they use are both fair and safe for widespread use. 

The survey, conducted in February and March, features the insights of 100 AI-focused leaders from the financial services sector, with 20 executives each from the US, Latin America, Europe, the Middle East and Africa, and the Asia-Pacific region.

The executives, serving in roles ranging from Chief Data Officer to Chief AI Officer, represent enterprises that bring in more than $100 million in annual revenue and were asked about how their companies ensure AI is used responsibly and ethically. 

Almost 70% of respondents could not explain how specific AI model decisions or predictions are made, and only 35% said their organization made an effort to use AI in a way that was transparent and accountable. 

Just 22% of respondents said their organization had an AI ethics board that could make decisions about the fairness of the technology it uses, while the other 78% said they were "poorly equipped to ensure the ethical implications of using new AI systems."

Nearly 80% said they had significant difficulty getting other senior executives to even consider or prioritize ethical AI usage practices. According to respondents, few, if any, executives truly understood the business and reputational risks associated with unfair, unethical, or mismanaged AI usage.

More than 65% said their enterprise had "ineffective" processes in place to make sure that all AI projects complied with any regulations, and nearly half called these processes "very ineffective." 

Even as they acknowledged the lack of care their enterprises put into how they use AI, 77% of respondents agreed that AutoML technology could be misused, and 90% agreed that inefficient processes for model monitoring represent a barrier to AI adoption.

While some IT and compliance employees had some awareness of AI ethics, the vast majority of stakeholders had a poor understanding of the concept, according to respondents.

The lack of understanding about the ramifications of mismanaged AI is doing little to dampen enterprises' appetite for the technology: 49% of respondents reported an increase in resources devoted to AI projects over the last year.

"At the moment, companies decide for themselves whatever they think is ethical and unethical, which is extremely dangerous. Self-regulation does not work," Ganna Pogrebna, lead for behavioral data science at The Alan Turing Institute, told the survey. 

Respondents overwhelmingly said there was no consensus about what responsibility companies had in deploying ethical AI, particularly AI that "may impact people's livelihoods or cause injury or death." 

A majority of respondents said they had absolutely no responsibility to make sure the AI they used was ethical beyond simple regulatory compliance. 

More than half of respondents said AI used for data collection and back-end business operations must meet basic ethical standards. But the numbers dipped under half when it came to AI systems that "indirectly affect people's livelihoods."

Eighty percent of respondents said they are struggling to establish the processes needed to make sure AI is used appropriately.

Businesses are increasingly pressuring employees to deploy AI systems quickly, regardless of the ethics of how the AI is used; as noted above, 78% of respondents said they have trouble getting executive support for prioritizing ethical and responsible AI practices.

When asked about the standards and processes in place to govern AI usage, half of respondents said they "ensure global explainability," while 38% said they had data bias detection and mitigation steps.
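
For readers unfamiliar with those two practices, here is a rough sketch of what a global explainability check and a data bias check can look like in code. It is a minimal illustration built on scikit-learn; the loan-approval scenario, the feature names, and the four-fifths disparate-impact threshold are assumptions made for the example, not details drawn from the FICO report.

```python
# Minimal sketch of "global explainability" (model-wide feature importances)
# and "data bias detection" (a demographic-parity check). The synthetic
# loan-approval data, feature names, and 0.8 threshold are illustrative
# assumptions, not anything prescribed by the FICO report.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical applicants: two features plus a protected attribute.
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)  # protected attribute (0 or 1)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
preds = model.predict(X)

# Global explainability: which features drive the model overall?
for name, importance in zip(["income", "debt_ratio"], model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Bias detection: compare positive-prediction rates across groups.
rate0 = preds[group == 0].mean()
rate1 = preds[group == 1].mean()
print(f"approval rate: group0={rate0:.2f}, group1={rate1:.2f}")
if min(rate0, rate1) / max(rate0, rate1) < 0.8:  # "four-fifths" rule of thumb
    print("warning: disparate impact exceeds the four-fifths threshold")
```

A team "mitigating" a gap like this might reweight training data, drop proxy features, or adjust decision thresholds, each of which carries its own trade-offs.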

Just 6% of respondents said they did so by ensuring that development teams were diverse. 

Those in charge of ethical AI faced a variety of barriers, including organizational politics, poor data quality, and a lack of data standardization. 

"Many don't understand that your model is not ethical unless it's demonstrated to be ethical in production," FICO CAO Scott Zoldi said in the study. 

"It's not enough to say that I built the model ethically and then I wash my hands of it. What we're missing today is honest and straight talk about which algorithms are more responsible and safe."
