Time to break open AI's black box, and keep it open

"It is time that we re-balance the discussion from being driven by the creators and the sellers of artificial intelligence to being balanced with the user perspective."
Written by Joe McKendrick, Contributing Writer

Artificial intelligence is fast evolving to the point where anyone with the skills now has access to the tools and platforms needed to build it. But is it time to stop and think before we plunge headlong into cognitive chaos?

Photo: Joe McKendrick

Developers and IT managers are now at the front lines of growing ethical dilemmas, as well as a potential partial surrendering by businesses of control over their decision-making to machines. Perhaps it's time for greater awareness and education on bringing AI-based decision-making into the light.

At its recent Build conference, Microsoft stated its goal going forward was to "help every developer be an AI developer" on top of its offerings, especially on its Azure cloud platform. At the same time, Google keeps opening up AI access to anyone who wants to work with it, as announced at its Google I/O conference, which also just took place. There, CEO Sundar Pichai famously demonstrated technology passing the Turing Test, playing audio of a highly interactive phone call placed by Google Assistant to a hair salon.

With all this great power comes great responsibility, and developers and executives are being cautioned not to build, or rely on, the black boxes that have characterized AI up to this point. Recently, Bank of America and Harvard University teamed up to convene the Council on the Responsible Use of Artificial Intelligence, which will bring together, educate and enlighten business, government and societal leaders on the latest technological developments in AI and machine learning, discuss emerging legal, moral, and policy implications, and investigate ways of developing responsible AI platforms.

Bank of America has been working with a range of AI approaches within its business lines, but its technology leaders also seek to build more transparency and ethics into AI solutions before it's too late, as the industry charges full steam into AI. "When the creators and sellers are dominating the discussion with their models and data sources around which they have built intellectual property, by definition, they are building a black box that as the user, we may or may not have transparent insight into," says Cathy Bessant, COO and CTO for BofA and an active sponsor of the program, in a recent Forbes interview with Peter High.

Opening up the black box -- and keeping it open -- "is an important part of understanding the intended and unintended consequences of the models that we are using to drive learning," Bessant said. "In financial services, we went through these decades ago in credit scoring."

BofA's optimistic, yet cautious, approach to AI was recently on display at its own technology conference, as described by Penny Crosman in American Banker. The financial services giant sees potential in employing AI to get smarter about fraudulent credit card use, or to resolve disputes more quickly. Previously, it was noted, "an old-school fraud analytics program might see a customer using a card in a place they have never used a card before and block the transaction. AI can do better."

At the same time, the data that feeds AI decisions needs to be carefully vetted and reviewed. For example, "a robotic process automation solution that automates a loan process so the bank can deliver the loan faster to a client would be great," according to Aditya Bhasin, head of consumer and wealth management technology at Bank of America, also quoted in American Banker. "But using AI or robotic process automation as a shortcut to data integration might not make sense." For example, when BofA launched a digital mortgage, "we could have done a whole bunch of robotics to go and pull data from different places and prepopulate the mortgage application, [but] it probably would have been fraught with error," he said.

Too many organizations are rushing into AI without considering the full implications of the people element, according to Bessant. "It is time that we re-balance the discussion from being driven by the creators and the sellers of artificial intelligence to being balanced with the user perspective," she says. "The discussion has been dominated by the sellers. Flip on any one of the morning financial shows and what you see is advertisement after advertisement for large and small-scale technology firms that are pushing the notion of data and modeling and that AI will help. Generally, society seems sold that artificial intelligence is better than we are as humans. However, because we build it, it is a subset of who we are and our thinking and bias."

There's a lot of buzz, and a lot of money now pouring into AI. It's important that some of that attention and money goes into education and building awareness of the processes behind the processes.
