
MIT, Harvard, tech industry luminaries team up on Fund for Ethical AI

The Ethics and Governance of Artificial Intelligence fund will launch with $27 million to support work that "advances the development of ethical AI in the public interest."
Written by Chris Kanaracus, Contributor

Amid a mounting tide of concerns over the implications of artificial intelligence (AI) for society, a new fund backed by well-known tech industry figures along with Harvard and MIT has been formed. The Ethics and Governance of Artificial Intelligence fund will start with $27 million to support work that "advances the development of ethical AI in the public interest."

eBay founder Pierre Omidyar's philanthropic organization Omidyar Network, LinkedIn co-founder Reid Hoffman, the Knight Foundation, and others are contributors to the fund. MIT's Media Lab and Harvard's Berkman Klein Center for Internet & Society will serve as anchor institutions. Omidyar Network laid out the case in a press release:

Artificial intelligence and complex algorithms in general, fueled by big data and deep-learning systems, are quickly changing how we live and work--from the news stories we see, to the loans for which we qualify, to the jobs we perform. Because of this pervasive but often concealed impact, it is imperative that AI research and development be shaped by a broad range of voices--not only by engineers and corporations, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers.

The fund's goals are to develop better ways of explaining AI to the public; to create ethical design frameworks for AI; to advance "accountable and fair AI" through proper controls; to help AI entrepreneurs and engineers thrive while serving the public interest; and to get more constituencies involved with AI.

Last year, AI took the spotlight perhaps more than any other tech trend. The rapid-fire pace of AI development carries risks that must be held up to scrutiny, MIT Media Lab director Joi Ito said in a statement:

"For example, one of the most critical challenges is how do we make sure that the machines we 'train' don't perpetuate and amplify the same human biases that plague society?"

In its first year, the fund will focus on experimental and iterative projects, including a class of joint "AI Fellows" at MIT and Harvard, according to a FAQ document.

The fund isn't the first organization to rally around AI and ethical questions. Last month, the Institute of Electrical and Electronics Engineers released an initial draft guide for ethical AI design. There's also the Partnership on AI, formed by Google, Facebook, Amazon, IBM and Microsoft.

But the fund can play an important role amid other efforts, the FAQ states:

We believe we can add something unique to the growing AI ecosystem through supporting unbiased, sustained, evidence-based, solution-oriented work that cuts across disciplines and sectors. Grounding this initiative at two anchor academic institutions (the Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab) will enable us to bring together a diversity of voices and perspectives that might not be possible with industry-driven or other efforts.

Analysis: Plenty of room at the AI ethics table

The broad-based team that MIT and the Berkman Klein Center (BKC) are assembling is impressive, says Constellation Research VP and principal analyst Steve Wilson: "It points to deep, cross-disciplinary systems thinking being applied to the problem of mechanizing ethics."

Wilson investigates the ethical implications of algorithms. His research suggests that there are mathematical limits to what sorts of problems are computable, and he concludes, at this early stage, that complex human tasks such as driving a car might never be completely automated. There may be unexpected failure modes with novel characteristics for which social institutions such as laws and courts may not be ready.

Wilson recently presented some of his current work at a law conference in South Korea organized by the Digital Asia Hub, a regional offspring of the Berkman Klein Center. A new Constellation Research report is due out soon.

"The thing about ethics that takes technologists by surprise is that some problems are never going to be completely solved," Wilson says. "This is why courtroom processes make for good television drama. Court cases aren't always predictable. The very best lawyers will sometimes get it wrong. And there are new legal precedents all the time, meaning things we never thought of before."

This is very different from the world of algorithms, where all expected inputs need to be known and designed for when the code is written, he adds.

"I don't know how future artificial intelligences are going to cope with unprecedented events," Wilson says. "Machine learning might evolve to deal with surprises, but there will always be 'meta surprises' that no pre-programmed machine can deal with on its own. So it's great to see MIT and BKC putting such serious resources and breadth of talent onto AI and ethics."

Overall, Constellation believes that debate and transparency about the ethics and societal impact of AI are crucial, and the new fund is therefore a welcome addition to the discussion. That the fund is partnering with two of the most credible, independent, and influential academic research organizations in the world can only help.

Also setting it apart is the simple fact that it will directly fund innovative and diverse AI research, rather than merely produce a quickly forgotten position paper or run academic symposiums. It's what the AI ethics debate needs. More insight into the fund's direction will come at an event in July, and Constellation will be watching with interest.

