
MIT finally gives a name to the sum of all AI fears

Rather than simply being scared of “intelligent machines,” say researchers at MIT’s Media Lab, society needs to study algorithms with a multi-disciplinary approach akin to the field of ethology.
Written by Tiernan Ray, Senior Contributing Writer

Now we know what to call it, that vast, disturbing collection of worries about artificial intelligence and the myriad threats we imagine, from machine bias to lost jobs to Terminator-like robots: "Machine behaviour."

That's the term that researchers at the Massachusetts Institute of Technology's Media Lab have proposed for a new kind of interdisciplinary field of study to figure out how AI evolves, and what it means for humans. (They use the British spelling, as the paper appears in a British journal.)

The stakes are high: there is enormous potential for algorithms to amplify human ability, but also plenty of peril.

Commentators and scholars, they write, "are raising the alarm about the broad, unintended consequences of AI agents that can exhibit behaviours and produce downstream societal effects -- both positive and negative -- that are unanticipated by their creators." There is "a fear of the potential loss of human oversight over intelligent machines," and the development of "autonomous weapons" means that "machines could determine who lives and who dies in armed conflicts."

While there are no conclusions here about any of this, it's a nice, ambitious effort to give some direction to studying AI's role in society rather than just worrying about it. 


Published in the journal Nature this week, the paper, Machine Behaviour, calls for a joint effort of "the fields that design and engineer AI systems and the fields that traditionally use scientific methods to study the behaviour of biological agents." Specifically, the authors propose to study not just how machine learning algorithms work, but how they are affected by, and affect, the environment in which they function.

It's "akin to how ethology and behavioural ecology study animal behaviour by integrating physiology and biochemistry -- intrinsic properties -- with the study of ecology and evolution -- properties shaped by the environment." 

[Image: Questions large and small MIT proposes asking about AI. Credit: MIT Media Lab]

Lead authors Iyad Rahwan, Manuel Cebrian, and Nick Obradovich of MIT collaborated with 20 other researchers from numerous institutions, including Facebook AI, Microsoft, Stanford, the sociology department of Yale University, and Berlin's Max Planck Institute for Human Development. They've also prepared a blog post on the topic. Rahwan runs a group within the Media Lab called the Scalable Cooperation Group, which has pursued numerous avenues of research on machines, ethics, and related questions.

The authors start from a basic fact: much of AI functions even though humans don't understand why it functions.

"They are given input and produce output," as they describe it, "but the exact functional processes that generate these outputs are hard to interpret even to the very scientists who generate the algorithms themselves."

"Because of their ubiquity and complexity, predicting the effects of intelligent algorithms on humanity -- whether positive or negative -- poses a substantial challenge," the authors note.


The authors take their cue from Nobel Prize winner Nikolaas Tinbergen, co-founder of ethology. Tinbergen described ethology as the "biological study of behavior," and he proposed four elements that make up such a study: mechanisms, development, function, and evolution. These four concepts can be a way to explore machine behaviour, they write. 

In this framework, they propose, the term mechanisms concerns the areas already most studied in AI, such as neural network models, and the data that feed them. The notion of development concerns things like neural nets that learn new strategies because of how they interact with the environment. "For instance, a reinforcement learning agent trained to maximize long-term profit can learn peculiar short-term trading strategies based on its own past actions and concomitant feedback from the market."
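To make that quoted example concrete, here is a minimal sketch, not drawn from the paper: a toy Q-learning "trader" in Python whose own orders nudge a simulated price, so the strategy it develops is shaped by feedback from its own past actions. The market model and names such as step and state_of are invented purely for illustration.

    import random

    ACTIONS = ["buy", "sell", "hold"]

    def step(price, action):
        """Toy market: the agent's own order nudges the price (feedback)."""
        impact = {"buy": 0.5, "sell": -0.5, "hold": 0.0}[action]
        new_price = max(1.0, price + impact + random.gauss(0, 1.0))
        # Reward is the one-step profit implied by the price move.
        if action == "buy":
            reward = new_price - price
        elif action == "sell":
            reward = price - new_price
        else:
            reward = 0.0
        return new_price, reward

    def state_of(price, prev_price):
        return "up" if price > prev_price else "down"

    # Tabular Q-learning over a crude two-state view of the market.
    q = {(s, a): 0.0 for s in ("up", "down") for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.9, 0.1
    prev, price = 100.0, 100.0

    for _ in range(10_000):
        s = state_of(price, prev)
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                 # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
        prev, (price, r) = price, step(price, a)
        s2 = state_of(price, prev)
        # The update folds the market's reaction to the agent's own
        # trade back into its strategy -- "development" in action.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])

    print({k: round(v, 2) for k, v in q.items()})

Run it and the agent learns to favour trading over holding simply because, in this toy reward scheme, its own orders move the price: a miniature version of the peculiar, feedback-driven strategies the authors describe.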

The authors describe "function" as a mixture of the purpose an algorithm serves for its human creators and the unintended roles it may take on, such as social media algorithms that lead to "filter bubbles" and fake news. What the authors are really exploring here is the problem of the "objective function" in machine learning, namely, what exactly these algorithms are supposed to be achieving.
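As a hedged illustration of that gap between intended purpose and actual function, here is another toy Python sketch, again not from the paper: a recommender whose objective is simply to maximize expected clicks, and which narrows a simulated user's feed as a side effect. The user model and all names are invented for the example.

    import random
    from collections import Counter

    TOPICS = ["politics", "sports", "science", "celebrity"]

    def recommend(clicked, explore=0.05):
        """Objective function: serve the topic with the most past clicks."""
        if not clicked or random.random() < explore:
            return random.choice(TOPICS)
        counts = Counter(clicked)
        return max(TOPICS, key=lambda t: counts[t])

    def user_clicks(topic, preferred="science"):
        """Toy user: far likelier to click their preferred topic."""
        return random.random() < (0.9 if topic == preferred else 0.2)

    served, clicked = Counter(), []
    for _ in range(2000):
        topic = recommend(clicked)
        served[topic] += 1
        if user_clicks(topic):
            clicked.append(topic)

    # The greedy objective locks onto whichever topic got early clicks --
    # not necessarily the one the user likes best.
    print(served.most_common())

Nobody coded "narrow the feed" anywhere; it emerges from an objective that merely says "maximize clicks," which is exactly the kind of unintended function the authors want studied.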

The fourth aspect, evolution, isn't as simple as you might imagine from the name: There are several aspects, including the way the assumptions of neural nets' creators can favour certain kinds of algorithms over others, but also the prospect of "mutations" propagating in unexpected ways. "It is possible for a single adaptive 'mutation' in the behaviour of a particular driverless car to propagate instantly to millions of other cars through a software update," they observe.

These four areas lead to some interesting questions about AI, both big-picture and low-level. 

For example, with things such as autonomous vehicles, they pose questions such as, "How aggressively does the car overtake other vehicles?" And "how does the car distribute risk between passengers and pedestrians?" Other intriguing questions include whether conversational robots end up being a means to hook kids on products, and whether matching algorithms for dating sites "alter the distributional outcomes of the dating process."

"Machines shape human behaviour," is one of their disturbing observations. "It is important to investigate whether small errors in algorithms or the data that they use could compound to produce society-wide effects and how intelligent robots in our schools, hospitals and care centres might alter human development and quality of life and potentially affect outcomes for people with disabilities."

This kind of study won't be easy, and not only because it involves bringing together many disciplines. The authors note that researchers face challenges if they want to study commercial algorithms protected by copyright or patents. And trying to observe algorithms empirically in the wild brings its own set of ethical concerns.

Even the very term "agent," which they use repeatedly to refer to AI innovations, carries all kinds of problematic assumptions about parallels to humans and animals, they acknowledge.

"Even if borrowing existing behavioural scientific methods can prove useful for the study of machines, machines may exhibit forms of intelligence and behaviour that are qualitatively different—even alien—from those seen in biological agents

