
An AI payout? Should companies remunerate society for lost jobs?

Scholars at the high-profile Future of Humanity Institute propose a scheme in which companies agree to pay back to society a portion of excess profits produced by AI, to compensate for lost jobs. Whether the scheme could be gamed by corporations is one of the questions that springs to mind.
Written by Tiernan Ray, Senior Contributing Writer

If artificial intelligence eventually eliminates some jobs for humans, should the parties that profit from AI pay money to society to compensate for the loss?

That's the intriguing question raised by a paper produced last month by the Future of Humanity Institute, the think tank inside Oxford University that is widely cited in the popular press regarding AI, and that is headed by philosopher Nick Bostrom.

The Institute's Centre for the Governance of AI published "The Windfall Clause: Distributing the Benefits of AI for the Common Good," posting it on the arXiv pre-print server. The paper, authored by Cullen O'Keefe, Peter Cihon, Ben Garfinkel, Jade Leung, and Allan Dafoe, proposes that companies that reap excess profit attributable to AI pay out some portion of the money, over and above the taxes they would normally pay.

ZDNet reached out to lead author Cullen O'Keefe, who is also a Juris Doctor candidate at Harvard Law School, with follow-up questions. O'Keefe declined, writing in an email to ZDNet that the Institute "has decided against doing publicity on this particular paper." Despite that, ZDNet offers some questions to mull over below.

The authors write that "the transformative potential of AI has become increasingly salient as a matter of public and political interest," but they argue that there have been few proposals that would make institutional obligations a matter of law. 

The windfall clause, as it's called, would be an ex-ante agreement: a commitment made years before any profits materialize. Companies would agree to pay out some percentage even though they don't know for sure whether they'll ever realize any excess profit attributable to AI.


The point is to mitigate the deleterious effects of AI. While AI may increase aggregate wealth for society, "many have argued that AI could lead to a substantial lowering of wages, job displacement, and even large-scale elimination of employment opportunities as the structure of the economy changes," the authors write.

There is a "strong consensus" they write, among AI researchers, "that most, if not all, human work can, in theory, be automated." They even put a rough time frame on that: 2060 is the year AI researchers project "AI would be able to outperform humans at all economically relevant tasks," citing a separate paper put out by the Institute and by UC Berkeley scholars in 2018.

The proposal suggests a sliding scale of obligations. Depending on the size of the excess profit relative to "gross world product," the percentage any entity pays out would rise from zero to as much as fifty percent of the excess profit. As a simplified illustration, the authors hypothesize a company that makes $5 trillion in excess profit in 2060, measured in 2010 dollars. Assuming a gross world product in 2060 of $268 trillion, the company would be obligated by the "Windfall Function" to pay $488.12 billion.
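To make the mechanics concrete, here is a minimal sketch in Python of how a progressive windfall function of that kind could be computed, applying marginal rates to profit measured as a share of gross world product. The bracket thresholds and rates below are invented for illustration, not taken from the paper, so the output will not reproduce the $488.12 billion example.

```python
# Illustrative sketch only: the brackets and marginal rates are hypothetical
# stand-ins, not the actual Windfall Function from the paper.

def windfall_obligation(excess_profit, gross_world_product):
    """Apply a progressive, marginal rate to excess profit measured as a
    share of gross world product (all figures in the same currency units)."""
    # (upper bound of bracket as a fraction of GWP, marginal rate)
    brackets = [
        (0.001, 0.00),          # below 0.1% of GWP: no obligation
        (0.01, 0.05),           # 0.1% to 1% of GWP: 5% marginal rate
        (0.10, 0.20),           # 1% to 10% of GWP: 20% marginal rate
        (float("inf"), 0.50),   # above 10% of GWP: 50% marginal rate
    ]
    obligation = 0.0
    lower = 0.0
    for upper, rate in brackets:
        # Portion of the profit that falls into this bracket, in currency terms
        slice_top = min(excess_profit, upper * gross_world_product)
        slice_bottom = lower * gross_world_product
        if slice_top > slice_bottom:
            obligation += (slice_top - slice_bottom) * rate
        lower = upper
    return obligation

# $5 trillion excess profit against a $268 trillion gross world product.
# With these made-up rates the result differs from the paper's $488.12 billion.
print(f"Obligation: ${windfall_obligation(5e12, 268e12) / 1e9:.1f} billion")
```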

[Chart: windfall-clause-2019.png. Source: Future of Humanity Institute]

The appeal to a company is that such a commitment is transparent and can be planned for, reducing risk. For example, O'Keefe and colleagues hypothesize that a company would discount that future $488 billion by the 10% average cost of capital for an internet company, and could then discount it further by the relatively low likelihood of actually earning such excess profit (because windfall profits decades in the future are a possibility, not a certainty).

After discounting, the present cost to a company of that $488 billion future bill would be $649 million annually, which is in line with the philanthropic giving of many large companies. That matters for convincing boards of directors that the clause would meet the various requirements of corporate rule-making and case law around the world. It can be thought of as analogous to stock options, they point out: "A useful analogy can be drawn between the Windfall Clause and stock option compensation, which is incontrovertibly permissible."
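The arithmetic behind that shrinking present cost is ordinary discounting. A rough sketch is below; the 40-year horizon and the probability of ever earning a windfall are assumptions chosen for illustration, so the figures will not match the authors' $649 million estimate.

```python
# Illustrative present-value arithmetic, not the paper's exact calculation.
# The horizon and probability below are assumptions made for this sketch.

future_obligation = 488e9   # hypothetical windfall payment due in 2060, in dollars
discount_rate = 0.10        # roughly the 10% average cost of capital cited
years_ahead = 40            # assumed horizon from signing to payout
prob_of_windfall = 0.05     # assumed chance the firm ever earns such excess profit

# Discount for the time value of money, then for the low likelihood of a windfall
present_value = future_obligation / (1 + discount_rate) ** years_ahead
expected_present_cost = present_value * prob_of_windfall

print(f"Present value if the windfall were certain: ${present_value / 1e9:.1f} billion")
print(f"Expected present cost after probability discount: ${expected_present_cost / 1e6:.0f} million")
```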

"Like the Windfall Clause, stock option payments have a permissibly low present expected value but can have a much higher value once exercised."

The authors acknowledge many things would have to be thought through, and they offer their work in the spirit of prompting a discussion. They address a couple of questions in the paper. One is why this would be preferable to taxing excess profit. It isn't necessarily, they concede, but it might be more "actionable" because "it depends only on convincing individual firms, not political majorities."

A question not addressed by the paper is whether it creates a way for companies to buy their way out of big ethical issues. In other words, would paying out profits let companies, and society, stop thinking critically about the impact of AI on jobs? And would it be a way to avoid regulation? That doesn't seem to be the authors' intent, but it's worth keeping in mind as a potential issue, as with any corporate attempt to resolve ethical questions through the marketplace.

Another question, more intriguing intellectually, is whether the whole matter of discounted future rewards and costs could be gamed by the companies that will, or may, earn the profits. In theory, a company such as DeepMind, a unit of Google, could simulate future rewards as if it were a reinforcement learning problem like the one tackled by AlphaZero. By constructing a value function of future rewards, such an entity could evaluate the state-action choices of the present and presumably optimize present actions to maximize future rewards.

That raises the question of what such an objective function looks like in a game of AI profits. Does an outfit like DeepMind start to get tricky, and pursue initiatives that have "soft" benefits without amplifying reported profit, to minimize future costs under the windfall clause?
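Purely as a thought experiment, one can imagine scoring such strategies the way a reinforcement-learning agent scores actions: by their expected, discounted return net of the windfall payment they would trigger. The toy comparison below uses invented probabilities, profits, and a stand-in obligation formula, none of which come from the paper.

```python
# Toy thought experiment with invented numbers: compare two hypothetical
# strategies by expected discounted profit net of a stand-in windfall payment.

def stand_in_windfall(profit, gwp):
    """Stand-in obligation: 20% of any profit above 1% of gross world product.
    (Not the paper's actual Windfall Function.)"""
    return 0.20 * max(0.0, profit - 0.01 * gwp)

def net_present_reward(profit, prob, gwp, years=40, discount=0.10):
    """Expected discounted profit net of the stand-in windfall payment."""
    return prob * (profit - stand_in_windfall(profit, gwp)) / (1 + discount) ** years

GWP_2060 = 268e12  # assumed gross world product in 2060, in dollars

# Strategy A: maximize reported AI profit outright
a = net_present_reward(profit=5e12, prob=0.05, gwp=GWP_2060)
# Strategy B: favor "soft" benefits that keep reported profit lower
b = net_present_reward(profit=3e12, prob=0.08, gwp=GWP_2060)

print(f"Strategy A: ${a / 1e9:.1f} billion   Strategy B: ${b / 1e9:.1f} billion")
```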

Such questions are, for the moment, quite remote, given reports last year that DeepMind is far from generating any profit for Google. But they are among the many intriguing questions raised by a very intriguing bit of research.
