
Why Startupland needs the veil of ignorance

Organisations need a 'moral imagination' to build ethical data services that are fair for all the players. That's what Centrelink got so very wrong.
Written by Stilgherrian, Contributor

Many recent discussions on privacy and data governance have focused on the practical, and on the short term. That's understandable, given that here in Australia our mandatory data breach notification regime looms large, and Europe's General Data Protection Regulation (GDPR) follows soon after. But balance is a good thing.

I was pleased, therefore, that the Data + Privacy Asia Pacific conference in Sydney on Wednesday kicked off with a look at the ethics of data stewardship. Not the everyday what, where, and how of data operations, but the why of doing any of these data things in the first place.

This framing was deliberate, Australia's Information and Privacy Commissioner, Timothy Pilgrim, told ZDNet.

"There is no irony in the fact that often the most personal information is the richest in its potential for public data use," said Pilgrim in his opening remarks. Therein lie the ethical problems.

How do we balance personal risks with the opportunity for public good, or at least the good of the organisation collecting the data? What counts as having a "genuine interest" in collecting the data, as opposed to sucking in as much as possible as soon as possible?

New ways of analysing data, re-identifying supposedly anonymous data, and reaching conclusions are being developed rapidly. Even the biggest players like Google and Facebook would admit they've no idea what might be possible even a few years ahead.

So how do we work within the ocean of future possibilities when data can be bought, sold, lost, stolen, or leaked?

"I think you're exactly right in pointing to this as the main challenge, not just for us in this discussion, but for all of us here today," said Rob Sherman, deputy chief privacy officer at Facebook. "We don't know 20 years down the road what technology is going to look like."

"You have to be willing to iterate. We have to have principles that are established, that reflect our views on the way to do this, independent of technology, and independent of specific use of data."

Great. But how do you develop principles in the abstract?

According to Dr Simon Longstaff, executive director of The Ethics Centre, a useful tool here is the "veil of ignorance", a thought experiment proposed in 1971 by American philosopher John Rawls.

Imagine that you're developing the operating principles for, say, an on-demand transport service.

Now imagine that you know nothing about yourself, your natural abilities, or your position in society. You know nothing about your gender, race, language skills, health ... none of these things. The veil of ignorance has descended.

How would you set up the rules for this service when you could be any of the people involved -- driver, passenger, shareholder, brown-skinned, pregnant, mentally ill, drunk, whatever? Or even people not directly involved, such as vehicle manufacturers, regulators trying to minimise their overhead, or residents dealing with any environmental effects?

What principles would be fair and reasonable for everyone involved?

"[By doing that] you can start to get a sense of what you would build, that is technology-neutral, and effective in terms of dealing with our interests," said Longstaff.

Such a "moral imagination", as Longstaff described it, would go a long way towards addressing one of Startupland's most obvious problems -- that services are imagined by, built by, and built for a narrow demographic that's mostly male, mostly white, mostly privileged, mostly aged under 30, and mostly besotted with their own "understanding" of how the world works.

Such a moral imagination might have helped create an on-demand transport service very different from Uber.

Remember the real reason for Uber?

"We wanted to get a classy ride. We wanted to be baller in San Francisco. That's all it was about," said founder Travis Kalanick in 2013.

Such a moral imagination might have helped the creators of the service that, it is alleged, discouraged poor students from university rather than suggesting ways to help them follow their dream. They might have imagined how a teenager would feel being told, "Nah, I wouldn't bother."

And such a moral imagination might have helped human services minister Alan Tudge navigate his way through the Centrelink robodebt debacle, where shoddy algorithms and processes led to unreasonable debt demands being sent to welfare recipients. He might have imagined what it'd be like on the receiving end.

Things work very differently in Canada.

"Governments want to link data, so it might be to cut off somebody's benefits, for example, because you're declaring income which was not [previously] known," said Michael McEvoy, Deputy Commissioner in the Office of the Information and Privacy Commissioner for British Columbia.

"What we're working with government on that is to say that you can do that by machine process, but if you're doing to disentitle somebody, or in some way be prejudicing that individual, a human being has to look at that before any decision is made."

While imagining Tudge with a moral imagination may be a stretch of the imagination, it's not quite as unrealistic to expect an organisation's board to include these issues under the heading of corporate social responsibility.

As I wrote in 2015, big data ethics is a board-level issue.

