
Data science, ethics, and the 'massive scumbags' problem

Discussions of ethics in data science and artificial intelligence are all well and good, but they won't go anywhere if the prime directive is making massive profits for venture capitalists.
Written by Stilgherrian, Contributor

"You're OK with doing business with evil people, right?" One of Australia's leading venture capital advisers had asked me that question back when the first dotcom bubble was about to burst, in the year 2000 or 2001.

I was involved with a startup, and we were meeting to discuss what the business plan needed to look like. The basic requirement, we were told, was a growth chart that went up and to the right at an acceptable rate. Investors needed to see that they'd get the return they'd expect.

It didn't matter what the startup actually did. Investors assumed you knew your technology, so you could easily baffle them with the bovine stuff. Your startup could be curing cancer or drowning puppies, as long as that chart went up and to the right. Up, and to the right.

"Don't worry, they'll give you the money," the adviser said. "Just remember that these people are evil."

Nearly 20 years later, what's changed? Not much, it seems.

I've been popping in and out of the inaugural Ethics and Data Science conference at the University of Sydney this week.

There have been some fascinating discussions about ethics, the morality of algorithms, and even whether ethical decisions might be automated. But there have also been some reminders of challenges that just won't go away.

Discussions of ethics in the context of big data aren't new. In 2015, I wrote that big data ethics are a board-level issue. I've even suggested that big data evangelists are pushing a dangerous faith-based ideology.

The current buzz issue in these ethical discussions is fairness, says Dr Matthew Beard, a fellow at The Ethics Centre in Sydney. And he's right. Consider examples like the race bias in facial recognition, or the alleged algorithmic racial profiling by NSW Police.

"Before we started talking about fairness, the buzz issue was privacy, and those are both crucial things. But do they represent the full picture of ethical concerns that we need to focus on?", Beard asked the conference.

"By focusing too much on one thing, we can miss other things that we ought to be concerned with."

Beard is the co-author of the free book Ethical By Design: Principles for Good Technology. He says that organisations need a set of values to operate by, based on the organisation's purpose.

See: AI and big data vs ethics: How to make sure your artificial intelligence project is heading the right way

For example, the University of Sydney is a research institution. Its purpose is to advance knowledge, so it's going to value things like truth, rigour, and innovation.

"But as we all know, there are lots of people, lots of institutions, that have a very clear purpose for the world, they have things they care about very strongly, but they happen to be massive scumbags," Beard said.

It probably won't surprise you to learn that his case study for the conference was the saga of Facebook and Cambridge Analytica.

Beard identified five "ethical smokescreens" that could work against ethical behaviour, or even prevent ethical issues from being considered at all.

  • Techno-logic: Technology is a way of seeing the world in which we aim to reduce the world to something we can control. But this worldview tends to ignore things which can't easily be measured, and that includes many human factors.
  • Value neutrality: This worldview says that technology doesn't do bad things, people do. No, in fact, technology frames our choices. "If you are designing a gun, you are also making a value statement that sometimes it's a good thing to kill people," Beard said. "Looking at someone through binoculars and looking at them through the scope of a sniper rifle is a very different process."
  • Dilution of responsibility: The inverse view is that if something goes wrong, we can blame the technology, not the people who created it. If an algorithm is unfair, that's a shame, but it's not our fault.
  • Inevitability: Technology is inevitable, and we shouldn't try to shackle the wonder of human creativity. Alternatively, "Someone's going to build this technology so it might as well be us."
  • Progress thesis: This is the idea that the most important thing is advancement, and we can deal with the problems later -- because that wonder of human creativity will find a way.

According to Beard, there has to be an ongoing discussion about ethics, including input from people outside the organisation.

"As soon as you turn it into a box-ticking exercise it's no longer ethics," he said.

"Technology systems and processes will only reflect your good intentions, your desires, and your ethical disposition inasmuch as you put them in there explicitly. It's not enough to have good intentions and then build a tool that doesn't reflect those things."

But back to the evil people...

Here in Australia, the Australian Competition and Consumer Commission (ACCC) has the potential to regulate artificial intelligence (AI) and algorithmic decision-making, but a number of public bodies are "becoming very timid" when it comes to technology, according to Julia Powles, Associate Professor of Technology Law and Policy at the University of Western Australia.

"This logic and fetishisation of innovation takes away from anything that would be about protecting individuals," Powles told the conference.

"One of the reasons I get so infuriated about the conversation we have about AI is I think there's never been a technology more realisable to the public interest," she said.

To do AI, you need access to datasets, compute power, and smart people, and we have those things "in abundance".

"To see the space and the development of technology so categorically deferred to a small number of private players is really quite tragic. The key piece of this is finance," Powles said.

"At the moment, the only way to build scaleable tech systems is to have venture capital funding, which fundamentally compromises the sorts of systems you build."

At the moment, then, it's evil people all the way down.
