Artificial intelligence: Job killer or your next boss?

Summary: As software automates an increasing number of tasks, is it time to reassess traditional workplace roles and create an office that suits both man and machine?

Could information technology become an engine of unemployment, automating roles once performed by scores of human workers?

Some writers and academics argue this is already the case, blaming communications technologies and automation for the falls in both wages and the number of men finding jobs in the US over the past 40 years. The displacement of workers and the downward pressure on wages will only worsen, they argue, as increased computing power and better software empower computers and robots to take over more roles in factories and offices.

The net result, argues MIT economist Erik Brynjolfsson, could be a widening of the already sizeable gap between the earnings of employees and employers, and between college graduates and the less educated.

A sensible response to this shift, Brynjolfsson says, is to reassess workplace roles to find tasks particularly suited to people, and have humans and computers work alongside, rather than against, each other to complete them.

Intelligence augmentation

This notion of intelligence augmentation dates back to the early days of computing and to engineer Vannevar Bush's vision of an intellectual symbiosis between man and machine. In 1945, Bush wrote about a future where an associative store of all books, records and communications, which he called the memex, would aid human recollection, a concept that today is embodied by the World Wide Web.

The power of man and machine working in unison was at the heart of a speech about intelligence augmentation by Ari Gesher, engineering ambassador with Palantir Technologies, at the recent Economist Technology Frontiers 2013 conference in London.

"The idea is to have a very well defined division of labour between the computing machines and the humans," he said, spelling out the complementary skills of men and computers.

"Most of AI is statistics. Any time you need to do this kind of statistical processing - be it figuring out how to target an ad, give recommendations to someone on Amazon or figuring out how to segment a voting population - computers are magic. They can really come up with very good robust answers to those kind of questions. These statistical methods basically depend on the characterisation of data remaining the same.

"We [also] know what humans are good at. It's making hypothesis, writing poetry, dealing with things like incomplete data. Recognising patterns that are similar to other patterns that have been seen before but are not the same."

The online game Foldit provides an example of how to exploit the relative strengths of humans and computers when it comes to information processing, he said. In the game, players fold computer models of proteins to help scientists gain insights into their real-world structure. Computers can take the brute-force statistical approach, but human pattern-recognition skills, enabled by the brain's visual cortex, have allowed people to devise solutions to Foldit tasks that computers have been unable to match.

By being aware of these relative abilities, and matching people and machines to the right tasks, you can outperform machines or people acting on their own, Gesher said.

"The idea is to do everything you can to remove the friction at the boundary between man and machine. Offload as much as possible onto the machines and bring in the injection of human insight into the system."

How to beat a chess grandmaster

The power of human-machine collaboration was demonstrated by two unranked amateur chess players in 2005, he said. The pair took part in a Playchess.com freestyle chess tournament, where individuals can team up with other people or computers. Using custom chess software running on three laptops to analyse play, these amateurs were able to win a competition that featured the Hydra supercomputer and several grandmasters.

"They understood the problems of chess well enough to know how to communicate with the computers to get them to do all the right work," said Gesher.

In this instance the deciding factor in who was victorious wasn't the ability of the individual humans or computers to play chess, but how effectively the human and computer chess players were able to work alongside each other, he said.

"The grandmasters knew a lot about chess but they didn't know how to use the computers as effectively as possible, how to leverage them to win."

The reason that perfecting the interface between man and machine can pay such dividends is that increases in computing power have outpaced our ability to exploit them, Gesher believes.

"In 1960, $1,000 would get you one calculation per second. Today that number is somewhere around 1010, so you're talking about nine orders of magnitude in 50 years," he said.

"What's special about this? It's never happened before. This exponential growth inside two or three human generations is completely unprecedented in human history. We're still figuring out how to use those machines' effective power.

"[Therefore] small changes in the friction at the interface boundary, in how we offload work to computers, can lead to huge gains in the work that we do."


About

Nick Heath is chief reporter for TechRepublic UK. He writes about the technology that IT decision-makers need to know about, and the latest happenings in the European tech scene.


Talkback

  • The more things change, the more they stay the same

    We heard this rant when the cotton gin was invented, when the steam engine was invented, when the computer was invented, and when the automated loom was invented. It never pans out, because people adapt and new jobs created by the technology appear. Of course, back when those inventions occurred, people had the cultural attitude that they needed to provide for themselves and not have some nanny state wipe their nose and make sure they were safe on the playground. Who knows how it will play out in today's "take care of me" culture.
    baggins_z
    • Except the machines you cite ...

      require oversight, an operator and a tender.

      Computers ARE the operator. And if something goes wrong the computer tells the tender where to look for the problem if not the actual fault.

      But then, you rambled on about the nanny state, so you're probably unable to figure that one out.

      Rob Berman
    • misnomer

      Machines rendered physical slavery moot.

      Society whines that we don't think, but it doesn't think either, preferring self-fulfilling prophecy and blaming afterward...

      Think about that.
      HypnoToad72
    • ps

      With all the free time AI will give you, just as the cotton gin did, what will you do to contribute? What if you can't? Produce or perish, or pro-life: which are you?
      HypnoToad72
    • pps

      And with all the competition, how do you plan to remain economically viable -- and ethically so?

      Keep thinking... but not too much. Neo-feudalism might not be the future either...
      HypnoToad72
  • Until they don't....

    Just ask Malthus. His theory was pretty accurate for the thousands of years of human history that preceded him, yet he was proven wrong by many of the inventions that you mention. Yours has only a few hundred to back it.

    The average human brain is about 10x more powerful than the fastest supercomputer that we have now. Extrapolate Moore's law into the future and we will see desktop computers with more horsepower than all the brains of the entire human race combined.

    All previous "job destroying" inventions only replaced people at a specific task and still required human oversight. So, being the adaptable creatures we are, we could move on to other more interesting things. But with that level of computing power we start arriving at machines which are not replacements for people at specific tasks, but replacements for people flat out.

    With the advances in machine vision and dexterity occurring recently, we are beginning to zero in on replacing unskilled labor entirely. Not everyone is bright enough to adapt. Even Foxconn is deciding that Chinese wages are too expensive compared with machines. Watson is starting to nibble at white-collar brain jobs too.

    Our current economic models do not really support obsolescence of the human race. We may end up with augmented humans a la Deus Ex, we may end up with an odd socialism like Vonnegut's excellent Player Piano, or we may end up in a society where only ownership, not productivity, is relevant and most people are incredibly poor.
    SlithyTove
    • The brain and thinking can't be replaced by machines or "computers",

      and what the brain does has nothing to do with speed or how fast it does the thinking; the brain is also not about "horsepower", since it's "designed" to "just" decipher and solve problems and do wonderfully "creative" things. What the brain does, no computer today can even come close to matching.

      As it is, the human mind is not just 10x more powerful than the most powerful computer; the mind is millions of times more powerful, and no computer will ever be able to "think", and that's what the mind is about.

      A computer can be programmed to "emulate" what a human does, and perhaps to even exhibit a little bit of "thinking", but at the end of the day, it's still a set of computer instructions designed to mimic what humans do, and at the lowest level of "thinking". Heck, even a mouse is many thousands of times smarter than the most powerful supercomputer of today.
      adornoe
  • Malthus' theory will always fail

    precisely because it doesn't take into account technological advancement. Likewise, this theory that AI will completely replace humans will also fail, because it doesn't take into account that there is more to intelligence than computing power. If you assume a world with perfect AI and robotization, then there becomes no need for humans to work; they'll simply tell the robots to make them whatever they want. If there isn't perfect AI and robotization, then there will always be a demand for the intangibles that only humans can provide.
    baggins_z
    • A modified Malthus

      that takes into account technological advancement increasing the population cap works fairly well. And Malthus' theory still accurately describes large swathes of the modern world, even if first-world countries have largely left it behind. It wasn't a failure so much as incomplete.

      "Likewise this theory that AI will completely replace humans will also fail because it fails to take into account that there is more to intelligence than computing power. "

      Indeed there is a lot more to intelligence than computing power. That's why we won't suddenly have human-like intelligence when we get computers that are as fast as the human brain: we don't understand the algorithms yet.

      But once you start talking about desktop computers 6 billion times faster than the human brain you no longer need to understand the algorithms. We can brute force it with evolutionary algorithms and the biggest problem simply becomes supplying the inputs to allow it to learn correctly at high speed (over a hundred human years of experience per second).

      The problem with an economy where we simply tell the robots to make whatever is that we will still be resource bound. Labor will not be a relevant cost, but material will be and the question becomes how to distribute resources. It also assumes that we have a rational economic transition to that state which may not be the case. There will be a long, messy, middle ground where much can go wrong.
      SlithyTove