
Accenture CTO: AI is the alpha trend

Paul Daugherty, chief innovation and technology officer of Accenture, shares what really matters in artificial intelligence. Learn what you need to know about AI in business from this esteemed leader and author.
Written by Michael Krigsman, Contributor

Public discussions about artificial intelligence generally fall into two camps. One group talks about the dangers of AI and the risks to humanity of unfettered technology development. The second group presents AI and machine learning technology as a panacea that will solve every human problem.

Unfortunately, these loud, extreme voices tend to drown out more reasoned thinking about AI.

Paul Daugherty, the Chief Innovation and Technology Officer at Accenture, is one of those reasoned voices. His new book, Human + Machine: Reimagining Work in the Age of AI, examines AI in business, drawing on substantial research and his experience as a senior leader at one of the largest companies in the world. Accenture employs 435,000 people and has over $35 billion in revenue. Daugherty reports directly to the firm's CEO, Pierre Nanterme.

Given his unique vantage point, I invited Daugherty to participate in a special CXOTalk event held at Amelia City, the AI Experience Lab of the cognitive technology firm IPsoft, in New York City. This new CXOTalk series brings the brightest senior execs to New York for in-depth conversational interviews in front of a live audience.

The AI hype cycle. Photo inspired by Joe McKendrick.

The focal point of our discussion, and the theme of Paul's book, is the connection between people and computers. Although the combination of computing power, data, and algorithms makes artificial intelligence possible, the value only arises when AI serves practical human needs.

Although AI can solve human problems across a range of domains, ethical governance and oversight must be part of the equation. While the flexibility of AI leads to great potential, open-minded skepticism can help us avoid the extremes of unreasonable negativity or unbridled hype.

Please watch the fascinating conversation in the video embedded above and read edited highlights below. The complete transcript is also available.

Why this gap between hype and reality for AI?

Paul Daugherty: Artificial intelligence is what I call the alpha trend: the trend driving other trends and shaping what's happening with other technologies. Yes, it is overhyped, but we believe there's also a huge amount of potential and reality behind that hype. That's why it's important to have this type of conversation: to talk about the real versus the hype and separate the two.

What is "the real"?

Paul Daugherty: AI has been around for 60 years. The term was coined in 1956 at a famous conference at Dartmouth College. It's springing into action now because three things changed dramatically.

One was advances in computing: we now have the computing power, including cloud computing, to run much more powerful algorithms. The second is data: we have data at scale, the cost of data is declining precipitously, and we have new sources of data, IoT, video, all sorts of new information, petabytes and exabytes flowing into enterprises. The third is algorithms: in the 2010 to 2012 timeframe, algorithmic advances gave new life to techniques that had been gestating for a while, things like back-propagation and deep learning. That led to a resurgence of applicability in fields like vision, speech, and natural language understanding, and it propelled us forward over the last five years. That's why it's happening now and why we see this real resurgence of a 60-year-old technology.
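Daugherty's algorithmic examples, back-propagation and deep learning, can be made concrete in a few lines of code. Below is a minimal sketch of back-propagation: a tiny two-layer network learning XOR with plain NumPy. The network size, learning rate, and task are illustrative assumptions chosen for brevity, not anything from the interview.

```python
import numpy as np

# Illustrative sketch of back-propagation: a tiny two-layer network
# trained on XOR. All sizes and hyperparameters are assumptions.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)      # hidden activations
    out = sigmoid(h @ W2)    # predictions

    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient pushed back to the hidden layer

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```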

Where is the ultimate value of AI?

Paul Daugherty: The interesting thing with AI is where you can do different things that you couldn't even have envisioned doing before. An example is work we're doing in the life sciences industry, using new deep learning algorithms to match molecular compound characteristics to therapeutic treatments. This accelerates the matching between disease and treatment, shortens time to market for new treatments, and improves health outcomes, saving lives by solving problems in ways you couldn't solve before. That's doing different things, and that's where some of the really interesting potential of AI comes in.
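As a hedged illustration of the matching idea, here is a sketch of one standard cheminformatics technique: ranking candidate compounds against a target profile by Tanimoto similarity over binary molecular fingerprints. This simple similarity search stands in for the deep learning models the interview mentions, and the compound names and fingerprints are made up.

```python
import numpy as np

# Illustrative only: rank hypothetical compounds against a target
# therapeutic profile using Tanimoto similarity on binary fingerprints.
rng = np.random.default_rng(7)

def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto similarity between two binary fingerprint vectors."""
    intersection = np.sum(a & b)
    union = np.sum(a | b)
    return intersection / union if union else 0.0

n_bits = 64
target = rng.integers(0, 2, n_bits)  # desired activity profile (hypothetical)
compounds = {f"compound_{i}": rng.integers(0, 2, n_bits) for i in range(5)}

# Rank candidates by similarity to the target profile, best first
ranked = sorted(compounds.items(),
                key=lambda kv: tanimoto(target, kv[1]),
                reverse=True)

for name, fp in ranked:
    print(f"{name}: similarity {tanimoto(target, fp):.2f}")
```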

Is the point innovation rather than efficiency?

Paul Daugherty: There's always an efficiency benefit. Cloud computing, all these things, every technology has allowed us to do some things more efficiently, and then opened up new capabilities. AI is no different. A lot of the initial applications of AI have been on the efficiency side because that's often where businesses start: where we can build business cases and get our ROI. That's not a bad thing. It's often where it makes sense to start.

It's still the very early stages, and we should talk more about how this is maturing. But we see the real potential where you can reimagine, that's the word we use: where can you reimagine things and take your business in a different direction than you could before this technology was available?

How can business leaders reimagine their organizations?

Paul Daugherty: We talk about this third generation of work, or business, that we're in. The first generation was Henry Ford and Frederick Taylor: scientific management and assembly lines, where we matched up machines with the hands of people and automated physical labor. That was about 100 years ago.

Then, toward the end of the last century, in the 1990s, we had reengineering, which automated the knowledge worker. It was still people as part of processes; the flowcharts we drew for reengineering treated people, their minds and knowledge, as parts of a process. That was the second generation.

We think this third generation, and this is what reimagine means, is not people in a static, sequential process like we saw with the hands and minds of those first two generations. It's about engaging the human capabilities of a person, to be creative, to be empathetic, to improvise, and combining those capabilities with the power of technology to work in a very different way. That enables this third generation of work that we're just entering, where you can reimagine the way you work in a more agile, flexible, personalized, and adaptive way of structuring your business. That's where the real opportunity comes in for organizations.

Why is the title of your book Human + Machine?

Paul Daugherty: My co-author Jim Wilson, a fantastic colleague, leads our technology research at Accenture. We were sitting down about two years ago, and we noticed a meme in the discussion about automation and AI. On the one hand, robots were going to take over the world: Terminator-type scenarios. On the other hand, there was the meme that AI is going to put all the people out of work, and maybe we need to prepare for a new leisure class because none of us will need to work anymore, with all the implications of that.

We didn't believe any of that was true. We believed that the real power was the plus. It was not humans versus machines, machines fighting humans or taking over humans or putting humans out of work. It was the combination of the two creating new, more human potential for us. As I said with this third generation, we believe we're moving into a more human era that emphasizes our human characteristics.

It's about using technology differently. That's why we wrote the book. Human + Machine is about that combination. The other thing I'd say on it is I strongly believe that the more powerful technology is and the more human-like the technology is, the more it enhances our ability to be human.

We're sitting in Amelia City. If we have technologies like Amelia that can communicate with us in very human ways, understand what we're doing, and even have an emotional intelligence, an emotional AI component to them, that's powerful. It allows us to communicate with these machines more effectively. That's the human plus machine that we tried to get across with the title of the book and the research in the book.

What are the implications for jobs and talent?

Paul Daugherty: Yeah, there are huge implications for talent. I think that's one of the biggest things we see, and maybe one of the questions that requires the most new work to answer. We looked at this a lot in the book, but I think talent around AI is the issue for us and the generation that's coming. This is a long-term transition to AI. It's happening fast, but it isn't going to be over in three years. We're going to be applying this technology for many years to come.

The talent issue is on two levels, and I think that's why it's challenging. One level is the talent for AI itself, the talent to do AI. That's what a lot of people focus on. They raise their hand and say, "I need more machine learning experts. I need people who do deep learning, who know convolutional neural networks," whatever the technology might be.

Yes, that's important. You need those people and access to that talent in your organization. I think we'll solve that in a lot of different ways. You need relatively small numbers of people to do those things.

The bigger issue is not the talent for AI, but the talent that uses AI. How do you change the culture and train the people who need to use AI in these different types of jobs? How do you make sure they understand it, embrace it, and have the right background skills to use it the right way?

That's why we talk about eight skills at the end of our book: the eight skills we think we need to start developing in people so that they're ready to incorporate AI into the jobs they do. Not necessarily the deep AI experts, but those who will use AI, which is most people, as they do their jobs in the future.

How should business leaders prepare for AI?

Paul Daugherty: We're at a point where organizations need to think about executive-level responsibility for AI in a different way. I've said before and would emphasize again: I think it's time for organizations to ask, do I need a Chief AI Officer? That doesn't mean every organization needs to go out and name a new C-level post, but you had better have accountability at a senior management level for the things that matter in terms of readiness for AI.

There are three things I'd put into that bucket of Chief AI Officer-type responsibility. The first is talent, and it's not just machine learning and technical talent, but somebody who is thinking about the workforce impact of these technologies more broadly. That's one big responsibility in terms of readiness, being ready on both ends. We find that centers of excellence in those types of capabilities are good ways to get started.

A second thing that you need to put into that Chief AI Officer readiness category is the data, which we've talked about already.

But one thing a lot of organizations struggle with as they start is that the data is siloed. It's in different organizations. It's hard to pull together. They haven't had one view of data governance across the organization. We work with many organizations that are starting to look at that differently and, again, create a Chief Data Officer, or maybe a Chief AI Officer, depending on how you view it, to pull those data sources together and govern them differently, because AI runs across your business and you need to view it that way.

Then there's the third responsibility, after talent and data: responsible AI. That's the third broad category I think organizations need to make sure they're ready for. The debates of the day that we see in the headlines around some of the tech companies are emblematic of companies needing to think more about these responsible AI issues.

I break them down into:

  • Accountability: you need to think about where you're comfortable with machines being accountable for things and where humans need to be in the loop. There are very few things you want to trust entirely to machines [laughter], so you really need to think about human-level accountability.
  • Transparency, which gets into explainability as well: where is it okay for a machine to decide something without you knowing why? That's a big topic we can talk about more if you're interested.
  • Fairness: there's a big issue with biased data, and I can talk about some examples. There have been public examples where biased data has put companies at risk and put consumers at risk. That's unacceptable. We need to make sure that fairness is enforced (see the sketch after this list).
  • Honesty: if you have a self-driving car, it should follow the speed limit. It probably can sense very easily where the traffic zones are and where the police are, et cetera, and game the system, but it shouldn't. We should design our AI systems to be honest and follow the rules we've set for society.
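
To make the fairness point concrete, here is a minimal sketch of one common audit: comparing a model's selection rates across demographic groups (demographic parity) and applying the four-fifths rule of thumb. The data, group labels, and rates below are entirely hypothetical; this illustrates the kind of check Daugherty argues someone senior must own, not a method from the book.

```python
import numpy as np

# Hypothetical fairness audit: compare approval (selection) rates across
# two groups. Data, group names, and rates are illustrative only.
rng = np.random.default_rng(42)

group = rng.choice(["A", "B"], size=1_000)  # protected attribute (hypothetical)
# Simulated model decisions with a deliberately unequal approval rate
approved = rng.random(1_000) < np.where(group == "A", 0.60, 0.45)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
gap = abs(rates["A"] - rates["B"])

print(f"Selection rates: {rates}")
print(f"Demographic-parity gap: {gap:.2f}")

# A common rule of thumb (the "four-fifths rule") flags a problem when the
# lower selection rate is less than 80% of the higher one.
ratio = min(rates.values()) / max(rates.values())
print("Potential disparate impact" if ratio < 0.8 else "Within four-fifths rule")
```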

Isaac Asimov had his three laws of robotics. These are what I'd call the four principles of responsible AI that organizations need to think about if they want to be ready for AI. Whether you have a Chief AI Officer or not, somebody at a senior level of your organization needs to be thinking about, and accountable for, those things.

Previous and Related Coverage

Artificial intelligence will be worth $1.2 trillion to the enterprise in 2018

Gartner says that AI-based customer experience technologies are boosting market value.

Ex-Google CEO Schmidt's warfare warning: We need AI ground rules for Pentagon work

But Eric Schmidt urges DoD to do more AI programs like the one that sparked protests from Google employees.

What's the next stage in cybersecurity? An AI-powered, data-centric model

CEO of MinerEye tells ZDNet how he stopped chasing bad guys and worked to rethink the paradigm IT uses to protect a company's most valuable digital assets.

Want to survive the technological revolution? Be an adapter

New report shows why it's important to manage the risks of newer technologies such as artificial intelligence and the Internet of Things.

Making AI communication more human

We bring science to human communication, says StarTek's Dr. James Keaten. StarTek monitors the impact on customer experience.

CXOTalk offers in-depth conversations with the world's top innovators. Be sure to watch our many videos!
