The Xerox Palo Alto Research Center, which most people refer to as Xerox PARC, is one of the most fabled institutions in Silicon Valley. Founded in 1970, Xerox PARC is the birthplace of innovations such as (from Wikipedia):
Computer-generated bitmap graphics
The graphical user interface, featuring windows and icons, operated with a mouse
The WYSIWYG text editor
Interpress, a resolution-independent graphical page-description language and the precursor to PostScript
Ethernet as a local-area computer network
Fully formed object-oriented programming in the Smalltalk programming language and integrated development environment
Model-view-controller software architecture
PARC is also where Steve Jobs and early Apple engineers took inspiration for aspects of the Macintosh computer.
Today, Xerox PARC remains an active operation with a host of commercial clients, focused on areas that include printed electronics, data and analytics, cleantech, and contextual intelligence.
As part of the cxotalk.com series of talks with innovators, I spoke with Xerox PARC's CEO, Stephen Hoover.
During the conversation, we talked about innovation, managing brilliant researchers, and the critical importance of user experience. One of the fascinating parts of our discussion was a window Steve provided into PARC's work on the Internet of Things.
You can watch the entire video embedded above and read a complete transcript on the episode page at cxotalk.com. Here are portions of Stephen Hoover's comments, edited for length and clarity.
Why does innovation succeed or fail?
There are three major reasons why significant innovations fail:
The technology isn't right, it's not ready, or it doesn't work
You misunderstand the market and are not solving the problem in a way that customers will adopt
You don't have the right business model and are not able to appropriate value across the ecosystem, to the right people in the right ways, so the whole value chain makes money
Businesses need to make money from delivering product success to customers over the long term. If you're not exploring those other two aspects -- the market and the business model -- then I would argue that you're not doing a good job of innovation.
What is human-centered big data?
Our role is to understand people's jobs and how technology can help them get those jobs done while making it easy for them to adapt.
For example, Xerox has a significant business in customer care, with 2.5 million customer interactions daily. Across all of those interactions, we can learn which problems are most likely to need solving, and what the answers are.
This idea of human-centered big data has a human on both sides of the equation. There is the end customer: we need to understand their state of mind and what they're doing. Then there is the call center agent: we help them take that call and use it as a learning opportunity. We watch what the customer care agent does to solve the problem, using that data and machine learning to make our diagnostic algorithms better the next time.
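The feedback loop Hoover describes, in which each resolved call improves the next diagnosis, can be sketched in a few lines. This is a purely illustrative mock-up; the class and method names are hypothetical and far simpler than any production system.

```python
# Minimal sketch of the learning loop described above: record which fix
# each agent used to resolve a problem, then suggest the historically
# most successful fix the next time that problem appears.
from collections import Counter, defaultdict

class DiagnosticModel:
    def __init__(self):
        # problem description -> Counter of fixes that worked
        self.fixes = defaultdict(Counter)

    def learn(self, problem, fix_that_worked):
        """Called after an agent resolves a call."""
        self.fixes[problem][fix_that_worked] += 1

    def suggest(self, problem):
        """Recommend the fix that has worked most often, if any."""
        counts = self.fixes.get(problem)
        return counts.most_common(1)[0][0] if counts else None

model = DiagnosticModel()
model.learn("no signal", "reboot router")
model.learn("no signal", "reboot router")
model.learn("no signal", "replace cable")
print(model.suggest("no signal"))  # reboot router
```

Real systems would use far richer features and models, but the shape is the same: the agent's resolution is the training signal.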
As computers become more and more capable, we're going to have human-computer teams solving problems, interacting together to come up with a better answer. There is a lot of science, a lot of technology, in that.
[For example,] it used to be that humans were the best chess players in the world; then artificial intelligence technologies came along, and computers beat the best humans.
But who are the best computer chess players today? Human-computer teams. You take a person and a computer, and they work together. Computers are really good at deep, fast search to evaluate all the alternatives, but they are not best at high-level strategy.
You may be considering three or four strategies; the computer investigates each and projects the likely outcomes. The human then picks the next strategy, and they play back and forth in a cycle. That's the best chess player today.
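The back-and-forth Hoover describes can be sketched as a simple loop. Everything here is a toy stand-in: the human proposes candidate strategies, the machine evaluates them, and the human-side selection picks among the results.

```python
# Illustrative sketch of a human-computer ("centaur") decision cycle.
# All names and scores are hypothetical; a real engine would run a deep
# game-tree search where evaluate() returns a canned number.

def evaluate(strategy):
    """Stand-in for the computer's deep, fast search: score a strategy."""
    mock_scores = {"attack": 0.6, "defend": 0.4, "trade pieces": 0.7}
    return mock_scores[strategy]

def centaur_turn(candidate_strategies):
    # The computer evaluates every candidate the human proposes...
    scored = {s: evaluate(s) for s in candidate_strategies}
    # ...and the human picks from the projected outcomes (here: argmax).
    return max(scored, key=scored.get)

choice = centaur_turn(["attack", "defend", "trade pieces"])
print(choice)  # trade pieces
```

The division of labor is the point: broad strategy generation and final judgment stay with the human, exhaustive evaluation goes to the machine.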
We're going to see work happen more and more in that way. We have a significant investment in this whole area of machine learning and empathetic computing, understanding what people's intent and behaviors are and how to help them.
What are opportunities for the Internet of Things?
How do I do low-cost, highly distributed sensing if I'm going to put things on the Internet? If it takes $100 to put a smart computer on a bottle of vaccine to measure its temperature during shipping, well, I won't do that.
If I want to put a temperature sensor on a bottle of vaccine for 50 cents, I don't need a whole lot of intelligence, and silicon has made it cheaper to cram in more and more intelligence. But I want the price down around a dollar for a smart label sensor that can sense those things.
We're working on technologies, like printed electronics, to make very low-cost electronics that are smart enough. We think that's the Internet of Everyday Things.
The Internet of Things is about Googling reality. Right now, my body is the sensor: I see things, I hear things, I sense the world around me. But why does sensing have to be synchronous in time and space with where I'm at, which is how my own senses normally work?
I can instrument and understand what my customers are doing with my products across the world now. I can see if those devices are starting to fail. I can adapt their behavior to be responsive to the local environment. [For example,] GE and their jet engines: when a plane's running into a headwind, the jet engine can run differently because it knows it's in that situation.
Couple the Internet of Things with data analytics and machine learning to make sense of all that data, to get a job done.
What is the future of the Internet of Things?
I think there are interesting long-term opportunities for what we call 'systems of systems.' For example, satellite swarms. Right now, we build one big satellite and send it up into space; that satellite is expensive, and if it fails, you're done.
Instead, what if I could build a series of small satellites that are individually redeployable but controllable in a coordinated way? So it's a swarm of 50 satellites, small and cheap. If one dies, that's okay.
When I want a lot of imagery of a certain place, I'll aim 50 of them at the same location; when I don't, I distribute them [more broadly]. There's a challenge because you've got a complex system and you're redesigning it constantly during use, because you want it to do different things. As some pieces fail and others don't, you task them to look at different problems, to sense different things. There's a whole science around AI planning for managing that system of systems.
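A tiny sketch can make the retasking idea concrete. This is an illustrative toy, not PARC's actual planner: a greedy policy assigns whatever satellites are still alive to observation targets, highest priority first, so a failed unit is simply routed around.

```python
# Toy sketch of coordinated swarm retasking. The data shapes and the
# greedy assignment policy are assumptions for illustration only.

def retask(satellites, targets):
    """Greedily assign alive satellites to targets, highest priority first."""
    alive = [s for s in satellites if s["alive"]]
    plan = {}
    for target in sorted(targets, key=lambda t: -t["priority"]):
        want = min(target["demand"], len(alive))          # take what's available
        plan[target["name"]] = [alive.pop()["id"] for _ in range(want)]
    return plan

# Six satellites; satellite 3 has failed and must be planned around.
sats = [{"id": i, "alive": i != 3} for i in range(6)]
targets = [{"name": "storm", "priority": 9, "demand": 3},
           {"name": "survey", "priority": 2, "demand": 5}]
print(retask(sats, targets))
```

Real AI planning for a swarm would handle orbits, communication, and re-planning over time, but the core behavior is the same: the plan is recomputed from whatever assets currently work.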
But, back to my human-computer team. In the end, those systems are being tasked by a human. How does a human interact and manage that level of complexity while ensuring the system has local autonomy and understands what the human is trying to do? We're working in that space.
Think about the Google autonomous car. When you've got thousands of autonomous cars on a road, how do they behave together? And how do they behave with the humans who interact with them? We think that's where the next wave of complexity will occur in automation. It's going to be systems of systems interacting.