
The death of the PC has been exaggerated: Get ready for the era of ubiquitous computing

It’s not Post-PC, it’s not PC+. Tomorrow is already here, it's just not evenly distributed.
Written by Simon Bisson, Contributor

Welcome to the future. Tomorrow is already here, it's just not evenly distributed (as William Gibson would say). It's a future that's in the middle of being born, one where natural user interfaces, machine learning, high-bandwidth wireless connectivity, and the cloud are changing everything we know about computing. It's also a future that's been a long, long time coming. More than 20 years, in fact.

Let's roll the clock back to the early 1990s, when I was working in the advanced local loop group of a major telecoms research lab. While my job was to try to understand what the future of the last mile of copper between switch and home would be, I realised quite quickly that that future was going to be driven by the endpoints of the network.

As they became more complex, they'd demand more and more bandwidth — and, oddly, less and less human interaction. Computers would talk to computers, and we'd take advantage of that endless chatter to build systems that would make the complex seem easy.

That's the world we're starting to see, where the things around us are becoming connected, delivering data and communicating with each other. It's a world where we can tap into cloud-hosted computing on our omnipresent pocket computing devices, anywhere and at any time. It's a very different world from the early days of computing when we had to go to a specific place to book a specific time to take a share of the limited computation that was available. Now computing is, well, ubiquitous.

Internet All The Things!

Back in those heady days of the early 90s, one research scientist at Xerox PARC, Mark Weiser, was charting out the world we live in, building experimental systems that have become the inspirations for much of today's computational infrastructure. Weiser called it "ubiquitous computing", and he discussed it in detail in an influential article for Scientific American in 1991, "The Computer for the 21st Century".

He foresaw a world where what mattered was the user, and their data, not the computing equipment they used – and where it was "one user, many computers", with computing as commonplace and invisible as writing. It was a world where different classes of computing device would share information seamlessly, letting computation flow from device to device, and from user experience to user experience. We're well into the second decade of the 21st century now, and from where I'm standing it looks like we're heading right into the world he thought we'd be building. Sadly, though, he's not here to see it; he died in 1999.

I found that paper while I was working at that telecoms lab, and like Ted Nelson's hypertext rant Dream Machines, it became part of the work I was doing – helping me understand that the local loop was about much more than just voice, and that data was more than (in those days) email, Usenet and FTP. At the heart of where Dream Machines and ubiquitous computing met was the fledgling web, and the protocols that defined an interconnected and (as Nelson would put it) deeply intertwingled world.

That last point was particularly important, and one that's key to understanding the ubiquitous computing revolution that's happening around us. Weiser focused on three classes of device (though in practice there would be many, many more). He called them Tabs, Pads and Boards — the last being wall-sized interactive surfaces, so it's easier to think of them as Walls.

Tabs would be tiny devices, often without displays. They'd be sensors and effectors, the "things" in today's Internet of Things. They're the grains of compute that are spread across the world to gather a picture of what's going on, and just where it's happening. Think of a tab as a device like an Arduino, just with built-in connectivity. As well as acting as sensors for the wider network, tabs could form the basis of low-cost, simple wearable computing systems. Andy Hopper at Olivetti's European research centre in Cambridge implemented them as smart, interactive active badges that made it possible to route calls to the nearest phone, as well as to control access to rooms.
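
To make the tab idea concrete, here's a minimal sketch (mine, not Weiser's) of the kind of loop such a device might run: a headless sensor node that periodically reports a reading to a collector service somewhere on the network. The collector URL, device ID and payload shape are illustrative assumptions, not any real product's API.

import json
import time
import urllib.request

# A hypothetical "tab": a tiny, displayless sensor node that does nothing but
# report what it measures to the wider network. The endpoint and device name
# below are made up for illustration.
COLLECTOR_URL = "http://example.com/readings"
DEVICE_ID = "tab-001"

def read_sensor() -> float:
    # Stand-in for a real sensor read: temperature, occupancy, location...
    return 21.5

def report(value: float) -> None:
    # Post a single JSON reading; the cloud side aggregates the endless chatter.
    payload = json.dumps({"device": DEVICE_ID, "value": value}).encode("utf-8")
    request = urllib.request.Request(COLLECTOR_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request, timeout=5)

while True:
    report(read_sensor())
    time.sleep(60)  # tabs spend most of their time quiet

Swap the HTTP post for a short-range radio message and the Python for microcontroller firmware, and you're close to what Hopper's active badges were doing two decades earlier.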

Pads were a very different concept, similar to today's tablet computers, but vastly more flexible. They'd mix their own processing with local and what we'd now call cloud resources. While much of Weiser's work addressed scenarios similar to those anticipated by Alan Kay's proto-tablet Dynabook, pads are more than slates — encompassing everything from smartphone-like devices to desktop PCs. Pads would be the main route of user interaction with a ubiquitous computing environment, offering different user interfaces depending on the context of the user.

Walls are something different, offering large multi-user interactive surfaces that could be used for display and for collaboration. They could be horizontal or vertical, screens or projected — just as long as they were interactive. You could think of a wall as something like a 55-inch Perceptive Pixel display, or one of Microsoft's original table-top Surfaces – or even an Xbox One hooked up to a large-screen TV, using Kinect for user interactions.

Put together, the components of Weiser's ubiquitous computing future looked very much like Star Trek: The Next Generation. Wearable communications devices were a gateway to the ship's computers, which displayed information and took inputs from flexible tablet-like devices, using massive wall screens and projections for additional many-to-many work. Under the covers of flexible, task-appropriate user interfaces, the real work was done by massive shared computational systems.

Of course Weiser wasn't alone in thinking about ubiquitous computing (or ubicomp). IBM's research teams were discussing something similar, calling it "pervasive computing". A few years later, working in the mobile group of one of the first web consultancies, we suggested calling the background chatter of the any-to-any world of meshed smart devices and the sensor networks they'd build "ambient computing", taking a cue from Brian Eno’s musical philosophy.

It's not surprising to see that fictional future and the ubiquitous computing world it describes reflected in the three largest computing ecosystems. Google's Chromebook and Android are its Tabs and Pads, while the Chrome browser scales to the largest Walls. In the cloud, App Engine and Google Apps power user experiences across all those devices. Similarly, Microsoft's Devices and Services model of "three screens and a cloud" puts Azure behind Windows, Windows Phone and Xbox, while Apple's iCloud underpins both iOS and Mac OS, and many of the applications that run on its devices.

You can call this world we're building post-PC, but that’s an exclusionary term that removes one element of the way we're smearing compute from things to pocket to desktop to machine room to cloud. In a ubiquitous computing world we're not getting rid of things — we're adding compute to much more. That may mean we use traditional computing devices less; after all, who still uses a teletype? But it doesn't mean that they go away.

Instead, it means that we'll access that computation in different ways: tapping on a small screen, swiping on a larger touch surface, typing on a keyboard, talking to a TV. When you look back at Weiser's work, it's no wonder that Microsoft called its family of tablets Surface, or that Apple called its tablet the iPad. Those are terms that harken back to that Scientific American paper, and to the new computing world they're building. Microsoft's vision used to be a computer on every desk; now we're looking at computers everywhere.

Welcome, then, to the ubiquitous computing future. You're already part of it. Now, let's see what we can do with it.
