Looking back over 2012, one thing is clear -- of all my waking hours that year, I must have spent 38% of them arguing (in a good way) with people over whether a tablet can be used for "proper work" and whether putting a keyboard on one is a good idea. This argument comes down to one question: are PCs and tablets so closely related that each should be able to do what the other does? Spoiler: no, they're not, and no, they shouldn't.
For me, this issue comes down to classification. It's easy to see one form of computing device and conflate it with others. A PC, for example, has a processor, memory, and an operating system, just as a tablet does. And they can be used for similar things. If an iPad and a PC both have an operating system, why should one have a keyboard and the other not? Why should creating Word documents be within the domain of one and not the other?
The point is that a PC is not a tablet, and a tablet is not a PC. If we as technologists fail to maintain this separation, it becomes hard to see the relative strengths and weaknesses of each, and that makes it harder to make good decisions. What we really need is a better classification that more clearly defines "PCs" and "smartphones and tablets" as discrete ideas.
If you enjoy computing and enjoy the PC, you likely know about the story of Xerox PARC and the origin of the PC. Others have told this story much better than I, but here's a recap for the sake of argument.
The Palo Alto Research Center (PARC) is a research organisation operated by Xerox. Back in the 1970s, its researchers were working on a new form of microcomputer designed for use in office environments. They invented the WIMP user interface, the desktop metaphor, and lots of other bits and bobs that we still use today. For me, this research group invented the PC itself -- although that statement glosses over the work that myriad companies did to turn the PC into a world-dominating product.
If we look at the early life of the PC, that type of device gained traction because it was identified as one that could deliver efficiency improvements in commercial settings. (For example, an organisation might be better at managing its creditors with a single PC running accountancy software in an office than with a paper-based system.) That "commercial efficiency" is the essence of the PC. That, right at the core of it all, is all a PC is -- a type of device that can be tied back to a business case, with the payout being increased efficiency. The fact that you can use a PC for stuff that isn't about corporate efficiency has more to do with luck, and humanity's general ability to bend things designed to do X into doing Y, than anything else.
That classification adequately defines the PC, but without an equally good classification for smartphones and tablets, we're in danger of the two groups blurring and mashing together. What we need is a way of drawing a similar boundary around smartphones and tablets.
Back to PARC, then. In the 1980s, researchers at PARC started to talk about the idea of "ubiquitous computing", often shortened to "ubicomp". The phrase is attributed to Mark Weiser in 1988, when he was PARC's Chief Technologist. (Weiser died young, at just 47.)
The easiest way to grab your attention about ubicomp is to tell you that one of its central tenets is a computing environment made up of a large number of small devices coexisting in a network. Weiser and his team further subdivided those devices into three sets: tabs, pads, and boards. Tabs are supposed to be wearable devices, although I think the modern view is that they are actually smartphones. Pads are supposed to be handheld devices. Remember how we all laughed back in 2010 when Jobs announced that Apple's new tablet would be called the "iPad"? Not only does no one laugh at the name now, but it appears that the name "iPad" was lifted directly from the ubicomp manifesto. Boards are less common, but they do turn up in educational environments, and there is some precedent for this sort of device in Microsoft PixelSense (the table-format devices that used to be called Surface) and in Microsoft's acquisition of Perceptive Pixel, whose displays are much more like what we'd consider a "board". Even from this basic definition, we can see that the things we call "smartphones and tablets" are defined clearly within the ubicomp vision.
Whilst that prescience is cute, what we're really looking for is the way in which ubicomp creates a separating force between new types of devices -- tablets and smartphones -- and the PC. To do that, we need to get a little more wacky.
Ubicomp is supposed to operate in the background of our lives. If you own a smartphone, you will likely have experienced this next concept thousands of times. Imagine you're waiting in line at the store. You take out your smartphone, check your Facebook account, and put your phone back in your pocket. That is ubiquitous computing in action. The device is there, the network is there, but it's in the background. It's in your foreground awareness for as long as it needs to be, before receding into the background smoothly and without fuss.
That's a key difference between ubicomp and the PC. I've been at work now for about nine hours. Apart from a small break for lunch, I've essentially been staring at a screen that entire time. The PC isn't a background device -- it's a foreground device. That tendency comes from its commercial-efficiency roots: when you're at work, you're supposed to be busy, and the PC is your tool. When you're using a ubicomp device, you use it only for the period when you need it, whilst getting on with the rest of your normal life. (Of course, there's nothing to stop you from using a PC, turning away from it, and using a smartphone, in which case you're doing both. And there's also nothing to stop your digital life subsuming your entire "in real life" life, in which case what's background and what's not becomes a bit fuzzy. But I digress.)
Next up is the idea that ubicomp is about "calm". This is related to the idea of a device receding into the background, but it has subtle implications for the user experience (UX) itself. In practice, ubicomp operating systems have achieved this by eschewing windowing in favour of a single-serving approach: one full-screen app at a time. What "calm" means in this context is taking steps not to overwhelm the user, both in terms of data presentation and in terms of the cognitive load involved in actually using the interface. A single-serving approach does this really well. Another way to avoid overwhelming the user is to reduce intimidation by ramping up the security and trust on the device compared to the PC. Locking down devices and making them much simpler achieves this goal of creating "calm".
The final tenet of ubicomp is (literally) about philosophy. In a 1994 talk, Weiser spoke about how, in order to understand ubicomp, you have to "start from arts and humanities: philosophy, phenomenology, anthropology, psychology, post-modernism, sociology of science, feminist criticism". Critically, he goes on in that talk to say: "This is the most important part of the talk. You may not get it on first hearing. Patience." What he was trying to underscore here is that the sociological aspects of ubicomp devices trump the technology. Features and specifications should influence the design of ubicomp devices only insofar as they're necessary to actually build them. Ultimately, the user shouldn't care about things like whether the device has a dual-core or quad-core processor.
Personally, this mirrors my own experience. Whenever I talk to people about, for example, why being able to run Word on an iPad isn't relevant, this is the area where we diverge. Yes, of course you could run Word on an iPad -- it has a screen, a processor, memory, and an operating system -- but doing so is unsympathetic to the ubicomp vision. Ubicomp is about one's relationships with others and with oneself, not about work.
The reason I wanted to do this piece on ubicomp was to try to show that our natural inclination as technologists to conflate PCs with smartphones and tablets ignores a lot of very obvious work that happens to have come out of the same organisation as the PC itself. Whether or not it's important that PARC invented both the PC and ubicomp is something I've yet to make peace with. The more romantic-slash-superstitious side of me tends to think it is relevant. The more scientific part of me wants to write it off as coincidence. But whatever the reality, the point is that these types of devices are different, with different histories, progenitors, and philosophies.
What I think we can learn from this is the importance of a deliberate separation. Without wishing to bash Microsoft's tablet strategy in Windows RT, Windows 8, and Surface, it does show what happens when you try to merge the world of the PC with the world of ubicomp. It's a bit like trying to cross-pollinate an aspidistra with a koala -- there's a fundamental problem with the genetics that stops you from achieving a sensible result. Windows RT fails miserably at achieving calm because so much of the PC world has been inherited by a ubicomp device, and the PC world is not calm. One of the key things that Weiser and his team were looking to achieve with ubicomp was undoing the damage caused by the PC's lack of calm. Microsoft seems to have totally failed to get that. But, on the other hand, an understanding of ubicomp lets us appreciate more keenly what Windows RT and Surface are good at -- together, they're just a different sort of PC.
You may be expecting this, but iOS is a particularly good example of a ubicomp operating system, something I attribute to Steve Jobs and Apple actually setting out to design ubicomp devices with the iPhone and iPad. For what it's worth, trying to draw out from history whether Jobs was a student of ubicomp is more difficult than you would think, but you can see evidence of ubicomp in Apple's design decisions throughout the company's life. (A salient point here is that Apple's whole ethos has been far less about building computers to drive commercial efficiency and much more about the user's relationship with themselves and others.) A nice illustration of Jobs seeming to be interested in ubicomp comes from the launch of the iPad 2 in 2011: "It's in Apple's DNA that technology alone is not enough -- it's technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing, and nowhere is that more true than in these post-PC devices". That statement mirrors Weiser's own words about ubicomp quite elegantly.
For me, Apple's primacy in the technology market today stems from its harmony with how normal people seem to want to do computing -- in a "ubicomp way". In much the same way, Microsoft's primacy in the market stemmed (note the past tense) from the market seeing opportunities in commercial efficiency improvements.
The question is: which is more important going forward? I can't see how it can possibly be the PC, simply because there is much more non-work stuff than work stuff in our society. Ubicomp, FTW.
What do you think? Post a comment, or talk to me on Twitter: @mbrit.