Apple iOS 7: Removing the obstacle of user interface

Some have accused Apple of copying the user experience of its competitors in the latest version of its mobile OS. Others complain that the changes are too drastic, or that they have not gone far enough. The reality is that the future of computing has one ultimate and common goal: to eliminate the user interface altogether.
Written by Jason Perlow, Senior Contributing Writer

There's been a lot of talk about the visual and user experience changes in iOS 7, Apple's flagship mobile operating system, which powers the iPhone, iPod touch, and iPad.

My ZDNet colleague Steven J. Vaughan-Nichols refers to it as a "mashup" of everyone else's mobile operating systems, Android in particular.

Our resident mobile software consultant Matthew Baxter-Reynolds sees it as a question of whether Apple should have "moved the cheese," and of how much it is willing to risk alienating its end users with changes in order to advance its products.

And our editor-in-chief, Larry Dignan, would like us all to stop complaining about user interface changes and just move on.

Image: CBS Interactive/ZDNet

What is the common thread in all of this? Clearly, changing user interfaces and the challenge of human-device interaction are not just Apple problems. They are problems that every device manufacturer and software developer has to deal with now that we are moving into an age of ubiquitous computing.

And it's not so much a "post-PC" problem as a "Where do we go from here?" problem.

As an industry, we don't have a detailed plan of the steps that are required, and what changes are needed to get us to the next phase in human-to-device interaction. But we have a very good idea of what the ultimate destination, or rather the desired destination, is.

If there is any doubt about what that destination actually looks like, look no further than the works of popular science fiction.

For example, in Star Trek, human beings interact directly with computers and devices largely without user interfaces at all. Our favorite characters use voice recognition and work with artificial intelligences all the time to interact with information systems.

The "Communicator" has no UI; you simply speak to it as an extension of the Enterprise main computer system. Taken even further, there are beings depicted on the show and in the movies (such as the "Borg") that interface directly with computers by linking them with their brains.

This is not just a theme within Star Trek, but a common one in much of SF literature, such as William Gibson's Neuromancer novels published in the mid-1980s (and the short stories that preceded them, such as Johnny Mnemonic and Burning Chrome); it was further popularized in movies like The Matrix.

If you want to go back even further than the 1980s, there are endless examples in the works of the Grand Masters such as Asimov, Clarke, Heinlein, and Silverberg, just to name a few. And we shouldn't forget Philip K. Dick, either.

The ultimate goal of directly interfacing with computers, or of creating a combined man-machine intelligence, has been termed the "Singularity," in the context of a prevailing theory developed by notable computer scientists, mathematicians, and theoretical physicists such as Ray Kurzweil and Michio Kaku.

Such a symbiotic existence for humanity is not expected to happen for hundreds or perhaps thousands of years.

In the interim, using conventional technology, the more immediate goal is the raw consumption of information in the form of services, or data streams, through applications such as Twitter, Vine, Instagram, and Facebook. The notion of a "lifestream" as a replacement for the current web paradigm has also been introduced by Yale University computer science professor David Gelernter.

I fundamentally disagree that consumption of lifestreams will replace the web, at least in the intermediate future.

I have also noted my reservations about what a stream-oriented society will do to humanity as a whole, particularly as it relates to our youngest generation. But that's not what this article is about.

To approach the Singularity, or any type of direct human-to-computer interface, we have to remove a number of obstacles so that information can be presented as efficiently as possible.

Ultimately, the purpose of computers and devices is to present information and to facilitate its communication to and from the end user.

Human beings are slow; computers are fast.

The primary obstacle bottlenecking the interaction between human and computer is the user interface. The Singularity is ultimately the complete elimination of the user interface.

In order to remove those bottlenecks, user interfaces have to become much more simplified. So it's no surprise that all of these mobile and even our desktop operating systems are moving in this direction, as they are all attempting to solve the same problems.

Smartphones and tablets have minimal screen real estate, and ultimately you want to present information to the user as quickly and efficiently as possible, with as few obstacles as possible between the user and that information.

When I first saw iOS 7, my first thoughts were not "Hey, this looks like Android." Actually, my initial thoughts were "Hey, this looks kind of Google-ey."

Now, you might not see a distinction between Google and Android's user experience, but to me, they are different.

Google has always favored minimalist user experience design in its software, whether in its search page or in products like Gmail, even before the company purchased Android in 2005. So you could say that Android has inherited Google's tendency toward highly efficient, minimalist UX.

Now, to say that Apple has copied this is a bit much. The cognoscenti of Silicon Valley are all very much part of the same social group.

And if you want to extend that even further out, to points north of San Francisco, I think it is safe to say that you can include those folks as well. If anything, they decided to throw the baby out with the bathwater and "move the cheese" significantly first, in the name of advancing their products.

But hey, don't listen to me, I'm biased, I'm one of those people.

Jony Ive and Marissa Mayer are known to be close friends, and, as we all know, before she became CEO of Yahoo, Mayer was head of user experience at Google. So to say that these folks don't all sit down for coffee (or dinner) on occasion to discuss wide-ranging subjects such as typography and UX minimalism is naive at best.

To imply that they all "copy" is much like saying that well-known artists during any particular stylistic period have copied each other. It's asinine. They belong to the same school of UX design, and thus that ideology is reflected in the end products of each of these companies.

That school employs clean typography, makes the most of white space, and reduces UI element complexity, all to eliminate barriers to consuming information and to optimize devices for new forms of human-device interaction such as touch, speech, and gesture/facial/motion recognition. This is the new norm, no matter which of the powers that be we are talking about.

Is the ultimate goal of computing to eliminate the user interface? Talk back and let me know.
