Just how do you write apps for Google Glass?

Summary: As we get closer to a point where mere mortal users might be able to get their hands on Glass, exactly how will mere mortal developers write apps for it?

TOPICS: Smartphones
You too could be writing apps that render text on images in new and exciting ways. (Image: Matt Baxter-Reynolds/ZDNet)

The Google Glass API has been available for some weeks, and over the weekend my ZDNet colleague Steven J Vaughan-Nichols blogged that the source code for the Glass devices was now available. I decided to have a look at how developers can build apps for Glass.

It's a bit stranger than you'd think.

I'd assumed that Glass would be just like another Android device, albeit with a funny screen and a new input method. Moreover, I'd assumed that users would download and run apps on it much like a smartphone.

However, the device is not structured anything like a smartphone, and the programming model is just as unusual.

When a user sets up a Glass device, they are given a "timeline". It's through interaction with this timeline that the device works. For example, take a photo, and it appears in your timeline. Receive an email, and a notification appears in your timeline.

Timelines are a collection of "cards". You've likely already seen pictures of Glass cards floating around. Here's one that I made: 

A sample Glass card. You can add images, options, create bundles of other cards within this card, etc. This is just basic text. (Image: Matt Baxter-Reynolds/ZDNet)

The only development model you get is the ability for your server to talk to Google's servers to add new cards, as well as delete or change cards that you've added.

For example, imagine you own a social networking site and one of your users has Glass. In the first instance, they have to bind up their Glass timeline with your site. From that point, whenever the user receives a message through your social networking site, if you want to tell the user about it, you create a new card for their timeline, send that card to Google, and they sync it with the device.

(If you want more nitty-gritty, the authentication is done with OAuth, and the creation of cards in a user's timeline is done by posting JSON data over a REST interface. There's nothing weird about the mechanics, as far as I can tell.)
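To make that concrete, here's a rough sketch in Python of what posting a card looks like. It assumes you've already obtained an OAuth access token (that dance isn't shown); the endpoint is the Mirror API's timeline collection, and the card body is the minimal text-only form.

```python
import json
import urllib.request

MIRROR_API = "https://www.googleapis.com/mirror/v1"

def build_card(text):
    # A minimal text-only card; richer cards can carry HTML, images, etc.
    return {"text": text}

def build_insert_request(access_token, card):
    # POST the card's JSON to the timeline collection, with the OAuth
    # token in the Authorization header.
    return urllib.request.Request(
        MIRROR_API + "/timeline",
        data=json.dumps(card).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
    )

# Actually sending it is then just:
#   urllib.request.urlopen(build_insert_request(token, card))
```

Nothing exotic: it's one authenticated POST per card, and Google handles syncing it down to the device.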

If you've written any mobile software for any platform that uses some form of push notification server, this will sound familiar. You get a token to deliver messages to, and when you need to send messages, you just hand the message over to the platform's notification broker and off it goes to the device.

In a bit, I'll go on and talk more about things you can do, but this is essentially it. You don't get an app store client where you can install innovative and clever little apps — you just get a push notification service that you can wire your own server up to.

For example, The New York Times has a Glass app, but all it does is push headlines down to the device. That's commensurate with what we know about how the API works. It's not like tapping The New York Times app on your phone or tablet and then scrolling around — it's just feeding stuff into your timeline from the servers.

And then?

One way to look at this is that it's a blank canvas for developers to think about interesting user stories. Another way to look at it is that it's quite limiting given the freedom that developers have enjoyed over the past few years.

For instance, looking at the SDK, it's not obvious to me how developers boot their own interaction into the system. In a normal post-PC world, you just install an app, and the icon appears on the launcher, whereas the commands that are understood by first generation Glass devices ("OK, Glass...") are baked in.

The expectation is that, as part of binding your service to a user's Glass device, you add a card to their timeline which they "pin" to the home screen. The user can then go back and dig out this card whenever they want your service. Cards can have custom menu options associated with them. Together, pinning and menu options form the rough shape of an "app" connected to a back-end service.
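As a sketch of what such a pinnable card might look like on the wire — TOGGLE_PINNED and CUSTOM are menu actions the Mirror API defines, while the "open-inbox" id and its display name are purely illustrative:

```python
def build_service_card(service_name):
    """A welcome card the user can pin to their home screen.
    TOGGLE_PINNED and CUSTOM are built-in Mirror API menu actions;
    the 'open-inbox' id and display name are illustrative only."""
    return {
        "text": "Welcome to " + service_name,
        "menuItems": [
            {"action": "TOGGLE_PINNED"},
            {
                "action": "CUSTOM",
                "id": "open-inbox",  # echoed back to your server when tapped
                "values": [{"displayName": "Open inbox"}],
            },
        ],
    }
```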

That sounds like it works, but it seems to be putting a huge cognitive load on the user. That's nothing like as simple as a grid of icons.

As well as pinning, you can also "share" with a service, the idea being that your service receives that shared data and then puts new things on, or changes existing things in the user's timeline.

One example in the Glass SDK is "Add a cat to that". The (example) idea here is that you take a photo using Glass, and then send that picture to your service. Your service then renders a cat onto the photo and posts the response back into the user's timeline. This is done using the "share" option — ie, you find the photo and share it with a contact that's bound to your service. When your service is first set up, it creates that contact.
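That shared-to "contact" is itself just another JSON resource your service posts to Google. A sketch, with the id and display name taken from the SDK example and the field names following the Mirror API's contacts resource as I read it:

```python
def build_share_contact():
    """A 'contact' the user can share photos with. Posting this to the
    Mirror API's contacts collection makes it appear in Glass's share
    menu; acceptTypes restricts sharing to images."""
    return {
        "id": "add-a-cat",  # illustrative id, echoed in share notifications
        "displayName": "Add a cat to that",
        "acceptTypes": ["image/jpeg"],
    }
```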

That approach is less refined than the sharing features already available on Android. Plus, again, it seems to have huge cognitive loading. Google's preference seems to be to consider the usability of Glass apps only from the perspective of what Google wants them used for. 

As mentioned, cards that you add into the timeline can have custom commands on them. So, for "Add a cat to that", when you see the resulting picture with rendered cat, you can add commands like "Change to tabby", or "Change to Russian Blue". Those commands come back to your service, and you either update the card you added originally, or create a new one.
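When the user picks one of those commands, Google pings your server with a small notification payload identifying the card and the chosen menu item. A sketch of unpacking it, assuming the documented shape with a userActions array (the "change-to-tabby" id is illustrative):

```python
import json

def chosen_command(notification_body):
    """Pull the tapped custom menu item's id out of a Mirror API
    notification ping. Returns None if no custom action was taken."""
    note = json.loads(notification_body)
    for action in note.get("userActions", []):
        if action.get("type") == "CUSTOM":
            return action.get("payload")  # the menu item id you set on the card
    return None

# An example ping, shaped like the Mirror API's timeline notifications:
ping = json.dumps({
    "collection": "timeline",
    "itemId": "card-123",
    "userActions": [{"type": "CUSTOM", "payload": "change-to-tabby"}],
})
```

Your server would then update the original card (or insert a new one) via the same timeline endpoint.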

The key is that none of this happens locally. It's the same push notification-based arrangement — it's just that you can do a lot more with it than with the basic push notifications we've seen so far on smartphones and tablets.


A final thing to call out is that you can subscribe to location updates. This results in your service being pinged whenever the user's Glass location changes. I'm not entirely sure how I feel about that. At the very least, it adds another dimension to concerns about privacy with regards to Glass.

Even if you don't subscribe to updates, you can ask Google for the location when you receive a message. This is mentioned in the "Nearby pet stores" API example.
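A sketch of that on-demand lookup: the Mirror API exposes the most recent fix under the reserved id "latest" in the locations collection, so it's one authenticated GET.

```python
import urllib.request

MIRROR_API = "https://www.googleapis.com/mirror/v1"

def latest_location_request(access_token):
    """Build (but don't send) the GET request for the user's most
    recent known location. Sending it via urlopen returns JSON
    carrying latitude/longitude fields."""
    return urllib.request.Request(
        MIRROR_API + "/locations/latest",
        headers={"Authorization": "Bearer " + access_token},
    )
```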


The trick when approaching Glass development seems to be "don't expect it to behave like a smartphone". If you want to make money out of Glass, you're going to have to think outside of the box. Or "think outside the frame", as in "glasses frame". Ah well, you get the idea ... It will at least be interesting to see what people do with it.

What do you think? Post a comment, or talk to me on Twitter: @mbrit




  • Perfect companion for museums and city tours

    It seems that augmenting "reality" with additional information is one of the avenues Glass opens up for the user. Apparently the user has less influence over what sort of information he can select from; instead, he is simply accompanied by information.
    Sounds great for some highly specialized niches, but otherwise completely useless compared to a smartphone.
  • Developers are important

    Developers are going to be so huge in diversifying uses for Glass and taking it mainstream. The possibilities are endless.. I think the penetration will start out in industry and creep in consumer buying. That's funny too because most mobile devices have gained popularity going from consumer to enterprise. The paradigm shifts are interesting to say the least.
  • User location

    Hi Matt,

    Does the user location (provided you subscribe to it) come with the direction the user is facing?
  • Seems like a standard Browser-based paradigm

    This interaction model - send a page with data and some options, process the option chosen, send next page - is surely familiar from HTML, VoiceXML etc. and seems to work well enough in these environments. Glass has to be ultra-thin client if you're wearing it on your nose, or in future on your fovea, so it's not surprising that the local processing is limited. That doesn't mean that the developer's imagination has to be limited.