
iPhone vs. Android development: Day 4

Written by Ed Burnette, Contributor

It's day 4 of a 5-day course on iPhone programming. As before I'll be sharing my observations from the classroom on how development on the iPhone compares to Android development. The class is presented by Joe Conway from Big Nerd Ranch.

[Read: Day 1, Day 2, Day 3, Day 4, Day 5]

Yesterday's topics included saving and loading data, handling low memory situations, graphics with OpenGL ES, and multi-touch events. Today's course will cover:

  1. Core Graphics
  2. View Transitions
  3. Core Animation
  4. Camera
  5. Accelerometer

Now, on with my notes from the course. At the end I'll wrap it up with a conclusion section:

Core Graphics

The iPhone has a 2-dimensional Core Graphics API that developers can use to create any kind of custom display such as graphs and images. It's similar to painting on an HTML Canvas or using the android.graphics package on the gPhone. All the basic functionality is there, such as lines, rectangles, points, clip regions, gradients, and so forth. Curiously, there doesn't seem to be a method to draw a rounded rectangle (instead you have to use a combination of lines and arcs to draw it yourself).

In a UIView subclass you implement the drawRect: message to do your custom drawing with Core Graphics. You never call drawRect: yourself; instead you rely on the system to call it whenever your view needs to be redrawn. This works the same way as Android's View.onDraw() method. On both systems you need to take special care to do the drawing as quickly as possible. For example, you should avoid any memory allocation in the method. Set up everything you can ahead of time.
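
Here's a minimal sketch of what such a drawRect: might look like, tracing a rounded rectangle by hand out of lines and arcs as described above. The inset, radius, and color values are just placeholders.

```objc
// Sketch of a custom UIView's drawRect: using Core Graphics.
// The rounded rectangle is built by hand from lines and arcs, since
// there's no single rounded-rect call in this API.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat radius = 10.0;
    CGRect box = CGRectInset(self.bounds, 5.0, 5.0);

    CGContextSetRGBStrokeColor(ctx, 0.0, 0.0, 1.0, 1.0);  // opaque blue
    CGContextSetLineWidth(ctx, 2.0);

    // Start at the middle of the left edge and walk around the corners.
    CGContextBeginPath(ctx);
    CGContextMoveToPoint(ctx, CGRectGetMinX(box), CGRectGetMidY(box));
    CGContextAddArcToPoint(ctx, CGRectGetMinX(box), CGRectGetMinY(box),
                           CGRectGetMidX(box), CGRectGetMinY(box), radius);
    CGContextAddArcToPoint(ctx, CGRectGetMaxX(box), CGRectGetMinY(box),
                           CGRectGetMaxX(box), CGRectGetMidY(box), radius);
    CGContextAddArcToPoint(ctx, CGRectGetMaxX(box), CGRectGetMaxY(box),
                           CGRectGetMidX(box), CGRectGetMaxY(box), radius);
    CGContextAddArcToPoint(ctx, CGRectGetMinX(box), CGRectGetMaxY(box),
                           CGRectGetMinX(box), CGRectGetMidY(box), radius);
    CGContextClosePath(ctx);
    CGContextStrokePath(ctx);
}
```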

The interface to Core Graphics is all in the C language, not Objective-C, so you have functions and arguments instead of messages and labeled parameters. Luckily you can intermix C and Objective-C code in the same module (even the same method), so it's not really a problem. It just takes some getting used to when you're switching back and forth between object-oriented and non-object-oriented programming styles. By contrast, Android's APIs are all Java objects.

Core Graphics is not implemented in or associated with OpenGL ES, which we covered yesterday. Both Android and iPhone treat OpenGL ES as some kind of "alien" subsystem that exists on its own and only talks with the "native" graphics through a few glue objects. This is nice in a way because you can port an OpenGL ES program from one system to the other by just changing some of the code around the periphery.

View Transitions

When you tap the "info" button (usually a little italic "i" in a white circle) you want your view to flip over like a playing card, showing what's on the other "side". To do this you open a view transition block with UIView's beginAnimations:context:, set any options on the transition such as duration and transition type, do what you need to swap the views (remove one and add the other), and then close the block with UIView's commitAnimations. The animation starts running immediately in the background. If you like, you can set a delegate (listener) for the transition and be notified when it's done.
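
As a rough sketch, a flip might look like the code below. The frontView, backView, and containerView instance variables and the did-stop selector name are assumptions for illustration.

```objc
// Sketch: flip from frontView to backView inside containerView.
- (void)flipToBack
{
    [UIView beginAnimations:@"flip" context:NULL];
    [UIView setAnimationDuration:0.75];
    [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight
                           forView:containerView cache:YES];

    // Swap the views inside the animation block.
    [frontView removeFromSuperview];
    [containerView addSubview:backView];

    // Optional: get told when the animation finishes.
    [UIView setAnimationDelegate:self];
    [UIView setAnimationDidStopSelector:@selector(flipDidStop:finished:context:)];

    [UIView commitAnimations];
}
```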

View transitions are very limited compared to Core Animation. You can only animate adding and removing subviews, or changes to a view's frame, bounds, center, transform, or alpha properties. Android has a similar concept but it's not used much in practice.

Core Animation

Imagine a 15-tile game, where each tile is a UIView. When you touch one tile you want it to slide to a nearby empty slot if possible. The iPhone's Core Animation library can be used to do that. Instead of creating an animation block like you do for View Transitions, you create animation objects (for example with CAKeyframeAnimation) and then add either a path of points or a set of intermediate transformation matrices to it. Once you have it set up, you add the animation to the view's Layer and off it goes.

In the iPhone API, there's the view and there's the view layer. The layer is like a screenshot of the live display of the view. During animation, it's the layer that is moved around, not the view itself. So if you animate the layer from point A to point B, then you also have to move the view itself in another line of code. Otherwise it would appear to just snap back to its original position when the animation was done running.
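
A minimal sketch of that tile slide might look something like the following; the tileView and emptyCenter parameters and the 0.3-second duration are assumptions for illustration:

```objc
#import <QuartzCore/QuartzCore.h>

// Sketch: slide a tile (a UIView) to an empty slot with a keyframe
// animation, then move the view itself so it doesn't snap back.
- (void)slideTile:(UIView *)tileView to:(CGPoint)emptyCenter
{
    CAKeyframeAnimation *slide =
        [CAKeyframeAnimation animationWithKeyPath:@"position"];
    slide.duration = 0.3;
    slide.values = [NSArray arrayWithObjects:
        [NSValue valueWithCGPoint:tileView.center],
        [NSValue valueWithCGPoint:emptyCenter],
        nil];

    // The layer is what gets animated...
    [tileView.layer addAnimation:slide forKey:@"slide"];

    // ...but the view's real position has to change too, or it will
    // appear to jump back to where it started.
    tileView.center = emptyCenter;
}
```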

Android has some of the same concepts, but typically all the animation is specified in your program's resource files (XML files compiled into your app at build time). Just like string resources, you can reference these animation resources by name in other resource files and in your code. You can define them in Java code instead, but using XML is quicker and lets you create reusable animations shared by multiple views. For example you could create an animation that shakes something back and forth for 2 seconds, then use it on a form field, or on the entire view containing the form, when someone enters bad data.

The iPhone's Core Animation system supports 4 "media timing functions": Linear, EaseIn, EaseOut, and EaseInEaseOut. You set one of these on your animation to describe its acceleration at the beginning, middle, and end of the time period. Android calls these "interpolators" and lets you define your own interpolators independently of the animations that use them. For example, in the Android API demos there's one called "cycle_7" that runs your animation back and forth from start to finish 7 times.
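
For instance, attaching the EaseInEaseOut timing function to a simple fade might look like this sketch (the one-second duration and the "fade" key are arbitrary choices):

```objc
#import <QuartzCore/QuartzCore.h>

// Sketch: fade a view out with an EaseInEaseOut media timing function,
// roughly the counterpart of an Android interpolator.
- (void)fadeOutView:(UIView *)view
{
    CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fade.fromValue = [NSNumber numberWithFloat:1.0f];
    fade.toValue   = [NSNumber numberWithFloat:0.0f];
    fade.duration  = 1.0;
    fade.timingFunction =
        [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];

    [view.layer addAnimation:fade forKey:@"fade"];
    view.alpha = 0.0;   // keep the view's real value in sync with the animation
}
```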

Camera

Access to the Camera is currently severely limited on the iPhone. You create a UIImagePickerController instance, set the source type (camera, library, or albums), and set a delegate to receive the returned image. When you need an image, you send your view controller a presentModalViewController: message to put the picker on screen. The iPhone provides all of the camera's user interface, which is a fancy way of saying you have absolutely no control over it.

When the user picks or takes an image, your delegate (usually your view) gets an imagePickerController callback with the selected image. There's also a crop rectangle passed back if you set the flag to receive it.
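
Putting those pieces together, a sketch might look like this. It assumes the code lives in a UIViewController that declares itself a picker delegate; the delegate method shown is the iPhone OS 2-era callback, and allowsImageEditing is what turns on the crop rectangle mentioned above.

```objc
// Sketch: present the built-in camera UI and receive the result.
- (void)pickImage
{
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.allowsImageEditing = YES;   // enables the crop rectangle
    picker.delegate = self;
    [self presentModalViewController:picker animated:YES];
    [picker release];   // the presenting controller retains it
}

- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingImage:(UIImage *)image
                  editingInfo:(NSDictionary *)editingInfo
{
    // Use the image here, then dismiss the camera UI.
    [self dismissModalViewControllerAnimated:YES];
}
```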

Contrast this to Android's Camera class, which provides control that is a little closer to the hardware. You can get raw preview frames from the camera's sensor, get a callback when the shutter button is pressed, and another callback when the camera finishes focusing. That last feature is used by the bar code reader program to start interpreting the code as soon as it comes into focus. The "chrome" around your camera viewfinder image is completely up to the programmer, but there are standard Intents you can fire if your needs are modest, for example if you just want to take a picture or pick from an album using the built-in applications. The beauty of Android comes into play when you write your own programs that handle those Intents. Don't like the G1's camera app? Replace it with your own. With rare exceptions, it's a level playing field.

Accelerometer

Ever since the Nintendo Wii was released, everybody has gotten into the accelerometer craze. While iPhones don't yet come with a strap (hmm, maybe they should), there are many examples of programs that can use subtle, or not so subtle movements of the phone. Whether it's steering an on-screen car, drinking virtual beer, or avoiding traps in a labyrinth, the onboard 3-axis sensor on the iPhone enables a new way of interacting with the user.

Using the accelerometer is deceptively simple in an iPhone program. First you get a handle to the system accelerometer, set the update interval on it, and set a delegate for callbacks. There's exactly one callback: accelerometer:didAccelerate:. When you implement that, you get a UIAcceleration object which has X, Y, and Z force components and a timestamp. That's it, now go make Super Monkey Ball.

Ok, it's a little more complicated than that. The trick of course is what do you *do* with that data. How often should you sample? How do you detect gestures like tilting and shaking? As with touch events, acceleration events need to be queued by your program and compared with each other to get the overall picture of what's going on. On top of that, accelerometer data is very noisy, so you need to work with averages and filter out high frequency jiggles.
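
A sketch combining the basic setup with a crude low-pass filter might look like this; the 30-updates-per-second rate, the filter constant, and the filteredX/Y/Z instance variables are all assumptions for illustration:

```objc
// Sketch: accelerometer setup plus a simple low-pass filter to smooth
// out high-frequency jitter.
#define kFilterFactor 0.1

- (void)startAccelerometer
{
    UIAccelerometer *accel = [UIAccelerometer sharedAccelerometer];
    accel.updateInterval = 1.0 / 30.0;   // ask for 30 updates per second
    accel.delegate = self;
}

- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration
{
    // Blend each new sample with the running average; filteredX/Y/Z
    // are assumed instance variables holding the smoothed values.
    filteredX = acceleration.x * kFilterFactor + filteredX * (1.0 - kFilterFactor);
    filteredY = acceleration.y * kFilterFactor + filteredY * (1.0 - kFilterFactor);
    filteredZ = acceleration.z * kFilterFactor + filteredZ * (1.0 - kFilterFactor);
}
```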

Android devices like the T-Mobile G1 also have accelerometer sensors. The Android API supports other sensors too like magnetism (compass), proximity, temperature, and ambient light. As far as I know only the first two are implemented on the G1. In addition to the regular X, Y, and Z vector you can get at the raw unfiltered data if you want to.

Android also has a notion of sensor accuracy, and you can get a notification if the accuracy degrades or improves. Instead of specifying a particular number of samples per second like you do on the iPhone, on Android you specify whether you want sensor data "as fast as possible", at a rate "suitable for games", a rate "suitable for screen orientation changes", or a rate "suitable for the user interface". Whatever that means. Of course on the iPhone, if you ask for X updates per second you may not actually get that, because the API says there are minimum and maximum limits (but helpfully, doesn't tell you what they are).

Conclusion, Day 4

The iPhone API is a curious mix of C and Objective-C due to the legacy interfaces it inherited from Mac OS X. That's great for porting existing Mac programs over, but not so much for people who were never familiar with the Mac to begin with.

That's not to say it isn't a capable and powerful system, because it certainly is. The developer has a plethora of options to choose from when designing the user interface. I was particularly impressed by the Core Animation system, which could be the subject of its own week-long class. Even so, Android's resource system (which is used for defining animations, view layouts, images, and other things) is hard to beat in terms of flexibility, elegance, and ease of use for the developer.

The Accelerometer and Camera systems are in some ways emblematic of the philosophical differences between the two platforms. Apple provides you one way to do it that is simple and functional but not customizable. Android provides lots of bells, whistles, and options, which gives the developer a lot of leeway for customization at the possible expense of consistency between applications and installations. As one attendee put it, you can pick up pretty much any iPhone app and, as long as it follows the UI guidelines, you will kind of know how to use it because it looks like other apps you've used. With Android there are no guidelines, no rules, and no boundaries.

Tomorrow's session will conclude the course with a look at Web services, using the Address book, Preferences, and Networking. See you then!

Be sure to check out the other articles in this series: [Read: Day 1, Day 2, Day 3, Day 4, Day 5]
