
iPhone vs. Android development: Day 3

Written by Ed Burnette, Contributor

This is the 3rd in a series of 5 posts about an iPhone programming course I'm taking this week. The course is presented by Joe Conway from Big Nerd Ranch. To make things more interesting I'm writing about how iPhone development differs from Android development, a subject with which I'm more familiar.

[Read: Day 1, Day 2, Day 3, Day 4, Day 5]

Yesterday's topics included localizing applications for natural languages, embedding a web view, controlling stacks and lists of views, and producing sound effects and music. Today's agenda includes:

  1. Saving and Loading Data
  2. Low Memory Warning
  3. OpenGL ES
  4. Textures
  5. Multi-touch Events

As before, I'll record my impressions of each topic during the day, and then wrap things up with a conclusion section.

Saving and Loading Data

iPhone applications run in an application sandbox that limits access to files on the system. Through the sandbox you can get access to:

  • resources bundled with your program (read-only),
  • the Documents directory, for persistent data,
  • Library/Preferences, for application settings,
  • the tmp directory, for temporary files.

Each program gets its own Documents, Library/Preferences, and tmp directories.
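As a quick sketch of what that looks like in practice, here's how you might save and reload an array as a property list in the Documents directory (the file name notes.plist is just a made-up example):

    // Build a path into this app's Documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(
        NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *docs = [paths objectAtIndex:0];
    NSString *file = [docs stringByAppendingPathComponent:@"notes.plist"];

    // Save an array of property-list objects, then read it back
    NSArray *notes = [NSArray arrayWithObjects:@"first", @"second", nil];
    [notes writeToFile:file atomically:YES];
    NSArray *loaded = [NSArray arrayWithContentsOfFile:file];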

The iPhone platform comes bundled with SQLite, a lightweight embedded database engine that understands SQL statements. A SQLite database can contain many tables, each of which can contain one or more columns, and the whole database is stored as a single file on the file system.
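To give a flavor of it, here's a rough sketch using SQLite's C interface, reusing the Documents file path idea from above (the table name is hypothetical):

    #include <sqlite3.h>

    // Open (or create) a database file, then run a plain SQL statement
    sqlite3 *db;
    if (sqlite3_open([file UTF8String], &db) == SQLITE_OK) {
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT);",
            NULL, NULL, NULL);
        sqlite3_close(db);
    }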

As on the iPhone, Android programs are prevented from accessing files owned by other programs, with a few exceptions. The /sdcard directory on Android is open to all and can be used for data sharing. Android also has the concept of a Content Provider, where one program defines a public interface that another program can query to get data. That's how the Contacts database is handled, and any Android program can do this, not just ones from Google.

Low Memory Warning

Handling constrained memory situations is a little primitive on the iPhone, possibly because it wasn't as much of an issue on desktop Mac OS X, from which the iPhone OS is derived. When memory is tight, your application delegate gets a message from the OS called applicationDidReceiveMemoryWarning: (view controllers get a corresponding didReceiveMemoryWarning). It's up to you to decide what you can free up. Android's life cycle management is much more sophisticated, as I've mentioned before, which also makes it harder to understand and use.
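For instance, a view controller might respond like this (cachedImages stands in for some hypothetical cache the app could rebuild later):

    // Sketch: release whatever can be recreated when memory runs low
    - (void)didReceiveMemoryWarning {
        [super didReceiveMemoryWarning]; // may release the view itself
        [cachedImages removeAllObjects];
    }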

OpenGL ES

OpenGL ES is a standard graphics library for embedded systems produced by the Khronos Group. Both the iPhone and Android phones support it. On the iPhone, coordinates are always in floating point (32-bit floats), but on Android you have a choice between floating point and integer types. That's because the processor used on the iPhone has floating point support, but Android can be run on cheaper hardware that might not have that.
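On the iPhone side, that means vertex data goes in as 32-bit floats. A minimal sketch, assuming an OpenGL ES 1.1 context is already set up:

    // Sketch: a triangle defined with 32-bit float coordinates
    static const GLfloat triangle[] = {
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.0f,  0.5f,
    };
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);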

Xcode has a nice wizard for creating OpenGL ES applications that generates most of the boilerplate code you need to use the library. If you've read the OpenGL chapter in Hello, Android, you'll know all that has to be done by hand on the gPhone. It sure makes graphics programming easier to have that done for you.

On the iPhone, OpenGL painting is done during callbacks from an NSTimer, rather than on a separate thread as you'd use on Android. Normally you'd set the timer to trigger every 1/60th of a second to achieve a smooth frame rate. Timers are delivered synchronously on the main event queue.
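The scheduling itself is one call; here's roughly what it looks like, assuming a drawView method that renders a single frame:

    // Sketch: fire drawView ~60 times a second on the main run loop
    animationTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 60.0)
                                                      target:self
                                                    selector:@selector(drawView)
                                                    userInfo:nil
                                                     repeats:YES];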

As an aside, while poking around in the /iPhoneOS2.1.sdk/usr/lib directory I noticed a copy of libblas.so and liblapack.so. BLAS and LAPACK are linear algebra math packages for things like multiplying matrices and solving sets of linear equations. You could use these to do scientific programming on the iPhone if you like.
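Just as a sketch, a call through the standard CBLAS interface would look something like this, assuming the cblas.h header is usable against those libraries:

    #include <cblas.h>

    // Sketch: dot product of two vectors via BLAS
    double x[] = { 1.0, 2.0, 3.0 };
    double y[] = { 4.0, 5.0, 6.0 };
    double dot = cblas_ddot(3, x, 1, y, 1); // 4 + 10 + 18 = 32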

Textures

Textures are images that are wrapped around 2D or 3D objects on the screen. To understand textures, consider a wood laminate table: the manufacturer takes a picture of real wood and glues it to a cheaper substrate like particle board.

In OpenGL, textures are elastic, so they have to be pinned down to specific vertices by using texture coordinates (usually called s and t, for some reason). By adjusting these texCoords you can make the texture appear bigger or smaller on the object. You also have to consider the texture wrapping mode, which defines what happens when the texture doesn't cover the whole object. For example, if you have a texture image of a brick and you want to apply it to an OpenGL model of a wall, you don't want to see one big stretched brick there. Instead you want the brick pattern to be replicated seamlessly over and over on the wall.
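Here's a sketch of the brick-wall case, assuming the brick texture is already loaded and bound:

    // Sketch: tile the texture instead of stretching it
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    // Texture coordinates greater than 1.0 now repeat the image,
    // so this quad shows a 4x4 grid of bricks
    static const GLfloat texCoords[] = {
        0.0f, 0.0f,
        4.0f, 0.0f,
        0.0f, 4.0f,
        4.0f, 4.0f,
    };
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);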

The only real difference between Android and iPhone textures is how they are loaded and handed over to the OpenGL side. On the iPhone, you could load the image into a UIImage, draw it into a bitmap with CGContextDrawImage, and then define it to OpenGL using glTexImage2D. On Android, you could load it into a Bitmap, walk through its pixels to fill a ByteBuffer, and then call glTexImage2D.
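A rough sketch of that iPhone path, with error handling omitted and brick.png standing in for a hypothetical bundled resource:

    // Sketch: draw a UIImage into a raw RGBA buffer, then hand it to OpenGL
    UIImage *image = [UIImage imageNamed:@"brick.png"];
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    GLubyte *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
        width * 4, space, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(pixels);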

Multi-touch Events

One of the big differentiating features of the iPhone is MultiTouch. Android only supports one touch at a time. iPhone apps, on the other hand, can recognize and track up to 5 touch positions at once and do something intelligent with that information, such as two-finger drags and pinch-to-zoom.

There's no magic involved in how this works. Your application gets touchesBegan, touchesMoved, and touchesEnded events as the user touches a finger to the screen, drags it, and lifts it off, respectively. The "Multi" part of MultiTouch comes in because each of these methods takes a set of touches. If you touch the screen with two fingers, touchesBegan will get a set with two UITouch objects in it.
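In a UIView subclass, the beginning of that conversation looks roughly like this:

    // Sketch: one UITouch per finger currently touching down
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        for (UITouch *touch in touches) {
            CGPoint where = [touch locationInView:self];
            NSLog(@"finger down at (%f, %f)", where.x, where.y);
        }
    }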

One nice thing Apple did with this API is that once a touch begins, you'll always get the same UITouch object representing that finger's interaction with the screen, so your began, moved, and ended methods will all be handed the same object. Unfortunately, Apple didn't allow you to attach any extra data to it, but you can use the address of the object as a key in an NSMutableDictionary to keep your own info.
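One wrinkle: NSMutableDictionary copies its keys, and UITouch doesn't support copying, so in practice you wrap the touch's address in an NSValue. A sketch, with touchInfo standing in for a hypothetical dictionary instance variable:

    // Sketch: track per-finger data keyed by the touch's address
    NSValue *key = [NSValue valueWithPointer:touch];
    [touchInfo setObject:infoForThisFinger forKey:key];

    // Later, in touchesMoved: or touchesEnded:, the same touch
    // yields an equal key
    id info = [touchInfo objectForKey:[NSValue valueWithPointer:touch]];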

To detect something like a pinch, you need to notice two different fingers pressing down (touchesBegan), keep track of their positions over time (touchesMoved), and notice when they've moved a certain distance from their original positions. I was half expecting some kind of special "pinch" event, but instead it's up to your program to interpret the presses and movements however you want.
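A sketch of the distance check inside touchesMoved:, where startDistance is a hypothetical value recorded back in touchesBegan:

    // Sketch: measure the spread between the first two fingers
    NSArray *all = [[event allTouches] allObjects];
    if ([all count] == 2) {
        CGPoint a = [[all objectAtIndex:0] locationInView:self];
        CGPoint b = [[all objectAtIndex:1] locationInView:self];
        float spread = sqrtf((a.x - b.x) * (a.x - b.x) +
                             (a.y - b.y) * (a.y - b.y));
        // if spread > startDistance, the fingers are moving apart (zoom in)
    }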

Conclusion, Day 3

I certainly have a much greater respect for the iPhone hardware and APIs after this class. One thing I like is the unified file system of at least 8GB. There is no separate area for programs and data, so it's up to the user how to partition that space: mostly programs, mostly music and videos, or a 50/50 split. This is very convenient for programmers, although the lack of removable SD card support puts a hard limit on the space available.

Another thing I like as a programmer is the floating point hardware and the speed of native code. Graphics are limited by the modest GPU on the iPhone (at least on current generations), but computationally expensive operations are possible as long as you don't mind draining your battery.

MultiTouch is an intriguing enabling technology for human-computer interfaces. Consider a piano application (there's a free one in the App Store you can try out). With MultiTouch you can press 2, 3, even 5 keys at once! What a concept. It needs to support more than 5, though, in my opinion. Imagine a two-player game with the players sitting on opposite sides of the device; 5 touches only lets them use 2 and a half fingers each. Oh well, it's a minor nit, and the API does not hard-code the limit, so it would be possible for Apple to add more in future generations.

Tomorrow looks like a busy day: we're set to cover Core Graphics, View Transitions, Core Animation, the Camera, and the Accelerometer.

Don't forget to check out the other articles in this series: [Read: Day 1, Day 2, Day 3, Day 4, Day 5]
