The Mac community was buzzing in late November when the director of Apple's Unix group showed a slide at the LISA (Large Installation System Administration) conference predicting that the Snow Leopard version of Mac OS X would ship in the first quarter of 2009. However, there were more than 100 other slides in the presentation, and they offered some interesting bits of their own.
The talk was by Jordan Hubbard, director of Apple's Unix Technology Group. From the PDF of the talk (confirmed by some LISA blog postings), he discussed a number of Mac OS X security features and several open source software projects that Apple is supporting, including Apple Syslog (a rewrite of the BSD syslog), MacPorts (an easy-to-use system for compiling, installing, and upgrading command-line, X11, or Aqua-based open-source software), MacRuby (a version of Ruby 1.9 ported to run directly on top of Mac OS X core technologies), and WebKit (the Web engine behind Safari).
Here are some of the 117 slides that caught my eye:
Mobility questions. Hubbard said that "ubiquitous computing is not 'coming,' it is already here!" He suggested that developers start thinking about ever-smaller devices, meaning power budgets in the milliwatt range.
He offered a number of "lessons" from Apple's iPhone experience. He said programmers need to avoid making assumptions about power and performance when dealing with a small, mobile platform.
•“Enterprise” features (like code signing) can also be substantially leveraged on mobile devices.
•Mobile device features (like CoreAnimation) can also encourage innovation in “bigger” devices.
•You can actually run a full Unix on a phone now.
•It’s all about the power, and all resources (memory, flash, CPU) take power. We need to challenge our “Unix assumptions” about power being plentiful.
•Stability is key for something this critical (it can’t crash while dialing emergency services). You just can’t run everything you want to.
Multiple core computing. Pointing to the roadmaps from Intel, Hubbard said we can expect more than 32 cores arriving in "commodity hardware" in 2010. This will create problems for programmers, he said.
One problem with multi-core computers is that processors can run faster than they can fetch data from memory (don't even talk about retrieving data from the disk!). A processor left waiting for data is said to be "starved." There are different schemes for easing this bottleneck, some under the heading of NUMA (non-uniform memory access), which gives each processor (or group of cores) its own local pool of memory, so the data a core is likely to need can be kept close to it.
Now, AMD's high-performance designs use ccNUMA (cache-coherent non-uniform memory access), which adds hardware that keeps each processor's view of the cached data consistent with the others'.
Of course, Apple chose Intel. Hubbard appears to warn developers that processor makers (in this case, Intel) won't keep spending the money to develop cache-coherency engines, and that software makers (and OS vendors) will have to figure out ways to do this better themselves. He calls this an "incoming meteor."
•It means that hardware folks are out of headroom on pure clock speed and must go lateral.
•The hardware folks are also probably tired of paying for the Software people’s sins. ccNUMA is likely to eventually yield (back) to NUMA. Good for them, bad for us!
•Memory access, already very expensive, will become substantially more so.
•Forget everything you thought you knew about multi-threaded programming (and, as it turns out, most developers didn’t know much anyway).
•The kernel is the only one who really knows the right mix of cores and power states to use at any given time - this can’t be a pure app-driven decision.
•We need new APIs and mechanisms for dealing with this incoming meteor.