Open-source developer Landon Fuller hit the headlines at the start of January when he announced an ambitious plan to bolster the security of Apple's software.
Fuller has vowed to create or source a patch for every vulnerability uncovered by the Month Of Apple Bugs (MOAB) — a controversial initiative being conducted by two security researchers. MOAB's goal is to announce a new flaw in an Apple application or operating system each day in January. Fuller's aim is to protect users by quickly finding a fix.
Although not a full-time security professional, Fuller has extensive knowledge of Apple's software, having worked in its BSD Technology Group.
We interviewed Fuller by email to find out why he is devoting his time to the Month of Apple Fixes, and to learn his views on responsible disclosure and the relative security of Microsoft and Apple code.
Q: What prompted you to take on the task of fixing the bugs published by MOAB? In your blog you described it as "part brain exercise, part public service"...
A: As a brain exercise, patching a new bug each day is an enjoyable technical challenge — I've been introduced to pieces of the operating system that I've never before explored, and gained a more significant understanding of how it all works together. It's enjoyable to work co-operatively with other Macintosh developers on these issues.
On the other hand, this isn't just a technical exercise — critical vulnerabilities are being released without notifying the vendor, and I believe that providing users with an option is ultimately beneficial to the community. I also hope that I can help to clarify what the risks of the vulnerabilities are, as I don't believe users should be using our patches without understanding the risks.
Are you happy with the response from the developer community to your initial request for assistance? How well is the MOAB Fixes Google Group working? Could you benefit from more assistance?
I'm very happy with the response from both friends and colleagues in the Macintosh and Darwin developer community — they've provided advice, development assistance, and have even produced some of the bug patches. I'd never turn down more assistance — these are complex problems, and we all have day jobs.
What is the process for coding a patch for Apple software vulnerabilities? What do you need to do to write a patch?
Writing a patch requires spending some quality time in a debugger to determine where the code specifically fails and how best to patch it. As a third party, I don't generally have access to the source, requiring reverse engineering of the code in question. Once I've determined how the code works, I can write a patch that wraps the vulnerable code, rejecting data that would otherwise trigger the bug and allow the exploit to succeed.
These are patches, but not a replacement for a vendor-supplied fix. I think "band-aid" is an apt term.
What is your opinion of what the MOAB researchers are doing? They say that responsible disclosure doesn't get good results — do you agree?
I'm not a security researcher by trade, and I have limited experience with disclosing vulnerabilities to Apple or any other vendor. It seems obvious that some members of the security community are frustrated — even rancorous — over Apple's handling of security issues. It's my hope that their grievances can be addressed and Apple can build and maintain a positive relationship with the security community at large. However, I do not personally agree with releasing critical vulnerabilities with zero vendor notification, regardless of the vendor.
How come you've been tasked with fixing the bugs? Obviously you have a lot of experience, but my understanding was that you no longer worked for Apple?
I'm not sure "tasked" is quite the right word. I don't work for Apple, and this is a spare-time pursuit. 1 January was a work holiday, and I just happened to come across the first Month of Apple Bugs issue.
What is your opinion of the severity of possible exploits for the bugs published so far? Is the threat serious or more theoretical?
The bugs have varied in severity. I would say that the QuickTime bugs have been the most critical so far, and do present a serious threat. While also critical, the OmniWeb vulnerability was patched in mere hours by The Omni Group, and the VLC team released an update in a couple of days.
Until now Microsoft software has been the dominant target for hackers. As Apple software becomes more popular, will more exploits be written for it?
I'm very bad at predicting the future, but I do think that the size of the installed base is a factor when it comes to the business of compromising computers.
Do you think Apple's move to Intel chips will have any effect on the platform's overall security performance?
Not directly, but I don't think it's unreasonable to take into account the existing security expertise with x86 assembly.
There is some debate as to the relative security of Apple and Microsoft code. Some argue that Microsoft code was flawed from the beginning, whereas Apple simply wrote more secure code. What's your opinion?
Mac OS X comprises a vast quantity of code from a variety of sources, including NeXT, The FreeBSD Project, The NetBSD Project and legacy Apple code that predates the purchase of NeXT. I don't think one can categorically declare that such a complex system, written by many individuals in different organisations at different times, is entirely composed of superior, better-written code. As an example, Firefox is a great browser, but the RTSP vulnerability in Apple's QuickTime plug-in is enough to expose its users to remote code execution.
While it only takes one bug, there are some general steps that Apple could take to help minimise the impact of such a bug — the non-executable stack is one example, and is implemented on Apple's Intel machines. Address-space randomisation is another positive step that would close some of the holes (eg, return-to-libc) that allow an exploit to bypass non-executable stack protection.