
10 years since the Bill Gates security memo: A personal journey

Ten years after the famous Trustworthy Computing memo, Microsoft principal cybersecurity architect Michael Howard shares memories from the Redmond security trenches.
Written by Ryan Naraine, Contributor

Guest editorial by Michael Howard

I remember the security situation at Microsoft in 2001 and 2002 like it was yesterday. Perhaps no other couple of years will be so indelibly etched into my brain as those two. 2001 was not so good, but 2002 was a heck of a lot better -- though given how 2001 went, that was not a difficult bar to clear! So, let me start at the beginning...

In late 1999, a small band of us formed a security team (as in "threats," not as in "features") to help raise software security awareness across the company. We had no name for a long time, until the vice president in Windows at the time, Dave Thompson, decided to call us the Secure Windows Initiative (SWI). Our charter was to review Windows code in depth looking for security bugs, but having a small number of people reviewing something the size of Windows was clearly not going to work. So, we moved to a "Security Bug Bashes" model: we would deliver security education in the morning to a small development group within Windows (e.g., networking, terminal services, IIS, IE), and then for the rest of the day the engineering team would go look for security bugs. It was fun and we found bugs. But the most important point was raising awareness. It really didn't matter how many bugs were found -- the key was to make people aware of the security issues and reduce the chance that mistakes would be made in the future.

The downside of the bug bashes was that, even though they were more effective than the original SWI charter, they still didn't scale well and were very labor-intensive. Even so, they continued for about another eighteen months.

2001 was not a good year for Microsoft security because of CodeRed and Nimda, two worms that affected Internet Information Server 4.0 and 5.0. CodeRed was the result of a one-line error in some code running by default in IIS4 and 5. In hindsight, the code should not have been installed by default. Nimda was the more sophisticated of the two worms because it used more than one vulnerability to compromise systems.
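
To give a sense of what a "one-line error" of that era typically looked like, here is a minimal, hypothetical C sketch of the bug class -- an unchecked copy of attacker-controlled input into a fixed-size buffer. This is an illustration only, not the actual IIS code:

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 256

    /* Unsafe: one line with no bounds check. If `query` is longer
       than BUF_SIZE, the copy overruns the stack buffer, which an
       attacker can often turn into remote code execution. */
    void handle_request_unsafe(const char *query) {
        char buf[BUF_SIZE];
        strcpy(buf, query);               /* the one-line error */
        printf("handled: %s\n", buf);
    }

    /* Safer: validate the length up front and reject the request
       outright rather than silently truncating attacker data. */
    void handle_request_safe(const char *query) {
        char buf[BUF_SIZE];
        if (strlen(query) >= sizeof(buf)) {
            fprintf(stderr, "request rejected: too long\n");
            return;
        }
        memcpy(buf, query, strlen(query) + 1);
        printf("handled: %s\n", buf);
    }

    int main(void) {
        handle_request_safe("GET /default.ida?sample");
        return 0;
    }

The fix for any one such bug is trivial once you see it; the hard part, then and now, is finding every such line in tens of millions of lines of code.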

While all this was happening, David LeBlanc and I were mid-way through creating the first edition of Writing Secure Code. We had written the book because the same security-related questions were being asked time and time again and we wanted a reference we could point people to. Little did we realize that Writing Secure Code would later become a runaway bestseller.

As 2001 wound down and Writing Secure Code was finally sent to the printers, I got an email from Loren Kohnfelder, who was one of the security leads on the .NET Framework. Loren is best known for defining what is now commonly referred to as Public Key Infrastructure (PKI); his 1978 MIT thesis on the topic is available online. Loren was also one of the protagonists behind the STRIDE threat modeling mnemonic (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege).

Loren told me that the .NET Common Language Runtime (CLR) team had uncovered a small number of security bugs during the final development phase of the project, and he was really concerned. We decided to do a bigger version of a bug bash; rather than lasting only one day, it would be done when it was done. "Done" meant the rate of incoming security bugs approached zero. This became known as the ".NET Security Standdown," and we even had T-shirts made with the start date of the event. On the day the event was to start, the Pacific Northwest got a huge snowstorm and the Microsoft Redmond campus was closed, so we began the standdown a few days later.

The standdown was a great success, thanks to Brian Harry and his team, who managed the process brilliantly. We reeducated the .NET engineering team, we found and fixed bugs, but most important, in my mind, we introduced the concept of reducing attack surface (i.e., limiting the amount of code exposed to untrusted users). That's where the AllowPartiallyTrustedCallersAttribute (APTCA) came from and why we flipped ASP.NET to run with much lower privilege.

December 2001 saw the release of Writing Secure Code, and Doug Bayer and I had a lengthy meeting with Bill Gates to explain security vulnerabilities in detail. Clearly he was concerned by the worms of 2001 and wanted to learn more. At the end of the meeting I gave Bill a copy of Writing Secure Code.

At the end of December 2001, the .NET Standdown was over and we had learned a great deal about rallying the troops to a common security cause. But there was much more work to do!

In light of the success of the .NET work, we set our sights on Windows .NET Server (as it was called back then). Following the .NET model, we started in February and would be done when we were done; for most teams within Windows, that ended up being late March.

This became known as the “Windows Security Push.”

As everyone knows by now, Bill sent his famous Trustworthy Computing (TwC) memo to the company in January 2002, right as we were planning the security work for Windows. His memos are rare, and this one signaled the start of something big within the company.

During the push, we had three streams of education: I handled all the Windows developers, Jason Garms worked with all the program managers and architects, and Chris Walker trained all the testers. Steve Lipner and Glenn Pittaway led much of the day-to-day process management, keeping in constant communication with upper management.

One practice we borrowed from the security bug bashes was that we always had a senior person from management kick off the training. At one of my sessions, I had Rob Short, VP of Windows Base (the kernel down to the metal), open the day. Rob's a tall, lean Irishman with a thick accent, and something he said has stuck with me ever since: "There is nothing special about security; it's just part of getting the job done." Whenever I deliver a security talk to new engineers within Microsoft or am onsite with a customer, I recite Rob's words, because they are so incredibly true.

The Windows Security Push begat the SQL Server Security Push, the Exchange Security Push, and the Office Security Push. Slowly but surely things started to change across the company. Engineers and managers “got it.”

A key element of all the pushes was reducing the default attack surface of the products. That's why Windows Server 2003 (note the name change) shipped with a reduced-functionality browser, no Web server installed by default, and much more.

One thing that is not commonly known about the pushes is that a lot of documentation was written about the security implications of various technologies. Much of that learning ended up in the second edition of Writing Secure Code; the book ballooned from 500 pages to over 800 pages, and much of that was detail we learned and fine-tuned throughout 2002. A great example is the chapter concerning the security implications of internationalization and globalization. The text in the book is derived from a whitepaper written by the globalization team within Windows after they had gone through the push process and had looked at their important corner of Windows with a fresh security perspective.
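
To give a flavor of what that chapter covers, here is a small, hypothetical C sketch of a classic internationalization pitfall: validating a path before decoding it. The string below mimics the overlong UTF-8 trick used against IIS in that era; the code is an illustration, not product code:

    #include <stdio.h>
    #include <string.h>

    /* Naive check: looks for a literal "../" in the raw path. */
    static int contains_traversal(const char *path) {
        return strstr(path, "../") != NULL;
    }

    int main(void) {
        /* %c0%af is an illegal, overlong UTF-8 encoding of '/'. */
        const char *raw = "/scripts/..%c0%af..%c0%afwinnt/system32/cmd.exe";

        /* Wrong order: validate first, decode later. The check
           passes because the raw string never contains "../". */
        if (!contains_traversal(raw)) {
            printf("naive check passed: %s\n", raw);
            /* A lenient decoder then maps %c0%af to '/',
               producing "../" after validation has already run. */
        }

        /* Right order: decode and canonicalize fully, reject
           illegal encodings, and only then validate the path. */
        return 0;
    }

The lesson generalizes well beyond character encodings: always canonicalize untrusted input before making security decisions about it.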

The pushes were just the start, however. Real change came only when we implemented the Security Development Lifecycle (SDL). As I have said many times, you can’t build some software and then have a security push. It just doesn’t scale and, frankly, having a push at the end is too late. We needed something that was “part of the process,” and that is how the SDL was born.

There was a wrinkle along the way, however. In 2003 we saw Slammer affect SQL Server and Blaster affect Windows. Because one of the effects of Blaster was blue-screened computers, product support saw a huge increase in support calls. Many of us manned the phones to help out. Raymond Chen, a lead developer on the Windows shell team, and I were seated next to each other, and he wrote about it in his blog.

Blaster led to a lengthy and intense effort known as “Springboard,” led by Rebecca Norlander, Matt Thomlinson, and John Lambert. The end result of the process was Windows XP SP2, in which we not only found and fixed security bugs but also added numerous critical defenses to Internet Explorer, DCOM, and RPC. We also enhanced and enabled the Windows Firewall and added data execution prevention (DEP), and we made it easier for users to enable automatic updates by prompting them right after setup.

Microsoft has come a long way in the last ten years, and I am incredibly proud to have been a part of this watershed time. Much has changed. The SDL is now seen as industry-leading and is in use by many software developers outside of Microsoft. My role has changed too: I now work with our customers and partners as part of the Microsoft Americas Services Cybersecurity team to help them adopt SDL practices as they recognize the need for an increased focus on security.

It's been an amazing ten years. We still have much to do, however, and no one knows that better than the incredibly talented people across Microsoft who work every day to bake security into our products and our partners' and customers' products.

* Michael Howard is a Principal Cybersecurity Architect at Microsoft.
