The secret to secure code--stop repeating old mistakes

The game of leapfrog between code defenders and attackers in cyberspace will continue to be waged as long as code is written. But the majority of security holes could be eliminated.
Written by Dan Farber

The game of leapfrog between code defenders and attackers in cyberspace will continue to be waged as long as code is written. But the majority of security holes could be eliminated. It's not an issue of technological prowess. The problem is that most companies and programmers lack the basic skills and knowledge needed to build security into their products from the ground up.

This was the distilled wisdom from spending a few hours last week with the Fortify Technical Advisory Board, which included a who's who of cybersecurity experts. Fortify primarily develops and sells static code analysis tools for programmers. The advisory board, whose members are listed below, was unanimous in pinning the majority of security woes on poor programming practices and a lack of education and awareness.

"Every time we move into something new, we repeat the old mistakes," said Avi Rubin, professor of computer science at Johns Hopkins University. "We saw it with RFID and voting systems and now the Web. Windows 3.1 systems and other older system are safer because modern virus won't work on them."

Li Gong, the primary architect for the Java security model while he was at Sun and most recently the managing director for Windows Live China, echoed Rubin's comments.

"The industry is currently defaulting to a small number of platforms--Windows, Java and a few others. Once the platform is built it is hard to make it more secure. You only get one or two chances to make it more secure, especially once it ships. Because its layers, you have to solve the security problems at each layer. The problem is that people repeat the same mistakes every time they create something new, such as with the Web or AJAX. They forget about lessons learned in the past."

  • Back row: David Wagner, University of California, Berkeley, professor and a top software security researcher; Brian Chess, chief scientist at Fortify; Bill Pugh, University of Maryland professor of computer science and founder of the open source tool FindBugs; Marcus Ranum, inventor of the proxy firewall and expert on security system design and implementation; Li Gong, primary architect for the Java security model at Sun and most recently managing director for Windows Live China.
  • Middle row: Matt Bishop, University of California, Davis, professor of computer science and author of "Computer Security: Art and Science"; Avi Rubin, professor of computer science at Johns Hopkins University and author of "Brave New Ballot: The Battle to Safeguard Democracy in the Age of Electronic Voting"
  • Front row: Fred Schneider, Cornell University professor of computer science, director of Cornell's Information Assurance Institute and member of the technical advisory boards of Intel and Microsoft; Gary McGraw, CTO of Cigital and author of "Software Security", "Exploiting Software" and "Building Secure Software"; Greg Morrisett, professor of computer science at Harvard University; Roger Thornton, CTO of Fortify.

"When Google started to put stuff on peoples' desktops, they lost control of security, said Gary McGraw, CTO of Cigital and author of several popular books on security. "The real interesting aspect of Web 2.0 has to do with where the trust boundary ought to be drawn. Some of the stuff is outside trust boundary."

A lack of expert security architects on development teams was also cited as contributing to flawed code being released into the world. "It's a systemic issue, a whole mentality problem," Gong said. "You don't have amateur architects building bridges," Rubin added.

"You won't solve the security problem by adding a few more security experts and all other programmers remain ignorant," advised David Wagner, a top software researcher and computer science professor at the University of California, Berkeley.

"Two things are going on. There is the issue of security and the issue of good coding practices. They are interlinked. Everyone has to use best practices--the chain is only as strong as weakest link," said Matt Bishop, professor of computer science at the University of California, Davis. 

Bill Pugh, professor of computer science at the University of Maryland, said, "Most programmers are ignorant of security." Students study textbooks, but a static analysis tool can provide a more pointed lesson, he added, noting that the textbooks need to be updated. "By the time someone graduates from college it's too late," Gong claimed. Learning security practices on the fly is not the best way to create secure software.

The experts also discussed security as a holistic discipline. "There is a misconception that security is an operating system, language or network problem. Security affects the whole system. These days we are getting better at the lower layers, but now [attackers] are hitting more at the application layer, and you can't fix that by upgrading to Vista," said Wagner.

As the bottom layers have become more secure, applications have become more attractive targets for attackers.
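Wagner's point can be made concrete with a classic application-layer flaw: SQL injection lives in the application's own query-building logic, so no amount of OS patching prevents it. The sketch below is purely illustrative (the in-memory database and the `find_user_*` helpers are hypothetical, not from the discussion), contrasting a string-built query with a parameterized one.

```python
import sqlite3

# Illustrative in-memory database; the schema and names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text,
    # so crafted input can change the structure of the query itself.
    query = "SELECT * FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the injected OR clause matches every row
print(find_user_safe(payload))    # no user has that literal name: []
```

The vulnerable version exists regardless of how hardened the operating system underneath is, which is exactly why attackers have shifted their attention up the stack.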

Marcus Ranum, inventor of the proxy firewall, referenced Larry Wall, the creator of Perl, on programmers: "He likes to say that there are two defining characteristics of great programmers--laziness and hubris. That is what got us to where we are. People are putting their lives on the line with code that is crap. The only part of 'software engineering' is in the word 'engineering.' You have to get the end customer to understand that there is a safety issue."

Fortify's Brian Chess noted that software developers make predictable mistakes and that tools can look for problems and take away the "low-hanging fruit" from unsophisticated hackers.
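Real static analyzers such as Fortify's rely on sophisticated parsing and dataflow analysis; purely as a toy sketch of the idea, the pattern-based scanner below (all names and the pattern are hypothetical) flags one such predictable mistake, SQL statements assembled with string formatting or concatenation:

```python
import re

# Toy heuristic: a quoted string passed to execute() and then combined
# with "%" formatting or "+" concatenation -- one "predictable mistake"
# that a real analyzer would catch with far more precision.
SQL_STRING_BUILD = re.compile(r"""execute\(\s*["'].*["']\s*(?:%|\+)""")

def scan(source):
    """Return (line number, line) pairs matching the risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SQL_STRING_BUILD.search(line):
            findings.append((lineno, line.strip()))
    return findings

sample = '''
cur.execute("SELECT * FROM users WHERE name = '%s'" % name)
cur.execute("SELECT * FROM users WHERE name = ?", (name,))
'''
for lineno, line in scan(sample):
    print("line %d: possible SQL injection: %s" % (lineno, line))
```

Even this crude check flags the string-built query while passing the parameterized one, which is the sense in which tooling removes the low-hanging fruit: the common, mechanical mistakes get caught before an unsophisticated attacker can.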

However, a huge potential downside is that the only people who have time to look in obscure parts of code are those who want to do harm, Rubin said. 

"Cigital and Fortify have a joint customer, and it uses Fortify to look at millions of lines of code. They keep track of what they find and then teach programmers. It's cheaper to instruct than have to find bugs. Software security initiatives, like those at Microsoft, Qualcomm, and many banks, have to close the loop and have way of measuring success," McGraw added.

Despite the weekly array of security breaches identified in Windows products, the security experts gave Microsoft kudos for its security initiatives. "The temptation is to beat up on Microsoft, but the company is not unique and in many ways is one of the best at integrating security into its development process," said Wagner, who is well known for discovering serious flaws in popular software.

The SCADA (Supervisory Control And Data Acquisition) systems that run energy, water and other critical infrastructure, however, didn't get kudos from the security experts. "Every time I look at the way a SCADA system is constructed, I fall on the floor laughing with my feet kicking or beat my head against the wall," said Ranum. "The SCADA systems are worse than you can imagine. Five years ago, a publicly accessible site was controlling a console for pumping systems in the Middle East. It was frightening beyond belief. The answer is don't plug that into the Internet or your own internal network."

Referring to software development in general, Fred Schneider concluded: "This is the software correctness problem. If you can't get software correct, then you are fielding something that can be abused. In the end the software correctness problem is having to articulate what you want the system to do--no one will ever formalize that and the buck will stop there. There will be problems as long as there is software."
