Security core: Best practices

The recently concluded Security Vulnerability Summit addressed pressing security issues and concerns of the day, which are highlighted in this report. A lengthy but essential read for security personnel, consultants, corporate users and businesses.

Security. In the world of the Internet and computing, the word can have very different meanings to different people, who can have very different assumptions about where to lay blame and when to accept responsibility for security vulnerabilities.

Those viewpoints came together, with surprising consensus on fundamental points, at the Security Vulnerability Summit co-hosted by eWeek and the security company Guardent Inc. earlier this month.

Despite the maturity of many security principles and technologies, there are vital unanswered questions involving the responsibilities of vendors, security researchers, the government and the media, as well as a complex maze of legal and ethical issues.

To address these topics in pursuit of a statement of best practices, an open call was put out to the Internet community. The resulting group convened at the Foster City, Calif., offices of eWeek Labs, representing a broad spectrum of the organizations and individuals dealing with security today.

With such a diverse group, some friction, and even some combativeness, would have been reasonable to expect. Instead, there was a very high level of cooperation and agreement on the core issues under discussion at the summit.

There was clearly a group consensus that some form of standard needs to be defined to handle thorny issues such as how researchers should report vulnerabilities, how vendors should disclose security problems and how security information should be reported to the user community.

This level of cooperation was probably best exemplified by two attendees sitting next to each other. Steve Lipner, manager of Microsoft Corp.'s Security Response Center, and Rain Forest Puppy from Neohapsis, which last month found a security hole in Microsoft's Internet Information Server, maintained a professional and collegial interaction throughout the summit.

Microsoft was the subject of a few memorable one-liners, but there remained a genuine respect between the vendors in attendance and those who regularly find problems in their products.

The summit attendees were charged with addressing several core security issues, including identification of stakeholders in the security vulnerability cycle and their underlying assumptions, release of security information, legal and ethical issues, vulnerability disclosure and verification, and patch releases. At the end of each discussion, a working group was created to develop best-practice standards.

Several issues cropped up in more than one agenda item discussion. One of the most common concerned the creation of a "seal of approval" for businesses that have been recognized as following basic best practices for security. Many participants also said that people who discover security flaws would bring them forward more quickly if their identities could be protected by a recognized neutral agency.

Also discussed was intent. For example, if someone invades a system to find a security flaw that needs to be corrected, is that different from invading a system for mischief or personal gain?

The main goal of the Security Vulnerability Summit was the creation of best-practice standards and guidelines for the security community. However, the summit also served as a microcosm of the issues and questions that all e-businesses must address.

Among the security stakeholders the group identified were developers, vendors, security researchers, government, the media and consumers. Attendees agreed that good security has to start at the developer level, but that security was surprisingly and unfortunately absent from most college computer science programs.

Another concern was the tough job that users at the individual and corporate levels face in digesting security alerts and other information available at sites such as CERT, SANS and Bugtraq.

Robert Ratcliffe, Textron Inc.'s group director of corporate information management, brought up many of the concerns of corporate users. "Security and vulnerability information has to be dumbed down so that businesses can get on with their business at hand," Ratcliffe said.

Elias Levy, chief technology officer at SecurityFocus.com, said that security-savvy users such as consultants have a need for rich and detailed vulnerability information. "There should be levels of information," Levy said. "Some people can use information before a patch is available to mitigate a problem."

Another hot topic was the underlying assumptions and myths that security personnel must deal with on a day-to-day basis. The attendees agreed unanimously that apathy toward security is a huge problem. "What's the biggest misconception?" Microsoft's Lipner asked. "[That] 'security is not a problem for me.'"

Leia Amidon, principal security technologist at Internet infrastructure provider Logictier Inc., added, "It's amazing to me that people think a computer is secure when nothing else in life is."

When summit moderator and eWeek Technology Editor Peter Coffee introduced the topic of "hackers," the group was obviously uncomfortable. When pressed, attendees said that "hacker" was an overused and often misused term, especially in the media. Rather, they agreed, the general term "hacker" encompasses researchers, intruders and attackers, and that their intents are, respectively, discovery, notoriety and harm.

Textron's Ratcliffe said he fears that illegal intrusions will only get worse. "We all deal with lots of [annoying] incidents," he said. "But I'm afraid that we'll see a lot more serious problems in the future, such as industrial espionage."

One of the core issues of the summit was information release and the role of advisory sites such as Bugtraq, SANS and CERT. Much of the discussion centered on risk assessment and ranking, and on finding a way to manage the flood of security information.

Shawn Hernan, vulnerability-handling team leader at the CERT Coordination Center, said CERT scores vulnerabilities using internal metrics and releases alerts on the problems that it feels "people will stay Friday night to fix." Hernan added, "The cutoff is not meant to hide vulnerabilities; it is because we don't have the resources to produce authoritative documents on everything."
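The score-and-cutoff approach Hernan describes can be illustrated with a small sketch. The weights, sub-scores and threshold below are purely hypothetical, they are not CERT's actual internal metrics, but they show how a triage rule separates "stay Friday night" advisories from lower-priority notes:

```python
# Hypothetical vulnerability triage sketch (illustrative only; not CERT's
# real metric). Each report gets a weighted severity score, and only
# reports above a cutoff generate a full advisory.

CUTOFF = 7.0  # assumed threshold for issuing a full advisory

def score(impact, ease_of_exploit, prevalence):
    """Combine 0-10 sub-scores into one weighted severity number."""
    # Weights are arbitrary for illustration: impact counts most.
    return 0.5 * impact + 0.3 * ease_of_exploit + 0.2 * prevalence

def triage(reports):
    """Split reports into full advisories vs. lower-priority notes."""
    advisories, notes = [], []
    for name, subscores in reports.items():
        if score(*subscores) >= CUTOFF:
            advisories.append(name)
        else:
            notes.append(name)
    return advisories, notes

reports = {
    "remote-root-in-mailer": (10, 8, 7),  # severe, easy to exploit, widespread
    "local-info-leak": (3, 5, 2),         # minor, local-only
}
advisories, notes = triage(reports)
print(advisories)  # ['remote-root-in-mailer']
print(notes)       # ['local-info-leak']
```

Whatever the exact formula, the design point Hernan makes stands: the cutoff rations scarce documentation resources rather than hiding the lower-scoring problems.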

Also discussed was how much information needs to be released. "Some people refer to full disclosure as full exploit details," said Steve Christey, lead information security engineer at Mitre Corp. "Others say it's enough just to describe a problem. A scripted tool may be the best way to understand the details of a problem, but it can be misused."

Weld Pond, manager of R&D at security research company @stake Inc., added, "There is a difference between something that detects a problem accurately and something that hands you back a root shell."

Most security problems that businesses deal with are more than a year old. Mitre's Christey gave as an example a business that was recently affected by a sendmail bug that originated in 1993. CERT's Hernan showed a chart that detailed the repeating cycle of a typical vulnerability, with a peak of attacks after initial discovery, a slowdown, and then another peak, months later, after the problem is supposedly well-known.

Christey supplied a key talking point during the discussion of proper methods of notifying vendors of vulnerabilities. He shared a chart detailing the various disclosure policies used by research companies, many of which were in attendance at the summit. The chart measured elements such as time to notify vendors and time to wait before going public with a vulnerability.

A key element of the threat disclosure discussion was the level of contact with the vendor. "You need communications between researcher and vendor," @stake's Pond said. "When Neohapsis came out with its draft policy, the best feature was the constant contact provision."

"I think you should give the vendor enough time to make the fix," said Dirk Van Droogenbroeck, security engineer at SecurityWatch.com. "Depending on the severity of the vulnerability, you need to reach some agreement in terms of time frame. If the vulnerability gets loose 'in the wild', then notify right away."
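The policies Christey charted and Van Droogenbroeck's severity-dependent time frames amount to a simple rule that can be sketched in a few lines. The grace periods below are invented for illustration and do not reflect any company's real disclosure policy:

```python
# Minimal sketch of a disclosure-timeline policy (hypothetical grace
# periods; not any vendor's or researcher's actual rules).
from datetime import date, timedelta

# Assumed grace periods: more severe bugs get less time before going public.
GRACE_DAYS = {"critical": 14, "high": 30, "moderate": 60}

def disclosure_date(notified, severity, exploited_in_wild=False):
    """Earliest date a researcher would go public under this sketch policy."""
    if exploited_in_wild:
        # Already loose "in the wild": the community is notified right away.
        return notified
    return notified + timedelta(days=GRACE_DAYS[severity])

print(disclosure_date(date(2001, 2, 1), "critical"))
# 2001-02-15
print(disclosure_date(date(2001, 2, 1), "high", exploited_in_wild=True))
# 2001-02-01
```

The in-the-wild exception encodes Van Droogenbroeck's caveat: the agreed waiting period only makes sense while attackers do not already have the details.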

This raised the issue of what to do if a business decides to sit on a security problem: Should that be considered an action similar to hiding a public health threat? Several participants cited examples of businesses, especially financial companies, that have chosen to hide security problems, which revived the idea of a third-party agent that could receive and verify anonymous tips. "The security community needs to stand up and make security consciousness and disclosure a good thing," Pond said.

Some software vendors also refuse to verify reported security problems. The group discussed the conflicting needs of the security community to know about a problem and the desire of vendors to limit damage both to themselves and to their customers.

"We share the same philosophy [as Microsoft]: We want to share a fix with a customer when we go public with it," said Michael Fuhrman, manager of security consulting at Cisco Systems Inc. "If we just alert everybody there's a problem, customers are being opened up to attack and will be upset that Cisco didn't have a fix available. You want information to get out so that proactive people can take the right steps, but then the customer base not paying attention is a sitting duck."

Several of the security researchers in the group said that lack of confirmation can lead some users to think a problem is just a hoax.

This led to discussion of some form of credibility rating for researchers, something that would let users know that the rest of the security community has found a source of advisories to be valid and reliable.

One form of vendor verification is the release of a patch, which is often hidden by vendors inside other patches and updates. "Security content has to be advertised as a motivation to get people to install patches," said Farm9.com Inc.'s Andrew Baker. Attendees also pointed out that non-English-language versions of software are often patched long after English versions are fixed.

The group discussed whether researchers should release patches or workarounds on their own. "Vulnerability announcements that come out with a workaround, that don't involve a patch, are probably good things," @stake's Pond said.

Textron's Ratcliffe expressed his frustration with managing patch releases and justifying the cost of implementation across the enterprise. "It's a risk assessment," he said.

The tendency of the general news media to blow relatively small issues out of proportion, and the problems this creates for the security community, was another area of consensus for the participants. "Press coverage has more to do with comprehensibility to the public than with the severity of a problem," Microsoft's Lipner said. Many participants said threats with catchy names are covered in depth, while long-standing threats of much more significance are ignored because their mechanisms are too difficult to explain to nontechnical users.

However, there was also agreement that news coverage can be beneficial, especially if it increases awareness and includes information to educate users on security. "If you can get education into security stories, then we want more stories," Pond said. Discussion then centered on the possible creation of a list of security experts to whom general media can turn for security education and verification.

The summit agenda included one item that no one among the attendees was eager to tackle: legal issues. Representatives from several of the research companies in attendance detailed legal threats from companies that didn't want security information published. These participants said they believed that, as long as the information they published was true, they were in the clear.

A major concern for the group was a draft convention on cyber-crime, now before the European Parliament and proposed as a multilateral treaty, that could make most security scanning and research illegal because it prohibits distribution of tools that are often used illegally.

Issues of ethics also proved controversial and were discussed often during the summit. Ethical issues overlapped with legal ones because a quick threat of legal action was generally seen as an ethical problem, especially when combined with hostility or directed at someone who may merely own zombie systems that are being used to attack others.

The researchers at the summit also agreed that it is ethical to provide appropriate and full notification when security problems are discovered.

"There is a responsibility on the part of vulnerability finders to notify the vendor or CERT about severe problems," Pond said. Farm9.com's Baker wondered, "If you find a new vulnerability, and it's been exploited, do you have a need to tell the community that there's an exploit already loose?"

This question and the others brought up at the summit will be considered by the working groups, which will present all their findings and proposed best practices at a subsequent Security Vulnerability Summit. For more on the summit, go to www.vulnerabilitysummit.org.
