Should security bugs be proprietary?

Is timely disclosure -- an open source security process -- the key to a timely fix? Or do loose lips sink chips?
Written by Dana Blankenhorn
After an extensive hullabaloo, Cisco Systems has gotten former ISS employee Michael Lynn to shut up about a security flaw he found in its router software.

Lynn said he found a buffer overflow in Cisco's router software in April and alerted the company, but it failed to fix the problem promptly. So he was ready to tell all at the Black Hat Briefings in Las Vegas this week, but Cisco won a temporary restraining order. ISS agreed to yank the presentation, but hours later Lynn quit his job and described the problem anyway, saying "I had to do what's right for the country" and that bad people were already working on exploits.

Cisco responded with more legal paper, and Lynn agreed to shut up.

The question for me is not, did Cisco have a right to do what it did? The question I have is, did Cisco in this case do right by Cisco?

Keeping bugs, especially security bugs, proprietary is a very touchy subject. Some folks feel spilling the beans lets the bad guys know what to work on. Others say spilling the beans is the only way to let good guys know what to work on.

I've previously discussed the famous graph showing that the risk from a vulnerability peaks between its public announcement and the release of a fix. But that risk starts rising the moment the flaw is discovered, and it never falls to zero.

Cisco has delivered its verdict. I'm wondering what the truth is.
