Who’s Dumber: Bad Guys … Or Good Guys?

Summary: In the old cowboy movies, the black hats were villains who created mayhem until the white hats came along and ended their reigns of fear. Now we have the spectacle of good guys seemingly educating the bad guys on how to exploit flaws in Internet protocols that could compromise traffic and users.

In the old cowboy movies, the black hats were villains who created mayhem until the white hats came along and ended their reigns of fear. Now we have the spectacle of good guys seemingly educating the bad guys on how to exploit flaws in Internet protocols that could compromise traffic and users. Then there are good guys who act in braindead ways. So whom should we fear most?

Thus far this summer, the Internet has not cracked, even though Dan Kaminsky revealed a flaw in the Domain Name System that could have led to a train wreck on the Internet. Thankfully, he disclosed the details cautiously, so patches could be put in place first to keep the identities of users of banking and other sites on the Web from being hijacked. Now, two security researchers have demonstrated how huge amounts of unencrypted Internet traffic can be siphoned off through the Border Gateway Protocol. One computer expert said in this Wired article that he "went around screaming my head about this about ten or twelve years ago" to intelligence agencies and to the National Security Council, to no effect.

That's the point. So far, the black hats haven't shown they are smart enough to hijack identities through the DNS flaw or siphon Internet traffic through BGP eavesdropping. Meanwhile, though, there seem to be plenty of dumb guys in white hats, making life miserable for thousands or millions of computer and Web users.

There's the memory stick lost in the United Kingdom by the consulting firm working on the government's ID card project. Data on 84,000 prisoners and 43,000 serious offenders went missing. Oh, and the data on the stick was, naturally, unencrypted. That's data about lawbreakers. How about the million people whose account numbers, passwords, mobile phone numbers and signatures were inadvertently sold on eBay? Their information was supposed to be protected by The Royal Bank of Scotland. But its archiving company sold a server on the auction site without wiping the hard drive. Helllloooo ... anybody home?

There is not just stupidity on the other side of the pond. Connecticut Gov. Jodi Rell has been probing the loss of Social Security numbers and other personal information belonging to 4.5 million customers of Bank of New York Mellon. And Rhode Island lost a disk with the Social Security numbers of about 1,400 state employees.

With consultants, bankers and government officials like this, too often it seems that "good guys" give us more to worry about than bad guys.

SLIDES: "Stealing The Internet" from Defcon



Talkback

13 comments
  • Bad guys don't need applause

    Just money. So they'll modestly attempt to avoid receiving their due when they compromise systems. I wouldn't assume that not hearing about a success means the success has not occurred.


    Also, this statement is confusing:

    Now, we have the spectacle of good guys seemingly educating the bad guys on how to exploit flaws or processes of the Internet, that could compromise traffic and users.

    [End quote]

    Publicizing a flaw or an exploit makes one a bad guy, no?! The criminal sanctions shouldn't be affected by the stated motive of the bad guy.
    Anton Philidor
    • Publicizing flaws

      White hats typically disclose flaws privately, so actions can be taken before going public. The public notice then should motivate anyone who was not aware of the problem to act, before black hats do. The assumption is that black hats will be black hats, regardless. TST
      Tom Steinert-Threlkeld
      • Timing?

        When there's a market for software flaws - and I've read of $50,000 prices - publicizing a flaw or, worse, providing a means to take advantage of a flaw is a black hat activity regardless of the timing.

        When patches are released, there are attacks based on the flaw identified, because many vulnerable computers are patched slowly, if at all. So even when private notification has led to a sufficient response, the chance of black hat activity means public disclosure is still inappropriate and, I hope, illegal.

        I can't identify a time when teaching how to harm people and organizations is acceptable. And I'm surprised that anyone could believe in exposing (uninvolved) third parties to substantial damage.
        Anton Philidor
        • "when teaching how to harm people and organizations is acceptable. "

          Police firearms training is a black hat activity?
          So we can no longer train bomber pilots in the military, since it's not "acceptable?"

          Silly.
          bmerc
  • Oh freaking wahhhhhhhhhh.... well, seriously, think about what you're saying.

    1. People are human and make mistakes.
    2. People are greedy and exploits can make them money.
    3. We do not, I repeat, DO NOT, live in a utopian society. People do things based on their own motives, so saying that all white hats should follow the same unwritten rule is like saying police should pull everyone over as soon as they go 1 mph over the speed limit. It's just NOT going to happen.
    4. The inconvenience to people... nope... people who didn't know it existed for the last 20 years still don't know it exists. When the patch arrives at their doorstep via MS Update, they will install it.
    5. Put each exploit into context before screaming about its release. Some things only affect old systems which very few people use; some affect many systems but may have other mitigating factors that essentially prevent the exploit from being really nasty.

    Personally, I would like to know if my system has a vulnerability, so I can plan for prevention or at least know what caused an issue if it comes up. Releasing exploit code helps me diagnose my system to ensure it's not vulnerable, and allows firms that specialize in protection to develop an immediate counter to it. It also forces companies with an eye on profit to stay on the ball when it comes to security.

    I ask all my vendors if they have their stuff tested for security flaws; if they say yes, I ask for the report or summary. They want to make money and I want to stay in business....
    Been_Done_Before
    • So the advantages are sufficient...

      ... that you would find it acceptable to have severe, irreparable damage done to your computer or to your clients' computers. So long as you can gain from the identified flaw or exploit code almost every time. Many would not agree.

      And many would say that making exploit code ineffective one device at a time is less protective than not publicly identifying the flaw or making the exploit code available. Being one's own security company can be expensive and time-consuming and a waste of resources.

      I hope that your determination and confidence and skill and knowledge are successful in dealing with every issue, every time. You apparently appreciate challenges that other, sensible people would shrink from.
      Anton Philidor
      • There are challenges involved... but for those who need to know...

        it's critical to know.

        Would you like to know a hurricane is coming before it's blowing down your door, or after?

        When it's business-critical to stop intruders who buy and sell 0-day hacks from getting into your system, time is of the utmost importance.

        Big software companies have to make a fix available to everyone that will not break their systems. I, as an IT admin, need to know if people can exploit something I have running to access patient information. Prompt notification is what gives me the time to figure out cost versus benefit.

        I think of the Internet as an open battlefield in the middle of the city. You have a bunch of crap going on all around you that may or may not hurt your systems.

        My job as IT admin is to keep everyone in their respective places. It's kinda hard to do that when someone slips past your security barrier because your trusted vendor left the door unlocked. It may be best to blockade that door till the vendor can fix the lock, but I don't know about the unlocked door until after they have walked away with all the information.

        Bottom Line:
        This kind of security is a farce. Security through obscurity is not security, and those who think it is are clueless about security.
        Been_Done_Before
        • Suppose...

          ... someone published an article in the newspaper announcing by name everyone who had the key to your building and where each of them kept it. And added a foolproof means of obtaining possession of at least one copy of the key.

          You would appreciate knowing about the vulnerability, certainly, but how would you feel about knowing that everyone else had the same information you did?

          You'd try to have everyone change where he kept the key, but you'd also know that a lot of time will pass before the published information is entirely obsolete. And during that whole time you'd be more vulnerable.

          Would you be inclined to thank the reporter?

          By comparison, suppose the reporter came to you and advised you of what he'd discovered. Without publishing any of it. Then you'd be able to respond with slightly less concern. The fact the reporter could find out means you're vulnerable, but an immediate attempt to ... exploit the situation would be less likely.

          You'd thank the reporter for his restraint, no?!

          Also, the reporter would be irresponsible and partly to blame for any incident involving theft and misuse of the key after publication.
          Anton Philidor
          • RE: Suppose...

            Quote: [i]... someone published an article in the newspaper announcing by name everyone who had the key to your building and where each of them kept it. And added a foolproof means of obtaining possession of at least one copy of the key.[/i]

            [b]SIMPLE, I would change the LOCK!!!!!!!![/b]

            If the reporter wrote that 'XYZ' brand locks suffer from a design defect that allows someone with a hairpin to easily pick the lock, then I would change the lock.

            If I read where Windoze suffered from severe design deficiencies; then I would consider [b]getting rid of Windoze!!!![/b]

            I want to know what security holes I may be exposed to so I can take action. That way, if the boss says that he has heard about this or that exploit, I can give him a straight answer.
            fatman65535
      • Security through obscurity?

        A flaw in and of itself.

        So, when should information be made available to us?
        What about those who deny the existence of vulnerabilities without some evidence, such as an explanation or sample exploit code?
        How long should we give vendors to build a patch before disclosure? (5-6 years and counting on some XP vulns. BGP, 10 years or so).

        Doing the exploit should be illegal, not publishing information about it or the fact that it exists. The problem is that no one is bothering to nail the criminals most of the time.

        Code is like fire; we can't put the genie back in the bottle, nor should we, but we can investigate and arrest the arsonists.
        seanferd
  • RE: Who's Dumber: Bad Guys ... Or Good Guys?

    Who is dumber is not the problem. Every day we hear about "new" vulnerabilities. In 99.999% of cases, they are NOT NEW!

    They have in most cases been there for many months and sometimes for many years without anyone publicly acknowledging their existence. When a vulnerability is first publicized its "mal-ware potential" becomes available to many more people, but it was ALWAYS THERE FROM THE DAY IT WAS WRITTEN INTO THE PROGRAM.

    Its publication may prompt good guys to create protection which would not otherwise occur. But the vulnerability could have been discovered years ago by someone who did not tell anyone else.

    They may have been using the vulnerability for years to gain information or control or just cause trouble, as long as they kept their activity below the radar. They could do as they pleased until someone else found the vulnerability and publicized it or abused it enough to be detected by the community.

    I found a flaw in the old (happily no longer in use) MS Mail program. It allowed me to read any mail or send mail as any user. I notified Microsoft. They said they could not replicate the vulnerability and denied the bug existed. I demonstrated it on a test account on another admin's post office, but I did not want the information to get out about how it was done so I never gave anyone (outside MS) enough information to know how I did it. And they continued to stonewall me.

    I could have taken advantage. But I like White better, and in my case it would have made little difference. I was already the admin of my email post office, and did not find the potential notoriety of publication enticing. And knowing the havoc it would have caused if the information got out gave me the chills. Besides, if the information got out, it would not have been to anyone's benefit. I might be known as smarter than the idiot at Microsoft, but dumber than almost everyone else for bringing about the ruinous damage that would have resulted.

    So I kept quiet. And the vulnerability "never existed".
    RIGHT!

    Luckily it was near the end of the use of MS Mail, and Exchange was coming. And the likelihood of someone accidentally finding the vulnerability was remote, and I had tried my best to warn Microsoft, so I just let it drop.

    While this kind of thing happens rarely (I HOPE!), it does happen. And some of the white hats at MS were apparently dumber than I would have believed possible. But at least, "as far as I know," no one ever used this vulnerability for mal-purposes.

    Sadly, the "as far as I know" standard of protection/safety is totally inadequate. Worse, it remains the current standard for any newly discovered vulnerability. We almost never have previous data to check to determine if the new vulnerability was ever previously used to compromise systems and accounts. And even if we have a sample of traffic to check, it does not cover the entire internet, so we never know if it has been used elsewhere.

    So what we have are millions of additional "mal-days" of exposure to vulnerabilities which are currently under the radar. Mal-days being days during which a vulnerability existed, including the time before it was publicized, which are not currently acknowledged by the community. Failing to acknowledge this additional exposure to harm when the vulnerability existed but had not been publicized is the same as believing that Security by Obscurity works...

    I do not have a solution, but I believe that the potential harmfulness of vulnerabilities should be tracked from the day they were built in, not from the day they are publicized. This will make everyone much more sensitive to the real danger they represent. And it may force companies and programmers to even greater efforts to produce bug free code.

    It is now your turn to assist the community by suggesting something to help solve this problem.

    Thanks for reading,
    Ty
    tdibble
  • RE: Who's Dumber: Bad Guys

    Obviously the good guys are dumber..they still haven't caught the bad guys.
    mSn mSN
  • "...to no effect"? Yeah, right.

    This made me chuckle...

    "...went around screaming my head about this about ten or twelve years ago" to intelligence agencies and to the National Security Council to no effect.

    ...uh huh. More likely they bounced him out with a "not interested" response, then worked on exploiting this for their own ends (assuming they weren't doing so already).

    Everyone remembers this timeline:
    - July: MS fixes RPC
    - a month later, Lovesan et al. exploit it
    - a month later, the "fix" is revised

    But folks forget that the defect "discovered" when MS patched it in July was present and exploitable for years on NT OS versions.

    If someone with deep resources and strong organizational discipline had been quietly and very selectively been banging away at that for all those years, how would you know?

    If those levels of resources and power were brought to bear on the vendors who created the flawed software, would such flaws only arise accidentally?

    We're used to a little-number world, but big numbers exist. Echelon exists, and must cost a bomb to create and run... it's probably quite a good RoI for some of that money to have been spent on "vendor relations" and code research.
    cquirke1