Google introducing Safe Browsing diagnostic to help owners of compromised sites

Summary: Last week, Google's Niels Provos made an announcement regarding a newly introduced feature aiming to help owners of compromised sites in understanding the implications of the compromise, as well as the malicious events that took place when Google last indexed the site.

TOPICS: Google, Malware, Security

Last week, Google's Niels Provos made an announcement regarding a newly introduced feature aiming to help owners of compromised sites in understanding the implications of the compromise, as well as the malicious events that took place when Google last indexed the site. From Google's Online Security Blog:

We've been protecting Google users from malicious web pages since 2006 by showing warning labels in Google's search results and by publishing the data via the Safe Browsing API to client programs such as Firefox and Google Desktop Search. To create our data, we've built a large-scale infrastructure to automatically determine if web pages pose a risk to users. This system has proven to be highly accurate, but we've noted that it can sometimes be difficult for webmasters and users to verify our results, as attackers often use sophisticated obfuscation techniques or inject malicious payloads only under certain conditions. With that in mind, we've developed a Safe Browsing diagnostic page that will provide detailed information about our automatic investigations and findings.

These are some of the key benefits that I've already found highly effective in my investigative assessments.

  • although the data is kept for only 90 days, even a three-month snapshot of the malicious activity that's been going on at a particular domain is handy when conducting assessments, especially in those cases where the compromise has already been detected by the site owner and the malicious links/scripts removed
  • the feature's investigative and relationship-establishing nature, in the sense of listing other sites compromised by the same malicious domain, as well as the domains that hosted the malware or acted as redirection points, easily allows you to see the big picture from different angles regarding a particular malware group or incident
  • the endless possibilities for automation and integration of the data thanks to the Safe Browsing API, as well as the possibility of using the service as an early warning system for security incidents
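The automation point above can be sketched in a few lines. The Safe Browsing API itself has its own wire protocol, but even the diagnostic page can be scripted against by building lookup links for a watch list of domains; the following is a minimal Python sketch, assuming only the page's `site` query parameter (the `watch_list` domains are hypothetical placeholders):

```python
from urllib.parse import urlencode

# Base URL of the Safe Browsing diagnostic page described in the post.
DIAGNOSTIC_URL = "http://www.google.com/safebrowsing/diagnostic"

def diagnostic_link(domain: str) -> str:
    """Build the diagnostic-page URL for a given site."""
    return f"{DIAGNOSTIC_URL}?{urlencode({'site': domain})}"

# A hypothetical watch list of domains to review periodically,
# e.g. from a scheduled job that fetches each link and alerts on changes.
watch_list = ["example.com", "example.org"]
for domain in watch_list:
    print(diagnostic_link(domain))
```

A scheduled job could fetch each generated link and diff the reported listing status over time, giving exactly the kind of early-warning signal described above.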

What type of data is stored about a compromised site anyway? Google's diagnostic page answers four questions about a compromised site:

  • What is the current listing status for [the site in question]?
  • What happened when Google visited this site?
  • Has this site acted as an intermediary resulting in further distribution of malware?
  • Has this site hosted malware?

Let's test the service and diagnose Redmond Magazine, which was among the high-profile victims of a recent SQL injection attack, in order to demonstrate the type of data Google gathers. According to the historical data for this domain:

Of the 59 pages we tested on the site over the past 90 days, 3 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 05/19/2008, and the last time suspicious content was found on this site was on 05/10/2008. Malicious software includes 3 trojan(s), 3 exploit(s). Successful infection resulted in an average of 5 new processes on the target machine. Malicious software is hosted on 2 domain(s), including, 2 domain(s) appear to be functioning as intermediaries for distributing malware to visitors of this site, including,

You can safely test the service by looking up the fast-flux domain which I mentioned in a previous post, or if curiosity prevails, diagnose the malicious domains injected in the ongoing SQL injection attacks.

The introduction of the Safe Browsing diagnostic feature is a step in the right direction - limiting speculation and empowering both researchers and average end users with evidential data regarding a particular compromise. However, there have been, and continue to be, numerous successful attempts by malicious parties to trick Google's crawlers into flagging a malicious site as a clean one. In fact, a huge number of the sites used as redirectors to malicious domains in the recent SQL injection attacks remain undetected, yet another indication that the bad guys change their tactics and adapt rapidly, sometimes more rapidly than we'd like to imagine they do.

Dancho Danchev

About Dancho Danchev

Dancho Danchev is an independent security consultant and cyber threats analyst, with extensive experience in open source intelligence gathering, malware and cybercrime incident response.


  • Great article, but...

    you know what would be even better is if Google gave its users a better idea of what to do when Google gets hacked. To date, while I love the responsiveness and capability of their security team, their policy has been to not disclose vulnerability details, or even vulnerability announcements... pretty much a standard for all companies. Why? Seems to me they'd rather bury things and keep their users in a eutopian state of belief that they are secure.

    • Agreed...

      What they are doing is a giant step in the right direction, and almost makes me want to Google a site first even if I know the URL, but adding additional data on the vectors used by these sites would further allow administrators to protect whole networks.
    • Re: Great article, but...

      The new feature is a good example of marginal thinking - a little data is better than no data at all, although if they really wanted to help the average webmaster clean their website, they could have included a snippet of the code that they detected as malicious, since there are false positives where legitimate JavaScript ends up looking like obfuscated JavaScript courtesy of a malicious party. The service is handy for researchers and law enforcement, but the average webmaster wants to know which script has to be removed in order to have their site flagged as safe again.

      Now, why wouldn't they do that? It would violate their OPSEC (operational security) and provide a great deal of detail to malicious parties on which obfuscation or injection was detected, and best of all - which didn't get detected at all. As you can imagine, a basic sampling of their domain portfolio of live exploit URLs against Safe Browsing's diagnostics would have an enormous impact on their ability to fool Google, because Google told them how.

      If Google were to hijack the insecurities of malware-injected or embedded sites by offering tips on which vendor's service or product to use in order to clean up the sites, that would smell like an anti-competitive practice from miles away. They should, however, come up with a balanced way to provide more resources than the current tips available in FAQ form:

      As for the type of vulnerabilities discovered on the sites, with the ongoing efforts to obfuscate them on the fly, even third-party solutions that specialize in exploit detection have started to standardize a great deal of vulnerabilities under the generic malicious-JavaScript label.

      Just for the record - Google's been automatically verifying the maliciousness of sites since 2006, whereas Yahoo, for instance, only recently started protecting its users with the integration of SiteAdvisor, for which I have some reservations, and the rest of the search engines out there aren't even considering the responsibility of protecting their users.

    • Eutopian?

      Since when has Google said anything they release is eutopian? That word fails me. How can Beta be what you describe??
  • What Google doesn't realize....

    is that for most sites that are 'compromised', the compromise is no accident in the slightest.
    They are usually compromised on purpose by the people who put up the site as virus and spyware traps, in order to get their stuff on the computers of the stupid and lazy.
  • For the very reasons adduced by Lerianis,

    I should very much like to see a [b]Firefox[/b] add-on which, in the manner of the [b]McAfee SiteAdvisor[/b], provided [u]users[/u] with a small notice icon in the status bar regarding the present status of a web site they have visited, the clicking of which gave access to more detailed information. This might serve to make people more aware of the tactics used by certain net predators and thus, hopefully, better able to counter them....

  • I created a tool to make it easy to check

    Great write-up guys, I hadn't even heard about this until I read your article. One thing I noticed while searching around was there isn't a form to test a site.... sooo..

    I built one
    <a href="">Easy Form To Check Your Site</a>

    I also put a link up to this site to show my appreciation.