Click Fraud: What does Google fear?

Why did Google go on the offensive at the SES Conference during an industry panel, seeking to undermine the work of third party click fraud assessment services?
Written by Donna Bogatin, Contributor
Google CEO Eric Schmidt said of click fraud at the Search Engine Strategies Conference in San Jose last Wednesday that he believes “we have it under control.”

If that is the case, then why did Google’s Business Product Manager, Trust & Safety, Shuman Ghosemajumder, go on the offensive the prior day at the Conference during an industry panel, by seeking to undermine the work of third party click fraud assessment services?

I report on Ghosemajumder’s tactics at the conference in “SES on Click Fraud: industry collaboration, or vested interests”:

he unveiled a strategic Google assault against the work of third party click fraud consultants such as panelists Alchemist Media, KeywordMax, Click Forensics…

Apparently timed to coincide with the SES panel, Ghosemajumder announced that he had posted to the Google blog a piece penned by Google engineers, “Troubling findings on how some third parties detect click fraud”:

The Google “findings” purport to expose the “work of several click fraud consultants”:

fictitious clicks: events which are reported as fraudulent, but are never recorded or charged as ad clicks by Google.

In a combative tone, Ghosemajumder detailed Google engineers’ analysis that “widely quoted estimates of the size of the click fraud problem are exaggerated.”

Panel members, and targets of the strategic Google assault, seemed to feel ambushed and expressed dismay that they were not advised of the Google “findings,” or alerted to Ghosemajumder’s blog post.

It is not in advertisers’ best interests to accept at face value Google engineers’ assessment of the work of third party firms which seek to audit the work of Google engineers.

It is not the first time, however, that Google has put forth internally generated, self-interested analysis to assuage advertisers’ concerns that they are paying for fraudulent clicks.

In “Click fraud: Google 'estimating invalid clicks', transparency or PR?” I report:

Google AdWords blog heralds ‘a new AdWords feature enabling advertisers to have a much more detailed picture of invalid click activity in their account.’

Ghosemajumder said of the initiative:

Our goal is to provide that transparency so advertisers who previously may have been unnerved or concerned about these wildly exaggerated figures will be able to see now what Google is doing to protect them.

The Google AdWords feature purporting to provide advertisers a “detailed picture of invalid click activity,” however, merely reflects Google self-reported, internally generated data.

How transparent is Google actually being, if the data it submits to advertisers is not substantiated by a third party? How can Google’s desired transparency be evaluated if it is self-reporting data to advertisers with the goal of showing them “what Google is doing to protect them”?

There is no sure means for an advertiser to verify “the number of invalid clicks on their ads,” as reported by Google.

The data and analysis advertisers ought to examine closely come from the only publicly available, independent, scientific review of the click fraud problem: a court-sanctioned report by Dr. Alexander Tuzhilin, NYU Professor of Information Systems.

Tuzhilin concludes that the Pay Per Click (PPC) — Cost Per Click (CPC) search advertising model is “inherently vulnerable to click fraud.”

In “Click fraud: are advertisers sufficiently protected?” I cite his assessments of Google click fraud prevention:

the measures…are only statistical measures providing some evidence that Google’s filters work reasonably well. This does not mean, however, that any particular advertiser cannot be hurt badly by fraudulent attacks, given the evidence that Google filters ‘work.’ Since Google has a very large number of advertisers, one particular bad incident will be lost in the overall statistics. Good performance measures indicative that filters work well only mean that there will be ‘relatively few’ such bad cases.

Tuzhilin also qualifies Google’s “Click Quality” team efforts:

Since its establishment in the Spring and Summer of 2003 the Click Quality team has been developing an infrastructure for detecting and removing invalid clicks…Currently, they reached a consolidation phase in their efforts, when their methods work reasonably well, the invalid click detection problem is ‘under control,’ and the Click Quality team is fine tuning these methods. There is no hard data that can actually prove this statement.

