Danny Sullivan of Search Engine Watch takes an in-depth look at the week's big news: the case of Gonzales v. Google, in which the search giant has pushed back against a government subpoena and the Justice Dept. is asking the court to compel compliance. Some of Sullivan's points:
The government is asking for a random list of URLs in Google's index and a random list of search queries. It emphasizes that no user data need be associated. And it claims that:
Reviewing URLs available through search engines will help us understand what sites users can find using search engines, to estimate the prevalence of harmful-to-minors (HTM) materials among such sites, to characterize those sites, and to measure the effectiveness of content filters in screening HTM materials from those sites.
Reviewing user queries to search engines will help us understand the search behavior of current web users, to estimate how often web users encounter HTM materials through searches, and to measure the effectiveness of filters in screening those materials.
This Sullivan finds "jawdropping."
[S]ure, they could hand over a list of 1 million URLs. But you have no idea from that list how often any of those URLs actually rank for anything or receive clicks. It is non-data, useless.
Secondly, the list of search queries HAS NO RANKING DATA associated with it. So let's say the DOJ sees a query for "lindsay lohan." They don't know from that data what exactly showed up on Google or other search engines for that query, not from what they've asked for. Since they don't know what was listed, they further can't detect any HTMs that might show up.
In short, gathering this data is worse than a fishing expedition. It's a futile exercise that will prove absolutely nothing about the presence of HTMs in search results. All it proves so far is that the DOJ lawyers (and apparently their experts) haven't a clue about how search engines work. An actual search expert will tear apart whatever "proof" they think they can concoct from the data gathered so far.
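The gap Sullivan is describing can be illustrated with a toy sketch (not from his article, and not based on any real Google data): the HTM rate in a random URL sample only re-estimates prevalence across the index, while what users actually encounter depends on which pages rank for real queries and draw clicks. Every number, the HTM flag, and the click distribution below are hypothetical placeholders.

```python
"""Toy illustration: index-level prevalence vs. click-weighted exposure.
All figures are made up for the sake of the example."""
import random

random.seed(0)

INDEX_SIZE = 1_000_000

# Hypothetical index: 5% of URLs flagged HTM overall, but the ~1% of
# pages that actually rank and get clicked skew much cleaner (~0.5%).
is_htm = [random.random() < 0.05 for _ in range(INDEX_SIZE)]
popular = random.sample(range(INDEX_SIZE), INDEX_SIZE // 100)
for i in popular:
    is_htm[i] = random.random() < 0.005

# What the subpoena would yield: a random sample of URLs with no ranking
# or click data attached. Its HTM rate just re-estimates the index rate.
sample = random.sample(range(INDEX_SIZE), 10_000)
sample_rate = sum(is_htm[i] for i in sample) / len(sample)

# What users encounter depends on which pages surface and get clicked.
clicks = {i: random.randint(100, 10_000) for i in popular}
total_clicks = sum(clicks.values())
exposure_rate = sum(c for i, c in clicks.items() if is_htm[i]) / total_clicks

print(f"HTM rate in random URL sample: {sample_rate:.2%}")   # roughly 5%
print(f"HTM rate weighted by clicks:   {exposure_rate:.2%}") # roughly 0.5%
```

Under these made-up assumptions the two figures differ by an order of magnitude, which is Sullivan's point: without ranking and click data, a raw list of URLs or queries says nothing about how often users actually run into HTM material.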