How so? He offers paragraph after paragraph of “evidence” for why he concludes that:
This case was never about getting content out. It was about trying to blackmail Google into including content.
Sullivan does not explain how he knows that leading European publishers are “trying to blackmail Google,” but he cites Google in asserting that:
The content could have been removed through the use of robots.txt files or meta robots files such as explained on the Google Blog recently.
Google today waves the robots.txt flag once again on its blog:
If publishers do not want their websites to appear in search results, technical standards like robots.txt and metatags enable them automatically to prevent the indexation of their content.
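For reference, the mechanisms Google points to are simple to deploy. A minimal sketch (the rules shown are illustrative, not the publishers’ actual files): a site-wide robots.txt placed at the domain root, or a per-page robots meta tag.

```text
# robots.txt at the site root — asks compliant crawlers not to fetch any page
User-agent: *
Disallow: /

# Or, per page, a robots meta tag inside the HTML <head>,
# asking search engines not to index the page or follow its links:
# <meta name="robots" content="noindex, nofollow">
```

Note that both are voluntary standards: they request, rather than technically prevent, crawling and indexing, which is part of what the dispute turns on.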
In layman’s terms, Google’s robots.txt defense is akin to a shoplifter saying, “If stores do not want their merchandise to be taken without payment, locksmiths enable them automatically to prevent the theft of their products.”
Just as a shoplifter’s “if you didn’t want me to take your merchandise, you shouldn’t have made it so tempting” defense has no merit, Google’s “if you didn’t want us to take your content, you shouldn’t have made it so tempting” defense likewise has no merit, as the Belgian courts suggest:
A Brussels court said Google Inc. violated copyright laws by publishing links to Belgian newspapers without permission and ordered the company to remove them (see “Will Google pay for content?”).