Google details troubles it and others face meeting right to be forgotten requests

Summary: Google has revealed the pains of complying with Europe's right to be forgotten ruling in a letter to European data watchdogs.

TOPICS: Privacy, Google, EU

Google has outlined the complexities and costs it faces when handling European 'right to be forgotten' requests, throwing a spotlight on the difficulties smaller rivals may face with the process.

The last thing Google wanted when the EC proposed including a 'right to be forgotten' in data protection legislation back in 2012 was for such a law to apply to search engines. Publishers, yes; hosting platforms, not really; search engines, definitely not: that appeared to be its stance.

Now, thanks to a ruling by the European Court of Justice handed down in May, Google is having to contend with the introduction of just such a right. The court's decision stipulated that European citizens can ask search engines to remove particular links from results returned for a search on their name, if the material is deemed to be out of date, no longer relevant, or excessive.

After starting to process the first 'right to be forgotten' requests in June, Google set out in a letter to Europe's privacy watchdogs this week the difficulties it has faced, including the "significant hiring effort" it had to undertake to handle the requests.

Of the 91,000 requests Google had received by 18 July, the company's global privacy counsel Peter Fleischer said Google had knocked back around 32 percent, asked for more information on 15 percent, and removed links for 53 percent.

"We generally have to rely on the request for information, without assurance beyond the requester's own assertions as to its accuracy. Some requests turn out to have been made with false and inaccurate information," Fleischer wrote in the letter.

"Even if requesters provide us with accurate information, they understandably may avoid presenting facts that are not in their favour. As such, we may not become aware of relevant context that would speak in favour of preserving the accessibility of a search result."

Because requests are not automated, processing them has caused Google to draft in new workers. "We are not automating decisions about these removals. We have to weigh each request individually on its merits, and that is done by people. We have many people working full time on the process, and ensuring enough resources are available for the processing of requests required a significant hiring effort," the letter said.

The response raises questions about the economic burden on smaller companies that may receive such requests. However, with Google holding over 90 percent of the search market in Europe, it's unlikely other search providers will face a deluge of complaints on the same scale as the one facing Google.

According to the company, there are no shortcuts that would make it easier for other search engines to handle requests. Google does not yet know the average time taken to process each request, but it expects processing to speed up once it has cleared its current backlog.

Asked whether Google had considered sharing delisted search results with other providers, Fleischer said Google had not and would find it problematic to do so. 

"We would note that sharing the delisted URLs without further information about the request would not enable other search engine providers to make informed decisions about removals, but sharing this information along with details or a copy of the complaint itself would raise concerns about additional disclosure and data processing."

Google's letter was sent to the EC's Article 29 Working Party, which last week asked Google for information ahead of drafting guidelines on how the 'right to be forgotten' ruling should be implemented in practice.

Questions that Google's advisory council on the 'right to be forgotten' has thrown back to Europe's Article 29 Working Party to answer include:

  • Are there any procedural issues raised by the case (e.g. responsibilities of search engines, data protection authorities, publishers, individuals)?
  • What is the nature and delineation of a public figure's right to privacy?
  • How should we differentiate content that is in the public interest from content that is not?
  • Does the public have a right to information about the nature, volume, outcome of removal requests made to search engines?
  • What is the public's right to information when it comes to reviews of professional or consumer services? Or criminal histories?
  • Should individuals be able to request removal of links to information published by a government?
  • Do publishers of content have a right to information about requests to remove it from search?

Fleischer also explained the legal basis for Google notifying publishers when it delists a story. That practice came to light after several publishers, including the Guardian and the BBC, were notified by email that Google had delisted stories.

According to Fleischer, Google doesn't need any justification under European data protection law to notify publishers since it's not sharing any personal data with the webmaster and in any case isn't in control of content found at a particular URL.

"We are simply notifying the webmaster about a partial removal of search results for a specific URL on his/her domain… Any personal data that can be found at the specific URL is not part of the data that is processed by Google."

Liam Tung

About Liam Tung

Liam Tung is an Australian business technology journalist living a few too many Swedish miles north of Stockholm for his liking. He gained a bachelor's degree in economics and arts (cultural studies) at Sydney's Macquarie University, but hacked (without Norse or malicious code, for that matter) his way into a career as an enterprise tech, security and telecommunications journalist with ZDNet Australia. These days Liam is a full-time freelance technology journalist who writes for several publications.


  • It seems they missed out on the easiest option!

    If someone writes in and tells Google they don't want information to show, all Google need do is add them to an ignore list of words or names not to search on, and return a page stating the search cannot be done because the term is on the requested ignore list. That should greatly reduce the workload and the number of requests. Search engine companies should NOT be put in the place of checking everything that's posted on the net; let them do the search, and then the people it applies to can follow up on inaccurate data through the legal system.
    Deadly Ernest
    • they still need to vet the request

      You can't just blindly delist everything requested, otherwise pranksters and vandals could effectively delete the entire search engine. Also, read the list of questions posed by Google to the Working Party. Remember the spat about journalists' articles being delisted unbeknownst to the publisher/author? Should, say, a convicted criminal or disgraced politician be allowed to have any mention of them delisted?

      This is a stupid law with lots of complicated consequences.
      • A simple vetting the request is from the person is very easy

        and takes little in the way of resources; it's the effort needed to do a search and check the validity of the results that's the worry. But once people are told a request means we refuse to allow any searches on your name, people won't be in a hurry to lodge a request, as it means even the legitimate data they want people to see is hidden from everyone. No info found on Facebook, LinkedIn, or anything else.

        The answer I propose is a very simple binary process: (a.) no restrictions on what is found and displayed, or (b.) no processing of a name if they request it be filtered out. The name is allowed or totally filtered out - end of story.
        Deadly Ernest
        • So this assumes no one has the same name, correct?

          This is NOT as simple as you are trying to make it.
          Yes, Google is going to have to jump through a few hoops to satisfy these requests.
  • A very bad law...

    You'd think that after the shooting down of Malaysia Airlines Flight 17 over Ukraine the European courts would rethink the wisdom of their "right to be forgotten" laws. You'll recall that shortly after they shot down the airliner, and before they realized that they had, in fact, targeted a civilian aircraft, Russian-backed rebels boasted of their kill all over social media. Then, once the truth of what happened became evident, they quickly went about scrubbing the evidence of their crime from the Internet.

    No doubt, the law as currently written would not require Google to comply with requests to scrub their search results of such pages, but laws that begin as ostensible measures to protect ordinary citizens have a way of being abused by governments and politicians to hide embarrassing events from their past. Just look at the way British politicians regularly sue publishers in court for defamation over unflattering articles and news stories. Just look at the way the DMCA is regularly abused in the United States by companies seeking to remove unflattering information about their practices from YouTube.

    Transparency is the only way to ensure that democracy and freedom have a chance of survival in this world. Laws that seek to obscure or rewrite the past are always abused, always a bad idea.
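The "ignore list" idea floated in the discussion above can be sketched in a few lines. This is purely illustrative and assumes a trivial in-memory index; it is not how Google or any real search engine processes delisting requests, and the class and method names are invented for the example:

```python
# Illustrative sketch of the commenter's proposed "ignore list": names whose
# owners have requested removal are blocked from search outright, instead of
# vetting individual URLs. Hypothetical code; not any real search engine's API.

class NameFilterSearch:
    def __init__(self):
        self.ignore_list = set()  # normalised names with a delisting request
        self.index = {}           # name -> list of result URLs

    def add_result(self, name, url):
        # Index a URL under a name (case-insensitive).
        self.index.setdefault(name.lower(), []).append(url)

    def request_removal(self, name):
        # The binary option (b): block all searches on this name.
        self.ignore_list.add(name.lower())

    def search(self, name):
        # Blocked names return a refusal page instead of results.
        if name.lower() in self.ignore_list:
            return "This name cannot be searched: it is on the ignore list."
        return self.index.get(name.lower(), [])
```

As the replies point out, the sketch also illustrates the approach's flaw: blocking a name blocks every person who shares it, legitimate results included.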