Google granting half of 'right to be forgotten' requests

Summary: On the same day Google was called to account over its handling of the 'right to be forgotten' ruling, it emerged that the company has been approving around 50 percent of all requests.

On the same day as Google faced a grilling from European data watchdogs over its handling of right to be forgotten cases, it emerged that the company has so far agreed to remove around half of the links it was requested to.

Following a European Court of Justice ruling in April, EU citizens can ask for search engines to stop providing links to material that is out of date, irrelevant, or excessive, in the results for searches on their names.

According to reports, Google has stopped providing links to "slightly over" 50 percent of the URLs it has been asked to remove under the ruling. It began taking requests in May and has so far received 91,000 of them, from individuals asking for 328,000 links to no longer be returned as results for searches on their names, the reports said.

The largest number of requests came from France, followed closely by Germany.

The figures were reported by both the Wall Street Journal and Bloomberg on Thursday, each citing "a person familiar with the matter".

Google believes the ECJ ruling has struck the wrong balance between the right to be forgotten and the right to know, according to its chairman Eric Schmidt. The company has waged a high-profile campaign against the ruling and has notified publishers when links to particular stories have been removed under it.

This has caused many of the 'forgotten' links to receive more attention than they had previously, a problem that attracted the interest of Europe's data protection regulators. Google, Yahoo, and Microsoft were all called to a meeting with the watchdogs yesterday to discuss the ECJ ruling, and the regulators were expected to use the event to address Google's handling of the matter so far.

The regulators are expected to publish guidelines on how search engines should process the requests this September; until then, it's up to companies to decide how best to handle them. Google has not published any details on how it arrives at its decisions on whether to keep, remove, or reinstate particular search results.

Microsoft opened its own 'right to be forgotten' form earlier this month, but has not said how many requests have been made through it, or if any requests have been granted.

Google did not respond to a request for comment.

Talkback

3 comments
  • Public records vs. private history

    For example, if you search my name, my twenty-year-old bankruptcy should not come up BY MY NAME. However, if you are searching the court records and happen to see my name in archived records, that is acceptable. Everyone has information that is outdated or irrelevant. Should that underage drinking conviction taint a person's life when he is fifty? Should a speeding ticket thirty years ago cause insurance premiums to go up? While the information is still there, searching for "John Q. Public" shouldn't give JQP's entire history. Google has a responsibility to be responsible. They just don't get it.
    Iman Oldgeek
    • How do you guarantee that the court doc ...

      ... is about the person you are looking for?? Plenty of people have the exact same name.

      Now, I don't believe Google (or any search engine) should be the one responsible for "forgetting" what is published on some server they don't own. The data should be erased AT THE SOURCE. Without erasing the source, all you are doing is hiding "the evidence" with clear/transparent saran wrap and leaving it in the middle of the table.
      wackoae
    • holding a search engine responsible for content is absurd

      And your "Should a..." hypotheticals might make for interesting philosophical discussion but should have no place in governments regulating businesses. If there is content about you that is false and/or defamatory then you already have laws to protect you, but of course they apply to the source of the data. Google (and other search engines) should be a dispassionate, amoral indexer of data. Trying to impose some vague sense of social justice into such a system is only going to make a mess.
      frylock