The European Commission on Thursday published new guidelines for online platforms to step up the prevention, detection, and removal of objectionable content such as hate speech and terrorist-related content.
"The Commission expects online platforms to take swift action over the coming months," it said in a release, noting that terrorism and illegal hate speech are "already illegal under EU law, both online and offline."
If tech companies don't implement the guidelines, the release said, the Commission will "assess whether additional measures are needed... including possible legislative measures to complement the existing regulatory framework."
The guidelines address three categories: detection and notification, effective removal, and the prevention of re-appearance. For detection, the EC expects platforms to cooperate more closely with "competent national authorities" by appointing points of contact. The guidelines also urge companies to set up automated detection efforts, as well as to work with "trusted flaggers" with "expert knowledge" on what constitutes illegal content.
For "effective removal," the guidelines say companies may be subject to timeframes "where serious harm is at stake," though the timeframes have yet to be specified. The guidelines also say companies should introduce safeguards to prevent "over-removal." Lastly, the guidelines urge companies to develop more automatic tools to prevent illegal content from re-appearing after it's been removed.
"The rule of law applies online just as much as offline," Commissioner Vera Jourová said in a statement. "We cannot accept a digital Wild West, and we must act. The code of conduct I agreed with Facebook, Twitter, Google and Microsoft shows that a self-regulatory approach can serve as a good example and can lead to results. However, if the tech companies don't deliver, we will do it."
In a press conference Thursday, the AFP reports, Jourová said she deleted her own Facebook account "because it was the highway for hatred, and I am not willing to support it."
Jourová also reportedly said she met with Silicon Valley leaders just last week during a visit to California, and that they all recognized the need for action.
In May of last year, Microsoft, Google, Twitter, and Facebook all signed a European Commission code of conduct requiring a more active approach in tackling hate speech and terrorist propaganda online. Among other things, it called on tech companies to review the "majority" of valid notifications for the removal of hate speech in less than 24 hours and to remove or disable access to the content if required.
Tech companies have since made multiple promises and announced various initiatives aimed at curbing nefarious online content. Just last week, the Global Internet Forum to Counter Terrorism -- made up of Facebook, Microsoft, Twitter, and YouTube -- said it made a "multimillion-dollar" commitment to support research on terrorist abuse of the internet.
Tech firms have also become more aggressive about shutting down what they deem to be objectionable content. In the wake of the violent protests this year in Charlottesville, Virginia, Google pulled domain registration support for the neo-Nazi site The Daily Stormer. Facebook, meanwhile, hired a fleet of contractors to look for potential terrorist activity -- before giving a clear definition of what it considers terrorism.
Along with curbing hate speech and terrorism, online platforms are now coming under scrutiny for enabling bad actors to interfere in democratic elections. Executives from Facebook, Google, and Twitter have been asked to testify next month to the US Congress regarding Russia's alleged interference in the 2016 US presidential election.