Facebook has said it will restrict the use of its Live video-streaming feature for users who violate its community standards.
The social media giant said that following the Christchurch terrorist attack -- where a video of the shooting was live for 29 minutes and viewed around 4,000 times on Facebook before it was first reported -- it has been reviewing what it could do to limit its services from being "used to cause harm or spread hate".
"As a direct result, starting today, people who have broken certain rules on Facebook -- including our Dangerous Organizations and Individuals policy -- will be restricted from using Facebook Live," the company's VP of Integrity Guy Rosen wrote.
Facebook will be applying a "one strike" policy to Live.
"From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time -- for example 30 days -- starting on their first offence," Rosen continued. "For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time."
While the restrictions apply solely to the platform's Live feature, the company said it plans to extend them to other areas over the coming weeks, beginning with preventing those who are banned from creating ads on Facebook.
"We recognise the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook," the post continued. "Our goal is to minimise risk of abuse on Live while enabling people to use Live in a positive way every day."
Prior to the new rules, if a user posted content that violated Facebook's Community Standards anywhere on its platform, the company would remove the post. If further violations occurred, the user would then be blocked for a certain period of time.
Rosen said that in some cases users were banned altogether, whether for repeated low-level violations or for a single egregious violation, such as using terror propaganda in a profile picture or sharing images of child exploitation.
Facebook also announced it would invest $7.5 million in new research partnerships with the University of Maryland, Cornell University, and the University of California, Berkeley, designed to improve image and video analysis technology so it can also find videos that have been modified to avoid detection.
"One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People -- not always intentionally -- shared edited versions of the video, which made it hard for our systems to detect," Rosen said.
"Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realised that this is an area where we need to invest in further research."
Facebook said it would seek out further partnerships in order to "innovate in the face of this threat".