ISPs to continue blocking graphic violent content in Australia

The new protocol positions ISPs to block websites that host graphic material, such as footage of a terrorist act or violent crime, as part of efforts to 'stem the risk of its rapid spread as an online crisis event unfolds'.

The federal government, through the Australian eSafety Commissioner, and the nation's internet service providers (ISPs) have agreed on a new protocol that will see the continued blocking of websites that host terrorist and graphic violent content.

The agreement, according to Minister for Communications, Cyber Safety and the Arts Paul Fletcher, positions ISPs to block websites hosting graphic material that depicts terrorist acts or violent crimes, in a bid to "stem the risk of its rapid spread as an online crisis event unfolds".

The protocol follows a direction issued in September by the eSafety Commissioner requiring the nation's ISPs to continue blocking websites that host the video of the Christchurch terrorist attack, after the initial six-month blocking period had expired.

In the immediate wake of the attack, a video of what had occurred was viewed around 4,000 times before it was finally reported and taken down, 29 minutes after it went live.

Australia's telcos blocked 40 sites of their own accord after the attack.

In August, the government said it would create a content blocking regime for crisis events, with the eSafety Commissioner set to gain the power to force the nation's telcos to block certain content. 

"In the aftermath of the devastating events in Christchurch last year, major Australian internet service providers voluntarily blocked websites hosting the video. Now we have a framework in place to enable a rapid, coordinated, and decisive response to contain the rapid spread of terrorist or extreme violent material," Fletcher said on Tuesday.

"This protocol will be activated during an online crisis event, as declared by the eSafety Commissioner, and is an important new mechanism that will help keep Australians safe online."

See also: Christchurch Call: USA missing from 26 member pledge to eliminate violent online content

As defined by the protocol, an online crisis event involves terrorist or extreme violent material being shared widely online in a manner likely to cause significant harm to the Australian community and warranting a rapid, coordinated response by industry and government.

The eSafety Commissioner will issue blocks for a period of time on a case-by-case basis to address the risk of any unfolding online crisis event, Fletcher explained.

ISPs participating in the protocol include Telstra, Foxtel, Optus, TPG, VHA, and Vocus Group.

Fletcher said the development of the protocol delivers on one of the recommendations of the Taskforce to Combat Terrorist and Extreme Violent Material Online. The taskforce includes members from Facebook, Google, Amazon, Microsoft, Twitter, Telstra, Vodafone, Optus, and TPG.

Australia's abhorrent video streaming legislation was rushed through Parliament in April, requiring hosting and content service providers to notify the Australian Federal Police (AFP) if their platform could be used to access particular violent material that is occurring in the country.

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019 came in direct response to the Christchurch terrorist attack.

See also: Why the tech industry is wrong about Australia's video streaming legislation 

Fletcher said the new protocol builds on the legislation and that the government plans to provide legislative backing for the protocol through the new Online Safety Act.

Under the new Act, online platforms would see the amount of time they have to pull down content after receiving a notice from the Australian eSafety Commissioner reduced to 24 hours.

The Act would also extend cyberbullying provisions from children to the entire population, with a higher threshold for adults; require search engines to "de-rank offending content"; and hand the eSafety Commissioner the power to force transparency reporting by digital platforms.

The Act would also give the eSafety Commissioner the power to have content removed that relates to child exploitation or abhorrent violence, incites terrorism or violence, or constitutes "other extreme material", no matter where in the world it is hosted.
