Google and Meta on the defensive in Australian social media probe

Google and Meta acknowledged there are still gaps in their efforts to remove COVID-19 misinformation and cyberbullying, respectively, on their platforms.
Written by Campbell Kwan, Contributor

Tech giants Google and Meta appeared before the Australian Select Committee on Social Media and Online Safety on Thursday morning, with both companies being grilled about misinformation and cyberbullying existing across their platforms.

Google's director of government affairs and public policy Lucinda Longcroft was repeatedly asked by the committee about YouTube displaying Clive Palmer's United Australia Party (UAP) ads containing COVID-19 misinformation.

At least nine UAP ads presented incomplete extracts of a Therapeutic Goods Administration's (TGA) report on vaccinations to convey misleading COVID-19 information, according to committee deputy chair and Labor representative Tim Watts.

Conceding the existence of these ads on YouTube, Longcroft told the committee that the platform's COVID-19 misinformation policies are "robust, rapid, and effectively enforced", noting it had issued one strike to the UAP over the ads, which removed the political party's YouTube privileges for a week.

YouTube has a three-strike system whereby a YouTube channel is terminated permanently if it receives three strikes within 90 days.

Along with the strike, YouTube also took down six of those ads.

When asked why the UAP received only one strike rather than three despite six ads being found to violate YouTube's ad policies, Longcroft said that when multiple videos are found to be violative at the same time, they are bundled into a single strike.

"I'll remember to stretch them out next time," Watts retorted in response to the explanation.

Later in the morning, Meta representatives appeared before the committee and were grilled about the death and rape threats directed towards Australian presenter Erin Molan and her young daughter on Facebook.

Molan, who appeared before the committee earlier this week, had testified that she submitted a request to Facebook for those threats to be removed from the platform. In response, Facebook sent an automated reply saying the content would remain online, she said on Tuesday.

She also told the committee that taking any further action to remove that content through Facebook was too difficult, and that she felt nervous going out in public after receiving those threats.

When asked about that particular cyberbullying incident, Meta ANZ policy director Mia Garlick told the committee her company could not locate Molan's original request, but said the content in question was eventually removed. Under further questioning from committee chair and Liberal representative Lucy Wicks, Garlick conceded the content was removed only after Molan filed a police report.

"Unfortunately, in the real world, we haven't been able to locate that original complaint and so I think a police report was made and we worked through that process to make sure that we were taking appropriate action," Garlick said.

Garlick then said this type of online harassment is a common experience among female journalists, and that Meta has been working on tools to make its social media platforms a safer place.

"There's still more work to be done to continue to expand what [Facebook] can achieve. And I can understand that some of that terminology does violate our policies, but we still need to improve the technology to proactively detect it," she said.

In response to Garlick's outline of Meta's efforts to make its platforms safer, Watts said he remained sceptical of the company's intentions, especially in light of the whistleblower allegations of Facebook prioritising profits over safety.

"I [am] struck by something of a disconnect … we've seen a slew of former Facebook employees coming forward with often very hard critiques of the internal culture of the company, and often backed by substantial pages of leaked internal documents that tell a different story about how the company works," Watts said.

Garlick rejected this notion, telling the committee that these claims are all "categorically not true".

"Safety is at the core of our business," she said.

The Select Committee on Social Media and Online Safety was established late last year to inquire into the practices of major technology companies and consider evidence relating to the impact social media platforms have on the mental health of Australians. 

The committee's inquiry was approved by the federal government with the intention of building on the proposed social media legislation to "unmask trolls".

Twitter is set to appear before the committee tomorrow morning, with the inquiry currently set to provide its findings next month.


If you or anyone you know in Australia needs support, contact:
  • Suicide Call Back Service on 1300 659 467
  • Lifeline on 13 11 14
  • Kids Helpline on 1800 551 800
  • MensLine Australia on 1300 789 978
  • Beyond Blue on 1300 22 46 36
  • Headspace on 1800 650 890
  • QLife on 1800 184 527

