Technology giants -- Twitter, Google, Facebook, Microsoft, TikTok, Apple, Redbubble, and Adobe -- have all published their inaugural transparency reports, outlining their commitments and efforts to protect Australians against harm from online disinformation and misinformation on their respective platforms.
The first set of annual transparency reports comes three months after they all committed to the Australian Code of Practice on Disinformation and Misinformation. As part of agreeing to the code, all the participating signatories said they would release an annual transparency report about their efforts under the code. The code [PDF] was prepared by the Digital Industry Group Inc (DiGi).
In December 2019, the Australian government asked the digital industry to develop the code in response to policy as set out in Regulating in the Digital Age: Government Response and Implementation Roadmap for the Digital Platforms Inquiry. DiGi volunteered to develop the draft for the industry.
Google has outlined in its transparency report [PDF] that it will combat misinformation and disinformation through four key efforts: raising quality content and authoritative sources, removing content and behaviours that infringe on its rules, reducing the spread of potentially harmful information, and rewarding publishers and content creators who would like to monetise and advertise their content.
Specifically, this will include introducing policies and processes that require human review of user behaviours or content that is available on digital platforms, including review processes conducted in partnership with fact-checking organisations; exposing metadata to users about the source of content; and implementing measures to enable users to make informed choices about news and factual information and to access alternative sources of information.
"Google's mission is to organise the world's information to make it universally accessible and useful. Clearly, misinformation and disinformation run contrary to that mission … where people encounter content that's unreliable or actively designed to mislead them, our apps and services do not serve their purpose of connecting people with content that is relevant and useful to them, and we risk losing their trust," Google government affairs and public policy senior manager Samantha Yorke said.
"As such, we take these issues immensely seriously. This is particularly true when the content relates to issues such as health, civics and elections, and other issues that may significantly affect the livelihood of our users or the societies that we operate in."
For Facebook, it outlined in its transparency report [PDF] that it is pledging to uphold 43 specific commitments of the code, including some Australia-specific ones, such as supporting the Australian government with the COVID-19 vaccine rollout and combatting any misinformation surrounding the pandemic, working towards expanding its fact-checking partner capability within Australia in 2021, and extending its existing transparency policy surrounding political advertising to also cover social issues advertising.
Facebook said the Australian-specific commitments are in addition to global efforts that Facebook already undertakes to combat disinformation and misinformation.
"We have chosen to opt into every commitment under the code, and this response outlines in detail Facebook's commitments in the first year of the code," the social media giant said.
"We have opted into the industry code for Facebook and Instagram and, although not required by the industry code, this response also includes information about steps that WhatsApp and Messenger have taken to address misinformation and disinformation."
The company also accompanied its pledge with statistics on actions it has already taken against misinformation on its platform. It pointed out that from March to December 2020, it removed over 14 million COVID-19-related fake posts globally, including 110,000 pieces of content from Australian pages or accounts.
Similarly, Twitter noted in its transparency report [PDF] that it has so far taken action against 3.5 million accounts globally for rule violations, including suspending 1 million accounts and removing 4.5 million pieces of content. Action against 3,400 of those accounts globally related to misleading information about COVID-19.
In Australia specifically, 37,000 Twitter accounts were actioned for violating Twitter rules, resulting in 7,200 accounts being suspended and 47,000 pieces of content authored by Australian accounts being removed.
Meanwhile, in TikTok's transparency report [PDF], the social media platform revealed it removed 651 videos in Australia that mentioned "coronavirus" or "COVID" between October 2020 and March 2021 and violated the company's misinformation policy. TikTok also removed 222 videos in Australia that were considered medical misinformation.
The company added that it works with Agence France-Presse (AFP) in Australia to fact-check content posted on its platform.
"AFP makes independent assessments about the veracity of claims on our platform. These contributions help us to strike a balance and prevent either under-moderating or over-moderating identified potential misinformation," TikTok wrote in its report.
Redbubble said it plans to further combat misinformation by investigating what improvements could be made to its data and analytics capabilities. It noted this could further improve the way the company tracks trends and monitors its own performance in managing misinformation and disinformation.
It highlighted the importance of tracking these trends by disclosing in its transparency report [PDF] that Redbubble witnessed an uptick in anti-vaccination content being uploaded in Australia in the 12 months to April 2021. It pointed out that sales of merchandise in Australia with anti-vax tags peaked at over AU$15,000 in mid-2020, while some 80 Australian-created products were removed because they breached the company's misinformation policy and featured various anti-vaccination tags.
In releasing its report [PDF], Microsoft took the opportunity to reinforce its efforts to combat misinformation so far, such as introducing Video Authenticator to combat deepfakes in still photos and videos; forming the Coalition for Content Provenance and Authenticity (C2PA) alongside Adobe, Intel, Arm, Truepic, and the BBC earlier this year; and continually expanding its Defending Democracy Program.
Adobe also acknowledged in its report [PDF] that forming the C2PA is a step forward to reducing the harm and impact of inauthentic online content. It added that part of the solution will involve detection of deliberately deceptive media through a combination of algorithmic identification and human-centered verification, as well as education and content provenance.
"We believe provenance will create a virtuous cycle. The more creators distribute content with proper attribution, the more consumers will expect and use that information to make judgement calls, thus minimizing the influence of bad actors and deceptive content," Adobe stated.
"Ultimately, a holistic solution that includes provenance, detection and education to provide a common and shared understanding of objective facts is essential to help us make more thoughtful decisions when consuming media."
As for Apple, it said in its transparency report [PDF] that its focus will be keeping Apple News in check, with the company opting into only some of the code's commitments.
"Apple's products and services in Australia do not fall within the expressed scope of the Code of Practice," it said. "Notwithstanding the expressed scope and application of the Code of Practice, Apple Australia recognises the issues associated with disinformation and misinformation, and has 'opted-in' to the Code of Practice in relation to Apple News.
"This is in line with Apple's commitment to 'creating a trusted, informative news environment by advancing quality journalism and thwarting misinformation' and in recognition of the many shared goals with other signatories in this area."