Twitter is taking a closer look at its own algorithms in a bid to reduce bias

The social media platform has launched a new initiative to evaluate its AI systems and better inform users.

In an effort to address mounting concerns about algorithmic harms, Twitter has announced a new initiative that will subject some of the company's machine-learning systems to more scrutiny and pave the way for changes to any problematic AI models. 

Dubbed 'Responsible ML', the initiative is designed not only to increase the transparency of the AI systems used by Twitter, but also to improve the fairness of the algorithms, and to provide users with "algorithmic choice" when it comes to the technologies that might affect them.

Twitter has pledged to take responsibility for the platform's algorithmic decisions, and has appointed a Responsible ML working group to lead the initiative. This group, whose members are drawn from across the company, will be managed by Twitter's existing ML Ethics, Transparency and Accountability (META) team.

SEE: Building the bionic brain (free PDF) (TechRepublic)

With almost 200 million people using Twitter daily, the platform relies on machine-learning models for many tasks, ranging from organizing content by relevance to identifying posts that violate terms of service.  

"When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended," said META team member Jutta Williams and software engineering director Rumman Chowdhury in a post introducing Responsible ML.

In the coming months, Twitter will make available analyses carried out to assess potential harms in three of the platform's key algorithms: a gender and racial bias analysis of its image-cropping algorithm; a fairness assessment of the Home timeline recommendations across different racial subgroups; and an analysis of content recommendations for different political ideologies across seven countries.

Twitter recently found itself in hot water when it emerged that the platform's image preview cropping tool was automatically favoring white faces.

The company responded by maintaining that its analyses had shown no evidence of racial or gender bias, but acknowledged that the way photos are automatically cropped has the potential to cause harm. Committing to further analysis of the tool, Twitter pledged to give users more visibility into how their images will appear in a tweet.

The Responsible ML initiative suggests that the platform is keen to act in similar ways when it finds algorithmic harm in its systems. Depending on the results of the upcoming analyses, the company doesn't rule out changing a product, adapting standards and policies, or removing an algorithm altogether.

Big Tech in the spotlight

The past few months have seen tech giants come under fire as lawmakers pointed to the role that social media platforms are playing in the rapid spread of misinformation. Machine-learning models that drive content recommendations are, in effect, tuned to maximize user engagement -- and platforms like Twitter and Facebook stand accused of doing little to stop the spread of polarizing content that fosters echo chambers and is seen as a threat to democracy.

Last month, Facebook's Mark Zuckerberg, Alphabet's Sundar Pichai and Twitter's Jack Dorsey all appeared before Congress as lawmakers grilled them about their failure to rein in misinformation on their platforms. They specifically called out false content about COVID-19 vaccines, and posts that fomented anger ahead of the attempted insurrection on the US Capitol in January.

"Twitter is aware that issues of political ideology are very recognizable to the public, and that this is where much of the attention is currently turned," Virginia Dignum, researcher in social and ethical AI at Umeå University in Sweden, told ZDNet. "It's a good step that they are taking responsibility for their algorithmic decisions and bringing transparency to the table."

Although the work carried out by the Responsible ML team will not always translate into visible product changes, it will at least contribute to raising public awareness of the ways that algorithmic models are built and applied.

Williams and Chowdhury said that the team will be focusing on making machine learning more explainable to the public and to the industry by sharing data insights and analyses, as well as unsuccessful attempts to tackle the challenges raised by algorithmic bias.

SEE: The algorithms are watching us, but who is watching the algorithms?

The public will be given the opportunity to provide feedback at every step of the design and deployment of an automated system. The team is also working on expanding algorithmic choice to give users more control over the AI systems in place on the platform, though that project is still in "the early stages of exploring," according to Williams and Chowdhury.

"The process of development and decision-making will be carried out together with the users who will be potentially affected, instead of presenting the user with the final results," said Dignum. "It is one of the best approaches I have seen.

"Of course, the truth is in the pudding. It's a difficult issue and no one solution is proven and tested. I can imagine they will face some complexities, but I applaud the initiative." 

Not all social media giants have gone down a similar route to tackle algorithmic bias. YouTube, for example, has long been targeted by activist groups concerned that the platform's recommendation algorithm steers users towards watching increasingly extremist videos. The company has been urged many times, without success, to reveal the inner workings of the algorithm and to allow analysts to assess the model's performance.

YouTube, however, has committed to making amends, and claims that updates to the system have shown a 70% average drop in watch time for videos deemed borderline.