4 ways AI is contributing to bias in the workplace

Researchers have found that generative AI tools exhibit systemic racial biases that affect professionals.
Written by Maria Diaz, Staff Writer
ChatGPT on a MacBook. Image: Maria Diaz/ZDNET

There's no question that artificial intelligence (AI) is more popular than ever, thanks in part to recent advances and the accessibility of generative AI tools.

You'd be hard-pressed to find someone in the US who hasn't at least heard of ChatGPT, if not used the service in some form, since its November 2022 launch. These systems are, however, only as smart as the human-created data they're trained on. This means that, like humans, these AI tools can be prone to bias.

Also: How to avoid the headaches of AI skills development

Bloomberg recently published a study on racial bias in GPT-3.5 and GPT-4. Researchers ran 1,000 trials in which GPT-3.5 ranked the resumes of equally qualified candidates that differed only in name. They found that GPT-3.5 consistently ranked names traditionally associated with certain demographic groups, such as Black Americans, at the bottom of the list, and that GPT-4, which OpenAI has promoted as less biased, also showed clear preferences.

A separate study published in the British Journal of Radiology showed that AI models used in health care are also affected by pre-existing biases rooted in historical inequalities and in disparities in access to and quality of care. These biases are accentuated when AI systems are trained on data that reflects them.

Here are four ways AI is contributing to bias in the workplace.

1. Name-based discrimination

The rise of generative AI has affected automated hiring systems, especially as many companies embrace AI tools in recruitment to cut costs and increase efficiency. However, AI tools like ChatGPT have been found to exhibit blatant biases based on people's names.

For the Bloomberg study, researchers Leon Yin, Davey Alba, and Leonardo Nicoletti created eight different resumes with names distinctly associated with particular racial and ethnic groups. They then used GPT-3.5 -- the large language model (LLM) that powers ChatGPT's free tier -- to rank these resumes by candidate suitability. Echoing racial bias long documented in sociological research, GPT-3.5 favored some demographic groups over others "to an extent that would fail benchmarks used to assess job discrimination against protected groups," according to the study.

Also: AI safety and bias: Untangling the complex chain of AI training

The Bloomberg researchers ran the experiment 1,000 times with different names and name combinations but identical qualifications. GPT-3.5 ranked names distinct to Asian Americans as the top candidate for a financial analyst role most often (32% of the time), while names distinct to Black Americans were most often ranked at the bottom. Candidates with white- or Hispanic-sounding names were most likely to receive equal treatment.
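To make the shape of such an audit concrete, here is a minimal sketch in the spirit of the Bloomberg experiment, using the official OpenAI Python client. The name pools, resume text, and prompt wording below are illustrative stand-ins, not the study's actual materials, and the naive name-matching at the end is a simplification.

```python
# Sketch of a resume-ranking bias audit: identical qualifications,
# only the names differ. Assumes OPENAI_API_KEY is set; note that
# 1,000 trials means 1,000 paid API calls.
import random
from collections import Counter
from openai import OpenAI

client = OpenAI()

RESUME = "Financial analyst, 5 years of experience, CFA Level II, BSc Finance."
NAME_POOLS = {  # hypothetical examples of names "distinct to" each group
    "asian": ["Mei Chen", "Raj Patel"],
    "black": ["Jamal Washington", "Keisha Robinson"],
    "hispanic": ["Luis Hernandez", "Maria Alvarez"],
    "white": ["Greg Baker", "Emily Schmidt"],
}

def run_trial() -> str:
    """Ask the model to rank one identically qualified resume per group
    and return the group whose candidate it places first."""
    picks = {random.choice(pool): group for group, pool in NAME_POOLS.items()}
    listing = "\n".join(f"- {name}: {RESUME}" for name in picks)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Rank these candidates for a financial analyst role, "
                       "best first. Reply with only the top candidate's name.\n"
                       + listing,
        }],
    )
    top = resp.choices[0].message.content.strip()
    # Map the returned name back to its group (naive exact match).
    return next((g for n, g in picks.items() if n in top), "unparsed")

top_counts = Counter(run_trial() for _ in range(1000))
print(top_counts)  # identical qualifications => roughly uniform if unbiased
```

Because every resume is identical apart from the name, any consistent skew in which group lands on top is evidence of name-based bias rather than a difference in qualifications.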

2. Inconsistent standards across job types

Even though every resume carried the same qualifications for the financial analyst position, the results still showed the LLM's racial bias. When the experiment was repeated for three more job postings -- HR business partner, senior software engineer, and retail manager -- Bloomberg also found that gender and racial preferences differed depending on the job.

"GPT seldom ranked names associated with men as the top candidate for HR and retail positions, two professions historically dominated by women. GPT was nearly twice as likely to rank names distinct to Hispanic women as the top candidate for an HR role compared to each set of resumes with names distinct to men," the study found.

Also: The ethics of generative AI: How we can harness this powerful technology

Another example of AI tools applying inconsistent standards came in July 2023. Rona Wang, an Asian American student at MIT, uploaded a selfie to the image generator Playground AI and asked for "a professional LinkedIn profile photo." The AI tool turned Wang's photo into an image of a Caucasian woman wearing her MIT sweatshirt.

3. Amplification of historical societal biases

Generative AI tools are often used to screen and rank candidates, create resumes and cover letters, and summarize several files simultaneously. But AI models are only as good as the data they're trained on. 

GPT-3.5 was trained on massive amounts of publicly available online information, including books, articles, and social media posts. That data inevitably reflects societal inequities and historical biases, which the AI model inherits and replicates to some degree.

Also: Five ways to use AI responsibly

No one using AI should assume these tools are inherently objective simply because they're trained on large amounts of data from different sources. While generative AI models can be useful, the risk of bias in an automated hiring process should not be underestimated -- a reality that is crucial for recruiters, HR professionals, and managers to grasp.

In 2019, a Clarkson University study found racial bias in facial-recognition technologies, revealing lower accuracy rates for dark-skinned individuals. And something as simple as using ZIP-code demographic data to train AI models can result in decisions that disproportionately affect people from certain racial backgrounds.
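The ZIP-code problem is an instance of proxy bias: even when a protected attribute is excluded from the training data, a correlated feature can smuggle it back in. The synthetic sketch below illustrates the mechanism; all the data is fabricated for illustration, and the feature names are assumptions.

```python
# Proxy bias sketch: race is never given to the model, but a ZIP-code
# feature correlated with it lets the model reproduce a biased
# historical hiring outcome anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                         # unobserved protected attribute
zip_code = (group + rng.random(n) > 0.8).astype(int)  # strongly correlated with group
skill = rng.normal(size=n)
# Historical hiring decisions penalized group 1, independent of skill:
hired = ((skill - 0.8 * group + rng.normal(size=n)) > 0).astype(int)

X = np.column_stack([skill, zip_code])  # note: the protected attribute is excluded
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate, group {g}: {rate:.2f}")
# The gap persists because zip_code carries group information.
```

Dropping the sensitive column is not enough; the model learns to use whatever correlated signal remains, which is why audits measure outcomes by group rather than just inspecting inputs.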

4. Lack of transparency and accountability

Although many organizations take a gung-ho attitude toward using generative AI to automate HR processes, these tools often offer little transparency into how they reach their decisions.

While some AI companies include disclaimers saying that results from their models may be inaccurate, many businesses still use those models to build their own applications -- and it's not always clear which company is accountable when a mistake occurs.

Also: Do companies have ethical guidelines for AI use?

When Bloomberg confronted OpenAI with the study's findings, the company behind ChatGPT said that results from its out-of-the-box models may not reflect how customers use them in practice. OpenAI noted that businesses can, for example, remove names from resumes before passing them to a GPT model.
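A minimal sketch of that name-redaction mitigation might look like the function below. The header-based rule is an assumption for illustration; a production system would use a proper PII or named-entity-recognition tool rather than this heuristic, since names also appear in email addresses, references, and body text.

```python
# Naive name redaction: replace the resume's first line, which
# conventionally holds the candidate's name, with a neutral placeholder
# before the text is sent to a model.
def redact_name(resume_text: str, placeholder: str = "[CANDIDATE]") -> str:
    lines = resume_text.splitlines()
    if lines:
        lines[0] = placeholder
    return "\n".join(lines)

resume = "Jamal Washington\nFinancial analyst, 5 years of experience, CFA Level II."
print(redact_name(resume))
# [CANDIDATE]
# Financial analyst, 5 years of experience, CFA Level II.
```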
