Google is taking a conservative approach to gender-based pronouns in Smart Compose, the Gmail feature that predicts how a person intends to complete a sentence.
Rather than risk offending users when Smart Compose predicts the wrong gendered pronoun, Google has decided to simply prevent its algorithm from suggesting pronouns such as "him" and "her".
Gmail product manager Paul Lambert told Reuters the problem was identified in January when a company scientist typed "I am meeting an investor next week".
Smart Compose apparently suggested, "Do you want to meet him?", an assumption that could be wrong, cause offense, and draw criticism that Google isn't managing bias in its algorithms.
Lambert said Google engineers tried several workarounds that would preserve gendered pronoun suggestions, but none was bias-free, so they decided to ban gendered pronouns altogether.
The policy affects fewer than one percent of Smart Compose suggestions. Smart Compose is used on 11 percent of Gmail.com messages worldwide.
Google launched Smart Compose in May, so the policy has been in place for the entire time the feature has been publicly available. The ban on gendered pronouns has also been applied to responses in Google's Smart Reply.
The fairness section of Google's responsible AI practices details the difficulty of creating AI systems that are fair and inclusive. Because machine-learning models learn from existing real-world data, even accurate models can learn and amplify the biases in that data.
In the case of Smart Compose, the language model was trained on billions of phrases and sentences, which carried over a human bias that an investor would be a man.
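Smart Compose's actual model is a neural language model, but the way skewed training data produces skewed suggestions can be illustrated with a toy frequency-based predictor. The corpus and function names below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical toy corpus of (noun, following pronoun) pairs, skewed the
# way real-world text can be: "investor" co-occurs mostly with "him".
corpus = [
    ("investor", "him"), ("investor", "him"), ("investor", "him"),
    ("investor", "him"), ("investor", "her"),
    ("nurse", "her"), ("nurse", "her"), ("nurse", "her"), ("nurse", "him"),
]

def suggest_pronoun(noun, pairs):
    """Return the pronoun most frequently seen after `noun` in the corpus."""
    counts = Counter(p for n, p in pairs if n == noun)
    return counts.most_common(1)[0][0] if counts else None

# The predictor faithfully reproduces the skew in its training data.
print(suggest_pronoun("investor", corpus))  # -> 'him'
print(suggest_pronoun("nurse", corpus))     # -> 'her'
```

An accurate model of biased data is still a biased model, which is why Google's fix was to suppress the suggestion class entirely rather than retrain around it.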
However, the problem of bias can be more damaging in contexts other than completing an email, such as in algorithms used in the criminal justice system.
Microsoft-owned LinkedIn has also blacklisted gendered pronouns in its Smart Replies feature.
To help others avoid human bias in their models, Google last month released a training module that teaches which types of human bias can end up in machine-learning models, as well as how to spot human bias in data before training a model.
Previous and related coverage
Google to Gmail users: Here's how you turn on new offline working, Smart Compose
Google rolls out offline support in Gmail and its new Smart Compose feature that completes your sentences.
Google adds Compose integrations to trigger actions within Gmail
At launch, Compose actions integrate with Dropbox, Atlassian, Box and Egnyte.
Gmail's new design: Love it or hate it, looks like you'll soon have to use it
The new Gmail will just be Gmail soon after it reaches general availability.
Gmail redesign: Google overhauls G Suite with more AI, less clutter
Google's G Suite makeover is aimed at saving businesses time on email, meetings, and notifications.
How to make the most of the new Gmail
Google's major Gmail rewrite introduces many new, useful features. Here's how to use the Gmail improvements, which are available now.
Gmail spam mystery: Why have secure accounts started spamming themselves?
Spam appears in users' sent folders even from accounts that haven't been compromised.
Duplex, Android P and Assistant: Everything important from Google I/O (CNET)
From Gmail that writes itself to an Assistant that may pass the Turing test, Google I/O brought us a ton of enhancements to its products, almost all due to its AI and machine learning efforts.
Why G Suite admins should enable Gmail's advanced anti-phishing and malware settings (TechRepublic)
Google recently added new G Suite safety settings to give Gmail users an added layer of protection. Learn how to warn users of harmful emails, or simply send them straight to spam with these options.