
LGBTQ: The missing letters in Google’s YouTube alphabet and the moral struggle of algorithms

Where's the training data on morals and ethics? Google is trying to make sense of hot-button cultural and political issues using AI and algorithms.
Written by Tom Foremski, Contributor

Google has been sued by a group of independent video producers for restricting their YouTube channels and not allowing them to buy advertising to promote their LGBTQ-related content.

Eight plaintiffs representing the LGBTQ community -- a protected class under California law -- claim Google is guilty of: "Discrimination, fraud, unfair and deceptive business practices, unlawful restraint of speech and breach of consumer contract."

"We've all tried repeatedly to communicate with Google/YouTube to treat us fairly and work with us to allow our voices to be heard and inspire systemic change," said Celso Dulay, one of the plaintiffs in the lawsuit and host of GlitterBombTV

"It's shameful that so many in the LGBTQ+ community on Google/YouTube are restricted, censored or blocked -- yet we're repeatedly being subjected to harassment by homophobic and racist hate-mongers who are free to post vile and obscene content."

Dulay said he never had trouble buying Google Ads to promote GlitterBombTV until a hidden policy change blocked his ads just days before Christmas 2018. He said he recorded a conversation with a Google Ads representative who explained there is a company policy against "the gay thing."

Google denies discriminating against protected classes, saying it focuses on limiting hate speech and shocking content and does not filter content based on sexual orientation.

Foremski's Take: 

Four years ago, Google quietly dropped its founders' motto, "Don't be evil," as Alphabet, Google's new holding company, adopted "Do the right thing."

How do you do the right thing? "The answers are always in the data, if you know where to look" is the unspoken mantra of a software engineering culture.

Google certainly knows how to collect massive amounts of data, analyze it, and use it to train its AI systems and algorithms.

By using software to make key decisions about what to publish, Google saves a ton of money compared with employing human editors and creators. More importantly, it is key to Google's protected legal status as a publishing platform, unlike a newspaper, which controls what it publishes and carries full responsibility for it.

Avoiding humans is a smart strategy, but the trouble is that Google's algorithms are not that smart -- especially on cultural and political issues, where they cannot discriminate between legitimate and harmful content. The software has no understanding of what it is viewing.

It's not just LGBTQ content that is a problem for the software. Google cannot tell the difference between a well-researched YouTube history channel about the Second World War and an extreme hate speech channel -- and will demonetize and demonize them equally.
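To make that failure mode concrete, here is a purely illustrative sketch -- not Google's actual system -- of a context-blind, keyword-based "brand safety" filter. The term list and the two transcripts are invented for the example; the point is that keyword matching sees the same words in a history lesson and in hateful content, and flags both.

```python
# Purely illustrative sketch -- NOT Google's actual system.
# A context-blind, keyword-based "brand safety" filter: it matches
# words, not meaning, so it cannot tell history from hate.

SENSITIVE_TERMS = {"war", "nazi", "genocide", "invasion", "killing"}

def is_ad_safe(transcript: str) -> bool:
    """Return False (demonetize) if any sensitive term appears."""
    words = {word.strip(".,!?").lower() for word in transcript.split()}
    return not (words & SENSITIVE_TERMS)

# Invented transcripts for illustration only.
history_channel = ("In 1939 the Nazi invasion of Poland started the war "
                   "in Europe, a conflict that ended in genocide.")
hate_channel = "The genocide was justified and the killing should go on."

print(is_ad_safe(history_channel))  # False -- demonetized
print(is_ad_safe(hate_channel))     # False -- demonetized, identically
```

Both transcripts trip the same filter for the same reason: the software counts words, while only a human reader sees intent.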

This problem with its operating strategy becomes ever more evident as Google gets larger: its dumb algorithms are making ever larger dumb decisions that affect more and more people.

It needs to make high-level executive decisions in an area that it knows little about: moral and ethical leadership.

Where's the data for making those kinds of decisions? For example, if you were to train an AI system on all of humanity's historical records, it would likely conclude that violence, war, and genocide are successful strategies.

Morality and ethics cannot be extrapolated from history; they must be taught -- usually by family and community.

Google's struggle in the coming decade will be about how a software engineering culture that depends on data to inform decisions figures out moral and ethical positions on some of society's most sensitive cultural and political issues.

Good luck with that.
