Google's Bard builds on controversial LaMDA bot that engineer called 'sentient'

Before there was ChatGPT, LaMDA was for several months the world's most controversial AI chatbot.
Written by Tiernan Ray, Senior Contributing Writer

Before there was OpenAI's ChatGPT, during the summer months of 2022, Google's LaMDA program was the most controversial chatbot in the world. 

Tiernan Ray/ZDNET

Two years ago, Google's AI scientists released a program that was just one of the many routinely offered up by major research labs. 

Known as LaMDA, an acronym for "Language Model for Dialogue Applications," the program, which can produce human-sounding text, might have attracted very little public attention.


However, shortly after LaMDA's publication, former Google engineer Blake Lemoine caused controversy by releasing a document in which he urged Google to consider that LaMDA might be "sentient."

Google denied the claim of sentience; Lemoine was put on paid administrative leave and later let go from the company. The controversy faded in the ensuing months.

And then in December, a new chatbot became a focus of public interest: OpenAI unveiled its ChatGPT, which, like LaMDA, is a so-called large language model that operates through a chat interface. ChatGPT has since become the only large language model application anyone talks about.

On Monday, Google unveiled its competitor to ChatGPT, called Bard, which is initially being made available only to a small group of "trusted testers," wrote Sundar Pichai, CEO of Google parent company Alphabet, in a blog post announcing Bard.

Bard is based on LaMDA, a fact Pichai mentions several times. However, Pichai makes no reference to Lemoine's contentions last year about LaMDA's sentience.


In the document Lemoine released last year, he contended, "LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation, and imagination." He added, "It has worries about the future and reminisces about the past." 

Instead of sentience, Pichai refers to an initial testing process that will make use of human feedback on Bard. Wrote Pichai, "We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety, and groundedness in real-world information." He continued, "We're excited for this phase of testing to help us continue to learn and improve Bard's quality and speed."

Pichai's reference to groundedness raises an interesting question. OpenAI's ChatGPT could offer responses based only on data up to a cutoff point in the past. LaMDA, however, was designed expressly to tap into current information that the program could retrieve from external sources. 


The developers of LaMDA, a team at Google led by Romal Thoppilan, specifically focused on how to improve what they call "factual groundedness." They did this by allowing the program to call out to external sources of information beyond what it had already processed during its development, the so-called training phase.

As Google intends to work Bard into its various applications, including search, the ability to incorporate such current information could become a distinguishing element for Bard versus ChatGPT.

That leaves open the question of whether sentience will become a distinguishing factor for either Bard or ChatGPT.
