Google employees claim Bard is rushed and 'pathetic'

A Bloomberg report showcases internal Google reactions to the launch of Google's ChatGPT competitor.
Written by Sabrina Ortiz, Editor

In November, OpenAI launched ChatGPT and its massive success opened the AI chatbot floodgates. Companies began rushing to get their own chatbots on the market, hoping to get a share of ChatGPT's success -- including Google. 

Only three months after ChatGPT's launch, Google announced Bard, its own chatbot. From the moment it was announced -- when the chatbot's demo outputted incorrect information about the James Webb Space Telescope -- it was evident that the chatbot was not ready for launch. 


Yet, Google opened its waitlist to the public on March 21 and, due to the quick turnaround, the chatbot left much to be desired.

In ZDNET's testing, it failed to answer basic questions, took longer to respond than competitors, didn't automatically cite sources, and paled in comparison to more established rivals. 

Even Google CEO Sundar Pichai called Bard 'a souped-up Civic' compared to ChatGPT and Bing Chat. 

As if being compared to 'a souped-up Civic' wasn't bad enough, now both former and current Google employees are chiming in to further criticize the bot and Google's AI operations. 


A Bloomberg report shares details from discussions with 18 current and former employees, as well as internal documents, in which employees express their discontent with the chatbot and Google's rushed release.

Among the highlights: employees called the bot "pathetic," "a pathological liar," "useless," and more. 

The employees say ethics took a backseat in launching Bard, with priority placed instead on competing in the space. Staffers on the ethics team were even discouraged from getting in the way of the launch. 

The report highlights that Google "overruled" a risk evaluation submitted by members of the safety team, who flagged that Bard could possibly cause harm. Shortly after the concerns were voiced, Google launched the bot as an "experiment". 


Google has not stopped its AI experimentation and has actually rolled it out across more products, including Google Workspace. Google is also said to be working on a new AI search engine and adding more AI features to its current one.

The rush to release large AI models that could have a significant impact on society has many observers concerned. 

A petition that has now garnered more than 25,000 signatures calls for a halt to giant AI experiments because of the "out-of-control [AI] race" producing systems that their creators can't "understand, predict, or reliably control."

Governments have started to prioritize AI policy, with some countries opting to prohibit such tools altogether. Meanwhile, countries like the US are developing strategies to safeguard users from the potential dangers posed by these systems.
