Bard is Google's experimental, conversational AI chat service. It is meant to function similarly to ChatGPT, with the biggest difference being that Google's service pulls its information from the web.
Also: I asked ChatGPT, Bing, and Bard what worries them. Google's AI went Terminator on me
Like most AI chatbots, Bard can code, answer math problems, and help with your writing needs.
Bard was unveiled on February 6 in a statement from Google and Alphabet CEO Sundar Pichai. Although Bard was presented as an entirely new product at the announcement, at launch the AI chat service was powered by Google's Language Model for Dialogue Applications (LaMDA), which had been unveiled two years prior.
Google Bard is now powered by Google's most advanced large language model (LLM), PaLM 2, which was unveiled at Google I/O 2023.
PaLM 2 is a more advanced version of PaLM, which was released in April 2022. It allows Bard to be much more efficient, perform at a higher level, and fix issues that plagued the earlier version.
Also: Every major AI feature announced at Google I/O 2023
The initial version of Bard used a lightweight model version of LaMDA because it required less computing power and could be scaled to more users.
LaMDA was built on Transformer, the neural network architecture that Google invented and open-sourced in 2017. Interestingly, GPT-3, the language model ChatGPT runs on, was also built on Transformer, according to Google.
Google's decision to use its own LLMs, LaMDA and PaLM 2, was a bold one, since some of the most popular AI chatbots right now, including ChatGPT and Bing Chat, use a language model in the GPT series.
At Google I/O, the tech giant announced that there would no longer be a waitlist for Bard, meaning it would be open to the general public.
Google Bard's previous waitlist opened on March 21, 2023, granting access to a limited number of users in the US and UK on a rolling basis.
At Google I/O, Google announced that Bard will support Japanese and Korean and is on track to support 40 more languages soon.
Google's Bard had a rough launch, with a demo of Bard delivering inaccurate information about the James Webb Space Telescope (JWST).
To launch the AI service, Google tweeted a demo of the AI chat service in which the prompt read, "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?" Bard replied: "JWST took the very first pictures of a planet outside of our own solar system." People quickly noticed that the response was factually incorrect: the first image of an exoplanet was captured in 2004, years before JWST launched.
Also: I asked ChatGPT to write a WordPress plugin I needed. It did it in less than 5 minutes
"This highlights the importance of a rigorous testing process, something that we're kicking off this week with our Trusted Tester program," said a Google spokesperson to ZDNET in a statement.
The actual performance of the chatbot also led to lots of negative feedback.
In ZDNET's experience, Bard also failed to answer basic questions, had a longer wait time, didn't automatically include sources, and paled in comparison to more established competitors. Google CEO Sundar Pichai called Bard 'a souped-up Civic' compared to ChatGPT and Bing Chat.
Before Bard was released, Google's LaMDA came under fire as well. As ZDNET's Tiernan Ray reports, shortly after LaMDA's unveiling, former Google engineer Blake Lemoine released a document in which he claimed that LaMDA might be "sentient." The controversy faded after Google denied the claim and put Lemoine on paid administrative leave before letting him go from the company.
Google's switch from LaMDA to PaLM 2 should help mitigate many of Bard's current issues.
ChatGPT has been a hit since its release: less than a week after launching, it had more than one million users. According to analysis by Swiss bank UBS, ChatGPT is the fastest-growing app of all time. Because of this success, other tech companies, including Google, are trying to get into the space while it's hot.
Within the same week Google unveiled Bard, Microsoft unveiled a new AI-improved Bing, which runs on a next-generation OpenAI large language model customized specifically for search.
Google has developed other AI services that have yet to be released to the public. The tech giant typically treads lightly when it comes to AI products and doesn't release them until it's confident in a product's performance.
For example, Google has developed an AI image generator, Imagen, which could be a great alternative to OpenAI's DALL-E when released. Google also has an AI music generator, MusicLM, which Google says it has no plans to release at this point.
Also: How Google Socratic can help you do homework
In a recent paper discussing MusicLM, Google acknowledges the risks these kinds of models pose, including the misappropriation of creative content, inherent biases in the training data that could affect underrepresented cultures, and fears over cultural appropriation.