A new era of searching the internet is underway, driven by impressive advances in AI. Just a few short months after its launch, OpenAI's conversational chatbot ChatGPT has Google rethinking its foundational service, and it's created an opening for other technology companies like Microsoft to gain new ground.
It's no surprise that a conversational tool like a chatbot could disrupt the search business when you think about how the market has evolved.
Google, the world's dominant search engine for about two decades, says its mission is "to organize the world's information and make it universally accessible and useful."
The world's information, however, continues to accumulate at a dizzying pace. The research firm IDC last year predicted that the amount of data created on an annual basis will reach more than 221,000 exabytes by 2026. That's more than double the amount of data created in 2022.
A search engine that indexes websites is certainly an effective way to organize all that information, but it's not necessarily the best way to make it useful.
In fact, it's so easy to collect and organize data that it can be a challenge just to sift through your own data, or the data you're searching at work. Do you remember how long it took the last time you had to dig through your company's HR platform to figure out how to file an expense report?
These kinds of challenges present opportunities for the next iteration of search.
"When people think of Google, they often think of turning to us for quick factual answers, like 'how many keys does a piano have?'" Google CEO Sundar Pichai wrote in a blog post this week, introducing Google's own experimental AI chatbot, Bard. "But increasingly, people are turning to Google for deeper insights and understanding -- like, 'is the piano or guitar easier to learn, and how much practice does each need?' Learning about a topic like this can take a lot of effort to figure out what you really need to know, and people often want to explore a diverse range of opinions or perspectives."
Pichai added: "AI can be helpful in these moments, synthesizing insights for questions where there's no one right answer."
The problem, however, is that these subjective insights, neatly packaged in a conversational format, typically have to be grounded in some kind of truth.
As Sabrina Ortiz explained for ZDNET, these conversational chatbots are designed to converse with people -- not necessarily to deliver accurate answers. OpenAI says it trained its language model using Reinforcement Learning from Human Feedback (RLHF): human AI trainers provided the model with conversations in which they played both parts, the user and the AI assistant. Instead of asking for clarification on ambiguous questions, the model simply guesses at what a question means, which can lead to unintended responses.
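The core idea behind RLHF is that a reward model, trained on human preference rankings, scores candidate responses, and fine-tuning pushes the chatbot toward the responses humans rated highest. The toy sketch below illustrates only that ranking step; the `reward_model` here is an invented stand-in (real reward models are learned neural networks, and these scoring rules are purely illustrative):

```python
# Toy illustration of the RLHF ranking step. The reward function below
# is a hand-written stand-in for a learned reward model, NOT how OpenAI
# actually scores responses.

def reward_model(response: str) -> float:
    """Stand-in reward: prefer responses that ask for clarification
    over responses that guess confidently."""
    score = 0.0
    if "not sure" in response or "could you clarify" in response:
        score += 1.0  # human raters tend to prefer hedged answers
    if len(response.split()) > 3:
        score += 0.5  # mild preference for substantive replies
    return score

def pick_best(candidates: list[str]) -> str:
    """Select the candidate the reward model scores highest -- the
    signal RLHF fine-tuning would reinforce."""
    return max(candidates, key=reward_model)

candidates = [
    "The answer is definitely 42.",
    "I'm not sure what you mean -- could you clarify the question?",
]
print(pick_best(candidates))
```

Because the reward signal comes from human preferences rather than a fact-checker, a model tuned this way can learn to sound helpful without learning to be correct -- which is exactly the failure mode the article describes.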
OpenAI itself acknowledges, "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers."
Google's Bard seemingly attempts to address this issue by allowing its models to tap into recently created data from external sources. Bard is based on LaMDA, a large language model developed by Google. LaMDA's developers, as Tiernan Ray noted for ZDNET, specifically focused on how to improve what they call "factual groundedness." They did this by allowing the program to call out to external sources of information beyond what it has already processed in its development, the so-called training phase.
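The "factual groundedness" approach can be sketched in miniature: instead of generating an answer purely from what the model absorbed during training, the system consults an external source at query time and declines when no source matches. Everything here is illustrative -- the knowledge base, the keyword matching, and the function names are assumptions, not LaMDA's actual toolset:

```python
# Minimal sketch of grounding an answer in an external source.
# The knowledge base and matching logic are illustrative stand-ins.

KNOWLEDGE_BASE = {
    "piano keys": "A standard modern piano has 88 keys.",
}

def grounded_answer(question: str) -> str:
    """Answer from an external source when one matches; otherwise
    return an explicit 'don't know' instead of guessing."""
    q = question.lower()
    for topic, fact in KNOWLEDGE_BASE.items():
        if all(word in q for word in topic.split()):
            return fact
    return "I couldn't find a reliable source for that."

print(grounded_answer("How many keys does a piano have?"))
```

The design choice matters: falling back to an explicit "don't know" trades fluency for accuracy, which is the opposite of what a purely conversational model is optimized to do.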
However, Google's recent Bard demo gone wrong illustrates exactly why tapping external sources of information is risky business, particularly for AI models that prioritize coherence over accuracy. In response to the question, "What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?" Bard said that the telescope took the first-ever image of an exoplanet -- which isn't right.
How did Bard end up giving this inaccurate statement? It probably has to do with the quality of external information available on the topic. As any computer scientist knows, "garbage in, garbage out."
And indeed, NASA's own materials about the James Webb Space Telescope -- no doubt trying to portray the telescope in the best light possible -- were ambiguous. In September 2022, the agency wrote, "For the first time, astronomers have used NASA's James Webb Space Telescope to take a direct image of a planet outside our solar system." To clarify, this was the first time this specific telescope took a direct image of an exoplanet -- but another telescope did so as early as 2004.
One immediate way to address these chatbot shortcomings is to offer as much transparency as possible. Microsoft's new version of the Bing search engine, which runs on a next-generation OpenAI large language model, cites its sources with its answers.
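The transparency pattern is simple to sketch: pair each generated answer with a numbered list of the source URLs it drew from, so readers can verify claims themselves. This is an assumed structure for illustration only -- Bing's actual citation pipeline is not public, and the `Snippet` type and function below are hypothetical:

```python
# Illustrative sketch of answer-with-citations. The data structure and
# formatting are assumptions, not Bing's actual implementation.

from dataclasses import dataclass

@dataclass
class Snippet:
    url: str
    text: str

def answer_with_citations(question: str, snippets: list[Snippet]) -> str:
    """Compose an answer from retrieved snippets and append a numbered
    list of source URLs so each claim can be checked."""
    body = " ".join(s.text for s in snippets)
    cites = "; ".join(f"[{i}] {s.url}" for i, s in enumerate(snippets, 1))
    return f"{body}\nSources: {cites}"

reply = answer_with_citations(
    "How many keys does a piano have?",
    [Snippet("https://example.com/piano", "A standard piano has 88 keys.")],
)
print(reply)
```

Citations don't make the answer correct, but they shift verification from impossible to merely tedious -- the reader can at least see where a claim came from.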
So, where does this leave us? As always, it helps for users to approach these tools with a skeptical eye and a clear understanding of how they work -- a point Microsoft itself makes.
"Bing aims to base all its responses on reliable sources -- but AI can make mistakes, and third party content on the internet may not always be accurate or reliable," the Bing FAQ section reads. "Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate. Use your own judgment and double check the facts before making decisions or taking action based on Bing's responses."
Our new digital friends may want to be helpful, but it would be unwise to rely on them just yet.