Ever since ChatGPT surged in popularity in November 2022, the AI chatbot space has become saturated with ChatGPT alternatives. These chatbots vary in LLMs, pricing, UIs, internet access, and more, making it difficult to decide which one to use.
The Chatbot Arena is a benchmark platform for LLMs where users put two randomly selected models to the test by entering a prompt and picking the better answer, without knowing which LLM is behind either response.
After users pick a chatbot, they get to see which LLMs were used to generate the output.
The results of these user votes are used to rank the LLMs on a leaderboard based on an Elo rating system, a rating system widely used in chess, according to LMSYS Org.
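To see how this kind of ranking works, here is a minimal sketch of a standard Elo update after one head-to-head vote. The K-factor and starting ratings below are illustrative assumptions, not LMSYS Org's exact parameters.

```python
# Minimal sketch of an Elo rating update, as used to rank models
# from pairwise votes. K=32 is an illustrative choice.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_winner: float, r_loser: float, k: float = 32) -> tuple[float, float]:
    """Return new (winner, loser) ratings after one vote."""
    e_w = expected_score(r_winner, r_loser)
    return r_winner + k * (1 - e_w), r_loser - k * (1 - e_w)

# Two evenly matched models: the winner gains exactly what the loser sheds.
print(elo_update(1000, 1000))  # (1016.0, 984.0)
```

An upset (a lower-rated model winning) moves both ratings further than an expected result, which is what lets the leaderboard converge as votes accumulate.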
When trying the arena for myself, I used the prompt, "Can you write me an email telling my boss that I will be out because I am going on a vacation that was planned months ago."
The two responses were very different, with one providing much more context, length, and fill-in-the-blanks that would have been appropriate for the email.
After picking "Model B" as the winner, I found out it was "vicuna-7b," the LLM created by LMSYS Org and based on Meta's LLaMA model. The losing LLM was "gpt4all-13b-snoozy," an LLM developed by Nomic AI and fine-tuned from LLaMA 13B.
The leaderboard currently, and unsurprisingly, places GPT-4, OpenAI's most advanced LLM, in first place with an Arena Elo rating of 1227. In second place is Claude-v1, an LLM developed by Anthropic.
Anthropic's second-ranking Claude is not available to the public just yet, but it does have a waitlist available where users can sign up for early access.
Ranked number eight on the leaderboard is PaLM-Chat-Bison-001, a submodel of PaLM 2, the LLM behind Google Bard. This ranking parallels the general sentiment about Bard: not the worst, but not among the best.
The Chatbot Arena site also lets you select the two specific models you want to compare, which is helpful if you want to experiment with particular LLMs.