GPT-4 Turbo reclaims the 'best AI model' crown from Anthropic's Claude 3

OpenAI's latest update tops all 82 LLMs in the Chatbot Arena. Here's how to compare them for yourself.
Written by Sabrina Ortiz, Editor

OpenAI has been on an update hot streak lately, making the latest GPT-4 Turbo available to developers and paid ChatGPT subscribers last week. When launching the model, OpenAI said the new GPT-4 Turbo boasts several improvements over its predecessor. Users are now finding that to be true.


On Thursday, the updated version of GPT-4 Turbo, gpt-4-turbo-2024-04-09, reclaimed its number one spot on the Large Model Systems Organization (LMSYS) Chatbot Arena, a crowdsourced platform where users can evaluate large language models (LLMs). 
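For developers, the updated model is addressable by its dated identifier. As a rough illustration (not OpenAI's official example), here is how a request body for the model might be assembled; the payload shape follows OpenAI's chat completions API, and the prompt is a placeholder:

```python
# Sketch: building a chat completions request for the updated model.
# The model identifier comes from the article; an actual call requires
# an API key and the official OpenAI client.

def build_request(prompt: str) -> dict:
    """Assemble the request body for a chat completions call."""
    return {
        "model": "gpt-4-turbo-2024-04-09",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize the latest Chatbot Arena results.")
# With the official client (assumes OPENAI_API_KEY is set in the environment):
#   from openai import OpenAI
#   response = OpenAI().chat.completions.create(**payload)
print(payload["model"])
```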

The Chatbot Arena lets users chat with two LLMs side by side and compare their responses to each other without knowing the models' names.

After viewing the responses, users can continue chatting until they feel confident declaring which model won, calling it a tie, or judging that both performed poorly, as seen below.

[Screenshot: the Chatbot Arena voting interface. Screenshot by Sabrina Ortiz/ZDNET]

Chatbot Arena then uses the results to rank the 82 LLMs on its leaderboard, which includes the most popular LLMs available, such as Gemini Pro, Claude 3, and Mistral-Large-2402. 
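The idea behind turning pairwise votes into a leaderboard can be sketched with a simple Elo update, the classic chess-rating scheme. This is an illustration only: LMSYS's actual methodology is more involved (it fits a statistical model over all votes rather than updating ratings one game at a time), and the K-factor and starting ratings below are assumptions.

```python
# Illustrative Elo update from pairwise "battle" votes.
# K (step size) and the 1000-point starting ratings are arbitrary choices
# for this sketch, not LMSYS's actual parameters.

K = 32  # how strongly a single vote moves the ratings

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def record_vote(ratings: dict, winner: str, loser: str, tie: bool = False) -> None:
    """Apply one user vote: winner beat loser, or the two tied."""
    ea = expected_score(ratings[winner], ratings[loser])
    score = 0.5 if tie else 1.0
    ratings[winner] += K * (score - ea)
    ratings[loser] += K * ((1 - score) - (1 - ea))

ratings = {"gpt-4-turbo-2024-04-09": 1000.0, "claude-3-opus": 1000.0}
record_vote(ratings, winner="gpt-4-turbo-2024-04-09", loser="claude-3-opus")
print(ratings)  # the winner gains rating points; the loser gives them up
```

Aggregated over many thousands of anonymous votes, updates like this are what separate the models into a stable ranking.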

As of the latest Chatbot Arena update on April 13, the updated version of GPT-4 Turbo holds the lead in the overall, coding, and English categories.


This means that less than a month after overtaking GPT-4 Turbo in the Chatbot Arena, Anthropic's Claude 3 Opus has been pushed into second place in the overall category, followed by GPT-4-1106-preview, an older version of GPT-4 Turbo, in third place. 

These results could be attributed to gpt-4-turbo-2024-04-09's improved coding, math, logical reasoning, and writing capabilities, demonstrated by its stronger performance on a series of benchmarks used to test the proficiency of AI models.

If you're interested in comparing gpt-4-turbo-2024-04-09's performance against other LLMs, you can visit Chatbot Arena and click on the Arena (side-by-side) option to select the models you want to compare.


Since you know the identity of the models in the side-by-side option, you will not be able to vote. If you want to vote and have that count toward the leaderboard, use the "Arena (battle)" option to compare random models. 

If you'd rather skip the testing and jump straight into using gpt-4-turbo-2024-04-09 in ChatGPT, you have to subscribe to ChatGPT Plus, which costs $20 per month.
