Fine-tuning OpenAI's GPT-3.5 Turbo can make it as capable as GPT-4 (if not more)

The much anticipated feature will cut costs, improve speeds, and create more tailored use cases.
Written by Sabrina Ortiz, Editor
Image: abstract cube (Eugene Mymrin/Getty Images)

OpenAI's advanced large language models have long been leveraged by enterprises and developers for their own specific use cases. Now, an update to GPT-3.5 Turbo boosts the model's functionality for those customers. 

On Tuesday, OpenAI announced that its most cost-effective model in the GPT-3.5 family, GPT-3.5 Turbo, would be available for fine-tuning. This means that developers can now use their own data to customize the model for their use cases. 
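As a sketch of what "using your own data" looks like in practice: fine-tuning data for the chat models is supplied as a JSON Lines file, where each line is one example conversation. The company name and messages below are hypothetical, for illustration only.

```python
import json

# Hypothetical training examples in the chat fine-tuning format:
# each example is one JSON object with a "messages" list showing
# the model the behavior you want it to learn.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for Acme Co."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
        ]
    },
]

# Fine-tuning data is uploaded as JSON Lines: one example per line.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(f"{len(jsonl.splitlines())} training example(s) prepared")
```

From there, the file is uploaded to OpenAI with the "fine-tune" purpose and a fine-tuning job is created against the "gpt-3.5-turbo" base model; the exact SDK calls vary by client library version, so check the current API reference.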

Also: How to make ChatGPT provide sources and citations 

"Since the release of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users," said OpenAI in the post. 

In the private beta, OpenAI found that customers were able to improve the model's performance in a variety of use cases. These include improved steerability, which allows the model to better follow instructions; reliable output formatting; and custom tone, which allows businesses to incorporate their brand voice within the model. 

OpenAI also claims that the fine-tuning allows businesses to shorten their prompts, with early testers reducing prompt size by up to 90%. This reduction cuts costs and speeds up each API call, according to the company. 
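To see why fine-tuning shortens prompts: with a base model, formatting and tone rules must be restated on every request, whereas a fine-tuned model has learned them, leaving only the user's actual question. The prompts below are hypothetical illustrations, not OpenAI's test data.

```python
# Before fine-tuning: every call carries the full instruction set.
base_prompt = (
    "You are a support bot for Acme Co. Always answer in formal English, "
    "respond in JSON with keys 'answer' and 'confidence', never exceed two "
    "sentences, and sign off with the Acme slogan.\n\n"
    "Question: How do I reset my password?"
)

# After fine-tuning: the rules are baked into the model's weights,
# so the prompt shrinks to just the request itself.
finetuned_prompt = "Question: How do I reset my password?"

reduction = 1 - len(finetuned_prompt) / len(base_prompt)
print(f"prompt shrank by {reduction:.0%}")
```

Because each API call is billed and processed per token, trimming repeated boilerplate from every request is where the cost and latency savings come from.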

Also: You can demo Meta's AI-powered multilingual speech and text translator. Here's how

Most impressively, OpenAI shared that early tests showed the fine-tuned version of GPT-3.5 Turbo can "match, or even outperform" GPT-4-level capabilities on "certain narrow tasks."

To address the privacy concerns that come with harnessing an AI model for enterprise use cases, OpenAI reassures users that data sent to the fine-tuning API remains the customer's property and is not used by OpenAI to train other models. 
