Apple restricts employee use of ChatGPT. Here's why

The tech giant joins the growing list of companies banning ChatGPT. Are the security concerns justified?
Written by Sabrina Ortiz, Editor

Generative AI models continually improve their performance by using interactions with users to refine their underlying models. As a result, even confidential information included in your prompts could potentially be used to further train the model. 

For that reason, data privacy is one of the biggest challenges surrounding generative AI, and ChatGPT in particular. 

Also: OpenAI rolled out a free ChatGPT app for iPhones. Does it live up to the hype?

Fears over data leaks have led companies including Verizon, JPMorgan Chase, and Amazon to restrict employee use of ChatGPT. Now Apple joins the list. 

According to documents reviewed by the Wall Street Journal, ChatGPT and other external AI tools, such as Microsoft-owned GitHub Copilot, have been restricted for some employees. 

The worry stems from the potential for these models to unintentionally expose private information, which has happened before. 

Also: Most Americans think AI threatens humanity, according to a poll

The most recent example is the March 20 ChatGPT outage, which allowed some users to see titles from other users' chat histories. The incident prompted Italy to temporarily ban ChatGPT. 

OpenAI has tried to address data concerns before. In late April, the company released a feature that allows users to turn off their chat history, giving them more control over their own data by letting them choose whether their chats can be used to train OpenAI's models. 
