Apple's new AI model could understand your home screen and supercharge Siri

Meet Ferret-UI, a multimodal large language model capable of understanding the elements on your home screen and potentially helping Siri actually perform tasks for you.
Written by Sabrina Ortiz, Editor

Although Apple has yet to launch an AI model of its own since the generative AI craze began, the company has plenty of AI projects in the works. Just last week, Apple researchers shared a paper unveiling a new language model, and insider sources reported that the company has two AI-powered robots in development. Now, yet another research paper shows Apple is just getting started.

On Monday, Apple researchers published a research paper that presents Ferret-UI, a new multimodal large language model (MLLM) capable of understanding mobile user interface (UI) screens.


MLLMs differ from standard LLMs in that they go beyond text, understanding other modalities such as images and audio. In this case, Ferret-UI is trained to recognize the different elements of a user's home screen, such as app icons and small text.

Identifying app screen elements has been challenging for MLLMs in the past because those elements, such as icons and text, are often small. To overcome that issue, according to the paper, the researchers added "any resolution" on top of Ferret, which lets the model magnify the details on the screen.
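
As a rough sketch of that idea: split the screenshot along its longer axis and encode each half alongside the full image, so small icons and text get more effective pixels. The Python below is our own minimal reading of the scheme, not Apple's code; the function name and the simple two-way split are assumptions.

```python
from PIL import Image

def split_any_resolution(screenshot: Image.Image) -> list[Image.Image]:
    """Split a screenshot along its longer axis so each half can be
    encoded at higher effective resolution. A rough sketch of the
    "any resolution" idea; the name and two-way split are assumptions."""
    w, h = screenshot.size
    if h >= w:
        # Portrait phone screen: split into top and bottom halves.
        halves = [screenshot.crop((0, 0, w, h // 2)),
                  screenshot.crop((0, h // 2, w, h))]
    else:
        # Landscape screen: split into left and right halves.
        halves = [screenshot.crop((0, 0, w // 2, h)),
                  screenshot.crop((w // 2, 0, w, h))]
    # Keep the full screenshot too, so the model retains global context
    # alongside the magnified sub-images.
    return [screenshot] + halves
```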

Building on that, Apple's MLLM also has "referring, grounding, and reasoning capabilities," which, according to the paper, allow Ferret-UI to fully comprehend UI screens and, when instructed, perform tasks based on their contents, as seen in the image below.

[Image: Ferret-UI example. Credit: K. You et al.]
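
In concrete terms, those three capabilities map to different input and output shapes. The request and response formats below are purely illustrative assumptions on our part, not the paper's actual training format:

```python
# Referring: the model answers a question about a region you point at.
referring = {
    "image": "home_screen.png",
    "region": [120, 40, 480, 160],        # x1, y1, x2, y2 in pixels
    "prompt": "What is the widget inside this box?",
}
# Expected kind of answer: "A weather widget showing 72 degrees."

# Grounding: the model locates a named element and returns raw coordinates.
grounding = {
    "image": "home_screen.png",
    "prompt": "Find the Settings app icon.",
}
# Expected kind of answer: {"box": [48, 520, 112, 584]}

# Reasoning: the model combines what it sees into an instruction or plan.
reasoning = {
    "image": "home_screen.png",
    "prompt": "How would I enable dark mode from this screen?",
}
# Expected kind of answer: "Open Settings, tap Display & Brightness, ..."
```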

To measure how the model performs against other MLLMs, Apple researchers compared Ferret-UI to GPT-4V, OpenAI's MLLM, on public benchmarks, elementary tasks, and advanced tasks.


Ferret-UI outperformed GPT-4V across nearly all tasks in the elementary category, including icon recognition, OCR, widget classification, "find icon," and "find widget" tasks on both iPhone and Android. The only exception was the "find text" task on iPhone, where GPT-4V slightly outperformed the Ferret models, as seen in the chart below.

[Chart: Ferret-UI vs. GPT-4V task performance. Credit: K. You et al.]

When it comes to grounded conversations about the contents of the UI, GPT-4V has a slight advantage, outperforming Ferret-UI 93.4% to 91.7%. However, the researchers note that Ferret-UI's performance is still "noteworthy," since it generates raw coordinates rather than choosing from the set of pre-defined boxes GPT-4V selects among. You can find an example below.

[Image: Ferret-UI grounding example. Credit: K. You et al.]
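
That distinction matters for scoring: a model that emits raw coordinates is usually judged by how much its predicted box overlaps the ground truth, not by whether it picked the right label from a list. Here is a minimal sketch, assuming the standard intersection-over-union measure; the paper's exact threshold may differ.

```python
def iou(a: list[int], b: list[int]) -> float:
    """Intersection-over-union of two [x1, y1, x2, y2] boxes: the usual
    way a raw-coordinate prediction is scored against ground truth."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

predicted = [50, 522, 110, 580]      # raw coordinates emitted by the model
truth     = [48, 520, 112, 584]      # human-annotated Settings icon
print(iou(predicted, truth) >= 0.5)  # 0.5 is a common, assumed threshold
```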

The paper does not address what Apple plans to use the technology for, or whether it will at all. Instead, the researchers state more broadly that Ferret-UI's advanced capabilities could positively impact UI-related applications.

"The advent of these enhanced capabilities promises substantial advancements for a multitude of downstream UI applications, thereby amplifying the potential benefits afforded by Ferret-UI in this domain," the researchers wrote. 


The ways in which Ferret-UI could improve Siri are easy to imagine. With a thorough understanding of a user's app screen and knowledge of how to act on it, Ferret-UI could supercharge Siri into an assistant that performs tasks for you.
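
To see how short the step is from grounding to acting, note that once a model can return a bounding box for, say, the Settings icon, an assistant only needs to tap its center. The glue code below is entirely hypothetical; none of it is Apple's API.

```python
def tap_target(box: list[int]) -> tuple[int, int]:
    """Turn a grounded bounding box into a tap point at its center.
    Hypothetical glue code: the model itself only returns the box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) // 2, (y1 + y2) // 2)

# Suppose the model grounded "Find the Settings icon" to this box:
box = [48, 520, 112, 584]
print(tap_target(box))  # (80, 552): where an assistant would tap
```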

There's certainly interest in an assistant that does more than just respond to queries. New AI gadgets such as the Rabbit R1 get plenty of attention for being able to carry out an entire task for you, such as booking a flight or ordering a meal, without you having to instruct them step by step.
