Now Microsoft has a new AI model: Kosmos-1

Microsoft's Kosmos-1 can take image and audio prompts, paving the way for the next stage beyond ChatGPT's text prompts.
Written by Liam Tung, Contributing Writer

Microsoft has unveiled Kosmos-1, which it describes as a multimodal large language model (MLLM) that can respond not only to language prompts but also to visual cues, and that can be used for an array of tasks, including image captioning, visual question answering, and more.

OpenAI's ChatGPT has helped popularize LLMs such as the GPT (Generative Pre-trained Transformer) models, which turn a text prompt into a text output.

While people are impressed by these chat capabilities, LLMs still struggle with multimodal inputs, such as image and audio prompts, Microsoft's AI researchers argue in a paper called 'Language Is Not All You Need: Aligning Perception with Language Models'. The paper suggests that multimodal perception, or knowledge acquisition and "grounding" in the real world, is needed to move beyond ChatGPT-like capabilities to artificial general intelligence (AGI).

"More importantly, unlocking multimodal input greatly widens the applications of language models to more high-value areas, such as multimodal machine learning, document intelligence, and robotics," the paper says.

Alphabet-owned robotics firm Everyday Robots and Google's Brain Team showed off the role of grounding last year when using LLMs to get robots to follow human descriptions of physical tasks. The approach involved grounding the language model in tasks that are possible within a given real-world context. Microsoft also used grounding in its Prometheus AI model for integrating OpenAI's GPT models with real-world feedback from Bing search ranking and search results.
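
The grounding idea can be sketched in a few lines of Python: score each candidate action both by how well the language model thinks it matches the instruction and by whether it is feasible in the current environment, then pick the action that maximizes both. The functions below are illustrative stand-ins for those scores, not the actual Everyday Robots or Prometheus implementations.

```python
# A minimal, hypothetical sketch of grounding: rank candidate actions by
# combining a language model's preference with a real-world feasibility
# score, so a plan stays within what the robot (or search system) can do.
# Both scoring functions are stand-ins, not any vendor's actual code.

def llm_score(instruction: str, action: str) -> float:
    """Stand-in for a language model's score of how well an action
    matches the instruction (e.g., a token log-probability)."""
    return 1.0 if action.split()[0] in instruction else 0.1

def feasibility_score(action: str, context: set[str]) -> float:
    """Stand-in for environment feedback: 1.0 if every object the
    action needs is present in the current context, else 0.0."""
    return 1.0 if all(obj in context for obj in action.split()[1:]) else 0.0

def ground(instruction: str, candidates: list[str], context: set[str]) -> str:
    # The grounded choice maximizes the product of the two scores, so an
    # action the model likes but the world forbids is ruled out.
    return max(candidates,
               key=lambda a: llm_score(instruction, a) * feasibility_score(a, context))

if __name__ == "__main__":
    context = {"sponge", "table"}  # objects actually available
    actions = ["grab sponge", "grab towel", "wipe table"]
    print(ground("grab the sponge and clean up", actions, context))  # -> "grab sponge"
```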

Microsoft says its Kosmos-1 MLLM can perceive general modalities, follow instructions (zero-shot learning), and learn in context (few-shot learning). "The goal is to align perception with LLMs, so that the models are able to see and talk," the paper says.
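
In practice, "zero-shot" means the model receives only an instruction and an image, while "few-shot" (in-context learning) means a handful of worked examples are interleaved before the query. The sketch below illustrates the difference; the MultimodalLM class and its generate method are hypothetical stand-ins, not Microsoft's actual API, though the paper does describe prompts as interleaved sequences of images and text.

```python
# Hypothetical sketch of zero-shot vs. few-shot prompting for an MLLM.
from dataclasses import dataclass

@dataclass
class Image:
    path: str  # placeholder for the pixel data fed to the vision encoder

class MultimodalLM:
    def generate(self, prompt: list) -> str:
        """Stand-in: a real MLLM encodes images and text into a single
        token sequence and continues it autoregressively."""
        return "<model output>"

model = MultimodalLM()

# Zero-shot: an instruction and one image, no worked examples.
zero_shot = [Image("clock.jpg"), "Question: What time is it? Answer:"]

# Few-shot (in-context learning): demonstrations precede the query.
few_shot = [
    Image("cat.jpg"), "Question: What animal is this? Answer: a cat.",
    Image("dog.jpg"), "Question: What animal is this? Answer: a dog.",
    Image("horse.jpg"), "Question: What animal is this? Answer:",
]

print(model.generate(zero_shot))
print(model.generate(few_shot))
```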

The demonstrations of Kosmos-1's outputs include an image of a kitten in which a person is holding a piece of paper with a smile drawn on it over the kitten's mouth. The prompt is: 'Explain why this photo is funny?' Kosmos-1's answer is: "The cat is wearing a mask that gives the cat a smile."

Other examples show it: perceiving from an image that a tennis player has a ponytail; reading the time on an image of a clock face showing 10:10; calculating the sum from an image of 4 + 5; answering 'What is TorchScale?' (a PyTorch machine-learning library) based on a GitHub description page; and reading the heart rate from an Apple Watch face.

Each example demonstrates the potential for MLLMs like Kosmos-1 to automate tasks in a range of situations, from telling a Windows 10 user how to restart their computer (or completing any other task that involves a visual prompt), to reading a web page to initiate a web search, interpreting health data from a device, captioning images, and so on. The model does not, however, include video-analysis capabilities.

The researchers also tested how Kosmos-1 performed on the zero-shot Raven IQ test. They found a "large performance gap between the current model and the average level of adults", but also concluded that its accuracy shows the potential of MLLMs to "perceive abstract conceptual patterns in a nonverbal context" by aligning perception with language models.
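
One plausible way to pose a Raven item to such a model zero-shot, sketched below, is to fill the matrix's blank cell with each candidate answer in turn and let the model judge which completed image looks correct. The classes and scoring function here are hypothetical stand-ins rather than the paper's exact evaluation code.

```python
# Hypothetical sketch of zero-shot Raven-style evaluation for an MLLM.
from dataclasses import dataclass

@dataclass
class Image:
    path: str  # placeholder for the rendered puzzle image

def complete_matrix(matrix: Image, candidate: Image) -> Image:
    """Stand-in: paste the candidate into the matrix's empty cell."""
    return Image(f"{matrix.path}+{candidate.path}")

def yes_probability(prompt: list) -> float:
    """Stand-in for the probability the model assigns to answering 'Yes'."""
    return 0.5  # a real MLLM would return a learned score here

def solve_raven(matrix: Image, candidates: list[Image]) -> int:
    """Return the index of the candidate the model is most confident in."""
    scores = []
    for cand in candidates:
        completed = complete_matrix(matrix, cand)
        prompt = [completed, "Question: Is the matrix completed correctly? Answer:"]
        scores.append(yes_probability(prompt))
    return max(range(len(candidates)), key=scores.__getitem__)
```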

The research into "web page question answering" is interesting given Microsoft's plan to use Transformer-based language models to make Bing a better rival to Google search.   

"Web page question answering aims at finding answers to questions from web pages. It requires the model to comprehend both the semantics and the structure of texts. The structure of the web page (such as tables, lists, and HTML layout) plays a key role in how the information is arranged and displayed. The task can help us evaluate our model's ability to understand the semantics and the structure of web pages," the researchers explain.
