LLMs aren't even as smart as dogs, says Meta's AI chief scientist

LeCun also said not to fear an AI takeover because 'there is no correlation between being smart and wanting to take over.'
Written by Jada Jones, Associate Editor

You may be impressed with artificially intelligent large language models (LLMs) like ChatGPT that can write code, create an app, and pass the bar exam. But LLMs still lack artificial general intelligence -- the hypothetical ability of an autonomous system to perform the intellectual tasks that humans or animals can.

Also: What is ChatGPT and why does it matter?

And according to Meta's AI chief scientist, Yann LeCun, LLMs aren't even as smart as dogs. He says LLMs are not truly intelligent because they cannot comprehend or interact with reality, relying only on language training to produce an output.

LeCun says that true intelligence stretches beyond language, noting that most human knowledge has little to do with language. LLMs like ChatGPT lack emotions, creativity, sentience, and consciousness -- cornerstones of human intelligence.

ChatGPT can solve complex mathematical problems and, without its safety guardrails, can explain how to create harmful substances from scratch at home, according to OpenAI's GPT-4 whitepaper.

Yet ChatGPT lacks the cognitive abilities to sense, plan, exhibit common sense, or reason based on real-world experiences. Even so, GPT-4, the newest version of OpenAI's language model, demonstrated human-level performance in math, coding, and law, signaling that achieving artificial general intelligence could be on the horizon.

Also: AI can write your emails, reports, and essays. But can it express your emotions? Should it?

OpenAI continues to train and expand the capabilities of its GPT language models in an attempt to one day achieve artificial general intelligence. Still, the company acknowledges that the achievement of such technology could significantly disrupt society.

In May, OpenAI's CEO, Sam Altman, testified before the US Senate Judiciary Subcommittee and expressed that his greatest fear is that his technology could cause "significant harm to the world." 

In a blog post, OpenAI states that generally intelligent beings can serve many purposes, but using and researching the technology responsibly is paramount.

Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert

LeCun says that one day artificial beings will be more intelligent than humans, and that when that happens, they should be "controllable and basically subservient to humans." He says people's fear that artificially generally intelligent beings will want to take over the world is unfounded, as "there is no correlation between being smart and wanting to take over."

And OpenAI's sentiments on creating AI that can achieve artificial general intelligence are similar to LeCun's. The company believes it's impossible to halt the creation of artificial beings that can become as smart as, or smarter than, humans.

Also: How to use ChatGPT

But OpenAI's mission is to ensure the technology is developed with great caution, as it believes the risks of artificial general intelligence could be "existential" if the technology falls into the wrong hands and is deployed maliciously.

The kind of artificial intelligence we once thought existed only in sci-fi movies may soon be a reality. Will we be ready?
