Computer scientist Yann LeCun: “Intelligence really is about learning”

Ars Technica
Published Jan 7, 2026

I arrive 10 minutes ahead of schedule from an early morning Eurostar and see Yann LeCun is already waiting for me, nestled between two plastic Christmas trees in the nearly empty winter garden of Michelin-starred restaurant Pavyllon.

The restaurant is next to Paris’s Grand Palais, where President Emmanuel Macron kick-started 2025 by hosting an international AI summit, a glitzy showcase packed with French exceptionalism and international tech luminaries including LeCun, who is considered one of the “godfathers” of modern AI.

LeCun gets up to hug me in greeting, wearing his signature black Ray-Ban Wayfarer glasses. He looks well rested for a man who has spent nearly a week running around town plotting world domination. Or, more precisely, “total world assistance” or “intelligent amplification, if you want.” Domination “sounds scary with AI,” he acknowledges.

The last time I met him was at a summer conference in Paris, where he was unveiling the latest iteration of his vision for superintelligent machines as Meta’s chief AI scientist. Now, he is preparing to leave his longtime employer, and fundraising for a new start-up that will bring that vision to life.

LeCun’s schedule has been relentless since the Financial Times broke the news that he was leaving Meta. “It basically pushed us to accelerate the calendar,” he says. Macron sent him a WhatsApp message after the story came out. LeCun declines to tell me exactly what the president said, but does hint that he was pleased the new “worldwide” company will have a strong connection to France.

LeCun will not be the company’s chief executive, but the executive chair, allowing him the same kind of freedom to do research that he had at Meta. (Since our lunch, the FT has reported that LeCun’s new venture is called Advanced Machine Intelligence Labs and will be led by Alex LeBrun, the co-founder and chief executive of French healthcare AI start-up Nabla.)

“I’m a scientist, a visionary. I can inspire people to work on interesting things. I’m pretty good at guessing what type of technology will work or not. But I can’t be a CEO,” LeCun says. “I’m both too disorganised for this, and also too old!”

The waitress offers us champagne to start. I opt for a glass of alcohol-free Blanc de Blancs. LeCun, a fan of wines, is curious to try it too. We clink glasses.

Things have changed for me as well since we last met: I am pregnant. I make a joke that I too am growing my own superintelligence. “It is the most efficient way,” he says.

LeCun would know, as he has been gestating ideas for the creation of such intelligence in machines for decades. He has also been vocal about his disdain for large language models (LLMs) and their potential to reach superhuman intelligence, which is the current obsession of Silicon Valley. He argues that LLMs are useful but fundamentally limited and constrained by language. To achieve human-level intelligence, you have to understand how our physical world works too.

His solution for achieving that relies on an architecture called V-JEPA, a so-called world model. World models aim to understand the physical world by learning from videos and spatial data, rather than just language. They are also able to plan, reason and have persistent memory. He calls this kind of intelligence Advanced Machine Intelligence, or AMI.
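To make the idea concrete, here is a minimal sketch, in PyTorch, of the joint-embedding prediction at the heart of JEPA-style world models: the system learns to predict the representation of video frames it has not seen from the representation of frames it has, rather than predicting raw pixels or words. Everything in the snippet (the tiny encoder, the dimensions, the moving-average update) is an illustrative assumption, not Meta's actual V-JEPA code.

```python
# Toy JEPA-style training step: predict future frame *embeddings* from observed ones.
# All sizes and the EMA rate are illustrative assumptions, not V-JEPA's real configuration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, frame_dim=1024, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 512), nn.GELU(),
            nn.Linear(512, embed_dim),
        )

    def forward(self, x):            # x: (batch, frames, frame_dim)
        return self.net(x)           # -> (batch, frames, embed_dim)

encoder = Encoder()                  # online encoder, sees the observed frames
target_encoder = Encoder()           # target encoder, sees the frames to be predicted
target_encoder.load_state_dict(encoder.state_dict())
predictor = nn.Linear(256, 256)      # maps observed embeddings to predicted future embeddings
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

video = torch.randn(8, 16, 1024)               # toy batch: 8 clips of 16 pre-flattened frames
context, future = video[:, :8], video[:, 8:]   # first half observed, second half hidden

pred = predictor(encoder(context))   # predicted embeddings of the hidden frames
with torch.no_grad():
    target = target_encoder(future)  # target embeddings, no gradient flows through them
loss = nn.functional.mse_loss(pred, target)    # loss lives in embedding space, not pixel space
loss.backward()
opt.step()

# The target encoder drifts slowly towards the online encoder (an exponential
# moving average), a common trick to stop the representations collapsing.
with torch.no_grad():
    for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
        tp.mul_(0.99).add_(p, alpha=0.01)
```

Predicting in embedding space rather than pixel space is the point: the model can ignore unpredictable low-level detail and keep only the structure it needs to plan and reason about the world.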

Born in 1960 and raised in the suburbs of Paris, LeCun has been fascinated by the question of how human intelligence emerged since he was a young boy.

It was the film 2001: A Space Odyssey, which he saw when he was eight or nine years old, that set him on the path he is on today. He mimes having his mind blown.

LeCun’s father, an aeronautical engineer and “a bit of an inventor,” instilled in him a love of building and tinkering with things. LeCun grew up constructing model aeroplanes and playing woodwind instruments such as the recorder and the crumhorn, a “weird Renaissance instrument,” which he played in a Renaissance dance music band.

We’ve both chosen the four-course “Champs-Élysées” lunch set menu. As we tuck into our starters (soft-boiled eggs, tuna tartare with smoked pike roe and croutons for LeCun, and a broth of root vegetables and horseradish ravioli with Chartreuse sauce for me), he tells me how a teacher deemed him too bad at maths to study it at university, so he decided to pursue engineering.

The waitress comes to check on us, and LeCun orders a glass of Chassagne-Montrachet from Burgundy. “What Americans would call Chardonnay,” he says, jokingly.

LeCun’s lightbulb moment came as a student at the École Supérieure d’Ingénieurs en Électrotechnique et Électronique in Paris in the 1980s, when he read a book about a debate on nature versus nurture between the linguist Noam Chomsky and Jean Piaget, a psychologist. Chomsky argued that humans have an inbuilt capacity for language, while Piaget said there is some structure but most of it is learnt.

“I’m not gonna make friends saying this . . . ” he tells me, “but I was reading this and I thought everything that Chomsky . . . was saying could not possibly be true, [because] we learn everything. Intelligence really is about learning.”

AI research into neural networks, technology that loosely mimics how the brain functions, was practically a dead field and considered taboo by the scientific community after early iterations had failed to impress. But LeCun sought out other researchers studying neural networks and found intellectual “soulmates” in the likes of Geoffrey Hinton, then a faculty member at Carnegie Mellon.

He later joined Hinton at the University of Toronto as a postdoc researcher. The two, along with Yoshua Bengio, went on to lay the groundwork for deep learning and modern AI, which saw them rewarded in 2018 with the Turing Award, the most prestigious prize in computer science.

The waitress lays our second, gorgeous, dish in front of us, and launches into an enthusiastic description of the meal in French. I nod along equally enthusiastically, understanding nothing.

“Did you get that?” LeCun asks. “This is the foie gras, and this is the Comté soufflé, and the Comté is 18 months aged.” When in France, I think, and take a bite of the liver.

LeCun is the brain behind important early AI technologies. In the late 1980s and 1990s, while a researcher at AT&T Bell Labs in New Jersey—once known as the leading industry research lab in the world—he developed convolutional neural networks, an architecture used in image recognition technology, which he built into a system that was widely used by banks to read cheques.
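For a sense of what such a network looks like in code, the sketch below stacks convolution and pooling layers to turn a small grayscale digit image into ten class scores, the same basic recipe behind those cheque readers. The layer sizes here are illustrative assumptions, not the exact configuration LeCun shipped at Bell Labs.

```python
# A bare-bones convolutional network in the spirit of LeCun's early digit readers:
# convolution + pooling layers learn local image features, then a small classifier
# scores the ten digits. Sizes are illustrative, not the original LeNet settings.
import torch
import torch.nn as nn

digit_net = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),   # 6 learned feature maps from a 28x28 grayscale digit
    nn.Tanh(), nn.AvgPool2d(2),       # downsample while keeping rough spatial layout
    nn.Conv2d(6, 16, kernel_size=5),  # 16 higher-level feature maps
    nn.Tanh(), nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),        # ten scores, one per digit 0-9
)

scores = digit_net(torch.randn(1, 1, 28, 28))  # a fake digit image
print(scores.argmax(dim=1))                     # the predicted digit class
```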

He had conceived of the research at Toronto, but was able to roll it out in the real world thanks to the seemingly unlimited coffers of cash and cutting-edge technology available at Bell Labs.

LeCun recounts something his boss at the time, Larry Jackel, told him when he first joined. “He said, ‘You know, at Bell Labs? You don’t get famous by saving money.’”

Our main dish arrives, a portion of cod with herbed breadcrumbs and fried capers. LeCun is in a jovial mood, and I find myself engrossed in his colourful stories about the early years of AI research.

He, along with his pharmacist wife Isabelle and their three sons, ended up settling in New Jersey for good, although he visits Paris every five weeks or so. America was a “culture shock,” he says.

The party at Bell Labs was destined to end. A corporate shake-up meant the lab lost significant funding and was spun off to different subsidiaries. LeCun rejoined academia and started a new project at NYU aimed at researching neural networks, frequenting Greenwich Village’s jazz clubs after his lectures.

By 2013, it was clear that deep learning was going to work, with image recognition applications showing impressive results. Google had already set up Google Brain, and a year later would acquire British AI lab DeepMind.

It was also then that Mark Zuckerberg called. He wanted to start an AI unit at Facebook, and to woo LeCun invited him over for dinner at his California home. A private chef prepared “chicken with some pretty good white wine,” LeCun recalls.

LeCun agreed to join on three conditions. He wouldn’t have to quit his job at NYU. He wouldn’t move to California. And the research results of the new lab had to be made publicly available.

Zuckerberg agreed, and the deal was done. LeCun was to join Facebook, one of the biggest technology companies in the world, to set up a new AI research lab focusing on fundamental research, called Facebook Artificial Intelligence Research (Fair).

The waitress interrupts us to bring our dessert, bricelets. “Magnifique,” LeCun says, as the dish is placed in front of him.

I shift the conversation to a more tumultuous time. In early 2022, pre-ChatGPT, all the major AI labs had some version of the technology kicking around, but it was seen as largely experimental. It was a small, relatively unknown AI lab called OpenAI that kick-started today’s AI mania, when it quietly launched the technology as an easily accessible chatbot.

ChatGPT caused a frantic scramble at Meta. The company’s leadership decided to put all their chips into work developing Llama, a large language model. Zuckerberg reshuffled the organisation to create a generative AI unit, which was tasked with accelerating research into products. LeCun insisted that the model be released openly.

Llama 2, released with open weights for all users, meaning people could download and tweak it for free, was a “watershed” moment, which “changed the entire industry,” LeCun says. The model became the gold standard in powerful open LLMs, and championed an approach that was counter to the extreme concentration of power that Google and OpenAI were pushing. Meta were seen as the good guys in AI research.

Meta switched gears aggressively on AI last year. Zuckerberg placed more pressure on the GenAI unit to accelerate AI development and deployment, which led to a communication breakdown, LeCun says.

“We had a lot of new ideas and really cool stuff that they should implement. But they were just going for things that were essentially safe and proved,” he says. “When you do this, you fall behind.”

The subsequent Llama models were duds. Llama 4, which was released in April 2025, was a flop, and the company was accused of gaming benchmarks to make it look more impressive. LeCun admits that the “results were fudged a little bit,” and the team used different models for different benchmarks to give better results.

“Mark was really upset and basically lost confidence in everyone who was involved in this. And so basically sidelined the entire GenAI organisation. A lot of people have left, a lot of people who haven’t yet left will leave.”

Last June, Meta invested $15bn in data-labelling start-up Scale AI and hired its 28-year-old chief executive and co-founder Alexandr Wang. Wang took the reins of the company’s new bet on AI and its research unit, called TBD Lab. The lab is tasked with developing new frontier AI models.

Meta made headlines for trying to poach elite researchers from competitors with offers of $100mn sign-on bonuses. “The future will say whether that was a good idea or not,” LeCun says, deadpan.

LeCun calls Wang, who was hired to lead the organisation, “young” and “inexperienced.”

“He learns fast, he knows what he doesn’t know . . . There’s no experience with research or how you practise research, how you do it. Or what would be attractive or repulsive to a researcher.”

Wang also became LeCun’s manager. I ask LeCun how he felt about this shift in hierarchy. He initially brushes it off, saying he’s used to working with young people. “The average age of a Facebook engineer at the time was 27. I was twice the age of the average engineer.”

But those 27-year-olds weren’t telling him what to do, I point out.

“Alex [Wang] isn’t telling me what to do either,” he says. “You don’t tell a researcher what to do. You certainly don’t tell a researcher like me what to do.”

LeCun doesn’t mince his words about why he ultimately decided to leave Meta after more than a decade. Staying became politically difficult, he tells me. And while Zuckerberg likes LeCun’s world model research, the crowd who were hired for the company’s new superintelligence push are “completely LLM-pilled.”

This clearly alienated LeCun. “I’m sure there’s a lot of people at Meta, including perhaps Alex, who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence,” he says. “But I’m not gonna change my mind because some dude thinks I’m wrong. I’m not wrong. My integrity as a scientist cannot allow me to do this.”
