Why One AI Godfather Says AI Is Dumber Than a Cat
WSJ Tech News Briefing · October 17, 2024 · 00:11:01


Yann LeCun, a professor at New York University and senior researcher at Meta, is one of the godfathers of artificial intelligence but unlike other leaders in the field he doesn’t think today’s AI tech presents an existential peril to humanity. WSJ tech columnist Christopher Mims joins host Zoe Thomas to discuss LeCun’s position and why he says today’s AI is dumber than a cat.


Sign up for the WSJ's free Technology newsletter.



[00:00:33] Welcome to Tech News Briefing.

[00:00:35] It's Thursday, October 17th.

[00:00:38] I'm Zoe Thomas for The Wall Street Journal.

[00:00:41] Warnings about the threat of artificial intelligence have been coming from some of the leaders behind the technology.

[00:00:48] But one of the godfathers of AI says the idea that the tech is an existential threat is, quote, complete BS.

[00:00:57] AI pioneer Yann LeCun spoke with our tech columnist Christopher Mims.

[00:01:02] And Christopher will be here to tell us about the conversation.

[00:01:08] In 2019, Yann LeCun won the A.M. Turing Award, the highest prize in computer science, along with Geoffrey Hinton and Yoshua Bengio for their work on AI.

[00:01:20] LeCun is now a senior AI researcher at Facebook parent Meta and a professor of computer science at New York University.

[00:01:28] And unlike others in the field, LeCun doesn't think current AI is capable of being the destructive monster depicted in sci-fi.

[00:01:37] He says it's dangerous to project human nature onto these systems because humans have evolved to survive, compete with other species and cooperate with each other.

[00:01:48] I mean, those are characteristics of human nature that exist in humans because of evolution and the situation that we have evolved in.

[00:01:56] But they're not linked with intelligence.

[00:01:58] So you can have very intelligent entities that have no desire to dominate, essentially no survival instinct, no desire to influence other entities or anything like that.

[00:02:09] Tech columnist Christopher Mims spoke with LeCun about this and joins us now.

[00:02:14] So Christopher, when did LeCun begin his work on AI?

[00:02:18] So he was working on neural networks in like the mid-1980s when it was a backwater.

[00:02:24] AI was in what's known as an AI winter and nobody took it seriously.

[00:02:30] Everybody thought, oh, what a funny academic curiosity you're working on.

[00:02:35] And of course, now people are winning Nobel Prizes for it and propping up the valuations of trillion dollar companies with it.

[00:02:42] What's LeCun doing now?

[00:02:43] LeCun is basically the most senior research scientist at Meta.

[00:02:50] And he also is a professor at NYU.

[00:02:53] And he's collaborating with people constantly.

[00:02:56] And he's kind of a spiritual leader at Meta where he works with their far-reaching futuristic AI research body called FAIR.

[00:03:10] And what FAIR works on is just fundamental basic science in AI.

[00:03:18] So the research that we're working on at FAIR really is those kind of new ideas, new architectures, new training paradigms, training from video, getting systems to acquire some level of common sense.

[00:03:30] Unlike some other AI pioneers, LeCun has expressed some skepticism about the idea that AI is an existential threat.

[00:03:38] What has LeCun said about the idea that AI could be a threat to humanity?

[00:03:43] He thinks that's completely ridiculous.

[00:03:45] Pardon my French, but it's complete BS.

[00:03:47] When I spoke with LeCun recently at Meta's satellite headquarters in New York City, he expressed extreme skepticism that we're going to achieve so-called AGI, human level AI, anytime soon.

[00:04:02] And or that if we did, it would be dangerous and we wouldn't be able to control it.

[00:04:09] And just to peel back some layers of that onion for a moment, part of his argument is that today's large language models and our dominant approaches to AI in which everyone's investing tens or hundreds of billions are not going to get us to human level intelligence.

[00:04:26] He says that large language models and things like ChatGPT are just a demonstration that you can be very clever in manipulating language and not be in any way smart.

[00:04:36] And then his other argument is, OK, look, I am actually working on general AI and others are.

[00:04:44] And we don't know how long it will take us to achieve actual general AI-level intelligence, much less human level intelligence.

[00:04:53] But when we do, there will be these good AIs that could always combat the malicious ones that like hackers create or if an AI goes rogue.

[00:05:04] And his analogy there is cybersecurity.

[00:05:06] It's not like the fact that hackers exist and are able to create viruses and malware has ended the Internet.

[00:05:14] Far from it.

[00:05:14] You have people with bad intentions who are trying to kind of get into computer systems of various types or exploit platforms for various purposes, you know, monetary or political or whatever.

[00:05:26] And you have to take countermeasures.

[00:05:28] And then it's, you know, my AI is better than your AI.

[00:05:31] And so it's whether the good guys have more powerful tools than the bad guys.

[00:05:34] How do LeCun's views differ from other big names in the industry like Elon Musk or OpenAI CEO Sam Altman, and even from his fellow Turing Award winners Geoffrey Hinton and Yoshua Bengio?

[00:05:47] Sam Altman has said that in like a few thousand days we might have artificial general intelligence.

[00:05:54] Elon Musk has warned that it could be coming soon.

[00:05:56] Other people whom we should take very seriously, like the other quote-unquote godfathers of AI, including Hinton, who just won a Nobel Prize, and Bengio, are very active in being out there and saying this could be a real threat, if not an existential threat, just a big threat to us, and we should take it seriously.

[00:06:19] Coming up, we'll hear what LeCun thinks of the idea that AI will one day exceed human level intelligence.

[00:06:27] That's after the break.


[00:07:09] And we're back with WSJ tech columnist Christopher Mims.

[00:07:12] So Christopher, what does Yann LeCun make of the prospects for achieving artificial general intelligence, or AGI?

[00:07:20] This idea that AI could go beyond human level intelligence.

[00:07:24] Does he think there's a path to reaching that?

[00:07:26] He does think there might be a path to reaching that.

[00:07:28] But in order to achieve it, we're going to have to create things that have more of the characteristics that humans or animal intelligences have.

[00:07:38] The basic idea of this is you have an NLM, but instead of having it produce one answer, you have it somehow using various tricks.

[00:07:45] You have it produce lots of different answers.

[00:07:47] And then you have some way of selecting which answers are more likely to be best.
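What Christopher is describing here is often called best-of-n sampling: draw many candidate answers from a model, then use a separate scorer to pick one. Below is a minimal Python sketch of that loop; `generate` and `score` are hypothetical placeholders standing in for an LLM sampling call and a verifier or reward model, not any real API discussed in the episode.

```python
import random

def generate(prompt: str, temperature: float = 1.0) -> str:
    # Placeholder for an LLM sampling call: returns one candidate answer.
    # Higher temperature would normally mean more diverse candidates.
    return f"candidate answer {random.random():.3f} (T={temperature})"

def score(prompt: str, answer: str) -> float:
    # Placeholder for a verifier or reward model that rates an answer.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Sample n diverse candidates, then keep the one the scorer rates highest.
    candidates = [generate(prompt, temperature=1.0) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

print(best_of_n("Why is the sky blue?", n=4))
```

The interesting design choice is that generation and selection are decoupled: the generator only has to be diverse, while the scorer carries the judgment about which answer is likely best.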

[00:07:51] He likes to use the analogy of a cat.

[00:07:54] A cat has four things that today's AIs and especially large language models don't have.

[00:07:58] It has a persistent memory.

[00:08:00] It has a world model, some idea of how the universe works and physically what it looks like and how you can function within it.

[00:08:08] It has the ability to plan, even if only to a limited degree.

[00:08:13] And the fourth thing is reasoning.

[00:08:17] So using the world model, using persistent memory, it can in some sense do formal reasoning.

[00:08:23] We've all seen our pets do this.

[00:08:25] And today's AIs just don't do any of that.

[00:08:28] It's all a form of recall and remixing of what's inside their giant memories.
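As a rough illustration of those four ingredients, here is a toy Python skeleton. The class, its methods, and the string-matching "world model" are purely illustrative assumptions, not anything from Meta or FAIR; each method just marks where persistent memory, a world model, planning, and reasoning would plug in.

```python
from dataclasses import dataclass, field

@dataclass
class CatLikeAgent:
    # 1. Persistent memory: survives across interactions.
    memory: list = field(default_factory=list)

    def predict_world(self, state: str, action: str) -> str:
        # 2. World model: predict how the world changes if we act (toy rule).
        return f"{state} after {action}"

    def plan(self, state: str, goal: str, actions: list) -> str:
        # 3. Planning: simulate each action with the world model and keep
        # the first one whose predicted outcome mentions the goal.
        for action in actions:
            if goal in self.predict_world(state, action):
                return action
        return actions[0]  # fall back if nothing reaches the goal

    def reason(self, state: str, goal: str, actions: list) -> str:
        # 4. Reasoning: combine the world model and planning to decide,
        # then record the decision in persistent memory.
        choice = self.plan(state, goal, actions)
        self.memory.append(f"in {state!r}, chose {choice!r}")
        return choice

agent = CatLikeAgent()
print(agent.reason("kitchen floor", "countertop",
                   ["nap", "meow", "jump to countertop"]))
print(agent.memory)
```

The point of the sketch is the contrast LeCun draws: each capability is an explicit component the agent consults before acting, rather than recall from a single giant memory.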

[00:08:32] Did LeCun tell you if he feels like there are any risks created by AI?

[00:08:38] LeCun has said that the risks created by AI that we need to address need to be dealt with at what's called the application level.

[00:08:49] And what he means is if somebody is using AI, for example, and this is a real world example, to determine who should get sentenced for a crime and for how long.

[00:09:00] And we discover that that algorithm is biased.

[00:09:03] And we did discover that.

[00:09:05] That is a harm inherent in the misapplication of the technology to a particular problem and a particular set of data.

[00:09:14] There are tons and tons of examples of this.

[00:09:18] And obviously, these are ways in which AI is very dangerous.

[00:09:22] What would it mean for the AI boom and for the companies that are developing AI if LeCun is right and today's AI just isn't that intelligent?

[00:09:32] If LeCun is right, then the companies that can only justify what they're spending on training these giant models by the prospect of creating so-called artificial general intelligence are in deep trouble, because it means they've got to pivot to other revenue models.

[00:09:54] That was our tech columnist, Christopher Mims.

[00:09:57] And that's it for Tech News Briefing.

[00:09:59] Today's show was produced by Julie Chang with supervising producer Catherine Millsop.

[00:10:04] I'm Zoe Thomas for The Wall Street Journal.

[00:10:06] We'll be back this afternoon with TNB Tech Minute.

[00:10:09] Thanks for listening.