AI Is No Substitute for the Human Brain
WSJ Tech News Briefing · May 13, 2025 · 00:14:02


The underlying architecture of AI models can simulate intelligence by memorizing endless lists of rules. But our tech columnist Christopher Mims says “thinking” is way more complicated than that. Plus, personal tech columnist Nicole Nguyen answers your questions on chatbot privacy. Victoria Craig hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices





[00:00:00] It's the cinematic event of the year. Critics say incredible, jaw-dropping, action-packed with savings. You're talking about Thunderbolts? No, Sergei, it's two-for-one cinema tickets from Compare the Market. A wonderful offer. So? That you can use to see Marvel Studios' Thunderbolts only in cinemas. Woo-hoo! I'll get the popcorn. Get two-for-one cinema tickets every Tuesday or Wednesday with Compare the Market. Simple. When you take out a qualifying product, compare the market. One membership per year, participating cinemas, two standard tickets only, cheapest free. T's and C's apply.

[00:00:30] Hey, TNB listeners. Before we get started, heads up. We're going to be asking you a question at the top of each show for the next few weeks. Our goal here at Tech News Briefing is to keep you updated with the latest headlines and trends on all things tech. Now we want to know more about you, what you like about the show, and what more you'd like to hear from us. So our question this week is, what kind of stories about tech do you want to hear more of? Business decision-making? Boardroom drama?

[00:00:57] How about peeking inside tech leaders' lives or tech policy? If you're listening on Spotify, you can look for our poll under the episode description, or you can send an email to tnb@wsj.com. Now on to the show. Welcome to Tech News Briefing. It's Tuesday, May 13th. I'm Victoria Craig for The Wall Street Journal. You asked.

[00:01:22] Now we're answering all of your burning questions about AI-powered chat tools and how to keep your personal data safe while using them. Then we're looking under the hood of those chatbots, because the companies that make them say they get smarter the more we use them. But can they really think like humans? But first, every day millions of people turn to AI chatbots for solutions to problems large and small. What kind of home gym mat should I buy?

[00:01:51] How can I best showcase my job experience on a new resume? Can you help me draft this tricky work email or craft an itinerary for a summer getaway? But what happens to all of that search data? Who owns it? And how much can you trust the answers that AI gives you? WSJ personal tech columnist Nicole Nguyen explored these questions in her recent series called Chatbot Confidential. She asked you to send your questions about data privacy on platforms like ChatGPT or Claude.

[00:02:20] And now she's here to answer some of them. Hey, Nicole. Hi. So let's just start with a voicemail that we got from Daniel Stewart. I work for a community college and was wondering, when we discuss AI privacy issues, how does it relate to FERPA? I normally have to replace student names or alter situations when I use AI, just to be safe.

[00:02:49] I wonder if there are also similar issues with HIPAA. So for our listeners who may not know those abbreviations, FERPA and HIPAA are both federal laws that govern privacy. The former affords privacy to students and parents over education records; the latter, medical patients' privacy over medical records. So Nicole, how does AI factor into all of these privacy concerns?

[00:03:14] Privacy laws do extend to AI tools, and they matter most if you're using the public-facing AI tools rather than the enterprise versions commissioned by your company; those enterprise versions are typically compliant with privacy regulations such as HIPAA, Europe's GDPR, or California's CCPA.

[00:03:36] So if you're using the consumer-grade version of, say, ChatGPT or Anthropic's Claude, then what Daniel is doing, which is replacing student names and scrubbing as much personally identifiable and sensitive information as possible, is the right move. That's a good idea. We've also got another question from Mitch. He's a photo archivist.

[00:03:58] And on X, he asked if personal family photos uploaded to AI chatbots could be stored, used to train AI models, or accessed by other people later on. So this is a complicated answer, but in many AI tools, for example, ChatGPT and Gemini, you can opt out of AI training. But you have to mark that in settings.

[00:04:23] You can also use what's called temporary chat in ChatGPT, which is like an incognito mode for ChatGPT. It does not use that information for AI training. It deletes the conversation immediately. But there's a caveat there. There is always a possibility if you're using these AI tools, because we are in the early days of generative AI, that your inputs and outputs could be subject to human review or stored for a longer time.

[00:04:52] And that's because the systems mark anything that is potentially harmful so that they can review and learn from those types of responses. And we don't exactly know what is harmful and what isn't, but we trust that these companies have reasonable policies. You know, if you use Google Drive, for instance, Google Photos, we trust that Google has reasonable policies around what is flagged and what isn't.

[00:05:21] So that's where I'll leave my answer. You can opt out of AI training, with the caveat that in some instances your data could be reviewed. That was WSJ personal tech columnist Nicole Nguyen. If you have more questions, throw them at us. You can send us a voice memo to tnb@wsj.com or you can leave us a voicemail at 212-416-2236.

[00:05:45] Coming up, we've been promised that AI chatbots will take on human-level smarts, but the list of skeptics is growing. We'll dig into that after the break. Hey, wouldn't it be great if life came with remote control? You know, you could hit pause when you needed to or hit rewind, like that time you knocked down that wasp's nest. Uh-oh.

[00:06:13] Well, life doesn't always give you time to change the outcome, but prediabetes does. With early diagnosis and a few healthy changes, you can stop prediabetes before it leads to type 2 diabetes. To learn your risk, take the one-minute test today at doihaveprediabetes.org. Brought to you by the Ad Council and its Prediabetes Awareness Partners. Can AI chatbots actually, at some point, solve problems in the same ways that humans can?

[00:06:43] In the industry, that ability is known as AGI, or Artificial General Intelligence. And increasingly, the research says AI models can take on more information to solve problems, but they do not think like humans. My colleague, Julie Chang, spoke to WSJ tech columnist Christopher Mims about what that means exactly. So Christopher, you're essentially saying that we're nowhere near AGI, is that right? That's correct. That's the right takeaway.

[00:07:10] We are definitely nowhere near AGI. And anyone who tells you differently, I think, honestly just hasn't looked that deeply into what intelligence actually is. With today's transformer-based AIs, and that's the kind of AI that underlies ChatGPT and a lot of other generative AIs,

[00:07:36] the way that they work is that they have this almost infinitely long list of little rules of thumb that they apply. To give you a concrete example, one reason these models have historically been really bad at math, even if you show them a million math problems and their correct answers, is that they learn weird stuff. If you ask them to multiply two numbers and one of them is between 200 and 211,

[00:08:01] it has a different set of little rules of thumb it uses for multiplying those numbers than it does for any other numbers anywhere else on the number line. So this is how today's AIs simulate intelligence. And a lot of people have pushed back and said, oh, well, isn't this how people think? Like we're just a big pile of rules of thumb. No, sorry, you're actually way more complicated than that. Humans have spatial, three-dimensional models that include causality and other things.
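The "different rules of thumb for different number ranges" idea can be sketched in code. This is a toy illustration, not actual model internals: the function name and the two specific shortcuts are invented for this example, which only shows how separate memorized procedures can each produce correct answers without any single general algorithm.

```python
# Toy "bag of heuristics" multiplier: a different memorized shortcut is
# applied depending on which range the first operand falls in, rather
# than one general multiplication algorithm. (Invented for illustration.)
def heuristic_multiply(a: int, b: int) -> int:
    if 200 <= a <= 211:
        # Shortcut memorized for this narrow range only:
        # decompose a as 200 plus a small remainder.
        return 200 * b + (a - 200) * b
    # A different shortcut used everywhere else on the number line:
    # repeated doubling and halving (Russian peasant multiplication).
    result = 0
    x, y = a, b
    while y > 0:
        if y % 2 == 1:
            result += x
        x, y = x * 2, y // 2
    return result

print(heuristic_multiply(205, 7))  # 1435
print(heuristic_multiply(13, 7))   # 91
```

Both branches happen to give correct answers here, which is the point: a pile of special cases can look like understanding right up until an input falls outside the cases it memorized.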

[00:08:31] As for today's transformer-based AIs, the idea that if we just make them big enough and show them enough data, they will spontaneously generate in their cybertronic brains the machinery of thought, that seems to be nonsense. In your column, you bring up this Manhattan map example. Can you talk about that and how it explains the bag of heuristics theory? So researchers think that the way that modern AIs work is what's called a bag of heuristics model.

[00:09:00] And this just means a really long list of literally millions of rules of thumb. And so, for example, one researcher took a traditional large language model and gave it turn-by-turn directions from every point in Manhattan to every other point in Manhattan and discovered that it could then regurgitate directions between any two points on the island of Manhattan with 99% accuracy.

[00:09:26] Then they probed this model to look at the map of Manhattan that it had generated, that it was reasoning from, if you can use that word, to give back these directions when you ask it for directions on the island of Manhattan. And the map they were able to extract from it looked totally crazy. Streets were connected that are far apart and diagonal to one another.

[00:09:54] It seemed to think that there were streets that jumped over Central Park and all this other craziness. And what this showed was that the AI had managed to learn a sort of mental model of what Manhattan streets work like or look like that could generate accurate directions when asked, but in no way resembled what the actual street map of Manhattan was.

[00:10:24] And so this just shows you how strange and simulated the quote-unquote understanding of an AI really is. Okay, but humans wouldn't be able to recreate a map either. Yes, there is some truth to that. The thing that reveals how weird the AI-generated map of Manhattan is, is detours. So if I told you, okay, you're trying to get somewhere in Manhattan, and one block of 7th Avenue is suddenly blocked.

[00:10:53] And you were an expert at navigating Manhattan. Would you have any trouble just going over another block, like taking a detour? No, you'd have no trouble. And the idea is because you have some kind of explicit understanding of, oh, Manhattan is a grid and I can, if I'm on a grid of streets, I can just go around a detour.

[00:11:16] And that kind of implicit understanding would be something that you had acquired in the course of learning your way around Manhattan. But the AI, when you block even 1% of the streets in Manhattan, just completely breaks down. It also shows why self-driving systems can get completely thrown by the smallest thing, something that would never throw a human being.
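The detour point can be made concrete with a small sketch. An agent that holds an explicit grid model can search the map and reroute around a blocked street trivially, whereas a system that merely replays memorized turn-by-turn directions has nothing to fall back on. The grid size and blocked edge below are made up for illustration; this is not the researchers' actual setup.

```python
from collections import deque

def shortest_path(n, blocked, start, goal):
    """Breadth-first search on an n x n street grid.
    `blocked` is a set of frozensets, each an undirected blocked edge."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            edge = frozenset({(x, y), (nx, ny)})
            if (0 <= nx < n and 0 <= ny < n
                    and (nx, ny) not in seen and edge not in blocked):
                seen.add((nx, ny))
                queue.append(((nx, ny), path + [(nx, ny)]))
    return None

# Block one street segment; search finds the two-move detour immediately.
block = {frozenset({(1, 0), (2, 0)})}
print(len(shortest_path(3, set(), (0, 0), (2, 0))))  # 3 intersections
print(len(shortest_path(3, block, (0, 0), (2, 0))))  # 5 intersections
```

Because the grid itself is the model, blocking an edge costs only a slightly longer search, which is the human-style robustness the extracted AI "map" lacked.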

[00:11:46] So is AI intelligence plateauing then? Yes. I should amend that by saying that the sort of general abilities of these AIs have definitely hit a ceiling where, for example, the latest reasoning models from OpenAI, they're actually worse at some tasks in a lot of ways. It seems like the reinforcement learning that they do with them to make them better at coding and mathematics makes them more likely to hallucinate, makes them worse at other things. So just throwing more data at it is not improving these models.

[00:12:16] We can make them better by going in and manually tinkering, or by giving them access to tools. So, for example, you can take a large language model and give it access to an explicit mathematical application that has been programmed by human beings. Then it can do math more like the way a person would if they were given access to a calculator. But the important distinction there is that now it's just software again.
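That calculator pattern looks roughly like the following sketch. Everything here is an assumption for illustration: `hypothetical_model_answer` stands in for a real model call, and the routing rule (try arithmetic first, fall back to the model) is a simplification of real tool-use systems.

```python
import ast
import operator as op

# Safe arithmetic "tool": evaluate +, -, *, / over numbers by walking
# the parsed syntax tree, instead of letting a language model guess.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr: str):
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def hypothetical_model_answer(question: str) -> str:
    # Stand-in for an actual LLM call; invented for this example.
    return "[model-generated answer]"

def answer(question: str) -> str:
    try:
        return str(calc(question))  # route arithmetic to the tool
    except (ValueError, SyntaxError, KeyError):
        return hypothetical_model_answer(question)

print(answer("205 * 7"))  # 1435
```

The model never "learns" arithmetic this way; a hand-written program does the math, which is exactly why Mims calls the result "just software again."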

[00:12:45] So we're just having to go in and put all this scaffolding around the AI because it's really not that capable at the end of the day. That was WSJ tech columnist Christopher Mims. And that's it for Tech News Briefing. Today's show was produced by Julie Chang with supervising producer Emily Martosi and additional support from Melanie Roy. I'm Victoria Craig for The Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.