We’re hearing from our reporters and columnists about some of the biggest companies, trends and people in tech, and what could be in store for 2025. The underlying technology behind artificial intelligence chatbots, called a “transformer,” may have profound implications for a range of industries and could hold the answers to some of science’s toughest questions. WSJ tech columnist Christopher Mims joins us to discuss how some researchers hope to push the boundaries of what’s possible with AI in 2025. James Rundle hosts.
Sign up for the WSJ's free Technology newsletter.
[00:00:00] Amazon Q Business is the generative AI assistant from AWS because business can be slow, like wading through mud.
[00:00:07] But Amazon Q helps streamline work, so tasks like summarizing monthly results can be done in no time.
[00:00:12] Learn what Amazon Q Business can do for you at aws.com slash learn more.
[00:00:20] Welcome to Tech News Briefing. It's Friday, December the 27th. I'm James Rundle for The Wall Street Journal.
[00:00:27] We're hearing from our reporters and columnists about some of the biggest companies, trends and people in tech,
[00:00:32] and what could be in store for 2025. Coming up on today's show: artificial intelligence is everywhere,
[00:00:39] propelled by the runaway success of OpenAI's ChatGPT and other models.
[00:00:44] But the tech behind generative AI is far more than just the engine for a fancy chatbot.
[00:00:49] Researchers are exploring how the technology might be used to create bacteria that eats plastic,
[00:00:54] self-driving cars or potential cures for cancer.
[00:00:57] Our tech columnist Christopher Mims joins us to talk about how the bleeding edge of AI research may go mainstream next year.
[00:01:06] Christopher, many people have become familiar with AI as essentially a conversational search tool in recent months,
[00:01:12] thanks to ChatGPT and other platforms.
[00:01:14] However, the underlying technology has greater applications.
[00:01:18] Tell us about the transformer and why it's so important.
[00:01:21] So in 2017, some researchers at Google Brain, one of the company's AI outfits, published a paper called "Attention Is All You Need."
[00:01:30] And that started this supernova explosion of AI that we've seen since.
[00:01:37] And what was key about that paper was that it introduced a suite of algorithms that give us a new model for how to create, in a computer, a universal learner:
[00:01:53] something that can extract, from any large body of data that has inherent structure in it, like language,
[00:02:02] the underlying order of that data.
[00:02:07] And it's the reason we have ChatGPT, for example.
[00:02:12] But what's interesting is the world is full of structured data,
[00:02:16] which we can apply the transformer architecture or algorithms to.
[00:02:22] And the result is kind of a GPT for all kinds of things, right?
[00:02:28] For drug discovery, for synthetic biology, for self-driving cars, for robots, etc.
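[Editor's note: the core operation that "Attention Is All You Need" introduced, scaled dot-product attention, can be sketched in a few lines of NumPy. This is an illustrative toy, not the full transformer architecture, and the 4-token, 8-dimension example is made up for demonstration.]

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    the core operation from "Attention Is All You Need" (2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to every other token
    # Numerically stable softmax: each row of weights sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of the value vectors

# Toy example: 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (4, 8)
```

Nothing in this operation is specific to words; the tokens can just as well represent molecules or robot actions, which is what makes the "GPT for all kinds of things" idea work.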
[00:02:34] So what are the ways in which companies are hoping this can be used in 2025?
[00:02:39] Companies are using it to, for example, create new molecules.
[00:02:45] And the analogy here is that when you're using ChatGPT, you're not really having a conversation with an AI.
[00:02:51] It's as if you're in the same Google Doc and you are collaborating.
[00:02:54] You're writing a collaborative story, but the framing is that it's a chat.
[00:02:58] So you write some, then the robot writes some, etc.
[00:03:02] And in biology, what people have done is instead of feeding these transformer models all of the text on the internet,
[00:03:08] which is what it took to get ChatGPT, they've fed them every organic molecule
[00:03:14] which has ever been characterized in a scientific paper and everything we know about that molecule,
[00:03:19] what it does in the real world, its function.
[00:03:23] And so then you go to that kind of bio GPT and you prompt it with,
[00:03:28] well, I want a molecule that does this, you know, it treats this particular cancer.
[00:03:33] And just like ChatGPT, it continues the dialogue.
[00:03:37] It autocompletes what might come next, which, instead of a sentence,
[00:03:42] is a proposed molecular sequence that could potentially make up this new drug,
[00:03:50] or it could be a new enzyme that would go into bacteria to digest all the plastic in the Great Pacific garbage patch.
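[Editor's note: the "autocomplete" framing described here is the same decoding loop ChatGPT uses, just over chemistry tokens instead of words. The sketch below is a toy: the transition table is a hand-written stand-in for a trained model, which in a real system would learn next-token probabilities from millions of characterized molecules.]

```python
# Toy "molecule GPT": autocomplete a token sequence one step at a time.
# NEXT_TOKEN is a hand-written stand-in for a trained transformer; real
# models learn next-token probabilities from large corpora of molecules
# (e.g. SMILES strings or protein sequences).
NEXT_TOKEN = {
    "<prompt>": "C",   # the prompt: "I want a molecule that does this..."
    "C": "C(",
    "C(": "=O",
    "=O": ")O",
    ")O": "<end>",
}

def autocomplete(prompt: str, max_steps: int = 10) -> list[str]:
    tokens = [prompt]
    for _ in range(max_steps):
        nxt = NEXT_TOKEN.get(tokens[-1], "<end>")  # greedy decoding
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the prompt token

# Joined together, the tokens spell CC(=O)O, the SMILES string for acetic acid.
print("".join(autocomplete("<prompt>")))
```

The point of the sketch is the loop, not the chemistry: propose the next token, append it, repeat until the model emits an end marker, exactly as a chatbot continues a sentence.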
[00:03:58] So this is, to me, one of the most interesting and compelling examples.
[00:04:01] But in this same vein, people are taking these transformers and feeding them tons of actions that a robot can take.
[00:04:09] And then they can help power robots, which can teach themselves how to perform certain tasks.
[00:04:16] And that's been kind of a holy grail of robotics.
[00:04:18] So the end result is a company called Physical Intelligence has been showing off a robot that can fold your laundry.
[00:04:25] Turns out this is probably the hardest problem in robotics right now.
[00:04:29] It's even harder than a Boston Dynamics robot doing parkour.
[00:04:33] And what's key about that is it basically learned on its own how to fold laundry.
[00:04:39] It wasn't a series of scripted actions.
[00:04:42] So a lot of highly specialized GPTs, for want of a better word, are going into development.
[00:04:47] What are some of the limitations that researchers are running into?
[00:04:50] The biggest limitation when you're trying to create a new transformer model is always data.
[00:04:56] So it's not a coincidence that the world's first GPT was for language because, of course, the internet is just full of text.
[00:05:05] And you can go and scrape it all.
[00:05:07] And it's a legal gray area about whether you're allowed to do that or not.
[00:05:10] But there isn't a similar gigantic free library of actions that a robot could take or decisions that a self-driving car could make in various situations.
[00:05:24] So getting that data, having that data, that's the real differentiator.
[00:05:30] That's the moat separating the startups and big companies that are able to leverage transformers to do these kinds of miraculous new things from those that can't.
[00:05:40] I think that's a really interesting point because on the surface, it seems like magic.
[00:05:44] These algorithms can do incredible things.
[00:05:46] But how important is it to separate hype from reality?
[00:05:50] In your column, you describe what transformers do as essentially being able to ingest the totality of the data given to them and make reasonable assumptions about, for instance, the keywords in a sentence or what comes next.
[00:06:02] But that's not the same as truly comprehending language or truly making decisions on its own.
[00:06:06] Yeah, these models absolutely aren't intelligent in any real sense of that word.
[00:06:13] They are incredibly good at fooling us into thinking that they might be intelligent because, of course, they can autocomplete any chain of text that we can give them.
[00:06:24] They're good at taking huge amounts of data and using all of the implicit relationships in that data,
[00:06:32] which you might call knowledge, to try to reason by analogy in a very primitive way.
[00:06:39] But it doesn't have what psychologists call a world model.
[00:06:43] It doesn't have anything approaching human or animal sentience.
[00:06:48] It really is proof, as one AI researcher put it to me, that you can have really facile, fluid communication with zero intelligence behind it.
[00:07:02] Now, that doesn't mean it's not useful, because something that can ape human intelligence, can copy it, can simulate it, can do a lot of really low-level knowledge work tasks very quickly
[00:07:17] and replace a lot of the drudge work that humans are doing.
[00:07:21] And that's why you're seeing it show up in customer service chatbots or systems which do back-office drudge work, like processing invoices or something like that.
[00:07:33] Coming up after the break, we'll hear more about AI's promises and pitfalls.
[00:07:38] Stay with us.
[00:07:45] Amazon Q Business is the new generative AI assistant from AWS.
[00:07:49] Because many tasks can make business slow, as if wading through mud.
[00:07:54] Uh, help?
[00:07:56] Luckily, there's a faster, easier, less messy choice.
[00:07:59] Amazon Q can securely understand your business data and use that knowledge to streamline tasks.
[00:08:04] Now you can summarize quarterly results or do complex analysis in no time.
[00:08:09] Q got this.
[00:08:10] Learn what Amazon Q Business can do for you at aws.com slash learn more.
[00:08:19] As the world embraces AI, as companies look at different ways it can be used, what other considerations are there around AI?
[00:08:26] For instance, the environmental impact of data centers, the privacy concerns around the data, and everything else that goes into these systems.
[00:08:33] My top three concerns with AI are number one, over-reliance on it.
[00:08:41] I mean, this is the subject of countless science fiction novels.
[00:08:44] And now we're seeing it happen in the real world.
[00:08:48] If we hand over decision-making to these AIs and we don't have sufficient knowledge about how they actually make decisions, we can get ourselves into a world of trouble.
[00:08:58] There are countless examples of this and there will be many, many more as we try to automate more tasks and hand them over to AIs where AIs are systematically making bad or biased decisions.
[00:09:09] That's my number one concern.
[00:09:11] Number two is the environmental impact of AI.
[00:09:14] Because, of course, it's incredibly energy-hungry now.
[00:09:17] I mean, that will change over time as it gets more optimized.
[00:09:20] But there's also some evidence that the more we ask of AI, the more energy it's going to take to answer our questions.
[00:09:30] So we seem to have, as with so many other things, an infinite appetite.
[00:09:34] So today's AIs might become way more efficient as computer scientists optimize those algorithms.
[00:09:42] But in the future, that probably will just mean we'll use more AI or we'll have the AI talking to itself more in order to do more reasoning on our behalf.
[00:09:52] The third thing that I'm worried about with AI is essentially malinvestment.
[00:09:57] You know, we're clearly in a bubble right now of spending and investment in AI.
[00:10:03] But it's very likely that at some point in the next few years, you're going to see a lot of AI startups go belly up.
[00:10:12] There could be another sort of AI winter.
[00:10:15] There's going to be a lot of big companies that are going to make big investments in certain types of AI and ultimately find that they don't yield the productivity boosts they were hoping for.
[00:10:25] Continuing that thought: since the advent of generative AI in particular, development seems to have proceeded at a breakneck pace. Do you expect that to continue?
[00:10:34] Or do you expect that it will plateau at some point, given the challenges we've spoken about in terms of access to data, environmental concerns, everything else?
[00:10:42] In terms of the capabilities of today's chatbot style generative AIs in particular,
[00:10:48] we're definitely hitting a wall in terms of improvements in their performance.
[00:10:52] What we're entering now is what one economic historian calls the installation phase of a technology,
[00:11:01] which is when you go from the early adopters to, okay, now how does it really work for real people?
[00:11:05] How does it help people in the real world?
[00:11:07] And how can they figure out how to work it into their everyday lives?
[00:11:12] And that process can take decades.
[00:11:13] So even though the capability of the models seems to have plateaued for now,
[00:11:19] we're going to have decades of people figuring out how to make it a part of their lives and their businesses,
[00:11:25] just as we did with the PC or the mobile phone or 4G and the cloud and all of that.
[00:11:33] That was our tech columnist, Christopher Mims.
[00:11:36] And that's it for Tech News Briefing.
[00:11:38] Today's show was produced by Julie Chang.
[00:11:39] I'm your host, James Rundle.
[00:11:42] Jessica Fenton and Michael LaValle wrote our theme music.
[00:11:45] Our supervising producer is Catherine Millsop.
[00:11:48] Our development producer is Aisha Al-Muslim.
[00:11:51] Scott Soloway and Chris Zinsley are the deputy editors.
[00:11:55] And Philana Patterson is the Wall Street Journal's head of music.
[00:11:59] Thanks for listening.