AI, “normies,” and ethical consumption algorithms with Julia Longoria | Good Robot
TED Tech · April 18, 2025 · 20:33 · 37.65 MB

“The AI revolution is here. Can we build a Good Robot?” asks Vox’s newest miniseries, Good Robot. Join host Julia Longoria in conversation with Sherrell to discuss the ideological divide within the AI community. Sherrell and Julia talk about how Julia leverages her background as a Supreme Court reporter to condense complex topics into accessible and exciting explainers, AI’s encroachment on the media industry by “pilfering” works of authors and journalists, and why algorithms could be implemented to ensure ethical consumption – and higher quality information.

Learn more about our flagship conference happening this April at attend.ted.com/podcast


Hosted on Acast. See acast.com/privacy for more information.

[00:00:05] This is TED Tech, a podcast from TED. I'm your host, Sherrell Dorsey. Today, we're bringing you something a little different. It's a special conversation between me and Julia Longoria, host and creator of Good Robot, a new series from Vox's Unexplainable podcast.

[00:00:22] Good Robot is a four-part special series about the people shaping technology and the consequences of getting AI right or wrong. In our interview, Julia shared her experience diving into the world of AI. We talked about AI's future and the people leading the change. We also dig deep into the relationship between journalism and AI and how to report on a complex topic that might also be impacting the media industry directly.

[00:00:50] I loved Julia's pedestrian approach to how she asks questions of those in the know, being okay with being a novice and asking the kinds of questions many of us non-technical normies, as she says, have but haven't had the level of access or time to research in order to understand.

[00:01:10] We're also sharing the first episode of Good Robot here, so you can find it in our TED Tech feed after this conversation. We'll dive into my conversation with Julia Longoria next, but first, a quick break to hear from our sponsors. Julia, it's so great to be in the virtual room with you. Thank you. Likewise. Thank you for having me.

[00:01:39] Absolutely. Absolutely. So let's go ahead and kick things off. From More Perfect to The Experiment and now Good Robot, you've tackled very different subject matter. So what drew you to AI specifically for this latest project?

[00:01:52] Yeah, it's a great question. I'm drawn to topics that are complex and convoluted and unknowable and spoken in a language that laypeople can't understand. I think AI is something that felt very daunting to me. Anytime I hear news about AI, I'm like, what are y'all talking about? So I think I was drawn to it as a challenge to bring it down to earth for people like me, or normies like me, as people in AI have called me.

[00:02:20] What are some of the challenges and rewards of transitioning between such diverse areas of focus? And how does this really help you expand the breadth of reporting and really enrich your coverage of AI, you know, as a normie, but also as a reporter?

[00:02:34] There are definitely drawbacks to being a generalist like that. I like to say that I like to tell stories in a language that's nuanced and complex enough for people in the know to learn something new from it, but also simple enough that a high schooler could understand.

[00:02:52] I think the challenge at the beginning of a reporting journey where you don't know anything is like, you're so ignorant that the people who want you to know stuff are a little frustrated by you and maybe don't want to talk to you. So you really have to do your research before you start reaching out to folks. So in that sense, I feel like I'm always learning.

[00:03:12] There's a steep learning curve, but at the same time, it's so rewarding to be able to bring a complex topic like that down to earth and realize, hey, wait, I can understand this too. Like anybody can understand this if they just ask people in the know to break it down for them.

[00:03:28] To that point, for those that refused or didn't have the capacity at the time to help you break some fundamentals down around AI, were you able to convince them later on through this process? This is not my first rodeo. So I don't think I approached people with total ignorance. Like I did my research before I went out in the field.

[00:03:52] But I think certainly when I was first starting out in like the law beat, I approached plenty of judges or professors who were like, don't waste my time. Have you read my eight books? And I learned to read the eight books or at least skim them and be familiar with them enough to be like, I know the answer to some of my questions.

[00:04:18] I'm a surrogate for the listener who doesn't know. So please bear with me as I ask some ignorant questions and I'm a little bit dense. I kind of preface some of my simpler questions to people in the know with that. Through the Good Robot series, you really delve into the ideological divide within the AI community. And there's tons and tons of research about the negative impacts of AI and all of the opportunities within AI.

[00:04:45] So how do you really navigate presenting these different perspectives as fairly as possible? I was tasked with doing a series about AI and it ended up almost being a series about the politics of AI or something, which makes sense because I come to this as a Supreme Court reporter. So I feel like I'm programmed or we're all programmed to listen for the disagreement. Or maybe it's just that humans always find their group and we often are polarized in our beliefs.

[00:05:14] Just like in American politics, right? There are beliefs and almost cultural norms and this language that different groups or political parties use as we try to find our tribe. And so I found that in the AI world, there was this infighting. And I think, like political parties, they actually had more in common than they wanted to admit.

[00:05:40] And so what I love to do is help people who would never see themselves as allies or who can't talk to each other in real life, put them in dialogue with each other on the radio or on a podcast. And so navigating that was really tricky because I didn't even realize that there were these groups or this polarization within AI when I first started out. And it just slowly came out. I was like, wait a second. I think these people are fighting with each other.

[00:06:10] Why? And that's the way that ignorance and coming to this as a normie is a real gift, because you can just come in and ask the ignorant question like, wait, what? Why don't you agree with them on that? Aren't you saying the same thing? You know, honestly, that's really reflected in your series as well, because you do share voices from critics to advocates. And so I think as a listener, we get a balanced perspective in many ways.

[00:06:38] I think one that's kind of optimistic and one that's somewhat, you know, cynical or maybe not cynical, but offers a critique about, hey, there's more questions that we need to ask about some of these tools and technologies that are emerging. What do you see as the role of journalism, you know, in mediating some of these very, very heated debates surrounding AI's future? I think I came to this as someone frustrated with a lot of the journalism around artificial intelligence.

[00:07:08] A lot of times when you turn on a podcast about AI or you read an article about AI, it feels like you're listening into a conversation midway through where there's all these terms and these premises that were agreed on before you got there. And you're like, wait, what is AGI? What is superintelligence? What is all this stuff? What are you guys talking about? And it feels very, very isolating. Very like, oh, they're talking about something that I couldn't possibly understand.

[00:07:34] So I hope that there's more journalism that invites more people into the conversation about this technology that I do think is revolutionary, that I do think will change the world. I just want more and more people to be part of shaping its future. Okay, let's take a quick pause to hear from our sponsors. My conversation with Julia continues after this break.

[00:07:58] I have this conversation a lot. And it's just something I'm curious about on your end, because, you know, AI and many of these data sets, even when we're thinking of AI tools like ChatGPT now being kind of the go-to search.

[00:08:22] Well, they're also pilfering, you know, from journalists and authors, right? And, like, that's been a huge issue as early as last year when, like, the New York Times, you know, began to sue some of these instruments for leveraging and using their data, number one, without permission, or at least some kind of attribution or even payment, right, for all of this work that's been produced.

[00:08:46] So, how do you grapple with, you know, the fact that even your work as a journalist is being used by AI? You know, it's part of their training data. How do you reconcile that as a journalist? Or what do you even just think about it broadly? Yeah, I mean, it's something that you'll hear in the series Good Robot. Like, it's sort of part of my reporter's journey.

[00:09:09] Like, early on in deciding to tackle AI as a topic for this series, the news came out that Vox Media, which is Vox's parent company, along with a lot of other outlets, the AP, Condé Nast, dozens and dozens of newspapers, formed partnerships. In Vox Media's case, a partnership with OpenAI, the ChatGPT company, where theoretically OpenAI could train on our journalism.

[00:09:38] I knew for a fact, or at least that it was very likely, that my work would be going into these models. And I won't lie, honestly, I'm still not entirely sure what that means. I'm told that, like, they can't reproduce my voice. But, I don't know, it just seems like we don't have a lot of agency in all of that.

[00:09:58] And I understand that media companies like Vox Media are trying to have some kind of control over these companies and the way our work is used. But it does really feel like it gives you a little bit of deja vu from the internet moment, right, when all these media companies suddenly made content for free online. I guess they're trying to make a different choice now. They're trying to form these partnerships and put some guardrails around the way that our content is used.

[00:10:27] My honest answer to your question is it makes me feel helpless. And it makes me feel like we don't have a lot of control over how our work will be used by these tech companies. How do you feel about it? You know, I believe that information, accurate information and great reporting and why I even got into this space is really about the public service that journalism does.

[00:10:48] And to ensure that the population is educated and has information that they need to make decisions for themselves, for their lives, for their families, for their communities. With that said, I also think that when you have companies that profit off of said information, there should be a level of attribution and routes towards compensation as well.

[00:11:12] And so I think that, you know, good for the media companies for saying, hey, we're willing to work with you here. We're willing to license the work. Now, we have paid, hardworking journalists who produce and verify this work, and fact-checkers and producers and everyone that, you know, makes up this industry.

[00:11:30] And also I think that having some level of like checks and balances or accountability, just as a user of some of these tools, it also makes me feel much more confident knowing that, hey, information is being derived from credible sources and checked. And these particular entities are working in tandem to ensure quality of information or quality of data.

[00:11:53] And there's also some level of an ethical consumption algorithm, which I think, as much as we love the idea of advanced technologies, we are not always aware, once these things are being produced, of some of the harms as well as the potential. And so it's really hard to kind of predetermine here's how it's going to affect our industry. You've highlighted many personal and societal implications of AI throughout the series.

[00:12:23] I'd love to learn about how you balance explaining both the complex technical aspects of AI. So those really deep conversations with those who know or those who grow or those who develop this whole thing. And you really balance that with making it relatable and understandable for, to use your words again, for us normies.

[00:12:47] How do you go about creating that balance between the complexities and also breaking that down so that it is digestible and understandable? With a lot of drafts. With episode one, we were like on V31 or something like that. I think once I dove into this world that I felt like was very inaccessible to me as a normie, I found myself speaking the language that I was so annoyed by. I was using the shorthands.

[00:13:14] And I think it was really valuable to keep presenting our drafts of our episodes to new normies to make sure that we weren't getting too deep in the weeds and that we were always speaking to, you know, the person who's coming to this fresh.

[00:13:32] So it's not easy, I think, in anything, especially in like Supreme Court reporting and law reporting, just always being so committed to keeping it for the layman, keeping the audience that we want in mind. Just thinking about the high schooler, thinking about explaining it to my grandma or something, just always editing and revising. But also not wanting to dumb it down either.

[00:13:56] Right. I think when people talk to an audience that they think is dumb, it shows, and people don't want to be talked down to either. So I think trying to include as much detail as you need to understand the technology. Yeah, I love that. It's, you know, really valuing your audience as, you know, capable, and that they just haven't had the opportunity to get exposure. They're not going to just have a sit-down with someone who's working in AI.

[00:14:25] And I think, too, just to kind of continue to delve into the language part, you know, we've heard lots of interesting terms to describe AI. There's a lot of loaded language that exists today. There's like, you know, this existential risk or this like magic intelligence in the sky.

[00:14:41] So how do you and your team demystify these terms and really get to the heart of some of the real-world implications in a way that's not alarmist, but brings attention and calls attention to, again, both the opportunities, but also some of the maybe riskier aspects of AI? Yeah, I think that the whole series was really an attempt to demystify some of those terms that I'd heard by telling their origin story.

[00:15:09] Here's where this idea of existential risk came from. Here's one of the early thinkers who wrote about it, who popularized it. Here are some of his other ideas. We tell a story of the rationalists and Eliezer Yudkowsky. And just knowing that it came from a certain like intellectual soup and, oh, this is how these fears emerged.

[00:15:32] And obviously more people are afraid of those things now than they were in like the early 2000s when he first started writing about this. But I think it helps to know where these ideas came from and who came up with the terms and to be able to be appropriately critical, but also take it seriously. It's such a broad space. It can mean everything at one time and nothing.

[00:15:58] And so how do you balance your own curiosity also with the need to present a balanced narrative and interesting storytelling to the listeners and to the audience? What are the core tenets of what you need to know in this space? We spoke to a lot of people. Like you only hear really a fraction of the people we ended up speaking to in the series.

[00:16:19] And I think we tried to include like a really wide diversity of voices in the early stages of what if you were going to introduce someone to the world of AI, like what do they need to know?

[00:16:34] And I also think Vox itself and Future Perfect, which is the section of Vox that I collaborated with in reporting the series, like they came to AI in a very particular way through effective altruist ideas. And so telling that story and understanding that's just one perspective and trying to put all these different perspectives in dialogue with each other. I don't know. It was clear to me that we were only going to tell a little slice of the story of AI.

[00:17:04] And I like to think about the anthropology of different worlds, the way that humans relate to each other as they try to build a technology or run a government. So that's really the angle that we approached it from. We knew we weren't going to be able to say everything about AI. So we tried to do the version of things that was going to sound the most compelling in audio and would be a real introduction for people who didn't know much about it. So we know that AI is rapidly evolving.

[00:17:33] And as you mentioned, you can only cover so much ground in a four-part series. What do you recommend for listeners in terms of staying engaged in this ongoing conversation? You know, my hope is that if you are someone who listens to news about artificial intelligence and finds yourself extremely confused most of the time, this series will provide a baseline understanding of some of the players and some of the ideologies at play.

[00:18:01] And I've found that after reporting the series, like I can engage in AI news a lot more. I can be critical. I can ask questions. And I think also it's important to realize that this is a technology that's really in its infant stage. We're in the earliest days of AI. I think it's really easy to get wrapped up in the day-to-day coverage of it.

[00:18:25] But I try to take a long view, like try to take predictions with a grain of salt and try to be skeptical and critical of even the tools I use. And when I use ChatGPT or Claude, I try to draw on my understanding of how the technology works, of being very skeptical of the information it gives me, of double checking it. So yeah, I guess I would say don't put your head in the sand like I did before.

[00:18:53] And be critical and trust your gut. At the end of the day, this is a technology that is being built by human beings. And humans are flawed. And we can try to hold those humans who are building it to high standards and hold them accountable. I love that. So Julia, where can people find the series? Yes, you can subscribe to the Unexplainable podcast from Vox. And all the episodes are now out.

[00:19:21] So you can scroll down and find all the episodes labeled Good Robot on Unexplainable. Julia, thank you for taking the time to speak with me. And this is an incredible series. It's very timely. It's very now. And I think, again, it helps a lot of us normies trying to understand and break down what is a very, very complex topic. Phenomenal work and a phenomenal resource as well. Thank you.

[00:19:48] That was Julia Longoria, host of Good Robot, a special series on Vox's Unexplainable podcast. You can find the first episode of Good Robot, "The Magic Intelligence in the Sky," on our feed now. You can also find more episodes of Unexplainable wherever you get your podcasts. And that's it for today. TED Tech is part of the TED Audio Collective.

[00:20:11] This episode was produced by Nina Byrd Lawrence, edited by Alejandra Salazar, and fact-checked by Julia Dickerson. Special thanks to Maria Latias, Tonsika Sungmarnivong, Farrah Degrunge, Daniela Belariso, and Roxanne Hylash. I'm Sherrell Dorsey. Thanks for listening.