The TED AI Show: Humanity’s first AI election w/ WIRED's Vittoria Elliott
TED Tech · September 17, 2024 · 38:32 · 70.49 MB


2024 is the biggest election year in modern history, with over 50 countries going to the polls across the globe. And artificial intelligence has fully seeped into global politics – from deepfakes to AI bots that can ingest thousands and thousands of documents to make policy decisions. Bilawal talks with journalist Vittoria Elliott, who’s been leading WIRED’s AI Elections Project, to discuss how AI is reshaping the political landscape in surprising ways. The two explore the good, the bad, and the downright bizarre – and share what the U.S. can learn from other countries to adapt and critically engage with "the new normal."

For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts 

Learn more about our flagship conference happening this April at attend.ted.com/podcast


Hosted on Acast. See acast.com/privacy for more information.


[00:00:00] [SPEAKER_02]: Audio Collective.

[00:00:38] [SPEAKER_04]: It was while scrolling on X one day that I realized what the next few months might look like.

[00:00:42] [SPEAKER_04]: X had just updated Grok, their AI chatbot, so that it could generate images using a largely

[00:00:48] [SPEAKER_04]: uncensored open-source model called Flux.

[00:00:51] [SPEAKER_04]: And immediately, the results were far more unhinged than anything we've seen.

[00:00:56] [SPEAKER_04]: Now we've been able to generate images in the past, but this is the first time we've

[00:01:00] [SPEAKER_04]: had these capabilities put into a social media app that 250 million people use every day.

[00:01:06] [SPEAKER_04]: I saw photorealistic pictures of Kamala Harris and Donald Trump in bizarre situations

[00:01:12] [SPEAKER_04]: that range from easily click-baitable images of them lovingly holding hands to skin-crawling

[00:01:17] [SPEAKER_04]: images of the two of them celebrating 9/11.

[00:01:21] [SPEAKER_04]: Suddenly, my feed was full of wild Harris and Trump memes, and I started to realize

[00:01:26] [SPEAKER_04]: it's kind of impossible to imagine a future where AI won't have an impact on elections moving

[00:01:31] [SPEAKER_04]: forward.

[00:01:34] [SPEAKER_04]: It feels like uncharted territory, but actually in many ways the US is late to this

[00:01:41] [SPEAKER_04]: specific party.

[00:01:42] [SPEAKER_04]: Other countries around the world have been adapting their elections alongside the rise

[00:01:46] [SPEAKER_04]: of AI, and it goes beyond just memes: deepfakes and chatbots have been deployed in surprising

[00:01:52] [SPEAKER_04]: ways in some of the world's biggest elections, like in India and Pakistan.

[00:01:57] [SPEAKER_04]: And in Europe, there are even AI candidates that have run for office.

[00:02:03] [SPEAKER_04]: So what can we learn from the rest of the world to prepare for our first AI election?

[00:02:08] [SPEAKER_04]: And is it really just doom and gloom or can AI actually be good for democracy?

[00:02:18] [SPEAKER_04]: I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a

[00:02:23] [SPEAKER_04]: world where AI is changing everything.

[00:02:32] [SPEAKER_03]: Imagine this. In 2030, the CFO of a Fortune 100 company is a bot.

[00:02:38] [SPEAKER_03]: I'm Paul Michaelman, and on Imagine This will be exploring possible futures

[00:02:42] [SPEAKER_03]: and the implications they hold for organizations. Joining me will be BCG's top experts

[00:02:47] [SPEAKER_02]: as well as my co-host Gene, BCG's conversational gen AI agent.

[00:02:52] [SPEAKER_02]: Blending human creativity with AI innovation, this podcast promises an unmatched listening journey.

[00:02:59] [SPEAKER_02]: Join us on Imagine This from BCG.

[00:03:02] [SPEAKER_00]: Hey TED AI listeners, if you enjoy this show, we think you'll enjoy a new show about how tech

[00:03:07] [SPEAKER_00]: is changing our politics. It's called Wired Politics Lab, hosted by Wired Senior Politics Editor

[00:03:13] [SPEAKER_00]: Leah Feiger, and a rotating cast of Wired's political reporters. Each week, you'll be guided

[00:03:19] [SPEAKER_00]: through the exciting, challenging and sometimes entertaining vortex of internet extremism,

[00:03:24] [SPEAKER_00]: conspiracies, and disinformation. From AI chatbots running for political office to deepfakes

[00:03:29] [SPEAKER_00]: influencing voters, you'll hear in-depth analysis and conversations based on facts and research.

[00:03:34] [SPEAKER_00]: Find and follow Wired Politics Lab wherever you listen.

[00:03:42] [SPEAKER_04]: Today I'm talking with Vittoria Elliott, a reporter at Wired who specializes in disinformation

[00:03:47] [SPEAKER_04]: and social media platforms, and she's in the middle of a massive endeavor to collect stories

[00:03:52] [SPEAKER_04]: about AI and elections from around the world for Wired's AI Elections Project.

[00:03:58] [SPEAKER_04]: If you go to the website, you'll see they've got a geospatial map that's basically tracking

[00:04:02] [SPEAKER_04]: all the ways this tech is being used. When I checked it out, I was struck by how important a tool

[00:04:07] [SPEAKER_04]: like this can be. It's a perfect example of what I'd call open source intelligence,

[00:04:12] [SPEAKER_04]: a compilation of public sources that can reflect relevant insights and build awareness.

[00:04:17] [SPEAKER_04]: And who knows, maybe those insights could be the key to updating our democracy's firmware

[00:04:21] [SPEAKER_04]: ahead of our first AI election. I was really interested in asking Vittoria to walk me through

[00:04:27] [SPEAKER_04]: some of the most important stories she'd come across and how AI's reshaping elections around the globe.

[00:04:38] [SPEAKER_01]: Hi, Vittoria. Welcome to the show. Hi, Bilawal. Well, thank you so much for having me.

[00:04:44] [SPEAKER_04]: So let's get right into it. Can you start by walking us through the Wired AI Elections Project?

[00:04:49] [SPEAKER_01]: Yeah. So this year is the biggest election year pretty much ever. There's about 65 elections

[00:04:56] [SPEAKER_01]: that are happening around the world. This is a massive year for democracy, and it's also not only

[00:05:01] [SPEAKER_01]: the biggest election year we've probably ever seen, but it is certainly the biggest election year

[00:05:05] [SPEAKER_01]: since the advent of the internet and social media. And I think we saw it as a moment to really take

[00:05:12] [SPEAKER_01]: stock of how technology and democracy are interacting, particularly in this moment of generative

[00:05:19] [SPEAKER_01]: AI. And so part of it was to see how is this actually going to impact democracy, but also, you know,

[00:05:25] [SPEAKER_01]: are the things that people were afraid of? Are those the trends we're going to see or is it

[00:05:29] [SPEAKER_01]: going to be something else? And how are those trends going to look differently in different parts

[00:05:33] [SPEAKER_01]: of the world? So much of technology discourse is really US-centric because a lot of big

[00:05:38] [SPEAKER_01]: technology companies are US companies. But that doesn't mean that's what that technology or

[00:05:44] [SPEAKER_01]: that innovation is going to look like in other places, that doesn't mean that's how it's going

[00:05:47] [SPEAKER_01]: to be used in other contexts. We have a big world map that shows every country that's having

[00:05:52] [SPEAKER_01]: an election this year and then whether or not there have been instances of generative AI related to

[00:05:58] [SPEAKER_01]: elections showing up. So this could look like the Joe Biden robocall that happened right before

[00:06:04] [SPEAKER_01]: the New Hampshire primaries, where a voice clone of Joe Biden called all these voters in New

[00:06:10] [SPEAKER_01]: Hampshire discouraging them from voting in the Democratic primary. That's an instance of generative

[00:06:15] [SPEAKER_01]: AI, but then we're also seeing things like in Pakistan, the former Prime Minister Imran Khan,

[00:06:22] [SPEAKER_01]: a deepfake of him that he authorized, that his party authorized, declaring victory,

[00:06:29] [SPEAKER_01]: speaking to people because he was in prison. And so we see both these like really interesting examples

[00:06:34] [SPEAKER_01]: of something being like sanctioned and unsanctioned. So we have these types of examples in the project

[00:06:40] [SPEAKER_01]: and they're broken down into regional categories like North America, Africa, Europe, etc.

[00:06:46] [SPEAKER_01]: And so a user can look at the map and see how far this technology is reaching in terms of

[00:06:52] [SPEAKER_01]: where it's showing up in elections and description of what happened when it happened, where it happened

[00:06:58] [SPEAKER_01]: and then a link out to like other sourcing that we used to sort of confirm that this was real.

[00:07:04] [SPEAKER_04]: I think it's super exciting to have this all collated in one place. It's also slightly disconcerting.

[00:07:09] [SPEAKER_04]: And on that note, you know, I think when people think about generative AI and elections,

[00:07:14] [SPEAKER_04]: people immediately jump to deepfakes. How are deepfakes affecting campaigns in the U.S. so far?

[00:07:21] [SPEAKER_04]: At the national level you gave the Biden example, but also at the local level.

[00:07:25] [SPEAKER_01]: The deepfakes I think are the most visible form and sometimes the easiest to detect.

[00:07:30] [SPEAKER_01]: So I think that's why we see so many instances of that. And I think even in the project,

[00:07:34] [SPEAKER_01]: you'll see that there is a bias towards visual media because that's just easier in many cases to confirm

[00:07:40] [SPEAKER_01]: that something's fake. And a lot of times just because we know something's fake,

[00:07:46] [SPEAKER_01]: doesn't necessarily mean it's still not emotionally resonant. So I think a really great example of this

[00:07:50] [SPEAKER_01]: is from a couple weeks ago: Elon Musk, owner of X, formerly Twitter, tweeted a parody video

[00:07:57] [SPEAKER_01]: of Vice President and Democratic presidential nominee, Kamala Harris saying she's the ultimate

[00:08:03] [SPEAKER_01]: DEI hire, which is one of the things that the right really likes to say about her, but it was

[00:08:07] [SPEAKER_01]: using her voice saying that in the style of a campaign ad. And when he initially tweeted it out,

[00:08:14] [SPEAKER_01]: it didn't have a disclaimer saying that it was parody, it didn't have labels saying that it was

[00:08:20] [SPEAKER_01]: AI generated, which many platforms require. But I think even people who really do believe that

[00:08:26] [SPEAKER_01]: about Kamala Harris might know that that video's not real, but it's still really emotionally

[00:08:32] [SPEAKER_01]: resonant because it's the voice of this politician saying something that people kind of already

[00:08:38] [SPEAKER_01]: believed to be true. It's less about necessarily that people are getting tricked than the fact that

[00:08:44] [SPEAKER_01]: they are seeing out in the world represented something that up until this point, they've sort of

[00:08:50] [SPEAKER_01]: only personally believed to be true. And then I think when we're looking at the local level,

[00:08:54] [SPEAKER_01]: a really interesting example is Peter Dixon who's a congressional candidate in California.

[00:09:00] [SPEAKER_01]: He used AI in a campaign ad to sort of like look like he was jumping through these various

[00:09:05] [SPEAKER_01]: points in his life in different locations to sort of illustrate his background. And then we had

[00:09:11] [SPEAKER_01]: the example of Jason Palmer, who is an investor and businessman who ran in the Democratic

[00:09:17] [SPEAKER_01]: primaries in American Samoa. And he very openly used AI to conduct that campaign. He used AI

[00:09:24] [SPEAKER_01]: to have a deepfake avatar of himself. And it answered questions about his public policy that

[00:09:30] [SPEAKER_01]: people could ask it. So there's this sort of broad swath of how it can be used in a really

[00:09:36] [SPEAKER_01]: sort of legitimate way to be like, hey, this man is not going to constantly fly to American Samoa,

[00:09:41] [SPEAKER_01]: but how can people in American Samoa feel connected to this candidate? Maybe it would be good for

[00:09:45] [SPEAKER_01]: him to have an AI avatar that can answer questions that can state his policy positions.

[00:09:51] [SPEAKER_01]: And then you know, on a more hyperlocal level, there is someone who's actually running for

[00:09:56] [SPEAKER_01]: mayor of Cheyenne, Wyoming as an AI candidate, as in he has created an AI bot called VIC, the

[00:10:06] [SPEAKER_01]: Virtual Integrated Citizen. And VIC is ingesting thousands and thousands of city council documents to make

[00:10:14] [SPEAKER_01]: policy decisions. And that's what we're seeing on a local level. And that's not a deep fake.

[00:10:18] [SPEAKER_01]: So I think we're seeing them using all these really innovative ways that go beyond just trying

[00:10:22] [SPEAKER_01]: to trick people, but more around how can they be useful to campaigns? And very specifically,

[00:10:28] [SPEAKER_01]: how can they make people feel connected to these issues, even if people know that thing they're

[00:10:34] [SPEAKER_04]: dealing with is not a real human. You know, it strikes me that there is this sort of sliding scale

[00:10:39] [SPEAKER_04]: of like pure utility to let's say like deceptiveness on the other end and it is a blurry line.

[00:10:45] [SPEAKER_04]: The example that you gave of, you know, a politician basically making themselves more accessible,

[00:10:50] [SPEAKER_04]: almost like an async virtual town hall where you can go ask this politician questions and maybe

[00:10:55] [SPEAKER_04]: learn about their views a little bit better and in a more intuitive fashion. It's also interesting

[00:11:00] [SPEAKER_04]: to just see folks bolster the media that they're going to push out there, like

[00:11:05] [SPEAKER_04]: illustrating the various stages of a person's journey and being able to bring people

[00:11:09] [SPEAKER_04]: along for the ride. It just makes that feel a little bit more

[00:11:12] [SPEAKER_04]: transporting and authentic, which I think is very exciting. And then the last piece, which is, you know,

[00:11:18] [SPEAKER_04]: the meme one, you can create content that is overtly, hey, this is intended to be fake,

[00:11:24] [SPEAKER_04]: but it can still have a visceral impact on you, both in a good sense and maybe in a negative sense

[00:11:30] [SPEAKER_04]: too. Or it kind of bypasses your firewall because it triggers this emotional reaction in you. And even if

[00:11:36] [SPEAKER_04]: you know, hey, this is factually incorrect, you're still going to be influenced by that, right?

[00:11:42] [SPEAKER_01]: Totally. I mean, I think when we think about the use of AI in satire and parody in situations

[00:11:49] [SPEAKER_01]: where, again, people know it's fake: just because people know it's fake doesn't mean it's not

[00:11:55] [SPEAKER_04]: resonant to them. On that note, the lines do get blurry at times. You know, we're getting to this

[00:12:00] [SPEAKER_04]: point where events and photos are being called in quotes AI. Of course, the recent example is

[00:12:06] [SPEAKER_04]: you know, Donald Trump claimed that, you know, Kamala's crowd size was AI in quotes. How do you

[00:12:12] [SPEAKER_04]: think this dynamic alters how you run a campaign when you can like call out factual things

[00:12:17] [SPEAKER_04]: as being fake, even though I'd assume these are extensively photographed events? So there's actually a

[00:12:23] [SPEAKER_01]: term for this. It's called the liar's dividend, and it's sort of the idea that when everything

[00:12:30] [SPEAKER_01]: could possibly be fake, nothing is real. As someone who used to cover specifically tech in the global

[00:12:36] [SPEAKER_01]: south, everything comes home to roost. We see it elsewhere before it comes here. We saw that with the

[00:12:41] [SPEAKER_01]: abuse of social media for disinformation campaigns in places like India or in the Philippines before

[00:12:47] [SPEAKER_01]: it became a problem in the US. And I think when we're talking about the Liars dividend, same thing,

[00:12:53] [SPEAKER_01]: last year in India, we had politicians where real recordings of them were being shared, saying

[00:13:00] [SPEAKER_01]: bad stuff, and their immediate response was: that's fake. That's not real. It's AI. And you know,

[00:13:08] [SPEAKER_01]: I think back on the Donald Trump Access Hollywood tape. If that happened now, he would just say

[00:13:15] [SPEAKER_01]: that was fake audio. He would just say that was AI generated. And so I think what we're really

[00:13:19] [SPEAKER_01]: going to see is this further blurring of a shared sense of reality and a shared sense of truth,

[00:13:26] [SPEAKER_04]: because if nothing is real, anything is possible. There's something so, dare I say, earth-shattering

[00:13:34] [SPEAKER_04]: about that, because it's like at the core of it: wait, I can't trust my eyes anymore. Because, you know,

[00:13:39] [SPEAKER_04]: there's real evidence of wrongdoing. Now more and more people are deploying it and employing it.

[00:13:45] [SPEAKER_04]: I have to ask you when you see this happening and now you're tracking all of these examples. Good and bad,

[00:13:51] [SPEAKER_04]: where do you net out on your like zero to 10 scale of like excited to extremely scared spectrum?

[00:13:58] [SPEAKER_01]: I think I net out in the middle, mostly because what this shows me is the incredible

[00:14:10] [SPEAKER_01]: misprioritization of what we're innovating for on a sort of grander scale. Rather than like thinking

[00:14:15] [SPEAKER_01]: about the social impacts, Silicon Valley has always been move fast break things and I think

[00:14:21] [SPEAKER_01]: the AI thing is always very interesting to me because it repeats so many of the mistakes of

[00:14:26] [SPEAKER_01]: web 2.0. The idea of deploying technologies before we know their social impact,

[00:14:31] [SPEAKER_01]: the idea of not thinking about how something's going to be used outside of the context that you thought

[00:14:36] [SPEAKER_01]: you were designing for, not necessarily being able to differentiate like what's real and good

[00:14:41] [SPEAKER_01]: and what's not, not being able to control who uses your product. Like all of these sort of baseline

[00:14:47] [SPEAKER_01]: things that we're still trying to figure out like we don't even understand on a policy level

[00:14:52] [SPEAKER_01]: and even I think on a company level how we deal with content moderation. That has been the

[00:14:57] [SPEAKER_01]: conversation from basically 2010 onwards. It's been how do we deal with content moderation on web 2.0?

[00:15:02] [SPEAKER_01]: And instead of thinking about like how can we deal with this, it's all the same mistakes,

[00:15:08] [SPEAKER_01]: all the same behaviors all over again in different iterations for the AI revolution and it doesn't

[00:15:14] [SPEAKER_04]: really feel like we've learned very much. Now it strikes me, even web 2.0, let's say with social

[00:15:19] [SPEAKER_04]: media being an exemplification of that, ended up being quite valuable in the election context as well.

[00:15:27] [SPEAKER_04]: Right? I remember reading this article where it's like when the nerds go marching into the White

[00:15:30] [SPEAKER_04]: House, this is like Obama's reelection campaign circa 2012. And so I'm kind of curious,

[00:15:36] [SPEAKER_04]: given the tools at our disposal, do you think that AI could be an effective policy messaging tool?

[00:15:43] [SPEAKER_04]: If it can be factual and it seems like there's certain approaches to make it more factual than not.

[00:15:48] [SPEAKER_01]: Yeah, I mean the biggest issue is transparency. So I think a really good example for instance is

[00:15:53] [SPEAKER_01]: like Indonesia during their election this year, the man who was formerly the head of the military

[00:15:59] [SPEAKER_01]: under the country's former dictator won that election. And there were people who worked for

[00:16:06] [SPEAKER_01]: his party that openly admitted to using AI, specifically to write campaign speeches,

[00:16:11] [SPEAKER_01]: and that's great, except apparently they had built that on top of ChatGPT, and OpenAI

[00:16:18] [SPEAKER_01]: has said we don't want our product used for politics. How are they tracking that? How are they

[00:16:23] [SPEAKER_01]: ensuring that's not happening? So I think like yeah, there could be really valuable ways for this

[00:16:28] [SPEAKER_01]: to be used for, say, campaign messaging, or like the example that we saw in American Samoa,

[00:16:34] [SPEAKER_01]: where someone can sort of run, as you described, an asynchronous town hall.

[00:16:37] [SPEAKER_04]: You know this kind of brings me back to how you see social media factoring into all of this.

[00:16:43] [SPEAKER_04]: I mean since the last presidential election, we've seen how important it is in campaigning,

[00:16:49] [SPEAKER_04]: but also in spreading misinformation. The FEC made a decision earlier this year that they

[00:16:55] [SPEAKER_04]: wouldn't put new restrictions on AI in political ads, which means it'll be up to platforms.

[00:17:00] [SPEAKER_04]: And you gave this example of somebody who like illustrated the journey and stages of their life

[00:17:05] [SPEAKER_04]: in a campaign ad, what are platforms doing right now to like prepare for this like very pivotal

[00:17:11] [SPEAKER_04]: year to curb perhaps AI-generated content related to elections? Are they trying to get ahead of it?

[00:17:17] [SPEAKER_04]: Or do you think it'll again be sort of more responsive and retroactive? So first off,

[00:17:22] [SPEAKER_01]: platforms, you know, everything they're doing right now is voluntary. Number two, you know,

[00:17:27] [SPEAKER_01]: a lot of them have really leaned into labeling, or they're saying, you know, it has to be

[00:17:32] [SPEAKER_01]: labeled if there's AI-generated content on our platform. But so often,

[00:17:39] [SPEAKER_01]: it is very difficult for them to detect AI-generated stuff on their platform. We don't really

[00:17:46] [SPEAKER_01]: have a ton of transparency into how they're detecting this stuff. What systems they have in place?

[00:17:53] [SPEAKER_01]: Companies are not necessarily sharing data back and forth about what's being generated on their

[00:17:58] [SPEAKER_01]: platform so that something can be labeled consistently across platforms. And we saw this also with

[00:18:03] [SPEAKER_01]: disinformation issues where like, you know, maybe something would get taken down on Twitter, but it

[00:18:07] [SPEAKER_01]: would live on Facebook, or maybe it would get taken off of Facebook, but it would still be platformed on YouTube.

[00:18:11] [SPEAKER_01]: Like, you know, there's all these sort of gaps because these companies are not sharing information

[00:18:17] [SPEAKER_01]: or coordinating with each other except on things like child sexual abuse material or like

[00:18:21] [SPEAKER_01]: terrorism where there's actual legal implications, right? A lot of companies that have AI models have

[00:18:26] [SPEAKER_01]: said, you know, we're going to start watermarking, meaning that anything that comes off our platform

[00:18:30] [SPEAKER_01]: will have some sort of signature on it, whether or not that's visible to a human,

[00:18:35] [SPEAKER_01]: but that a machine can read to say, hey, this is AI-generated. Okay, well, that's great.

[00:18:40] [SPEAKER_01]: But that implies that everyone has to be a good actor. Yeah, exactly.

[00:18:44] [SPEAKER_01]: Everyone has to agree to watermark and everyone has to be able to read everybody else's watermarks.

[00:18:48] [SPEAKER_01]: Right? If you're a bad actor, you have no incentive to use technology that's going to watermark

[00:18:54] [SPEAKER_01]: your content. You know, we're talking about something that's detectable by a machine,

[00:18:59] [SPEAKER_01]: but will that be detectable by a human? Who knows. Yeah, this is going to be so challenging.

[00:19:04] [SPEAKER_04]: You're totally right. It's like, yeah, the good actors, you know, oh, yes, you know,

[00:19:07] [SPEAKER_04]: there happened to be some watermark in the image I uploaded from a commercial image generator.

[00:19:11] [SPEAKER_04]: But are people, as they scroll through their feed, even actually going to care about that?

[00:19:15] [SPEAKER_04]: And certainly bad actors are actively going to try to evade it, you know, either by removing the

[00:19:21] [SPEAKER_04]: watermark or using tech that doesn't have it. And that's on the visual side. I mean,

[00:19:25] [SPEAKER_04]: I'm even curious sort of about like fake engagement, and bots have been a thing for

[00:19:30] [SPEAKER_04]: at least a half decade plus now. This is a well-publicized strategy from 2016, right?

[00:19:37] [SPEAKER_04]: Russia sowing discord on social media. And their strategy was almost like, hey, let's just start

[00:19:43] [SPEAKER_04]: stirring, you know, these sort of very polarizing issues. And, you know, you emulate the kind of

[00:19:48] [SPEAKER_04]: rage bait that then drives engagement, and it's a total mess. Do you think there's a strategy for

[00:19:54] [SPEAKER_04]: addressing sort of AI-generated bot farms that can start impersonating voters and sort of add

[00:19:59] [SPEAKER_01]: themselves to this public discourse? Yeah, I mean, that is a massive problem. It's actually very

[00:20:03] [SPEAKER_01]: interesting. OpenAI released its first threat report at the end of May. And what it showed actually

[00:20:09] [SPEAKER_01]: was that foreign actors still are trying to figure out this technology, like they still are not

[00:20:14] [SPEAKER_01]: sure how useful it is, you know what I mean? Which I thought was very funny. I was like, oh,

[00:20:18] [SPEAKER_01]: I guess we're all confused. But we definitely do see instances of this. A really big strategy

[00:20:24] [SPEAKER_01]: that we see in foreign influence operations is they will link to websites that are meant to look like

[00:20:32] [SPEAKER_01]: a legitimate information source. And then they will use generative AI to populate articles that

[00:20:38] [SPEAKER_01]: reinforce certain political views. Those articles then get shared on social platforms. So it's

[00:20:43] [SPEAKER_01]: not necessarily the bots or the content themselves, though those sometimes are also AI-generated.

[00:20:49] [SPEAKER_01]: It's creating these websites that are populated with ChatGPT-style AI bullshit articles

[00:20:56] [SPEAKER_01]: and then sharing those on social platforms. But we're definitely seeing foreign influence

[00:21:00] [SPEAKER_01]: operations experiment with this. And again, I think you know, it's one of those things like

[00:21:06] [SPEAKER_01]: they're still testing it out too and seeing what's most effective. After the break, I asked

[00:21:14] [SPEAKER_04]: Victoria about how AI's reshaping elections around the world. In ways you might not expect.

[00:21:29] [SPEAKER_04]: So I want to transition to global elections because as you mentioned, the global angle here is very

[00:21:34] [SPEAKER_04]: interesting because oftentimes we're seeing the initial instantiation of these technologies being used

[00:21:40] [SPEAKER_04]: for good and bad overseas before it makes its way back here. It feels new to us in the United

[00:21:46] [SPEAKER_04]: States and maybe even in Europe, but the rest of the world's like, yeah, we've been dealing with this.

[00:21:52] [SPEAKER_04]: So I've got friends in Pakistan who I've asked what's happening, and they're like, yeah,

[00:21:55] [SPEAKER_04]: deepfaked politicians, you know, new voice recordings, so common. There's like another one that

[00:22:00] [SPEAKER_04]: floats around on WhatsApp groups almost every week. So how has AI normalized this kind of content

[00:22:06] [SPEAKER_01]: in Pakistan? South Asia, particularly India and Pakistan, places with really high concentrations of like

[00:22:14] [SPEAKER_01]: highly educated people and tech talent. That's where we're seeing a lot of this. And I think

[00:22:20] [SPEAKER_01]: you know, in Pakistan there's, you know, the sort of legitimate use of this as we mentioned with

[00:22:25] [SPEAKER_01]: Imran Khan. And then during the elections, there were also deepfakes of local politicians telling

[00:22:31] [SPEAKER_01]: people not to vote, telling people that they were dropping out of the race. And I think one of

[00:22:37] [SPEAKER_01]: the big things, particularly from the global south is audio messages are particularly common,

[00:22:44] [SPEAKER_01]: especially when people use WhatsApp. So there's a ton of instances of audio deepfakes and they

[00:22:51] [SPEAKER_01]: are very difficult to detect because you're not going to have the same signals that you would have with

[00:23:01] [SPEAKER_01]: a visual medium. And a lot of times it circulates on platforms like WhatsApp, not publicly

[00:23:08] [SPEAKER_01]: on social media, like X or Facebook. It's circulating in these closed communities on encrypted platforms.

[00:23:14] [SPEAKER_01]: It's incredibly difficult to detect and harder to debunk. India and Pakistan, they both have

[00:23:19] [SPEAKER_01]: a lot of really fabulous tech talent. And we see a lot of companies coming up to service that market.

[00:23:26] [SPEAKER_01]: But in general, most of the AI tools that we're seeing, and the tools to detect AI-generated

[00:23:34] [SPEAKER_01]: content, are trained on and built for data from the Western market. When we're looking

[00:23:39] [SPEAKER_01]: at markets in the global south, people are recording stuff on phones that are not as advanced

[00:23:46] [SPEAKER_01]: as like an iPhone, meaning that the baseline quality of the content might be lower. And that makes

[00:23:52] [SPEAKER_01]: it so much harder for these detection tools to flag if something's fake. False positives and false

[00:23:58] [SPEAKER_01]: negatives are so much more common when stuff is coming off of these lower quality phones.

[00:24:04] [SPEAKER_01]: Secondly, when you're working with markets that speak English in a non-Western way, so like

[00:24:10] [SPEAKER_01]: pidgin forms of English, accented English, there's more likely to be content that's wrongly flagged as

[00:24:16] [SPEAKER_01]: deepfaked. Like, AI is notoriously bad when it comes to non-white people. And so I think when we're talking

[00:24:24] [SPEAKER_01]: about places like Pakistan in the global south, there is a lot of interest in these technologies

[00:24:29] [SPEAKER_01]: there. But also the ways in which they are used and the ways in which they can or cannot be

[00:24:35] [SPEAKER_01]: detected is totally different. And these are all massive problems that, like,

[00:24:46] [SPEAKER_01]: frankly, these companies are barely prioritizing working on. You know?

[00:24:52] [SPEAKER_04]: Totally. I'm curious with all of these instances, is it actually changing how people ended up

[00:24:59] [SPEAKER_01]: voting or like change voter turnout? So I think that really depends on where you look. A really

[00:25:06] [SPEAKER_01]: great example came from actually a WIRED story that I didn't write. It was written by my friend

[00:25:10] [SPEAKER_01]: Nilesh Christopher and his reporting partner Varsha Bansal out of the Indian elections. They wrote

[00:25:17] [SPEAKER_01]: about AI companies that are coming up in India, now servicing the Indian market. And obviously

[00:25:22] [SPEAKER_01]: India is a massively diverse place in terms of language, religion, culture, etc. And they found

[00:25:27] [SPEAKER_01]: that in the lead-up to the Indian elections, a lot of local politicians were employing these AI

[00:25:34] [SPEAKER_01]: companies to create deepfakes of themselves similar to the Joe Biden robocall, but they were authorizing

[00:25:39] [SPEAKER_01]: it and they were calling people's numbers, the numbers of their constituents. And you know in the

[00:25:43] [SPEAKER_01]: US we might consider that kind of thing really annoying and invasive. But what Nilesh found was

[00:25:48] [SPEAKER_01]: that actually people felt that sort of personalized outreach, even though they knew it was

[00:25:53] [SPEAKER_01]: an AI tool, even though they knew it was automated. They felt very seen by that and they felt very

[00:25:57] [SPEAKER_01]: considered by that. And it actually did make them want to vote for a particular candidate to feel

[00:26:02] [SPEAKER_01]: that they were getting personalized attention through the use of these AI tools. So in that way,

[00:26:09] [SPEAKER_01]: you know, it can be extraordinarily powerful. In Indonesia, for instance, during the campaign,

[00:26:15] [SPEAKER_01]: Prabowo Subianto, who ended up winning the election, a former general under the Suharto

[00:26:21] [SPEAKER_01]: dictatorship. They used Midjourney to create an avatar of him looking very cute, friendly,

[00:26:27] [SPEAKER_01]: grandpa sort of vibes and they shared those all over TikTok and it got 19 billion views and it

[00:26:33] [SPEAKER_01]: definitely helped make him popular with young people who don't remember the Suharto dictatorship.

[00:26:40] [SPEAKER_01]: They didn't live through it and so they were susceptible to this sort of softer image of him

[00:26:46] [SPEAKER_01]: and that was totally authorized by the campaign. You know, there are ways in which this technology

[00:26:51] [SPEAKER_01]: can be used for campaigning and image management and all this stuff and that can really

[00:26:56] [SPEAKER_01]: maybe affect how people perceive a particular candidate. Absolutely. I mean, it's interesting

[00:27:01] [SPEAKER_04]: the India example that you also brought up earlier. There was a company called Rephrase.ai

[00:27:07] [SPEAKER_04]: that created an AI version of one of the biggest Bollywood actors. He did like a Cadbury chocolate advertising

[00:27:13] [SPEAKER_04]: campaign where the call to action, like purchase the chocolate, was like the local confectionery store

[00:27:19] [SPEAKER_04]: across all these regions. It's like, hey, if you're going to go buy this stuff, go buy it from here.

[00:27:24] [SPEAKER_04]: And now it's interesting to see this being applied in a political context. I think it could make a

[00:27:28] [SPEAKER_04]: huge difference. It feels like personalized outreach and then as far as distribution goes,

[00:27:34] [SPEAKER_04]: there's the one to one stuff that we're talking about. There's the TikTok example maybe on

[00:27:38] [SPEAKER_04]: the other end that you mentioned where it's like, yeah, if these videos go viral, of course,

[00:27:43] [SPEAKER_04]: that's going to make a dent at least like put you top of mind for a lot of presumably,

[00:27:47] [SPEAKER_04]: you know, in this case, young voters, the stuff in the middle going back to WhatsApp seems more

[00:27:53] [SPEAKER_04]: concerning because it feels like harder to moderate and you know, stop the spread. Like I

[00:27:58] [SPEAKER_04]: recall a few years ago in India around Kashmir. There was a little bit of instability and sort

[00:28:03] [SPEAKER_04]: of like the internet got shut off, but then also WhatsApp basically around that time,

[00:28:10] [SPEAKER_04]: instituted, hey, there's only a limited number of people we're going to let you forward a message to.

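[A per-message forward cap like the one described above can be sketched in a few lines of Python. This is an illustrative simplification, not WhatsApp's implementation: the real policy is more nuanced (for example, frequently forwarded messages can only be forwarded to one chat at a time), and all names below are invented.]

```python
# Sketch of a per-message forward cap (illustrative only; not a real
# messaging-platform API). Each message copy tracks how many chats it
# has been forwarded to, and the client refuses further forwards once
# the cap is reached.

FORWARD_LIMIT = 5  # WhatsApp's publicized limit for most messages

class Message:
    def __init__(self, text: str):
        self.text = text
        self.forward_count = 0  # forwards consumed so far

def forward(msg: Message, chats: list[str]) -> list[str]:
    """Forward to at most FORWARD_LIMIT chats in total; return
    the chats the forward actually went through to."""
    allowed = []
    for chat in chats:
        if msg.forward_count >= FORWARD_LIMIT:
            break  # cap reached: silently drop remaining forwards
        msg.forward_count += 1
        allowed.append(chat)
    return allowed
```

[The point of the cap is exactly what the speaker describes: it does not detect misinformation, it just slows the exponential fan-out of any single message.]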
[00:28:15] [SPEAKER_04]: And the idea is like, that's one way to sort of cap the spread, because, you know, it pauses,

[00:28:22] [SPEAKER_04]: you know, the spread of misinformation. And by the time the analysts go and send

[00:28:28] [SPEAKER_04]: it to like, I don't know, experts in Europe to go, you know, assess the deepfake, like the damage has

[00:28:33] [SPEAKER_01]: been done, right? Yeah. Well, I think, you know, again, if everyone was sort of agreeing like,

[00:28:38] [SPEAKER_01]: hey, we're going to watermark, or we're going to use a hashing tool. So like, that's how they

[00:28:44] [SPEAKER_01]: deal with encrypted messages. That's how they deal with like child sexual abuse material: hashing.

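[The hash-matching approach just mentioned can be sketched roughly as follows. Real systems use perceptual hashes (such as PhotoDNA) that survive re-encoding and resizing; the cryptographic SHA-256 used here only catches byte-identical copies, and all function names are illustrative.]

```python
import hashlib

# Sketch of hash-based forward blocking: flagged content is hashed
# into a database of known signatures, and outgoing content is
# re-hashed on-device (before encryption) and checked against it.

known_hashes: set[str] = set()  # database of flagged signatures

def register_flagged(content: bytes) -> str:
    """Hash flagged content and store its signature."""
    digest = hashlib.sha256(content).hexdigest()
    known_hashes.add(digest)
    return digest

def can_forward(content: bytes) -> bool:
    """Re-hash outgoing content and block it if the signature
    is already known, before it enters the encrypted channel."""
    return hashlib.sha256(content).hexdigest() not in known_hashes

fake_image = b"ai-generated image bytes"
register_flagged(fake_image)
assert not can_forward(fake_image)       # exact copy is blocked
assert can_forward(b"some other image")  # unknown content passes
```

[Because the check happens client-side before encryption, the platform never needs to read the message contents in transit, which is what makes this workable on end-to-end encrypted services.]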
[00:28:50] [SPEAKER_01]: So an image will be hashed. It'll be entered into a database. It'll have sort of a specific

[00:28:55] [SPEAKER_01]: signature. And if you try and send it, before it even leaves your phone to sort of go into that

[00:29:02] [SPEAKER_01]: space of being encrypted, it'll be sort of like caught by the platform and you will be unable to

[00:29:08] [SPEAKER_01]: forward it, because it's looked up against this database. Exactly because it'll be hashed

[00:29:12] [SPEAKER_01]: again, that sort of digital signature, that hash, will sort of ensure that. So maybe there would

[00:29:17] [SPEAKER_01]: be a way to say, we're going to hash everything created by AI, so anything that, you know, gets forwarded

[00:29:24] [SPEAKER_01]: can be sort of checked against this hash database and auto-labeled as AI. There might be a

[00:29:29] [SPEAKER_01]: way to do that if everybody agreed and invested in that. But that's an immense amount of time and

[00:29:34] [SPEAKER_01]: technology. Like, and that's the thing that I think people don't really think about when we're talking

[00:29:37] [SPEAKER_01]: about all this creation of AI generated content is now you're requiring the other side of it, which is

[00:29:42] [SPEAKER_01]: detection. You know, now you've created a whole other industry that's also about detection and that

[00:29:48] [SPEAKER_01]: also requires an immense amount of technology and investment and time and money to scale up

[00:29:55] [SPEAKER_04]: to respond to this problem. Totally. I really want to talk to you about AI Steve, you know,

[00:30:00] [SPEAKER_04]: my mind was really blown and I think it's super, it's a super interesting story. So why don't

[00:30:05] [SPEAKER_01]: you start by introducing AI Steve? Yeah, so he was a British political candidate who stood for

[00:30:11] [SPEAKER_01]: Parliament from the town of Brighton. And AI Steve was literally the candidate. He's the digital

[00:30:19] [SPEAKER_01]: avatar of actual Steve, a real man named Steve who was running for office and sort of the way

[00:30:25] [SPEAKER_01]: the campaign described it to me was actual Steve would be the physical embodiment of AI Steve.

[00:30:31] [SPEAKER_01]: He would be in Parliament doing the negotiations talking to people but all of his decisions would

[00:30:37] [SPEAKER_01]: be informed by AI Steve. And AI Steve was a sort of in that similar way, an AI avatar who could

[00:30:43] [SPEAKER_01]: respond to constituents and collect their questions. And the point of having this AI model was

[00:30:49] [SPEAKER_01]: that constituents could say here's what we care about, here's how we want you to vote on things,

[00:30:54] [SPEAKER_01]: here are the issues that matter to us and then real Steve's job was to go to Parliament

[00:30:57] [SPEAKER_01]: and do the actions dictated by AI Steve. AI Steve did not win. But in principle, I think that's

[00:31:06] [SPEAKER_01]: a really interesting idea and actually the campaign told me that the two things that people are

[00:31:10] [SPEAKER_01]: most interested in when they first launched it were the conflict in Gaza and trash collection.

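[The AI Steve loop described above, collecting constituent preferences per issue and reporting the majority position for the human MP to follow, could be sketched like this. This is a hypothetical illustration of the concept, not the campaign's actual system, and all names are invented.]

```python
from collections import Counter

# Tally constituent preferences per issue; the human MP is pledged
# to take the majority position reported for each issue.

ballots: dict[str, Counter] = {}

def record_preference(issue: str, position: str) -> None:
    """Record one constituent's position on an issue."""
    ballots.setdefault(issue, Counter())[position] += 1

def position_to_take(issue: str) -> str:
    """Return the majority position among constituents."""
    return ballots[issue].most_common(1)[0][0]

record_preference("trash collection", "weekly pickup")
record_preference("trash collection", "weekly pickup")
record_preference("trash collection", "biweekly pickup")
assert position_to_take("trash collection") == "weekly pickup"
```

[The limitation Vittoria raises below maps directly onto this sketch: the tally only sees what constituents tell it, so it can never incorporate classified briefings or other information the public does not have.]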
[00:31:16] [SPEAKER_01]: But you know, I think even then that has sort of its own limitations right because members of Parliament

[00:31:21] [SPEAKER_01]: members of Congress get classified briefings all the time and are making decisions based on information

[00:31:27] [SPEAKER_01]: that the general public may not have and so there would still be a negotiation you need to have

[00:31:32] [SPEAKER_01]: with that sort of campaign commitment to make all decisions in line with what the AI has been

[00:31:38] [SPEAKER_01]: able to collect from constituents because the reality is the AI is not going to the classified

[00:31:44] [SPEAKER_01]: national security briefing. The AI is not being given special documentation and numbers

[00:31:51] [SPEAKER_01]: in the way that actual members of government are. And so there probably is a way for AI technology

[00:31:58] [SPEAKER_01]: to be incorporated into a sense of good governance but I think that's not what we're seeing

[00:32:04] [SPEAKER_01]: prioritized in what's being built and that's certainly not necessarily I think possible when

[00:32:10] [SPEAKER_04]: the AI itself is the candidate. All right, well you certainly answered my question of like how

[00:32:15] [SPEAKER_04]: would this even work? And so it's almost like this is like a digital twin or proxy in the public domain

[00:32:21] [SPEAKER_04]: for a politician, and really it's a way of sort of getting a pulse, you know, of your constituents,

[00:32:27] [SPEAKER_04]: like a pulse of the nation, if you will. But this isn't the only example, right? There have been other

[00:32:32] [SPEAKER_04]: examples of AI candidates around the world. Can you tell me the story about the Belarus elections

[00:32:37] [SPEAKER_01]: earlier this year? Belarus is a dictatorship and one might argue a sort of proxy country of Russia.

[00:32:48] [SPEAKER_01]: The Russian military currently is staged out of Belarus for its war in Ukraine, and its dictatorship,

[00:32:54] [SPEAKER_01]: the Lukashenko dictatorship, is one of the last remaining ones in Europe, and there have been

[00:33:01] [SPEAKER_01]: incredible crackdowns on dissidents. One of the dissidents, who is in exile, created an AI candidate

[00:33:07] [SPEAKER_01]: called Yas Gaspadar to run in the country's parliamentary elections. And obviously this candidate

[00:33:13] [SPEAKER_01]: did not win. The elections in Belarus earlier this year were widely regarded as unfree.

[00:33:18] [SPEAKER_01]: But because dissent is so criminalized in Belarus, and because many of the people who might have

[00:33:25] [SPEAKER_01]: actually stood for elections are in exile this was sort of a way of using AI to represent those people,

[00:33:34] [SPEAKER_01]: you know, he can't be arrested, he cannot be subject to the ways in which the government has used

[00:33:40] [SPEAKER_01]: force to crack down on dissidents in the past. So I think this was actually a very clever use of AI

[00:33:46] [SPEAKER_01]: just like you know we saw people in the Middle East use social media to really power the

[00:33:52] [SPEAKER_01]: Arab Spring. Like, there are good, generative, democratic, creative uses of these technologies, and people

[00:34:01] [SPEAKER_01]: will find those but that doesn't mean that's always like what they're geared towards or the most

[00:34:05] [SPEAKER_04]: common use of them. Totally right I mean this is a perfect example of AI being effective if you

[00:34:10] [SPEAKER_04]: are a political dissident right like I was blown away by that one quote in that article which is

[00:34:15] [SPEAKER_04]: basically like yeah this person can't be imprisoned because he's just code and I was like wow

[00:34:21] [SPEAKER_04]: that's one way to give a voice to the voiceless. So we've gone through a lot of aspects of the current

[00:34:27] [SPEAKER_04]: state of you know let's just call it the state of AI elections in the US and some lessons

[00:34:33] [SPEAKER_04]: learned from other countries. I think at this point I just have to take a step back and ask like

[00:34:39] [SPEAKER_04]: how concerned should we really be about the ways that AI could affect the 2024 election?

[00:34:45] [SPEAKER_01]: In the US I don't know yet. I think there are a lot of things at play and I think one of the

[00:34:52] [SPEAKER_01]: things we didn't touch on which is important to note is that social platforms have rolled back

[00:34:57] [SPEAKER_01]: their investment in trust and safety and trust and safety are the people and teams who make

[00:35:02] [SPEAKER_01]: sure that hate speech, disinformation all that stuff stays off the platform and then on top of that

[00:35:07] [SPEAKER_01]: we're sort of adding this extra layer. So I think we are certainly going to see a ton of AI

[00:35:15] [SPEAKER_01]: bullshit on all the platforms. I think we're going to see more of it but I think the real thing is

[00:35:22] [SPEAKER_01]: personally I am less concerned about the elections themselves and more about what happens after.

[00:35:31] [SPEAKER_01]: We are in a moment where there is less and less and less trust than there's ever been. We have

[00:35:38] [SPEAKER_01]: now Musk-owned X. He has already started seeding narratives around, you know, illegal immigrants

[00:35:47] [SPEAKER_01]: voting, things that could very easily sort of form the intellectual foundation to question a

[00:35:53] [SPEAKER_01]: democratic victory. Those are old problems the platforms still haven't solved, and I think we'll see

[00:35:59] [SPEAKER_01]: AI play a role in those whether that's through disinformation campaigns, whether that's through

[00:36:02] [SPEAKER_01]: AI generated media, whether that's through for instance, Grock AI recently returned answers saying

[00:36:09] [SPEAKER_01]: that Kamala Harris had missed the deadline to register to be on the ballot in nine states and the

[00:36:15] [SPEAKER_01]: secretaries of state of five of those states had to write to X and say your AI chatbot is spitting out

[00:36:21] [SPEAKER_01]: bad information. I think we'll see things like that but I think the core issues underlying this which is

[00:36:27] [SPEAKER_01]: a lack of investment in trust and safety, a lack of investment in thinking about the implications

[00:36:33] [SPEAKER_01]: of these technologies and the safeguards necessary, and a real lack of trust and sense of reality

[00:36:40] [SPEAKER_01]: in terms of the shared world we're living in and trust in institutions and systems. Those

[00:36:45] [SPEAKER_01]: underpin this question more deeply than any sort of variation in the technology could.

[00:36:51] [SPEAKER_04]: That is a very profound and nuanced answer to end on. Vittoria, thanks for being on the show.

[00:36:57] [SPEAKER_04]: Thank you. After talking with Vittoria, I had a couple of ideas about how we can prepare for the next

[00:37:06] [SPEAKER_04]: few months of the election season. The first thing I'm thinking about is actually something we

[00:37:10] [SPEAKER_04]: covered in the very first episode of this podcast. When I spoke with Sam Gregory from Witness,

[00:37:16] [SPEAKER_04]: he recommended a good exercise in media literacy that I think still applies beautifully.

[00:37:21] [SPEAKER_04]: SIFT. As a refresher, that stands for: stop, investigate the source, find alternative coverage, and

[00:37:27] [SPEAKER_04]: trace down the original content that you think might not be legit.

[00:37:31] [SPEAKER_04]: Especially since the political landscape is changing at this unprecedented pace,

[00:37:36] [SPEAKER_04]: it's very important to always hit those four steps and have a wide range of trusted sources

[00:37:41] [SPEAKER_04]: at your disposal. But the second thing is a little harder to break down into specific steps

[00:37:47] [SPEAKER_04]: because it has to do with this concept that Vittoria brought up: the liar's dividend.

[00:37:52] [SPEAKER_04]: That's the idea that in a world where anything can be falsified, nothing is real

[00:37:57] [SPEAKER_04]: and so it kind of doesn't matter if you can identify disinformation out in the wild.

[00:38:01] [SPEAKER_04]: Its message can still very much influence people, even when they know it's not true.

[00:38:07] [SPEAKER_04]: It's why memes can be such powerful weapons for political messaging. They can bypass

[00:38:11] [SPEAKER_04]: that part of our brains that questions if something is real and go straight to our emotions.

[00:38:18] [SPEAKER_04]: When I saw those AI-generated memes that I mentioned at the top of the episode,

[00:38:21] [SPEAKER_04]: I realized that introducing this tool months before election day was opening the door

[00:38:26] [SPEAKER_04]: for full-blown memetic warfare, which means we're all going to have to come to terms with

[00:38:31] [SPEAKER_04]: the fact that none of us are exempt from being influenced by content. AI-generated or not.

[00:38:37] [SPEAKER_04]: It's going to be hard but we've got to modify our information diets.

[00:38:42] [SPEAKER_04]: That means adjusting how we let content impact our perceptions of reality.

[00:38:47] [SPEAKER_04]: There are some really exciting possibilities for AI technology to make politics more accessible,

[00:38:52] [SPEAKER_04]: more representative and maybe even more revolutionary. But ahead of our first AI election,

[00:38:58] [SPEAKER_04]: it'll be up to us to learn from the rest of the world and chart our own path towards the future.

[00:39:07] [SPEAKER_04]: The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard.

[00:39:13] [SPEAKER_04]: Our producers are Ben Montoya and Alex Higgins. Our editors are Banban Cheng and

[00:39:18] [SPEAKER_04]: Alejandra Salazar. Our showrunner is Ivana Tucker and our engineer is Asia Polar Simpson.

[00:39:24] [SPEAKER_04]: Our technical director is Jacob Winick and our executive producer is Eliza Smith.

[00:39:30] [SPEAKER_04]: Our researcher and fact checker is Krystian Aparta, and I'm your host, Bilawal Sidhu. See y'all in the next one.