The TED AI Show: Is AI destroying our sense of reality? with Sam Gregory
TED Tech · May 21, 2024 · 27:25 · 50.16 MB

Could you spot a deepfake? We’re entering a new world where generative AI is challenging our sense of what’s real and what’s fiction. In our first episode, Bilawal and Sam Gregory, a human rights activist and technologist, discuss how to protect our sense of reality.

This is an episode of The TED AI Show, TED's newest podcast. Sure, some predictions about AI are just hype – but others suggest that everything we know is about to fundamentally change. Creative technologist Bilawal Sidhu talks with the world’s leading experts, artists, journalists, and more to explore the thrilling, sometimes terrifying, future ahead.

Listen to The TED AI Show on this feed every Tuesday -- or follow The TED AI Show wherever you get your podcasts.

For more, visit https://www.ted.com/podcasts/the-ted-ai-show 

Learn more about our flagship conference happening this April at attend.ted.com/podcast


Hosted on Acast. See acast.com/privacy for more information.


[00:00:00] TED Audio Collective

[00:00:08] Okay, picture this.

[00:00:10] We're in Miami.

[00:00:11] There's sun, sand, palm trees,

[00:00:14] and a giant open-air mall right on the water.

[00:00:17] Pretty nice, right?

[00:00:19] But then, late one Monday night in January of this year,

[00:00:22] things get weird.

[00:00:24] Cop cars swarm the mall, like dozens of them.

[00:00:28] I'm talking six city blocks shut down,

[00:00:30] lights flashing, people everywhere,

[00:00:32] and no one knows what's going on.

[00:00:34] The news footage hits the internet,

[00:00:36] and naturally, the hive mind goes into overdrive.

[00:00:40] Speculation, conspiracy theories are flying,

[00:00:43] and one idea takes hold.

[00:00:46] Aliens.

[00:00:48] Folks are dissecting grainy helicopter footage

[00:00:50] in the comments,

[00:00:52] zooming in, analyzing it frame by frame

[00:00:55] to find any evidence of aliens.

[00:00:58] So I thought, I'm a TikToker.

[00:01:01] What if I brought this online fever dream to life

[00:01:03] and shared it with the masses?

[00:01:05] Using the latest AI tools, I created a video.

[00:01:08] Towering shadowy figures, silently materializing

[00:01:12] amidst the flashing lights of the police cars.

[00:01:14] An alien invasion in the middle

[00:01:16] of Miami's bayside marketplace.

[00:01:19] Just a bit of fun, I thought.

[00:01:20] Some got the joke, a Miami twist on Stranger Things.

[00:01:25] But I watched as other people flocked

[00:01:26] into my comment section to declare this

[00:01:29] as bona fide evidence that aliens do in fact exist.

[00:01:34] Now you might be wondering, what actually happened?

[00:01:37] A bunch of teenagers got into a fight at the mall,

[00:01:40] the police showed up to break it up,

[00:01:42] and that's all it took to trigger this mass hysteria.

[00:01:46] It was too easy.

[00:01:48] Too easy to make people believe something happened

[00:01:51] that never actually happened.

[00:01:53] And that's kind of terrifying.

[00:01:58] I'm Bilawal Sidhu, and this is The TED AI Show,

[00:02:01] where we figure out how to live and thrive in a world

[00:02:04] where AI has changed everything.

[00:02:13] This show is brought to you by Schwab.

[00:02:15] You're here because you like to keep a pulse

[00:02:18] on trends in technology.

[00:02:20] Well, now you can invest in what's trending,

[00:02:23] in artificial intelligence, big data, robotic revolution,

[00:02:27] and more with Schwab Investing Teams.

[00:02:30] It's an easy way to invest in ideas you believe in.

[00:02:33] Schwab's research process uncovers emerging trends,

[00:02:37] then their technology curates relevant stocks into themes.

[00:02:41] Choose from over 40 themes.

[00:02:44] Buy all the stocks in a theme as is,

[00:02:47] or customize to better fit your investing goals,

[00:02:50] all in a few clicks.

[00:02:52] Schwab Investing Teams is not intended

[00:02:55] to be investment advice or a recommendation

[00:02:58] of any stock or investment strategy.

[00:03:00] Learn more at schwab.com slash thematic investing.

[00:03:07] Support for TED Tech comes from Odoo.

[00:03:09] To put it simply, Odoo is built to save.

[00:03:12] Odoo saves time, Odoo saves money,

[00:03:15] but most importantly, Odoo saves businesses.

[00:03:17] That's right, Odoo's superhero software rescues companies

[00:03:21] from the perils of disconnected platforms,

[00:03:23] and Odoo's utility belt of user-friendly applications

[00:03:26] puts the power of total business management

[00:03:28] in the palm of your hand.

[00:03:30] Learn more at odoo.com slash TED Tech.

[00:03:32] That's O-D-O-O dot com slash TED Tech.

[00:03:36] Odoo, saving the world one business at a time.

[00:03:40] This episode is brought to you by Progressive.

[00:03:43] Most of you aren't just listening right now.

[00:03:45] You're driving, cleaning, and even exercising.

[00:03:48] But what if you could be saving money

[00:03:49] by switching to Progressive?

[00:03:51] Drivers who save by switching save nearly $750 on average,

[00:03:55] and auto customers qualify for an average of seven discounts.

[00:03:59] Multitask right now.

[00:04:00] Quote today at progressive.com.

[00:04:02] Progressive Casualty Insurance Company and Affiliates,

[00:04:05] national average 12-month savings of $744

[00:04:08] by new customers surveyed who saved with Progressive

[00:04:10] between June 2022 and May 2023.

[00:04:13] Potential savings will vary.

[00:04:15] Discounts not available in all states and situations.

[00:04:20] Hackers and cyber criminals

[00:04:22] have always held this kind of special fascination.

[00:04:25] Obviously I can't tell you too much about what I do.

[00:04:28] It's a game.

[00:04:29] Who's the best hacker?

[00:04:30] And I was like, well, this is child's play.

[00:04:33] I'm Dina Temple-Raston,

[00:04:35] and on the Click Here podcast, you'll meet them

[00:04:37] and the people trying to stop them.

[00:04:39] We're not afraid of the attack.

[00:04:41] We're afraid of the creativity

[00:04:42] and the intelligence of the human being behind it.

[00:04:45] Click here.

[00:04:46] Stories about the people making

[00:04:47] and breaking our digital world.

[00:04:50] AI machines.

[00:04:51] Satellite.

[00:04:52] Engine ignition.

[00:04:53] Click here.

[00:04:54] And lift off.

[00:04:55] Click here.

[00:04:56] Every Tuesday and Friday, wherever you get your podcasts.

[00:05:01] So I've been making visual effects,

[00:05:02] blending realities on my computer since I was a kid.

[00:05:06] I remember watching a show called Mega Movie Magic,

[00:05:09] which revealed the secrets behind movies' special effects.

[00:05:12] I learned about CGI and practical effects in movies

[00:05:15] like Star Wars, Godzilla, and Independence Day.

[00:05:19] I was already into computer graphics,

[00:05:21] but seeing how they could create visuals indistinguishable

[00:05:24] from reality was a game changer.

[00:05:27] It sparked a lifelong passion to blend the physical

[00:05:29] and digital worlds.

[00:05:32] Several years ago, I started my TikTok channel.

[00:05:35] I'd upload my own creations and share them with hundreds,

[00:05:37] then thousands, and now millions of viewers.

[00:05:40] I mean, just five years ago,

[00:05:41] if I wanted to make a video

[00:05:43] of giant aliens invading a mall in Miami,

[00:05:45] it would have taken me a week

[00:05:47] and at least five pieces of software.

[00:05:49] But this aliens video?

[00:05:51] It took me just a day to make using tools like Midjourney,

[00:05:55] RunwayML, and Adobe Premiere,

[00:05:57] tools that anyone with a laptop can access.

[00:06:02] Since ChatGPT came on the scene in late 2022,

[00:06:05] there's been a lot of talk about the Turing test,

[00:06:08] where a human evaluator tries to figure out

[00:06:10] if the person at the other end of a text chat

[00:06:13] is a machine or another human.

[00:06:15] But what about the visual Turing test,

[00:06:18] where machines can create images

[00:06:20] that are indistinguishable from reality?

[00:06:23] And now OpenAI has come out with Sora,

[00:06:25] a video generation tool

[00:06:26] that will create impressively lifelike video

[00:06:29] from a single text prompt.

[00:06:31] It's basically like ChatGPT or DALL-E,

[00:06:34] but instead of text or images, it generates video.

[00:06:37] And don't get me wrong,

[00:06:38] there are other video generation tools out there,

[00:06:41] but when I first saw Sora, the realism blew my socks off.

[00:06:45] I mean, with these other programs,

[00:06:46] you can make short videos, like a couple seconds long,

[00:06:50] but with Sora, we're talking minute long videos.

[00:06:53] The 3D consistency with those long dynamic camera moves

[00:06:57] definitely impressed me.

[00:06:58] There's so much high frequency detail

[00:07:00] and the scene is just brimming with life.

[00:07:03] And if we can just punch in a single text prompt into Sora

[00:07:07] and it'll give us full on video

[00:07:09] that's visually convincing to the point

[00:07:10] that some people could mistake it for something real,

[00:07:14] well, you could imagine some of the problems

[00:07:15] that might stem from that.

[00:07:18] So we're at a turning point.

[00:07:19] Not only have we shattered the visual Turing test,

[00:07:22] we're reshattering it every day.

[00:07:24] Images, audio, video, 3D, the list goes on.

[00:07:28] I mean, you've probably seen the headlines.

[00:07:31] AI generated nude photographs

[00:07:33] of Taylor Swift circulating on Twitter.

[00:07:35] A generated video of Volodymyr Zelensky

[00:07:37] surrendering to the Russian army.

[00:07:39] A fraudster successfully impersonating a CFO

[00:07:42] on a video call to scam a Hong Kong company

[00:07:45] out of tens of millions of dollars.

[00:07:47] And as bad as the hoaxes and the fakes

[00:07:49] and the scams are, there's a more insidious danger.

[00:07:52] What if we stop believing anything we see?

[00:07:55] Think about that.

[00:07:57] Think about a future where you don't believe the news.

[00:08:00] You don't trust the video evidence you see in court.

[00:08:03] You're not even sure that the person

[00:08:05] on the other end of the Zoom call is real.

[00:08:08] This isn't some far flung future.

[00:08:11] In fact, I'd argue we're living in it now.

[00:08:14] So given that we're in this new world

[00:08:16] where we're constantly shattering

[00:08:18] and reshattering the visual Turing test,

[00:08:20] how do we protect our own sense of reality?

[00:08:24] I reached out to Sam Gregory

[00:08:26] to talk me through what we're up against.

[00:08:28] Sam is an expert on generative AI and misinformation

[00:08:31] and is the executive director

[00:08:33] of the Human Rights Network, Witness.

[00:08:35] His organization has been working with journalists,

[00:08:38] human rights advocates, and technologists

[00:08:41] to come up with solutions that help us separate

[00:08:43] the real from the fake.

[00:08:47] Sam, thank you for joining us.

[00:08:49] I have to ask you, as we're seeing these AI tools proliferate

[00:08:53] just over the last two years,

[00:08:55] are we correspondingly seeing a massive uptick

[00:08:58] of these visual hoaxes?

[00:09:00] The vast majority are still these shallow fakes

[00:09:02] because anyone can make a shallow fake.

[00:09:04] It's trivially easy, right, just to take an image,

[00:09:06] grab it out of Google search,

[00:09:08] and claim it's from another place.

[00:09:09] What we're seeing though is this uptick

[00:09:11] that's happening in a whole range of ways

[00:09:15] where people are using this generative media for deception.

[00:09:17] So you see images

[00:09:19] sometimes deliberately shared to deceive people, right?

[00:09:22] Someone will share an image claiming it's, you know,

[00:09:24] of an event that never happened.

[00:09:26] And then, you know, we're seeing a lot of audio

[00:09:28] because it's so trivially easy to make, right?

[00:09:30] A few seconds of your voice

[00:09:32] and you can churn out endless, endless cloned voice.

[00:09:37] We're not seeing so much video, right?

[00:09:39] And that's, you know, a reflection that, you know,

[00:09:42] really doing complex video recreation

[00:09:45] is still not quite there, right?

[00:09:47] Yeah, video is significantly harder,

[00:09:50] at least for the moment.

[00:09:52] And I personally hope that it would stay pretty hard

[00:09:54] for a while, though some of these generations

[00:09:57] are getting absolutely wild.

[00:09:59] I had a bit of an existential moment

[00:10:01] looking at this one video from Sora.

[00:10:04] It's the underwater diver video.

[00:10:06] For anyone who hasn't seen this,

[00:10:09] there's a diver swimming underwater, you know,

[00:10:12] investigating this historic,

[00:10:14] almost archaeological spaceship

[00:10:17] that's crashed into the seabed.

[00:10:19] And it looked absolutely real.

[00:10:21] And I was thinking through what that would have taken

[00:10:23] for me to do the old-fashioned way

[00:10:25] and I was just gasping

[00:10:27] that this was just a simple prompt

[00:10:29] that produced this immaculate one-minute video.

[00:10:33] I'm kind of curious, have you had such a moment yourself?

[00:10:36] It's funny because I was literally showing

[00:10:38] that video to my colleagues

[00:10:40] and I didn't cue them up that it was made with Sora

[00:10:44] because I wanted to see whether they clicked

[00:10:46] that it was an AI-generated video

[00:10:48] because I think it's a fascinating one.

[00:10:50] It's kind of on the edge of possibility.

[00:10:52] There's definitely a kind of a moment

[00:10:54] that's happening now for me

[00:10:55] and it's really interesting

[00:10:56] because, you know, we first started working on this

[00:10:58] like five or six years ago

[00:10:59] and we were just doing what we described as

[00:11:01] prepare, don't panic

[00:11:03] and really trying to puncture people's hype,

[00:11:04] particularly around video deepfakes

[00:11:06] because people kept implying

[00:11:08] that they were really easy to do

[00:11:10] and that we were surrounded by them

[00:11:11] and the reality was

[00:11:12] it wasn't easy to fake, you know, convincing video

[00:11:16] and to do that at scale.

[00:11:17] So it's certainly for me,

[00:11:18] Sora has been a click moment

[00:11:20] in terms of the possibility here

[00:11:21] even though it feels like a black box

[00:11:23] and I'm not quite sure how they've done it

[00:11:25] and how accessible this is actually going to be

[00:11:26] and how quickly.

[00:11:27] So related to this,

[00:11:29] a lot of these visual hoaxes

[00:11:31] tend to be whimsical, even innocuous, right?

[00:11:33] In other words, they don't cause serious harm

[00:11:36] in the real world

[00:11:37] and are almost akin to pranks

[00:11:39] but some of these visual hoaxes

[00:11:41] can be a lot more serious.

[00:11:44] Can you tell me a little bit about

[00:11:45] what you're seeing out there?

[00:11:47] The most interesting examples right now

[00:11:49] are happening in election contexts globally

[00:11:51] and they're typically

[00:11:54] people having words put in their mouths.

[00:11:56] In the recent elections in Pakistan,

[00:11:58] in Bangladesh,

[00:11:59] you had candidates saying

[00:12:01] boycott the vote

[00:12:02] or vote for the other party, right?

[00:12:04] And they're quite compelling

[00:12:06] at first glance,

[00:12:07] particularly if you're not very familiar

[00:12:08] with how AI can be used

[00:12:10] and they're often deployed

[00:12:11] right before an election.

[00:12:12] So those are clearly

[00:12:14] in most cases malicious,

[00:12:15] they're designed to deceive

[00:12:17] and then you're also seeing ones

[00:12:18] that are kind of these leaked conversation ones

[00:12:20] so they're not visual hoaxes

[00:12:22] and so you've got really,

[00:12:23] you know,

[00:12:24] quite deceptive uses happening there

[00:12:26] either directly just with audio

[00:12:28] or at the intersection of

[00:12:29] audio with animated faces

[00:12:31] or audio with the ability to make a lip sync

[00:12:33] with a video.

[00:12:35] If I wanted to ask you to zoom in on

[00:12:38] one single example

[00:12:40] that's disturbed you the most,

[00:12:41] something that exemplifies

[00:12:42] what you are the most worried about,

[00:12:45] what would it be?

[00:12:46] I'm going to pick one that is

[00:12:48] it's actually a whole genre

[00:12:50] and I'm going to describe this genre

[00:12:51] because I think it's the one

[00:12:52] that people are familiar with

[00:12:53] but once you start to think about it

[00:12:54] you realize how easy it is to do this

[00:12:56] and that is pretty much everyone has seen

[00:12:58] Elon Musk selling a crypto scam,

[00:13:00] right?

[00:13:01] Often paired up with a newscaster,

[00:13:03] your favorite newscaster

[00:13:04] or your favorite political figure

[00:13:06] in every country in which

[00:13:08] I work,

[00:13:08] people have experienced that.

[00:13:09] They've seen that video where it's like

[00:13:11] the newscaster says,

[00:13:12] Hey Elon, come on and explain

[00:13:14] how this new crypto scam works,

[00:13:16] or come on, political candidate,

[00:13:17] and explain why you're investing

[00:13:19] in this crypto scam.

[00:13:21] For anyone who hasn't seen it,

[00:13:23] these are just videos

[00:13:24] with a deepfake Elon Musk

[00:13:25] trying to guilt you into buying crypto

[00:13:27] as a part of their

[00:13:28] Bitcoin giveaway program.

[00:13:31] And so the reason I point to that is

[00:13:33] not because it has massive human rights impacts

[00:13:35] or massive news impacts

[00:13:36] but because

[00:13:38] this is so commodified.

[00:13:39] And we have this sort of bigger question

[00:13:41] of how it plays into our

[00:13:43] overarching understanding

[00:13:44] of what we trust,

[00:13:45] right?

[00:13:45] Does this undermine people's confidence

[00:13:47] in almost any way

[00:13:49] in which they experience

[00:13:50] audio or video or photos

[00:13:52] that they encounter online?

[00:13:53] Does it just reinforce

[00:13:54] what they want to believe

[00:13:56] and for other people just let them believe

[00:13:57] that nothing can be trusted?

[00:14:02] We're going to take a quick break.

[00:14:04] When we come back,

[00:14:04] we're going to talk with Sam about

[00:14:06] how we can train ourselves

[00:14:07] to better distinguish the real

[00:14:08] from the unreal

[00:14:10] using a little system he calls SIFT.

[00:14:12] More on that in just a minute.

[00:14:46] We're back with Sam Gregory of Witness.

[00:14:55] Before the break,

[00:14:56] we were talking about how these fake videos

[00:14:58] are starting to erode our trust

[00:15:00] in everything we see.

[00:15:02] And yeah,

[00:15:03] maybe you can find flaws

[00:15:04] in a lot of these videos,

[00:15:05] but some of them are really,

[00:15:07] really good.

[00:15:08] And nobody's zooming in at 300%

[00:15:10] looking for those minor imperfections,

[00:15:13] especially when they're scrolling

[00:15:14] through a feed,

[00:15:15] right?

[00:15:15] Like before their morning commute

[00:15:17] or something.

[00:15:18] Yeah, and you're hitting on the thing

[00:15:19] that I think,

[00:15:20] you know,

[00:15:21] the news media has often done a disservice

[00:15:23] to people about how to think about

[00:15:24] spotting AI, right?

[00:15:26] We put such an emphasis on kind of like,

[00:15:28] you know, you should have spotted

[00:15:29] the Pope, you know,

[00:15:30] had his ring finger on the wrong hand

[00:15:32] in that puffer jacket image, right?

[00:15:34] Or didn't you see that his hair

[00:15:36] didn't look quite right on the hairline?

[00:15:38] Or didn't you see he didn't blink

[00:15:40] at the regular rate?

[00:15:41] And it's just so cruel almost to us

[00:15:43] as consumers to expect us to spot those

[00:15:46] things. We don't do it.

[00:15:46] I don't look at every TikTok video

[00:15:48] in my, you know,

[00:15:49] in my For You page and go, like,

[00:15:51] let me just look at this really carefully

[00:15:53] and make sure if someone's trying to

[00:15:54] deceive me.

[00:15:54] And so we've done a disservice often

[00:15:56] because people point out these glitches

[00:15:58] and then they expect people to spot them

[00:16:00] and it creates this whole

[00:16:02] culture where we distrust everything

[00:16:03] we look at

[00:16:05] and we try and apply this sort of

[00:16:06] personal forensic skepticism

[00:16:08] and it doesn't lead us to great places.

[00:16:11] All right.

[00:16:12] I want to talk about mitigation.

[00:16:13] How do we prepare

[00:16:14] and what can we do right now?

[00:16:16] When we first started saying prepare

[00:16:17] don't panic.

[00:16:18] It was five or six years ago

[00:16:19] and it was in the first deepfakes hype cycle,

[00:16:22] which was like the 2018 elections

[00:16:24] when everyone was like deepfakes are going

[00:16:25] to destroy the elections

[00:16:27] and I don't think there was a single deepfake

[00:16:29] in the 2018 US elections of any note.

[00:16:32] Now, let's fast forward to now,

[00:16:34] right 2024 when we look around the world

[00:16:37] the threat is clear and present now

[00:16:39] and it's escalating.

[00:16:40] So prepare is about acting: listening

[00:16:42] to the right voices

[00:16:43] and thinking about how we balance out

[00:16:45] creativity, expression, and human rights,

[00:16:47] and doing that from a global perspective,

[00:16:49] because so much of this conversation often

[00:16:51] is also very US- or Europe-centric.

[00:16:55] So what can we do now?

[00:16:56] You know, the first part of it is

[00:16:58] who are we listening to about this

[00:16:59] and I often get frustrated in AI conversations

[00:17:02] because we get this very abstract discussion

[00:17:04] around AI harms and AI safety

[00:17:06] and it feels very different

[00:17:08] from the conversation I'm having with journalists

[00:17:10] and human rights defenders on the ground

[00:17:11] who are saying I got targeted

[00:17:13] with a non-consensual sexual deepfake.

[00:17:15] I got my footage dismissed

[00:17:17] as faked by a politician

[00:17:18] because he said it could have been made

[00:17:19] by AI.

[00:17:20] So as we prepare the first thing is

[00:17:22] how do we listen to right

[00:17:23] and we should listen to the people

[00:17:24] who actually are experiencing this

[00:17:26] and then we need to think

[00:17:27] what is it that we need to help people

[00:17:30] understand how AI is being used,

[00:17:32] this kind of question of the recipe,

[00:17:34] and I use the recipe analogy

[00:17:36] because I think we're not in a world

[00:17:38] where it's AI or not.

[00:17:39] Even in the photos

[00:17:40] we take on our iPhones,

[00:17:42] we're already combining AI and human, right?

[00:17:44] The human input,

[00:17:45] then the AI modifications

[00:17:46] that make our photos look better.

[00:17:48] So we need to think

[00:17:49] how do we communicate

[00:17:50] that AI was used in the media we make.

[00:17:54] We need to show people

[00:17:55] how AI and human were involved

[00:17:58] in the creation of a piece of media

[00:17:59] how it was edited

[00:18:00] and how it's distributed.

[00:18:01] The second part of it

[00:18:02] is around access to detection,

[00:18:04] because the thing that we've seen

[00:18:05] is there's a huge gap

[00:18:07] in access to the detection tools

[00:18:08] for the people who need it most

[00:18:10] like journalists and election officials

[00:18:12] and human rights defenders globally

[00:18:13] and so they're kind of stuck

[00:18:14] they get this piece of video or an image

[00:18:16] and they are doing the same things

[00:18:18] that we're encouraging ordinary people to do

[00:18:20] look for the glitches

[00:18:22] you know, take a guess,

[00:18:22] drop it in an online detector

[00:18:24] and all of those things

[00:18:26] are as likely to give a false positive

[00:18:29] or a false negative

[00:18:29] as they are to give a reliable result

[00:18:31] that you can explain

[00:18:32] so you've got those two things

[00:18:33] you've got an absence of transparency

[00:18:35] explaining the recipe

[00:18:36] you've got gaps in access to detection

[00:18:39] and neither of those will work well

[00:18:40] unless the whole of the AI pipeline

[00:18:44] plays its part in making sure

[00:18:45] the signals of that authenticity

[00:18:47] and the ability to detect are retained

[00:18:49] all the way through

[00:18:51] so those are the three key things

[00:18:52] that we point to

[00:18:52] is transparency done right

[00:18:54] detection available to those who need it most

[00:18:57] and the importance of having an AI pipeline

[00:18:59] where the responsibility is shared

[00:19:01] across the whole AI industry

[00:19:03] I think you covered like

[00:19:05] three questions beautifully right here

[00:19:07] so a key challenge is telling

[00:19:09] what content is generated by humans

[00:19:11] versus synthetically generated by machines

[00:19:14] and one of the efforts you're involved in

[00:19:16] is the appropriately named

[00:19:18] Content Authenticity Initiative

[00:19:20] could you talk a bit about

[00:19:21] how does that play into a world

[00:19:22] where we will have

[00:19:23] fake content purporting to be real

[00:19:25] yes so

[00:19:26] about five years ago

[00:19:28] there were a couple of initiatives founded

[00:19:30] by a mix of companies

[00:19:31] and media entities

[00:19:32] and Witness joined those early on

[00:19:34] to see how we could bring a human rights voice to them

[00:19:36] and one of them was something called

[00:19:37] the Content Authenticity Initiative

[00:19:38] that Adobe kicked off

[00:19:40] and another was something called

[00:19:41] the Coalition for Content Provenance

[00:19:42] and Authenticity;

[00:19:43] the shorthand for that is C2PA

[00:19:46] so let me explain a little more about

[00:19:48] what C2PA is

[00:19:49] it's basically a technical standard

[00:19:51] for showing what

[00:19:52] we might describe as the provenance

[00:19:54] of an image or a video

[00:19:56] or another piece of media

[00:19:57] and provenance is

[00:19:58] basically the trail of how it was created right

[00:20:01] this is a standard

[00:20:01] that's being increasingly adopted by platforms

[00:20:04] in the last couple of months

[00:20:05] you've seen Google and Meta

[00:20:06] adopt it as a way they're going to show people

[00:20:08] how the media they encounter online

[00:20:11] particularly AI generated

[00:20:12] or edited media was made

[00:20:14] it's also a direction that governments are moving in

[00:20:16] some key things that

[00:20:17] that we point to

[00:20:18] around standards like the C2PA is

[00:20:21] you know the first thing is

[00:20:22] they are not a foolproof way of

[00:20:24] showing whether something was made with AI

[00:20:26] made with a human

[00:20:27] what I mean by that is

[00:20:28] they tell you information

[00:20:30] but you know we know that people can remove

[00:20:31] that metadata for example

[00:20:33] they can strip out the metadata

[00:20:34] and we also know that some people may not

[00:20:36] add this in

[00:20:37] for a range of reasons

[00:20:38] so we're creating a system

[00:20:40] that allows additional signals of

[00:20:43] of trust or additional pieces of information

[00:20:45] but no

[00:20:46] single confirmation of authenticity

[00:20:49] or reality

[00:20:51] I think that's really important

[00:20:52] that we be clear

[00:20:53] that this is in some sense

[00:20:54] a harm reduction approach

[00:20:56] it's a way to get people more information

[00:20:58] but it's not going to be conclusive

[00:20:59] in a kind of, sort of, silver-bullet-like way

[00:21:03] and then the second sort of thing

[00:21:04] that we need to think about these is

[00:21:05] you know we need to really make sure

[00:21:07] that this is about the how of how media was made

[00:21:09] not the who of who made it

[00:21:11] otherwise we open a back door to surveillance

[00:21:14] we open a back door to the ways

[00:21:15] this will be used

[00:21:16] to target and criminalize journalists

[00:21:19] and people who speak out against governments globally
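
To make Sam's "how, not who" point concrete, here is a minimal illustrative sketch in Python of what reading a provenance trail could look like. The manifest shape below is simplified and hypothetical: real C2PA manifests are cryptographically signed structures embedded in the media file, read with dedicated C2PA tooling rather than plain dictionaries.

# Illustrative sketch only: a simplified, hypothetical stand-in for a
# C2PA-style provenance manifest. Real manifests are signed structures
# embedded in the file and read with dedicated C2PA tooling.

from typing import Optional

# A toy manifest: a list of "actions" describing HOW the media was made,
# deliberately carrying no creator identity ("who"), per the surveillance
# concern discussed above. Tool names here are made up.
example_manifest = {
    "actions": [
        {"action": "created", "tool": "CameraApp 1.0"},
        {"action": "edited", "tool": "PhotoEditor 2.3"},
        {"action": "ai_generated", "tool": "ImageModel X"},  # hypothetical AI step
    ]
}

def summarize_provenance(manifest: Optional[dict]) -> str:
    """Summarize the 'recipe' of a piece of media from its manifest.

    Absence of a manifest is NOT proof of fakery: metadata can be
    stripped, or never added. It's one extra signal, not a verdict.
    """
    if manifest is None:
        return "No provenance data: unknown recipe (not proof of fakery)."
    steps = [a.get("action", "?") for a in manifest.get("actions", [])]
    ai_used = any(s.startswith("ai_") for s in steps)
    recipe = " -> ".join(steps)
    return f"Recipe: {recipe}. AI involved: {'yes' if ai_used else 'no signal'}."

print(summarize_provenance(example_manifest))
print(summarize_provenance(None))  # stripped or never-added metadata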

[00:21:21] beautifully said

[00:21:22] especially in the last point

[00:21:23] I noticed Tim Sweeney had some interesting remarks

[00:21:25] about all of the content authenticity

[00:21:27] initiatives happening

[00:21:28] he kind of described it as sort of

[00:21:31] surveillance DRM

[00:21:32] where you cannot upload a piece of content,

[00:21:35] right?

[00:21:35] Like if people like you aren't pushing on this direction

[00:21:38] we may well end up in a world

[00:21:39] where you cannot upload imagery

[00:21:41] onto the internet without

[00:21:42] having your identity tied to it

[00:21:44] and I think that would be a scary world indeed

[00:21:46] the thing that we have consistently pushed back on

[00:21:49] in systems like C2PA

[00:21:51] is the idea

[00:21:53] that identity should be the center

[00:21:54] of how you're trusted online

[00:21:56] it's helpful right

[00:21:57] and many times

[00:21:57] I want people to know who I am

[00:21:59] but if we start to premise

[00:22:01] trust online

[00:22:02] in individual identity as the center

[00:22:04] and require people to do that

[00:22:06] that brings all kinds of risks

[00:22:08] that we already have a history of understanding

[00:22:10] from social media,

[00:22:10] right?

[00:22:11] That's not to say we shouldn't think about things like

[00:22:13] proof of personhood

[00:22:14] right?

[00:22:14] Like understanding

[00:22:15] that someone who created media was a human

[00:22:18] may be important,

[00:22:19] right?

[00:22:19] As we enter an AI generated world

[00:22:21] that's not the same as knowing that it was Sam

[00:22:23] who made it

[00:22:23] not a generic human who made it

[00:22:25] right?

[00:22:26] So I think that's really important.

[00:22:28] It's a slippery slope indeed

[00:22:29] and really good point on sort of the distinction

[00:22:31] between validating you're

[00:22:33] a human being

[00:22:34] versus,

[00:22:35] you know, validating you are Sam Gregory.

[00:22:38] It's a very subtle but, you know,

[00:22:40] crucial distinction.

[00:22:42] Let's move over to fears and hopes,

[00:22:44] you know, back in 2017

[00:22:46] you felt the fear around deepfakes was overblown;

[00:22:48] clearly now it is far more of a clear

[00:22:51] and present danger.

[00:22:52] Where do you stand now?

[00:22:53] What are your hopes and fears at the moment?

[00:22:56] So we've gone from a scenario in 2017

[00:22:59] where the primary harm

[00:23:02] was the one that people didn't discuss

[00:23:04] that was gender-based violence

[00:23:07] and the harm everyone discussed, political

[00:23:10] usage was non-existent

[00:23:12] to a scenario now where the gender-based violence

[00:23:15] has got far worse

[00:23:16] right? And targets everyone from public figures

[00:23:19] to teenagers in schools all around the world

[00:23:22] and the political usage is now very real.

[00:23:24] And the third thing is you have people realizing

[00:23:26] there's this incredibly good excuse

[00:23:28] for a piece of compromising media,

[00:23:30] which is just to say hey that was faked

[00:23:32] or hey plausibly I can deny

[00:23:35] that piece of media by saying that it was

[00:23:37] faked. And so those three are the sort of the core

[00:23:39] fears that I experience now that have translated

[00:23:42] into reality.

[00:23:43] Now in terms of hopes,

[00:23:45] I don't think we've acted yet on those three

[00:23:48] core problems sufficiently right?

[00:23:50] We need to address those and we need to make

[00:23:52] sure that you know, we criminalize

[00:23:55] the ways in which people target primarily women

[00:23:57] with non-consensual sexual deepfakes

[00:24:00] which are escalating. In the second area of

[00:24:02] fears, which is the fears around their misuse

[00:24:04] in politics and to undermine

[00:24:07] news footage and human rights content.

[00:24:09] I think that's where we need to lean into a lot of

[00:24:12] the approaches like the authenticity

[00:24:14] and provenance infrastructures like the C2PA,

[00:24:17] the access to detection tools for

[00:24:19] the journalists who need it most, and then

[00:24:21] smart laws that can help us rule

[00:24:24] out some usages, right, and make sure that it is

[00:24:26] clear that some uses are unacceptable.

[00:24:28] And then the third area, that's the hardest one

[00:24:30] because we just don't have the research yet

[00:24:33] about what is the impact of this constant

[00:24:35] sort of drip, drip, drip if you can't

[00:24:38] believe what you see and hear.

[00:24:40] We can only reach an 84% probability

[00:24:42] that it's real or fake, which is not great

[00:24:44] for public confidence.

[00:24:46] But we also don't know how this plays into this

[00:24:49] broader societal trust crisis

[00:24:51] we have where already people want to

[00:24:54] lean into kind of almost plausible believability

[00:24:56] on stuff they care about, or just plausibly ignore

[00:24:59] anything that you know challenges those beliefs.

[00:25:01] I think you brought up a really good point about

[00:25:03] it's almost like the world is fracturing into

[00:25:05] the multiverse of madness,

[00:25:06] as I like to call it, where people are looking for

[00:25:08] whatever validation to sort of confirm

[00:25:11] their beliefs. At the same time,

[00:25:13] it can result in people being jaded, right, where

[00:25:15] they're just going to be detached.

[00:25:17] Well, I don't trust anything.

[00:25:19] And so I'm curious how do you see consumers

[00:25:21] behaviors changing in this world

[00:25:24] where the visual Turing test gets shattered

[00:25:27] over and over again for all sorts of different

[00:25:29] more complex domains?

[00:25:31] Are people going to get savvier?

[00:25:33] What do you think is going to happen to society

[00:25:35] in such a world?

[00:25:37] So we have to hope that we walk a fine line.

[00:25:40] We're going to need to be more skeptical

[00:25:42] of audio and images and video that we encounter

[00:25:44] online, but we're going to have to do that

[00:25:47] with a skepticism that's supported by signals

[00:25:49] that help us. What I mean by that is, if we

[00:25:51] enter a world where we're just like, hey everyone,

[00:25:53] everything could be faked.

[00:25:55] It's getting better every day.

[00:25:56] hey, look out for the glitch, then we enter

[00:25:59] a world where people's skepticism quite rightly

[00:26:02] will accelerate because all of us will experience

[00:26:04] like on a daily basis being deceived, right?

[00:26:06] And I think that's very legitimate for us to

[00:26:09] then feel like we can't trust anything, right?

[00:26:12] In the ideal world,

[00:26:13] everyone's labeling what's real or fake,

[00:26:16] but when that's not happening,

[00:26:18] what do people do?

[00:26:20] I always go back to, you know, basic media literacy.

[00:26:22] I use an acronym called SIFT that was invented

[00:26:25] by an academic called Mike Caulfield, and SIFT

[00:26:28] is S-I-F-T. S stands for stop, right, because it's

[00:26:33] basically stop before you're emotionally triggered,

[00:26:35] right, whenever you see something that's too good

[00:26:37] to be true.

[00:26:38] I stands for investigate the source, which is

[00:26:41] like, who shared this? Is it someone I should

[00:26:45] trust? The F stands for find alternative coverage,

[00:26:48] right? Did someone already write about this and

[00:26:50] say, wait, that's not the Pope in a puffer jacket

[00:26:52] in reality,

[00:26:52] that's an AI image. And then the fourth part of

[00:26:55] that which is getting complicated is T for trace

[00:26:58] the original which used to always be a great way

[00:27:01] of doing it in the shallow fake era because

[00:27:03] you'd find that an image had been recycled but

[00:27:05] is getting harder now.
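
For anyone who wants the checklist in pocketable form, here is SIFT rendered as a small Python sketch. The wording paraphrases Caulfield's four steps as described above; the names and structure are purely illustrative, not any official tool.

# Illustrative sketch: Mike Caulfield's SIFT media-literacy checklist
# expressed as data. The phrasing paraphrases the steps described above.

SIFT_STEPS = [
    ("S", "Stop", "Pause before you're emotionally triggered, especially "
                  "when something seems too good to be true."),
    ("I", "Investigate the source", "Who shared this? Is it someone to trust?"),
    ("F", "Find alternative coverage", "Did someone else already report on or debunk it?"),
    ("T", "Trace the original", "Look for the original media; easy in the shallow-fake "
                                "era of recycled images, harder with generative AI."),
]

def print_sift_checklist():
    """Print the four SIFT questions a viewer can run through."""
    for letter, name, prompt in SIFT_STEPS:
        print(f"{letter} - {name}: {prompt}")

print_sift_checklist()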

[00:27:07] So when I look at the sort of knife edge

[00:27:08] we've got to walk,

[00:27:09] it's to help people do SIFT in an environment

[00:27:12] that is structured to give them better signals

[00:27:13] of how AI was used, and where the law has set

[00:27:17] parameters about what is definitely not acceptable,

[00:27:20] and where all the companies, all the players in

[00:27:22] that AI pipeline, are playing their part to make

[00:27:25] sure that we can see the recipe of how

[00:27:27] AI and human were used, and that it's as easy as

[00:27:30] possible to detect when AI was used to manipulate

[00:27:33] or create a piece of imagery audio or video.

[00:27:36] I really like SIFT.

[00:27:37] I think that's also very good advice for people

[00:27:39] when they come across something that is indeed

[00:27:41] too good to be true.

[00:27:42] Very often we will be like, oh, well, that's

[00:27:45] interesting and go about our day.

[00:27:47] The devices we use every day aren't foolproof,

[00:27:50] right?

[00:27:50] They've got vulnerabilities.

[00:27:51] There is this game of whack-a-mole that happens

[00:27:54] with patching those vulnerabilities and now

[00:27:56] we've got these cognitive vulnerabilities almost

[00:27:58] and you know on the detection side the tools

[00:28:01] are going to need to keep improving because

[00:28:03] people are going to find ways to use the detectors

[00:28:05] to create new generators that evade them, right?

[00:28:08] And so that game of whack-a-mole will continue

[00:28:10] but that isn't to say that all hope is lost.

[00:28:13] We can adapt and we can still have an information

[00:28:17] landscape where we can all thrive together.

[00:28:20] That's the future I want.

[00:28:21] The way we describe it at Witness,

[00:28:23] we talk about fortifying the truth, which is

[00:28:26] that we need to find ways to defend that there

[00:28:28] is a reality out there.

[00:28:30] Thank you so much, Sam.

[00:28:31] I will certainly sleep easier at night knowing

[00:28:33] there are people like you out there making sure

[00:28:35] we can tell the difference between the real and

[00:28:37] unreal.

[00:28:38] Thank you so much for joining us.

[00:28:42] Sam Gregory and I had this conversation in mid-March

[00:28:45] and a few days later there was another development.

[00:28:48] YouTube came out with a new rule.

[00:28:50] If you have AI generated content in your video

[00:28:53] and it's not obvious, you have to disclose its

[00:28:55] AI. This move from YouTube is an important one.

[00:28:59] The kind Sam and his colleagues at Witness have

[00:29:01] been advocating for.

[00:29:02] It shifts the onus onto creators and platforms

[00:29:06] and away from everyday viewers because ultimately

[00:29:09] it's unfair to make all of us become AI detectives

[00:29:13] scrutinizing every video for that missing shadow

[00:29:15] or impossible physics, especially in a world where

[00:29:18] the visual Turing test is continually being shattered.

[00:29:22] And look, I'm not going to sugarcoat this.

[00:29:24] This is a huge problem and it's going to be difficult

[00:29:27] for everyone. Folks like Sam Gregory have their

[00:29:30] work cut out for them and massive organizations

[00:29:33] like TikTok, Google, and Meta do too.

[00:29:35] But listen, I'm going to be back here this week

[00:29:38] and the week after that and the week after that

[00:29:41] helping you figure out how to navigate this new

[00:29:43] world order. How to live with AI and yes, thrive

[00:29:46] with it too. We'll be talking to researchers, artists,

[00:29:50] journalists, academics who can help us demystify

[00:29:53] the technology as it evolves. Together, we're going

[00:29:57] to figure out how to navigate AI before it navigates

[00:30:00] us. This is The TED AI Show. I hope you'll join

[00:30:04] us. The TED AI Show is a part of the TED Audio

[00:30:09] Collective and is produced by TED with Cosmic

[00:30:12] Standard. Our producers are Ella Fetter and

[00:30:15] Sarah McCray. Our editors are Banban Cheng and

[00:30:18] Alejandra Salazar. Our showrunner is Ivana Tucker

[00:30:22] and our associate producer is Ben Montoya. Our

[00:30:26] engineer is Asia Pilar Simpson. Our technical

[00:30:29] director is Jacob Winnick and our executive

[00:30:32] producer is Eliza Smith. Our fact-checker is

[00:30:35] Krystian Aparta, and I'm your host, Bilawal Sidhu.

[00:30:39] See y'all in the next one.