If there’s one AI company that’s made a splash in the mainstream vernacular, it’s OpenAI, the company behind ChatGPT. Former board member and AI policy expert Helen Toner joins Bilawal to discuss the knowledge gaps and conflicting interests between those in charge of building the latest technology and those who craft our policies at the government level.
For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
[00:00:00] TED Audio Collective.
[00:00:06] Hey, Bilawal here.
[00:00:08] This episode is a bit different.
[00:00:11] Today I'm interviewing Helen Toner,
[00:00:13] a researcher who works on AI regulation.
[00:00:16] She's also a former board member at OpenAI.
[00:00:19] In my interview with Helen, she reveals for the first time
[00:00:22] what really went down at OpenAI late last year
[00:00:25] when the CEO Sam Altman was fired.
[00:00:28] And she makes some pretty serious criticisms of him.
[00:00:31] We reached out to Sam for comments,
[00:00:33] and if he responds, we'll include that update at the end of the episode.
[00:00:37] But first, let's get to the show.
[00:00:42] I'm Bilawal Sidhu, and this is The TED AI Show,
[00:00:45] where we figure out how to live and thrive in a world
[00:00:48] where AI is changing everything.
[00:00:55] This episode is brought to you by Progressive.
[00:00:58] Most of you aren't just listening right now.
[00:01:00] You're driving, cleaning, and even exercising.
[00:01:03] But what if you could be saving money by switching to Progressive?
[00:01:06] Drivers who save by switching save nearly $750 on average,
[00:01:10] and auto customers qualify for an average of seven discounts.
[00:01:14] Multitask right now. Quote today at progressive.com.
[00:01:17] Progressive Casualty Insurance Company & Affiliates.
[00:01:20] National average 12-month savings of $744 by new customers surveyed
[00:01:24] who saved with Progressive between June 2022 and May 2023.
[00:01:28] Potential savings will vary.
[00:01:30] Discounts not available in all states and situations.
[00:01:33] AI is making waves in every field it touches.
[00:01:38] President Biden is now on TikTok,
[00:01:40] and the election draws closer each day.
[00:01:42] With so much going on in the world,
[00:01:44] it is hard to keep up with it all, let me tell you.
[00:01:47] Hi, I'm Kai Ryssdal, co-host of Make Me Smart.
[00:01:49] It's a podcast from Marketplace.
[00:01:51] And every weekday, Kimberly Adams and I
[00:01:53] break down the latest in business and the economy
[00:01:55] with short daily episodes
[00:01:57] to make it easy for you to stay in the know.
[00:01:59] Listen to Make Me Smart wherever you get your podcasts.
[00:02:05] The OpenAI saga is still unfolding,
[00:02:07] so let's get up to speed.
[00:02:09] In case you missed it, on a Friday in November 2023,
[00:02:13] the board of directors at OpenAI fired Sam Altman.
[00:02:16] This ouster remained a top news item over that weekend
[00:02:19] with the board saying that he hadn't been, quote,
[00:02:21] consistently candid in his communications, unquote.
[00:02:25] The Monday after, Microsoft announced
[00:02:27] that they had hired Sam to head up their AI department.
[00:02:30] Many OpenAI employees rallied behind Sam
[00:02:32] and threatened to join him.
[00:02:34] Meanwhile, OpenAI announced an interim CEO.
[00:02:37] And then a day later, plot twist,
[00:02:39] Sam was rehired at OpenAI.
[00:02:41] Several of the board members were removed
[00:02:43] or resigned and replaced.
[00:02:45] Since then, there's been a steady fallout.
[00:02:48] On May 15th, 2024, just last week
[00:02:51] as of recording this episode,
[00:02:53] OpenAI's chief scientist,
[00:02:55] Ilya Sutskever, formally resigned.
[00:02:58] Not only was Ilya a member of the board that fired Sam,
[00:03:01] he was also part of the Superalignment team,
[00:03:04] which focuses on mitigating the long-term risks of AI.
[00:03:08] With the departure of another executive, Jan Leike,
[00:03:11] many of the original safety-conscious folks
[00:03:13] in leadership positions have either departed OpenAI
[00:03:16] or moved on to other teams.
[00:03:19] So, what's going on here?
[00:03:21] Well, OpenAI started as a nonprofit in 2015,
[00:03:25] self-described as an artificial intelligence research company.
[00:03:29] They had one mission,
[00:03:30] to create AI for the good of humanity.
[00:03:33] They wanted to approach AI responsibly,
[00:03:35] to study the risks up close,
[00:03:37] and to figure out how to minimize them.
[00:03:40] This was going to be the company that showed us
[00:03:42] AI done right.
[00:03:44] Fast forward to November 17th, 2023,
[00:03:47] the day Sam was fired,
[00:03:49] OpenAI looked a bit different.
[00:03:52] They'd released DALL-E,
[00:03:53] and ChatGPT had taken the world by storm.
[00:03:56] With hefty investments from Microsoft,
[00:03:58] it now seemed that OpenAI was in something of a tech arms race
[00:04:01] with Google.
[00:04:02] The release of ChatGPT prompted Google to scramble
[00:04:05] and release their own chatbot, Bard.
[00:04:08] Over time, OpenAI became closed AI.
[00:04:12] Starting in 2020, with the release of GPT-3,
[00:04:14] OpenAI stopped sharing their code.
[00:04:17] And I'm not saying that was a mistake.
[00:04:19] There are good reasons for keeping your code private.
[00:04:22] But OpenAI somehow changed,
[00:04:24] drifting away from a mission-minded nonprofit
[00:04:26] with altruistic goals
[00:04:28] to a run-of-the-mill tech company
[00:04:30] shipping new products at an astronomical pace.
[00:04:33] This trajectory shows you just how powerful
[00:04:36] economic incentives can be.
[00:04:38] There's a lot of money to be made in AI right now.
[00:04:41] But it's also crucial that profit isn't the only factor
[00:04:44] driving decision-making.
[00:04:46] Artificial General Intelligence, or AGI,
[00:04:49] has the potential to be very, very disruptive.
[00:04:52] And that's where Helen Toner comes in.
[00:04:56] Less than two weeks after OpenAI fired
[00:04:58] and rehired Sam Altman,
[00:05:00] Helen Toner resigned from the board.
[00:05:03] She was one of the board members
[00:05:04] who had voted to remove him.
[00:05:06] And at the time, she couldn't say much.
[00:05:08] There was an internal investigation still ongoing,
[00:05:11] and she was advised to keep mum.
[00:05:13] And oh man, she got so much flak for all of this.
[00:05:17] Looking at the news coverage and the tweets,
[00:05:19] I got the impression she was this techno-pessimist
[00:05:22] who was standing in the way of progress
[00:05:24] or a kind of maniacal power seeker
[00:05:26] using safety policy as her cudgel.
[00:05:29] But then I met Helen at this year's TED conference,
[00:05:32] and I got to hear her side of the story.
[00:05:35] And it made me think a lot about the difference
[00:05:37] between governance and regulation.
[00:05:40] To me, the OpenAI saga is all about AI board governance
[00:05:44] and incentives being misaligned
[00:05:46] among some really smart people.
[00:05:48] It also shows us why trusting tech companies
[00:05:50] to govern themselves may not always go beautifully,
[00:05:54] which is why we need external rules and regulations.
[00:05:57] It's a balance.
[00:05:59] Helen's been thinking and writing about AI policy
[00:06:03] for about seven years.
[00:06:05] She's the director of strategy at CSET,
[00:06:08] the Center for Security and Emerging Technology
[00:06:10] at Georgetown, where she works with policymakers in DC
[00:06:13] on all sorts of AI issues.
[00:06:16] Welcome to the show.
[00:06:17] Hey, good to be here.
[00:06:18] So Helen, a few weeks back at TED in Vancouver,
[00:06:21] I got the short version of what happened
[00:06:23] at OpenAI last year.
[00:06:25] I'm wondering, can you give us the long version?
[00:06:28] As a quick refresher on sort of the context here,
[00:06:31] the OpenAI board was not a normal board.
[00:06:33] It's not a normal company.
[00:06:34] The board is a nonprofit board that was set up explicitly
[00:06:38] for the purpose of making sure that the company's,
[00:06:41] you know, public good mission was primary,
[00:06:43] was coming first over profits,
[00:06:44] investor interests and other things.
[00:06:46] But for years, Sam had made it really difficult
[00:06:49] for the board to actually do that job
[00:06:52] by, you know, withholding information,
[00:06:55] misrepresenting things that were happening at the company,
[00:06:57] in some cases outright lying to the board.
[00:06:59] You know, at this point, everyone always says,
[00:07:01] like what? Give me some examples.
[00:07:03] And I can't share all the examples,
[00:07:05] but to give a sense of the kind of thing
[00:07:07] that I'm talking about, it's things like, you know,
[00:07:09] when ChatGPT came out, November 2022,
[00:07:12] the board was not informed in advance about that.
[00:07:14] We learned about ChatGPT on Twitter.
[00:07:17] Sam didn't inform the board
[00:07:19] that he owned the OpenAI startup fund,
[00:07:22] even though he, you know, constantly was claiming
[00:07:25] to be an independent board member
[00:07:27] with no financial interest in the company.
[00:07:29] On multiple occasions, he gave us inaccurate information
[00:07:33] about the small number of formal safety processes
[00:07:36] that the company did have in place,
[00:07:38] meaning that it was basically impossible for the board
[00:07:40] to know how well those safety processes were working
[00:07:43] or what might need to change.
[00:07:45] And then, you know, a last example that I can share
[00:07:47] that's been very widely reported relates to this paper
[00:07:49] that I wrote, which has been, you know,
[00:07:51] I think way overplayed in the press.
[00:07:53] For listeners who didn't follow this in the press,
[00:07:56] Helen had co-written a research paper last fall
[00:07:58] intended for policymakers.
[00:08:00] I'm not going to get into the details,
[00:08:02] but what you need to know is that Sam Altman
[00:08:04] wasn't happy about it.
[00:08:06] It seemed like Helen's paper was critical of OpenAI
[00:08:09] and more positive about one of their competitors, Anthropic.
[00:08:13] It was also published right when the Federal Trade Commission
[00:08:15] was investigating OpenAI about the data used
[00:08:18] to build its generative AI products.
[00:08:20] Essentially, OpenAI was getting a lot of heat
[00:08:23] and scrutiny all at once.
[00:08:26] The way that played into what happened in November
[00:08:28] is pretty simple. It had nothing to do with
[00:08:30] the substance of this paper.
[00:08:31] The problem was that after the paper came out,
[00:08:33] Sam started lying to other board members
[00:08:36] in order to try and push me off the board.
[00:08:38] So it was another example that just, like,
[00:08:40] really damaged our ability to trust him.
[00:08:42] And actually, it only happened in late October last year
[00:08:45] when we were already talking pretty seriously
[00:08:47] about whether we needed to fire him.
[00:08:50] And so, you know, there's kind of more individual examples,
[00:08:53] and for any individual case,
[00:08:56] Sam could always come up with some kind of, like,
[00:08:58] innocuous-sounding explanation of why it wasn't a big deal
[00:09:00] or misinterpreted or whatever.
[00:09:02] But the end effect was that after years of this kind of thing,
[00:09:06] all four of us who fired him came to the conclusion
[00:09:09] that we just couldn't believe things
[00:09:11] that Sam was telling us.
[00:09:13] It's a completely unworkable place to be in as a board,
[00:09:16] especially a board that is supposed to be providing
[00:09:19] independent oversight over the company,
[00:09:21] not just, like, you know, helping the CEO
[00:09:23] to raise more money.
[00:09:25] You know, not trusting the word of the CEO,
[00:09:28] who is your main conduit to the company,
[00:09:30] your main source of information about the company,
[00:09:31] is just, like, totally impossible.
[00:09:33] So that was kind of the background,
[00:09:36] the state of affairs coming into last fall,
[00:09:39] and we had been, you know, working at the board level
[00:09:42] as best we could to set up better structures, processes,
[00:09:46] all that kind of thing to try and, you know,
[00:09:48] improve these issues that we had been having
[00:09:49] at the board level.
[00:09:50] But then, mostly in October of last year,
[00:09:55] we had this series of conversations
[00:09:57] with these executives,
[00:09:59] where the two of them suddenly started telling us
[00:10:02] about their own experiences with Sam,
[00:10:04] which they hadn't felt comfortable sharing before,
[00:10:06] telling us how they couldn't trust him,
[00:10:09] about the toxic atmosphere he was creating.
[00:10:12] They used the phrase psychological abuse,
[00:10:15] telling us they didn't think he was the right person
[00:10:17] to lead the company to AGI,
[00:10:19] telling us they had no belief that he could or would change,
[00:10:23] no point in giving him feedback,
[00:10:24] no point in trying to work through these issues.
[00:10:26] I mean, you know, they've since tried
[00:10:28] to kind of minimize what they told us,
[00:10:30] but these were not, like, casual conversations.
[00:10:33] They were really serious to the point
[00:10:35] where they actually sent us screenshots
[00:10:38] and documentation of some of the instances
[00:10:40] they were telling us about of him lying
[00:10:43] and being manipulative in different situations.
[00:10:45] So, you know, this was a huge deal.
[00:10:47] This was a lot.
[00:10:49] And we talked it all over very intensively
[00:10:52] over the course of several weeks
[00:10:55] and ultimately just came to the conclusion
[00:10:58] that the best thing for OpenAI's mission
[00:11:00] and for OpenAI as an organization
[00:11:02] would be to bring on a different CEO.
[00:11:04] And, you know, once we reached that conclusion,
[00:11:07] it was very clear to all of us that
[00:11:09] as soon as Sam had any inkling
[00:11:11] that we might do something that went against him,
[00:11:14] he would pull out all the stops,
[00:11:16] do everything in his power to undermine the board,
[00:11:18] to prevent us from even getting to the point
[00:11:21] of being able to fire him.
[00:11:22] So, you know, we were very careful,
[00:11:24] very deliberate about who we told,
[00:11:27] which was essentially almost no one in advance
[00:11:29] other than obviously our legal team.
[00:11:31] And so that's kind of what took us to November 17th.
[00:11:35] Thank you for sharing that.
[00:11:36] Now, Sam was eventually reinstated as CEO
[00:11:39] with most of the staff supporting his return.
[00:11:41] What exactly happened there?
[00:11:43] Why was there so much pressure to bring him back?
[00:11:45] Yeah, this is obviously the elephant in the room.
[00:11:48] And unfortunately, I think there's been
[00:11:50] a lot of misreporting on this.
[00:11:52] I think there were three big things going on
[00:11:55] that helped make sense of what happened here.
[00:11:58] The first is that really pretty early on,
[00:12:01] the way the situation was being portrayed
[00:12:03] to people inside the company was
[00:12:04] you have two options.
[00:12:05] Either Sam comes back immediately with no accountability,
[00:12:09] you know, a totally new board of his choosing,
[00:12:11] or the company will be destroyed.
[00:12:14] And, you know, those weren't actually the only two options,
[00:12:17] and the outcome that we eventually landed on
[00:12:19] was neither of those two options.
[00:12:21] But I get why, you know, not wanting the company
[00:12:24] to be destroyed got a lot of people to fall in line,
[00:12:27] whether because they were in some cases
[00:12:30] about to make a lot of money from this upcoming tender offer
[00:12:33] or just because they love their team,
[00:12:35] they didn't want to lose their job,
[00:12:37] they cared about the work they were doing.
[00:12:38] And of course, a lot of people
[00:12:40] didn't want the company to fall apart, you know, us included.
[00:12:43] The second thing I think it's really important to know
[00:12:46] that has really gone underreported
[00:12:48] is how scared people are to go against Sam.
[00:12:53] They had experienced him retaliating against people
[00:12:56] for past instances of being critical.
[00:13:00] They were really afraid of what might happen to them.
[00:13:02] So when some employees started to say, you know,
[00:13:05] wait, I don't want the company to fall apart,
[00:13:07] like, let's bring back Sam,
[00:13:09] it was very hard for those people
[00:13:11] who had had terrible experiences
[00:13:13] to actually say that for fear that, you know,
[00:13:17] if Sam did stay in power, as he ultimately did,
[00:13:20] you know, that would make their lives miserable.
[00:13:23] And I guess the last thing I would say about this
[00:13:25] is that this actually isn't a new problem for Sam.
[00:13:29] And if you look at some of the reporting
[00:13:31] that has come out since November,
[00:13:33] it's come out that he was actually fired
[00:13:35] from his previous job at Y Combinator,
[00:13:37] which was hushed up at the time.
[00:13:39] And then, you know, his job before that,
[00:13:41] which was his only other job in Silicon Valley,
[00:13:43] his startup, Loopt.
[00:13:45] Apparently, the management team went to the board there twice
[00:13:48] and asked the board to fire him
[00:13:50] for what they called, you know, deceptive and chaotic behavior.
[00:13:53] If you actually look at his track record,
[00:13:55] he doesn't, you know, exactly have
[00:13:58] a glowing trail of references.
[00:14:00] This wasn't a problem specific to the personalities
[00:14:03] on the board as much as he would love
[00:14:05] to kind of portray it that way.
[00:14:07] So I had to ask you about that,
[00:14:09] but this actually does tie into
[00:14:10] what we're going to talk about today.
[00:14:12] OpenAI is an example of a company
[00:14:14] that started off trying to do good,
[00:14:16] but now it's moved on to a for-profit model
[00:14:18] and it's really racing to the front of this AI game,
[00:14:21] along with all of these ethical issues
[00:14:23] that are raised in the wake of this progress.
[00:14:26] I would argue that the OpenAI saga shows
[00:14:28] that trying to do good and regulating yourself isn't enough.
[00:14:32] So let's talk about why we need regulations.
[00:14:34] Great, let's do it.
[00:14:35] So from my perspective,
[00:14:37] AI went from the sci-fi thing
[00:14:39] that seemed far away to something
[00:14:41] that's pretty much everywhere,
[00:14:42] and regulators are suddenly trying to catch up.
[00:14:45] But I think for some people,
[00:14:46] it might not be obvious
[00:14:47] why exactly we need regulations at all.
[00:14:50] Like for the average person,
[00:14:51] it might seem like,
[00:14:52] oh, we just have these cool new tools
[00:14:54] like DALL-E and ChatGPT that do these amazing things.
[00:14:57] What exactly are we worried about in concrete terms?
[00:15:00] There's very basic stuff
[00:15:01] for very basic forms of the technology.
[00:15:03] Like if people are using it to decide who gets a loan,
[00:15:07] to decide who gets parole,
[00:15:09] you know, to decide who gets to buy a house,
[00:15:12] like you need that technology to work well.
[00:15:14] If that technology is going to be discriminatory,
[00:15:16] which AI often is, it turns out,
[00:15:18] you need to make sure that people have recourse.
[00:15:21] They can go back and say,
[00:15:22] hey, why was this decision made?
[00:15:23] If we're talking AI being used in the military,
[00:15:26] that's a whole other kettle of fish.
[00:15:28] And I don't know if we would say, like, regulation for that,
[00:15:31] but you certainly need to have guidance, rules,
[00:15:34] processes in place.
[00:15:35] And then kind of looking forward
[00:15:37] and thinking about more advanced AI systems,
[00:15:39] I think there's a pretty wide range of potential harms
[00:15:43] that we could well see
[00:15:44] if AI keeps getting increasingly sophisticated.
[00:15:47] You know, letting every little script kiddie
[00:15:49] in their parents' basement
[00:15:50] have the hacking capabilities
[00:15:52] of a crack NSA cell, like that's a problem.
[00:15:56] I think something that really makes AI hard
[00:15:58] for regulators to think about
[00:15:59] is that it is so many different things
[00:16:01] and plenty of the things don't need regulation.
[00:16:03] Like I don't know how Spotify decides
[00:16:06] how to make your playlist,
[00:16:07] the AI that they use for that.
[00:16:09] Like I'm happy for Spotify
[00:16:10] to just pick whatever songs they want for me
[00:16:12] and if they get it wrong, you know, who cares?
[00:16:14] But for many, many other use cases,
[00:16:16] you want to have at least some kind
[00:16:17] of basic common sense guardrails around it.
[00:16:19] I want to talk about a few specific examples
[00:16:21] that we might want to worry about,
[00:16:23] not in some battle space overseas,
[00:16:25] but at home in our day-to-day lives.
[00:16:27] You know, let's talk about surveillance.
[00:16:28] AI has gotten really good at perception,
[00:16:31] essentially understanding the contents
[00:16:33] of images, video and audio.
[00:16:35] And we've got a growing number
[00:16:36] of surveillance cameras in public and private spaces.
[00:16:39] And now companies are infusing AI into this fleet,
[00:16:42] essentially breathing intelligence
[00:16:44] into these otherwise dumb sensors
[00:16:46] that are almost everywhere.
[00:16:48] Madison Square Garden in New York City, as an example,
[00:16:51] they've been using facial recognition technology
[00:16:53] to bar lawyers involved in lawsuits
[00:16:56] against their parent company, MSG Entertainment,
[00:16:58] from attending events at their venue.
[00:17:01] This controversial practice obviously raised concerns
[00:17:03] about privacy, due process
[00:17:05] and the potential for abuse of this technology.
[00:17:07] Can we talk about why this is problematic?
[00:17:10] Yeah, I mean, I think this is a pretty common thing
[00:17:12] that comes up in the history of technology
[00:17:14] is you have some, you know,
[00:17:16] some existing thing in society
[00:17:18] and then technology makes it much faster
[00:17:20] and much cheaper and much more widely available.
[00:17:22] Like surveillance, where it goes from like,
[00:17:23] oh, it used to be the case that your neighbor
[00:17:25] could see you doing something bad
[00:17:26] and go talk to the police about it.
[00:17:27] You know, it's one step up to go to,
[00:17:29] well, there's a camera, a CCTV camera
[00:17:31] and the police can go back and check at any time.
[00:17:33] And then another step up to like,
[00:17:34] oh, actually it's just running all the time
[00:17:36] and there's an AI facial recognition detector on there
[00:17:38] and maybe, you know, maybe in the future
[00:17:39] an AI activity detector that's also flagging,
[00:17:42] you know, this looks suspicious.
[00:17:45] In some ways there's no like qualitative change
[00:17:48] in what's happened.
[00:17:49] It's just like you could be seen doing something.
[00:17:51] But I think you do also need to grapple with the fact
[00:17:53] that if it's much more ubiquitous,
[00:17:55] much cheaper, then the situation is different.
[00:17:58] I mean, I think with surveillance,
[00:17:59] people immediately go to the kind of
[00:18:01] law enforcement use cases.
[00:18:03] And I think it is really important
[00:18:04] to figure out what the right trade-offs are
[00:18:06] between achieving sort of law enforcement objectives
[00:18:09] and being able to catch criminals
[00:18:11] and prevent bad things from happening
[00:18:13] while also recognizing the huge issues that you can get
[00:18:16] if this technology is used with overreach.
[00:18:18] For example, facial recognition works better and worse
[00:18:22] on different demographic groups.
[00:18:23] And so if police are, as they have been
[00:18:25] in some parts of the country,
[00:18:26] going and arresting people purely
[00:18:28] on a facial recognition match and on no other evidence,
[00:18:31] there's a story about a woman who was eight months pregnant
[00:18:33] having contractions in a jail cell
[00:18:35] after having done absolutely nothing wrong
[00:18:37] and being arrested only on the basis
[00:18:39] of a bad facial recognition match.
[00:18:41] So I personally don't go for,
[00:18:43] this needs to be totally banned
[00:18:45] and no one should ever use it in any way for anything.
[00:18:47] But I think you really need to be looking at
[00:18:49] how are people using it?
[00:18:51] What happens when it goes wrong?
[00:18:52] What recourse do people have?
[00:18:53] What kind of access to due process do they have?
[00:18:56] And then when it comes to private use,
[00:18:58] I really think we should probably
[00:18:59] be a bit more restrictive.
[00:19:01] Like, I don't know,
[00:19:02] it just seems pretty clearly against,
[00:19:04] I don't know, freedom of expression,
[00:19:06] freedom of movement for somewhere
[00:19:08] like Madison Square Garden to be
[00:19:09] kicking lawyers out.
[00:19:10] I don't know, I'm not a lawyer myself,
[00:19:11] so I don't know what exactly
[00:19:13] the state of the law around that is.
[00:19:14] But I think the sort of civil liberties
[00:19:17] and privacy concerns there are pretty clear.
[00:19:21] I think this problem of sort of
[00:19:24] an existing set of technology
[00:19:25] getting infused with more advanced capabilities,
[00:19:28] unbeknownst to the population at large,
[00:19:30] is certainly a trend.
[00:19:32] And one example that shook me up
[00:19:34] is a video went viral recently
[00:19:36] of a security camera from a coffee shop,
[00:19:38] which showed a view of a cafe
[00:19:39] full of people and baristas.
[00:19:41] And basically, over the heads of the customers
[00:19:43] it showed the amount of time they'd spent at the cafe,
[00:19:45] and then over the baristas,
[00:19:47] how many drinks they'd made.
[00:19:49] And then, you know, so what does this mean?
[00:19:50] Like ostensibly the business can, one,
[00:19:52] track who is staying on their premises for how long,
[00:19:55] and learn a lot about customer behavior
[00:19:57] without the customers' knowledge or consent.
[00:20:00] And then, number two,
[00:20:01] the business can track how productive
[00:20:03] its workers are and could potentially fire,
[00:20:05] let's say, less productive baristas.
[00:20:07] Let's talk about the problems and the risk here.
[00:20:09] And like, how is this legal?
[00:20:11] I mean, the short version is,
[00:20:12] and this comes up again and again and again,
[00:20:14] if you're doing AI policy,
[00:20:15] the US has no federal privacy laws.
[00:20:17] Like there are no rules on the books
[00:20:20] for how companies can use data.
[00:20:23] The US is pretty unique
[00:20:24] in terms of how few protections there are
[00:20:25] for what kinds of personal data,
[00:20:27] and in what ways they're protected.
[00:20:28] Efforts to make laws have just failed
[00:20:30] over and over and over again,
[00:20:31] but there's now this new effort
[00:20:33] that people think might actually have a chance.
[00:20:34] So who knows,
[00:20:35] maybe this problem is on the way to getting solved.
[00:20:37] But at the moment it's a big hole for sure.
[00:20:39] And I think step one
[00:20:40] is making people aware of this, right?
[00:20:42] Because people have, to your point,
[00:20:43] heard about online tracking,
[00:20:44] but having that same set of analytics
[00:20:46] in like the physical space in reality,
[00:20:48] it just feels like the Rubicon has been crossed
[00:20:51] and we don't really even know that's what's happening
[00:20:53] when we walk into whatever grocery store.
[00:20:55] I mean, again, yeah.
[00:20:56] And again, it's about sort of the scale
[00:20:59] and the ubiquity of this.
[00:21:01] Because again, it could be like your favorite barista
[00:21:05] knows that you always come in
[00:21:06] and you sit there for a few hours on your laptop
[00:21:08] because they've seen you do that a few weeks in a row.
[00:21:10] That's very different to this data
[00:21:13] being collected systematically
[00:21:14] and then sold to data vendors all around the country
[00:21:17] or outside the country,
[00:21:18] and used for all kinds of other things.
[00:21:20] So again, I think we have these sort of intuitions
[00:21:24] based on our real world person-to-person interactions
[00:21:26] that really just break down
[00:21:27] when it comes to sort of the size of data
[00:21:29] that we're talking about here.
[00:21:31] Hey, TED Tech listeners.
[00:21:33] We're supported by our friends at Working Smarter,
[00:21:36] a new podcast from Dropbox
[00:21:38] exploring the exciting potential of AI in the workplace.
[00:21:42] Working Smarter talks with founders,
[00:21:44] researchers and engineers
[00:21:45] about the things they're building
[00:21:47] and the problems they're solving
[00:21:48] with the help of the latest AI tools.
[00:21:51] Tools that can save them time,
[00:21:52] improve collaboration
[00:21:53] and create more space for the work that matters most.
[00:21:57] On Working Smarter,
[00:21:58] hear practical discussions about what AI can do
[00:22:01] so that you can work smarter too.
[00:22:03] Listen to Working Smarter on Apple Podcasts, Spotify
[00:22:06] or wherever you get your podcasts
[00:22:08] or visit workingsmarter.ai.
[00:22:12] I also want to talk about scams.
[00:22:14] So folks are being targeted by phone scams.
[00:22:17] They get a call from their loved ones.
[00:22:18] It sounds like their family members have been kidnapped
[00:22:20] and being held for ransom.
[00:22:22] In reality, some bad actor just used off-the-shelf AI
[00:22:26] to scrape their social media feeds for these folks' voices,
[00:22:29] and scammers can then use this
[00:22:30] to make these very believable hoax calls
[00:22:33] where people sound like they're in distress
[00:22:35] and being held captive somewhere.
[00:22:36] So we have reporting on this particular hoax now,
[00:22:39] but what's on the horizon?
[00:22:41] What's like keeping you up at night?
[00:22:43] I mean, I think that the obvious next step
[00:22:45] would be with video as well.
[00:22:46] I mean, definitely if you haven't already gone
[00:22:48] and talked to your parents or grandparents,
[00:22:50] anyone in your life who is not super tech savvy
[00:22:53] and told them like, you need to be on the lookout for this,
[00:22:56] you should go do that.
[00:22:57] I talk a lot about kind of policy
[00:22:59] and what kind of government involvement
[00:23:01] or regulation we might need for AI.
[00:23:02] I do think a lot of things we can just adapt to
[00:23:04] and we don't necessarily need new rules for.
[00:23:06] So I think, you know,
[00:23:07] we've been through a lot of different waves of online scams
[00:23:10] and I think this is the newest one
[00:23:11] and it really sucks for the people who get targeted by it.
[00:23:14] But I also expect that, you know,
[00:23:15] five years from now it will be something
[00:23:17] that people are pretty familiar with,
[00:23:18] and it will be a pretty small number of people
[00:23:20] who are still vulnerable to it.
[00:23:21] So I think the main thing is, yeah,
[00:23:23] be super suspicious of any voice.
[00:23:25] Definitely don't use voice recognition
[00:23:27] for like your bank accounts or things like that.
[00:23:29] I'm pretty sure some banks still offer that,
[00:23:31] you know, ditch that.
[00:23:32] Definitely use something more secure
[00:23:34] and yeah, be on the lookout for video scamming as well
[00:23:38] and for people, you know, on video calls who look real.
[00:23:40] I think there was recently just the other day
[00:23:42] a case of a guy who was on a whole conference call
[00:23:45] where there were a bunch of different AI-generated people
[00:23:47] all on the call and he was the only real person,
[00:23:49] got scammed out of a bunch of money.
[00:23:51] So that's coming.
[00:23:52] Totally, content-based authentication
[00:23:54] is on its last legs, it seems.
[00:23:56] Definitely.
[00:23:57] It's always worth like checking in with
[00:23:58] what is the baseline that we're starting with?
[00:24:00] And I mean, so for instance,
[00:24:03] a lot of things are already public
[00:24:04] and they don't seem to get misused.
[00:24:05] So like I think a lot of people's addresses
[00:24:08] are listed publicly.
[00:24:09] You know, we used to have literal white pages
[00:24:11] where you can look up someone's address
[00:24:13] and that mostly didn't result in terrible things happening.
[00:24:15] Or, you know, I even think of silly examples,
[00:24:17] like I think it's really nice that, with delivery drivers,
[00:24:20] or when you go to a restaurant
[00:24:21] to pick up food that you ordered, it's just there.
[00:24:24] All right, so let's talk about what we can actually do.
[00:24:26] It's one thing to regulate businesses
[00:24:28] like cafes and restaurants.
[00:24:30] It's another thing to rein in all the bad actors
[00:24:33] that could abuse this technology.
[00:24:35] Can laws and regulations actually protect us?
[00:24:38] Yeah, they definitely can.
[00:24:39] I mean, and they already are.
[00:24:40] Again, AI is so many different things
[00:24:42] that there's no one set of AI regulations.
[00:24:44] There's plenty of laws and regulations
[00:24:46] that already apply to AI.
[00:24:47] So there's a lot of concern about AI,
[00:24:50] you know, algorithmic discrimination, with good reason.
[00:24:53] But in a lot of cases, there are already laws on the books
[00:24:55] saying, you know, you can't discriminate
[00:24:56] on the basis of race or gender or sexuality
[00:24:58] or whatever it might be.
[00:25:00] And so in those cases,
[00:25:02] you don't even need to pass new laws
[00:25:04] or make new regulations.
[00:25:05] You just need to make sure
[00:25:06] that the agencies in question have,
[00:25:08] you know, the staffing they need.
[00:25:10] Maybe they need to have the exact authorities
[00:25:14] they have tweaked, in terms of
[00:25:15] who they're allowed to investigate
[00:25:16] or who they're allowed to penalize,
[00:25:17] things like that.
[00:25:18] There are already rules
[00:25:19] for things like self-driving cars.
[00:25:20] You know, the Department of Transportation
[00:25:22] is handling that.
[00:25:23] It makes sense for them to handle that.
[00:25:24] For AI and banking,
[00:25:26] there's a bunch of banking regulators
[00:25:27] that have a bunch of rules.
[00:25:29] So I think there's a lot of places
[00:25:30] where, you know, AI actually isn't fundamentally new
[00:25:33] and the existing systems that we have in place
[00:25:35] are doing an okay job at handling that,
[00:25:38] but they may need, again, more staff
[00:25:40] or slight changes to what they can do.
[00:25:42] And I think there are a few different places
[00:25:44] where there are kind of new challenges emerging
[00:25:47] at sort of the cutting edge of AI,
[00:25:49] where you have systems that can really do things
[00:25:51] that computers have never been able to do before,
[00:25:53] and questions about whether there should be rules around
[00:25:54] making sure that those systems
[00:25:56] are being developed and deployed responsibly.
[00:25:58] I'm particularly curious
[00:26:00] if there's something that you've come across
[00:26:02] that's really clever
[00:26:03] or like a model for what good regulation looks like.
[00:26:06] I think this is mostly still a work in progress,
[00:26:08] so I don't know that I've seen anything
[00:26:09] that I think really absolutely nails it.
[00:26:12] I think a lot of the challenge
[00:26:13] that we have with AI right now
[00:26:16] relates to how much uncertainty there is
[00:26:17] about what the technology can do,
[00:26:19] what it's going to be able to do in five years.
[00:26:21] You know, experts disagree enormously
[00:26:23] about those questions,
[00:26:24] which makes it really hard to make policy.
[00:26:26] So a lot of the policies that I'm most excited about
[00:26:28] are about shedding light on those kinds of questions,
[00:26:31] giving us a better understanding
[00:26:32] of where the technology is.
[00:26:34] So some examples of that
[00:26:37] are things like how
[00:26:39] President Biden signed this big executive order
[00:26:41] last October,
[00:26:42] which had all kinds of things in there.
[00:26:44] One example was a requirement
[00:26:46] that companies that are training
[00:26:48] especially advanced systems
[00:26:50] have to report certain information
[00:26:51] about those systems to the government.
[00:26:53] And so that's a requirement where
[00:26:54] you're not saying you can't build that model,
[00:26:56] can't train that model.
[00:26:58] You're not saying the government has to approve something.
[00:27:00] You're really just sharing information
[00:27:02] and creating kind of more awareness
[00:27:04] and more ability to respond
[00:27:05] as the technology changes over time,
[00:27:07] which is such a challenge for government,
[00:27:09] keeping up with this fast-moving technology.
[00:27:11] There's also been a lot of good movement
[00:27:13] towards funding
[00:27:15] like the science of measuring and evaluating AI.
[00:27:18] A huge part of the challenge
[00:27:20] with figuring out what's happening with AI
[00:27:22] is that we're really bad at actually
[00:27:24] just measuring how good is this AI system?
[00:27:26] How do these two AI systems compare to each other?
[00:27:28] Is one of them sort of quote unquote smarter?
[00:27:30] So I think there's been a lot of attention
[00:27:32] over the last year or two
[00:27:33] into funding and establishing within government
[00:27:37] better capabilities on that front.
[00:27:39] I think that's really productive.
[00:27:41] Okay, so policymakers are definitely aware of AI now,
[00:27:43] if they weren't before,
[00:27:45] and plenty of people are worried about it.
[00:27:47] They want to make sure it's safe, right?
[00:27:49] But that's not necessarily easy to do.
[00:27:52] And you've talked about this,
[00:27:54] how it's hard to regulate AI.
[00:27:56] So why is that?
[00:27:58] What makes it so hard?
[00:28:00] Yeah, I think there's at least three things
[00:28:01] that make it very hard.
[00:28:03] One thing is AI is so many different things,
[00:28:05] like we've talked about.
[00:28:07] It cuts across sectors,
[00:28:09] it has so many different use cases,
[00:28:11] it's really hard to get your arms around
[00:28:13] what it is, what it can do, what impacts it will have.
[00:28:15] A second thing is it's a moving target.
[00:28:17] So what the technology can do
[00:28:19] is different now than it was even two years ago,
[00:28:21] let alone five years ago, 10 years ago.
[00:28:23] And policymakers are not good
[00:28:25] at sort of agile policymaking.
[00:28:28] They're not like software developers.
[00:28:29] And then the third thing is
[00:28:31] no one can agree on how it's changing
[00:28:33] or how it's going to change in the future.
[00:28:35] If you ask five experts,
[00:28:37] you know, where the technology is going,
[00:28:39] you will get five completely different answers,
[00:28:41] often five very confident,
[00:28:43] completely different answers.
[00:28:45] So that makes it really difficult
[00:28:47] for policymakers as well,
[00:28:49] because they can't just get scientific consensus
[00:28:52] and take that and run with it.
[00:28:54] So I think maybe this kind of third factor
[00:28:57] is the one that I think is the biggest challenge
[00:28:59] for policy for AI,
[00:29:01] which is that for policymakers,
[00:29:03] it's very hard for them to tell
[00:29:05] who should they listen to,
[00:29:07] what problems should they be worried about,
[00:29:09] and how is that going to change over time?
[00:29:11] Speaking of who you should listen to,
[00:29:13] obviously, you know, the very large companies
[00:29:15] in this space have an incentive
[00:29:17] and there's been a lot of talk
[00:29:19] about regulatory capture.
[00:29:21] When you ask for transparency,
[00:29:23] why would companies give a peek
[00:29:25] under the hood of what they're building?
[00:29:27] They just cite this as proprietary.
[00:29:29] And they'd love a policy
[00:29:31] and institutional framework
[00:29:33] that is actually beneficial for them
[00:29:35] and sort of prevents any future competition.
[00:29:37] How do you get these powerful companies
[00:29:39] to like participate and play nice?
[00:29:41] Yeah, it's definitely very challenging
[00:29:43] for policymakers to figure out
[00:29:45] how to interact with those companies.
[00:29:47] Again, in part
[00:29:49] because they're lacking the expertise
[00:29:51] and the time to really dig into things
[00:29:53] in depth themselves.
[00:29:55] Like a typical Senate staffer
[00:29:57] might cover like, you know,
[00:29:59] infrastructure and education, you know,
[00:30:01] and that's like their portfolio.
[00:30:03] So they are scrambling.
[00:30:05] Like, they need outside help.
[00:30:07] So I think it's very natural
[00:30:09] that the companies do come in and play a role.
[00:30:11] And I also think there are plenty of ways
[00:30:13] that policymakers can really mess things up
[00:30:15] if they don't, you know,
[00:30:17] know how the technology works
[00:30:19] and they're not talking to the companies
[00:30:21] they're regulating about what's going to happen.
[00:30:23] The challenge, of course,
[00:30:25] is how do you balance that, getting external voices
[00:30:27] who are going to point out the things companies won't,
[00:30:29] to also be in these conversations?
[00:30:31] Certainly what we try to do at CSET,
[00:30:33] the organization I work at,
[00:30:35] we're totally independent and, you know,
[00:30:37] really just trying to work in the best interest
[00:30:39] of making good policy.
[00:30:41] The big companies obviously do need
[00:30:43] to have a seat at the table,
[00:30:45] but you would hope that they have,
[00:30:47] you know, a seat at the table
[00:30:49] and not 99 seats out of 100
[00:30:51] in terms of who policymakers
[00:30:53] are talking to and listening to.
[00:30:55] There also seems to be a challenge
[00:30:57] with enforcement, right?
[00:30:59] You can't really put that genie
[00:31:01] back in the bottle,
[00:31:03] nor can you really start, you know,
[00:31:05] moderating how this technology is used
[00:31:07] without, I don't know,
[00:31:09] like going full 1984
[00:31:11] and having a process on every single computer
[00:31:13] monitoring what they're doing.
[00:31:15] So how do we deal with this landscape
[00:31:17] where you do have, you know,
[00:31:19] closed source and open source,
[00:31:21] like various ways to access
[00:31:23] and build upon this technology?
[00:31:25] Yeah, I mean, I think there are
[00:31:27] a lot of intermediate things
[00:31:29] that we can do.
[00:31:31] There's things like, you know,
[00:31:33] Hugging Face, for example,
[00:31:35] is a very popular platform
[00:31:37] for open source AI models.
[00:31:39] So Hugging Face in the past
[00:31:41] has delisted models that are,
[00:31:43] you know, considered to be
[00:31:45] offensive or dangerous
[00:31:46] or whatever it might be.
[00:31:48] And that actually does meaningfully
[00:31:50] reduce kind of the usage
[00:31:52] of those models because
[00:31:54] Hugging Face's whole deal
[00:31:56] is to make them more accessible,
[00:31:57] so taking a model down
[00:31:59] makes it that much harder
[00:32:01] for most people
[00:32:03] to get to.
[00:32:05] And so, you know,
[00:32:07] there's a lot of things
[00:32:09] that can be done
[00:32:11] to reduce the use of
[00:32:13] those models.
[00:32:15] And I think that's a really
[00:32:17] important thing to think about
[00:32:19] when we're talking about
[00:32:21] open source AI.
[00:32:23] And then, you know,
[00:32:24] particularly on the kind of
[00:32:26] image and audio generation side,
[00:32:28] there are some really interesting
[00:32:30] initiatives underway,
[00:32:31] mostly being led by industry,
[00:32:33] around what gets called
[00:32:35] content provenance or content
[00:32:37] authentication, which is
[00:32:38] basically how do you know where
[00:32:40] this piece of content came from?
[00:32:42] How do you know if it's real?
[00:32:44] And that's a very rapidly
[00:32:45] evolving space and a lot of
[00:32:46] interesting stuff happening
[00:32:47] there. I think there's a good
[00:32:48] amount of promise, not for
[00:32:49] perfect solutions where we'll
[00:32:50] always know, is this real or
[00:32:51] is it fake, but for making
[00:32:52] it possible to say,
[00:32:53] this is fake,
[00:32:54] it was AI generated by this
[00:32:55] particular model, or this
[00:32:57] is real, it was taken on this
[00:32:58] kind of camera and we have the
[00:33:00] cryptographic signature for
[00:33:01] that. I don't think we'll ever
[00:33:02] have perfect solutions.
[00:33:03] And again, I think, you know,
[00:33:05] societal adaptation is just
[00:33:06] going to be a big part of the
[00:33:07] story.
[00:33:08] But I do think there's
[00:33:09] pretty interesting technical
[00:33:11] and policy options that
[00:33:13] can make a difference.
[00:33:14] Definitely. And even if you
[00:33:15] can't completely control, you
[00:33:17] know, the generation of this
[00:33:18] material, there are ways to
[00:33:20] drastically cap the
[00:33:21] distribution of it.
[00:33:22] And so, like, I think that
[00:33:24] reduces some of the harms
[00:33:25] there. Yeah.
[00:33:26] At the same time, labeling
[00:33:27] content that is synthetically
[00:33:29] generated, a bunch of
[00:33:30] platforms have started doing
[00:33:31] that. That's exciting because
[00:33:32] like, I don't think the
[00:33:33] average consumer should be a
[00:33:34] deepfake detection expert.
[00:33:36] Right. But really, like if
[00:33:37] there could be a technology
[00:33:39] solution to this, that feels a
[00:33:40] lot more exciting, which
[00:33:42] brings me to the future.
[00:33:43] I'm kind of curious in your
[00:33:44] mind, what's like the
[00:33:46] dystopian scenario and the
[00:33:47] utopian scenario in all of
[00:33:49] this? Let's start with a
[00:33:50] little bit of the
[00:33:52] dystopian scenario.
[00:33:53] What does a world look like
[00:33:54] with inadequate or bad
[00:33:56] regulations? Paint a picture
[00:33:57] for us.
[00:33:59] So many possibilities.
[00:34:01] I mean, I think there
[00:34:02] are worlds that are not that
[00:34:03] different from now where you
[00:34:04] just have automated systems
[00:34:05] doing a lot of things,
[00:34:06] playing a lot of important
[00:34:07] roles in society, in some
[00:34:08] cases doing them badly and
[00:34:09] people not having the ability
[00:34:11] to go in and question those
[00:34:12] decisions. There's obviously
[00:34:13] this whole discourse around
[00:34:15] existential risk from AI,
[00:34:16] et cetera, et cetera.
[00:34:17] Kamala Harris had a whole
[00:34:18] speech about like, you know,
[00:34:19] someone loses access to
[00:34:20] Medicare because of an
[00:34:21] algorithmic issue. Like, is
[00:34:22] that not existential for
[00:34:23] that elderly person,
[00:34:24] you know? So there
[00:34:26] are already people who are
[00:34:27] being directly impacted by
[00:34:29] algorithmic systems and AI
[00:34:31] in really serious ways.
[00:34:32] Even, you know, some of the
[00:34:33] reporting we've seen over the
[00:34:34] last couple months of how AI
[00:34:35] is being used in warfare,
[00:34:36] like, you know, videos of a
[00:34:38] drone chasing a Russian
[00:34:39] soldier around a tank and
[00:34:40] then shooting him. Like, I
[00:34:43] don't think we're in full
[00:34:44] dystopia, but there's sort
[00:34:45] of plenty of
[00:34:46] things to worry about
[00:34:47] already.
[00:34:48] Something I think I worry
[00:34:49] about quite a bit or that
[00:34:50] feels intuitively to me to
[00:34:52] be a particularly plausible
[00:34:53] way things could go is sort
[00:34:55] of what I think of as the
[00:34:57] WALL-E future. I don't know
[00:34:58] if you remember that movie.
[00:35:00] Oh, absolutely.
[00:35:01] With the little robot. And
[00:35:02] the piece that I'm talking
[00:35:03] about is not the like junk
[00:35:05] earth and whatever. The
[00:35:06] piece I'm talking about is
[00:35:07] the people in that movie.
[00:35:09] They just sit in their soft
[00:35:11] roll-around wheelchairs all
[00:35:12] day and, you know, have
[00:35:14] content and food and
[00:35:17] whatever to keep them
[00:35:18] happy. And I think what
[00:35:20] worries me about that is I
[00:35:21] do think there's a really
[00:35:22] natural gradient to go
[00:35:24] towards what people want in
[00:35:26] the moment and will, you
[00:35:27] know, will choose in the
[00:35:28] moment, which is different
[00:35:30] from what they, you know,
[00:35:32] will really find fulfilling
[00:35:33] or what will build kind of
[00:35:34] a meaningful life. And I
[00:35:36] think there's just really
[00:35:37] natural commercial
[00:35:38] incentives to build things
[00:35:39] that people sort of
[00:35:40] superficially want but then
[00:35:42] end up with this really
[00:35:43] kind of meaningless,
[00:35:45] shallow, superficial
[00:35:47] world. And potentially
[00:35:48] one where kind of most of
[00:35:49] the consequential decisions
[00:35:50] are being made by machines
[00:35:52] that have no concept of
[00:35:55] what it means to lead a
[00:35:56] meaningful life. And, you
[00:35:57] know, because how would we
[00:35:58] program that into them,
[00:35:59] when we
[00:36:00] struggle to kind of put our
[00:36:01] finger on it ourselves. So
[00:36:03] I think those kinds of
[00:36:04] futures, not where
[00:36:05] there's some, you know,
[00:36:06] dramatic big event, but
[00:36:09] just where we kind of
[00:36:10] gradually hand over more
[00:36:11] and more control of the
[00:36:13] future to computers that
[00:36:14] are more and more
[00:36:15] sophisticated but that
[00:36:16] don't really have any
[00:36:17] concept of meaning or
[00:36:19] beauty or joy or
[00:36:20] fulfillment or, you know,
[00:36:22] flourishing or whatever it
[00:36:23] might be. I hope we
[00:36:26] don't go down those paths
[00:36:27] but it definitely seems
[00:36:28] possible that we will.
[00:36:30] They can play to our
[00:36:31] hopes, wishes, anxieties,
[00:36:32] worries, all of that. Just
[00:36:33] give us like the junk food
[00:36:35] all the time, whether that's
[00:36:36] like in terms of
[00:36:37] nutrition or in terms of
[00:36:38] just like audiovisual
[00:36:39] content, and that could
[00:36:40] certainly end badly. Let's
[00:36:42] talk about the opposite of
[00:36:43] that, the utopian
[00:36:44] scenario. What does a
[00:36:45] world look like where
[00:36:46] we've got this perfect
[00:36:47] balance of innovation and
[00:36:49] regulation and society is
[00:36:51] thriving?
[00:36:52] I mean, I think a very
[00:36:53] basic place to start is
[00:36:54] can we solve some of the
[00:36:55] big problems in the world?
[00:36:56] And I do think that AI
[00:36:57] could help with those. So
[00:36:58] can we have a world
[00:37:00] without climate change, a
[00:37:01] world with much more
[00:37:02] abundant energy that is
[00:37:03] much more, you know,
[00:37:04] cheaper and therefore more
[00:37:05] people can have more
[00:37:06] access to it? Where, you
[00:37:08] know, we have better
[00:37:09] agriculture so there's
[00:37:11] greater access to food.
[00:37:12] And beyond that, you know,
[00:37:13] I think what I'm more
[00:37:14] interested in is setting,
[00:37:16] you know, our kids and our
[00:37:18] grandkids and our great
[00:37:19] grandkids up to be deciding
[00:37:20] for themselves what they
[00:37:21] want the future to look
[00:37:22] like from there rather than
[00:37:23] having kind of some
[00:37:24] particular vision of where
[00:37:26] it should go. But I
[00:37:28] absolutely think that AI
[00:37:29] has the potential to really
[00:37:30] contribute to solving some
[00:37:31] of the biggest problems
[00:37:32] that we kind of face as a
[00:37:33] civilization. It's hard to
[00:37:34] say that sentence without
[00:37:35] sounding kind of grandiose
[00:37:36] and, you know, trite. But
[00:37:38] I think it's true.
[00:37:43] So maybe to close things
[00:37:44] out, just like what can we
[00:37:45] do? You mentioned some
[00:37:46] examples of being aware
[00:37:48] of synthetically
[00:37:49] generated content. What
[00:37:50] can we as individuals do
[00:37:52] when we encounter, use, or
[00:37:54] even discuss AI? Any
[00:37:55] recommendations?
[00:37:57] I think my biggest
[00:37:58] suggestion here is just
[00:38:00] not to be intimidated by
[00:38:01] the technology and not to
[00:38:03] be intimidated by
[00:38:04] technologists. Like this
[00:38:05] is really a technology
[00:38:06] where we don't know what
[00:38:07] we're doing. The best
[00:38:08] experts in the world
[00:38:09] don't understand how it
[00:38:10] works. And so I think
[00:38:11] just, you know, if you
[00:38:12] find it interesting, being
[00:38:13] interested. If you think
[00:38:14] of fun ways to use it,
[00:38:15] use them. If you're
[00:38:16] worried about it, feel
[00:38:17] free to be worried. Like,
[00:38:18] you know, I think the main
[00:38:19] thing is just feeling like
[00:38:20] you have a right to your
[00:38:21] own take on what you want
[00:38:23] to happen with the
[00:38:24] technology. And no
[00:38:26] regulator, no, you know,
[00:38:29] CEO is ever going to have
[00:38:31] full visibility into all
[00:38:32] of the different ways
[00:38:33] that it's affecting, you
[00:38:34] know, millions and
[00:38:35] billions of people around
[00:38:36] the world. And so kind of,
[00:38:37] I don't know, trusting
[00:38:38] your own experience,
[00:38:39] exploring for yourself,
[00:38:40] and seeing what you think
[00:38:41] is, I think, the main
[00:38:42] suggestion I would have.
[00:38:43] It was a pleasure having
[00:38:44] you on, Helen. Thank you
[00:38:45] for coming on the show.
[00:38:46] Thanks so much. This is
[00:38:47] fun.
[00:38:48] So maybe I bought into
[00:38:52] the story that played out
[00:38:53] on the news and on X, but
[00:38:55] I went into that
[00:38:56] interview expecting Helen
[00:38:57] Toner to be more of an
[00:38:59] AI policy maximalist. You
[00:39:01] know, the more laws, the
[00:39:02] better, which wasn't at
[00:39:04] all the person I found
[00:39:05] her to be. Helen sees a
[00:39:07] place for rules, a place
[00:39:08] for techno-optimism, and
[00:39:10] a place for society to
[00:39:11] just roll with it, adapting
[00:39:13] to the changes as they
[00:39:14] come. It's a balance. Policy
[00:39:17] doesn't have to mean being
[00:39:19] heavy handed and
[00:39:20] hamstringing innovation.
[00:39:21] It can just be a check
[00:39:23] against perverse economic
[00:39:24] incentives that are really
[00:39:26] not good for society. And
[00:39:27] I think you'll agree. But
[00:39:29] how do you get good
[00:39:30] rules? A lot of people in
[00:39:32] tech are going to say,
[00:39:33] you don't know shit. They
[00:39:34] know the technology the
[00:39:35] best, the pitfalls, not
[00:39:37] the lawmakers. And Helen
[00:39:39] talked about the average
[00:39:40] Washington staffer who
[00:39:41] isn't an expert, doesn't
[00:39:43] even have the time to
[00:39:44] become an expert. And yet
[00:39:46] it's on them to craft
[00:39:47] regulations that govern
[00:39:49] AI for the benefit of all
[00:39:51] of us. Technologists have
[00:39:53] the expertise, but they've
[00:39:54] also got that profit
[00:39:55] motive. Their interests
[00:39:57] aren't always going to be
[00:39:58] the same as the rest of
[00:39:59] ours. You know, in tech
[00:40:00] you'll hear a lot of
[00:40:01] regulation bad, don't
[00:40:03] engage with regulators.
[00:40:04] And I get the distrust.
[00:40:06] Sometimes regulators do
[00:40:08] not know what they're
[00:40:09] doing. India recently put
[00:40:11] out an advisory saying
[00:40:12] every AI model deployed
[00:40:14] in India first had to be
[00:40:15] approved by regulators.
[00:40:17] Totally unrealistic. There
[00:40:19] was a huge backlash there
[00:40:20] and they've since reversed
[00:40:21] that decision. But not
[00:40:23] engaging with government
[00:40:24] is only going to give us
[00:40:25] more bad laws. So we've got
[00:40:27] to start talking, if
[00:40:29] only to avoid that
[00:40:30] WALL-E dystopia.
[00:40:33] Okay, before we sign off
[00:40:34] for today, I want to turn
[00:40:36] your attention back to the
[00:40:37] top of our episode. I
[00:40:39] told you we were going to
[00:40:40] reach out to Sam Altman
[00:40:41] for comments. So a
[00:40:43] couple of hours ago, we
[00:40:44] shared a transcript of
[00:40:45] this recording with Sam
[00:40:46] and invited him to
[00:40:47] respond. We've just
[00:40:49] received a response from
[00:40:50] Bret Taylor, chair of
[00:40:51] the OpenAI board, and
[00:40:53] here's the statement in
[00:40:54] full. Quote, we are
[00:40:56] disappointed that
[00:40:57] Ms. Toner continues to
[00:40:58] revisit these issues. An
[00:41:00] independent committee of
[00:41:01] the board worked with the
[00:41:02] law firm WilmerHale to
[00:41:03] conduct an extensive
[00:41:04] review of the events of
[00:41:05] November. The review
[00:41:06] concluded that the prior
[00:41:07] board's decision was not
[00:41:09] based on concerns
[00:41:10] regarding product safety
[00:41:11] or security, the pace of
[00:41:13] development, OpenAI's
[00:41:14] finances, or its
[00:41:16] statements to investors,
[00:41:17] customers, or business
[00:41:18] partners. Additionally,
[00:41:20] over 95% of employees,
[00:41:22] including senior
[00:41:23] leadership, asked for
[00:41:24] Sam's reinstatement as
[00:41:25] CEO and the resignation
[00:41:26] of the prior board. Our
[00:41:28] focus remains on moving
[00:41:29] forward and pursuing
[00:41:30] OpenAI's mission to
[00:41:31] ensure AGI benefits all
[00:41:33] of humanity. End quote.
[00:41:35] We'll keep you posted if
[00:41:37] anything unfolds. The TED
[00:41:39] AI Show is a part of the
[00:41:44] TED Audio Collective and
[00:41:46] is produced by TED with
[00:41:47] Cosmic Standard. Our
[00:41:49] producers are Ella Fetter
[00:41:51] and Sarah McCray. Our
[00:41:52] editors are Banban Cheng
[00:41:54] and Alejandra Salazar. Our
[00:41:56] showrunner is Ivana
[00:41:57] Tucker, and our associate
[00:41:58] producer is Ben Montoya.
[00:42:00] Our engineer is Aja
[00:42:01] Pilar Simpson, our
[00:42:03] technical director is
[00:42:04] Jacob Winik, and our
[00:42:05] executive producer is
[00:42:06] Eliza Smith. Our fact
[00:42:08] checkers are Julia
[00:42:09] Dickerson and Dan
[00:42:10] Kalachi. And I'm your
[00:42:12] host, Bilawal Sidhu. See
[00:42:14] y'all in the next one.

