How to govern AI — even if it's hard to predict | Helen Toner
TED Tech · July 26, 2024 · 15:46 · 14.44 MB


No one truly understands AI, not even experts, says Helen Toner, an AI policy researcher and former board member of OpenAI. But that doesn't mean we can't govern it. She shows how we can make smart policies to regulate this technology even as we struggle to predict where it's headed — and why the right actions, right now, can shape the future we want. After the talk, Sherelle expands on what's needed to ensure that AI aligns with the best interests of humanity, without stifling innovation.

Learn more about our flagship conference happening this April at attend.ted.com/podcast


Hosted on Acast. See acast.com/privacy for more information.


[00:00:00] If we're not technologists coding and developing AI, should we still be allowed to weigh in on its usage, implementation or governance? This is a question that often comes up in conversations about AI, but I find it to be

[00:00:29] a little out of touch because the real world consequences of AI impact all of us, whether we're scientists, engineers or otherwise. As this new digital species becomes more commonplace, we deserve to have a say on how exactly it shapes our lives.

[00:00:47] This is TED Tech, a podcast from the TED Audio Collective. I'm your host, Sherelle Dorsey. Today we hear from Helen Toner. She's an esteemed AI policy researcher and a former board member of OpenAI.

[00:01:01] In this talk, she shares her insights on how to foster trust and transparency as AI systems evolve. She also proposes some compelling ideas on the safety measures AI companies should adopt for public protection. But before we dive into the talk, a quick break to hear from our sponsors.

[00:01:25] And now, Helen Toner, an AI policy researcher and a former board member of OpenAI, takes the TED stage. When I talk to people about artificial intelligence, something I hear a lot from non-experts is I don't understand AI. But when I talk to experts, a funny thing happens.

[00:01:45] They say, I don't understand AI and neither does anyone else. This is a pretty strange state of affairs. Normally, the people building a new technology understand how it works inside and out. But for AI, a technology that's radically reshaping the world around us, that's not so.

[00:02:07] Experts do know plenty about how to build and run AI systems, of course. But when it comes to how they work on the inside, there are serious limits to how much we know. And this matters because without deeply understanding AI, it's really difficult for us to know what

[00:02:24] it will be able to do next or even what it can do now. And the fact that we have such a hard time understanding what's going on with the technology and predicting where it will go next is one of the biggest hurdles we face in figuring

[00:02:38] out how to govern AI. But AI is already all around us, so we can't just sit around and wait for things to become clearer. We have to forge some kind of path forward anyway.

[00:02:53] I've been working on these AI policy and governance issues for about eight years, first in San Francisco, now in Washington, D.C. Along the way, I've gotten an inside look at how governments are working to manage this technology.

[00:03:09] And inside the industry, I've seen a thing or two as well. So I'm going to share a couple of ideas for what our path to governing AI could look like. But first, let's talk about what actually makes AI so hard to understand and predict.

[00:03:29] One huge challenge in building artificial intelligence is that no one can agree on what it actually means to be intelligent. This is a strange place to be in when building a new tech. When the Wright brothers started experimenting with planes, they didn't know how to build

[00:03:46] one, but everyone knew what it meant to fly. With AI, on the other hand, different experts have completely different intuitions about what lies at the heart of intelligence. Is it problem solving? Is it learning and adaptation? Are emotions or having a physical body somehow involved?

[00:04:09] We genuinely don't know, but different answers lead to radically different expectations about where the technology is going and how fast it'll get there. An example of how we're confused is how we used to talk about narrow versus general AI.

[00:04:25] For a long time, we talked in terms of two buckets. A lot of people thought we should just be dividing between narrow AI, trained for one specific task like recommending the next YouTube video, versus artificial general intelligence or AGI that could do everything a human could do.

[00:04:45] We thought of this distinction, narrow versus general, as a core divide between what we could build in practice and what would actually be intelligent. But then a year or two ago, along came ChatGPT.

[00:05:00] If you think about it, is it narrow AI trained for one specific task, or is it AGI and can do everything a human could do? Clearly, the answer is neither. It's certainly general purpose. It can code, write poetry, analyze business problems, help you fix your car.

[00:05:20] But it's a far cry from being able to do everything as well as you or I could do it. So it turns out this idea of generality doesn't actually seem to be the right dividing line between intelligent and not.

[00:05:33] And this kind of thing is a huge challenge for the whole field of AI right now. We don't have any agreement on what we're trying to build or on what the roadmap looks like from here. We don't even clearly understand the AI systems that we have today.

[00:05:47] Why is that? Researchers sometimes describe deep neural networks, the main kind of AI being built today, as a black box. But what they mean by that is not that it's inherently mysterious and we have no way of looking inside the box.

[00:06:02] The problem is that when we do look inside, what we find are millions, billions, or even trillions of numbers that get added and multiplied together in a particular way. What makes it hard for experts to know what's going on is basically just that there are too many

[00:06:18] numbers and we don't yet have good ways of teasing apart what they're all doing. There's a little bit more to it than that, but not a lot. So how do we govern this technology that we struggle to understand and predict?
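Toner's "just numbers added and multiplied" description can be made concrete with a toy sketch. This is purely illustrative: the layer sizes are arbitrary, and real models differ in many details, but even this tiny network contains hundreds of thousands of parameters, which is the scale problem she's pointing at.

```python
import numpy as np

# A deep neural network is, at bottom, arrays of numbers ("weights")
# that get multiplied and added with the input, layer by layer.
rng = np.random.default_rng(0)

layer_sizes = [784, 512, 512, 10]  # arbitrary toy sizes, for illustration only
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """One pass through the network: multiply, add, repeat."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # matrix multiply, then a simple nonlinearity
    return x

# Even this toy has 668,672 weights; frontier models have billions to trillions.
n_params = sum(w.size for w in weights)
print(n_params)

output = forward(rng.standard_normal(784))
print(output.shape)  # ten output numbers, each produced by all those weights
```

Every output is a perfectly inspectable arithmetic result; the difficulty is that no individual weight means anything on its own, which is what interpretability research tries to untangle.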

[00:06:34] I'm going to share two ideas, one for all of us and one for policymakers. First, don't be intimidated either by the technology itself or by the people and companies building it. On the technology, AI can be confusing but it's not magical.

[00:06:53] There are some parts of AI systems we do already understand well, and even the parts we don't understand won't be opaque forever. An area of research known as AI interpretability has made quite a lot of progress in the last

[00:07:06] few years in making sense of what all those billions of numbers are doing. One team of researchers, for example, found a way to identify different parts of a neural network that they could dial up or dial down to make the AI's answers happier or angrier,

[00:07:23] more honest, more Machiavellian, and so on. If we can push forward this kind of research further, then five or ten years from now, we might have a much clearer understanding of what's going on inside the so-called black box.

[00:07:38] And when it comes to those building the technology, technologists sometimes act as though if you're not elbows deep in the technical details, then you're not entitled to an opinion on what we should do with it.

[00:07:52] Expertise has its place, of course, but history shows us how important it is that the people affected by a new technology get to play a role in shaping how we use it, like the factory workers in the 20th century who fought for factory safety or the disability advocates

[00:08:08] who made sure the World Wide Web was accessible. You don't have to be a scientist or engineer to have a voice. Second, we need to focus on adaptability, not certainty. A lot of conversations about how to make policy for AI get bogged down in fights between,

[00:08:34] on the one side, people saying we have to regulate AI really hard right now because it's so risky, and on the other side, people saying, but regulation will kill innovation, and those risks are made up anyway.

[00:08:45] But the way I see it, it's not just a choice between slamming on the brakes or hitting the gas. If you're driving down a road with unexpected twists and turns, then two things that will

[00:08:56] help you a lot are having a clear view out the windshield and an excellent steering system. In AI, this means having a clear picture of where the technology is and where it's going and having plans in place for what to do in different scenarios.

[00:09:14] Concretely, this means things like investing in our ability to measure what AI systems can do. This sounds nerdy, but it really matters. Right now, if we want to figure out whether an AI can do something concerning like hack

[00:09:29] critical infrastructure or persuade someone to change their political beliefs, our methods of measuring that are rudimentary. We need better. We should also be requiring AI companies, especially the companies building the most advanced AI systems, to share information about what they're building, what their systems

[00:09:49] can do, and how they're managing risks. And they should have to let in external AI auditors to scrutinize their work so that the companies aren't just grading their own homework. A final example of what this can look like is setting up incident reporting mechanisms

[00:10:13] so that when things do go wrong in the real world, we have a way to collect data on what happened and how we can fix it next time, just like the data we collect on plane crashes and cyber attacks.

[00:10:26] None of these ideas are mine, and some of them are already starting to be implemented in places like Brussels, London, even Washington. But the reason I'm highlighting these ideas, measurement, disclosure, incident reporting,

[00:10:42] is that they help us navigate progress in AI by giving us a clearer view out the windshield. If AI is progressing fast in dangerous directions, these policies will help us see that. And if everything is going smoothly, they'll show us that too, and we can respond accordingly.

[00:11:03] What I want to leave you with is that it's both true that there's a ton of uncertainty and disagreement in the field of AI, and that companies are already building and deploying AI all over the place anyway, in ways that affect all of us.

[00:11:21] Left to their own devices, it looks like AI companies might go in a similar direction to social media companies, spending most of their resources on building web apps and fighting for users' attention. And by default, it looks like the enormous power of more advanced AI systems might stay

[00:11:38] concentrated in the hands of a small number of companies, or even a small number of individuals. But AI's potential goes so far beyond that. AI already lets us leap over language barriers and predict protein structures. More advanced systems could unlock clean, limitless fusion energy, or revolutionize

[00:11:58] how we grow food, or a thousand other things. And we each have a voice in what happens. We're not just data sources. We are users, we're workers, we're citizens. So as tempting as it might be, we can't wait for clarity or expert consensus to figure

[00:12:20] out what we want to happen with AI. AI is already happening to us. What we can do is put policies in place to give us as clear a picture as we can get of

[00:12:33] how the technology is changing, and then we can get in the arena and push for futures we actually want. Thank you. That was Helen Toner at TED 2024. I concur with Toner's viewpoint. We cannot afford to wait for a consensus on how AI should be deployed or managed.

[00:12:59] AI is already making significant societal impacts, including potentially harmful ones like deepfakes and misinformation. In a world where this technology is advancing at an unprecedented pace, the need for proactive governance is pressing, but this must be approached thoughtfully.

[00:13:17] Data and social scientist Dr. Rumman Chowdhury, who gave her talk on the TED stage in 2024, works to engage and educate people about AI. She knows what she's talking about. Dr. Chowdhury is the CEO and co-founder of Humane Intelligence, and she's also the responsible

[00:13:33] AI fellow at the Berkman Klein Center for Internet and Society at Harvard University. She believes we should not fear AI. She also thinks it's important to create guardrails for this tech. In 2023, Dr. Chowdhury testified about AI in front of the U.S. House of Representatives.

[00:13:52] She offered four solutions to advance trustworthiness in AI innovation, arguing that this was in the national best interest. First, we should support open access to AI data and models. This would enable independent audits of AI systems made by private companies, and it

[00:14:10] would create more public transparency about how AI works. Second, we should provide more funding and support to the National Institute of Standards and Technology to further develop the U.S. AI Safety Institute. This group would supplement and inform existing government efforts to regulate AI.

[00:14:28] Next, we should invest in independent ethical hacking, which involves contracting trusted groups who aren't associated with big tech companies to test out the strengths and weaknesses of their AI systems without any conflict of interest. And finally, we should develop a minimum requirement standard to evaluate whether or

[00:14:47] not certain AI systems should be adopted by the government. This requirement could consider whether an AI system is an improvement to the current system, evaluate its efficacy, and proactively combat adverse outcomes. Like Toner, Chowdhury does not ask that we stifle AI.

[00:15:07] But we do need to have checks and balances to ensure that AI is serving the best interests of humanity, not taking advantage of or taking away from it. And that's it for today. TED Tech is part of the TED Audio Collective.

[00:15:26] This episode was produced by Nina Byrd-Lawrence, edited by Alejandra Salazar, and fact-checked by Julia Dickerson. Special thanks to Maria Ladius, Vera de Grange, Daniela Bellarezo, and Raxane Hailech. I'm Sherelle Dorsey. Thanks for listening in.