We don't have to sacrifice our freedom for the sake of technological progress, says social technologist Divya Siddarth. She shares how a group of people helped retrain one of the world's most powerful AI models on a constitution they wrote — and offers a vision of technology that aligns with the principles of democracy, rather than conflicting with them.
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
[00:00:10] When we hear about AI in the news, it often feels like a powerful technology that has many unintended consequences.
[00:00:19] Conceived, built and trained by tech companies whose processes can sometimes be hidden behind a black box.
[00:00:27] But what if everyday people could retrain AI from the ground up to help improve one of our most important systems of government?
[00:00:36] I'm Sherrell Dorsey and this is TED Tech.
[00:00:41] Today we hear from Divya Siddarth, a political economist, social technologist and co-founder of the Collective Intelligence Project.
[00:00:51] And she believes that AI can be a transformative technology that can align with the principles of democracy.
[00:00:59] Let's listen in.
[00:01:02] This show is brought to you by Schwab.
[00:01:17] You're here because you like to keep a pulse on trends in technology.
[00:01:21] Well, now you can invest in what's trending in artificial intelligence, big data, robotic revolution and more with Schwab Investing Themes.
[00:01:31] It's an easy way to invest in ideas you believe in.
[00:01:34] Schwab's research process uncovers emerging trends, then their technology curates relevant stocks into themes.
[00:01:42] Choose from over 40 themes, buy all the stocks in a theme as is or customize to better fit your investing goals.
[00:01:51] All in a few clicks.
[00:01:53] Schwab Investing Themes is not intended to be investment advice or a recommendation of any stock or investment strategy.
[00:02:01] Learn more at schwab.com/thematicinvesting.
[00:02:08] AI is making waves in every field it touches.
[00:02:11] President Biden is now on TikTok and the election draws closer each day.
[00:02:16] With so much going on in the world, it is hard to keep up with it all, let me tell you.
[00:02:20] Hi, I'm Kai Ryssdal, co-host of Make Me Smart.
[00:02:22] It's a podcast from Marketplace and every weekday, Kimberly Adams and I break down the latest in business and the economy with short daily episodes to make it easy for you to stay in the know.
[00:02:33] Listen to Make Me Smart wherever you get your podcasts.
[00:02:38] Recently, I told someone my work is on democracy and technology.
[00:02:43] He turned to me and said, wow, I'm sorry.
[00:02:49] But I love my work.
[00:02:51] I know that we can build a world where technological marvels are directed towards people's benefit using their input.
[00:03:00] We have gotten so used to seeing democracy as a problem to be solved, but I see democracy as a solution, not as a problem.
[00:03:11] Democracy was once a radical political project, itself a cutting edge social technology.
[00:03:17] A new way to answer the very question we are faced with now in the era of artificial intelligence.
[00:03:24] How do we use our capabilities to live well together?
[00:03:30] We are told that transformative technologies like AI are too complicated or too risky or too important to be governed democratically.
[00:03:42] But this is precisely why they must be.
[00:03:46] If existing democracy is unequal to the task, our job is not to give up on it.
[00:03:51] Our job is to evolve it and to use technology as an asset to help us do so.
[00:03:58] Still, I understand his doubts.
[00:04:00] I never meant to build my life around new forms of democracy.
[00:04:05] I started out just really believing in the power of science.
[00:04:09] I was modifying DNA in my kitchen at 12.
[00:04:13] And when I got to Stanford as a computational biology major, I was converted to a new belief, technology.
[00:04:20] I truly believed in the power of tech to change the world, maybe like many of you.
[00:04:25] But I saw that the technologies that really made a difference were the ones that were built with and for the collective.
[00:04:33] Not the billions of dollars pumped into the 19th addiction-fueling social app,
[00:04:38] but the projects that combine creating something truly new with building in ways for people to access, benefit from and direct it.
[00:04:47] Instead of social media, think of the Internet, built with public resources on open standards.
[00:04:53] This is what brought me to democracy.
[00:04:56] Technology expands what we are capable of.
[00:05:00] Democracy is how we decide what to do with that capability.
[00:05:07] Since then, I've worked on using democracy as a solution in India, the US, the UK, Taiwan.
[00:05:14] I've worked alongside incredible collaborators to use democracy to help solve COVID, to help solve data rights.
[00:05:21] And as I'll tell you today, to help solve AI governance with policymakers around the world and cutting edge technology companies like OpenAI and Anthropic.
[00:05:33] How? By recognizing that democracy is still in its infancy.
[00:05:39] It is an early form of collective intelligence, a way to put together decentralized input from diverse sources and produce decisions that are better than the sum of their parts.
[00:05:52] That's why my fantastic co-founder, Saffron Huang, and I left our jobs at Google DeepMind and Microsoft to build new democratic governance models for transformative tech.
[00:06:03] I named our nonprofit The Collective Intelligence Project as a nod to the ever evolving project of building collective intelligence for collective flourishing.
[00:06:17] Since then, we've done just that. Building new collective intelligence models to direct artificial intelligence, to run democratic processes.
[00:06:26] And we've incorporated the voices of thousands of people into AI governance.
[00:06:30] Here are a few of the things we've learned.
[00:06:33] First, people are willing and able to have difficult, complex conversations on nuanced topics.
[00:06:40] When we asked people about the risks of AI they were most concerned about, they didn't reach for easy answers.
[00:06:47] Out of more than 100 risks put forward, the top-cited one: overreliance on systems we don't understand.
[00:06:55] We talked to people across the country from a veteran in the Midwest to a young teacher in the South.
[00:07:00] People were excited about the possibilities of this technology.
[00:07:04] There were specific things they wanted to understand about what models were capable of before seeing them deployed in the world.
[00:07:10] A lot more reasonable than many of the policy conversations that we're in.
[00:07:15] And importantly, we saw very little of the polarization we're always hearing about.
[00:07:20] On average, just a few divisive statements for hundreds of consensus statements.
[00:07:27] Even on the contentious issues of the day, like free speech or race and gender, we saw far more agreement than disagreement.
[00:07:34] Almost three quarters of people agree that AI should protect free speech.
[00:07:38] 90% agree that AI should not be racist or sexist.
[00:07:42] Only around 50% think that AI should be funny though, so there's still contentious issues out there.
[00:07:49] These last statistics are from our collective constitution project with Anthropic,
[00:07:54] where we retrained one of the world's most powerful language models on principles written by a thousand representative Americans,
[00:08:01] not AI developers or regulators or researchers at elite universities.
[00:08:07] We built on a way of training AI that relies on a written set of principles or a constitution.
[00:08:13] We asked ordinary people to co-write this constitution.
[00:08:15] We compared it to a model that researchers have come up with.
[00:08:18] When we started this project, I wasn't sure what to expect.
[00:08:22] Maybe the naysayers were right.
[00:08:24] AI is complicated.
[00:08:26] Maybe people wouldn't understand what we were asking them.
[00:08:29] Maybe we'd end up with something awful.
[00:08:32] But the people's model, trained on the co-written constitution,
[00:08:36] was just as capable and more fair than the model that the researchers had come up with.
[00:08:43] People with little to no experience in AI did better than researchers,
[00:08:48] who work on this full-time in building a fairer chatbot.
[00:08:53] Maybe I shouldn't have been surprised.
[00:08:55] As one of our participants from another process said,
[00:08:58] they may be experts in AI, but I have eight grandchildren.
[00:09:03] I know how to pick good values.
[00:09:05] If technology expands what we are capable of and democracy is how we decide what to do with that capability,
[00:09:11] here is early evidence that democracy can do a good job deciding.
[00:09:16] Of course, these processes aren't enough.
[00:09:18] Collective intelligence requires a broader reimagining of technology and democracy.
[00:09:23] That's why we're also working on co-ownership models for the data that AI is built on,
[00:09:28] which after all belongs to all of us,
[00:09:31] and using AI itself to create new and better decision-making processes,
[00:09:36] taking advantage of the things that language models can do that humans can't,
[00:09:39] like processing huge amounts of text input.
[00:09:43] Our work in Taiwan has been an incredible testbed for all of this.
[00:09:48] Along with Minister Audrey Tang and the Ministry of Digital Affairs,
[00:09:53] we are working on processes to ask Taiwan's millions of citizens
[00:09:57] what they actually want to see as a future with AI,
[00:10:01] and using that input not just to legislate, but to build.
[00:10:06] Because one thing that has already come out of these processes
[00:10:10] is that people are truly excited about a public option for AI,
[00:10:13] one that is built on shared public data that is reliably safe,
[00:10:18] that allows communities to access, benefit from and adjust it to their needs.
[00:10:23] This is what the world of technology could look like,
[00:10:27] steered by the many for the many.
[00:10:30] I often find that we accept unnecessary trade-offs when it comes to transformative tech.
[00:10:37] We are told that we might need to sacrifice democracy
[00:10:40] for the sake of technological progress.
[00:10:43] We have no choice but to concentrate power to keep ourselves safe from possible risks.
[00:10:48] This is wrong.
[00:10:50] It is impossible to have any one of these things,
[00:10:53] progress, safety or democratic participation, without the others.
[00:10:59] If we resign ourselves to only two of the three,
[00:11:02] we will end up with either centralized control or chaos.
[00:11:06] Either a few people get to decide, or no one does.
[00:11:10] These are both terrible outcomes and our work shows that there is another way.
[00:11:15] Each of our projects advances progress, safety and democratic participation
[00:11:21] by building cutting edge democratic AI models,
[00:11:24] by using public expertise as a way to understand diffuse risks,
[00:11:29] and by imagining co-ownership models for the digital commons.
[00:11:33] We are so far from the best collective intelligence systems we could have.
[00:11:39] If we started over on building a decision-making process for the world,
[00:11:43] what would we choose?
[00:11:45] Maybe we'd be better at separating financial power from political power.
[00:11:50] Maybe we'd create thousands of new models of corporations or bureaucracies.
[00:11:55] Maybe we'd build in the voices of natural elements or future generations.
[00:12:01] Here's a secret.
[00:12:03] In some ways, we are always starting from scratch.
[00:12:07] New technologies usher in new paradigms that can come with new collective intelligence systems.
[00:12:14] We can create new ways of living well together if we use these brief openings for change.
[00:12:22] The story of technology and democracy is far from over.
[00:12:25] It doesn't have to be this way.
[00:12:27] Things could be unimaginably better.
[00:12:30] As the Indian author Arundhati Roy once said,
[00:12:33] another world is not only possible.
[00:12:36] She is on her way.
[00:12:38] On a quiet day, I can hear her breathing.
[00:12:42] I can hear our new world breathing,
[00:12:44] one in which we shift the systems we have
[00:12:47] towards using the solution of democracy to build the worlds we want to see.
[00:12:53] The future is up to us.
[00:12:55] We have a world to win.
[00:12:57] Thank you.
[00:13:01] Support for this show comes from Factor.
[00:13:04] Are you looking for gourmet meals?
[00:13:06] And no muss, no fuss?
[00:13:08] Try meals that feature premium ingredients like filet mignon, shrimp.
[00:13:12] I'm getting hungry just thinking about this.
[00:13:14] Truffle butter, broccolini, and asparagus.
[00:13:17] Factor eliminates the hassle of prepping, cooking, and cleaning up.
[00:13:22] It was a dream.
[00:13:23] This is why I got so many boxes of it.
[00:13:25] Simply heat and savor the good stuff.
[00:13:28] It's tailored to your schedule.
[00:13:29] You can customize your weekly meals.
[00:13:31] There's so many to choose from.
[00:13:32] You can eat keto, you can eat calorie smart,
[00:13:34] you can eat vegan, vegetarian, your choice.
[00:13:37] It is your solution for fast, premium meals without the need for cooking.
[00:13:40] Head to factormeals.com/tedtech50
[00:13:43] and use code TEDTECH50 to get 50% off.
[00:13:46] That's code TEDTECH50 at factormeals.com/tedtech50 to get 50% off.
[00:13:58] I'm your host, Microsoft Vice Chair and President Brad Smith.
[00:14:01] Each episode features insight you won't find anywhere else
[00:14:04] from the center of the conversation surrounding emerging technologies like AI.
[00:14:08] Right now on the podcast, you can hear a special episode
[00:14:11] where Brad Smith lays out Microsoft's vision for a vibrant marketplace
[00:14:15] driving the new AI economy.
[00:14:17] To hear more, follow or subscribe to Tools and Weapons with Brad Smith
[00:14:21] wherever you get your podcasts.
[00:14:23] TED Tech is part of the TED Audio Collective.
[00:14:26] This episode was produced by Nina Lawrence,
[00:14:29] edited by Alejandro Salazar, and fact-checked by Julia Dickerson.
[00:14:34] Special thanks to Maria Ladias, Ferre de Grunge,
[00:14:37] Corey Hajime, Daniella Balareso, and Michelle Quint.
[00:14:41] I'm Sherrell Dorsey.
[00:14:43] Thanks for listening and talk to you again next week.