AI is shaping every aspect of our lives — but only a handful of tech giants are having a say in what this technology can do. So what’s going on with world governments? Bilawal sits down with geopolitical expert Ian Bremmer to unpack the UN’s just-released plan for “Governing AI for Humanity,” a report that focuses on the urgent need to guide AI towards helping everyone – rather than the powerful few – thrive. Together, they explore the complexities of AI’s rapid growth on a worldwide scale and take a clear-eyed look at the pivotal decisions facing us in the very near future.
For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
[00:00:01] [SPEAKER_02]: TED Audio Collective.
[00:00:07] [SPEAKER_03]: Usually The TED AI Show comes out every Tuesday, but we're dropping this episode a little off
[00:00:12] [SPEAKER_03]: schedule because something of global significance has happened and we wanted to be one of
[00:00:17] [SPEAKER_03]: the first to share it with you.
[00:00:19] [SPEAKER_03]: So without further ado let's get right into it.
[00:00:24] [SPEAKER_03]: Imagine a world controlled not by nations, but by a handful of tech companies driven primarily
[00:00:29] [SPEAKER_03]: by profits, holding the reins of artificial intelligence.
[00:00:32] [SPEAKER_03]: A techno-polar world, as geopolitical expert Ian Bremmer calls it, where algorithms
[00:00:38] [SPEAKER_03]: shape our lives, our economies and even our wars.
[00:00:43] [SPEAKER_03]: Now this isn't some sci-fi movie, it's the future we're barreling towards right now
[00:00:48] [SPEAKER_03]: and the question is who gets to write the rules?
[00:00:52] [SPEAKER_03]: Well, now the UN has stepped into the ring with a bold plan: global AI governance.
[00:00:59] [SPEAKER_03]: They've released this plan on September 19th, 2024 and we're giving you the scoop right
[00:01:05] here.
[00:01:06] [SPEAKER_03]: I had a chance to read an advance copy of the report and it gave me a lot to think
[00:01:10] [SPEAKER_03]: about.
[00:01:12] [SPEAKER_03]: So this plan is a set of guardrails that the UN wants to put in place for all sovereign
[00:01:16] [SPEAKER_03]: nations to adhere to, just like what we do around nuclear power or aviation.
[00:01:22] [SPEAKER_03]: Only this time the stakes are even trickier, because it's AI right?
[00:01:27] [SPEAKER_03]: It's a domain tightly controlled by tech giants.
[00:01:30] [SPEAKER_03]: It's an ever evolving technology that even the most seasoned researchers are still trying
[00:01:35] [SPEAKER_03]: to understand.
[00:01:36] [SPEAKER_03]: It's a tool that has the potential to both solve humanity's biggest challenges and unleash
[00:01:41] [SPEAKER_03]: worldwide chaos.
[00:01:43] [SPEAKER_03]: In other words, do global rules of conduct for such a powerful young technology, ruled by
[00:01:49] [SPEAKER_03]: tech alliances, even apply? And is the UN, an almost 80-year-old bureaucracy consisting of
[00:01:56] [SPEAKER_03]: sovereign nations, the right institution for the job?
[00:02:02] [SPEAKER_03]: I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live
[00:02:06] [SPEAKER_03]: and thrive in a world where AI is changing everything.
[00:02:16] [SPEAKER_02]: Imagine this.
[00:02:17] [SPEAKER_02]: In 2030, the CFO of a Fortune 100 company is a bot.
[00:02:21] [SPEAKER_02]: I'm Paul Michelman, and on Imagine This we'll be exploring possible futures and the implications
[00:02:26] [SPEAKER_02]: they hold for organizations.
[00:02:29] [SPEAKER_02]: Joining me will be BCG's top experts, as well as my co-host GENE, BCG's conversational
[00:02:34] [SPEAKER_01]: gen AI agent.
[00:02:35] [SPEAKER_01]: Blending human creativity with AI innovation, this podcast promises an unmatched listening
[00:02:41] [SPEAKER_01]: journey.
[00:02:42] [SPEAKER_01]: Join us on Imagine This from BCG.
[00:02:46] [SPEAKER_00]: Hi listeners, it's Sherrell, your host of TED Tech.
[00:02:48] [SPEAKER_00]: I want to share a podcast with you that I think you'll love.
[00:02:51] [SPEAKER_00]: Me, Myself, and AI is a podcast featuring AI leaders from organizations like NASA, Upwork,
[00:02:57] [SPEAKER_00]: GitHub, and Meta who explore with expert hosts what success looks like with generative
[00:03:01] [SPEAKER_00]: AI.
[00:03:02] [SPEAKER_00]: And what challenges and ethical considerations they face along the way.
[00:03:06] [SPEAKER_00]: Whether you're leading a strategic technology function or are simply curious about
[00:03:10] [SPEAKER_00]: what's behind the hype of AI, Me, Myself, and AI delivers actionable insights while sharing
[00:03:15] [SPEAKER_00]: the back stories of the people who make this technology work.
[00:03:18] [SPEAKER_00]: Listen to Me, Myself, and AI wherever you stream podcasts.
[00:03:26] [SPEAKER_03]: Today we're joined by Ian Bremmer, an American political scientist and one of the co-authors
[00:03:31] [SPEAKER_03]: of this report.
[00:03:32] [SPEAKER_03]: He's not just a policy wonk, but someone who's in the room with world leaders, feeling
[00:03:37] [SPEAKER_03]: the pulse of this tech revolution.
[00:03:39] [SPEAKER_03]: He's explored the very question of governance in the digital age and its shift in power
[00:03:44] [SPEAKER_03]: from sovereign states to big tech.
[00:03:46] [SPEAKER_03]: And he believes the UN is the only international body in a position to hammer out a credible
[00:03:52] [SPEAKER_03]: global consensus for this world-changing technology.
[00:03:56] [SPEAKER_03]: And hey, you're all going to want to buckle up for this one, because this is a ride into the
[00:04:00] [SPEAKER_03]: future we're all building, whether we're ready or not.
[00:04:10] [SPEAKER_03]: So, hey Ian, welcome to the show.
[00:04:12] [SPEAKER_04]: Thanks, Bilawal.
[00:04:12] [SPEAKER_04]: Well, looking forward to it.
[00:04:14] [SPEAKER_03]: Cool. So, this report comes at a pivotal time.
[00:04:17] [SPEAKER_03]: AI is obviously no longer this futuristic concept that we hear about in movies.
[00:04:21] [SPEAKER_03]: It is certainly in the zeitgeist, but I have to ask what prompted the UN to take the step
[00:04:26] [SPEAKER_03]: towards global AI governance at this specific moment in time?
[00:04:31] [SPEAKER_04]: I've been talking with the Secretary General about this issue for more than five years now.
[00:04:38] [SPEAKER_04]: He recognized when he became Secretary General, António Guterres, that there were two global issues
[00:04:45] [SPEAKER_04]: where governance was wholly inadequate that he needed to make utter priorities of his time.
[00:04:53] [SPEAKER_04]: One being climate, where it's happening, just happening way too slow and with far too limited resources
[00:05:00] [SPEAKER_04]: for the majority of people on the planet.
[00:05:03] [SPEAKER_04]: And second, disruptive technologies, notably artificial intelligence that was coming way too fast
[00:05:10] [SPEAKER_04]: and there was a complete absence of governance. The reason it took so long for it to happen
[00:05:17] [SPEAKER_04]: was in part because the UN is a bureaucratic creature and it takes a while to get things set up.
[00:05:23] [SPEAKER_04]: But secondly, because they didn't have a senior envoy for geotechnology in place.
[00:05:31] [SPEAKER_04]: But having said that, this has always been a top priority for the Secretary General and the fact
[00:05:39] [SPEAKER_04]: that you then had billions of dollars being poured into this technology with not just enormous hype,
[00:05:48] [SPEAKER_04]: but real excitement around real use cases, changing the way humans interact with each other,
[00:05:57] [SPEAKER_04]: changing the way we innovate, changing the way society and the global economy works,
[00:06:03] [SPEAKER_04]: meant that an absence of global governance was going to leave a lot on the table
[00:06:08] [SPEAKER_04]: and was also likely to get us in real trouble. So the urgency has clearly stepped up
[00:06:14] [SPEAKER_04]: in the past, I'm going to say 18 months. And in that regard, I think that this report is landing
[00:06:21] [SPEAKER_04]: at a really critical time.
[00:06:21] [SPEAKER_03]: Yeah, one might even say it's coming at just the right time, because you're
[00:06:26] [SPEAKER_03]: totally right, especially with, let's call it, the ChatGPT moment or the large language model moment,
[00:06:31] [SPEAKER_03]: where certainly ML has been on a steady trajectory, but really, you know, the fact that the average
[00:06:37] [SPEAKER_03]: person in the world can now start experiencing this technology seems like a more recent phenomenon,
[00:06:42] [SPEAKER_03]: from machines that can just understand to machines that can create. And so, along with
[00:06:47] [SPEAKER_03]: that, sort of this bucket of technologies that requires governance, you know, the report points out
[00:06:53] [SPEAKER_03]: that shared understanding of the opportunities and risks is key. If you have to distill down for
[00:06:58] [SPEAKER_03]: our audience, just a couple of the key points in terms of what the opportunities and risks are when
[00:07:03] [SPEAKER_04]: it comes to this technology, what would they be? Well, the opportunities you mentioned that the
[00:07:08] [SPEAKER_04]: average person can now see can now use artificial intelligence in a way that it affects their daily
[00:07:15] [SPEAKER_04]: lives and they will engage with chatbots. They will use creative AI tools, they'll create art
[00:07:23] [SPEAKER_04]: with it, they'll create videos with it, they'll edit with it, they will brainstorm with it.
[00:07:29] [SPEAKER_04]: Anyone that has access to a smartphone or a computer now has access to AI in at least the limited
[00:07:34] [SPEAKER_04]: way that is transformative as a tool. It's a helpmate and it's about to become vastly more than
[00:07:40] [SPEAKER_04]: that. In some cases, it's a friend, it's a counselor, it can be an advisor, those are the
[00:07:45] [SPEAKER_04]: opportunities for individual human beings, but then you have the industrial, the corporate uses: you get
[00:07:56] [SPEAKER_04]: new batteries, you get more efficient ways of stacking traffic. You get all of that kind of thing,
[00:08:04] [SPEAKER_04]: and I've spoken with so many CEOs in global pharma, energy, retail, transport who are talking to me
[00:08:17] [SPEAKER_04]: today. This week, this month about the billions of dollars that they are already saving
[00:08:26] [SPEAKER_04]: from deploying AI in their industrial use cases. So the opportunities in my view, this is not
[00:08:35] [SPEAKER_04]: a technology that is being overly hyped. In my view, these are technologies that are still radically
[00:08:44] [SPEAKER_04]: underappreciated because most people don't yet know or see all the ways that they are about to
[00:08:51] [SPEAKER_04]: be deployed and are already being deployed. So that's the first point. The second point, on the
[00:08:59] [SPEAKER_04]: risk side, is that all of the companies (and they're mostly companies, mostly not governments)
[00:09:06] [SPEAKER_04]: that are driving the investment in AI are capitalists when it comes to their profits.
[00:09:15] [SPEAKER_04]: But they are socialists when it comes to any losses or costs that come from their products.
[00:09:23] [SPEAKER_04]: I don't know about you, but I would like them to be capitalists for both profits and losses. That's
[00:09:31] [SPEAKER_04]: how we got ourselves in trouble on climate change: we had large numbers of industries and
[00:09:38] [SPEAKER_04]: governments aligned with that industry that were very happy to be capitalists in making lots of money
[00:09:44] [SPEAKER_04]: from exploiting the natural wealth that exists in the planet, and were very happy to push the
[00:09:51] [SPEAKER_04]: losses off to be everyone's problem, which means the global south, your children, the poor.
[00:09:58] [SPEAKER_04]: And that's not capitalism. That is rapacious oligarchy. And so what we need is to ensure that
[00:10:08] [SPEAKER_04]: the negative externalities that come from deploying AI (like any jobs that are lost, like skill sets
[00:10:17] [SPEAKER_04]: that need to be redeployed, like disinformation and the use of AI by malevolent actors
[00:10:25] [SPEAKER_04]: in ways that are dangerous for our economy or national security) are addressed. And also to make sure
[00:10:34] [SPEAKER_04]: that the opportunities generated by AI are not simply taken advantage of by small numbers
[00:10:41] [SPEAKER_04]: of comparatively wealthy people who have the market serve them most effectively because they
[00:10:47] [SPEAKER_04]: have the deep pockets. Those risks will grow if they are not addressed, and they will not be addressed
[00:10:55] [SPEAKER_04]: if the marketplace is allowed to capture the governance model. Remember, a functioning free market
[00:11:03] [SPEAKER_04]: economy is one where corporations compete against each other in a level playing field with governments
[00:11:12] [SPEAKER_04]: and regulators that act as independent and fair arbiters. And that is not the way the American
[00:11:20] [SPEAKER_04]: economic system runs, and that's not the way the global economic system runs. And that's particularly
[00:11:27] [SPEAKER_04]: dangerous when you're deploying new technologies where there's a complete absence of
[00:11:32] [SPEAKER_04]: regulation. That is, I think, the context in which what the United Nations is presently trying to do
[00:11:41] [SPEAKER_04]: is operating.
[00:11:41] [SPEAKER_03]: You know, I think you very nicely outlined why this is sort of a double-edged sword,
[00:11:47] [SPEAKER_03]: right? You're making a brilliant point, which is that we're hearing about these large language
[00:11:51] [SPEAKER_03]: models that are largely trained off of publicly available data, but all the latent value
[00:11:55] [SPEAKER_03]: that is sitting in all these enterprises and even governments across the world? I mean, we haven't yet
[00:12:00] [SPEAKER_03]: begun to see the economic impact that's going to have. At the same time, yes, this technology could
[00:12:05] [SPEAKER_03]: absolutely obliterate the world a bit, so to speak. And the climate analogies are very interesting.
[00:12:11] [SPEAKER_03]: I want to do two things. I want to talk about the players here and then I want to talk about incentives.
[00:12:16] [SPEAKER_03]: You've stated in the past that we're headed towards this world that you call a techno-polar
[00:12:20] [SPEAKER_03]: world, right? Where these large tech companies are effectively sovereign in this digital world that
[00:12:25] [SPEAKER_03]: we're talking about. And yet, you know, it's important for incumbent sovereign powers to govern
[00:12:30] [SPEAKER_03]: both of these digital and physical worlds that we spend so much time in and that are getting
[00:12:34] [SPEAKER_03]: more and more interconnected. Could you break down the shift in sort of power dynamics that is taking place
[00:12:41] [SPEAKER_03]: and how AI plays into all of it?
[00:12:48] [SPEAKER_04]: Sure. I mean, when you think about artificial intelligence,
[00:12:48] [SPEAKER_04]: first of all, the bots and the models that are being created, that we're all using,
[00:12:56] [SPEAKER_04]: they are being developed largely, overwhelmingly, by private sector corporations who are driven by business
[00:13:08] [SPEAKER_04]: incentives, by profitability, and by the need to outcompete other companies in this space. That will
[00:13:15] [SPEAKER_04]: determine what they create unless and until they are incented in other, non-market ways
[00:13:23] [SPEAKER_04]: by governments and by other actors. And overwhelmingly, in the first few years, a couple years of
[00:13:30] [SPEAKER_04]: the explosion of AI, the governments are playing catch-up. There's no global governance model; it's
[00:13:37] [SPEAKER_04]: overwhelmingly just a small number of advanced industrial economies, and further, a lot of it is
[00:13:42] [SPEAKER_04]: catching up to the state of where AI is today, where you have no governance at all, as opposed to
[00:13:50] [SPEAKER_04]: thinking ahead to where AI is likely to be in three or five years which is a dramatically different
[00:13:56] [SPEAKER_04]: and more capable place. And the fact that the models, the data, the priorities, the incentives,
[00:14:06] [SPEAKER_04]: the actions are being determined by the companies as opposed to the governments does create an
[00:14:13] [SPEAKER_04]: environment which is technopolar. In other words when you talk about the AI environment at least
[00:14:19] [SPEAKER_04]: today in 2024, you're really not talking about an environment where governments are sovereign.
[00:14:26] [SPEAKER_04]: Now, I mean, ultimately of course governments have the ability to enforce laws
[00:14:34] [SPEAKER_04]: and these companies don't exist in the virtual world. They exist in localities, they have bank
[00:14:41] [SPEAKER_04]: accounts and they're not just all on crypto off the grid and they live in houses and they have
[00:14:48] [SPEAKER_04]: citizenship and so they absolutely can be governed but the de facto reality is that the decisions
[00:14:57] [SPEAKER_04]: that are being made about what AI is and what is done with it are being made overwhelmingly
[00:15:05] [SPEAKER_04]: by the technologists, by a small number of the technologists so they're the ones that are deciding.
[00:15:11] [SPEAKER_04]: And that means a couple of things. It means first of all if you are a government actor or a
[00:15:17] [SPEAKER_04]: multilateral organization comprised of government actors and you want to create effective governance,
[00:15:25] [SPEAKER_04]: you actually need to work with the private sector. You can't do this by yourself because you don't
[00:15:30] [SPEAKER_04]: have the knowledge, you won't move fast enough, you don't understand what needs to be done. They're
[00:15:35] [SPEAKER_04]: the ones doing it and if you don't do that they will de facto regulate themselves and that is a
[00:15:41] [SPEAKER_04]: challenging thing, because their interests are in treating citizens at best as consumers and at worst
[00:15:53] [SPEAKER_04]: as products. That didn't work out too well for social media, did it? Not as citizens. And
[00:16:00] [SPEAKER_04]: governments, at best, care about people as citizens. So, I mean, if the government's going to work for the
[00:16:07] [SPEAKER_04]: people, the government has to work with the private sector on AI. That is the reality of where we are today,
[00:16:13] [SPEAKER_04]: and what we have certainly learned with a lot of humility at the United Nations is that if you want
[00:16:22] [SPEAKER_04]: to start governing something effectively, you first need to define it. You need to understand it.
[00:16:27] [SPEAKER_04]: You need a common understanding of what AI is where it's going both the opportunities and the
[00:16:34] [SPEAKER_04]: concerns and this is where the UN was so effective on climate change and it took a long time
[00:16:40] [SPEAKER_04]: but we're now in a situation where almost every single member state no matter if they're poor or
[00:16:49] [SPEAKER_04]: rich, small or big, a dictatorship or a democracy or something in between, they all agree that we've had
[00:16:58] [SPEAKER_04]: 1.2 degrees Celsius of climate change. They all agree on how much carbon and methane is in the atmosphere.
[00:17:04] [SPEAKER_04]: They all agree on the implications of that for extreme weather conditions and loss of biodiversity,
[00:17:14] [SPEAKER_04]: impact of plastics in the oceans. You name it right? In other words all of the negative
[00:17:18] [SPEAKER_04]: externalities from globalization, the world agrees on that. Now they don't agree on what to do about
[00:17:24] [SPEAKER_04]: it. They don't agree on the resources. But my god, if you can agree on the problem and the opportunity,
[00:17:30] [SPEAKER_04]: then when you have resources, you'll deploy them more rationally when you have conversations
[00:17:35] [SPEAKER_04]: about global governance it's much more likely to move in the right direction. It's so much easier
[00:17:40] [SPEAKER_04]: to harness cooperation as opposed to zero-sum conflict when you all agree on what the challenge is.
[00:17:49] [SPEAKER_04]: So we are now at the point in the United Nations on global governance not where we're calling for
[00:17:56] [SPEAKER_04]: "we've got to create an authority that has the arm to compel behavior." It's not that. No,
[00:18:04] [SPEAKER_04]: we're at the point where we want to commonly define the opportunities and the risks
[00:18:12] [SPEAKER_04]: that come from the state of play of artificial intelligence right now and going forward because
[00:18:18] [SPEAKER_04]: doing that will give us the tools we need to govern. I think that is very important for
[00:18:24] [SPEAKER_03]: a technology as nebulous and hard to define, and the many associated steps that need to be taken in
[00:18:30] [SPEAKER_03]: concert to govern this effectively. And of course, the central point of what you're saying
[00:18:35] [SPEAKER_03]: seems to be that we cannot leave this to market forces, right, to influence the development and
[00:18:41] [SPEAKER_03]: deployment of this technology. We know exactly how that went with social media: not super well
[00:18:46] [SPEAKER_03]: for the end user, maybe not even well for society at all.
[00:18:52] [SPEAKER_04]: Clearly horribly for society, totally. I mean, social media as it stands right now is undermining the civic architecture
[00:18:59] [SPEAKER_04]: of our societies, and that is not the intention of social media. It is purely an unintended
[00:19:05] [SPEAKER_04]: but logical consequence of the business model.
[00:19:11] [SPEAKER_03]: Oh, 100%. If you're optimizing for engagement, and that's the dial you're moving things towards, funny things will happen. Now AI again presents
[00:19:17] [SPEAKER_03]: its own challenges and you're kind of alluding to this. So I have to ask what should be the guiding
[00:19:23] [SPEAKER_03]: principles for a technology that, as the report states: number one, is non-explainable by design; number two, isn't
[00:19:30] [SPEAKER_03]: still fully understood by experts; number three, is borderless by nature? How on earth do we govern
[00:19:36] [SPEAKER_03]: a technology like this at the global level?
[00:19:36] [SPEAKER_04]: Well, I think that what you want from a technology like
[00:19:45] [SPEAKER_04]: this, in addition to having the world come together to define the state of the climate today and how
[00:19:54] [SPEAKER_04]: it's been changed by humanity: the world has also come together to define goals. We have a Universal
[00:20:01] [SPEAKER_04]: Declaration of Human Rights and we have the Sustainable Development Goals. The world agrees that people
[00:20:08] [SPEAKER_04]: should have the opportunity to engage in productive labor. The world agrees that people should
[00:20:15] [SPEAKER_04]: have adequate shelter. The world agrees that people should have adequate food. The world agrees
[00:20:20] [SPEAKER_04]: that people should have adequate water. The world agrees people should have access to healthcare.
[00:20:26] [SPEAKER_04]: I mean, these are basic things, and yet the world agrees that there should be goals
[00:20:31] [SPEAKER_04]: for the 8 billion-plus people on the planet and that we should orient towards those goals. So the
[00:20:37] [SPEAKER_04]: baseline for artificial intelligence and governance should be how do you help
[00:20:43] [SPEAKER_04]: use and deploy these technologies in service of those goals. There's nothing really bigger
[00:20:51] [SPEAKER_04]: than that, right? It's not more complicated than that. Right now we are not actually on a path
[00:20:59] [SPEAKER_04]: to achieving those goals. In fact, since the pandemic we've actually backslid on things like
[00:21:05] [SPEAKER_04]: hunger and on forced migration. AI is a singular tool available to humanity that could help.
[00:21:16] [SPEAKER_04]: I think we'll help humanity achieve those goals if it is deployed in service of those goals.
[00:21:26] [SPEAKER_04]: There is nothing more noble and nothing more sustainable for our planet than to accomplish that.
[00:21:35] [SPEAKER_04]: And so I think those are your principles, and I think the goals of the UN high-level
[00:21:41] [SPEAKER_04]: panel on artificial intelligence governance should be in service of that.
[00:21:46] [SPEAKER_03]: It really is a question of objective functions and what you're saying is, hey there's already
[00:21:50] [SPEAKER_03]: global consensus on these things that we want to achieve. You know, Agenda 2030, the Sustainable
[00:21:55] [SPEAKER_03]: Development Goals. Let's put this new technology to bear on making a huge dent in all of
[00:22:03] [SPEAKER_04]: those goals as quickly as possible. And by the way, I think it's easier than climate in a fundamental
[00:22:09] [SPEAKER_04]: way, and it makes me very excited, which is: with climate change, there were an awful lot of very rich,
[00:22:17] [SPEAKER_04]: very powerful actors, governmental, individual, corporate, banking, that really saw
[00:22:26] [SPEAKER_04]: that addressing the climate agenda, never mind advancing it, just addressing it, was
[00:22:32] [SPEAKER_04]: existentially threatening to their well-being. That is absolutely not true of AI.
[00:22:38] [SPEAKER_04]: You do not have people in industry out there that are saying, I need to pull the wool over collective
[00:22:45] [SPEAKER_04]: humanity's eyes and not have AI used in service of humanity. It's not that. It's actually that if
[00:22:52] [SPEAKER_04]: these people focus purely on the business models and the market agenda, they're just not going
[00:22:59] [SPEAKER_04]: to get to all of the other opportunities for AI because it's not a priority. So they won't
[00:23:04] [SPEAKER_04]: bother with the global south. They won't bother with a lot of the poor. They won't bother as much
[00:23:09] [SPEAKER_04]: with biodiversity, because there's so much low-hanging fruit for them to make money working with the
[00:23:22] [SPEAKER_04]: states. And so the point is, if you just create that knowledge, if you create those opportunities,
[00:23:31] [SPEAKER_04]: if you let everyone know... I'll give you another point, Bilawal. If you talk to the owners of these AI
[00:23:38] [SPEAKER_04]: companies, the technologists, the CEOs, the founders, the shareholders, don't you think they're all
[00:23:44] [SPEAKER_04]: looking for good things to do? I mean, it's not like they're not charitable people, right?
[00:23:51] [SPEAKER_03]: Altruism is a huge part of the culture in many of these companies too.
[00:23:56] [SPEAKER_04]: It really is, but they don't necessarily know what the right thing is to do. So if you don't have a global conversation with
[00:24:03] [SPEAKER_04]: common standards and with a common global scientific and policy understanding of here are the ways
[00:24:09] [SPEAKER_04]: that AI can have the greatest impact, then you are going to waste so much money. You're going to waste
[00:24:15] [SPEAKER_04]: so much good effort instead of having AI for humanity, you're going to have AI washing. You know,
[00:24:21] [SPEAKER_04]: you're going to have people just saying, hey, here's what I'm doing with AI and just make sure that
[00:24:26] [SPEAKER_04]: at least they have a good story to tell. I'm not interested in giving people good stories to tell,
[00:24:31] [SPEAKER_04]: right? So this is actually, this is something that is completely doable and it's not a collective
[00:24:37] [SPEAKER_04]: action problem. It is just a coordination issue.
[00:24:37] [SPEAKER_03]: I love that. Yeah, it's like the saying: if you don't
[00:24:45] [SPEAKER_03]: define the metrics to move, everyone's just going to be running in conflicting directions. Exactly.
[00:24:51] [SPEAKER_03]: And so by having sort of a common set of goals to move towards, I think that could be extremely
[00:24:56] [SPEAKER_03]: extremely impactful. You brought up the global south and the report emphasizes protecting the interests
[00:25:03] [SPEAKER_03]: of the global south, right? And a couple of key ingredients are outlined, I'm paraphrasing here,
[00:25:08] [SPEAKER_03]: but roughly I'd break it down into compute data and talent. What specific measures are being proposed
[00:25:14] [SPEAKER_03]: to ensure this equitable access to AI technologies and sort of prevent this widening of the digital
[00:25:19] [SPEAKER_03]: divide? There's so many languages that are already underrepresented in the modern,
[00:25:24] [SPEAKER_03]: large language models. But there's so much more than that too, right? Like how the heck do you even
[00:25:29] [SPEAKER_03]: get access to the Nvidia GPUs that you need to train or fine tune your models? So how is that
[00:25:35] [SPEAKER_04]: going to work? Well, I mean, first of all, it's going to start slowly and then hopefully it picks up.
[00:25:40] [SPEAKER_04]: So you've got two concrete recommendations that we hope to be taken up by the member states.
[00:25:46] [SPEAKER_04]: One is an AI capacity development network, and the idea is to have UN-affiliated
[00:25:53] [SPEAKER_04]: development centers that make expertise, compute, and AI training data available. And that means
[00:26:02] [SPEAKER_04]: you're going to have sandboxes to test AI solutions and learn by doing. It means you have
[00:26:10] [SPEAKER_04]: educational opportunities and resources for university students in the global south
[00:26:17] [SPEAKER_04]: and programs for researchers, social entrepreneurs and training public sector officials,
[00:26:23] [SPEAKER_04]: which they generally don't have right now, and a fellowship program for individuals from the global south
[00:26:30] [SPEAKER_04]: to spend time in top programs working on AI in the developed world. These things have to
[00:26:36] [SPEAKER_04]: be created. There's a second recommendation, connected to that, which is a global fund for AI.
[00:26:43] [SPEAKER_04]: In other words, you end up raising money from the private and the public sector that would
[00:26:48] [SPEAKER_04]: support this development capacity. Now, look, I'm not suggesting that this is easy or done overnight.
[00:26:55] [SPEAKER_04]: And when you have half of Africa that doesn't even have access to electricity,
[00:26:59] [SPEAKER_04]: capacity development for their data isn't going to get them on AI. Right? So there are
[00:27:06] [SPEAKER_04]: big issues out there that have to be addressed, but this is a unique tool. And the idea is that
[00:27:13] [SPEAKER_04]: every million dollars that goes into this global fund is going to be oriented in a way that the
[00:27:20] [SPEAKER_04]: world believes is most effective to get AI to be used by the world for the development of humanity.
[00:27:29] [SPEAKER_04]: It's inconceivable that an individual corporation would try to do this, or a government would try to do this, by themselves.
[00:27:34] [SPEAKER_03]: Well said. And I think you also make the point that the
[00:27:38] [SPEAKER_03]: incentives are aligned, which is often not the case. By providing this
[00:27:44] [SPEAKER_03]: baseline level of compute and capabilities for these markets, you're basically unlocking markets
[00:27:48] [SPEAKER_03]: for these companies as well. And so I think the incentives there are quite well aligned. I have
[00:27:54] [SPEAKER_03]: to ask one question related to that, which is about the sort of common pushback in Silicon Valley when
[00:27:59] [SPEAKER_03]: it comes to all of this stuff, which is that AI companies, particularly in Europe,
[00:28:04] [SPEAKER_03]: sort of feel that the existing regulatory landscape, GDPR and so forth, DMA, are already so
[00:28:11] [SPEAKER_03]: restrictive and it's the large companies that benefit from this. And hey, what's going to happen
[00:28:15] [SPEAKER_03]: to all the smaller companies and the smaller players? How do you sort of balance consumer protection
[00:28:21] [SPEAKER_04]: while also enabling innovation? How do you think about that? I do fear that at the platform level
[00:28:30] [SPEAKER_04]: the amount of compute that is required, the energy to service that compute, the water
[00:28:38] [SPEAKER_04]: to service that compute. I mean, you're talking about a small number of corporations and governments
[00:28:43] [SPEAKER_04]: that are capable of doing that. And the data that they have available to them allows them to
[00:28:50] [SPEAKER_04]: identify startup corporations very quickly that they can buy, support, or kill. And we've seen
[00:29:01] [SPEAKER_04]: that historically with Facebook slash Meta, we've seen it with a lot of organizations, and we're
[00:29:08] [SPEAKER_03]: seeing it play out right now. Some of the startups that raise hundreds of millions of dollars are
[00:29:12] [SPEAKER_04]: like we just can't keep paying this compute cost. They can't keep paying the compute cost. So look,
[00:29:16] [SPEAKER_04]: I mean, it's certainly there's an argument to be made that maybe if AI models go open source
[00:29:23] [SPEAKER_04]: that it's possible that you end up with a whole bunch of players that are using that AI in their
[00:29:31] [SPEAKER_04]: own local applications, their own sectoral applications, and that makes a big difference.
[00:29:37] [SPEAKER_04]: Most people in AI would not make a strong bet that that's where AI is going. Right? I mean,
[00:29:42] [SPEAKER_04]: Meta's making that bet. But most of the money is not heading in that
[00:29:47] [SPEAKER_04]: direction. Certainly, that's not true in China, which we haven't talked about as a wholly different
[00:29:51] [SPEAKER_04]: kettle of fish because it's largely self-contained. But on the industrial side, what China does
[00:29:55] [SPEAKER_04]: is going to be extremely important. Look, I mean, I could answer this question in a more dystopian way,
[00:30:01] [SPEAKER_04]: which is for most of my life, the reason that state capitalism was a failure is because you have
[00:30:12] [SPEAKER_04]: kleptocracy, inefficiency, nepotism that make those companies make very bad decisions. But there might
[00:30:28] [SPEAKER_04]: come a time when, with access to massive data sets, AI making decisions at the enterprise level to drive
[00:30:38] [SPEAKER_04]: corporate growth might prove to be more efficient than what the private sector can do by itself.
[00:30:46] [SPEAKER_04]: And if that is true, then you will have a very strong anti-competitive impulse because suddenly
[00:30:54] [SPEAKER_04]: economies of scale together with efficiencies driven by AI means that the best possible
[00:31:02] [SPEAKER_04]: outcome you can have economically is non-competitive. But if that's where we're heading,
[00:31:09] [SPEAKER_04]: then the role of effective regulation to ensure that individuals have rights that are safeguarded
[00:31:16] [SPEAKER_04]: becomes far more important. Right? Because consumers then no longer have the ability to choose.
[00:31:23] [SPEAKER_04]: They're going to be in an environment, an AI environment, that
[00:31:31] [SPEAKER_04]: controls everything. It's like a fish in water. Right, exactly. I mean,
[00:31:35] [SPEAKER_04]: I am of the view that consumers within five years
[00:31:44] [SPEAKER_04]: will probably use an AI ecosystem for an incredible amount of social, business, educational,
[00:31:57] [SPEAKER_04]: health and other relationships. And they'll probably all be the same AI because there'll be
[00:32:02] [SPEAKER_04]: advantages in having that data set together in one place. And you'll have preferences depending on
[00:32:08] [SPEAKER_04]: your economic well-being and what kind of service you want in everything else. Also your geography,
[00:32:12] [SPEAKER_04]: okay, but if that's the case, there would be real costs in you switching. You already feel this,
[00:32:18] [SPEAKER_04]: like I mean, if you're on Twitter and you've spent like a bunch of years building up your following
[00:32:22] [SPEAKER_04]: in your connections and suddenly you want to go off. Like, you have to look, you're locked in,
[00:32:27] [SPEAKER_04]: you're locked in, you have no rights. If that's where we're heading on the consumer side,
[00:32:32] [SPEAKER_04]: and if on the industrial side, we end up with, again, small number of players that have extraordinary
[00:32:39] [SPEAKER_04]: amounts of data that are able to drive the innovation because they have the data, the compute,
[00:32:45] [SPEAKER_04]: the energy, then the role of effective governance and regulation at the national and at the global
[00:32:52] [SPEAKER_04]: level becomes far more urgent. So I am not answering your question well because your question was
[00:33:00] [SPEAKER_04]: like, well, what happens to the small companies? I'm like, I'm more concerned, it's a small company's
[00:33:05] [SPEAKER_04]: up dying or you can have a whole bunch of small companies, but as they become big enough to be noticed,
[00:33:11] [SPEAKER_04]: they have to align what happens to the small creators. They can all create, but if you become big
[00:33:15] [SPEAKER_04]: enough that you make a move, you've got to sign up with a record company. You've got to sign up with,
[00:33:20] [SPEAKER_04]: like, you got to be a Facebook influencer Instagram, whatever it is, right? I mean, I'm saying,
[00:33:24] [SPEAKER_04]: I could see that happening in every sector with every corporation and startup. But if that happens,
[00:33:31] [SPEAKER_04]: then we've got to spend a lot more attention making sure that individuals still have these rights.
[00:33:37] [SPEAKER_03]: You know, you bring up this point where the current paradigm, at least that we're experiencing,
[00:33:41] [SPEAKER_03]: it does seem to be that the companies that control large amounts of compute, large amounts of data,
[00:33:46] [SPEAKER_03]: are going to be at a huge advantage. They're going to be creating these products and experiences
[00:33:50] [SPEAKER_03]: that really mediate our interactions with the digital world and increasingly the physical world too.
[00:33:55] [SPEAKER_03]: And so, you know, in this context, open source is brought up as this sort of counterbalance,
[00:34:02] [SPEAKER_03]: if you will. And it's a very hotly debated topic too, because there's a spectrum of sort
[00:34:07] [SPEAKER_03]: of open source and close source, right? Like, let's be honest, most of the open source
[00:34:11] [SPEAKER_03]: models in the wild today are more like open weights versus fully open source models where
[00:34:15] [SPEAKER_03]: you have access to the training data and the source code and all that good stuff. The report
[00:34:21] [SPEAKER_03]: advocates for this stance of meaningful openness. Can you just elaborate on that?
[00:34:27] [SPEAKER_04]: It means that you are hoping that people, governments and organizations around the world
[00:34:35] [SPEAKER_04]: aren't purely takers of AI models, values, preferences, and products that come from the biggest
[00:34:45] [SPEAKER_04]: global platforms in the Western China that you hope that there is meaningful openness.
[00:34:52] [SPEAKER_04]: Enough openness that individuals and enterprises and governments will be able to
[00:35:00] [SPEAKER_04]: actually deploy AI in meaningful ways for their benefit over time. That is the hope.
[00:35:07] [SPEAKER_04]: That's what you're calling for, but there's no power behind that. There's no money.
[00:35:12] [SPEAKER_04]: There's no authority behind that. That's just the collective view of 39 people around the
[00:35:18] [SPEAKER_04]: world that happen to have a various expertise on AI. And it's something that was fairly easy
[00:35:23] [SPEAKER_04]: to agree on. Now, I will say, because I am agnostic as to which way this is going to go,
[00:35:31] [SPEAKER_04]: as opposed to which way I would like to see it go. I mean, analysis is not preference here,
[00:35:36] [SPEAKER_04]: right? And that's a very important thing to keep in mind when you're talking about these issues.
[00:35:40] [SPEAKER_04]: There are different risks that come from these models. So if it turns out
[00:35:47] [SPEAKER_04]: that overwhelmingly we have closed models and a very small number of companies,
[00:35:54] [SPEAKER_04]: maybe that need to be aligned with either the United States or China that end up dominating the
[00:36:00] [SPEAKER_04]: AI space, then the critical governance framework you will need for national security and sustainability
[00:36:08] [SPEAKER_04]: is a series of arms control agreements in AI between the Americans, the Chinese and
[00:36:15] [SPEAKER_04]: relevant private sector actors. Like what you had between the Americans and the Soviets, but we only
[00:36:20] [SPEAKER_04]: got that after the 1962 Cuban missile crisis when we almost blew ourselves up. That's a serious problem.
[00:36:27] [SPEAKER_04]: Right? Because in the first 15 years, the Americans and the Soviets are developing
[00:36:33] [SPEAKER_04]: aboms and then age bombs and then missiles to launch home and all of this capacity. And we're
[00:36:38] [SPEAKER_04]: going as fast as we can to assert dominance and the Soviets as fast as they can to undermine that
[00:36:43] [SPEAKER_04]: dominance. And you know, where you don't want to talk to your enemy, you let them know what you're
[00:36:48] [SPEAKER_04]: doing because I mean, that's just going to give them information. It's going to constrain you. Well,
[00:36:52] [SPEAKER_04]: no, it turns out you really need to have those agreements because otherwise you will blow yourself up.
[00:36:57] [SPEAKER_04]: And so that is essential. I would say for the next US administration.
[00:37:04] [SPEAKER_04]: Secondarily, if it turns out that actually the drivers of cutting AI turns out more like not just
[00:37:14] [SPEAKER_04]: meaningful openness but radical openness where, you know, almost anybody in the world, good actor,
[00:37:22] [SPEAKER_04]: bad actor in between, tinkerer can take AI models and deploy it for their own purposes.
[00:37:30] [SPEAKER_04]: That means that the future of AI technology is going to be more like the global financial marketplace.
[00:37:37] [SPEAKER_04]: It'll be systemically important. Everyone will use it and everyone will need it for their own
[00:37:43] [SPEAKER_04]: purposes but also almost anyone could be a systemic threat to it. Right? And could bring it down
[00:37:50] [SPEAKER_04]: like kind of like, you know, you've got a couple of Yahoo's on Reddit talking about game
[00:37:55] [SPEAKER_04]: stop and before you know it, you've got like a challenge for the global markets. Right? And so then what
[00:38:01] [SPEAKER_04]: you need is a geo technology stability board, something like the financial stability board
[00:38:08] [SPEAKER_04]: that sees that okay, where we are actually not politicized, we're independent technocrats
[00:38:17] [SPEAKER_04]: with an intention of identifying potential threats to the AI marketplace, ensuring resilience
[00:38:28] [SPEAKER_04]: and responding to crises, bringing the bell and saying we've got a crisis and then responding
[00:38:34] [SPEAKER_04]: to them collectively as soon as humanly possible. That's a radically different kind of governance
[00:38:41] [SPEAKER_04]: than the US China Arms Control Governance. And we presently need to start thinking about both
[00:38:49] [SPEAKER_04]: because we're not actually sure which sort of methodology, if you will, is going to be most
[00:38:58] [SPEAKER_04]: successful in how AI will be deployed over the coming years. We just don't know. I like the
[00:39:05] [SPEAKER_03]: science analogy a lot, like you almost want these parties to sort of be regardless of sort of
[00:39:11] [SPEAKER_03]: military or diplomatic ties that open lines of communication are there because if you've got this
[00:39:17] [SPEAKER_03]: interconnected web of open source AI running the world, if you will, it could very well be
[00:39:22] [SPEAKER_03]: something where traditional diplomatic channels might be far too low bandwidth or almost
[00:39:27] [SPEAKER_04]: stifling to respond as quickly as possible. Well, it's fun. I'm sure you read ministry of the
[00:39:35] [SPEAKER_04]: books. Okay, so it's interesting. It's a book on climate and it's about kind of medium term
[00:39:44] [SPEAKER_04]: somewhat dystopian, massive wet bulb climate incident, millions dying in India, lots of
[00:39:52] [SPEAKER_04]: terrorism, echo terrorism and need to have global technocratic independent governance over carbon
[00:40:01] [SPEAKER_04]: emissions and related things because the world just has failed to help out. Fascinating concept
[00:40:08] [SPEAKER_04]: but probably not necessary for climate in part because the money is coming the technology is
[00:40:14] [SPEAKER_04]: there at scale we are moving to a post-carbon environment that is, you know, I mean it's later than
[00:40:20] [SPEAKER_04]: you'd like it to be with a lot of costs, but nonetheless it's going to happen in the next two generations.
[00:40:27] [SPEAKER_04]: AI may well require a ministry of the future. You may need, like you have central bankers today,
[00:40:34] [SPEAKER_04]: you may need to have ministers of AI play that kind of a role who are truly like in, they're
[00:40:42] [SPEAKER_04]: appointed, there are parts of national governments but they act in a politically independent way.
[00:40:49] [SPEAKER_04]: They're not driven by ideology. They're not driven by political cycle. They're driven
[00:40:53] [SPEAKER_04]: to ensure that the system globally continues to work and when you have a major financial crisis,
[00:41:01] [SPEAKER_04]: crisis, you know, the head of the people's bank of China and the head of the Fed have the same tools.
[00:41:07] [SPEAKER_04]: You've got fiscal tools, you've got monetary tools and you've got the same playbook.
[00:41:12] [SPEAKER_04]: You've got the same definitions and you all want to get the markets working and you really
[00:41:16] [SPEAKER_04]: want to avoid a global depression. I could easily see that being what the future of AI needs to
[00:41:22] [SPEAKER_04]: be like, but it's much harder. It's much harder because the marketplace is dealing with
[00:41:28] [SPEAKER_04]: exposures and movements of massive amounts of money. But AI, it's not just about digital
[00:41:36] [SPEAKER_04]: equivalent of money, right? It's like everything. It's national security, it's an election,
[00:41:45] [SPEAKER_04]: it's bio, it's drones, it's anything. So it is like the central bank issue but it would be
[00:41:56] [SPEAKER_04]: much more powerful and for people that are thinking about artificial general intelligence.
[00:42:03] [SPEAKER_04]: I mean, the reality is that humanity could probably never handle artificial general intelligence
[00:42:10] [SPEAKER_04]: unless you had vastly more powerful governance to deal with that and governance at a
[00:42:19] [SPEAKER_04]: technocratic and global level. I have to ask about, you know, you asked about the China
[00:42:28] [SPEAKER_03]: question. Like there's this ambitious vision for global AI governance that's been outlined here.
[00:42:33] [SPEAKER_03]: A common critique of the UN tends to be, how are you going to enforce these guidelines? Especially
[00:42:38] [SPEAKER_03]: with major tech companies that might play along right now but at some point might want to
[00:42:42] [SPEAKER_03]: shirt their own independence or countries like China that might be reluctant to comply.
[00:42:48] [SPEAKER_04]: Well, that's why I used the climate change example is you did not start with compliance. You started
[00:42:54] [SPEAKER_04]: with defining the challenge, the opportunity and the risk. So you had an intergovernmental panel
[00:43:01] [SPEAKER_04]: on climate change and that's what predated all of the, you know, committed even if non-binding,
[00:43:10] [SPEAKER_04]: you know, carbon reductions and targets and everything else. First you had to have a whole bunch
[00:43:16] [SPEAKER_04]: of countries that all agreed these are what the opportunities in child star. And that's what we
[00:43:21] [SPEAKER_04]: are starting with. We are starting with an international scientific panel on AI just like the
[00:43:27] [SPEAKER_04]: intergovernmental panel on climate change which is meant to regularly convene and define
[00:43:34] [SPEAKER_04]: the capabilities, the opportunities, the risks, the uncertainties, the gaps in scientific
[00:43:39] [SPEAKER_04]: consensus on AI development and trends. And the point is if you get the world to agree on that
[00:43:47] [SPEAKER_04]: and exchange standards and talk about opportunities and align with sustainable development goals.
[00:43:53] [SPEAKER_04]: Well then clearly you'll have your annual summits on AI and you'll get governments to say,
[00:43:59] [SPEAKER_04]: okay, let's apply some resources to this and let's make some commitments on this and you have
[00:44:04] [SPEAKER_04]: some countries that'll want to be leaders on it and they will and they'll, you know,
[00:44:08] [SPEAKER_04]: they'll push, they'll prod, they'll try to get other countries to come on board. Now the latter is
[00:44:12] [SPEAKER_04]: much harder and much slower than the former but you can't do the latter without the former.
[00:44:16] [SPEAKER_04]: My view which I think has has been one that has been shared really by consensus is the purpose
[00:44:23] [SPEAKER_04]: of this group is to help the world define this opportunity and this challenge. And by the
[00:44:32] [SPEAKER_04]: way, the opportunity is bigger than the challenge. We truly think of it that way and that is
[00:44:40] [SPEAKER_04]: this is not coming out there saying we want to have enforceable governance power over what
[00:44:50] [SPEAKER_04]: technology companies do and don't do. That was never going to be in the cards but also it
[00:44:54] [SPEAKER_04]: would be premature if you don't actually know what you're trying to accomplish. So don't
[00:45:01] [SPEAKER_04]: don't put the wagon before the horse get get the problem right first to find it. It makes a
[00:45:07] [SPEAKER_03]: ton of sense. Right. Like if you're not going to survey the landscape and build a common map,
[00:45:10] [SPEAKER_03]: you're obviously going to have disagreements from the get go. But what about more challenging
[00:45:15] [SPEAKER_03]: topics like for instance in light of the increasing use of AI in both the Ukraine Russian conflict.
[00:45:21] [SPEAKER_03]: Right. And the Israel Gaza war will position as the report take on developing international
[00:45:26] [SPEAKER_03]: norms or regulations specifically addressing the use of AI and warfare. Well, I mean there's already
[00:45:32] [SPEAKER_04]: been efforts at the United Nations to ban lethal autonomous drones and artificial intelligence
[00:45:38] [SPEAKER_04]: in those areas where national security is directly in the target. You know, you also have
[00:45:46] [SPEAKER_04]: the beginnings of the US China and South Korea talking about the nuclear decision-making process.
[00:45:54] [SPEAKER_04]: For example, my view is that governments, a small number of governments are probably going to be
[00:46:08] [SPEAKER_04]: ultimately this will need to be policed. And the way you polise is by having willingness to deter
[00:46:16] [SPEAKER_04]: and some kind of actual enforcement, sanctioning enforcement. They're going to be cost. I mean the
[00:46:22] [SPEAKER_04]: Iranians are not standing up to the non-proliferation treaty. If they're not developing
[00:46:28] [SPEAKER_04]: nukes, it's because they're concerned that the Israelis are going to blow them up with you
[00:46:32] [SPEAKER_04]: support if they actually decide to go full, full nuclear breakout. And that I think when it comes
[00:46:39] [SPEAKER_04]: to using AI in these weapon systems when you are talking about life and death for the Russians
[00:46:47] [SPEAKER_04]: and Ukrainians on the ground and they have access to those tools to make into weapons, they will make
[00:46:53] [SPEAKER_04]: them into weapons. So you're going to have to have the most powerful countries in the world
[00:46:59] [SPEAKER_04]: be willing not just to open but to govern, to exert force to ensure that AI is not used by
[00:47:09] [SPEAKER_04]: itself in targeting decisions is not used by itself in warfighting that takes decisions and life of
[00:47:17] [SPEAKER_04]: death completely out of the loop. We've already seen what happens as war becomes more and more
[00:47:23] [SPEAKER_04]: distant from humanity, is that it becomes much easier for people to pull the trigger and much
[00:47:28] [SPEAKER_04]: easier for worse to escalate and that's not where we want to be. Are you personally more worried
[00:47:34] [SPEAKER_03]: about digital attacks or political interference, via the internet or sort of these physical
[00:47:39] [SPEAKER_03]: manifestations of AI systems when it comes to drone warfare and things like that? I'm much more
[00:47:43] [SPEAKER_04]: worried about the vulnerability of economic systems to AI manipulation. I'd be much more worried about
[00:47:51] [SPEAKER_04]: financial markets crash or critical infrastructure crash especially when you already have antagonists
[00:47:58] [SPEAKER_04]: that are deeply inside other systems and just sitting there. In other words a crowd strike type
[00:48:04] [SPEAKER_04]: situation but instead of a mistake, a malevolent, such hit. I mean you saw what happened to cost billions
[00:48:12] [SPEAKER_04]: of dollars immediately and shut down all sorts of systems computers, air travel, you name it
[00:48:18] [SPEAKER_04]: you know around the West and that was just a mistake that you know it's the mistake in patch
[00:48:23] [SPEAKER_04]: what happens if that was done intentionally by an adversary whether a rogue actor or by a government
[00:48:29] [SPEAKER_04]: I think that's a much bigger concern but that's not my biggest concern. My biggest concern is
[00:48:32] [SPEAKER_04]: something actually much more prosaic. What is it? My biggest concern is that human beings are programmable
[00:48:40] [SPEAKER_04]: we are very susceptible to algorithms and you know when I was growing up, it was all about nature and
[00:48:48] [SPEAKER_04]: it was about you know sort of you know what your genetics say and then how that potentiality is
[00:48:55] [SPEAKER_04]: shaped by human beings around you individual human beings and collections of human beings. That's now
[00:49:03] [SPEAKER_04]: shifting to algorithms and you know social media is a part of this problem but when it's AI
[00:49:11] [SPEAKER_04]: and when human beings are developing principal relationships algorithmically and they're no longer
[00:49:18] [SPEAKER_04]: engaging principally with other human beings so nurture is being replaced by algorithms and by AI
[00:49:27] [SPEAKER_04]: I worry that we will lose our humanity. I think that's a much more from me it's a much more profound
[00:49:33] [SPEAKER_04]: it's a much more philosophical issue but it's also a really real near-term threat you know we made
[00:49:39] [SPEAKER_04]: a VL of AGI in 20 years but humans may become algorithmic in five and and I think we should resist that
[00:49:51] [SPEAKER_04]: I think that at the very least we need a lot of testing to understand the implications
[00:49:57] [SPEAKER_04]: before we want to run real-time experiments on humanity and right now what we're doing is running
[00:50:04] [SPEAKER_03]: real-time experiments. We need a smaller sandbox I think you're totally right it goes back to
[00:50:09] [SPEAKER_03]: technology mediating our interactions in the physical and digital world and that's right that gives us
[00:50:14] [SPEAKER_03]: that sort of the surface area for us to be impacted and influenced is greater than ever before
[00:50:19] [SPEAKER_03]: I'd argue this is already the case today that's in part why online advertising is just so darn effective
[00:50:25] [SPEAKER_03]: that's why magnetic warfare is is a thing right and your relationship still are principally with people
[00:50:30] [SPEAKER_03]: hmm right oh yeah now the one once we have intelligent entities that are sort of talking to us and
[00:50:37] [SPEAKER_04]: oh good lord. And then we've already you know basically seeing a i pass the touring test in the last year
[00:50:44] [SPEAKER_04]: and now we have a bunch of people in a i to tell us that that's not that important that's
[00:50:48] [SPEAKER_04]: moving the goalpost see it's moving the goalpost i i think actually it is essential for what matters
[00:50:53] [SPEAKER_04]: matters for us at least is humanity and so if we can no longer tell the difference between what is
[00:51:01] [SPEAKER_04]: and is not a human being and we start engaging more principally with non-human actors in service
[00:51:07] [SPEAKER_04]: of other goals that have nothing to do with human beings then we have just unwillingly unknowingly
[00:51:14] [SPEAKER_03]: given up something essential i'm deeply rage and see uncomfortable with that once this report is made
[00:51:22] [SPEAKER_03]: public you're obviously going to get a huge amount of feedback from various stakeholders right
[00:51:26] [SPEAKER_03]: what would you like the audience that's listening to this to do like to should they engage with
[00:51:30] [SPEAKER_03]: this report and how should they give their feedback to which all are proposing i think they should
[00:51:35] [SPEAKER_04]: certainly talk about the fact that AI is a tool that needs to be used for all of us you know if there's
[00:51:45] [SPEAKER_04]: any real message here it's that we are turning much more inwards at a nation by nation level
[00:51:55] [SPEAKER_04]: and we're not thinking about common collective humanity we have we are now inventing tools
[00:52:04] [SPEAKER_04]: that allow us to improve ourselves together and to redress a lot of the damage that we've done
[00:52:12] [SPEAKER_04]: to the planet i mean if i if i look at young people today we need to be honest with them you know
[00:52:19] [SPEAKER_04]: we've not been effective stewards of the planet and and we need them to make better decisions
[00:52:27] [SPEAKER_04]: that we have made i would argue this report has been an effort over the last year to try
[00:52:34] [SPEAKER_04]: to give young people an opportunity to make better decisions right and and that's the way
[00:52:42] [SPEAKER_03]: I want people to respond to this report awesome in thank you so much for joining us not
[00:52:47] [SPEAKER_03]: much okay humans insights are awake up call AI isn't just about code and algorithms it's a global
[00:52:59] [SPEAKER_03]: challenge that demands a global cooperation just like climate change or the interconnected web of financial
[00:53:05] [SPEAKER_03]: markets i mean think about it when a financial system teeters on the brink central bankers don't
[00:53:11] [SPEAKER_03]: care about borders or political alliances they pick up the phone because their fates are intertwined
[00:53:18] [SPEAKER_03]: their incentives align to prevent disaster that's the kind of urgency can cooperation we need
[00:53:24] [SPEAKER_03]: for AI what gives me hope is that this isn't some impossible challenge we have achieved this
[00:53:30] [SPEAKER_03]: level of cooperation before now the question is can we replicate it for AI before it's two
[00:53:37] [SPEAKER_03]: we see a world dominated by a few tech giants hoarding the power of AI or can we foster a scenario
[00:53:44] [SPEAKER_03]: where a thousand flowers bloom where AI empowers individuals and nations across the globe
[00:53:50] [SPEAKER_03]: in a truly interconnected yet decentralized AI ecosystem this conversation with the in has been a
[00:53:57] [SPEAKER_03]: reality check AI isn't just about futuristic hype it's about the choices we make today
[00:54:04] [SPEAKER_03]: this UN report is a vital blueprint to build a shared understanding on what exactly it is
[00:54:11] [SPEAKER_03]: that we're trying to regulate but the real work begins with us understanding the complexities
[00:54:17] [SPEAKER_03]: demanding transparency and pushing for responsible AI development and doing that without stifling
[00:54:24] [SPEAKER_03]: innovation and nimble startups the future of AI is not predetermined it's an interconnected tapestry
[00:54:31] [SPEAKER_03]: in a global story we're all writing together the Tedi I show is a part of the TED audio
[00:54:40] [SPEAKER_03]: collective and is produced by TED with cosmic standard our producers are Ben Montoya and Alex
[00:54:46] [SPEAKER_03]: Higgins our editors are Ben Ben Chang and Alejandra Salazar our showrunner is Ivana Tucker
[00:54:52] [SPEAKER_03]: and our engineer is Asia Polar Simpson our technical director is Jacob Winick
[00:54:57] [SPEAKER_03]: and our executive producer is Eliza Smith our researcher and fact checker is Christian Abarta
[00:55:03] [SPEAKER_03]: and I'm your host Belabel Sedu see y'all in the next one

