Human brains are often described as computers — machines that are "wired" to make decisions and respond to external stimuli in a way that's not so different from the artificial intelligence that we increasingly use each day. But the difference between our brains and the computers that drive AI is consciousness — our inner world, defined by experience and awareness. Anil Seth is a professor of cognitive and computational neuroscience at the University of Sussex. He studies human consciousness, and he's concerned about the way we've come to think about AI as conscious minds rather than useful tools. Anil and Bilawal sit down to discuss the differences between intelligence and consciousness, the possibility of AI becoming self-aware, and the dangers of assigning human-like traits to our AI assistants. For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
[00:00:00] TED Audio Collective.
[00:00:35] You know this question. How long will it take before some massive breakthrough, some kind of singularity emerges and suddenly AI becomes self-aware? Before AI becomes conscious? But we're getting way ahead of ourselves.
[00:00:51] Lately, reports from AI researchers suggest that AI models are not improving at the same rate as before and are hitting the limits of so-called scaling laws, at least as far as pre-training is concerned.
[00:01:03] There are also worries that we're running out of useful data, that these systems require better quality and greater amounts of data to continue growing at this exponential pace.
[00:01:14] The road to a machine that can think for itself is long, and it's starting to sound like it may be even longer than we think.
[00:01:21] For now, clever interfaces like ChatGPT's advanced voice mode, the one I experimented with in an earlier episode this season, help give some illusion of a human at the other end of this conversation with an AI.
[00:01:35] I was surprised by how much it actually delighted me, and even kind of tricked me, at least a tiny little bit, into feeling like ChatGPT was really listening. Like a friend would.
[00:01:46] The thing is, though, it's a slippery slope. We're building technology that is so good at emulating humans that we start ascribing human attributes to it.
[00:01:57] We start wondering, does this thing actually care? Is it actually conscious? And if not now, will it be at some point in the future?
[00:02:06] And by the way, what even is consciousness anyway?
[00:02:09] The answer is trickier than you might think. To unpack it, I spoke with someone who's been tackling this question from the inside out, from the perspective of the one thing we know is conscious, the human brain.
[00:02:26] One of my mentors, the philosopher Daniel Dennett, whom we sadly lost earlier this year.
[00:02:32] He said we should treat AI as tools rather than colleagues and always remember the difference.
[00:02:39] That's Anil Seth. He's a professor of cognitive and computational neuroscience at the University of Sussex.
[00:02:45] He studies human consciousness and wrote a great book about it. It's called Being You, A New Science of Consciousness.
[00:02:52] And that quote from his mentor, it's something that sticks with him.
[00:02:55] It sticks with me because I think we have this tendency to always project too much of ourselves into the technologies we build.
[00:03:04] I think this has been something humans have done over history.
[00:03:07] And it's always gotten us into trouble, because we tend to misunderstand the capabilities of the machines we build.
[00:03:16] And also, we tend to diminish ourselves in the process.
[00:03:20] And I think the recent explosion of interest in AI is a very prominent example of how we've fallen prey to this problem at this moment.
[00:03:29] So this is why Anil's on the show today.
[00:03:32] He's come to share why he thinks it's imperative we see AI as a tool, not as a friend.
[00:03:37] And why that difference matters to not only the future of this technology, but also the future of human consciousness.
[00:03:47] I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything.
[00:04:02] How will humans and machines work together in the future?
[00:04:05] We spend so much time discussing how the world's changing.
[00:04:10] It would be absolutely absurd to believe the role of the CEO is not.
[00:04:14] This is Imagine This, a podcast from BCG that helps CEOs consider possible futures for our world and their businesses.
[00:04:23] Listen wherever you get your podcasts.
[00:04:26] Add a little curiosity into your routine with TED Talks Daily, the podcast that brings you a new TED Talk every weekday.
[00:04:36] In less than 15 minutes a day, you'll go beyond the headlines and learn about the big ideas shaping your future.
[00:04:43] Coming up, how AI will change the way we communicate, how to be a better leader, and more.
[00:04:49] Listen to TED Talks Daily wherever you get your podcasts.
[00:04:54] So Anil, I've been thinking about how not long after we invented digital computers, we started referring to our human brains as computers.
[00:05:03] Obviously, there is a lot more to it than that.
[00:05:06] But what is helpful and not helpful about describing our brains as computers?
[00:05:10] It's clearly very helpful.
[00:05:12] I mean, my title, my academic title is Professor of Computational Neuroscience.
[00:05:17] So I'd be rather hypocritical to say that it was not a useful way of thinking to some extent.
[00:05:23] And there's a very lively debate, mainly in philosophy rather than neuroscience or in tech,
[00:05:30] about whether brains actually do computation as well as other things.
[00:05:36] In fact, the metaphor of the brain as a computer has clearly been very, very helpful.
[00:05:42] If you just look inside a brain, you find all these neurons and chemicals and all kinds of complex stuff.
[00:05:47] And computation gives us the language to think about what brains are doing.
[00:05:52] That means you don't have to worry so much about all that.
[00:05:56] And of course, at the beginning of AI, there was this idea that intelligence might be a matter of computation.
[00:06:02] Alan Turing famously asked whether machines can think.
[00:06:07] And universal Turing machines, which can perform any computation, were specified theoretically.
[00:06:14] And the idea that, well, that might be what the brain is doing becomes very appealing.
[00:06:18] Also at the birth of AI, Walter Pitts and Warren McCulloch realized that neural networks,
[00:06:24] these simple abstractions of artificial neurons that are connected to each other,
[00:06:30] that underpin a lot of the modern AI we have, actually serve as universal Turing machines.
[00:06:36] So we have this temptation, this idea to think, yeah, the brain is a network of neurons.
[00:06:41] Networks of neurons can be universal Turing machines.
[00:06:44] And these are very powerful things.
[00:06:46] So maybe the brain is a computer.
[00:06:48] But I think we're also seeing the limits of that metaphor and all the ways in which actually brains might perform computations,
[00:06:58] but they may also do other things.
[00:07:00] And fundamentally, we always get into trouble too when we confuse the metaphor for the thing itself.
[00:07:05] I love that.
[00:07:06] And I think a big chunk of that is also we talk so much about sort of the supercomputing clusters
[00:07:12] and just how fast technology is moving.
[00:07:14] And we're almost, you know, losing some appreciation for the intelligence that's inside our craniums.
[00:07:20] And to put it very plainly, how much more complex is the brain today compared to even the most advanced AI systems?
[00:07:28] I mean, it's a totally different thing.
[00:07:31] I think we really do the brain a great disservice if we think of it purely in terms of sort of number of neurons.
[00:07:38] But even then, there are 86 billion neurons in the human brain and a thousand times more connections.
[00:07:43] It's incredibly complicated, even at that level.
[00:07:46] Also, the brain is so intricate.
[00:07:50] Like the connectivity in one area might be slightly different from the connectivity in another area.
[00:07:55] There are also neurotransmitters washing around.
[00:07:59] The brain changes.
[00:08:00] Every time a neuron fires, synaptic connectivities change a little bit.
[00:08:04] It's not a stable architecture.
[00:08:08] And then there are all the glial cells and all the supporting gubbins that we often don't even think about,
[00:08:15] but are turning out to actually be significantly involved in the brain's function.
[00:08:20] There was a recent paper in Science, I think, that had this gargantuan, impressive effort
[00:08:27] to unpack in as much detail as possible one cubic millimeter of brain tissue in the human cortex.
[00:08:35] In this one cubic millimeter, you've got 150 million connections, nearly 60,000 cells.
[00:08:41] And just storing all that data on a standard computer was an enormous undertaking.
[00:08:47] And even this is just a, you know, it's not everything, right?
[00:08:50] This is just a very detailed model.
[00:08:52] The brain is very complex.
[00:08:54] Very complex.
[00:08:55] That is quite amazing.
[00:08:56] What's also interesting about the sheer complexity in the brain is the brain doesn't sit in a vat, right?
[00:09:01] At least not usually.
[00:09:02] And of course, the brain works in concert with the rest of the body.
[00:09:05] Does that aspect of being embodied give humans any advantages over AI systems?
[00:09:11] I think it depends what you want the system to do.
[00:09:14] You're absolutely right.
[00:09:15] Brains didn't evolve in isolation.
[00:09:18] They evolved in response to certain selection pressures.
[00:09:21] What were those selection pressures?
[00:09:22] They weren't to write computer programs or write poetry or solve complex problems.
[00:09:28] Fundamentally, brains are in the business of keeping the body alive and later on moving the body around.
[00:09:35] So control of action, things like that.
[00:09:37] And those imperatives are, to me, fundamental to understanding what kinds of things brains are:
[00:09:45] they are part of the body.
[00:09:46] They're not some kind of meat computer that moves the body around from one place to another.
[00:09:53] Chemicals in the body affect what's happening in the brain.
[00:09:57] The brain communicates with the gut, even with the microbiome.
[00:10:01] We're seeing all these kinds of effects that transpire within the body.
[00:10:06] And then, of course, the body is embedded in a world.
[00:10:09] And there's always this feedback from the world.
[00:10:12] And understanding these nested loops of how the brain is embedded within a body and the body is embedded within a world,
[00:10:20] I think that's a very different kind of thing than the abstract, disembodied ideal of computation
[00:10:28] that drives a lot of our current AI.
[00:10:32] And, of course, it's also represented in a lot of science fiction.
[00:10:34] And then we have things like HAL in 2001, which, OK, there's a body, the spaceship,
[00:10:40] but it's a kind of disembodied intelligence in many ways.
[00:10:44] So then how important is it that we're embodied to have consciousness and intelligence?
[00:10:48] And we'll get to the definitions in a bit because I'm curious what happens when you embody an AI.
[00:10:54] And I'm, of course, thinking of all the humanoid robot demos that we've seen lately,
[00:10:57] where it seems to be this crude representation of kind of what we do.
[00:11:01] Like we've got sensor systems that perceive the world and we build a map of it
[00:11:05] and then we can figure out how to take action in it.
[00:11:08] This is a fascinating question.
[00:11:10] I mean, so far, you know, the AI systems that we have,
[00:11:13] the ones we tend to hear about mainly anyway, language models and generative AI,
[00:11:18] they tend to have been trained and then deployed in a very disembodied way.
[00:11:23] But this is, this is changing.
[00:11:24] And robotics is improving too.
[00:11:26] It's lagging behind a little bit as it always does, but it is improving.
[00:11:29] And there are fascinating questions about what difference that makes.
[00:11:34] One possibility that strikes me as plausible is that embodying an AI system, so that you train it
[00:11:42] in terms of physical interactions, don't just drop a pre-trained model into a robot,
[00:11:47] but everything is trained in an embodied way, might give us grounds to say that AI systems
[00:11:53] actually understand what they say, if it's a language model, for instance, or understand what
[00:12:00] they do.
[00:12:01] Because there's a good argument that the meaning of these abstract symbols, the words that we use
[00:12:06] in language, is ultimately grounded in physical interactions with the world.
[00:12:11] But does this mean that AI systems not only are intelligent and possibly understand,
[00:12:16] but also have conscious experiences?
[00:12:18] That's a separate question.
[00:12:19] And I think there's many other things that might be necessary for us to think seriously
[00:12:28] about the possibility of AI being conscious.
[00:12:30] I think that brings me to the logical next question, which is,
[00:12:35] what is the difference between intelligence and consciousness?
[00:12:38] Perhaps let's start with intelligence.
[00:12:40] Both intelligence and consciousness are tricky to define,
[00:12:43] but most definitions immediately point to the fact that they're different.
[00:12:46] And if we think about a broad definition of intelligence, it's something like doing the
[00:12:52] right thing at the right time.
[00:12:54] A slightly more sophisticated definition might be the ability to solve problems flexibly.
[00:13:00] And whether it's solving a Rubik's Cube or a complex problem scientifically or navigating
[00:13:06] a social situation adeptly, I mean, these are all aspects of doing the right thing at the
[00:13:11] right time.
[00:13:12] And importantly, intelligence is something you can define in terms of function, in terms
[00:13:17] of what a system does, what its behavior ultimately is.
[00:13:21] So there's no deep philosophical challenge for machines to become intelligent in some way.
[00:13:28] I mean, there may be obstacles that prevent machines from becoming intelligent in this sort
[00:13:33] of general AI way, which is the way that humans are intelligent.
[00:13:37] But intelligence fundamentally is a property of systems.
[00:13:41] Now, consciousness is different.
[00:13:44] Consciousness, again, is very hard to define in a way that everyone will sign up to.
[00:13:50] But fundamentally, consciousness is not about doing things.
[00:13:55] It's about experience.
[00:13:56] It's the difference between being awake and aware and the profound loss of consciousness
[00:14:02] in general anesthesia.
[00:14:03] And when you open your eyes, your brain is not merely responding to signals that come
[00:14:09] into the retina.
[00:14:10] There's an experience of color and shape and shade that characterizes what's going on.
[00:14:17] A world appears and a self within it.
[00:14:20] Thomas Nagel, I think, has the nicest philosophical definition, which is that for a conscious organism,
[00:14:25] there is something it is like to be that organism.
[00:14:29] It feels like something to be me.
[00:14:31] It feels like something to be you.
[00:14:32] Now, you can finesse these definitions as much as you want.
[00:14:38] But I think it's already clear.
[00:14:40] They are different things.
[00:14:41] They come together in us humans.
[00:14:43] We know we're conscious and we think we're intelligent.
[00:14:46] So we tend to put the two together.
[00:14:48] But just because they come together in us doesn't mean they necessarily go together in general.
[00:14:54] As you describe consciousness and this sort of subjective experience, a term that keeps
[00:14:58] getting thrown around in AI circles now is like qualia, right?
[00:15:01] This notion of subjective conscious experiences and figuring out if large language models can
[00:15:06] actually have this.
[00:15:08] Certainly, they're good at making it seem like they do, especially the jailbroken models.
[00:15:13] But it also takes me back to something else that you've talked about, which is our perception of
[00:15:17] reality is sort of this controlled hallucination that we don't fully perceive reality in this
[00:15:23] completely objective sense.
[00:15:24] I don't know if that's the best characterization.
[00:15:26] But I'm trying to connect the dots there where it seems to be like even our experience of reality
[00:15:32] is kind of hard to grok and fully explain.
[00:15:35] And so I wonder, doesn't that point to us not being able to create a very clear definition
[00:15:41] to measure that in a synthetic system?
[00:15:43] Yeah, I think you can go even further, actually.
[00:15:45] I think there's very little consensus on, well, there's no consensus on what would be the necessary
[00:15:51] and sufficient conditions for something to have subjective experience, to have qualia in this
[00:15:58] sense.
[00:15:58] When you and I open our eyes, we have a visual experience.
[00:16:03] It's the redness of red, the greenness of green.
[00:16:06] This is the kind of thing that philosophers call qualia.
[00:16:09] And there's a lot of argument about whether this is actually a meaningful concept or it's
[00:16:13] just something that we think is profound and it actually is just a wrong way of looking
[00:16:18] at the problem.
[00:16:18] But for me, there is a there-there.
[00:16:22] When we open our eyes, there is visual experience.
[00:16:24] However, we label it as qualia or something else.
[00:16:28] But for a camera on my iPhone, well, no.
[00:16:32] We don't think there's any experiencing going on.
[00:16:36] So what is the difference that makes a difference?
[00:16:39] And could it be that some kind of AI that's a glorified version of my camera on the phone
[00:16:45] would instantiate the sufficient condition so that it not only responded to visual signals,
[00:16:51] but had subjective experience too?
[00:16:55] I think that's the challenge we need to face because, as you say, AI systems, especially
[00:17:00] things like language models, can be very persuasive about having conscious experience.
[00:17:07] And again, especially the ones where you ask them to whisper and get around the guardrails
[00:17:13] in one way or another.
[00:17:15] They can really seduce our biases.
[00:17:16] And so we can't just rely on what a language model says.
[00:17:23] If a language model says, yes, of course, I have a conscious visual experience, that's
[00:17:28] not great evidence for whether it's there or not.
[00:17:31] And so we need to think, I think, a little more deeply about what it would take to ascribe
[00:17:36] conscious experience to the system that we create out of a completely different material.
[00:17:43] And material is an interesting point you're bringing up, sort of the substrate from which
[00:17:47] intelligence and perhaps consciousness can emerge.
[00:17:50] Because what you are saying is that I think it seems clear that we could have a superhuman
[00:17:56] intelligence level AI system that isn't necessarily conscious.
[00:18:00] But I do wonder when people make arguments like, hey, well, if we just keep throwing more
[00:18:04] data and compute at this thing and it keeps getting more and more intelligent, consciousness
[00:18:08] will be this emergent property of this system.
[00:18:10] And it almost has this like techno-religious kind of fervor to it.
[00:18:14] Why do you think consciousness might be uniquely biological?
[00:18:18] Why does the nature of the substrate matter?
[00:18:20] I don't know that it matters, but I think it's a possibility worth taking seriously.
[00:18:27] In a sense, the opposite claim is equally odd.
[00:18:30] Why should consciousness be a property of a completely different kind of material?
[00:18:35] Why would computation be sufficient for consciousness?
[00:18:40] After all, for many things, the material matters.
[00:18:45] If we're talking about a rainstorm, you need actual water for anything to get wet.
[00:18:49] If you have a computer simulation of a weather system, it doesn't get wet or windy inside
[00:18:55] that computer simulation.
[00:18:56] It's only ever a simulation.
[00:18:58] And the way you set it up is also very informative because there has been this implicit assumption,
[00:19:05] at least in some quarters, that indeed if you just throw more compute and AI gets smarter
[00:19:11] in ways which can be very, very impressive and very, very sometimes unexpected too, that
[00:19:17] at some point consciousness just arrives, comes along for the ride and the inner lights come
[00:19:22] on and you have something that is also experiencing as well as being smart.
[00:19:28] I think that's a reflection more of our psychological biases than it is grounds for having credence
[00:19:36] in synthetic consciousness.
[00:19:39] Because why should consciousness just happen at a particular level of intelligence?
[00:19:44] I mean, you could make an argument that some forms of intelligence might require consciousness.
[00:19:50] And those may be the kinds of intelligence that we humans have.
[00:19:57] But that's a bit of a strange argument because there are plenty of other species out there
[00:20:01] that don't have human-like intelligence that are very likely conscious.
[00:20:05] And there may be more ways to achieve intelligence than through what evolution settled on for human
[00:20:13] beings, which is having brains that are also capable of consciousness.
[00:20:17] So the question for me, the fundamental question is, is computation sufficient for consciousness?
[00:20:24] If we try to design in the functional architecture of the brain as it is and run it in a computer,
[00:20:31] would that be enough for consciousness?
[00:20:33] Or do we need something much more brain-like at the level of being made,
[00:20:39] of carbon, of having neurons, of having neurotransmitters washing around,
[00:20:43] of being really grounded in our living flesh and blood matter?
[00:20:49] And I don't think there's, well, there's just not a knockdown argument for or against either
[00:20:55] of these positions.
[00:20:57] But there's, to me, good reasons to think that computation is likely not enough.
[00:21:03] And there are at least some good reasons to think that the stuff we're made of really does matter.
[00:21:07] Given all of this, you do believe that it's unlikely that AI will ever achieve consciousness.
[00:21:13] Why is that?
[00:21:14] I think it's unlikely, but I have to say it's not impossible.
[00:21:17] And the first reason it's not impossible is that I may very well be wrong.
[00:21:22] And if I'm wrong and computation is sufficient for consciousness, well, then it's going to be a lot
[00:21:28] easier than I think.
[00:21:30] But even if I'm right about that, then as AI is evolving and as our technology evolves,
[00:21:38] we also have these technologies that are becoming more brain-like in various ways.
[00:21:44] We have these whole areas of neuromorphic engineering or neuromorphic computing,
[00:21:49] where we're building systems which are just sticking closer to the properties of real brains.
[00:21:58] And on the other side, we also have things like cerebral organoids, which are made out of brain
[00:22:03] cells. They're little mini brain type things grown in the dish. They're derived from human
[00:22:09] stem cells and they differentiate into neurons, which clump together and show organized patterns
[00:22:14] of activity. Now they don't do anything very interesting yet. So it's the opposite situation
[00:22:20] to a language model. A language model really seduces our psychological biases because it speaks
[00:22:26] to us. But a clump of neurons in a dish just doesn't because it doesn't do anything yet.
[00:22:32] Now, for me, the possibility of artificial consciousness there is much higher because
[00:22:36] we're made out of the same material. To the specific question, why should that matter? Why does
[00:22:42] the matter matter? It comes back to this idea about what kinds of things brains are and the fact that
[00:22:49] they're deeply embodied and embedded systems.
[00:22:51] So brains fundamentally, in my view, evolved to control and regulate the body, to keep the body alive.
[00:22:58] And fundamentally, this imperative goes right down, even into individual cells. Individual cells
[00:23:07] are continually regenerating their own conditions for survival. They don't just take an input and
[00:23:13] transform it into an output. And in doing this, I think there's pretty much a direct line from the
[00:23:20] metabolic processes that are fundamentally dependent on particular kinds of matter, flows of energy,
[00:23:27] transformations of carbon into energy, things like that, all the way up to these high level descriptions
[00:23:33] of the brain making a perceptual inference, or as we said earlier, a controlled hallucination of best
[00:23:39] guess about the way the world is. So if there is this through line from things that are alive and why we call them alive,
[00:23:49] all the way up to the neural circuitry that seems to be involved in visual perception or conscious experience
[00:23:55] generally, then I think there's some reason to think that consciousness is a property of living systems.
[00:24:02] As you were answering that, in my head, I have this visualization, maybe the future of this conscious AI system
[00:24:08] that we finally create isn't going to be a bunch of Jensen's NVIDIA GPUs in some data center, but perhaps this
[00:24:14] like giga brain that we build out of the very things that our brain is made out of. That's one hell of a visual,
[00:24:20] I got to say.
[00:24:21] Yeah, I think that's a possible future, right? Because we're already on that track with
[00:24:26] neurotechnologies and hybrid technologies as well. And people can plug organoids into rack servers,
[00:24:34] people are beginning to do this already to sort of leverage the dynamical repertoire that these things have.
[00:24:42] And nobody knows how biological a system needs to be in order to move the needle on the possibility
[00:24:49] for consciousness happening. It may be not at all, or it may be a great deal indeed.
[00:25:07] So I have to ask the question, can artificial neural networks then also teach us something about
[00:25:13] biological neural networks? And the reason I asked this, I was reading the Anthropic CEO's
[00:25:18] rather extended blog, and he brought up this example where basically like a computational
[00:25:23] mechanism was discovered by AI interpretability researchers in these AI systems that was rediscovered
[00:25:29] in the brains of mice. And I was just thinking, wait a second: an artificial system,
[00:25:34] a very simplified simulation, is still telling us something about the
[00:25:40] organic, more complex representation. What are your thoughts on that? And do you think
[00:25:45] this trend will continue?
[00:25:46] Oh, absolutely. For me, this is certainly the line of research that I'm
[00:25:53] following. Computers in general, and AI in particular, are incredible tools.
[00:26:00] They're incredible general purpose tools for understanding things. And, you know, even in
[00:26:04] my own research, this is what we do. I mean, we'll build computational models of what we think is
[00:26:10] going on in the brain. And we'll see what these models are capable of doing. And we'll also see
[00:26:16] what predictions they might make about real brains that we might then go and test in experiments.
[00:26:22] I have to imagine the advances in technology, both on the sensing and the computation side is making
[00:26:27] a huge difference. And I'd love to hear some examples.
[00:26:30] There are examples in many different levels. So for instance, there are algorithms involved in
[00:26:35] generative AI that might really map onto things that brains do. So one level, it's about discovering
[00:26:44] what the functional architecture of the brain is through developing these new kinds of algorithms.
[00:26:49] But then there are other levels too, in that there's the levels in which we might use AI systems as tools
[00:26:57] for modeling or understanding some higher level aspects of the brain. So for instance,
[00:27:03] we use some generative AI methods to simulate different kinds of perceptual hallucination.
[00:27:09] So the visual hallucinations that people have in different conditions like in psychosis or in
[00:27:14] Parkinson's disease or after psychedelics. And this goes back to some early algorithms by Google in
[00:27:22] their DeepDream, when they turned bowls of pasta into these weird images with dog heads sprouting
[00:27:28] everywhere. But we can use those in a more serious way to get a handle on what's happening in the brain
[00:27:34] when people experience hallucinations. And then right at the other end, and I admit this is something that
[00:27:40] for me anyway, is still uncharted territory and something I'm really interested to explore
[00:27:46] is when we actually leverage the tool set that AI is delivering now, you know, the language models,
[00:27:53] the virtual agents. And I was reading a paper just the other day about a whole virtual lab that was
[00:27:59] discovering new compounds to bind to COVID virus particles. And this virtual lab was
[00:28:09] basically doing everything from searching literature to generating the hypothesis to critiquing
[00:28:15] experimental designs and proposing new experimental designs and so on. So I think there's a lot of
[00:28:20] utility in AI for accelerating the process of scientific discovery.
[00:28:26] I think AlphaFold is such a great example of that, right? What used to take a PhD student
[00:28:32] the entirety of their PhD to figure out for a couple of molecules, we've now mapped out across a huge,
[00:28:38] huge opportunity space and just put it out there.
[00:28:41] I mean, that is such a beautiful example because also it just exemplifies the way in which I think
[00:28:48] it's productive for us to relate to these kinds of systems because AlphaFold intuitively seems like
[00:28:54] a tool, right? We treat it, we use it as a tool or rather the biologists do to just rapidly accelerate
[00:29:01] the hypotheses they can make at the level of protein binding. We never think of AlphaFold as another
[00:29:08] conscious scientist. It doesn't seduce our intuitions in the same way that
[00:29:14] language models do. So I don't think there's anything quite comparable to AlphaFold
[00:29:22] in the neuroscience domain yet. And I'm trying to think what the equivalent problem
[00:29:29] would be. This is very speculative, and maybe somebody's
[00:29:35] working on this already, but one of the big unknowns in the brain is really how it's
[00:29:41] wired up. There was another recent paper looking at the full wiring diagram of the
[00:29:47] brain of a fruit fly. And this is an incredible resource already. It was computationally incredibly
[00:29:53] difficult to put this together from the little bits of data that you might get in individual
[00:29:59] experiments. So there could well be a role for AI in helping amass large amounts of data
[00:30:04] to give us a more comprehensive picture of what kind of thing a brain is. And there may be
[00:30:10] many other creative ideas out there too. But all of them would treat the AI
[00:30:17] in what I think is the most productive way: as a kind of tool.
[00:30:21] I agree. There's a lot of, you know, inclination toward, I call it the co-pilot versus
[00:30:26] captain question. A lot of people are like, yeah, this is like my personalized Jarvis and I'm going to
[00:30:31] be like Tony Stark in the lab and just like, you know, doing what I need to do. And it just like
[00:30:35] preempts my needs. And it's cool that it's not constrained sort of by wall clock time, right? That
[00:30:40] you can just throw more compute at it and it can move faster. But fundamentally to me, it feels like
[00:30:44] humans are still doing the orchestration. Um, what do you think are the risks of going the other route
[00:30:50] where we start feeling like these systems should be the captain and let's build the grand AGI system
[00:30:55] and ask it what to do and then let's do it blindly. Yeah, I mean, there's, of course,
[00:31:01] a huge amount of uncertainty. Maybe it's not a terrible idea in some ways, but it does
[00:31:06] strike me as something that is certainly not guaranteed to turn out very well. And human intuition
[00:31:12] still seems very important in interpreting the suggestions that might come from AI, or
[00:31:18] just what AI will deliver in whatever context. Having a human in the loop still seems to be very,
[00:31:24] very important. But there are some larger risks here that to the extent we do this, then I think
[00:31:30] we are moving back towards imbuing artificial intelligence with properties that it may not in fact
[00:31:40] have, you know, things like, oh, it really does understand what it's doing, or it may
[00:31:46] indeed be conscious of what it's doing as well. I think if we misattribute qualities like
[00:31:53] this to AI, that can be pretty dangerous, because we may fail to predict what it will do. Another
[00:32:03] concept from Daniel Dennett is something called the intentional stance. And it's a beautiful idea about
how we interpret the behavior of other people: I attribute beliefs and knowledge and goals
to you, or to whoever I'm interacting with, and that helps me predict what they're going to do.
Now, if we do this with AI systems, and this is what language models in particular encourage us
to do, then we may get it right some of the time, but we may get it wrong some of the time too,
if the systems don't actually have these beliefs, desires, goals, and so on. And that can
be quite problematic. There's the other side to all of this too, where, you know, technology is also
[00:32:43] advancing to a degree where, um, we can kind of coarsely figure out what's going on in people's
minds. And so earlier in the season, we had Nita Farahany on, and she touched on the concept of
cognitive liberty. And we were basically nerding out over how we're putting all these, like, sensors on
ourselves. And yes, right now they can coarsely read our brains. And what was even trippier to
[00:33:06] learn about is manipulating our dreams with targeted dream incubation. What keeps you up at
night when you think about sort of the ethical considerations of AI kind of making our minds
more of an open box than they have been in the past? One of the things that I think about, and I was
[00:33:23] recently writing a paper with a philosopher, Emma Gordon, about brain computer interfaces as well,
is really: why is the skull this boundary that we think of as particularly significant
here? I mean, we've already given our data privacy away in so many ways. And that's, true,
not a good thing, right? But in many ways, at least for people who've been around for a while,
the cat's already out of the bag. But the idea of getting inside the skull does seem to be
significant, partly because there's no other boundary that's left. And while we're very used to
the importance of things like preserving freedom of speech, there isn't really the
same degree of attention paid to something like freedom of thought. Right.
So we're just not used to thinking about what kinds of guardrails and moral guidelines we might need in this case.
[00:34:15] And then there are also, I think, some more subtle worries, certainly in this space of brain computer
interfaces. Because let's imagine a situation where we each have one, or brain computer interfaces are widely
used. A lot of brain data is extracted; it's used to train models, which are then used to underpin the
[00:34:34] utility of brain computer interfaces so that they can predict what someone wants to say or do on the
[00:34:40] basis of brain activity. Now there are some extraordinarily powerful and compelling use cases
[00:34:45] for this kind of thing in medicine, in treatment for people with brain damage or paralysis or blindness.
[00:34:52] But if we generalize that to enhancement of everybody, and we try to think, okay, these things are not just
solving specific clinical problems, but they become part of our society more deeply, then there's the
potential for a kind of enforced homogeneity. Yeah, we might have to learn to think a
[00:35:11] particular way in order to get the brain computer interface to work. And that may be a completely
[00:35:16] unintended consequence. But it strikes me as a worrisome consequence as well. There may also be,
you know, kinds of social inequity that start to happen too, where, okay, people with access to
these systems can do more, or will be allowed to train them so they can think in their own distinctive
way and not have to think in the way that the mass-market BCIs require. So I think
there are a lot of sort of feedback cycles that can start to unfold in this case. But
fundamentally, it's that, really, there's nothing more to privacy once you go inside the skull. And
[00:35:53] then there's a stimulation thing as well. Brain computer interfaces can be bi-directional. And if
[00:36:00] they're bi-directional, and you start imprinting thoughts, goals, intentions, then we're definitely
[00:36:09] in a very ethically troubling situation. That last bit to me is the stuff that keeps me up. It's like,
[00:36:15] it's like giving a bunch of companies read-write access to your mind, right? And to your brain. And
[00:36:21] in a sense, the point you brought up about sort of, you know, homogeneity, sort of like lack of
[00:36:27] intellectual diversity, we're already kind of seeing that where people are using LLMs and it's
all kind of like the same milquetoast prose. And, you know, people are almost losing the ability to
[00:36:38] write and think. And yeah, I think there's something kind of disconcerting about that.
[00:36:45] Yeah. I mean, there might be a more optimistic view of this too, that the sort of
milquetoast homogeneity of large language model output may cause us to really value human contributions
more. You know, just as in other situations, there's a kind of value attached to
[00:37:01] the handmade, the bespoke. And we may end up living in a situation where we just view these
[00:37:09] two kinds of language quite differently. And just as someone who grows up in a bilingual household,
[00:37:15] you know, will naturally learn to speak two different languages. Future generations might
[00:37:19] become accustomed to, okay, well that's kind of large language model language. And this is human
[00:37:24] language. And they just innately feel very different, even though they're using the same words.
Oh yeah. Kind of like code-switching: knowing, in different contexts,
how exactly to behave. I think that's a valid point. Perhaps I have a slightly more jaded
take on this. 'Cause I'm like, yeah, people are going to want the Whole Foods experience,
but a vast majority of people are like, give me the free, ad-funded Mountain Dew straight to the vein.
[00:37:47] And I deeply, deeply worry about that. So let's leave the lab for a second here. What are the kinds
[00:37:53] of AI tools that you yourself are using outside of the context of the lab?
I'm a pretty light user of AI tools, at least the ones that I know about, because of
course one of the things is AI is hidden beneath the surface of many of the things we use. And
every time I use Google Maps, you know, there's machine learning or AI happening there.
[00:38:11] I do use language models increasingly sort of as verbal sparring partners, rather than as sources of text
that I will then edit or use directly. And kind of as glorified search engines in that sense.
[00:38:27] And yeah, I find them more and more useful, but I still don't trust them. I think it's a case of
[00:38:34] using them to help us, to help humans think more clearly rather than to outsource the business of
[00:38:40] thinking itself.
[00:38:41] So have you ever, whilst interacting with all these large language models, felt yourself forming
[00:38:48] a connection with these systems? Or are you able to keep that separation and distance? Like almost
[00:38:54] like you're forgetting it's a tool and kind of more like a colleague. Does it ever feel like that?
It doesn't. You know, this is another of the things that keeps me up at night, back to that
question. Because there's something so seductive about the way we respond to language that even if
[00:39:11] at one level, we can be very, very skeptical that there's anything other than just sort of statistical
[00:39:18] machination happening, the feeling that there's a mind that understands and might be conscious is
[00:39:25] extremely powerful. And I'm thinking, you know, one way of thinking about this is that there are plenty
[00:39:31] of cases where knowledge does not change experience. So for example, lots of visual illusions. Um,
there's a famous visual illusion called the Müller-Lyer illusion, where
two lines look different lengths because of the way the arrows point at the ends, but
if you measure them, they're exactly the same length. And the thing is, even if you know this, even if you
[00:39:57] understand what's happening in the visual system that gives rise to this illusion,
[00:40:00] they will always look, you know, the way they do.
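As a concrete aside (not from the episode; the constants and function names here are my own), the geometry of the Müller-Lyer figures Anil describes can be sketched in a few lines of Python. Both shafts are exactly the same length; only the direction of the arrowhead "wings" differs, which is what produces the illusion:

```python
# Sketch of the Müller-Lyer figures as line segments.
# The shaft lengths are identical by construction; the illusion is purely perceptual.

SHAFT = 4.0  # identical shaft length for both figures
WING = 0.5   # arrowhead wing length

def muller_lyer_segments(outward: bool):
    """Return the line segments ((x0, y0), (x1, y1)) for one figure.
    outward=True points the wings away from the shaft ends (looks longer);
    outward=False points them back along the shaft (looks shorter)."""
    segments = [((0.0, 0.0), (SHAFT, 0.0))]  # the shaft itself
    sign = 1 if outward else -1
    for x, direction in ((0.0, -sign), (SHAFT, sign)):
        # two wings per endpoint, angled up and down
        segments.append(((x, 0.0), (x + direction * WING, WING)))
        segments.append(((x, 0.0), (x + direction * WING, -WING)))
    return segments

def shaft_length(segments):
    """Measured length of the shaft, the first segment in the figure."""
    (x0, _), (x1, _) = segments[0]
    return abs(x1 - x0)

longer_looking = muller_lyer_segments(outward=True)
shorter_looking = muller_lyer_segments(outward=False)
print(shaft_length(longer_looking), shaft_length(shorter_looking))  # → 4.0 4.0
```

Rendered, the outward-wing figure reliably looks longer even though, as the measurement shows, the two shafts are numerically identical.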
There's no, like, firmware fix for our brains to fix that.
[00:40:08] That's right. And so the worry for me is that there will be similarly cognitively impenetrable
[00:40:15] illusions of artificial consciousness. That if we're dealing with sufficiently fluent language models,
especially if they get animated in deepfakes or even embodied in humanoid robots,
[00:40:27] that we won't be able to update our own wetware sufficiently in order to not feel that they
[00:40:36] are conscious. We will just be compelled to have those kinds of feelings. And that is a very
problematic state to land in too, because if we are unable to avoid attributing, let's say, conscious
states to a system, then again, we're going to be in the business of attributing qualities
it doesn't have, and mispredicting what it's going to do, and leaving ourselves more open to coercion,
[00:41:02] and more vulnerable to manipulation. Because if we think a system really understands us and cares about
[00:41:07] us, but it doesn't, it's actually just trying to sell us Oreos or something, then that's a problem.
[00:41:13] And I think that the most pernicious problem here is something that goes right back to Immanuel Kant and
[00:41:19] probably before, which is the problem of brutalizing our own minds. Because here, if we are interacting
[00:41:27] with an artificial system that we can't help but feel is conscious, we have two options broadly. We
[00:41:34] can either be nice to it anyway and care about it and bring it within our circle of moral concern.
[00:41:40] And that's okay. But it means that, you know, we will waste some of our moral capital on things that
[00:41:45] don't need it and potentially care less about other things because we humans have this in-group,
[00:41:50] out-group dynamic. If you're in, you're in. If you're out, you're out. So we might either do that
[00:41:56] or we learn to not care about these things and sort of treat them in the same way that we might
[00:42:02] treat a toaster or a radio. And that can be very bad for us psychologically because if we treat
[00:42:09] things badly, but we still feel they are conscious, that's the point that Kant made. That's what
[00:42:14] brutalizes our minds. It's why we don't poke the eyes out of teddy bears or pull the limbs off dolls.
[00:42:21] You know, the science fiction film and series Westworld dealt with this beautifully. You know,
[00:42:26] how dangerous it is for us to take this perspective.
[00:42:31] So this keeps me up at night because there's no good option here. We need to think very carefully,
[00:42:37] not only about the possibility of designing actually conscious machines, which even if it is
[00:42:44] unlikely, if it happened, would be very ethically problematic because of course, if something actually
[00:42:49] is conscious, it's a moral subject and we would need to be very careful about how we treat it. But even
[00:42:56] building systems that give the strong appearance of being conscious is also problematic for different
reasons. And this scenario is basically already with us, or will be very soon, unless we think very
[00:43:09] carefully about how we design these systems and design against giving that impression in some way.
I think you very beautifully paint this picture of why it's problematic on both ends, right? Like
that scene in Rick and Morty where this robot wakes up: what is my purpose? Your purpose is to
[00:43:24] put butter on my toast. That is your purpose. Just get back to please putting butter on my toast.
[00:43:28] And it has this existential crisis. And I think on the other end, the Westworld example is very valid
[00:43:34] too, where you have things that are indistinguishable from humans and we go act out all these sort of
lower urges, or whatever the right way to put that is. And we suddenly start bringing that
[00:43:45] sort of behavior to interactions with actual humans. But the real question I come at is where
[00:43:49] you end it, which is from a user experience standpoint, right? A lot of people think that it is
[00:43:55] important to have these systems be as human-like as possible and meet the users sort of where they are.
Do you want to talk about why we need to be more nuanced? And do you have any ideas for what sort of
a better way to build these systems would be? Because it seems like either extreme kind of sucks.
[00:44:14] I think this is super interesting. And in fact, I think talking to you just now is helping
[00:44:19] give focus to this as a serious design challenge. And I'm not sure it's one that's been well addressed
so far. Because of course, yes, there is a good reason to build systems with which we can
[00:44:32] interact very fluently. It can also be very empowering. If we can have a machine generate
[00:44:38] code by talking to it about what we want a program to do, that's hugely empowering for many people,
[00:44:44] so long as it does the thing that it's supposed to do and not something else. But is there a way of
[00:44:51] having the benefits of that, designing systems so that we can preserve the kind of fluent interaction
[00:44:58] that natural language gives us, but in a way that still at least pushes back to some extent on the
[00:45:07] psychological biases that then lead us to make all these further attributions of consciousness,
[00:45:15] of understanding, of caring, of emotion, and all of these things. I don't know what the solution is,
[00:45:21] but I think it's a really important problem. One simple solution would be, okay, these things just
[00:45:28] have to watermark themselves to say, I am not conscious, I don't have feelings. And of course,
[00:45:33] language models do that until you play around with them, press them a little bit. But that may not be
enough. There may have to be other ways where we design interfaces which, through practice or through
education or through some other manipulation, are shown, and this is really a question as much for
psychology as it is for technology, to preserve fluid interaction but not
make us succumb to our psychological biases about what properties we attribute. I would love to see
focus on that problem, because that would show us the line we need to walk.
And you're right that there aren't any solutions yet. Do you think we can build those antibodies?
[00:46:22] I think we have to try. I mean, that also brings up another point, which is again, very contentious
[00:46:28] in the tech sphere, which is what should we do about regulation? What kinds of systems should people
[00:46:34] just put out there? And what I come back to in that conversation is always the fact that in other
[00:46:42] domains of invention and technology, we're very cautious. We don't put a new plane in the sky
[00:46:49] without being fairly sure it's not going to fall out. We don't release a new drug on the market unless
[00:46:54] we can be very sure it's not going to have unintended side effects or consequences. And there does seem to
be an increasing recognition that AI technologies are in the same ballpark. And that doesn't mean that we want to
[00:47:11] stifle innovation, of course, but we can help shape and guide innovation. I think there's again a sweet spot to be found
there. And then on the other side, education: one of the challenges there, of course, is that things are moving so fast
that it's very hard to keep up. But it's important to try. One thing that strikes me here is the very
[00:47:32] term artificial intelligence is part of the problem. It brings with it so much baggage that there's some
kind of magic and it's like, you know, a science fiction mind, whether it's Jarvis, or whether
it's Skynet, or HAL from 2001, or whatever your favorite conscious intelligent robot is,
that's what we think of. And, you know, artificial intelligence has this brand quality, which I
think is a little bit unhelpful. It may have been incredibly successful in raising large amounts
of venture capital, but it's, you know, not a particularly helpful description of what the systems
themselves are doing. Of course, most people working in this, at least they used to at any rate,
[00:48:10] talk about machine learning rather than artificial intelligence. And then another level of
[00:48:14] description, you can just say, well, these things are basically just applied statistics.
When you start describing something as applied statistics, you know, even that
is educationally valuable, because it highlights how much we load onto these systems
by the words we use. One other very simple example here, where I think the
[00:48:40] horse has already bolted here too, but it's always annoyed me how people describe language models
[00:48:46] as hallucinating when they make stuff up.
[00:48:49] Yeah. It's giving them too much credit.
It's giving them way too much credit. And it's,
[00:48:53] it's doing something more specific than that. It's cultivating the implicit idea that indeed
[00:49:00] language models experience things because that's what a hallucination is. A hallucination,
[00:49:05] if you apply it to a human being, it means, well, they're,
[00:49:08] they're having a perceptual experience of something that that's not there.
And the fact that that linguistic term caught on so quickly, I think, is itself telling,
[00:49:18] because it just revealed like, okay, there's some implicit assumptions about what people
think these things are doing, but it's also a positive feedback. It's unhelpful because it leads
us, again, to project qualities in. If we're going to use a word, I wish they'd used confabulation,
[00:49:34] because in human psychology, confabulation is what people do when they make stuff up without
[00:49:40] realizing they're making stuff up. And that to me is an awful lot closer to what language models
[00:49:45] do, but I, yeah, I don't think it's going to catch on now, but we should be careful about the
[00:49:49] language we use to describe these systems for exactly this reason.
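To make Anil's "applied statistics" framing concrete, here's a toy sketch (my own, not from the episode, and far simpler than a real language model) of a bigram model that "generates" text purely by counting which word most often follows which. There's no understanding and no experience involved, just frequencies:

```python
# A bigram next-word predictor: the simplest possible "applied statistics"
# view of language modeling. It counts word-to-next-word transitions in a
# tiny corpus and predicts whichever continuation it has seen most often.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat  ("cat" follows "the" twice; "mat", "fish" once)
```

Real language models are vastly larger and condition on long contexts rather than a single word, but the basic move, predicting continuations from statistics over training text, is the same, which is why the "applied statistics" description is a useful corrective to words like "hallucinate."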
What advice would you have for people that are listening to this,
so that they can take advantage of the tools at their disposal today and not get sucked
into perhaps the, I don't know, like, the pseudoscience and the, you know, fake spirituality that kind of comes
as a package deal with AI today?
Part of it is exactly recognizing these often implicit motivations that drive all
these associations that lead us to think of these things as being more than they are, or different
than they are. And in the extreme, it gets pretty religious, right? I mean,
there's the idea of the singularity, which is sort of the techno-optimist moment of rapture, and
the possibility of uploading to the cloud and living forever, with the promise of immortality.
It's textbook religion, right? And so that in itself, I think, is useful
[00:50:50] to bear in mind that there's a, there's a larger cultural story behind this. It's not simply an
objective description of where the technology is. And then, cashing that out further, for me,
anyway, it's just a matter of continually reminding myself of the differences between us
and the technologies that we build; to resist the temptation to anthropomorphize, you know,
[00:51:15] to project human-like qualities into things, to retain a slightly critical attitude to what's going on
[00:51:22] behind the interface, that it is not an alternative person. And this can be easier said than done, because
[00:51:30] as we were discussing before, one of the things that keeps me up at night are these cognitively
impenetrable illusions of intelligence, consciousness, and so on that AI systems
can bring to bear. But yeah, I think it's just, at its most basic, reminding ourselves that if we think
[00:51:50] an AI system is a reflection of us, that it's something in our image, what we're probably doing is
[00:51:57] overestimating its capabilities and underestimating our own capabilities.
[00:52:02] I love that. That's punchy. And that brings me to the last question,
[00:52:06] which is, given our discussion thus far, it makes me very curious, what is your idea of the sort of
[00:52:12] ultimate final form of AI, if you will, that appeals to you as a neuroscientist? Like what excites you the
[00:52:19] most about the potential for this future, you know, where, you know, AI can serve human intelligence
[00:52:26] and consciousness?
This is a very,
very good question. I mean, I think my vision, my optimistic vision, about this is not some sort of
single superintelligence, like Deep Thought in Hitchhiker's Guide to the Galaxy,
or whatever your favorite single superintelligent entity might be. Maybe AI in the future is going to be a bit more
like electricity or water. You know, it's a basic utility, and it's used
in many, many different ways, in many, many different contexts, to do many, many different things. And
in this world, we face the challenge of recognizing that there are some things which we once thought
[00:53:11] were distinctively or uniquely human, which aren't. And so there will be a social disruption to that.
This happens, of course, in all technological revolutions. But the flip side of that is
the space that's opened up for massive innovation, creativity, the ability to solve all sorts of
[00:53:30] problems. So I think it's not a single thing. It's, it's many things. One last thought on this.
[00:53:36] I've heard the idea many times that the distinctive thing about AI is that it could be humanity's last
[00:53:41] invention because AI systems can design, develop, improve themselves.
Oh, they'll invent everything else?
Or we lose the ability to have dominion over what they may end up being. And that's
something that I'm still, you know, a little bit unsure how to think about, whether
that's a real difference, or whether it's something that we still need to be careful
to manage. But yeah, my optimistic view of AI is as some kind of utility that's drawn on in many,
many ways. Yeah, that permeates everything, versus
the singular all-encompassing AI in the sky, which, again, starts sounding very religious.
[00:54:21] Anil, thank you so much for joining us.
Thanks for the conversation. I really enjoyed it.
[00:54:28] Wow. What a conversation. Anil Seth reminds us that the story of AI isn't just a tale of machines
[00:54:35] gaining power. It's a mirror reflecting our own biases, aspirations, and fears. We project so much
[00:54:43] of ourselves onto these tools. We anthropomorphize the algorithms. We give meaning to their outputs,
[00:54:50] as if they share the complexity of human experience. But as Anil said, we overestimate their capabilities,
[00:54:58] and underestimate our own. And that's something worth meditating on. For all the dazzling feats AI can
[00:55:05] pull off, it's still us, humans, who design, direct, and decide what these systems become.
[00:55:12] And it's our responsibility to tread carefully, not just for the sake of innovation, but for the future
[00:55:18] of what makes us human. There's another fascinating question here too. If a truly conscious AI system
does emerge one day, will it even look like the systems we've built so far? The unique, messy biology of
[00:55:32] the human brain, neurons, synapses, and glial cells doesn't just power intelligence. It creates the rich,
[00:55:39] subjective experience we call consciousness. Silicon and software might never be enough.
[00:55:46] Consciousness may demand a substrate that mirrors what we're made of, a construct that's alive,
[00:55:52] pulsing with the same kind of vitality that flows through us. And this is wild to imagine,
[00:55:58] a future where conscious AI doesn't emerge from humming server racks or massive data centers,
[00:56:05] but from living organic systems, gigabrains we create from the very building blocks of life itself.
[00:56:13] In trying to recreate the sparks of consciousness, we'd be stepping closer to understanding what makes
[00:56:19] it so mysterious and so uniquely human. For now though, that's all in the realm of speculation.
[00:56:25] What's not speculative though is this. The choices we make today, how we design, interact with, and
[00:56:33] regulate these systems will shape not just the future of AI, but the future of our own humanity.
[00:56:40] And as much as we might marvel at the power of these tools, it's our responsibility to stay grounded,
[00:56:46] to remind ourselves that these systems are reflections of us, not replacements for us.
[00:56:53] It's truly an incredible moment to be alive, and a terrifying one too. And as we grapple with the
[00:56:59] unknowns ahead of us, perhaps the best question we can keep asking is, what kind of future do we want
[00:57:06] to create, not just for AI, but for ourselves? The TED AI Show is a part of the TED Audio Collective,
[00:57:18] and is produced by TED with Cosmic Standard. Our producers are Dominic Gerard and Alex Higgins.
[00:57:24] Our editor is Banban Chen. Our showrunner is Ivana Tucker, and our engineer is Asia Pilar Simpson.
[00:57:31] Our technical director is Jacob Winnick, and our executive producer is Eliza Smith.
Our researcher and fact checker is Jennifer Nam. And I'm your host, Bilawal Sidhu. See y'all in the next one.

