We may think the complexities of the human mind can only be understood by other humans. Yet research on chatbots and psychology suggests non-human bots can actually help improve mental health. Bilawal talks with Dr. Alison Darcy, the founder of mental health app Woebot, and Brian Chandler, an app user, to learn what chatbots reveal about our inner lives and what they can (and can’t) do when it comes to emotional wellness.
Check out the 99% Invisible episode we reference in the show here:
For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
[00:00:00] TED Audio Collective Like many of us, Brian Chandler was struggling with his mental health during the 2020 lockdown. Yeah, so the beginning of 2020, I couldn't go anywhere. And I knew, you know, in the past that I had some anxiety.
[00:00:21] But I was always able to find things that I could kind of use to distract myself, you know, get out, do things. But when I was stuck at home, it was like I couldn't run away from that anymore.
[00:00:33] And I was forced, like so many other people, to confront that anxiety. And from there, I was just seeking something that could help me. You know, I was trying to read books on mental health. I was trying meditation. I was trying all kinds of different apps.
[00:00:49] And Woebot just was in the mix of those apps I downloaded. Woebot, a therapy chatbot. Users type in how they're feeling. The chatbot deciphers what kind of problem they're talking about and then offers a prescripted response.
[00:01:05] So if Woebot asked Brian how he was feeling and Brian said something like frustrated or annoyed, Woebot would chat back asking Brian to rephrase his feelings in a more concrete way, sort of like the way a therapist might to promote deeper self-reflection.
[00:01:20] And I remember, you know, one afternoon and I opened up the app and I just went through the prompts. And afterward, I found myself feeling a little bit better. There's something about a chatbot therapist that can make people a little squeamish.
[00:01:38] You know, it's the idea that the messiness and intricacies of the human mind can be helped by something entirely unhuman. But the power of therapy chatbots is nothing new.
[00:01:48] As most of the coverage of Woebot has reminded us, the very first chatbot ever was designed to mimic a psychotherapist. In the 1960s, MIT computer scientist Joseph Weizenbaum created a bot called Eliza, largely to make a point.
[00:02:04] Eliza was supposed to emulate a kind of talk therapy where the practitioner would often repeat their patient's responses back to them. If the patient mentioned their father, the bot said, How do you feel about your father? When a patient said, my boyfriend made me come here.
[00:02:20] The bot said, your boyfriend made you come here. And if the bot didn't know what to say next, it would just say, tell me more. Weizenbaum's whole point with Eliza was to show just how bad robots were at understanding the complexity of human emotion.
[00:02:34] But his experiment totally backfired. Turns out the people who got access to Eliza were so entranced by the bot's active listening skills that they wanted to keep talking to it. In fact, Weizenbaum's secretary asked him to leave the room so she could be alone with Eliza.
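The kind of pattern-matching Eliza used can be sketched in a few lines of Python. This is an illustrative toy, not Weizenbaum's original DOCTOR script; the keyword rules and pronoun swaps are stand-ins for the examples mentioned above.

```python
import re

# Illustrative first/second-person swaps (not Weizenbaum's original script)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    """Mirror a phrase back by swapping person, e.g. 'my job' -> 'your job'."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(statement: str) -> str:
    # Keyword rule: any mention of "father" triggers a canned family prompt
    if "father" in statement.lower():
        return "How do you feel about your father?"
    # Pattern rule: echo "my X made me Y" back as a question
    match = re.match(r"(?i)my (\w+) made me (.+?)[.!?]?$", statement)
    if match:
        return f"Your {match.group(1)} made you {reflect(match.group(2))}?"
    # Fallback when no rule matches
    return "Tell me more."

print(eliza_reply("My boyfriend made me come here"))  # Your boyfriend made you come here?
print(eliza_reply("It is hard to explain"))           # Tell me more.
```

The entire trick is reflection plus a fallback; there is no model of meaning anywhere, which was exactly Weizenbaum's point.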
[00:02:52] Could it be that humans are just so bad at listening to each other? That we're willing to convince ourselves that robots really can understand what we're saying? Or maybe human emotion really can be broken down into repeatable patterns, data that these chatbots can respond to effectively.
[00:03:09] Regardless, now that mental health AI is back on the rise, we have to ask the question. Are we looking at a future where AI systems replace human therapists altogether?
[00:03:20] I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything. Ever wish you could look around the corner to make sense of today's big business and social issues? And prepare for what's coming tomorrow?
[00:03:46] Dozens of podcasts promise to bring you the latest news and the latest trends. But where's the so what? Why does it matter? And what does it all mean for you? BCG's flagship podcast, The So What from BCG, features award-winning British journalist Georgie Frost.
[00:04:02] Interviewing BCG's leading thinkers and doers to get you the answers you want and need. Hear the ideas that are shaping and disrupting the future. This is not your typical business strategy podcast. Listen to The So What from BCG wherever you get your podcasts.
[00:04:18] Here at Shortwave Space Camp, we escape our everyday lives to explore the mysteries and quirks of the universe. We find weird, fun, interesting stories that explain how the cosmos is partying all around us. From stars to dwarf planets to black holes and beyond, we've got you.
[00:04:35] Listen now to the Shortwave podcast from NPR. In this episode, we're digging into therapy chatbots. There are many on the market based on different therapeutic models and with different capabilities.
[00:04:52] But today we're going to focus on Woebot, which we should note now requires a unique access code from your healthcare provider or, in some cases, from your employer. It's far from the only therapy chatbot, though.
[00:05:04] And as these bots become more sophisticated and accessible, it seems more and more likely that they could disrupt the field of mental health care.
[00:05:11] Later in the episode, we're going to circle back to Brian Chandler, the Woebot user, about his experience with the bot and why he finds it effective even without additional therapy. But first, I'm speaking with Dr. Alison Darcy, founder and president of Woebot Health.
[00:05:28] When Alison was working in a Stanford health innovations lab, she and her colleagues tested a whole bunch of ways to make mental health care more accessible and engaging. They tried to gamify it using immersive video experiences and even explored face-to-face models.
[00:05:42] But it turned out that the most effective option seemed to look a lot more like Eliza. Alison, walk us through: what is Woebot, and how did it come to be? Okay, Woebot is an emotional support ally.
[00:05:56] It's basically just a chatbot that you can talk to during the day that helps you sort of navigate the ups and downs. So it's explicitly not therapy. Very, very different, but is nonetheless based on constructs borrowed from the best approach to mental health that we have today.
[00:06:14] So how did it come to be? I guess I was a sort of a clinical research psychologist. And at the same time, I think I had this brief moment in my early 20s of learning to code and sort of being in that dev world.
[00:06:27] I mean, very brief moment. And there's something called the research practice gap, which is really a sort of intervention science problem, which means whatever we do in the lab doesn't actually get translated well in the community.
[00:06:39] And so you have all of these people, and actually a growing number of people, with problems and fewer and fewer resources available. And those that are available are hard to access, expensive, and stigmatized.
[00:06:54] And actually, you know, in learning about really great cognitive behavioral therapy, it was very clear. Like, why are we sort of keeping this cooped up in the clinics? Like, should we not be teaching this stuff in schools? Should this not be preventative?
[00:07:07] Could we not do it in a way that's scalable before it becomes a very clinical issue? And so we set out to kind of try and look at ways that we might go about that. Like, how can you develop that habit?
[00:07:20] It was like, how can we make good thinking hygiene as engaging as it possibly can be so that people will want to interact every day? You said it's based on a therapeutic model of CBT, but not explicitly therapy.
[00:07:33] What is the user experience of using Woebot today, and how is it different from a therapist? That's a great question. And one we get, I think, not enough, actually.
[00:07:42] In the case of mental health, you know, we almost always hear, you know, like, if you're struggling, reach out to someone. But the experience, the lived experience of being in a difficult moment, it actually is the hardest moment to reach out to somebody else.
[00:07:56] And so, you know, really what we're doing is meaningfully designing for that moment. How do you make that as simple as it can be? And it turns out not being a human is so important to that equation, as was demonstrated by a different research group in Southern California.
[00:08:11] People are more likely to disclose to an AI than to a human. And while that might sound dystopian to some folks, it's not that they're choosing the AI over a human. Really, the choice is how can I do something right now to help myself?
[00:08:27] Whereas the architecture of therapy is just obviously completely different and it is based on fundamentally a relationship, right? The human to human relationship. That's not what this is. We have certain relationship dynamics, but that's really about being able to facilitate the encounter when it occurs.
[00:08:48] But in much, much smaller, simpler nuggets as you live your life. In fact, interestingly, our data shows that about 80% of conversations are happening between 5pm and 8am the next morning. So absolutely when there aren't other folks around, other professional folks around. And so that's actually why it works.
[00:09:07] This is about a sort of a toolkit that folks can use momentarily as they feel like, OK, I'm in this rotten place. They can just reach out to this thing and be like, hey, I'm not doing great.
[00:09:24] And then Woebot can say, OK, do you want help right now with this thing? And then, OK, if you accept that, then step by step talking somebody through how to use their own resources to maybe feel a little bit better and then just get back to life.
[00:09:40] Right. You know, I really like this notion of sort of meeting the person where they are in that moment. You're perhaps most vulnerable where it's hardest to reach out to people. And while it's fresh in your mind, you're living that experience.
[00:09:53] It's playing out in real time. The product person in me is curious. How do you measure success, and what are those success metrics for the user experience that you optimize for right now? If it's not about engagement, it's about feeling better. That's it.
[00:10:08] Ultimately, the ground truth to us is, do you feel better now or not? And then it's a sort of, well, how much better do you feel? And if you feel better, oh, this is great. Right.
[00:10:19] Let's kind of build on that. And if you don't, OK, let's troubleshoot that. What went wrong? It's a tool set, you know. And I think one of the beautiful things about Woebot, and an AI in general,
[00:10:33] in this role of kind of guide based on CBT as an approach, is that the person is doing the work, right? There's no, I'm bringing some extra wisdom that you don't have about yourself, like I'm reading your palms. That's not what it is.
[00:10:49] It's like, this is your skill set. I'm just going to step you through it. And so they get that experience of having stepped through it and then saying, oh, wow.
[00:10:58] Yeah, I actually do have the answers. I just need to be asked the right questions to get there.
[00:11:02] And I think that's tremendously powerful because I think some of the dynamics that we can sometimes see in maybe more clinical settings is a sort of a diffusion of power to some extent. Interesting. So there's like a different power dynamic.
[00:11:18] It's almost like because it's a bot, you're constantly remembering that it's up to you to take the steps to improve how you feel like you're in control.
[00:11:27] You have the agency. So besides the unconstrained access to this resource, what can a chatbot do that a human therapist cannot do? The challenge of that kind of line of questioning is that it's almost set up like a replacement.
[00:11:42] And I think, you know, it's clearly not a replacement. But there are things that an AI can be great at that a human can't. And availability is definitely one of those things. And perfect memory is another one of those things.
[00:11:56] Never getting tired, never retiring, never having a bad day, never being hung over. Right. Those are all good things that AIs can bring to the table. To flip the question: what can a human do that an AI can't?
[00:12:08] Human connection, you know, and I think that is so clear. AIs can never be human and they shouldn't pretend to be because they're best when they're not pretending to be.
[00:12:17] And I think if AIs can actually just lean into the fact that they're AIs, it's a lot less complicated for our heads to get around. So, you know, it sounds like you're clearly making this design decision, right?
[00:12:30] How is that design decision being reflected in the product experience for Woebot? I think we made this design decision very early on that Woebot should be very clear that it's a robot.
[00:12:41] Like, I'm an AI, I'm an AI. And we really leaned into it, I think more than most people. Even the robot as a name and as a visual is there to remind people: this is an AI. This is not human.
[00:12:53] But in the experience itself, I do think there are specific things that one has to pay attention to that are nuanced in AI. For example, if Woebot becomes concerned about something, Woebot should say, you know, because you said this thing, I'm concerned about X.
[00:13:10] Right. So you're showing people, this is the phrase that I'm worried about. Woebot can say, you know, I'm not able to give you medical advice here.
[00:13:21] Right. Like, or, I'm constrained here. Just being very, very clear about the boundaries, what it is and what it isn't. And I think that's where good consultation with clinicians and specialists in the field comes in.
[00:13:37] Can you talk a bit more about how this product will work in concert with, let's say, a traditional therapy experience?
[00:13:44] Right. And like, you know, do you think Woebot could be sort of this gateway, allowing users to start opening up without this pressure of a real person at the other end receiving this stuff, without the fear of judgment, and then eventually transitioning to in-person therapy?
[00:14:00] Has that happened? What does that look like?
[00:14:02] This was the point of Woebot. It was, how could you be the most gentle, unintimidating on-ramp into the experience of, you know, managing one's own mental health really early on, and hopefully in a way that demystifies what full-blown therapy might look like as well.
[00:14:27] And actually, we've had lots of anecdotal feedback from users to say, yeah, actually, using Woebot made me see what CBT with a therapist might look like.
[00:14:38] And then, to the other part of your question, what does Woebot look like in conjunction with, you know, a clinician or a health care professional?
[00:14:47] We've had lots of really interesting feedback here as well. When clinicians give Woebot to their patients, the patients come back sort of ready to engage in the therapeutic process, a little bit better informed about the things the clinician is sharing with them. Then Woebot is sort of facilitating the practice with those concepts or those skills in between sessions, because therapy doesn't really happen in a void either.
[00:15:12] The more opportunity people have to practice certain skills based on CBT, the better their outcomes tend to be. And so Woebot can facilitate that practice and, you know, sort of reinforce what the therapist is sharing and teaching.
[00:15:27] Right now, it feels like it's very much, intentionally, a sort of choose-your-own-adventure-on-rails experience, for plenty of reasons. Right. I'm sure you've been pressured plenty, like, hey, Alison, we've got to use the latest generative AI model, et cetera. So, yeah, talk to me a little bit about where you see things going, perhaps in the near term. But let's start talking about the long term, too.
[00:15:51] Yeah, I mean, you've hit the nail on the head. There are so few opportunities now, in our interactions with technologies or elsewhere, to learn how to objectively challenge our thinking. And I kind of worry about that being lost as a skill, you know, because our emotions are almost hijacked by a lot of online platforms, as we know.
[00:16:16] And, you know, there's an awful lot of very strong opinions and not so much opportunity to say, oh, well, is that correct? I think that's exactly how we'd like Woebot to operate.
[00:16:30] In the future, our objective function is human well-being. Right. And I'm sort of agnostic to how we get there, within reason. You know, we want to use the best tools that are available to us to be able to get us there.
[00:16:47] And I think having that objective function be based on wellness, not attention, is so crucial. And that's why we operate as much as we can in partnership with health care professionals and those settings and health systems, because then your incentives are aligned in the right way.
[00:17:07] But yeah, look, I think when technologies are available to us that enable us to do a better job there, we'll use them. Right. In terms of generative AI, we still have our writers write every line that Woebot says. And we just finished a trial where we looked at a sort of LLM-based version of Woebot versus the rules-based Woebot.
[00:17:36] And that was just fascinating. That was really just to explore the user experience. And like, where's the difference here, exactly? And, you know, would there be some glaring limitation or some huge gap that could only be filled with LLMs?
[00:17:52] And what we found was actually intriguing: in spite of, like, twice the accuracy, our users didn't seem to notice a difference. So in a context where there is an equal amount of trust and an equal feeling that this is a safe space, that you're not being judged, and so on,
[00:18:16] Yeah, maybe the accuracy doesn't matter so much, because honestly, we've built the conversation over the last seven years to be so tolerant of imperfection. You know, I think, for example, if Woebot sort of thinks it's hearing something, it'll say, oh, it sounds like you're talking about this thing. Is that true? Like, are we talking about a relationship problem here? Is that what I'm hearing? Am I hearing you correctly? And that's just a very empathic conversation. Right.
[00:18:43] So if that's what Woebot says in response to a low-confidence classification, that's fine. Right. And even if you have a more accurate reading there, wouldn't you always want to say, am I hearing you correctly? Like, you know, that's what good empathy looks like, because even humans don't hear properly.
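The fallback Darcy describes, answering a low-confidence classification with a check-in question rather than acting on the guess, can be sketched roughly like this. The labels, threshold, and phrasing below are hypothetical illustrations, not Woebot's actual implementation:

```python
CONFIDENCE_THRESHOLD = 0.75  # hypothetical cutoff, not Woebot's real value

# Hypothetical topic labels a classifier might emit, mapped to plain phrasing
TOPIC_PHRASES = {
    "relationship": "a relationship problem",
    "work": "stress at work",
}

def respond(label: str, confidence: float) -> str:
    """Turn a (label, confidence) pair from an intent classifier into a reply.

    Below the threshold, mirror the guess back as an empathic check-in
    question instead of acting on it directly.
    """
    topic = TOPIC_PHRASES.get(label, "something difficult")
    if confidence < CONFIDENCE_THRESHOLD:
        return f"It sounds like you're talking about {topic}. Am I hearing you correctly?"
    return f"Are we talking about {topic} here? I can walk you through a tool for that."

print(respond("relationship", 0.55))
print(respond("work", 0.92))
```

Either branch asks before proceeding, which is why, as Darcy notes, a misclassification costs little.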
[00:18:59] It is interesting that, yeah, humans perhaps are very good at figuring out sort of the implicit rules of what they're engaging with and just working around them, especially if you set the expectation that this is not a human at the other end. So they're not pretending, like they're not trying to have this thing pass the Turing test or something like that. Right.
[00:19:18] It's never been about that. Yeah. And people think, well, they might be let down by the fact that it's not a human. And I'm like, no, you're missing the point. It works because it's not a human, not in spite of it.
[00:19:30] Also, it reminds me, when you have a hammer, everything looks like a nail, and everyone wants to reimagine everything with, you know, transformers and diffusion models these days. And it's interesting because, you know, they do use a lot more compute. And you've got a brilliant case study here, perhaps, where good old-fashioned AI is good enough to get the job done.
[00:19:50] It reminds me of this quote of yours from a recent article where, upon being asked about Gen AI, you said, you know, we can stop the large language model from just butting in and telling someone how they should be thinking instead of facilitating the person's process. So I'm kind of curious. Can you imagine a path to getting there with AI, where AI could do just as good a job as, perhaps, a real-life, human, embodied therapist?
[00:20:17] An AI is never going to deliver what a human therapist does. So recently, somebody said to me, but you know, an AI can't pick up on, you know, body language signals. But the jury's out on how much an AI needs to be able to detect that particular set of nonverbal communication, because it's the fact that it's a human-to-human relationship that's why the therapist needs to be able to do it.
[00:20:47] To be able to read all of that stuff, because people don't feel able to disclose to a human all the time. Do you know what I'm saying? So I'm like, it's a fundamentally different kind of encounter. It's just a completely different way of engaging, and I think one that humans are very sensitive to the idiosyncrasies of.
[00:21:06] And so I think the things that people think of as threats, well, someone's going to get addicted to this and they're never going to go to a human therapist, they'll get really complacent with this. I don't think those things are true, because people don't see them as the same thing. Right. It's not like, because people start eating sandwiches, they never go to a restaurant again, because they're like, well, I'm fed. I'm hooked on sandwiches. You know, that's just not the way it works. Those things kind of coexist really well.
[00:21:33] Yeah, I could see that being super beneficial. But you are drawing this line, like, AI would never do that. And I just want to poke at that claim a little bit, because what you described is, you know, let's say, me as a patient. If I go talk to a therapist, there's stuff that I will explicitly say.
[00:21:50] And then there are these more implicit signals, facial cues, how I react to stuff. What's to stop AI models from being able to understand all of those nuances? Already, if that happens, would that not come close to encroaching upon that therapist-patient relationship?
[00:22:09] OK, I guess this is somewhat controversial. Why not? Let's do it. I think this idea of divining people's emotional state is a bit of a red herring, honestly. I think for a lot of, not all, but for a lot of mental health. Because here's the thing.
[00:22:28] There's a reason why we have self-report measures for mental health and people say, well, it's not objective. But the point of mental health problems, the actual way that we conceptualize them today is that they are fundamentally a subjective experience. There has to be subjective suffering. So the correct question is, you know, how do you feel? That is the right question to ask for a start.
[00:22:58] Now, I'm oversimplifying. I think there are times in which you'll want to look at the discrepancy between, you know, the person's self-report and what they're sharing. But I think for, you know, maybe 80 percent of mental health problems, I think it's preferable to ask somebody and start there. The other thing is, yeah, you're totally right. AIs can pick up on nonverbal communication. It's just not the same set of nonverbal communication that a human therapist would look at.
[00:23:27] Do you know what I'm saying? Maybe the speed with which somebody is answering a question. You could pick up a nonverbal just in the way that somebody's texting. There's a whole set of nonverbal communication that might be relevant here, for sure, and AIs are great at picking up on it. In fact, we've seen it in some of the algorithms that have been used to predict first-episode psychosis among high-risk groups.
[00:23:49] And I believe an NLP algorithm was able to predict with 100 percent accuracy who would go on to develop first-episode psychosis, compared to, I think, the gold-standard measure, which is about 67 or 68 percent accurate. And that makes a ton of sense, because, again, so much of human-to-human communication is nonverbal.
[00:24:14] But the idea of like, let's replicate what humans do, I think is misguided and leaves a lot of value on the table.
[00:24:22] You know, you talked about the research practice gap. It's like, hey, all this awesome research is happening, but like practitioners aren't actually reading up on it and like making the latest insights available to their patients.
[00:24:35] And so Woebot is this very cool complement to therapy in that sense, where you can bridge this research-practice gap, make the state-of-the-art findings available to folks.
[00:24:47] But I'm curious, you know, it means that people will use Woebot without access to a therapist. Do you think there's any risk to users if they're using Woebot without a human clinician?
[00:24:59] And this is, in full disclosure, my roundabout way of, you know, coming back to the question of, is this really not going to replace therapists? You know? Thinly veiled.
[00:25:13] So Woebot's no longer available for people to just download in the app store. You have to get it through a treatment provider.
[00:25:22] We've done a lot of research and a major objective of that research is to look at the user experience in a very controlled manner to very carefully quantify any risk or any, you know, safety issues that might come up and things like that.
[00:25:38] This is why Woebot is primarily rules-based right now, and everything Woebot says is written by a therapist.
[00:25:44] But I think the major risk associated with this technology, actually, is that we don't have the correct conversations about it, and that people get spooked by all the missteps that people inevitably will make, because they really underestimate the complexity of what it takes to deliver a mental-health-based intervention into the world.
[00:26:06] There isn't adequate data. Somebody makes a misstep, it blows up on the launchpad, and then everybody starts to think, wow, this is not a good technology to use for this. Whereas really, any AI that you're using is a tool, and how you use it is the most important thing.
[00:26:21] And I think, you know, the risk that we have facing us is that we are systematically going to undermine public confidence in this, and in the ability of technology like this to help.
[00:26:31] And that is a big potential problem because I think this is probably the greatest public health opportunity that we've ever had.
[00:26:41] There's a lot of responsibility on your shoulders to, you know, make sure this is a shining beacon and like a great example for how you do, you know, AI augmented therapy essentially. In terms of ways this could blow up on the launchpad, right?
[00:26:56] Obviously, data privacy is one that comes up, and when people use Woebot, they're sharing some of their most personal and private data. So will user data be used to improve the Woebot product experience or the underlying models?
[00:27:11] We're HIPAA compliant, obviously. And, you know, all the data are encrypted, and we have consent for each and every use, similar to what you have with GDPR.
[00:27:25] Privacy and security is a topic that is absolutely front and center the whole time, because I think a breach there, from any semblance of negligence on our side, would be catastrophic.
[00:27:40] Going back to the future a bit, in an ideal world, Alison, what would mental health care look like in five years time? I'm curious.
[00:27:52] In an ideal world, we would shut down all our clinics and my profession would become obsolete, because everybody is looking after their own mental health. We'd be doing such a good job that everybody is, you know, happy and healthy. Now, that's not realistic.
[00:28:08] You know, remember in COVID we were talking about flattening the curve? I think we need to flatten the curve here as well.
[00:28:14] We need to try and keep people out of clinics if we can by, you know, providing access to really good preventative, you know, tools that they can use.
[00:28:27] We should absolutely not be waiting, you know, a decade or so between when people first start to struggle a little bit, with maybe a couple of symptoms here and there, and when they actually get in front of a clinician.
[00:28:38] People should have, I think, very good evidence-based tools that they can turn to from the first moment of, you know, intense emotion that gives rise to distorted thinking, because that's part of the human experience.
[00:28:55] It's not about being in a clinical realm. It's not about necessarily needing a diagnosis. It's about sort of being there in the moment of need as early as you possibly can and getting somebody well when you can.
[00:29:07] And then freeing up our precious human resources for when people actually do need more significant help. I love that. We've got a bunch of technology that we use every day that, you know, plays to our hopes, wishes, desires, anxieties, worries literally all the time without us even knowing.
[00:29:26] And so it strikes me that there should be a countervailing influence to that, you know, a correction measure to that.
[00:29:34] And, you know, starting as early as possible and making it as accessible and frictionless to get access to this type of evidence-based care strikes me as one great way of making a dent towards that goal you have. And who knows, maybe you will accomplish it in five years.
[00:29:48] Well, thank you very much. As you've heard, Dr. Darcy insists that Woebot is best used in conjunction with a human therapist. But after the break, we're going to hear more from Brian Chandler, who's actually been using Woebot without additional therapy since 2020.
[00:30:04] So my name is Brian. I'm about to turn 25 here and I've been seeking mental health and honestly mental clarity, I would really say since the pandemic. So it's been about four years now. I've been kind of working on my mindfulness journey.
[00:30:28] Do you remember your first interaction with Woebot? When I first used it, I guess I had such a low success rate with the other apps.
[00:30:40] I wasn't expecting too much, but when I used it, I did feel better, you know, and I thought, okay, well, maybe this was just a fluke. Let me get back on the app the next day.
[00:30:54] And I had a similar feeling. I was like, okay, well, let me get back on the next day. And, you know, eventually when it's three, four or five days of feeling better afterwards, you start to realize, okay, I think it is the app.
[00:31:06] I think it is what it's teaching me. It's teaching me coping mechanisms. It's teaching me how to label. It's doing exactly what a therapist would tell you to do. But I'm doing it from my phone for free in the comfort of my home.
[00:31:21] And it was very convenient, because at the time I couldn't go anywhere. Can you describe your typical interactions with Woebot? Do you find yourself using it in a certain way? I typically like to use it twice a day.
[00:31:36] So I think it's really important to start the morning off right with the right headspace. Reminding myself that we do have some control of how the day can go or some control of at least being mindful of our thoughts.
[00:31:52] And understanding that anxiety doesn't have to have power over you. That kind of helps me get the day on the right foot. And then, especially lately, I do like to close the day off using it.
[00:32:04] That way before I go to bed, I am still in the right headspace. So really trying to have those two anchors at the beginning of the day and the end of the day. But that is the nice thing about the app. It's 24 hours.
[00:32:15] You can use it whenever you want. There were certainly a few times where maybe I was having a panic attack at 2am. I would open up the app and it would help me. It was kind of like a reset.
[00:32:28] So you're kind of checking the monkey mind, if you want to call it that, first thing in the morning, which is an analogy I really, really enjoy. When you wake up in the morning, you kind of try to gauge your mental headspace by comparing it to the weather.
[00:32:43] So if maybe you're waking up and you're grouchy or maybe you're depressed or maybe you're anxious. You can kind of use the analogy that today it's thunderstorms. And what can I do to better prepare myself? Maybe you're the type of person where I need a distraction.
[00:33:00] I need to hang out with a friend. Or maybe that's the opposite of what you need. And you're like, I just need to be by myself today. I'm just curious, do you have any experience with conventional therapy? So prior to using Woebot, I didn't have any experience.
[00:33:16] So I didn't really have a reference point for regular therapy. In 2022, I believe it was 2022, I wanted to see really if I felt a difference. And I did try regular therapy for a couple of weeks.
[00:33:36] And while I think regular therapy can be very good and very important for some people, I don't want to discourage that. For me personally, I didn't see really a difference between using the app and regular therapy. And again, I know regular therapy is fantastic.
[00:33:53] I think if you need it, I definitely recommend it. But sometimes it's inconvenient. Sometimes you can't just talk to a therapist whenever you want. And you're doing the same practices that you're doing on this app that's free. Not to mention, therapy can be very costly.
[00:34:12] So I just figured if I'm doing the same practices, I'm feeling the same relief after I use it, and I can use it anytime without leaving my house just on my phone, it was a no-brainer to continue using the app.
[00:34:25] Now I have to follow that up and ask, in therapy, there are all kinds of boundaries, right? There's time. There's what you can or can't or perhaps shouldn't know about your therapist. Do you get a sense of boundaries when you're using Woebot? If so, what are they?
[00:34:43] In the beginning when I was using it, especially because I didn't really have experience with therapy, I thought it was – it took a moment to get used to it. It felt natural, but you did kind of feel like, you know,
[00:34:54] this kind of feels like I'm talking to a human. But the more you use it, you realize it's not trying to be anything other than what it is. You know, it's AI, and like you said, there are some boundaries
[00:35:07] because there are going to be some things, you know, the app might not understand about like the human experience. But I think it's programmed in such a wonderful way where it's never pretending to be more than AI, if that makes sense.
[00:35:24] So as far as the boundary goes, I would just say, you know, okay, this is not a human. If this is an emergency 911 situation, I need to, you know, reach out to a human.
[00:35:41] But one nice thing about the app is you can use it anytime, you know. So you don't have to deal with that boundary you would have to deal with in traditional therapy. Are there any telltale signs to you that it is AI?
[00:35:55] Like how do you know it's never pretending to be something more than it is? A lot of it is how it frames and words certain things. It does frequently tell you, honestly, probably each time you use it, that it's a robot
[00:36:11] or it might make a joke of, you know, something to that degree. You never feel like it's trying to be human. It does a good job of just saying, this is research, this is what works. And you can take that at face value, you know.
[00:36:27] So if I had to ask you, knowing what we know now and what you said, how would you define your relationship with Woebot? I guess I would describe it as my mental health companion. Do you think you'll continue using Woebot for mental health care?
[00:36:47] Yes, I think I will continue to use it. You talked about mindfulness earlier on. Have you tried some of the meditation apps out there like Headspace, etc.? Yeah, so I think meditation is great, but it is a totally different pace. I enjoy the app Calm.
[00:37:07] I think that's a very good app. I've had a really good experience with that. But I've just found Woebot to be a little bit more helpful in situations that are a little bit more urgent. I am kind of curious, like more of a hypothetical question, right?
[00:37:25] If you could have an AI model understand more of your life and kind of give you contextual advice, right? Maybe it involves you sharing all your conversations during the day, your emails, your text messages. Is that something that you'd be interested in
[00:37:39] if you could get sort of contextual advice through the day really tailored to your situation? Or would it be creepy? It wouldn't bother me if I knew the information being stored was safe. So I think that needs to be a priority going forward
[00:37:56] that the information isn't being sold. The information is being stored safely. And then I would feel comfortable with it remembering, because I know it's going to help with the tools in the future. Taking this technology to the limit, when it does get better, when it can understand your context,
[00:38:15] when it can be respectful of the data that it collects, would you want Woebot to evolve into something that feels more like talking to a human? Or would you rather that it stays in this very clean delineation of a tool?
[00:38:28] In this moment in time, I think I would rather it stay as it is. I would like it to evolve, but I don't want it to ever get to the point where maybe they add a voice and you're talking to the voice and it sounds very human-like.
[00:38:44] I don't think I would like that. But kind of like what I mentioned to you, it's just all of this technology, it's so new, and we just aren't used to it. So I don't know if in a few years that's just going to be the new norm.
[00:38:59] But right now I do enjoy kind of the boundaries the app creates where if you were needing a human connection, go talk to a human. I think it's so important for people to be able to work on their mental health. And especially in this day and age
[00:39:18] where we're spending more and more time on our phones, we need to have a moment where we can put TikTok down and go to something that's going to benefit us. Brian, thank you so much for your time. It's amazing to have you on the show. Yeah, thank you.
[00:39:31] Like many of you, I've been on my own mental health journey. Over the last decade, I've really gotten into the work of Alan Watts and Guru Nanak and I've cultivated a meditation practice. And probably like many of you, I got started by hopping from one app to another
[00:39:49] with the goal of keeping myself centered. And I know I'm not the only one, far from it. And you know, it makes me wonder if many of us are already using apps to seek mental calm and clarity, it might not take a lot more convincing
[00:40:02] for us to start using apps like Woebot. As of now, AI therapy is a tool. And like many other therapy tools, its mileage will vary from person to person. But as this tech continues to advance, that gap between the in-person
[00:40:15] and virtual therapy experience will also continue to close. You've literally got a supercomputer picking up on every single intonation, nuance, voice change, and inflection. Is that necessarily a bad thing? I don't think so. Mental health is such a widespread problem. And if we've got some technology
[00:40:30] that can help us tame our monkey minds, I think that's a win. If you want to know even more about the history of the therapy bot ELIZA, 99% Invisible made an incredible episode about her. The link to that episode will be in our show notes.
[00:40:48] The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Ella Fetter and Sarah McCray. Our editors are Banban Cheng and Alejandra Salazar. Our showrunner is Ivana Tucker and our associate producer is Ben Montoya.
[00:41:05] Our engineer is Aja Pilar Simpson. Our technical director is Jacob Winik and our executive producer is Eliza Smith. Our fact checker is Krystian Aparta and I'm your host, Bilawal Sidhu. See y'all in the next one.

