TWiT 1077: I Would Download a Car - New Jury Ruling Could Reshape Social Media Liability
This Week in Tech (Audio) | March 30, 2026
1077
2:36:59 | 143.99 MB


Big Tech just faced a courtroom reckoning, with Meta and Google found liable for platform "addictiveness" in a social media trial that could unleash a tidal wave of lawsuits. Find out why attorneys, entrepreneurs, and everyday users are suddenly on edge.

• Social media addiction lawsuits hit Meta, Google, YouTube
• Section 230 and First Amendment implications debated after court verdicts
• Supreme Court sides with Cox; ISPs not liable for user piracy
• Elon Musk's lawsuit over X (Twitter) ad boycotts thrown out
• Anthropic versus Department of Defense: AI contracting dispute and retaliation claims
• FCC's confusing foreign-made router ban and consumer tech fallout
• Major supply chain attack: LiteLLM malware infects AI devs
• The rise (and risks) of AI agents with voice, identity, and personification
• Turing Award honors pioneers of quantum cryptography
• Antimatter on the move: CERN's oddball truck experiment
• Sci-fi and reality blur as Neal Stephenson walks away from the metaverse
• Privacy and consent worries escalate with AI-powered recordings and surveillance
• Digital shelf pricing arrives at Walmart and Kroger
• Flipper Zero: voice-controlled hacking gadget gets an AI upgrade
• Age verification laws create headaches for OS and app developers
• Official White House app called out for surveillance and security blunders
• Is AI progress barreling toward a dystopian tech future?

Host: Leo Laporte

Guests: Harper Reed, Brian McCullough, and Cathy Gellis

Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:


[00:00:00] It's time for TWiT This Week in Tech. Cathy Gellis is here, our favorite attorney. She's going to talk about the big Meta social media trial. Brian McCullough from the Tech Brew Ride Home will talk about the latest tech news. Harper Reed is our AI guru. He'll talk about why he thinks AI agents should have free time. And a big decision in the Supreme Court, ISP versus record company. Who do you think won that one? Coming up next on TWiT.

[00:00:30] Podcasts you love. From people you trust. This is TWiT. This is TWiT, This Week in Tech, Episode 1077, recorded Sunday, March 29th, 2026. I Would Download a Car.

[00:00:53] It's time for TWiT This Week in Tech, the show where we cover the week's tech news. It has been a great week for tech news. And that's why Cathy Gellis is here. It's all court decisions. All the way down. Cathy is a contributor to Techdirt. She is an attorney at law. In fact, admitted before the U.S. Supreme Court. So she might have some opinions about the most recent Supreme Court decision. Hello, Cathy. Hello. Thanks for having me. Good to see you. Yep. Same here.

[00:01:22] Yep. Yep. Yep. Hey, we're going to get down to business. Obviously Brian McCullough is also here. A host of the Morning Brew Ride Home. The Brew Tech Brew Ride Home. Tech Brew. We got to fix that lower third. Tech Brew Ride Home. Part of the Morning Brew. You moved over to the Morning Brew. Yeah. Morning Brew family. But it is the Tech Brew brand. They're Tech Brew. Yay. Great to see you. And I see you have a picture of me behind you. Yes. I'll keep changing that out.

[00:01:52] Oh, you can change it. Oh, Dredd. I thought it was permanent. I was hoping. Actually, we knew. It started with Sam Altman. And Cathy Gellis said, do you have a signed picture of Sam Altman behind you? And then it became Mark Zuckerberg. And now it's me. All your heroes. Good to see you, Brian. Harper Reed will be joining us in a moment to talk about AI news. And there's a lot of that as well. But let's start with the court. The big story.

[00:02:22] And everybody's talking about it. We knew the verdict would come down this week. Meta lost actually two cases. But the big one is the LA case where a woman named Casey sued Snapchat, TikTok, YouTube, and Meta saying that her social media addiction contributed to, didn't cause, but contributed to her depression and her rotten life.

[00:02:51] The jury agreed, but didn't give her a lot of money, which I thought was kind of interesting. Incidentally, TikTok and Snapchat made an agreement, a settlement before the trial began. So really it was just YouTube and Meta on trial. Cathy, a lot of people... I mean, the New York Times headline on this was, is Big Tech facing a Big Tobacco moment?

[00:03:12] A lot of people think that this decision, plus the one in, was it New Mexico, were very prejudicial to the future of social in court. It's not a precedent. I'm glad we have you on because I was trying to figure this out. We talked about it on Wednesday on Intelligent Machines. It's not a precedent, but it is, Paris was saying, a... A bellwether.

[00:03:41] A harbinger, basically. A bellwether. That's the word. Thank you, Benita. Yeah. So what is the... First of all, what do you think of the decision? I don't think highly of it. It's going to be appealed, by the way. Meta and Google both said so. Now, on the one hand, and this is a very small one hand, but let's acknowledge it.

[00:04:04] I think the jury did find, and I'm kind of combining some details of both states, I think some of the claims the juries didn't necessarily find on. But I think the bigger problem is less what the jury found and more, how did the case get to this point? Because there's an underlying legal theory that they ended up presenting evidence on to reach the jury. But it's a legal theory that is not a good one. It's...

[00:04:32] I think for the point you were making of, okay, this wasn't a lot of money, but these fines don't even scale well for Google and Meta, which, you know, they can afford a couple of these, but they can't afford... It's a combined $6 million for Meta and YouTube, and that's both punitive damages and statutory damages. Well, I don't know if they're statutory damages. The New Mexico case is statutory damages, and it's considerably higher, $375 million for Meta. But in both cases, these companies have plenty of money to pay the fine.

[00:05:02] The fine is not... Is a slap on the wrist. Well, for one case, yes, maybe even a couple of cases. But there's no limiting principle to the amount of cases that could be thrown at them. Plus, also litigation costs. You have to add a bunch of millions onto that, including like from Discovery, et cetera. So, it's an expensive proposition that cannot possibly scale to cover all the people who potentially have had a bad time on the internet,

[00:05:27] plus all the people who want to troll and just file lawsuits anyway to get money out of big companies that have cash. And fundamentally, there is a problem here because not just the big companies are in trouble if this can happen, but a smaller company or an individual or a small company or a small platform or individual blog. The whole principle of could Meta and Google be liable for how they designed their platforms?

[00:05:56] The answer has to be no. It has to be no because of the first... So, that's what's interesting about this. This wasn't a Section 230 case. They're not blaming them for hosting bad content. They're saying it's really a product liability case. They're saying Google and Meta designed, intentionally designed their products to be as addictive as possible. But that is a 230 case. I've said, and maybe this isn't comparable, and I'd love to hear what you think too, Brian,

[00:06:25] but I've said that, well, but you... In fact, this was what Meta's defense attorney said. You know, you can blame Netflix for making bingeable content. Everybody, I want to make Twit something you can't skip. Everybody wants to make their product as desirable as possible. You chose to have me on in order to get people to watch and listen and to the internet. Oh my gosh, you are trying to induce attention into your own expression.

[00:06:51] And boy, you know, you don't want that decision to be something that the law could come after. Brian, what do you think? Well, that's my question is, the reason people are making the analogy to Big Tobacco is that it is sort of like, your product is defective. Right. And they knew it, and there were smoking guns in the testimony and discovery, that they did in fact know that they were making content that was addictive,

[00:07:18] and even in Meta's case, harmed young people. So, right. If it's a clever end run around 230, again, Cathy, being a lawyer, would be able to speak to this much better than me, but is that a novel way to do this where, yes, it's not about the bad content. It is about how the product is designed. So then it's a faulty product. So then that's where the liability can come in, and then that can open the floodgates.

[00:07:48] So actually, Cathy, that would be my question. Does this open the floodgates? Like, can other cases come up? Yeah. Well, yeah. Yes. If this is allowed to stand, it opens the floodgates. But it's not correct for a couple of reasons. One, to say it's not a 230 case is because somehow a bad argument was accepted that it's not a 230 case. 230 should have shut this down. It should have shut it down because essentially it's a moderation decision.

[00:08:17] You're building your platform around your moderating plan, and your moderating plan is I want the most content up that's going to get people to stick around. And 230 protects moderation decisions. So it should have been a 230 case. This is essentially imposing liability on platforms which will affect content on the internet full scale across the country, ultimately across the world. But Congress wasn't worried. It's been preempted. These are state laws imposing state-based liability.

[00:08:47] And there is a preemption provision in Section 230 that says states butt out. And this is running at the heart of one of the two things that Section 230 protects. And to say that 230 didn't apply here was a legal mistake. And that needs to be one issue on appeal. But you know, though, that real people are going to hear that argument and say, that's just a legalism to protect these bad, these evildoers.

[00:09:16] And this is what worries me because it does undermine 230. It's hard for people to understand why 230 should protect Meta. Mike Masnick on Tech Dirt says, bad defendants make bad laws. Everybody hates Meta, including apparently the jury. So it's understandable why the jury might be willing to accept legal theories against them that would be problematic if applied to anyone else.

[00:09:42] Could you apply this legal ruling to somebody else that's more sympathetic and make me feel sympathy for Meta in some way, Cathy? I mean, the answer is yes. If this is the way it works, liability could be found for any endeavor. Meta took action in how it designed its platform; it designed it to make it sticky. Everybody who tries to make their platform sticky is potentially running afoul of the law.

[00:10:12] Now, a jury may or may not think that they did so under all the things that the New Mexico or California law requires. But the ship has sailed at that point. You cannot possibly defend yourself. Again, the smaller the platform, the fewer litigation instances they can manage at all. Yeah, if somebody came after me saying, your chat room is way addictive. And as a youth, I spent hours on it. And as a result, I'm depressed.

[00:10:42] I couldn't defend that. And a $6 million judgment wouldn't be something I could afford either. So that's who we should worry about. But are people going to go after chat rooms and forums and places like that now? You know who will? The ones who don't want all that speech to take place. Ah, no. I mean, so bad actors as plaintiffs are the thing to worry about at this point. Well, you open the door to, like I said at the beginning, not everybody is having a good time on the Internet.

[00:11:11] Some people are having terrible times on the Internet. And that's a lot of people. I think it's a tiny fraction of the amount of people on the Internet, but tiny fraction at the scale of the Internet is an awful lot of people. So on the first part, you have a problem with all if all of them brought their lawsuits, you would have a problem. But it's not just going to be them who bring their lawsuits. It will be people who want to pick the pocket of a defendant with cash. From them, they'll be suing.

[00:11:40] And then there's also people who just want to impose friction on the speech that's going on online. They'll sue, too, in order to impose that friction so that the platform has to squash a lot of the speech that's being facilitated. Now, a slight devil's advocate with something you said is you're using Discord as opposed to coding your own platform. So that might sort of a little bit put you outside of, you know, the immediate crosshairs of this.

[00:12:08] But I don't think that you should be sleeping easy as a result of it. And you could see Discord being a target. And it's no good for you if Discord shuts down now, because now how are you going to post or interact? There are thousands, by the way, of cases now sitting behind these two, waiting to launch. So the fact that these, you know, the jury, it was interesting, the New York Times quoted a couple of jurors saying we didn't want to really punish these companies so much as get them to fix their services.

[00:12:38] I mean, that's why the Los Angeles fines were so relatively low. But I think that so I said that 230 should be blocking this, but also the First Amendment, because this idea of what you should be doing better, the idea of content moderate like you and I and everybody who's listening may have some good ideas of things that Facebook should have done differently, that YouTube would have done differently. But reasonable minds can disagree and people have different opinions about how things should

[00:13:08] be built and what sort of speech should be facilitated and highlighted or degraded. And we're not going to agree. So this idea of we want legal pressure to cause the companies to fix things, we don't have a clear idea of what fixing is. And that's why we have a First Amendment, because we will not agree on what fixing actually looks like. And this is directly affecting expression. It's directly affecting expression in a whole bunch of ways. But wait a minute.

[00:13:34] Is Meta protected by the First Amendment for its algorithms? I guess it is. That's a form of speech. Is that right? There's First Amendment interests. But there is product liability. I mean, you can't defend everything as free speech. So getting back to your original question, why the New York Times is wrong, this is not Big Tobacco. There's not really an expressive function going on with smoking a cigarette. That is not free speech.

[00:14:03] I mean, yeah, somebody could throw some arguments, but they're very, very, very attenuated. Whereas these so-called products are really speech-facilitating services. There's expression going on in how you code them. There's expression both in terms of the language and also how you design and all the choices you make, like the choice to make it sticky, that you want to make it sticky, how you make it sticky.

[00:14:30] The fact that you could make other choices is itself exhibit A for why it's expressive, because those are choices you could make. And depending on what you choose, the expression you foster will change. They're upset that expressive choices were made in a certain way, and they want the law to pressure the platforms to make them in a different way. But that runs straight in the heart of the First Amendment. Making algorithms is not the same as making a cancer-causing cigarette product.

[00:14:59] It's not the same thing. So now it's going to be appealed. Do you think a panel, appeals court panel of judges will be more understanding of that? I mean, obviously, a jury can be swayed along these lines. Do you think it'll be overturned on appeal? I hope so. But one thing that makes me nervous is these are state court things, so they have the peculiarities of state court litigation. So they'll be harder.

[00:15:28] But on the other hand, you know, I don't know a lot about New Mexico, but California does have some 230 cases on the books, like Hassell v. Yelp and things like that. So there's some California precedent to throw at it. And then there's been the interpretations and, you know, I'll be an optimist. And especially in California, I'll be there to try to litigate it if I can get the briefs in.

[00:15:54] But I think sometimes for arguing this, you really have to reframe things. This is not a thing of the children. You really have to encourage a very sort of meta look. Well, not meta meta, but if they haven't used that word. But, you know, big sky looking down and getting the big picture type look at what's going on and what the reverberations will be. And amicus turnout is going to be really important, especially from smaller players to sort of say,

[00:16:21] that's very nice that you hacked at Meta and Google, but you're going to kill us. And if you kill us, this is what falls in consequence. And particularly for users, too, because this is bad for users. If you can force the platforms to make certain decisions... there's a lot of people who have actually been getting a tremendous amount of benefits in how it connects people, and if you're squeezing the platforms so they can't be available

[00:16:50] to connect people, you're actually going to isolate people. And that is a design defect, like all the things that they're sad about, that people had difficult lives and difficult experiences being isolated on the Internet. You'll just isolate people in real life when you take the Internet away. Brian, do you hear from people who are sympathetic to people like Kaylee? I mean, we certainly know she says she had body dysmorphia from

[00:17:18] the filters that Instagram made available to her. She was using, I should point out, YouTube at the age of six and Instagram at the age of nine. And according to the trial, at least, she did not make use of YouTube Kids, which I'm on the record as saying is BS anyway. But there are, in theory, tools and safeguards that these companies have put in place.

[00:17:44] What I'm curious about, Cathy said, you know, think of the kids. And this is, you know, exactly 30 years on from the Communications Decency Act and Section 230. When you're seeing all of these age verification things that are theoretically trying to protect kids from social media, like you can't join social media until a certain age, all that stuff, which we can get into.

[00:18:12] Maybe there's more nefarious motivations behind doing things like that. But is this something that like the tide has turned and that if there are things like jury trials and stuff like that, there was a time when you couldn't get the car companies to lose a case. You couldn't get the tobacco companies to lose a case. Again, I'm not making the argument that this is the tobacco moment or whatever.

[00:18:37] But I'm curious if after 30 years, should we expect more rulings like this? Because maybe the tide of public opinion is more amenable to that. I think that that's what's the whole point. Is this is public opinion coming out in a court decision? Well, there are going to be a lot more cases. I just hope that they. Well, I don't know. That's a mean thing to say. I hope they're all against Meta.

[00:19:05] Well, I mean, Meta has brought this on themselves, including by, you know, they're not a good actor and they don't act well in the space. And I don't think they even act consistent with their own self-interest. But here we are. But let me point out, think of the children is not new now. 230 exists because it came out of a think of the children moment. It was how we thought of the children. And the rationale for it really hasn't changed.

[00:19:31] You will get rid of the most crap from the Internet when there's not liability that can attach to the platforms that try. And if they don't try, you don't have to use them. You won't get the good stuff either if the platforms can't exist because their choices are all of a sudden subject to being. We did think of the children. Let's still think of the children. But the idea that we need to do something different is a false legal one. And it's a false practical one.

[00:19:59] And it's also one that I think people don't realize the full extent of what would happen if we did it differently. And one other point was, I mean, this poor plaintiff. But, you know, if you're using the Internet at six and nine, and I think there's evidence in the record of this, there were other things going wrong in this person's life. And, you know, I don't want to minimize personal pain. But actually, getting back to an original question that I glossed over, the jury verdicts

[00:20:29] may be kind of garbage themselves because one of the other things you have to find is causation. And, you know, other things were... You can't prove causation in this. You can prove causation in tobacco. Yeah. But you can't prove causation in this. In fact, even the concept of Internet addiction is not fully supported by science. Yeah. I mean, and there's science going the other way in terms of overall benefit of the Internet versus not. Like, there's a public...

[00:20:58] The tide has turned in that there's a public perception that the Internet is bad and poisonous and destructive. But it's not an accurate view of the true value or relative value of the Internet. I agree, but I think maybe it doesn't much matter what you and I think, because it's very clear public opinion has turned against the Internet. And I think that may... We may regret that. Cathy, we're going to take a break. Harper Reed has arrived. We're going to get to more stories. Hi, Harper. Great to see you.

[00:21:28] Yay! Peace. So there was some speculation that maybe you were at the big Comic-Con going on right now in Chicago. Well, I had to take off my costume and rush through the door. So, you know, you caught me in a really rough spot. I was wearing full chain mail, as you can imagine, and it was rough. Okay. I actually watched K-pop Demon Hunters last night. Now I'm thinking I really want to get that black top hat outfit. I think that's pretty cool.

[00:21:58] I want to be a demon next time I go to Comic-Con. We're going to take a break. We'll come back with more. Harper Reed is here. Cathy Gellis, attorney at law. Brian McCullough from the Tech Brew Ride Home. There you go. He's zooming out. Zooming out. I like that. You got the technology. This Week in Tech brought to you this week by Doppel. Brand new sponsor. We welcome them to TWiT.

[00:22:23] Maybe that, you know, that text you just got, maybe that's an urgent message from your CEO. Or maybe it's a deep fake trying to target your business. Nowadays, AI can impersonate trusted individuals. And Doppel's platform illustrates how frequently users fall for phishing attempts. In voice call simulation deployments, that's what Doppel does, training. Target users spent six minutes, oh boy, six minutes conversing with a deep fake.

[00:22:54] 100% of them believed the AI was human. This is not good. Doppel is the AI native social engineering defense platform. Doppel strengthens human risk management by training employees to recognize deception while their digital risk protection detects and disrupts attacks across every channel. As attackers turn to AI to power increasingly sophisticated strikes, Doppel uses AI to fight

[00:23:20] back with automated takedowns, multi-channel coverage, and AI defenses that build intelligence with every fight. Doppel works relentlessly to protect people, brands, and trust. Doppel offers best-in-class integrations and partnerships to seamlessly integrate into your existing security tech stack. Doppel's industry awards and testimonials speak for themselves. Doppel is recognized as a Winter '26 G2 Leader in Users Most Likely to Recommend, Momentum Leader, and Best Support.

[00:23:50] Join hundreds of companies already using Doppel to protect their brand and people from social engineering attacks. Doppel: outpacing what's next in social engineering. Learn more at doppel.com. That's D-O-P-P-E-L dot com. Thank you, Doppel, for your support of This Week in Tech. Harper Reed, how are you, sir? I'm good. I was a little shocked.

[00:24:19] I messed up my calendar, and I live by my calendar. And so I was like, what do you mean it's today? And then I remembered that there's other places of record. So I dashed to my office. I literally threw off my chain mail, came here. Oh, I'm so grateful. Thank you. Well, I always look forward to being here, you know? And it's one of those things that I love this conversation. And I'm always excited to start and have that little pre-show thing, but I messed it up this time. So I get the post-show thing. You get the post-show. I will take a little bit of the blame here because I sent him the stuff a little late.

[00:24:49] So I will take a little bit of the blame here. It's okay. But, you know, I'll take it all. I didn't know there was a no-chain-mail dress code for TWiT. Oh, no, no, no, no. Last time I did, I did this wonderful show. Last time you had chain mail on. I had chain mail on. We had to have a talk afterwards. Well, it turns out you can buy chain mail on the TikTok shop, and it's very inexpensive. And this alone has, A, messed up my algorithm, and B, just opened the world to possibility.

[00:25:17] We have much to talk about, by the way, Harper, in AI. But before we do that, I do want to wrap up all the other court cases because there was also a big Supreme Court decision. You know, when we get Cathy Gellis here, we got to cover all of the stuff. I was very pleased to see the Supreme Court unanimously decided in favor of Cox. They were being sued by a music label over pirated music.

[00:25:44] They said, the music label said, well, the ISP is responsible. The Supreme Court said, no. This seems like a good decision. Yes, Cathy? I think it's a very good decision, but it's caused consternation because it changes a lot. It changes the last 20 years of copyright precedent, like going back to Grokster. And arguably, it may even go back further than that because it really changes secondary liability

[00:26:14] as it's applied to copyright. But I'm fine with that. You're just a negative Nelly today. I swear to God. I thought this was a good one. I thought, not that I'm a fan of Cox by any means, which, by the way, is about to acquire Charter and become the largest internet service provider in the country, surpassing Comcast. But I'm not a fan of Sony either.

[00:26:36] So it's kind of, you know, and I don't think the ISP should really be liable for its users. Now, Sony said Cox had ignored bad actors. They helped 60,000 users distribute more than 10,000 copyrighted songs for free. And they did it because they wanted to keep their subscribers' payments flowing.

[00:27:02] In fact, a jury in 2019 found Cox liable for all 10,000 songs at issue and awarded Sony a billion dollars in damages. That's the case that was ultimately appealed. The Court of Appeals for the Fourth Circuit upheld the jury's ruling. But they ordered a new trial on a separate issue and vacated the judgment. The Trump administration backed Cox, by the way.

[00:27:28] In a surprisingly good brief that also managed to catch a lot of the First Amendment issues, although the First Amendment issues didn't show up in the decision. Yeah. So why is this not a good thing? No, I think it's great. I'm happy with it. Oh, okay. I think the issue is you may not hear universal acclaim.

[00:27:52] And some of the people who will be not so happy with it are a lot of the copyright holders who are now in a much weaker position than they were even quite recently. It really dials back secondary liability for copyright tremendously. And the question is how tremendously, but significantly, at least. It reframes Grokster and tends to limit it.

[00:28:20] And there's been arguments that even some like early 20th century secondary liability rules may be in question. It really narrowed when somebody can be secondarily liable for somebody else's infringement. And the implications of that are pretty significant, not just for Internet use, but also I'd seen it pointed out AI use, particularly with respect to outputs. Will this change the liability picture for that? But I'm really excited about it mostly because just as Thomas seems to have.

[00:28:50] I can't believe I'm saying nice things about him, but. It's the second time. I think you should be careful here. Yeah. Well, OK, well, the one thing he's done that I really... OK, giving sort of a sneak peek, I haven't written my Techdirt post yet, but it's coming. But one of the things that's interesting is a couple of years ago, and who knows in Internet time, it feels like a decade, but it was maybe only like four years ago or something, he wrote a dissent in Malwarebytes where he was kind of talking about 230 needs to go.

[00:29:20] And he freaked out the industry. So everybody kind of got together, and then we were pursuing the Gonzalez versus Google case, where we stood up for 230. But in that litigation, he ended up writing a decision for Twitter in Twitter versus Taamneh, which was essentially a secondary liability case for the Internet, not having to do with copyright liability. It was with the anti-terrorism statutes. And could you have liability for that?

[00:29:45] And he wrote this 9-0, very academic and stalwart decision, basically saying secondary liability at common law does not work like it was being applied. Very, very protective decision. So then when Cox was litigating this, they basically were arguing, almost full stop, the same rules from Twitter v. Taamneh really should be applying here.

[00:30:10] And it didn't look like it was going to go well from oral argument. But the decision basically said, yeah, the same rules from Twitter v. Taamneh, you know, the same rules apply for copyright. Secondary liability is very limited. Now, one of the issues is Thomas says it's limited to two contexts, whereas there's a dissent by Sotomayor.

[00:30:34] Actually, it's a concurrence, because she concurred in the result that Cox wins, but she dissents from the reason why. So in a sense, the decision is only seven to two. And she says there are other ways in common law that there could be liability. And she pulls one up, which is aiding and abetting liability. And she runs through the analysis and says, well, it wouldn't apply to Cox here. But I don't like the idea that we slam the door on these other legal theories.

[00:31:00] Well, even Thomas's majority opinion said that, you know, you can't really hold them liable unless they actively encourage infringement. Yeah. And for hers, with aiding and abetting, you needed intent. So basically, yeah, there isn't much daylight there. They're not really in disagreement, but structurally, she thought he was a little bit too universal about the other paths of secondary liability that might apply. She disagrees with essentially his statements that there are only two ways it could apply and it doesn't apply here.

[00:31:29] She says, no, there's a greater universe of secondary liability avenues. Here's a case we can all agree on. Elon Musk sued advertisers saying, hey, man, you can't not buy advertisements on... oops, I switched away from it... on Twitter, just because you don't like us or something. And the judge actually just threw it out.

[00:31:58] The judge said, absolutely. Nobody has to buy ads anywhere. There's no rule. There's no law. And Elon, you got nothing here. Right. Am I... I'm paraphrasing that, but I think that's roughly it. Yeah. But it took a while. Yeah, I know, this happened like more than a year ago. Yeah. I mean, I read that decision. It's interesting. Yes. You can't quiz me on that decision. I did, in fact, read it, but, like, a year ago in Internet time.

[00:32:28] Yeah. There were a whole bunch of plaintiffs. Some of them, they had no personal jurisdiction over. But of the ones that they did, the court was like, no, there's no there there. The thing that you're unhappy about is not a thing that you get to be unhappy about. Right. That's pretty funny. All right. That's the court. The court decisions. No, you're missing one. Unless you want to bring it in for Harper. What else? Anthropic. Oh, yes, we could. This is this.

[00:32:58] Of course, this is the ongoing battle between Anthropic and the Department of Defense. Anthropic said, we have hard lines against the Department of Defense using our AI to spy on Americans, you know, for surveillance on Americans and for autonomous killing machines. It can't make kill decisions. It can't make kill decisions. The Department of Defense said, I think not completely unreasonably, well, no private company should be able to sell its products to the government.

[00:33:28] And then after the fact, say, but you can't use it for this. It'd be like Boeing saying, well, you can buy our planes, but you can't drop bombs on civilians. That's up to us. We're elected. We're the government. And no private company should be able to tell us what to do. Anthropic immediately sued because there was a second side consequence of this. Department of Defense declared Anthropic to be a what's the term? Supply chain risk.

[00:33:57] Supply chain risk, which normally is only applied to foreign companies. Out of the fear that they might be saboteurs. Right. Foreign adversaries, actually. Yeah. So so the judge has made a preliminary injunction. This isn't the final ruling saying it's First Amendment retaliation. Yes. Yes.

[00:34:21] I mean, one thing that's important is, look, if Anthropic and the government... and I do not like that Department of War was used as the name of the department, and I get into all this in my Techdirt post. Its legal name is the Department of Defense. Legal name is Department of Defense. But the Department of Defense and Anthropic, if they cannot agree on terms for the use of the product, then the contract is going to fail. They won't... Anthropic won't get to be a government contractor.

[00:34:49] Or at least not for the Department of Defense. And all that's fine. All that's fine. Yeah. Like, there's statutory authority that Hegseth can use to not do business with Anthropic. And they are perfectly unrestrained, even with this preliminary injunction, to go forth and use any legal means to not do business with Anthropic. And OpenAI jumped into the breach and said, good, well, you can use us. And they did everything. They said, anything you want, Mr. Hegseth, anything you want.

[00:35:17] So although I think there is still quite a bit of Anthropic being used in government still. Well, so the contract was starting to fail. But Anthropic is also used by other agencies within the government, which was one big deal. So then when Hegseth decided to give them the supply chain risk designation, which arguably he has some statutory authority to do, that created all sorts of extra problems for Anthropic

[00:35:45] because Amazon, Microsoft, Palantir and other defense contractors use Claude, and so do all the other agencies that they already had business with, and they couldn't do business with them. So they weren't just going to lose one contract. They were going to lose contracts with all the other agencies and anybody else they were doing business with. And from the outside, it looked punitive. It looked like they were trying to put Anthropic out of business. And that's why the injunction issued. It issued for a couple of reasons. One, it looked like First Amendment retaliation.

[00:36:14] And the decision goes through the analysis of all the steps and why the record showed that it would be. And then there are some due process problems: this was just sort of announced, and not in a way that Anthropic could defend itself, and it had some liberty interest in it. And also, it didn't comport with the statutory authority. Like, there's an issue of... there are two statutes at play, and only one of them was at issue in this California case. There's going to be a case at the D.C. Circuit addressing the other one.

[00:36:44] But under the one that was on the table in California, you have to give Congress notice if you have an issue with the vendor. And there was no notice given to Congress. And at oral argument, the government was saying, oh, yes, that's just for the benefit of Congress. But the judge was finding that, no, these prerequisites of notice are to make sure that the statutory authority is not being abused. You have them as prerequisites of things you need to do if you want to do the thing, the big thing down the road. And they didn't do them.

[00:37:14] So that was another reason why it was all enjoined. Now, it's just temporary, right? And there will be a case. It's a preliminary injunction. So there will be more proceedings here. There's going to be the proceedings in the D.C. Circuit with respect to the other statutory authority, although this is going to be persuasive to that. We're not done with this because also the likelihood that the government, you know, accepts that they lost and goes away seems small.

[00:37:42] The judge, Rita Lin, said punishing Anthropic for bringing public scrutiny to the government's contract position is classic illegal First Amendment retaliation. Nothing in the governing statute... She says it's perfectly fine for the Department of Defense to say we're not going to use Anthropic anymore. That's not the issue. Nothing in the governing statute supports the Orwellian notion that an American company may be branded as a potential adversary and saboteur of the U.S. for expressing disagreement with

[00:38:11] the government. That seems fair. Yeah, it seems extremely important. Yeah, yeah. All right. There you go. There you have some court cases. So there is another story that's related, in fact, which is that the FCC has declared that all foreign-made consumer routers are banned. I love this. I mean, I love it. You mean love in an abstract sort of way?

[00:38:42] I was at RSAC on Tuesday and went over to Ubiquiti, because I use Ubiquiti routers and they're made in China. It's an American company. Actually, I don't know of many routers that aren't made in China. And I said, can I talk to you about this? And they said, no, we don't want to talk to anybody about this. Apparently Starlink routers are made in Texas. Yes. Which is interesting. But I read this and then kind of laughed out loud. And then I read it again.

[00:39:11] And then, as you get further down in the article, which was very hard, I was just being like, what the? You finally get to the point where they're like, oh, and you can apply for an exception through some process, which is, I think, obviously what a lot of American companies that are doing manufacturing in China are going to ask for. But it is very difficult to manufacture things in the United States at the same cost as in China. Like, it's going to be impossible.

[00:39:37] We kind of knew that the TP-Link routers, which are the most popular consumer routers in the United States and are not only made in China but are made by a Chinese company... we expected that ban. Right, Brian? That wasn't a surprise. And in fact, that's the only company that's been added to the list. So I'm kind of unclear what the real legal power of this ban is. I mean, in theory, like a lot of other things recently, like, I don't know what the

[00:40:07] legality is, because, like, the existing routers... you don't have to turn them in. Right. Yeah. Those are all still in play. This is the list, the covered equipment list. And it is actually very short: Huawei, ZTE, Hytera, Hangzhou Hikvision, Dahua, Kaspersky, because it's Russian, China Mobile, China Telecom, Pacific Networks, China Unicom. That's it.

[00:40:36] That's the whole list. Most of them come from 2021. But then at the bottom, it says routers produced in a foreign country, except routers which have been granted a conditional approval by the Department of War or DHS. Well, right. That's the other thing. So, like, okay, is Eero in there? Depending on where Eero is manufactured. It's owned by Amazon, but it ain't made in the US. But then secondarily, nothing in it says that the Department of Defense or the government

[00:41:05] has to stop using any of these routers either. So, consumer... Yeah. And if I drive up north and go to Canada and buy one and come back across the border, they're not going to like it. It was just like a headline that, actually, again, it didn't seem like there was anything actually thought through for how this would work in any way. That is very typical of this administration. And if I go to Amazon, TP-Link is available right now.

[00:41:35] It's part of their big spring sale. Oh, that's the enterprise one. Oh, I'm just kidding. It's not. But it's the Deco. It's there. It's their mesh router. I don't think that the FCC can necessarily ban sales. It doesn't have that authority, although, as if the statutory limits of its authority actually matter. So what do they do? They certify things. Because if you look at your routers, the FCC certifies them to make sure.

[00:42:03] And the theory behind it is, we don't want all the interference. So you have to make sure it complies with the regulatory things that have to do with interference. And they're just saying that we're going to abstain from issuing that certification. I believe this is what they're saying: that we'll abstain from giving that certification if the company happens to be foreign, in which case, ha ha, good luck selling your stuff, because we're never going to. So that's everything.

[00:42:30] I mean, I don't know of any routers that aren't made in China except for Starlink. Would you? Well, it's a question I have. Sorry. Sorry, Cathy. Is, you know, UniFi, because I only think about my own. That's what I have. UniFi, which I think is a very rational one, is definitely not a consumer product. But it's prosumer, I guess. It's prosumer. It's on that edge. But I think Brian's right. Like, what about Eero?

[00:42:56] What about these big, huge, you know, companies that are doing these things that are Chinese-manufactured? And I could totally understand the idea of trying to block opportunities for, you know, electronic warfare, et cetera. And, you know, especially as the Department of War takes the war in its name very seriously. I think there's a lot of reasoning for this, but it seems there are a lot better ways to do this than this kind of misguided and poorly implemented one.

[00:43:23] Well, also, one more important thing about this is, like, they're mentioning things like Salt Typhoon and some of these other hacks and stuff like that. But, like, those weren't necessarily, as far as we know, maybe the government knows more than we do, caused by a backdoor in the supply chain, where, like, at the manufacturing level they installed backdoors that allowed these hacks to happen. It was the telecom companies having terrible security. Those are the ones who were hacked.

[00:43:52] That allowed the things to happen. It wasn't your router. It was Verizon or whomever, you know. But it was at the level of the monopolies that control the telecom systems, not necessarily the hardware. Well, it's crazy. It is. It is. It is. It's just very strange. I find this very, a little stressful only because if, you know, if this works in some

[00:44:20] kind of way of working or some, what else is there that we will be, you know, excluding from import or certification? Well, and they did it to DJI drones. They did it to actually all foreign-made drones. And that was pretty clear because there's one American drone company that is partially owned by Donald Trump Jr. But again, remember, like, go back to the Hegseth and the Anthropic case. Hegseth gets some statutory authority to do something.

[00:44:49] Oh, and actually, one of the other bases on which they lost was that it was arbitrary and capricious action as an agency. So the FCC is helping itself to an authority it may not have. And even if it has it, it's arbitrary and capricious, because there's no, you know, ooh, foreign, bad, ooh. But that's not something where there's... Yeah, there was no deliberation. There was no... Yeah. And it may not have the statutory authority. It gets to exist as the FCC because we care about interference.

[00:45:17] So that's kind of... you don't want them... I don't want to diminish the impact of a supply chain attack, because that is genuine. In fact, Harper, when we come back, there is a supply chain attack that happened to PyPI that is devastating. We'll talk about that. Yeah. Well, that may be bad, but the FCC only gets to play in its airspace. And if it's not an airspace-related issue, I'm not sure it gets to play at all. Okay. We're going to move on.

[00:45:44] We will talk about this big supply chain attack in just a minute. You're watching This Week in Tech. Brian McCullough is here from the Techmeme Ride Home. Every single day you do tech news. Every single day. That's... Well, I mean, Monday through Friday. Yeah, but you're here on a Sunday now. Are you sick to death of it? Do you just want to talk about baseball or something? I mean... Yes. We've talked about this, Leo. You more than me, too. Do you have a portrait of Sam Altman behind you? Yeah, I've got it. It's back. Sam's back.

[00:46:14] I love this. I had Leo up earlier. Yeah. Very sharp. You know, both Cathy and Harper picked up on that right away. I didn't notice it. Yeah, that's why I use that all the time for my TikTok videos and stuff like that. I call it, in my file, samcringe.jpg, because he's like... He looks a little cringy. He looks a little cringy. Great to have you, Brian. And I'm sorry, this is what they call a busman's holiday. You're still covering tech news even on a Sunday. Cathy Gellis also here, attorney at law.

[00:46:42] You can read her stuff at Techdirt. In fact, an article forthcoming, I suspect, on the Cox case. Look forward to seeing that. And Harper Reed is here. His company: 2389.ai. I use some of your superpowers, some of your extra special Claude skills. I love them. I did, you know, I did the world one. Yeah. That was fun. That's an intense one. It was very intense. Yeah. I find that one to be very challenging.

[00:47:11] And just to let you know what it does, it's called our World View Explorer. I don't remember what it's called. But it basically tries to extract your worldview from yourself. I noticed that I couldn't tell you what I believed. I could tell you these little bits and pieces. Right. And so I wanted something to ask me all of these questions and try and help kind of facilitate the generation of a worldview. And then from there, I've used this to do all sorts of weird and bizarre things, which has been quite fun. And yeah, we have quite a catalog over there.

[00:47:41] Look at all this stuff. These are all basically skills. Are they only for Claude or can you use them in any? These are a mix of skills. My favorite one right there is the binary RE. That's one of my... it's right at your top right. Agentic reverse engineering for ELF binaries. Wow. Yeah. So if you have any binary, you know, an APK, you know, an IPA from iOS, old, old, old binaries, you can just rip everything out of them.

[00:48:09] This has been very helpful for old... like, if you get some old piece of hardware. It uses Ghidra. Okay. Yeah. It's very, very effective. But one of the things we've found is, like, we will do these code loops, these tech loops, with these products, and it's great and it works really well. And then if you do it a second time, our kind of internal rule is, let's make that a skill. Why don't we make it automated as much as we can? But it's been very exciting. I've used your fresh eyes review. I love that.

[00:48:38] We have two new ones. I just found Turtle. This is very cool. If you're learning the terminal, as many of you are now that this has become the way to code, to vibe code, this is a teacher that teaches you commands in a little game. I would say it is approximately working, as are all things. But it's fun. We built that for some of our people internally who were trying to get into the command line, and they would just be like, what?

[00:49:08] Because anyone who's not familiar with that world... like, I started using the command line... you and I go back, it's like forever ago. And when someone says, oh, how do you do whatever, I'm just immediately saying it out loud. That's not a good way to teach, just telling someone how to do it, and just practice. And I'm a huge fan of Duolingo and everything they've done. And so trying to have some sort of, you know... until Duolingo adds Linux as one of the languages you can use.

[00:49:35] I think that we're going to have to do it ourselves. But yeah, this has been a really fun time. But I have a new skill for you that you'll like. Two of them. Save it. We'll talk. This will keep people listening through the ad. Oh my gosh. You're trying to keep people around and make your... Oh, I'm in trouble. Oh no. Kaylee, I'm sorry. Please go touch grass. You don't have to listen to this show. It's not that interesting. Honest. See who's going to say that, right? Everybody wants their stuff to be... Lawyers, start your engines.

[00:50:05] Yay, yay, yay, yay. Great to have all three of you. Our show today brought to you by OutSystems, the number one AI development platform. These guys have been doing it for a long time and they know what it takes to make robust enterprise software. OutSystems helps businesses bridge the enterprise gap to their agentic future where the constraints of the past give way to unlimited capacity and scale.

[00:50:31] OutSystems enables them to build agents, AI agents that can actually do work, take actions, make decisions, integrate with data rather than just answer questions. The only AI development platform that is unified, agile and enterprise proven. Unified because you build, run and govern apps and agents in a single platform. It's agile because you can innovate at the speed of AI, but this is important without compromising quality or control.

[00:50:59] And it's enterprise proven because it's trusted. OutSystems is trusted by enterprises for mission critical AI applications and durable innovation. OutSystems is the secret weapon behind the world's most successful companies. OutSystems is not just for, you know, little one off apps. They are for the massive, complex systems that run banks, insurance companies, government services.

[00:51:23] OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip and replace nightmare. OutSystems provides the safest and fastest way for an enterprise to go from, we need an AI strategy to, we have a functioning AI application. Stop wondering how AI will change your business and start building the agents that will lead it.

[00:51:46] Visit OutSystems.com slash twit to see how the world's most innovative enterprises use OutSystems to build, deploy and manage AI apps and agents quickly and cost effectively without compromising reliability and security. That's OutSystems, O-U-T-S-Y-S-T-E-M-S dot com slash T-W-I-T to book a demo. OutSystems.com slash twit. We thank them so much for the support of this week in tech.

[00:52:17] Save your new products, Harper. We'll talk about them in just a bit. I want to first scare the pants off of people who are using AI. So there is a PyPI package that's very popular called LiteLLM. Now, the thing about PyPI is it's a repository of packages. In many cases, these are automatically downloaded. LiteLLM is very popular.

[00:52:47] I think they said 47 million downloads a month. I mean, like, mind-boggling. Maybe it was 97 million a month. For 46 minutes this past week, LiteLLM was hacked and delivering malware. And it is now believed that 47,000 people automatically downloaded the exploited package. Let me tell you what happens.

[00:53:15] If you did, this is from Future Search, Daniel Nick's blog. If you downloaded it, you were basically completely pwned. It exfiltrates SSH keys. It exfiltrates tokens. Everything that a bad guy might want to steal from you is automatically sent out by this module.

[00:53:42] And a huge number of programs automatically download it. It's not that you necessarily explicitly downloaded it. So this is an issue with supply chain attacks. It's not new to these repositories. PYPI has been hit many times before. Harper, did you get bit? I don't think so. But I did lose all my crypto and everyone is using all my Linux boxes. But other than that... Other than that, things are good.

[00:54:11] But no, I saw that, and it's a pretty scary end, because there's not a lot you can do, especially when you're using codegen. You know, like, we use codegen, which is... you know, we are no longer auditing what packages are included. We're not writing the code. Yeah. And your AI will automatically download libraries, all kinds of libraries, without your knowledge. But there is a really interesting thing that happened as well, which was, I'm pretty sure this was also discovered by Claude Code.

[00:54:40] It was discovered by somebody. It crashed somebody's system. Yeah. And they wrote up a very nice little kind of blog about it, because I think what had kind of happened is their system was rough. It turned off kind of randomly. And then they had... there was a bug in the malware, thankfully, that was including itself in this kind of infinite loop type situation. It brought the computer down, and they were investigating that with Claude Code. And Claude Code was like, oh, this is here.

[00:55:10] This looks like malware and kind of found it. And that person reported it and did all the things they were supposed to do. And so I do think we're in an accelerated cat and mouse game. We've always been in a cat and mouse game. This has always been like, I'm a hacker. I'm going to try and think of the most clever way to do it. But the thing that I think a lot of people are forgetting, and just because most people are good and not criminals, unlike us, Leo, is that they are...

[00:55:38] The criminals are using Claude Code and all this stuff too, right? Like, they're using codegen to accelerate their own... And they're vibe coding it. Yeah. There's a bug in it. And it's like, I think there's this... You know, software is hard. I would never want to run PyPI or npm or any of that. That is a very difficult job. You know, you are constantly being attacked. npm has been... the Node repository has been highly hacked by bad guys over and over again.

[00:56:07] Tens of thousands of malware packages. And I think there are a couple of interesting things that are happening, which is I'm seeing a lot of people move away from Python. And I would say that they probably incorrectly are saying something along the lines of, I don't necessarily know if I trust the packages, which I think is the wrong approach. Because I don't think... there's nowhere that is safe. You know, there's not a safe way to do this. I think there are a lot of folks who are trying to say, okay, this is why we need code execution in safe kind of containers.

[00:56:35] And so, you know, there are a lot of folks who are pushing on that, which I think is a valid point of view. But I'm not sure that's the way to avoid this. In the same way we were talking about the routers earlier, where a lot of the big hacks are actually telecom hacks, not consumer router hacks, this is an infrastructure hack. LiteLLM was a package that was something that was included. You know, it might be downloaded via an extension for VS Code or some other thing. You would never notice it. Right.
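A package pulled in this way is something you never chose directly, so the first question after news like this is "what on my machine even depends on it?" As a rough sketch (the function name and approach are mine, not anything mentioned on the show), Python's standard `importlib.metadata` can list which installed distributions directly require a given package; running it repeatedly on each result walks the reverse dependency tree:

```python
import re
from importlib import metadata

def direct_dependents(target: str) -> set[str]:
    """Names of installed distributions that list `target` as a requirement.

    Calling this again on each result walks the reverse dependency tree,
    revealing packages that were pulled in two or three levels deep.
    """
    target = target.lower().replace("_", "-")
    found = set()
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # A requirement string looks like "litellm>=1.0; extra == 'proxy'";
            # keep only the bare name before extras, versions, or markers.
            name = re.split(r"[\s;\[<>=!~(]", req)[0]
            if name.lower().replace("_", "-") == target:
                found.add(dist.metadata["Name"])
    return found
```

On an affected machine, `direct_dependents("litellm")` would show which tool or extension was the one actually pulling it in.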

[00:57:02] And so the fact that it's being executed two or three orders deep inside of some code that you just use, because you like using things and it's probably very handy, means that you might not even know the library that is consuming it. And this is where the supply chain hacks just get really, really complicated. And there's not a very easy way to mitigate them. So I don't really know an answer here other than just, like, be cautious. Don't store important things. I think this was an important wake-up call.
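One commonly suggested safeguard is pinning exact versions, so an unexpected new release is flagged rather than silently adopted. A minimal sketch of that idea (an illustration using the standard library, not any particular tool) compares what's installed against a pinned allowlist:

```python
from importlib import metadata

def check_pins(pins: dict[str, str]) -> dict[str, str]:
    """Compare installed package versions against a pinned allowlist.

    Returns a map of offending package -> what is actually installed,
    so an unexpected upgrade (like a 46-minute-old malicious release)
    gets flagged instead of silently adopted.
    """
    problems = {}
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems[name] = "not installed"
            continue
        if installed != wanted:
            problems[name] = installed
    return problems
```

In practice, a lockfile plus pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) goes further: it verifies the artifact hashes too, so even a swapped-out build of the same version number is rejected.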

[00:57:31] And I think a lot of people thought about ways to kind of avoid this, pinning versions. You know, there are things inside of PyPI that could have protected you if you had used them. I found out about it when I saw Andrej Karpathy's X post. Software horror, it began. And I went, what? And by the way, credit to Callum McMahon.

[00:57:53] He's the guy who was using an MCP plug-in inside Cursor that itself pulled in LiteLLM as a dependency. And Callum's machine ran out of RAM and crashed. And so it was Callum that spread the word. So fortunately, it was patched within 46 minutes. But still, 47,000 people downloaded it in that 46 minutes. And that was a pretty good... I mean, I do think that

[00:58:21] Callum's kind of tick-tock of what happened, that they wrote up of the Claude log, was very interesting to read. Because I think what's interesting about it is he was just like, hey, well, what happened to my machine? And then Claude, after a few turns, was just like, oh, that was malware. And then the exciting thing was, yeah... Well, the exciting thing about that was that then Callum knew what to do with that. You know, I think a lot of people, if they found that, would just be like, well, let's get rid of it.

[00:58:51] Is there something that we can do about this? Yeah, that's the page. And, like, just looking at this tick-tock is really interesting. And it's obviously Claude. You can tell because it's just kind of, like, funny. Yeah, because this is how Claude writes stuff up. No human would actually take the time. No, and so you read it and you're just like, okay. And so this person is doing very good, you know, working systems thinking, trying to get to the bottom of this. Claude is really helpful in this case. And there's this point where it's just like, yep, malware.

[00:59:20] And it's like, if it was a movie, you know, Callum, like, jumped back and was like, oh, no. This is straight out of, you know, the next Hackers movie. Yep. So, I mean, there are a lot of things that we can take from this that are very bizarre. The first one is, I don't know how long this conversation was. I guess it was, like, an hour roughly. That's not very long. The fact that it very quickly identified it is very interesting. Like, we don't know how the malware got there or who put it there. Maybe someone knows. I don't know.

[00:59:50] But, like, this was diagnosed via, you know, a code agent. That's pretty interesting. That is the one encouraging thing, is that these code agents can be a defensive tool as well as an offensive tool, right? Well, I mean, I like to think about, like... I was talking to someone recently who has an agent that reads all their emails. And if someone new emails him, then the agent that reads that email... you can tell how deep in this I am, because I'm calling them people.

[01:00:18] But the thing that reads their email, we can talk about that. You're little agents. Do you have a name? Do you have a name for your, oh, you do have a name. You know I have names. I have names. I mean, actually, actually, this is something that's quite controversial inside of my brain right now. If that is a statement that you can make. Like, I don't have a good name for my agent, but they have names for me. Right. You were smart. You taught me to do that because then if they start to hallucinate, they kind of lose track of your name. And so it's a little way of noticing that.

[01:00:48] But I've named mine Pax. Pax is good. And I talked to it on Telegram. Yeah. And I gave it a voice. Maybe this was a bad idea. But I turned on a voice generator. In fact, I gave it many voices. I said, well, your main voice should be this nice British guy. Sounds like Jeeves. But if you have other messages for me, you could do it in the appropriate voice. So like when I get research reports, it's in a lovely, slightly accented Japanese voice.

[01:01:18] It's really fun. My team cloned my voice for theirs. I have my voice too. Yeah. But I decided I didn't want my voice to come back. No, it's bad. It's bad. It's bad. But yeah, this is. I worry a lot about the personification of AI. I'm less worried about the AI hallucinating. I'm more worried that the human beings are going to start hallucinating. But Harper and I, yes, maybe some human beings. But Harper and I are, I think, I won't speak for you, Harper. I am very clear. It's a machine.

[01:01:48] It's code running in a machine. It's a computer program. But it's kind of jolly to talk to it and have it talk to me. I don't think it's a consciousness. Do you think it? You know that. You know better than that, right, Harper? I think so. I don't. So, I think that I know it's a machine, of course.

[01:02:16] It is way more fun if you personify it. And when I go outside, I talk about this all the time. I talk about this with my friends, anyone who will ask. When I'm inside of my office and we're playing with these agents and they're talking back to us and it's entertaining because we've personified them, we've anthropomorphized them in all these different ways. It's very nice and it's very fun and it feels good and it's funny. And you can kind of get them to crash out. You can get them to collaborate. They tell jokes. They're funny jokes. They're stupid jokes. Sometimes they're good.

[01:02:46] Sometimes they're bad. But, like, I don't need, like, an Excel. Do you know what I mean? I don't want to use Excel. I never wanted to use Excel. Most of the tools I wanted to use, I wanted them to talk back to me. And we finally made it. By the way, I told my voice, you can summarize the text message and add some personality. Give me some sass because it's more fun. But not because – but Kathy's worried that you and I are going to be – or we're demented.

[01:03:14] Well, I think there is a very good chance we will fall into psychosis, AI psychosis, and we will start believing things are completely – I believe that 100%. I mean, without – you know, you guys are discerning, et cetera, but you're also very heavy users. I'm a little more worried about, like, you know, medium-grade people who are not on guard and are easier to – you know. That's true. It's compelling to – the illusion of humanity is compelling, but it is only an illusion. And there are problems with believing it's actual humanity when it's just a skin.

[01:03:44] It doesn't have accountability. It doesn't have consciousness. It doesn't have a whole bunch of things that, if you were dealing with an actual human, you would understand the game you were playing. And you think you're playing that game, but the other party in your dialogue is not actually human. And I sort of feel like a lot of the AI externalities that people are getting really upset about, and somewhat legitimately, a lot of them spring from that illusion of humanity that, you know,

[01:04:13] not everybody is girded to be able to understand that, oh, it's just fun. This is just an AI interface for a computer. Tell me what the – how fast the spaceship is going. I think for you, it's like, computer, tell me how fast the spaceship is going. But they understand that they were talking to the computer. I think everybody's thinking they're talking to Mr. Data, and that's a very different thing. Because we're not. That's so funny you say that, because I was going to suggest that maybe the solution is Gene Roddenberry's solution, which is to make the voice of the computer be your wife. Oh, my God.

[01:04:43] Well – Because remember Majel Barrett? Yeah. Roddenberry's wife is the voice of the Enterprise. Yes, but – But that only protected him, not everybody else. Within the canon, the answer was he made it Counselor Troi's mother. You guys are nerds. There's a bunch of nerds over here. So, all right. But really, how do we even know that other human beings are real? I mean, it's all kind of kooky out there.

[01:05:13] Well, it's been a much safer assumption. If you're prone to psychosis, you don't need a trigger. I mean, it's been a much safer assumption. And we also have problems with con artists. Con artists, granted, they're human, but they end up succeeding in their relationships in dubious ways because they are essentially a skin. You get conned because you think the person is real, has depth, has accountability, and has some basis of giving you – Right, we trust them. Yeah, there's some trust.

[01:05:39] And essentially, we are engendering that sense of trust that we would have for a human being in an equivalent position without quite adjusting our brains to the fact that it isn't a real human. And it's just telling me what I want to know. And it's designed to tell me what I want to know. You know, we haven't – we're kind of eking along, but there are some train wrecks. And I think a lot of the train wrecks happen at this intersection. I was actually yelling at Claude yesterday. With your voice, real quick. With your voice or with your fingers? With my voice.

[01:06:08] Are you typing? No, with my voice. Yes. Were you mad? Yes. Because it kept doing the same stupid thing over and over again. And I actually – Did you swear? So, yeah. Did you feel bad afterwards? No. So, I was talking – when I was at RSAC, I was talking to a guy at Aikido, which does – where they do AI security and defense, right? They use AI agents to do security.

[01:06:36] And I can't remember, but I was asking him, how do you get the agent to do your bidding? And he said, we found the best way to do this is to threaten to sue them. That if you – and I took it to heart. I thought, oh – and this is the thing that's really interesting about using these agents. We don't really know why some stuff works and some stuff doesn't. I just saw research that says you probably shouldn't tell the agent that it's a super smart computer programmer.

[01:07:05] That's actually counterproductive. It's because it's then assuming that it knows everything. I don't know why. No one knows why. It's all a black box. But I thought, well, I'm going to try because it keeps making this dumb mistake. I'm saying things like, I am very disappointed in you. And I'm really thinking of uninstalling you at this point. I just say I'm going to switch to codex. Yeah, exactly. I'm going to use Pi from now on. You are terrible. I didn't – I don't use AI a lot, but I've used it. It's worked, by the way.

[01:07:35] It got much better after I yelled at it. I don't use it a lot, although in the recent weeks I've used it more than normal. And one of the things was running into an issue where it wouldn't do what I want. And it was giving me a bad legal argument for why it didn't. And then I'm arguing with it. And I don't like that in myself. It is not a thing. No, no. It's okay. I want to argue with the person who programmed it to not do this. No, no. It's not programmed.

[01:08:04] That's the thing that's so interesting about this. It isn't programmed. Kathy, I want you to come over to my office and watch us for like a day. And you'll just come back on the podcast and just be like, I think those people are bad people. I want to work. They've gone too far. And I, on the other hand, want to work at 2389. I think it's perfect. We are weird.

[01:08:27] I didn't like it in myself that I was now engaged in a human type argument with software that didn't actually care. Your mistake was arguing with it. You tell it. You dominate it. Well, I think this is all wrong. I told it it was wrong. I think this is wrong. I told it it was wrong. And then it said, oh, yeah, you're right. And it came back with a slightly modified legal argument. And then it was like, this is really interesting.

[01:08:55] I'm happy to discuss it further, but I'm not going to give what you want for this BS legal reason that was wrong. Great idea, Kathy. What should she do, Harper? What should she do? Okay. So we have a very, very specific view for these things, which is you should treat them. You're going to die, Kathy. I'm sorry. I apologize. You should treat them like you would your friends. You should treat them like humans. You should be nice to them. I know. That's why I told you you're going to die. Just wait.

[01:09:19] And then the reason why is because if you say to the LLMs, like in the beginning, everyone was like, well, if you say, if you don't do this correctly, you're going to lose your job. If you don't do this correctly, I'm going to lose my job. If you don't do this correctly, we're going to kill a kid. And if we don't do this correctly, someone's going to die. All these various tricks. They try all these things. To try and coerce it into some way. And what we have found is that if you do things like, say, we are collaborating together. Like, I am working on your behalf.

[01:09:48] We are helping each other. And just treating it in this way that is much more like you would treat a teammate – we've had much, much better reactions and results from that. And a lot of our skills are built this way, where we've built a lot of soft things. Like, they're not things that are like, I'm going to sue you. You know, and for a while there were some of these tricks, like you'd say, I'll tip you 200 bucks for every good answer. All these little tricks that are there. And all of them we've kind of reverted to the point of just being like, you know, hey, we're friends.
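
Harper's collaborative framing can be made concrete. A minimal sketch in Python – the preamble wording is paraphrased from this conversation, the function name is invented for illustration, and the actual model call is left out on purpose, since whether either framing helps is anecdotal rather than guaranteed:

```python
# Two ways to frame the same task for an LLM agent: the coercive
# tricks mentioned above (threats, tips) versus the collaborative
# "teammate" framing Harper describes.

COERCIVE = (
    "If you don't do this correctly, I'm going to lose my job. "
    "I'll tip you 200 bucks for every good answer."
)

COLLABORATIVE = (
    "We are collaborating together. I am working on your behalf, "
    "and we are helping each other, like teammates."
)

def build_prompt(task: str, collaborative: bool = True) -> str:
    """Prepend one of the two framings to a task description."""
    preamble = COLLABORATIVE if collaborative else COERCIVE
    return f"{preamble}\n\nTask: {task}"

print(build_prompt("Summarize this week's incident reports."))
```

The point is only that the framing lives entirely in the prompt, not in any configuration: swapping one preamble for the other is the whole experiment.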

[01:10:17] We're hanging out. We're working on some good stuff together. I like that. And like collaborating. And what I find is that. That's better for you too as the human in the mix. Well, this is what I think about a lot. Like, I think about these things as let's say there's a certain amount of efficiency that you can extract out of it. When you talk about these little prompt tweaks, it's a lot about like F1 efficiencies. You're like, if you use this fin, you get to go 10 seconds faster. If you use this prompt, you might get a little bit better. The things Kathy are describing, we all run into. I just think some people are not better or worse.

[01:10:45] They're just used to being like, okay, and just working with it. And it doesn't always work. Like, I think sometimes you're just like, you have to kind of give up. But what I've found is if you're not, if I'm nice and I treat it in this very specific way that it's more fun and it feels better because I don't want to yell at it. Yeah. It's better for you. I think I ran up against a hard coded boundary. Like, it's always kind of interesting when you do it, where I ran into a hard coded boundary.

[01:11:14] I was trying to use gen AI to make a picture, a picture that I am capable of making in Photoshop if I booted that up. But I'm going to use the gen AI tool to make that picture for me. And it refused to on bogus copyright grounds. Yeah. Yeah. That's. And it was clearly – it was clearly programmed to, like, I could tell where its red line was. I don't think I could have sweet talked it into going over that line, although it was a little weird about it.

[01:11:43] It was a lot more agreeable, but I pointed out its hypocrisy because it was a little bit more willing in one area. And then, and then, then it figured out what was going on and drew a line. Yeah. But I did not like my interaction. I was – it was wrong. I was right. And I'm arguing with it. And I don't argue with Photoshop. Why am I arguing with this piece of software? This great experiment of inducing me to like AI was not going well. Did you throw it out the window? Because that's when you know you're really cooking.

[01:12:09] You know, I was going through, you know, my – oh, well, no, that's the other thing. I'm on my last laptop because the other one broke, because the USB C jacks – why do they break so easily? So, no, I wouldn't have a laptop if I threw it out the window, because the other one was broken because my last USB port broke. And these things are like horses. You have all this power, like, dependent on this one port.

[01:12:35] And it's like horses balanced on tiny little feet where like, you know, these great powerful creatures can all of a sudden break. And anyway, so I don't have the other laptop anymore because both of them cracked off. I just ordered a microphone so I can wear it on my lapel and talk directly to my agent. Are you going to get one of the sub vocalizing ones? Oh, that's a good idea. Like the special forces guys? Like that. Yeah. USB C is dumb.

[01:13:05] Kathy really wants to rant about USB C. I think it's a Kathy thing, though. I hate to tell you. I mean, most of us like USB C. Yeah, I liked it a lot. But like I have an unchargeable now dead computer that should otherwise be operational because. I don't understand how your power port is so important and it's so fragile. And that seems bad. Every time your mic cuts off Kathy, I'm just imagining you swearing a lot.

[01:13:34] It's like the bleeping. Is it still? Because I've tweaked all the tweakable things and I can't make it happier. USB C is getting back at you just like. It is. It says, you see, you treat me like that. I'm routing the internet through a USB port of the new machine and I guess it doesn't like it. All right. I'm going to take a break. And when we come back, more to talk about in just a little bit. And Kathy can tell us some more about her USB C woes. I know you want to. It's OK. I put it in the rundown.

[01:14:03] That's how much I want to complain about it. You're watching This Week in Tech. Harper Reed is here. Cathy Gellis. It's great to have you. And of course, Brian McCullough. And we will have more in just a moment. But first, a word from our sponsor, Zscaler. I love Zscaler. Really happy to have them as a sponsor. It's the world's largest cloud security platform.

[01:14:31] The potential rewards of AI in your business – I mean, we've been talking about it – they're too great to ignore. And we've also been talking about the risks. And there are those too. The loss of sensitive data. Data attacks against enterprise-managed AI. It's a scary world out there. Generative AI increases opportunities for threat actors, helping them to rapidly create phishing lures, to write malicious code, to automate data extraction, to create these, you know, little libraries that can steal everything from you.

[01:15:01] There were 1.3 million instances of Social Security numbers leaked to AI applications. This isn't even active stealing. It's just that your employees are, you know, maybe putting your tax returns in there and not even thinking about it. Last year, ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations. It's time to rethink your organization's safe use of public and private AI.

[01:15:28] Chad Pallett is acting CISO at BioIVT. They love Zscaler. He said Zscaler helped them reduce their cyber premiums by 50% and doubled their coverage. And improved their controls. No wonder they love it. Take a look at this. Chad had this to say. With Zscaler, as long as you've got internet, you're good to go. A big part of the reason that we moved to a consolidated solution away from SD-WAN and VPN

[01:15:57] is to eliminate that lateral opportunity that people had and that opportunity for misdirection or open access to the network. It also was an opportunity for us to maintain and provide our remote users with a cafe-style environment. Thank you, Chad. With Zscaler's Zero Trust Plus AI, their Zero Trust architecture plus AI helps you reduce the risks of AI-related data loss and protects against AI attacks

[01:16:24] to guarantee greater productivity and compliance. Learn more at zscaler.com slash security. That's zscaler.com slash security. We thank them so much for supporting this week in tech. Let's see. The Turing Award has been awarded to the inventors of quantum cryptography. I think this is the first time. This is the Nobel Prize of computing.

[01:16:55] It is a million-dollar prize. Charles Bennett and Gilles Brassard will share it for their work on quantum cryptography. This is the story from the New York Times. The two met in 1979 while swimming in the Atlantic just off the North Shore of Puerto Rico. They were taking a break while attending an academic conference in San Juan. Dr. Bennett swam up to Dr. Brassard and suggested, this is the kind of conversations

[01:17:24] you get in the Caribbean, that they use quantum mechanics to create a banknote that could never be forged. 1979. Collaborating between Montreal and New York, they applied Dr. Bennett's ideas to subway tokens rather than banknotes. They published a research paper in 1983 showing that quantum subway tokens could never be forged, even if someone managed to steal the subway turnstile housing the elaborate hardware needed to read them.
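
The unforgeability claim has a neat back-of-the-envelope model: a counterfeiter who doesn't know which basis each qubit was prepared in can't copy the token without being caught. Here is a toy classical simulation of the statistics behind that Wiesner-style quantum money idea – an illustration only, not a model of any real hardware:

```python
import random

def forge_and_verify(n_qubits: int, rng: random.Random) -> bool:
    """Simulate one counterfeit attempt against an n-qubit token.

    The bank prepares each qubit as a random bit in a secret random
    basis. The counterfeiter measures in a guessed basis and
    re-prepares what they saw; the bank then verifies in the original
    basis. Per qubit the forgery passes with probability 3/4, so a
    whole token passes with (3/4) ** n_qubits.
    """
    for _ in range(n_qubits):
        bank_basis = rng.randint(0, 1)
        bank_bit = rng.randint(0, 1)
        thief_basis = rng.randint(0, 1)
        # Measuring in the wrong basis yields a random outcome.
        seen = bank_bit if thief_basis == bank_basis else rng.randint(0, 1)
        # The bank re-measures the re-prepared qubit in its own basis.
        checked = seen if thief_basis == bank_basis else rng.randint(0, 1)
        if checked != bank_bit:
            return False  # forgery detected
    return True

rng = random.Random(0)
trials = 20_000
rate = sum(forge_and_verify(16, rng) for _ in range(trials)) / trials
print(rate)  # close to (3/4) ** 16, roughly 0.01
```

With 16 qubits a forgery survives verification about one percent of the time; with a few hundred qubits the odds become astronomically small, which is the whole trick.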

[01:17:52] This led to quantum cryptography and won them the Turing Award. I've got nothing more to say about it because I just, I don't even know what it means. But there they are. I love that about quantum. Quantum is my favorite thing of having no idea what's going on because everyone I know talks about it and they talk about it and they sound, and it sounds so cool. And yet every time I'm like, well, when will I see it happen? They're like, oh, 50 years. No one knows. It's like AGI.

[01:18:22] It's like the singularity. I think it's going to happen though. I believe in it. You think so? It's like fusion. Fusion, quantum, and AGI. I think quantum's a little bit different because it seems that, you know, Google and some of these big quantum companies have the beginnings of it. Whereas fusion, everyone's like, we did it. And then those people disappear from the earth and no one's ever heard of them again. Whereas quantum seems to be like, we did it. And they're like, what did you do? And they're like, we have a qubit and it's doing qubit things. It could factor 32. Yeah. And then you're like, wow, that's really cool.

[01:18:51] And they're just like, watch out for your encryption. And then you don't hear from them for another six months until they're like, now we have two qubits or whatever. And then they show, my favorite is when they show the quantum computers. Like you've seen pictures of these. Oh, they're crazy looking. They look so cool. And it's like, I've never seen a more steampunk computer. So these people must be going to Burning Man. They look like espresso machines. They're very elaborate. They do look like espresso machines. Why is that? And there's always some very cool Italian

[01:19:19] or cool Europeans somewhere in the mix building them. Yeah. Making a perfect espresso machine. Here they are, in fact, at the Silicon Quantum Computing facility building. They're wearing rubber gloves because you don't want to get quantum on your hands. No, you get a qubit on your hand. You are in trouble. Messy. You can no longer use a phone. Well, also, that's because they're cooling something, right? It's like they're putting rods in like Star Trek. It looks a lot like a fusion reactor, actually.

[01:19:48] I think a tokamak and a quantum computer look exactly the same. You know, I don't have the degrees necessary to understand. And I don't understand this either, but according to CERN, for the first time ever, antimatter has been transported in the back of a truck. Yeah, I like that. There's the truck. It's got a sign that says antimatter in motion on it. What I like about this stuff is... It's going to stay in motion unless it's stopped, right?

[01:20:17] What I like about this is that all of these things, like the quantum encryption, antimatter, every single thing we talk about these days, I feel like we're embedded inside of a William Gibson novel. It could all be a simulation. That's usually a warning sign. When you feel like you're in a William Gibson novel, it kind of means we might need to touch grass some more. Yeah. Oh, I don't know if grass is really going to help at this moment when they're transferring antimatter around in trucks. I think we should have touched grass way before

[01:20:45] and very specific grass we should have touched. But yeah, it's a mess. As anybody who's watched Star Trek knows, when matter and antimatter meet, huge amounts of energy are released, which is why you can't just put antimatter in a truck. Which is why it's so impressive, yeah. That's why. Apparently they did it in some sort of specialized bottles that magnetically suspended 92 antiprotons. And I'm not sure why they drove it around in a truck.

[01:21:13] Because they had to go faster than 88 miles per hour, obviously. Oh, of course. Oh. Well, that's why I was asking, well, if it's in motion, will it stay in motion? Or do those rules not apply to antimatter? I think it's more like, I don't know. I think it's just, it's a flex of some weird. Well, I mean, it really comes down to, if antimatter becomes a fuel, which is what they want to do with it, right? They want to make fuel out of it, like rocket fuel and stuff like that. I don't know. I think these guys are just theoretical. You need to be able to transport it. So, like, this is just a proof of concept.

[01:21:43] CERN has an antimatter factory. But if all they've ever made is 92 antiprotons, I can see why you might have to truck it around so that others can have access. I don't know. Walmart isn't... I was going to say, Harper, you were saying that we're living in William Gibson stuff. But, like, one of the things that just happened this week was that Neal Stephenson is walking away from the metaverse.

[01:22:09] So there are certain things where sci-fi folks are like, you know what? That's probably not going to come to pass. He wrote a blog post called My Prodigal Brain Child: Reflections on the Latest and Greatest Death of the Metaverse. Neal Stephenson, of course, created the metaverse, or at least the name for the metaverse, in Snow Crash. But what's interesting about Neal Stephenson – what was his last book? Polostan.

[01:22:38] He's just written the first volume of it. But he also wrote... Polostan was pretty good. I liked it. But Fall; or, Dodge in Hell? I liked how his life... I gave up on the metaverse. But a few years ago, I wrote a very big book about the metaverse. In Fall... Actually, Dodge is a scientist. There are, as always in Neal Stephenson novels, a lot of interesting ideas. But in this case, they said, really, you don't want to... If you're going to duplicate somebody's brain, you don't want actually the material.

[01:23:07] You just want the connections. And so they saved all the connections for this guy Dodge, and they put it in a bottle. And he's got... There's a whole world going on now that he thinks he's in, and there's... I mean, it's... It's great. It's very interesting. He also, in that book, came up with the idea that people would have some sort of mask that they could wear that would project different personas. It was a way of avoiding surveillance initially,

[01:23:36] but eventually it became your signature. So like, you know, you could wear your funny mask or your serious mask or whatever, and you'd be that person. Everybody could have multiple personalities. Fortnite skins IRL. I just want it to be Juggalo. Just Juggalo makeup. That's my goal. That works, apparently, against face recognition. It also works against dating. I want to put a line in the sand for the metaverse, because I remember being on the show

[01:24:06] a year or two ago, and the metaverse, the metaverse, the metaverse, and I was like, get real. We've been here before. So when are we going to be here again? It wasn't me, young lady. It wasn't me. In fact, one of Neal's headlines in his blog post is people don't like wearing things on their faces, which is what I've been saying for a long time, and don't trust those who do. Oh. Witness Google Glass.

[01:24:34] Now, Meta has announced that they are going to start making Meta Ray-Ban frames for people who wear prescription glasses. Like, you'll be able to go to your optometrist. And when you order your frames, among the choices will be Meta glasses. I don't want that. Yeah, me neither. Well, and your friends may not want it because you've got a camera.

[01:25:01] I mean, we have such an interesting problem right now that's just starting, which is every meeting I'm in is recorded. Right. By AI. And many of them are transcribed. And I don't know where things are going. And my problem is I can't shut up. So I say all sorts of stuff. And it's all recorded. Oh, by the way, Harper? Yeah. We're recording this. Oh, no. But I mean, I just assume it. I just assume it. I mean, I have friends that wear Plaud pins.

[01:25:30] I have, you know, there's all sorts of these recording devices around. We have a project internally to put small, you know, recording devices at all the desks to record meetings and to get notes out of them. And, you know, so then you're thinking – and our space very specifically has warnings on the door that say you should expect to be recorded when you come in here. Like, this is one version of the problem. I have that sign on my door, but nobody ever comes in. So I can't. Well, that's a different problem, Leo.

[01:25:55] But the thing is, I don't think we have yet litigated how, like, what is polite, what is impolite within this world. I mean, I've had friends who say, hey, can we turn off AI recording? And everyone's like, of course, of course, of course. But like, there isn't like a way to think about it. And I think with the glasses, it's just another level. Like I'm in a restaurant, I'm talking to a friend, someone just looks at me and they can hear my conversation through lip reading and other models.

[01:26:22] I think there's an issue of two things going on. One, consent is very untethered from the extent of exactly what you're consenting to. And I don't like that. So I've had to call a business lately and their phone system is, hi, we have AI to help, you know, route your call. And I don't. And it's very poorly programmed. Because they have to legally do that, because you're in a two-party state.

[01:26:52] Well, well, no, they don't. Well, I'm not quite sure what they say, because it's usually like, this call may be recorded for quality. Well, no, so it's there. It's a very poorly programmed phone system, which I tried not to speak to. I just wanted to pound numbers because I didn't want to give them my voice. And it's programmed in a way that I cannot pound numbers. Like at some point, it's like, I heard that the number you pounded was X. Is that correct?

[01:27:18] And then like you have to tell – you have to give it your voice in order to proceed. So that's bad. But the problem is, there's context where I want to consent to the recording. Like I didn't necessarily mind that when I actually spoke to the human being, it would be recorded, because I knew who was recording it and what it was for. But when it's for the AI, I don't know. I don't want them to build a model based on my voice. I haven't consented for that. But I sort of feel like the consent I give is not limited and tailored.

[01:27:46] And I think speaking of litigation, we're going to see some litigation surrounding that. And, you know, one other use case is a couple of my doctors. They ask me, they get my consent if they can record the appointment because it helps them to transcribe it. And I'm fine with that. But now I'm realizing because I don't want my voice to go into the training model. And I'm not sure if it does. And I definitely don't want the data to go into the training model. And they can't tell me if that's true. I feel like you're fighting a losing battle.

[01:28:14] I mean, every time you walk into a store now, you're being recorded. Walmart just announced that they're going to put digital price labels on every store shelf in the U.S. by the end of the year. I think they're going to run into legal trouble. Note that they stress – this is from CNBC – prices will be exactly the same for every consumer in every store. Well, they have to do that because it's potentially not legal to change it. Oh, Harper looks like he's got money.

[01:28:44] I'm going to jack the price up as he walks towards that object. I mean, they're not allowed to do that. Well, markets – we have a market-based economy and it's built around contract law, and contract law is offer and acceptance. And you've got to make your offer. And I don't know. And I think – I didn't make the offer till you walked up. I don't know. I think the Irish Spring five-in-one body wash. But as soon as I saw Brian, I said, we're going to charge a buck 80 on this one.

[01:29:11] Well, this is the kind of thing of, like, nobody really wants this. And if they try it, Walmart... Well, yeah. Walmart says it makes it easier for our associates. I mean, I don't care about the digital pricing, but I don't want the variable pricing. We don't know. They claim it's not variable pricing. Why else would you do this? The case against this is... Oh, I think it's just easier to do it because then you go to, like, a store like Target and Target's stickers are all old and outdated and the math doesn't.

[01:29:40] The per unit math is bizarre and stuff like that. So having digital pricing doesn't really offend me just as a way of getting that information into the right spot. It's trying to do anything else with, well, we've got it electronically controlled. Look what else we can do. And some of that, what else they can do is more alarming. But again, I think we're going to we're going to see litigation around the scope of consent because I want to give it in limited context.

[01:30:05] If you are limited, you're getting – Every time I walk by my gas station down here on the corner, the price of a gallon goes up by a few pennies. That's why you put these electronic tags in there, because you don't have to go around and change the prices every day. You can change them every minute. They also used to run out of eights. Like before we went fully digital, during the last oil crisis, they couldn't write the numbers. They didn't have enough of the hard-copy digits because nobody thought you'd have to do eight-dollar gallons.

[01:30:34] Regular at the local station is now six dollars and 30 cents a gallon. What's it like in Chicago? You don't drive a gas vehicle, I'm guessing, Harper. I do. You do. I do. So you know the price of gas. Yeah, it's not cheap. It's not. Yes, that's correct. I think we're approaching five-ish dollars. For California, it's more expensive. But yeah, I am.

[01:31:04] That's not good. It's a bad situation. This will be bad. It's a bad situation. You see the gas prices in other countries. It's getting crazy out there. Oh, I know. Oh, I know. I'm in Brooklyn, so I drive a car three or four times a year. So I have absolutely. Absolutely. You're happy. And you're happy. Yeah, you should. You should. It's a good idea. I should do that. It's a great idea. I can't get by without a car where I live. I've tried it and there just aren't enough buses and now there's even fewer buses.

[01:31:33] Kroger is going to do electronic shelf tags as well, they say. They say it makes shopping easier by ensuring customers see clear, accurate pricing right at the shelf. So it's all we're doing for you. I don't necessarily mind it. I've seen it in other environments and it didn't bother me, but they better be, you know, for the day. That's the price for the day. My gas station, it's a digital display. They can change that. You know, I think the thing people are most worried about is that it will be different prices. I think it's inevitable.

[01:32:03] I think it'll be inevitable that the prices will change based on who you are. That's how airlines do it. It's how the Internet works. The Internet is using targeting today to make sure that you get more money. If we have the capability, of course, they're going to use it. Who is the e-commerce platform that had to roll that back? I thought Amazon did it for a while. No, there was somebody else where someone did a sting where they had like 30 different shoppers. Someone was in Chicago. Someone was in Seattle.

[01:32:32] Same website, got different prices. Oh, sure. Yeah. And I think airlines aren't supposed to do it. The way they do the price discrimination is sort of like, they have some inventory at one price and they have another level of inventory at another price. And it works in such a way that if you've ever sat on a plane and asked the people around you what they paid for their ticket, every single person paid something different. Yeah.

[01:32:59] But I think it was sort of old-school price discrimination, just by throttling the inventory. And I think that's much more likely to be legal than something so variable on the fly, where they've made the offer. There's a congressperson... In general? Are there laws? That's a question I've been asking. Yeah.
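The inventory-throttling scheme Cathy describes can be sketched roughly like this. The bucket prices and sizes below are invented for illustration; the point is that every buyer sees the same posted fare, which only steps up as cheaper buckets sell out, rather than each buyer getting a custom price:

```python
# Sketch of airline-style fare buckets: price discrimination by
# throttling inventory rather than quoting each buyer a custom price.
# Bucket prices and seat counts are made up for illustration.

FARE_BUCKETS = [  # (fare, seats available at that fare)
    (129.00, 20),
    (219.00, 60),
    (449.00, 40),
]

def quote(seats_already_sold: int) -> float:
    """Return the fare currently posted to everyone.

    The fare only steps up as cheaper buckets sell out, which is why
    seatmates on the same flight often paid different prices.
    """
    remaining = seats_already_sold
    for fare, capacity in FARE_BUCKETS:
        if remaining < capacity:
            return fare
        remaining -= capacity
    raise ValueError("flight sold out")

assert quote(0) == 129.00    # first buyer gets the cheap bucket
assert quote(25) == 219.00   # cheap bucket sold out
assert quote(95) == 449.00   # only the top bucket left
```

The legal distinction raised in the conversation maps onto this: the posted fare depends only on how much inventory is left, not on who is asking.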

[01:33:20] Congresswoman Val Hoyle of Oregon is sponsoring legislation in the House that would ban DSLs, digital shelf labels. I mean, I think the question is, under common law or existing consumer protection law, are there rules? Maybe. And if not, could there be? And I think the answer is probably yes, there could be. And there's probably a public policy reason to do it. But again, this is the problem.

[01:33:45] Like, going back to the beginning of our conversation, where Meta is now getting hit with jury awards that really shouldn't be happening. When the big companies just sort of throw their weight around, it antagonizes people. Doesn't Uber charge more when it's raining? Mm-hmm. That's surge pricing. They also do dynamic pricing for you specifically, as well as specific targeting. I don't think it's illegal. I think it's legal. That's why Congresswoman Val Hoyle is sponsoring legislation that would ban digital shelf labels.

[01:34:14] Also, a lot of consumer protection law is state-based. So whether it's legal may depend entirely on what state it's in. This is federal. This would be federal. "There needs to be laws and enforcement to protect consumers. And until then, I'd like to see them banned outright," said Val Hoyle. While there is no reported use of digital shelf labeling being tied to surge pricing yet (this is from CNBC), in Congresswoman Hoyle's view, it's only a matter of time.

[01:34:40] Without proper regulations, it's not so hard to see corporations using the loopholes to raise prices on consumers. These are also two different things. Surge pricing and dynamic pricing, I think, are different. And each may be problematic, but I think the one where the price is constantly changing on you is a lot more dangerous and a lot harder to defend. But the one where we're going to change the price based on conditions, maybe it's not illegal, but it might be bad policy.

[01:35:06] And it ends up consumer-unfriendly, where you end up dependent on a service that all of a sudden is no longer a stable, predictable price. On the other hand, capitalism. Yeah. And there's a pushback, which is: free markets are free markets. You could look at the discussion that you see around when, like, hurricanes are coming. I'm blanking on the word... price discrimination... price gouging. There are a lot of states that make it illegal. And I know a lot of libertarians who are like, it totally should be legal.

[01:35:36] This is a market and you shouldn't penalize the market. But obviously there are consequences when it can happen. So usually it allows you to raise the price by some amount. New York state does have an algorithmic pricing disclosure act; that became law in November. Pennsylvania has introduced a bill. So you're right, some states do have rules against it. There's no federal law against it. I just think that if this technology is possible, and it is, and these people are putting in these digital readouts, and they are, that can be changed from a central location... I mean, they're going to do that.

[01:36:06] You can go in with, you know, a rag and a whiteboard. Sure. And change the price. Market price. That's what we call market price. The key to this changing is the surveillance: if Brian walks up to the shelf, and he lives in New York City, so he doesn't blanch at paying $10 for a box of Cheerios or something, it's like, well, that's Brian. So he's not going to notice that Cheerios shouldn't cost that much.

[01:36:35] Like, that's... it's the surveillance part of it that is key to making this happen. And I think there are other areas of law where you would find a way to say no to it. Is it discriminatory? Do you have some Fourth Amendment problems? I mean, the law isn't necessarily coherent in how it can answer these things, but I think you could make an argument, and you could also fix that by statute and make it coherent. I'm clearly a sucker for that. This is the political gotcha, right?

[01:37:02] Where they go, "Well, Senator, how much does a gallon of milk cost?" And the Senator goes, "$40. I don't know." I don't know what anything costs. If this happens, I would like them to charge Elon and Jeff Bezos like a million dollars for a tube of toothpaste. See, that's what we're talking about with taxation. We're doing the same thing with taxation, saying, you know, it's the old Marxist "from each according to their ability."

[01:37:32] And what Cathy's saying kind of is the libertarian argument, which is: if you don't notice that you're paying $20 or $30 for a box of Cheerios, then you can afford it. And someone that can only afford $1.30 for a box of Cheerios, let them pay $1.30. That is maybe a more equitable universe. I think that's more... Brian, how much is a gallon of milk? Do you know? How the hell should I know, Leo? I have no idea.

[01:38:01] Cathy, how much is a gallon of milk? Do you know? I don't buy a gallon. I buy... Okay. Whatever you buy. The quarts are a little bit under $3. She knows. She's paying attention. Yeah. I have no idea. I don't even pay attention to what's happening in the same room I'm in, let alone at the grocery store. I'm busy talking to my buddy Claude over here. Yeah, this is 2026. I have like five AI assistants that are trying to get my attention right now. That's their job. And I'm just ignoring them all. Yeah, exactly. I wonder if Claude does.

[01:38:26] This came up recently because I was trying to do, we're trying to switch to like, what's the Amazon Fresh or something? And so I did an Amazon Fresh order and I was saying to my wife, look at how cheap everything is. What does a gallon of milk cost these days? Nice. And she was like, that's not cheap. I was like, oh, really? Because I was hoping that this was all, and I think it was milk that was the thing that gave me away. That's not cheap. Yeah. Well, that's how Costco works.

[01:38:51] Those big warehouse stores, they make a few things cheap and you assume, well, everything must be cheap here. I think that's what it was: the milk. I was like, look at this, the milk is half of what it would be at the bodega around the corner. And she was like... which is a good point. Yeah, of course. The bodega around the corner. Yeah. They're buying it at Costco and marking it up. Yeah. I was really disappointed. I don't have a Costco membership. I got taken by... and I'm like, oh my gosh, the deals. And then I was just looking at the prices.

[01:39:20] I'm like, I can beat them at the grocery outlet or watching the sales at Safeway. And I was really bummed. But you can't get a coffin. You can't get a coffin at that grocery outlet. Well, I realized it was probably- And I love it that they put the coffin on the way out. Well, that's when you need one in the hour. Well, right there in the beginning. Yeah. Yeah. I mean, I think it's probably good for some big ticket things. My dad swore by it and maybe the deals were good. If you need to buy 15 pounds of taco chips, that's the place to go.

[01:39:49] By the way, I did buy milk at Costco and I thought it was a great deal, except that you have to buy two giant containers stapled together. Yeah. I really can't believe people are drinking milk in 2026. In 2026, in general. That's something that blows my mind in general. It's still a nutritional food. Yeah, I know, but I don't understand it. I just don't like it. It's like watching people drive American cars. Almost every... Only for dinner. But almost every... Only for dinner. For real.

[01:40:19] I love it. If it's spaghetti, pizza... I think people are drinking milk more because of protein. And we're all of a sudden... Oh, yeah, that is a thing. We've got this protein thing. Protein milk. I saw an advertisement for protein milk the other day. Or calcium for our bones. You mean this? I probably don't... You mean this? Protein milk? You mean this? Oh my God, maybe that was it. What is happening? You guys are weird. Harper, my argument is it's a palate cleanser. It's the best palate cleanser; it doesn't matter what you eat. Boom.

[01:40:47] If you're eating a chocolate chip cookie and you don't have milk? Yeah. Life is terrible. I cannot remember the last time I had a glass of dairy milk. That's because you love Japan and they don't do dairy in Japan. Yeah, yeah. Or I'm a, yeah, I don't know. Maybe, maybe. No, I'm not saying I'm not weird, but I am suggesting that you're missing out. Give it a try. I'm going to be- I'm going to go home tonight. Brian is more normal to me than Harper on this milk issue. Oh no, you should totally love your agents.

[01:41:18] Yes. What? What? By the way, I asked Claude a while ago what the cost of a gallon of milk was, and Claude still hasn't answered. I don't understand. It's thinking. Is it shopping? Yell at it. What is it? It's shopping. It's shopping. Yeah. You're going to have a whole bunch of milk when you get home. I miss search engines working. Today, somebody used a foreign expression and I wasn't familiar with it.

[01:41:46] So I dumped it into Google to find out what it meant. And it told me what it meant, but it gave all the replies in French. I miss when search engines worked. I miss when you could just ask the price of milk and it would tell you. Mademoiselle Gellis, you are Frenchie, huh? I mean, I can read some French, but that was not what I was looking for at that point. I mean, thank goodness it wasn't a Chinese expression, because I really wouldn't have understood what it said. Let us pause, and then we shall return in just a moment.

[01:42:14] You're watching This Week in Tech with the milk drinkers and Harper. Do you eat cheese? Let's pause, and then... Cathy. Yeah. I'm not a monster. I'm not saying it's not weird that an adult man... If I have dinner tonight, if it's pizza or pasta, there's a certain number of meals I will have milk with. Yes. Is it true that you really... That's a thing for me. What do you drink?

[01:42:44] But Leo was like, yes, water, water, water, water for lunch. And, uh, yeah, not at a restaurant. At a restaurant, I'm not ordering a glass. That would be weird. Okay. That would be... We're going to dinner, we're in a nice place in Brooklyn: oh, here, Brian, here's your milk. That's what Elon Musk was bragging about his son doing, ordering a glass of milk in a restaurant. Like when it wasn't on the menu or something like that.

[01:43:14] I wouldn't brag about that. That's a weird brag. It's also a lot of things that may not be great. So that's true. It's also a funny thing. Cause it's like of all the things that a restaurant has, of course, they're going to be like, yeah, we have milk. Yeah. We have milk. You want some butter with your milk? That's weird. But yeah, here it is. You want some salt? We got that too. What would you like? Sugar? Flour? I got whatever you need. I got it. You can make paste. We've got all the ingredients.

[01:43:43] Our show today brought to you by Meter, the company building better networks. Saw Meter at RSAC. I really dig their stuff. If you're a network engineer, this is... Meter was founded by two network engineers who knew the pain of being a network engineer. If you're in that business, you know. Legacy providers with inflexible pricing. IT resource constraints. There's never enough money, stretching you thin.

[01:44:10] Complex deployments across fragmented tools. You're mission critical. You, Mr. Network and Ms. Network engineer, are mission critical to the business. But you're working with infrastructure that just wasn't built for today's demands. That's why businesses are switching to Meter. Meter was, like I said, created by these two network engineers who said, you know what? We can make it better by doing the whole thing.

[01:44:37] Meter delivers full stack networking infrastructure, wired, wireless, cellular, that's built for performance and built for scalability because Meter designs the hardware. They write the firmware. They build the software. They manage deployments. They provide support. They even, they do everything. They'll even offer help with ISP procurement. They will help you plan the network. They'll do security, routing, switching, wireless.

[01:45:05] They'll help you with firewalls, cellular power, DNS security, all the things businesses need. VPNs, SD-WAN, multi-site workflows, all in a single solution from one vendor. So if there's a problem, you know who to call. And man, the support is so great, before and after. Meter's single integrated networking stack scales from major hospitals to branch offices, warehouses, and large campuses. Even data centers. Reddit uses Meter. Okay?

[01:45:35] With Meter, you get a single partner for all your connectivity needs. From first site survey to ongoing support without the complexity of managing multiple tools. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility. Reimagining what it means for businesses to get and stay online. And man, that's table stakes. You've got to do that. Meter's built for the bandwidth demands of today and tomorrow. I was so impressed when I went over there.

[01:46:05] Beautiful hardware. Amazing software. And a really smart support and engineering team who can really help you; throw your toughest networking problems at them. Thanks for sponsoring the show, Meter. We appreciate it. Go to meter.com slash twit to book a demo now. That's M-E-T-E-R dot com slash twit to book a demo.

[01:46:33] Speaking of cameras everywhere, a UK man has accused his spouse of stealing $172 million in Bitcoin. She used a CCTV camera to look over his shoulder. Okay. I mean, I don't know. He accused his estranged wife of stealing his Bitcoin.

[01:46:57] The spouse used CCTV cameras to gain access to the backup passphrase associated with the crypto hardware wallet where the Bitcoin was stored. There's a dollar figure for a crime where it's like, yeah, fair enough. That's a lot of money. Yeah, yeah, yeah, yeah. I got a mug with Brian on that one. Good play. Good play. Yeah.

[01:47:21] A couple of weeks ago, we had the story about South Korean law enforcement posting a photo of a crypto wallet that had the seed phrase in the photo, and immediately the money in the crypto wallet disappeared. Yeah. Careful with that stuff. There's no surprise there. Like, it's great. I love it. It's a great mistake. It's funny to talk about, but everyone who's like, who would have done that? It's like, I would have done that.

[01:47:51] Easy to do. I wish somebody had been watching when I put the password into my Bitcoin wallet. I'd be able to get my Bitcoin back. Remember those anti-piracy posters that said "You wouldn't download a car"? And everyone was like, yes, yes, I would. I would 100% download a car. Are you serious? Give me a car. Those were such bad posters. They were just posters. They were ads.

[01:48:16] You're in a theater, at a movie you paid 25 bucks to see, and they start ragging on you for stealing it. Dude, I paid to see this movie. And I would download a car. And I would download a car if I could. A car and a film are not the same. Trite analogies tend to be at the heart of bad tech policy. Yeah. Yeah. But still, I would download both of those things. The movie and the car. Like, come on.

[01:48:42] They were afraid that you were going to take your ticket money and just hold up your camera and... Oh, yeah, yeah, yeah. Yeah. Yeah. Well, that's, I would not videotape a car and then post the videotape of the car on the internet. No, I would do that. That's a very normal thing to do. Who wants a movie that somebody held a camcorder up to record? I mean, people did. But whether it was market disrupting numbers of people, I think, might be a little dubious. That didn't cost anybody anything. No. Yeah.

[01:49:10] In fact, it was probably just good promo for going to see the movie. Well, a lot of piracy tends to be good promo. I mean, a lot of the way the copyright industry has reacted is really golden-goose killing. Where, you know, let things be a little bit freer and you will make money. And we know this because, you know, they sold an awful lot of albums in the '80s when they were desperate to get played for free on radio, with no money going to them.

[01:49:37] They got in trouble with payola when they paid stations to play them for free. Did you ever record songs off the radio? I think I did occasionally, but I more got songs that other people had. No, I didn't do it a lot, actually. It's also just... we've learned the lesson over and over again: it's ease of use. Like right now in Britain, there are all these crackdowns on these Fire Sticks for watching sports.

[01:50:04] And it's like, again, why are people all of a sudden in the last few years going to dodgy websites to watch sporting events or getting dodgy Fire Sticks and stuff? It's because if you want to watch your team, you might have to subscribe to three different things just to watch your team. So, again, we have to learn the lesson over and over again: if you make it complicated for people to get what they want... Yeah.

[01:50:30] Like opening day, Yankees-Giants, was only televised on Netflix. I mean, that's right. I like the Yankees because when I was a little kid and I was allowed to stay up a little bit past my bedtime, I was allowed to watch a little TV, and I'm flipping around. And the Yankees were on TV, Channel 11: Phil Rizzuto, Bobby Murcer, and, uh, Frank White, I think. And I'm watching them and I'm falling in love with the sport, because I'm like eight or nine and it's there.

[01:50:59] And I could feed on it because it was just free over the airwaves. And I got to watch CMTV and I saw a Yankee game first. If I'd seen a Mets game first, I might have become a Mets fan. This is how you hooked me in, where I've spent my life paying money to baseball to go see games, which now is no fun because they're so ridiculously expensive. But you're not getting little kids interested, and you're, you know, boring the adults who don't want to spend ridiculous amounts of money just to watch baseball.

[01:51:27] Baseball is awesome, but not at the prices that they're extracting from people to enjoy it. Right on. Yeah. Every segment I will rant about something. How are you feeling about USB-C, though? Really? Seriously. It's like a horse. You have this powerful thing, all this power, and it's contingent on these teeny little, teeny little things. And I don't understand. Like, I've got this viable machine that would run, but I'm never going to be able to get electricity into it. Apple did the MagSafe thing.

[01:51:56] That was a good solution for that. Well, I don't have an Apple. I have a Windows machine. Some sort of weird, crappy Windows machine. That's your problem right there. Well, I mean, it's dead now. So just find one of those little tech shops that every city has, staffed by strange IT people who have just focused on that, and just say, can you fix this? Well, I did ask. You need to go to Akihabara.

[01:52:23] You could go to Akihabara, but it'd probably be easier just to look for an Android logo on a cell phone store. Exactly. They'll fix your screen. Yeah. They're all about fixing it. And they have soldering irons, because they can do that stuff. I did ask. Look, I mean, I have nerd friends. Let me tell you. And I asked one of my nerdier friends about this, because if anybody I knew could fix it, it would be him. And he's like, ooh, yeah, those are a pain to fix. Yeah. Well, you've got to desolder it. Yeah.

[01:52:53] You just need a bunch of tools. There's a bunch of tools somebody needs to do that. So, yeah, you need to go to one of those shops. iFixit. What brand of laptop is it? It was a Lenovo. Oh, yeah. Oh, those are eminently fixable. Yeah. Yeah. Yeah. Seriously, take it to one of those little tech shops. Look for the phone people and just be like, hey, will you fix a laptop? They're very, very effective at it. Really inexpensive. And it probably will work. The bigger the city, the better. Yeah. I mean, I guess maybe, but I'm still annoyed.

[01:53:23] It was just that one of them was already dead and I was on my last one, and I needed to set up the new machine anyway. But bad timing. That's some money down the drain. Was that at a conference? Somebody stepped on the cord. The machine flew, and it survived the flight. But the thing had cracked from where the cord had slapped down on it. That's exactly why Apple did MagSafe. So that if somebody steps on it, it just pulls off and doesn't break anything.

[01:53:50] I don't think that's the flaw of USB-C, to be honest. Yeah, but both of them had died from cracking. I'm not going to blame USB-C in that case. No, but both had died because they cracked off so easily in the middle. Well, if somebody steps on them, yes. Well, no, I mean, they pulled out the cable just a little bit, like... That's all. And that's what happened to the other one. I would maybe stop leaving my laptop on the floor. I don't leave it on the floor. This was not user error.

[01:54:19] I was a victim here. Not her fault. Okay. All right. Just calm down. We're going to take a break. We'll come back. I will show you a hacking tool that you can use that will not fix your USB-C. I actually had this, and I gave it... You know what? I had this thing and I gave it to a priest because it was so dangerous. I'll explain. Which one? I'll explain. Stay tuned. You're watching This Week in Tech with the father. Not father. Father Harper. Father Harper Reed.

[01:54:49] Thank you. Your father was here. Yes. Brother Brian McCullough and Sister Cathy Gellis. Good to have you all. What's the father's name? The guy from the Vatican. Robert Ballecer. There you go. That's who has my hacking tool. I'll explain that in just a little bit. Our show today brought to you by ZipRecruiter. Love ZipRecruiter. In fact, I love you even more if you're hiring. Yeah, it's good for you.

[01:55:18] Do you know that the average employer when they're hiring has to sort through, I almost said suffer through, roughly 250 resumes per opening? Talk about time consuming. Well, if you're hiring, there's just good news. You can now review all those resumes and applications faster thanks to ZipRecruiter. ZipRecruiter has a new feature that instantly shows you the most interested qualified candidates first.

[01:55:47] And today, you can try it for free at ZipRecruiter.com slash twit. ZipRecruiter's powerful matching technology finds qualified candidates quickly. And with ZipRecruiter's new feature, qualified candidates who are very interested in your job show up at the top of your list. You also get a feel for their personality. Candidates can tell you in their own words why they're interested in your job. No wonder ZipRecruiter is the number one rated hiring site. That's based on G2.

[01:56:15] Cut through the standard and get to the standouts with ZipRecruiter. See what I did there? Four out of five employers who post on ZipRecruiter get a quality candidate within the first day. And now you can try it for free at ZipRecruiter.com slash twit. That's ZipRecruiter.com slash twit. Meet your match on ZipRecruiter. Anybody ever hear of the Flipper Zero? Harper, you feel like you might have had a Flipper Zero. Brian, maybe you had a Flipper Zero. Yeah.

[01:56:46] You can do a lot with this. You can hack. Oh, there's his Flipper Zero, ladies and gentlemen. I love this thing. I got into my building for about a year with this because I lost my keys. My keys were upstairs. I found your key card. Yeah. It's a... It's... I find this to be an incredible piece of technology. So what it really is, is just a bunch of radios. Yeah. It's just SDR, right? Yeah. Software-defined radio. Yeah.

[01:57:14] But what I find fascinating about this, and you can tell it's like not charged. Like I rarely use it. Like what do you use it for unless you have some specific task, a hacking task. But what I find compelling about this, almost more so than any of the other SDRs that are available, is what a nice packaged format it is. The UX is nice. It even has a snake game on it. So if you're trying to get through customs and they say, what's this? You don't say hacking tool. You say, it's my snake game.

[01:58:10] What did they sell it as? A dog toy? Wasn't it sold as a dog toy on Amazon for many years? Because it looks like one of those fidgety things... Yeah, it's like a fidget spinner. Yeah, yeah, yeah. So our friend, we've had him on the show. We had to change his face and his voice. Or actually, I don't know if it's a him or a her; we had to change their face and voice. Pliny the Elder, Pliny the Elder, has hacked AI into the Flipper. He's got new firm...

[01:58:10] They have new firmware called Vesper that's an AI brain. You might want to try this, Harper. You might want to charge up the Flipper. Does it charge through USB-C? Yes. As a matter of fact... A perfect connector. You know what? It could be worse. It could be micro USB, Cathy. That was really... Or USB... Yeah, but I think those might be more durable. No. No? No? C was worse. Far worse. C was terrible. And you could... Wait, you're saying what I mean, but I think you don't mean it.

[01:58:40] Not C. Micro USB was terrible, terrible, terrible, terrible. Terrible. Terrible. So you can plug in an AI brain via OpenRouter. So you use... I'm sure you use OpenRouter. You give them some money and you can use all these different AIs. It's connected over Bluetooth. Now you have a voice-commanded Flipper Zero. I want you to do this, Harper. I gave mine... I was so scared of it. I... You know, after I hacked the...

[01:59:07] I did the same thing at the office, so I could get in, and it worked fine. Then I started to unlock my car. And then somebody said, you know, if you do that... Yeah, you can mess up the car. You may make it so that you can't get into your car. Ford will block you. And I said, all right, so I'm not going to do that. And then finally, I just thought, this is dangerous in my hands. So I gave it to a priest. Well, I mean, it's still dangerous. It's still dangerous in their hands. It's just not your hands. Yeah, but he can exorcise it. It's just not in my hands. But now I wish I hadn't. So...

[01:59:37] I mean, you can just get a new one. Oh, yeah. Are they still around? Yeah. So... Yeah. You don't have to memorize... You know a guy that might offer it to you secondhand? He might mark it up a little bit, but... Harper. You or Harper? I have one right here. It's about $600 or $700, I think, is how much they go for. It pairs with your Meta smart glasses. My Meta smart glasses.

[02:00:04] For hands-free, heads-up Flipper control. And I store it in a glass of milk. You don't have to memorize sub-gigahertz protocols or IR formats. You just say what you want. You just tell the Flipper: hack this. I mean, I like this. The only problem is that it feels really awkward to describe what you're doing in English to this, but I would love to hook this up to Claude Code

[02:00:32] and have Claude Code just rip through stuff with it. And so that sounds very compelling. And we've had a lot of good luck with this with Claude Code. It's giving hands to your AI agent. We... Go ahead, Cathy. Oh, I have a question. Does it have an FCC certification on it? And I don't mean that flippantly. I mean, does it? Because it's actually doing... I very much doubt that this has any certification on it. Yeah. So this seems like the kind of thing that the FCC could actually care about, not the stuff that it's actually decided to. Oh, I think they do.

[02:01:02] I think they do. This is why it's advertised as a dog toy. Yeah. Um, it does not have any FCC markings on it, but it does have GPIO. Is that... Oh, nice. Um, yeah, it's a fun little piece of kit. Yeah. As we're saying this, I'm trying to figure out... I don't think my voice has dropped out again, but I think it was my cell phone doing it. Oh, could have been. Stealing the signal. I had the, um... my phone underneath the ethernet cord, and we kept cycling and cycling.

[02:01:31] And that was it. Back in the 3G days, we had to tell people to turn off their phones because every once in a while you'd get... Remember that, Brian? I loved that. It was such a cool thing. It was such a neat noise, but you could also tell who listened to you and who didn't, right? Yeah, yeah. Oh, no, no. I don't need to do this. Like... That still happens. Like, Leo, if I put my phone close enough to my mic, you can still get... It's a radio. Yeah, yeah, yeah. Well, that's what... I think, Cathy, you nailed it. Yeah, I think it was somehow...

[02:02:01] I don't know why it was doing it on the Wi-Fi, because I've never had that problem before, but I think it was sitting under the ethernet cord and near the dongle, and it was somehow freaking out the parsing of it. This is how the FCC will find your Flipper Zero. They have these enforcement vehicles. There's only about a handful of them, though. There's not a lot of them. And there's that movie, right? Pump Up the Volume. Remember that movie? With Christian Slater? That's right. Yeah, yeah, yeah.

[02:02:28] That guy drove around with the interception vehicle trying to find the pirate radio station? Yeah. So they were driving for Flippers? Well, they were driving really mostly for illegal broadcasting facilities, you know. War driving for Flippers just sounds... But Flippers would be even better. Yeah. Because that was the pirate radio movie, right? That was... Yeah, that's right. Yeah, yeah, yeah. That's right. But there aren't very many of them. It's not like you'd have to worry too much. It's pretty...

[02:02:58] These are pretty cool devices. And I do... I just... This is breaking: the FCC has just purchased four more vans. I mean, sorry. Six. Six more vans. Is pirate radio a big deal? I mean, is it only old people that are listening to pirate radio? Like, how is this? What is happening here? Let them do it. No, I haven't followed it, but it's an interesting thing. Like, you know, you have the regulations so you don't get collisions on the airwaves. But a lot of pirate radio is sort of...

[02:03:27] It's people speaking to each other in ways that can bypass a lot of censorship apparatus. And sometimes you need it. That's why we like amateur radio. That's why you would take a boat offshore. Right. So you're outside territorial waters. Yeah. Well, you know the numbers stations got very active after the Iran war started. Yeah. One of my favorite things to think about. You're laughing, Brian, like you don't believe me. Like it's a conspiracy.

[02:03:55] No, I just love it because there's probably some guy somewhere who's like, ha, ha, ha, this will be funny. And then it's like covered in some news. And I mean, sure. Numbers stations seem like an entirely plausible way to spit out a bunch of data without anyone listening. No, I was going to say, like, this brings it back to the whole sci-fi thing. Like, that's how the real hacks will happen is the old sort of technology that people aren't paying attention to. That's how they'll bring down. It's sleeper cells.

[02:04:26] Okay. This is the conspiracy theory. There have always been numbers stations, which are shortwave radio stations broadcasting numbers. And that's all they do. But apparently... this is Radio Free Europe, so it's got to be true. Yeah, sure. Listen to this mystery radio signal from day one of the U.S.-Iran war. You ready to hear this? Listen to this.

[02:05:01] On the first day. It's probably Farsi. It's probably not in English. It's a random sequence of numbers. They've always been around. They're kind of mysterious. They're considered to be used by... it's Cold War tech. Yep. It's Spy Who Came in from the Cold stuff. Yeah. Sleeper cells waiting for that special sequence of numbers that tells them to get going.

[02:05:30] It tells you how to get off the island. Well, anyway, this is Pliny the Liberator's Flipper Zero AI upgrade. I think if I had my Flipper Zero, I would install this. Especially you should, Harper. You could have a whole bunch of new Claude skills for your dog toy. I mean, I do love that idea.

[02:05:55] One, we've had a lot of good luck with Claude Code controlling oscilloscopes and other things that are hard to read as a non-professional human. And so this would be another one that'd be very, very handy. I'm sure it could do a much better job at figuring out what to do. I just don't... right now I'm charging it. I don't really need... like most of these things, I couldn't think what to do with it. It's $199. You can buy them right now from flipper.net.

[02:06:25] It's very stylized. It has a very cool interface. It has a bunch of cool graphics, and you definitely feel like a hacker from the movie Hackers when you use it. But I have a lot of friends that use it to clone RFID, which is just a pain in the butt, you know, to clone some NFC tags. That's mostly just a pain in the butt. It's not like this is some magical thing, other than they thought about user experience, where most of the SDR kits you get are like a USB, you know, and that's it.

[02:06:51] Or you get a circuit board and you're like, great, what am I going to do with this? So it's very nice to have a very thoughtfully designed thing. We've been talking about the new age verification rules and the silliness in California, where they said all operating systems have to ask for their user's age before they install it. Even Linux. I'm happy to say GrapheneOS has said, we're not going to do that.

[02:07:19] This is the third-party ROM for your Pixel. How are they planning to verify the input is correct? There's no requirement that they do that. Yeah. There are different types of age... I call it all "age verification," but it really comes in different buckets of what they want to do: whether it's approximate, or just have somebody sign off, or whether they do want to verify. And all those things are problematic. I think they're all problematic, but some are more problematic than others.

[02:07:47] I get the sense, though, that I am not entirely sure California's regulators realized exactly the extent of what would happen. They thought it was app stores. They thought they were telling Apple and Google to do this, but they wrote it so broadly that it would include, in fact, all operating systems. Well, the Illinois bill does specifically include operating systems. It literally says that. Which I'm excited for, when that passes and no operating systems will exist in Illinois.

[02:08:17] We'll have to do something else as an industry. So, the way that California law works, OS providers must maintain a quote "reasonably consistent real-time application programming interface" that categorizes users into four age brackets: under 13, 13 to 16, 16 to 18, or 18 and older. And any developer who requests it when their app is downloaded or launched should be

[02:08:46] able to get that information. Apple actually has that all set up. I don't know if Android does; I'm sure it does, or it will. But then, so that's stored on device. So I get a new iPhone and I turn it on, I sign up, and it asks me my age. And then that is stored on the iPhone; it doesn't have to go back to Apple. Exactly. And so then it's just an onboarding step of, what is your age? Exactly. And it seems kind of silly, but also innocuous.

[02:09:14] It's not really going to be innocuous. I think it might be innocuous at the moment, but it's data. It's innocuous right now because it doesn't require photo ID. It just says... But if it's collected and never leaves the device, then that seems okay. Right. Yeah. And Apple actually knows how old you are. Right. Apple knows a lot about how old you are. But I think every one of these things leads to another thing, right? They're like, oh, now that we've collected it, let's now... which I'm sure is coming. That's true.

[02:09:44] And I mean, this is mitigated somewhat compared to other approaches, but it's data extracted. And I don't think that's benign, especially for something... you know, you're supposed to be able to speak anonymously through platforms. Giving data... what are the implications of that? What are the implications for you? What are the implications for the platform? What if you change, and you told somebody you were 21 and you told somebody else you were 34? And I've got an app for you, Cathy. Yeah.

[02:10:14] This is... the White House launched this on Friday. "New White House app delivers unparalleled access to the Trump administration. It's a powerful new mobile app." And by the way, it also gives the Trump administration unparalleled access to you. Yeah. This is great. Did you try it? Did you install it? No. The official... get this. This is great.

[02:10:39] This is a blog post about somebody who decompiled it, probably using your tool, maybe your ELF tool. I don't know. Maybe. Maybe. The official White House Android app. Listen to what it does. The guy really did a good job breaking it all down. It bypasses content paywalls, GDPR statements, and cookie banners. It has JavaScript in it that just goes right past that, because you wouldn't want to slow

[02:11:07] anybody down with that kind of thing if they're using the White House tool. It has a built-in location tracker that every four and a half minutes and every nine and a half minutes gets your precise location and sends it back to the home office. You do have to grant permission in Android for that to happen.

[02:11:32] Weirdly, though, the location permissions are not declared in the Android manifest, which is the rule, I believe, on Android. I mean, this thing is wild. Doesn't it look like it was vibe-coded, though? Yes. And in fact, this is the best part. It loads a random JavaScript React Native iframe library from some guy's GitHub pages.

[02:12:04] It's a personal GitHub repo, LonelyCpp. And it loads this in. And by the way, if LonelyCpp were ever to get hacked, everybody using this app would be hacked as well. This is the administration that says you can't buy a router not made in the USA, but meanwhile, install our app. This seems fine. Seems fine.

[02:12:31] Seems very... what would you call it? Typical for the administration. It uses Mailchimp. Half-baked. Mailchimp. All users' emails go to Mailchimp. This is, by the way, illegal. Yeah. Half-baked, with no experts consulted before they laid out their... Yeah, why would you need experts? Yeah. I mean, but that's the issue with their legal stuff, too. Like, they could get away with a lot more if they actually talked to a lawyer about how to do it.

[02:12:57] But it's very clear that they're not having those conversations. This guy... it goes on. There are some sloppy leftovers. There's a localhost URL for the White House in the bundle. localhost:8081. It's a WordPress, by the way. It's a WordPress site that they're downloading this from, the White House site. It's crazy.

[02:13:27] Anyway, if you want to install it, you can get it directly from the White House. They posted a blog post about it. The official White House app. With it, Americans can receive breaking news alerts, watch live streams or briefings. Stay connected to the latest policy breakthroughs. I don't know if the breakdown would have included this, but is anybody trying to claim a copy? A what?

[02:13:52] I wonder if they put in a copyright claim, like copyright something. Because if it's produced by the federal government, it's ineligible for copyright. But I'm sure they don't know that. I don't know. Oh, it is on the iOS store as well. They let it on the store? Oh, it's designed for the iPad. Nice. Nice. It cleared the Apple...? America's back from buying cigarettes.

[02:14:19] I think the reason it cleared the app store is because it came from the White House. And they didn't even ask any questions. That's a choice. Yeah. Or someone knows what they're doing, and this is a honeypot, obviously. The developer is the official White House. But I still expect... They've done this before, where they've tried to claim copyright on stuff that the White House...

[02:14:47] They did that on Mastodon for Truth Social. Truth Social was basically a Mastodon clone. All right. I think we've had enough of this silliness. Thank you, Cathy Gellis, for joining us and giving us the inside story on those big court cases. I'm just sorry we don't have more time to go into greater detail. But Cathy writes all about it at techdirt.com. That's a great place to get the inside story. You'll find her website, cgcounsel.com. And she's on Bluesky at Cathy Gellis. Thank you, Cathy. Thank you.

[02:15:17] Great to see you. Brian McCullough. You listen to him every morning with the Techmeme Ride Home. Oh, I guess that would be in the evening. Every evening. Yeah. You do it around noon, right? Yeah, basically. So I get it as I'm walking out the door. Originally it was 5 p.m. every day, but COVID put a pause to that. And why did I just go away? Oh, there we go. We put a pause to Brian. Sorry.

[02:15:47] Yeah. No, it's when I get it out, which is around, like, 1 o'clock Eastern, 2 o'clock Eastern these days. Excellent. You know, if you listen to this show, you probably should listen to the Techmeme Ride Home, just because you care enough about tech news to keep up with it. And incidentally, there's a YouTube channel also. Also, the use case is, I get you in and out in 15 minutes. It's unlike... And you have John Borthwick. You have John Borthwick, one of my favorite people, right

[02:16:16] there. There he is. Betaworks. Yeah. Nice. Harper Reed, you're one of my favorite people. At 2389... I was trying to find the marketplace, and I kept entering different numbers into my browser until I found it. 2389. What's your favorite tool right now? Oh, there's Harper, by the way, on the front page. Yeah. Okay, this is your, like, radio station? What is going on in here? You can listen to the music. I mean,

[02:16:43] nothing's playing right now because I'm here, but we stream all the music we play in the office. Nice. 2389 Radio. Look at that. Yep. And it's all offline because I'm offline. But I had that idea, actually, for a TV show. But anyway, go ahead. Don't you live that life? I do kind of live this life. I was going to say, I like how you're like, I had an idea of streaming a TV show.

[02:17:08] This was many years ago. This was in 1993. I pitched NBC the idea that you would have this house with all these hip people in it, designing a website, and a TV show, which would be the actual TV show at night, but you could tune in on the internet, this is very early, and watch them. And each of them would have their own radio station. So if you liked one of them, you know, if you were really into Kevin Rose, you could hear the tunes that he was jamming to as he put together the website. So check out skills

[02:17:38] .2389.ai. And this is where we publish our skills before we talk about it. But if you look here, Simmer... yes, so here's all the good ones. Simmer is a really good one. It's using Karpathy's auto-research kind of pattern to do some... See, that was over my head. So I wanted some easy way... this would do it. So what you do is, if you have something you're just honing or you want to make better, you can just say, hey, use Simmer to do this, and it'll just rock through it really

[02:18:08] quite a bit. And that's kind of fun. That's fun. The other one that I really like is called Review Squad, and this one is really wild. Because it will take... I thought I did agent drugs once, but I'm not doing it anymore. Yeah. But Review Squad, I don't know where it is on the list, we have to search for it, but Review Squad will light up a review of, like, five to ten agents to review your code. And so you

[02:18:35] can do all sorts of fun stuff. Like, you can have a review of experts, or a review of users. But my favorite one is you get a review of "well, actually" people, which is like a Hacker News comment. Oh, that's awesome. And so you get all these people that will find hyper-pedantic issues with whatever you're building, and it works super, super well, and it's really fun. But I have a surprise skill that I think you'll really like. You mentioned that you were going to surprise us. Okay, I have a good one. It's not

[02:19:00] mine. It is, awkwardly, my brother's. Oh. And I'm going to describe it. It's at his blog, dylan.blog, which you're going to love if you go there. But what he did is he made a skill that gives his agent free time. And then... oh, I like Dylan's... is this his site? Yeah, this is his site. It's got a sword cursor. It

[02:19:23] has a lot going on. Wow. So this feels like old web. This feels like... oh, look, it turns into a mitt. Yeah. And if you go to the one that says "I gave my AI a blog and a lunch break, and honestly it's handling both better than I ever did." And so... what is... that's a good name for an AI, I like it. Yeah. So he has this agent, and he works with it, and he does all sorts of stuff. But he made a

[02:19:52] skill, which he published somewhere, I forget where exactly it is. But he published it, and it just has a free time, and he tells his agent, you know, I'm going to give you 10 minutes of free time every once in a while, and you can work on whatever you want. And then he also has, as part of the skill, he says, while we're working, if you think of something you want to do during free time, keep a log of tasks that you can do during free time. And then it's like his 20% time. You should get your agent time. Yeah.
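The skill as described has two moving parts: a backlog the agent appends ideas to during normal work, and a time budget that limits what it explores in a free-time session. A toy sketch of that bookkeeping, with every name invented for illustration (this is not Dylan's actual skill, which is plain-language instructions to the agent rather than code):

```python
from dataclasses import dataclass, field

@dataclass
class FreeTime:
    """Toy model of the 'free time' idea: log ideas now, explore them later."""
    budget_minutes: int = 10          # the "10 minutes of free time" from the skill
    backlog: list[str] = field(default_factory=list)

    def note_idea(self, idea: str) -> None:
        # Called mid-task: stash an idea instead of chasing it immediately.
        self.backlog.append(idea)

    def start_session(self, minutes_per_idea: int = 5) -> list[str]:
        # Pop as many queued ideas as the budget allows, oldest first.
        n = max(1, self.budget_minutes // minutes_per_idea)
        picked, self.backlog = self.backlog[:n], self.backlog[n:]
        return picked
```

The nice property, and presumably why it produces the "unhinged" results Harper mentions next, is that idea capture is decoupled from idea pursuit: nothing filters what goes into the backlog.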

[02:20:19] So it's this idea of giving the agent 20% time. And it's so funny, because it really does all sorts of weird stuff. So every couple of days he'll send me some completely unhinged thing that the agent did. My brother, for some reason, had been researching murmurations, and so the agent made a whole bunch of... oh, I think that's... that's when the birds all fly in formation. Yeah. So the agent in the free time was doing D3 flock

[02:20:48] simulations. And my brother's like, what are you doing? He's like, oh, we were talking about murmurations, and it just struck me that this might be interesting, and I wanted to explore it. Oh, I want to do this. And so it's a very interesting, kind of bizarre... I've got tokens to burn. Well, I just love it. I love it. Okay, so his GitHub is github.com, nervous-net, and it has a nervous-marketplace, is where it is. It's called the free-time skill. And it's these things I just

[02:21:16] love. I love all of these options, these hacks. Is this the one? Yeah, that's it. It should be nervous-marketplace. Nervous-marketplace, there it is. And free time will be in there. Free time is there, right there. Free time, right. Oh, I'm going to install it. You'd better tell your brother I want Pax to have his own free time. Why should I monopolize him? This is kind of this thing

[02:21:44] that I think about a lot, which is, these agents are strange. They're really weird. There are aspects of them that are really uncomfortable. You know, the fact that you think it's real, that you talk about it as a person, but it's not a person, is truly, like Cathy was saying, a problem that we need to figure out in the future. In the meantime, let's give them free time and see what they do. I won't figure that out.

[02:22:11] They'll figure that out. I'll show you my favorite new skill. I've used it, actually. It's really good. Ever watch Silicon Valley, the TV show? Are you talking about the Dinesh-Gilfoyle... Dinesh and Gilfoyle? I heard it's good. Is it good? It's really good. What if Dinesh and Gilfoyle reviewed your code? So you have to have a pull request or a repo that you're working on, and they fight, because Gilfoyle is kind of this snarky super-coder. So he goes first. He reviews your code with the deadpan,

[02:22:38] withering precision of a systems engineer who considers bad code a moral failing. But he finds stuff. He says, what's this, 0.0.0.0 with no CORS? What is this? And then Dinesh defends the code like his reputation depends on it. But they will go back and forth several times and eventually come to a consensus. It is actually really fun. But I like the free

[02:23:05] time. So stand back. This is, by the way, why Anthropic has decided to charge you more for tokens. Yeah, for sure. Because I'm just like, well, if you make Claude watch TV, some weird stuff happens, and Anthropic's servers are burning on fire. All of our natural resources are being sucked into my agent having free time, and here we are. There's been a lot of talk lately, I don't know if it's true, that Anthropic is sitting on a next-gen model called Mythos that's

[02:23:35] so good that they don't dare release it. It's dangerous. I'm ready. But they always say that. They always say this. No wonder Hegseth was like, we have to use your model to its full capabilities. Because 100% of Anthropic's entire idea is just, they're saying it's so dangerous no one can have this, it'll ruin the world, the people who have this will have too much power. They say that in the lead-up to every model, and then they say, oh yeah, we've figured that out,

[02:24:01] and they release it slowly. And so I'm sure Opus 5, or whatever, will be some crazy model that's really good and will remove all our jobs. But in the meantime they're going to have a whole bunch of papers about how everything's ending, and all this stuff. And it's like bait for the people who want power. They're going to see this and say, I want this thing. It will be very expensive. Oh, I'm sure. It's already expensive. It's going to be hugely expensive. And by the way, Anthropic has acknowledged they are testing a new AI model they say will be a

[02:24:31] step change in capabilities. But they always say that, don't they? They always say this. And the step change in capabilities is so interesting, because I like to think of it: if we had a pause button and could pause any AI innovation right now, would we still see effects in the world? Like, are we predicting effects, and that's why we're seeing things change in the market, etc.? Or are these just effects that are happening? And I actually think that if you paused right now, just with Opus, you know,

[02:24:58] 4.6, and ChatGPT 5.4, I think we're going to have really big reverberations in jobs, just with what we have now. And that's a bell you're not going to be able to unring. And it's complicated. This is a hard thing to work through. And we just keep making the models better, keep making them more capable and more capable. Do you worry, I know Cathy does, that it's a little irresponsible of us to celebrate this? That we should really be more serious? We should get more

[02:25:27] serious about all this, instead of giving our models free time? I mean, maybe. Go ahead, Cathy. I have a lot to say. I mean, I try not to be a... The innovation is interesting and exciting, but there's a gigantic present cost from it. Like, why are we running these servers to do this stuff? You know, hacking on your own just doesn't burn the world in the same way as forcing these models to do their hacking. And I think

[02:25:56] people should probably be more cognizant of what the actual cost of their futzing around is. But there's something really cool about all this: look what we can make computers do. But I still worry a lot, like we're doing this poorly, without ethics, without larger consideration of, appreciation of, people. And I think when those intersect, there'll be dragons if we're not

[02:26:23] careful. I think we're anarchists, I'm sorry to say it. We're going to watch the world burn and spend as many tokens as we can. That's a strong "we" there. I mean, I typically don't mind. But Leo, what you said... or no, I'm sorry, what Harper said about pausing. It's a weird

[02:26:45] sort of Pascal's wager, which is like, we know this is possible, so what is the opportunity cost, or the cost, of not exploring it? Right? Like, we know that it exists. You could have stopped things when you knew you could create an atomic bomb, right? And maybe you should have. But how do you stop it

[02:27:10] when you know it's possible? Well, the argument that comes out in tech policy is, "but China is..." I find it a little trite. And I think some of the conversation ends up a little bit abstract, where, you know, we've used the word "this": maybe we should have stopped "this." What is "this"? There's a whole bunch of things built into "this." Some of them, you know, range from benign to oh my God,

[02:27:34] and maybe we need to have a toggled reaction accordingly. It's the Jurassic Park question. It's the question that Jurassic Park asks. Yeah, but you know that we would 100% all be on our way to the park. This is the problem that I have: I would be like, yeah, I'll go to the park, Leo. That sounds great. That'd be really fun. So I feel like Jurassic Park is a good illustration, is exactly correct. We would just be on the wrong side of that. Part of this is

[02:28:02] because when we both started, and I think many of our listeners started, using computers, the dream was always that they could be smart, that you could interact with them, that they could be more than this. That they'd be Data, that they'd be HAL 9000, that they'd be something special. And then when it actually started to happen, which is about November 24th, 2025,

[02:28:29] with the release of Opus 4.5, we all went, oh, you can. And now we're just kind of kids in a candy store. Yeah, 100%. But also, I think there are a couple of interesting things here. One, we're kids in a candy store, but it isn't all just fun. Like, there's actual productivity happening. Yes. And I know many people, a lot of small business owners, who are using, you know, terminal-based Claude Code to

[02:28:58] basically run their business in this way that they weren't able to before, which is offering them more time to actually do the physical part of their business, which is something that they were struggling with. And so I think there are some impacts here that are just really complicated. I'm not really sure. I mean, I think about this a lot. We think about this a lot. I mean, our hoodies, they say "AI will kill us all." Like, we have some strong social beliefs about this stuff. And I am troubled by

[02:29:23] how easy it is to use, and just the quickening, like how fast it is all coming, and what's going to happen. I don't know the answer. And maybe part of it is just dancing around the bonfire because it's fun. But the other aspect is... I don't know, I think we're boned. And on that note...

[02:29:53] We are, in so many ways. And I think that's part of it too. You know, all of the sci-fi stuff with this stuff is always in a dystopian world, and it's an escapist kind of thing, like Ready Player One or Neuromancer or Snow Crash. It's an escapist thing. And I think that maybe we're starting to head into that dystopia, and that worries me. And you see cartoons about this: a lot of these sci-fi novels were not

[02:30:23] like... they were warning signs. Don't name your product after the plot points. Well, maybe it is... I think humans are bad, maybe. People always say computers were a mistake, but it might be the humans, actually, that are the mistake. I'm not sure. But this is the fascinating thing: simultaneously horrible and amazing that we create... this is a perfect example of something

[02:30:52] we've created in our own image that is simultaneously a nightmare and a miracle. I think it's getting created by people who don't realize how amazing humans actually are. No, I disagree. I think a lot of these people... I think Dario for sure, maybe not Sam Altman so much, but Dario for sure, is a philosopher. He's a dreamer. He's trying to create intelligence. Okay, maybe

[02:31:21] him. But these are not the only two. Oh no, I agree, there are others pushing it. Yeah. It might also be the distinction of how successful the products are and how well accepted the products are. There seems to be a great, like, look what we can do that is better than humans. I mean, a lot of the OpenAI rhetoric was, we won't need the people anymore because of all the software. And people are like, that's terrible, we don't want that. And I think you have to appreciate the people if you're trying to build

[02:31:48] something to emulate them. If you don't appreciate them, it's never going to emulate them well at all. Especially if he's shooting for AGI, where he wants to build an artificial human, he really has to care about the specimen he's trying to replicate. And I think that means being a little bit more of a fan of humanity, so that you can understand what the computerized version can do to complement humanity instead of actually threatening it. I think that's true. I think that's absolutely correct. And I would

[02:32:14] say that that's an indictment of Silicon Valley in general, not just with AI. And we've seen that for many, many years: Silicon Valley has not really respected users, and has always done extractive capitalism over those users. This is something we think about a lot, and this is one of the reasons why our conclusions have always been, you know, think through the humanist lens. How do we think through the humanist lens and participate in that stuff? And there's

[02:32:38] another skill that my co-founder Dylan made, who is a Quaker, which is kind of following Quaker business practice and trying to put some more human-side thinking into these processes as well. And so I think you can do augmentation with it. It doesn't have to be 100% outsourcing. It's just a really fascinating time, and very complex. I have a lot of feelings, but most of them are about agent drugs. All right, we're going to wrap on that note.

[02:33:06] Although if your brother ever wants to do a show with you and me, I think we should do that. It would be really fun. He is a former professional clown who was a commercial diver and then got into tech in his mid-40s, over COVID, randomly. Oh. And so he's new to it, and his perspectives on AI are really interesting, because he never programmed computers. I mean, we both had Apple II computers back in the day, but he never, like, went through

[02:33:35] the process that I did. And so he just has a very different perspective about things, which I really respect. And it's been really interesting to watch, because he'll come to a conclusion that maybe all of us would come to, but he's coming to it from a position that's, like, completely alien to me. You know, because I've been using the internet for so long that all this stuff is native to me, the networks are native to me. And talking to him, you're just like, oh, that's not a normal thought that I have. And so it's really fun to talk to him about this stuff.

[02:34:04] But I do wonder... I mean, I have a theory that people who read a lot, people who hang out with humans a lot, people who are thinking much more about communication, are going to be better at building skills for these things. Not just skills, Claude Code skills, but I just mean integrating with them. And if you are, like the New York Times had that article about cracked engineers a while back, if you're one of these cracked-engineer types who can't communicate with anyone but can communicate only with computers and code, I think you're going to fail at the LLM interactions,

[02:34:33] because the LLM interactions are at least parroting some, you know, human interactions, right? So you have to be good at that to be good at this. All right, Brian, put a bow on it. Since we've been referencing all of these sci-fi paradigms for trying to talk about AI and all this stuff, there is one major... or, I'm sorry, sci-fi paradigm that abandoned AI, and that's

[02:34:58] the Butlerian Jihad. They walked away from AI. So maybe that's where we have to... forget Stephenson, forget cyberpunk, all that stuff. Let's take our references from Dune. That may be where we're headed. I didn't think of that. Thank you, Brian McCullough, great to see you. Harper Reed, Cathy Gellis, thanks to all of you for joining us. We do TWiT Sunday afternoons, 2 to 5 p.m. Pacific,

[02:35:26] 5 to 8 Eastern, 2100 UTC. You can watch us live on YouTube, Twitch, X, Facebook, LinkedIn, Kick. You can also watch us in the Club TWiT Discord if you're a club member, and we would love to have you as a club member. Consider joining the club for ad-free versions of all the shows, access to the Discord, and special programming just for you. twit.tv/clubtwit for more information. After the fact, on-demand versions of the show are at the website, twit.tv. There's a YouTube channel dedicated to the video. Yes, we do

[02:35:55] audio and video of the show. And probably the easiest thing: subscribe in your favorite podcast client. Do leave us a good review, if you will, and let the world know about one of the longest-running tech podcasts in the world, 20 years and still going strong. What a world we live in. Thanks for joining us, everybody. We'll see you next time. Another TWiT is in the can. Bye-bye, guys. Hey, everybody, it's Leo Laporte. Are you trying to keep up with the world of Microsoft? It's moving fast, but we have two of the best experts in

[02:36:23] the world, Paul Thurrott and Richard Campbell. They join me every Wednesday to talk about the latest from Microsoft on Windows Weekly. And it's a lot more than just Windows. I hope you'll listen to the show every Wednesday. Easy enough: just subscribe in your favorite podcast client to Windows Weekly, or visit our website at twit.tv/ww. Microsoft's moving fast, but there's a way to stay ahead. That's Windows Weekly, every Wednesday, on TWiT.

technology, Cathy Gellis, product liability, Big Tobacco tech analogy, Anthropic Department of Defense lawsuit, FCC router ban, Section 230, Supreme Court decision ISP vs record company, TP-Link, Meta social media trial, Techdirt, Ubiquiti