An explosive supply chain hack in LiteLLM nearly unleashed catastrophic malware across millions of AI systems, and it took a coder's quick thinking to catch it before it snowballed into disaster.
- Will California require Linux to verify its users' ages?
- Apple's iOS 26.4 requires UK users to prove their age.
- Russia chooses to use home grown 5G mobile encryption.
- Ukraine knew the webcam was installed by Russian spies.
- Google moves quantum computing "Q Day" to 2029.
- At RSA, UK's NCSC CEO warns of vibe-coded SaaS replacements.
- More information about nasty ClickFix campaigns.
- More than one in seven Reddit postings are posted by AI bots.
- The story behind the LiteLLM disaster that was averted.
Show Notes - https://www.grc.com/sn/SN-1072-Notes.pdf
Hosts: Steve Gibson and Leo Laporte
Download or subscribe to Security Now at https://twit.tv/shows/security-now.
You can submit a question to Security Now at the GRC Feedback Page.
For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.
Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors:
[00:00:00] It's time for Security Now. Steve Gibson is here with a show about something that should send a chill into the heart of every coder: the nightmare PyPI exploit of LiteLLM. We'll do a kind of deep dive into what happened, how it happened, and what we can do to prevent it in the future. Plus, we'll talk about age verification on Linux, a good move from Apple on the ClickFix vulnerability, and
[00:00:28] Is quantum computing moving closer? Steve has thoughts, next on Security Now. Podcasts you love. From people you trust. This is TWiT. This is Security Now with Steve Gibson, episode 1072, recorded Tuesday, March 31st, 2026. LiteLLM.
[00:00:58] It's time for Security Now. I know, you wait all week for Tuesday. Best day of the week. Leo's back, Leo's back. Well, I'm back, but so is Mr. Gibson. Steve Gibson's here. Mikah did a great job last week. Thank you, Mikah, for filling in for me. Holding the fort down. I was at RSAC, the big security conference in San Francisco. I ran into a friend of yours, Marcus Hutchins, the hacker. In fact, I kind of relived old times because I said, yeah, we were when, what was it, WannaCry that he did?
[00:01:27] We were following along with what he was doing. Yeah. And then he left Black Hat in Vegas, got picked up by the feds before he could board his plane. In the airport. And he was held, and because of his youthful indiscretions, not because they didn't recognize the valuable contributions he'd made as an adult. Anyway, we went through that and everything. And I guess he was there because there's a new documentary being made called Midnight in the War Room, a documentary of cyber warfare.
[00:01:57] And so we went over there and then he was sitting there signing hats. So I got him, they got a Marcus Hutchins hat. And it says, as if I didn't know who he was, it says, cyber security guru on the side there. By the way, we made a wonderful RSAC, about an hour long kind of tour through RSAC. We talked to, oh, a dozen people or so, something like that. And you can watch that on the Twit feed. It's on YouTube.
[00:02:24] Securing the agentic era, RSAC 2026. That is the challenge, isn't it? Those are all pesky agents. They get up to all kinds of things. Well, I was really, you know, I specifically wanted to talk to people who are using AI defensively. And there are a number of companies doing it. There was one company called Aikido, Aikido, right? And we were talking about which models they were using and how they were getting the models to do the job.
[00:02:51] And the guy said, well, at one point he said, you know, we had the best thing we found was to tell the models, we will sue you if you don't find the flaw. And it worked. It scared them. I thought, it's like, we are living in a weird time with AI. We're going to need AI HR before long. Absolutely. It's like, don't you mistreat, you know, you got to give your AI a little, literally, time to cool off. Yes. Yes. Exactly.
[00:03:20] Let those chips cool. Oh my goodness. Let the fans take the heat off of them. Well, and they do. Claude Code was accidentally leaked from Anthropic today or yesterday, and they do have instructions in there to say, be more personable, you know, be a little more sassy. They're telling it to have some personality because it makes it more sticky. If you go to youtube.com slash twit, there it is: RSAC 2026, Securing the Agentic Era.
[00:03:49] And you can talk, hear me talk to Marcus Hutchins and a bunch of other people in that, in that video, if you want to play it. Cool. Now let's talk about though, what's going on today on security. So, um, probably the biggest news, the most, you know, a lot has happened in the last week. I actually had a couple of pieces of email saying, Hey, I thought you were going to talk about this or that.
[00:04:12] Like the foreign routers being outlawed. And like, I went up to Ubiquiti at RSAC and I said, I want to talk to you about that. And they said, no, we're not giving any interviews. They wouldn't talk to me. Yeah. And I looked around and I couldn't find any, like, definitive dates. I even looked at the official government document and it's like, okay, like when, or where, or what.
[00:04:39] And, you know, my takeaway was, for most of our listeners, there are really good alternatives, like, you know, running OPNsense on a little ARM box. I think that's what a lot of people are saying: I'll just run my own. Right. Exactly. You really don't need to get something from Asus any longer. And arguably you get a much stronger and more capable result.
[00:05:03] But anyway, I want to talk about this big event with LiteLLM. It's such a perfect glimpse into a supply chain attack, and so we'll sort of use that as our armature for discussing, just in general, the problem we're having with our supply chain. Because boy, are we seeing an acceleration of that.
[00:05:29] You know, I think probably we first really touched on it with the Log4j exploit, which scared the entire industry and turned out to be a nothing burger, because it was difficult to do. And what we've learned is that it's the easy attacks that a much greater percentage of the population of hackers are able to jump on.
[00:05:56] But overall, I mean, we've been talking about, you know, all of the infections found in NPM and PyPI. So that's where we're going to focus on this episode 1072 for the last day of March, March 31st.
[00:06:15] But we're also going to talk about, oh, the other real hot button, Leo, is this California Looney Tunes law that says that operating system platforms must enforce age verification. And of course, every Linux person's head exploded. It's like, wait a minute. I mean, the reason they are a Linux person is so that they don't have the government or anybody telling them what to do. So, wow, a lot happened there. We'll talk about that.
[00:06:45] We've also got some new behavior from iOS 26.4, which Apple just moved their whole ecosystem to, which is requiring UK users to prove their age proactively. And it turns, well, I don't want to, I have to calm myself down, because we'll get there.
[00:07:09] Also, Russia, in this continuing move to just apparently withdraw from the world and society, and good luck with that, has chosen to use homegrown 5G encryption for their future mobile network, which won't be compatible with anything else or anybody's phones. So, okay.
[00:07:34] Oh, there was a great story of a Ukrainian drone maker who was aware that a Russian spy agency was installing a spying thermostat in their facility, and what happened with that. We've also got Google moving closer to
[00:08:01] what they call "Q Day," the day when we actually think we have to worry about quantum computing. They've moved it into this decade, to 2029. Hmm.
[00:08:12] So we're going to look at that. Also at RSA, where you were, the CEO of the UK's NCSC, you know, their big cybersecurity agency, stood up on stage and warned about the danger of vibe-coded SaaS, you know, software-as-a-service, replacements. We're going to touch on that.
[00:08:39] We've got more information about nasty ClickFix campaigns, which continue to proliferate. And many people emailed me to say, hey, Apple took your advice. Well, they didn't take my advice, but they did the right thing, which Microsoft refuses to do. Yeah. We've also got the news that more than one in seven Reddit postings were
[00:09:04] posted by AI bots, and the CEO of Reddit is not happy, and he doesn't know what to do except something that Reddit users really don't want to do. And then we're going to talk a little bit about, well, actually a lot about, what was going on behind this LiteLLM disaster that was averted, but only because the people who coded
[00:09:32] the malware were apparently in too big a hurry. It was vibe coded. They made a mistake. Yep. They made a mistake that allowed it to reveal itself almost immediately, which was a good thing. Because again, this absolutely constitutes dodging another bullet. And how many are we going to dodge before we get hit by one? I'm not even sure how much it was dodged.
[00:09:59] I mean, we don't know yet how many people, we know 47,000 people downloaded it. We don't know how many of them got bit. Which is not to say that nobody got hurt, but when you're downloading 3.7 million per day, right? It could have exploded. Yeah. Well, and there was a new one this morning, X and Axios, which is an NPM library compromise. This is just the beginning.
And we'll be talking about why, because there's, as usual, a takeaway that we try to find here. And to my way of thinking, this is the trade-off we are still making in preferring convenience over security. Yeah. It's like, we're hoping for the best. And so far, you know, maybe AI will change it. We're not sure.
[00:10:50] Well, that was the interesting thing at RSAC, the number of companies that were proposing AI-based defense. They were using AI agents for defense. It was probably the number one topic at RSAC. Very interesting. Yeah. Well, we're going to get to the picture of the week. Something you should avoid with wet-nosed doggies, apparently. I only saw the caption, but that's coming up in just a little bit.
[00:11:17] But first, a word from our sponsor, the show brought to you by ThreatLocker. Steve, I know them well. ThreatLocker's Zero Trust platform now delivers the industry's most comprehensive suite of zero trust solutions. Actually, this is something new, protecting endpoints, networks, and the cloud. They announced this, I think at RSAC, I believe, by extending
[00:11:44] In fact, I have a phone conference with them on Thursday to get a tour of this; I'm very excited. By extending zero trust enforcement to cloud services and company networks, ThreatLocker now ensures that devices are validated through a secure broker before connecting to platforms like Salesforce, Microsoft 365, Asana, Google Workspace, and GitHub. This is so important.
[00:12:09] Even if a user is successfully phished, attackers cannot use that endpoint to access resources unless they actually have possession of the user's trusted device. ThreatLocker works across all industries and provides 24/7 US-based support. It's a really amazing solution. Very affordable, very easy to install, very easy to configure. They support Windows, Mac, and Linux environments and enable comprehensive visibility and control.
[00:12:39] Ask Rob Thackeray. He's end user technical architect at Heathrow Airport. Now, Heathrow does not want to be brought down by ransomware. That's why they use ThreatLocker. He said, quote, ThreatLocker was the most intuitive solution we tested, and the responsiveness of the organization, the willingness to engage with us, set up a demo, work with us on weekly audit reviews, is very good. It's great to have an ongoing relationship with a company that's so responsive to our requests. That's Heathrow.
[00:13:09] And I have to say, that's what I also got from all the ThreatLocker customers we saw at Zero Trust World. It's really a remarkable story. They're trusted by people who just can't afford to be attacked. Global enterprises like JetBlue, the Indianapolis Colts football team, and the Port of Vancouver use ThreatLocker. ThreatLocker consistently gets high honors and industry recognition. They're a G2 High Performer, Best Support for Enterprise, Summer 2025.
PeerSpot ranked them number one in application control. GetApp gave them their Best Functionality and Features award in 2025. ThreatLocker means you can confidently ensure users have access to a consistent, safe network connection. Offices, remote users, internal servers, and critical services can maintain smooth operations without the need to open inbound ports or deploy traditional VPN solutions. Get rid of them.
[00:14:06] Your end users will get the secure, reliable internal system access they need without complex infrastructure changes and without risk. Get unprecedented protection quickly, easily, and cost-effectively with ThreatLocker. Another reason to get a demo. Actually, this was fascinating. It's in our RSAC piece. I talked to ThreatLocker's chief product officer.
[00:14:32] And he was telling me, he says, just install ThreatLocker on one service or one device. He said, because it also is a way to see if you've got problems, right? Because immediately something will get blocked and you'll go, wow, we didn't know we were having a problem here. It's very informative. Here's what I would say: go to threatlocker.com slash twit. Get that free 30-day trial.
[00:14:58] Learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's threatlocker.com slash twit. Really like these guys. They've got a great product. You should check it out. Threatlocker.com slash twit. Thank you so much for supporting Steve Gibson. So, picture time. Yes. I gave this picture, as you noted, the caption: this solution is not recommended
if your dog has a wet nose. Okay. So I'm thinking that's going to have something to do with electricity here. Let me just scroll up. We can discover it together. There's a nail. Oh, oh, oh boy. Okay. So this is an interesting solution if you have the wrong country's power. Yes. You know, and Leo, really all of these solutions are interesting.
[00:15:56] Benito and I were talking about this before we began recording. You know, there was that one with the two nail clippers that I thought was particularly inventive. But I think if we were to stand back and look at the 20-plus-year history of the podcast, people who seem to have the wrong plug for their outlet
[00:16:18] and the mystery of fence gates standing alone in the middle of fields would seem to be the two overriding themes of our pictures of the week. So for those who are listening and not seeing: the individual here is trying to plug in a European-style AC plug.
That's got the round pins, into, well, he would like to plug it in, apparently, in the US or somewhere where we have the parallel straight slots. I've got to tell you, that's the worst-looking power strip I've ever seen. Everything about this. I mean, when you use the term rusty nail, you're normally not being literal, but here, I don't even know how he's getting a connection, there's so much rust on these nails. Because of course rust is an oxide, which is an insulator.
But anyway, so we basically sort of have a Jacob's ladder made with two nails stuck into the slots of this power strip and then pushed down between the nails, so that they're sort of splayed apart. What's creating the Jacob's ladder effect is this round-pinned European-style plug, which just sort of hovers there.
[00:17:44] And my point was, if you have a curious dog that likes to go around sniffing in the corners, and I don't think its nose would even need to be wet, it would get a very rude surprise if it sticks its nose across there. And how did they get the nails in without shocking themselves? They must've had rubber gloves or something. I mean, this is insane. It's nuts. Yes. Hopefully the outlet strip has a switch that we don't see.
[00:18:14] Oh, there you go. Of course. Or it just could not be plugged into its normal American-style outlet where it gets its power. Anyway, thank you again, listeners, for another entertaining picture of the week. Always appreciate them. Okay. So, our first news of the week was inspired by a question I actually received from a listener.
It relates to much of our recent discussions about internet age verification, and separately, or specifically, to its recent escalation, which we've been seeing everywhere, to include, as just happened, operating system platforms themselves. Our listener, who identified himself as Fred M., wrote: Hi, Steve.
I recently read that FreeDOS was not going to comply with California's age verification requirements. He said, since FreeDOS is the OS distributed with SpinRite, I was wondering how this would affect you when the new law takes effect. Thanks, Fred. Okay. So the good news is SpinRite is not age-restricted content.
So I don't think we have a problem, even if we were going to have a problem, which I don't think we would. But his question refers to California's Assembly Bill 1043. And it's unclear to me why this issue suddenly and recently popped up on everyone's radar, but the internet is currently buzzing about it. And our listeners have been sending their questions and opinions to me, which I appreciate.
The bill in question was approved by California's governor back on October 13th of last year, of 2025, and it doesn't take effect until the start of next year, January 1st of 2027. So why all of a sudden this awareness of it? Because, I mean, that was a while ago. So I did some looking around over the past month.
The only thing that I could find was that the very popular, respected, and widely read Tom's Hardware site did post an article about this on March 1st, which sort of seems maybe to have been the catalyst for everyone going, what? What are you talking about?
So Tom's Hardware's headline was "California introduces age verification law for all operating systems, including Linux and Steam." And, you know, needless to say, no Linux users want a nosy government to have its mitts on their beloved, independent, open-source operating system.
And since Linux doesn't have any central control authority the way Windows, Mac, Android, and iOS do, they reasoned, Linux users reasoned, there would be no way for that to happen. Right? Right. So, okay.
Since California legislators have also recently proposed, as we talked about, requiring all 3D printers to somehow magically identify and refuse to print any component that might be part of a handgun, no one knows how that could possibly be made possible either. Unfortunately, our, and I say our because Leo and I are both residents of California,
[00:21:57] Our legislators here in California do seem to be having fun asking for things they cannot realistically have. Not that that's stopping them from, you know, asking. So, okay, first let's step back and take a look at what this legislation is because it does exist. It was signed into law on October 13th and it is coming into effect on January 1st. That's all happening.
The section heading in this bill is "Age verification signals: software applications and online services." And the section's overview, just the overview of the detailed, you know, point-by-point, says: existing law generally provides protections for minors on the internet, including the California Age-Appropriate Design Code Act,
[00:22:49] which, among other things, requires a business that provides an online service, product, or feature likely to be accessed by children to do certain things, including estimate the age of child users with a reasonable level of certainty appropriate to the risks that arise from the data management practices of the business.
[00:23:41] And the reason why is Meta, and Google with YouTube, losing those major cases. And there was also one in Arizona, I think, which was beginning to hold big tech accountable for the design practices in their applications, which do exactly what this California Age-Appropriate Design Code Act says they shouldn't do.
[00:24:08] So they wrote this bill beginning January 1st, 2027, which did get signed on October 13th.
They said it would require, among other things related to age verification with respect to software applications, an operating system provider, as defined, to provide an accessible interface at account setup that requires an account holder, as defined, to indicate the birth date, age, or both of the user of that device, for the purpose
[00:24:48] of providing a signal regarding the user's age bracket to applications available in a covered application store; and to provide a developer, as defined, who has requested a signal with respect to a particular user with a digital signal, via a reasonably consistent real-time application programming interface, you know, API, regarding whether a user is in any of several age brackets, as prescribed.
[00:25:16] The bill would require a developer to request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched. This bill would prohibit an operating system provider or a covered application store from using data collected from a third party in an anti-competitive manner as specified.
This bill would punish noncompliance with a civil penalty to be enforced by the attorney general, as prescribed. Okay, so that's as much as I'm going to quote from that. So while it's true the details matter, I'll first note that, you know, what this bill is asking for is what we've been suggesting
[00:26:09] Apple with iOS and Google with Android should both somehow manage to provide. And the way this should be done, at least for the use case of smartphones, is beginning to take shape. We're beginning to see this manifested in, you know, Apple's apparently reluctant incremental movement on this front.
The parents or guardians of a minor child should be able to configure the birth date of the user of a smartphone and be able to securely lock that date into their child's device from then on. And, I'll say, they optionally should be able to; the point is that the platform would provide the capability.
But it should be at their discretion from that point on. Anytime a website, a local application, or an app store download contains age-restricted content, and thus needs to obtain age-gated permission, it may cause the operating system of the user,
[00:27:25] the user who's the kid, to display a clear and uniform pop-up asking for an age bracket to be provided.
If the user wishes to, again, not automatically, not unless they set it that way, but if they want control, if they choose to, they may then allow the operating system to inform the requesting application on their behalf whether its user is under 13, between 13 and 15, between 16 and 18, or over 18.
Those are the brackets California specifies, and which the world seems to sort of be settling on. If the user declines to provide their bracket, or if their device has not been set with a date of birth, the requesting site will be told that no age assertion is available and should probably not deliver the age-restricted content.
So what seems right about this is that this solution places the handling of, and responsibility for, their young child's age into the parents' hands, where it should be. Not the government, not the OS, not the platform provider. The platform provider provides the capability to configure the device to do this, if the parent or guardian should so choose.
And all it requires of Google and Apple, and they're both almost there now, is that they provide the means to accept, lock, and protect that decision, and provide a uniform platform-specific API for making that information available on a case-by-case basis, again, if it's been configured to do that, to any entity that inquires.
[00:29:31] And as I said, both companies apparently reluctantly have been moving incrementally in this direction.
[00:29:40] So at this point, given what's happening on the legal side in local and national governments, I can't find any sympathy for someone who complains that this, like what I described, would represent an invasion of an online user's absolute privacy, which is what we see. There's a lot of that on the net.
[00:30:09] You know, opening the front door of your home, walking outside and down the street compromises someone's illusory absolute privacy. You know, we live in a world of laws which attempt to protect vulnerable young people by age-gating where they can go and what they can do. And, you know, with the vast resources that are now online, there's a lot of stuff there that arguably needs gating.
You know, parents should have the right to decide if that's something they want their children to have access to. So as a society, we're now working to more fully incorporate all of the many facets of the internet into our daily lives. To do that responsibly means that a user's age, although it hasn't previously been, is going to have to be taken into account moving forward.
Okay, so what happens when we leave the mostly well and clearly and cleanly defined realm of personal-use smartphones, you know, which have per-user accounts? Things become a lot less clear and clean. And I agree with everyone who's upset about California and what this means for Linux, that we're stepping into a huge mess.
[00:31:33] So here's what Tom's Hardware wrote, which may have been, as I thought, the catalyst for this recent upsurge of, you know, interest and outrage. They said, California's Digital Age Assurance Act, that's Assembly Bill 1043, signed by Governor Gavin Newsom in October 2025, requires every operating system provider in California.
And I don't even know, like, if Linux has an operating system provider, right? Well, I mean, Ubuntu is, for instance; that's a company that makes a distro. It would have to be by distro. There are also Android operating systems like Graphene. GrapheneOS has already said, we're not going to do this. Well, and did you see that it got stuck into systemd and then got yanked? Yeah. Yeah. So, I mean, nobody wants this. But there's no enforcement mechanism.
[00:32:28] There's no, it's not even clear how they would know. And it doesn't require ID, right? It just says you say how old you are. Correct. So what good is that? Correct. Silly. Yeah. Just nonsense. Yeah. So they said the law's, Tom's Hardware said the law's broad definition of operating system
provider, as anyone, quote, "who develops, licenses, or controls the operating system software on a computer, mobile device, or any other general purpose computing device." Well, that means smart TVs too, right? So, like, okay, you're going to tell your TV how old the viewer is. Tizen and webOS, all the TV operating systems. Wow.
So, and Tom's said, that pulls in not just Windows, macOS, Android, and iOS, but also Linux distributions and Valve's SteamOS. According to AB 1043, OS providers must maintain a, quote, reasonably consistent real-time application programming interface, again, API, that categorizes users into four brackets. Those are the ones that we mentioned.
[00:33:39] Developers who receive the signal are, quote, deemed to have actual knowledge. So if the OS makes a claim, then they're kind of off the hook. Well, the OS said this was the age of the user. So they are, at that point, deemed to have actual knowledge of their user's age range under the law, which shifts legal liability for age-appropriate content decisions onto them.
[00:34:07] So that says if they've been told, then they must act accordingly. And Tom's wrote, penalties for noncompliance run up to $2,500 per affected child for negligent violations and $7,500 for intentional violations. And it's important, though, that this is all enforced by the California attorney general.
[00:34:30] And that was a point made elsewhere, that it means random groups can't sue under this law. That's good. It's only the California attorney general. So that gives them a lot of discretion about who they're pursuing. Exactly. So Tom said the law does not require, as you said, Leo, photo ID uploads or facial recognition with users instead simply self-reporting their age. What? 99. That's right.
[00:35:00] So he says this sets AB 1043 apart from similar laws passed in Texas and Utah that require, quote, and we've seen this when we talked about this, commercially reasonable, whatever that means, verification methods such as government issued ID checks. Assembly member, California Assembly member Buffy Wicks, who authored the bill, said this,
[00:35:26] quote, avoids constitutional concerns by focusing strictly on age assurance, not content moderation. In a press release, the bill passed both chambers unanimously, 76 to zero in the Assembly and 38 to zero in the Senate. That's kind of like a, yeah, sure. Why not vote? Yeah. It's like, okay. That's like that. This is an easy one. Yeah.
[00:35:54] However, Gavin was a little circumspect. Uh, Tom's wrote, despite signing it, Governor Newsom issued a statement urging the legislature to amend the law before its effective date, citing concerns from streaming services and game developers, right? A streaming service on a smart TV, a streaming service and game developers about, quote, complexities
[00:36:20] such as multi-user accounts shared by a family member and user profiles utilized across multiple devices. In other words, you know, we're talking about the same thing, a version of the same thing we've been talking about with networks for since the beginning of the podcast, authentication. On one hand, there's identity authentication. Now we're facing age authentication.
And it's just as messy, because you're remote, and these are just not easy problems to solve. So Tom's said whether amendments will materialize before January 2027 remains to be seen. Enforcement against Linux distributions, however, is likely to be problematic, wrote Tom's. Distros like Arch, Ubuntu, Debian, and Gentoo have no centralized account
[00:37:15] Infrastructure with users downloading ISOs from mirrors worldwide and can modify source code freely. The small distros lack legal teams or resources to implement the required API. It's easy to do. So a more realistic outcome for non-compliant distros is a disclaimer that the software is
[00:37:41] not intended for use in California, and maybe eventually anywhere. Really? Not use this anywhere. Can't use this on Earth. So good luck. So I spent some time reading everything I could, because, you know, this cross-network authentication of anything is hard.
[00:38:08] Uh, I found a posting made by the Reason Foundation a few months ago that I think is worth sharing. It summarizes the current state of affairs, highlights the ways in which California's new legislation actually represents a useful step forward, and also suggests a way out of the mess that California also created. So they wrote: California Governor Gavin Newsom signed the Digital Age Assurance Act,
[00:38:36] Assembly Bill 1043, into law on October 13th, marking a significant evolution in state approaches to online youth safety. There is room for improvement, but the act introduces a meaningful first step toward a more privacy-preserving age-signaling model intended to minimize data exposure while improving compliance certainty for businesses.
[00:39:02] This step is a welcome advancement over earlier approaches, but it also creates potential complications if later paired with more restrictive bills. A trade-off the policymakers should weigh carefully. So they wrote California's AB 1043 mandates. And so, yeah, we got the, we got the four age bracket thing. So they said this approach contrasts with that of Utah, Texas, and Louisiana, which enacted the
[00:39:31] first statewide app store age verification laws in 2025. The bills require app stores and developers to verify users' ages through, here's that expression, quote, commercially reasonable methods. Utah and Louisiana's laws are set to go into effect in 2026, I think in the summer,
[00:39:55] while Texas's has been temporarily blocked by a federal judge on constitutional grounds. Two federal bills, the App Store Accountability Act, introduced by Senator Mike Lee, a Republican from Utah, and the Parents Over Platforms Act, introduced by Representative Jake Auchincloss, a Massachusetts Democrat, included the same commercially reasonable language as the state bills.
[00:40:24] Although this phrasing does not explicitly mandate government ID or biometric checks, it creates strong incentives for app stores to collect the most precise forms of evidence available. Driver's licenses, passports, or credit cards. Fearing the risk of lawsuits and noncompliance penalties, companies would default to the most definitive identification techniques, which that's a problem, right?
[00:40:53] So what they're saying here is that by having state and even federal laws which say you must do the best job you can, unfortunately, the best job you can do is very intrusive of privacy.
[00:41:12] So they said in 2025 alone, several popular apps that already required government ID checks for age verification suffered significant data breaches, highlighting the privacy risks associated with such mandates.
[00:41:28] The Tea app, a women-only dating advice platform that required users to upload selfies and copies of government-issued IDs as part of its account verification, experienced a major breach in July that exposed over 70,000 identification images and sensitive personal data.
[00:41:52] And again, it's like, why are they not deleting this the moment they've determined someone's age? But okay. In October, global messaging platform Discord, as we know, suffered a breach directly tied to its compliance with the United Kingdom's Online Safety Act, which mandates robust age verification for platforms likely to be accessed by minors.
[00:42:18] To meet these legal requirements, Discord began requiring UK-based users to submit either facial scans, government IDs, or the last four digits of credit cards for age checks, vastly expanding the pool of highly sensitive data at risk. When hackers later compromised a third-party vendor managing this information, thousands of ID photos and partial credit card details were exposed.
[00:42:45] These incidents underscore how rigid age verification systems can turn well-intentioned privacy protections into security liabilities and inadvertently create new vectors for harm. In contrast, California Assembly Bill 1043 correctly prioritizes privacy and security by using a self-declared age signal rather than a verification process.
[00:43:12] The law integrates core privacy by design principles by separating identity from compliance status and ensuring that user data never leaves local systems in identifiable form. That is, all it ever discloses is brackets.
[00:43:31] It also provides developers with clearer compliance certainty than Utah-style frameworks, which remain mired in vague terms like commercially reasonable. However, there are still issues with AB 1043 that should be addressed.
[00:43:49] First, the law's mandate that device makers integrate age signals into all devices risks sidelining parents from key digital literacy decisions. For AB 1043 to achieve its stated balance between safety, privacy, and parental empowerment, California could modify its framework to make age signaling optional for parents rather than required.
[00:44:19] Second, debates over youth online safety laws raise a subtler issue. Their impact on family relationships and parental oversight. Age verification and age signal frameworks are often presented as empowering parents, but automation can easily displace meaningful dialogue between parents and their children.
[00:44:43] True digital literacy depends on ongoing dialogue, trust, and continuous education about online risks, not on technical filters alone. When technology assumes the entire role of risk management, it can foster complacency and a false sense of security, as if software settings could replace parental judgment. Boy, I really like that. That's really good. I know. It sounds like you, Leo.
[00:45:11] That's exactly the point that you've often been making here. They said policymakers should therefore ensure that digital safety tools operate as supports for families, not substitutes for them. California's initial framework in this respect could be refined through a simple but meaningful adjustment.
[00:45:35] Make the device level age signal optional for parents rather than compulsory. An opt-in structure would preserve AB 1043's privacy benefits while strengthening family agency.
[00:45:51] Parents could choose to enable the system during device setup if they desire automated filtering or app age controls, or skip it entirely for now if they prefer to guide their children's use through household rules and open communication. Optional enrollment would further align the policy with California's broader digital rights precedents, reinforcing choice, consent, and proportionality.
[00:46:20] And they finished writing, on the whole, California's AB 1043 represents a meaningful advancement in the national debate on age verification. It replaces high-risk identity checks with privacy-preserving signals, curtails constitutional litigation risks, and clarifies enforcement responsibility.
[00:46:43] But if the state were to shift to an opt-in model, it could preserve the law's privacy protections, align with its digital rights values, and restore parents to the central role in guiding children's online well-being. Age assurance need not come at the expense of privacy or parental autonomy. So I think this author gets a lot of this exactly right.
[00:47:10] Now, we would be moving toward an environment where the devices used by someone less than 18 years old could optionally be configured by that minor's parent or guardian to conditionally supply its user's age bracket. Never its date of birth, just which bracket they're in.
[00:47:35] The idea would be that the various operating systems would implement a simple API. And, you know, iOS is there. Android is there. I mean, they're like right there. That could be queried by applications running on the platform. If that application's a video game offering age-restricted content, it could learn which version of its game to display to that platform's user.
[00:48:03] If the application were the application's app store, it would learn which applications to list and allow to be downloaded and which should simply be filtered and not shown to someone under age. And if the application was a web browser, it would learn the age range of its user and could use that information when queried by a remote website.
[00:48:28] Now, for that, we would need the W3C, the World Wide Web Consortium, to define a standard means for a remote site to query the browser client for its user's age, which the browser would have received by an API from the underlying platform. But even that should be trivial. I mean, that would not take more than a day to define.
[00:48:54] So as for the non-smartphone platforms, such as Windows, macOS, Linux, and Steam, and, of course, smart TVs, all of those platforms, at least the OS platforms, I don't know about smart TVs,
[00:49:12] but I guess it could be added, operate with the concept of a root or admin, right, whose account should not be used as the daily driver, and daily users who work with far safer, reduced-privilege user accounts.
[00:49:30] So those platforms could easily arrange to add date of birth awareness to their user accounts, and then an API would be added to surface the brackets of those to the online requester of that information.
[00:49:50] So, you know, basically, following exactly the model of the smartphone, that would give parents who wished to govern what their young children were able to do online, you know, what they saw and where they went, a clear and clean means for doing so.
[00:50:07] And I so much favor date of birth over age because that automatically changes the bracket at the user's birthday rather than needing to constantly update the age after a birthday. And, of course, parents could set whatever birth date they wanted.
[00:50:25] If they felt that their child was more mature than the typical child at that age, they could say that they were born earlier, which would then move them into a later bracket sooner. So, you know, all of the online fury and indignation raging over the idea of California attempting to effectively outlaw, as I've seen online, any platform that doesn't provide these services,
all of that disappears when it's just an optional feature capability that a systems admin, you know, in this case, mom or dad, might choose to employ if, in their role of parent or guardian, they would like to exert some control over the age-gated use of that platform.
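The scheme Steve is describing, where the platform stores a date of birth but only ever discloses a bracket, is simple enough to sketch in a few lines of Python. The four bracket names and boundaries below are my own illustrative assumptions for the example, not anything taken from AB 1043's actual text:

```python
from datetime import date

# Illustrative four-bracket scheme; AB 1043's actual bracket
# boundaries may differ. Only a bracket name ever leaves this API.
BRACKETS = [(13, "under-13"), (16, "13-15"), (18, "16-17")]

def age_on(dob: date, today: date) -> int:
    """Whole years elapsed since dob, as of today."""
    years = today.year - dob.year
    # Not yet had this year's birthday? Then one year less.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def age_bracket(dob: date, today: date) -> str:
    """The only thing disclosed: a bracket, never the birth date."""
    age = age_on(dob, today)
    for upper, name in BRACKETS:
        if age < upper:
            return name
    return "adult"
```

Note how this captures Steve's preference for storing a birth date rather than an age: with a stored date of birth of June 15, 2010, `age_bracket` returns "13-15" on June 14, 2026 and "16-17" the very next day, with nothing needing to be updated and nothing more precise than the bracket ever crossing the API boundary.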
[00:51:14] So, I think it's also worth noting that this solution also nicely resolves the whole VPN backlash dilemma that is also beginning to appear. We're hearing the legislators saying, well, you know, those VPNs are being used to bypass, you know, the laws we put in place, so we need to outlaw those.
[00:51:37] You know, a VPN in this case would be of no benefit since the user's platform, not their IP address, would be producing the age bracket indication. So, you know, I thought that was an interesting take that Reason published, and I thought it was good that Newsom said, well, I'm going to sign this, but I hope we, you know, maybe make some modifications before it goes into law.
[00:52:05] And in any event, I don't think Linux people have anything to worry about. I mean, it is open source. And we did see somebody kind of very quickly added the capability into user accounts on Linux, and Linux was immediately forked without that provision in there, even though it didn't even have an API,
[00:52:29] it didn't have a UI, it wasn't in any of the desktops, it was just down basically in the JSON structure, they added a field for date of birth. And, you know, a lot of people in the Linux community freaked out over that. So I was like, okay, I mean, I get it. But again, there's no question. We need to have strong identity online. We've needed that for a long time. Everyone wants it to be anonymous.
[00:52:59] We're trying to hold on to that. Age gating and age verification is coming to the internet. And let's hope we do it in a responsible way. You know how we can be responsible, Leo? Is by telling our listeners about one of our sponsors. We will get right back to Steve in just a moment. But first, a word from our sponsor this episode of Security Now brought to you by Adaptive,
[00:53:27] the first security awareness platform built to stop AI-powered social engineering. What? How could you do that? Well, here's what's changed, right? And you know this. Attackers don't just need malware anymore. They just need trust, right? A cloned voice, a convincing deepfake on Zoom, or an AI-written phish that looks like it came from your IT team. We get those a lot.
[00:53:53] You heard about the CFO who thought he was on a Zoom call with the board of directors and the CEO and wrote a massive check based on their instructions, except they were all deep fakes. It wasn't the CEO and the board. It was a bad guy. Anyway, how can you prepare your organization for this? Well, you can with Adaptive. They do simulations everywhere. Email. Yes, of course. But also SMS.
[00:54:22] They even do voice. So it protects you against deep fakes. It protects you against phishing, you know, voice phishing and AI-generated phishing emails and texts, including the scenarios that can mirror your own brand and executives. You need this nowadays. And when employees report something suspicious, Adaptive can help you triage it fast so security teams aren't buried in false alarms, but they can do something about the things that really matter.
[00:54:48] If you need training fast with Adaptive's AI content creator, you can turn a breaking threat, an incident report, or a compliance doc instantly into interactive multilingual modules right away. No design team required. Just push a button and you're go. With Adaptive, you can build, customize, and monitor every part of your training with complete personalization. So as a result, you get a more resilient security culture. And that's essential for companies.
[00:55:17] Companies like Plaid, right? I use Plaid when I log into my financial reporting tool. It uses Plaid to log me into my accounts, right? Plaid better be secure. Plaid platform powers thousands of digital finance apps and links consumers, developers, and institutions. I rely on Plaid. So fortunately, Plaid relies on Adaptive. With sensitive data at its core, Plaid security and compliance simply are non-negotiable.
[00:55:46] Plaid's head of security GRC says, and this is a direct quote, quote, Adaptive has equipped our teams with cutting-edge tools and built a smarter, more resilient security culture across the company. Trusted by Fortune 500s and backed by NVIDIA and OpenAI, Adaptive is building the defenses we need for the AI era. Learn more at AdaptiveSecurity.com. That's AdaptiveSecurity.com. What a cool idea.
[00:56:14] We're so glad they're there and so glad they're supporting Steve and his work at Security Now. Now back to the show. Okay. So before we leave this discussion of age verification, I just wanted to note that Apple has started requiring age verification for their users in the UK and South Korea.
[00:56:35] With the rollout of this latest 26.4 version of iOS, Apple account holders in those two countries may be asked to register a credit card or take a picture of a government issued ID. If Apple is able to determine an account holder's age without asking that, like by looking for other signals, like the length of time they've had an account, which would sort of automatically
[00:57:04] say, well, they've got to be an adult by now, then no other information is needed. But otherwise, they're going to get intrusive. So, you know, as I said before, if this has to happen, I trust Apple more than any other third party to protect its users' privacy. Apple really does, you know, appear to be doing everything as right as they can. I've got links in the show notes for anyone who wants more information that like of an Apple
[00:57:34] support page that just says, if you are asked to verify your age, you know, we need to do that. I heard from a couple of our listeners who are in the UK that they had a variant of a driver's license, which Apple software did not recognize. So they were having a problem with that.
[00:57:57] So it's looking like there's like on phone, you know, image recognition of UK government issued IDs and driver licenses, which Apple is able to ingest and then use to satisfy the UK's law. This is all, you know, it's not like Apple wants to be doing this. They definitely don't want to be doing this. Yeah.
[00:58:24] If you're asked to confirm that you're an adult, then, you know, you proceed. So this is now happening, you know, with Apple when they're operating in countries that require it. It's selfish of me to say this, but I'm glad that they're testing this in the UK and Korea so that we don't have to deal with it until the kinks are worked out a little bit. Yeah. Yeah. Wow. But, you know, ultimately this could be better, right?
[00:58:53] I mean, they're a choke point, right, Apple and Google, because everything goes through them. And it should be for app stores, not for desktop operating systems, not for TV sets and things like that. But I can see why, with the app stores, you might want to have that kind of control. And also because a smartphone is inherently more of a personal-use device, right? You know, the fact that we use the pronoun their smartphone, right, says that, you know, they're bound to it.
[00:59:21] It's their social media that is on that phone and, you know, their use of the phone, you know, they take it into their bedrooms with them. So it makes sense that, you know, the parents could, could work with Apple to establish a, a secret date of birth and the phone only divulges brackets when necessary. So I think that's where we're going to end up being.
[00:59:46] And, and again, if it worked, if anybody had to be doing this, I would trust Apple, much as you and I both Leo are increasingly annoyed by some of what Apple is doing. You know, as I said, I've lost, I've, I don't even know how my photo app works anymore. My phone, I get, I get into some strange mode where I can't get rid of half the screen is information about the photo. And I try to like to push it down. It won't go away. It's like, what happened?
[01:00:11] And anyway, uh, I chose to share this next story because it's so loony and because it serves as another example of the disturbing and growing intersection that we are seeing everywhere of politicians and technology. The Risky Business newsletter covered this puzzler by writing: the Russian government is working
[01:00:38] on a law that would require all mobile operators in Russia, all Russian mobile operators, to use a custom, domestically developed encryption algorithm for the country's 5G mobile network. If the bill passes, and it's expected to, all phones sold in Russia going forward will need to
[01:01:03] support the NEA-7 algorithm, apparently because one through six were no good, or they will not be able to connect to Russian mobile networks, which by 2032 will only support NEA-7. Foreign algorithms, such as SNOW, used in Europe,
[01:01:28] AES, of course, used in the US, and ZUC, used in China, will be supported only until 2032 as part of a transitional phase to allow current smartphones to reach their natural end of life. Work on the proposed regulation began last year, and the bill is now in its second draft, according to Russian news outlet Izvestia.
[01:01:55] The bill is part of a broader set of measures designed to hinder the operations, which is so stupid, you'll see why in a second, of Ukrainian drones and missiles, which have used Russian SIM cards to connect to mobile towers, determine their location, and then guide themselves to planned targets.
[01:02:16] However, using a custom encryption algorithm to encrypt 5G traffic won't stop the Ukrainian side from using Russia's existing mobile network, since they can always fall back to older protocols, not the 5G protocols: LTE and 3G, both of which will continue to function.
[01:02:37] So, on the other hand, writes Risky Business, the proposed law represents a, quote, patriotic legislative flex. It's the type of unrealistic stuff, they wrote, that's been happening in the Russian Duma recently to show that Russia is important and still matters on the global stage. Yeah, show that by withdrawing from the rest of the world. Wow.
[01:03:04] As Izvestia itself points out, Russia is insignificant in the mobile market, where it only accounts for 2% of annual sales. So it's very possible that most phone makers won't bother to implement NEA-7 on their chipsets. Why would they? There's also no base tower equipment that supports the algorithm, which raises the possibility that Russia will be years behind
[01:03:33] in rolling out its 5G network, because it's going to have to design and implement all of the base tower technology to add NEA-7 support. They said the Russian news outlet warns that NEA-7 may be used as a Trojan horse by foreign manufacturers to request a favorable market position or a monopoly in exchange for adding the algorithm to their firmware. Okay, I suppose that's possible. On the Ukrainian side.
[01:04:03] The answer is likely to be the same as with Russia's rollout of Max. Remember, that was the Russian-only messaging system, and Russia demanded that everybody use it and then began cutting off access to all the others, as they are now. The Ukrainian intelligence services were delighted that Russia was mandating everyone in their country use an incredibly insecure and easy-to-hack mobile app, meaning Max.
[01:04:32] Rolling out an untested and largely unknown encryption algorithm for your entire future mobile network may create a major opportunity for hacks and surveillance operations by those who know their way around encryption, as intelligence services usually do.
[01:04:48] So, as I look at this and think about what Russia is doing, it occurs to me that one of the most important lessons taught by the Industrial Revolution is the incredible power that comes from standardization.
[01:05:09] If, for example, every country had their own screw thread standard, then nuts and bolts would be incompatible with one another, and it would be necessary for a shop to redundantly stock a separate supply of bolts for every country of origin. Weren't train gauges, uh, incompatible for a long time? Another great example. Yes. Yes. It's a little hard to go from country to country.
[01:05:39] Right. And, Leo, think about it: as it is, we do have separate metric and imperial threading, and look what a mess that creates. Yep. Just both sockets. Yep. Yes. Just that. So, you know, another example of standards failing when they differ is, as our Picture of the Week showed, AC outlet plugs around the world.
[01:06:04] Although I'll admit that they have provided a terrific supply of Pictures of the Week for this podcast. Um, but that further demonstrates the failure, right? So my point is that the standards the world has agreed to, the standards surrounding the Internet, Ethernet, and USB all being examples, have resulted in astonishing economies.
[01:06:30] Thanks to their interchangeability and interconnectivity, which we get free of charge simply by choosing not to roll our own. So I think this clearly demonstrates the insanity of what Mother Russia is choosing to do.
[01:06:47] Um, after 2032, what, five years from now, Russian citizens will likely be stuck with Russian-made Android smartphones with god-awful hardware and no choice in the matter. If they want 5G, they've got to use their Russkie phone or, you know, Russian phone. That's right. In Soviet Union, Russian phone call you.
[01:07:18] And that's not progress. That's hysterical. Wow. Of course, speaking of standards, some people drive on the left side of the road, some people drive on the right side of the road. Oh, boy, have someone drive you if you're in one of those countries, because every instinct you have is wrong. Merging on the freeway is really tough for me anyway. Yeah. Okay.
[01:07:42] So speaking of Russia, it seems that Russian intelligence services somehow arranged to install some spying hardware into a Ukrainian drone factory. Now, the device was embedded inside a thermostat, but it did much more than control the room's temperature. Wow. Since it included a camera, a microphone and a little router.
[01:08:11] The story becomes even more interesting and fun, though, when we learned that the Ukrainian drone maker Tech X was fully aware of the device before it was installed, thanks to a warning which they received from Ukrainian intelligence services.
[01:08:29] So the Russian surveillance device was installed, after which Tech X worked with Ukrainian intelligence to supply a constant stream of disinformation, which was regarded as highly trusted because the Russian spies were certain that nobody knew about it. So why would they be making stuff up in front of the thermostat? Oops.
[01:09:00] That's smart. Love it. Okay. Okay. So last year, as we recall, we had some fun looking at a very clear demonstration of the claims being made about quantum factorization.
[01:09:15] Remember that, you know, the threat posed by the emergence of practical quantum computers is that they may be able to solve the prime factorization problem upon which rests all of the cryptographic security provided by the invention of RSA-style public key crypto.
[01:09:37] Last year, it appeared that the world had a much longer way to go than was assumed because that takedown of all of the progress that was being claimed, which we examined carefully, convincingly revealed that not a little bit, but a lot of sleight of hand had been going on behind the scenes with the use of, for example, highly contrived factorization.
[01:10:06] Factorization targets. Now, Google appears to disagree. Or perhaps they're just taking the better-to-be-safe-than-sorry approach. The news of last week is that Google has moved what they call the so-called Q-Day to 2029, only three years from now.
[01:10:29] Google expects threat actors to break classic public key encryption using quantum computers by the end of this decade. Okay. You know, they've introduced a 2029 timeline to secure their products; that is their deadline to finish securing their products with post-quantum crypto, PQC, protections.
[01:10:57] Both Chrome and Google Cloud already have PQC post-quantum crypto protections in place, and Android is getting them later this year. We also know that Apple and Signal have both already added post-quantum crypto to their messaging platforms. In addition, Cloudflare, AWS, Azure, Meta, and Zoom all have PQC in place today.
[01:11:25] Plus, TLS version 1.3, the current and latest version of TLS, is already capable of negotiating post-quantum crypto encrypted connections. And Cloudflare tells us that more than half of all the traffic moving through Cloudflare is now already quantum safe. So, you know, hats off.
[01:11:53] You know, we've been covering this move and the need to move toward quantum safety for years now with the cryptographers getting to work on post-quantum algorithms way before it seemed that we had a problem. I still think it's way before we have a problem, even now, based on, you know, the real evidence that we've seen. But, hey, our chips have the power.
[01:12:23] Our processors have the power. No reason not to do dual quantum or dual crypto schemes where you encrypt with both a pre and a post-quantum crypto to be safe. And in that case, I don't know what the NSA is going to do with all that data that they've been sucking down, Leo.
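That dual-scheme approach is usually called hybrid key exchange, and it's worth a quick sketch. Each side runs both a classical and a post-quantum key exchange, and the two resulting shared secrets are fed together into one key derivation step, so an attacker has to break both to recover the session key. This Python sketch uses stand-in byte strings for the two secrets and a simple HMAC-SHA-256 derivation with a made-up label; a real deployment would get the secrets from, say, X25519 and ML-KEM, and use the protocol's own KDF:

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from two independent shared secrets.

    Concatenate the classical and post-quantum shared secrets and run
    them through a KDF step (here, HMAC-SHA-256 keyed with a fixed
    label), so recovering the session key means breaking BOTH
    exchanges, not just one.
    """
    ikm = classical_secret + pq_secret
    return hmac.new(b"example-hybrid-v1", ikm, hashlib.sha256).digest()

# Stand-in secrets; real ones would come from e.g. X25519 and ML-KEM.
session_key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32)
```

Change either input and the derived key changes completely, which is the whole point: the session key stays secret as long as either one of the two underlying exchanges remains unbroken.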
[01:12:43] I mean, I guess historically, the older pre-post-quantum crypto communications, they could decrypt if it's still of any value. It's going to be old, yeah. Yeah, really old. Yeah. Do you really think that quantum is going to happen by 2039? No, I don't. I do not. I do. Well, Google is saying 2029. I'm sorry, 2029. Yeah. I just don't see.
[01:13:12] To me, it doesn't look like we're even close. And it's not as if you're able to break down the prime factorization problem into smaller pieces. If you could, we would have. Right. You know, we would have already decomposed it into something that classic computers can solve. It's intractable right now.
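Just to underscore how unimpressive the celebrated quantum factoring demonstrations have been: the numbers involved are so small that naive classical trial division dispatches them instantly. A quick sketch:

```python
def factor(n: int) -> list[int]:
    """Naive trial-division factorization: fine for tiny numbers,
    hopeless for real RSA-sized moduli."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is prime
        factors.append(n)
    return factors
```

Here `factor(3233)` returns `[53, 61]`, the flavor of demonstration-sized semiprime these claims have targeted, while a real RSA-2048 modulus has 617 decimal digits and is utterly beyond this loop, and, so far as the public evidence shows, beyond any quantum computer as well.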
[01:13:35] So, you know, if they're jumping up and down about factoring 31 and then we find out they cheated, it's like, OK, I have a hard time getting worked up about this. You know, I might get surprised. OK. OK. It's prudent to have post-quantum crypto available. Why not? I don't see any reason not to. Why not? Exactly. It doesn't cost us anything at this point. Our chips are fast enough. We've got the algorithms.
[01:14:03] And we're not just using them. We're using both. So if a problem is found in either one, the other one protects us. So why not? Right. Okay. So last Tuesday, during the annual RSA Security Conference, where you and Lisa were present to hobnob with many of the podcast network's supporters,
[01:14:32] the CEO of the UK's NCSC, which is the UK's cybersecurity agency, spoke to the conference. The publication, The Record, wrote about his presentation.
[01:14:47] What they said was, Britain's National Cybersecurity Center warned Tuesday that a rise in so-called vibe coding could reshape the software as a service industry while introducing new cybersecurity risks if organizations fail to adapt.
[01:15:09] The warning, they wrote, coincides with remarks by NCSC Chief Executive Richard Horne at the RSA Conference in San Francisco, where he urged security professionals to ensure AI coding tools become a, quote, net positive for security, unquote.
[01:15:29] Highlighting again how digital societies are facing a surge in cyber attacks exploiting classes of software vulnerabilities that are known about and can be fixed, Horne said there was a risk AI tools would simply propagate the production of insecure software.
[01:15:53] His comments followed a sharp market sell-off in shares of software and cloud companies in February, driven by investor concerns that vibe coding, a term used to describe software developed using AI tools and minimal human input, could reduce demand for subscription-based software-as-a-service, you know, SaaS, platforms. During his speech, Horne said, quote, the attractions of vibe coding are clear.
[01:16:23] Disrupting the status quo of manually produced software that is consistently vulnerable is a huge opportunity, but not without risk of its own. The AI tools we use to develop code must be designed and trained from the outset so that they do not introduce or propagate unintended vulnerabilities.
[01:16:48] In a blog post published alongside the speech, the NCSC itself said, advances in AI-assisted software development are already changing how organizations approach writing code, potentially setting the stage for significant disruption of the SaaS model over the next few years. And I'm going to be talking about that as soon as I finish with this, because I think this is really interesting.
[01:17:14] They said, describing the February sell-off as a billion-dollar wobble and referencing the term SaaSpocalypse. Yes, the SaaSpocalypse.
[01:17:26] The agency cited anecdotal examples of developers using AI tools to build replacements for SaaS products in a matter of hours, particularly in response to rising subscription costs or feature restrictions.
[01:17:45] The SaaS industry has dominated enterprise IT by offering subscription-based access to software while offloading infrastructure, maintenance, and security to vendors. We've talked about this, all this outsourcing that is now being done.
[01:18:05] In its blog post on Tuesday, less than a week ago, the NCSC said this dynamic could shift as AI tools make it faster and cheaper to build bespoke enough software in-house, driven by the same business incentives that triggered the original rise of SaaS companies themselves and the early uptake of cloud computing.
[01:18:30] But it warned that AI-generated code can be unreliable, difficult to maintain, and prone to security flaws, increasing the chance that vulnerable systems could be deployed if those behind the vibe-coded systems were too tolerant of the risks.
[01:18:51] The NCSC urged organizations to prioritize security as the technology develops, including ensuring AI systems generate secure code by default, verifying the integrity of models, and expanding the use of automated code review and testing. The blog post stated, quote,
[01:19:13] If security professionals do not lean in from the start, the landscape will evolve without this crucial input, as was arguably the case in the early years of cloud adoption. A challenge the security community will face is that no one yet knows exactly what we need to introduce to ensure the vibe-coded future is a safer one.
[01:19:40] If we face this challenge head-on from the start, we have a chance to introduce some strong security fundamentals, unquote. The article finishes saying the NCSC said any disruption to SaaS is likely to take place over several years, with adoption varying depending on system complexity and organizations' risk tolerance.
[01:20:08] But the agency said it could easily imagine that the only companies in the sector that will survive will be those that cannot be easily replaced with a vibe-coded alternative, perhaps because their services have themselves become critical to a business, or there are regulatory requirements they meet, or they simply have a critical mass of data across customers. Okay, so there are a number of stakeholders. Interesting takeaways here, I think.
[01:20:39] The first is the obvious, when you look at it, you know, threat that vibe-coded replacements for software as a service represent. Okay, think about it.
[01:20:52] Why would any large enterprise rent under an expensive recurring subscription what a handful of their in-house coders could whip up overnight using the benefit of vibe-coding AI to create bespoke software that more perfectly fits their needs? I hadn't really stopped to consider it before now.
[01:21:19] But this entire world of outsourced service industries that have sprung up over the past decade are hugely vulnerable to the emergence of DIY homegrown in-house coded alternatives that vibe-coding now makes so easy to create.
[01:21:43] You know, remember we heard that C-suite executive saying, quote, we're only going to hire someone new if you first demonstrate that AI cannot do their job, unquote.
[01:21:58] So it's not much of a stretch to imagine a similar executive asking, why should we be paying tens of thousands of dollars per month to this annoying outsource company?
[01:22:15] When a couple of our guys in-house coders can use AI to write the same thing that we will then own, can customize to work exactly the way we want, and can use going forward without any recurring cost. No more subscription fees. So it does seem pretty clear that this is going to be an accelerating trend in the future.
[01:22:41] But the other shoe to drop was this NCSC CEO's primary concern, which was the threat that carefully created, refined, and secure SaaS solutions, which is what we have today from third parties who wrote these things, you know, 10 years ago, carefully and with human programmers and coders,
[01:23:10] and who have since worked all the bugs out, but they're not free, would be too hastily replaced by half-baked, unproven, and insecure vibe-coded clones. One of the tendencies we have seen over and over is that security will truly be sacrificed at the altar of economics.
[01:23:37] Why is everyone pulling and blindly using libraries from open source repositories? Well, because they appear to work and solve a problem. And the price is right. It's zero. But the truth is that everyone is just holding their breath and hoping for the best, right? Hoping that this library they pulled isn't malware. No, it doesn't seem to be. Nobody else says it is. So, okay.
[01:24:07] But that's not the way security is obtained and maintained. On the other hand, it doesn't cost anything. It's free. It's going to be very interesting, Leo, to see what happens as enterprises develop, you know, for in-house use, the various systems that they've been outsourcing because you know they're going to. And I overall think it's probably going to be a win. But I imagine there will be a few stumbles along the way.
[01:24:37] It's not like, you know, the SaaS software you buy from these big companies is necessarily secure, robust, and reliable. We talk about all the problems they have all the time on this show. So that's true. You're trusting somebody unless you write it yourself. And nobody can afford to write it themselves. Doing it in-house is really hard.
[01:24:58] But, you know, yes, but Vibe offers the opportunity of a bunch of programmers saying, okay, Claude, you know, here's what we need. We need a customer relations management system, and it needs to take our database, and here's the schema, and we want to have this UI, and blah, blah, blah, and presto, bango, you got an app. I mean, that's Claude.
[01:25:28] Yeah. I've been very tempted to write a sales system for Twit. We had it. It was written by a low-level employee many years ago in .NET, and he knew what he was doing, I guess. But when he left, he said, I'm not maintaining it, so you're on your own. So it has little bugs, like two people who can't use it at the same time where it crashes, and you have to have a hard reboot and stuff.
[01:25:52] And that's often the case with your typical bespoke home in-house software. Yep. It got written. We were using for – well, actually, we only just retired. We call it Dino Database. I don't know why. You probably wrote it yourself, though, right? Actually, it was the only coder who ever wrote any code that we actually used. A brilliant guy named Steve Rank. Yeah.
[01:26:22] And he wrote it in dBase II, which we then moved to – FoxPro, probably. FoxPro, exactly. And Sue, as recently as until the release of 6.1, would look old customers up on – FoxPro. Yes, it worked great. Sure. I wrote a lot of dBase II software in my time for the radio station I worked at. Yep. Yeah, no.
[01:26:50] And that's not even really coding because it's just a database, and you're writing a front end to a database, really. But – True. Yeah. Yeah. I don't know. But a lot of what this – Go ahead. I'm very bullish on what cloud code can do, but obviously, you know, it may introduce errors. And then pulling these libraries is nowadays really risky. There are solutions, though. People have found solutions.
[01:27:15] One guy said, well, look, just pin the version until – say you can't download it until it's been out for a week. Yes. And presumably, somebody will have caught on by then, right? Yes. And that was a problem with LiteLLM: it was not pinned. And so everybody grabbed the latest. Let's take a break, and then we're going to look at an update on the ClickFix campaigns. Okay. Because that's still bad. Well, and yeah, I'm curious what you think of what Apple's kind of sort of solution was, which I thought was interesting. Anyway, we'll talk about that in a second.
[01:27:44] Our show today brought to you by GuardSquare. Do you do mobile apps? Are you a mobile app developer? Mobile apps today have become an inescapable part of life, but your users are really trusting you, aren't they? I mean, financial services, healthcare. You know, I have an app for my doctor. I have an app for my medications. I have an app for my banks. There's retail. There's entertainment. Users are trusting these apps with their most sensitive personal data.
[01:28:14] You better be secure. Sure. But a recent survey showed that 72% of organizations experienced a mobile application security incident last year. 92% of respondents reported rising threat levels over the last two years. And man, attackers are clever. They're not just, you know, hitting a library you use. They actually, in some cases, are really getting ingenious. One trick they have now, they take your app. They reverse engineer it.
[01:28:44] They use tools like Hydro. We've talked about that. They repackage it. It looks exactly the same as your app, except there's a little code in there. They distribute it via, you know, phishing campaigns and side loading. It's loading third-party app stores. And it looks, your users think they're using your app. Who gets the blame when it turns out your app, your quote, I put that in air quotes, app is sending out their personal information. You do. It's a different world.
[01:29:13] By taking a proactive approach to mobile app security, you can stay one step ahead of these attacks and maintain the trust of your users. Because the reputation hit comes to you. It doesn't go to the bad guy. That's where GuardSquare comes in. GuardSquare, this is what they do. They do mobile app security without compromising, providing advanced protections for both Android and iOS apps. And it's not just, you know, analyzing your code base.
[01:29:40] It also has, they have automated mobile application security testing to find vulnerabilities. But they also have real-time threat monitoring. So you gain insight into attacks. You'll know if somebody's modified your code and redistributing it. Discover more about how GuardSquare provides industry-leading security for your mobile apps. You need this at GuardSquare.com. That's GuardSquare.com. We thank them so much for supporting Steve and the work he's doing.
[01:30:09] And I think it's worth a visit. Find out what they can do for you. GuardSquare.com. Steve? So last Wednesday, Recorded Future posted the results of one of their threat forensics groups that was looking closely at the insidious click-fix social engineering attacks. As they wrote elsewhere in describing the nature of these attacks, they said,
[01:30:37] First documented in late 2023, click-fix has transitioned from a niche social engineering tactic to a cornerstone of the global cybercriminal ecosystem.
[01:30:54] Click-fix is a social engineering methodology that lures victims into manually executing malicious commands by masquerading as a necessary technical resolution for fabricated system errors or human verification prompts. Which I think perfectly sums up, describes the nature of this.
[01:31:18] Okay, so here's what we learn from Recorded Future's Insikt, I-N-S-I-K-T, Group. They write, Insikt Group identified five distinct clusters leveraging the click-fix social engineering technique to facilitate initial access to host systems.
[01:31:40] Observed since at least May of 2024, these clusters include those impersonating the financial application Intuit QuickBooks and the travel agency Booking.com. Insikt Group leveraged the Recorded Future HTML content analysis dataset, which enables systematic monitoring of embedded web artifacts to identify and track new malicious domains and infrastructure.
[01:32:10] So basically, this is their, you know, cyber forensics system. They said the clusters demonstrate significant operational variance in lure themes and infrastructure patterns.
[01:32:26] And highlight the technique's evolution moving past simple verification by visually fooling victims with various fake challenges and demonstrating technical sophistication through operating system detection to tailor execution chains.
[01:32:42] Despite these structural differences, its operation is largely the same, showing that click-fix's core techniques work across platforms and only the social engineering lure needs to be adapted to the victim.
[01:32:59] Threat actors manipulate victims into executing malicious, obfuscated commands directly within native system tools like the Windows Run dialog box or Mac OS terminal. This living off the land approach allows malicious scripts to execute in memory, effectively bypassing traditional browser security and endpoint controls.
[01:33:26] Parallel clusters targeting sectors as diverse as accounting, real estate, and legal services indicate that click-fix has transitioned into a standardized high-ROI template, you know, high return on investment template, for both cybercriminal and potentially advanced persistent threat, APT, groups.
[01:33:52] To protect against these threats, security defenders should move beyond simple indicator blocking and prioritize aggressive behavioral hardening.
[01:34:05] Key recommendations include disabling the Windows Run dialog box via group policy objects, implementing PowerShell Constrained Language Mode, CLM, and operationalizing digital risk prevention tools, such as Recorded Future's Malicious Websites, to identify and mitigate threats to your digital assets.
[01:34:29] Based on increasing use since 2024, Insikt Group assesses that the click-fix methodology will very likely remain a primary initial access vector throughout 2026 as threat groups continue to social engineer victims to enable exploitation.
[01:34:53] Looking ahead, Insikt Group anticipates click-fix lures will become increasingly technically adaptive, incorporating more selective browser fingerprinting while continuing to use infrastructure that can be built and dismantled quickly.
[01:35:11] In addition to technical refinements, Insikt Group predicts that the social engineering component will continue to evolve, leveraging new techniques to lure victims into executing malicious commands. Okay, well, we all know how annoyed I am with Microsoft.
[01:35:33] This entirely preventable, you know, detectable and preventable vulnerability is now three years old and its use has been accelerating rapidly to the point that this family of readily blocked exploits, as we learned a few weeks ago, now accounts for more than half of all security breaches.
[01:35:59] Just one technique, more than half. That's how effective it is. It is that effective. Exactly. Everybody is going to fall for it unless they have some savvy and are like, wait a minute, why, to confirm that I'm human, am I opening the Windows Run dialog, pasting this string in, and then hitting Enter? That's so bad.
[01:36:28] But again, most people are just script followers. I mean, most Windows users don't really know how Windows works, right? I mean, I hear Paul saying the same thing. So, by comparison, our listener Jeff Adamson sent a note Friday with a link to a story over at apple.gadgethacks.com with the headline,
[01:36:52] macOS 26.4 adds Terminal paste prompt to block pastejacking. You think it's specifically aimed at ClickFix? Yes, it is. It is exactly aimed at it. And so they called it pastejacking. Well, pastejacking is the term used in the headline. That's another name for ClickFix.
[01:37:18] Whenever a macOS user attempts to paste a suspicious string into Terminal, an intercept dialog will be displayed to caution the user about the possible implications of what they're attempting to do. The dialog reads, possible malware paste blocked. And it says, your Mac has not been harmed.
[01:37:47] Scammers often encourage pasting text into terminal to try and harm your Mac or compromise your privacy. These instructions are commonly offered via websites, chat agents, apps, files, or a phone call. And then you've got two options. Don't paste is what's highlighted and recommended or paste anyway.
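The kind of intercept Apple describes can be approximated with a simple heuristic over the pasted text. This is a toy sketch, not Apple's implementation: the patterns below are a few well-known ClickFix hallmarks, nothing like a complete signature set, and the function name is our own.

```python
import re

# A few illustrative hallmarks of ClickFix-style payloads. Real detection
# would need a far larger, maintained signature set plus behavioral checks.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell\s+.*-(enc|encodedcommand)\b", re.I),  # encoded PowerShell
    re.compile(r"mshta\s+https?://", re.I),                       # mshta remote loader
    re.compile(r"curl\s+[^|]+\|\s*(ba)?sh", re.I),                # curl piped to a shell
    re.compile(r"iex\s*\(", re.I),                                # Invoke-Expression
]

def looks_like_clickfix(pasted: str) -> bool:
    """Return True if the pasted string matches any known-bad pattern."""
    return any(p.search(pasted) for p in SUSPICIOUS_PATTERNS)
```

A terminal or run-dialog front end could gate its paste handler on a check like this and show exactly the kind of "don't paste / paste anyway" prompt Apple added.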
[01:38:13] So that's, you know, a nice like stop sign that comes up and says, whoa, no, don't just follow these instructions. Let's, you know, double-check, you know, think about this for a second. So, you know, I don't suppose that Windows 10 users, who still comprise one quarter of the Windows desktop population, will ever see Windows' behavior change. Right. Microsoft has moved on.
[01:38:42] But it would sure be nice if Windows 11 users could have this simple exploit prevented by Microsoft caring, which is all it takes. A little bit of care from Microsoft as Apple has just demonstrated they do because this is so easy to fix.
[01:39:07] I also wanted to highlight the tips near the end of the recorded future article that talked about available mitigations for this under Windows. The mitigations of disabling the Windows run dialog box via group policy objects and implementing PowerShell constrained language mode. You know, that won't help the general Windows population because, you know, they're just using Windows at home.
[01:39:34] But within any enterprise, I would jump on both of those immediately. You know, unless an enterprise IT staff know that the Windows Run dialog box is needed, and I'm not sure why it would be, disable it. The good news is you could turn it off for all of your Windows users inside an enterprise and immediately shut down this easiest avenue of attack.
[01:40:03] And then also constrain what you can do with PowerShell so that it's basically neutered. I mean, what we see here, the problem, is that over time, just like has happened with our iPhones, Windows has gotten incredibly complicated. I mean, it still has everything it ever had. And they just keep adding more stuff.
[01:40:29] And most users just want to run an app to, you know, open Word or open email. You know, they don't want or need all this other crap, and they don't know what it is. Right. And it is all dangerous. I don't know, Leo.
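For reference, the "disable the Run dialog" mitigation corresponds to the NoRun registry value that the Group Policy setting "Remove Run menu from Start Menu" controls. Here's a hedged sketch that emits an equivalent .reg file; in a real enterprise you'd deploy this via Group Policy rather than importing files by hand, and Constrained Language Mode is separately enforced through WDAC or AppLocker policy, not a registry one-liner.

```python
# Illustrative sketch: the "Remove Run menu from Start Menu" Group Policy
# setting maps to the NoRun value under the Explorer policies key.
NO_RUN_REG = r"""Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoRun"=dword:00000001
"""

def write_no_run_policy(path: str) -> None:
    """Write a .reg file that disables the Windows Run dialog when imported."""
    # .reg files conventionally use CRLF line endings on Windows.
    with open(path, "w", newline="\r\n") as f:
        f.write(NO_RUN_REG)
```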
[01:40:49] Meanwhile, Reddit has been detecting a growing prevalence of AI posting bots on their site and may need to resort to various proof-of-humanity measures moving forward. And Reddit users are not happy. PCMag provided the details under their headline, Reddit could soon require Face ID to prove you're not a bot.
[01:41:21] They wrote Reddit, like practically every other social media platform, has been struggling as of late with a deluge of bots and AI generated content. In a study from last year, roughly 15 percent of posts on the platform were found to be AI generated. OK, so that's more frequent than one out of every seven Reddit posts.
[01:41:49] They wrote, now it may soon start experimenting with asking users for biometric data like Face ID or Touch ID or other forms of passkey technology to stem the tide of bots.
[01:42:04] In an interview with the TBPN podcast, first spotted by Engadget, Reddit's CEO Steve Huffman said, this tech is the most lightweight way, that's his quote, to ensure all users are human. Huffman indicated the platform may use decentralized third party information providers. Oh boy. To verify users' personal details.
[01:42:34] You know, we've recently been talking about all of the uses to which residential proxies could be put. Bouncing AI bot traffic through such residential proxies makes detecting and blocking them based upon IP address impossible. You just look like any random user, like spread around the globe.
[01:42:57] So, yes, some sort of logon time verification is needed. But we all know the potential downside of using any third party identity verification system. PCMag continues writing, Steve Huffman told the podcast hosts, part of the promise to users is we don't want to know your name. But we do need to know that you're a person.
[01:43:26] In 2026, they write, bots are an existential risk to online platforms. Content aggregator Digg, which was in beta ahead of its comeback, was recently forced to pause operations and lay off staff in response to the horde of bots on its platform. Meanwhile, the ability of bots to influence the discourse on Reddit has already been demonstrated.
[01:43:54] In April of 2025, researchers from the University of Zurich secretly deployed AI-powered bots to influence debate in a subreddit called ChangeMyView, with bots pretending to be a rape victim, a black man who is opposed to the Black Lives Matter movement, and someone who, quote, works at a domestic violence shelter, unquote.
[01:44:21] Reddit founder Alexis Ohanian said his website using Face ID was not something that he had on his bingo card, but argued that something has got to be done about all the fake, botted content. I don't know when that interview was, but Alexis Ohanian and Kevin Rose recreated Digg. Yeah.
[01:44:49] And had to shut it down last week. Yep. Because of the bots. Oh, my God. It's a nightmare out there. It really, really is. And Leo, when you add AI to the mix and hundreds of thousands of residential proxies that the AI bots are able to bounce their traffic through, you cannot detect them. Yeah.
[01:45:16] I mean, we have an undetectable bot problem. Yeah. So many Reddit users have already expressed grievances with the move, with one user saying, tell me you want to kill Reddit without telling me, meaning this kills Reddit if you start requiring people to de-anonymize themselves. Yeah. So the article finishes saying,
[01:45:44] Reddit would not be the first anonymous platform to start requesting that users provide biometric data to prove who they are. For example, Discord earlier this year started to demand that some users provide face scans so its AI tool could determine if they were over 18 as part of efforts to keep minors off the platform. And as we know, it wasn't Discord's AI tool. They farmed it out, and those people got hacked.
[01:46:09] And 70,000-some users', you know, personal private information got loose. So, I mean, Leo, there's no solution. I mean. Yeah. I'm sympathetic. I don't know what these guys are going to do. I mean, I think there's a lot of, when I'm on Reddit, half the time I see a post, there's somebody who'll say, that's AI. Stop using AI. You're using AI.
[01:46:37] And I don't know if it's obvious that it's AI or not. If you use bullet points in a post, AI. If you use certain words, AI. And I don't know if that's true or not. I don't know how you would know. And it may once have been, but AI is a moving target. I mean, if it's doing something that is getting it called out as AI, it's going to change its behavior. And I use bullet points and dashes.
[01:47:05] And occasionally I'll use the word delve. That doesn't mean I'm AI. So, I mean, the problem is as AI gets better and better, it looks more and more like average content. That's the whole thing. We have, this is a problem that has no solution. And I don't say that often. I mean, I spent seven years devising a solution for online identity authentication because I thought there was one. You know, Squirrel was that. But I don't see a solution here.
[01:47:33] I do not see a solution. And you're pretty ingenious when it comes to that stuff. That's my point is I'd be like saying, well, you know, we could do this or that. No, I don't see a solution. Somebody fed the Declaration of Independence to an AI detector. And it said, well, about 93% chance that's AI written. I saw that. Now, wait. Wasn't that 1776? And I don't think that we really. We didn't have AI back then. But that Thomas Jefferson, I don't know. Actually, they said it was 98%.
[01:48:04] 98% AI generated. Certain that it was AI. Yeah. This is the problem is that these AI detectors aren't really any good. Right. You know, that's they don't. We can't detect AI. And I think a lot of people assume that something's AI when it's not on Reddit. I mean, how do you know it's one in seven? It could be one in two or it could be one in a thousand. It's just you don't know.
[01:48:31] And I think it's way too draconian to say, okay, well, from now on, everybody has to give us a driver's license before they post. That will kill Reddit. A lot of what Reddit's all about is anonymity. Yeah. Not for any nefarious purpose. No, it's just that's what people want on the Internet. We know that. They want to be able to say what they want to say without being held personally responsible. I'll give you a completely innocuous example. I hope I'm not. I think she said this on the show.
[01:48:57] Paris Martineau is a fan of reality TV shows and she moderates a reality TV show subreddit, but she doesn't want that to be in her real name. That's a guilty secret. She should be able to do that privately without revealing that, you know, I'm she. You know, I should be able to, you know, say that I like, you know, leather boots without having to admit it in public. Wait a minute.
[01:49:27] I mean, I didn't mean to say that. That was a mistake. Do you want to take a break right now, Steve? Yep. And then we're going to look at LiteLLM and what happened last week. Oh, I can't wait to hear about this. The bullet we dodged. Oh, boy. Still two breaks to come, Steve. So at some point you might want to pause in the middle of that. Yep. I'm going to. Our show today is brought to you by Meter. They were at RSAC. I didn't get a chance to interview them and I was really looking forward to seeing all their gear.
[01:49:56] They make network gear that is sweet. Meter was founded by a couple of network engineers who feel your pain if you're a network engineer in a business and you're trying to get that internet to work. And people yelling at you and hounding you. And, you know, it's a challenging environment. They know it better than anyone. They realize the solution is for us to control the whole stack.
[01:50:21] This is what, you know, Apple did to their great benefit was, no, we're going to build the whole thing. That's what Meter does. It is a company building better networks and building them from scratch. If you're a network engineer, I don't have to tell you. The headaches, the legacy providers, the inflexible pricing. And, of course, you never have enough money. IT resource constraints stretch you thin. You've got complex deployment across fragile and fragmented tools.
[01:50:49] You know, your mission, you network engineer, you're mission critical to the business. But you're forced to work with infrastructure that just wasn't built for today's demands. They thought, you know, a T1 is all anyone should ever need. That's why businesses are switching to Meter because it's the modern solution designed to solve your pain points. Meter delivers full stack networking infrastructure, wired, wireless, or cellular. It doesn't matter. It works with all three. It's built for performance.
[01:51:18] It's built for scalability. It's built for you. Meter does it all. They design the hardware. They write the firmware. They build the software. They manage deployments from beginning to end. They even provide support afterwards. So you're never on your own. And there's always one number to call, which is really great. Meter offers everything. They'll do ISP procurement for you. Security, routing, switching, wireless, firewall, cellular. They'll help you with power. That's critical too, right? DNS.
[01:51:48] They'll help you secure your DNS. VPN, SD-WANs, and multi-site workflows. Actually, I was talking to Meter. That's a huge pain point. And it happens all the time. A company acquires another company or maybe get some warehouses that were built out by somebody else. And now you've got to integrate that network into your network. And it's 40,000 square feet. And the wireless doesn't work in all the corners. And it doesn't work with your network. And it, oh my gosh, but there is a solution. Meter.
[01:52:17] Meter's single integrated networking stack scales. And they are in everything. The most challenging environments. Major hospitals. Hospitals are a terrible place for Wi-Fi, right? Branch offices. Warehouses. Those giant warehouses. Large campuses. Even data centers. In fact, speaking of Reddit, that's who does Reddit's data centers. That's a pretty good recommendation. Here's another one. The assistant director of technology for Webb School of Knoxville. This is a direct quote.
[01:52:45] He said, we had more than 20 games on our campus going on at the same time between our two facilities. Each game was streamed via wired and wireless connections. And the event went off without a hitch. We could never have done this before. Meter redesigned our network. Doesn't that sound like heaven? With Meter, you get a single partner for all your connectivity needs from first site survey
[01:53:10] to ongoing support without the complexity of managing multiple providers or tools. You'll lose a lot less here. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility, reimagining what it means for businesses to get and stay online. Meter's built for the bandwidth demands of today and tomorrow. And we are in a very fast changing world and you need a partner like Meter.
[01:53:39] We thank Meter so much for partnering with us. And we invite you to go to meter.com slash security now. Book a demo. M-E-T-E-R dot com slash security now. Book that demo. Be glad you did. And now, back to Steve. Okay. We're going to look at LiteLLM. And we will take our last break here before we finish this. So, okay.
[01:54:06] So, let's start by answering the question, what is LiteLLM, and why would we want it? It was initially backed by Y Combinator. And the LiteLLM page over at Y Combinator describes their project. They said, LiteLLM is an open source LLM gateway with 18K-plus stars on GitHub.
[01:54:32] Now, that's over 41,300. So, yes, very popular. And they wrote, trusted by companies like Rocket Money, Samsara, Lemonade, and Adobe.
[01:54:45] LiteLLM provides an open source Python SDK and Python FastAPI server that allows calling 100-plus, more than 100, LLM APIs, Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic, and on and on and on, in the OpenAI format.
[01:55:11] They said, we've raised a 1.6 million seed round from Y Combinator, Gravity Fund, and Pioneer Fund.
[01:55:18] Over at GitHub, the about paragraph for LiteLLM says, Python SDK, proxy server, parens LLM gateway, to call more than 100 LLM APIs in OpenAI or native format with cost tracking, guardrails, load balancing, and logging.
[01:55:41] Then they enumerate some: Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic, SageMaker, HuggingFace, vLLM, NVIDIA NIM. Okay, so the LiteLLM site itself largely echoes this and highlights a couple testimonials.
[01:56:02] It quotes David Lean, a Netflix staff software engineer, who says of LiteLLM, it has let my team provide the latest LLM models to our users, usually within a day of them being released. Without LiteLLM, this would be hours of work each time a new model is announced. It means we don't have to transform inputs and outputs across providers and has saved us months of work.
[01:56:30] And Mark Holtnuck, a principal architect of generative AI platforms over at Lemonade, says, Our experience with LiteLLM and Langfuse at Lemonade has been outstanding. LiteLLM streamlines the complexities of managing multiple LLM models. Okay, so I think everybody gets the idea, right?
[01:56:53] With the general chaos that currently reigns across the AI domain, with new models appearing daily, pricing varying, and today's top dog latest and greatest, you know, being tomorrow's, you're not still using that, are you?
[01:57:12] At the same time, everyone is in a frenzied, frothing, and frantic rush to mark out and claim some territory in whatever this is all going to eventually wind up being.
[01:57:24] Essentially, you know, with LLMs being the hottest, fungible, commercially tantalizing mystery that humankind has ever created, the last thing anyone wants to be is locked in to yesterday's less glamorous, you know, now-underperforming or overpriced model.
[01:57:50] So, to their credit, the guys at LiteLLM, the guys who created this idea, were very quick to see a need and an opportunity. They created what is essentially a universal large language model API translator that allows front-end developers to code to a single fixed model, the one originally developed by OpenAI by default,
[01:58:18] and the LiteLLM proxy shim would allow any other model to be swapped in behind it on the back-end without needing to recode any of the front-end. The famous Codecademy folks have a page titled, What is LiteLLM and how to use it?
[01:58:40] Where they write, LiteLLM is an open-source Python library that acts as a unified interface for large language models. It allows us to connect with multiple AI providers, such as OpenAI, Anthropic, Google Gemini, Mistral, Cohere, and even local models through Ollama, all using a single standardized API.
[01:59:04] Working with multiple LLMs results in juggling different API formats, authentication methods, and SDKs. Is that the ice cream truck? Oh, no. Is that Lori? That is my lovely wife who has forgotten that I'm in the middle of a podcast right now, and she just put her hands over her. Hi, Lori! Hi, Lori!
[01:59:38] Oh, she hung up. So anyway, they said this usually requires code rewrites, new dependencies, and manual adjustments. LiteLLM resolves this by acting as a bridge between the application and major LLM providers, letting you manage requests, responses, and errors consistently.
[02:00:03] So, you know, basically, it's a big, you know, switching hub that decouples what you're doing on the application end using a large language model from whichever large language model you want to use. So, it's kind of a no-brainer, right? Like, why would you not want to use this? They've been working on it since the winter of 2023.
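To make the "switching hub" idea concrete, here's a toy sketch of the pattern, not LiteLLM's actual code. The adapter names and canned replies are entirely made up; the point is just that the front-end codes against one fixed OpenAI-style call while back-end models swap freely behind it.

```python
def to_openai_reply(text):
    """Wrap raw provider text in an OpenAI-style response shape."""
    return {"choices": [{"message": {"role": "assistant", "content": text}}]}

# Hypothetical per-provider adapters; a real gateway would translate the
# request into each vendor's native API and translate the reply back.
ADAPTERS = {
    "openai/gpt-4o":      lambda messages: to_openai_reply("openai says hi"),
    "anthropic/claude-3": lambda messages: to_openai_reply("anthropic says hi"),
}

def completion(model, messages):
    """Single fixed entry point: the front-end never changes, only the
    model string does."""
    try:
        adapter = ADAPTERS[model]
    except KeyError:
        raise ValueError(f"no adapter for {model!r}")
    return adapter(messages)

# Swapping providers is a one-string change for the caller.
reply = completion("anthropic/claude-3", [{"role": "user", "content": "hello"}])
print(reply["choices"][0]["message"]["content"])
```

That decoupling is the whole value proposition, and also why a gateway like this ends up holding everyone's API keys.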
[02:00:27] And, boy, as you might imagine, the challenge, I don't want the job, of supporting an exploding number of individually varying and evolving AI LLMs, each with their own API, requires a great deal of never-ending work. They're hiring, by the way. But that's the path these guys have taken. And until recently, things have been pretty smooth sailing. So, what happened last week?
[02:00:57] Okay, let's start with TechCrunch's overview, and then we'll dig a bit deeper. TechCrunch wrote, This week, some really atrocious malware was discovered in an open-source project developed by Y Combinator graduate LiteLLM. LiteLLM gives developers easy access to hundreds of AI models, blah, blah, blah.
[02:01:22] It's a breakout hit, writes TechCrunch, downloaded as often as 3.4 million times per day, according to Snyk. That's, you know, S-N-Y-K. We've talked about them before. One of the many security researchers monitoring the incident. The project had 40,000 stars on GitHub and thousands of forks.
[02:01:48] The malware was discovered, documented, and disclosed by research scientist Callum McMahon of Future Search, a company offering AI agents for web research. The malware slipped in through a dependency, meaning other open-source software that LiteLLM itself relied upon.
[02:02:16] It then stole the login credentials of everything it touched. With those credentials, and this is, Leo, you know, your point, we don't yet really know how much damage was done. TechCrunch said with those credentials, the malware gained access to more open-source packages and accounts to harvest more credentials and so on.
[02:02:41] The malware caused McMahon's machine to shut down after he downloaded LiteLLM. That event prompted him to investigate and discover it.
[02:02:54] Ironically, a bug in the malware caused his machine to blow up because that bit of nasty code was so sloppily designed, he, as well as famed AI researcher Andrej Karpathy, concluded it must have been vibe-coded. As you said, Leo, the LiteLLM developers have been working nonstop this week to rectify the situation.
[02:03:22] And the good news is that it was caught relatively fast, likely within hours. Okay, so last Tuesday, as mentioned by TechCrunch, this developer, Callum McMahon, with Future Search, explained what he had discovered and how.
[02:03:41] At the end of a separate but related posting, Callum explained their use of LiteLLM, knowing what we now know about LiteLLM, and it's exactly what we would expect. He said, we use LiteLLM to let us use models from a wide range of providers, letting us strike the best balance between quality, speed, and cost.
[02:04:07] In other words, current LLMs are just fungible. So here's what happened. Callum's posting was titled, No Prompt Injection Required, where he's kind of tongue-in-cheek. He wrote, earlier today, I got taken out by malware on my local machine.
[02:04:28] After identifying the malicious payload, I reported it directly to the PyPI security team who credited our report and quarantined the package, as well as to the LiteLLM maintainers. I wrote a blog post that became the primary source cited by The Register, Hacker News, Snyk, and others. The play-by-play is pretty interesting when looking back.
[02:04:55] It started with my machine stuttering hard, something that really shouldn't be happening on a 48-gig Mac. Htop took tens of seconds to load. The CPU was pegged at 100%. All signs I'd be working on my local environment for a while, meaning things got messed up.
[02:05:21] He said, after failing to software reset my Mac, I took a final picture for evidence and then hard reset it. Wow. So he said, so far, the clues had been Cursor asking me for network access right as the machine was freezing up. The process list showed a bunch of Python commands all executing a base64-encoded string. Oh, that's not good.
[02:05:51] Yeah. Yeah. And 11,000 processes running. Oh, ho, ho, ho. He said, I set ulimit to 16K for machine learning workloads. So this was partly expected. In other words, he has his system configured to allow 16,000 different processes, but he had 11,000 running for no apparent reason at that moment.
[02:06:20] He said, on restart, I asked Claude to investigate. After going down a rabbit hole on the wrong shutdown due to my force shutdown, meaning that Claude started to look at something different because he had done a force shutdown. There were two. Yeah. Yes. Exactly. Exactly. Not generating the expected logs. He said, I presented it with the start of the base64 string.
[02:06:47] Just enough to decode import subprocess, import tempfile. Oh, boy. Before the remaining text went off screen. Claude then became adamant that this was its own doing, the standard Claude Code way of running bash commands to escape control characters. Despite the many bugs I've encountered with that CLI, I wasn't buying this explanation.
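For a sense of what Callum was looking at, here's a harmless sketch of how such a payload is packed and unpacked. The payload text here is a stand-in I made up; only the imports mirror what he saw on screen.

```python
import base64

# Attacker-side packing of a "python -c" payload
# (a harmless stand-in, not the real malware).
payload = "import subprocess, tempfile\nprint('stand-in payload')\n"
encoded = base64.b64encode(payload.encode()).decode()

# The process list would show something shaped like:
#   python -c "exec(__import__('base64').b64decode('aW1wb3J0IHN1...'))"
# Decoding just the start of the string, as Callum did, is enough to
# reveal the telltale imports.
decoded = base64.b64decode(encoded).decode()
print(decoded.splitlines()[0])   # -> import subprocess, tempfile
```

Base64 isn't encryption, of course; it's just enough obfuscation to keep the payload out of casual view in a process listing.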
[02:07:18] Further Claude Code probing eventually found the offending cause. The rogue package buried within my uv cache, something I would have never found on my own. So he's crediting Claude with helping him, you know, forensically diagnose what happened to him.
[02:07:42] He said two minutes later, it had reproduced the entire malware trigger within a local container to double check its claims this time. And a further two minutes later, I had a blog posted on our site detailing the specifics of the malware to share as a warning to others.
[02:08:03] Claude even proactively suggested the emails of both the PyPI security team, who were quick to quarantine the package, as well as the LiteLLM maintainers. By the way, that's PyPI. I just want to make that clear. There is something called PyPy. Oh, okay. Yes. PyPI. Good. Thank you. That's the library. Yeah. He says, so what actually happened? Okay. So, okay. I'll just interrupt you to note that McMahon is about to start referring to MCPs, which is the model context protocol.
[02:08:33] Um, the MCP site, the model context protocol site explains it's an open source standard for connecting AI applications to external systems. Using MCP, AI applications like Claude or ChatGPT can connect to data sources, local files, databases, tools like search engines and calculators and workflows.
[02:08:59] You know, like using specialized prompts, which enable them to access key information and perform tasks. Um, and, and they said, think of MCP like a USB-C port for AI applications. Again, another standardization, which is so very powerful.
[02:09:19] They said, just as USB-C provides a standardized way to connect electronic devices, MCP provides a standardized way to connect AI applications to external systems. Okay. So, armed with only that much understanding, what McMahon explains can make sense. And it's not necessary for us to deeply understand it more. McMahon says, the root cause was mundane.
[02:09:46] MCP clients like Cursor, Claude Code, and others are using local MCP servers via some executor tool, such as uvx for Python or npx for Node.js. When you run an MCP via uvx, it automatically downloads the dependencies of that MCP and runs the given command.
[02:10:15] Unfortunately, our mostly deprecated MCP server had an unpinned dependency on the LiteLLM package. When my Cursor IDE tried to auto-load the MCP server, uvx stepped in to download the latest LiteLLM version. Again, because it was unpinned. It wasn't saying, I want this version.
[02:10:43] It was saying, give me the latest. Which, he writes, was malware. Uploaded to PyPI by hackers just minutes earlier. Minutes earlier. The seamless ergonomics of uvx meant I became one of the lucky beta testers of the freshly released malware. Congratulations. Congratulations. Okay.
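The difference between "I want this version" and "give me the latest" is easy to audit for. Here's a minimal, illustrative checker, the regex is a rough sketch, not a full PEP 508 parser, that flags dependency lines that would silently float to whatever was uploaded to PyPI last:

```python
import re

# Rough check: an exact "name==version" pin is reproducible; anything
# else (bare name, >=, ~=) can float to a freshly uploaded release.
PINNED = re.compile(r"^\s*[A-Za-z0-9._-]+\s*==\s*[\w.]+")

def unpinned(requirements):
    """Return the requirement lines that would grab the newest release."""
    bad = []
    for line in requirements:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if not PINNED.match(line):
            bad.append(line)
    return bad

reqs = [
    "litellm",            # unpinned: grabs whatever was uploaded last
    "requests>=2.0",      # a range still floats to new releases
    "numpy==1.26.4",      # pinned: reproducible
]
print(unpinned(reqs))     # -> ['litellm', 'requests>=2.0']
```

Pinning alone wouldn't have saved everyone here, but it shrinks the window from "every fresh download" to "people who deliberately upgraded."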
[02:11:10] So, in other words, exactly the sort of textbook classic supply chain attack we've discussed so many times in the past. In this case, it wasn't a dependency such as a library that would be downloaded, compiled, and linked into a result like Log4j was. It was a working piece of tooling, the LiteLLM package.
[02:11:35] And by being unpinned, Callum's dependent packages were not saying, we want this exact version. So, the default behavior was to grab a copy of the current one. And in this case, that latest and greatest had been deliberately compromised by bad guys. Callum continues, saying, this is great too.
[02:11:58] A sloppy, likely vibe-coded mistake in the actual malware implementation led it to turn into what he called a fork bomb. It installs a file called litellm_init.pth into the site-packages directory.
[02:12:21] Python automatically executes .pth files on every interpreter startup. The first thing it does is spawn a child Python process, and that child also triggers litellm_init.pth since it's still in site-packages, which spawns another child, which spawns another, which spawns another, which spawns another.
[02:12:49] Thus leading to the only sign I would have noticed that the malware was running. That's where those 11,000 instances came from. And the reason his 48-gig Mac crashed is it got into an infinite loop of spawning processes via litellm_init.pth. And the system crashed.
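The mechanism being abused here is easy to demonstrate safely. Python's site machinery executes any line in a .pth file that begins with "import". This benign sketch uses site.addsitedir on a throwaway directory rather than real site-packages, and the side effect is a harmless flag, not a spawned process:

```python
import os, site, sys, tempfile

# Benign demo of the mechanism the malware abused: lines starting with
# "import" in a .pth file are *executed* when the site machinery
# processes that directory. The malware's litellm_init.pth ran on every
# interpreter startup; because each child process it spawned re-ran it,
# the result was a fork bomb.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo_init.pth"), "w") as f:
    # A harmless side effect standing in for the malicious payload.
    f.write("import sys; sys.pth_demo_ran = True\n")

site.addsitedir(demo_dir)   # the same processing site-packages gets
print(getattr(sys, "pth_demo_ran", False))   # -> True
```

It's a documented, legitimate feature (editable installs use it), which is exactly why it makes such a quiet persistence hook.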
[02:13:15] As Andrej Karpathy pointed out on X, without this error, it would have gone unnoticed for much, much longer. The malware's own poor quality is what made it visible and discoverable. So we have to ask ourselves, what if the author of this malware had not made that mistake? So what's the takeaway?
[02:13:44] So he writes, we've since moved to a remote MCP architecture. The server doesn't run on the user's machine anymore, which collapses this entire attack surface. No local code execution means a poisoned dependency can't touch your file system or request network access from your OS. And it's much more localized to one audited version that we have under control.
[02:14:14] However, sometimes you can't reliably do that. There are advantages and disadvantages of local versus remote MCP servers. And in that case, you still need to do what you can to mitigate this risk. He finishes saying, I don't think there's anything new to say here. It's the same thing we've been doing everywhere else to keep us safe.
[02:14:37] Reduce the attack surface, pin your dependencies, or even better, use lock files with checksums, audit packages before upgrading. And when Claude tells you everything is fine, maybe ask it again. He said, we analyzed the blast radius of this attack. 47,000 downloads in 46 minutes. 88% of dependent packages unprotected.
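What "lock files with checksums" buys you can be sketched in a few lines. Even an exactly pinned version number identifies a name, not the bytes; a recorded sha256 digest identifies the bytes themselves. The artifact contents and digest here are illustrative:

```python
import hashlib

# A lock file records the sha256 of the exact artifact it was generated
# against. Before using a downloaded package, verify the bytes match.
def verify(artifact_bytes, expected_sha256):
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError("checksum mismatch: refusing to install")
    return True

good = b"the package contents the lock file was generated against"
locked = hashlib.sha256(good).hexdigest()   # what the lock file records

print(verify(good, locked))                 # -> True
try:
    verify(b"tampered contents", locked)
except RuntimeError as e:
    print(e)                                # checksum mismatch message
```

This is the same guarantee pip's hash-checking mode and uv's lock files provide: a malicious re-upload under a familiar name and version simply fails to verify.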
[02:15:05] So Leo, let's take our final break. And then we will continue looking at a little more of the forensics of this mess. Wow. This is amazing. You know, I saw Andrej Karpathy's tweet almost instantly, thank goodness, and immediately went to Claude and said, hey, is there any LiteLLM anywhere in my system? And it said, no. I mean, the name is in your package list, but you never downloaded it, so you're okay. I know. It was terrifying. It was terrifying.
[02:15:35] You're watching Security Now. That there is Steve Gibson. And we're so glad you're here. If you're not yet a club member and you want to support this show, it's really important that you go right now to twit.tv slash club twit and sign up. We're kind of in a fiscal crisis right now. We're a little light on ads. I think to some degree due to the – this always happens when the economy tanks, when there's big news going on like a war.
[02:16:04] And we noticed that advertising gets a little skittish, numbers go down. But there is a way you can keep – help us get through this. This happened last time. This happened at COVID. But during the pandemic, it was Lisa's idea to start the club to kind of smooth over this – that little bump. And it did. And we're very grateful to all our club members. Well, we need you again. If you're not yet a club member, twit.tv slash club twit, fully a third of our operating expenses are now paid by club members.
[02:16:34] Thank goodness. If not, we'd have to cut back. We'd have to lay people off, cut back on shows. We don't want to do that. And we give you, I think, some fair value for your $10 a month. You get ad-free versions of all the shows. I don't need to give you ads or even plugs like this because, you know, you're already supporting us. You also get access to the great club twit Discord, a really fun place to hang out with some really fun, cool people. And listen to some extremely interesting special programs that we do in the club.
[02:17:04] Micah's Crafting Corner, for example. We do the AI user group that's coming up, actually, April 10th. It's the second Friday of – or am I sorry? Is that right? Yeah, the second Friday of every month. It used to be the first Friday. Maybe we moved it. Oh, we did. Maybe that's because I'm out of town or something. Anyway, Photo Time with Chris Marquardt. We have the Jet Set with Johnny Jett, Stacy's Book Club, all these special programs we do. As a way of thanking you, we do it because we love doing it.
[02:17:34] It's for club members, and we invite you to become one of that elect group of people. If you're not yet a member of Club Twit, we'd love to have you. Twit.tv slash Club Twit. And you will be welcomed to the club by all of our fabulous people. Many of our hosts are in the club. Mizuki Driver says the main reason I subscribe is the AI users group. That's great. Yeah, we really enjoy doing that. It's a lot of fun.
[02:18:02] So thank you to our Club Twit members for making Security Now possible. And if you're not a member and you really appreciate this show, it's a great way to show that you do. Twit.tv slash Club Twit. If you've been thinking about it, there could not be a better time to do it than right now. Twit.tv slash Club Twit. Now let's get back to Steve Gibson and a further dissection. By the way, I really appreciated Callum's write-up. It was a very good write-up.
[02:18:28] He did it very quickly, got the word out to the community 46 minutes after he discovered it, it was taken down, which was, thank goodness. Yeah, and I also thought, you know, he noted that Claude wrote the email for him. So I realized that speed of action is one of the things that we get from AI also. Oh, absolutely. It's a lever. It's a tool. And it used properly, it really adds to the power of what you can do.
[02:18:58] It also adds to the power of what bad guys can do. And that's the double-edged sword of all this. All right. Okay, so let's take a closer look at the malware itself. For that, we turn to Trend Micro, who titled their coverage of this, Your AI Gateway Was a Backdoor: Inside the LiteLLM Supply Chain Compromise, which they tease with the follow-on, Team PCP, those are the bad guys,
[02:19:26] Team PCP orchestrated one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date.
[02:19:37] It cascaded through developer tooling to compromise LiteLLM and exposed how AI proxy services that concentrate API keys and cloud credentials become high-value collateral when supply chain attacks compromise upstream dependencies. So they led their coverage with three key takeaways.
[02:20:02] They said LiteLLM, a widely used AI proxy package, was compromised on PyPI with two of its versions containing malicious code. These LiteLLM versions deployed a three-stage payload, credential harvesting, Kubernetes lateral movement, and persistent backdoor for remote code execution.
[02:20:26] Sensitive data from cloud platforms, SSH keys, and Kubernetes clusters were targeted and encrypted before exfiltration. Second point, the LiteLLM incident was part of a broader campaign by the criminal group Team PCP,
[02:20:48] which has demonstrated deep understanding of Python execution models, adapting their attack rapidly for stealth and persistence. In this case, a little too rapidly. Team PCP has previously compromised security tools like Trivy and Checkmarx's KICS to steal credentials and propagate malicious payloads.
[02:21:14] Attackers leveraged compromised CI/CD pipelines and security scanners to escalate privileges and publish trojanized packages. So here's what more we learned from Trend Micro. They explain,
[02:21:57] LiteLLM, the Python package downloaded 3.4 million times per day that serves as a unified gateway to multiple LLM providers, was compromised on PyPI. Upon analysis, it was found that versions 1.82.7 and 1.82.8 contained malicious code that stole cloud credentials, SSH keys, and Kubernetes secrets.
[02:22:23] The malicious versions deployed a credential harvester targeting over 50 categories of secrets, a Kubernetes lateral movement toolkit capable of compromising entire clusters, and a persistent backdoor providing ongoing remote code execution. And just to pause, just think of if this had not been caught.
[02:22:48] 3.4 million instances downloaded per day would have been infected with this nasty malware. This is bad malware. They wrote, This compromise was not an isolated event. It was the latest link in a cascading supply chain campaign by a threat actor tracked as Team PCP.
[02:23:13] This post traces the cascade from its origin, the open source vulnerability scanner, Trivy, and then presents our technical analysis of the LiteLLM payload. Team PCP orchestrated one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date.
[02:23:36] The campaign spanned PyPI, NPM, Docker Hub, GitHub Actions, and OpenVSX in a single coordinated operation.
[02:23:48] While it did not specifically target AI infrastructure, the campaign's cascade through the developer toolkit caught LiteLLM within its blast radius and exposed how AI proxy services that concentrate API keys and cloud credentials become high-value collateral when supply chain attacks compromise upstream dependencies.
[02:24:14] Key sections of this blog, and I'm not going to share all the details because we don't need that. But they wrote, key sections of this blog entry include a technical analysis of the malicious multi-stage payload and its impact on AI environments, a timeline, an operational review of Team PCP's campaign, and a deep dive into how security tools themselves became attack vectors.
[02:24:44] Trend AI Research's analysis into the LiteLLM compromise also covers attribution challenges, gaps in public threat intelligence, and actionable defense strategies. Detailed indicators of compromise and MITRE ATT&CK mappings have been provided, but for an even more comprehensive understanding of the security incident, reach out to Trend AI Research for the full technical report. Okay, so that's much deeper than we need to dive for all that.
[02:25:13] But what they uncovered and reported about the root source of the vulnerability was interesting. Under their heading, How Your Security Scanner Can Become the Attack Vector, they wrote, Trivy is an open-source vulnerability scanner developed by Aqua Security.
[02:25:34] It scans container images, file systems, and infrastructure as code for security vulnerabilities. And it is integrated into the CI/CD pipelines of thousands of software projects via the trivy-action GitHub Action. Security scanners, now, so, okay, the point is, Trivy was the root of this compromise.
[02:26:03] So, they explain, security scanners are uniquely dangerous supply chain targets. By design, they require broad read access into the environments they scan, including environment variables, configuration files, and runner memory.
[02:26:24] When a scanner is compromised, it becomes a credential harvesting platform with legitimate access to secrets. In late February 2026, an actor operating under the handle MegaGame10418 exploited a misconfigured pull_request_target workflow in Trivy's CI, their continuous integration,
[02:26:51] to exfiltrate the AquaBot personal access token. Aqua Security disclosed the incident on March 1st and initiated credential rotation. However, according to Aqua's own post-incident analysis, the rotation wasn't atomic, and attackers may have been privy to refreshed tokens.
[02:27:20] Okay, now, that's an important point, so I want to pause here to explain that. We've talked about the concept of so-called atomic operations. The name obviously comes from the word atom, and it's meant to imply that it cannot be further divided into smaller pieces. Molecules, of course, being collections of atoms, are divisible. Not so the atom.
[02:27:47] So to clearly illustrate the occasional need for atomic operations, you know, say that a computer program needed to count up to a certain number, but no more. If the program was single-threaded, meaning that it only ever had one thing going on inside itself at once, that would be easy to do.
[02:28:14] The program would read the value of the thing that's being counted. If it was already at its upper count limit, then the program—I'm sorry, if it was not already at its upper count limit, then the program would increment it to its next value. If it was already at the upper limit, it would just leave it there.
[02:28:39] But now imagine what happens if there's a lot more going on in the program with multiple simultaneous execution threads running around, perhaps because the CPU has multiple cores or the application itself has many threads running. In this environment, there's a chance that both CPUs would wish to increase the count at the same instant.
[02:29:08] So they would both be executing the exact same code at the same time. They would both read the counter's value. They would both see that it had not yet reached its limit. So they would both increment it, thus increasing its initial value by two.
[02:29:30] But if the counter had been previously sitting at one below its limit, that increase by two would move it up past the limit. A very subtle bug. You know, these sorts of so-called race conditions have historically been the source of, you know, many hard-to-find problems. You know, they're the sort that never happen while you're watching it, while you're developing the code.
[02:29:57] But they somehow always occur when you're on stage demonstrating what it is that you've got. So in our example, that test the value and maybe increment it, that would need to be made atomic so that the testing and the incrementing could not be broken apart and performed separately, even by different processors that are executing at the same time.
[02:30:26] That operation could only be done by one processor or execution thread at a time. So the other one, the other processor trying to do it would be briefly stalled until the first processor had finished with that atomic operation. And at that point, the second processor could proceed. And if it saw that the variable was already at its limit, it would not also increment it. Okay, so we left off with Trend Micro noting.
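The bounded counter Steve just walked through can be sketched directly. The fix is to put the test and the increment inside a single lock acquisition, so no other thread can slip in between them. This is a minimal illustration of the concept, with made-up numbers:

```python
import threading

# The bounded counter from Steve's example: the test ("are we at the
# limit?") and the increment must happen as one indivisible step, so
# both live inside a single lock acquisition.
LIMIT = 100
count = 0
lock = threading.Lock()

def bump_if_below_limit():
    global count
    with lock:                      # makes test + increment atomic
        if count < LIMIT:
            count += 1

# 8 threads each attempt 400 / 8 = 50... actually 50 * 8 = 400 attempts,
# far more than the limit allows.
threads = [threading.Thread(target=lambda: [bump_if_below_limit() for _ in range(50)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)    # -> 100: never overshoots, regardless of interleaving
```

Without the lock, two threads could both read 99, both see "below limit," and both increment, producing exactly the past-the-limit bug Steve describes.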
[02:30:55] Aqua Security disclosed the incident on March 1st and initiated credential rotation. However, according to Aqua's own post-incident analysis, the rotation was not atomic and attackers may have been privy to refreshed tokens. In other words, somebody might have still been logged in when a token was updated and then they would have grabbed that.
[02:31:22] Trend Micro then continues, the gap, that is, this race condition gap, proved decisive.
[02:31:30] On March 19th at 1743 UTC, Team PCP used still-valid credentials to force push 76 of 77 release tags in the trivy-action repository and all seven tags in setup-trivy, whatever those details mean. But it meant two malicious commits containing a multi-stage credential stealer.
[02:32:00] The malicious code scraped the runner.worker process memory for secrets, harvested cloud credentials and SSH keys from the file system, encrypted the bundle using AES-256-CBC with an RSA-4096 public key, and exfiltrated it to a typosquatted domain, scan.aquasecurity.org.
[02:32:26] According to analysis by CrowdStrike, the legitimate Trivy scan still ran afterward, producing normal output, leaving no visible indication of compromise. Okay, in other words, because Aqua Security was, for whatever reason, logistically unable to rotate every single credential at once when no one was actively logged on,
[02:32:54] the bad guys were able to maintain their corrupting persistence. Trend Micro finished this portion of their write-up by writing, This is the meta attack. A security scanner the tool defenders rely on to catch supply chain compromise itself became the entry point for a supply chain compromise.
[02:33:18] The Trivy compromise in GitHub Actions gave the attacker the keys to publish arbitrary versions of LiteLLM to PyPI. Everything that followed was exploitation of that initial foothold. And LiteLLM was just a coincidental casualty of this. They said the lesson is uncomfortable but critical.
[02:33:43] Your CI/CD security tooling has the same access as your deployment tooling. If it's compromised, everything downstream is exposed. And what we're now seeing is the bad guys have gotten sophisticated enough to take advantage of that. I mean, it is truly terrifying.
[02:34:08] So what we see is that the enabling of this attack on LiteLLM had nothing to do with AI per se. It's just its popularity that would have allowed it to explode at 3.4 million instances of compromise per day had the bad guys not made that crucial mistake that crashed the machines that it was trying to compromise.
[02:34:33] So after providing a fully detailed forensic analysis of this malware campaign, Trend Micro concluded with a summary and recommendations. They wrote,
[02:34:48] The tools that the developers install to interact with AI systems, proxy gateways, model routers, experiment trackers, and inference servers handle high-value secrets by design. Supply chain attacks against these tools inherit the trust and access of the AI infrastructure itself.
[02:35:17] So again, AI is not to blame here. It's really just a case of the more tools you're using, the more exposure there will be when any one of them might be compromised. Trend Micro continued saying, The malicious payload analyzed in this report is a direct exploitation of the systemic secret management failures extensively documented in prior Trend AI research.
[02:35:46] As previously described, developers have adopted .env files so profusely that they have forgotten their sensitivity, leaving them exposed. And threat actors are actively scanning for exactly those files. The harvester analyzed here operationalizes that attack surface at scale.
[02:36:15] It performs exhaustive file system walks targeting .env, .env.local, .env.production, and .env.staging files across up to six directory levels, while simultaneously extracting AWS credentials, cloud provider tokens, Kubernetes service account secrets, CICD pipeline configurations, and database connection strings.
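The defensive flip side of that harvester is worth having on hand: audit your own tree for the same .env variants, to the same six-level depth, so you know what a compromised dependency running on your machine could have grabbed. A minimal sketch, assuming the file names and depth limit Trend Micro lists:

```python
import os

# The .env variants Trend Micro says the harvester targets, walked to
# the same six-directory-level depth, but for your own audit.
TARGETS = {".env", ".env.local", ".env.production", ".env.staging"}
MAX_DEPTH = 6

def find_env_files(root):
    """Return paths of target .env files within MAX_DEPTH levels of root."""
    found = []
    root_depth = root.rstrip(os.sep).count(os.sep)
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath.count(os.sep) - root_depth >= MAX_DEPTH:
            dirnames[:] = []        # prune: don't descend further
            continue
        found.extend(os.path.join(dirpath, f)
                     for f in filenames if f in TARGETS)
    return found
```

Running this over a home directory is usually a sobering exercise; every hit is a file a harvester like this one would have bundled and exfiltrated.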
[02:36:43] The same categories of secrets Trend AI Research previously identified as most commonly stored in plain text inside .env files. And they finish off with some well-reasoned security recommendations. They said, This case highlights the risk of building an entire ecosystem on top of fragile trust.
[02:37:11] The LiteLLM hack is just the latest example of attackers exploiting the reliance on open source repositories and poor secret hygiene. Security is not an afterthought you can outsource entirely to a vulnerability scanner.
[02:37:34] So, these apparently very highly skilled attackers, this Team PCP, appear to have just been in a bit of a hurry. This led them to deploy otherwise very potent and sophisticated malware, malware that must have taken a lot of time to create,
[02:37:59] containing a flaw that, unfortunately for them, and thank God for us, immediately caused that malware to draw attention to itself. This is why you want to test before you push to production, man. They should have known. Yep. They got bit by their own, you know, rush. Race condition. Yeah, by being in a race.
[02:38:24] And as a result, the infection was almost immediately spotted and stopped. Had the bad guys not made that mistake, at a download rate of 3.4 million instances of the infected LiteLLM per day being used, you know, the damage that was all set up and engineered to occur likely would have. And the resulting mess would have been far worse.
[02:38:53] In the truest sense, as I said at the top of the show, we have dodged another bullet. Oh, yeah. There could be no question that the entire industry has built an ecosystem upon which it has become dependent, if you'll pardon the pun, or double entendre, I guess, whose security guarantees are truly fragile. These are fragile guarantees.
[02:39:19] We're essentially hoping for the best because the goodies are just too enticing for us to resist. Or phrased another way, the cost to us today of deploying truly secure solutions prices them out of reach, rendering them impractical.
[02:39:40] So we knowingly and deliberately create dependencies upon sprawling packages over which we have no oversight or direct control. All we can really do at this point is hope that our luck holds. It's not just the packages. It's also automation. I mean, it sounds like it was a CI/CD issue. And by the way, this happened a couple of weeks ago, a GitHub CI/CD issue with GitHub Actions.
[02:40:09] And I keep seeing this again and again. And so that's a case of, and I understand why CI/CD is incredibly useful. It's an automated way of building and delivering your software. But that's what it stands for, continuous integration, continuous delivery. Continuous development. Development. But if you're automating to that degree and you're not paying attention, there's some real risk involved. And AI is bringing us another layer of automation.
[02:40:40] Right. I mean, people are just stunned by what it does for them. Like they don't even understand what the installation on their local system is. It's just like. You know, Claude will automatically push to GitHub. So all my repos are pushed to GitHub. And it was automatically building it. It was setting up GitHub Actions and building the software. I wasn't even, I didn't even know how to do that. It just did it for me.
[02:41:08] And so, yes, we're kind of giving over a lot of our agency to systems, partly because it's so complicated these days. That's true. We created a super complicated system. Which was a security tool. Scanner. Scanner. It was an open source scanner used by these systems to scan themselves for malware. And it itself was compromised. It was compromised.
[02:41:38] And then it pushed a bad version of LiteLLM? Because LiteLLM used it to scan for malware. Right. And so it had access to it. The GitHub Actions, the Trivy keys were compromised. They rotated them as best they could, but they didn't get them all, apparently. Yep. Bad guys. This PCP team got the key.
[02:42:04] And then used that key to compromise Trivy to inject malware into LiteLLM as part of the CI/CD process. Pipeline. Right. Wow. That's actually pretty sophisticated. Oh, no. As Trend Micro said, these guys really know what they're doing. I mean, this is a... Unfortunately, the goodies are so big. I mean, they know that, you know, how many copies of stuff is being downloaded per day.
[02:42:33] This stuff exfiltrated tokens, SSH keys, crypto passwords, Kubernetes keys. It basically took all the secrets on your system and sent them to the bad guys. Encrypted them and sent them off to a spoof domain. I mean, imagine if this had gone on for a day or two, the nightmare. I mean, I'm surprised we haven't heard more pain from the 47,000 who installed it. And I understand now why they were in a hurry, because they had a compromised key to Trivy,
[02:43:01] but they didn't know how long that key would stay good. Yeah. So they said, we got to strike while the iron's hot. Quick, get some malware out there. I mean, and it literally must not have been tested. Because apparently, immediately when you ran it, it crashed your system. And I think that explains why the 47,000 instances we haven't heard anything from. Probably because everybody crashed. It came out of the gate and just stumbled. Yeah. It didn't work. Yeah. Because they rushed. They said, quick, we got one minute to take advantage of this.
[02:43:32] Let's push something out. And they probably told Claude, write something real quick. We'll get on a system, encrypt the keys, and send them to this address. Wow. Ay, ay, ay, ay, ay, caramba. Well, I'm glad. You know, it's really good. Thank you. You know, Columns Write-Up was great, but I didn't understand the Trivy part of it. So thank you for explaining that Trend Micro report. This is why we listen every Tuesday to Security Now. I hope you'll tune in next Tuesday.
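The harvesting behavior described above can be sketched in a few lines. This is a defensive audit under assumed patterns: the env-var regex and file paths below are examples chosen for illustration, not details recovered from the actual payload. It shows why a developer machine or CI runner offers so many secrets in predictable places to any code that runs with the user's privileges.

```python
import os
import re

# Hypothetical patterns for the kinds of secrets the payload reportedly
# harvested: API tokens, SSH keys, cloud and Kubernetes credentials.
SECRET_NAME = re.compile(r"TOKEN|SECRET|KEY|PASSWORD", re.IGNORECASE)
SENSITIVE_PATHS = [          # example locations, not an exhaustive list
    "~/.ssh/id_rsa",
    "~/.ssh/id_ed25519",
    "~/.kube/config",
    "~/.aws/credentials",
]

def at_risk_env_vars(environ: dict) -> list:
    """Names (never values) of environment variables a harvester would grab."""
    return sorted(n for n in environ if SECRET_NAME.search(n))

def at_risk_files() -> list:
    """Sensitive credential files that actually exist on this machine."""
    return [p for p in SENSITIVE_PATHS
            if os.path.exists(os.path.expanduser(p))]

# Auditing a sample environment shows how much is exposed:
sample = {"GITHUB_TOKEN": "x", "PATH": "/usr/bin", "AWS_SECRET_ACCESS_KEY": "y"}
print(at_risk_env_vars(sample))  # ['AWS_SECRET_ACCESS_KEY', 'GITHUB_TOKEN']
```

Running an audit like this on your own machine, before malware does it for you, is a cheap way to see what a single compromised dependency would have walked away with.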
[02:44:01] If you want to share this show with your friends and family, there's many ways you can get it. But I would suggest going to Steve's site. He has some unique versions of Security Now. GRC.com. Of course, while you're there, it's an opportunity to pick up Spinrite, the world's best mass storage maintenance recovery and performance enhancing utility, which everyone should have. If you have mass storage, you do, I presume. Unless you're committing everything to memory. I don't know. You need it. You should have Spinrite.
[02:44:30] And he also has, of course, the brand new DNS Benchmark Pro. Brand new. Just came out. All at GRC.com. There are a lot of other things on the site, a huge number of freebies that Steve just generously gives away, like the very famous ShieldsUP! to test your router security. While you're there, the show is also there. He has two unique... Three, four unique versions. All his versions are unique.
[02:44:57] A 16-kilobit audio version, which is low quality, admittedly, but small. Easy to download for bandwidth-constrained individuals. There's a 64-kilobit full version audio. There's also the show notes, which he writes every week. Puts a lot of effort into this. Leaves his family life on hold just for this. 20 pages of goodness. And the show notes are very complete. Lots of links, pictures, information. You can get that at GRC.com.
[02:45:25] He also has the transcripts written by Elaine Farris, an actual human being. And they're very good. Those take a few days. But after the show's out for a few days, they'll be also at GRC.com. If you'd like to get the show notes emailed to you automatically, you can do that. Steve has a mailing list. If you go to GRC.com slash email, the real purpose of that page is actually to whitelist your email so you can send him comments, pictures of the week, things like that. But you can also click a box below it that says, send me the show notes.
[02:45:54] There's another box below that. Send me announcements about new software, which come out about once an eon. So you don't have to worry about getting too much email on that one. But you will get a weekly email if you sign up for the show notes newsletter. We have copies of the show at our website, twit.tv slash sn. There is a YouTube channel dedicated to the video. Yes, there's audio and video. We have audio and video on our website as well. Or subscribe in your favorite podcast client. Then you don't have to think about it from now on. It's automatic.
[02:46:23] And I promise no malware embedded. I don't think we could if we tried. I don't think MP3s can contain malware. I don't know. I shouldn't say that. I won't make any promises I can't keep. But I think that that's pretty safe. You can get it automatically in your podcast client so you can listen the minute it's available. We stream this show when we do it live every Tuesday right after MacBreak Weekly. That's 1.30 Pacific, 4.30 Eastern, 20.30 UTC.
[02:46:50] You can watch live in the Club TWiT Discord, but you can also watch on YouTube. Everybody's welcome to watch on YouTube, Twitch, X.com, Facebook, LinkedIn, and Kick. Again, that is 1.30 Pacific, 4.30 Eastern, 20.30 UTC. Thank you, Steve. Have a wonderful week, my friend. We will talk in April. My goodness. Bye-bye. Bye. Hey there.
[02:47:20] It's Leo Laporte, host of so many shows on the TWiT Network. Thinking about advertising in 2026? We host a network of the most trusted shows in tech, each featuring authentic, host-read ads delivered by Mikah Sargent, my co-host, and, of course, me. Our listeners don't just hear our ads. They really believe in them because we've established a relationship with them. They trust us.
[02:47:45] According to TWiT fans, they've purchased several items advertised on the TWiT Network because they trust our team's expertise in the latest technology. If TWiT supports it, they know they can trust it. In fact, 88% of our audience has made a purchase because of a TWiT ad. Over 90% help make IT and tech buying decisions at their companies. These are the people you want to talk to. Ask David Coover. He's the senior strategist at ThreatLocker. David said,
[02:48:12] TWiT's hosts are some of the most respected voices in technology and cybersecurity, and their audience reflects that same level of expertise and engagement. It's the engagement that really makes a difference to us. With every campaign, you're going to get measurable results. You get presence on our show episode pages. In fact, we even have links right there in the RSS feed descriptions. Plus, our team will support you every step of the way. So, if you're ready to reach the most influential audience in tech,
[02:48:39] email us at partner at twit.tv or head to twit.tv slash advertise. I'm looking forward to telling our qualified audience about your great product.
