SN 1067: KongTuke's CrashFix - Click, Paste, Pwned
Security Now (Audio) · March 3, 2026
Episode 1067
2:53:08 · 158.62 MB

A crafty new breed of social engineering attack is tricking users into launching malware straight from their clipboard, exposing a fresh vulnerability in Windows that even tech pros could fall for. Leo Laporte and Steve Gibson break down how the latest ClickFix and CrashFix exploits are outsmarting traditional defenses.
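
The tell of a ClickFix-style lure is a web page that silently copies a command onto the clipboard and then talks the user into pasting it into the Run dialog or a terminal. As a purely illustrative, hypothetical defensive heuristic (nothing from the episode — the patterns and function name are our own), a clipboard monitor might flag text matching common paste-and-run launcher shapes:

```python
import re

# Hypothetical heuristic: flag clipboard text resembling ClickFix-style
# "paste and run" lures (encoded PowerShell, mshta-from-URL, curl-pipe-shell).
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*(-enc|-e\b|iex|downloadstring)", re.I),
    re.compile(r"\bmshta(\.exe)?\s+https?://", re.I),
    re.compile(r"curl\s+[^|]*\|\s*(ba)?sh", re.I),
]

def looks_like_clickfix(clipboard_text: str) -> bool:
    """Return True if the text matches a known paste-and-run lure pattern."""
    return any(p.search(clipboard_text) for p in SUSPICIOUS_PATTERNS)
```

A real endpoint product would go far beyond regexes, but the sketch shows why the attack works: the malicious "fix" is just ordinary text until the victim pastes and runs it.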

  • The lowdown on last week's "no turn" picture of the week.
  • Is an AI-driven hacking campaign a big deal now?
  • Claude used in multiple Mexican government attacks.
  • Apple continues to be confronted with age restrictions.
  • COPPA needs an exception to allow age collection.
  • Meta swamps law enforcement with AI-slop CSAM reports.
  • Roskomnadzor has been busy blocking VPNs. Guess how many.
  • The UK tries to report their self-scanning success.
  • Remember that hacker who extorted the psychotherapy patients.
  • Scattered Lapsus$ Hunters is actively recruiting women.
  • Cisco lands another breathtakingly rare 10.0 CVSS.
  • VulnCheck's report on 2025 vulnerabilities and exploits.
  • Steve discovers a fabulous $72 Hardware Security Module.
  • A listener shares an interesting AI service discovery.
  • The very potent "ClickFix" exploit evolves.

Show Notes - https://www.grc.com/sn/SN-1067-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site, grc.com, also the home of the best disk maintenance and recovery utility ever written: SpinRite 6.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:

[00:00:00] It's time for Security Now! Steve Gibson is here, a show we recorded a little bit early because we're going to Zero Trust World in Florida. We have lots to talk about though, jam-packed programming. We're going to talk about Scattered Lapsus$ Hunters. They're looking for female voices for their social engineering. AI hacking, is it here? Yes, it is. And a very potent click

[00:00:25] fix exploit. When you see how this works, you might wonder how you didn't get bit by it. All of that coming up next on Security Now. Podcasts you love. From people you trust. This is TWiT. This is Security Now with Steve Gibson. Episode 1067, recorded Sunday, March 1st, 2026.

[00:00:54] KongTuke's CrashFix. It's time for Security Now! Hello everybody! Normally I would say you wait all week for Tuesday, but if you're watching live, it's Sunday, March 1st. Steve and I are headed off to Orlando, Florida tomorrow for the incredible Zero Trust World conference put on by ThreatLocker. So we thought we'd do Security Now a little early. Those of you who listen after the fact will get

[00:01:20] the show at the same time. So you're going, what are they talking about? But you know, the only reason I mentioned this, Steve, you probably want to mention it too, is that if anything happens on Monday, it won't be in the show till next week. Well, and this has been actually a problem I've been conscious of because I've now got in the habit of preparing Tuesday's show on the previous weekend, Saturday and Sunday. So already things are like that. And there have been

[00:01:49] a couple of times where I've made notes for the following podcast or, and I try to make a note of this. I, there have been addition numbers of the show notes where I've after the mailing, which by the way, went out yesterday in the early evening, everybody got that. I've made notice that, you know, okay, I've updated the show notes because, you know, stuff has happened since. So yeah, I have to do that

[00:02:16] for our shows too. I, uh, it's, yeah, it's just, because we want to be up to date. So March 1st, uh, I assume your NASes have reported in as mine have, that all looks good. No, nothing to see here at the first of the month. Do you, on the first of the month, your NAS says hello? Yeah. Yeah. Mine probably does too, but I don't. Just checking in. I have a folder where all the NAS messages go and I don't ever check it. So.

[00:02:40] Okay. So check it. Uh, we're going to talk about a bunch of things. This was, this is a jam-packed news and opinion, a little editorial, which seems to be what our listeners prefer. Oh yeah. We care about what you think for sure. I called this KongTuke's CrashFix, which is a tongue twister. What's KongTuke? Unless you're a Klingon, in which case KongTuke sounds very much like, yeah, very much like Klingon.

[00:03:10] Uh, uh, it's the name that, I can't forget, I can't remember the name. Uh, I mean, I can't remember the security firm. We'll get to it. Uh, it's just a, it's a bad guys' moniker that, uh, one of the firms have. Uh, they're obviously Klingon fans or Star... yeah. Yeah. I mean, where would KongTuke come from? It does sound like, normally the names come from the reverse-engineered code where some

[00:03:40] reference is found to, like, you know, the kongtuke.com domain or something. Anyway, we don't know. Uh, but there is a, it's been an evolution of this problem that Microsoft is going to have to, as they used to say, I don't know when, belly up to the bar and fix. Uh, so that's the best way to fix it. Maybe that's how the bug happened in the first place.

[00:04:05] Okay. So we, we're going to start with the lowdown on last week's no turns allowed picture of the week, which captured our audience's imagination like few others have, although we've got another good one, uh, this week. Uh, we're going to look at whether, uh, an AI driven hacking campaign is a big deal now.

[00:04:27] Uh, and Claude used in multiple Mexican government attacks. Yeah. Um, Apple continuing to confront age restriction legislation, got some on that. Also, it turns out that COPPA, the, the child protective, uh, act is going to need an exception for the age collection, which other legislation is now requiring.

[00:04:55] So it's like a hint that there's something wrong here. Exactly. Oh, you, we don't want to protect kids online when it comes to that. Yeah, it's right. Exactly. Also, um, meta is using an AI, AI, which is, I'm noticing also Leo that this term AI slop just immediately achieved traction. It's like everyone knows what AI slop is. It's, it's surprising how quick the adoption was. Anyway,

[00:05:21] we got AI slop CSAM reports that are drowning law enforcement in false positives. We'll take a look at that. Also, our favorite, uh, internet watchdog Roskomnadzor has been busy blocking VPNs, but you will never believe how many. Uh, the UK makes an effort at reporting on the success of their self

[00:05:48] scanning, uh, initiative. Although there's something fishy about their report, which we're going to look at. And Leo, I knew, I actually, I, I knew when I saw this, you would remember that hacker who was extorting psychotherapy patients, whose data had been exfiltrated from their psychotherapy center. How low can you get? We've actually heard back about this process that was in, yeah, that was in

[00:06:17] back in, uh, in 2020. So six years ago, anyway, he's back in the news. Um, it turns out that Scattered Lapsus$ Hunters, uh, is actively recruiting women. And we're going to find out why. Cisco lands, boy, no one does it like Cisco, another breathtakingly rare 10.0 CVSS. Just, you know, duck and cover, as they used to say. Also, I mean, all bunch of

[00:06:46] people used to. Uh, we've got VulnCheck's report on 2025 vulnerabilities and exploits. Just a little tip of the iceberg there. That's probably going to be our topic next week. Uh, because there's lots of juicy information there. Uh, I have discovered a fabulous $72 hardware security module that does all my code signing, multiple certificates, open source. It's fantastic.

[00:07:14] I'll be talking about that a little bit because I know from, from previous, uh, feedback from our, from our listeners that, you know, anybody who needs to sign code needs something like this. Uh, we have a listener sharing an interesting, uh, AI service discovery, and then the very potent ClickFix exploit is evolving, now being used by the Klingonese, uh, outfit KongTuke for something

[00:07:41] called CrashFix. And of course, what would a podcast be without a picture of the week? And, uh, I've already had a lot of feedback saying he should have used a different screw. A security screw would have been better. It's like, okay, thank you. That's true. You'll see. Yeah. For last week's picture, we got a confirmation from a number of people that that is a real picture from Canada. And that really did happen. And the local government was embarrassed. I have a link to

[00:08:08] the actual story saying, uh, we're sorry. That was a dumb thing to do. Hey, it's a very special Sunday security now with Mr. I didn't introduce you, but I think everybody knows that. If they're here, they're like, okay, they know, they know who you are move on. Yes. And, uh, we are so glad to have you on the show this week. Glad you're watching those of you who are alive and figured out we're going to be doing this early. We're glad you're here. Uh, our show today brought to you who are

[00:08:37] dead, you know, you're not watching it. You're not missing a thing though. So, well, maybe, I don't know. I don't, I think if you've passed, you don't have to worry about security so much. Opinions vary on that topic. Yes. We don't know. We don't know that's we'll just have to wait and find out. Uh, this episode of security now is brought to you by meter, the company building better

[00:09:01] networks. I love this idea. Meter was started by two network engineers who realized their pain points as they built their own networks of legacy hardware, legacy software, not controlling the stack ISPs who blame the router, the router companies who, who blame the, the security devices. And if you're a network engineer, I think, you know, this as well,

[00:09:27] you've got legacy providers, you've got inflexible pricing, you've got IT resource constraints. That's a, that's permanent, right? Stretching you thin. Then you also have complex deployments across fragmented tools. This is often the case when you, your company does acquisitions, right? You've got a warehouse in Muncie that suddenly has to work with a home office in, uh, in, uh, you know, Minnesota. And the thing is just a mess. And the funny thing is, you and your networks are mission critical to the

[00:09:56] business, but you're stuck working with infrastructure that just wasn't built for today's demands. Enter meter. They know your pain. That's why companies are switching to meter meter delivers full stack networking infrastructure. They realized if we're going to make a good, solid, robust network for the future and the present, we got to control the entire stack. So they do it all wired, wireless, cellular. They build for performance. They build for scalability meter does it all. They design the

[00:10:25] hardware. They write the firmware. They build the software. They will manage the deployment. They'll even provide support. Even ISP procurement, they will help you with that too. Security, of course: routing, switching, wireless, firewall, cellular, power, DNS security, all the pain points, VPNs, SD-WANs, multi-site workflows. And it's all in a single solution from a single vendor. So that's one

[00:10:52] phone number to call if there's something wrong, one place to go for support. They take care of it all. Meter's single integrated networking stack scales. They're in major hospitals. They work in branch offices, that Muncie office, they can handle it. Warehouses, large campuses, even data centers. Reddit uses Meter in their data centers. The assistant director of technology for Webb School,

[00:11:19] another great customer, at Webb School of Knoxville, they had a problem. He said, quote, we had more than 20 games, athletic events on campus, simultaneously between our two facilities. Each game was streamed via wireless and wired connections, and the event went off without a hitch. We never could have done this before Meter redesigned our network. With Meter, you get a single partner for all your connectivity

[00:11:45] needs from first site survey to ongoing support without the complexity of managing multiple providers or tools or the, you know, the provider handoff. It's not our fault. It's their fault. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility, re-imagining what it means for businesses to get and stay online. Meter is built

[00:12:11] for the bandwidth demands of today and tomorrow. We're so glad Meter found us and I hope you will find them. I was really thrilled to talk to them. I didn't really know anything about them and I was so impressed when I met them. We thank Meter so much for sponsoring Security Now. Go to meter.com/securitynow and book a demo today. That's M-E-T-E-R dot com slash securitynow to book a demo. Meter is the

[00:12:36] future of networking and it's going to be a lifesaver for you. meter.com/securitynow. Thank you, Meter. Welcome to the show. Okay. I'm ready for the picture of the week. So our caption on this photo, this is dad saying to, because he's a dad, one of his kids: this is the last time I'm going to tell

[00:13:01] you to turn down the volume of what you call music. Oh, dad. Dad found a solution. Yes, he did. And, and given the location of the little volume indicator dot on the volume control, which is like right at minimum, it looks like now doesn't look like junior gets to turn this up, uh, very high.

[00:13:29] And now you can see why one of our listeners said he should have used a security screw, you know, where you can only screw it in, but the Phillips head is unable to get a grip when you're trying to go in the other direction. It does look like somebody has tried to unscrew it, actually. I think the drill skittered. Oh, maybe that's it. Yeah. Yeah. So for those who are not looking at the video or don't have the show notes, I'm sorry. Uh, what we have

[00:13:57] is what we would call an old school volume limiter. Um, they, the problem of course is that the kids have, uh, you know, uh, a stereo system, which they just are unable not to turn up so that it's bugging mom and dad who can't think, uh, not only due to the nature of the music, but it's volume. So finally

[00:14:20] at the end of his rope, dad has come up with a solution. He's drilled a hole in the side of the volume, uh, knob with a screw sticking out of it about an inch and then another screw in the face plate of this stereo such that the, the screw that rotates as you try to turn the volume up will hit

[00:14:46] these, the limiting screw, preventing it from going, uh, it looks like maybe more than, maybe, you know, level two or three. Yeah. Not very much. So clearly, regardless of the backstory here, this is obviously someone's determined effort to prevent the volume control from being turned up very far. This points out, this would be good in a nightclub where patrons tend to go

[00:15:14] over to the sound system, you know, or your neighbor snuck into your apartment. Um, uh, as a frequent, uh, patron of restaurants, I've had the experience where I'm, I'm in early for dinner and the, the, the crew of workers have been there. They turned the volume up and then leave it up. And they, you know, they, they just forget. And it's like, God, can you turn the volume

[00:15:42] down? I can't, can't think. Don't you know it's early bird time. You got to turn it down. So a solution, a solution has been found. I of course would have put a small resistor network in line with the speakers in order to take the energy out of the speaker line. And then junior would think, God, what happened? Did I blow the amp? It's not working. Maybe I'm deaf. That's right. Okay. So I wanted to thank many listeners who were made curious by last week's

[00:16:12] picture of the week. Uh, and Leo, you heard from many people too. I did just to remind people that was the street, which was the stem of a T intersection. So the street we were seeing, you know, was leading up to a T intersection in the distance and signage, which would be encountered as the driver was driving toward that T intersection indicated that neither turning left nor right

[00:16:37] would be legal. Uh, thus I gave the caption, but officer, uh, to the picture, thanks to listener research of which there was much, um, and some used AI, you know, asked AI to track this down. We now know that the photo, first of all, was not synthetic. That was my, you know, a common thought was, Oh, come on. That was just Photoshopped. Uh, it was bizarre, but authentic. And after the photo

[00:17:05] went viral a few years ago, it became a significant embarrassment to the local government, uh, who was responsible for its emplacement. The location was a town called Simcoe in Norfolk County, Canada. And a news report that one of our intrepid listeners found and shared explained that quote, Drivers, please note that signs were installed this week, which restrict left and right hand turns at

[00:17:34] the intersection of Crescent Boulevard and Queensway in Simcoe. The intent of the new signs was to make Crescent Boulevard a dead end street. The signs have been removed. So anyway, uh, in other words, the signage was technically correct. And you were, it was like up to you to come to this stop sign,

[00:17:59] having seen the, you can't turn left, can't turn right. And what do you turn as if it dead ended at that point, rather than allowed you to cross into the, the cross street. So it's crazy. Anyway, my favorite quip about last week's photo was provided to us by a listener, Joseph Rourke, who noted, despite the presence of the Tim Hortons in the background, we know this cannot be Canada.

[00:18:26] Otherwise there'd be a line of cars sitting at the stop sign. So many thanks to our listeners among the more than 20,000 who received the weekly mailing, uh, and whose imaginations were captured and took time to do the research and or comment. Uh, anyway, and also a big thanks to whoever it was who sent that to me in the first place. Uh, you know who you are. And I ask our listeners to keep them coming because they're fun to share.

[00:18:54] Okay. So the headline in the news last week was, this is the headline, AI driven hacking campaign breaches 600 plus Fortinet devices. Now I'm going to first share the news report. And I have a few things to say about it. Uh, the reporting says a Russian speaking,

[00:19:18] financially motivated threat actor used commercial AI toolkits to hack more than 600 Fortinet firewalls. The campaign began at the start of the year, around January 11th, according to the AWS security team. The attacker did not exploit zero-days or older vulnerabilities. Instead they targeted FortiGate devices that had their management ports. Oh Lord.

[00:19:43] Exposed online, used weak passwords, and didn't have multi-factor authentication enabled. Okay. So just to interrupt: your FortiGate devices, publicly exposed management ports, weak passwords, and no other authentication required. So no flaws were used, just very poor configuration hygiene. The story continues: Once inside, the attacker employed a collection of

[00:20:10] scripts that AWS says were written by AI tools. While AWS did not name products, outside researchers identified them as being Claude and DeepSeek. DeepSeek was used to create scripts to perform reconnaissance and extract configurations from the hacked devices.

[00:20:36] While Claude was used to generate scripts for vulnerability assessments and to run offensive tools against the networks. Since this is the intersection of AI and infosec, writes this story, the report generated a tornado of feedback and opinions on social media. The general consensus was that the threat actor wasn't particularly sophisticated, which AWS also believes.

[00:21:05] AWS CISO CJ Moses said the attacker was more interested in scale than value. Every time they encountered errors caused by hardened or non-standard internal networks, the attacker just moved on to a softer target. Once they did move laterally from the Fortinet device, the attacker compromised the victim's active directory environment, extracted database credentials, and tried to gain access to backup infrastructure.

[00:21:33] This led everyone to believe the threat actor was a relatively low-skilled initial access broker, right? An IAB that gains initial footholds on corporate environments and then sells access to the hacked networks on underground portals.
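
Since these intrusions began with management interfaces reachable from the open internet, the first hygiene step is simply confirming your own exposure. Here is a minimal, hypothetical sketch (not something from the show — the function name and usage are ours) of the kind of TCP reachability check an admin might run from an outside host:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: if a firewall's management UI answers on its admin port when probed
# from outside the perimeter, the management plane is internet-exposed and
# should be locked down (and protected with strong credentials plus MFA).
```

This only tests reachability, not authentication strength, but it would have caught the root misconfiguration in this campaign: admin interfaces that should never have answered from the internet at all.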

[00:21:50] Okay, so I think it's entirely expected that anyone who has any need for any sort of code or scripting for any purpose whatsoever will increasingly be using AI. That's just today's reality. Good guys are doing it and bad guys are doing it.

[00:22:14] And there's no reason to expect AI to be able to discriminate between the two. A high-level language compiler doesn't know or care who's using it or to what purpose the code it's helping to produce will be put, right? That's not its job.

[00:22:32] So the fact that we have now chosen to give consciousness-emulating large language models the marketing label of artificial intelligence should not and does not automatically mean that these new tools somehow have responsibility for what they're being asked to produce. So, okay, but don't these tools make attackers more powerful? Yes, they do.

[00:23:02] And they also make the good guys more productive. That's why everyone, both good and evil, is now using them. In the current instance, there's nothing inherently wrong with a script that performs a vulnerability assessment. White hat security researchers employ such tools to aid their beneficial research, much as bad guys may use the same tools to perform pre-attack vulnerability assessments.

[00:23:28] My point is that any social media hysteria arising from the fact that AI was involved is now ridiculous. If you encounter it online, I would recommend meeting it with a shrug and clicking on the thumbs down button. This is just the way the future is going to look now. It may have surprised us a few years ago, but it should surprise us no longer.

[00:23:56] And AI should not receive any of the blame for the way its creators, we humans, choose to use it. It's a tool and nothing more. It has no social obligations or responsibilities. It's not accountable. We are. I like that because that eliminates that whole issue of AI safety. Yes. Yes. Which, as I said, we might as well give up because we're not going to get it.

[00:24:26] And again, you know, we call it artificial intelligence. It's not intelligent. It doesn't know anything. It's a very powerful new tool, but it's still a tool and it's not responsible for the way we use it. As usual, it's the humans who are the problem. Exactly. Okay. Now I'm going to give everyone a quick self-test to see whether the point I hope I've just made has had the chance to sink in.

[00:24:55] Perform a self-assessment to see how you feel about this next piece of news. It reads, quote, a hacker has stolen more than 150 gigabytes of data from multiple Mexican government agencies. The attacker allegedly used Claude to assemble scripts to gain access to government networks.

[00:25:19] According to Bloomberg, the attacker breached and stole data from Mexico's tax authority, National Electoral Institute, and several state water utilities. The stolen data covers 195 million taxpayer and voter records, government employee credentials, and civil registry files. Okay. Should we care at all that AI was employed in these attacks? No.

[00:25:48] The fact that Claude was used in these attacks appears to be the highlight of Bloomberg's piece because they've got, you know, they're looking for clickbait, right? You know, it was certainly the headline which they attempted to make inflammatory. You know, eventually the world will get used to this and it will just be assumed. And I hope everybody listening to this podcast will be in the lead on that because, again, that's the technical reality here.

[00:26:19] Another technical reality is that Apple appears to be feeling the pressure to respond to the growing legislation-driven need for the providers of Internet services and online apps, you know, App Store apps, to know and to respond to the age of their users. Last Tuesday, Apple posted an update on Apple's developer portal, addressed to their app developers.

[00:26:48] So when you see wording like "your app," remember this was written to app developers. They said: Today, we're providing an update on the tools available to developers to meet their age assurance obligations under upcoming U.S. and regional laws, including in Brazil, Australia, Singapore, Utah, and Louisiana.

[00:27:15] Updates to the declared age range API are now available in beta for testing. For Brazil, developers who are distributing apps in Brazil can use the updated declared age range API to obtain a user's age category.

[00:27:35] Age categories for users in Brazil will be shared when the user or a parent or guardian, where relevant, agrees to share the age category with you. The API will also return a signal from the user's device about the method of age assurance.

[00:27:53] For developers distributing their apps in Brazil, if you identify that your app contains loot boxes through the age rating questionnaire, the age rating of your app on the Brazil storefront will be updated to 18+. For apps rated 18+, in Australia, Singapore, and Brazil. And if this is all seeming like a big mess, you're getting the clue here, yes.

[00:28:22] They say, starting February 24th, which was last Tuesday, the date of this announcement. In other words, you know, why this was posted. Apple will block users in Australia, Brazil, and Singapore from downloading apps rated 18+, unless they have been confirmed to be adults through reasonable methods. And boy, I hate that kind of language. Like, okay.

[00:28:49] You know, it's like any legislation that is written that isn't airtight, that, you know, is subject to interpretation. And it's like, oh, let's let the attorneys sort this out. Oh, God. Through reasonable methods. Whatever that is. They say the app store will perform this confirmation automatically. Oh, that's good. However, developers may have separate obligations to independently confirm that their users are adults.

[00:29:17] To assist with this, the declared age range API available on iOS, iPadOS, and macOS provides developers with a helpful signal about a user's age. Okay, so they're being helpful. For apps rated 18+, Australia, Singapore, Brazil. However, for Utah and Louisiana. Oh, but not yet. Wait for it.

[00:29:42] For users with new Apple accounts in Utah as of May 6th. Okay, so, okay, wait. Fresh accounts? How new do they have to be? Are they accounts created after May 6th? We don't know. For users with new Apple accounts in Utah as of May 6th, 2026. So, a couple months from now.

[00:30:07] And in Louisiana, as of July 1st, 2026. What a mess. Age categories will be shared with the developers app when requested through the declared age range API.

[00:30:22] The tools we previously announced have been expanded to help developers meet compliance obligations for Louisiana and Utah, including the declared age range API and the significant change API under PermissionKit. That's that thing where, if your app undergoes a significant change, you need to declare that, because then that makes it potentially subject to all kinds of reevaluation.

[00:30:50] Then there's the new age rating property type in StoreKit and App Store server notifications. They said new signals are now available through the declared age range API, including whether age-related regulatory requirements apply to the user. What a mess.

[00:31:11] And if the user is required to share their age range, the API will also let you know if you need to get a parent or guardian's permission for significant app updates for a child. Developers can use the declared age range API to present significant update notifications to adults in these states through the significant update action. Now in beta.

[00:31:40] When releasing a significant update, developers must follow the human interface guidelines and provide users with a meaningful description of the update. Leo, you know, on one hand, I would be I'm a little tempted to feel some empathy and a little sorrow for Apple.

[00:32:04] At the same time, I would say, guys, you brought this on yourself by refusing to do this five years ago. Right. They could have so easily put a far simpler system in place that would, that would have satisfied people, that would have solved this problem and prevented all of this ridiculous fragmentation.
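
The patchwork Steve just walked through — different storefronts, different effective dates, different duties — is essentially a per-jurisdiction rules table. Purely as a hypothetical illustration (this is not Apple's declared age range API; the function names are ours, and reading "new accounts as of May 6th" as "created on or after that date" is an assumption — exactly the ambiguity Steve flags), the gating logic might reduce to something like:

```python
from datetime import date

# Hypothetical model of the rules in Apple's developer update. The region
# codes and dates mirror the announcement; the functions are illustrative.
BLOCK_18_PLUS = {"AU", "BR", "SG"}                     # enforced since Feb 24, 2026
NEW_ACCOUNT_AGE_SHARING = {"US-UT": date(2026, 5, 6),  # Utah
                           "US-LA": date(2026, 7, 1)}  # Louisiana

def must_block_download(region: str, rated_18_plus: bool, adult_confirmed: bool) -> bool:
    """18+ apps are blocked in AU/BR/SG unless the user is a confirmed adult."""
    return rated_18_plus and region in BLOCK_18_PLUS and not adult_confirmed

def age_sharing_applies(region: str, account_created: date) -> bool:
    """Utah/Louisiana age-category sharing, assuming 'new accounts' means
    accounts created on or after the state's cutoff date."""
    cutoff = NEW_ACCOUNT_AGE_SHARING.get(region)
    return cutoff is not None and account_created >= cutoff
```

Even this toy version makes the fragmentation problem visible: every new jurisdiction adds another row, another date, and another edge case to test.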

[00:32:25] I mean, you're going to need a whole new building at Apple in order to figure out what to do for who on what day, depending upon whether there's, you know. Oh, Lord, what a mess. And this is what Meta wanted, by the way. They didn't want to do it. So they said, make Apple do this. By the way, there is in California a law that goes into effect.

[00:32:48] You know about this on January 1st of 27 that will require operating systems, all operating systems to do this. And the Linux community is a little worried about it because nobody is. The real issue is it's unenforceable. California can't make Linux do this. They can make Apple do it. They can make Google do it because they're gatekeepers. They can go after the companies. And this is part of the larger plot, right?

[00:33:16] Like the 3D printer restriction is also unenforceable. It doesn't work. You can write a law. It doesn't mean you can get what you want. Yeah. Yeah. But Leo, we're going to let our listeners get what they want. What do they want? Do they want another commercial? I want some coffee. Whatever Steve wants, Steve gets. We'll be back with more security now. I know you really want that.

[00:33:43] I should tell you, though, if you're in IT, if you're responsible for the security of your company, our advertisers here at Security Now are always something you should be interested in. We have people. I think companies have realized if you want to reach these IT decision makers, you come to Steve. I'm so impressed by who our listeners are, Leo. When I hear from them, it's just like, I mean, I'm embarrassed that they're listening to me. What? Me? I know. Oh, I know.

[00:34:13] I have the same reaction constantly. Oh, you listen. It makes me a little nervous. We're going to meet a lot of our listeners in Florida, by the way. I'm very excited. Steve and I are headed to Zero Trust World. We'll tell you more about that in a little bit. And Steve's giving a presentation on Wednesday. And usually when we do these things, Steve, we've done them a couple of times before. There's a long line out the door to get a selfie with Steve Gibson. So we're going to have to. They wanted Leo, too. Oh, no, no. They wanted you.

[00:34:41] I usually jumped in just so they had me in case they went home and said, oh, where's Leo? Oh, well. Well, for what it's worth, I'm happy to smile into your phone, all of you listeners. We'll line up a photographer. If it doesn't break your camera, that's good. This episode of Security Now is brought to you by GuardSquare. This is a relatively new advertiser.

[00:35:10] But boy, if you listen to the show, you're going to realize you need GuardSquare. If you're doing mobile app development, you need GuardSquare. Mobile apps today have become an inescapable part of life. That's part of the problem. Financial services, health care, retail and entertainment. Users trust mobile apps with their most sensitive personal data.

[00:35:33] But a recent survey showed 72% of organizations experienced a mobile application security incident last year. 92% of respondents reported rising threat levels over the last two years. That's obvious. Meanwhile, attackers who, you know, desperately want your users' personal data are constantly finding new ways to attack your mobile app. You don't want to be responsible for this. They reverse engineer it. They repackage it.

[00:36:02] They distribute a modified app via phishing campaigns, sideloading third-party app stores. Your end users don't know the difference as suddenly they've got your app plus a little bit of malware thrown in. But you can stop it by taking a proactive approach to mobile app security. You can stay one step ahead of these attacks and maintain the trust of your users. And that's really what's most important. That's where GuardSquare comes in.

[00:36:30] And GuardSquare delivers mobile app security without compromising, providing advanced protections for both Android and iOS apps. And it's more than just built into the app. It's also combined with automated mobile application security testing to find vulnerabilities, real-time threat monitoring to gain insight into attacks. So if somebody's doing something to your app, you know.

[00:36:54] Discover more about how GuardSquare provides industry-leading security for your mobile apps at GuardSquare.com. That's GuardSquare.com. You owe it to yourself. You owe it to your users. Check it out. GuardSquare.com. We thank them so much for supporting security now. Steve, now fully caffeinated. We'll continue. As if I need more caffeine.

[00:37:18] And speaking of online web-based services, there has apparently been some concern, I would say justified, you know, if you want to follow the rules, over the intersection of children's privacy enforcement and the apparent explicit need to violate that very privacy for the sake of complying with legislated age determination.

[00:37:44] Last Wednesday, on the heels of Apple's begrudging update to their age-related APIs and their app download enforcement, the U.S. Federal Trade Commission, our FTC, issued a formal policy statement with the headline, FTC issues COPPA policy statement to incentivize the use of age verification technologies to protect children online.

[00:38:14] They wrote, The Federal Trade Commission issued a policy statement today announcing that the commission will not bring an enforcement action. I don't know if I would call it incentivizing. It's like de-threatenizing.

[00:38:30] Will not bring an enforcement action under the Children's Online Privacy Protection Rule, COPPA, C-O-P-P-A, against website and online service operators that collect, use, and disclose personal information for the sole purpose of determining a user's age via age verification technologies.

[00:38:50] The COPPA Rule requires operators of commercial websites or online services directed to children under 13 and operators with actual knowledge that they are collecting personal information from a child to provide notice of their information practices to parents and to obtain verifiable parental consent before collecting, using, or disclosing personal information collected from a child under 13.

[00:39:20] And what a pain in the butt it is to actually do that, right? So we see the problem here, right? The emerging age restriction regulations are placing the burden upon online services to, you know, to whatever they must do to determine their visitors' ages.

[00:39:41] But doing this could force the site to run afoul of other regulations, specifically COPPA, which is already in place to protect the privacy of their underage visitors and users. In this instance, it's necessary to carve out an explicit privacy exception so that online services will be able to collect the data that they must without fear of tripping over COPPA's restrictions.

[00:40:09] So the FTC explains, age verification technologies play a critical role in helping parents as they monitor their child's online activities. Since COPPA was enacted in 1998, so it's been around for a while, there's been an explosion in the use of internet-connected technologies by children.

[00:40:30] To help parents navigate the challenges associated with their child's online activities, some states have started requiring some websites and online services to use age verification mechanisms to help determine the age of users.

[00:40:45] But as noted at the FTC's recent workshop on age verification technologies, some age verification technologies may require the collection of personal information from children, prompting questions about whether such activities could violate the COPPA Rule. Christopher Mufarrige, director of the FTC's Bureau of Consumer Protection, said,

[00:41:09] quote, age verification technologies are some of the most child protective technologies to emerge in decades. Our statement incentivizes operators to use these innovative tools. Again, I would say it doesn't so much incentivize as suspend disincentivizing them, because it's the threat of action under COPPA that has been causing them to say, wait a minute,

[00:41:36] which empowers parents to protect their children online, unquote. The policy statement states that the commission will not bring, this is the statement from the FTC, will not bring an enforcement action under the COPPA Rule against operators of general audience sites and services and mixed audience sites and services

[00:41:59] that collect, use, or disclose personal information for the sole purpose of determining a user's age without first obtaining verifiable parental consent if they comply with certain conditions, specifically that they, and we've got six bullet points,

[00:42:19] do not use or disclose information collected for age verification purposes for any purpose except to determine a user's age. Two, do not retain this information longer than necessary to fulfill the age verification purposes and delete such information promptly thereafter. Three, disclose information collected for age verification purposes only to those third parties

[00:42:49] the operator has taken reasonable steps, and here again, I hate that kind of language, but okay, to determine are capable of maintaining the confidentiality, security, and integrity of the information, including by obtaining certain written assurances from those third parties. Okay, so at least transferring responsibility, hopefully legally enforceable. Fourth, provide clear notice to parents and children of the information collected for age verification purposes,

[00:43:19] fifth, employ reasonable security safeguards for information collected for age verification purposes, and finally, sixth, take reasonable steps to determine that any product, service, method, or third party utilized for age verification purposes is likely to provide reasonably accurate results as to the user's age.

[00:43:43] Again, does that mean facial recognition, which we know is really prone to error? Whatever. Finally, they say the policy statement indicates that the commission intends to initiate a review of the COPA rule to address age verification mechanisms. The policy statement will remain effective until the commission publishes final rule amendments on this issue in the federal register or until otherwise withdrawn.

[00:44:13] Okay, so this policy statement is intended essentially to provide interim cover for online sites and services that do need to enforce privacy breaching age restriction measures today, which would otherwise expose the site to COPPA infringement. This suggests that COPPA itself, as they said here toward the end of this FTC announcement,

[00:44:42] COPPA itself will require amending to provide a permanent and clear path for privacy respecting age verification for minors. So, again, well, one piece of legislation colliding with another. Surprise. Surprise.

[00:45:00] The Guardian reports that Meta's CSAM detection AI is flooding law enforcement with low-quality, unactionable, and, as we'll see here, it's really sad, false positive reports of online child sexual abuse that are seriously hampering law enforcement's ability to function.

[00:45:26] Under the Guardian's headline, Meta's AI sending junk tips to DOJ, U.S. child abuse investigators say, here's what the Guardian reported. They said officers from the U.S. Internet Crimes Against Children, ICAC, task force, said that Meta's use of artificial intelligence to moderate its social media platforms

[00:45:53] is generating large volumes of useless reports about cases of child sexual abuse, which are draining resources and hindering investigations. Benjamin Zweibel, a special agent with the ICAC task force in New Mexico, last week during his testimony in the state's trial against Meta, so this is New Mexico versus Meta, said, quote,

[00:46:19] we get a lot of tips from Meta that are just junk. The state's attorney general alleges the company's platforms are putting profits over child safety. Okay, now, I'll take a break here from this to say at first I was puzzled by that, but what I believe New Mexico's attorney general is saying is that rather than employing humans who would be able to,

[00:46:44] you know, usefully discriminate between what is and is not actual child exploitation and abuse, Meta is endeavoring, they allege, to save money by using AI, which is not actually doing the job. So, Meta is failing in their obligation, but they're failing in a way that's causing lots of trouble. The report continues, saying,

[00:47:10] Meta disputes these allegations, citing changes it has introduced on its platforms, such as teen accounts with default protections. The ICAC task force is a nationwide network of law enforcement agencies coordinated with the U.S. Department of Justice to investigate and prosecute online child exploitation and abuse cases. Another ICAC officer speaking on the condition of anonymity to discuss internal matters said, quote,

[00:47:37] Meta is providing thousands of tips each month. It's pretty overwhelming because we're getting so many reports, but the quality of the reports is really lacking in terms of our ability to take serious action, unquote. The ICAC officer added that the total number of cyber tips their department had received doubled from 2024 to 2025.

[00:48:01] Both Zweibel and two ICAC officers said that unviable tips from Instagram, Facebook, and WhatsApp often contain information that's not criminal. The anonymous officers added that in other cases, tips sometimes contain information indicating that a crime may have occurred, yet vital images, videos, or text are missing or redacted.

[00:48:31] The ICAC officer added unviable tips from Instagram have really skyrocketed recently, especially in the last couple of months. And that's one of the biggest places where we're seeing important information not being provided. In those cases, he said, we don't have the information to further the investigation.

[00:48:52] It weighs on you to know that this crime occurred, but we can't identify the perpetrator, unquote. So just to clarify that point, you know, these investigators are saying that what they see are clearly crimes which Meta's use of AI happened to have found. So not a false positive. It's true. It's true.

[00:49:17] But that the evidence that's needed to take any action about it is missing, which would not normally be the case if it were a human-driven investigation. So Meta's use of AI is not only flooding law enforcement with crap, but it's also serving to obscure the necessary details of actual crimes it detects. You know, if we didn't know better, we'd be inclined to think this had been deliberately designed by criminals for criminals. It wasn't.

[00:49:47] I'm not suggesting that. But it's having that effect, right? The story says, asked about Zweibel's testimony and the ICAC officer's remarks, a Meta spokesperson said, quote, we've supported law enforcement to prosecute criminals for years. The DOJ has repeatedly praised our fast cooperation that has helped lead to arrests.

[00:50:11] And NCMEC has praised our streamlined and improved tip reporting process. In 2024, we received over 9,000 emergency requests from U.S. authorities and resolved them within an average of 67 minutes and even more quickly for cases involving child safety and suicide.

[00:50:33] Consistent with applicable law, we've reported apparent child sexual exploitation imagery to NCMEC and support them to prioritize reports. From helping build their case management tool to labeling cyber tips so they know which are urgent. Okay, so I'll just note that while this sounds great, it doesn't appear to be responsive to the question of AI's use.

[00:51:02] That Meta spokesperson appears to be referring to the work of humans employed by Meta, not their cost-saving AI. The Guardian's reporting then shifts gears to provide some background on NCMEC, which is the National Center for Missing and Exploited Children.

[00:51:22] The Guardian writes, by law, social media companies based in the United States are required to report any detected child sexual abuse material, CSAM, on their platforms to the National Center for Missing and Exploited Children, NCMEC. It serves as a national clearinghouse for reports, which it forwards to the appropriate law enforcement agencies across the United States and internationally.

[00:51:50] NCMEC does not have the authority to filter out any tips that may be unviable before they're sent to the relevant law enforcement agencies. So 100% has to flow through.

[00:52:02] Meta is by far the largest reporter to NCMEC, and in its data report for 2024, NCMEC said Meta made 13.8 million reports across Facebook, Instagram, and WhatsApp. Okay, so 13.8 million, right? Well, you have 12 months in a year.

[00:52:27] So simple math tells us that's over a million reports per month coming from Facebook, Instagram, and WhatsApp. And that 13.8 million is out of a total of 20.5 million tips that NCMEC received in total. So, you know, well over half.
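Steve's back-of-the-envelope math here checks out. A quick sketch, using only the NCMEC figures as quoted on the show:

```python
# Sanity check on the NCMEC numbers quoted above: 13.8 million Meta
# reports in 2024, out of 20.5 million total tips (figures as cited
# on the show; assumed accurate here).
meta_reports = 13_800_000
total_tips = 20_500_000

per_month = meta_reports / 12
meta_share = meta_reports / total_tips

print(f"{per_month:,.0f} reports per month")  # → 1,150,000 reports per month
print(f"{meta_share:.0%} of all tips")        # → 67% of all tips
```

So "over a million per month" and "well over half" are both right on the money.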

[00:52:52] NCMEC said that in 2024, more than 1 million cyber tip line reports were linkable to a specific U.S. state, and those reports were made available to the ICAC task forces around the country, as well as other federal, state, local law enforcement agencies for investigation. Meta and other social media companies use AI to detect and report suspicious material on their sites

[00:53:20] and employ human moderators to review some of the flagged content before sending it to law enforcement. The Guardian has previously reported that tips generated by AI that have not also been reviewed by a social media company employee often cannot be opened by a law enforcement officer without a warrant because of Fourth Amendment protections.

[00:53:45] This extra step also slows investigations of potential crimes, lawyers involved in such cases have said. A Meta spokesperson said, It's unfortunate that court rulings have increased the burden on law enforcement by requiring search warrants to open identical copies of content we've already reviewed and reported.

[00:54:11] Our image matching system finds copies of known child exploitation at scale that would be impossible to do manually, and we work to detect new child exploitation content through technology, reports from our community, and investigations by our specialist child safety teams.

[00:54:31] Under the REPORT Act, where REPORT is an acronym for Revising Existing Procedures on Reporting Via Technology, so REPORT, which came into force in November 2024, online service providers must broaden and strengthen their reporting obligations by notifying NCMEC's cyber tip line not only about child sexual abuse material but also about planned or imminent abuse,

[00:55:01] child sex trafficking, and related exploitation. They must also preserve evidence for a longer period and face higher penalties if they knowingly fail to comply. Since the act passed, the number of unviable tips supplied by Meta has increased dramatically,

[00:55:23] which could be because the company is acting to ensure it is not falling foul of the law, two ICAC officers said. So, in other words, Meta is complying because they're being forced to comply. The result, however, is a lot more noise among the signal. They said many of these tips could not be construed as a crime,

[00:55:52] such as adolescent girls talking about which celebrity they find most attractive. Special Agent Benjamin Zweibel said in court, Quote, Based on my training and experience, it appears that they are being submitted through the use of AI, as these are common mistakes that an AI would make that a human observer would not.

[00:56:17] Zweibel added that his department receives significantly fewer tips on legitimate cases of child sexual abuse material distribution from Meta than in previous years. So, in other words, not only has the noise gone up, but the signal, the quality, has gone down. Every tip that reaches an ICAC division must be reviewed,

[00:56:42] and the influx of unviable tips is taking time and resources away from investigating legitimate cases of child abuse, said two officers. One ICAC officer said, Quote, It's killing morale. We're drowning in tips. And we want to get out there and do this work. We don't have the personnel to sustain that. There's no way that we can keep up with the flood that's now coming in. Unquote.

[00:57:11] So, I want to chalk this up less to Meta being evil, which I don't think is the case, than to the growing pains of effective AI deployment. We're still very much learning how to best use the new and surprising capabilities of large language model networks.

[00:57:34] And I suspect that a strong case could be made for there truly being far too much content for humans to manually inspect. In other words, you know, and we've talked about this, right? With the legislation that the UK keeps circulating and trying to make happen, where it's just like, you know, how are we going to do this?

[00:57:57] Apple had proposed doing on-device CSAM image comparison, and nobody wanted that. I mean, the actual volume of content is beyond human management. So, you know, although the specter of having overlord AIs

[00:58:27] examining everything that's transacted over social media, you know, feels very Orwellian, our legislators are requiring a level of oversight from social media companies that likely has no other workable solution. AI it will be. We just need to continue figuring out how to best use it.

[00:58:51] And again, all evidence is we're making headway and we're going to get a lot better than we are. You know, we can clearly see how much better we are now in using AI for code than we were a couple of years ago. You know, this is going to get better.

[00:59:10] And I think we're just going to, in the future, the legislators are going to force it to be the case that some machine intelligence is going to be watching dialogues. And we're just, you know, users are going to have to put up with that as a cost of the privilege of being able to communicate with encryption.

[00:59:33] I just saw a short mention blurb that surprised me.

[00:59:40] The news was just that Russia's wonderfully named internet watchdog, Roskomnadzor, has now blocked Russian citizens' access to, you're not going to believe how many, 469, Leo, individual VPN services inside Russia.

[01:00:07] All of them, in other words, all of them they could find. Yes. But none of the ones that have sprung up since then, right? Right. It seems to me that the fact that there are 469 VPN services inside, you know, discrete individual VPN services inside Russia to be blocked in the first place, that's the real story here. You know, right?

[01:00:35] Talk about a citizenry that's desperate to escape the shackles of their own state's filtering and tampering and management. This is a citizenry that is desperate for contact with the outside world and a repressive government that's doing everything it can to prevent that. It's becoming increasingly clear why Russia has been experimenting with completely disconnecting from the global internet.

[01:01:02] They want the ability to just go, you know, internal sovereign and cut off all outside contact. In other Russian news, I saw a report that indicated that the Kremlin had decided to fully block Telegram starting in April of this year. Right. Okay. Next month. That puzzled me, since I thought Telegram was already being fully blocked. We talked about that just recently.

[01:01:31] But this reporting stated that Telegram was currently only 55% blocked. Okay. It's not clear to me what a 55% block might mean. The only thing I can figure is that perhaps access to Telegram is currently being limited to specific regions or sectors or industries.

[01:01:59] And that additional regions are being added to the master block list so that by the end of this month of March, nothing will be left. Okay. Whatever the case, Russia appears to be quite intent upon controlling its citizens' access to information. Nope. Good luck. It's inevitable. If you want to do that, you got to get rid of VPNs. It's the next step. As we know, information wants to be free. Yeah. It's pretty hard.

[01:02:28] As has been said, it's very difficult. I mean, you know, we got satellite now too. Yeah. Okay. This one. Oh.

[01:02:36] About 14 months ago, in January of 2025, we reported that the UK was launching a plan to begin continuously and proactively scanning its own national public-facing network segments for the purpose of preemptively detecting vulnerabilities

[01:03:01] and alerting those owners of the IP addresses where vulnerabilities were found. Our listeners may also recall that I was jumping up and down over how much I thought this made sense and suggesting that this was something every nation should be doing to its own public-facing internet address ranges in its own self-interest. I think this is just a great idea.

[01:03:31] So we're talking about this again today, 14 months later, because last Tuesday, I'm sorry, last Thursday, the UK put out a celebratory press release with the headline, government cuts cyber attack fix times by 84% and launches new profession to protect public services.

[01:04:01] A new profession. Huh? Okay. The press release led with three summary bullet points. They said critical cyber weaknesses across the public sector will now be fixed six times faster than before. Ministers are determined to go further with first ever dedicated government cyber profession.

[01:04:24] That's in caps, capital C, capital P, cyber profession to give the state the skilled staff it needs to protect UK's key services from cyber threats. And finally, the number of serious unresolved cybersecurity weaknesses across government cut by three quarters as part of government-wide efforts to strengthen Britain's digital defenses. Wow. Sounds great.

[01:04:53] Before I share what the press office of the UK said, allow me to preface this by noting that we're going to encounter something that makes no sense whatsoever to me. But regardless, here's what they wrote.

[01:05:05] They first said, public services millions of people depend on, from the NHS to the legal aid agency, are becoming significantly safer and more resilient thanks to major improvements by the government to identify and fix cyber threats. Great.

[01:05:26] A specialist government monitoring service introduced as part of the Blueprint for Modern Digital Government, published in January 2025, means serious security weaknesses in public sector websites are fixed six times faster, cutting the average time from nearly two months to just over one week. Okay. So far, so good. But then this appears to go off the rails.

[01:05:55] The release next says the vulnerabilities are in the domain name system, DNS, the Internet's address book that turns website names into the numbers computers use to find them. Weaknesses in DNS can allow attackers to redirect users to fraudulent sites, steal sensitive data, or take services offline entirely.

[01:06:25] With potentially serious consequences for anyone relying on government services. Okay.
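For anyone curious what that "address book" lookup actually looks like, here's a minimal sketch using Python's standard resolver. It uses "localhost" so it runs without network access; any real public hostname, say "www.grc.com", would go through actual DNS the same way.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Look up a hostname via the system resolver and return its
    unique IP addresses, the 'names into numbers' step DNS performs."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the address string.
    return sorted({info[4][0] for info in infos})

# "localhost" resolves without any network access.
print(resolve("localhost"))
```

If an attacker can change what this lookup returns for a government domain, users land on the attacker's server instead, which is the redirect scenario the release describes.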

[01:06:55] Before this service was in place, a weakness could go unnoticed for nearly two months, giving attackers a window to redirect users of these services to a fake site designed to steal their personal details, intercept sensitive communications, or disrupt services that people rely on. The vulnerability monitoring service has closed this window down to eight days. It alerts the right people with clear, practical guidance on how to fix the problem and tracks progress until each issue is resolved.

[01:07:25] Okay. What the hell are they talking about? What is a weakness in a government DNS record? What? What? What?

[01:07:38] In this day and age, when I see something that sounds entirely plausible and reasonable to a layperson, but which is actual nonsense, the first thing I think is that some AI somewhere was having a bad day. Okay.

[01:07:59] The press release said before this service was in place, a weakness in a government DNS record could go unnoticed for nearly two months. Again, what? What is a weakness in a – like, it makes no sense at all. There's no such thing. Okay.

[01:08:26] So let's just play along and see what else happens. The release continues. Speaking at the annual Government Cybersecurity and Digital Resilience Conference, Digital Government Minister Ian Murphy will outline how this will sharply reduce, right, the number of weak government DNS records, apparently. What? Will sharply reduce something.

[01:08:52] Oh, the risk of hackers targeting essential services like the NHS. Well, that's good. If you've got a weak DNS record, you don't want that. So by all means, reducing its effect somehow from almost two months of weakness down to just eight days. That's a big improvement. No one would argue.

[01:09:11] He'll also outline how the government has reduced its backlog of these weak DNS vulnerability records, okay, by 75%, significantly shrinking the window for cyber criminals to target essential government services due to weak DNS records. Okay. From GP surgeries and ambulance trusts to hospitals and social care providers.

[01:09:38] Today's announcement marks a decisive step in closing the door on such threats, whatever they are, with the government going even further with the launch of the first ever dedicated government cyber profession. Apparently, we're going to have a cyber profession, capital C, capital P, that focuses on the weakness.

[01:10:03] I don't know of what DNS monitoring. What are the, I'm, okay. So the press release says this program will recruit and train the top tier cyber experts needed to keep public services safe. Oh, good. Minister for digital government, Ian Murphy said, quote, cyber attacks aren't abstract ideas.

[01:10:31] Oh, no, we know that they delay NHS appointments, disrupt essential services, almost put Jaguar out of business. And that's, that's me, not him. And put people's most sensitive data at risk. When public services struggle, it's families, patients and frontline workers that feel it.

[01:10:50] The vulnerability monitoring service has transformed how quickly we can spot and fix weaknesses before they're exploited so we can protect against that. We've cut cyber attack fix times by 84% and reduced the backlog of critical issues by three quarters.

[01:11:12] And as the service expands to cover more types of cyber threats, what, beyond weak DNS records, whatever those are, fix times are falling there too. But technologies alone aren't enough. Today, he says, I'm launching a new government cyber profession, capital C, capital P, to attract and develop the talented people we need to stay ahead of increasingly sophisticated threats.

[01:11:40] Making government a destination of choice. That's right, baby. Government is a destination of choice for cyber professionals worldwide who want to protect the services that matter most to people's lives. Dr. Richard Horne, CEO of NCSC, said, cyber security is more consequential than ever today. It does sound like maybe some good AI wrote this part.

[01:12:09] Ever today. Are there bullet points? With attacks in the headlines showing the profound impacts they can have on people's everyday lives and livelihoods. As our public services continue to innovate, it's vital that they remain resilient to evolving threats and blah, blah, blah, blah, blah.

[01:12:27] So they finally said, the VMS, this new system that's been online for 14 months, continuously scans 6,000 UK public sector bodies, detecting around 1,000 different types of cyber vulnerabilities.

[01:12:47] When a weakness is identified, the service alerts the relevant organization with specific actionable guidance and tracks progress until the issue is resolved. Okay, now that finally makes sense. That is what we would expect.

[01:13:03] They have a continuously running internet scanner that's scanning 6,000 UK public sector agencies and entities looking for 1,000 different types of cyber vulnerabilities at each of the IPs of the configured targets. Yay!

[01:13:24] Unfortunately, the presence of that Looney Tunes nonsense about weakness in government DNS records casts the entire announcement into question. Just where in this announcement does the AI brain fart that apparently occurred end and reality begin? If that's in there, it's hard to know what else is just fuzzy.

[01:13:54] But we do now appear to be, you know, back on track. The release finishes up writing: by automating detection and streamlining remediation, the service has, bullet point, reduced median time to fix domain-related vulnerabilities from 50 days to 8 days. An 84% improvement. Okay, now we're back to crazy town there.
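For what it's worth, the 84% figure itself is internally consistent with the day counts in the release. A quick sketch of the arithmetic:

```python
def improvement(before_days: float, after_days: float) -> float:
    """Fractional reduction in the median time-to-fix."""
    return (before_days - after_days) / before_days

# The release's claim: 50 days down to 8 days.
print(f"{improvement(50, 8):.0%}")  # → 84%
```

So whatever a "domain-related vulnerability" is, the percentage math holds.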

[01:14:24] What is a domain-related vulnerability? And how can it have been reduced from taking 50 days to fix down to just 8 days? How can it take any days? You know, it really does seem as though an AI had a hand in the preparation of this release, which is too bad. The other three bullet points seem more reasonable.

[01:14:48] They are reduced median time to fix other cyber vulnerabilities from 53 days down to 32. Okay, not great, but better. Cut the backlog of critical open domain-related vulnerabilities, whatever that is again, by 75%. Processed and resolved around 400 confirmed vulnerabilities each month. So the press release finishes saying,

[01:15:17] The new government cyber profession is co-branded with the Department for Science, Innovation and Technology and the National Cyber Security Center. It will introduce a competitive total employee offer, establish a dedicated cyber resourcing hub to streamline recruitment, and create a clear career framework aligned with UK Cyber Security Council professional standards.

[01:15:44] It will also include a government cyber academy for training and deployment, a new apprenticeship scheme to build future talent, and structured career pathways to strengthen long-term capability across the public sector. The Northwest will serve as the primary hub for the profession, building on Manchester's growing digital ecosystem and the forthcoming government digital campus.

[01:16:12] So all that sounds great and reasonable, too. The UK has clearly implemented, although they seem unable to describe what it is they have, an extremely useful service. And I do seriously hope that other nations pick up on this idea and put it into practice. The idea of a country scanning its own internet infrastructure preemptively for known problems.

[01:16:42] I mean, this is what CISA should be doing. And then finding out who owns those IPs and letting them know they've got problems there. That's a win-win. I don't know what a soft government DNS record is. Wow. And I don't think anybody else does either, because, you know, we would know what that was, right? We know, we understand this stuff. And it's like, what? What are you talking about?

[01:17:12] You know, really, it's just a mystery. Leo, let's take a break. Okay. For a sponsor who's not a mystery. No mystery to you or me, because we're about to head to Orlando for Zero Trust World, ThreatLocker's big security conference. Steve's going to give a presentation Wednesday. Last event of the day. So it's right before the cocktail party.

[01:17:41] In fact, it might even overlap a little bit, but it'd be worth sticking around. And Steve and I will stick around afterwards to talk to you. And you're going to be in costume, right? Not for this. Oh. That is Thursday. They're famous for it; every year ThreatLocker has a costume party. And I think the theme this year is 60s space. Something like that. Oh, thank goodness. I thought I was going to be the only person not in costume. Not for the cocktail party.

[01:18:11] For the Wednesday evening cocktail party. No costumes. No costume. Just be normal. Okay. Which is black, right? You're going to wear black of some kind. I'll be wearing black. Even though Orlando is hot and black absorbs heat, just like it does for the crows, Leo. Oh, yes. They absorb the energy focused upon them, whether by the sun or some other third party.

[01:18:36] Actually, let's talk about ThreatLocker since we're here, our sponsor for this segment of Security Now. I'm a big fan of ThreatLocker because they do zero trust right. ThreatLocker's zero trust platform takes a proactive, and this is the key, these are the three words you want to hear, deny-by-default approach. That means every unauthorized action is blocked.

[01:19:02] Unless you specifically, explicitly say, yeah, that program can do that, or that user can do that, it can't. And it's as simple as that. That protects you from both known and unknown threats. The problem, of course, is modern threats, modern attacks hide inside endpoints. Your employee brings the laptop home, gets malware on it, brings it back. Now it's inside the network.

[01:19:30] A lot of networks just assume, hey, if it's inside the network, it's an employee, it must be okay. We know better, don't we? Attacker-controlled virtual machines, sandboxed environments. Attackers are very smart these days. They hide inside, right? That VM-based malware will evade traditional antivirus software. So even if, you know, your employee brings in the laptop and your antivirus says, oh, well, I see the bad guy here.

[01:19:57] You don't know what else is going on in there unless you're using ThreatLocker's zero trust. It prevents these VM-based attacks before they can launch because you've not explicitly permitted it. ThreatLocker's innovative ring fencing, that's what they call it, constrains tools and remote management utilities. That's another big threat, right? People are logging in through a VPN or something.

[01:20:21] Attackers cannot weaponize them for lateral movement or mass encryption. That stops ransomware cold. ThreatLocker works in every industry, on Macs and PCs, and provides 24/7 US-based support. The support is great. And with it, one of the real benefits of zero trust, you also get comprehensive visibility and control. It's great for compliance. Ask Emirates Flight Catering. This is a global leader in the food industry, 13,000 employees.

[01:20:51] ThreatLocker gave full control of apps and endpoints, improved compliance and delivered seamless security with strong IT support. All of the above. The CISO of Emirates Flight Catering said this, quote, the capabilities, the support. Oh, and the best part of ThreatLocker is how easily it integrates with almost any solution. Other tools take time to integrate. But with ThreatLocker, it's seamless. That's one of the key reasons we use it. It's incredibly helpful to me as a CISO.

[01:21:21] End quote. ThreatLocker really works. It's affordable. It's effective. It works with what you're already using. And it's trusted by companies that just can't afford to be down for even one minute, like JetBlue. Heathrow Airport uses ThreatLocker. The Indianapolis Colts, the port of Vancouver. ThreatLocker consistently receives high honors and industry recognition. They're a G2 High Performer and won Best Support for Enterprise in the Summer 2025 report.

[01:21:49] PeerSpot ranked ThreatLocker number one in application control. GetApp gave them the Best Functionality and Features award in 2025. Get unprecedented protection quickly, easily, and cost effectively with ThreatLocker. Visit ThreatLocker.com slash twit to get a free 30-day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's ThreatLocker.com slash twit. On we go with the show. Okay.

[01:22:18] So I mentioned this at the top as somebody I knew, Leo, you would remember. I was just scanning the news and I encountered a piece of news declaring that the Vastaamo hacker disappears. I thought, okay, I have no idea what that is.

[01:22:35] But then reading a bit into the story, it mentions that a Finnish hacker lost his appeal and will have to go back to prison after a court increased his original sentence. Okay. So again, like, okay, nothing stands out there. But we'll recall this event from six years ago.

[01:23:00] The report explains that this Finnish hacker was sentenced to six years and three months for hacking the Vastaamo Psychotherapy Center in 2020 and then extorting its patients, which is what made it stand out as I was reading this. And, of course, I remembered we talked about this at the time.

[01:23:24] This creep obtained the psychotherapy center's very personal and highly confidential medical psychotherapy records. That's just awful. Including, of course, the contact information needed to reach those patients for the sake of extorting them, which he then did. He threatened them with public exposure of their mental illnesses unless they paid up.

[01:23:52] So beyond this, as I also recall, Leo, you and I were shocked when we saw the sheer number of patient records that the Vastaamo Psychotherapy Center had maintained online, which were stolen. That was the other part of the scandal.

[01:24:11] We noted that not only were they at fault for not better protecting their data, but they should not have had that much old patient data around. They should be held accountable for leaving the data of years and years of previous patients in hot storage online and readily accessible.

[01:24:35] You know, I understand they might have felt they needed to retain records for some possible future need, but those could be archived offline for retrieval on demand, not sitting on the same server with all of the current records, all of which this hacker sucked up. So anyway, just a weird little aside. I mean, I'm like, I remember that guy. Yeah. We talked about him. It's funny how we seem to catch all the important bits. I'm happy about that.

[01:25:04] So in their cyber intel brief, the cyber intelligence firm Dataminr, they left the E out, so it's D-A-T-A-M-I-N-R, Dataminr, reports that Scattered Lapsus$ Hunters, which we're now abbreviating SLH, although I don't know if anyone's going to remember what SLH is.

[01:25:25] So I'm going to keep saying Scattered Lapsus$ Hunters because it's fun, that they've begun recruiting female individuals for their voice phishing campaign. SLH is offering upfront payments, big ones, for social engineering calls targeting IT help desks. Dataminr's report offered three key takeaways.

[01:25:49] They said under tactical evolution, SLH is diversifying its social engineering pool by specifically recruiting women to conduct voice phishing attacks, likely to increase the success rate of help desk impersonation. Under large incentives, they said the group is offering significant financial incentives between $500 and $1,000.

[01:26:19] Up front per call, which stuns me, and providing pre-written scripts to their recruits. And high-profile risk. They said SLH is a supergroup alliance of Lapsus$, Scattered Spider, and ShinyHunters, known for compromising major global corporations and stealing over 1.5 billion records so far and counting.

[01:26:47] The Dataminr posting then walks us through their discovery of SLH's online recruitment postings and ends with some useful advice to any potential enterprise targets. Under their heading, organizations should adopt a heightened defensive posture against social engineering, they enumerate four points. First, help desk training.

[01:27:12] Immediately brief IT help desk and support personnel on this specific recruitment trend. Emphasize that attackers may be using pre-written scripts and polished voice impersonation. And the fact that it's a woman on the phone doesn't mean, you know, it's not your typical attacker. So don't be fooled by that. Strict identity verification.

[01:27:40] Enforce out-of-band, as they say, identity verification. You know, a video call or secondary internal verification of some sort. It's like when you receive an email pretending to be from your bank that says phone this number if you'd like more information. You need to go look up your bank's phone number yourself rather than using the number that came in the email. That kind of thing.

[01:28:07] So, harden MFA policies. They said move away from SMS or push-based MFA, multi-factor authentication, both of which are vulnerable to SLH's known TTPs like SIM swapping and fatigue bombing. Implement FIDO2 compliant hardware security keys wherever possible. And finally, monitor anomalous access.

[01:28:36] Audit logs for new user creation or administrative privilege escalation immediately following all help desk interactions. Meaning, you know, check your logs after a help desk interaction to see whether there might be anything going on that the bad guys immediately launched into following that interaction. So, the point being, you really do need to be proactive.
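That fourth point can be made concrete with a tiny sketch. Everything here is hypothetical: the JSON-lines log format and the event and field names are invented for illustration, since real audit schemas vary by SIEM.

```shell
# Hypothetical JSON-lines audit trail; event and field names are invented.
cat > audit.jsonl <<'EOF'
{"time":"2026-02-27T14:02:11Z","event":"helpdesk_password_reset","user":"jdoe"}
{"time":"2026-02-27T14:09:45Z","event":"group_add","group":"Administrators","user":"jdoe"}
EOF

# Any admin-group addition minutes after a help desk reset deserves review.
grep '"event":"group_add"' audit.jsonl
```

In practice you'd correlate the timestamps: anything that grants admin rights within minutes of a help desk interaction is exactly the pattern being warned about here.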

[01:29:01] I remember a phishing attack some years ago where a woman called customer service. Remember, "customer service" is right there in their title. They want to help customers.

[01:29:18] So, the way this phishing attack, this social engineering attack worked, the woman was frantic saying, my husband left his phone at home and he's on a business trip and he's going to desperately need it. I need to reach him. And they played a baby crying in the background on a recorder. I mean, it was this whole scenario. So, you would really get sucked in and believe. Yeah. And of course, you can't do that with a guy.

[01:29:47] So, yes, a woman's voice is going to, in some cases, really be more effective because I think you're right. People don't expect a woman to be social engineering them. Yeah. So, again, it just knocks your guard down a notch. Yeah. Yeah. And of course, it was a SIM jacking attempt. They just, all they wanted to do is get the phone number transferred so that they could get those SMS. Yep. You know. Text messages. Good night. Good night. Yep.

[01:30:17] So, last Wednesday, Cisco released the news of CVE-2026-20127. Once again, achieving that rarest of rare CVSS 10.0 scores. Yikes. Good old Cisco. You know what? They always come in strong. It used to be, oh, Newman. Now it's, oh, Cisco.

[01:30:46] This was an actively exploited zero day. First discovered while it was being abused in the wild. The title Cisco gave their disclosure was Cisco Catalyst SD-WAN controller authentication bypass vulnerability. Yep. You heard it right. Surprise, surprise. An authentication bypass vulnerability. Cisco wrote,

[01:31:40] And obtained administrative privileges on an affected system. They said this vulnerability exists because the peering authentication mechanism in an affected system is not working properly. Huh. Not working properly? Okay. No one would disagree with that. Although calling it catastrophically defective might be more accurate. Okay.

[01:32:05] This one is so bad that both the U.S. NSA and CISA here in the U.S., the Australian Signals Directorate's Australian Cybersecurity Center, the Canadian Center for Cybersecurity, New Zealand's National Cybersecurity Center, and the U.K.'s National Cybersecurity Center all published patch-or-perish announcements.

[01:32:31] In a desperate attempt to bring the need to patch all systems to the attention of their owners. The SD in SD-WAN stands for software defined. So it's a software based networking platform that connects branch offices, data centers, and cloud environments together through a centrally managed system.

[01:32:57] It uses a controller to securely route traffic, securely in quotes, of course, air quotes, between sites over encrypted connections.

[01:33:07] This is another instance where any company that recognized that simple authentication can never be relied upon for security, and had therefore taken the trouble to preemptively and separately restrict, for example, incoming SD-WAN connections to only known endpoint peers, would never have anything to worry about.

[01:33:37] They wouldn't have anything to fear from these authentication failures. They, A, would not have suffered a potentially devastating network compromise, and B, could therefore update their SD-WAN instances with something less than pants-on-fire emergency, at their leisure. Once again, Cisco's own announcement moderately underplayed the consequences.

[01:34:07] They wrote, an attacker could exploit this vulnerability by sending crafted requests to an affected system. Of course, all systems are affected. A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN controller as an internal, high-privileged, non-root user account.

[01:34:33] Using this account, the attacker could access NETCONF, you know, the network configuration protocol, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric. It was the Australian Signals Directorate, I'll note, who first discovered and reported these attacks being used in the wild. Not surprisingly, they paint a somewhat less rosy picture of the consequences.

[01:35:01] Writing, this is Australia. Malicious cyber threat actors are targeting SD-WANs of organizations globally. These actors exploited a Cisco Catalyst SD-WAN controller authentication bypass vulnerability, CVE-2026-20127.

[01:35:22] After exploitation of this vulnerability, the malicious actors add a rogue peer and eventually gain root access to establish long-term persistence in SD-WANs. So, sorry Cisco, not just non-root user accounts. I like that new term, rogue peer. I'm going to keep that around. Rogue peer. Yeah. Yeah.

[01:35:53] Again, it's one of our main themes here. You cannot rely upon authentication. And more importantly, you don't need to. You can apply additional factors of authentication, not allow somebody to get to a port. You know, you've got, you have a bunch of offices scattered around, what, the world. They have their own networks.

[01:36:22] You know what their IPs are. At least their IP blocks, probably the specific IP of your peer SD-WAN. Why not take the time to put a rule in the firewall so that you only accept incoming traffic from that IP to your SD-WAN? Why not? Can you spoof an incoming IP? No. How interesting. No, because it requires a connection. It's a conversation. Yes. Yeah. Right. Yes.

[01:36:52] And so it was like, all anybody has to do is not assume that, I mean, first of all, I was about to say not assume that Cisco is perfect. Who would assume that? Well, that's a good thing to not assume. Please. Safe bet. So, you know, protect yourself. Put firewall rules in because you're talking to fixed endpoint IPs. Only allow the conversations from them.

[01:37:17] Why would you ever want China or Russia or North Korea to connect to your SD-WAN? You don't. Right. I mean, I do that with my freaking synology. It's not, I mean, how hard could it be? Exactly. Exactly. I mean, you know, yes. Yeah. Okay.
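Steve's suggestion boils down to a short allow-list. As a rough sketch only, here's what that could look like in nftables syntax; the peer addresses and port numbers below are placeholders, not Cisco's actual defaults, so you'd substitute your own SD-WAN peers and control ports:

```
# /etc/nftables.conf sketch; peer IPs and ports are placeholders only,
# substitute your real SD-WAN peer addresses and control ports.
table inet sdwan_guard {
    chain inbound {
        type filter hook input priority 0; policy accept;

        # Only known SD-WAN peers may reach the controller's control ports
        tcp dport { 830, 12346 } ip saddr { 203.0.113.10, 198.51.100.22 } accept

        # Everyone else attempting those ports is dropped
        tcp dport { 830, 12346 } drop
    }
}
```

The same idea works with iptables, a cloud security group, or the appliance's own ACLs; the point is simply that only your known peer IPs can ever reach the service, so an authentication bypass behind that wall is far less catastrophic.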

[01:37:37] So, VulnCheck's annual report on the in-the-wild use of known security CVEs is interesting. I have the entire 41-page report. It will probably be next week's topic because it looked like, in a quick glance through it, there was just so much juicy stuff there.

[01:38:02] But the teaser summary, which is all you get until you give them your name and email address so they can market to you for the rest of your life, was interesting, too. They said in 2025, barely 1%, and this is what was interesting, just 1% of disclosed vulnerabilities were exploited in the wild, which might not be what we think.

[01:38:31] It means that the distribution of exploits is not uniform. It is very peaky. Of course, it's the juicy vulnerabilities which were exploited, right? They said, yet those that were exploited were operationalized quickly, attracted diverse threat actors, and often caused outsized damage before organizations had a chance to respond. Just like this SD-WAN nightmare.

[01:38:58] They said this report identifies which vulnerabilities mattered, why attackers targeted them, and where timing failures left organizations exposed. Like I said, that's going to be fun to talk about, to look at this analysis. They said, VulnCheck tracked exploitation patterns, threat actor behavior, and weaponization timelines across hundreds of thousands of vulnerabilities in 2025.

[01:39:26] The data revealed how quickly new vulnerabilities became bona fide threats, how AI proof of concept code is polluting risk assessment pipelines, interesting, and which threat actors ramped up vulnerability exploitation amid geopolitical tension. And we have three bullet points.

[01:39:48] VulnCheck identified 50, 5-0, routinely targeted vulnerabilities from 2025 that had elevated risk profiles by the end of the year, drawing interest from ransomware threat actors, botnets, and researchers, often all at once.

[01:40:06] Second, proof of concept exploits for new CVEs increased 16.5% in 2025, inundating organizations with risk signals that often turned out to be false or misleading AI-generated slop. Again, AI slop is a term which has taken hold. And finally, China nexus threat actor attributions increased 52% year over year,

[01:40:36] while ransomware groups shifted towards zero-day exploitation at accelerating rates, with 56.4% of ransomware CVEs discovered through zero-day activity. So the landscape is changing. These guys have analyzed everything that happened in 2025, produced a 41-page report full of information. And I suspect that's how we're going to start next week.

[01:41:04] We're going to start our listener feedback, Leo. But I think we should take one more break because then we'll have one before our final coverage. That sounds good to me. You're watching Security Now, special edition in a sense because we are doing this on a Sunday. Steve and I, as I mentioned, are going to Florida tomorrow and we'll be gone all week. I've got people covering the shows for me. We will put this show out in the normal Tuesday time slot.

[01:41:31] And if you're a Security Now fan, good news because this week we'll have two Security Now's, the second show, which will be the presentation Steve's giving at Zero Trust World. What's it called? The call is coming from inside the house. I'll leave it to you to speculate as to what that will be about. I'm just thinking, you have people covering your shows and we've got people covering our squirrels' needs to continue being fed while we're gone.

[01:41:59] I have Micah, you have a squirrel sitter. It makes sense. We have house sitters that are going to keep the squirrels fed because Lori said, what about the squirrels? I was like, okay. Anyway, we're glad. If you're watching live, we're so glad that you figured that out. TWiT will be coming up in about an hour and we'll get to that. But first, a word from our sponsor. Our show today brought to you by Adaptive. This is, I think, a new sponsor. Really cool product.

[01:42:26] It's the first security awareness platform specifically built to stop AI-powered social engineering. We were just talking about this. Right? The time is right. Here's the shift. Here's what's changed. And if you've listened to the show, you just heard about it. Attackers, really, they don't need malware anymore. They just need trust. A cloned voice. A convincing deep fake on Zoom.

[01:42:52] An AI-written phish that looks like it came from your IT team. It's really very effective what people can do right now. Adaptive prepares your organization for this kind of attack with simulations across, of course, email, but also now SMS and, very important, voice. Deepfakes, vishing, you know, that's voice phishing. AI-generated phishing, including scenarios that can mirror your own brand and executives.

[01:43:23] And when employees report something suspicious, Adaptive can help you triage it fast so security teams aren't buried by false alarms. If you need training fast with Adaptive's AI content creator, you can turn a threat, an incident report, or a compliance doc into interactive multilingual modules in minutes. No design team required. With Adaptive, you can build, customize, and monitor every part of your training with complete personalization.

[01:43:52] You personalize it just like the bad guys are personalizing them. The result is a more resilient security culture, which is absolutely essential because guess what? The call's coming from inside the house. Companies like Plaid use Adaptive. Plaid's platform powers thousands of digital finance apps and links consumers, developers, and institutions. You better believe they need help with Adaptive security.

[01:44:19] With sensitive data at its core, Plaid's security and compliance are absolutely non-negotiable. Plaid's head of security GRC says, quote, Adaptive has equipped our teams with cutting-edge tools and built a smarter, more resilient security culture across the company. Adaptive really works. Trusted by Fortune 500s and backed by NVIDIA and OpenAI, Adaptive is building the defenses we need for the AI era.

[01:44:46] Learn more at AdaptiveSecurity.com. That's AdaptiveSecurity.com. It really works. Cool. A listener, David Benedict, said, Hi, Steve. Not to pull you back, but he's going to, into the whole code signing discussion again. A lot of our listeners have expressed a strong interest. He said,

[01:45:12] But what if we simply don't buy those code signing certs? What if we simply start self-signing code? Is there anything to stop us from self-signing and building our own reputation that way? Thanks, Dave Benedict. So, okay, that's an interesting idea.

[01:45:31] The moves that the CA/Browser Forum has been making on the code signing front feel entirely different from their earlier squeezing on the TLS certificate side.

[01:45:45] The reason Let's Encrypt was able to effectively replace and displace the traditional certificate authorities for the world's web server domain validation certificates is that Let's Encrypt is only providing what its name suggests. Encryption. Let's Encrypt. It's making no assertion of any kind about the reputation of the domain name holder.

[01:46:13] And when you think about it, where strong assertions of identity are needed and are being made about the owner of a certificate, you know, whether for a web domain, maybe the digital signer of a document, that matters too, or the authorship of code.

[01:46:33] We do need entities such as certificate authorities standing by to do the necessary work of verifying identity and carefully issuing certificates which attest to what their research has found. Unfortunately, while we've been going about our lives, the certificate authority business has been quietly consolidating.

[01:47:00] This has sometimes been triggered, as we've covered on the podcast, when an irresponsible certificate authority so flagrantly abuses its position of trust that the various root programs are finally forced to revoke their trust. In those cases, the disgraced CA is forced to sell off its certificate authority business assets to another certificate authority.

[01:47:26] In other cases, it's just a bigger fish swallowing up a smaller fish, reducing the competition. While I was scouting around for a new code signing certificate authority, I noticed that many of the smaller looking companies had exactly the same pricing as DigiCert. It turned out that many of them have simply become fronts for DigiCert's products. They're just resellers.

[01:47:51] The upshot of many years of CA industry consolidation is that the world no longer has a competitive certificate authority industry. We are watching the formation of a monopoly that has the gall to charge its customers per signature. We can see the writing on the wall.

[01:48:19] There are already plans like that happening. It's where we're headed. Dave began his note writing, Hi, Steve, not to pull you back into the whole code signing discussion again. It's not your fault, Dave. This whole thing obviously rubs me the wrong way. One of my personality hot buttons happens to be bullying.

[01:48:39] I've never been okay with the abuse of power, which is what I believe anyone observing the actions of the CA industry would conclude is happening. I don't see any way out of this, but I will gladly share any solutions I find. To that end, during this research, I discovered that all of the various CAs, certificate authorities, who offer code signing certificates.

[01:49:07] Remember that now any code signing certificate must be in hardware. You no longer get a software certificate. All of the code signing offering CAs provide the option of installing certificates into a customer-provided HSM, a hardware security module, rather than selling the certificate pre-installed in their own dongle token.

[01:49:36] Typically, they charge another $100 for that. But that's it. That's all, period. The reason I'm mentioning it is I found a gorgeous $72 USB-A form factor HSM dongle that I love. It's called the SmartCard-HSM 4K.

[01:50:06] 4K because it can handle 4096-bit RSA keys, which is now what's necessary. It also does elliptic curve keys, which can be much smaller. I have a link to this device in the show notes to one particular retailer of this device. It's got its own website at smartcard-hsm.com.

[01:50:33] And most significantly, all of the card's multi-platform support is open source. So this is a fully open source $72 beautiful little hardware security module. I've got a link to its GitHub page in the show notes.

[01:50:54] One of the very cool features of this HSM for me is that it enables secure and encrypted cross-HSM private key and certificate transfers. In other words, I have multiple machines where I want to be able to sign code. I've got two working locations and GRC servers in the Level 3 data center.

[01:51:22] So I first had the first HSM securely generate a 4096-bit RSA key pair. The private key never leaves the device, which is what the certificate authorities require. But the public key is exported in a CSR, a certificate signing request.

[01:51:46] I uploaded that CSR to IdenTrust for it to receive their signature. They promptly returned the resulting certificate, which is then used to verify any signatures that the HSM generates for my code, since it'll be doing the code signing.
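For anyone who hasn't seen that flow before, here's a rough software-key sketch of it using OpenSSL. Note the assumptions: on the real HSM the key pair is generated on the token (typically via PKCS#11) and the private key never leaves it, whereas this illustration writes the key to a file, and the subject fields are made up.

```shell
# Software-key sketch of the CSR flow (a real HSM keeps the private key
# on the token; here it lands in a file purely for illustration).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out signing.key

# The CSR carries only the public key plus the identity fields the CA
# will verify; the subject below is a made-up placeholder.
openssl req -new -key signing.key \
    -subj "/CN=Example Code Signing/O=Example Co" -out signing.csr

# Sanity-check the CSR's self-signature before uploading it to the CA.
openssl req -in signing.csr -verify -noout
```

The resulting signing.csr is what gets uploaded to the CA; it contains only the public key and identity fields, never the private key, which is exactly why the CAs can accept a customer-provided HSM.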

[01:52:08] One of the many cool things about this solution is that each of these HSMs includes its own permanent device certificates that enable it to establish secure key sharing among others of its kind.

[01:52:30] This allows one HSM's private keys to be securely duplicated across many other devices, as many as you may wish, as well as being externally backed up for export without ever being able to expose its private key. So it meets all the requirements for security, yet gives us, as HSM users and code signers, way more flexibility.

[01:53:00] Each HSM also has a large amount of storage with room for hundreds of keys and certificates and whatever you want to put in there. PGP, GPG, all of that stuff is supported. All of the platforms are supported. And everything is open source on GitHub. So, for example, if an enterprise might have a number of trusted developers work from home, satellite offices or whatever,

[01:53:25] for the price of $72 each, as many developers as needed can be given the ability to securely sign code. Anyway, there's much more than what I've shared. So, if you have an interest or need, check out smartcard-hsm.com. The retailer I used is CardLogix, cardlogix.com, L-O-G-I-X. I've got a link in the show notes. They're the retailer I found; they happened to be near me in the U.S.

[01:53:54] The where-to-buy page at smartcard-hsm.com also lists a German and a Taiwanese reseller. So, if you're over in Europe, you can find someone near you, or in Taiwan. I have a Nitrokey. I have a bunch of Nitrokeys. It works on those too, which I didn't realize. Yes. Nitrokey is also supported by all of the software.

[01:54:15] My original DigiCert EV code signing certificate does not expire until November 20th of this year, as it happens, but I wanted to obtain a three-year certificate before they were no longer available. My plan has been to see whether I can pre-establish a reputation for the new, now three-year duration,

[01:54:44] IdenTrust certificate by having it co-sign GRC's freeware. That's now in place. GRC's most popular freeware, for example, ValiDrive, which is now being downloaded 1,000 times a day, is now co-signed both with DigiCert's original certificate and the new IdenTrust certificate.

[01:55:09] So I'm hoping that once we get to November, I'll be able to drop the DigiCert signatures, because that DigiCert code signing certificate will have expired, and that my newer IdenTrust certificate, which will by then have had 10 months of exposure to

[01:55:33] Windows Defender and SmartScreen, or whatever the hell Microsoft calls it, will be able to stand alone. Microsoft will have seen this and gotten used to it, and I'm hopeful that it will keep Windows' trigger-happy gatekeepers happy. Okay, and then finally, just to see whether I could, because I had so much fun playing with this new technology,

[01:55:56] last week, as I mentioned, while talking to DigiCert, I also reissued my existing DigiCert certificate. Instead of having them provide me with a dongle, which is what they did the first time two and a half years ago, I did it in this customer-provided HSM mode.

[01:56:16] That allowed me to add DigiCert's certificate into my new HSMs alongside the newly minted IdenTrust certificate. It all worked perfectly. Now I have HSMs containing both the existing DigiCert code signing certificate, which expires in November, and the new IdenTrust code signing cert, which is good for three years.

[01:56:44] So, okay, believe it or not, I haven't forgotten about David. He started me off on all this by asking about the possibility of coders sidestepping all this nonsense by using self-signed certificates. Now, the use of self-signed certificates has been common practice for web developers for many years.

[01:57:05] I have a self-signed certificate for localhost sitting in the trusted root stores of my various workstations. I run a web server on those machines, which uses that certificate. And I use it for local web development. Having a self-signed certificate for localhost allows me to use HTTPS URLs starting with, you know,

[01:57:35] HTTPS colon slash slash localhost slash and then whatever, without the web browsers that I'm using pitching a fit. You know, so it's just very handy. Okay, so let's explore how this might be extended for code signing. If, rather than having DigiCert or IdenTrust or whomever sign my code signing request, I could instead use my private key to sign the certificate,

[01:58:05] creating a self-signed certificate, which would then be installed into the system's trusted root store, how would that work? Well, from that point on, any code I signed would carry that root certificate's matching public key. And any check on the validity of my code's signature on this machine would show its signature to be valid.

[01:58:35] But the reason this is not a useful solution for a software publisher, unfortunately, such as GRC, is that these code signatures would only be valid on machines that had previously installed its matching root certificate.

[01:58:57] What DigiCert, IdenTrust, and all the other CAs have going for them is that their root certificates are already pre-installed wherever any certificates they have signed might need to be trusted. Since Windows has now developed the practice of deleting on sight any executable that appears on its drive without a valid and trusted signature,

[01:59:26] especially one downloaded from the internet (and that's presumably why people are still able to compile and run their own code, since it came from them, although I've compiled my own code too and had Windows immediately stomp on it), it would be necessary for GRC, if I were using a self-signed cert, to tell its customers

[01:59:49] that before attempting to download, let alone run, any of our software, they must first download and install GRC's own trusted root certificate. Well, since I would never install anyone else's root certificate into my machine's root store, I would never ask anyone else to do that for us in order to download and run my code.

[02:00:17] The burden of making my code acceptable to someone else's machine should be on me, not on them. So, while signing one's own code for use on our own machines would work, just like using a self-signed web certificate for local use of a web browser and web server, I don't see any way to break the certificate authorities' stranglehold on developers for code signing that needs to be universally trusted.
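The chain-of-trust logic being described here can be sketched as a toy model (all names are hypothetical; real code signing uses X.509 chains and platform APIs, not this simplification): a signature only verifies on machines whose trust store already contains the signer's root.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: "Cert | None" = None   # None means self-signed (a root)

    @property
    def fingerprint(self) -> str:
        # Stand-in for a real certificate thumbprint.
        return hashlib.sha256(self.subject.encode()).hexdigest()

def signature_trusted(signing_cert: Cert, trust_store: set[str]) -> bool:
    """Walk up the chain; trust only if the root's fingerprint is pre-installed."""
    cert = signing_cert
    while cert.issuer is not None:
        cert = cert.issuer
    return cert.fingerprint in trust_store

# A CA root that ships pre-installed everywhere, vs. a publisher's own root.
ca_root = Cert("Public CA Root")
grc_root = Cert("GRC Self-Signed Root")
ca_signed = Cert("GRC Code Signing", issuer=ca_root)
self_signed = Cert("GRC Code Signing", issuer=grc_root)

everyones_store = {ca_root.fingerprint}                       # what ships with the OS
publishers_store = {ca_root.fingerprint, grc_root.fingerprint}  # the publisher's own machine

print(signature_trusted(ca_signed, everyones_store))    # True: CA root pre-installed
print(signature_trusted(self_signed, everyones_store))  # False: nobody has the publisher's root
print(signature_trusted(self_signed, publishers_store)) # True: valid only locally
```

The model makes the asymmetry concrete: the CA-signed cert validates everywhere because the CA's root ships in every trust store, while the self-signed cert validates only on machines where its root was manually installed.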

[02:00:46] And as I said before, I get the need for certificate authorities. Just for encrypting traffic to web domains, we don't need them. That's why Let's Encrypt is a viable alternative, and that's why it works. Because all we're saying is, encrypt this traffic to wherever I'm going. And where am I going? I'm going to this domain name.

[02:01:10] We need a third party like a CA when we need the ability to digitally sign a document and have that signature mean something, because we proved who we were to them, or to sign our code. Or if we want an organization validation certificate for TLS web servers, not just a domain validation certificate.

[02:01:35] So I'm not saying that certificate authorities don't have a place and that we don't need them. It's the abuse I have a problem with. Now, there is one place where self-signing could make sense. Because everything I said about individual developers like me does not apply to enterprises, right? Enterprises could buy a cert from DigiCert.

[02:02:05] They could use a publicly trusted code signing certificate for their internal use. But within an enterprise, it might also work to sign code with a certificate that is only trusted on the enterprise's own machines. Remember that many enterprises already install their own TLS root certificates on all internal workstations

[02:02:33] so that their networking middleboxes are able to intercept, decrypt, and scan everything that enters and exits their network. You can't get on the enterprise LAN and out to the outside world unless you have their TLS cert in your enterprise workstation machine.

[02:02:55] So I could see that it would make sense to add a private code signing root certificate to all enterprise machines for their own internal use. On the other hand, if you're an enterprise, you may not care that much about what the various CAs have now chosen to charge for the privilege of signing code. Although it does appear to be escalating over time.

[02:03:23] DGC wrote, hey, Steve, long time listener, but I'm a few episodes behind right now. In episode 1062, you said, quote, we also see employees in positions of trust on internal enterprise networks being tricked into clicking malicious links and inviting malware inside the house. No form of fancy coding AI is going to fix any of those things. And then he writes, that may not be entirely true.

[02:03:52] I recently came across a new solution, Charlemagne, which runs on a desktop and monitors privately, locally, what the user is doing. It uses an SLM, a small language model, to detect potentially malicious actions like lookalike websites, malicious links the user might click on, etc. And then warns the user not to do those things.

[02:04:21] So an AI agent helping a user avoid accidental bad actions could be helpful. No? To which I say, could be helpful, yes. And I very much like the idea. As we were saying, you know, the talk that Leo and I will be holding during the Zero Trust World event is titled, the call is coming from inside the house.

[02:04:46] You know, that is obviously a metaphor for what I believe to be the biggest and most intractable problem facing today's enterprises. You know, stated as succinctly as possible, the problem is overprivileged users making mistakes. Though the term overprivileged is typically used in a derogatory context. I don't mean it that way at all.

[02:05:11] I'm using it in its strict computer science context, where overprivileged is the result of not following a strict least privileged model. The beauty of describing the problem as overprivileged users who make mistakes is that it points us toward two solutions to the problem.

[02:05:33] Either we no longer overprivilege our users, or we somehow arrange to prevent them from making mistakes. Whereas it might be possible to constrain what the employees of an enterprise might be able to do on the enterprise's network, a large part of the joy of using a personal computer is that its use is personal, which is to say almost entirely unconstrained.

[02:06:01] We can do anything we want with our own machines. Since operating within a least privileged environment is no fun and would not be tolerated by personal computer users, that suggests that the solution for personal computer users lies in somehow arranging to prevent their mistakes. Something previously not thought possible.

[02:06:25] So to that end, I love the idea of some form of active client side local AI agent that carefully scrutinizes everything the user is seeing and doing and interposes itself between the user's actions and the computer system.

[02:06:48] Leo, I and our listeners know how to examine a URL to detect trickery. But even the best of us are not always paying 100% attention. And even when we are, we might miss the use of embedded Unicode characters to create a lookalike URL.

[02:07:10] Or in our haste, we might click a link without first carefully checking all the way back to the right of its domain portion to make sure that its top-level domain is what we expect. So by all means, the idea of having an AI agent peeking over our shoulder and watching our back to help detect the things we might well miss, I think it makes all kinds of sense.
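A minimal sketch of the kind of check such an agent might run on a hostname (function names are hypothetical; real confusable detection, as specified in Unicode TS #39, is far more thorough): flag hostnames that contain non-ASCII characters or mix scripts, the trick behind lookalike URLs.

```python
import unicodedata

def scripts_used(host: str) -> set[str]:
    """Collect the script family (first word of the Unicode character name)
    for every alphabetic character in a hostname."""
    return {
        unicodedata.name(ch).split()[0]   # e.g. "LATIN", "CYRILLIC", "GREEK"
        for ch in host
        if ch.isalpha()
    }

def looks_spoofed(host: str) -> bool:
    """Heuristic: non-ASCII characters or mixed scripts suggest a lookalike."""
    scripts = scripts_used(host)
    return (not host.isascii()) or len(scripts) > 1

print(looks_spoofed("paypal.com"))    # False: plain ASCII Latin
print(looks_spoofed("p\u0430ypal.com"))  # True: second letter is Cyrillic U+0430
```

The second hostname renders identically to the first in most fonts, which is exactly why a machine check beats eyeballing the address bar.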

[02:07:39] And needless to say, I hope that it would totally, you know, freak out if its user were getting ready to paste into their Windows Run dialog clipboard contents which had been placed there by their web browser. So, you know, if Microsoft wants to deploy AI, Leo, instead of having something recording everything I do,

[02:08:05] I would much rather have something watching, you know, running locally, not phoning home, but making sure I don't click a link in email that could get me in trouble. I am 100% bullish, and I'll bet you we're going to end up seeing that. You know, you and I have complained for years that antivirus software has essentially become passe, obsolete.

[02:08:32] You know, I don't know anybody who would install it now, except people who have, you know, a loyalty to packages that they were using in the past, and so that has endured. I don't run any. I just, you know, I'm careful about what I do, and I assume that Windows is going to find something. And it never has, except it's found my own code, which is really annoying, because, you know, that's just what it does.

[02:08:57] So I've had to isolate a whole tree in my directory system in order to say, leave it alone. And in fact, I discovered that in Windows 11, there's something coming called a dev drive, because their own AV has become so intrusive and such a problem that they've, they said, okay, we're going to create a thing in Windows 11 called a dev drive,

[02:09:24] where we're going to give you responsibility for what's there, because they've had no choice. They're driving anyone developing on Windows crazy because, in order to protect users, they have to delete anything suspicious on sight. Right. I mean, it's becoming universal. Apple's going in that direction. Google's going in that direction. Everybody's doing that. Code signing is the future, I think, unfortunately. Yeah.

[02:09:53] I use, by the way, I use Claude now to do security audits on everything. And it's very good at finding security flaws and fixing them. It's- And we covered a couple instances of that last week, where there was a guy who was running Claude, he had his, he had a WordPress site and server and had a bunch of WordPress add-ons and had Claude checking them before he put them online.

[02:10:22] And in one case, it found many problems. And in one, it was a really bad problem that he was like, you know, really glad for. So we're going to see a lot of cleanup on aisle nine, I think. Claude, clean up on aisle nine, bring them up. All right. We're going to take a break and then we're going to go tong-tok, do a little Klingon in just a bit. You're watching Security Now, the early edition of Security Now.

[02:10:50] Don't get your hopes up. This is, we're not, we're going to go back to Tuesday after this week. But for those of you who have a free Sunday and can watch the show, it's great. I'm glad you're watching live. We do this stream on YouTube and Twitch and X and Facebook and LinkedIn and Kick and, of course, in our club. Lots of people do like to watch live, but you can always download copies from a variety of places. I'll tell you where at the end of the show.

[02:11:17] Our show, this segment of the show brought to you by OutSystems, the number one AI development platform. This is for enterprises and it is so cool. Well, you see it happening, the agentic shift. I mean, that's all we've been talking about since Claudebot, right? We're moving beyond simple chatbots. Yep. Well, you know what? OutSystems is right there with you leading the agentic conversation.

[02:11:45] OutSystems helps businesses build AI agents that can actually do work, take actions, make decisions and integrate with data rather than just answer questions. OutSystems is solving the talent gap because there just aren't enough AI engineers in the world. OutSystems empowers the developers you already have, the smart people you already have to build at an elite level.

[02:12:12] OutSystems is the secret weapon behind the world's most successful companies. And not just for small apps. They are for massive, complex systems. OutSystems. They run banks, insurance companies, government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip and replace nightmare. And that is nice. They helped a top US bank, for example, deploy an app that lets customers open new accounts

[02:12:41] on any device, delivering 75% faster onboarding times. OutSystems. A global insurer. This was an in-house app. They helped a global insurer accelerate the development of a portal and app for its insurance agents, giving them a 360-degree view of customers, enabling those agents to grow policy sales. And it really worked. OutSystems combines the speed of AI with the guardrails that you're going to want of low code.

[02:13:11] It's kind of a match made in heaven. It's the safest and fastest way for an enterprise to go from, we need an AI strategy to, we have a functioning AI application. Stop wondering how AI will change your business and start building the agents that will lead it. Visit OutSystems.com slash twit to see how the world's most innovative enterprises are using AI-powered low code to transform.

[02:13:38] That's O-U-T-S-Y-S-T-E-M-S dot com slash TWIT to book a demo and see the future of software development. Very cool. OutSystems.com slash twit. Thank you, OutSystems, for supporting Security Now. And now, KongTuke. So, yes, I wanted to finish today's podcast by sharing a newly arrived spin on the ClickFix

[02:14:07] attack, which we've discussed previously and which has me really worried. Remember, that's the attack where the familiar CAPTCHA prove-you're-human test is maliciously extended to ask its victim to please paste the contents of the Windows system clipboard into the Run dialog and just press Enter. Just trust us.

[02:14:36] Prove you're human by doing that. Right. In the newer form of this, its discoverers, Huntress Labs (that's the name I couldn't remember at the top of the show), dubbed this CrashFix, because the victim's web browser is made to appear hung, broken, or defective; thus, crashed.

[02:14:58] And as for the Klingonesque Tong Kuk (wait: KongTuke, KongTuke), it's the name Huntress Labs has given to this threat actor, which they've been tracking for the past year. So, Huntress wrote: in January 2026, Huntress senior security operations analyst Tanner Philip observed threat actors using a malicious browser extension to display a fake security

[02:15:28] warning claiming the browser had stopped abnormally and prompting users to run a scan to remediate the threats. That looks very credible. I would fall for that. Yes. And that is why this is so compelling. This could come up and you would think, oops, OK. It's exactly what a Microsoft pop-up looks like. It says Microsoft Edge stopped abnormally.

[02:15:58] And it says Microsoft Edge has detected potential security threats that may compromise your browsing data. Oh, that's not good. You would believe that. And then there's a run scan button. And you'd think, oh, scanning is good. And then down below, there's a checkbox checked by default. Help make Microsoft Edge better by reporting current system information. And of course, you would think, oh, I got to prevent this from getting other people.

[02:16:26] So, I mean, again, is this all it takes? If you hit that button, you're done. No, not yet. OK, so that's the good news. But it does get you involved, right? They said, our analysis revealed this campaign is the work of KongTuke, a threat actor we've been tracking since the beginning of 2025.

[02:16:51] In this latest operation, we identified several new developments: a malicious browser extension called NextShield that impersonates (get this, Leo) the legitimate uBlock Origin Lite ad blocker, and impersonates it by stealing its source code; and a new ClickFix variant we've dubbed CrashFix

[02:17:15] that intentionally crashes the browser, then baits users into running malicious commands. I forgot to mention, they don't make it that clear here: they have a script which just brings the browser down. So that was a real crash. Well, no, because it invokes their dialog next. Oh, OK. But they do crash the browser. So your browser is dead.

[02:17:45] Your browser's dead. And you get their dialog. And it's credible because your browser's dead. Exactly. Yeah. Exactly. Exactly. So they said, ironically, the victim who got infected by this was searching for an ad blocker when they encountered a malicious advertisement. The ad directed users to the official Chrome Web Store's NextShield advanced web filter.

[02:18:16] Then they said, the deliberate targeting of domain-joined machines, which is what this thing ends up doing, suggests KongTuke is after corporate environments where a foothold means access to Active Directory, internal systems, and lateral movement opportunities. This is terrifying. Yes, it is.

[02:18:40] And look at the next page, where you see the next stage of this attack, which is what you get after you click the scan button. Then you get the familiar routine: you know, press Win+R, press Ctrl+V, press Enter. Bing, bing, bing. So that's interesting. So they put the malicious code on your clipboard. Yes. Oh, so you don't even see that text. Nope.

[02:19:08] All you do is follow the instructions. Oh, and here it is again: edge.exe -fix-browser -hash. Yep. So they said home users on a standalone workstation receive a separate infection chain; they have an enterprise infection chain and a home user infection chain. Sophisticated as hell. Yes. Good Lord. Oh, and they said that the home infection chain appears to still be in testing.

[02:19:37] They said, when we got through all the layers, the C2, the command and control infrastructure, responded in a home environment with a test payload, meaning it didn't do anything yet. They said, whether this means non-domain targets are lower priority or the campaign is still being built out, one thing is clear:

[02:20:01] KongTuke is evolving their operations and showing increased interest in enterprise networks. They said, so what are CrashFix and NextShield? The malicious NextShield app is made all the more diabolical by being a fully functioning clone of the authentic open-source uBlock Origin Lite browser extension.

[02:20:26] So its users will be pleased with the ad- and annoyance-blocking behavior of the extension they've just found and installed. But after using their browser for a while, the bogus "Microsoft Edge stopped abnormally" display will appear with its run scan button.

[02:20:47] Upon pressing that, the user will be presented with a fake "security issues detected" alert and instructed to manually fix the issue by opening the Windows Run dialog with Win+R, pasting from their clipboard with Ctrl+V, and pressing Enter.

[02:21:11] The malicious extension silently copies a PowerShell command to the clipboard, disguised as a legitimate repair command. When the user follows these steps, they unknowingly execute the malicious command. They said, so we tried copying the displayed command,

[02:21:38] which starts with edge.exe hyphen fix hyphen browser space hyphen hash equals blah, blah, blah, blah, like civilized malware analysts. That's what they're calling themselves. Of course they are. They said, the browser's response: a complete freeze. When your fix causes crashes, the name writes itself. Thus, they named this CrashFix.

[02:22:04] Before we go deep diving into how we ended up with a malicious pop-up message, let's take a step back and look at how it got delivered. You've probably heard the recommendation to install an ad blocker to protect yourself from malvertising malicious advertisements that deliver malware through legitimate ad networks. Our victim likely just wanted to get rid of annoying ads.

[02:22:27] Instead, they downloaded a malicious one, NextShield, while searching for an ad blocker for Chrome. This header falsely attributes the code to Raymond Hill, the legitimate developer, as we know, of uBlock Origin, and references a non-existent GitHub repository. This tactic exploits the trust users place in well-known open source projects.

[02:22:56] The actual uBlock Origin Lite repository is located at github.com slash uBlockOrigin slash uBOL hyphen home, not the URL referenced in this malicious extension. The NextShield extension is almost entirely, they write, a clone of uBlock Origin Lite, the legitimate extension by Raymond Hill.

[02:23:21] The threat actor likely ran a few find-and-replaces to replace every instance of uBlock with NextShield. Okay, so then Huntress continues with their analysis of this latest discovery of theirs. But for us, the takeaway is that the malware community at large has stumbled upon a fundamental security weakness of Microsoft Windows,

[02:23:50] which is its users' comparatively superficial, script-following level of understanding of Windows when set against Windows' increasing power and sophistication. It's no longer useful to ask what can be done with PowerShell and .NET. The question is, what cannot be done?

[02:24:21] That pairing, you know, PowerShell and .NET, comes to mind because, while I was assembling today's podcast, I encountered other exploits which did exactly that. And this one is also using a PowerShell command. It relied on naive users' invocation of PowerShell with a command they did not understand. They are just following instructions.

[02:24:49] Now that we're encountering a proliferation of similar abuses of powerful commands escaping the browser with unwitting users blindly pasting and executing these commands that they did not write and do not understand, it should be clear that this story only has one ending.
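As a sketch of what scrutinizing such pasted commands might look like, here is a toy heuristic (the pattern list is entirely illustrative, not any shipping product's logic) that scores a command line before it is allowed to run:

```python
import re

# Indicators commonly seen in ClickFix-style one-liners (illustrative, not exhaustive).
SUSPICIOUS_PATTERNS = [
    r"powershell",      # shelling out to PowerShell
    r"-enc\w*\b",       # encoded command payloads
    r"\biex\b",         # Invoke-Expression alias
    r"downloadstring",  # in-memory download cradle
    r"\bmshta\b",       # LOLBin often abused for script execution
    r"\s{10,}",         # long whitespace runs hiding the payload off-screen
]

def flag_pasted_command(cmd: str) -> list[str]:
    """Return the patterns a pasted command matches; an empty list means no flags."""
    lowered = cmd.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

benign = "notepad.exe"
crashfix_like = ("edge.exe -fix-browser -hash=x" + " " * 40 +
                 "& powershell -enc SQBFAFgA")

print(flag_pasted_command(benign))         # []
print(flag_pasted_command(crashfix_like))  # multiple hits
```

The whitespace-run check matters because real ClickFix commands pad the visible front of the line with a benign-looking token so the Run dialog's narrow text box never shows the payload.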

[02:25:11] Sooner or later, Microsoft will need to step up to protect users from themselves, much as they did with the mark of the web, which flags files that were downloaded across a network. Files containing the mark of the web are treated much more cautiously and with skepticism by Windows. You know, you're asked, are you sure you want to run this? This was downloaded from the Internet.
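Windows implements the Mark of the Web as a small NTFS alternate data stream named Zone.Identifier attached to downloaded files. Its contents are plain INI text, so reading it can be sketched like this (the stream layout is real; the sample contents and URLs are hypothetical):

```python
import configparser

# Typical contents of the "filename:Zone.Identifier" alternate data stream
# that Windows attaches to a file downloaded from the internet.
ZONE_STREAM = """\
[ZoneTransfer]
ZoneId=3
ReferrerUrl=https://example.com/download
HostUrl=https://example.com/tool.exe
"""

ZONE_NAMES = {0: "Local machine", 1: "Intranet", 2: "Trusted", 3: "Internet", 4: "Restricted"}

def zone_of(stream_text: str) -> str:
    """Parse a Zone.Identifier stream and name its security zone."""
    parser = configparser.ConfigParser()
    parser.read_string(stream_text)
    zone_id = parser.getint("ZoneTransfer", "ZoneId")
    return ZONE_NAMES.get(zone_id, "Unknown")

print(zone_of(ZONE_STREAM))  # Internet
```

On an actual Windows machine the stream would be read with something like `open(r"tool.exe:Zone.Identifier")`; ZoneId 3 is what triggers the "this was downloaded from the internet" warnings Steve describes.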

[02:25:41] The system's clipboard needs to be handled similarly. Contents that were sourced by any web browser need to be quarantined. Like I said, there's only one possible ending to this trouble. This problem is not going to go away because users are not going to get better or smarter suddenly. Let's hope Microsoft does not wait too long before updating Windows with this change. I wish I believed they would act responsibly here.
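The suggested fix, a mark-of-the-web for the clipboard, could be modeled like this (a toy sketch under the assumption that clipboard entries carry an origin tag, which no shipping OS clipboard does today): pastes of browser-sourced text into execution-capable targets get blocked or challenged.

```python
from dataclasses import dataclass

@dataclass
class ClipboardEntry:
    text: str
    origin: str  # e.g. "browser", "editor", "terminal"

# Targets where executing pasted text is one keystroke away.
EXECUTION_TARGETS = {"run_dialog", "terminal", "powershell"}

def paste_allowed(entry: ClipboardEntry, target: str) -> bool:
    """Quarantine rule: browser-sourced text may not be pasted into
    an execution target without explicit user confirmation."""
    if target in EXECUTION_TARGETS and entry.origin == "browser":
        return False  # block, or prompt the user with a warning
    return True

evil = ClipboardEntry("edge.exe -fix-browser ...", origin="browser")
note = ClipboardEntry("meeting at 3pm", origin="editor")

print(paste_allowed(evil, "run_dialog"))  # False: quarantined
print(paste_allowed(evil, "editor"))      # True: harmless destination
print(paste_allowed(note, "terminal"))    # True: not browser-sourced
```

The rule is deliberately narrow: ordinary copy-and-paste keeps working everywhere, and only the one gesture the ClickFix family depends on, browser-to-shell, gets interrupted.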

[02:26:12] You know, we can hope. And I'll just note that creating a third-party utility to fix this (because I kind of thought, well, maybe I should fix this) won't help. No. Since it's all of the people who would never know about such a utility who need it the most. Right. You know, we listeners of the podcast don't need it. The only fix for this has got to come from Microsoft as an integral part of Windows. Yeah. It's got to be built in. Yeah. Yeah.

[02:26:41] Or it's not going to happen. It's got to happen. Give your folks Chromebooks, kids. And I just had a neighbor, as a matter of fact. And Lori and I encountered him while we were taking a walk yesterday. He's an ex-engineer. He said, I just got a Chromebook. He says, I love it. Yeah. And he said, I can't believe how everything imported into it. I mean, he was just blown away by it. It's all most people need, honestly. He is an Android user.

[02:27:11] And so when he explained that he connected his Android phone, I said, OK, well, that helps to explain its importation, at least. But Microsoft considered this with Windows S, and I really wish they'd followed through. I think Apple and Microsoft should both offer ChromeOS-like, very restricted environments, and then let those of us who know what we're doing use the less restricted environments. Yeah. It would be a real good solution. Way too powerful for most people.

[02:27:41] They don't need all of this. They get lost in directory hierarchies and, you know, directory privileges. And, you know, basically you're running a machine you don't understand the operation of at all. And really, today, who among us does? We have a deeper understanding. But I remember when we actually knew what the files were on our own hard drive.

[02:28:07] We were editing our autoexec.bats and our config.sys. And you remember my very techie friend, Bob, you know, he was like he was complaining. He like, I still know what everything does. And I said, well, Bob, good luck with that. Not for long. Yeah. Yeah. I mean, basically that's what mobile OSs are, are highly restricted operating systems. They're not truly general purpose operating system.

[02:28:34] Anything that's general purpose is going to be able to do anything. Including run malware. I can't even use my iPhone anymore, Leo. No. It's got, you know. It's locked down. Oh. Well, there's just so much in there. You hold this. Oh. It is. It's too complicated. You slide something. You go three times to the right and click your heels. And then you get a magic dialogue. Too many hidden things. Yeah.

[02:28:58] I spend many, many hours of my life looking through the settings, trying to find the setting that I want to change. And, you know, it's just so hard. And remember, that was the breakthrough from Xerox PARC of the menu. Right. The commands were discoverable. You could browse around and see everything. And, in fact, that's one of the big changes coming in the next version of the DNS benchmark

[02:29:26] that everybody will get for free is I put a traditional Windows menu on it instead of just overloading the system menu underneath the icon in the upper left. It's so much better. It's like, come on. Why did that take so long? Nice. Well, we'll look forward to that. That's a good reason to go to GRC.com. That's Steve's website. It is where you find the two programs he sells.

[02:29:52] This is entirely how he makes his living: with, of course, the wonderful SpinRite, the world's best mass storage maintenance, recovery, and performance-enhancing utility, and the new one, DNS Benchmark Pro. It's only 10 bucks, $9.99. But, boy, it really can make a big difference in speed. I've noticed lately (I've got to run it again) that our DNS has become slow, and it feels like the whole internet slowed down, but it's not. It's just the lookup. And fixing that can really speed up everything.

[02:30:22] So check those out at GRC.com. While you're there, you can get a copy of the show. Steve has a bunch of unique versions because he's an iconoclast; he goes his own way. He's got the 16-kilobit audio version, which doesn't sound great, but it has the virtue of being small. He has the 64-kilobit audio version, which does sound great and is all you really need. He also has the show notes there, which he prepares. I mean, these are really complete.

[02:30:51] I highly recommend reading the show notes. Around 20 pages today; it's 21, actually. That's all the stuff you hear on the show, with the links and everything and the pictures. I highly recommend that. That's all at GRC.com. He also has transcripts written by a human. Our transcripts are all AI-generated because we're lazy. Steve is not. He gets Elaine Ferris, who is a very talented transcriptionist, to write it all down. You can get those transcriptions a day or so after the show at GRC.com.

[02:31:20] You should also go to GRC.com slash email if you want to register your email address. Steve will whitelist it. That way, you can send him those great pictures of the week or ideas or questions. GRC.com slash email. And that's where you would sign up, if you wish, for the two newsletters he sends out, the weekly newsletter, which is the show notes, and then the very infrequent newsletter. Probably we'll send one out, I imagine, when you update the DNS Benchmark Pro. That's the product announcement newsletter. Both of those, GRC.com slash email.

[02:31:50] We have copies of the show at our website, twit.tv slash sn. We have 128-kilobit audio. For technical reasons having to do with Apple, we have to make it a little bit bigger. So if you want it smaller, get it from Steve. We also have the video, which is unique to us. There is a website dedicated to the show, twit.tv slash sn. There's also a YouTube channel with the video.

[02:32:18] That's a great way to share little clips. Do us a favor. Steve is now the number two most subscribed YouTube channel in the twit universe. And I know he'd like to be number one. So subscribe. I was. Last week, I was number one. Well, the general twit channel is number one. You're number two. You beat the twit show, though, which is pretty good. Yeah, that's what I was telling you. Yeah. The twit channel itself is like the central channel. That has 280,000 subscribers.

[02:32:48] So that's going to be a hard one to beat. But you're at 76,000. If everybody who's listening right now subscribes, you'll get right up there. There's also, of course, the fact that it's a podcast, so it's in every podcast directory. Every podcast app should have Security Now. And you can subscribe there and get it automatically. And that's nice. We do the show live for your entertainment, if you want the freshest version, every Tuesday normally (not today, but every Tuesday normally) at 11 a.m. Pacific, 2 p.m. Eastern.

[02:33:18] Now, the next time we do it, we will be in daylight saving time. It'll be summertime. So yes. That happens next Sunday, then. Yeah. A week from Sunday. Yeah. A week from today. Yes. Oh, today. Yeah. March 8th. Yes. Today is Sunday. So, yeah. So next Tuesday, we're going to be at 1800 UTC. We change; UTC doesn't. But the math is so crazy. It's not rational.

[02:33:48] It's irrational. So watch us live if you want. We stream, as I mentioned, on Twitch and YouTube and x.com, Facebook, LinkedIn, Kick, and, of course, in our Club TWiT Discord. If you are not a member of Club TWiT, I do want to urge you to join, because that is how we stay alive. It is not a luxury by any means. It's a matter of life or death. The club supports everything we do, about a third of our operating expenses.

[02:34:16] And it's a way of saying, hey, I appreciate what you're doing. I want to support it. And we do like offering it for free. Ads support the free version because we want everybody to get it. So nobody has to pay. But if you can afford to, it would be nice to support that. That way you're supporting Steve. You're supporting our team. And you're supporting the free version for people who can't afford to pay. You do get some benefits, access to the Club TWiT Discord, which is always a hoot, if you will. A lot of fun in there.

[02:34:47] Some very fun people. You will also find yourself listening to special shows that we only put out in the club. But mostly you'll get that great warm and fuzzy feeling that you know you've made some people at TWiT very happy and you're keeping these shows on the air. Steve, I did find the clip that Anthony Nielsen made. He created this with a local AI, not even one of the big frontier models, but a Chinese model, Qwen, that does very good text to speech.

[02:35:15] He says he only trained it with about two minutes of my voice and was able to make this little phishing recording. "Hey, Burke, this is definitely not Leo asking you to buy gift cards. But seriously, can you grab me 100 Apple gift cards? Just kidding. This is Anthony testing text to speech. How's it sound?" Hey, Burke, does that sound like me? Definitely you. A little bit different.

[02:35:43] But if you weren't paying attention, or you got that on the phone... Yep. Pretty amazing. Yeah. Burke says, "Order more again." Fools him every time. Hey, thank you, Steve, for doing a very special edition of Security Now on a Sunday. I appreciate that. Thank you. Thank your wife, Lori, too, for letting us have you. No, no mimosas today. We had to do a show, but you can go have some now. Stay tuned.

[02:36:13] Well, if you're watching live, we're 15 minutes away from This Week in Tech, our roundtable news show. We will be in Florida for the week. If you're going to Zero Trust World, catch us Wednesday, 5 p.m. at the end of the day, for a very special Steve Gibson presentation: "The Call Is Coming from Inside the House." I will be there as well. And we will stick around afterwards if you want to say hi. We'd love to see you. We'll also make a recording of that available on the Security Now feed.

[02:36:42] So for all you fans, you'll get two Security Nows this week. Steve, safe travels. I'll see you in Orlando. Thank you, my friend. Will do. See you soon. Well, I guess we'll probably see each other Monday night or Tuesday morning. So yeah, the burgundy's on me Tuesday night for dinner. OK, sounds great. We'll see you later. And everybody else: back on Tuesday per our regular schedule. Cool. Bye. Hi, I'm Leo Laporte, host of This Week in Tech and many other shows on the TWiT Podcast Network. Can you believe it?

[02:37:12] 2026 is around the corner. So this, my friends, is the best time to grow your brand with TWIT. Nobody understands the tech audience better than we do. We love our audience. And we know how to effectively message to them. We develop genuine relationships with brands, creating authentic promotions that resonate with our highly engaged community of tech enthusiasts.

[02:37:37] You know, over 90% of TWiT's audience is involved in their company's tech and IT decision making. Can you believe that? 90%. And 88% have actually made a purchase based on a TWiT host-read ad. No one comes close. We're the best at this. As one TWiT fan said, "I've bought from TWiT TV sponsors because I trust Leo and his team's knowledge of the latest in tech. If TWiT supports it, I know I can trust it." You cannot buy trust like that.

[02:38:08] Well, actually, you can. You can buy an ad on TWiT. But all our ads are unique. They're read live by our expert hosts, Micah Sargent, me. We simulcast all during the shows on our social platforms so everybody can be watching live. You know, one of our customers, Haroon Meer, the founder of Thinkst Canary, he's been with us since 2016.

[02:38:28] Since 2016, he said, "We expected TWiT to work well for us because we were longtime listeners who, over the years, bought many of the products and services we learned about on various TWiT shows, and we were not disappointed. The combination of the very personal ad reads and the careful selection of products that TWiT largely believes in gives the ads an authentic, trusted voice that works really well for our products. 10 out of 10, will use again." Thank you, Haroon.

[02:38:57] We love you. And it's been nine years. That's kind of, that's the proof, right? Partnerships with TWIT offer valuable benefits, including over-delivery of impressions. You get presence on show episode pages. So there's a link right there that our audience can click on. We're in the RSS feed descriptions, a link there too. And social media promotion. Our full-service team will craft compelling creative to elevate your brand and support you throughout your entire campaign.

[02:39:26] I work on the copy myself to make it authentic, to make it real. If you want to reach a passionate tech audience through a network that consistently over-delivers, please contact us directly. Partner at twit.tv. That's the email address. Partner at twit.tv. Let's talk about how we can help grow your brand. Or just go to twit.tv slash advertise for more information. I look forward to working with you. Thanks for listening.

[02:39:56] Security now.

Security Now, Leo Laporte, Steve Gibson, TWiT, KongTuke CrashFix exploit, ClickFix, Scattered Lapsus$ Hunters, Lapsus$, AI-driven hacking, hardware security module, Zero Trust World, code signing certificate, Fortinet firewalls, social engineering