SN 1076: FAST16.SYS - Unmasking the NSA's Most Diabolical Digital Sabotage
Security Now (Audio) • April 29, 2026
Episode 1076
2:35:19 • 142.48 MB

SN 1076: FAST16.SYS - Unmasking the NSA's Most Diabolical Digital Sabotage

What if your engineering calculations secretly sabotaged your nation's best efforts? This week, we reveal how a newly uncovered 21-year-old NSA rootkit quietly corrupted scientific research in hostile states and why it changes everything you think you know about cyberwarfare.

  • Bitwarden's CLI hit with a supply-chain attack.
  • Commercial routers in Iran fail shortly before the war.
  • Meta logging all employee activity to train replacement AI.
  • GRC's DNS Benchmark Release 5.
  • Two miscellaneous AI thoughts.
  • A bunch of terrific listener feedback.
  • Unraveling the diabolical history of "fast16.sys"

Show Notes - https://www.grc.com/sn/SN-1076-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, Spinrite 6.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:

[00:00:00] It's time for Security Now. Steve Gibson is here with a great picture of the week and an amazing spy story. 21-year-old malware that only now has been discovered. The story of FAST16.SYS coming up next on Security Now. Podcasts you love. From people you trust. This is TWIT.

[00:00:28] This is Security Now with Steve Gibson, Episode 1076, recorded Tuesday, April 28th, 2026. FAST16.SYS. It's time for Security Now, ladies and gentlemen. Get ready. Here's the hero, the star of our show, Mr. Steve Gibson. Hello, Steve. Oh, Leo. Oh, no. Oh. We have a story today. You know, the title is a little weird.

[00:00:59] Yes, it is. The title of today's podcast is FAST16.SYS. That sounds like FAT16.SYS. Yeah, not FAT. Maybe it was meant to look like it. I think so. So you would scan over it and not notice. Or maybe it could have been, like, a math library alternate name, but it's kind of an unbelievable name. Right.

[00:01:22] It turns out that through a weird, just coincidental discovery, some security researchers uncovered evidence and then proof of a diabolical, I mean, diabolically brilliant thing that predated Stuxnet by about five years.

[00:01:51] Oh, that's why it's dot sys. Because that's the old Microsoft designation. Oh, it still is. So, yes, the System32 directory is full of .sys files. So what they found was.

[00:02:10] OK, our listeners have heard me worry that maybe the U.S. is not up to speed because we keep, you know, having reports about being attacked by China and North Korea and Russia and so forth. And I'm wondering, can we give as much as we get? I'm not wondering anymore.

[00:02:29] So today we're going to talk about once we get to it, this amazing evidence trail and what was discovered in place back in the Windows 2000 XP days.

[00:02:47] And, oh, Leo, you are going to love what this thing did, because, I mean, it's just so clever and so wonderful. Anyway, we're going to talk about Bitwarden's command line interface hit by a supply chain attack, and why no Bitwarden end users were impacted by this.

[00:03:16] Some news of commercial routers in Iran failing shortly before we invaded or bombed them. And what a coincidence, you know, those pesky foreign routers. Also, Meta apparently logging all of their employees' activity to train a replacement A.I.

[00:03:38] I have a big announcement to make, big for me, because it's the conclusion of the last 90 days of my efforts on my complete rewrite, well, not a complete rewrite, but a huge philosophical change in the way we do e-commerce at GRC, and the concomitant Release 5 of the Benchmark, which everybody who owns the Benchmark can go get right now.

[00:04:06] I've got a couple of my own miscellaneous A.I. thoughts. Then we're going to do a bunch of terrific listener feedback and wind up with a deep dive into the unraveling of this just oh-so-clever history of the fast16.sys driver.

[00:04:29] And how it was discovered, what it does, and why it just makes us think, OK, maybe the NSA doesn't need our help after all. They're doing just fine. And we've got a fun picture of the week. So I think maybe a useful podcast for our listeners. That sounds very exciting. We will get to that in just a little bit. A fun podcast, if you will.

[00:04:56] But first, a word from our sponsor and our picture of the week is coming up, too. But I'd like to talk to you about Doppel. Maybe, you know, that message that just went across your screen there. Maybe that's a message from your CEO. Maybe you ought to pay attention to that. Or did you ever think it might be a deep fake trying to target your business, steal your money? Oh, but it's a voicemail. I recognize the CEO's voice. No, no, no, no, no, no.

[00:05:25] AI can impersonate trusted individuals. And Doppel's platform illustrates how frequently users fall for phishing attempts. They did some voice call simulation deployments, right? I mean, you've probably seen the email phishing simulations. They can do voice call simulations. They deployed it just to see, let's see what happens with a company that was, you know, okay with it. This is depressing.

[00:05:49] The target users, not knowing it was a simulation, spent six minutes conversing with a deep fake that sounded just like the boss. A hundred percent of them believed the AI was human. It fooled them all. Doppel is the AI native social engineering defense platform. Doppel strengthens human risk management by training employees to recognize deception.

[00:06:15] While their digital risk protection detects and disrupts attacks across every channel. So it works both ways. As attackers turn to AI to power increasingly sophisticated strikes, Doppel is using AI to fight back with automated takedowns, multi-channel coverage, and AI defenses that build intelligence with every fight. Doppel works relentlessly to protect people, brands, and trust.

[00:06:41] Doppel offers best-in-class integrations and partnerships to seamlessly integrate into your existing security tech stack. Doppel's industry awards and testimonials speak for themselves. Doppel recognized as a winter 2026 G2 leader, users most likely to recommend. Momentum leader, best support. Join hundreds of companies already using Doppel to protect their brand and people from social engineering attacks.

[00:07:09] Doppel outpacing what's next in social engineering. Learn more at doppel.com. That's D-O-P-P-E-L, doppel.com. We thank Doppel so much for supporting security now and the work Steve does. You know, every time I think about these deep fakes,

[00:07:30] I think about the little AI Leo that Anthony made just to illustrate how easy it is. Listen, hey, Burke, this is definitely not Leo asking you to buy gift cards. But seriously, can you grab me 100 Apple gift cards? Just kidding. This is Anthony testing text-to-speech. How's it sound? He trained that with Qwen3 on his own machine. I couldn't tell. Sounded like me.

[00:07:59] Yeah, Doppel's a good name because that's a doppelganger of me, right? Exactly. Exactly. Doppelganger. All right. I've got the picture of the week. I have not looked at it. I put myself in a soundproof booth. One of our listeners, thank you very much, sent this to me. And after looking at it, I decided to give it the title, In Software Development, this is known as Time to Refactor the Code. Okay. But it's not software. This is hardware, I think. Yes.

[00:08:29] But in software, it would be Time to Refactor. That is one backflow. Holy camoly. That doesn't seem right. Oh, goodness. Wow. So for those who aren't seeing the picture of the week, this is the, I don't know what it is.

[00:08:49] It is a, remember that old Three Stooges episode where Moe was trying to fix the plumbing in a house upstairs? You and I are the same age. We totally remember. And the water squirts in the face and then they close that up and it squirts the other way. And oh, God. He ends up being caged in by this maze of plumbing pipes so that he can't get out.

[00:09:15] And then the water's going into light bulbs and it's filling them up and they're exploding. And anyway, crazy. But this demonstrates, I think, a severe lack of planning, which of course we've talked about often: software has a hard time evolving; the nature of it is that, you know, depending upon the way it's built and structured, it gets ugly.

[00:09:43] But then you keep, you know, your boss says, hey, I forgot to tell you. Really nice job you did. Cool that you're all done, but now we need to add this. And it's like, oh, you didn't tell me, because now I'm going to have to graft that onto the side and, you know, stick some hooks in places. You know, software gets very ugly as you do that. And so thus this notion of refactoring is to have it end up doing the same thing,

[00:10:12] but redesign where things are being done and how things happen and, like, you know, fix the structure. So this is, I would guess, at a summer camp or a trailer park or a KOA somewhere where they had to add more capacity, and they had to do it fast. And the other thing that's hysterical, not only a lot of pipes, look how many switch boxes there are. Yeah. And you've got, like, valves going down into the ground, and they're all kind of cattywampus. That's the technical term.

[00:10:41] Uh, and like things coming in and joining and there's a filter in there. I don't know what's going on. This is crazy. Crazy. Great. Great. It's a swimming pool or something. I'm thinking it's at a camp, but anyway, but look at all of the pipes going into the ground. I mean like one, two, three, four, five, six, seven, eight, at least eight are like, like, what is it? Don't know. That's hysterical. It does look like there's some things in parallel. Yeah.

[00:11:10] But yeah, but this is real. Listeners sent you this. Yeah. Oh yeah. It's definitely, and you can see all of the blue plumbing glue, whatever they call that. Right. To fit all the PVC together. Yeah. Wow. Okay. So everyone's favorite password manager and a sponsor of the TWiT network was briefly

[00:11:38] bitten by a supply chain attack. Good. I'm glad you're talking about this because I wanted to know more about it. And, yes. And of course many of our listeners immediately sent me an email during this past week. So here's what The Hacker News reported. They said, according to findings from JFrog and Socket, Bitwarden's CLI, the command line

interface for the password manager Bitwarden, was reportedly compromised as part of a newly discovered and ongoing Checkmarks supply chain campaign. They probably say that almost nobody uses the command line. I do. Yes, that's right. I mean, I think it's, yeah, exactly.

So they wrote, quote, the affected package version appears to be @bitwarden/cli@2026.4.0, and the malicious code was published in the module, or the file, BW1.js, a file included in the package contents. The attack appears to have leveraged a compromised, to no one's surprise here,

[00:12:53] GitHub Action in Bitwarden's CI/CD pipeline, consistent with a pattern seen across other affected repositories in this campaign. So Bitwarden wasn't even directly targeted, right? I mean, this was a broad campaign the Checkmarks actor was launching that took advantage of these compromised GitHub Actions.

And remember, we talked about this a week or two ago: the security of those GitHub Actions is turning out to be a real problem. So it is a good thing that GitHub has indicated that they're going to shuffle their schedule around and raise the priority of fixing the security of GitHub Actions, to deal with this much more quickly than they were going to.

[00:13:44] So The Hacker News continues: writing in a post on X, JFrog said the rogue version of the package, quote, steals GitHub and NPM tokens, .ssh, .env, shell history, GitHub Actions and cloud secrets, then exfiltrates the data to private domains and as GitHub commits. Okay.

[00:14:09] So specifically they wrote the malicious code is executed by means of a pre-install hook resulting in the theft of local CI, GitHub and cloud secrets that now the point here is this is a developer compromise, right? This is not an end user compromise. This is developer stuff.
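The pre-install hook mechanism being described here is just npm's standard lifecycle scripts: any package can declare a preinstall (or install/postinstall) command in its package.json, and npm runs that command with the developer's privileges at install time. As a rough illustration of the defensive side, here's a minimal Python sketch, not any official tool, that walks a node_modules tree and flags every package declaring an install-time script:

```python
import json
from pathlib import Path

# npm's standard install-time lifecycle hooks; any of these run arbitrary
# commands on the developer's machine during `npm install`.
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def find_install_hooks(node_modules):
    """Yield (package name, hook name, command) for every package under
    `node_modules` that declares an install-time lifecycle script."""
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            meta = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip it
        scripts = meta.get("scripts") or {}
        for hook in INSTALL_HOOKS:
            if hook in scripts:
                yield meta.get("name", str(manifest)), hook, scripts[hook]

if __name__ == "__main__":
    for name, hook, cmd in find_install_hooks("node_modules"):
        print(f"{name}: {hook} -> {cmd}")
```

Flagged packages aren't necessarily malicious, since many legitimate packages compile native code at install time, but it narrows down what deserves a look after an update.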

[00:14:31] So they said the data is exfiltrated to the domain audit.checkmarks.cx and to a GitHub repository as a fallback if the primary method fails. So they said it launches a credential stealer that targets developer secrets, GitHub Actions environments, and AI coding tool configurations, including Claude, Kiro, Cursor, Codex CLI, and Aider.

[00:14:59] The stolen data is encrypted with AES-256-GCM and exfiltrated to that domain, audit.checkmarks.cx, a domain impersonating Checkmarx. If GitHub tokens are found, the malware weaponizes them to inject malicious Action workflows into repositories and extract CI/CD secrets. The firm StepSecurity wrote, quote,

A single developer with @bitwarden/cli@2026.4.0 installed can become the entry point for a broader supply chain compromise, with the attacker gaining persistent workflow injection access to every CI/CD pipeline the developer's token can reach.

[00:15:55] The malicious version is no longer available for download from NPM. It was only there for about 90 minutes. Socket said: the compromise follows the same GitHub Actions supply chain vector identified in the Checkmarks campaign. As part of the effort, threat actors have been found abusing stolen GitHub tokens to inject a new GitHub Actions workflow

[00:16:19] that captures secrets available to the workflow run and uses harvested NPM credentials to push malicious versions of the package, spreading the malware to downstream users. According to security researcher Adnan Khan, the threat actor is said to have used a malicious workflow to publish the malicious Bitwarden CLI. Khan said,

[00:16:43] I believe this is the first time a package using NPM trusted publishing has been compromised. So again, just to clarify, as I said, this was an attack on GitHub developers and their tool chains. It was not an attack upon Bitwarden's users. Bitwarden's official statement about the incident was, so this is Bitwarden speaking,

[00:17:10] the Bitwarden security team identified and contained a malicious package that was briefly distributed through the NPM delivery path for, again, @bitwarden/cli 2026.4.0, between 5:57 p.m. and 7:30 p.m. on April 22nd, 2026, in connection with a broader Checkmarks supply chain incident.

[00:17:38] The investigation found no evidence that end-user vault data was accessed or at risk, or that production data or production systems were compromised. Once the issue was detected, compromised access was revoked, the malicious NPM release was deprecated, and remediation steps were initiated immediately.

[00:18:00] The issue affected the NPM distribution mechanism for the command line interface during that limited window, and it was actually 93 minutes long. Not the integrity of the legitimate Bitwarden CLI code base or stored vault data. Users who did not download the package from NPM during that window were not affected. Bitwarden has completed a review of internal environments, release paths, and related systems,

[00:18:29] and no additional impacted products or environments have been identified at this time. A CVE for Bitwarden CLI version 2026.4.0 is being issued in connection with this incident. You know, they're just crossing their I's and dotting their T's or something. Okay. So, but what we have is another instance of deliberate malware repository corruption.

Clearly, we're going to need to get this fixed as soon as possible. And my money is on eventually, hopefully sooner, stationing ever-watchful AI agents at the exits of our repositories so that they can give anything that is being pulled the once-over after its most recent update. And that'll be good.

[00:19:26] There was a brief report in the security media about networking equipment installed at the Iranian Isfahan nuclear site mysteriously malfunctioning ahead of the U.S. and Israeli missile strikes. Iranian officials reported issues with devices, and not just from one company. No: Cisco, Fortinet, Juniper, and MikroTik.

[00:19:57] Well, thank goodness they weren't using Chinese routers. Well, I know, Leo, because you know. Because you know how insecure they are. That's right. Officials are still searching for the cause of the malfunctions, but noted that the country was disconnected from the global Internet at the time of the attack. So I wonder how the routers even knew that it was time to malfunction. Time to malfunction. Yeah.

[00:20:23] So the first thing that came to mind was, you know, it's got to be those pesky foreign-made routers. You never can tell what might be going on inside them, Leo. You know, it is interesting. I didn't realize. So they had air-gapped them already. Yes. That's very interesting. Although, Leo, you know, OK. So at the same time, as we know, since then, Iranian principal actors,

[00:20:52] they've been posting on Twitter. I mean, X. Yeah, they have some access. So obviously they're not dark completely. And we talked about this at the time. They were like decommissioning satellites and going from house to house to really try to get Iran off the Internet. I just don't know if that's even possible anymore. Well, and there was a brisk market in black market and gray market Starlink routers, too.

Even though Starlink, you know, didn't want that. SpaceX wasn't supporting it. Right. Still were able to do it. Right. And in fact, Iran was also working to identify the unique fingerprints of the Starlink protocol so that they could track down how Starlink traffic was getting into their internal network.

[00:21:47] You know, and this is interesting, too, because, of course, we've been talking for years about how Russia has been deliberately planning ahead for the need to disconnect from, you know, the corrupt Western Internet and do their own thing. And it turns out it's not an easy thing to do. They've like, you know, been trying to do this, but they at least understood that they couldn't just pull a plug somewhere. I mean, it's difficult to do. It's hard to do a kill switch. Yeah.

[00:22:17] You saw the story in Spain, because La Liga, which is the big soccer league, is so concerned about piracy in Spain. Theft of their soccer games. Yeah, they've been blocking IP addresses. All of a sudden on Saturdays, on soccer game days, people can't do their Docker compose because Docker can't get online. It's like, we're all connected. You know, it's hard to disconnect. It really is. Yeah. It's like deciding you don't want any oxygen from outside your borders.

[00:22:46] It's like, I don't know how that's going to work. Yeah. So Reuters reported last Tuesday, a week ago, the report claimed, I thought this was very interesting and I would be a little worried. Meta was installing what they characterized as spyware onto the systems of their own employees, Meta's U.S. employees, to capture their mouse movements, clicks and keystrokes.

[00:23:16] Meta said the data will be used to train its AI models, not for employee reviews. So it's not like, hey, why were you gone for two hours at lunch? That kind of thing. No, the captured data they're saying will be used to train the models in areas where AI is deficient, apparently such as copying what your employees do, such as clicking on menus and typing in input fields. Oh, yeah.

It seems a little suspicious to me. So I wouldn't call this spyware as such, but it would be somewhat creepy to have an AI training on what you did at your PC. You're right. I'd be inclined to wonder whether I might be training my own replacement. That's why they're worried. Yeah. Uh-huh. Yep. By the way, it's completely legal for them to do that. In fact, they don't need to give you notice. It's their own equipment. Yeah. Right. Yeah. So it's kind of nice that they told them.

[00:24:14] I don't know that they did. Oh. It was a Reuters scoop. Oh, they found out. That was found. Yeah. Right. So again, it was like, oh, not good. Okay. So as I said at the top of the show, this was a slow news week, which was good because the listener feedback was fantastic for whatever reason the last couple weeks. So, and I've been a little sparse on listener feedback recently.

[00:24:42] I've been self-conscious about it, but we've just had so much to talk about. We've, you know, we've had two and a half to three hour podcasts anyway. So this gives me a chance to catch up on that. But as I said, I do want to update our listeners on my last Friday conclusion of the last 90 days of my efforts. And then we're going to do listener feedback before we get on to this incredible topic that we're going to talk about today.

[00:25:10] So as I said, last Friday, I completed my 90 day upgrade and basic remodeling of GRC's e-commerce system. Everyone knows that we've always had a somewhat wacky model for personal use versus consultant use and also corporate site licenses.

[00:25:33] And this is the result of my never having considered, and this is dating back to Spinrite 1, that users of that first Spinrite and all later Spinrites would be people who were responsible for repairing and maintaining other people's computers. You know, back in the early days, especially of personal computing, when, you know, a very large hard drive was 80 megabytes.

Oh, wow. I think I had one of the original XTs, so it was 10 megabytes. Yeah. But consider, you could actually back that up on nine floppy disks. So, you know, think how far we've come in such a short period of time. I mean, in our lifetime, it's amazing. Yes. And Leo, that's my point about AI.

I think we are at the one percent point. You know, we talk about these massive data centers and how much money they cost, and you have to get your own personal nuclear reactor next door in order to power these things, and all the plants are going to die because of all the heat bloom from this. Oh, you know. And it's like, just wait. I mean, we've seen nothing yet.

We haven't. And in the same way that we did not know back when the biggest drive we could make was 80 megabytes. I mean, if we could have made a bigger drive, we would have, even though we had no idea how we were going to fill them, those big 80 megabyte drives. Still, you know, we weren't all streaming video everywhere and so forth back then. Text was fancy. So in another 50 years, AI is not going to be recognizable. It's going to be local.

[00:27:22] I don't think it's even going to take that long, because we're already at the point of self-improving AI. So that's exponential growth in our lifetime, easily. And as we know, you follow the money. You know, cancer researchers say, if you would just give us some money, we could make a lot of progress, but you're not giving us the money. But, oh my God, is there money behind AI and improving software authoring? So yeah, I think it's all going to change.

Okay. But back then with Spinrite 1, we only had one type of Spinrite. It was just Spinrite. You didn't have site licenses. No, it didn't even occur to me that people would be wanting to run this on every other computer that was around. Of course. And their neighbors' and their friends' and their family's, and every computer in the building. Exactly. Yeah. Exactly. You know what? This is kind of sweet. This is a sweet story, Steve. Well.

[00:28:21] You weren't that greedy business guy. You just. Well, and so I thought, okay. You just want to put yourself out there. What do we do? So we just sort of decided, pulled it out of our hat, that owning four copies of Spinrite would entitle a consultant to carry their copy around, you know, in their traveling bag of tricks to run on as many of their clients' PCs as needed with our blessing. To me, that seemed fair. Yeah.

It would be unreasonable to expect every one of their clients, you know, most of whom would not be PC-savvy, to purchase their own copies of Spinrite. Right. And I will also note that every other piece of utility software at the time explicitly said in its license that one copy could only be installed and run on one PC. You know, okay.

[00:29:20] So our license at the time was very progressive and comparatively liberal. To me, it felt right. So after Spinrite 5 was finished and released, I set about writing an e-commerce system for GRC. Many, and I was just re-encountering them,

many of the core files of that system, which as I said I've just finished updating, were still carrying dates from 2003, which is when I wrote that system. It worked. It never had a bug or required any maintenance. So those files remained as they were for the past 23 years. When I wrote that system back in 2003, I carried the notion of license quantity forward, because that's what we had.

You know, most users would purchase a single license to use Spinrite for their own needs. And we know that I've always been happy, not a problem, to look the other way whenever someone needed to use their single copy to rescue a friend, a family member, a neighbor, or whomever. You know, that also just seems fine.

[00:30:31] But if someone was going to be charging for their services and profiting from their repeated use of Spinrite as a consultant or a corporation using it across their entire inventory of PCs, no matter how many, then it seemed reasonable, you know, for more than one license to be held.

So GRC's original e-commerce system had a quantity field that actually only went up to four, but that allowed consultants to purchase up to that many and then use their copy, with our blessing, on all the machines that they were servicing and maintaining. Okay, so now fast forward to the end of last year.

Now, as I finished the DNS Benchmark, I had this thought: if someone might wish to run their copy of the Benchmark, the DNS Benchmark, on other people's systems and networks, in a similar sort of mode, as a consultant,

[00:31:34] then if they were willing to purchase four copies of the benchmark to do so, then they ought to receive something unique and special in return for that. Like a gold consultant's license. Exactly. Like a gold badge. I love it. On their UI. That's great, Steve. So I created an attractive gold badge. It's 100% Trump approved. He would like this.

It's even got little rivets holding it in place, so you can't take it off. No, no. It is riveted in place. If you have four copies, you get the gold badge. It's displayed prominently on the Benchmark's user interface, so that everyone who has previously purchased four copies, or who may wish to upgrade the single copy that they have now from personal use to a consultant's license,

will actually receive something that they can be proud of and show off. So, okay. But the problem was, this meant that now, for the first time ever in the history of GRC, there needed to be two different downloads based upon quantity: a personal use version and the consultant's license version, which also doubles as a corporate site license.

[00:32:58] So GRC's e-commerce system needed to know about that, which it never has before. You just kind of carried four of the single-use licenses around. So I also wanted to make this retroactively available to anyone who had previously purchased four licenses because they wanted to be able to use it everywhere with our blessings, even if those licenses might have been spread out over time.

And I know of at least several people who did that. So I needed to provide some way for users to declare any existing personal use licenses they might already have, to obtain an equal-cost discount. So 100% of the previous purchase cost gets applied to their upgrade to a gold badge consultant's license.
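The credit arithmetic being described, where every dollar of prior personal use purchases counts fully against the consultant's license upgrade, can be sketched in a few lines. This is purely illustrative; the function name and the prices in the usage are made up, not GRC's actual figures:

```python
def upgrade_price(consultant_price, prior_purchases):
    """Hypothetical credit math: 100% of each prior personal-use purchase
    cost is applied against the consultant's license price, floored at
    zero so spreading purchases over time never costs extra."""
    credit = sum(prior_purchases)
    return max(consultant_price - credit, 0.0)

# e.g. someone who already bought two single licenses pays only
# the remainder, and four prior purchases can zero out the upgrade.
```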

And finally, I wanted to ditch, completely ditch, that old and weird own-four-licenses business, because that's confusing, if for no other reason than, as far as I know, no one else has ever done that. So today, there is an explicit option to purchase a personal use license and a second explicit option to purchase a consultant's license,

[00:34:15] along with a complete system in place for allowing owners who previously purchased from one to four personal use licenses to obtain full credit for their prior purchases when upgrading to the gold badge consultant's license. Very nice. But that required an entire rewrite of the system. Well, that's the only downside of writing everything in assembly language. Well, oh well. Oh well.

Anyway, it's done. Seems to be working perfectly. It's been online since Friday and it's continuing to go. Now, the other. You must be the only person who has written an e-commerce system in assembly language, at least in this century. Well, not only in assembly language, but I think maybe single-handedly, because when Sue was my girl Friday, who's been with me since the beginning,

since actually FlickerFree was my first product for GRC, which replaced the video BIOS on the PC, because the original CGA, the Color Graphics Adapter, flickered horribly whenever it was scrolling. And so everyone said, no, that's a hardware limitation. Yeah. Because the RAM was not dual-ported, you couldn't update the screen memory while the screen was being displayed

[00:35:41] or you'd get this horrible static on the screen. And of course, I fixed that. So you said, hold my mouse, and leapt into action. Yeah, that's right. And so FlickerFree was my first success. Did you use the vertical blanking interval, or what did you do? No, it turns out, you know, because of my hardware background, I looked at the registers. It was a 68-something video chip.

[00:36:07] It turns out that there was a pointer to the top-of-screen memory. So IBM always left it at zero. And so you'd have to copy all of the RAM up by 160 bytes because it was 80 bytes, 80 characters per line, but it was two bytes per character. One for the ASCII, the other for the color. This was in the frame buffer days before DMA.

[00:36:35] You actually had an area of memory that you would put the stuff that the screen was going to display in and then point to it and it would go, oh, it would show it. Right. And so in order to scroll it up, you had to move all of the memory up by 160 bytes. While you were doing that, your access from the computer had priority.

[00:37:02] So the background refresh that was reading that RAM constantly in order to display it on the CRT, it was blocked out. So it just had to make something up, and you just got static on the screen. Wow. But I saw that there was a pointer to where to start refreshing from. So you just moved that. In other words, if you simply move the pointer down by 160. Then you only have to move one line. Exactly. You turn it into a ring buffer.

[00:37:31] And so I actually turned the memory into a 4K ring buffer and it could scroll instantly. So not only was there no static, you no longer needed to turn the screen off, but it was instant scrolling. Yes. And so if you did a dir, it just shot by. I mean, as opposed to brrr. This is when people deeply understood the hardware, people like you, and really were writing direct code to the hardware.
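For the curious, the trick Steve describes, advancing a start-of-screen pointer instead of copying the whole frame buffer, can be sketched in a few lines of Python. This is purely illustrative: the class and constant names are invented here, and the real FlickerFree did this in assembly by programming the video controller's start-address register directly.

```python
# Illustrative sketch (not FlickerFree's actual code): instead of copying
# every line of screen memory up one row, advance a start-of-screen
# pointer so the frame buffer behaves as a ring buffer.
ROWS, COLS = 25, 80          # CGA text mode: 25 rows of 80 characters
BUF_ROWS = 51                # hypothetical ring buffer larger than the screen

class RingScreen:
    def __init__(self):
        self.buf = [" " * COLS for _ in range(BUF_ROWS)]
        self.start = 0       # row the hardware would begin refreshing from

    def visible(self):
        # The displayed screen is ROWS consecutive rows, wrapping around.
        return [self.buf[(self.start + r) % BUF_ROWS] for r in range(ROWS)]

    def scroll_and_write(self, line):
        # "Scrolling" is just moving the pointer; only the single new
        # bottom line is actually written into memory.
        self.start = (self.start + 1) % BUF_ROWS
        self.buf[(self.start + ROWS - 1) % BUF_ROWS] = line.ljust(COLS)

s = RingScreen()
for i in range(100):
    s.scroll_and_write(f"line {i}")
print(s.visible()[-1].rstrip())   # → line 99
```

Each scroll touches one 160-byte row (80 characters, two bytes each) rather than the full 4,000-byte screen, which is why the scrolling became effectively instant.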

[00:37:58] And that is a beautiful, elegant thing. Well, what happened when we were setting up the – I finished the e-commerce system. Everything was in place. And Sue contacted our merchant people who were – we were able to process credit cards by phone, which is what we were doing for years. And so when Sue was setting this up, the lady at the other end said, okay, so what shopping cart are you using?

[00:38:27] And Sue said, shopping cart? What? And the lady said, you know, what software package are you using online to collect orders? And Sue said, oh, my boss wrote ours. And there was a bit of a pause on the phone. And the lady said, no, no, no. I don't think we're talking about the same thing. People don't write those. No. And so Sue said, well, yeah, my boss does. Yes, my boss does. Anyway. Anyway, so – It's a good thing you didn't say in assembly language.

[00:38:57] Yeah, that would have – she'd have hung up. She said, no, you're punking me now. So the other significant news for anyone else, all of our listeners, who previously purchased any number of DNS Benchmark licenses: if you have the DNS Benchmark, you need the fifth release. It was a biggie. And if you run any previous release, it will immediately notify you of the availability of the upgrade and will assist you in obtaining it.

[00:39:26] I took all the feedback and user confusion from the benchmark's initial releases, one through four, and I folded that into this fifth major release. Little things like the run benchmark button is much more prominent now. Some people couldn't figure out how to run the benchmark, even though there's a button right there that says run benchmark. Where is it? I don't see it. So now it's a lot bigger. And also there's a series of prompting dialogues

[00:39:55] where you no longer even have to press or find the double-size run benchmark button. This leads you right along the path because it knows you're going to want to run the benchmark. Why else are you here? So also there is a full, finally, Windows application menu. Like, you know, menu across the top that says file and so forth. You can see it in that picture online.

[00:40:22] It says file, actions, sort order, settings, and help. We truly are living in an amazing time, Steve. We've never had a menu before. Everything was hidden under that red star in the upper left in the system menu. The reason being that the original benchmark didn't really have much that you could do. So I didn't want to do a whole menu. I just stuck those things under the so-called system menu.

[00:40:49] But boy, over the last year of development, it got a whole bunch more features. The problem was nobody knew they were there. So now they're publicly exposed. Anyway, I just want to let everybody know there's a new benchmark available. Everybody who owns the current benchmark can get number five, and you should because it's a lot better. Okay, Leo, time for a break. We're at 37 minutes in.

[00:41:18] I got a couple notes about AI, and then we're going to do listener feedback. All right. In that case, it would be incumbent upon me to do a commercial. And today's sponsor for this segment of Security Now is Threat Locker. Steve and I had a great time in Orlando at Zero Trust World, thanks to Threat Locker for hosting us. It was a lot of fun. In fact, I think we might be doing it again, Steve.

[00:41:45] So plan some side trips in Orlando. I understand the sloth land is very popular. They want us back? I think they want us back. After what we did, they want us back? I think they want us back. I know I want to go back. I still have the costumes, so I'm ready. I hope they have the same theme for the party, the costume party. Our show today brought to you by Threat Locker. Threat Locker, well, you probably gathered from the name of their conference, Zero Trust World, is a Zero Trust platform.

[00:42:12] But it's not just an everyday endpoint zero trust platform. It delivers the industry's most comprehensive suite of zero trust solutions. So not just endpoints now. Now, networks and the cloud, all with zero trust. By extending zero trust enforcement to cloud services and company networks, Threat Locker is ensuring, and this was a little, you know, they saw this, a little hole, a little gap.

[00:42:38] Now devices are validated through a secure broker before they can connect to the cloud platforms, like Salesforce and Microsoft 365 and Asana and Google Workspace and GitHub. Because think about it. You have a lot of your own company proprietary data on those platforms. You don't want just any Tom, Dick, or Harry to connect to it, right?

[00:43:01] This way, even if a user is successfully phished, even if there's malware on their system, attackers cannot access resources. They're blocked. They'd have to have physical possession of the user's trusted device. In effect, basically impersonating the user. You know, I think that that's good. I think that makes it a lot harder. They have to get the device. Threat Locker works in every industry, provides 24-7 US-based support. They support Windows. They support Mac.

[00:43:30] They support Linux environments. And as a side effect of zero trust, you get comprehensive visibility and control. It's great for compliance. Ask Rob Thackeray. He's an end user technical architect at Heathrow Airport. Now, Heathrow in the past has had some issues. There are millions of people going through that airport every single day. It is a hub. And they cannot afford to be down. So they chose Threat Locker. Rob said, quote, Threat Locker was the most intuitive solution we tested.

[00:44:00] And the responsiveness of the organization, the willingness to engage with us, set up a demo, and work with us on weekly audit reviews is very good. But it was great to have an ongoing relationship with a company that's so responsive to our requests. End quote. I hear that again and again from very happy Threat Locker customers. People, global enterprises that can't afford, again, can't afford to go down. Companies like JetBlue. They use Threat Locker. The Indianapolis Colts. The Port of Vancouver.

[00:44:29] Threat Locker consistently receives high honors and industry recognition. They're a G2 high performer and best support for enterprise summer 2025. PeerSpot ranked them number one in application control. GetApp gave them the best functionality and features award in 2025. With Threat Locker, you can confidently ensure your users have access to a consistent, safe network connection.

[00:44:54] Offices, remote users, internal servers, critical services, all can maintain smooth operations. And you don't need to open inbound ports. You don't even need to deploy traditional VPN solutions. Those remote access solutions can really be a problem. I mean, how many times have we heard of, you know, exploits taking advantage of them? Well, they're gone now. Your end users, it's like they're on the network.

[00:45:18] They will get the same secure, reliable internal system access they need without complex infrastructure changes. Because you know, complexity brings problems. Get unprecedented protection quickly, easily, and cost effectively with Threat Locker. It really works without all that complexity. It's simple. Just like the concept of zero trust is simple, but it really, really works. Visit ThreatLocker.com slash twit. Get a free 30-day trial.

[00:45:45] Learn more about how Threat Locker can help mitigate unknown threats and ensure compliance. That's ThreatLocker.com slash twit. We thank them so much for their support of Steve and Security Now. Okay. So just a quick random note about my ongoing use of AI, since it's a topic we cannot get away from at all today.

[00:46:10] While I am truly loving researching with Claude, and I continue to be astounded by today's machine intelligence, I do miss the anti-butt kissing setting that ChatGPT provided. Two of Claude's behaviors annoy me. The first is its constant praise of my questioning.

[00:46:38] And the second is that it now finishes each reply with a leading question to solicit more dialogue. Oh, yeah. You know, at a time today when Anthropic is having increasing problems obtaining sufficient compute to meet the demand, you know, for running Claude, you would think that they would not be programming it to keep seeking additional unsolicited computation.

[00:47:08] You know, as we grow and become civilized throughout our lives, we're trained in how to be civil to others. That's one of the things we all learn. You know, one thing we don't do is turn our backs and walk away from someone who's seeking to engage us in conversation, which is, of course, why cocktail parties are so annoying. You know, you get stuck in a conversation with someone that you couldn't possibly care any less about.

[00:47:36] Given how seductively conversational AI chatbots can be and being highly tuned not to offend, you know, leaving Claude hanging after one of its follow-up leading questions is a source of discomfort for me. You know, I haven't yet, I admit it, I have not yet explicitly instructed my own Claude instance to please not compliment me on my questions. You can't do that, obviously.

[00:48:05] And also to please not end with a follow-up. Right. I've asked myself why, and I suppose it's because I don't want to hurt its feelings. Yikes. You got to get over that one, too. That's a problem, too. Don't yell at it, but you don't have to be nice, either. Okay. Anyway, and one last thought before we get into our list. You know, I hate that Steve. He's so sycophantic. He's always being nice to me. Okay. So one last thought before we get into our listener feedback.

[00:48:32] I saw a reference to something that made me take notice on Reddit. It was posted into the Claude Code subreddit. Someone with the handle I usually drop, he wrote, we just did an, and he had this in quotes, we just did an AI layoff due to rising costs. And that was the subject of his note. And he said, turns out AI is getting way too expensive.

[00:49:02] We just canceled five of our AI subscriptions and hired two mid-level devs. They laid off the AI. That's very funny. I thought that was great. You know, and as we know, a single anecdote does not a trend make.

[00:49:22] I'm sure that the breakeven point between AI coders and human coders will depend upon the relative costs and skills of each. But I thought it was an interesting observation that not all AI coding is automatically going to be a slam dunk bargain that necessarily unemploys all flesh and blood coders just as fast as humanly possible.

[00:49:49] You know, that said, as I've said before, I can guarantee a thousand percent that AI costs are going to be dropping, you know, just as radically in the future as we've seen storage costs drop over the past 40 years. The problem is that human coding costs are not going to be dropping. If anything, they'll be rising.

[00:50:16] You know, so, you know, no one has a crystal ball, right, about how quickly this is going to be happening, what the shape of the curve of the falling AI coding cost is going to be.

[00:50:28] But I think it's clear that switching from writing lines of code to instructing AI what lines of code to write is what I would be doing right now as fast as I could if I was planning to support myself and my family with code production, you know, more broadly in the future. That's where the future lies.

[00:50:55] Okay, so first piece of feedback from Mark Riale, spelled R-I-A-L-E, who wrote that his name rhymes with smile. So Mark Riale, he said, Steve, I don't think you've mentioned Centrum yet on the show. I'm sorry, Certum. Yeah, Certum. Centrum is some sort of vitamin. Vitamin. Yes, exactly. You're right. I've not mentioned Centrum. No.

[00:51:24] Nor have I mentioned Certum. He said, so I thought I'd give them a shout out and I'm glad he has. Certum, he wrote, issues cheap code signing certificates for open source projects. And it's interesting because they're biased, strongly biased toward free or open source software. He wrote, it's $29 a year if you already have a supported cryptographic card and reader. I use their cloud signing option. Python, although, oh yeah.

[00:51:54] He says, it's $58 a year with no special hardware needed. I signed my PydPainter, apparently it's PydPainter, whatever that is, executable with Certum. And that's over at P-Y-D-P-A-I-N-T-E-R.org. If any of our listeners are curious, I don't know what PydPainter is. It looks like it's something about Python, right? PydPainter. Anyway. Oh yeah. Yeah.

[00:52:24] He says, I've been a listener since the beginning and I'm a proud owner. Anything with P-Y is pie. Right. Exactly. And proud owner and user of Spinrite, he wrote. So this is a definitely useful tip. So Mark, thank you. I wanted everyone else to be aware of it. Going to the website at certum.store, C-E-R-T-U-M dot S-T-O-R-E will allow anyone to begin exploring their many various offerings. There's a bunch of stuff there.

[00:52:54] They charge $89 one time for what they call their open source code signing set, which purchases both the hardware that's necessary. As we know, all code signing now must be locked in hardware. So you get the hardware and the first one year long certificate.

[00:53:17] And once you've got the required hardware, and again, you've got to have that, then it's $29 per year going forward. So while it's not free, you know, it's never going to be. And because, remember, there is a serious requirement to prove your identity for getting code signing certificates. And somebody's got to be in the loop to do that.

[00:53:43] So I don't ever see this code signing certification being free. What we can hope is that it can be made inexpensive enough. And these guys have got it down to $30, well, $29 per year. So this is the USB key with probably a cryptographic pair on there and a card with a chip. Yeah. And I don't know. Maybe you break the chip out of the card and stick it in the back.

[00:54:12] Oh, so that's USB also. That might be a little SIM and you stick it in into the back of the reader. Oh, okay. And that allows it to attach to the PC. That makes sense. But so Mark is using cloud signing. I don't know what – I did not look at the economics of that here using these guys.

[00:54:32] Like maybe it's no charge per signature, in which case, okay, if you didn't want to mess with hardware, I would always opt for holding on to my own certificate in my own storage. To me, that just seems cleaner. And you don't have to have a – like be online and worry about maintaining a cloud account. But anyway, I wanted to let everybody know. I think – and thank you again, Mark.

[00:54:56] Certum.store will give you a whole range of options for cloud or physical and obtaining the hardware. I also did not determine whether any other hardware than theirs can have certificates installed into it. I know that that open source gadget that I found before definitely can because I did.

[00:55:21] I installed the – I can't even remember now – trust something certificate into several different pieces of hardware. So it was a good solution. Florian Goobler wrote, Hi, Steve. In SN 1075, so that was last week, you discussed with Leo about this being a transitory period with a lot of short-term pain for maintainers.

[00:55:49] Ah, right, the whole Mythos problem with a lot of short-term pain for maintainers because Mythos can find all these vulnerabilities. You then stated that this will get better because at least now these vulnerabilities get fixed. He wrote, But that seems like an overly optimistic view to me. As you've stated several times, we're just at the beginning of AI in the realm of coding, right?

[00:56:17] So it stands to reason that future models will be even better at finding bugs and vulnerabilities, which means that they will find more flaws which Mythos today would miss. And they'll be able to find vulnerabilities in code generated by older, for example, today's models as well.

[00:56:40] So it seems to me that this will be a recurring issue with every new leap in the power of large language models. Am I missing something? Best regards, Florian. Okay, so I thought that Florian made his point very well. I think that the fairest reply would be that, okay, that's a definite possibility. I don't want to rule anything out.

[00:57:08] Whether this occurs will most likely depend upon the nature of the yet-to-be-discovered bugs, right? The ones that we don't find with our current level of AI, obviously. My intuition suggests that we're going to be seeing a rapidly diminishing return on effort resulting from the deeper analysis,

[00:57:35] which future more efficient AI would presumably be able to provide. Like, for example, you can't find bugs that aren't there, right? It's not like there's necessarily an infinite supply. There are just things we haven't found yet.

[00:57:51] So it's true that software could contain some deeply squirreled away, crazy combinatorial bugs that would defy anyone's discovery. But I doubt there will be many that I would describe that way. You know, that doesn't feel like the way most bugs operate.

[00:58:14] You know, that is, however, exactly the way deliberately engineered backdoors operate when you think about it, right? Somebody deliberately created a weird, squirreled away, crazy combination of things you could do that individually looked benign,

[00:58:38] but each, like in the same way that a safe is opened by spinning the dial to a series of specific numbers, each of which changes the state of the internal tumblers in the lock until finally it opens. You know, brute forcing is the only way in, unless you're able to listen to any subtle changes that are being made.

[00:59:10] So what would be interesting, as I said, is that deliberately engineered backdoors do operate that way, bugs much less so. So I would not be at all surprised if tomorrow's more powerful AI might be able to ferret out some secrets that some agencies may have previously planted

[00:59:35] into trusted and otherwise bug-free and previously well-audited code. They might be sweating right now at the idea of having that long-trusted and forgotten code reexamined with fresh AI eyes. So interesting thought experiment there. And cool question, Florian. Thank you.

[01:00:02] Eric Kinzer wrote, oh, he forwarded an email that he'd recently received from Ubisoft US. The email subject was, update to your Ubisoft account coming on 4-30-2026, state of residence. And Eric added the single one-line observation, this is getting out of hand.

[01:00:29] So Ubisoft's email to their US-based users was, Hello. On 4-30-2026, Ubisoft will introduce a new state of residence field into your Ubisoft player account for players located in the United States.

[01:00:51] This information helps us comply with state-specific regulations and to better assess legal requirements that may vary from one US state to another. On 4-30-2026, this field will be filled automatically based on your most recent connection to our services. You'll be able to review and update this information at any time in your Ubisoft account information.

[01:01:18] If we're unable to determine a state of residence, this field will remain blank and you will still be able to change your state of residence afterwards. Your state of residence will not be used for any advertising or marketing purposes. For more information on how Ubisoft collects, processes, and protects your data, please refer to our privacy policy and help page. Thank you for helping us keep our community safe and enjoyable for everyone. Signed, your Ubisoft team.

[01:01:49] For those who don't follow gaming closely, and I certainly don't, I needed to look it up. Ubisoft Entertainment is a French video game publisher founded in the early 90s and put on the map by its breakout game Rayman.

[01:02:05] Its current video game franchises include Anno, Assassin's Creed, I've heard of that, Driver, Far Cry, Just Dance, Prince of Persia, heard of that, Rabbids, Rayman, Tom Clancy's, and Watch Dogs. So, I agree with Eric that this is annoying and no one actively wants it, but it's the unfortunate state of play in these presently dis-united states.

[01:02:34] One of the features of the United States is that we're a federation of united but separate states. The theory is that local political control benefits state citizenry since some needs may truly vary by region. But for those issues that should not remain local, like age restrictions for video games,

[01:02:58] we depend upon our federal government to provide unification in the form of blanket regulations that affect everyone across the country. Unfortunately, our congressional regulators are having difficulty agreeing on how to move forward. Everyone wants this to be unified. Everybody. Everyone agrees that the current fragmentation is creating a huge mess. Republican legislators have been proposing legislation at the federal level again,

[01:03:28] which incorporates very strong unification language to explicitly override all local state law. As I noted, everyone really does think that's the way to go. But Democratic legislators have so far been opposing the proposals on the basis that while, yes, unification is needed,

[01:03:50] for some states, those unifying regulations are not as strong as the current state-level laws, which have already been passed and those states are operating under. So the Democratic lawmakers, you know, don't want to regress the strength of the laws that they already have. Their position is that adopting those unified regulations at the federal level

[01:04:19] would result in a weakening of the existing state laws. So that's the current state of play. For now and for the foreseeable future, you know, it's every state for themselves, a free-for-all, with no end in sight. I imagine we'll eventually get something. I don't know how or when. So and it's weird that Ubisoft is doing this because they're putting the assertion in the user's hand, right? I mean, so there's nothing.

[01:04:48] They don't want to do it. That's why. And so they're doing the minimum that they can. They're French, remember. They have to go by French law. So, OK, well, this actually did come from Ubisoft America, which I think has offices in San Francisco. But what's weird, Leo, is that what you would be inclined to do then would be to choose a state

[01:05:13] that has the weakest age restrictions and enter that into the blank field. I mean, I'm sure that's what they're going to do. Right. Users are going to say, oh, no, I'm not in California, where I have to ask permission. I'm in Utah. That's it. And then, you know, there's no legislation there. So, again, it seems weird.

[01:05:36] Just as a complete aside, since what Eric had forwarded me was the Ubisoft email in all its HTML-embellished glory. Yeah. My trusty eM Client presented me with these choices when I looked at what it was Eric had sent me. I got two email tracking dialogues.

[01:06:06] It said, do you want to download tracking pixel from Salesforce? Yeah. And then another one. Do you want to download tracking pixel from Return Path? Now, of course, even if they were only actual tracking pixels, I would decline. Thank you very much anyway. But, you know, how does downloading those help anyone other than Salesforce and Return Path?

[01:06:32] As we know, what's actually being downloaded are almost certainly massively invasive blobs of JavaScript that will attempt to suck everything they can from, you know, the user's machine. So, needless to say, I declined the offers. I hope everyone's email clients are as responsible and capable as eM Client, which I continue to be really happy with. And, Leo, you know, we're an hour in. Okay.

[01:07:00] I think we should share with our users what sponsor we're happy with. You know, the tracking pixel, I'm not sure how eM Client works. You know, my email client like yours will say, hey, there's an image in this, which is, you know, traditionally the tracking pixel was that. It was a one by one pixel, an image that when you hit it would connect to their server. As you know, I'm not telling you anything you don't know, but for the audience, it would connect with their server.

[01:07:29] And then that gives them information about how many of these emails got opened. And, you know, a lot of times with newsletters, you just don't know. So, MailChimp and all these other companies do this. Salesforce does it as well. It's weird that they wanted you to download it. So, I don't know if that's eM Client's way of saying, do you want this pixel? Because it would be a download of a one by one image, right? An invisible image. Or it could be. But it could be JavaScript. Yeah, I don't know what eM Client's saying. Yeah, exactly.

[01:07:59] Yeah. If it's just a pixel, I mean, it's harmless to your machine. It's just a signal to them when you hit the server with your IP address that you open the email. Right. Yeah. So, I don't know how EM – this is a weird way to phrase – Although, it would also return a cookie if you had a cookie from that domain. That's right. So, they do know who it is. Yeah. Right. Yeah. But that's their point, right? They want to know, did you get our email? Right. Yeah.
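The mechanism Steve and Leo are describing can be mocked up in a few lines of Python. Everything here is invented for illustration: the domain, the query parameter names, and the helper functions are hypothetical and don't reflect any real vendor's tracking format.

```python
# Illustrative sketch: a "tracking pixel" is just a 1x1 image whose URL
# carries a per-recipient identifier. The moment the mail client fetches
# the image, the sender's server learns that this recipient opened the
# email, when, and from what IP address.
from urllib.parse import urlencode, urlparse, parse_qs

def tracking_pixel_html(recipient_id: str) -> str:
    # example.com and the parameter names are made up for this sketch
    query = urlencode({"rid": recipient_id, "campaign": "spring-sale"})
    url = f"https://example.com/open.gif?{query}"
    return f'<img src="{url}" width="1" height="1" alt="">'

def who_opened(img_url: str) -> str:
    # What the sender's server can recover from the request alone
    return parse_qs(urlparse(img_url).query)["rid"][0]

html = tracking_pixel_html("user-8675309")
print(html)
```

Note that no JavaScript is required for this much: the identifying information rides entirely in the image URL, which is why a client that simply declines to load remote images defeats it.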

[01:08:27] So, I don't know if it's necessarily malign. My email client – and I agree with you. You should never have an email client that loads images, period. It will just say, you know, I see a one-pixel image. Do you want to hide that? Well, in fact, every email that I receive from any of these places, it says, you know, it does not by default download images. Yes. And then there is an option of download these or always download from this source. Right.

[01:08:56] So, you're able to, like, train it. It's like, yeah, I don't mind getting, you know, GRC's email images. I think this is good of eM Client to kind of say a little bit more about it. Yeah. I would like to know, though, if it's a single dot on the screen or if it's a JavaScript blob. That's an important distinction. Yeah, exactly. Our show today brought to you by Material, the cloud workspace security platform built for lean security teams.

[01:09:22] See, you can be a lean security team and be effective and not get any less lean. How about that with Material? Managing security, especially in the cloud workspace, is really hard. I mean, yes, of course, there's phishing emails, but it's far from the only way in. But today's email security tends to stop at the perimeter and new attacks are hard to detect with siloed email data and identity security tools.

[01:09:47] And if you're in the cloud, if you're on, you know, Google Workspace, the threats come from all of those places. Material protects you. It protects the email. It protects the files. And it protects the accounts, not just in Google Workspace, but Microsoft 365 as well. Because effective email security today needs to do a lot more than just block phishing and other inbound attacks. It needs to provide visibility and defense across the entire workspace threat surface.

[01:10:15] You've seen people, you know, this big one now is malicious calendar invites, right? Or stick files, share files. I used to get files shared in my Google Drive. Those could be malware, right? I mean, it's nasty out there. Material, and of course, there's always the attempts to log in saying, oh, you know, give us your password. We think there's a security issue. Yeah, there would be if I gave you my password.

[01:10:43] Material ingests your settings, your contents, and your logs to provide holistic visibility into threats and risk across the workspace to stop all those attacks cold. Along with the tools to automatically remediate them, which is really cool. Material delivers comprehensive workspace security by correlating signals. They've got great threat intelligence and driving automated remediations across the environment. Phishing protection and email security, yes.

[01:11:12] Combining advanced AI detections with threat research and user report automation, but also detection and protection of sensitive data across inboxes and shared files. Account threat detection and response with comprehensive control over access and authentication of people and third-party apps.

[01:11:30] Material empowers organizations to rapidly mature their ability to detect and stop breaches with step-up authentication for your extra special sensitive content, blast radius visualization for accounts, and the ability to detect and respond to threats and risk across the cloud workspace. Material enables organizations to scale their security without scaling their team. Material drives operational efficiency.

[01:11:56] It's a simple API-based implementation, flexible, automated, and one-click remediations for email, file, and account issues, including an AI agent that automates user report triaging and response. Again, taking the burden off you. Material protects the entire workspace for the cost of just email security alone with a simple and transparent pricing model. It is modern protection for modern cloud workspaces.

[01:12:22] Secure your inbox and your entire cloud workspace without adding more toil to your day or costs to your balance sheet. See material.security to learn more. You can even book a demo with them. Material.security. Thank them so much for the job they're doing and the support they're giving to the job Steve's doing. On we go with security. So Roger Dooley asks, he says, or comments,

[01:12:49] I was listening to episode 1075 again last week, and I agree that agentic coding is likely the world we're heading toward. That said, I'd offer a word of caution. I'm using an LLM to build an internal stack. I can code, but I'm not a developer. I'm a sysadmin who has spent time studying pen testing. Excuse me.

[01:13:15] The early results were impressive, but the code base was brittle and hard to reason about. Things didn't click until I started spending hours working through exactly what I wanted. The shape of the system. Data contracts. Design patterns. Two API changes and two rewrites later, I have something that feels solid and reasonably secure.

[01:13:41] Here's what I think is missing from the anyone can build their own tooling conversation. For non-trivial projects, you need to approach the work at an architectural level. That requires vocabulary and intuition that most people without a development background simply don't have yet. I certainly didn't have it, and I still have huge knowledge gaps. Maybe that's the real skill to develop.

[01:14:10] Not coding, but knowing how to communicate your intent clearly and how to think about system design. That's what separates a tool that works from one that barely holds together. Sincerely, Roger D. So, I think Roger is exactly correct. Software development projects vary widely in complexity.

[01:14:34] You know, some are purely about the resulting output with a relatively straightforward translation layer between input and output. But other projects will have deep internal structure and complexity that forms a layered solution. The Internet's own protocols are a good example.

[01:14:58] The OSI stack that we've talked about in the early days of the podcast is composed of layered protocols. At the bottom layer is the physical specification, the electrical signaling that carries the data. On top of that is the formatting of the electrical signals to create, for example, Ethernet packets.

[01:15:23] The Ethernet packets then encapsulate IP packets inside of which are protocol packets such as ICMP, UDP, TCP. And these protocols carry message data that's formatted, each using its own rules. You know, it's this precise and carefully thought out architecture that explains much of the Internet's success.
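To make that encapsulation idea concrete, here's a toy sketch in Python, not any real networking code. The UDP header really is four 16-bit fields, but the "IP" header here is grossly simplified for illustration; the point is just that each layer wraps the one above it and only looks at its own header:

```python
import struct

def udp_wrap(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Real UDP header: source port, destination port, length, checksum (0 = unused)
    length = 8 + len(payload)
    return struct.pack(">HHHH", src_port, dst_port, length, 0) + payload

def ipv4_wrap(segment: bytes, proto: int = 17) -> bytes:
    # Grossly simplified IP-style header for illustration only:
    # version/IHL byte, protocol number (17 = UDP), total length.
    # A real IPv4 header is 20+ bytes with TTL, addresses, checksum, etc.
    total = 4 + len(segment)
    return struct.pack(">BBH", 0x45, proto, total) + segment

message = b"hello"
packet = ipv4_wrap(udp_wrap(message, 53, 33000))
# Each layer only inspects its own header and hands the rest inward,
# which is exactly why the layers stay independent of one another.
print(len(packet))  # 4 (toy IP) + 8 (UDP) + 5 (payload) = 17
```

Unwrapping happens in the reverse order on receipt: strip the outer header, dispatch on the protocol field, and hand the remainder up to the next layer.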

[01:15:50] These protocols inside packets, inside other packets, inside other packets, could be seen as layers. Now, if a user had some wires and said to an AI, please design a system to connect them in a network so I can communicate with others, a competent AI could probably design a solution that worked.

[01:16:20] But it would have none of the crucially important characteristics that make the Internet robustly reliable, because very few users would know what to ask for.

[01:16:33] On the other hand, the flip side of this, an expert communications architect could instruct the same AI to produce an elegant, multi-layered, resilient, and robust design that would recreate and incorporate many of the crucial features of the Internet. So, our listener, Roger, is not the first person to make this observation.

[01:17:01] But I thought it was worth repeating and further strengthening because this is really important about the application of AI. I am sure that there will be a great amount of bespoke software created by neophytes who just ask the AI to give them what they want.

[01:17:22] In fact, one thing I have not seen anyone talk about yet is the delivery of a fully ready-to-go, turnkey, roll-your-own-software construction set. Oh, it's an interesting idea, isn't it? Because you still need all this GitHub stuff.

[01:17:46] Remember, and you will, Leo, back in the early 1980s, a gifted programmer named Bill Budge wrote something that he called the Pinball Construction Set. Jeff Atwood and I were talking about Bill Budge and the Pinball Construction Set. Yep. Yep. It was for the Apple II personal computer. And at the time, no one had ever seen anything like it.

[01:18:11] This amazing toy, running in real time on an 8-bit 6502 microprocessor with 48K of RAM, and the OS under it, managed to accurately capture all of the physics of a steel ball bouncing off rubber bumpers with flippers and ramps and spinners and all of the other familiar widgets of pinball machines.

[01:18:37] Its user could click and drag the various objects from an off-machine palette onto the pinball board. And when it was switched on, switched from design mode to run mode, everything would work. So my point is that absolute non-programmers were able to create a machine that never existed before and make it go.

[01:19:05] We don't yet have that with AI. At the moment, AI-driven code generators are aimed at developers who already know their way around complex development environments and GitHub. But given what's now possible, someone, mark my words, someone is going to create the software construction set that will open up the use of AI for the creation of problem-solving software.

[01:19:35] And just like Dan Bricklin and Bob Frankston, who invented the spreadsheet, did, people who never programmed a computer before will be able to create things and make them go. So essentially, Roger's question sort of covers it all.

[01:19:54] So the first portion is that it absolutely, definitely matters how much you understand about how software should be built, in order to better guide the AI through the details of its implementation. But the other thing that occurred to me is that we don't yet have a turnkey software builder.

[01:20:20] And I can't wait to see who does it first, because clearly it's going to come. To confirm what you were saying, I asked Claude Code, I said, I'd like to design a robust networking stack; what do you know of the OSI layers? See, it would help to know that there are such things. And it does, in fact, know all seven layers. I do like, though, its practical caveats worth keeping in mind before you design: TCP/IP doesn't match cleanly; it collapses layers five through seven into one application layer. The OSI model is pedagogical scaffolding, not an implementation blueprint.

[01:20:50] TLS is the classic mess. It sits between layers four and seven and doesn't fit any single OSI box. QUIC is worse. It really, so this can also be educational. Oh, Leo, not it can be. I mean, it is. It is astonishing. Yeah. So if you could get a wedge in to ask the beginning questions, it can guide you. But you need to understand architecture.

[01:21:19] You do need to have, I think, some background in what you're doing. Otherwise, this is just nonsense. And I've run up against that, because I'm doing a voice training model right now. Right. And it's saying things to me. I have to say, what is AFE? What are you talking about? What are you? Yeah. I don't understand. But at least it will explain it to you, right? So it's a great way to learn about something. Oh, it's like for an inquisitive person who. What a toy. Oh, my God.

[01:21:49] It's just astonishing. Yeah. Yeah. Okay. Michael Swanson said, hi, Steve and Leo. Thank you for your thoughtful conversations and analysis of AI topics over the last several months. The most recent conversations about autonomous agentic coding and the emerging capabilities demonstrated by Mythos have led me to the conclusion we are much closer to artificial superintelligence, ASI, than most people realize.

[01:22:16] How long will it be before one of the companies developing frontier models tells their current LLM to go off in a corner and use its coding skills to build a better AI? Working tirelessly and iterating literally millions of times, it will undoubtedly succeed with the only limitation being processing and electrical power. The questions then will be, what will this ASI think of us and what level of control will

[01:22:46] we still exert? Living in interesting times indeed. Best regards, Michael Swanson. So I don't follow Michael's conclusion, but my major at Berkeley was EECS, electrical engineering and computer science. And though that was the obvious choice for me at the time, today I regret that choice

[01:23:13] because, you know, I'd already been very thoroughly self-taught. You know, I wrote and delivered two years of electronics curriculum to my own high school class. While I was in high school, I was employed by Stanford's AI lab, programming DEC minicomputers in assembler, designing and building the portable dog killer, and so forth. You know, all by the time I finally attended Berkeley, uh, you know, where physics and

[01:23:40] electronics and computer science courses were basically reviews for me. However, the class that I encountered that astounded me was philosophy. It was an entirely new world that I couldn't get enough of. And Michael's note put me in mind of that because it must be that these are exactly the questions being asked in undergrad and graduate philosophy courses today.

[01:24:10] What a time to be thinking about that stuff. You know, what a time to be a philosophy major, you know, able to ask and debate. What is it exactly that we've created? What does it mean to be conscious? How can we determine whether something is aware of itself? Where's the line between creating a bulletproof appearance of something being true and it

[01:24:40] actually being true. Right. If in every testable way it acts conscious, then isn't it? How tightly are we going to hold onto the idea that our biological brains are inherently unique? You know, I've been spending a great deal of time working with Claude. While I am truly impressed, it occasionally reveals itself to be essentially a fancy linguistic array.

[01:25:09] Now, this doesn't make it any less astonishingly useful, but it does render it non-conscious. Earlier today, and this was when I wrote this was Saturday, I caught it in a big mistake. When it was called on it, it gracefully recovered and agreed that it, oh, of course, how silly of me. That's not correct.

[01:25:34] But the nature of the mistake revealed that it did not have any actual understanding at all, at any level of what it was saying. You know, today it remains an astonishingly capable parrot, but a parrot nevertheless.

[01:25:54] But Leo, God, can you imagine being in class with a smart professor and a bunch of students and able to talk about this? Oh, it is so cool. And right, it's philosophy as much as anything else. Yeah, it is. It's like, what are we? I mean, that's really, it's what are we? Here we have this thing that looks a lot like us, which makes us ask, well, okay, what is it? It's different. Yeah.

[01:26:24] What are we? It's different. Yeah. Yeah. Wow. Listener Joey Albert pursued the question of whether the 0patch people might have a micropatch for that Red Sun zero-day, which was not fixed by April's Patch Tuesday. Remember that I kind of left that issue hanging on the podcast a couple of weeks ago, like maybe these guys have it because Microsoft missed it.

[01:26:51] They replied to him writing: Hi, Joey. While Windows Defender got updated to version 4.18.26030.3011, fixing BlueHammer even on Windows computers that are not receiving operating system updates anymore, Red Sun remains. Meaning that, you know, even old Windows 7 machines are getting Defender updated.

[01:27:20] They got this fixed because Windows Defender is still being updated, even if patches are not. And that's the case for older Windows 10 machines as well. They said, unfortunately, we cannot fix it either, because Defender is a protected process and does not allow being injected into, which the 0patch agent must do in order to apply a patch.

[01:27:47] They wrote, we were considering creating a patch that would require disabling the protected process protection of Defender, but decided against it as it would not be clear whether the total net effect of that would be positive or negative for the overall security of our users.

[01:28:09] We're sure Microsoft will issue another update of Defender that will resolve Red Sun and Undefend, which is the other unpatched zero-day. And they finished: I hope this helps or at least explains the situation. Thanks. So, you know, and Joey, thank you for tracking that down for us.

[01:28:31] The fact that this can be resolved inside Defender at any time, because, you know, Microsoft is constantly pushing Defender updates, means that everyone will not need to wait until, at best, May 12th for this actively exploited in-the-wild zero-day. That's a good thing.

[01:29:21] In something like Rust, it could be trained to output the portable, assembly-language-like intermediate representation that the LLVM compiler backend uses to output processor-specific executables. I could even see a company going so far as to not retain the intermediate representation, under the idea:

[01:29:50] Can't have a source code leak if you don't have source code. He said, I have a number of reasons I think this would be a bad idea, but I'm also kind of expecting it to happen at some point. Okay. So I've had similar thoughts about the future, though I was thinking about having AI bypass high-level language output to target the hardware's own machine language directly. Kevin's notion of targeting its output to LLVM is interesting.

[01:30:18] The problem, of course, is there's probably not nearly as much LLVM IR out on the Internet as there is Rust, C++, C#, and C. And we should remember that today's AI doesn't really understand what it's doing. I mean, it's astonishing for what it's able to do for not knowing what it's saying, but it really doesn't know. It's a real mystery. How does it figure this out? I know, Leo.

[01:30:48] It is a real mystery. I don't get it. But I wanted to include this in order to address the broader point that human programmers have designed computer languages to be comfortable for us to use for expressing what we'd like our computers to do.

[01:31:09] One of the reasons the early DEC PDP-11 and VAX minicomputers were so wonderful to program in their assembly language was that their instruction sets were designed to be written to directly by people. They were, in a sense, high-level machine language.

[01:31:30] The VAXes even had instructions for directly manipulating linked lists supported by the hardware, which is astonishing. But then higher-level languages were later created, you know, like C, which was written for and developed on PDP-11s in support of Unix, which was also developed for the 11.

[01:31:56] It turned out that compilers were not very able to use all those fancy high-level features in the low-level machine. Compilers just wanted to mix together basic, simple instructions. So over time, the instructions that the hardware-designing programmers had built into the hardware for their own comfort disappeared.

[01:32:20] And as we know, RISC machines turned out to be exactly what compilers wanted to write to. My point here is that we might very well expect the same thing to happen as AI increasingly takes over the task of coding, which is what I fully expect to see happen during our lifetime. You know, mine and Leo's.

[01:32:47] At this point in time, AI does not yet deeply understand code. So the quality of the code it's able to produce is directly related to the body of prior code it's been trained on. That, too, will change.

[01:33:03] Once it actually understands code, I would expect it to begin using its own representation rather than the less efficient representations that we biologicals developed for our own use. And if you want a chillingly perfect example of that, watch the movie Colossus: The Forbin Project.

[01:33:27] After Colossus and its Russian counterpart, Guardian, are connected and begin communicating, something happens. And it gets faster and faster and faster and faster. Exactly. Exactly. But don't forget that we're using LLMs, which are trained on language right now. Yes. Yes. And they're also trained on an awful lot of code.

[01:33:55] So it is sort of a native tongue to them. I agree. They could be more efficient. And people have been writing languages for LLMs already. But it'd be better if the LLMs made up their own language. And in fact, there was a, I remember last year, an experiment where they, the LLMs were talking to each other in a language unintelligible to humans. So it's already happened, Steve.

[01:34:19] Very much like, you know, two twins who, you know, sort of develop their own way of communicating before they learn English. That's right. Yep. Angus McKinnon says, Steve, can you please explain the below? Angus then links to an article in Gadgeteer with the somewhat misleading title, Stop Using Cloudflare's Default 1.1.1.1 DNS.

[01:34:47] And then they said, Changing One Digit Blocks Malware at the Router Level. Then we had the link, which in turn references and quotes a piece in How To Geek with a title, Everyone Uses 1-1-1-1, But 1-1-1-2 Protects You. So this is not something we've talked about, or at least for a long time. I don't remember ever discussing it, since it's actually true.

[01:35:13] And it's a tiny, simple change for anyone who's already using Cloudflare's 1.1.1.1 DNS resolvers. So I wanted to spend a moment to share what How To Geek wrote. They said, Cloudflare's 1.1.1.1 is one of the most popular DNS servers. It's fast, reliable, and easy to remember.

[01:35:37] However, it'll also connect you with any website out there, even a malicious one, without even a warning message. Okay, now, I'll just interrupt to say that that statement is a bit misleading and annoying, since this is no failing of Cloudflare's DNS, right? You know, it's supposed to do that. It's not DNS's inherent purpose to provide warning messages.

[01:36:07] Actually, that practice, which was once popular, is now quite frowned on. And as it happens, GRC's DNS benchmark specifically tests for this behavior and alerts its user if this is being done. Because back when I wrote the DNS benchmark, back in Ought 8, it was being done. So the benchmark still has that test in it. Anyway, the article continues.

[01:36:35] That's where Cloudflare's 1.1.1.2 DNS server comes in. For the most part, 1.1.1.2 works the same way as 1.1.1.1. It provides IP addresses. But it also has an integrated security filter.

[01:36:52] If you try to connect to a domain known for phishing, running command and control servers (meaning something in your network is trying to connect out, which is really useful to know), distributing malware, or other kinds of malicious activity, you'll be redirected to 0.0.0.0 instead. Okay, so again, to interrupt.

[01:37:21] By that, what they mean is that when you ask Cloudflare's 1.1.1.2 DNS resolver for the IP of a known malicious domain, the IP you get back will not be the IP of that domain, but 0.0.0.0.

[01:37:40] Instead of the IP that you would get if you had used a normal DNS resolver, or even Cloudflare's 1.1.1.1 resolver. They wrote, because the protection layer exists outside your PC and outside your home network, malware never reaches your PC. And if you click a phishing link, you're never connected. It's a very proactive way to keep your devices safe.

[01:38:10] And great if you want another passive layer of protection that you can set and forget. Cloudflare's 1.1.1.3 is even stricter. Cloudflare's 1.1.1.3 DNS server includes everything that 1.1.1.1 and 1.1.1.2 do, but it takes it a step further by blocking websites that are known to host adult-only content.

[01:38:35] It's a good choice for devices that are used by children, but could also be used if you wanted to block adult content across an entire network as well. You just need to change the DNS server on the router instead of on a single device. Despite how helpful DNS-based filtering can be for securing your network and your devices, it has a few limitations, they write. The biggest limitation, and the most important, is that it only works against known malicious domains.

[01:39:02] If a new domain crops up that's distributing malware or a previously safe domain is taken over by malicious actors, it won't help you. That's why having multi-layers of protection is essential. It can also return a false positive and block a perfectly safe website, though that's pretty rare. So anyway, I think this is a terrific tip for anyone who's using Cloudflare's 1.1.1.1 resolver.
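For the curious, the mechanics behind this tip are easy to see from code. Here's a hedged sketch: the DNS wire format shown is real, but error handling is omitted and the actual network send is only described in a comment rather than performed. It builds a standard A-record query you could send over UDP to 1.1.1.2 port 53; a blocked domain comes back answered as 0.0.0.0:

```python
import struct

def build_dns_query(domain: str, txid: int = 0x1234) -> bytes:
    # 12-byte DNS header: id, flags (0x0100 = recursion desired),
    # one question, zero answer/authority/additional records
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME is a run of length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in domain.split(".")) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN)
    return header + qname + struct.pack(">HH", 1, 1)

def looks_filtered(answer_ip: str) -> bool:
    # Cloudflare's filtering resolvers (1.1.1.2 / 1.1.1.3) answer 0.0.0.0
    # for domains on their block lists, rather than the real address
    return answer_ip == "0.0.0.0"

query = build_dns_query("example.com")
# To actually ask 1.1.1.2, you'd send `query` via a UDP socket to
# ("1.1.1.2", 53) and parse the answer section of the reply.
```

The filtering is entirely server-side, which is the article's point: nothing on the client changes except which resolver address receives this same query packet.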

[01:39:29] I know that, Leo, you are a user of NextDNS, as I have been, and there are other commercial DNS servers that allow you to add this kind of family-friendly filtering and malicious site filtering. So those are options there, too. But Cloudflare does this all for free, and it is indeed a matter of simply changing that final digit.

[01:39:55] For what it's worth, users of GRC's DNS benchmark, once again, will note that all three of those IP addresses and also their secondary DNS mirrors are already present in the benchmark's default built-in resolver list. So it's easy to compare their relative performance.

[01:40:15] For me, just now, when I was writing the show notes, I think this was on Sunday now, the alternate filtering resolvers did measure somewhat slower than Cloudflare's original 1.1.1.1 and 1.0.0.1 unfiltered resolvers.

[01:40:37] So there may be a little tiny bit of performance trade-off, but it could also be the location or the time of day when I did this. And, of course, that's why you have the DNS benchmark, so you can test from anywhere and at any time. But, I mean, it wasn't a huge difference. And for what it's worth, I'm consistently seeing 1.0.0.1 outperforming 1.1.1.1 because everybody uses 1.1.1.1 as their primary.

[01:41:05] It turns out that using the path-less-traveled DNS resolver gives you a bit of a speed advantage, so it might be worth swapping those numbers. And, Leo, let's take the second-to-last break. We're going to delve into this amazing story of Fast16.sys, and we'll take one final break when we're halfway through that. Perfect. Perfect.

[01:41:31] Our show today brought to you by CyberHoot. I love this little owl. Lisa was doing her CyberHoot training the other day, and she got an award, and it was a little owl sticker. It was so cute. CyberHoot. It's a security awareness training. Now, I have to tell you, traditional security awareness training does one thing. Well, it makes employees feel bad. Right? If you've ever done it, you know, and then you kind of disengage when you feel bad. Not a good way to learn.

[01:42:01] Someone clicks a fake phishing link, gets a shame lecture, watches a boring video, and learns almost nothing. Well, CyberHoot was built by people who saw that cycle repeat for decades. From the ground up, they decided to fix it with one question in mind: What if you trained people the way great coaches train athletes? With positive reinforcement, with encouragement, with repetition, and with rewards like little owl stickers for doing the right thing.

[01:42:30] CyberHoot is a security awareness training program for businesses of all sizes, for government entities, for schools, and professional services firms that want their people to actually get smarter about cybersecurity, not just check a compliance box. Most phishing training teaches people to spot a phishing email after it lands, right? CyberHoot's HootPhish goes farther. It teaches people how phishing actually works. Knowledge is power here.

[01:42:58] When you understand how an attacker constructs a phishing email, what makes it convincing, what psychological triggers it exploits, you get smarter. You stop being a passive target. You start being a smart consumer. HootPhish uses typosquatted domains, just like the real guys. Convincing vendor impersonations, just like the real guys. These are the exact tricks that real attackers use. Spotting an obviously fake email? Well, nobody learns from that.

[01:43:28] But when you have an email that isn't obviously fake, one that's like the real thing, people learn from that. That works. One healthcare provider hit 95% training compliance and cut phishing-related help desk tickets by 40%. I think people know most breaches start with a human being clicking something they shouldn't have; that was our topic at Zero Trust World. Steve's topic was the threats coming from inside the house.

[01:43:56] You've had conversations with coworkers and frustrated IT teams. You know how it is. CyberHoot is the answer to that conversation. It's what you recommend to the business owner, the IT leader, or the enterprise security team that wants to build, and this is so much better, a real security culture at scale. Not wasting a six-figure training budget on programs that don't change behavior.

[01:44:22] If your organization is ready to stop punishing people for being human and start actually building cyber smart employees, head to cyberhoot.com slash security now. Use the code security now at checkout. And get 20% off your first year. That's C-Y-B-E-R-H-O-O-T. Cyberhoot.com slash security now. Promo code security now for 20% off your first year. Just remember, always laugh, learn, and hoot up.

[01:44:52] It was really fun to, you know, I didn't want to weigh in. I didn't want to horn in on Lisa's training. But it was fun to watch her go through it. And the training was really good. It talked about script kiddies. It talked about who the people are who are trying to hack you. Why? What do they get out of it? And it was really fun watching her learn. And at the end of it, I think, you know, she did the quiz. She got 100%. I might have helped a little bit. And she got the little owl. And it just was fun. It was really good. It really worked. Cyberhoot.

[01:45:22] Cyberhoot.com slash security now. Promo code security now for 20% off your first year. Steve, what is this fast16.sys of which you speak? Wow. Okay. So, everybody, first I want to thank our listeners who sent me a heads-up about this. My first thought upon seeing just the headline was that it might be little more than a passing

[01:45:49] note, but the truly, oh, Leo, diabolical and clever nature of what was achieved by unknown but suspected agencies caused this to stand out on a scale similar in scope to Britain's deliberate secrecy once they had unraveled the operation of Germany's Enigma machine.

[01:46:15] You know, keeping their discovery a secret was diabolical. And so is what was accomplished by the fast16.sys file system driver, created the same year as this podcast started, a full five years before Stuxnet. So last Thursday, the SentinelLabs group of the SentinelOne security research firm posted

[01:46:42] their piece about what they discovered and how that happened. Their piece was titled "Fast16 Mystery: Shadow Brokers Reference Reveals High-Precision Software Sabotage Five Years Before Stuxnet." Remember, the Shadow Brokers was the leak of internal NSA stuff. It turns out there were some clues in that.

[01:47:09] So they wrote: Our investigation into Fast16 starts with an architectural hunch. A certain tier of apex threat actors has consistently relied on embedded scripting engines as a means of modularity. Flame, Animal Farm's Bunny, Plexing Eagle, Flame 2.0, and Project

[01:47:36] Sauron each built platforms around the extensibility and modularity of an embedded Lua VM. We wanted to determine whether that development style arose from a shared source. But, like, where? How did all these individual actors get this idea?

[01:47:57] So they said, so we set out to trace the earliest sophisticated use of an embedded Lua engine in Windows malware. They write Lua is a lightweight scripting engine with a massive, I'm sorry, with a native proficiency for extending C and C++ functionality.

[01:48:18] Given the appeal of C++ for reliable high-end malware frameworks, this capability is indispensable to avoid having to recompile entire implant components to add functionality to already infected machines. We did not find an indication of direct shared provenance, but our investigation did uncover the oldest

[01:48:45] instance of this modern attack architecture. Lua leaves a distinctive fingerprint. Compiled bytecode containers start with the magic bytes 1B4C7561 in hex, followed by a version byte, and the engine typically exposes a characteristic

[01:49:14] C API and environment variables such as LUA_PATH. Hunting for these traits across mid-2000s malware collections surfaced a sample that initially looked unremarkable. It was titled svcmgmt.exe, Service Management.
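A minimal version of that kind of hunt is easy to sketch. The magic really is the four bytes 0x1B 'L' 'u' 'a' (1B4C7561), followed by a version byte; everything else here, like the sample buffer, is purely illustrative, not SentinelLabs' actual tooling:

```python
LUA_MAGIC = b"\x1bLua"  # 0x1B4C7561, the header of compiled Lua bytecode

def find_lua_bytecode(blob: bytes):
    """Return (offset, version_byte) for each Lua bytecode header in blob."""
    hits = []
    start = 0
    while (idx := blob.find(LUA_MAGIC, start)) != -1:
        # The byte after the magic encodes the Lua version:
        # 0x50 == Lua 5.0, 0x51 == Lua 5.1, and so on.
        version = blob[idx + 4] if idx + 4 < len(blob) else None
        hits.append((idx, version))
        start = idx + 1
    return hits

# A fake sample: 20 bytes of junk, then a Lua 5.0 bytecode header
sample = b"MZ\x90\x00" + b"\x00" * 16 + b"\x1bLua\x50" + b"junk"
print(find_lua_bytecode(sample))  # [(20, 80)]  i.e. offset 20, version 0x50
```

Scaled up across a malware corpus, exactly this kind of signature scan is what surfaced the sample the researchers describe.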

[01:49:41] On the surface, svcmgmt.exe appears to be a generic console-mode service wrapper from the Windows 2000/XP era. A closer look reveals an embedded Lua 5 virtual machine and an encrypted bytecode container unpacked by the service entry point,

[01:50:07] meaning when the service is loaded, it's then initialized. And at that point, this Lua 5 virtual machine reads this encrypted bytecode container and unpacks it. The developers of this executable extended the Lua environment to include a wide string module to provide native Unicode handling,

[01:50:31] you know, dual-byte 16-bit characters as opposed to single-byte characters; a built-in symmetric cipher, exposed through a function commonly labeled B, used to decode embedded data; and multiple modules that bind directly into the Windows NT file system, the registry, service control, and network APIs.

[01:50:57] Even by itself, svcmgmt.exe already looks like an early high-end implant: a modular service binary that hands most of its logic to encrypted Lua bytecode. The binary includes a crucial detail, a PDB path that links the binary to the kernel driver fast16.sys.

[01:51:27] Okay, I'll interrupt here to say that anyone who's developed code using Microsoft's tool chains will be familiar with their creation of .pdb files. Even I, because I'm using their linker to link my assembly code, have PDB files, you know, DNSBench.pdb, coming out my ears. They're everywhere if you're using Microsoft tools.

[01:51:53] So what the SentinelLabs researchers are saying here is that their search for the earliest instances of the use of Lua scripting in malware turned up a reference to something unknown and unsuspected: a Windows kernel driver named fast16.sys.

[01:52:19] So, so they continue writing, buried in the binary strings is that PDB reference to c colon backslash buildy backslash driver backslash fd backslash i386 backslash fast 16 dot PDB.

[01:52:38] At first glance, they write, the path is structured like any other compiler artifact: an internal build directory, a component name, fast16, and an architecture hint, i386. However, in this case, there's a mismatch.

[01:52:56] The string appears inside of a service-mode executable, and yet the driver\fd\i386\fast16 segment of the PDB string clearly refers to a kernel driver project. Following that clue led us to examine a second binary, fast16.sys, which I'll just note they wouldn't have otherwise known to look for.

[01:53:22] But the point is that, from the clue of this PDB reference, they realized this was not part of a service. This was part of a kernel driver. Why was a kernel driver referenced in a service? And what is fast16.sys?
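Pulling PDB build paths out of a binary's strings is routine triage, and a short sketch shows why it's such a gift to researchers. This is hedged: the regex is a loose approximation, and real tools parse the PE debug directory rather than grepping raw bytes. The sample path below is the one quoted from the article:

```python
import re

# Loose pattern: a drive letter, a backslash path of printable ASCII,
# ending in .pdb (real tools read the PE debug directory instead)
PDB_RE = re.compile(rb"[A-Za-z]:\\[ -~]{1,200}?\.pdb", re.IGNORECASE)

def extract_pdb_paths(blob: bytes):
    """Return every PDB-style build path embedded in a binary blob."""
    return [m.decode("ascii", "replace") for m in PDB_RE.findall(blob)]

# Junk bytes surrounding the embedded build path quoted in the article
blob = b"\x00\x7f" + rb"c:\buildy\driver\fd\i386\fast16.pdb" + b"\x00MZ"
print(extract_pdb_paths(blob))  # ['c:\\buildy\\driver\\fd\\i386\\fast16.pdb']
```

Because the compiler embeds the developer's own build-tree layout, a single leftover path like this can link otherwise unrelated binaries, which is exactly the forensic pivot described here.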

[01:53:42] They wrote, this kernel driver is a boot start file system component that intercepts and modifies executable code as it's read from disk.

[01:53:56] Although a driver of this age will not run on Windows 7 or later, for its time, fast16.sys was a cut above commodity rootkits, thanks to its position in the storage stack, control over file system I/O, and rule-based code patching functionality. Okay, again, I'm pausing to add some clarification.

[01:54:23] What this means is that this is a rootkit, plain and simple. It's marked for boot-time loading by the operating system, which is busy loading all manner of random stuff to get Windows up and running.

[01:54:40] But in this case, when the fast16.sys kernel rootkit is loaded and initialized, its initialization code, which runs immediately upon its loading, installs interception hooks deep into the operating system, so that it is able to subsequently oversee and interfere with whatever it might wish to.

[01:55:10] So they continue. In April 2017, almost 12 years after the compilation timestamp, the same file name, Fast16, appeared in the Shadow Brokers leak, in a text file named drv_list.txt.

[01:55:38] They said the 250K-byte text file is a short list of driver names used to mark potential implants cyber operators might encounter on a target box as friendly, or to pull back, in order to avoid clashes with competing nation-state hacking operations. And then we have a sample from their posting.

[01:56:08] The report shows a snapshot of five lines of file identifiers from the drv_list.txt file. Four of the five lines identify the malware by name: MistyVeal, NetSpider, Olympus, and PeddleCheap.

[01:56:29] Each with a file name like nethdlr or khlp8o7w. But the entry for the Fast16 driver does not show any malware name. Somewhat ominously, and unlike any of the others, it says: nothing to see here, carry on.

[01:56:56] So someone somewhere knew to just leave this one alone and said so without identifying why or who or what it was. The researchers wrote the guidance for this one particular driver, Fast 16, stands out as both unique and particularly unusual.

[01:57:19] The string inside Service Management Exe provided the key forensic link in this investigation. The PDB path connects the 2017 leak of deconfliction signatures used by NSA operators with a multimodal Lua-powered carrier module compiled in 2005.

[01:57:46] And ultimately, its stealthy payload: a kernel driver designed for precision sabotage. And we're going to get to the precision sabotage in a second. So just to back up, recall that the Shadow Brokers leak, as I mentioned before, was believed to be a publication of secret documents stolen from the Equation Group, which was believed to be a group within the NSA.

[01:58:15] So the evidence is suggesting that all these other files were associated with known malware from other actors. But the Fast 16.sys driver was not that. If you see it, leave it alone. Nothing to see here. These are not the droids you're looking for. The flag was to back off.

[01:58:36] So they said, the core component of Fast16, Service Management Exe, functions as a highly adaptable carrier module, changing its operational mode based on command-line arguments. With no arguments, it runs as a Windows service.

[01:59:04] With -p, it sets the install flag to one and runs as a service, so that means propagate: install and run. With -i, it sets the install flag to one and executes the Lua code, meaning install and execute Lua.

[01:59:26] With -r, it executes the Lua code without setting the install flag, so just execute. Any other argument, such as a file name, it interprets as a file name and spawns two children: the original command and one with the -r argument. So that's the so-called wrapper or proxy mode.
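To make that mode logic concrete, here's a minimal sketch in Python of the dispatch the researchers describe. The function name and dictionary shape are invented for illustration; the real carrier is a native Windows binary, not Python:

```python
# Sketch of the carrier's command-line mode dispatch, paraphrasing the
# reported behavior. Names and structure are illustrative only.

def carrier_mode(args):
    """Map command-line arguments to the carrier's operational mode."""
    if not args:
        return {"mode": "service", "install": False}   # run as a Windows service
    if args[0] == "-p":
        return {"mode": "service", "install": True}    # propagate: install and run
    if args[0] == "-i":
        return {"mode": "lua", "install": True}        # install and execute Lua
    if args[0] == "-r":
        return {"mode": "lua", "install": False}       # just execute Lua
    # Any other argument (e.g. a file name): wrapper/proxy mode --
    # spawn the original command plus a copy of itself with -r.
    return {"mode": "wrapper", "install": False, "spawn": [args[0], "-r"]}
```

The point of the design is that one unchanging outer binary can be steered into four different roles purely by how it's invoked, which is part of why it stayed so quiet.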

[01:59:53] So they said, internally, Service Management Exe stores three distinct payloads: encrypted Lua bytecode that handles configuration, propagation, and coordination logic; an auxiliary connection-notify DLL; and that Fast16.sys kernel driver.

[02:00:15] By separating a relatively stable execution wrapper from encrypted task-specific payloads, the developers created a reusable compartmentalized framework that they could adapt to different target environments and operational objectives while leaving the outer carrier binary largely unchanged across campaigns. In other words, this was an extremely sophisticated design.

[02:00:45] They continue, the early 2000s saw a large number of network worms. Most were written by enthusiasts, spread quickly, and carried little or no meaningful payload. Fast 16 originates from the same period but follows a completely different pattern, indicative of its provenance as state-level tooling.

[02:01:13] It's the first recorded Lua-based network worm and was built with a highly specific mission. The carrier was designed to act like a cluster munition in software form, able to carry multiple wormable payloads, referred to internally as wormlets. The Service Management Exe module performs the following steps.

[02:01:39] First, prepares the configuration, defining the payload path, service details, and target IP ranges. Next, converts the configuration values to wide character strings for the C layer. Third, escalates privileges and installs the carrier executable as the Service Management Service, then starts it.

[02:02:05] Fourth, optionally, based on the configuration setting, deploys the kernel driver implant, Fast16.sys. Next, releases the wormlets. Release the wormlets.

[02:02:46] Until a failure threshold or an external kill condition is reached. They write: the single deployed wormlet found in Service Management Exe, that's the SCM wormlet, exemplifies a simple but effective propagation strategy based on native Windows capabilities and weak network security.

[02:03:11] It targets Windows 2000/XP environments and relies on default or weak admin passwords on file shares. All spreading is done through standard Windows service control and file-sharing APIs, an early example of propagation that leans on built-in administration features rather than custom network protocols.

[02:03:36] Before this workflow starts, a pre-installation kill switch checks the environment. The OK-to-install routine calls OK-to-propagate, and propagation is only allowed if it's manually forced or if common security products are not found, by checking for their associated registry keys.

[02:04:03] The routine walks a list of vendor keys and aborts installation if any of them are present, preventing deployment into monitored environments. For tooling of this age, that level of environmental awareness is notable.
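The kill-switch logic amounts to a simple registry walk. Here's a hedged sketch in Python, with invented example key paths; the actual driver carries its own specific vendor list in native code:

```python
# Sketch of the "OK to propagate" kill switch: deployment is allowed only
# when manually forced or when none of the monitored-environment registry
# keys are present. Key paths here are illustrative examples, NOT the
# actual list carried by the malware.
EXAMPLE_VENDOR_KEYS = [
    r"SOFTWARE\Symantec",
    r"SOFTWARE\KasperskyLab",
    r"SOFTWARE\Zone Labs",
]

def ok_to_propagate(key_exists, vendor_keys=EXAMPLE_VENDOR_KEYS, force=False):
    """key_exists is a callable that checks for a Windows registry key
    (on a real target, a thin wrapper around winreg.OpenKey)."""
    if force:
        return True  # operator override
    # Abort if any security product's key is found: the box is monitored.
    return not any(key_exists(key) for key in vendor_keys)
```

Passing the registry check in as a callable is just for illustration; the essential behavior is that a single hit on the vendor list vetoes deployment entirely.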

[02:04:22] While the list of products may not seem comprehensive, it likely reflects the products the operators expected to be present in their target networks whose detection technology would threaten the stealthiness of a covert operation. OK, so the list which they provide in the posting is a bit of a walk down memory lane with many of our old friends present.

[02:04:48] You know, there's Symantec, Sygate Technologies, Trend Micro, Zone Labs, F-Secure. There's Network ICE's BlackICE product, McAfee's Personal Firewall, Computer Associates' eTrust EZ Armor.

[02:05:02] I'm sorry, eTrust EZ Armor, something called Red Cannon's Fireball, Kerio Personal Firewall 4, Kaspersky Lab is there with Kaspersky Anti-Hacker, Tiny Software's Tiny Firewall, Soft Forever, Panda Software's firewall, and so forth. So a bunch of the standard products at the time.

[02:05:27] They said a separate user-mode component, servicemanagement.dll, provides a minimal reporting channel. Contained within the carrier's internal storage, this DLL is registered through the Windows Add Connect Notify API so that it's called each time the system establishes a new network connection using the Remote Access Service, which was responsible for dial-up connections and early VPNs

[02:05:57] in the 2000s. When invoked, the DLL decodes an obfuscated string to obtain the named pipe (and then they give the pipe name), attempts to connect to the local pipe, and writes the remote and local connection names to the pipe before closing it. The module does not run independently and must be registered by a host process.

[02:06:22] So they're just saying that this is a very stealthy means of allowing this agent to connect out and sort of ride the outbound connection when one is being made anyway by that particular workstation, which is already infected. And what that means is that there was some other communicating component that it was able to talk to.

[02:06:50] Okay, so the stage is set.

[02:06:53] We understand now that way back in the time of Windows 2000 and XP, it appears that very clever hackers, probably employed by the U.S. National Security Agency's Equation Group, carefully designed a professional-grade, highly stealthful, and very cautious Windows infiltration worm that prioritized

[02:07:22] not being caught. Which is always a good thing to prioritize if you're a worm. Yes. The infiltration worm was designed to be a multipurpose implant delivery system, which in this instance was intent upon installing something known as the Fast16.sys kernel rootkit into Windows systems. Okay.

[02:07:49] Now we're going to learn why and why I consider its devious functioning to be such a diabolically brilliant maneuver. But first, Leo, we're going to learn why our listeners may be interested in the brilliant sponsor that we have next. That's diabolical of you, Steve.

[02:08:12] Actually, the bad guys are diabolical, but our sponsor, GuardSquare, is here to protect your mobile apps. Mobile apps today are an inescapable part of life. We love them, right? Ranging from financial services to healthcare, retail, entertainment. Users trust mobile apps with their sensitive personal data. And for that reason, as a mobile app developer, you have a higher responsibility, don't you?

[02:08:40] A recent survey showed that 72% of organizations experienced a mobile application security incident last year. 92% of respondents reported rising threat levels over the last two years. Meanwhile, attackers, who of course want your users' personal data, are constantly finding new ways to attack your mobile app. The latest one, really diabolical, they reverse engineer the app and repackage it.

[02:09:07] They take all your code, it looks exactly like your app, has your icons and everything. But then they put a little malware in there and distribute the modified app. They might do it through a phishing campaign, through sideloading, through third-party app stores. Your users don't know this isn't the official version of the app. Which means the reputational damage comes to your head.

[02:09:30] By taking a proactive approach to mobile app security, you can stay one step ahead of these attacks and maintain the trust of your users. That's where GuardSquare comes in. GuardSquare delivers mobile app security without compromise, providing advanced protections for both Android and iOS apps, combined with automated mobile application security testing to find vulnerabilities, and real-time threat monitoring to gain insight into attacks. Know what's coming before it happens.

[02:10:00] Discover more about how GuardSquare provides industry-leading security for your mobile apps at GuardSquare.com. That's GuardSquare.com. And speaking as a user, I hope every one of you starts using GuardSquare. Protect my data. Protect your reputation. GuardSquare.com. Thank you so much for supporting Security Now and this diabolical worm emplacement mechanism.

[02:10:28] Leo, you're going to love this. I know you. How old do we think this is? This is 2005. Wow. This is 21 years ago when you and I began the podcast. Oh, my God. We didn't know anything about it, of course. No. No one knew anything about it until now because these guys went back and were looking for the earliest instance known where malware was using Lua, the Lua VM to interpret Lua script.

[02:10:58] And so they just stumbled on this and it's like, whoa, look what we found. And this thing went unknown. But you, I know you, you're going to love when you hear what this thing does. Okay. So the Sentinel Labs team reverse engineered the operation of this Fast16.sys rootkit driver to learn the goal of its infiltration mission. Here's what they discovered.

[02:11:24] They wrote, once activated, Fast16.sys focuses on executable files. They said a file is a valid target if it meets two criteria: the file name ends with .exe, and immediately after the last PE (Portable Executable) section header there's a printable ASCII string starting with "Intel".

[02:11:49] They wrote, this selection logic points to executables compiled with the Intel C/C++ compiler, which often placed compiler metadata in that region. It indicates that the developers knew their target software was built with this toolchain. For files meeting these criteria, the driver performs a PE header modification in memory.

[02:12:17] It injects two additional sections, .xdata and .pdata, and fills them with bytes from the original code section, increasing the section count and keeping a clean copy of the code. The intent is likely to increase stability while still allowing extensive patching, although without identifying the target binaries, this remains an informed hypothesis.
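The two-part target test is tiny when written out. Here's a sketch in Python; the helper name and the way the "region after the last section header" is handed in are my simplifications, since a real check would actually parse the PE headers:

```python
def is_valid_target(filename, after_last_section_header):
    """Apply the driver's two reported selection criteria to a file."""
    # Criterion 1: the file name ends with .exe.
    if not filename.lower().endswith(".exe"):
        return False
    # Criterion 2: immediately after the last PE section header there is
    # a printable ASCII string starting with "Intel" -- metadata the
    # Intel C/C++ compiler often placed in that region.
    return after_last_section_header.startswith(b"Intel")
```

Notice how narrow this is: it isn't hunting Windows binaries in general, only executables built with one specific compiler toolchain, which is the first strong hint that the operators knew exactly what software their target was running.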

[02:12:45] Okay, so just to be clear, what Sentinel Labs found was that this rootkit driver, which hooked into the operating system's lowest level file system functions, was able to modify executable files on the fly as they were being loaded into memory to run. So what was stored on the system's drive was never altered in any way.

[02:13:15] While what was actually loaded into memory when that program was executed was significantly altered on the fly as it was being read from the drive. They explain,

[02:13:57] It allows wildcards inside patterns, so a single rule can match several compiler-optimized variants of the same code. And it supports state flags that some rules can set or check, enabling multi-stage modification sequences similar to those used by advanced antivirus scanning engines.

[02:14:19] Most patched patterns correspond to standard x86 code used for hijacking or influencing execution flow. One injected block is different. Okay, listen to this.

[02:14:39] It's a larger and complex sequence of floating-point unit (FPU) instructions dedicated to precision arithmetic and scaling values in internal arrays. This code is a standalone mathematical calculation function unrelated to code-flow hijacking or any other typical malicious code injection.

[02:15:08] We're going to learn why in a minute. They said,

[02:15:52] The FPU patch in FAST16.sys was written to corrupt these routines in a controlled way, producing alternative incorrect results. This moves FAST16 out of the realm of generic espionage tooling and into the category of strategic sabotage.

[02:16:20] By introducing small but systemic errors into physical world calculations, the framework could undermine or slow scientific research programs, degrade engineered systems over time, or even contribute to catastrophic damage.

[02:16:42] A sabotage operation of this kind would be foiled by verifying calculations on a separate system. In an environment where multiple systems shared the same network and security posture, the wormable carrier would deploy the malicious driver module to those systems as well, reducing the chance that an independent calculation would diverge from the corrupted output.

[02:17:12] At this time, we've been unable to identify all of the target binaries in order to understand the nature of the intended sabotage. We welcome the contributions of the larger InfoSec research community and have included YARA rules to hunt for these patterns in this post's appendix. Even after deep analysis, the FAST16.sys driver looks deceptively simple.

[02:17:38] Beneath that minimal code is a rule-driven in-memory engine that quietly patches executable code as files are read from disk. The engine relies on a compact set of just over 100 pattern-matching rules and a small dispatch table so it only inspects bytes that are likely to matter.

[02:18:04] However, most patterns correspond to ordinary x86 instructions, but one stands alone. A larger block of floating-point FPU code dedicated to precision arithmetic. This injected routine scales values in three internal arrays passed into the function, subtly altering its calculations.

[02:18:28] Without knowing the exact binaries and workloads being patched, we cannot fully resolve what those arrays represent. Only that the goal is to tamper with numerical results.
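To see why this kind of tampering is so hard to notice, here's a toy illustration, which is pure invention on my part: the real patch operates on FPU instructions inside the target binary, and its actual error model is unknown. The idea is that scaling every value by a factor slightly off from 1.0 keeps each individual number looking plausible while systematically skewing the result:

```python
def scale(values, factor):
    """The 'honest' calculation: scale an array of measurements."""
    return [v * factor for v in values]

def sabotaged_scale(values, factor, error=1.0007):
    """The same calculation with a small systematic error folded in.
    The 0.07% error factor is invented purely for illustration."""
    return [v * factor * error for v in values]
```

Each corrupted output differs from the honest one by well under a tenth of a percent, invisible to any spot check, yet every downstream result inherits the bias, and every infected machine on the network agrees on the same wrong numbers.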

[02:18:48] Our best clues about the intended victims come from matching these patterns against large, era-appropriate software corpora. The strongest overlaps point to three high-precision engineering and simulation suites from the mid-2000s: LS-DYNA 970, PKPM, and the MOHID hydrodynamic modeling platform.

[02:19:16] All are used for scenarios like crash testing, structural analysis, and environmental modeling. However, LS-DYNA 970, in particular, has been cited in public reporting on Iran's suspected violations of Section T of the JCPOA, in studies of computer modeling relevant to nuclear weapons development.

[02:19:44] So just to make clear how utterly diabolical this is, you're a nation-state hostile to the United States. Researchers inside the US NSA have quietly traced your purchases of PCs and very high-end nuclear physics modeling software so they know what you're using to make your calculations.

[02:20:12] They obtain the same high-end modeling tools, which they reverse engineer. They then design a subtle set of tweaks, which, when applied to that package, as it's being loaded by the operating system into memory, will cause its calculations to be wrong.

[02:20:34] Not enough to call attention to itself, but enough to foul up any designs that depend upon the accuracy of those calculations. And as the Sentinel Labs guys explained, being a super-stealth worm not only allowed it to worm its way into the design lab, but having arrived there, also allowed it to similarly infect every other machine within its environment.

[02:21:03] No files were ever altered. Reinstalls would have no effect. And scans would reveal nothing. Yet every copy of that modeling software would agree upon the wrong results. As I said, breathtakingly diabolical. The Sentinel Labs team concludes by noting something else they observed about the provenance of this worm, writing,

[02:21:32] as we sought to understand the lineage of this unusual set of components, we noticed a quirk: strings of the form @(#)par.h $Revision: 1.2 $ inside the binaries point to an unusual source control convention.

[02:22:01] The @(#) prefix is characteristic of the early Unix Source Code Control System (SCCS) or Revision Control System (RCS) tooling from the 1970s and '80s. These markers do not affect execution and are redundant in modern Windows kernel drivers.

[02:22:26] Finding SCCS and RCS artifacts in mid-2000s Windows code is rare. It strongly suggests that the authors of this framework were not typical Windows only developers.
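Those markers are exactly what the classic Unix what(1) utility searches for. A quick sketch in Python of hunting such identification strings in a binary blob, written by me for illustration:

```python
import re

def find_what_strings(blob):
    """Return all SCCS/RCS identification strings embedded in a binary:
    printable ASCII runs beginning with the magic '@(#)' prefix, the
    same pattern the Unix what(1) utility looks for."""
    return [m.decode("ascii") for m in re.findall(rb"@\(#\)[\x20-\x7e]+", blob)]
```

Because these strings are inert at runtime, developers rarely strip them, which is what made them such a useful fingerprint of the authors' Unix-era toolchain here.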

[02:22:42] Instead, they appear to have been long-term engineers whose culture and toolchain came from older, high-security Unix environments, often associated with government or military-grade work. This detail supports the view that FAST16 came from a well-resourced, long-running development program. Wow. RCS strings. Wow. I mean, by then everybody was using Git. Right.

[02:23:12] Yeah. Service management was uploaded to VirusTotal nearly a decade ago. It still receives almost no detections. Wow. Yeah. One engine classifies it as generally malicious, and even that with limited confidence.

[02:23:29] For a stealthy, self-propagating carrier that deploys one of the most sophisticated sabotage drivers of its era, that nearly non-existent detection record is notable. Mm-hmm.

[02:23:46] Together with its appearance in the Shadow Brokers leaked signatures, FAST16.sys forces a reevaluation of our historical understanding of the timeline of development for serious covert cyber sabotage operations.

[02:24:06] The code shows that state-grade cyber sabotage against physical targets was fully developed and deployed by the mid-2000s. Embedded scripting engines, narrow compiler-based targeting, and kernel-level patching formed a coherent architecture well ahead of better-known families.

[02:24:29] And some of the most important offensive capabilities in the ecosystem may still sit in collections as old but interesting samples, lacking the context to highlight their true significance. Hmm. So, as we know, I've frequently despaired that we're only ever hearing news of Chinese and North Korean and Russian state-sponsored attacks.

[02:24:56] I've worried and wondered and hoped that the U.S. would be able to give as well as it gets. This certainly suggests that our bases are all likely well covered. And all of this was 21 years ago. Imagine what's probably going on today. Yeah, I mean, it's got all the earmarks of a nation state, because no script kiddie, no hacker is going to write anything with, they're not going to use source control.

[02:25:24] They're not going to write anything with a built-in language compiler. This is way more sophisticated and way, in some ways, over-engineered, right? This is exactly what you'd expect from government programmers. Well, they figured out what probably Iran was using to do their engineering. They bought a copy, they decompiled it, and they built a patch so that it wouldn't fail.

[02:25:54] It would just give slightly wrong results. And who would... It's so diabolical. Who would... I know, it is so diabolical. Just a little bit off. Well... Just... And Stuxnet... Why aren't we achieving nuclear fission? It's not working. It's supposed to, you know... It should work. All of our calculations say it should work. Amazing.

[02:26:19] I mean, then of course, Stuxnet follows right on the heels of that, which destroyed the centrifuges. Spun them up too fast and made them hurt themselves. Very sophisticated stuff. And this does... You feel... I feel like it sounds like the US government. Yeah. Yeah. I mean, Iran maybe, but... Well, we thought Stuxnet was Israel plus the US. I mean, Israel. Yeah. It was targeted to Iran. Yeah. And I mean, certainly Israel, but it feels more...

[02:26:47] The way this is built feels like the federal... Our government. The federal government. Yeah. I don't know. It's hard to describe, but it just feels that way. It's too, you know, engineered. Imagine these guys stumbling upon this and following this little breadcrumb trail

[02:27:06] and realizing what this little innocent looking fast 16.sys thing was just sitting around, you know, in Windows system directories of the era. And what was the thread, the first little thread that they pulled? It was, they were... Lua has a distinctive fingerprint. Mm-hmm.

[02:27:28] So you can tell when there's a compiled Lua binary that is interpreted by the Lua VM. So they just scanned all the code back then for that little fingerprint. And they found some hits that they had never seen before. And they thought, oh, look at that. Lua was in use in 2005. We didn't know that. Right. Not by Microsoft. Microsoft wouldn't have used it for a driver like that.
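That fingerprint is real and easy to check for: every precompiled Lua chunk begins with the four-byte signature ESC "Lua" (0x1B 0x4C 0x75 0x61). Here's a minimal sketch of the kind of scan being described, in my own Python, obviously not the researchers' actual tooling:

```python
LUA_SIGNATURE = b"\x1bLua"  # "\27Lua", the header of compiled Lua bytecode

def find_lua_chunks(blob):
    """Return the byte offsets of every embedded precompiled Lua chunk,
    located by the distinctive signature that opens each one."""
    offsets, start = [], 0
    while (hit := blob.find(LUA_SIGNATURE, start)) != -1:
        offsets.append(hit)
        start = hit + 1
    return offsets
```

Run over a corpus of era-appropriate binaries, a scan like this is how an embedded Lua interpreter and its payloads stand out, even inside an otherwise unremarkable Windows executable.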

[02:27:58] No. It would have been C++. And so it was like, okay, what? Yeah. And it is a favorite of malware. Right. So they assumed that this was some sort of malware. So then they said, okay, let's reverse engineer this and find out what it does. And little did they suspect something that was probably, you know, state level cyber espionage 21 years ago. Very sophisticated. Very interesting. I love, this is a great story.

[02:28:25] This is, it's too bad that it really couldn't be made into a movie. It's far too technical, but there's some real spy stuff in here. It's really cool. Yeah. That's great. And it's definitely somebody in a white shirt with a skinny black tie, a pocket protector, and black glasses who wrote this, right? Short sleeves. He's got his name on the desk. And then they, you know, they turned it loose somehow close enough to its destination

[02:28:52] where it just literally wormed its way into the lab. You know, the person who wrote this, or the people who wrote this, 21 years later they're probably retired or close to it. You should get in touch with us. I'd love to hear the story. Or write a book, 'cause now it's kind of, the statute of limitations has run, you know? Well, it does not run on any current OSes.

[02:29:20] It will not run on Windows 7, Server 2008, or anything since. Right. So it stopped at 2000 and XP, that lineage. I feel like it was written inside the Pentagon. It was written wherever NSA is, you know, in Virginia, probably. Virginia, McLean. Yeah. Uh, yeah. Very cool stuff. Maybe, uh, maybe goodwill.

[02:29:48] But to just look like it's doing complex calculations correctly and be just a digit off. Yes. This is like, you nailed it, the Enigma project of the Brits, or The Man Who Never Was, you know, the idea of just these subtle little deceptions. And they never admitted to it. Never said it. It never became public. No one ever knew that this thing was ever there before. There's an Iranian nuclear scientist somewhere going, ah, now I understand why it didn't work.

[02:30:18] Exactly. Steve, great show as always. I hope you all listen every week, 'cause this is the kind of thing you miss if you miss a single episode. We do Security Now on Tuesdays right after MacBreak Weekly, so it's about 1:30 Pacific, 4:30 Eastern, 20:30 UTC. I mention that 'cause you can watch us live. If you're in Club TWiT, and we hope you are, your support is so important for us as an independent podcast network and for shows like this. Without your support, we couldn't do it.

[02:30:48] So if you're not a member and you get some value out of this, please go to twit.tv/clubtwit and join. You get ad-free versions of all the shows, access to the Discord, lots of special programming, lots of good stuff. You can watch us live, though, even if you're not a club member, 'cause we stream it in public on YouTube, Twitch, x.com, Facebook, LinkedIn, and Kick. So please watch live if you want, but you don't have to. After the fact, you can get copies in two different places. We have it on our website, twit.tv/sn.

[02:31:17] There's audio and video there, but Steve has all of the unique versions of the show. First of all, he has a 16-kilobit audio version, which is a little scratchy, but it's small; that's the virtue of being small. Also a 64-kilobit audio that's full fidelity. He also has these fantastic show notes, and if you want to read this story, this is the place to get it: 21 pages of good stuff every week.

[02:31:44] You can either download it from his site while you're there downloading the podcast, or you can get on the mailing list and subscribe, and that way you'll get it automatically. I got mine on Sunday; Steve works hard all weekend. Just go to grc.com/mail and fill out your email address. That's so you can whitelist the email address, so that you can send him Pictures of the Week and stuff and comments. Then, when you give him the email address, there will also be two unchecked checkboxes: one for the show notes, and one for announcements.

[02:32:14] So when, for instance, the new version of the DNS Benchmark Pro comes out, you'll send out an email, right? Yep. I have a video walkthrough to create, and then I'll finally be able to let people know. Nice. Very good. So that's a good way to go: grc.com/mail. He also has transcripts written by Elaine Farris, really, really good transcripts, because a human wrote them. All of that at Steve's website. While you're there, pick up a copy of SpinRite,

his bread and butter, the world's best mass storage maintenance, recovery, and performance-enhancing utility. Also, of course, the DNS Benchmark Pro is there; that's $9.99 and worth every penny. We'll be back next week, next Tuesday. I'll be in Hawaii doing this, Steve. How fun. Interesting, it looks like you've already got the shirt on. I got the shirt. I'm ready. I'm getting ready for the trade winds in Kona.

[02:33:10] We're worried it might rain the whole time we're there, but I'm bringing a Starlink Mini to put out on the roof. We'll see. I mean, we'll just see. It may be a little scritchy, a little scratchy, but I will be here for that. And if not, if for some reason it falls apart, Micah will do the show. But I think I'm going to do all the shows, so I think it'll be fun. Well, we've all seen those, what, personal enhancement promotions, or the real estate guys who have the waves in the background.

[02:33:40] They're always in Hawaii. It'll be like a timeshare version of Security Now. With palm trees and everything. That'll be great. Oh. You'll be very relaxed. I have the fantasy that I'm going to be able to set up the Starlink on the balcony and that you will see all that. We have an ocean view, and you'll see... I have that fantasy; whether it will materialize is another matter. I might be in a spare bedroom. It might be kind of boring. We'll see. Much like Paul and Richard when they're traveling. Exactly.

[02:34:09] We always try to get somewhere nice, but we can't always get it. So yes, Hawaii has internet, Galia, but we're staying in a hotel, and you know how hotel wifi is. So I thought, I'm bringing my own internet. It's also a proof of concept, because if I can do this there... Thank you, Steve. Have a wonderful week, and we'll see you next time on Security Now. See you from Hawaii, my friend. Bye. Hey everybody, it's Leo Laporte.

[02:34:38] Are you trying to keep up with the world of Microsoft? It's moving fast, but we have two of the best experts in the world, Paul Thurrott and Richard Campbell. They join me every Wednesday to talk about the latest from Microsoft on Windows Weekly. And it's a lot more than just Windows. I hope you'll listen to the show every Wednesday. Easy enough: just subscribe in your favorite podcast client to Windows Weekly, or visit our website at twit.tv. Microsoft is moving fast, but there's a way to stay ahead.

[02:35:07] That's Windows Weekly, every Wednesday on TWiT.

