Can AI really write malware better than hackers ever could? This episode exposes the first real-world case of advanced, fully AI-generated malware and why it signals a seismic shift in cybersecurity risk.
- CISA's uncertain future remains quite worrisome.
- Worrisome is Ireland's new "lawful" interception law.
- The EU's Digital Rights organization pushes back.
- Microsoft acknowledges it turns over user encryption keys.
- Alex Neihaus on AI enterprise usage dangers.
- Gavin confesses he put a database on the Internet.
- Worries about a massive podcast rewinding backlog.
- What does the emergence of AI-generated malware portend?
Show Note - https://www.grc.com/sn/SN-1062-Notes.pdf
Hosts: Steve Gibson and Leo Laporte
Download or subscribe to Security Now at https://twit.tv/shows/security-now.
You can submit a question to Security Now at the GRC Feedback Page.
For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.
Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors:
hoxhunt.com/securitynow
[00:00:00] It's time for Security Now. Steve Gibson is here. He's worried. He's worried about the future of CISA in the United States. He's worried about Ireland's new lawful interception law that makes spyware legal. He's worried about AI-generated malware. Yes, it's here. All that and more coming up next on Security Now. Podcasts you love. From people you trust. This is TWiT.
[00:00:33] This is Security Now with Steve Gibson. Episode 1062. Recorded Tuesday, January 27th, 2026: Voidlink: AI-Generated Malware. It's time for Security Now!, the show where we cover the latest security news, the computer information that you need to know, with this guy right here, the king of the hill when it comes to security, Mr. Steve Gibson. Hi, Steve!
[00:01:01] Leo, great to be with you again for another Tuesday. The last one of January. So say goodbye to January, everybody. I don't know where it went. I'd be happy to let it go. I'm not attached. I was going to say many, many people. It was a bad January, from many perspectives, and certainly cold also. I guess the weather has just gone crazy. Crazy. Oh man, we're freezing too. Yeah, really freezing cold. Well, we're going to... It's only 52 degrees here in California. It's just terrible. I don't know what to do.
[00:01:30] All right. What are we covering on the show today? Enough frivolity. Check Point Research, I think they call themselves. Check Point Research. Check Point is how we know them.
[00:01:42] They took a close look at what they recently discovered as the first very impressive, and very concerning, purely AI-generated malware.
[00:02:05] Because the developer made the mistake, so not a rocket scientist here or a security guy, kind of an anti-security guy actually, of leaving a directory exposed, one of his server's directories was exposed, they were able to get literally an inside look at the production of an AI-generated malware.
[00:02:32] And there are some important takeaways from that, which we're going to get to. But first we're going to look at, unfortunately, CISA's uncertain future, which I kind of thought would have been resolved, but no. Our old friend Rand Paul has stuck his finger in this and is going to maybe cause some trouble.
[00:02:56] We'll see. We've got a worrisome new law which has been passed in Ireland, which we need to take a look at, because for a while, Leo, I actually had a working title for the podcast: The State Versus Encryption.
[00:03:18] And we're going to end up with a couple of stories that are that, which is why the podcast carried that working title until I saw that we really need to talk about malware and AI. There is a group in the EU, the European Digital Rights organization, pushing back on some of what seems to be happening. So we're going to take a look at that. I never had a chance to hear what you guys and Alex Stamos on Sunday
[00:03:48] were talking about, relative to Microsoft's acknowledgement that it has turned over some encryption keys to the FBI. We're going to discuss that. And again, I didn't hear what happened on Sunday, but I have a maybe surprising takeaway for our listeners relative to Microsoft. I try to be very fair. I know I'm very hard on them. But unlike most of the time, in this case I take a different position.
[00:04:16] Yeah, I think you and Alex may actually be on the same page. I'll let you know what he thought about it when we get to it. Yeah. Also, our old friend Alex Neihaus had some really useful and insightful feedback about AI enterprise usage and how it's fraught with some dangers.
[00:04:38] I want to share that with our listeners. Another listener, Gavin, confesses that he deliberately put a database on the Internet, and explains why. Oh, and there are some worries, Leo, about the need to rewind podcasts, and how there may be a massive backlog growing, which we need to deal with.
[00:05:03] And then we're going to take a look at the emergence of AI-generated malware. Remember our DVD rewinder from last week? Oh, that's right. No, people have been leaving this show at the end and not rewinding it. I think this is a massive problem. It's really, you're not thinking about the next person, right? You're just saying, thanks very much, start at the end, I took the last napkin, and F you, I don't care. So I always do that.
[00:05:30] Lisa yells at me because I'll leave this much milk in the carton, you know, put it back in the fridge. And it's very disappointing to her. And Leo, let's not even get into being married and toilet rolls. Oh, yes. That's good. Oh, yes. Does it come off the front or off the back? But anyway, that's a whole 'nother topic. We do have a Picture of the Week, which we may get to someday. Do we have four ads, or five? I didn't...
[00:05:55] We have, I believe, three, which is, for this show, a dearth, a paucity of ads. But I will pause, and sometimes, people might've noticed this, an ad will sneak in after the fact. You are a cunning linguist. That's all I have to say. So as far as I know, I will be reading three ads. Others may be inserted against your will. Okay. So we're going to do our five pauses. Yes. As always. The pause that refreshes my whistle.
[00:06:25] Always want to keep that whistle whetted. I will pause now to tell you about a sponsor that probably everybody should know about, because this is exactly right up your alley, Steve. In fact, it's part of your presentation when we go to Florida for Zero Trust World. Yeah. The problem being, if you are a company, your vulnerability sits right there in the seat, right there at the desk. Yeah.
[00:06:55] Our show today is brought to you by Hoxhunt. As a security leader, you know your job: you get paid to protect your company against cyber attacks, right? But that is getting harder and harder. Not only are there more cyber attacks than ever, but these phishing texts and emails and messages that are generated with AI are really good. Very deceptive. As I mentioned, I got phished a couple of weeks ago. Here's the problem.
[00:07:23] You might be using one of those legacy, one-size-fits-all awareness programs. They really don't stand a chance in today's modern world. They send at most four generic trainings a year. Most employees ignore them. And in some ways worse, when somebody actually clicks the fake phishing email, they're forced into embarrassing training programs that feel like punishment. And we know that's no way to learn.
[00:07:50] If you are hating the training, you're not going to learn anything. That's why more and more organizations are trying Hoxhunt. This sounds strange, but Hoxhunt makes it fun. Hoxhunt goes beyond security awareness. It actually changes behaviors. And they do it in the simplest of ways, by rewarding good clicks and coaching away the bad. Whenever an employee suspects an email might be a scam, Hoxhunt will tell them instantly,
[00:08:17] you know, providing a dopamine rush. It gets your people to click, learn, and protect your company. You know, like gold star stickers, confetti. As an admin, Hoxhunt makes it easy to automatically deliver phishing simulations as often as you want, in every form you want: email, Slack, Teams, whatever. And you can use AI to mimic the latest real-world attacks to make your simulations as good as the real phishing emails.
[00:08:45] Simulations can even be personalized to an employee based on department, location, and more. And these instant micro-trainings are so much better than the kind of two-hour Flash thing that you've had to sit through. These micro-trainings are quick. They solidify understanding. They drive lasting safe behaviors. You can trigger gamified security awareness training that awards employees with stars and badges, boosting completion rates and ensuring compliance, because it's fun.
[00:09:14] You'll choose from a huge library of customizable training packages, or you can even generate your own with AI. Hoxhunt has everything you need to run effective security training on one platform, meaning it's easy to measurably reduce your human cyber risk at scale. But you don't have to take my word for it.
[00:09:32] Over 3,000 user reviews on G2 make Hoxhunt the top-rated security training platform for the enterprise, including easiest to use and best results. It's also recognized as Customers' Choice by Gartner. Thousands of companies love it. Qualcomm, AES, and Nokia all use Hoxhunt to train millions of employees all over the globe. Visit hoxhunt.com/securitynow today to learn why modern secure companies are making the switch to Hoxhunt.
[00:10:01] That's hoxhunt.com/securitynow. Hoxhunt, like fox hunt, but with an H. So it's H-O-X-H-U-N-T dot com slash securitynow. Instead of hunting the fox, you're hunting the wily hacker. Hoxhunt.com/securitynow. We thank them so much for their support of Steve and the efforts he makes every Tuesday for us. So I've got a Picture of the Week. And it's another popular one.
[00:10:30] It generated a lot of feedback, as these have been recently. I gave this the title, this is what was once called "Yankee ingenuity." Okay. Let me put it up on the big screen and we can look at it together here. I'm going to scroll up. "Yankee ingenuity." Okay. You want to describe that? So we don't really know what the backstory is. What's going on here?
[00:10:59] But when we were growing up, we probably encountered garden gates where you had a slider that slid into a mating capture retainer. I'm sure there's a name. If you go to the hardware store, it's some sort of slide bolt or something. Yeah. Yeah.
[00:11:22] And so you would lift the arm up, slide it over, then put it back down, and gravity would keep it rotated so that it would stay either locked open or locked closed. Anyway, you know, I mean, it's meant for some barn, right? A barn door kind of thing. Well, apparently somebody was having some sort of problem with the gas cap cover of their car. It's been popping open.
[00:11:48] And what really impresses me, Leo, is they did not leave it looking like, you know, silver chrome. Oh, no. It's been body-painted to match the car perfectly. So unless you were, I mean, if you were walking by it, you might miss the fact that they'd added a barn door closing lock to the outside of their gas cap cover.
[00:12:13] So, one of our insightful listeners who received it... I got the email out early this week, actually. It went out Sunday evening. Although Microsoft took it upon themselves to decide that GRC was not trustworthy. So 1,547 of the pieces, of the nearly 20,000, it's 19,800 and something,
[00:12:41] 1,547 were all blocked if they went to outlook.com or hotmail.com. When I saw that that had happened, I was able to collect them, and they went out with no trouble at all last night. So it's like, okay, Microsoft, I guess maybe Sunday freaked you out. I don't know, 'cause normally I do it on Monday or Tuesday. Anyway, no, I mean, that's just some AI, you know, that had a spasm. And so it's never going to stop, Steve.
[00:13:09] There'll always be a little bit of this here or there, I'm sure. I'm convinced it's just, you know... Yeah. Well, the spammers are trying to look legitimate, and so there's a value judgment that has to be made. So anyway, in any event, this listener said, well, you know what that is? That's two-factor authentication. On the inside, you have to pull the trigger to release the cap.
[00:13:35] And then you have to come around outside and use the second factor to slide the bolt over in order to open the cap. I suspect, as most of our listeners do, that the automatic cap closer holder broke. And so, you know, the cover was flapping in the breeze, and they said, hey, you know, we used to have a barn, but we don't anymore; we do have the lock that used to keep it closed, though. They're probably patting themselves on the back,
[00:14:03] 'cause initially they used duct tape to hold it closed, and then they decided to really upgrade it with the slide, which would explain the need to repaint it in the same color as the car. Yeah, that would make sense. Okay. So, I've said for years that I have been pleasantly surprised by the success and effectiveness of CISA. You know, it's been an amazing success.
[00:14:32] The Cybersecurity and Infrastructure Security Agency. Awkwardly named, but boy, are they doing a great job. Since its creation 11 years ago, in 2015, CISA has been a huge win for our nation's cybersecurity. You know, my default belief is that government has a difficult time getting out of its own way, way more often than not.
[00:14:57] So CISA was a welcome and actually much needed exception to that. As this podcast has covered since its inception, they've been able to mandate that government agencies pay needed attention to many specific critical security problems that would otherwise have fallen through the cracks. You know, the government agencies feel they have better things to do. Oh, well, we really don't want to do an update.
[00:15:25] We'd have to have our network down for, you know, blah, blah, blah, whatever. And besides, nothing's happened yet. Right.
[00:15:33] You know, and CISA has also been empowered to set deadlines which had to be honored. Their creation of KEV, the Known Exploited Vulnerabilities catalog, was a brilliant means of focusing those always limited and readily distracted bureaucratic resources where they were needed. And yeah, it's true,
[00:16:01] 125 security things happen on the second Tuesday of the month, but it turns out that two of those are really critical. And CISA has been instrumental in saying, get to the other things when you can, but do these now, because these really matter. So that's all been good news. The bad news is that CISA was not created to be a permanent entity.
[00:16:25] Sadly, the Constitution of the United States is completely silent regarding the need for a permanent cybersecurity watchdog agency within the federal government. I guess our forefathers were unable to foresee that. This means that politicians created CISA, and politicians are required to keep CISA funded and authorized.
[00:16:49] Last week, The Record updated us on the state of CISA's continuation, writing: congressional leaders on Tuesday, meaning last Tuesday, released a compromise government funding bill that would once again temporarily, temporarily, extend the life of two key cybersecurity laws.
[00:17:15] The bipartisan legislation would reauthorize the 2015 Cybersecurity Information Sharing Act, CISA, and the State and Local Cybersecurity Grant Program, pushing them through the end of September. The extension, in the 1.2 trillion, that's the entire funding bill,
[00:17:41] 1.2 trillion dollars, is the latest short-term solution in a months-long saga for CISA 2015, which provides, we talked about this just last week, crucial liability protections to encourage private companies to share digital threat information with the federal government. And as we've said, it's like they're not going to do it unless they have liability protection.
[00:18:09] The C-suite executives made that very clear. The Record says both statutes received widespread support from the cybersecurity community and the Trump administration prior to their expiration last year. They received temporary reprieves in the continuing resolution that reopened the government in November. The House did approve a bill to extend the grants effort, but there's been no action on the Senate side.
[00:18:37] Meanwhile, several proposals have been introduced to reauthorize the 2015 CISA long term. Please let that happen. Let's not make this another football that we keep kicking. This we need. And not having it has already stalled things. There's now a gap that needs to be filled, because industry went silent as soon as they lost their liability protections.
[00:19:08] Anyway, The Record wrote: The House Homeland Security Committee last year passed legislation to renew it for a decade. Thank you. With minor updates, but it hasn't been scheduled for a floor vote. A bipartisan Senate duo introduced a bill that would extend the law for 10 years. Yes. And provide retroactive protections for companies that shared cyber threat data even after the law lapsed.
[00:19:37] But, as I mentioned, The Record writes, Senator Rand Paul, chair of the Senate Homeland Security Committee, has drafted a bill that would trash the legal protections outlined in the original statute. Well, thanks. They wrote: House leaders plan to hold a vote later in the week on the spending deal,
[00:20:02] which boosts defense funding to over $839 billion, with a B. Lawmakers have 10 days to clear the package for President Donald Trump's signature before federal funding is set to lapse for the programs it covers. With the Senate in recess this week, meaning last week, the upper chamber will need to approve the legislation when they return next week, meaning this week, if Congress is going to head off another funding lapse and a partial government shutdown.
[00:20:31] And as a total aside, there's a lot of conversation now that it looks like we may be going into another shutdown over DHS and the reauthorization and increases in other aspects of the budget. So, we'll see. And I don't know what's up with Senator Rand Paul. He's always something of a wild card and pretty much a pain in everyone's butt.
[00:20:59] But after first being elected to the Senate in 2010, he's been reelected twice since, every six years. So he seems to be what his state of Kentucky wants in a senator. Of course, he shares that position with Mitch McConnell. But in this case, it will be very bad if he gets his way. As we noted last week, the executives of the nation's private infrastructure companies consider
[00:21:24] their vulnerability and breach disclosure protections to be critical and a crucial feature of this legislation. So much so that in the bills that are being talked about, protections are being made retroactive because they've said, we need to have that. You asked us to keep talking to you after CISA lapsed. And we did, some of us. So, you need to protect us from that.
[00:21:54] So, anyway, no one can make these executives disclose information which is privately held if they choose not to. So, if the government wants to know what's going on as it should, then protecting those who are voluntarily disclosing is the entire point of this aspect of the reauthorization. We should know in another week or two
[00:22:22] whether the politicians have now screwed up what had been a surprisingly well-designed and well-working system. As I said, when we talked about this 10 years ago on the podcast, when it happened, it's like, oh, great, another Homeland Security agency. Well, then we got surprised because it was so effective and so useful. Of course, at the time, it was really well managed.
[00:22:51] It was Chris Krebs who was... was he the original? I don't remember. No, he was the one who was fired because he said the election was secure. Right. They looked very closely at the 2020 election. Yeah. So I would imagine he was, because if that was 2020 and it was created in 2015, then it would have only been five years old at that point. Yeah. We talked about this. Alex Stamos was a partner of Chris Krebs, of course, for a long time. Yeah. Right. Okay.
[00:23:20] So, one of the reasons that the working title of the podcast, until I got to the news about AI and malware, was The State Versus Encryption is Ireland's new "lawful", I just love this word, there's this phrase, "lawful interception" law. Right. They can make whatever laws they want. So, the first half of this next piece
[00:23:50] is the news that Ireland has just passed a new lawful interception law granting the government significant new powers. The short blurb that carried that news, and this is all I first saw, said: The Irish government has passed a new lawful interception law. The new legislation grants law enforcement and intelligence agencies
[00:24:18] the power to surveil any type of modern communications channel. It also grants the agencies the right to use covert software for their operations, such as spyware. The new law will also... Uh-huh. You don't have to be ashamed or bashful or shy or pretend you're not doing it anymore. Now it's a law. Now it's legal. The new law, this little blurb said,
[00:24:48] will also require communication service providers to work closely with and aid any government operation. Okay, so add this to the other recent news of pending and enacted legislation. We're clearly witnessing... Remember, we talked last week about Germany? Basically, we can't pronounce the name of the agency because it's got 25 letters in its name, mostly consonants, but they're doing the same thing.
[00:25:18] So we're witnessing an accelerating trend of governments legislating themselves sweeping rights to intercept, monitor, and eavesdrop upon pretty much anything they wish. Okay, so this week we have similar legislation to what we talked about in Germany last week. It was pending in Germany; Ireland passed this.
[00:25:46] So I scanned last Tuesday's press release from the Irish government, from which I'm going to excerpt just two notable pieces. The first point talks about the clear need for an update to their very old law. I don't think anybody would argue with that, because the original law being updated by this legislation is from 1993. And, you know, the need to update,
[00:26:16] that's non-controversial. But point number two says that the new law includes, quote, "a clear statement of the general legal principle that lawful interception powers needed to address serious crime and security threats are applicable to all forms of communications." So to call this sweeping... you know, I mean, that's the right word for it, right?
[00:26:46] They specifically write, the minister proposes, and the language here is "proposes," but this law passed, just to be clear. The minister proposes an updated legal framework which is flexible and includes comprehensive principles, policies, and definitions to allow for lawful interception powers to be applied to any digital devices or services which can send or receive a communications message,
[00:27:15] for example, the Internet of Things and email and digital messaging devices and services. The legislation will provide for a clear statement, which is to say the legislation does provide, a clear statement of general principle that lawful interception powers apply to all forms of communications, whether encrypted or not, and can be used to obtain either content data,
[00:27:45] they say the substance of a communication, or related metadata, data that provide information about a communication but not its content, such as phone call or email time, date, sender, receiver of a communication, the geolocation of an electronic service or source and destination IP addresses. But they're specifically also saying and the content. And it says the legislation will also apply to parcel delivery services.
[00:28:15] The minister's view, they write, is that effective lawful interception powers can be accompanied by the necessary privacy, encryption, and digital security safeguards. Right. Because they're experts in this, Leo. These legislators, they know their crypto. It says: in June 2025, the EU Commission published a, quote, roadmap for lawful and effective
[00:28:45] access to data for law enforcement, unquote, which stated that terrorism, organized crime, online fraud, drug trafficking, child sexual abuse, we knew we were going to get to the kids, online sexual extortion, ransomware, and many other crimes all leave digital traces. Around 85% of criminal investigations now rely on electronic evidence. Requests for data
[00:29:14] addressed to service providers tripled between 2017 and 2022, and the need for these data is constantly increasing. I love the fact that they treat the word data as plural. I always fail to do that. I know. It's just, it's so refreshing to see it, but it always surprises me because I don't remember to do it. They said, the commission paper includes proposals to deliver a technology roadmap on encryption issues
[00:29:44] with expert input and emphasizes the need to reconcile technology and lawful access concerns, oh gee, you think, through industry standardization activities. This EU initiative complements the minister's proposed approach to reforming the law on interception in Ireland and will inform the development of the general scheme, and that's capital G,
[00:30:13] capital S. So the General Scheme is something that they're going to inform and develop. So, okay, I have, of course, much to say about this, but I want to first share the press release's fourth point regarding, quote, the inclusion of a new legal basis for the use of covert surveillance software as an alternative means of lawful interception to gain access to electronic devices and networks
[00:30:43] for the investigation of serious crime and threats to the security of the state. They write, the minister also proposes to provide a legal basis for the use of covert surveillance software as an alternative means for lawful interception to gain access to electronic devices and networks for the investigation of serious crime and threats to the security of the state. This is used
[00:31:12] legally in other jurisdictions for a variety of purposes when necessary, such as gaining access to some or all of the data on an electronic device or network, covert recording of communications made using a device, or disrupting the functioning of a personal or shared IT network being used for unlawful purposes. The minister proposes to take into account a 2024 report from
[00:31:42] the European Commission for Democracy through Law of the Council of Europe, the Venice Commission, on this subject, which was titled, quote, Report on a Rule of Law and Human Rights Compliant Regulation of Spyware. So in other words, conduct that has historically been denied by governments, which were doing it anyway, and which no state agency would admit to using or doing,
[00:32:12] is now being ratified into law and made explicitly legal. I believe that any objective observer who's witnessed the earlier saber-rattling and more recently both the pending and enacted legislation that governments seem determined to pursue would have to conclude that we are currently in an environment of slowly
[00:32:42] eroding privacy protections. Encryption happened, right? I mean, the math happened. And along with everyone else who appreciated knowing that their communications were private, which was just kind of like, okay, thanks, that's nice to have, bad guys started using it, and they soon discovered that it protected them from law enforcement. While encryption was
[00:33:11] not created by any means to protect criminals, the privacy it affords doesn't know or care whether you're doing good or breaking the law. Your communications are encrypted. So when bad guys began hiding behind the same encryption that everyone else was using, because it was there, law enforcement quite reasonably asked providers for the contents of the bad guys' encrypted
[00:33:41] messages. And they were told that the system had been deliberately designed to provide absolute communications content privacy for all of its users regardless of their use and that we, the providers of this technology were unable to comply with lawful court orders to turn over their users' data. They said they did not have
[00:34:11] that data and they had no means of obtaining it. Now, that stumped the world's governments for a few cycles until someone had the bright idea to simply require the world to work differently. They said, we all agree that citizens have fundamental rights to privacy except in cases where that privacy is being abused and is not in the public interest. So we've decided that we will determine when and where people should have privacy
[00:34:40] and since we're a nation of laws, we're going to make it legal to do whatever we need to in order to obtain the privacy violating access to our citizens' communications that we've determined we need to have always, of course, in support of the greater good and besides, think of the children. And, you know, that same objective observer that I talked about before would see
[00:35:10] that we're currently in a period of transition. The truth of encryption caught the world's governments off guard. They've all seen the same movies that we have. Those movies, think about it, uniformly depict both hackers and intelligence services cracking the encryption, whenever they were asked to, whenever it
[00:35:40] was really necessary to do so. So, everyone knows that really good encryption just takes somewhat longer to crack, right? That's what the movies all showed us. The politicians just assumed that was true. Why wouldn't they? They believed that was the way encryption really worked. Right up until they encountered the truth of today's encryption, they didn't really understand that modern encryption is absolutely unbreakable. That's what
[00:36:10] the industry created, period. So, what we've seen is that it took them several rounds of stumbling and failed legislation, trying to figure out how to ask for what they wanted, to finally figure out that what's actually needed is for them to outlaw any encryption that no one can break. They want the encryption we have in the movies, and they're
[00:36:40] going to keep writing and rewriting legislation until they get it. So, the formal legislation of the use of spyware is just the next step along that path. Now they're saying, we're going to make our use of spyware legal. It will be lawful if we decide that we need to deploy it in order to obtain access to encrypted communications. Another step down that path. So, we're not yet where
[00:37:09] we're going to end up. But, again, our objective observer of the last several years would have to conclude that the world's governments, their law enforcement and intelligence agencies will not be satisfied until it's possible to obtain access to the communications probably of anyone they desire. It strikes me this is a pretty savvy move on the part of the Irish government because I think what they're recognizing is, well, we can't demand
[00:37:39] clear text from Signal, WhatsApp, and all these companies. So, the next step is, and we've talked about this before, to go pre-encryption, to go where the messages are in plain text, that is on your device, pre-encryption, and to do that they need the spyware. So, I think this is the next stage. This is saying, all right, I get it, Signal is going to withdraw from Ireland if
[00:38:09] we make it illegal to have strong encryption. So, oh, I got it, we'll just get on everybody's phone. The next step after this is what Russia and China do, which is mandate that you put a special app on the phone so that we can see everything that's going on. Now, the interesting thing is they may have to go to that point, because, and here's where they're technically less literate, they may
[00:38:38] overestimate the ability of spyware to do this, right? But these kinds of exploits are not easy to get. No. They're generally one-time use because Apple will patch it the minute they figure it out. Exactly. As soon as anyone finds it, they're able to say, whoops, you know. So this may be where, before, they thought, oh, we can break encryption, and now maybe they're starting to realize we can't. Oh, but spyware. But maybe they don't really understand that it's not as
[00:39:08] trivial as you might think. I mean, I guess the NSA had its tools. That's what we learned from Edward Snowden. And if you're on everyone's phone, it's the ultimate, right? Yep. Everybody has to run this app. And then they can see everything. And it's all pre-encryption. So Signal, you can still use Signal. Well. Yes, it's bad.
[00:39:38] You know, pre-internet, encryption was a good thing once. It's easy now; it's well known how to do encryption. So it's going to be pretty hard to stop. The counter-argument is, when it's illegal, only the criminals will use it. Right. Or people who are motivated
[00:40:08] or smart enough to figure out how to do it anyway. Well, again, I mean, they're going to outlaw their inability to spy. Right. In which case you will be a criminal, even if you're just encrypting to talk to your mom. If they want to see what you said to mom and you're unwilling to give them the keys, then you're guilty of that. You know, I said this in an interview 25, maybe no, 30 years ago, that ultimately hackers might be the freedom fighters
[00:40:38] of the 21st century, but the people who understand how to get around these things may actually be the people who are fighting for our freedom. Yeah, Neo. Neo. Yeah, right, Neo. This is before the Matrix. All right, do you want to refresh? Do you want to hydrate? Okay, he's nodding, folks. You can't hear it, but his head is vigorously bobbing up and down and he has a gigantic
[00:41:07] mug of Joe in his hand. Our show today, actually a brand new sponsor I want to tell you about. I'm very excited to welcome Trusted Tech to Security Now. This is something you may need if you're using Microsoft 365. There's a pretty good chance you're paying for licenses you don't need, or it can go the other way: you might be missing ones you do. And let's not forget that coming in July, Microsoft is going to implement a significant price increase
[00:41:36] for M365. And with it, there'll be a lot of nuance, a lot of subtleties. Trusted Tech helps businesses of all sizes get the most out of their Microsoft investment by ensuring that their M365 environment is well supported and aligned with how the business actually operates. So you're not spending too little, you're not spending too much; it's just right. And that's something it's nice to have some help with. Microsoft licensing is very complicated; the options can vary
[00:42:06] widely, but Trusted Tech, they're the experts. Their team helps organizations understand what they have, what they need, and how to get the most out of what they're paying for. If you want to make sure you're getting M365 done right, Trusted Tech can help you. Right now, you can go there and get a free Microsoft 365 licensing consultation. It's very straightforward, won't take much time. They're really good at this. These are the pros. Visit
[00:42:36] trustedtech.team slash, and don't forget this part, securitynow365. That's trustedtech.team slash securitynow365, where you can get a clear data-backed view of your current licenses, optimization opportunities, and next steps. You know, if you're wondering, Kevin Turner, former Microsoft COO, has very good things to say about Trusted Tech. He vouches for them. He says, quote, Trusted Tech has an incredible customer
[00:43:06] reputation, and you have to earn that every single day. The relentless focus you guys have on taking care of customers gives them the value and differentiates you in the marketplace. That's high praise from a guy who knows. Trusted Tech also can elevate the Microsoft support experience with its certified support services. You can ask him about that. And by the way, some of the biggest and best use Trusted Tech for their certified support services. Enterprises like NASA, Netflix, Neuralink,
[00:43:36] Apple, Intel, Google, and Lockheed Martin. Okay? You couldn't get a better list of clientele. And they're saving 32 to 52% compared to the average Microsoft Unified Support Agreement. Great support for less. Whether you're looking to fine-tune your Microsoft 365 licensing or improve the way your organization receives proactive Microsoft support or both, Trusted Tech offers free consultations to help you understand
[00:44:05] your options. This is a name you need to know. Go to trustedtech.team slash securitynow365. You can submit a form to get in contact with Trusted Tech's Microsoft licensing engineers, the guys who know. Trustedtech.team slash securitynow365. Use that address. Make sure you do because that way they'll know you saw it here. Trustedtech.team slash securitynow365.
[00:44:34] We thank them for believing in us and supporting the mission of Steve Gibson and Security Now. Thank you, Trusted Tech. Back to you, Steve. So, the day after Ireland proudly enumerated the various features of its newly passed expansive legislation, EDRI, the European Digital Rights Organization, perhaps in response to Ireland's announcement,
[00:45:04] posted under the headline, EDRI launches new resource to document abuses and support a full ban on spyware in Europe. And, you know, okay, good luck. It seems that the European Union has its own equivalent of the EFF, and it's EDRI. Their posted piece begins by stating the context. Europe's spyware crisis remains
[00:45:33] unresolved. And they write, spyware remains one of the most serious threats to fundamental rights, democracy, and civic space in Europe. And, of course, Leo, as you just pointed out, spyware is where Europe wants to go because they want the access that they can't get. EDRI said, over the past years, repeated investigations have shown that at least 14 EU member states have deployed spyware
[00:46:03] against journalists, human rights defenders, lawyers, activists, political opponents, and others. Notice, we're not saying criminals, we're saying people we don't like; for one reason or another, we're going to spy on them. So, who imagines that that won't accelerate hugely if spyware is made legal? Anyway, that's me. EDRI said, these cases have revealed the reality of an opaque, dangerous
[00:46:33] market that thrives on exploiting vulnerabilities and endangering us, and the state's reluctance to provide any accountability or justice for victims. Right, so they're going to legalize what they've been doing. Despite the findings of the European Parliament's PEGA inquiry committee in 2023 and the push from human rights organizations, the European Commission has so far refused to follow up with binding
[00:47:03] legislation to prohibit spyware. Not only that, it has done nothing. Right now, no EU-wide red lines exist against the use of spyware. Well, right, 14 states have done it and they want to be able to keep doing it. So, they wrote, this means that victims lack effective remedies, authorities face no scrutiny, and commercial spyware vendors continue to operate with near
[00:47:33] total impunity, enriching themselves by violating human rights and even benefiting from European public funding. Because, after all, this is taxpayer dollars, you know, and this spyware's not cheap. At the same time, they said, this political inaction is increasingly being challenged. Investigative journalists, researchers, and civil society organizations have continued to expose spyware's human impacts and the opaque markets behind its development and deployment.
[00:48:03] A broad coalition of civil society and journalism organizations has openly called on EU institutions to end their inaction and to adopt a full ban on commercial spyware. Adding to this push, EDRI has also adopted a comprehensive position paper calling for a full ban on spyware in the European Union as the only possible path forward from a human rights perspective. So, you know, basically,
[00:48:33] the battle is escalating and it's being made more visible and more public. We've got EU states wanting to legalize their use of spyware and the human rights privacy protecting organizations saying, let's make it very clear that this is not legal. They said, our collective refusal to accept the normalization of the use of spyware is also visible inside
[00:49:02] the European Parliament. On the 21st of January this year in Strasbourg, an informal interest group against spyware was launched, bringing together MEPs from across political groups with the aim of maintaining scrutiny and challenging the Commission's inaction. While this does not replace legislative action, it signals that political pressure is
[00:49:31] growing instead of fading. Right, like I said, it's becoming more and more public. So we'll see what happens. This spyware document pool that this posting introduces is a really terrific piece of work. I'm only going to share a tiny piece of it, but I've dropped a link to the entire pool into the show notes. The end of the URL is spyware-document-pool
[00:50:01] and it's at the top of page 6 in the show notes. The piece I wanted to share from it addressed the nature and the size of the commercial spyware market. They wrote, the commercial spyware market has grown rapidly over the past decade. This market is now worth billions of euros driven by the sale of these tools to governments, law enforcement agencies, and sometimes
[00:50:31] private actors. Its growth is fueled by an ecosystem that combines technological sophistication with near-total opacity, allowing companies to operate across borders and evade accountability. This makes spyware a highly profitable yet extremely dangerous sector where abuses remain hidden until uncovered by researchers or investigative journalists. The global
[00:51:00] spyware industry is estimated to be worth on the order of 12 billion euros per year. It's illicit, though, right? I mean, that's... Absolutely, it is not legal anywhere. Right. That's amazing: 12 billion euros companies are paying. You know, like Maduro in Venezuela is like, you know, he would be a typical customer because he's got lots of money and he would like to spy
[00:51:29] on anybody who opposes him publicly. Well, and now they're going to get the Irish government as a customer, so that's good. Right. Jeez. More than 80, eight zero governments have contracted for commercial spyware, according to the UK's cybersecurity agency. Well, that's like half. That's like everybody. Yeah. Jeez. In 2023, there were at least 49 distinct vendors,
[00:51:59] along with dozens of subsidiaries, partners, suppliers, holding companies, and hundreds of investors across the supply chain. 56 of the 74 governments identified by the Carnegie Endowment procured commercial spyware from firms either based or connected to Israel. The Israeli firm Paragon was acquired in 2024 by an investment
[00:52:29] firm in a deal worth up to 900 million euros. And this is the market that Ireland, as we've been saying, has just taken out of the shadows and made legal for their own use, for what they're calling lawful interception. The other piece of data that I thought our listeners would find interesting was about the market for the vulnerabilities that enabled the creation and deployment of this spyware.
[00:52:58] They write, the buying and selling of zero day vulnerabilities is closely linked to the spyware market, as these flaws allow spyware to bypass security protections and operate undetected. The vulnerabilities market is dangerous because it magnifies risk. A single zero day can compromise millions of devices. Once a vulnerability is found, the risk
[00:53:28] is that anyone can exploit it. And they're saying, for example, by comparison, if a good guy finds a zero day, they report it, probably receive a bounty, and it's removed from the ecosystem. It's removed from the device. However, spyware using a zero day never wants to disclose it. They want to use it as long as they can. So that zero day remains present until it's somehow discovered.
[00:53:57] So thus magnifying risk. Also, it drives innovation in spyware. Spyware vendors continuously adapt their tools to exploit newly discovered vulnerabilities. Of course, as we know, it also drives Apple to keep revising their chips in this ongoing cat and mouse battle against what the spyware is able to do. It lacks accountability. Vulnerabilities are traded secretly with minimal regulation, creating an ecosystem with no rules that
[00:54:27] poses a risk to all of us. Concentration multiplies risk. Many people are using only two OSes, Android and iOS, and some apps are globally used, WhatsApp, Gmail, and so forth. Once someone breaks into one of these systems, they can have access to hundreds of millions of devices. And Leo, this is the point you often make about our monoculture, the fact that there's basically either
[00:54:56] Android or iOS. There are not 20 different OSes, each struggling to maintain their own security. So this makes the point that because we have such a very vertical and narrow selection of platforms, you find a problem, you get access to a huge chunk of the world. They wrote, a zero-day vulnerability costs,
[00:55:25] via brokers... Okay, so this is what the payout is for zero-day vulnerabilities today. Five to seven million dollars for exploits targeting an iPhone. That is, you find a zero day for an iPhone today and, through a broker, you can obtain between five and seven million dollars. Android phones get up to five million. Chrome and Safari zero
[00:55:54] days are between three and three and a half million dollars. WhatsApp and iMessage pull three million for WhatsApp and five million for iMessage. But this proves my point. They wouldn't be worth that much if they were so easy to use and you could use them in a widespread fashion. These are very targeted, very specific attacks. Yes, and the reason they're getting that much money is, of course, the spyware vendors then turn around
[00:56:24] and charge that much money per customer. To the nation states. Yeah. Yes, exactly. Which ultimately the taxpayers finance. Yeah, that's nice. You know, governments don't generate their own cash. In 2024, the Google Threat Analysis Group reported that 20 out of the 25 vulnerabilities found on their products, which, this is Google's TAG group,
[00:56:54] so that's Android and Gmail. In 2023, 20 out of 25 were used by spyware vendors to perform their attacks. As of June 2025, more than 21,500 new vulnerabilities had already been published. So we're seeing a rate of 133 new vulnerabilities per day across, you know, not all
[00:57:24] high quality zero days in iOS, obviously, but broad spectrum, 133 vulnerabilities are being found of all types everywhere per day. They finish, or I'm finishing quoting this piece of it, by writing, even though at least 14 EU countries are reported to have used commercial spyware, regulation in Europe remains entirely absent. So
[00:57:55] Germany is saying we want to do this, Ireland is saying now we can do this, and the European Union is for whatever reason making noises like, oh, this is bad, but they're not actually taking any action. So to say that the future of encryption currently exists in a state of tension and uncertainty I think would be no overstatement given the reality of the overwhelming power of the world's governments and the
[00:58:25] necessity for vendors to abide by their laws, right? I mean, as we know, all Signal can do is say, well, we're leaving. They just can't ignore the laws in the prevailing regions where they want to operate. And much as I wish it were not the case, I do not see the interests of the EFF and the EDRI ultimately winning out here. Governments are never going to be satisfied
[00:58:54] until and unless they're able to intercept and monitor the communications of specific groups of individuals under the order of their courts at a minimum. That's clearly the path that we're on. And as for the legal use of absolute encryption, I would say, enjoy it while it lasts. Eventually, only criminals will be able to use unbreakable encryption.
[00:59:24] You know, its use will have been criminalized so that those who do use it, as I said earlier, will be guilty of at least that. And I think that's where we're headed, Leo. I mean, governments are just going to object. And it's unfortunate too, because, you know, pre-internet, when law enforcement had to use more analog means, you know, wiretaps
[00:59:53] and physical searches, everything wasn't binary, like, yes, you have encryption or you don't. I mean, in the analog world, there's, you know, hiding stuff in a mattress. It was a different world. Now it's absolute. I mean, it would almost be better if encryption actually worked the way it did in the
[01:00:22] movies, but was also very, very, very hard to break, where if you really, really, really, really needed something, you could get it. Unfortunately, what's going to happen is governments are going to legislate themselves the ability to flip a switch and have it all. They're going to say, you know, a phone operating within the European Union must have our software on it. And oh, it'll have some benefits, it'll be, you can use it to
[01:00:52] take the train and fly and, you know, it'll be, you know, stand in for you and be a digital ID and other things. And it'll also probably be able to capture what they want to, when they want to. I mean, you could absolutely have a program on a phone that would see all plain text, you know, everything that was typed in or dictated, before it went
[01:01:22] into encryption. That wouldn't be hard to do. You'd have to violate maybe some of Apple's rules, but if you're the government, you say, oh, Apple, you don't have to approve this app. We're just going to put it on every iPhone. Yeah. Any macro program that we're used to is able to watch what you do, capture those actions, and then store them. Well, it could also be capturing the keyboard. This argues very strongly for an open source operating system.
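To make the pre-encryption point concrete: writing strong encryption itself is now the easy part. Here's a toy sketch in plain Python using only the standard library, not NaCl or any vetted crypto library; it builds an HMAC-SHA256 keystream plus an integrity tag. All the function names are invented for illustration, and this is emphatically not production crypto. The point is how little code a working encrypt/decrypt pair takes, and that none of it matters if the OS captures the plaintext before encrypt() ever runs.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Generate a keystream by running HMAC-SHA256 in counter mode.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream, then append an HMAC tag
    # over nonce + ciphertext so tampering is detectable.
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(
            tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("bad tag: wrong key or tampered message")
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)
assert decrypt(key, encrypt(key, b"meet at noon")) == b"meet at noon"
```

That's the whole program, which is exactly why outlawing it is so hard; but as the discussion that follows makes clear, the OS underneath still sees every keystroke first.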
[01:01:51] I wish I had a phone that I could really use with an open source operating system. And now with vibe coding, I could probably, before this show's over, seriously, code up a, you know, I'd say, use NaCl or some well-known crypto library that's reliable, and I would like you to write me an encryption and decryption program, and I'm going to send my friend Steve the decryption program. And I think that
[01:02:21] would be, well, it's going to be very difficult to control this. This is like a print-your-own-gun thing. Yes, it is difficult to control, but that encryption and decryption program that you just mentioned hypothetically, it uses OS APIs. Right. So, I mean, there's no direct access to the XY coordinates that the user's touching on the screen. That
[01:02:51] service is provided by the OS, so it can always be tapped at that level. Yeah, I mean, you could capture scan codes from a keyboard, but there's nothing to keep the OS from seeing those as well. Yeah, you need to send me the Leophone, and Leophone is open source. We need to have open source hardware and open source software and make sure there's no government intrusion on either. Yeah. Well, you know that there are people who will be strongly enough incented to do that, and that
[01:03:21] ironically is the people the government wants to catch. The normal people who aren't doing that... Yeah. We're sitting ducks. Yeah. Which brings us to Microsoft and BitLocker. Oh, yeah. After this next break. Okay. And we'll talk about this. This is a news story this week, which we talked about on TWiT. Alex Stamos, who is a very well respected
[01:03:50] security guru, did have his thoughts. I want to hear what you have to say about it, and I'll give you Alex's thoughts as well. Perfect. Coming up on Security Now, our show today brought to you by Zscaler, the world's largest cloud security platform. We talk about AI. If you're a business, I think it's just generally accepted that if you're not using AI, that would be the equivalent of a business saying, well, we don't really need a telephone either. We'll just take orders by carrier pigeon, I guess.
[01:04:20] So the rewards of AI cannot be ignored, but there are risks, and those should not be ignored. And the risks aren't just bad guys. The risk is also loss of sensitive data, through the use of AIs, both local and SaaS. And then, of course, there's the attacks against enterprise-managed AI, the prompt injection, and bad guys are also using generative AI, and that's really increasing their capabilities. They can rapidly
[01:04:50] create phishing lures that are pitch-perfect. They can write malicious code. They can automate data extraction. We're going to talk about writing malicious code in just a little bit. It's happening now. So, there's all these issues. There were 1.3 million instances of social security numbers leaked to AI applications by businesses using AI applications, right? Not intentionally. ChatGPT and Microsoft Copilot together saw nearly 3.2 million data
[01:05:20] violations last year. There is a solution. It's time for a modern approach: Zscaler's Zero Trust plus AI. Zero Trust removes your attack surface. It secures your data everywhere. It safeguards your use of public and private AI. Yeah, Zscaler can do that. They can even protect you against ransomware and AI-powered phishing attacks. Check out what Siva, the Director of Security and Infrastructure at Zora, says about using Zscaler. Watch this.
[01:05:50] AI provides tremendous opportunities, but it also brings tremendous security concerns when it comes to data privacy and data security. The benefit of Z-scaler with ZIA rolled out for us right now is giving us the insights of how our employees are using various Gen AI tools. So, ability to monitor the activity, make sure that what we consider confidential and sensitive information according to companies' data classification does not get fed into the public LLM models, etc. With
[01:06:20] Zero Trust plus AI, you can thrive in the AI era, you can stay ahead of the competition, and you can remain resilient even as threats and risks evolve. Learn more at zscaler.com slash security. That's zscaler.com slash security. We thank them so much for supporting Security Now and Steve Gibson. All right, let's talk about this BitLocker thing. Okay. So, on the heels of Ireland's news and EDRI's pushback
[01:06:50] comes Microsoft's admission that they provided BitLocker keys to the FBI when asked. The headline of Thomas Brewster's piece in Forbes, which set off this firestorm of discussion and controversy, was quote, Microsoft gave FBI keys to unlock encrypted data, exposing major privacy flaw.
[01:07:20] With the tagline, the tech giant said it receives around 20 requests for BitLocker keys a year and will provide them to governments in response to valid court orders. But companies like Apple and Meta set up their systems so such a privacy violation is not possible. Okay, so here's what we know from Forbes reporting. Thomas wrote, early last year, the FBI served
[01:07:49] Microsoft with a search warrant asking it to provide recovery keys to unlock encrypted data stored on three laptops. Federal investigators in Guam believed the devices held evidence that would help prove individuals handling the island's COVID unemployment assistance program were part of a plot to steal funds. The data was protected with BitLocker, the software that's automatically enabled on many modern Windows PCs to
[01:08:19] safeguard all the data on the computer's hard drive. BitLocker scrambles the data so that only those with a key can decode it. It's possible for users to store those keys on a device they own, but Microsoft also recommends BitLocker users store their keys on its servers for convenience. While that means someone can access their data if they forget their password or if repeated failed attempts to log in lock the
[01:08:48] device, it also makes them vulnerable to law enforcement subpoenas and warrants. In the Guam case, Microsoft handed over the encryption keys to investigators. Microsoft confirmed to Forbes that it does provide BitLocker recovery keys if it receives a valid legal order. Microsoft spokesperson Charles Chamberlain said, quote, while key recovery offers convenience, it also carries a risk of
[01:09:17] unwanted access. So Microsoft believes customers are in the best position to decide how to manage their keys, unquote. He said the company receives around 20 requests for BitLocker keys per year and in many cases, the user has not stored their key in the cloud, making it impossible for Microsoft to assist. The Guam case is the first known instance where Microsoft has provided an encryption key to law enforcement.
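For context on what's actually being escrowed here: a BitLocker recovery key is a 48-digit number presented as eight groups of six digits. According to public forensic write-ups (a detail assumed here, not something the Forbes piece confirms), each group is a 16-bit value multiplied by 11, which gives Windows a built-in typo check. A hypothetical sketch of that format:

```python
import secrets

def make_recovery_key() -> str:
    # Eight groups; each encodes a random 16-bit value times 11,
    # giving every group a divisible-by-11 typo check.
    groups = [secrets.randbelow(65536) * 11 for _ in range(8)]
    return "-".join(f"{g:06d}" for g in groups)

def looks_valid(key: str) -> bool:
    # Structural check only: eight 6-digit groups, each a multiple
    # of 11 below 720,896 (65,536 * 11).
    groups = key.split("-")
    return (len(groups) == 8 and
            all(g.isdigit() and len(g) == 6 and
                int(g) % 11 == 0 and int(g) < 720896
                for g in groups))

assert looks_valid(make_recovery_key())
```

Whatever the exact format, the security question in this story isn't the key's structure; it's who holds the string: your printout, your thumb drive, or Microsoft's servers.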
[01:09:46] Back in 2013, a Microsoft engineer claimed he had been approached by government officials to install back doors in BitLocker, but had turned the requests down. Senator Ron Wyden said in a statement to Forbes, quote, it is simply irresponsible for tech companies to ship products in a way that allows them to secretly turn over users' encryption keys.
[01:10:15] Allowing ICE or other Trump goons to secretly obtain a user's encryption keys is giving them access to the entirety of a person's digital life and risks the personal safety and security of users and their families, unquote. Ron Wyden, of course, a Democrat. Law enforcement regularly asks tech giants to provide encryption keys, implement
[01:10:44] backdoor access, or weaken their security in other ways, but other companies have refused. Apple, in particular, has repeatedly been asked for access to encrypted data in its cloud or on its devices. In a highly publicized showdown with the government in 2016, Apple fought an FBI order to help open phones belonging to terrorists who shot and killed 14 in San Bernardino, California. Ultimately, the FBI found a contractor to
[01:11:43] hack into the iPhones. Privacy and encryption experts told Forbes the onus should be on Microsoft to provide stronger protection for consumers' personal devices and data. Apple, with its comparable FileVault and passwords system, and Meta's WhatsApp messaging app, also allow users to back up data on their apps and store a key in the cloud. However, both also allow the user to put the
[01:11:43] key in an encrypted file in the cloud, making law enforcement requests for it useless. Neither Apple nor Meta are reported to have turned over encryption keys of any kind in the past. Matthew Green, cryptography expert and associate professor at the Johns Hopkins University Information Security Institute said, quote, this is private data on a private computer and they made the architectural choice to
[01:12:12] hold and retain access to that data. They absolutely should be treating it like something that belongs to the user. If Apple can do it, if Google can do it, then Microsoft can do it. Microsoft is the only company that's not doing this, he added. It's a little weird. The lesson here is that if you, meaning Microsoft, have access to its users' keys, eventually law enforcement is going to come for them.
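The alternative architecture Green is pointing at, the one attributed above to Apple and Meta, is to escrow only a blob the provider cannot open: derive a wrapping key from a secret only the user knows and store the wrapped result. Here's a minimal, hypothetical stdlib sketch of the idea (PBKDF2 plus a simple XOR wrap, no integrity tag; all names invented here, and this is an illustration of the concept, not Apple's or Meta's actual design):

```python
import hashlib
import secrets

ITERATIONS = 200_000  # illustrative PBKDF2 work factor

def wrap_key(recovery_key: bytes, passphrase: str, salt: bytes) -> bytes:
    # Derive a wrapping key from the user's passphrase. The server
    # stores only salt + wrapped bytes and cannot unwrap either one.
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              ITERATIONS, dklen=len(recovery_key))
    return bytes(a ^ b for a, b in zip(recovery_key, kek))

def unwrap_key(wrapped: bytes, passphrase: str, salt: bytes) -> bytes:
    # Same derivation; XOR is its own inverse.
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              ITERATIONS, dklen=len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, kek))

salt = secrets.token_bytes(16)
rk = secrets.token_bytes(32)
blob = wrap_key(rk, "correct horse battery staple", salt)
assert unwrap_key(blob, "correct horse battery staple", salt) == rk
```

Under a scheme like this, a subpoena to the provider yields only a salt and a wrapped blob; without the user's passphrase, the recovery key stays out of reach.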
[01:12:44] Jennifer Granick, the ACLU's surveillance and cybersecurity counsel, raised concerns about the breadth of information the FBI could obtain if agents were to gain access to data protected by BitLocker. And that's really a good point, too. It's like, you know, they're not getting selective access to just what they want. They've got your drive.
[01:13:13] crimes, everything on the hard drive. Then we have to trust that the agents only look for information relevant to the authorized investigation and do not take advantage of the windfall to rummage around. In the Guam case, the court dockets show the warrant was successfully executed. The lawyer for defendant, Charissa Tenorio, who pleaded not guilty, said the
[01:13:42] information provided to her by the case's prosecutors included material from her client's computer, and that it included references to BitLocker keys that Microsoft had provided the FBI. The case is ongoing. Both Matthew Green and Jennifer Granick said Microsoft could have users install a key on a piece of hardware like a thumb drive, which would act as a backup or recovery key. Microsoft does allow for that option, but it's
[01:14:12] not the default setting for BitLocker on Windows PCs. Without the encryption keys from Microsoft, the FBI would have struggled to get any useful data from the computers. BitLocker's encryption algorithms have proven impenetrable to prior law enforcement attempts to break in, according to a Forbes review of historical cases. In early 2025, a forensic expert with ICE's Homeland Security Investigations
[01:14:41] Unit wrote in a court document that his agency did, quote, not possess the forensic tools required to break into devices encrypted with Microsoft BitLocker or any other style of encryption, unquote. In one previous case, federal investigators obtained keys by discovering that a subject had stored them on unencrypted drives. Now that the FBI and other agencies
[01:15:09] know Microsoft will comply with warrants, similar to the Guam case, they'll likely make more demands for encryption keys, Green said. My experience is, once the U.S. government gets used to having a capability, it's very hard to get rid of it. Okay, so the first takeaway from this is obvious, and it doesn't involve any sort of moral or ethical judgment either way.
[01:15:38] It's just the facts. Because encryption is absolute and unforgiving, it can be super useful to have a backup plan of some kind, right? You know, someone who will never forget, someone to hold on to one's emergency encryption backup keys. There's no doubt about that. Only if you are willing to take
[01:16:07] sober and full responsibility for never forgetting how to log in, would it make sense to have no backup whatsoever anywhere? That said, one option is to allow Microsoft to be the entity to hold on to your keys in the event of an emergency. They're certainly the default easy choice. The only downside
[01:16:36] to that, and again, without any judgment here, is that they will also turn your keys over to law enforcement after a judge approves their request. And, you know, that may not be a bad thing if you're certain that this would never become an issue for you. But if that's a concern, it's a good thing that you're now aware that Microsoft cannot be a trusted guardian
[01:17:06] of your privacy. They will capitulate. And now all global law enforcement and intelligence services know that. So it might be better to entrust those secrets to a close friend who law enforcement would never think to ask. But as I said, that's the first takeaway. There's another, and it's much more subtle. But I very much want to point it out to our listeners.
[01:17:36] This Forbes article reminded me of that previous instance 13 years ago, back in 2013, when a Microsoft engineer claimed he'd been approached by government officials to install back doors in BitLocker. My recollection was that it was more than a claim and that it was also more than once. For one thing, there were multiple people involved. So it wasn't just hearsay from one guy,
[01:18:05] you know, and you know, so the FBI asked. I don't have a problem with them asking; as the saying goes, well, you can ask. Okay, so to set this up for our listeners, I want to share the first portion of Mashable's coverage of this incident from 2013. Mashable's coverage of the story was introduced with the leading question headline,
[01:18:35] did the FBI lean on Microsoft for access to its encryption software? They wrote, the NSA is not the only government agency asking tech companies for help in cracking technology to access user data. Sources say the FBI has a history of requesting digital backdoors, which are generally understood as a hidden vulnerability in a program that would in theory let the agency peek into suspects'
[01:19:04] computers, and communications. In 2005, when Microsoft was about to launch BitLocker, its Windows software to encrypt and lock hard drives, the company approached the NSA, its British counterpart, the GCHQ, and the FBI, among other government and law enforcement agencies. Microsoft approached them saying, we're about to add encryption to Windows. They wrote, Microsoft's goal
[01:19:34] was twofold, get feedback from the agencies and sell BitLocker to them. However, the FBI, writes Mashable, concerned about its ability to fight crime, specifically child pornography, apparently repeatedly asked Microsoft to put a back door into the software. And then they tell their less technical audience a back door or trap door is a secret vulnerability that can be exploited
[01:20:04] to break or circumvent supposedly secure systems. For its part, the FBI categorically denies asking for such access, telling Mashable that the Bureau does not ask for back doors and that it only serves companies lawful court orders when it needs to access users' data. And legally, it would still need a warrant, even if a back door did exist. Peter Biddle,
[01:20:33] the head of the engineering team working on BitLocker at the time, revealed to Mashable the exchanges he had with various government agencies. Biddle told Mashable, quote, I was asked multiple times, confirming that a government agency had inquired about back doors, though he couldn't remember which one. He said, quote, and at least once the question was more like, if we were to officially ask you, what would you say?
[01:21:04] According to two former Microsoft engineers, FBI officials complained that BitLocker would make their jobs harder. An FBI agent reportedly said, quote, it's going to be really, really hard for us to do our jobs if every single person could have this technology. How do we break it? The story of how the FBI reportedly asked Microsoft to
[01:21:33] backdoor BitLocker to avoid, quote, going dark, the FBI's term for a potential scenario where encryption makes it impossible to intercept criminals' communications or break into a suspect's computer, provides a snapshot into how U.S. government agencies tried to persuade tech companies to weaken their security products or even poke a hidden hole to make them wiretap friendly. Last week,
[01:22:03] and this was written back in 2013, so 13 years ago, the New York Times, ProPublica, and The Guardian, Mashable writes, revealed that one of the ways the NSA circumvents internet cryptography is to ask companies to put backdoors into their products. The FBI is reportedly doing the same in the name of fighting crime, and its persuasion techniques appear to be very similar. According to reports,
[01:22:32] both the NSA and the FBI are subtle in their requests, which are never formal, never written, but are usually uttered during casual conversations almost jokingly. Nico Sell, plausible deniability, right? Exactly. Nico Sell, the founder of the privacy-enhancing app Wickr, was approached by an FBI agent after
[01:23:02] speaking at the RSA security conference at the end of February, again, 13 years ago, as first reported by CNET. According to Nico, the agent asked, so, are you going to give us a backdoor? She declined, and after pressing the agent, asked him to explain if he had a written request and to reveal his boss. The agent backed down. Cryptography and security expert Bruce
[01:23:31] Schneier said he's heard of these same types of tactics from others the government has approached seeking technological backdoors. Bruce told Mashable, it's never an explicit ask. It's an informal, oblique mention, joking conversation where you're felt out as to whether you might be amenable to it. If you're amenable, then the conversation continues.
[01:24:00] If you're not, well, it's like it never happened. Despite the requests being informal, Schneier and other surveillance experts are concerned; a request is a request, even if it's not a legal order, Bruce said. In the case of Microsoft, according to the engineers, the requests came in the course of multiple meetings with the FBI. These kinds of meetings were standard
[01:24:30] at Microsoft, according to both Biddle and another former Microsoft engineer who worked on the BitLocker team who wanted to remain anonymous due to the sensitivity of the matter. Biddle said, quote, I had more meetings with more agencies than I can remember or count. He said the meetings were so frequent and with so many different agencies, he doesn't specifically remember if it was the FBI that asked for a backdoor, but the anonymous Microsoft engineer,
[01:25:00] we, meaning Mashable, spoke with, confirmed that it was, in fact, the FBI. During a meeting, according to Biddle and the Microsoft engineer who were both present at the meeting, an agent complained about BitLocker and expressed his frustration, saying, quote, you guys are giving us the shaft, unquote. Though Biddle insisted he didn't remember which agency he spoke with, he said he did recall this particular exchange, and Biddle
[01:25:30] wasn't intimidated. He replied, no, we're not giving you the shaft, we're merely commoditizing the shaft. Unquote. Biddle, a believer in what he refers to as neutral technology, never agreed to put a backdoor in BitLocker, and other Microsoft engineers, when rumors spread that there was one, later denied that was ever a possibility. Niels Ferguson, Microsoft cryptographer
[01:25:59] and principal software development engineer, wrote, quote, the suggestion is that we are working with governments to create a backdoor so that they can always access BitLocker encrypted data. That will happen over my dead body, unquote. For Biddle, this... I mean, these guys were serious, and if you take a look, Biddle has a Wikipedia entry, you get a sense for him. Those were the good old days
[01:26:29] of Microsoft. Mashable writes, for Biddle, this was proof of a fundamental paradox facing government agencies and security software. How do you get secure software you can rely on while also retaining the ability to break into it if people use it to commit or cover up their crimes? Biddle said, quote, I realized that we were in this really interesting spot, sort of stuck in the middle between wanting to
[01:26:59] do a much better job at protecting our users' information, and at the same time realizing that this was starting to make government employees unhappy, unquote. Despite Microsoft's refusals to backdoor its product, the engineers kept working with the FBI to teach them about BitLocker and how it was possible to retrieve data in case an agent needed to get into an encrypted hard drive.
[01:27:28] At one point, the BitLocker team suggested the agency target the backup keys that the software creates. In some instances, BitLocker prompts users to print out a piece of paper with the key needed to unlock the hard drive to prevent loss of data if the user forgets his or her key. The anonymous Microsoft engineer said, quote, as soon as we said that, the mood
[01:27:58] in the room changed dramatically. They got really excited, unquote. In that instance, law enforcement agents wouldn't need a backdoor at all. As the engineer suggested, all they would need was a warrant to access a suspect's documents and retrieve the document that would unlock his or her hard drive. Okay. And this finally brings me to the point I wanted to make. Mashable quotes Christopher
[01:28:28] Soghoian. For Christopher Soghoian, a privacy and security expert at the ACLU, whether or not BitLocker has a backdoor isn't even that relevant. Again, 13 years ago, since it's a feature that very few Windows users employ or even have access to. It's not included in most Windows
[01:28:57] versions and it's not a default setting. Something that Soghoian said, quote, is not an accident, unquote. He told Mashable, quote, the impact is minimal because so few people use BitLocker, but it does speak to a friendly relationship between the companies and the government. He said, if you want to keep your data out of the U.S. government's hands, Microsoft is not your friend. Microsoft is unwilling to
[01:29:27] really make the government go dark. They're never really willing to protect their customers from the government. They're willing to take some steps, but they don't want to go too far, unquote. This is from an era when we were still to see. Right. So what I wanted to share about that last bit was, I think it's wrong. Okay, so first of all,
[01:29:56] this is a reminder about the way the world has changed during the intervening 13 years. At the time Christopher Soghoian was quoted, he correctly noted that BitLocker was a non-issue since it was so infrequently used, right? The FBI probably wouldn't actually encounter it in the field. I doubt he would feel the same way about BitLocker today. It will not be enabled on machines that had been
[01:30:25] upgraded to Windows 11 if earlier Windows was not using it. But most modern PCs that ship with Windows 11 pre-installed, even the Home Edition, with its simpler drive encryption, which is a BitLocker without as much UI and options, will have their hard drives encrypted out of the box if they're using a Microsoft account. To me, this seems
[01:30:54] entirely pro-consumer. Microsoft doesn't have to push BitLocker encryption. It certainly causes some pain and annoyance for both them and their Windows users. But it's extremely good for the privacy of their users that someone cannot remove their machine's drive and mount it on another machine to dump its entire contents. Those days are over.
[01:31:24] Might this mean that the FBI needs to obtain a court order to compel Microsoft to disclose the encryption keys of someone whom they have convinced a judge may have evidence crucial to a crime which they're working to solve? Yes, that might be necessary. But all of that is only necessary because Microsoft is pushing everyone to encrypt their drives in
[01:31:54] the first place. And there's no way any law enforcement agency anywhere is happy about that. If Microsoft were not defaulting to using BitLocker, things would still be the way they were 13 years ago when Christopher Soghoian said, who cares anyway? No one uses BitLocker. Today, most new systems do. And most Windows users obtain the huge benefit
[01:32:23] of having their drive's data much more securely protected from non-casual, non-government attack than if it were not encrypted. I don't disagree that Microsoft might be able to do more and that they may be shortchanging their users' privacy when push comes to shove. If they can provide unlocking keys to
[01:32:52] their users in an emergency, and that's the point, right, of their escrowing, then they can also provide them to law enforcement when under order to do so by a court. On balance, I would venture that many, many, many more Windows users' data have been saved by this policy than have been compromised by a law enforcement subpoena.
[01:33:22] I'm sure that Microsoft would always require a court order before disclosing. That's a given. It's not as if anyone can just ask for someone else's decryption keys and get them. So the data does have the same protection as does our other personal property and possessions. The protection is not absolute, no. And yes, if users were capable of taking full responsibility for the decryption of their data, that is for
[01:33:52] backing up their keys, then it could be absolute. But at least in the United States, the protection is in line with what U.S. citizens enjoy in the other areas of our lives. So, anyone who objects to turning their keys over to Microsoft has now been forewarned that their keys may be disclosed to law
[01:34:21] enforcement upon legal demand. If that's a concern, Windows Pro, Enterprise, and Education editions allow users to disable Microsoft's default key escrowing by setting two policies in the registry: "Do not allow BitLocker recovery information to be stored in Azure AD" and "Do not allow
[01:34:50] BitLocker recovery information to be stored in Microsoft account." After doing that, their BitLocker recovery keys can be rotated so that Microsoft never obtains the updated keys. BitLocker was designed correctly, the way all modern crypto is, so that there is a full-volume master encryption key which never leaves the device.
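This layered design, where the disk is sealed by a volume master key that is itself wrapped by a second, replaceable key, can be sketched as a toy in Python. To keep it self-contained, the "cipher" here is just a SHA-256-derived XOR keystream, purely illustrative and nothing like BitLocker's actual AES-based scheme:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Expand a key into n pseudorandom bytes. Toy construction for
    # illustration only -- real BitLocker uses AES, not SHA-256 XOR.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The full-volume master key encrypts the disk and never leaves the device.
vmk = secrets.token_bytes(32)
disk = xor_crypt(b"everything on the drive", vmk)

# A second key wraps (encrypts) the master key. That wrapping key is what
# gets backed up -- to paper, a USB stick, or Microsoft's servers.
wrapping_key = secrets.token_bytes(32)
wrapped_vmk = xor_crypt(vmk, wrapping_key)

# Rotation: while the volume is unlocked (the VMK is in hand), re-wrap the
# VMK under a brand-new key. The disk itself is never re-encrypted.
new_wrapping_key = secrets.token_bytes(32)
wrapped_vmk = xor_crypt(vmk, new_wrapping_key)

# The new key still recovers the disk; any escrowed copy of the old one
# can no longer unwrap anything.
recovered = xor_crypt(wrapped_vmk, new_wrapping_key)
print(xor_crypt(disk, recovered))  # b'everything on the drive'
```

The payoff of this structure is exactly what Steve describes next: because only the small wrapping key is replaced during rotation, the drive never has to be re-encrypted, and any previously escrowed copy of the old wrapping key becomes worthless.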
[01:35:20] Then there is another key that encrypts that key, and it's that secondary key which Microsoft gets a copy of. It's also stored in the TPM, protected by a PIN, and it's the thing that you're able to back up to your USB drive. It's the master key's encryption key. So, it is possible with two
[01:35:49] simple commands in Windows, while BitLocker is unlocked and the full-volume master key is decrypted, to rotate the key that encrypts it, basically discarding the old one. After you've disabled sharing this with Microsoft, then even if
[01:36:19] Microsoft has a key, it's no longer of any value. Remember, though, there's no one to come crying to if you're unable to log into your computer or there's any sort of problem. I would absolutely print this on paper. A USB thumb drive is not high-fidelity storage. I wouldn't trust anything that I care about to a USB drive. Print it on paper because it's too
[01:36:48] important not to. But my feeling is, again, Microsoft doesn't have to be encrypting everyone's drive, right? And if everyone's drive was not encrypted by default, most would not be, and the FBI would have no problem. The fact that they have chosen to switch to default encryption on, to me, that's 100% in the service of their customers. That's a good thing. And for everybody's sanity, because encryption is
[01:37:18] absolute, using an online account, which we all know they don't make easy not to use these days, if you can even find a way around it, they're backing up those keys. And again, they're not doing it to be a friend to the FBI, they're doing it because people are going to and have forgotten how to log onto their computer, and if you can't do that, you can't access your hard drive ever. So, again, I think it's exactly the right set of trade-offs, Leo. It's
[01:37:48] everybody's drive is encrypted, there is a get-out-of-jail-free card stored up in Redmond's servers, and you're able to turn that off if you're knowledgeable about how to do so and wish to do so. So, what did Alex think? I think he essentially agreed with you that it wasn't hair on fire. It's very good for people to understand that this can happen so that you can make a choice depending on your
[01:38:19] risk tolerance and your threat model. And some people may just not want Microsoft to have that ability. I get it. Actually, I wouldn't because I don't want to have an online account. I don't want Microsoft in my business. There's a lot of reasons not to use an MSA account to log into Windows. It's a shame. Microsoft makes that harder and harder and harder. I think that's where maybe Microsoft is a little bit at fault. But you're right. I don't think they're doing it to make it easy for law enforcement to get the data. They're doing it
[01:38:49] because people lose their keys and they just want to protect them. He pointed out though that Apple has found a way to do all of this without having access to the keys in a way that they could give this to law enforcement. Apple by default encrypts the hard drives on your Macintosh with FileVault. They do have a way of saying I forgot but they're using some sort of key escrow system so that they don't actually
[01:39:19] have access to the key. As you pointed out, Google also does this. In fact, all phones do this without a back door. Remember, the FBI tried to get Apple to give up the keys in the San Bernardino case and Apple said no. That probably hurt their reputation with some people considerably and definitely burnished it with some people like me.
[01:39:49] I think it's good that we know this. It's good that there are alternatives. It is interesting that Microsoft doesn't do what Apple does. They could take that extra key escrow step and make it possible to do it all. I'll have to look into that because I don't understand what it is that they're doing. Maybe it's because you have access to the physical hardware, but if the user doesn't have to remember
[01:40:19] anything, then... Well, no. The TPM stores it. I think that when you log in... I know when you log in, it unlocks it. So it's not... You know, on my Linux box where I use LUKS, I have to unencrypt the drive before I can log in. I mean, that's a two-step process. Apple's not like that. You log in, then there is a long process, fairly long process of unencrypting the drive and then you're in. And I'm not sure how...
[01:40:49] So if it's that your device contains the key and they're able to get it from the device, then it is device-bound. And that makes sense. But then what happens if the user loses their device? They have a backup and that's when they want to use the backup to restore it onto a different device, but that new device won't have that secret. So again, I've not looked at this for so long and things change,
[01:41:18] but it would be interesting to know. It generates a recovery key, which you could print out, and that you can use. And Apple doesn't have access to that. In that case, it's exactly like what we would do with BitLocker, except that Microsoft defaults to also having a copy of that recovery key. Apple does not default to having a copy of the recovery key. Yeah. You can allow my iCloud account to unlock my disk,
[01:41:48] which is the BitLocker solution. But Apple is explicit, by the way, when you set this up about how to do this, unlike Microsoft. I've chosen them to be the backbone for my stuff. I trust them more than anybody else. I mean, look, we automatically on mobile devices, it's all encrypted, automatically. We talked about this the other day when I was setting up the SlimX laptop. I just think it's better to encrypt it because that way you don't have to worry about
[01:42:19] erasing it and the issues of is some of this going to be accidentally stored on an SSD and all of that stuff. It's gobbledygook without the password. Without the master key. I like that. I think that's the right way to go. I'm comfortable with that. And yeah, I have to enter my password twice on my Linux box. Well, I enter it once and I use a fingerprint the second time. I don't find that to be too onerous. No, I think biometrics is going to be the way things are going. We're having
[01:42:48] to authenticate more and more often to say that we're sitting in front of our computer. And so having a fingerprint reader in a keyboard or a mouse makes a lot of sense. It's a great solution. So we heard from me, we heard from Alex Stamos. We're next going to hear from Alex Neihaus after this break. Our longtime friend, one of the very first advertisers. By the way, Chris Soghoian, for those who are wondering, is Sal Soghoian's brother. Sal, of course, very well-known
[01:43:18] Apple guy for a long time. His brother's a very well-known security guy. So that is the same Soghoian. That's the Soghoian family. You're watching Security Now with Mr. Steve Gibson and I'm Leo Laporte. We're glad you're here. We encourage you to join the club, support the show if you believe in what we're doing. Yeah, the advertisers support it, but that's not enough to keep us on the air. We need your support too. A quarter of our operating expenses are paid for by Club TWiT members. It's going to be more this year as
[01:43:48] advertising support dwindles a little bit. That's partly our fault. Lisa did all the ad sales for a long time, my wife and our CEO. She decided to, I guess, semi-retire. She didn't want to do that anymore. So we've outsourced the ad sales, but it's taken a while to get up to speed. And right now we're a little bit underwater. So this becomes even more important. If you want to support the programming you hear on Twit, join the club. It's very simple. Twit.tv slash club twit. Lots of benefits, including ad-free
[01:44:17] versions of all the shows, access to the Club TWiT Discord, all the special programming we do just for club members. But the main reason to do it is to keep these shows going, twit.tv slash club twit. Now, on we go with more Security Now. Steve. Friend Alex shares, I think, some tremendous business-centric perspective on what the application of AI will mean to the enterprise and what
[01:44:47] pitfalls it's more than likely to offer. So he writes, gents, Security Now has spent a lot of time over the last couple of months on AI and how its first most natural application is software development. After a recent experience packaging up a hobby script as a public open source PowerShell module, I could not agree more that the development tool set is rapidly changing. But, there's
[01:45:17] always a but, I worry about mechanically produced code, particularly in enterprise systems that deal with financial and personal information at scale. Think a brokerage or a multi-state healthcare system. If we look at the historical waves of management thinking about the development costs of crucial enterprise systems, we see an analyst push to reduce those costs. That's inevitably led to
[01:45:46] declines in quality, reliability, and most of all, security. In the early days of enterprise development, engineers worked in-house. I started as a developer at Mass General Hospital in the 1980s when outsourcing was not yet in the lexicon. Yet MGH developed MUMPS in-house, which today is the core database environment of the largest electronic medical records vendor in the USA.
[01:46:17] Outsourcing became all the cost-cutting rage among enterprises, followed on quickly by the offshoring of enterprise development. Executives, based on then-current business consulting doctrine, decided that IT wasn't their core business. They thought their businesses just made widgets or sold products or provided a service. IT was orthogonal to their core function. History shows
[01:46:47] what a strategic mistake that thinking was. It led directly to the situation we find ourselves in today, a race to the bottom to procure development resources that cost a fraction of in-house resources. Security now regularly documents the results, breaches, botnets, system failures and worse. Over time, enterprises discovered their IT systems are the business.
[01:47:17] You can't make a widget, sell it or service it without an enterprise system. Unfortunately, many businesses continue to dismantle their core capabilities with a massive mistaken shoemaker's children syndrome. AI's rapid development means yet another giant epoch in computing technology is just starting. We're about to live through another turn of the wheel of technological progress. Having used
[01:47:47] AI in a simple vibe coding project, it's clear to me that AI cannot replace developers, but that's how it's being pitched to the same enterprises that previously committed the not-our-core-function mistake. Amazon and Microsoft are both insisting that replacement is the primary benefit. Enterprises now completely beholden to these hyperscalers take their
[01:48:16] cues from them. I use AWS and Azure cloud products every day in real client situations. I can tell you quality isn't their north star. Anyone who's ever struggled with Microsoft's automated API documentation can tell you it isn't worth the electrons used to display it. In other words, we should not repeat the mistaken business assumptions that drove the outsourcing debacle.
[01:48:46] Instead, we can upskill developers while still retaining the focus on human development. Imagine the benefits if we used AI, not to replace those $25 per hour outsourced developers who produce the worst code we've ever seen, but instead train them to use AI to write tests, to check the level
[01:49:15] of every included NPM or PyPI package against the CVE database before recompiling or to fuzz their functions. That last is the hallmark of outsourced code vulnerabilities. It works, but only on the happy path. We're still in the early hype cycle days of AI, but the hype sometimes becomes at least the partial reality. Thinking about AI only as replacement for
[01:49:44] developers makes the same mistake we made with outsourcing, only magnified many times. You both know how much I love the show and your tenacity in producing compelling podcasts week after week, decade after decade. Thanks for that. Security Now's number one fan, Alex. I wanted to share Alex's perspective because I think it is super valuable and exactly correct. It also makes
[01:50:14] such complete sense. My own perspective tends to get wrapped up in the technology, so the effect AI will be having on internal enterprise development isn't the sort of thing I tend to focus on. So thank you, Alex. I imagine this may give many of our enterprise centric listeners something to think about and perhaps discuss with their peers and managers. And Leo, you can see his point, right? Oh, he's absolutely right. It's absolutely being sold as think of all the people you can fire. Right.
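Alex's suggestion that teams use AI to fuzz their functions, since outsourced code so often "works, but only on the happy path," can be sketched as a minimal harness. The function under test here, parse_price, is a hypothetical example invented for illustration, not anything from the show:

```python
import random
import string

def parse_price(s: str) -> int:
    # Hypothetical function under test: turn "12.34" into 1234 cents.
    # It works on the happy path; fuzzing shows what it does off of it.
    dollars, cents = s.split(".")
    return int(dollars) * 100 + int(cents)

def fuzz(fn, runs: int = 2000, seed: int = 1):
    # Hammer fn with short random printable strings, recording every
    # input that raises, along with the exception type it triggered.
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 12))
        )
        try:
            fn(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

found = fuzz(parse_price)
print(parse_price("12.34"))  # 1234 -- the happy path works
print(len(found) > 0)        # True -- nearly every random input crashes it
```

Even this naive harness instantly surfaces the unhandled cases (no decimal point, empty input, multiple dots) that happy-path testing never exercises, which is exactly the class of bug Alex says dominates outsourced code.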
[01:50:43] Well, and we talked about this last week, I mean, and I've been thinking about it since you posed the question, the skills that you have as a coder are not thrown away. The maturity. When you're doing vibe coding, you very much need similar skills. It's almost as if you're a team working with junior coders and instructing them. The sad thing, and I think the real problem we're going to have, is that a lot of companies are using AI coding tools to
[01:51:13] replace the junior programmers, which means there's no longer a pipeline for people to become, you know, senior programmers because they don't get to do it. So I hope companies will continue to hire entry level coders to work with these tools. What's interesting is you're going to get a lot more applicants who are very skilled with these tools. That's what kids are doing in college right now in computer science classes. They're learning how to use these tools. And maybe, let's face it, maybe that's the future of coding.
[01:51:43] I wouldn't be surprised. It's just another kind of high-level language. I do think it's the future. I agree with you completely. I think it's very clear that AI has such a profound ability to code that learning how to get AI to give you the answer that you want is part of the learning. But Alex is right. We can't treat it as a panacea. We have to think about
[01:52:12] the consequences and how we're going to keep that pipeline going and how we're going to go forward instead of throwing everything out and starting over because that's not going to work either. Well, I think we're going to go through a period of pain, Leo. We know that. While the C-suite executives realize, whoops, we did throw out the baby. Gavin has a confession to make. Another listener of ours, he said, hi, Steve. And he says right up, I have a confession to make.
[01:52:42] I have knowingly opened up public database access in production systems. He said, here's how this came about. Years ago, I became the sole software developer at a small UK ISP following the departure of several senior team members. The company had a plethora of legacy systems scattered among various cloud service providers. Following COVID-19, sales plummeted with many
[01:53:12] customers shutting up shop and very few businesses investing. One of our biggest costs was managed database instances. My predecessors had spun up individual database servers, MySQL and Postgres, for each of our many applications across different clouds,
[01:53:41] AWS, DigitalOcean, et cetera, and multiple different accounts within each. In terms of application isolation, it was a great approach, but it was costing a small fortune. My task was to consolidate as many of these databases as possible, which would bring our costs down quickly, amounting to thousands of pounds per month. There were three possible ways of accomplishing this after first migrating all application
[01:54:11] databases to a central instance. First, move all of the applications from the various cloud accounts into a single account and virtual private cloud (VPC) so they could all access the new database instances privately. Or, give each of our applications static IPs and set up security rules in front of the new database instances to limit their access.
[01:54:42] Or, open up full public access to make the passwords as strong as possible. Now, remember, Gavin is a listener, so he knows what he's saying. He wrote, the problem with moving all of our applications is that it would take a very long time and many of them relied on cloud provider specific utilities and network configurations for which we would need to find alternatives
[01:55:11] and rewrite large swaths of legacy code. In other words, there was a lot of lock-in, and actually moving them would have meant a huge burden. Or, the second option, he says, the problem with the static IP solution is that they became quite expensive, meaning static IPs, and some of our platforms, DigitalOcean Apps, for example, at the time, didn't offer them at all. So, static IPs were out, you couldn't use static
[01:55:40] IPs and firewall rules. So, he writes, reluctantly, the third option was chosen, and management were happy to take on the additional risk. Right? Right up until there's a major breach, they're happy. He said, management were happy to take on the additional risk, which I explained to them, especially in light of the immediate expected cost savings. He says, and so, for about four years,
[01:56:10] we were running with our main databases publicly exposed to the internet. But now, today, after a lot of work, all of our apps are in private subnets and linked VPCs, and thankfully, our databases are no longer exposed. I know that my company was not alone in doing this sort of thing, having spoken with other devs in the industry. So, here it is, a real world example of how this
[01:56:40] can happen and not through negligence, rather through unfortunate resource pressures. Luckily, we were never compromised to our knowledge. Good for you, Gavin, for acknowledging that. And we just about managed to bounce back as a business. And I'm making sure this sort of thing won't happen again. All the best, Gavin, listening since 2018. So, first of all, Gavin, confession is always good for the soul. Second, I have no problem whatsoever
[01:57:09] with what you needed to do because none of it was done blindly or without thought and a clear understanding and balancing the costs and the potential consequences. So, I would judge that to be, well, of course, not maximally safe, but at least entirely responsible. You were not irresponsible. And given the constraints you were operating under, the interim solution you adopted was
[01:57:39] the best you could achieve. No one should fault you for that. John David Hicken, his subject was on ISPs selling your DNS data. He wrote, can't you just set up the ISP's modem router as an edge router, he says turning off Wi-Fi as well, and connect another router or more behind that. He says, an old solution of yours repurposed.
[01:58:09] Cheers, John. Okay, so I received other similar questions, and he's talking about ISP spying. So, I wanted to take a minute to examine the ISP's advantage. What we need to consider is that our ISPs, minus Cox Cable, know exactly who we are by name, address, and payment information. And they're in the
[01:58:38] very special and privacy-sensitive position of having direct access to our individual internet traffic. We've invested endless podcasts examining cookies and fingerprinting and tracking beacons and all manner of privacy-breaching and privacy-protecting solutions and technologies. And there, among it all, sit our ISPs, through which
[01:59:08] every bit, byte, kilobyte, megabyte, and gigabyte of our traffic flows. No one else on the entire planet enjoys such direct access to exactly what those in our household are doing from moment to moment. Now, before the era of Let's Encrypt and their great Encrypt the Internet
[01:59:38] TLS revolution, our ISPs were often privy to the detailed content of everything we did. In retrospect, it was quite bracing. Today, with everything encrypted, ISPs are unable to see into our connections, but they can still see where we're connecting, and if we're not encrypting our DNS, they can also track every domain name anyone in our household and any of our
[02:00:07] home's IoT devices looks up. Tracking the remote IP we're connecting to is much less useful today than it was years ago due to the massively widespread use of multi-domain hosting. Cloudflare has a large pool of IP addresses which provide services to their large pool of customer websites. Among those, there's a many-to-many relationship.
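To make that many-to-many relationship concrete, here's a toy sketch in Python. The domains are hypothetical and the addresses come from the TEST-NET range (203.0.113.0/24), so none of this reflects any real CDN; it simply shows why a destination IP alone cannot identify the site being visited when many customer domains share a provider's address pool.

```python
# Toy model of CDN hosting: many customer domains are served from a
# shared pool of IP addresses. An observer (like an ISP) who sees only
# the destination IP cannot tell which site was actually visited.
# Domains and addresses below are made up for illustration.

from collections import defaultdict

# Hypothetical CDN: each domain is reachable at several shared IPs.
domain_to_ips = {
    "example-shop.com":  ["203.0.113.10", "203.0.113.11"],
    "example-news.com":  ["203.0.113.10", "203.0.113.12"],
    "example-forum.com": ["203.0.113.11", "203.0.113.12"],
}

# Invert the mapping: this is all an ISP can infer from a destination IP.
ip_to_domains = defaultdict(set)
for domain, ips in domain_to_ips.items():
    for ip in ips:
        ip_to_domains[ip].add(domain)

# Every shared IP maps back to more than one possible site.
for ip, domains in sorted(ip_to_domains.items()):
    print(ip, "->", sorted(domains))
```

Since each address serves multiple domains, observing the connection's destination yields only a set of candidate sites, which is exactly the ambiguity described above.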
[02:00:37] So having an ISP only able to see a customer's traffic destination is far less useful today than it was 20 years ago. However, there's still a problem, and that's that SNI, the server name indication that's carried in the TLS client hello handshake, is only encrypted when both ends support TLS version 1.3 and negotiate its ECH extension. TLS 1.3
[02:01:07] enabled ECH, encrypted client hello, for the express purpose of preventing anyone who might be examining internet traffic, like an ISP, from picking up the destination domain of any new connection. At this point in time, at the start of 2026, more than half of all internet traffic is now using TLS version 1.3, but a substantial share, at
[02:01:37] least a third, still is not. But the privacy leakage that continues to occur during the TLS handshake is slowly draining away over time, and eventually everything will get to 1.3. After destination IP and TLS handshake leakage, DNS is the remaining potential privacy leak. Firefox users are now being
[02:02:06] automatically protected, thanks to an agreement between Mozilla and Cloudflare, to use Firefox's built-in DNS over HTTPS with Cloudflare's DNS resolvers by default. So, Firefox users get default protection. But, the Chromium browser family by default will upgrade to DoH if and only if
[02:02:35] the DNS provider the user has manually configured for unencrypted DNS also supports DoH. This is called opportunistic DoH. But, since most users have not manually reconfigured their DNS and just run with whatever DNS their ISP has provided, that will be unencrypted DNS over UDP. So, only people using Firefox today will have
[02:03:05] their DNS lookups masked from DNS snooping. And, I don't know why, but that's the way it is. One increasingly popular solution is to use or obtain a home router that can perform its own remote DoH lookups and configure it to use one of the major free public DNS services offered by Cloudflare, Google,
[02:03:34] OpenDNS, NextDNS, or whatever service you choose. After that, all of the internal network's DHCP-configured devices, meaning typically all of your computers, mobile devices, and IoT devices, will use standard DNS to the router, but all queries for domain names will then be encrypted and handled by the
[02:04:04] router. The ISP sees nothing. So, with everything using TLS now, and TLS moving to version 1.3 with encrypted client hello to mask the target domain, and your browser or router using DoH, the only remaining privacy concern is an ISP's ability to observe the destination of your traffic. I don't see that ever going away. As I noted, thanks to the increasing
[02:04:34] use of CDNs and cloud hosting, which aggregate many domains among IP addresses, that's far less certain than it once was. An ISP can't absolutely know where you're going. But for anyone desiring absolute privacy from ISP snooping, the final step would be to use some form of traffic tunneling. So that means Tor or a VPN. Using
[02:05:04] one of those means that the ISP is able to determine nothing beyond the fact that you're using Tor or a VPN. That they can see, but nothing else. So circling back around to John's question, it should be clear that as long as an ISP is the carrier of a subscriber's traffic, nothing else the user might do inside their network, like a router within a router, would change the nature
[02:05:34] of the traffic which emerges from their LAN to pass under the ISP's watchful and perhaps curious traffic-logging and monitoring eye. Whether any specific ISP is doing that, we don't know one way or the other. And finally, Troy Shahumian said, Steve, OMG, I just realized
[02:06:03] how many podcast files I have that probably need rewinding. Can you recommend a program? Can you recommend a program to do this? Okay, so Troy, first of all, I feel your pain. And I can only imagine the size of the podcast rewinding backlog you might be facing now. I know that many of our listeners also listen to other podcasts,
[02:06:33] so your burden might be even larger. But just taking security now, since this is podcast 1062, if you've been listening from the start, or if you started late, then went back to catch up from the beginning, and if God help you, you have not previously rewound any of those before now, well, it's not going to be pretty.
[02:07:03] I don't envy the corner that you've painted yourself into. Okay, now what you really need is some sort of, and this is what you're asking for, right? Some sort of mass gang parallel podcast rewinder. And I was thinking about this. Once I finish reworking GRC's e-commerce facility, and Lori and I get moved to our new place, it'll be a project in the spring here, I plan to readdress GRC's
[02:07:32] ValidDrive freeware. And although I don't have it on my roadmap, I may see whether I might be able to sneak in some sort of mass podcast rewinding facility, sort of as a side feature. So you could copy the podcasts onto a thumb drive or USB-attachable storage, and then I would have ValidDrive just rewind them all for you.
[02:08:02] I have already started the vibe coding to help me rewrite a podcast rewinding program. Claude says, I'll be happy to help you build a podcast rewinding program. I need to understand more of what you're looking for. Do you want a command line, a desktop GUI, a web app, a mobile app? Can we get all the above, Leo? We could. Let's do it in Python. This is why we have AI and data centers, so we
[02:08:31] can rewind people's podcasts. It's an unappreciated problem. Do we want basic playback controls, which would include rewind? Variable speed playback, podcast feed management, progress tracking? Yes, let's do it all. Okay. We definitely need progress tracking, because with 1062 podcasts to be rewound, after you finish listening to this, you're going to have to see
[02:09:01] how far along you've gotten. It's not clear to me. We don't really know yet, do we, how quickly a podcast can be rewound? That's a good question. That's probably why I shouldn't have chosen Python; maybe I should have written this in C, something a little faster. You've got to use a compiled language. Yeah, you don't want to be, you know. Maybe, you know, people did buy... Whoa. I'm sorry, I clicked a
[02:09:31] button and it erased us both. Here we go. What were you saying? People did buy a little standalone PC in order to run Spinrite. If it turns out that rewinding podcasts does take some time per podcast, then maybe it would make sense to get a little auxiliary PC so you're not locked out of getting any work done while your podcasts are being rewound, especially if you've got a large backlog. We need
[02:10:00] asynchronous podcast rewinding for sure. Yeah, definitely parallel multitasking. Concurrency. Yes, that means I'm going to have to use probably Rust to be good at that. Fire up a thousand threads and give each one a podcast. Yeah. Patrick Delahanty said you wouldn't believe how much time TWiT spends rewinding podcasts because listeners didn't do it themselves. So really it was that sticker. The sticker on Blockbuster tapes said be kind,
[02:10:31] rewind. It's for us all. We're doing it for all. Don't think of yourself when you're done watching a podcast. Just leave it there. Leave it hanging. Oh, no, I've made a mistake. I've made a horrible mistake. Claude Code says user declined to answer questions. So it's given up. It's given up. I'll start over. We're going to take our final break, Leo. Yes. And then we're going to look at
[02:11:01] my take on what this means, that AI has now been found generating malware. Yeah. Well, we knew it was just a matter of time. But it's worse than we thought. Oh, yeah, because it's probably pretty good. It has access to all the malware ever written before, so it can really refine the concept. That's coming up. You're watching Security Now with Steve Gibson. We're glad you're here. Keep watching.
[02:11:30] We do the show every Tuesday. So we'll be back in February with the first show of the month. Tuesdays. Next week. Next week. That's right. Steve always boils it down to the essentials, a.k.a. next week. We do the show every Tuesday, right after MacBreak Weekly. That's about 1:30 Pacific. I'm sorry. I keep breaking my pledge. I want to do this in 24-hour time from now on. I'm getting rid of the O'Clocks,
[02:12:00] the AMs, and the PMs. It's 1330 Pacific. That's 1630 East Coast time. That's 2130 UTC. You can watch us live on YouTube, X, Twitch, Facebook, LinkedIn, and Kick, and of course in the Club TWiT Discord as well, or download episodes from Steve's site or twit.tv slash sn. I'll explain all that at the end of the show. Every day at Sierra, you'll find top brand apparel, footwear, and gear for 20 to 60% less than department
[02:12:30] and specialty store prices. And during clearance time at Sierra, those incredible prices are 40 divided by 2.5, carry the one, even lower. And when you shop clearance at Sierra, you can save a whole lot more on everything you need to get active and outside. That's a lot of saving. Epic brands, fast selection, teeny tiny prices. That's Sierra. Visit your local Sierra store today or shop online at Sierra.com. Meanwhile, let's get back, he's hydrated, to
[02:13:00] Security Now. Okay, what we've been expecting has happened, and it's every bit as bad as we worried it would be. Last Tuesday, Check Point Research published their analysis of a newly discovered malware, which they named VoidLink. Their research was titled VoidLink: Evidence That the Era of Advanced AI-Generated Malware
[02:13:29] Has Begun. What we all knew was coming has arrived. Check Point summarized this news with five key points. They wrote, Check Point Research believes a new era of AI-generated malware has begun. VoidLink stands as the first clearly documented case of this era as a truly advanced malware framework authored almost
[02:13:59] entirely by artificial intelligence, likely under the direction of a single individual. Second, until now, solid evidence of AI-generated malware has primarily been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools. VoidLink is the first evidence-based case that shows how dangerous
[02:14:29] AI can become in the hands of more capable malware developers. Third, operational security (OPSEC) failures by the VoidLink developer exposed development artifacts. These materials provided clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under one week. Fourth, this case
[02:14:58] highlights the dangers of how AI can enable a single actor to plan, build, and iterate complex systems at a pace that previously required coordinated teams, ultimately normalizing high-complexity attacks that previously would only originate from high-resource threat actors. And finally, from a methodology perspective, the actor
[02:15:27] used the model beyond coding, adopting an approach called spec-driven development, SDD, first tasking it to generate a structured multi-team development plan with sprint schedules, specifications, and deliverables. That documentation was then repurposed as the execution blueprint, which the model likely followed to implement, iterate, and test
[02:15:56] the malware end-to-end. Okay, so we've been rejoicing over the surprising jump in Claude's abilities. For example, Claude has made end-to-end creation of applications possible. As they say, everybody's doing it. Unfortunately, we've known
[02:16:26] that everyone would eventually include malware authors. That's now happened, and it's as bad as we worried it would be. I'm not going to examine this particular instance in depth, because what's the point? There will be another one tomorrow, and the day after, or an hour from now. This is clearly the beginning of an entirely new problem domain. Nevertheless, Checkpoint's introduction is worth sharing. They wrote,
[02:16:55] When we first encountered VoidLink, we were struck by its level of maturity, high functionality, efficient architecture, and flexible dynamic operating model, employing technologies like eBPF and LKM rootkits, and dedicated modules for cloud enumeration and post-exploitation in container environments. This unusual piece of malware seemed
[02:17:24] to be a larger development effort by an advanced actor. As we continued tracing it and tracking it, we watched it evolve in near real time, rapidly transforming from what appeared to be a functional development build into a comprehensive modular framework. Over time, additional components were introduced, command and control infrastructure was established, and the project accelerated
[02:17:54] toward a full-fledged operational platform. In parallel, we monitored the actor's supporting infrastructure and identified multiple operational security failures. These missteps exposed substantial portions of VoidLink's internal materials, including documentation, source code, and project components. The leaks also contained detailed planning artifacts, sprints,
[02:18:23] design ideas, and timelines for three distinct internal teams, spanning more than 30 weeks of planned development. At face value, this level of structure suggested a well-resourced organization investing heavily in engineering and operationalization. However, the sprint timeline did not align with our observations. We had directly
[02:18:52] witnessed the malware's capabilities expanding far faster than the documentation suggested. Deeper investigation revealed clear artifacts indicating that the development plan itself was generated and orchestrated by an AI model, and that it was likely used as the blueprint to build, execute, and test the framework. Because AI-produced documentation is typically
[02:19:23] thorough, many of these artifacts were time-stamped and unusually revealing. They show how, in less than a week, a single individual likely drove VoidLink from concept to a working, evolving reality. As this narrative comes into focus, it turns long-discussed concerns about AI-enabled malware from theory into practice. VoidLink,
[02:19:53] implemented to a notably high engineering standard, demonstrates how rapidly sophisticated offensive capability can be produced, and how dangerous AI becomes when placed in the wrong hands. The general approach to developing VoidLink can be described as spec-driven development, SDD. In this workflow, a developer begins by specifying what they're building,
[02:20:22] then creates a plan, breaks that plan into tasks, and only then allows an agent to begin implementing it. Artifacts from VoidLink's development environment suggest that the developer followed a similar pattern, first defining the project based on general guidelines and an existing code base, then having the AI translate those guidelines into an architecture and build plan across three separate teams, paired with strict coding
[02:20:52] guidelines and constraints, and only afterward running the agent to execute the implementation. VoidLink's development likely began in late November 2025, and remember, we're at the end of January, when its developer turned to TRAE SOLO, T-R-A-E, an AI assistant embedded in TRAE, an AI-centric IDE. While we do not
[02:21:21] have access to the full conversation history, TRAE, again, T-R-A-E, if anyone wants to Google it, automatically produces helper files that preserve key portions of the original guidance provided to the model. Those TRAE-generated files appear to have been copied alongside the source code into the threat actor's server and later surfaced due to an exposed open directory. This leakage
[02:21:51] gave us unusually direct visibility into the project's earliest directives. In this case, TRAE generated a Chinese language instruction document. These directives offer a rare window into VoidLink's early-stage planning and the baseline requirements that set the project in motion. Okay, so TRAE, spelled T-R-A-E, is a creation of ByteDance,
[02:22:20] the famous Beijing-based creator of TikTok. It's been around since last February, so it's relatively new, and it's been maturing rapidly. What makes TRAE appealing is that it's an IDE, an integrated development environment-centric solution. TRAE's documentation explains, writing, TRAE IDE is your powerful AI-powered code editor from ByteDance featuring
[02:22:49] Claude 3.5 and GPT-4 and DeepSeek integration. By the way, that was back in February; it's been updated since. It's designed to be your coding companion, offering AI-assisted features like code completion, intelligent suggestions, and agent-based programming capabilities. When developing with TRAE IDE, you can collaborate with AI to boost your productivity. TRAE IDE provides essential
[02:23:17] IDE functionality including code editing, project management, extension management, version control, and more. It supports seamless migration from VS Code and Cursor by importing your existing configurations. During coding, you can engage in real-time conversations with the AI assistant for help, including code explanations, documentation generation, and error repair. The interface is fully
[02:23:46] optimized for both English and Chinese users. The AI assistant understands your code context and provides intelligent code suggestions in real time within the editor. Simply describe your requirements to the AI assistant in natural language and it will generate appropriate code snippets or autonomously write project-level code and cross-file code. Tell the assistant what kind of program you want to develop and it will provide relevant code
[02:24:16] or automatically create necessary files based on your description. With support for multiple programming languages and a rich plug-in ecosystem, TRAE IDE helps you build complete projects efficiently. So I want to give everyone a sense for what's happening in this segment of the world. So here's an independent review posting made last May, three months after TRAE's release to the world. The guy wrote,
[02:24:45] Meet TRAE AI, a free AI coding agent with model context protocol, MCP. He wrote, AI code assistants are flooding the market, but most still feel like chatbots taped to an editor. TRAE IDE takes a different route. It ships an integrated development environment with a built-in agent framework that parses your entire code base, talks to outside tools,
[02:25:15] through the model context protocol, and crucially, costs nothing to install. If you're still paying for a $20 monthly subscription, TRAE AI is an AI coding agent that offers local-first setup and a $0 price tag, making it worth a test drive. So what is TRAE? TRAE AI is a free AI coding agent with model context protocol that offers itself as a
[02:25:45] collaborative partner for software engineers. It's designed to fit into a developer's existing coding environment, not as a replacement, but as an intelligent AI assistant. TRAE provides budget relief. The main editor and completion model are free, removing the line item that has kept many finance and ops leaders from greenlighting AI pair programming pilots. Agentic workflow. Instead of a
[02:26:14] single do-everything helper, TRAE lets you spin up specialist agents, one for refactoring, another for writing tests, a third for documentation, with each AI agent getting its own prompt, tools, and guardrails. Enterprise-style data rules without enterprise pricing. Code stays on your machine. Any files briefly sent for indexing are wiped after embeddings are created.
[02:26:45] Hosting regions in the U.S., Singapore, Malaysia, etc. keep governance teams calm about residency. What does TRAE AI bring to the table? Working together. TRAE's development environment is built to work with existing developer setups. The goal is to improve how developers and AI can cooperate for better outcomes and faster project creation. Direct AI communication. Developers can talk to TRAE using straightforward language and simple instructions.
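The "specialist agents with their own tools and guardrails" idea described here can be sketched generically. To be clear, this is not TRAE's actual API, which we haven't seen; every class and function name below is hypothetical, just a minimal Python illustration of the pattern where each agent is restricted to its own small, allowed tool set.

```python
# A generic, hypothetical sketch of the specialist-agent pattern:
# each agent carries its own prompt and its own permitted tools,
# and calling a tool outside that set is refused. This is an
# illustration of the concept, not any real IDE's agent API.

from typing import Callable, Dict

class Agent:
    def __init__(self, name: str, system_prompt: str,
                 tools: Dict[str, Callable[[str], str]]):
        self.name = name
        self.system_prompt = system_prompt
        self.tools = tools  # guardrail: only these tools are callable

    def call_tool(self, tool_name: str, argument: str) -> str:
        if tool_name not in self.tools:
            raise PermissionError(
                f"{self.name} is not allowed to use {tool_name}")
        return self.tools[tool_name](argument)

# One agent for writing tests, another for documentation, each with
# a different (stubbed) tool set standing in for real capabilities.
test_agent = Agent("tester", "You write unit tests.",
                   {"run_tests": lambda path: f"ran tests in {path}"})
doc_agent = Agent("documenter", "You write docs.",
                  {"render_docs": lambda path: f"rendered docs for {path}"})

print(test_agent.call_tool("run_tests", "src/"))   # permitted
try:
    test_agent.call_tool("render_docs", "src/")    # guardrail trips
except PermissionError as e:
    print(e)
```

The point of the structure is exactly the guardrail the review mentions: scoping each agent to a narrow tool set limits what any single prompt can make it do.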
[02:27:15] And they can delegate work, facilitating a more interactive relationship between humans and AI. Custom AI assistance. TRAE offers a flexible system for setting up specialized AI agents. It comes with a standard agent called Builder for everyday tasks. Past that, developers can create their own group of AI helpers, each with specific tools, skills, and ways of working so the AI can be adjusted to fit precise project requirements. Connecting to other
[02:27:45] tools. TRAE can link up with different external applications. Currently, it uses a system known as Model Context Protocol, which allows its AI agents to gather information from outside resources to better complete the tasks they're given. Understanding project details. TRAE gains a good grasp of a project's specifics by looking at code repositories, information from online searches, and
[02:28:14] documents provided by users. Developers can also set up custom rules to fine-tune the AI's behavior, making sure it handles tasks exactly as intended. And smart code suggestions. As developers type, TRAE offers intelligent code completions, as it can anticipate what the developer is trying to write and automatically fill in code segments, helping speed up the writing process. The idea is to make the interaction feel natural,
[02:28:44] allowing developers to assign tasks or ask for help using simple commands. This approach could fundamentally change team dynamics, making AI less of a tool and more of a team member. And so in conclusion, he adds, the arrival of free, capable AI coding agents like TRAE AI isn't just another tech trend. It shows a maturing of AI into a practical aid for a
[02:29:12] highly skilled and often costly workforce. Its mix of free pricing, configurable agents, and tight privacy controls offers a low-risk way to explore agentic coding without rewriting procurement rules. For CTOs and engineering managers, the math is straightforward. Swap a paid co-pilot for a free, locally hosted agent system and redirect budget to
[02:29:41] GPU credits or headcount. While AI won't be replacing entire development teams anytime soon, tools that augment their abilities, especially free ones, are certainly worth trying. If your roadmap includes AI-assisted development, but your finance team keeps asking for ROI proof, TRAE may be the simplest yes you can give for the entire quarter. Okay, so I don't mean to
[02:30:09] suggest that this TRAE IDE-centric AI coding system is in any way super special. Quite the contrary, in fact. I'm sure the world is already being flooded with similar and similarly powerful AI-based solutions. I just wanted to share a sample of the tool that happened to be picked by the Chinese-language speaker who created this particular VoidLink malware. As is always the case for these sorts of
[02:30:38] things, my interest in sharing this on the podcast is to give this event, you know, the news that this event brings, some context. And as I said at the top of the show, unfortunately, today I truly fear the worst; the news is worse than bad. And I am unable to find a silver lining here. We're all familiar
[02:31:06] with the notion of asymmetric warfare, sometimes referred to as guerrilla war. The use of malware, any malware, to penetrate, infect, exfiltrate, and encrypt an enterprise's resources is inherently asymmetric. One lone malicious hacker hiding somewhere, anywhere on the internet,
[02:31:36] perhaps literally in his mother's basement, is able to single-handedly attack and significantly negatively impact the national economy of the United Kingdom in one well-placed attack on Jaguar Land Rover. It's the very definition of asymmetry. The problem with this emergence of AI and its
[02:32:05] expected application to the empowerment of all forms of coding is that I believe history and the evidence suggests that the bad guys will be gaining a far greater advantage from their malicious application of AI to create malware than the good guys will be gaining through their use of it
[02:32:33] to do what? It's not at all clear what the good guys can do that isn't already being done. In other words, I cannot see how the benefit from the application of AI to both sides is in any way even close to being symmetric. I believe that AI's value is extremely asymmetric here
[02:33:03] and that the asymmetric battle that's being waged for the past decade is about to become far more asymmetric. In years past, we've observed that hacker talent encompasses a wide range from the so-called script kiddies at the low end to the elite hackers at the high end. And we know that this also takes a pyramid shape with a
[02:33:33] great many lower-end wannabe hackers at the bottom and a much more rarefied few at the top of the pile. Recently, we've seen that the followers of this podcast have already been employing AI to create successful solutions that they would never have been able to create otherwise. And you, Leo, as a lifelong coder, could have written your news feed reader
[02:34:02] from scratch the old-fashioned way? I'd still be working on it. Yeah, exactly. Claude's AI served as a powerful accelerant. But we know from the testimony of our listeners that for many of them who were coding-adjacent but not coders, AI has now bridged that gap to allow them to create their own functioning tools that never existed before.
[02:34:33] So what AI has already done is completely eliminate coder wannabe script kiddies from the low end by empowering them to author their own powerful malicious code. They no longer need to follow somebody else's script. Any mischief they can think of to get up to, an AI will happily manifest in code just for them.
[02:35:02] Consequently, we are almost certainly facing a forthcoming explosion in the volume and variety of malicious attacking code. I would like to be able to imagine some form of silver lining for the defenders in this asymmetric war. But as I said, I have been unable to come up with any. What we
[02:35:30] see is an epidemic of misconfiguration and lazy configuration, communication failures and finger pointing, lingering old designs and practices and systems that remain online despite not having received any attention for years. AI is not going to fix any of that. We also see employees in positions
[02:35:59] of trust on internal enterprise networks being tricked into clicking malicious links and inviting malware inside the house. No form of fancy AI coding is going to fix any of those things. Every single one of those is a human factors failure. We already know how to fix every one of those things, but we haven't cared enough to do so. And there's
[02:36:28] reason to believe, I think, that we're about to pay the piper even more fully than we have been. A great many of the world's enterprises are sitting ducks, and entire new generations of would-be hunters who have been using slingshots have all just been up-armed with advanced cyber rifles. Kind of makes me want to sit down and try my hand at some malware myself.
[02:37:00] Got a three hour? Wow. Yeah. I mean, I don't think it's even. I think that the bad guys are going to jump on this. It's going to spread. They'll be sharing tips and tools and tricks within their communities. It's going to be a mess. And again, we know, I mean, yes, there are flaws in software.
[02:37:30] That's a problem. But those are not the stories that we're covering anymore, largely. It's big mistakes being made by enterprises, their employees, and their IT people that are just not, they're not fixing the things we already know how to fix. They're not patching servers for which there have been patches for years. And AI isn't going to help there. But AI is going to help the malware. It's job
[02:38:29] security for us, and that's the good news. Yeah, very interesting. Very interesting. You know, Leo, when I made the jump from three digits to four digits in my software to be able to handle this podcast, I'm glad that now we can go to 9999. We might have to at this rate. Fortunately, by then, it'll be our AIs doing the shows, not us. We'll be resting somewhere. Steve Gibson, he hangs his hat
[02:38:59] at GRC.com, the Gibson Research Corporation. That's where you'll find, of course, all of the great work that Steve does. Most of it's free giveaway stuff like ShieldsUP!. There are two paid programs there, his bread and butter. One is Spinrite, the world's best mass storage maintenance, recovery, and performance-enhancing utility, which everyone who has mass storage needs. The other, brand new, is DNS Benchmark Pro. Both of them are at GRC.com.
[02:39:29] He also, of course, has the podcast there. He's got several unique versions, a 16-kilobit audio version for the bandwidth-impaired, a 64-kilobit audio version for people whose ears are good enough. He also has transcripts written by an actual human, not AI, Elaine Farris. That's increasingly going to be a selling point, by the way. Oh, yeah, a human did this one, which makes it extra special. He also has the show notes there, which are really great. Every week,
[02:39:29] he puts a lot of work into this. It's 21 pages. I don't know how many thousands of words, but this is more than the weekly column he used to write for InfoWorld, right? Yeah. Actually, that was one page. Yeah. It was an L shape that fit around an ad. Probably you felt constrained by the number of words you were allowed. So now, no constraints. He does the news as much as it needs, no more, no less. If you want to get that
[02:39:58] emailed to you, you can also do that. Go to grc.com slash email. The main purpose of that page is just to whitelist your email address so you can email him with pictures of the week or comments or suggestions. But there are also two checkboxes below it. One for the weekly newsletter, the Security Now show notes. The other for a very infrequent email he'll send out when he has new products to tell you about. He's only sent it out once in his whole life. So don't count on a lot of email from
[02:40:28] Steve. I'm just saying. There's also lots of articles, lots of great stuff. It's the kind of site you spend an hour browsing and the time just whizzes by and you forget where you've been, and oh my goodness, there's another thing and another thing. grc.com. We have copies of the show at our website, twit.tv slash sn for Security Now. There's a YouTube channel dedicated to Security Now. There's also, of course, the easiest way to get it: subscribing in your favorite podcast client. You'll get it automatically as soon as we're done.
[02:40:58] There's audio and video. We have the video version, so you could subscribe to one or the other or both. Leave us a five-star review if you will. Let the world know about Security Now. We'll be back here next Tuesday, as I said, round about 1330 Pacific time, 1630 East Coast time, 2130 UTC. You can watch us live. We'll be looking for you then. Have a great week, Steve, and I'll see you next time on Security Now.
[02:41:27] Thanks, buddy. Till then, in February. Hey everybody, it's Leo Laporte. It's the last week to take our annual survey. This is so important for us to get to know you better. We thank everybody who's already taken the survey, and if you're one of the few who has not, you have a few days left. Visit our website, twit.tv slash survey26, and fill it out before January 31st. Thank you so much. We appreciate it.
[02:41:53] Security Now.
