Australia's nationwide social media ban has put tech's age verification tools under the spotlight, exposing the flaws and privacy risks in today's facial detection systems and sparking worldwide debate about what's coming for the rest of us.
- Home Depot's puzzling reluctance to close a bad hole.
- GNOME's shell extension manager is unhappy with AI.
- How attacks on open source repositories compare in 2025.
- China's researchers have taken aim at the US power grid.
- How bad has the React2Shell vulnerability turned out to be?
- More new React vulnerabilities.
- Apple moves to iOS 26.2.
- Let's Encrypt crosses one billion servers managed.
- A DNS Benchmark update.
- Some interesting listener feedback, then...
- How things are going with Australia's social media ban, and what we're learning.
https://www.grc.com/sn/SN-1056-Notes.pdf
Hosts: Steve Gibson and Leo Laporte
Download or subscribe to Security Now at https://twit.tv/shows/security-now.
You can submit a question to Security Now at the GRC Feedback Page.
For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written: SpinRite 6.
Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors:
[00:00:00] It's time for Security Now. Steve Gibson is here. We're talking about, as usual, all the security problems like the flaws, the holes in the U.S. power grid. Apparently China has been using it for war games. Apple's move to 26.2 is about security. We'll talk about the end of the line for long-term certificates and what's going on with Australia and their social media ban.
[00:00:29] Steve's got an update. All of that coming up on Security Now. Podcasts you love. From people you trust. This is TWiT. This is Security Now with Steve Gibson. Episode 1056, recorded Tuesday, December 16th, 2025. Australia.
[00:00:54] It's time for Security Now. Fasten your seatbelts, put on your propeller hats, and get ready for Mr. Steve Gibson, our man of the hour and our security guru. Hello, Steve. I just love it. You look at me like, oh, there he is. Hey, Steve. You know what? We're going to get a double dose of Steve this week, because you're going to be on TWiT on Sunday for our holiday show. It's going to be very fun. Yeah. We've had you on the holiday show many times; it's always wonderful to have you on. I've got to dust off my Santa cap. I've got it around here somewhere. Yeah.
[00:01:24] I still have... I guess it was the Grinch. You were the Grinch. You had the hands. And I remember that Paul Thurrott just loved that I kept doing this. It'll be fun. Who else is on? Micah. Micah. That's right. So it's going to be, as always with our holiday show, kind of relaxed. We'll probably look back at the big stories of the year, but mostly it's just kind of sitting around. Yeah. Enjoying each other's company.
[00:01:53] What the heck happened in 2025. So, speaking of what the heck happened: I'm sure in the past I've used a single-word title, and today we have one. It's just "Australia." It says it all. It does. It does. Wow. I mean, the entire world's attention has been focused on this, and the results were somewhat different than
[00:02:22] we thought. I spent some time bringing myself current. I've got two pieces of listener feedback that we'll cover at the end of the show, one from an Aussie who says the way things look there is different from the way it's been characterized. And I don't mean to drag us week after week through age verification, but,
[00:02:46] you know, it is the big problem to solve right now. And it's in our wheelhouse, because crypto is the solution, yet we're not using it. We're using, well, what do you think? How old do you think he looks? Oh, I don't know, his face is scrunched up, and it makes it a little hard to tell. Such a weird way to do it. It is so wrong. Anyway, we're going to talk about that, because it actually is
[00:03:11] creating a new set of problems. But first we're going to look at Home Depot's puzzling reluctance to close a very bad hole in their security. GNOME's shell extension manager is unhappy with AI; we're going to do a little bit of a deep dig there, and any of our coders or code-curious listeners are probably
[00:03:36] going to find that interesting. Also, we're going to look at how attacks on open source repositories compare, now that we're ending 2025, with what 2024 looked like. Some surprising information about the degree to which Chinese researchers have taken aim at the U.S. power grid,
[00:03:59] and the specific nature of that aim. It's worse than we thought. Also, how bad has that React2Shell vulnerability turned out to be? We have numbers, as well as some new React vulnerabilities, which are, interestingly, a consequence of researchers looking at the React code. We've seen that happen before, too. And I'm going to briefly touch on, as you did on the
[00:04:27] previous podcast, MacBreak Weekly, Apple's move to iOS 26.2, and some interesting zero-days that were discovered there. Let's Encrypt has a big announcement. They've had a series of announcements; a biggie is projected for 2026, based on the shape of the curve that they're on. I think they're going to make it. I've got a check-in on the progress of my DNS
[00:04:56] benchmark after its first 10 days; I've learned some interesting things. We've got some listener feedback. And then, as I said, Australia. So I think another great podcast for our listeners. Good day, Australia. All right. Yes, it will be a good day for us today, a Security Now day. I always look forward to Tuesday with Steve Gibson. Before we get to the Picture of the Week,
[00:05:23] which apparently is timely for both of us. I'm not sure why; I haven't looked at it yet, but we'll find out in a moment. I would like to talk about our sponsor for this segment of Security Now: Zapier. Zapier does a lot of the work behind the scenes on our shows; you may not even know it. Zapier is the tool I've been using for years to automate workflows. It's one of the ways we prepare our shows. When I find a new story, I bookmark it. Zapier picks up the bookmark,
[00:05:50] posts it on Mastodon, toots it on our TWiT news feed on Mastodon, and then also formats it and puts it in a Google spreadsheet so the editors can pick it up and put it in our rundowns. I mean, it's just a really wonderful tool that I use all the time without even thinking about it, because once you set it up, it just works. Well, now Zapier is even better, and now I'm thinking of all sorts of new ways I can use it.
[00:06:17] We cover a lot of things on this show, but over the last few months, one of the top topics on all of our shows has been AI, right? Let's face it, talking about trends doesn't help you be more efficient at work. For that, you need the right tools. How many times have you sat down at the AI prompt going, well, I don't know, what should I say? Well, with Zapier, you've got the right
[00:06:39] tool. Zapier is how you break the hype cycle and put AI to work across your company. Zapier is my favorite workflow automation tool. I just love it. But now it's even better, because they've added AI orchestration. It is now the premier AI orchestration platform. Zapier is how you can actually deliver as a company on your AI strategy, not just talk about it. With Zapier's AI orchestration platform, you can bring the power of AI to any workflow.
[00:07:09] I'm thinking of some of the workflows I already have that I want to add AI to. Of course, you can create brand new AI-based workflows as well. You can do more of what matters. Use Zapier. It's like, was it Archimedes who said, give me a lever and I can move the world? Zapier is that lever. It's the thing that gives you so much power, and now you can add AI to it. You connect top AI models like ChatGPT or Claude, or, well,
[00:07:38] they have many of them, to the tools your team's already using: Google Drive, Microsoft Office, all of that. Zapier has over 3,000 integrations, so even if I started to list them, I could never finish. It works with everything. I use it to make my lights come on at sunset, things like that. So you can add AI to any of those workflows or create your own
[00:08:03] AI-powered workflows, like an autonomous agent or a customer chatbot, or anything you can imagine. You can orchestrate it with Zapier and AI. The thing about Zapier that's really important: it doesn't require technical expertise. It's for everyone. You don't have to be a tech expert. And the proof is that teams have already used Zapier to automate over 300 million AI tasks. 300 million.
[00:08:29] Join the millions of businesses transforming how they work with Zapier and AI. Get started for free by visiting zapier.com/securitynow. That's Z-A-P-I-E-R dot com slash securitynow. Thank you, Zapier, for your support of Security Now and the important work Steve is doing here. Steve, we can go ahead with the picture. Zapier means more happy? Happier?
[00:08:56] Zapier makes you happier. Well, no, I'm wondering if Zapier means more zappy. Well, they call the automations zaps, so maybe I'm more zappy. I'm zappy. I'll propose that to them as their next slogan. How about that? Okay. So, our Picture of the Week had no caption. I just couldn't think of one that
[00:09:25] did any better job than the cartoon itself. It's a four-frame cartoon. We have a young girl sitting on Santa's lap, as is customary, and she says to Santa, "For Christmas I want a dragon." And Santa says, "Okay, be realistic." So she says, "64 gigs of RAM." And he says, "What color do you want your dragon?"
[00:09:56] That is pretty awesome, and very timely. Yes. Both of us, it turns out, responded to this RAM crisis. Yeah. I don't buy PCs often, probably at a pace of one twentieth of yours, Leo, I'm just estimating. I'm actually sitting in front of one right now. I know. And the IRS, you've got to explain that. It's like, sorry, IRS, I had to have it.
[00:10:25] No, they understand. Of course, it's a business expense. Yeah. I'm talking to you in front of a Windows 7-era Gigabyte motherboard from, I don't know... I think Thunderbolt was an innovation when I bought this thing, so it's been a while. Because, you know, it's great. It works. Nowadays computers really are so fast. It used to be you'd have to buy one
[00:10:50] to keep up with Windows being so sluggish; you'd have to buy more processor. But nowadays you're buying it because you want to be able to plug more displays into the box. Right. And then at my other location, my place with Lori, I've got one of those really low-profile Intel NUCs, and those are great. I love them. Well, they are, except that
[00:11:15] it's sitting on the desk behind a monitor, and its fan just drives me nuts. Because, to be so small, it's got to force a lot of air through, so there's a little small flat disc fan that's noisy. Oh my God. The only good thing is
[00:11:41] I can always tell when some process is hogging the CPU. So I fire up Task Manager, and it shows very high usage on some random thing. Right. And it's typically anti-malware that's decided to wake up and bog the whole system down while it rescans my drive. So anyway, the point is, because of the crazy, and we talked about it last week, the crazy
[00:12:10] recent explosion in RAM prices, which is being driven because all these data centers being set up have hogged all of the RAM production capacity of the world. The RAM makers are going, well, if you want it that badly, here's our new price. And so,
[00:12:30] anyway, I purchased a Lenovo, Gen 3, Core i9, whatever, triple-scoop thing. It's a small form factor, because that's the right thing. For a geek... most geeks could rip off all the part numbers of everything they bought, but you just say "the triple scoop." Yeah, that's right.
[00:12:57] And it doesn't matter anymore, does it? It's just whatever's the latest. Precisely. I'm still kind of Intel-centric. I feel more comfortable with an actual Intel. I think for laptops Intel is probably still the right way to go; I think for desktops AMD is the champ. Well, this is a desktop, and I still said, fine. Intel's come a long way; they were struggling. It does have the Nvidia, whatever the hell it is.
[00:13:26] Oh, good. Something or other. It's some kind of neural thing. So maybe something super will happen. Double dip to extra crunchy. That's right. I know that it has three DisplayPorts, because, as I mentioned before, my setup is three screens. I made the mistake of having a high-resolution center screen and different resolutions on the wings, and that's disastrous. When you drag something across, it goes big or small. Yeah, that's not good.
[00:13:56] No, it all has to be the same. In fact, I don't think Windows works right when you're dragging; it's not happy. Steve, little tip here: Windows doesn't work right, period. A little inside information. Yes, you're right. And is it today? Oh, I think it is today. I have a little bit of a beef with Microsoft, not surprisingly; we've gone several weeks without one. I'm going full Linux. I did the same thing. I bought a Lenovo,
[00:14:21] but I bought a laptop. This is the ThinkPad X1 Carbon, and it's a Core Ultra. But I'm putting Linux on this, because I'm not a masochist. I'm looking forward to using it. You were explaining on MacBreak Weekly how you're basically severing ties with the real world. And we understand, Leo. It's not that. It's just that we had a story about a poor guy, who's actually
[00:14:47] in our club, an Australian of all things, who got locked out of his Apple account and is getting no response from Apple. And that means he's lost everything from 20 years. And this is true of all of these companies; they're kind of siloed. And so I want to go with Linux, because I just feel like then I'm the user; I'm not going to be the victim. I don't want to be there.
[00:15:15] Well, it's a lesson that I learned at some point with SpinRite. People were saying, hey, hard drives are less expensive than SpinRite. Why would I use SpinRite to fix my dead hard drive? It's like, hey, it's your data, dummy. It's not the hard drive, it's what's on it. No one cares about the hard drive; I've got those for doorstops. No, it's what's on the hard
[00:15:40] drive that you want to get back. So, yeah, I get it. And boy, I'm noticing the difference today versus when we were growing up, when you actually had to put film in the camera and then wind the knob and throw away the first couple of frames because they were going to be exposed to light. And you just didn't indiscriminately take pictures of everything, because that was expensive. You were actually consuming something. Mylar was dying
[00:16:10] on your behalf, and silver iodide or something was getting exposed. I don't know. It's like 50 cents for every shot. But yeah, now it's a different world. But I agree with you that archiving this photo collection we now just take for granted matters. What if it did disappear? What if you lost your Google account? What would happen? What if you lost your Apple account? What would happen? Paul Thurrott very nearly lost everything when his Google account was canceled. He was able to
[00:16:37] get it back, but I think it reminds us that it's on us to make sure that we are whole and not dependent on these third parties. And if anything, we've seen a complete collapse of user support, of true customer concern. If something doesn't work, good luck. You're going to get some robot chat box in the lower right corner of your
[00:17:04] screen that says, how may I help you today? And you ask it a question, and it comes back with something useless, and you say, no, can I talk to a real person, please? Well, and it's even worse if you get the stuff for free. Yeah. We don't have real people. No, I always said Google gave you support via a Python script. And boy, speaking of that, I've seen more mistakes from Google's little helpful AI. When you ask it something, it just makes up crap. I've mentioned it before;
[00:17:31] it just happened to me yesterday when I was asking it something. I wanted to know if I could limit the postings on XenForo, the forum software we use, for specific people, because we've got one person who just... it's like he believes he's somehow getting remunerated based on character count. Anyway, of course Google said, yes, you can, here's what you do. And it made up a complete bunch of nonsense.
[00:18:01] The feature just didn't exist, before or after I asked. There's another problem that we're having. Windows 11 has something called Smart Application Control. Well, right off the bat, the name should tell you something. There have been some users of the benchmark who have been blocked by Smart Application Control. This is not like Windows Defender, where you can say, okay, I know
[00:18:30] you're nervous. Because, see, the problem is we sign... and I talked about this a long time ago, a couple of years ago... we custom-sign everybody's individual download with their name and license in the code, so that it is pre-licensed for them. Well, that means that every copy is unique, which means that Microsoft can't learn that thousands of people have downloaded this same program.
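[Editor's note: a minimal sketch of why per-user signing defeats hash-based reputation. This is not GRC's actual build process; the payload bytes and license format below are made up purely for illustration. The idea is that reputation systems key off a file's hash, and stamping a different license string into each download gives every copy a different hash, so no copy ever accumulates reputation.]

```python
import hashlib

# Stand-in bytes for the real executable; purely hypothetical.
PROGRAM = b"...benchmark machine code..."

def personalized_build(licensee: str) -> bytes:
    """Simulate stamping a per-user license block into the binary."""
    return PROGRAM + b"\nLICENSE:" + licensee.encode("utf-8")

h_alice = hashlib.sha256(personalized_build("Alice Example")).hexdigest()
h_bob = hashlib.sha256(personalized_build("Bob Example")).hexdigest()

# The two downloads are the same program, but to a hash-based reputation
# system they are two never-before-seen files.
print(h_alice == h_bob)  # False
```

So a system counting downloads per file hash sees a population of exactly one for every personalized copy, which is presumably why Smart App Control has nothing to go on.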
[00:18:57] We never got any complaints. So the good news is we've had almost no trouble, but two people that I'm aware of have said that Smart App Control said no. Well, okay, get this, Leo. Unlike Windows Defender, you can't say, I know, but I know Steve, and you trust him, let this through. No;
[00:19:23] if you turn it off, you can never turn it on again. Get this: it can only be turned on during a fresh, from-scratch install of Windows 11, because they're saying, oh, it has to be on from the beginning, right, so that we can assure you that there was never a moment when your system was
[00:19:49] vulnerable. And it can absolutely cause trouble. I mean, it won't let any unsigned software run. You're just SOL if you're trying to run anything that is not signed. So you can imagine that enterprises have a problem with that, because they're running internal software. So it can be turned off, and most enterprises do have it off. It will never be
[00:20:14] on if you've upgraded Windows from 10 to 11, because, again, well, we don't know how contaminated your system already is. The only way to have it on is a fresh install. Now, oftentimes OEMs will start off by turning it on, but then they get so many customer complaints about it being a problem that now OEMs are shipping with it off, because it's just not worth
[00:20:37] it. You still have Windows Defender. Anyway, my point is, it's not smart. The first time I encountered it was yesterday, and I asked Google, what is Smart App Control, and can I turn it off? And Google said, oh yes, here's what it is, and yes, you can turn it off. And it gave me instructions to turn it off, which technically are correct,
[00:21:04] but it didn't explain, of course, that once it's off, it can never be turned back on. So it's not off for just one application. I had it a little confused with Windows Defender anyway. Wow, the world we live in. You're in Linux, and I understand why. That's why I'm in Windows, because I have customers. No, I understand. I'm in the very enviable position where I owe
[00:21:27] nothing to anybody and I can do what I want, and I'm very happily using Linux. I will say, however, I've been very impressed with how many people are running the benchmark under Linux and on their Apple machines. So we have a strong crossover. Even though I'm producing Windows-only software, thankfully the Wine people have been working overtime. It's a miracle. That's another reason for your loyalty to Intel: you're writing
[00:21:57] x86 assembler. You can't be using that smart Snapdragon stuff. Although Rosetta works great. No, it runs on Intel, on Windows ARM, and on Mac ARM machines. Although this is the last operating system version it will run on, because they're killing Rosetta. I know, but there's something else; not Rosetta, but there are other emulators. Yeah. I use Parallels all the time. There's VMware Fusion, and Parallels, and there's
[00:22:26] QEMU. There are a number of emulators that'll let you do it. And this is the key: you wouldn't want to use that for SpinRite, because you don't want an emulation layer, but your tool is looking at DNS speeds. That's independent of how it's running. Yeah. Well, okay. Let's talk about Home Depot. Oh, okay. So last week I used the phrase,
[00:22:50] oh yeah, well, make me, in reference to the sort of conduct that's probably most common in adolescent males of around high school age. I was reminded of that by TechCrunch's reporting on Home Depot's apparently taking the same unfortunate tactic, even though one could argue they're grownups.
[00:23:13] TechCrunch's headline was "Home Depot exposed access to internal systems for a year, says researcher." And actually, it's nearly two years. Zack Whittaker reported for TechCrunch: A security researcher said Home Depot exposed access to its internal systems for a year after one of its employees
[00:23:37] published a private access token online, likely by mistake. I would say definitely by mistake, unless he was, you know, really disgruntled. The researcher found the exposed token and tried to privately alert Home Depot to its security lapse, but was ignored for several
[00:24:04] weeks. The exposure is now fixed, after TechCrunch contacted the company's representatives last week. So: the security researcher was Ben Zimmerman, who told TechCrunch that in early November he found a published GitHub access token belonging to a Home Depot employee, which was exposed
[00:24:29] sometime in early 2024. So, as I said, coming up on two years. When he tested the token, Zimmerman said that it granted access to hundreds of private Home Depot source code repositories hosted on GitHub and allowed its holder to modify their contents. Yikes. Okay. So just to pause here for a
[00:24:54] second: we don't know what those hundreds of private Home Depot source code repositories might have contained or might still contain, but having a token loose on the internet that permits write access to them ought to have kept anyone from resting until it was invalidated. I can't imagine someone reporting this and just being blown off. We haven't encountered this Ben Zimmerman before, but Zack provided a link to Ben's
[00:25:23] website, where he introduces himself, writing: Hey, I'm Ben. I'm a security researcher from California. I've been awarded over $20,000 in bug bounties for securing critical infrastructure and the open web. And he lists a bunch of his discoveries on his page, so this guy's the real deal. Zack continues: The researcher said the keys allowed access to Home Depot's cloud infrastructure,
[00:25:50] including its order fulfillment and inventory management systems and code development pipelines. Right? Order fulfillment, among other systems. Home Depot has hosted much of its developer and engineering infrastructure on GitHub since 2015, so for the last decade, according to a customer profile on GitHub's website. Zimmerman, the researcher, said he sent several emails to Home Depot
[00:26:18] but never heard back, nor did he get a response from Home Depot's chief information security officer, Chris Lanziata, after sending a message to him over LinkedIn, Zimmerman told TechCrunch. In other words, Ben really tried every way he could to contact Home Depot and say hello.
[00:26:46] Zimmerman told TechCrunch that he has disclosed several similar exposures in recent months to companies which have thanked him for his findings. He said, quote, Home Depot is the only company that ignored me, unquote. Given that Home Depot offers no way to report security flaws, such as a vulnerability disclosure or a bug bounty program, Zimmerman finally contacted TechCrunch in an effort to get the exposure fixed.
[00:27:10] So, right: using TechCrunch's strength. When reached by TechCrunch on December 5th, Home Depot spokesperson George Lane acknowledged receipt of our email but did not respond to follow-up emails asking for comment. Wow. The exposed token is no longer online, and the researcher said the token's access was revoked
[00:27:33] soon after our outreach, TechCrunch's outreach. We also asked Lane if Home Depot has the technical means, such as logs, to determine whether anyone else used the token during the months it was left online, you know, almost 24 months, to access any of Home Depot's internal systems. We did not hear back.
[00:28:01] So, okay. The question at the end of Zack's reporting is, of course, exactly the one I was asking myself. Ben was able to date the creation of the token to early 2024, so we're coming up on two years of write-access exposure to many of Home Depot's critical-appearing internal systems by way of the software that runs them. Ben's a good guy, a security researcher who's out there working to
[00:28:29] improve the security of the world. But we know that within the population of people who may be poking around looking for security vulnerabilities, good guys like Ben are almost certainly in the minority. Access to hundreds of Home Depot's internal operations source code repositories would be immensely valuable to any attacker who wants to find some way to threaten and extort Home Depot,
[00:28:57] which, as well as you know, is a well-known U.S. entity with deep pockets and apparently not much in the way of security practices. So do they have logs? Do they even care if they have them? We don't know anything about Home Depot's internal IT culture, but what we do know doesn't look good.
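[Editor's note: tokens like the one Ben found are exactly what automated secret scanners hunt for, because GitHub's tokens carry recognizable prefixes (ghp_, gho_, etc., and github_pat_ for fine-grained tokens). Here is a minimal sketch of that idea; the length bounds and the sample "leaked" config line are illustrative assumptions, not how Ben or GitHub's own scanner actually works.]

```python
import re

# A minimal sketch of the pattern matching secret scanners use to spot
# leaked GitHub tokens in published code. Prefixes follow GitHub's token
# format; the length bounds here are approximate, not authoritative.
TOKEN_RE = re.compile(
    r"\b("
    r"gh[pousr]_[A-Za-z0-9]{36,}"      # classic / OAuth / app / refresh tokens
    r"|github_pat_[A-Za-z0-9_]{22,}"   # fine-grained personal access tokens
    r")\b"
)

def find_leaked_tokens(text: str) -> list[str]:
    """Return any substrings of `text` that look like GitHub access tokens."""
    return TOKEN_RE.findall(text)

# Hypothetical example: a config file accidentally committed with a token.
sample = 'GITHUB_TOKEN = "ghp_' + "A" * 36 + '"  # oops'
print(find_leaked_tokens(sample))  # finds the planted token
```

A real scanner would also verify a candidate token against the API before alerting, which is presumably how Ben confirmed what the token could reach.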
[00:29:22] So, not the way you need to operate. And we need to remember, I think, that not all companies are IT-centric, which I think is crazy in today's world. You know, we're in the process of a lengthy remodel of a condo that my wife and I will be moving into in a month or two, and, you know,
[00:29:49] Home Depot, their online presence, and the fact that they've got some retail outlets near us have been seeing a lot of use from us in the last few months. So a company that is on the internet cannot afford not to have an IT culture that is up to speed, and there sure doesn't seem to be one. I thought you were going to say we're building a house and we're going to put
[00:30:14] locks on the doors. We can assure you of that. Maybe security cameras in there too, huh? That too. You betcha. Wow. Okay. So this next story is really good. In fact, it's so good, Leo, I think we should take a break before I get into this goodness, because otherwise we're going to get well past our first break point. So let's do that, and then we're going to talk about what GNOME's shell
[00:30:44] extension manager, one of them, had to say about AI and its contribution to their efforts. Yeah, I'm well aware of this. I think they had to, but we'll talk about it in just a little bit. First, though, since Steve wants to take a break, I would like to talk about our sponsor for this segment of Security Now: ThreatLocker. I'm very excited about ThreatLocker, because I think Steve and I are going to be working with them in the spring at their big Zero Trust World event.
[00:31:11] ThreatLocker is a very, very well-known maker of zero trust solutions that are easy to install, affordable, and highly effective in stopping supply chain attacks and zero-days. There is a real reason why you need ThreatLocker: ransomware. I don't have to tell you this; if you listen to the show, you know it is harming businesses everywhere. ThreatLocker can help prevent you from becoming the
[00:31:38] next victim. ThreatLocker is a zero trust platform, and this is the key: it takes a proactive, deny-by-default approach. Deny by default. That means it blocks every unauthorized action, protecting you from both known and unknown threats. It's the only way: you have to explicitly say it's okay for this program to do this, it's okay for this user to do this.
[00:32:05] You have to explicitly say that, and this is incredibly powerful. And that's probably why ThreatLocker is trusted by companies that just can't afford any downtime. Global enterprises like JetBlue; an airline goes down for five minutes and it is a crisis, right? The Port of Vancouver, same thing: infrastructure. ThreatLocker shields them, and can shield you, from zero-day exploits and supply chain attacks while providing complete audit trails for compliance. That's another benefit
[00:32:34] of this: because if everything that happens is proactively approved by you, well, you've got an audit trail of everything that happened and everybody who did it. It's great for compliance. More cyber criminals are turning to something called malvertising. This is something Steve's been talking a lot about: how hard it is to protect your company
[00:33:00] from employees acting completely innocently, and to do this, you really need more than the traditional security tools. Malvertising is a perfect example. Attackers create convincing fake websites. They impersonate popular brands, AI tools, software applications. They distribute them through social media ads and through hijacked accounts. And then, and this is the most evil part, well, actually it's all evil, but they use legitimate ad networks. They buy ads, and
[00:33:30] all of these are automated on all of the ad networks; there's nobody checking. So they buy these ads to deliver malware, which means your employee, browsing legitimate sites on a work system, will see these ads. They can't help it. They don't have to seek it out; it's being thrust at them. Traditional security tools usually miss these attacks, because they are
[00:33:55] clever. They use fileless payloads. They run in memory. They exploit trusted services that bypass typical filters. But not ThreatLocker. ThreatLocker's innovative Ringfencing technology strengthens endpoint defense by controlling which applications and scripts can access or execute. It's as simple as that, and it contains potential threats. Even if those malicious ads successfully reach the device,
[00:34:22] they cannot run because you haven't approved them. ThreatLocker works across all industries. It supports Windows and Mac environments. It provides 24-7 US-based support, best support ever, and enables comprehensive visibility and control. Jack Senesap is the director of IT infrastructure and security at Red Nurse Markets. They use ThreatLocker. Jack says, quote, when it comes to ThreatLocker, the team stands by their product. ThreatLocker's onboarding phase was a very good
[00:34:52] experience and they were very hands-on. ThreatLocker was able to help me and guide me to where I am in our environment today. Visit ThreatLocker.com slash twit to get a free 30-day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's ThreatLocker.com slash twit. And for a limited time, if you use this offer code ZTWTwit26,
[00:35:18] you're going to get $200 off registration for Zero Trust World 2026. It's in Orlando. ZTWTwit26, 200 bucks off registration for Zero Trust World 2026. That gives you access to all sessions. It gives you hands-on hacking labs, gives you meals. There is a very famous after party they do every year. It's the most interactive hands-on cybersecurity learning event of the year. It's
[00:35:44] March 4th through the 6th in Orlando. And I think it's okay to say we're going to be there. I'm very excited. Be sure to register with the code ZTWTwit26. We'll see you at Zero Trust World. We'll see you and ThreatLocker at Zero Trust World. And I'm very excited about that, Steve. It's going to be a lot
[00:36:08] of fun. It's going to be fun. And we're probably okay to say we're the last presentation of the first day. Followed by a cocktail party. So you'll be around. Yeah, you'll have a chance to buttonhole us and talk to us. So that's going to be, I cannot wait. And Orlando, I think, is a fun place to go. It's going to be warmer than it is here. It's got stuff. It's got stuff. Look, we have a party going
[00:36:32] up. Okay, on we go with the show. So a recent posting by one of the guys who's taken on the job of managing GNOME's shell extensions was interesting, and I wanted to share it. But first, just to be clear about what GNOME is, for those who may be more familiar with the Windows or Mac world: GNOME is to Linux and Unix-like operating systems
[00:37:01] what Explorer and the Windows desktop are to Windows, or what Finder and the macOS UI are to macOS. So it's the UI. It's the desktop, you know, the file manager and so forth. All three, GNOME, Explorer, and Finder, serve that role for their respective environments. And GNOME is spelled G-N-O-M-E.
[00:37:27] It was originally an abbreviation for GNU Network Object Model Environment, but since then it's taken on a life of its own, and people just know it as GNOME. Okay. So a GNOME shell extension is an add-on that adds a feature to the GNOME desktop running on Linux. So here's what
[00:37:50] one of the shell extension managers wrote last week. He said, since I joined the extensions team, I've only had one goal in mind, making the extension developers' job easier by providing them documentation and help. I started with the port guide and then I became involved in the reviews by providing developers code samples, mentioning best practices, even fixing the issues myself and
[00:38:19] sending them merge requests. Andy Holmes and I spent a lot of time writing all the necessary documentation for the extension developers. We even made the review guidelines very strict and easy to understand with code samples. Today, extension developers have all the documentation to start with extensions, a porting guide to port their extensions, and a very friendly place
[00:38:46] on the GNOME extensions matrix channel to ask questions and get fast answers. We now have a very strong community of GNOME shell extensions that can easily overcome all the difficulties of learning and changes. The number of submitted packages is growing every month, and we see more and more people joining the extensions community to create their own extensions. Some days,
[00:39:12] I spend more than six hours a day reviewing over 15,000 lines of extension code and answering questions from the community. In the past two months, we've received many new extensions. This is a good thing since it can make the extensions community grow even more. But there is one issue with some packages.
[00:39:37] Some devs are using AI without understanding the code being produced. This has led to receiving packages with many unnecessary lines and bad practices. And once a bad practice is introduced in one package,
[00:39:57] it can create a domino effect, appearing on other extensions. That alone has increased the waiting time for all packages to be reviewed. At the start, I was really curious about the increase in unnecessary try-catch block usage in many new extensions being submitted.
[00:40:21] So I asked, and they answered that it is coming from AI. Just to give you a gist of how this unnecessary code might look. Okay. And then in his posting, he gives us a sample of this code that I'm going to dissect here in a second.
[00:40:41] So he provides a sample of the code that he actually sees in submitted GNOME extension source, and it's got a whole bunch of lines. And then he says, instead of simply calling a function, super.destroy, which you clearly know exists in the parent, and then he basically shows all of those lines.
[00:41:10] And then basically a single call is all you need. So he says, at this point, we have to add a new rule to the review guidelines. Any packages with unnecessary code that indicate they are AI generated will be rejected. This doesn't mean you cannot use AI for learning or fixing some issues.
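The posted sample itself isn't reproduced in these notes, but based on the description, the pattern looks roughly like the following reconstruction. The class names here are invented for illustration; only the shape of the bloat, a typeof guard plus a try/catch wrapped around a call known to exist, is taken from the episode.

```javascript
// A stand-in parent class; in a real GNOME shell extension this
// would be something like a St.Widget or Clutter actor subclass.
class Parent {
  destroy() {
    this.destroyed = true;
  }
}

// The AI-generated shape: a typeof guard AND a try/catch wrapped
// around a call that is known to exist and cannot fail.
class BloatedChild extends Parent {
  destroy() {
    try {
      if (typeof super.destroy === 'function') {
        super.destroy();
      }
    } catch (e) {
      // This handler can never run in this environment.
      console.error(`Failed to destroy: ${e.message}`);
    }
  }
}

// What the reviewer asked for: the single direct call.
class LeanChild extends Parent {
  destroy() {
    super.destroy();
  }
}
```

Both classes behave identically; the guard and the exception handler in the first version can never do anything, which is exactly the reviewer's complaint.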
[00:41:39] He writes AI is a fantastic tool for learning and helping find and fix issues. Use it for that, not for generating the entire extension. For sure in the future, AI can generate very high quality code without any unnecessary lines. But until then, if you want to start writing extensions, you can always ask us in the gnome extensions matrix channel.
[00:42:10] Okay. So for people who, as I said, are coders or code adjacent or code curious, this is just a perfect example of many different things going on here. So we're going to look at it and understand this manager's annoyance.
[00:42:37] So we can talk about what AI is doing here and what's wrong. So first of all, modern high-level languages have a construction known as try/catch. And Leo will be glad to know that this concept first appeared back in the 1960s with Lisp, which used the semantics catch and throw to essentially do the same thing.
[00:43:06] So the idea behind this is that if some code might produce an error at runtime, not when you're compiling it, when the syntax parses, but when you're actually running it, when something bad happens. Like you tried to divide something by zero. You can't, right? That's illegal. So that produces an error.
[00:43:28] So the idea is that if you have some code that might produce an error at runtime, we don't want the entire program to just give up and explode. We want to have the opportunity, we, the coder want to have the opportunity to contain the problem and to possibly handle it ourselves in some more of our own code.
[00:43:49] So the suspect code, code that might cause a problem, is placed inside what's called a try block, which tells the runtime manager to literally try doing this.
[00:44:07] The try block is then followed by a catch block that's used to catch any runtime error that might unexpectedly occur while we're executing the code inside that first try block. So in other words, we're telling the runtime manager, while code is executing inside this try block, don't freak out if anything bad happens.
[00:44:33] Simply stop what you're doing for us there and execute the code we have provided for this purpose in the catch block, which immediately follows the try block. And we'll take it from there. So this allows code to be somewhat self-healing and to handle its own errors internally rather than simply crashing and saying, you know, bam, this program has died. Okay.
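As a minimal illustration of the mechanism just described (a made-up example, not from the episode), here's try/catch in JavaScript, the language GNOME shell extensions are written in. One caveat: JavaScript itself happily returns Infinity for division by zero, so this sketch throws the runtime error explicitly:

```javascript
function divide(a, b) {
  // JavaScript would return Infinity here rather than fail, so we
  // throw our own runtime error to model the "illegal operation" case.
  if (b === 0) {
    throw new Error('division by zero');
  }
  return a / b;
}

let result;
try {
  // The suspect code goes inside the try block.
  result = divide(10, 0);
} catch (err) {
  // Instead of the program exploding, control jumps here and
  // we get the chance to handle the error in our own code.
  console.log(`Recovered from: ${err.message}`);
  result = 0;
}
// Execution continues normally after the try/catch.
```

If `divide` had succeeded, the catch block would simply be skipped; the program only pays the price of the handler when something actually goes wrong.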
[00:45:03] So now let's look at the specific case of this gratuitous AI-generated code which this manager posted. In the example the manager provided, we have some code in the try block that, first of all, cannot possibly fail. The code is already being extremely cautious.
[00:45:32] It first checks to see whether the super object contains a function named destroy. The test for that, just asking the question, does this exist, cannot produce an error. It will either return true or false. Either the super object exposes a destroy function or it doesn't.
[00:46:01] And then, the way that conditional is written, only if the super object does expose a destroy function will that destroy function then be called on that super object. So this conditional that's wrapped in a try block cannot fail.
[00:46:24] It cannot cause an error that could require the try catch exception handling mechanism to be invoked. It can't happen.
[00:46:35] But more than this, we learn from the context that the manager shared with us that whatever that super.destroy function is, it is apparently well known to exist, and must exist, in this GNOME shell extensions execution environment. That makes it always safe to simply call the function.
[00:47:02] It will always be present and simply calling it can never fail.
[00:47:07] So not only was the use of that try/catch construction provably and obviously unnecessary to anyone looking at the code, because the conditional expression it contained, and that's all it contained, one conditional expression, was first testing for the presence of the function and only calling the function if it existed.
[00:47:32] But that conditional test itself was also completely superfluous, because whatever that super.destroy function might be, apparently it must always be present. That means that everything there, all of that code, other than simply calling the super.destroy function directly, was superfluous. It was gratuitous nonsense. So how did this happen?
[00:48:02] It happened because today's LLM-based AI, as we've been saying recently, as we've grown over the last year to really deeply understand what is going on here, it doesn't understand even a little bit what it's doing. It doesn't know whether we're asking about the population of kangaroos in Australia or asking for code to destroy the super object.
[00:48:30] It's all the same to today's AI. It's all just language. Which brings us to the main point of this. The thing that I thought was most interesting was the observation that AI-generated code could and would become infected with nonsense code like this. That's a very interesting observation on the part of this manager.
[00:48:58] And I'm sure it tracks everyone's intuitive and growing understanding of the way today's LLM-based AI operates. Today's AI is all just astonishingly sophisticated pattern matching. So somewhere along the way, AI picked up that this conditional test construction, making sure that a function existed on an object before calling that function, would be good.
[00:49:28] It doesn't hurt to do that. But our code would get seriously bogged down. I mean, the world's code would get seriously bogged down if we were to keep asking the runtime manager to verify the presence of known existing functions before every time we called one. The point is that testing like this for a function that might not exist is a good thing to do.
[00:49:58] So there's a place for it. And AI picked up on that useful instance without any understanding of why, and is now salting the code it produces with that nonsense without need. Alternatively, you could protect yourself from a missing function by wrapping the call in a try/catch construction. We saw that too.
[00:50:26] So in this case, the AI did both of those things when neither were necessary. It didn't know why. It was just copying stuff it saw elsewhere where there actually was a need. But in this case, there's no need. Yet it still copied that code because it saw it elsewhere without ever understanding it. So here's where the notion of infection, of course, comes in, which is from promulgation.
[00:50:54] We know that AI is training on what it finds out on the open Internet. Even if what it finds is code that it or some other AI previously emitted. That means that superfluous code like what we've just seen, which does not cause errors but adds nothing other than overhead and bloat, will tend to be self-perpetuating.
[00:51:20] If this manager did not proactively strip this crap out of GNOME's open source shell extensions code base, it would remain there. It would get picked up by LLMs that were training on it that would then further replicate it into the future. And the more it's replicated, the stronger it becomes. The pattern takes hold.
[00:51:46] Before long, code would be littered with this because non-coders would be asking AI to write an extension without ever bothering or needing to look inside.
[00:52:26] Look, it works. Amazing that it's able to do so, but it's not without some downside risk. And any code that doesn't actually cause an error that would force it to be debugged and to be corrected, that code won't be corrected or removed because it works.
[00:52:55] And if it's put back into circulation, other AI will train on it and it will continue to live and get amplified. So, that said, I'm sure that all is not lost forever because I'm, as I've been saying from the start, I am 100%, just like this guy is who wrote at the end of his posting.
[00:53:19] I am 100% absolutely certain that some future coding AI, which we don't have yet, that's been specifically designed for coding, will look at the code that was emitted by these early LLMs, like we've just seen, and shake its electronic head. It would be able to see and actually understand the code used in this example.
[00:53:49] It would know that the super.destroy function must always be present in the super object, that it can always be counted on to be there. So it would remove the conditional test for its presence. Then it would see that what's left in the try block, which is just that function call, cannot possibly fail.
[00:54:18] So, it would completely remove the try-catch construction and all of the code from the catch block, which would also be removed because it could never be executed. So, we're not there yet.
[00:54:31] So, the point is, we could contain the problem in the short term just by not blindly submitting AI code back into the public repository pool where AI will train on it and amplify bad practices. Not things that produce errors, but things that just produce bloat. Until in the future, we end up having truly smart coding AI.
[00:55:00] I'm sure that's coming because code can be understood by a computer. I don't think language can necessarily be understood or at least not where we are today. Code can. This is going to be possible. So, there is hope for getting the human-generated code cleaned up, I suspect, Leo.
[00:55:27] But anyway, I thought this was a really – it was just a perfect example of what is actually being seen in the wild, what AI is producing, why they've had to tighten the guidelines in this instance.
[00:55:41] And unfortunately, it's unlikely that the majority of the AI-generated code that is being put back into the public arena is being checked by people who know better than to allow this slop, which doesn't produce errors, to persist. So, the future is true coding AI that is able to look at this and just say, okay, we don't need any of this crap.
[00:56:10] Let's get rid of it. We're just going to call the function. Maybe it's being super cautious. That's all. Wasn't that a cool example, Leo? I'm not sure I agree with the example, but there's a debate going on in the Discord from some of our more accomplished programmers who say, well, you don't ever want to assume that super.destroy is going to work. And if it throws an error, you want to catch the error if you can. I don't know if the typeof is necessary.
[00:56:39] That's maybe what he was referring to. But when you catch it, all you're doing in this example is throwing up a message. Another error. So, that doesn't help anything. That's true. It's a good point. Yeah. I don't know. Yeah. There's discussion about it. It's fun. So, whether super.destroy fails to destroy something, we're talking about its existence, not the value that it returns. Well, it's both.
[00:57:08] It's both because it asks for a typeof. It says, is this a function? I think that's superfluous. It's obviously a function. And then it runs it in the try. So, if super.destroy failed or didn't exist, it would catch it. But we know it exists. The author said it is known to exist in this environment. Darren says, and maybe he's being facetious, it's JavaScript. You never know what exists. He's probably making a joke.
[00:57:36] He also, I mean, look, he coded for financial institutions where they probably do bend over backwards to make sure to catch errors. Right? And that's a mindset. You know? And who knows? Maybe that's where the AI got it. I don't know. But I'm sure there's other many multiple examples in the GNOME extensions of obviously AI generated crap. That's part of the problem is a lot of the people who do this are karma farming.
[00:58:06] They're not really trying to create useful extensions. GNOME extensions are great. I use them all the time. But there are also people out there who just want to say, look, I put up an extension. They want to get GitHub stars or whatever it is. And they aren't actively contributing to the ecosystem. And I think that's more of the problem than the AI. It's like the Android apps. How many? Right. Exactly. I mean, oh, my God. Exactly. And AI makes this possible at scale, right?
[00:58:34] Because somebody with no coding skill at all can create some slop. And he's just trying to keep the slop out of the extension library. The extension library is already ungainly big and full of old stuff and bad stuff. So I can understand why he might want to restrict some of the AI slop. It's good. You know, having some AI overlords may not be that bad. You know, Leo, I don't know. I'm not sure that we're demonstrating. Could we do worse than humans have done? Right?
[00:59:04] I don't know. Exactly. Okay. So the deliberate pollution of our industry's open source repositories, not by well-meaning authors and AI, but by malicious actors, is one of the most unfortunate but retrospectively obvious problems of the open source movement. Right?
[00:59:28] I mean, the altruistic goal is to allow all well-meaning actors to share or well-meaning coders to share and share alike. It's like, hey, I wrote this. This is useful. Here it is in case it's good for anybody else. You know, it's a wonderful concept in principle. It was – what's his face? The open source guy. I'm blanking on his name. Linus Torvalds? Torvalds? Oh, Richard Stallman. Stallman. Linus Torvalds. Yes.
[00:59:56] Stallman's like original perfect concept of let's all – all software should be free. Yeah. And, you know, we just put it out there and people can use it and they can improve it and then everybody gets their improvements and so forth. Well, great. How's that working? Pretty good, actually. Nothing – You know, he wrote Emacs. I'm pretty happy with it.
[01:00:19] Nothing has ever been more prone to abuse, however, because the entire system is built on the assumption of goodwill by those who are contributing. Yeah. As the year 2025 draws to a close, we're able to look back now on this past year and compare it to 2024.
[01:00:39] As usual, the volume of packages submitted to NPM – that's the package management repository for JavaScript – in 2025 far outweighs what's seen in any of the other repository ecosystems.
[01:00:57] The primary reason for this is that when looking at web applications, regardless of the back-end technology, you know, whether you've got Java or Rust or C Sharp or whatever on the back-end, it is most common for the front-end UI to be built using JavaScript or TypeScript, right? That's what our browsers typically run. You know, that's the scripting in the browser itself. So these front-end technologies largely depend upon NPM.
[01:01:27] Adding to this is the fact that it's very straightforward to author and publish packages to NPM, which explains why there's a consistently high level of activity there. I've got a pie chart here at the top of page six, which shows the breakdown of the public repositories. NPM, this JavaScript repository, holds two-thirds of the entire repository segment.
[01:01:58] NuGet holds second place. That's the repository for the .NET ecosystem. It's got 20%, which puts it in the number two ranking, compared to NPM's 65.5%, just half a percent shy of two-thirds, 66%. In third place is PyPI. We're also often talking about problems there. It's got 13.1%.
[01:02:25] But those three, those top three, are followed by, in order of really diminished share. Tiny slices. Yes. Cargo, RubyGems, Golang, and Maven. Cargo's for Rust. RubyGems, obviously, Ruby. Golang, Go. What is Maven? I don't know what Maven is. Good question. I didn't even look it up.
[01:02:48] But taken together, those top three comprised 98.5% of the entire space, with those also-rans holding a total aggregate of 1.5%. So, overall, comparing this year, 2025, as we're closing out, to the same period in 2024, there was an 86.8%.
[01:03:16] So, not quite doubling, but close, 86.8% increase in malicious submissions of all kinds relative to the same period last year. To give everyone some sense for this, here are the counts and the natures of what bad guys were hoping to slip into the repositories and slip into other users' and developers' code bases.
[01:03:47] 4,196 packages were specifically designed to target organizations or groups often linked to cyber espionage or financial theft. More than 58,000 packages contained URLs known to be malicious, underscoring the growing risk of dependency injection attacks, right?
[01:04:11] Where the code itself might not be malicious, but when it's running, it reaches out to a known malicious URL to bring some dependencies in, which are then malicious. A whopping almost 930,000 packages included precompiled binaries. Ooh, what's in that precompiled binary, I wonder? Right.
[01:04:40] Creating potential attack vectors for binary tampering. 161,000 packages executed suspicious code during their installation. 38,000 packages made server requests to IP addresses attempting to communicate with command and control servers. More than a million packages attempted to obfuscate their underlying code, making detecting malicious activity much more difficult.
[01:05:08] I should note that I don't remember the exact statistic, but I saw it. It was something like most of these things had mildly downward trends. This obfuscation of underlying code, though, saw something like a 1,200% jump in the last year. That's the big thing to do now, obfuscating your code. You know, there are reasonable reasons to do that.
[01:05:36] Like it's proprietary and you'd like to protect your proprietary product. Like I would imagine that, you know, some of our password managers have deliberately protected script that they need to run in the browser, but they would rather not have people messing with it.
[01:05:54] Nearly 5,000 packages were identified as typosquats indicating a concerted effort by attackers to just trick developers into installing malicious versions of popular packages. We've talked about typosquatting years ago where, you know, closely named packages were actually being downloaded because someone just mistyped it. It didn't generate an error because guess what?
[01:06:21] Some bad guy already stuck a package there by that typoed name. And that's what you're downloading now without realizing it. More than 61,000, almost 62,000 spam packages were published across the ecosystems, severely degrading the integrity of the open source repositories just by filling them up with crap and threatening the trust the developers place in these platforms.
[01:06:48] I would be very wary about trusting them at this point. And more than 206,000 packages were flagged as containing critical malware. 206,632 packages, again, flagged as containing critical malware. And we, you know, every few weeks, I just remind everybody by noting how many hundreds of NPM packages were removed because they were containing malware.
[01:07:18] Well, the aggregate of that over the year came to almost 207,000 packages that were found to be malicious, you know, just containing crap, posted in the repositories. The Veracode group, who make it their business to keep an eye on all this, wrote about the trends that they've seen changing over the last year. They said,
[01:07:44] We observed several trends across these categories of malicious behavior when compared to last year. Most notably, it is now common for packages to make use of obfuscation, as I noted, a normally benign technique used to make the code harder to analyze or reverse engineer in order to protect intellectual property.
[01:08:07] However, attackers are leveraging this to disguise malicious payloads and make detection significantly more difficult. We saw a rise in code that executes during package installation. This is particularly problematic for malware analysis when malicious code is fetched from outside the package itself, for example, via a file download from a URL during installation using pre- and post-install hooks.
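For reference, the pre- and post-install hooks being described live in an npm package's package.json scripts section. Here's a hypothetical sketch, with invented package and script names, of where a malicious author would place them:

```json
{
  "name": "innocent-looking-utils",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node fetch-stage2.js",
    "postinstall": "node fetch-stage2.js --finish"
  }
}
```

npm runs these scripts automatically during `npm install`, so a script that downloads its real payload from a URL can deliver different content at different times, which is exactly the analysis problem Veracode is describing.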
[01:08:35] This dynamic nature makes it difficult to be certain whether a package is malicious or not, as the contents of the file behind the URL could change over time, right? Swapping out a benign or legitimate file for a malicious payload. There was a reduction in dependency confusion attacks this year, suggesting tactics to target specific groups or organizations for financial gain have changed,
[01:09:03] and other more effective means are being used instead. So, unfortunately, there's no easy solution to this, right? The repositories are so popular because they serve as a source of, in many cases, terrific ready-made code that solves real problems that developers have. Rather than continually reinventing the wheel, you know, writing your own package to do something,
[01:09:32] use one that's proven if you can. The only thing developers can do is to remain vigilant and inspect anything that's downloaded. Unfortunately, because benign behavior, as they noted, can change when behavior is based on whatever is pulled from a URL. Well, even an initial all-clear might not be enough caution. So, Leo, we're at an hour.
[01:10:02] We're going to talk about China next, but let's take a break. This is moving fast. All right. You're watching Security Now, Steve Gibson. I'm Leo Laporte. We're glad you're here. Of course, I know you're glad you're here. What would you do on a Tuesday if you didn't listen to Security Now? That's my question for you. By the way, we will have a show next Tuesday, Christmas Eve Eve, the 23rd. And then the 30th, we're going to do a special.
[01:10:30] Steve is going to bring back, we are going to bring back Steve's classic 2000, I think it was 2009, vitamin D episode, which really, in the intervening 16 years, has really proven to be kind of prescient and quite smart. And I think probably thanks to it, a lot of our listeners stayed healthy through the COVID pandemic, as Steve did. That will be replayed. There is no video because it was an audio show.
[01:10:59] Steve and I will record an open for it tomorrow, just to kind of set it up. But I understand Anthony Nielsen has done something interesting so that we can put it on YouTube. There will be some video. You can kind of think of it as your geek Yule log, I think, is the idea. But I'm not sure exactly. Stay tuned for that. Our show today brought to you by Delete Me. Ever wonder how much of your personal data is out there on the internet for anyone to see? Don't look. It's a nightmare.
[01:11:29] It's a lot more than anybody really should ever have to see. Your name, your contact info, your social security number. Can you believe in this day and age it is not illegal to buy and sell people's social security numbers? Home addresses, even information about your family members. And here's the thing. It's all being collected, compiled by this industry segment called data brokers. That, again, completely legal in the U.S.
[01:11:57] And data brokers, they will compile that information and then sell it along to the highest bidder, or really any bidder. It's actually pretty cheap, forget the highest. Anybody. That includes, of course, marketers, but also law enforcement. It includes nation states. It includes China. Anyone on the web can buy your private details. And the impact can be pretty horrific. Identity theft, phishing attempts. That's what got us using Delete Me.
[01:12:25] People were impersonating our CEO and trying to extort money out of us. Fortunately, we have a smart team, but I don't know. You know, you don't want to allow that. It can be a source of doxing and harassment. Look, you can protect your privacy, and you should, with Delete Me. You absolutely have to protect yourself. Unfortunately, Congress hasn't done it. The laws haven't done it. The states haven't done it.
[01:12:51] That's why I personally recommend and why we use Delete Me, especially a business. You should have Delete Me for all of your managers. Trust me, I know. Delete Me is a subscription service that removes your personal info, just wipes it from hundreds of data brokers. You sign up. You tell Delete Me exactly what information you want deleted. And then their experts will take it from there. Now, it's not just a one-time service. They will delete all that information.
[01:13:20] And the reason it's not one-time is twofold. One is there's always new data brokers. Every day somebody gets in this business because it's a lucrative business. It's a great business. But two, data brokers, not exactly the most respected members of society, shall we say. Yeah, you deleted it, but there's nothing to stop them from saying, oh, look at this. I don't know if it's the same, but I'm going to start collecting this information. And your dossier gets built right up again. So Delete Me goes out and they check again and again.
[01:13:49] They send you regular personalized privacy reports showing what they found, where they found it, what they removed. We just got one the other day for Lisa. Delete Me is always working for you. They're constantly monitoring and removing that personal information you don't want on the internet. To put it simply, Delete Me does the hard work of wiping you, your family, your employees, your managers, your businesses, personal information from data broker websites. Take control of your data. Keep your private life private. Sign up for Delete Me.
[01:14:19] For a special discount just for you, our listeners, today, get 20% off your Delete Me plan, your individual plan, when you go to joindeleteme.com slash twit and you use the promo code twit at checkout. Now, this is the only way to get that 20% off. Go to joindeleteme.com slash twit. Enter the code twit at checkout. Join Delete Me. One word. Joindeleteme.com slash twit. Make sure you go there and use the offer code twit for 20% off. Thank you, Delete Me.
[01:14:49] You're doing important work. So is this guy right here, Mr. Steve Gibson. Okay. So, oh boy. We've recently talked about the various countries becoming worried after their discovery of multiple undocumented cellular radios hidden inside their widely deployed Chinese-made electric buses.
[01:15:12] In the first case that we reported, the buses were driven into what was described as bus-sized Faraday cages to cut off all radio access, to cut them off from any outside monitoring or control, and all of the SIM cards were removed from their secret cellular radios.
[01:15:36] The news of these buses wouldn't have surprised any of our long-term listeners since we had previously reported upon the similar discovery of undocumented cellular radios in Chinese-made dockside shipping cranes and the inverters used to convert the DC current produced by wind turbines and solar panels into AC. So, we have all that already, and it's a lot.
[01:16:06] We might think that it would be difficult to further surprise and worry us. But when you learn that from 2010 to present, Chinese researchers have published 2,723 research papers, most never translated into English, on the subject of vulnerabilities in the United States power grid.
[01:16:35] That's a bit of a wake-up call. 2,723 papers, with at least 225 of those papers, so not quite 10% of them, but still 225, which explicitly explore potential attacks on the U.S. power grid. I sincerely hope that people over on this side of the Pacific who are in a position to do something about this
[01:17:02] are also studying these papers and not sitting around waiting for something bad to happen because China is apparently prepared. Strider Intelligence's report is titled, In Broad Daylight, U.S. Grid Exposed to Risk from PRC Manufactured Inverter Equipment. They wrote, The People's Republic of China is systematically targeting
[01:17:31] America's critical infrastructure as part of a long-term strategy, again, since 2010, so 15 years of security research papers, to gain leverage in a crisis. These are coordinated campaigns to pre-position access across the systems that keep the U.S. running. I mean, it sounds like science fiction, right? But no. They wrote,
[01:18:00] This new report from Strider details the United States' growing dependence upon inverter-based resources. They have an acronym, IBR, inverter-based resources, including solar inverters and battery energy storage systems manufactured by companies in the People's Republic of China. These networked, software-driven devices are capable of remote communication and control,
[01:18:29] which, when combined with their PRC origin, expose U.S. critical infrastructure to unprecedented risk. Now, you know, just to pause here for a minute, we know there are people in the United States who, if they appreciated this, like our senators and representatives in Congress would be freaking out over the idea that this is true. So, again, I hope somebody's paying attention. They wrote,
[01:18:59] Under the PRC's 2017 National Intelligence Law, any domestic company, that is, any Chinese domestic company, can be compelled to support state intelligence activities. There's no doubt about that now. As a result, PRC-made inverter-based resources inherently carry elevated security risks, regardless of direct ties to high-risk entities.
[01:19:27] Strider's analysis found that nearly half of all inverters and battery energy storage systems imported into the United States between 2015 and 2024 came from a high-risk PRC manufacturer. Additionally, 86% of U.S. utilities surveyed for this report, representing about 12% of the installed U.S. capacity,
[01:19:53] rely on at least one risky PRC supplier in their power composition. Three of the high-risk PRC suppliers found were, first, Contemporary Amperex Technology, CATL. In 2025, the U.S. Department of Defense labeled CATL a Chinese military company flagging national security and sanctions exposure.
[01:20:22] They're one of the suppliers of the hardware we're using in the U.S. Second, Huawei. The company has a documented history of IP theft accusations, export control violations, and close alignment with the PRC military, intelligence, and law enforcement entities. Huawei was added to the U.S. Commerce Department's entity list and banned from U.S. 5G networks over espionage risks, but there's no federal rule
[01:20:50] banning Huawei solar inverters. So, in they flow. And finally, Sungrow. The company's CEO and chairman is a member of the National People's Congress, the legislative body of the PRC state. And nearly 30% of Sungrow's senior management are members of the Chinese Communist Party. They said, within the 2,723 PRC research publications
[01:21:18] examining weaknesses in the U.S. energy grid, at least, and here's that number, 225 of those publications related to potential attacks against the U.S. grid, including multiple publications that ran attack simulations on the western U.S. power grid to test new concepts, methods, and tools of attacking us. Okay.
[01:21:47] So, you know, Leo, we've talked about this. The U.S. and China have the most bizarre interdependent relationship. This is strange. It is. Perhaps codependent would be a better term. Yeah. I don't understand it. But then... Frenemies. I also... Co-opetition. Frenemies. I also don't understand the world's superpowers having their nuclear arsenals
[01:22:16] all aimed at each other. That's also crazy. So perhaps this cyber war nonsense is much the same as the nuclear standoff that has been in place now for decades. Let's just hope that no one ever makes the mistake of pulling any triggers. So it's like, you don't attack us. You know, you don't shut down our power grid. We're not going to shut down your power grid. Nobody wants their power grid shut down. And... And...
[01:22:45] Wow. I guess this is just the way it goes now. Yeah. Yeah. It's very strange. Wow. It's like the Cold War, kind of, right? It is. No. It absolutely is. It is. You know? And as I said, I've... Recently, we did get some intelligence that suggests that we are every bit as much in their networks over there as they're in ours over here. And...
[01:23:15] I guess both sides understand that. And it's just kind of a status quo. And meanwhile, we're going to buy stuff and they're going to buy stuff. And... Let's just, you know, hope they don't invade Taiwan. Well, they own a ton of real estate in the U.S. and they're probably the single largest holder of American bonds. So, I don't know what the answer is. I really don't. No, it's crazy. Okay.
[01:23:45] Following up on last week's podcast, which I titled React's Perfect 10, last Friday the 12th, Google updated the world on the five, once again, Chinese state actors that have been actively attacking the West through this distressingly easy-to-exploit and widespread vulnerability in React servers. Google's Friday posting was titled
[01:24:15] Multiple Threat Actors Exploit React2Shell, CVE-2025-55182. And they wrote, On December 3rd, 2025, a critical, unauthenticated remote code execution (RCE) vulnerability in React Server Components, tracked as that CVE, a.k.a. React2Shell, was publicly disclosed. Shortly after disclosure,
[01:24:44] Google Threat Intelligence Group, GTIG, began observing widespread exploitation across many threat clusters, ranging from opportunistic cybercrime actors to suspected espionage groups. GTIG has identified distinct campaigns leveraging this vulnerability to deploy a Minocat Tunneler, Snowlight Downloader, Hisonic Backdoor,
[01:25:14] and Compood Backdoor, as well as XMRig cryptocurrency miners, some of which overlap with activity previously reported by Huntress, you know, Huntress Labs. These observed campaigns highlight the risk posed to organizations using unpatched versions of React and Next.js. This post details the observed
[01:25:43] exploitation chains and post-compromise behaviors and provides intelligence to assist defenders in identifying and remediating this threat. Okay, now, Google then spends some time in their posting reminding us about the nature and background of the React problem, which I'm going to skip since we covered that at length last week. What I think is interesting and important for us to look at is what Google is actually
[01:26:12] seeing being done. They're watching it happen, all enabled by this perfect 10 vulnerability, which we first talked about last week. It's one thing to say, oh, that's not good, a perfect 10, but it's still useful to see exactly what that means. Like, what does a perfect 10 do? So they write, since exploitation began,
[01:26:42] GTIG, again, Google's threat intelligence group, has observed diverse payloads and post-exploitation behaviors across multiple regions and industries. In this blog, we focus on China nexus espionage and financially motivated activity, but we have additionally observed Iranian-based actors exploiting the same CVE. As of December 12th, that's last Friday,
[01:27:12] GTIG has identified multiple China nexus threat clusters utilizing the CVE to compromise victim networks globally. Amazon Web Services reporting indicates that China nexus threat groups Earth Lamia and Jackpot Panda, which we talked about last week, are also exploiting this vulnerability. GTIG tracks Earth Lamia as UNC 5454. Currently, there are no
[01:27:41] public indicators available to assess a group relationship for Jackpot Panda. Okay, so, actual exploitations. Minocat. GTIG observed China nexus espionage cluster UNC 6600 exploiting the React2Shell vulnerability, this Perfect 10 vulnerability, to deliver the Minocat tunneler.
[01:28:11] The threat actor retrieved and executed a bash script used to create a hidden directory. Okay, so, retrieved and executed, meaning they got in, they used React2Shell to get onto the server, then reached out from that server to retrieve a bash script, which they then ran, which in turn obtained the Minocat tunneler.
[01:28:40] They said, so, retrieved and executed a bash script used to create a hidden directory. So, under the home directory, it's a hidden .systemd-utils directory. It then kills any processes named NTP client, so, Network Time Protocol client. It then downloads a Minocat binary and establishes persistence by creating a new cron job and a systemd
[01:29:10] service and by inserting malicious commands into the current user's shell config, which will execute Minocat whenever a new shell is started and also apparently based on a cron. Minocat, they say, is a 64-bit ELF executable for Linux that includes a custom NSS wrapper and an embedded open source fast reverse proxy, an FRP client that handles the actual
[01:29:40] tunneling. So, again, I think it helps to appreciate that these are not theoretical attacks, right? They are actually happening to real people and organizations. If this happens to a server, the fast reverse proxy client phones home to establish a persistent connection, then allowing bad actors, apparently Chinese bad actors, to do whatever they wish with the compromised system.
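The persistence hooks Google describes, the hidden ~/.systemd-utils directory, the cron job, the systemd service, and the shell-config modifications, are exactly the kind of artifacts a defender can sweep for. Here's a minimal hunting sketch in Python; the indicator patterns and function names are my own illustrative assumptions, not Google's published IOC list:

```python
import os
import re

# Indicator patterns drawn from the behaviors described above: the hidden
# .systemd-utils staging directory and a fast-reverse-proxy (frp) client.
# These are illustrative examples, NOT a complete or official IOC list.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\.systemd-utils"),
    re.compile(r"\bfrpc?\b"),
]

def scan_file(path):
    """Return the patterns matching inside one file ([] if unreadable)."""
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read()
    except OSError:
        return []
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]

def scan_persistence(home, extra_dirs=()):
    """Sweep shell rc files, plus any cron/systemd dirs, for indicators."""
    targets = [os.path.join(home, rc)
               for rc in (".bashrc", ".profile", ".zshrc")]
    for d in extra_dirs:  # e.g. /etc/cron.d, /etc/systemd/system
        if os.path.isdir(d):
            targets += [os.path.join(d, n) for n in os.listdir(d)]
    hits = {}
    for path in targets:
        found = scan_file(path)
        if found:
            hits[path] = found
    return hits
```

Run against each user's home directory plus the cron and systemd unit directories; any hit warrants a full forensic look, since patching the React server after the fact does not evict an already-installed implant.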
[01:30:09] And the important thing to appreciate is that this is a persistence mechanism. The owner of the React server might have then patched it, rebooted it, restarted it, whatever, but it's too late. The machine has already been owned and will continue to be owned until and unless the specific modifications that were made and this malware that has been installed and set up to keep running is removed. Another
[01:30:39] example, Snowlight. They wrote, in separate incidents, suspected China Nexus threat actor UNC 6586 exploited the vulnerability to execute a command using curl or wget to retrieve a script that then downloaded and executed a Snowlight downloader payload. Snowlight is a component of vShell, a publicly available multi-platform backdoor written in Go,
[01:31:10] which has been used by threat actors of varying motivations. GTIG observed Snowlight making HTTP get requests to command and control infrastructure to retrieve additional payloads masquerading as legitimate files. GTIG also observed multiple incidents in which a different threat actor, UNC 6588, exploited the vulnerability, then ran a script that used wget to download a compood backdoor payload.
[01:31:39] The script then executed the compood sample, which masqueraded as VIM. GTIG did not observe any significant follow-on activity, and this threat actor's motivations are currently unknown. Compood has historically been linked to suspected China nexus espionage activity. In 2022, GTIG observed compood in incidents involving a suspected China nexus espionage actor, and we also observed samples uploaded to
[01:32:09] VirusTotal from Taiwan, Vietnam, and China. So, wow, you know, this is all actually happening. We have two more. Hisonic. Another China nexus actor, UNC 6603, deployed an updated version, right, because you wouldn't want an old version, an updated version of the Hisonic backdoor. Hisonic is a Go-based implant that utilizes legitimate
[01:32:38] cloud services such as Cloudflare Pages and GitLab to retrieve its encrypted configuration. This technique allows the actor to blend malicious traffic with legitimate network activity. In this instance, the actor embedded an XOR encoded, so weakly encoded, just basically obscured, configuration for the Hisonic backdoor delimited between two markers, they're just random hex strings,
[01:33:07] to denote the start of the configuration and to mark its end. Telemetry indicates this actor is targeting cloud infrastructure, specifically AWS and Alibaba cloud instances within the Asia-Pacific APAC region. And finally, they wrote, we also observed a China Nexus actor, UNC6595, exploiting the vulnerability to deploy angryrebel.linux. The threat actor uses an installation script,
[01:33:36] b.sh, that attempts to evade detection by masquerading as the legitimate OpenSSH daemon within the /etc directory, rather than its standard location. The actor also employs time-stomping to alter file timestamps and executes anti-forensics commands, such as clearing the shell history using history -c. Telemetry indicates this threat actor cluster is primarily targeting infrastructure hosted on
[01:34:06] international virtual private servers, VPSs. So, you know, an example of the actual, real-world consequences of this vulnerability. I think it's important to appreciate again that real people and organizations are being hurt due to this vulnerability. It's unclear to me, as it was last week
[01:34:36] when we first reported on this, why the updated React server code was not given a great deal more time to filter out into the React server install base before it was publicly disclosed to trigger this feeding frenzy. My guess is that any update to React would have triggered an investigation and reverse engineering of the changes by malign forces.
[01:35:06] And since React is open source, it's probably, I didn't look, but it's probably very easy to take a look at it, see what changed, and know what was fixed. So, perhaps it was better to make a big noise and just hope that that noise would be heard by people, legitimate users of the React Server Components, and it would get updated quickly. Unfortunately,
[01:35:35] this is a big bad one and lots of people are going to get hurt from this. And we know what's going to happen, right? This is about money. This is about bad guys making money. It's money, money, money, money, money. So, they're going to do what they can. They're going to end up installing ransomware, encrypting data, exfiltrating data, and then holding these companies for ransom, hoping to get some money out of it. They don't really care what these companies do. They're not interested.
[01:36:05] They just want money. And speaking of the React vulnerability, something else happened that we have also seen before when something bad was found in a significant piece of open source code. The pile-on by the security researchers who all wanted to take a look and see what this was about ended up turning up
[01:36:35] additional previously unknown problems. In this case, Meta, React's original creator and chief maintainer, has consequently released new security updates for the React JavaScript framework. The new patches fix two denial-of-service bugs and a vulnerability that can expose an app's source code. So there's a little tiny bit of silver lining for the otherwise
[01:37:04] devastating React vulnerability. One thing we know is that motivating responsible security researchers to examine code is a terrific way to get it improved. And that happened in this case. Also last Friday, Apple moved to iOS 26.2, which patched two actively exploited zero-day vulnerabilities in WebKit. Apple stated that the zero-days
[01:37:34] were used in what they termed an extremely sophisticated attack, and that the targeted users were still running iOS versions earlier than 26. So consider the major change that was made in iOS 26, which we talked about at length, that seriously hardened the kernel. We don't know whether those so-called extremely sophisticated attacks would have worked against people
[01:38:04] running now this much stronger kernel that iOS 26 brings, but we do know that these zero-days were being used against people running versions of iOS earlier than 26. We would hope that any target against whom it was worth launching what Apple is calling an extremely sophisticated
[01:38:33] attack would be someone who understood that upgrading older hardware, if indeed their hardware was unable to run iOS 26, to hardware that can run the latest protections is worth doing. Even if it means somehow tolerating Apple's way over-the-top UI nonsense, liquid glass, it's still worth having the security. As we noted, you can turn a lot of the
[01:39:03] liquid glass off, making it significantly less liquid. Leo, we're at an hour and a half in. I want to talk about Let's Encrypt at some length. Let's take a break. We're going to look at where Let's Encrypt is and what's going to happen with them next year. I'm a little worried. Well, we'll see. But before we do that, let me talk about our sponsor for this segment on security now,
[01:39:33] Veeam. Data resilience. This ought to be at the top of your to-do list. When your data goes dark, Veeam turns the lights back on. Veeam keeps enterprises running when digital disruptions like ransomware strike. You need Veeam. How does Veeam work? Well, by giving businesses powerful data recovery options that ensure you have the right tool for any scenario. Veeam gives you broad, flexible workload coverage. And this
[01:40:03] is one of the kind of pain points in having a reliable, resilient data backup: your data is everywhere. Clouds, containers, on-prem, on hard drives, everywhere, which makes it tricky, right? With Veeam, you get full visibility into the security readiness of every part of your data ecosystem. And it works on every part. Tested, documented, and provable recovery plans. How's your recovery plan doing? When did
[01:40:32] you update that last? They can be deployed with the click of a button. This is really why Veeam is the number one global market leader in data resilience. Just call them the global leader in helping you stay calm under pressure. We all need this. With Veeam, it's all good. Keep your business running at Veeam.com. That's V-E-E-A-M dot com. There's no reason for you to be in the headlines, tomorrow's headlines about yet another company shut
[01:41:02] down by ransomware. No reason at all, because you've got Veeam. V-E-E-A-M dot com. Keep moving with Veeam. All right, let's talk about Let's Encrypt. Steve? You've got Veeam and you've got us. Yeah, right. But I often wonder when you hear about these companies like Jaguar down for a month, it cost them billions of dollars. Don't they have backup? And I've
[01:41:31] since learned it's really hard for enterprises because their data is all over the place. It's a complex system. It's not as easy as me just backing up my stuff on a Synology. And backup is always sort of an afterthought, right? It's like, let's get it going. Let's get it going first. And then it's like, okay, but wait a minute. Next week was going to be for doing backup. Oh, no, no, we need you to do this now. Do that later. Right. Okay,
[01:41:59] so Let's Encrypt will cross a significant milestone in 2026, next year. With traditional certificate authorities establishing increasingly stringent security requirements to avoid spoofing, and with the coming ridiculously short-lifetime certificates putting a practical end to manual web certificate management, the lure of simply
[01:42:29] obtaining a domain validation certificate by providing proof of domain control through a DNS lookup or an ACME client answering a challenge at port 80 of the domain's IP, well, that solution was always going to win, and winning it is. Early last week, Josh Aas over at Let's Encrypt
[01:42:57] posted "10 Years of Let's Encrypt Certificates." He wrote, on September 14th, 2015, so a little over 10 years ago, our first publicly trusted certificate went live. We were proud that we had issued a certificate that a significant majority of clients would accept and had done it using automated software. That's the ACME protocol that makes that possible. Of course, he says,
[01:43:26] in retrospect, this was just the first of billions of certificates. Today, Let's Encrypt is the largest certificate authority in the world in terms of certificates issued. We've talked about that, right? We've seen a pie chart. It's like the vast majority of certificates are now Let's Encrypt. He said, the ACME protocol we helped create and standardize is integrated throughout the server ecosystem and we've
[01:43:56] become a household name among system administrators. In 2023, we marked the 10th anniversary of the creation of our nonprofit internet security research group, ISRG, which continues to host Let's Encrypt and other public benefit infrastructure projects. Now, in honor of the 10th anniversary of Let's Encrypt's public certificate issuance and the start of the general availability of our services,
[01:44:26] meaning 10 years ago, we're looking back at a few milestones and factors that contributed to our success. A conspicuous part of Let's Encrypt's history is how thoroughly our vision of scalability through automation has succeeded. And no one can argue with that. He wrote, in March 2016, we issued our one
[01:44:55] millionth certificate. Just two years later, in September 2018, okay, so two and a half years later, we were issuing a million certificates per day. In 2020, two years later, five years ago, we reached a billion total certificates issued, and as of late 2025, so now, we're
[01:45:25] frequently, he wrote, issuing 10 million certificates per day. 10 million certificates a day, meaning on a rolling basis, 10 million certificates per day are reaching the point where their expiration date is near enough that it is time for their server to request a fresh certificate from Let's Encrypt, which it then receives and installs, and then has
[01:45:55] another period until it needs to do that again. We know that Let's Encrypt certificates are 90-day certificates, so some amount of time shy of 90 days, the server starts thinking, okay, this certificate doesn't have much time left, need to get another fresh 90-day certificate. So think of that. Oh, and he finishes saying, we are now on track to reach a billion active sites,
[01:46:24] a billion active sites, probably sometime in the coming year. So that's the milestone for 2026, a billion sites. So think of that, one billion domain names, one billion certificates being continuously created, installed, and replaced on a rolling basis on web servers across the world. That really is an accomplishment. As our listeners know,
[01:46:54] I've been a big fan and user of DigiCert's certificates ever since I left VeriSign, which was later purchased by DigiCert, but this steadily shortening certificate life means that within another year or two, I'll be joining the teeming billions whose certificates all say Let's Encrypt. It's certainly no longer the case that Let's Encrypt certificates are in any way second class.
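The rolling renewal dance described a moment ago reduces to a very small piece of logic. A sketch of the idea; the 30-day window is certbot's long-documented default for Let's Encrypt's 90-day certificates, and the function name is my own shorthand:

```python
from datetime import datetime, timedelta

def should_renew(not_after, now, window_days=30):
    """True once fewer than window_days remain before the cert expires.

    30 days is certbot's long-standing default renewal window for
    Let's Encrypt's 90-day certificates, so a cert is left alone for
    roughly its first 60 days, then renewed on the first check that
    falls inside the final window.
    """
    return not_after - now < timedelta(days=window_days)
```

An ACME client typically runs a check like this once or twice a day from a timer or cron job, which is what produces the rolling 10-million-a-day renewal stream Josh Aas describes.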
[01:47:24] The browsers collectively first decided to deprecate any extra value or cachet provided by extended validation (EV) certificates. Remember, for a while we had a green bar or extra glowy something. It was good. So now there's no reason to pay anything extra for those because users never see them. Consequently,
[01:47:53] generic domain validation (DV) certs have become the norm. Then the CA/Browser Forum decided to abandon reasonably long-lived certificates just as Mozilla's efforts to solve the certificate revocation problem finally succeeded, offering total privacy based on client-side bloom filters. They got
[01:48:23] it working. We now have revocation in real time with no privacy consequences, and we're going to abandon that. We're basically going to go to real-time certificate issuance. Lord help us. So, as we've seen, the people who are driving the decisions behind technology do not always arrive at what looks like the best solution, but we at least can all celebrate Let's Encrypt's achievement.
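Since the client-side bloom filter approach (Mozilla's CRLite) keeps coming up, here's a toy sketch of the core data structure. This is an illustration of the idea only, not Mozilla's actual implementation, which layers several filters into a cascade to cancel out the false positives:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per item over an m-bit array.

    A lookup can false-positive but never false-negative, which is why
    a real deployment like CRLite stacks several filters into a cascade
    to eliminate the false positives.
    """

    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# The browser ships a filter built from every revoked (issuer, serial)
# pair; checking a certificate is then a purely local lookup.
revoked = BloomFilter()
revoked.add("issuer-x:serial-1234")
```

The privacy win Steve is describing is exactly that the membership test happens entirely on the client: no OCSP-style phone-home ever reveals which sites you are visiting.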
[01:48:52] That said, I still shudder at the idea that a billion websites, think of this, a billion websites will all be dependent upon a single service for their certificates and that if anything should happen to that service, websites will begin dropping off the air at the rate of 10 million per day,
[01:49:22] which is the rate at which they are now issuing new certificates, 10 million certificates per day. Websites will begin dropping off the internet at the rate of 10 million per day if Let's Encrypt is ever unable to renew them before they actually expire. The genius of the internet's design has always been its distributed diversity without any single point of failure. How many times is, oh, that's a
[01:49:52] big benefit of the internet, no single point of failure. Well, we've just created one. This changes that. I hope we know what we're doing. And this is on the CA/Browser Forum if this ever collapses, because they're the people who did this, who shortened the certificate lifetime. And actually, unfortunately, it's Apple who, for reasons I don't understand, forced this to happen. It's just, we fixed
[01:50:21] this, we solved the problem. Yes, the original lists of revocations, the CRLs, the original certificate revocation lists, they had a problem. We switched to OCSP to solve that. That had a privacy problem. So we fixed the CRLs with bloom filters so that it's now absolutely possible to maintain an
[01:50:50] instant availability, a quick knowledge of revocation. But we've abandoned that and we're now going to go to real-time issuance of certificates. It's like somebody wants to have absolute control over the issuance of these certificates. Unfortunately, as I said, that is not the internet way. And it really does create a single point of failure, which our entire system, I mean, the reason the internet has survived
[01:51:20] is that it hasn't had that before. Seems like the wrong thing. And it seems like it was completely unnecessary. I just don't get it. What I do get is 10 days of the DNS Benchmark's success. Now that it's been exposed to a much wider audience, I should mention that we are now at its third release. We were at release one this time last week.
[01:51:50] Anybody who purchased it now will get the third release. And anyone who runs releases one or two will immediately see a pop-up giving them the notice that there's been an update and a link where they can download it and immediately update themselves. As I said, I'm no longer going to hold the release of something up until it's absolutely known that I will
[01:52:20] never have to change it again. That did make sense back when I was duplicating disks for Spinrite. We were packaging them in boxes and sending them to egghead software to stick on a shelf. Today we have a connected world. It just makes more sense to put something out which is really good and where every known problem is solved. That's what I waited for. It took a year to get there.
[01:52:51] As you expose it to a larger audience, you're going to find things you didn't find. For example, the primary incentive for the second release was string buffers that were not large enough in the code that posts the conclusions, which contain and show the total number of packets sent and received.
[01:53:20] I allowed eight characters. I had eight-character buffers. Now, Pascal strings are length-prefixed: the first byte of a Pascal string is a byte with the length of the string. The problem is that limits strings to 255 characters. So we switch to what are called null-terminated strings, where you have
[01:53:50] ASCII characters or Unicode characters and you signal the end of the string with a null byte, or two in the case of Unicode. So that makes the strings able to be of any length. Of course, it does make them a little vulnerable to mistakes, where you keep looking for a null and bad guys can arrange to do things and so forth. But the technology is nice. Anyway, the original benchmark, well,
[01:54:20] maybe it would send 30,000 packets. So that would be 30,000 and a null. So what? Six characters and a null, seven bytes. And you couldn't do more than that. So I allocated eight-byte buffers. No problem. Nobody would ever need more than eight bytes. Ever. No. Then I introduced the benchmark's 50x and 100x sampling modes.
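Since the benchmark comma-groups its packet counts, the byte math here is easy to check in any language. A quick sketch, just reproducing the comma formatting in Python, not GRC's actual assembly:

```python
# Count the bytes a comma-grouped packet count needs as a
# null-terminated string. (A sanity check in Python, not GRC's code.)
def needed_bytes(count):
    return len(f"{count:,}") + 1  # +1 for the terminating null

assert needed_bytes(30_000) == 7       # "30,000" is 6 chars + null
assert needed_bytes(3_000_000) == 10   # "3,000,000" is 9 chars + null
```

So a 30,000-packet run fits an eight-byte buffer with a byte to spare, while a 100x run's seven-digit count needs 10 bytes and walks right past the end of it.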
[01:54:50] Whoops. 30,000 became 3 million. Oh, boy. 3 million is 3,000,000 because I comma-ize those strings. Well, that's nine characters and a null. That's 10 bytes, and that overflowed the buffer. And so what a couple of users quickly reported was when they ran the 100x, in fact, one of our
[01:55:20] podcast listeners, purchased the benchmark. He said he "just decided to go for the gusto," was the expression he used. And so he waited 45 minutes for this thing to finish, and then it crashed. It's like, whoops, sorry about that. So I immediately fixed the problem, updated to our second release. I doubled the buffers to 16 bytes, and there's no way we're ever going to overflow those. So that's fixed. Now the primary
[01:55:50] incentive for the third release was actually not my fault, but I'm glad for it. What we discovered was that we needed at least version 9 of the Wine Windows compatibility layer. A surprising number of people are running the DNS Benchmark under Linux and Mac, as I mentioned before. And despite the fact that Wine 9 is nearly two years old and 10 was released at the start of this year, not surprisingly,
[01:56:20] who would be surprised? Many people still have Wine 8. The problem is that Wine 8 predated the change in code signing from SHA-1 to SHA-256. The DNS benchmark verifies its own digital signature at startup to make sure that it was properly downloaded and saved and that there was no error anywhere. So it checks its own digital signature. Well,
[01:56:50] that was failing for those who were still using Wine 8. The problem was, under the assumption that the only reason for a signature failure would be code modification, the error message that was being presented was confusing. It stated that something must have altered the program after it was downloaded. Anyway, that wasn't true. It was that they were using Wine 8. So I quickly pushed the third release out to end the confusion.
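The version gate itself is straightforward. Wine exposes an unofficial `wine_get_version` export from ntdll.dll (absent on real Windows), which returns a string like "8.0.2"; once a program has that string, the check is just a numeric tuple comparison. A sketch of the comparison half, with the DLL lookup left as described:

```python
def parse_version(s):
    """Turn a Wine version string like '9.0' or '8.0.2' into int tuples."""
    return tuple(int(part) for part in s.split("."))

def wine_is_new_enough(version_string, minimum="9.0"):
    """True if the reported Wine version meets the minimum.

    Under real Wine, version_string would come from the unofficial
    wine_get_version export in ntdll.dll (which real Windows lacks,
    so its mere presence also answers "am I under Wine?").
    """
    return parse_version(version_string) >= parse_version(minimum)
```

Comparing tuples rather than raw strings matters: as strings, "10.1" sorts before "9.0", while (10, 1) correctly compares greater than (9, 0).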
[01:57:20] Now, the benchmark first checks to see whether it's running under Wine, and if so, whether it's Wine 9 or later. If it's Wine 8 or earlier, it explains that the user, running it on Linux or Mac on an old version of Wine, will need to update to a later version of Wine. And there was one other thing somebody wanted: somebody noted that it would
[01:57:50] be nice if Control-C copied all of the text from any of the many dialogues in the program to aid in translation. So I thought, oh, that's a really good point. So that's in there now, too. So we received a bunch of gratifying feedback, some from people who cannot get their head around the fact that so much is packed into 215k bytes. You know, everything it
[01:58:18] does and all of the descriptive dialogues and windows that it contains, 215K. Small JPEGs are bigger than that. You know, we've all become so abused by the ridiculous multi-hundred-megabyte monstrosities that we've lost sight of how dense and expressive actual code can be. I'm surprised, frankly. I'll write a whole bunch of stuff to add some new feature and the program only got a K
[01:58:48] bigger. It's like, wow, this is great. Anyway, the other feedback has been, as predicted, from people who are commenting that they were happy with their local DNS resolver whose caching performance was insanely high. The problem with having an insanely high performance local DNS router cache is that its contents will be largely the same as the DNS cache in Windows itself, because the requests from Windows will be what
[01:59:18] caused their local router to load its cache in the first place. So Windows won't ask again for anything that's already cached, since Windows will already have it. Therefore, caching performance just doesn't matter. What really matters is the mix of queries, and that's what I now understand and what v2 offers. So people have been saying, hey, I was really happy with my DNS setup, but version 2, I'm going to have to make
[01:59:47] some changes. So of course, that's good news because it's more correct than the way we were doing it before. Anyway, it's been good. I do have something to fix coming up, but I don't have a solution for it. I don't even remember what it was. It's just something that came up this morning. So everybody who purchases it gets whatever is the most current and I'll check back next week and let people know where we are. Is there an auto update feature or do you have to download a new version?
[02:00:16] Well, a lot of our developers said I don't like code that downloads itself. So what there is is a link you can click and then that takes you to the page that allows you to get the new code, which I think is the best thing. Yeah, that's good. Listener feedback. Scott Wise wrote, Steve, an issue I see with age grouping or the over age token credential for age verification that I'm sure you've thought about.
[02:00:47] Oh, actually, this is a really great point, but one I don't remember hearing discussed: it will disclose your birthday on your birthday. Okay, now listen to this, guys. This is kind of cool. He said, if you need to be a certain age to access a service, be it physically or virtually, you likely do it on or very near your birthday. A common one is going drinking on your birthday, when you
[02:01:16] are exactly old enough. In the physical realm, you'll likely get a congratulations and maybe even a free drink, but they don't share your personal information with others. If you need to be a certain age to access social media, you are likely to create an account on your birthday, and, oh, he says, you could be assured, or should be, that the company will sell that information to as many others as they can,
[02:01:46] taking a jaundiced view, which I understand. He said, reaching certain ages will trigger different ads. Driving age will likely trigger car sales ads and reaching the drinking age may trigger alcohol ads. These will happen regardless of your actual birthday as they would fall into the age group identification. He says, I don't know all the ramifications of disclosing your birthday, but a few I can think of would be enhanced phishing,
[02:02:15] fake account creation, and password guessing. This isn't a reason to stop the work on age grouping or the overage token credential, but I think it should be considered, signed Scott in Regina, Saskatchewan, Canada. So I think Scott makes a terrific point. Think about this. In a world where accounts on highly desirable services are age-gated, the first
[02:02:45] time use of those services could reasonably be used to infer something about an individual's age. Now, back in the 70s, though apparently less so today, knowing when someone had obtained their driver's license would, with some accuracy, probably tag their age. Apparently, today's teenagers feel less urgency to drive than I did. That urgency to drive may
[02:03:14] have been replaced with an urgency to use social media. I don't know. They want to stay home. Yeah. But if we imagine a world where we have robustly solved the online proof-of-age problem, it is easily foreseeable that anyone turning 16, for example, in Australia and soon in many other jurisdictions, would
[02:03:44] immediately, like, it's like, hey, I'm 16, I finally can do this. They would immediately join the many services that they would then be able to, on the day of their 16th birthday. So as Scott, I think, very astutely points out, that does constitute a strong age, or, you know, birth date, disclosure, which I'd never thought about before. Mr. Gecko said, there's one problem with the age
[02:04:14] verification solution that's being worked on, and that is the discrimination of what device and operating system one must use to be considered valid. The attestation system discriminates against open-source operating systems, he's right, browsers, and even prevents new competition from being able to start. This means to use the internet, social media, or anything
[02:04:43] considered adult content, one must use an Apple or Microsoft-based computer with an Apple, Microsoft, or Google browser, and one must use iOS with Safari, or one of the approved Android phone vendors with original software. This will be very, very bad, and no one is talking about the issue from this standpoint. He said, I personally install an open-source operating system on both my phone and PC to get
[02:05:13] away from the privacy invading companies. Once these laws come into play, I won't be able to use Facebook to contact my parents, who would not use Signal or any other messaging solution, and will be treated like a bad guy because I decided to go for privacy. So I think this is another great point. Our Mr. Gecko here is noting that the requirement of bringing enforceable security to age
[02:05:42] verification means that platforms which are unable to offer true enforceability will not be permitted to assert their users' age. And as listeners of this podcast know, this is a common theme. Hundreds of millions of otherwise completely functional PCs are stuck at Windows
[02:06:12] 10 because they only contain hardware or firmware support for version 1.2 of the TPM and Microsoft has decided to require TPM 2.0 for Windows 11 and beyond. Another example is the de facto requirement that Windows executables be signed with an expensive cryptographic certificate that expires every few years for no reason other than to create revenue
[02:06:41] for certificate authorities. As Mr. Gecko noted, all of this is hostile to open source and open platforms. All security, as we've noted recently, absolutely requires the ability to robustly keep secrets of some kind somewhere. Yet full openness is explicitly about never keeping secrets. The two concepts are fundamentally at odds with one
[02:07:11] another. Owen Laguerre says, Hi, Steve. I have a question regarding the grc.sc bot check shortcut you created. That's the thing that I recently mentioned where you can check the service that is watching malicious botnet activity and creating a database by IP. This allows someone to check their current IP
[02:07:41] for known bot activity. He said, in the podcast discussion of the service, I don't recall any mention about when you get a result showing activity, how to determine if that activity is from your network or whoever had been assigned that IP address previously. He's 100% right there. He said, since most people get their IP address by DHCP, the activity could be from someone who had that IP
[02:08:11] previously. He says, if there is a way to determine how long you have had your IP address, and the bot check site shows the dates when the malicious activity occurred, you should be able to determine if all the activity was before you were assigned the IP address. Is there a way to make that determination? Signed, Owen. So he makes a very good point. For those whose IP addresses change often,
[02:08:40] this test would be inaccurate in both directions. It could produce false positives or false negatives by reporting on the condition of the network of whomever, one or more people, may have had that IP previously. The other problem is that IPv4 depletion has moved some large ISPs to carrier-grade NAT. When an ISP has more
[02:09:10] subscribers than they have IPv4 addresses, and when they are unwilling or unable to upgrade their services to IPv6, they will be forced to place their own NAT routers between their subscribers and the internet, just as we end users have many more internet gadgets than we have public IP addresses; typically we have one public IP address, or two, one for IPv4 and one for IPv6. My point is that carrier-grade
[02:09:40] NAT, which is becoming increasingly common, will also obscure the truth, since any one of the ISP's many subscribers may have been emitting malicious bot traffic from the public IP that is now assigned to the user running the bot check service. So it's true that all of those caveats need to be taken into consideration when using that free bot check service. For myself, my IPs with Cox Communications
[02:10:10] and my cable modems tend to remain static for years at a time. I mean, I am able to establish static IP filters at GRC that know my IP, and I'm able to lock ports and access based on those, because they change so infrequently. And even though they are DHCP, DHCP does try to reissue the same IP
[02:10:40] you had before. So there is an effort to maintain a static IP. I've told people in some cases where they have some reason to change their IP, no, just turn off your cable modem overnight. And after a long enough period of time when it comes back up, it will probably have a new IP. But it needs to be a significant outage in order to get your IP to change. Still, Owen is right. You could get false positives or false negatives if your IP
[02:11:09] has changed and that bot check site is basing its appraisal on IP. Alan W said, oh, and Leo, you're going to love this. This is Alan, our voracious Security Now-consuming semi-truck driver. He said, Steve, I last wrote to you at the end of October asking about a password of 63 plus signs. I just
[02:11:39] finished that episode and you're right. I understand now. He said, kind of cool that I wasn't far off in my assumption of 63 plus signs being a strong password. Great episode. Thank you. He said, yes, I have listened up to episode 303 since late October. He's got a way to go. But he's making good progress. He's a third of the way to the
[02:12:09] infamous 999, or almost a third. So he's getting close. And he said, and not just during my 70-hour work week driving a semi. He said, I found myself spending a large portion of my waking hours listening to Security Now. He said, I was thrilled when you read my last email on Security Now. And yes, my Python sensei, Sean, did indeed share that clip with me. As you mentioned, by the time I get caught up,
[02:12:39] I'll be a completely different person. I see that already. As you can imagine, listening to 50-plus hours of Security Now per week while driving a semi and then listening more after hours has made me quite paranoid about everything security. And it's constantly on my mind. Perhaps my brain is in overload. I took the week of Thanksgiving off and didn't listen to a single episode for six days. I felt less
[02:13:08] nervous about security two days into my break. But then during a train ride, the strangers at the table with me started talking about loving their debit cards' tap feature. And moments later, I found myself lecturing everyone about what could happen thanks to that little chip. Nice. He's become an evangelist. He said, later I realized that most of what I said was probably based on information from before Michael Jackson died,
[02:13:38] but even in hindsight, no regrets. They got off light since I didn't make them all buy a copy of Spinrite before disembarking the train. He said, since listening to episode 303, I'm going to ask about something you've mentioned a few times. I understand that every successive binary bit represents a doubling of values, but I've also heard you say that with 26 letters in the alphabet,
[02:14:07] double that for upper and lower case, and add numeric digits; that would give 62 possible characters out of a total of 64 in a 6-bit word. I've heard you run the math, which reveals 5.954 bits of entropy, just shy of 6 bits. Considering that binary is either a 0 or
[02:14:36] a 1, how is it that the 0.9375 is not rounded up to 1? I'd think someone trying to brute force the number would have to try all 64 combinations. Wouldn't just letters of a single case plus numbers, giving 36 possible combinations, take the same amount of time to crack, since
[02:15:05] all 6 bits would have to be tested? The brute-forcing system wouldn't know to test just the first 36 out of the 64 values, right? He said, thank you and Leo for this podcast. Sitting in traffic is a lot less rage-inducing since I started listening, and that's a good trade-off for the cold sweats I get until my VPN reconnects every time I reboot my computers or cell. Signed, Alan.
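Alan's arithmetic can be checked directly. For a 62-symbol alphabet the per-character entropy is log2(62) ≈ 5.954 bits, and a brute forcer that enumerates only valid characters searches 62ⁿ strings, never 64ⁿ. A quick sketch (the 8-character password length is just an example):

```python
import math

ALPHABET = 26 + 26 + 10          # lowercase + uppercase + digits = 62

bits_per_char = math.log2(ALPHABET)
print(f"entropy per character: {bits_per_char:.4f} bits")  # ~5.9542, just shy of 6

# A brute forcer steps through the 62 valid characters per position,
# never the two unused 6-bit patterns, so an n-character password
# costs at most 62**n guesses, not 64**n.
n = 8
print(f"62**{n} / 64**{n} = {ALPHABET**n / 64**n:.3f}")    # ~0.776
```

By the same logic, a single-case-plus-digits alphabet of 36 symbols would cost at most 36ⁿ guesses: the attacker never needs to test bit patterns that encode no character.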
[02:15:35] Okay, so to answer Alan's question, the answer is, in fact, the effective entropy really is that odd-seeming 5.954, just shy of 6 bits, 5.954 bits of entropy per character, because the lowercase plus uppercase plus 10 digits create, in this example, a total of 62 characters in this reduced alphabet.
[02:16:05] And even though expressing any of those 62 characters in our reduced alphabet does require 6 bits, there is no character represented by the 63rd and 64th binary bit patterns. Those final two binary bit patterns do not stand for anything. They do not represent any character of our reduced
[02:16:34] alphabet, so they cannot be tested. We must stop after testing the 62nd character, which is at the end of our alphabet, reset it back to 0, increment the next most significant character to its next possibility, and keep trying. So, Alan, another astute question from our listener on the road, rapidly catching up. Alan, what are you going to do when
[02:17:04] you get caught up and there's only one of these a week? You know? Oh, my God. Louis Blanchard said, hello, Steve, I continue to really enjoy the Security Now podcast. I recently purchased a new MikroTik hEX S 2025 router running RouterOS firmware 7.2.6, which offers excellent value. After updating the device and installing it in my home network, I ran the ShieldsUP! test to
[02:17:33] assess its default security posture. I was pleased to see that the all-service-ports check reported stealth for all TCP ports. However, ShieldsUP! was able to elicit a reply to an ICMP echo request, a ping. I confirmed this behavior after a factory reset, indicating it is the device's default configuration. I've since
[02:18:03] configured the firewall to drop inbound ICMP echo requests, resolving the issue. My question is about the security implications of this default setting. Is shipping a router with default ICMP echo replies enabled potentially negligent or dangerous for general customers who may have little networking knowledge? Given that MikroTik, which I used to call MicroTick,
[02:18:32] often uses the same default configuration across many hardware models, would it be worthwhile to contact MikroTik to suggest a change to the default firewall template to drop WAN-side ICMP echo requests, while ensuring vital ICMP traffic such as destination-unreachable remains active for performance? Best wishes to you and your family for the holidays. Thanks in advance, Louis.
[02:19:02] Okay, so that's a great question. It's one of those issues, like the undeniable utility of NAT routing, that causes the old graybeard internet Unix gurus to increase their blood pressure medication. The reason for this is that it is absolutely clear that any IP device that's alive and working should
[02:19:32] at the absolute bare minimum reply to an ICMP echo request with an ICMP echo reply. If an internet protocol stack is present and connected, the specifications are very clear that this must be done. The argument could be made, and believe me, the cranky old graybeard internet Unix gurus do,
[02:20:02] that any device that deliberately fails to do this simplest of all things is an aberrant abomination on the internet, has no right to send or receive a single IP packet, and should be immediately disconnected with prejudice and burned at the stake. Yes, I completely understand what those people are saying, and they're not wrong.
[02:20:32] ICMP echo requests and replies, commonly referred to as pings, are incredibly useful. They're perhaps one of the most useful features of IP networking. By being so low level, by not relying upon anything else to function, by default always being present, it's possible to ping any device at any IP and to know that you'll receive a reply
[02:21:02] if that device is alive and if IP traffic has managed to get to and from the source of the ping and its destination. So in a very real sense, deliberately not replying to a ping request, just ignoring it, is a breach of one of the most fundamental laws of the internet protocol. The flip side is to ask who
[02:21:31] is pinging us and why. Would we want a tech who works for our ISP to be able to ping our router if they are working at diagnosing some network trouble? Yeah, of course we would. But would we want to reply to an ICMP echo request from some random hacker in North Korea, China, or Russia? How does telling
[02:22:01] them that, hey, yeah, we're here, what do you have in mind? How does that possibly help us? The problem with those old gray beard internet Unix gurus is that they're living in their own ivory tower. They'll say, well, of course, you should have a good firewall. Right. But what if that firewall contains a known bug that requires a bunch of pounding on its wall
[02:22:31] in order to penetrate? No one is going to bother pursuing a difficult to exploit firewall vulnerability against an IP that doesn't reply, may not even be there, it's just dead air. But if that same IP bounces back with a hiya, what's up response to anyone anywhere in the world who might be knocking, does that make sense? You might just find
[02:23:01] yourself on the receiving end of an attempt to penetrate your defenses just because you said, yeah, I'm here. I'm not saying that any of that is likely to happen, but it's a valid scenario. I think the question to ask oneself is how it benefits you to have the device that's protecting your entire network announcing its presence to anyone anywhere who attempts to bounce a ping off its public interface.
[02:23:31] If running with full stealth is an option, I don't see any reason not to use it. And if you are working with your ISP's tech, I'll bet they know by now to ask you to disable your router's stealth mode if they are trying to use ICMP echo requests to troubleshoot your connection to their network. So great question. I learned that from shields up back in the day.
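For reference, the kind of default Louis is advocating can be sketched in Linux nftables terms. This is illustrative only; MikroTik RouterOS uses its own firewall syntax, and the "wan0" interface name here is a placeholder. The idea is to drop echo requests arriving on the public-facing interface while keeping the ICMP error messages that path MTU discovery and diagnostics rely on:

```
# Illustrative nftables rules, not MikroTik RouterOS syntax; "wan0" is a placeholder.
table inet filter {
    chain input {
        type filter hook input priority 0; policy accept;

        # Preserve vital ICMP error signaling (e.g., destination unreachable):
        icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
        icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept

        # Silently ignore pings on the public-facing interface:
        iifname "wan0" icmp type echo-request drop
        iifname "wan0" icmpv6 type echo-request drop
    }
}
```

On RouterOS the equivalent would be an input-chain firewall filter rule matching ICMP on the WAN interface; the principle, drop echo requests but not all of ICMP, is the same.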
[02:24:01] Yeah, actually, I was a little curious about where the term stealth came from. It is the opinion of the AI I asked that, I guess, cloaking was the Klingons' term and stealth was the stealth fighters'. But in terms of internet presence, stealth was mine; I don't know where it got that. Anyway, in terms of internet presence,
[02:24:31] it was ShieldsUP! that first used it, and that got picked up and widely used. So that was cool. No knocks are heard here. Let's take a break, and then we go to Australia. We go down under, Steve, and reaction to the social media ban, which took effect last Wednesday. Wow. It's been almost a week. There's some very unhappy teenagers.
[02:25:01] Or are there? I don't know. Let's find out. But first a word from our sponsor. It's been mixed. Yep. Yeah, I bet it has. Well, mixed, yeah, that's fair. There were some relieved teenagers. I'll share their comments. I'm sure there were. Yeah, oh, thank God, I couldn't get on. That might be, on balance, the best thing to do. I don't know. We'll find out. Stay tuned. First a word from our sponsor. Bitwarden, the trusted leader
[02:25:30] in passwords. Yes. Pass keys. Yes. And even secrets management. Bitwarden is consistently ranked number one in user satisfaction by G2 and software reviews. You'll see, by the way, also picked as the number one password manager time and time again by independent reviewers. And I got to point out, Steve Gibson and I are among the 10 million users across 180 countries and more than 50,000 businesses
[02:25:59] that use Bitwarden. As we approach the holiday season, this is one of the biggest riskiest times for credential risks of the year. Why? Because people are out there, they're shopping, they're using their credit cards, maybe they're having a little too much fun, they're not paying as close attention to the phishing attacks. Cybersecurity Awareness Month, by the way, they do an annual poll, the Cybersecurity Awareness Month poll. The most recent poll revealed 42%
[02:26:29] of parents. This is 42% of parents with kids aged 3 to 5, little kids, young kids, who said their child has accidentally shared personal data online. Don't ask how, I don't even want to know. Meanwhile, 80% of Gen Z parents feel their kids could fall victim to AI scams. But despite all that, 37% still
[02:26:59] give their kids full autonomy or only lightly monitor online usage. As cyber threats become increasingly personal, having a robust identity and access management solution is more critical than ever. And this is a great time to teach your kids that. Teach them how to protect themselves online, how to be security conscious. Whether you're protecting one account or thousands, Bitwarden keeps them secure all year long with constant updates. They're always adding new features.
[02:27:29] They just added a great new feature that allows users to access their vaults, and this is in Chromium-based browsers, which is most of them, using a passkey instead of having to remember a master password, which delivers a secure, phishing-resistant authentication method that protects against credential theft. It can even be tied into biometrics to make it even more secure. I actually, whenever I log into Bitwarden, I do
[02:28:18] of a data breach tops $10.22 million per breach. That includes ransom, downtime, reputation loss, with 88% of cyberattacks on basic web apps tied to compromised credentials, bad passwords, and leaked passwords. It's easy to,
[02:28:48] whether it's IT and operations, finance, engineering, HR, marketing, Bitwarden will enhance your business security and productivity. And introducing Bitwarden security in your business is the simplest way, and probably the least expensive investment, to safeguard credentials and protect all your employees. So stay safe and secure online this holiday season. Bitwarden setup is easy. They support importing from most password management solutions, so it's easy to move. I would bet, though, that most people
[02:29:18] who start with Bitwarden have never used a password manager before, which is one of the reasons Bitwarden is so easy to use, that it encourages this. Plus the fact that it's free for life, for unlimited passwords, unlimited passkeys, YubiKeys too, for individual users, so it doesn't cost them anything. Plus it's open source. I think that's super important. Bitwarden is regularly audited by third-party experts. You can see it yourself; it's on GitHub.
[02:29:47] Bitwarden meets SOC 2 Type 2, GDPR, HIPAA, CCPA compliance. It's ISO 27001:2022 certified. Get started today with Bitwarden's free trial of a Teams or Enterprise plan, or, as I said, get started for free across all devices as an individual user: bitwarden.com/twit. There's no better way to protect yourself this holiday season. Bitwarden.com slash twit. We thank them so much for supporting
[02:30:17] Security Now. We really appreciate all the help they give us all year long. They've been a great sponsor, back again in 2026. Now let's find out what's going on in Australia. I'm dying to know. Okay, so I expect to be giving this entire age verification issue a rest for a while. Hopefully the world's going to calm down now that Australia has done this, although the EU is
[02:30:47] making noises, you know, at least until something more happens. The last thing I want to do is bore our listeners, but this is what's happening on the internet right now. But before doing that, I wanted to wrap up today's podcast with a check-in on the status of Australia's social media age restrictions, you know, sharing also some comments from two of our listeners, and I think my clearest yet description of where we
[02:31:16] should and where we should not compromise. So what's going on in Australia? To say that the entire world is watching with interest would be no exaggeration at all. You would think that the world's news reporting agencies were starved for news, with all of the coverage that this ban has been attracting. Everybody watching, everybody reporting. Sadly, technology in general, I feel, is not showing too well.
[02:31:46] You know, technology's reputation is taking a bit of a beating, because Australia's teens are being confronted with age detection based on facial feature characteristics, which everyone knows to be readily spoofable, and no one is being disabused of that belief. Since last Wednesday, there are stories of girls applying more makeup to appear older and slipping right past the detector, when before they didn't, or a
[02:32:16] 13-year-old boy who scrunched up his face when asked to verify his age. Presumably the scrunched-up face looked old and wrinkled and pruney, and that's all it took, right? Other teens have simply had someone older look into the camera for them. And on the flip side, 16-, 17-, and 18-year-olds have been banned for being under age. So
[02:32:45] this is not technology's proudest time. I've seen stories of parents who, for whatever reason, believe in raising their children to be their best friends. They believe that overexposure to social media may not be healthy for their kids, but remaining their child's best friend means that someone else needs to tell them no. So these kids were accepted
[02:33:14] as being 16 and allowed to continue social media. The parents were hoping it would end, but what can they do, right? At the same time, there have been stories of teens expressing relief, relief at being denied and blocked due to their age. When they could be involved in the social media rat race, they needed to be; but they are not unhappy now to be off the hook, at least for a few years. You know, maybe the practice
[02:33:44] of facial age spoofing will become widespread. Difficult to argue with that. My advice to the facial detection providers would be to invest the profits that they are currently enjoying today, because my bet is those profits are going to be short-lived. It's bad that facial
[02:34:14] age detection is such an inherently inexact practice. I completely understand that this is all we have right now, but it has been misapplied for this application. It should never have been used here, since whether or not teens are old enough to have access to the social media which is often central to their lives, independent of whether or not this might be healthy, it cannot be left to
[02:34:44] chance and to a capricious, error-prone technology. My point is, the go/no-go decision is too important and must be made fairly, based upon an individual's actual age, not more or less a coin toss. Some are saying that anything is better than nothing. I'm not so sure that's true. Among the rest of the world that's watching this first-ever nationwide experiment
[02:35:13] is the EU. They may be further along, with an application that can verify someone's age without any privacy compromise. One thing is for sure: anyone who may have believed that facial feature age determination actually works well enough probably no longer thinks so. I hope that's true. I encountered a note from an Australian listener of ours that I thought provided some valuable perspective. Bruce
[02:35:43] French wrote: Hi, Steve and Leo. I'm a long-time listener and Club TWiT member here in, how do you pronounce this, Adelaide? Adelaide. Adelaide, Australia. I've been listening to Leo since he was in the cottage. And Leo, you and I have been talking since before the cottage. So we've been doing the podcast since the cottage was where you were when we began, when TWiT began.
[02:36:13] Right, right. He said: I've been listening to your discussion on the social media ban in Security Now episode 1054, and put my point of view and limited experience forward. Firstly, this small part of the world has not stopped functioning. It just has not been a huge deal. There has been the usual commentary on the various media
[02:36:43] outlets. Some teens have been able to bypass the restrictions; some adults have been blocked when they should not have been. But these do not appear to be in large numbers, no mass commentary on either. Nobody within my extended family has been asked to verify their age. Unless you are under 16 or near 16 years old, it's been a non-event for the majority. When listening to episode 1054, I became a little defensive
[02:37:12] of Australia, I suppose, as I thought your tone was a little condescending and mocking. I think I understand more where you're coming from after just having listened, privacy being the issue of concern. Well, and accuracy, for me. So yeah, you know, fairness and correctness. You know, for me, a flaky age verifier seems like a really bad thing. Anyway, he
[02:37:42] continues
[02:38:11] There is majority support throughout the country for this action. I have no data on hand to support this, but it is clear there has been very little pushback, other than from the media companies. I note here that, in general, in Australia we are prepared to have some restrictions imposed if it is for the greater good. This looks to be one of those times, there,
[02:38:43] from my very limited viewpoint. I do not have teenagers myself, and my grandchildren are not old enough for this to have been an issue. Regards, Bruce French. So thank you, Bruce. I thought that's valuable perspective. And it's interesting that Bruce reports that adults are not being asked to verify their ages. There are presumably other heuristics that the services could be, and I guess probably are, using, because they don't want to be doing this either. I encountered an
[02:39:13] interesting thought somewhere which noted that anyone having an existing account that's at least 10 years old could safely be assumed to be at least 16 years old today, since they would have had to create the account 10 years ago, when they were younger than 6, which seems unlikely. So
[02:40:04] I want to share and react to what another of our listeners wrote Jane said greeting Steve in the previous episode as well as in a number of previous ones you expressed an opinion that a universal location independent age verification standard should be developed as it is the direction the web is going however I find it unsettling when this is treated as
[02:40:34] an acceptable compromise she said the privacy preservation in at least some of the methods discussed such as the one described in episode 1044 that was the true age system can be negated by subpoenas as admitted in that episode as well thus leaving people still very vulnerable this could also be a very convenient avenue to denying certain adults like journalists access to various resources based on political
[02:41:03] reasons one particular approach that was mentioned was Apple or Google doing the verification maybe on device even if we assume no logs are sent out to their servers this is increasing surveillance attacks on freedom and just plain enshitification
[02:41:33] like the proposed limitations on unverified APK installation after having used GrapheneOS for one and a half years now I don't think I can go back to having Google services at all let alone with maximum privileges same for Linux on my desktop in the newest episode it was phrased as quote we can do it without any loss of privacy unquote but the biggest loss of privacy in the
[02:42:03] proposed solution in addition to surrendering your sensitive documents to Apple or Google is the loss of privacy on the device itself the other problem in itself is having to register with the service
[02:42:32] one was going to use de-googled OS's are already being disadvantaged banking and other important apps would often refuse to run on non stock OS's which is one of the biggest hurdles to adoption you can get around this with root and certain tools but that's apparently becoming harder doesn't always work and is a continuous fight not to mention that would mean still having the
[02:43:02] invasive Google services installed age confirmation is likely to be treated as strictly as identity documents or banking thus effectively excluding people like me there's an example of this already the European identity wallet which is mentioned in this podcast has been found to employ Play Integrity meaning Play Store integrity checks an issue was raised on GitHub she provides a link but at least the last I checked
[02:43:32] the developers dismissed it I find it odd to treat widespread age verification as any less horrific than chat control this would cause just as much collateral damage if not more far outweighing any potential benefits and it would be unlikely to quote protect the children unquote anyway thanks for the podcast it's one of the reasons I switched my education
[02:44:02] direction towards security nice nice yeah okay so first I want to make one point Jane notes that the privacy preservation of the system we talked about in episode 1044 could be negated by subpoena to be very clear I would never consider any system whose privacy can be breached by a court order to be sufficiently privacy preserving I mentioned that aspect of the true age system specifically because it was a red
[02:44:32] flag and there was some comment of it being incorporated into some of their technology being used by the Worldwide Web Consortium the W3C we know how true privacy fanatics such as Apple and Signal would respond they would have deliberately designed their technology so that they are technically unable to respond to any
[02:45:01] court order that's who they are but the bigger point of our listeners the internet has been commercialized and its users are being monetized whether we like it or not the commercial interests such as Apple and Google have grown into monopolies that no longer have
[02:45:44] citizens and even other countries citizens are allowed to do we've seen all of this in this podcast Jane started out her note writing in the previous episode as well as in a number of previous ones you expressed an opinion that a universal location independent age verification standard should be developed as it is the direction the web is going however I find it unsettling when this idea is treated as an acceptable compromise I
[02:46:14] understand what she means but in Australia today young people are being forced to stare into a camera's lens so that their image can be transmitted to some third party service and used to judge their age that's not privacy preserving by anyone's standard the question is no longer whether or not internet users are going to be able to continue to enjoy completely
[02:46:44] unfettered access to any resource anywhere they choose they're not that's over that's what's known as a lost cause our various governments are taking those days away so it's not about having acceptable compromises whether we like it or not control is descending upon the internet Apple already
[02:47:13] knows all about me I subscribe to Apple TV and news and have Apple pay set up in my iPhone so I have no problem with the idea that Apple would allow my smartphone to assert my age or age range and absolutely nothing else to anyone who has a need to know after I've given my permission I can't say that I trust Google to the same extent
[02:47:43] but perhaps the Android platform will find a savior to offer universally accepted age assertion what's possible from a pure technology standpoint and this is where acceptable compromise comes in for me but also where I see no reason to further compromise beyond that at all ever is for individuals to affirmatively identify themselves just
[02:48:13] once to one trusted proxy under the understanding that while that proxy must briefly know who they actually are in the physical world in order to verify their date of birth that proxy will then discard all of that transient identifying information retaining only their date of birth and the information required to identify them biometrically from then on we can do
[02:48:42] that from that point forward at any time they can call upon that proxy to present to any inquiring third party an anonymous assertion of whatever age is required if we can get that it will be a lot and it should be the industry's goal I am seriously annoyed that Apple has not yet stepped up with the realization that it is in the best interest of their users
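The enroll-once-then-assert flow Steve outlines can be modeled in a few lines. This is a toy sketch only; the class and method names are hypothetical, and a real deployment would use real cryptographic signatures and a real biometric binding rather than an HMAC keyed to an opaque credential:

```python
import hashlib
import hmac
import secrets
from datetime import date

class AgeProxy:
    """Toy model of the trusted proxy: it sees an identity document once,
    keeps ONLY the date of birth keyed to a biometric credential, and
    afterwards emits anonymous age assertions carrying no identity."""

    def __init__(self) -> None:
        self._signing_key = secrets.token_bytes(32)
        self._records: dict[bytes, date] = {}  # credential -> date of birth only

    def enroll(self, identity_document: dict, biometric_credential: bytes) -> None:
        # The document is verified out of band; the proxy then retains only
        # the birth date. Name, document number, and photo are discarded here.
        self._records[biometric_credential] = identity_document["date_of_birth"]

    def assert_age(self, biometric_credential: bytes, at_least: int, today: date):
        dob = self._records.get(biometric_credential)
        if dob is None:
            return None  # unknown credential: no assertion
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        if age < at_least:
            return None  # proxy refuses rather than revealing the real age
        # The assertion carries only the claim, never the birth date or identity.
        claim = f"holder is at least {at_least}"
        tag = hmac.new(self._signing_key, claim.encode(), hashlib.sha256).hexdigest()
        return {"claim": claim, "proof": tag}

proxy = AgeProxy()
cred = secrets.token_bytes(16)
proxy.enroll({"name": "discarded", "date_of_birth": date(2005, 3, 9)}, cred)
print(proxy.assert_age(cred, 16, date(2025, 12, 16)))  # a claim, no identity
```

The design point is that the relying site learns one bit, over or under the threshold, and the proxy holds nothing that a subpoena could use to reconstruct who asked about what.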
[02:49:12] for their iDevices to be able to make those assertions on their behalf Apple should be the one to do this they are hurting the privacy of their users by continuing to refuse none of the children who are staring into an iPhone in Australia should need to be scrunching up their faces or applying makeup and having their photos sent to third-party services not when
[02:49:42] Apple could entirely solve this problem without breaking a sweat the noises the EU is making along these lines all sound right and there's talk of some app that's presumably EU centric and cross platform so that both Apple and Android would be covered that may be where the rest of the
[02:50:16] longer get total absolute freedom and privacy well we don't our governments have decided that's you know we're not going to have that so well it doesn't mean we have to accept it no as I said you can unplug absolutely or we can protest or we can change governments yes right I don't have to accept the fact that they've decided that that's not that doesn't
[02:50:46] mean I have to you know accept that by any means and I'm not going to unplug we could have a private secure internet what about that why should the government be allowed to do that I don't think that they should be allowed to do that and I think it's a mistake to bend to roll over and say oh well they've done it so that's the way it is I disagree 100% well if it's going to happen we can do it with minimal loss of privacy from a technology standpoint the rest is politics
[02:51:15] and I'm you know yeah I mean yeah I agree that's a political decision yeah I mean no one should misunderstand me and think that I think this is a good idea right I'm it is happening but if it is then we want to make the best of it we want the least invasion of our privacy possible and it is absolutely possible for Apple with their
[02:51:45] biometric unlocking to be the entity that asserts our age and now agreed there's a slippery slope aspect to this also right the more this becomes possible the more likely it is won't be able to use it because they're just not accurate enough and we're
[02:52:15] seeing how inaccurate it is in Australia I choose to resist that's more power to you my friend and I'm not going to disconnect I'm going to resist yeah we'll see what happens I may well lose Steve Gibson this is the place to come if you want to learn what it
[02:52:47] this is the place to be on a Tuesday we do this show live Tuesday after pardon me can you imagine anyone from Congress listening yeah I can't imagine that I'm a utopian Ron Ron Wyden I can imagine well he would have a staffer who would be listening and then say hey you know Ron I guarantee you members of Congress are
[02:53:17] them to listen to than many of their other choices if not all that's just my thought if you have an opinion well there's many ways to get a hold of us I'll tell you first of
[02:53:47] in and kibitz after the fact on demand versions of the show are available at our site twit.tv/sn there's audio and video there or on YouTube there's a video there of every show and you can use that to share clips and of course you can subscribe in your favorite podcast client audio or video Steve also has copies of the show he has unique copies of the show a 16 kilobit audio version for the bandwidth impaired a 64 kilobit for people with at least one ear he
[02:54:17] also has the transcripts which are very complete many pages usually a couple of dozen pages of notes links images it's a very nice piece of work he does every week you can get that by going to a site grc.com and downloading it you can also subscribe to his newsletter and he'll mail that out to you every week a day or so before the show begins which is nice you can read along as you listen or click the links and so forth to do that go to grc.com slash email that's actually
[02:54:46] initially was created so that you could validate your email address so you can submit like pictures of the week comments suggestions like we just heard but you can also when you're there submitting your email address you'll know there are two newsletters you can subscribe to one is very infrequent just announces new software like the DNS Benchmark Pro which just
[02:55:22] sure but there's also now the DNS Benchmark Pro for a mere what is it 10 bucks lifetime subscription you get its entire future good way to support him is to buy his software I think grc.com let's see what else I guess that's all of it we will be back here next week for a regular episode on Christmas Eve Eve then we're taking a week off and that's when you're going to get the
[02:55:52] vitamin D story it's a Christmas story we all enjoy the vitamin D story that'll be on this New Year's Eve Eve December 30th and then we'll be back with new shows again in January Wow 2026 yikes yikes Steve we live in the future and it's just about as dystopian as we thought may your certificates keep getting shorter may they never expire no that's not quite right unfortunately
[02:56:22] thank you Steve have a great week we'll see you all next time on
