How secure are your Chrome extensions and code signing certificates, really? This episode pulls back the curtain on a massive spyware discovery and exposes the convoluted hoops developers must jump through to prove their identity in 2026.
- Websites can place high demands upon limited CPU resources.
- Microsoft appears to back away from its security commitment.
- What's Windows 11 26H1 and where do I get it?
- Chrome 145 brings Device Bound Session Credentials.
- More countries are moving to ban underage social media use.
- The return of Roskomnadzor.
- Discord to require proof of adulthood for adult content.
- Might you still be using WinRAR 7.12? I was.
- Paragon's Graphite can definitely spy on all instant messaging apps.
- 30 malicious Chrome Extensions.
- 287 Chrome extensions caught spying on 37.4 million users.
- The first malicious Outlook add-in steals 4,000 users' credentials.
- Some AI "vibe" coding thoughts.
- What I just went through to obtain a new code signing certificate.
Show Notes - https://www.grc.com/sn/SN-1065-Notes.pdf
Hosts: Steve Gibson and Leo Laporte
Download or subscribe to Security Now at https://twit.tv/shows/security-now.
You can submit a question to Security Now at the GRC Feedback Page.
For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.
Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors:
[00:00:00] It's time for Security Now. Steve Gibson is here. We have lots to talk about. A big change to Chrome, bringing something called device bound session credentials to your browser. Steve's going to talk about how you can prove you are who you say you are when it comes to your code signing. And bad news about more than 200 Chrome extensions that were spying on more than 34 million people. That and more coming up next on Security Now.
[00:00:32] Podcasts you love. From people you trust. This is TWiT. This is Security Now with Steve Gibson. Episode 1065. Recorded Tuesday, February 17th, 2026. Attestation. It's time for Security Now, the show where we cover the latest in security, privacy, how things work, sci-fi, and whatever else comes up.
[00:01:03] What comes up here is up to Mr. Steve Gibson. Welcome, Steve Gibson. I do try to keep us mostly on track, though, you know, the world is not monotonic. So, you know, you're a polymath, you know everything. So it's nice to talk about all these things.
[00:01:24] Certainly don't know everything. There are things I know a lot about and things that I'm interested in learning more about. But yeah, I'm definitely curious. From my first moments of awareness, I wanted to know how things work. That's what I want to know.
[00:01:43] That's an important mindset. Yeah. Yeah. I agree. And so I lost my fear of, you know, looking inside to go, oh, look, that little cam goes this way, and that pushes that lever over here, and that causes that to drop down. And, you know, I was very good at the board game Mouse Trap for that reason.
[00:02:03] Back in our youth. Okay. The elephant in the room is the 28-page security research paper that was recently published after I put this week's show together. I felt so bad, because I sent it to you and I said, oh, Steve, I know you're done. You and about 50 of our listeners. I mean, I've been very impressed with how in touch our
[00:02:31] listener community is, because it was like, oh, Steve, see, oh my God. Okay. What does this mean? So to do it justice, I will answer that question next week. The good news is no hair on fire. It's not the end of the world. But it's password managers. Yes. Sorry. So, ETH Zurich, the researchers there who we've spoken of many times as a consequence of
[00:03:01] their work, and some Italian guys, they got together and did a deep dive into the consequences of server-side, you know, i.e. "cloud" is now the new term, attacks on three popular password managers,
[00:03:23] you know, browser-based password managers: Dashlane, LastPass, and Bitwarden. LastPass, of course, a previous sponsor and favorite of ours until they screwed up, and actually that was on the server side, so that's kind of interesting, and scared us all. And Bitwarden, a current sponsor of the TWiT network. One of the,
[00:03:48] this was everybody: it was 1Password, it was Dashlane, it was everybody. Which, yes, was interesting. Although they did focus on those three, and I thought it was interesting for it. First of all, Dashlane and Bitwarden both responded, Bitwarden with a thanks for the analysis. And Bitwarden commented that as a consequence of the fact that they're an open source system from one end to the other,
[00:04:18] the job of the security researchers was far more enabled, because it wasn't necessary for anyone to reverse engineer their stuff. You know, they're wide open. Okay. So again, as I started off saying, no hair on fire. I'll give us a complete readout next week, where we look in detail at what was found. Both Dashlane and Bitwarden have
[00:04:46] already responded to the issues. Again, none of which were, I mean, these were like worst case: if a bad guy completely took over your server infrastructure, what could be learned? And to give you some feel for it, there was an instance, I think it was with Dashlane, where they were deliberately supporting older crypto standards for the sake of backward compatibility.
[00:05:16] So if you took over the server infrastructure, forced a protocol downgrade to the oldest supported crypto, and the user had a weak password, then it might be possible to decrypt their vault. So again, that's a lot of ifs.
[00:05:42] Yeah. And that's my point, these were like that. You know, I mean, definitely useful. The researchers commented that they were somewhat surprised this hadn't been done before. It's like, you know, here we are running around all using our password managers, just sort of thinking, well, seems great.
[00:06:01] But, you know, this is really the role of independent research, which an open source facility like Bitwarden makes far more possible. So next week I'll have the whole update. But I just want to thank our listeners, and Leo, you, for bringing it to my attention. Anyway, it's next week's topic.
[00:06:24] Good. Yeah. Because we want to know. I mean, you know, I know Bitwarden's a sponsor, but we want to know, as is 1Password. Yeah. Well, and, you know, we were bullish on 1Password until, and it's interesting too, because this has an echo, feeling like it was 1Password's not updating.
[00:06:45] No, LastPass. You're talking about LastPass. I'm sorry. Yes. Yeah. It was LastPass's not updating the iteration level of their PBKDF2, you know, the password-based key derivation function, which got them into trouble.
[00:07:03] And it's like laziness, or, no, more like a fear about breaking something. I mean, I think if there's a lot of legacy code and the people who wrote it are gone, the new guys coming along go, oh, you know, we don't want to be responsible for breaking something.
[00:07:27] So there's kind of a leave-it-alone-if-it's-not-broke attitude. But unfortunately, the way crypto standards are going, you do need to keep rolling forward, because the attacks are getting stronger. Anyway, we'll look at that completely, but no one needs to fear that this means they have to go back to a paper pad for writing their passwords down. No.
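To make the iteration-count point concrete, here's a minimal, hypothetical sketch in Python, not any password manager's actual scheme, of why a downgraded PBKDF2 iteration count plus a weak password is dangerous:

```python
import hashlib

# Hypothetical sketch of vault-key derivation with PBKDF2-HMAC-SHA256.
# The iteration count is the knob that makes offline password guessing
# expensive. A server-forced downgrade to an old, low count, combined
# with a weak password, is what makes a stolen vault feasible to crack.
def derive_vault_key(password: str, salt: bytes, iterations: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = b"per-user-random-salt"  # in practice: random bytes stored per user
legacy_key = derive_vault_key("hunter2", salt, 5_000)    # old legacy count
modern_key = derive_vault_key("hunter2", salt, 600_000)  # modern-scale count

# Same password and salt, different iteration counts, different keys; each
# guess against the legacy parameters costs the attacker ~120x less work.
assert legacy_key != modern_key
```

The derivation itself is identical either way; only the work factor changes, which is exactly why quietly keeping a low legacy count for backward compatibility weakens old vaults without anyone noticing.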
[00:07:52] So today's topic for podcast 1065 is attestation. I want to share an adventure. I've just survived, uh, which I will get to at the end of the podcast.
[00:08:08] Oh boy. Really, really interesting what's going on in the industry, and understandable. So I will get to all that. We're going to talk about websites
[00:08:23] placing high demands upon limited CPU resources. I realized after we talked about this last week, Leo, what happened with AI.com, and why that graphic you showed indicated that Cloudflare was just all fine, but the host was unresponsive out at the end. Right. What happened? Because I realized that we've talked about this before, but it didn't hit me until I was thinking about it later. Also,
[00:08:52] in a worrisome move, Microsoft appears to be backing away from its commitment to security. Also, what's Windows 11 26H1, and where do we get it? Chrome 145 is released, and it brings something known as Device Bound Session Credentials.
[00:09:17] Finally out to the mainstream world. We talked about this, I think it was last April, but we're going to circle around again because now it's here. Also, I had a blurb, and I've heard you guys talking about this in a number of different places, Leo: more countries moving to ban underage social media use.
[00:09:40] And Discord to require proof of adulthood for adult content. There's been a little bit of overreaction to that, so we'll tamp that down. We have the return of Roskomnadzor, which, you know, no podcast would be complete without. Also, might you still be using WinRAR 7.12?
[00:10:06] I was. What? Yeah, it caught me. So we're going to have to make sure nobody is. Also, we now have proof that Paragon's Graphite smartphone spyware can definitely spy on all instant messaging apps.
[00:10:26] A researcher discovered 30 malicious Chrome extensions, and a different project found 287 Chrome extensions which were spying on 37.4 million of their users. So this is really a problem that we're going to talk about. The first malicious Outlook add-in has stolen 4,000 of its users' credentials.
[00:10:52] I've got some thoughts on AI "vibe" coding, and then I'm going to go through what I just survived: obtaining a new code signing certificate. And of course, we have a fun and interesting Picture of the Week. So yeah, I think we've got a good one here, as usual. Security Now: you've been waiting all week. It's finally here.
[00:11:20] But while Steve takes a sip from that giant mug of coffee, I am going to talk about our sponsor for this segment of Security Now. It's in my hands right here. Yes, you recognized it. This is not an external hard drive, although it kind of looks like one; it's about the size of a USB hard drive. But I guess the giveaway is that Thinkst Canary logo on the front, and the fact that the only connector on it is an Ethernet port.
[00:11:47] And that is because this is my Thinkst Canary honeypot. This is gold if you have a network, if you know bad guys are trying to penetrate your network. And I think any business probably should be thinking that, because they are. You really have to ask yourself, how would I know if somebody successfully got in?
[00:12:11] How would I know if a hacker or even a malicious insider was wandering around inside my network, looking for data about our customers to exfiltrate, looking for places they could put time bombs for ransomware? How would I know that? It might terrify you to think. I mean, every time we see a breach, we see these numbers: well, they found out that somebody broke in two months ago.
[00:12:36] The average time between being penetrated and the time a company notices they've been breached: 91 days. Three months. That's three months too long. That's why you get the Thinkst Canary. These are honeypots, easy to deploy. You can do it right on the console, on your webpage there. And it can be almost anything.
[00:13:02] I mean, everything from a Windows server to a SCADA device to a Linux server with an SSH server. You pick. You can also create lure files. They look like, you know, document files, like word processing documents or spreadsheet files, or they can even be things like WireGuard configuration files. Except they're not. If somebody, a bad guy, says, oh, look, there's their WireGuard configuration file, and there's passwords in there.
[00:13:29] Or, oh, look, that spreadsheet on their Google Drive says payroll information. Man, I bet that's got some juicy stuff. As soon as they open it, you get an alert. As soon as they try to brute force your fake internal SSH server, you get an alert. No false positives. Just the alerts that matter. And you get it any way you want it: SMS, sure; mail, syslog, webhooks. There's an API. All you've got to do is choose a profile for your Thinkst Canary device.
[00:13:59] And it's so easy to do that you can change it every day if you feel like it. I do. I often just play with it to see what else it can be. And these profiles are good. It has the actual MAC address of the vendor it's imitating, so a hacker can't look at it and say, well, I know that's phony. There is no way to tell. Mine is usually a Synology NAS. It has the DSM 7 login page.
[00:14:22] Actually, it's good to have a login page, because you get more information about what the bad guy knows when you see what email address and password they use, right? Just choose a profile for your Thinkst Canary device. Register it with your hosted console. You're going to get monitoring, you're going to get notifications, and you just sit back and relax. An attacker who's breached your network, a malicious insider, any adversary will make themselves known. They can't help it, because it doesn't look vulnerable. It looks valuable. It looks like what they're looking for.
[00:14:50] As soon as they access your Thinkst Canary, you've got them. Now, I think these are fantastic. And how many you get depends really on how big your company is, how big your network is. You should certainly have one for every network segment. A big bank might have hundreds; with branch offices, might have them everywhere. A small business like ours, maybe a handful. I'll give you an example. Go to canary.tools/twit. For just $7,500 a year, you're going to get five Thinkst Canaries. You'll also get your own hosted console. You'll get upgrades.
[00:15:21] You'll get support. You'll get maintenance. And if you use the code TWIT in the "how did you hear about us" box, you're going to get 10% off the price. And not just for the first year, but forever. For life. For as long as you have your Thinkst Canaries. If you're at all unsure, let me tell you this: you can always return your Thinkst Canaries with their two-month money-back guarantee for a full refund. I have to tell you, during all the years TWiT has partnered with Thinkst Canary. Almost a decade now.
[00:15:48] Their refund guarantee has not ever been claimed. Never. Not once. That's a good sign. Visit canary.tools/twit. Enter the code TWIT in the "how did you hear about us" box for 10% off. The Thinkst Canary. This thing. It's a doozy. You've got to get one. Canary.tools. We're actually going to see them at RSA next month. I'm looking forward to seeing the Canary team, do a little interview with them.
[00:16:19] All right. Let's talk about your Picture of the Week, Steve. So I gave this picture, which didn't have one, a caption: "Placing unconditional trust in technology can lead to mistakes." All right. I'm going to scroll up now, and I'm going to see it for the first time along with you. Oh, that's good. I never thought of that.
[00:16:48] So we have a picture of a security camera mounted on the ceiling, originally pivoted around, as you'd expect, to survey the room it's monitoring, so that it knows what's going on. Apparently, the people who populate that room decided, you know, we don't really want this camera looking at us all the time.
[00:17:17] So it's clear how this picture came about. Someone got up on a chair or a ladder or something and took a photo of the room from the vantage point of the camera, printed it out on an eight-and-a-half by eleven sheet of paper, stuck the paper on the wall behind the camera, and then swung the camera around so that it's looking at the paper.
[00:17:47] And of course, as I said, placing unconditional trust in technology can lead to mistakes. Now the people in the room are no longer being surveilled. They can be doing anything they want. And meanwhile, the security people in some room with lots of monitors are looking at that going, when is this guy going to come back from the bathroom? It's very quiet. Like, why is his desk empty? How long is his lunch?
[00:18:15] You know, and so anyway, I got a kick out of the picture. It's just very funny. Again, you know, there are all kinds of instances, right, where we adopt technology to save us some trouble of some sort, and it turns out that people who don't want to be encumbered by it come up with a simple workaround.
So, you know, like sticking bubble gum on the camera lens, or, you know, something easy, to mess with the technology.
[00:18:49] So, as I said at the top of the show last week, we noted the fact that during Sunday's Super Bowl, the company with the very expensive $70 million domain name, AI.com, had been DDoS'd by their own Super Bowl advertisement.
[00:19:09] And Leo, you showed us that Cloudflare screen that indicated that all was well with the CDN delivering traffic to the backend hosting server, and that all it said was that the hosting server was not responding. And I think it was Delahanty. It was Patrick Delahanty. Yes. Yeah. Patrick Delahanty.
[00:19:35] You shared with us that his, I guess, modest website was being inundated with bot traffic, which was really causing him a problem. Like, you know, he was being DDoS'd.
[00:19:52] The point I wanted to follow up with, and we've talked about this before, as I said, is that modern websites, both large and small, are almost never pre-generating their content anymore. Well, actually, most of GRC's still is. I mentioned also last week, this all kind of came about because we were talking about how I, you know, hand-author lightweight HTML and CSS.
[00:20:19] And it's not that there isn't dynamic content at GRC: ShieldsUP!, the DNS spoofability test, and other things. Because, you know, the CPU is involved in generating those pages, but of course it's all in assembly language. So there's like zero overhead,
[00:20:42] I mean, associated with even GRC's dynamically generated pages. But the modern, you know, supposedly better way to create a website is with a CMS, a content management system, where the web server doesn't actually have static pages.
[00:21:03] It runs server-side scripting of some sort: PHP, Ruby, JavaScript with Node, maybe Java, C#, you know, .NET, maybe Python.
[00:21:16] But the point is that one of those content engines is producing the HTML on the fly, which is sent to the browser, backed up by some backend database. Queries of that database describe the content, which is then interpreted by the script and used to generate the HTML that goes out.
[00:21:43] And so the point is that while these approaches turn web servers into very flexible application delivery platforms, that power and flexibility to dynamically deliver any page content comes at a steep price in processor load and database load.
[00:22:06] So I have no doubt that, you know, as we saw in that chart, Cloudflare was faithfully delivering HTTP queries to whatever backend server infrastructure AI.com had built out at that point.
[00:22:26] But whatever it was, it was unable to scale as it needed to, to handle the massive demand spike created by a Super Bowl ad. I don't think the site went down, technically. It was just, probably, the per-page processing cost was so high that there just wasn't enough processor available to keep up with the demand.
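As a toy illustration of the cost difference being described here, this hedged Python sketch (hypothetical schema and page, not AI.com's actual stack) contrasts per-request dynamic generation with serving a pre-generated page:

```python
import sqlite3

# Toy contrast between CMS-style per-request generation and static serving.
# The table and page here are hypothetical stand-ins.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (title TEXT)")
db.executemany("INSERT INTO posts VALUES (?)",
               [(f"Post {i}",) for i in range(500)])

def render_dynamic() -> str:
    # Dynamic path: query the database and build the HTML on every request.
    rows = db.execute("SELECT title FROM posts ORDER BY title").fetchall()
    return "<ul>" + "".join(f"<li>{t}</li>" for (t,) in rows) + "</ul>"

# Static path: pay the query-and-render cost once, then just serve bytes.
STATIC_PAGE = render_dynamic()

def serve_static() -> str:
    return STATIC_PAGE

# Identical output either way; only the dynamic path repeats the database
# and templating work per hit, which is what buckles under a demand spike.
assert render_dynamic() == serve_static()
```

The visitor can't tell the difference; the server can, because under a traffic spike the dynamic path multiplies its query-and-render cost by every concurrent request.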
[00:22:55] So, you know, mostly it was just embarrassing. Right. And it was certainly not an auspicious launch for a new venture. And you could argue that they probably lost some people who responded during the Super Bowl commercial and then thought, well, I don't know what's wrong with these people, but they don't seem like they've got their artificial intelligence working very well.
[00:23:14] So anyway, the consequences of this high-cost webpage delivery are actually being felt a lot. By pure coincidence, I happened to stumble on last Wednesday's Linux Mint blog, which among other things addressed problems with their forums.
[00:23:43] The guy wrote, we'd like to apologize to our forum users for how slow and unreliable the forums were last month. The volume of traffic we receive is extremely high and it's mostly coming from AIs, bots, scripts, and web crawlers. It got to the point where our server could not cope and people weren't able to use the forums.
[00:24:08] In addition to the web application firewall, it took us a while to come up with an efficient way to filter, with what he's calling bad traffic, meaning, you know, non-human users, I guess being what he calls bad. If you're getting 403 errors from the forums right now, please make sure your browser is up to date. I thought that was interesting.
[00:24:32] He said, we upgraded the server to give it 10x the CPU capacity and twice the bandwidth. So I checked them out. Linux Mint's forums use the free phpBB, and I don't know whether they've spent time speeding up their implementation; there are many tricks you can use to reduce the overhead of a PHP-based site.
[00:25:02] There's an in-memory bytecode cache called OPcache, which is able to take the burden off the backend PHP interpreter. And also, Redis is a key-value store that many forums are able to enlist. I use both, because the forums at grc.com are also PHP: I'm using XenForo, and that's a PHP-based system.
[00:25:29] And I looked at the user counts. They were talking about 6,000 people. Well, we typically have a thousand roaming around at any given time, and my CPU is down in the single-digit percentages. So, you know, there are ways, if you are focused on improving efficiency, to do so.
[00:25:54] I don't know what is going on with Patrick, Leo, but it was bots that he said were causing trouble for him. So the takeaway is to remember that connection bandwidth is almost certainly no longer the limiting factor that it once was. And it can be practically impossible to change platforms once one's committed.
[00:26:23] That is, give some thought to the page delivery overhead and performance of a system that you're considering switching to. I'm sure anybody who's ever switched to a different platform knows what an incredible pain it is. So there's huge anti-switching inertia.
[00:26:45] But if you have a system which is inherently heavy in overhead, then switching is going to be a problem, and all you can do is scale up in CPU resources. And that can be expensive. If you then need load balancing in front of a bunch of servers, there's an additional burden from that. So, anyway, it really pays to keep efficiency in mind. And it's not just bandwidth anymore.
[00:27:14] With so many of our pages being delivered dynamically, the overhead of the delivery system really matters. Yeah. Modern websites are programs, really. They're not static websites. Right. That's absolutely true. And it's good you said modern, because I'm delivering almost all static. You're static, you're HTML, I'm sure. Plain old HTML, right?
[00:27:41] Well, but ShieldsUP! is dynamic, the DNS spoofability test. So I've got dynamic pages, but you know me, they're all written in assembly language. Right. So my blog is also static. It has a program running in the background that generates the HTML. And that's a very good way to do it. Yes. Yeah. So you still get the benefit. Yeah. Yes. At GRC,
[00:28:04] I have three freeware pages, and you're able to ask how you'd want them sorted: by popularity, by age, and by something else, I don't remember. And every night at midnight, I statically regenerate those three pages so that they're delivered fast, from finished HTML.
[00:28:27] And so you're only going through that generation process once a day, because it's not the kind of thing that needs to change every second. And there's no need to generate them per person, as is often the case with a modern website.
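That regenerate-once, serve-many pattern might be sketched like this in Python (the freeware names are real GRC programs, but the counts and file name are purely illustrative; GRC's real pipeline is assembly language, not Python):

```python
from pathlib import Path

# Sketch of the regenerate-nightly, serve-statically pattern. The download
# counts below are made up purely for illustration, as is the output file.
FREEWARE = [("Wizmo", 120_000), ("DNS Benchmark", 95_000), ("Never10", 80_000)]

def regenerate(sort_key, out: Path) -> None:
    rows = sorted(FREEWARE, key=sort_key, reverse=True)
    html = "<ol>" + "".join(f"<li>{name}</li>" for name, _ in rows) + "</ol>"
    out.write_text(html)  # from here on, the server just sends this file

# Run once per night (say, from cron); every visitor gets finished HTML,
# and the sorting/templating cost is paid once a day instead of per hit.
regenerate(lambda r: r[1], Path("freeware-by-popularity.html"))
```

One generation run per sort order per night, and the web server's job collapses to shipping finished bytes off disk.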
[00:28:47] Okay. So, I want to share an editorial which appeared in the Seriously Risky Business publication, which was unfortunately titled "Microsoft Forgoes Its Secure Future." I'm just going to share what they wrote, and then I'll share some observations afterwards. But I thought what they wrote was very insightful.
[00:29:16] They said for a brief time, Microsoft appeared to be making security a priority as with all good things though. It appears that period has come to an end with personnel changes at the organization signaling a shift in priorities. We fear Microsoft's goal now is not to make secure products so much as to sell security products.
[00:29:44] And of course, this is not the first time we've touched on this, but some recent changes, as we'll see. They wrote: last week, CEO Satya Nadella announced that Microsoft's executive vice president of security, Charlie Bell, had been replaced by Hayete Gallot, who was most recently president of customer experience at Google Cloud.
[00:30:09] Charlie Bell is stepping back from leading Microsoft's security organization to become an individual contributing engineer. Now that Bell is gone, it appears the guise of security first has been tossed aside and we fear the company may slip back into being a security disaster. Bell has a great reputation and joined Microsoft to make a positive impact on its security.
[00:30:39] Despite this, the history of his tenure at Microsoft shows that the company itself only prioritized security when it was forced to by government pressure. Bell joined Microsoft from AWS to lead a new security organization in 2021.
[00:30:58] At the time of his hiring, he wrote that we had consistently for months on end shown example after example of Microsoft security, as they put it, clangers. Those rolling security debacles, and of course, we talked about them all here on the podcast, were a symptom of senior leadership prioritizing profit over security.
[00:31:25] You know, things like not logging unless you paid extra, that kind of thing. At the time they wrote, we predicted that Bell would struggle to make a difference. We were right. Not even an exceptional manager can change much if the CEO and executive team are not really interested.
[00:31:43] A 2022 profile of Bell in the information reported that Microsoft's old guard managers pushed back on Bell's suggestions for improving their responsiveness to security vulnerabilities, believing he was setting too high a bar for stopping attacks on its products.
[00:32:06] The company continued to pay lip service to security, although it did launch a lackluster security uplift program, the Secure Future Initiative, in late 2023. Microsoft's devil-may-care approach to security, they wrote, came back to bite it after separate compromises by Chinese and then Russian state hackers were discovered. The security lapses that led to these breaches were frankly unbelievable.
[00:32:36] In April 2024, a Cyber Safety Review Board (CSRB) report into the Chinese breach, which had compromised the email accounts of senior U.S. policymakers, found a cascade of security failures. It wasn't until this kick in the pants that Microsoft truly embraced security.
[00:33:00] The following month, CEO Satya Nadella told staff to prioritize security above all else and that, quote, if you're faced with a trade-off between security and another priority, your answer is clear. Do security, unquote. What followed was a short halcyon period where Bell was able to kick some goals.
[00:33:29] But the Trump administration has since disbanded the CSRB and signaled that it is not interested in strong regulation. The pressure is off. Microsoft execs can grab a coffee and relax. Which brings us back to the recent change in security leadership and, in particular, Nadella's messaging in his public announcement of Gallot's appointment.
[00:33:54] It sends strong warning bells that security at Microsoft is falling by the wayside. Nadella had an opportunity to highlight Gallot's work experience in security roles. Instead, he focused on her, quote, critical roles in building two of our biggest franchises, and, quote, leading our go-to-market efforts. Much of Nadella's announcement was about selling more security products.
[00:34:24] He said that the company has, quote, great momentum in security, including strong Purview adoption and continued customer growth. Purview is a product of theirs. Entirely missing was any language about the importance of actual security to the company, or a call for people to get behind the critically important security work that Gallot will lead.
[00:34:50] If it talks like a sales target and walks like a sales target, it ain't security. It's a recipe for security sales. Okay, so that's the end of their editorial. I wanted to share this to highlight a lesson we've all learned throughout the past 20-plus years of our observation of real-world security deployment.
[00:35:15] The lesson I believe we've all learned is not only that security is hard, but that it's always much harder than we expect it to be. If it wasn't so difficult, we'd have much more of it than the sad little bit of security we actually have out in the world.
[00:35:35] The U.S. wouldn't have Chinese and North Koreans crawling around in our networks, nor telco executives actually saying, we're not sure we can get rid of them all. What?
[00:35:52] My point here is that since we always need all of the security we can possibly get, any sign of Microsoft slacking off whatsoever on the security front should be taken very seriously. What's worse, a reduction in delivered security is not something that can or will be immediately apparent, right?
[00:36:15] It's only the inevitable consequences of a relaxed security posture that will wind up being felt. As for why Microsoft might make this shift, one of the problems is that since it's not possible to prove a negative, no one really receives any credit for security breaches that don't occur because they were prevented.
[00:36:41] In the case of Microsoft, the successful influence and efforts of Charlie Bell, their now-previous executive vice president of security, may easily have gone underappreciated. You know, it's, look at that, I guess security isn't as big a problem as we thought. Those other problems must have just been one-offs. Right. So, let's hope for the best.
[00:37:10] One quickie, and then we'll take another break, Leo. I suppose I should at least mention, I know that Paul was talking about this last week. I should at least mention that this spring, because listeners have already been asking, Microsoft will be introducing what they're now terming a scoped release of Windows 11.
[00:37:32] Its scope is limited to use with the new Qualcomm Snapdragon X2 next-generation ARM systems on chips, where Windows 11 26H1 will come pre-installed on those machines. It only runs on them, and it will not be available in any other form for general use or upgrading.
[00:38:02] The latest general Windows 11 release will remain 25H2, and this oddball 26H1, whose naming appears to have ruffled many feathers. There's lots of dialogue out on the net saying, what? Come on, Microsoft. Give this a different name. It's really confusing. Anyway, despite its name, it is not an update for 25H2. So, everybody else should just ignore it.
[00:38:32] We can't have it. We need to wait for 26H2. And Leo, they just, Microsoft just cannot stick with anything. It just, I mean, I guess I understand you don't know how the world is going to evolve over time. That's the nature of it. But still, you know, you know, it's not like what they do doesn't matter. And a lot of people aren't paying attention and trying to figure it out.
[00:38:57] So, there's a, you know, a high price for them changing their mind all the time. Sad to say. Yeah. Okay. Break time. I'm going to rehydrate. And then we're going to look at Chrome 145 and its new support for device-bound session credentials. Well, there you go. Stick around. You don't want to miss that excitement. No. Noob. But first, a word from our sponsor. Delete me.
[00:39:28] Have you ever wondered how much of your personal data is out there on the internet for everyone to see? It's a depressing fact. I mean, your name, your contact info. Yeah, of course. But even like your social security number, your home address, information about your family members, information about if you're in a business, about your workers, your coworkers, your managers. Why are they there?
[00:39:54] Because they're all being compiled by data brokers whose sole and utter business is to steal, well, borrow. I don't know. Take your personal information and sell it online. I mean, in the early days, I guess it was kind of benign because they're selling it to marketers, I guess. It ain't benign anymore.
[00:40:19] Anyone on the web can buy your private details, and that can lead to some nasty side effects like doxing, identity theft, phishing, harassment. So what are you going to do? Well, you can protect your privacy. You can use Delete Me. Now, I am very aware. Steve and I both are very aware of this because there was a data breach some time ago,
[00:40:45] and all the data was put online in a searchable database. And so Steve and I, we did it on the show, looked up our information in this breach, and there was our social security number. There was all our private information. It was from a data broker, right, this breach. They had all that stuff. That's when I found out the worst thing. It is completely legal.
[00:41:13] If you somehow manage to acquire Steve Gibson or my social security number, it is completely legal for you to sell it to anybody, a foreign government, anybody. So this is why I recommend Delete Me. In fact, we use Delete Me at Twit because of the really very real issue of phishing. Phishing is made much more effective when they know personal information.
[00:41:40] Hi, this is Leo, and I'm tied up at a meeting right now, but could you send some Apple gift cards to my son? Here's his address. I forgot his birthday. Something like that. And all of that information, if it's real, it makes it more convincing. We got phished like that. All the time we get phished like that. So we used Delete Me. Delete Me is a subscription service.
[00:42:09] It removes your personal information, and it does it from hundreds of data brokers. There are more than 500 on Delete Me's list. You know, the state of California recently said, we're going to do this, fewer than 100, and it's not till this fall. And I mean, there's all these caveats. No, no. Delete Me. You sign up. You give Delete Me just what information you want deleted. That's nice because you control what's deleted and what's not deleted. Their experts take it from there. This is what they do.
[00:42:36] And they will send you regular personalized privacy reports showing what they found, where they found it, and what they removed. Delete Me. It's not just a one-time service either. That's really important. We just got one of those emails, the reports. It's always working for you, constantly monitoring and removing the personal information you don't want on the internet. You have to do that because these data brokers, even if the data has been removed, they'll start repopulating it. And they'll change their name to avoid this.
[00:43:04] You know, and their new ones start up every day. Dozens of them because it's very profitable. To put it simply, Delete Me does all the hard work of wiping you and your family and your company's personal information from data broker websites. Take control of your data. Keep your private life private. Sign up for Delete Me. We've got a special discount for our listeners. You'll get 20% off your individual Delete Me plan.
[00:43:29] 20% off when you go to joindeleteme.com slash twit and use the promo code twit at checkout. Now, this is the only way to get 20% off. And you have to use this address because you can't just Google it because there are other Delete Me's in the world and they don't do the same thing. You want to go to joindeleteme.com slash twit and make sure you use that offer code twit at checkout. That's joindeleteme.com slash twit offer code twit.
[00:43:56] I am so glad Delete Me exists and I'm so glad we found it. Joindeleteme.com slash twit. Now, back to Steve. Okay. So last Tuesday, Google updated the world to Chrome 145.
[00:44:14] This update repaired, you know, your typical assortment of a few high, mostly medium and some low severity security issues and continued to move Chrome's support for the latest HTML, CSS and JavaScript standards forward.
[00:44:32] Perusing those, I'm just astonished over the complexity of today's modern web content interpreters. These browsers are so complicated. It just gets more insane every day.
[00:44:54] One of the new features that stood out is Chrome 145's support for something known as device-bound session credentials. Think about that phrase for a moment. Device-bound as in binding to a device session credential. A session credential is just a fancy name for a cookie.
[00:45:17] And device binding would mean binding a session credential cookie to the device whose web browser first receives that cookie from a remote website.
[00:45:31] So that means that this innovation arranges to, for the first time ever, prevent anyone who might somehow arrange to obtain a session cookie from being able to use it themselves anywhere else. That's huge. And Chrome 145 now supports it.
[00:45:54] Many years ago, before servers were fast enough to glibly encrypt all connections all the time, a user's session cookies would be sent in the clear after they had first successfully logged on. This allowed anyone who was able to eavesdrop on internet traffic anywhere to capture those logged-on session cookies to impersonate their rightful owner.
[00:46:20] Looking back on that, it's just like hard to imagine we survived that state of affairs. But of course, that was yesterday's internet too. Less mission critical than it is today. So although things are much better today with all of our connections encrypted all the time, there are still various interception attacks and mechanisms that create vulnerabilities and weaknesses for session credentials.
[00:46:45] For example, though it's being done only for the best and most justifiable reasons. Many enterprises maintain TLS decrypting middle boxes that decrypt everyone's TLS connections as they cross the enterprise's network edge in order to scan them for malware and other shenanigans and protect the internal network.
[00:47:13] Everyone's cookies are thus exposed at that point. And if it were possible to briefly impersonate or compromise either end of a connection to observe any browser's reply, the session's logon credential cookies would be exposed. So, you know, it's all we've been able to do so far, but there are problems. This resolves that.
[00:47:43] Until this time, the browser cookie, you know, has been a, I guess I would say it's an overworked authentication mechanism. It really was just meant in the original creation of it to allow a web server to like, to create this notion of being logged on.
[00:48:06] To identify you when you made successive moves around pages on a website, it would be like, oh, there's that guy again. Okay, fine. And, you know, so you could maintain state. Well, now we use this for banking and international commerce and, you know, super private connections to investment portfolios. Everything is being done with this poor overloaded cookie.
[00:48:34] So, it's all we've had. With this innovation of device-bound session credentials, that finally changes. Now, you do need some form of secure enclave on your platform, such as a TPM, or the secure enclave Apple has on its iOS devices.
[00:48:56] You have to have something like a TPM, a trusted platform module, in order to store the secret part of this credential. But, as we know, all modern OS platforms require this already for themselves. So, that's really not a problem anymore. And, in fact, it may have been what's delayed the arrival of this. That's all I'm going to say for now.
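To make the device-binding idea concrete, here's a minimal Python sketch of the challenge/response shape. Real DBSC uses an asymmetric key pair whose private half is generated inside, and never leaves, the TPM; in this toy, a device-held random secret and an HMAC stand in for that signing key purely to show the flow, and every class, method, and session name here is hypothetical.

```python
import hashlib
import hmac
import secrets

# Toy sketch of the DBSC challenge/response flow. NOT real DBSC: the real
# protocol registers only a PUBLIC key and signs challenges with a
# TPM-resident private key. Here an HMAC over a shared secret stands in.

class Device:
    """Holds the 'bound' secret -- stand-in for a TPM-resident private key."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)   # never leaves the device

    def register(self) -> bytes:
        # Real DBSC would export only the public key at registration time.
        return self._secret

    def answer(self, challenge: bytes) -> bytes:
        # Stand-in for signing the server's challenge inside the TPM.
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self._bound = {}                         # session id -> key material

    def bind_session(self, sid: str, key: bytes) -> None:
        self._bound[sid] = key

    def challenge(self) -> bytes:
        return secrets.token_bytes(16)           # fresh nonce per refresh

    def verify(self, sid: str, challenge: bytes, proof: bytes) -> bool:
        expected = hmac.new(self._bound[sid], challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

device, server = Device(), Server()
server.bind_session("sid-123", device.register())

c = server.challenge()
assert server.verify("sid-123", c, device.answer(c))       # real device passes

# A stolen session identifier alone is useless: without the device-held
# secret, an attacker cannot produce a valid proof for the challenge.
assert not server.verify("sid-123", c, secrets.token_bytes(32))
```

The point the sketch illustrates is exactly the one above: the cookie (here, the session id) is no longer a bearer token, because each use can be tied to a fresh proof only the original device can compute.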
[00:49:21] Now, if anyone's curious, we described the operation of this in full detail during episode 1021 last April 18th on the podcast. So, it did take longer than expected to arrive, but we have it now with Chrome 145. And I said on the podcast then that Firefox, you know, that Mozilla had implemented in Firefox and it was also in Safari.
[00:49:47] I did not follow up to see what the status is today, but it just got released in Chrome 145. So, even if it takes a while to actually filter out into the world, it's clearly going to happen. It does require, as I explained last April, significant support effort from the web server. It's not just your old, you know, it's not your grandparents' cookies.
[00:50:18] It's a whole different technology to pull this off. But it's eventually going to become, clearly, become a widespread standard because it significantly increases the integrity of cookie authentication. It can't be used to fingerprint you, though, right? Because it is unique to you. They can't request that ID. They probably, it's like a public key thing.
[00:50:48] Oh, yeah, yeah. They would just match it up. So, they can't, yeah. Correct. Yeah, okay. Well, that's good. That's interesting. Well, if this, you know, comes back up as a topic, it'd be nice to make sure that there isn't some. But I can't imagine in this day and age that we would have implemented it. I mean, it's industry-wide. It's an industry spec. Oh, okay. It's not just Google. Yes, it's not just Google. Yeah. Okay. Yeah, they couldn't have gotten away from it.
[00:51:18] But I agree with you. Google would love to have another mechanism to track us around. Okay, so in catching up on the last week, I noted that the governments of Kazakhstan, Moldova, and Romania are considering adding their names to the growing list of countries that are enacting age restrictions on the creation of new social media accounts by children.
[00:51:48] I also saw some commentary somewhere that I appreciated. It noted that the newer legislation was deliberately eliminating, and I really thought this was good. You may not agree, Leo, but I can understand it from a practicality standpoint.
[00:52:07] The newer legislation was deliberately eliminating any opportunity for parental override or exception where, for example, a child who was at least 13 but not yet 16 could appeal to their parents to allow them to create an account. And the person commenting clearly understood that parents would be hard-pressed not to succumb to the argument.
[00:52:35] But, Mom, Susie's parents let her use Instagram, and she's younger than me. And you shouldn't be a parent. I mean, do we really need the government? That's another topic entirely. Do we really need the government to enforce this? We need to do a whole podcast titled You Shouldn't Be a Parent. I know. That would be a good show. So, anyway, the world does seem to be moving in that direction.
[00:53:05] And I've got another point about that. Although, for some reason, I've got a little blurb here. It's quick about the return of Roskomnadzor. And, you know, what would a Security Now podcast be without an update on the most recent machinations of Russia's Roskomnadzor internet watchdog?
[00:53:26] It turns out that part of the infrastructure that supports Russia's sovereign RUNet is its own domain name system, right? They've got their own DNS called NSDI. And Roskomnadzor controls what's listed and what's not.
[00:53:48] Though access to YouTube and WhatsApp has been throttled since last July. Remember, we talked about that. Remember, like, they only allowed a tiny bit of YouTube data? And it was like, what can you do with that much data? It's like, you can't even get a video off the ground. Anyway, seeing all the things you're not able to see is about all you can do. Right. You can't watch that or that. Right. Or that. Right.
[00:54:17] Anyway, so they've gone beyond throttling now. Now those two domains, YouTube and WhatsApp, along with Facebook and Instagram, have been entirely removed from Russia's DNS. Following the Russian government designating meta as an extremist organization after it refused to censor content relating to Russia's war with Ukraine.
[00:54:42] In addition to YouTube and these three Meta properties, you know, Facebook, Instagram and WhatsApp, Roskomnadzor also blocked access to the Tor Project, Windscribe's VPN, APKMirror and the BBC, as well as several other news sites.
[00:55:01] So, you know, they're tightening their grip on the Internet and Russian citizens are, you know, having to come up with workarounds or just go with what the Russian state tells them is happening in the world. So what you're saying is Roskomnadzor is treating every Russian as under 13. Yes. Yes. You're not mature enough to understand. You are not ready for these things. Yes. And I would argue that Russia is probably not the best parent.
[00:55:31] So back to why you shouldn't be a parent podcast. OK, so Discord. Some of our listeners wrote to ask whether I'd seen that Discord, perhaps as part of reprofiling itself in advance of a 15 billion dollar IPO, would be switching all accounts to underage by default unless shown evidence to the contrary.
[00:55:59] Now, since that's partially true, I wanted to share the full story in Discord's own clarification, because, of course, you know, this created quite an upheaval. This is what they wrote. They said, we've seen some questions about our age assurance update and we want to share more clarity. We know how important these changes are to our community. Here's what we want you to know.
[00:56:29] Discord is not requiring everyone to complete a face scan or upload an ID to use Discord. The vast majority of people can continue using Discord exactly as they do today without ever being asked to confirm their age.
[00:56:50] You need to be an adult to access age-restricted experiences, such as age-restricted servers and channels, or to modify certain safety settings. For the majority of adult users, we will be able to confirm your age group using information we already have. We use age prediction to determine with high confidence when a user is an adult.
[00:57:16] This allows many adults to access age-appropriate features without completing an explicit age check. When additional confirmation is required, we offer multiple privacy-forward options through trusted partners. Notice that, you know, they trust them. Whether we trust them is another thing. And they enumerated that. Facial scans never leave your device. Discord and our vendor partners never receive it.
[00:57:45] IDs are used to get your age only and then are deleted. And Discord only receives your age. That's it. Your identity is never associated with your account. Okay, so for the time being, this is probably the best we can hope for. You know, we know that it will eventually be nice to have our devices able to assert an age range on our behalf.
[00:58:13] But we don't appear to be even close to having any universal solution or even a standard for that yet. You know, the most recent meeting of the World Wide Web Consortium was, well, what do we want to achieve with this? It's like, oh, God. Okay. Well, we're not there yet, obviously. So I'm sure we will in time since it's very clear. I mean, there could not be more pressure on getting this to happen.
[00:58:43] I know that Stina Ehrensvärd is hard at work on this. Her whole focus is addressing this issue, and she tends to get results and is very much an activist and active in all these sorts of things. For me, it's just, no, I can't do committees. So are privacy purists losing some of their precious, if entirely illusory and fictitious privacy? Yeah. Yeah.
[00:59:13] You know, that's going to happen. But even that will be better in the future once stronger privacy-protecting standards are in place. And, Leo, I know you were talking about Discord and this recently. Yeah, because we use Discord. Right. But are you flagged as an adult content server? No. No. So we probably wouldn't have to worry. Right.
[00:59:37] And that's my point, is that it's probably only explicit servers that are offering explicit content. And when Discord doesn't have, has not been able to obtain high confidence that a user is already an adult, that then they would say, okay, you know, you're going to, you know, sorry, you've, the people you're talking to haven't convinced us.
[01:00:05] And your language or your grammar or whatever they're using as signals, you need to prove that you're an adult to us. And I should mention that- We'd lose members if that started happening. I mean, it would be a real problem. And it wouldn't happen because you're not flagging your server as adult. So I think, you know, there isn't a problem in the best case.
[01:00:27] Now, it is, however, the case that we have seen evidence of IDs not being deleted when they should have been. And so, you know, that's a concern, right? This kid, you know, K-ID is a service that, a third-party service that some providers are using in order to obtain this age verification.
[01:00:55] And I don't want to accuse them wrongly, but somebody had a breach. And we talked about it on the podcast about six months back where a ton of, you know, identification, personally identifiable information like the works was obtained in a breach by one of these third-party ID, you know, age verification services that for no reason anyone had or could explain.
[01:01:25] Had not deleted this information from their servers. It's like, guys, you know, we're giving this to you on the condition that you delete it. You have to. On the other hand, we all just heard about what happened with the ring doorbell, you know, magic video being deleted and then somehow coming back. So, you know, what does deleted mean? The Google Nest doorbell, yeah. Yeah. Yeah. In this Nancy Guthrie case, yeah. Okay.
[01:01:51] So, when I saw that GTIG, Google's TIG, Threat Intelligence Group, had identified a widespread active exploitation of the critical vulnerability in WinRAR, which we talked about last summer. Although I was certain I had updated my copy then, I double-checked and yikes.
[01:02:21] I was still using 7.12, which contained the vulnerability. It was the last version that did. 7.13 fixed it. I'm now using 7.20, but I decided that given that the threat has moved as it was from theoretical to now real and live, I ought to remind all WinRAR users to be certain they've updated.
[01:02:51] Here's what Google's Threat Intelligence Group just posted.
[01:02:54] They said, the Google Threat Intelligence Group, GTIG, has identified widespread active exploitation of the critical vulnerability CVE-2025-8088 in WinRAR, a popular file archiver tool for Windows, and it's being abused to establish initial access and deliver diverse payloads.
[01:03:22] Discovered and patched in July, last summer, government-backed threat actors linked to Russia and China, as well as financially motivated threat actors, continue to exploit this n-day across disparate operations, meaning lots of people are in on the act.
[01:03:41] The consistent exploitation method, a path traversal flaw allowing files to be dropped into the Windows startup folder for persistence, underscores a defensive gap in fundamental application security and user awareness.
[01:04:04] In this blog post, we provide details on CVE-2025-8088 and the typical exploit chain, highlight exploitation by financially motivated and state-sponsored espionage actors, and provide indicators of compromise to help defenders detect and hunt for the activity described in this post. To protect against this threat, we urge organizations and users to keep software fully up to date, blah, blah, blah.
[01:04:34] Okay, so anyway, 8088 is a high-severity path traversal vulnerability in WinRAR that attackers exploit by leveraging alternate data streams. They're able to craft malicious RAR archives which, when opened by a vulnerable version of WinRAR, can write files to arbitrary locations on the system.
[01:04:59] Exploitation of this vulnerability in the wild began as early as July 18th, 2025, and the vulnerability was addressed by RAR Lab with the release of WinRAR version 7.13 shortly after. On July 30th.
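To illustrate the class of bug, here's a small Python sketch of the check a safe extractor has to perform: every archive entry name, joined to the destination folder, must resolve to a path that is still inside that folder. This is a generic defensive sketch, not WinRAR's actual code, and the function name and paths are illustrative.

```python
import os

def is_safe_member(dest_dir: str, member_name: str) -> bool:
    """Return True only if an archive entry, extracted under dest_dir,
    resolves to a path that stays inside dest_dir.

    A path-traversal entry name in the CVE-2025-8088 style (escape the
    extraction folder and drop a payload somewhere like the Windows
    Startup folder for persistence) fails this check.
    """
    dest = os.path.realpath(dest_dir)
    # Normalize Windows-style separators so '..\\..' can't slip through
    # on POSIX, then resolve the full target path.
    target = os.path.realpath(
        os.path.join(dest, member_name.replace("\\", os.sep))
    )
    return os.path.commonpath([dest, target]) == dest

# A well-behaved entry stays inside the extraction directory...
assert is_safe_member("/tmp/extract", "docs/readme.txt")
# ...while a traversal entry escapes it and is rejected.
assert not is_safe_member("/tmp/extract", "../../Startup/evil.lnk")
```

The vulnerable behavior was essentially the absence of this kind of containment check on the paths carried in alternate data streams, which is what let a booby-trapped archive plant files outside the folder the user chose.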
[01:05:46] Have you ever seen RAR in the last six months from anywhere? I don't think so. That's not my normal mode of getting things. But again, anyone wanting more information and details, I've included the link to Google's coverage in full in the show notes. And you can go to WinRAR, W-I-N-R-A-R dot com forward slash download dot HTML.
[01:06:16] That will get you 7.20 for whatever platform you're using. Make sure you're using that. If you do discover a version of WinRAR before 7.13, as I did, you can know that, for whatever it's worth, we're in good company. Stairwell Security just wrote,
[01:06:44] Stairwell recently identified a significant and concerning trend across our customer base. Get this. Over 80% of monitored environments contain vulnerable versions of WinRAR affected by CVE-2025-8088.
[01:07:04] This finding underscores a persistent challenge in enterprise security when widely deployed trusted software quietly falls out of date and becomes a high value target for attackers. And then they talk about how Google identified the exploitation, blah, blah, blah.
[01:07:24] In 80% of the environments that they monitor, they discovered versions of WinRAR earlier than 7.13. Everything previous to that has the vulnerability. So, yikes. And Leo, speaking of yikes, we're at an hour in. Yikes! I think it's time for me to have a little caffeination.
[01:07:52] Well, have at it, Mr. Steve. Well, I tell you about Meter, the company building better networks. They're sponsored for this segment of Security Now. This meter was founded by two network engineers who feel your pain. If you're a network engineer, well, they know the headaches. Legacy providers with inflexible pricing and IT resource constraints stretching you thin. Everybody's got that, right?
[01:08:20] Complex deployments across fragmented tools. Look, as a network engineer, you're mission critical to the business, but you're working with infrastructure that wasn't built for today's demands. No one knows that better than you. That's why businesses are switching to Meter. M-E-T-E-R. Meter delivers full stack networking infrastructure. Wired, wireless, even cellular. That's built for performance and scalability.
[01:08:49] Because they build it themselves. Meter realized, if we're going to be effective, we've got to do the whole stack. Meter designs the hardware, writes the firmware, builds the software, manages deployments. They, of course, provide support afterwards, too. Meter offers everything from even ISP procurement. They'll help you find the best ISP for your needs. They'll help you with security. They can do routing, switching. They do wireless.
[01:09:19] They do firewalls, cellular power, DNS security, VPN, SD-WAN, and multi-site workflows, all in a single solution. Meter's single integrated networking stack scales beautifully from major hospitals. And you know, if you've ever been in a hospital, how bad the internet is. Well, it makes sense. They've got MRI machines. They've got all kinds of equipment. It blocks signals. You can't have signals in certain areas.
[01:09:48] But Meter can help. Branch offices. You know, you've got a great setup at the headquarters. Maybe. Maybe. But does that branch office have a great setup? And does it integrate with yours so that they can work as if they're in the main office? And you know what happens? A lot of companies buy. They acquire branch offices. They acquire warehouses. Or they build warehouses. Suddenly, the problem is exponential.
[01:10:18] Large campuses. They even do data centers. In fact, they did Reddit's data center. The assistant director of technology for Webb School of Knoxville. This is what he said. We had more than 20 games, athletic games, on campus between our two facilities at the same time. Each game was streamed via wired and wireless connections. And the event went off without a hitch. Can you imagine that?
[01:10:43] He said, quote, we could never have done this before Meter redesigned our network. Let them redesign your network. With Meter, you get one partner for all your connectivity needs. From the first site survey to ongoing support. Without the complexity of managing multiple providers, multiple tools. And you know how it is. If you've got multiple providers, they're going to blame each other. It's not our fault. It must be the router. Not the router's fault. It must be the ISP. Not the ISP's fault. It must be that security appliance. And then nobody gets it fixed.
[01:11:13] Well, Meter, that doesn't work that way. Because they're the whole stack. Meter's integrated networking stack is designed to take the burden off your IT team. And give you deep control and visibility, reimagining what it means for businesses to get and stay online. And after all, that's what's changed in the last 20 years. If you're not online, if you don't have effective internet access, you've got problems. Meter's built for the bandwidth demands of today and tomorrow.
[01:11:42] We thank Meter so much for sponsoring Security Now. Go to meter.com slash security now to book a demo. M-E-T-E-R dot com slash security now to book that demo. Let them show you what they can do for you. You will be blown away. Meter.com slash security now. Now, back to Steve Gibson. Okay. We've talked about the graphite spyware before.
[01:12:12] You know, it's one of the Israeli companies, in this case, Paragon Solutions. It's one of the more capable systems. But it's one thing to hear about it and another thing to see it. They made the mistake of exposing details of their graphite spyware control panel. The panel was exposed in photos from a demo day recently in the Czech Republic.
[01:12:40] The photos, which were immediately taken down, as in, whoopsie, didn't mean to show those, revealed graphite's ability to extract messages from instant messaging clients, including WhatsApp, Signal, Telegram, Line, Snapchat, TikTok, and more. We already know what, well, we already know that WhatsApp and Signal are truly secure.
[01:13:09] And that Telegram, well, it probably is, mostly because its encryption is so random and scrambled that no one has yet, as far as we know, been able to make heads or tails of it. Even though, when we talked about this about a year ago, some researchers really tried. They're like, what? We're not sure what this is doing. Anyway, the point is, as we've always observed, there is no threat from anyone monitoring their
[01:13:38] users' communications on the outside. The threat is that once spyware arranges to gain a foothold inside a smartphone, it doesn't need to untangle Telegram's mess of crypto or fight with Moxie's triple ratchet in Signal. All it needs to do is pretend to be the device's user.
[01:14:05] Examine the decrypted state, you know, the decrypted data that's presented on the device's screen and send that back to its central headquarters. So those leaked photos conclusively demonstrated that once a smartphone has been, you know, lubed up with Paragon's graphite, none of its secrets will be safe from spying eyes. And as we know, this is the battle that Apple is in. I mean, they really take this seriously.
[01:14:35] They've gone to, you know, every extreme imaginable just to keep this cat and mouse battle going on, trying to harden and then reharden and overharden and superharden their hardware platforms to keep the bad guys from getting into their devices. It's just amazing how this battle has continued.
[01:15:01] If it weren't so difficult to apply, a useful security caution might be, beware anything that's too popular. We often see that bad guys are very quick and unfortunately clever about jumping onto anything for which there's a large demand.
[01:15:24] For example, fake charitable contribution sites invariably pop up following any natural disaster in the hope of cashing in on people's compassion, you know, for the plights of others. So I suppose we shouldn't be surprised to learn that some cretin has created a family of 30 malicious AI assistant browser extensions for Chrome.
[01:15:54] Of course, why wouldn't someone do that? AI is all the rage at the moment and people are going to be looking around for AI this or that. So last Thursday, LayerX reported on their discovery, which they've named AI Frame, with the headline, fake AI assistant extensions targeting 260,000 Chrome users via injected iframes.
[01:16:24] They wrote, As generative AI tools like ChatGPT, Claude, Gemini, and Grok become part of everyday workflows, attackers are increasingly exploiting their popularity to distribute malicious browser extensions. In this research, we uncovered a coordinated campaign of Chrome extensions posing as AI assistants for
[01:16:50] summarization, chat, writing, and Gmail assistants. While these tools appear legitimate on the surface, they hide a dangerous architecture. Instead of implementing core functionality locally, they embed remote server-controlled interfaces
[01:17:10] inside extension-controlled surfaces and act as privileged proxies, granting remote infrastructure access to sensitive browser capabilities. So basically, you install this and then you've created a tunnel from the bad guy's backend server infrastructure into your browser. Not what anybody wants. They said,
[01:17:36] Across 30 different Chrome extensions published under different names and extension IDs and affecting over 260,000 users, we observed the same underlying code base, permissions, and backend infrastructure, meaning they're all from the same guy, group, whatever. Critically, because a significant portion of each extension's functionality is delivered through remotely hosted components,
[01:18:06] their runtime behavior is determined by external server-side changes, rather than by code reviewed at install time in the Chrome web store. And we should just pause to say there is something so wrong with the fact that this is even possible.
[01:18:25] The fact that the Chrome web store could be allowing extensions to then later change their own behavior by changing what's happening on the server side. So the security of this whole aspect of the ecosystem is badly broken. They said the campaign consists of multiple Chrome extensions that appear independent,
[01:18:53] each with different names, branding, and extension IDs. In reality, all identified extensions share the same internal structure, the same JavaScript logic, the same permissions, and the same backend infrastructure. Across 30 extensions impacting more than 260,000 users, the activity represents a single coordinated operation rather than separate tools.
[01:19:21] Notably, several of the extensions in this campaign were featured by the Chrome web store. Featured, by the Chrome web store! Increasing their perceived legitimacy and exposure. The technique, commonly known as extension spraying, is used to evade takedowns and reputation-based defenses.
[01:19:45] When one extension is removed, others remain available or are quickly republished under new identities. Although the extensions impersonate different AI assistants, Claude, ChatGPT, Gemini, Grok, and generic AI Gmail tools, they all serve as entry points into the same backend-controlled system. By leveraging the trust users place in well-known AI names,
[01:20:14] you know, brand names such as Claude, ChatGPT, Gemini, and Grok, attackers are able to distribute extensions that fundamentally break the browser security model. The use of full-screen remote iframes combined with privileged API bridges transforms these extensions into general-purpose access brokers capable of harvesting data, monitoring user behavior,
[01:20:42] and evolving silently over time. While framed as productivity tools, their architecture is incompatible with reasonable expectations of privacy and transparency, which I would say is putting it mildly. As generative AI continues to gain popularity, defenders should expect similar campaigns to proliferate. Extensions that delegate core functionality to remote mutable infrastructure
[01:21:09] should be treated not as convenient tools, but as potential surveillance platforms. Amen. So yeah, more than a quarter million instances of browser extension downloads and installations which front for this single malicious campaign. We know that web browser extensions are super popular and arguably necessary.
[01:21:35] After all, we couldn't be using the password manager of our choice today without them. But their diversity and popularity has overwhelmed Google's ability to examine and manage them, such that today's web browser ecosystem creates serious vulnerabilities. And there's really no solution today except to just say, be prudent.
[01:21:59] Only install from really well-known brands, you know, ones that have been around a long time. And next, that's not even the worst. Would you believe? That was 30 extensions. Now we have 287 Chrome extensions found to be spying on 37.4 million users.
[01:22:31] Chrome browser extensions. The researchers in this case posted on Substack. Great research, despite the fact that they chose as their handle the Q Continuum. They wrote, we built an automated scanning pipeline that runs Chrome inside a Docker container,
[01:22:59] routes all traffic through a man-in-the-middle proxy, and watches for outbound requests whose size correlates with the length of the URLs we feed it. That's very clever. So they feed the browser URLs of different lengths.
[01:23:21] And then, although they're unable to see the detail, they look at the length of the traffic which is passing to a remote server, and if it's correlating with the length of the URL, then it is almost certainly that URL, encrypted.
[01:23:42] So they say, using a leakage metric, we flagged 287 Chrome extensions that exfiltrate browsing history. Meaning you install this extension and, just because it's sitting there in your pile of extensions, every single URL you visit in Chrome is sent back to the extension's publisher.
[01:24:11] Complete breach of your privacy. They said those extensions collectively have 37.4 million installations, roughly 1% of the global Chrome user base. Just this group, 1%. The actors behind the leaks span the spectrum.
[01:24:29] SimilarWeb, Curly Doggo, Orthodox, Chinese actors, many smaller obscure data brokers, and a mysterious Big Star Labs that appears to be an extended arm of SimilarWeb. They said, the problem isn't new. In 2014, Weisbacher et al.'s research on malicious browser extensions demonstrated this.
[01:24:56] In 2018, Heaton showed that the popular Stylish theme manager was silently sending browser URLs to a remote server. These past reports caught our eye and motivated us to dig into this issue today. So fast forward to 2025. The Chrome store now hosts roughly 240,000 extensions, right?
[01:25:23] So just shy of a quarter million browser extensions. How can they possibly know what they're all doing? Many of them, they wrote, with hundreds of thousands of users. We knew that we needed a scalable, repeatable method to measure whether an extension was actually leaking data in the wild.
[01:25:44] It was shown in the past that Chrome extensions are used to exfiltrate user browser history that is then collected by data brokers, such as SimilarWeb and Alexa. We try to prove in this report that SimilarWeb is very much still active and collecting data. Why does it matter?
[01:26:13] They write, there's a moral aspect to the whole issue. Imagine that you build your business model on data exfiltration via innocent-looking extensions and sell that data to big corporates. Well, that's how SimilarWeb is getting part of its data.
[01:26:33] That should remind us that whatever software you're using for free, if it's not open-sourced, you should assume you are the product. The second aspect is that it puts users into danger, and potentially this could be used for corporate exfiltration. Even if only browsed URLs are exfiltrated, they typically contain personal identifiers.
[01:27:01] That way, bad actors that would pay for the raw traffic collected can try to target individuals. So anyway, they go on at length. I just wanted to put this again on everyone's map. Again, I don't know how to solve the problem. We want extensions that are powerful. Our extensions need to be powerful to be, for example, a password manager.
[01:27:28] You know, I fill out a form and Bitwarden sees the contents that I put in the form and says, Oh, I checked your domain. I don't have this in my library for you. Would you like me to add this to your password manager collection? And you just say, yeah, I do. I want that. And that's done.
[01:27:57] So super convenient. But consider what that means this extension can do. It sees you entering the plain text password and your username. And it knows where you are, the whole URL. That's what these extensions have access to. And now we have an ecosystem in the Chrome Web Store of 240,000 of these extensions.
[01:28:24] Obviously, many of them are spying on their users. In this case, these guys found 287 that have been downloaded by 37.4 million users, representing around 1% of the Chrome user base, sending everywhere they go home. Yikes.
[01:28:47] The folks at Koi Security titled their write-up of a new attack AgreeTo Steal: the first malicious Outlook add-in leads to 4,000 stolen credentials. And here's another fundamental problem that we have in the industry. I had this on my radar for a while, and then another instance of this came up.
[01:29:15] Generically, these are known as domain recovery attacks. They can be quite serious, and they reveal an aspect of Internet security that is important and has largely been overlooked. So I'll first share the beginning of what Koi wrote. Last Wednesday, they posted, This is the first known malicious Microsoft Outlook add-in detected in the wild.
[01:29:42] But the developer who built the add-in is not the attacker. In 2022, so four years ago, a developer built a meeting scheduling tool called AgreeTo and published it to the Microsoft Office add-in store. It worked. People liked it. Then the developer moved on, and the project died.
[01:30:08] However, the add-in stayed listed in Microsoft's store. The URL it pointed to, hosted on the Vercel.app domain, became claimable, and an attacker claimed it. After making it theirs, they deployed a phishing kit, and Microsoft's own infrastructure started serving it inside Outlook's sidebar.
[01:30:38] By gaining access to the attacker's exfiltration channel, we, Koi Security, were able to recover the full scope of the operation: over 4,000 stolen Microsoft account credentials, credit card numbers, banking security answers. The attacker was actively testing stolen credentials yesterday.
[01:31:07] They posted that last Wednesday, so last Tuesday they saw this happening. They said, the infrastructure is live as you read this. This is the story of how a dead side project became a phishing weapon. They said, first off, Office add-ins are not installable code, they're URLs.
[01:31:34] A developer submits a manifest to Microsoft, an XML file, that says, load this URL into an iframe inside Outlook. You know, whereupon, of course, we say, what could possibly go wrong? They said, Microsoft reviews the manifest, signs it, and lists the add-in in their store. But the actual content, the, you know, the UI,
[01:32:04] the logic, everything the user interacts with is fetched live from the developer's server every time the add-in opens. Okay, so just to pause here, that really sounds like an architecture that is asking for trouble. What could Microsoft possibly have been thinking to implement office add-ins like this? And, it appears the trouble is what they got.
[01:32:34] Koi continues, saying, note the read-write item permission in the manifest. That grants the add-in the ability to read and modify the user's emails. It was appropriate for a meeting scheduler. It's less appropriate for whoever controls that URL today. There's no static bundle to audit. No hash to verify. Whatever the domain
[01:33:04] outlook-one dot vercel dot app serves right now is what runs inside Outlook. If the developer pushes a bad update, it's immediately live. If someone else takes control of that URL, they control what every user of that add-in sees. Inside Outlook's trusted sidebar with full read
[01:33:33] and write access to their email. Microsoft blessed this manifest once in December of 2022. They never checked what the URL serves again. AgreeTo was a real product, an open-source meeting scheduling tool with a Chrome extension, 1,000 users, 4.71 star rating, 21 positive reviews,
[01:34:04] and an Outlook add-in published to Microsoft Store in December of 2022. The developer maintained an active GitHub repo, a full TypeScript mono repo with Microsoft Graph API integration, Google Calendar support, and Stripe billing. This was somebody building a business. Then it stopped. Development stopped. The last Chrome extension update shipped in May of 2023.
[01:34:33] The developer's domain, agreeto.app, expired. Google eventually removed the dead Chrome extension in February of 2025, but the Outlook add-in stayed listed in Microsoft's Office Store, still pointing to a Vercel URL that no longer belonged to anyone. At some point after the developer abandoned the project, their Vercel deployment was deleted. The subdomain
[01:35:02] outlook-one.vercel.app became claimable and the attacker grabbed it. They deployed a four-page phishing kit: a fake Microsoft sign-in page, a password collection page, an exfiltration script, and a redirect. That's all it took. They didn't submit anything to Microsoft. They weren't required to pass any review. They didn't create a store listing. The listing already existed.
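The takeover hinged on nothing fancier than a signed store manifest pointing at a hostname nobody owned anymore. As a hedged sketch, my own illustration rather than Koi's or Microsoft's tooling, a store operator could at least periodically flag listings whose source hosts no longer resolve:

```python
import socket
from urllib.parse import urlparse

def dangling_hosts(manifest_urls: list[str]) -> list[str]:
    """Return the add-in source URLs whose hostnames no longer resolve.

    A non-resolving host on a claimable platform domain (think *.vercel.app)
    is a takeover candidate. DNS failure is only a crude first-pass signal:
    a deleted deployment can still resolve via the platform's wildcard DNS
    while remaining claimable, so a serious audit needs provider-specific
    checks on top of this.
    """
    flagged = []
    for url in manifest_urls:
        host = urlparse(url).hostname
        try:
            socket.getaddrinfo(host, 443)  # does anything answer for this name?
        except socket.gaierror:
            flagged.append(url)
    return flagged
```

In AgreeTo's case the platform wildcard likely kept resolving after the deployment was deleted, which is exactly why an audit would also need to probe for the hosting platform's "deployment not found" response rather than trusting DNS alone.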
[01:35:31] Microsoft reviewed. Microsoft signed. Microsoft distributed. The attacker just claimed an orphaned domain and Microsoft's infrastructure did the rest. So, their description continues with all the details, but everyone gets the idea. Very poor design on Microsoft's part. I can understand Microsoft not wishing to re-vet and re-verify any
[01:36:01] change that an add-in developer might make, but they should have some mechanism for preventing abandoned and dangling URL domains from being taken over and repurposed. That's just dumb. In general, the design of the internet creates this problem, right? We've all encountered abandoned domains that have been acquired typically by low-end advertisers who snap up web domains that
[01:36:31] have expired and then they host their own content that nobody wants in the hope of generating revenue from advertisers who will pay for any traffic from anyone and they're not discriminating. But when domains that are used to host important content are abandoned, things can quickly take a turn for the worse. Years ago, we examined an instance where the domain of an important and super
[01:37:00] popular web browser JavaScript library had changed hands. Suddenly, an incredible number of web browsers were pulling a critical library from someone else. It should be enough to keep one up at night. And Leo, we're at an hour and a half. We got some listener feedback. Let's take a break and then we will plow into some feedback. No reason to stay up at night. Take a nap and I'll be
[01:37:30] back in a minute. Our show today brought to you by Zscaler, the world's largest cloud security platform. That should get your attention. We talk a lot about AI in all of our shows, of course. The potential rewards of AI in your business are, I think, too great to ignore. No business can afford not to at least explore AI. But the risks are there too. I mean, there's the issue of loss of
[01:38:00] sensitive data, even attacks against enterprise-managed AI. And of course, the bad guys love it. Generative AI increases opportunities for these threat actors. It helps them rapidly create impeccable phishing lures and write malicious code. They use AI to automate data extraction at speed. I mean, there's so many things to worry about. Your employees may even be using AI right now without your knowledge. And the problem is
[01:38:29] even if used carefully, it's possible to accidentally leak proprietary information. For instance, there were 1.3 million instances of social security numbers leaked to AI applications last year. ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations. So I think we can agree, it's time to rethink your organization's safe use of public and
[01:38:58] private AI. That's what Chad Pallet did. He's the acting CISO at BioIVT. They use Zscaler. He uses it. He says Zscaler helped them reduce their cyber premiums, reduce them by 50% and double their coverage. So that's like a 40, I don't know, 50% times, I don't know, it's like a big improvement, right? And really improved their controls too. Take a look at this video we got from Chad. With Zscaler, as long as you've got
[01:39:28] internet, you're good to go. A big part of the reason that we moved to a consolidated solution away from SD-WAN and VPN is to eliminate that lateral opportunity that people had and that opportunity for misdirection or open access to the network. It also was an opportunity for us to maintain and provide our remote users with a cafe style environment. Thank you, Zscaler. Zero Trust plus AI, you can safely adopt generative AI and private
[01:39:57] AI to boost productivity across your business. And you don't have to worry about it. Their Zero Trust Architecture plus AI helps you reduce the risks of AI-related data loss. It also protects against those AI attacks to guarantee greater productivity and compliance. Learn more at zscaler.com slash security. That's zscaler.com slash security. We thank them so much for their support of Security Now. Back to Steve.
[01:40:27] Okay, so I got an email from Walt Stoneburner who said, Steve, thank you for pointing out that quality tested code that adheres to functional specs is important for production level code. There's a big difference between throwing something together that seems to work but that you don't understand and experienced craftsmanship. It's not that we don't love coding. It's just a pleasant benefit.
[01:40:56] It's that we're aiming for correctness, speed, size, cost, maintainability, clarity, extensibility, expressiveness, modularity, portability, and a host of other factors that vibe coding does not do. Walt in Ashburn. Okay, so lots of our listeners are writing in saying, Steve, what do you think about all of this code generation by AI? And I've continued to
[01:41:26] think about it. So one thing I want to say is I know that Leo and I were talking about this, I think it was before we began recording today, that I want to always acknowledge that wherever we are today is not where we're going to be tomorrow. It's not where we were yesterday and I don't see any sign of this slowing down. I'm happy that so much resource, I mean, I'm happy for the hype because the hype
[01:41:55] means that a ton of resources are being put into something which I think has great potential. Okay, that said, where we are at the moment. I've been thinking about vibe coding and I think that the most unnerving aspect of vibe coding for me, a lifelong coder, is the idea that a bunch of code has been cast,
[01:42:24] which may do what I want and expect, but it also may not. There's every chance that in some subtle way it might misbehave. In some of the feedback I've received and shared in recent weeks, the tasks were relatively straightforward. So, you know, the various strange errors Claude code made were obvious to its user, you know, like that book author's name appearing twice in its field. He didn't know why,
[01:42:54] but he pointed it out to Claude and it says, oh yeah, sure enough, and then it fixes it. But this should give any true coder some pause to wonder what other far more subtle errors might be lurking in there that haven't been seen and pointed out to the code bot. And we would expect that there would be an exponentiating effect in errors as projects grew in size to
[01:43:23] create many more possible interactions and places where subtle errors might hide. And this is nothing against AI; we've seen this with human-written code too: any time a project's size and complexity grows, there's far more opportunity for mistakes. Okay, so, but having said that, then I challenge myself and say, okay, hold on there a second, Gibson. When you use a
[01:43:53] library authored by some third party, you didn't write that library, you don't know everything about its innards, you're taking on faith the fact that it operates correctly. And that's true. But the difference is that I'm able to assume, when I use code from a third party, that its non-AI author took pride
[01:44:23] in deliberately writing code, knowing what it did, and testing the functions of each and every part of it. I'm able to assume the library's correctness. This suggests that a unit testing approach to professional AI code generation might be the solution. Break the large project down into small pieces,
[01:44:52] then design and apply unit tests to verify the correct operation of each piece under every edge case and possible condition. Now, and this echoes some of the early formal code correctness verifications that programmers have been applying by hand for years, it's considered the only way to know for sure from a testing standpoint. So perhaps AI can similarly be asked to build large projects
[01:45:22] from smaller carefully tested pieces. There's one thing that worries me. When some aspect of my code is not doing what I expect, I'm able to quickly and easily zero in on the trouble and fix it because I wrote it in the first place. You know, it's my code. So I understand how it works and what it's supposed to be doing. But what happens when a
[01:45:51] non-coder detects that something is not working? Last year, when we began looking into Microsoft's early use of Copilot for fixing bugs, remember that instance where Copilot was shown a bug in some code where a parser was running off the end of the stack that it was parsing. Rather than fixing the underlying error, because a stack underflow should not have been
[01:46:21] possible at all, Copilot added some glue, an explicit test to prevent the pointer underflow. Okay, technically this repaired the problem by explicitly preventing the condition that revealed the bug. This is reminiscent of the old joke about the guy who goes to the doctor with a complaint.
[01:46:51] He explains to his doctor that his left shoulder hurts whenever he raises his arm in a certain way, and his doctor says, no problem, just don't raise your arm like that. Of course, the joke is that the symptom was suppressed, but the underlying problem was not addressed. In the case of the early Copilot experiment, an experienced Microsoft coder was overseeing the Copilot testing and questioned whether Copilot's fix might not be
[01:47:20] masking a subtler underlying problem. So I'll suggest that it's going to be very interesting to watch this whole vibe coding era play out. And I also think that we're at 1%, if that, of where we're going to be. I mean, I was among the first people to say very early on that code should be something that AI could master. And, you know, we're seeing
[01:47:50] very, very encouraging early results. But again, I said it last week: I author the code that I produce. There's no way I'm going to be selling code under my name that an AI produced. That just isn't for me. Denny Vandemail said, hello, Steve, longtime listener of Security Now and user of your web products and software.
[01:48:20] For many years, I've held the position that free VPN services are scary in general. Then I stumbled across Cloudflare's free tier of their WARP VPN for most devices. As you know, Cloudflare's IP address and DNS is 1.1.1.1. Cleverly, they obtained the domain under the O-N-E TLD, and their free tier VPN is located at O-N-E
[01:48:49] dot O-N-E dot O-N-E dot O-N-E. He said, it works well and can be installed on Apple macOS, iOS, Android, Windows, and Linux. Signed, Denny. And so I just wanted to thank Denny. I'd forgotten about Cloudflare's free WARP VPN offering, so I am glad for the reminder. Okay, so that's our feedback from our listeners. I want to talk about attestation and what
[01:49:19] it's about and the surprising and unplanned adventure that I had last week. Why don't we just do our last advertiser break, and then I won't break in the middle of this. That'd be great. Unplanned adventures. Let me clear my throat too if I've got something. Yeah, unplanned adventures are never good, I think. You always want to plan them ahead of time. This episode of Security Now brought to you by Hoxhunt. Oh, we're going to talk
[01:49:48] about this in our talk at Zero Trust World. What are you calling it? The problem is inside the house? The call is coming from inside the house. It's your users, right? That is often the biggest problem with security. Well, Hoxhunt is about helping your users help you. That's a good way to put it. As a security leader, you are paid, well-paid I hope, never enough I'm sure, to protect your company
[01:50:18] against cyber attacks, but it's getting harder. Well, I mean, there are more cyber attacks than ever, and if you are faced with this idea of keeping your employees from clicking malicious links, you've got to be terrified by the AI generated emails you're getting, the phishing emails. They're getting better and better. I was fooled. I was fooled. I shouldn't have been, but I was, a couple of weeks ago, we talked about it on the show. Every morning, as Lisa goes through her email, I go through my email,
[01:50:47] we compare phishing emails that we're getting. You know, it just terrifies me to know that our employees are getting the same email, and one wrong click and you're dead. The problem is, legacy one-size-fits-all awareness programs really don't stand a chance. At most, they're sending out four kind of generic trainings a year. Most employees hate them, they loathe them, they ignore them.
[01:51:17] And then, of course, you're sending out the tests, but when somebody actually clicks, they're forced into an embarrassing training program that feels like a punishment. That's no way to learn. That's why more and more organizations are trying Hoxhunt. Hoxhunt goes beyond security awareness and changes behaviors by rewarding good clicks and coaching away the bad. Whenever an employee suspects an email might be a scam, Hoxhunt will tell them instantly, providing a dopamine rush
[01:51:47] that gets your people to click, to learn, to protect your company. And you'll love it. As an admin, Hoxhunt makes it easy to automatically deliver phishing simulations any way you want: email, Slack, Teams. You can use AI to mimic the latest real-world attacks. The simulations are personalized to each employee based on department, location, and more, while instant micro-trainings solidify understanding and drive lasting safe behaviors. Because they're
[01:52:17] little, they're short, and they're fun. You can trigger gamified security awareness training that awards employees with stars and badges. I know that sounds dopey. I got to tell you, though, from my own experience, it's not. You feel good. I got a star! You know, you should go see the demo at the website, the stars fly up: you did it. That boosts completion rates, that ensures compliance. People learn when they're having a good time,
[01:52:47] when they're enjoying themselves, not when they're being punished with trainings that are boring and useless. Plus, you can choose from a huge library of customizable training packages so you can really fit it to your needs. You can even generate your own with AI. Hoxhunt has everything you need to run effective security training in one platform, meaning it's easy to measurably reduce your human cyber risk at scale. You don't have to take my word for it: there are over 3,000 user reviews on G2 that make Hoxhunt the top
[01:53:17] rated security training platform for the enterprise, including easiest to use, best results. It's also recognized as a customer's choice by Gartner, and thousands of companies like Qualcomm, AES, and Nokia use it to train millions of employees all over the globe. Visit hoxhunt.com slash security now to learn why modern secure companies are making the switch to Hoxhunt. Do it right now. Hoxhunt.com slash security now. H-O-X-H-U-N-T
[01:53:46] like Foxhunt with an H. Hoxhunt.com slash security now. It really works. It's great. And we thank them so much for their support of Security Now. Now, let's talk about attestation. As I've noted and warned, the month of March 2026, now a mere two weeks away, will see major changes in the identity certificate
[01:54:15] issuing industry. A few weeks ago, near the end of January, actually it was Monday, January 26th, being a customer of DigiCert, as I have been for a long time, I received a courtesy piece of email with the subject, Important Reminder, TLS SSL Certificate Lifetimes Changing February 24th, 2026. They said, hello, we're writing to remind you
[01:54:45] that starting February 24th, 2026, TLS SSL certificates issued through DigiCert CertCentral will have a maximum validity of 199 days, down from 397. Okay, so close to 200 versus close to 400. They're basically cutting certificate life in half. This change to shorter certificate lifetimes is an industry-wide requirement mandated by new CA/Browser Forum
[01:55:15] baseline requirements. While shorter lifetimes may require adjustments, they also reduce risk, blah, blah, blah, blah, blah, right, justifying all of this. So basically, they explain that they're cutting the lifetime of their certs in half and how this affects their customers. Everyone who's been following this podcast knows only too well the reasoning behind my feelings about this ridiculous and
[01:55:44] extremely inconvenient shortening of certificate lifetimes. And that's doubly so for code signing certificates, which unlike web server certificates, can only be stored in HSM hardware, making them completely impervious to remote theft. In this case, DigiCert is alerting their customers and giving us a one-month reminder of the upcoming reduction in web server
[01:56:14] authenticating TLS certificates. Maximum certificate lifetime will be dropping from one year plus some margin down to just six months plus some margin. One of the consequences of the industry's shortening certificate lifetime is the need to decouple certificate issuance from certificate qualification. In the bygone days, when certificates lasted for five or ten years, as they once did,
[01:56:44] the act of proving you were who you claimed to be would be part of the certificate renewal process. In applying for or renewing a certificate, you would need to do whatever the CA asked you to do to prove that you were you. But now that process has also been significantly fouled up, because you don't want to have to do that every time you need to renew a certificate. DigiCert's
[01:57:12] email says, for example, on February 24th, meaning a week, a little more than a week from now, OV, organization validation reuse periods will be shortened from 825 days to 397 days. On the same date, on February 24th, domain validation reuse periods will be shortened from 397 to 199.
[01:57:42] In other words, it will now be necessary to revalidate one's organization annually rather than only every two and a quarter years. It used to be 825 days, every two and a quarter years; now you've got to do it every year. Like, reprove who you are, who your organization is. Now, given that Let's Encrypt only offers domain validation certificates, not organization
[01:58:12] validation, which is all you need to connect to a server reliably, which I think makes total sense, and thus doesn't incur any of this nonsense, I have a difficult time understanding how the CAs are not putting themselves out of business with these kinds of practices. I suppose they plan to survive on all of the other various types of certificates which they issue and manage, such as for signing documents
[01:58:42] and such. And they'll continue to offer TLS web certificates sort of as a loss leader. You know, it's like, well, they just want to offer a full suite of products, so they will continue to offer certificates because they already do. In order to obtain the best price possible, I previously purchased TLS certification from DigiCert into 2028. In preparation for this March, GRC recently jumped through
[01:59:11] the various organization validation hoops, and at the start of last week, I reissued GRC's TLS, you know, GRC.com, our TLS domain certificates well in advance of DigiCert's February 24th deadline for a full year, you know, a year plus, 397 days because I didn't want anything to happen that might get in the way of that. You know,
[01:59:41] because with certification now having become so involved, you know, you've got to be standing by the phone when it rings and jump through all kinds of hoops, there's no telling when or why that process might fail or stall. I've been surprised in the past, so I wanted to give myself time to fix anything that might fail before that deadline. Now, the process, as it turns out this time, proceeded without a hitch. So, just because I can and because
[02:00:11] DigiCert has no problem with reissuing certificates, next Monday morning, the 23rd, the day before the drop dead date, just because I can on the last possible day to obtain a 397-day certificate, I'm going to do that. So, first part of this is, this should serve as a heads-up reminder to anyone who might similarly have better things to do right at this moment than figure out how to switch their
[02:00:41] certificates over to Let's Encrypt that, whoever their CA is, there will be an end-of-the-month halving of standard TLS certificate lifetimes from just over a year to just over half a year. And, of course, I will be moving over to Let's Encrypt and switching to domain validation instead of organization validation certs as soon as I can, as soon as it makes sense. So,
[02:01:11] I imagine that once my pre-purchase at DigiCert runs out (I've already bought certificates through 2028, so I might as well use them), I'll switch to Let's Encrypt. Okay, so that's the current status of the TLS web server certificate side. But my primary focus today is on another class of certificates I've recently discussed, specifically code signing. As I noted recently, the maximum
[02:01:40] lifetime of code signing certificates is also being cut, in this case by two-thirds, from a convenient three years down to a one-year maximum. Anyone who examines any of the software that's available from GRC will find that it's all signed with a DigiCert certificate. Sadly, that will no longer be true after this August when my current code signing certificate reaches the end of
[02:02:09] its three-year life. That's the EV certificate that I'm currently signing with. I would prefer to remain with DigiCert. You know, why not? They've been good to me. But the recent changes at DigiCert have overcome my own change inertia, which is big, for a code signing certificate authority. So long as there's any practical alternative, I will not
[02:02:39] countenance renting the privilege of signing my own code. I can't imagine using a cloud-based provider who places a limit on the number of signatures I'm able to make and charges per signature for any overage. And even when signing my code with my own customer-provided HSM, which is what I've been doing for the past
[02:03:08] three years, with DigiCert, the least expensive code signing plan where the user provides their own hardware, is advertised as $50 per month. But that's disingenuous as hell, since it's not possible to purchase it in monthly increments. It's only available with an auto-renewing annual commitment paid in advance. So that's $600 a year. And even that $600 per year presumably
[02:03:38] is subject to change at the next annual billing cycle, since there's no longer any way to pre-purchase future years to get a price commitment. So while I'm bitterly disappointed in DigiCert, to whom I felt a well-deserved loyalty for many years, I don't mean to single them out. The entire code signing certificate industry appears to be headed in the same direction, and it's not pretty. As I was looking around, I discovered
[02:04:08] that a number of other CAs are now reselling DigiCert for exactly the same pricing structure. I mean, it is DigiCert for all intents and purposes; you just go to them with a different domain and website. Scouting around, I found that IdenTrust will still sell a no-strings-attached three-year code signing certificate for $538. When placed into
[02:04:38] my own HSM, I'm able to use it. So, that's $179 a year, basically 30% of the cost of remaining with DigiCert, and that's assuming that DigiCert doesn't choose to further raise their prices before the next three years have passed. IdenTrust is well-known, so IdenTrust it was for me. And thus began the new adventure of obtaining
[02:05:07] a code signing certificate in 2026. Our illustrious CA/Browser Forum has added a surprising hoop through which anyone wishing to obtain a code signing certificate must now jump. The CA/Browser Forum requires the issuing certificate authority to obtain an attestation letter from an independent legally licensed attorney or CPA.
[02:05:37] You know, a CPA, a certified public accountant. This third-party individual must attest to having first-hand knowledge of the legitimacy of the corporation and its officers. Okay, now, since Gibson Research Corporation has been a tax-paying California corporation in good standing for 37 years, with a stable business location, a DNS domain name, and a well-known presence,
[02:06:07] I initially doubted the need for this attestation letter, which I've never needed before, nor been asked to provide, and IdenTrust's documentation was unclear about it. So one week ago today, last Tuesday, I created an account with IdenTrust and received a link to download a PDF packet of documents. I filled them out, omitting the clearly separate final three pages that
[02:06:37] contain the attestation letter details. I sent this off to IdenTrust in Utah via Federal Express overnight. Last Wednesday, the next morning, at 11:32 a.m., I received email confirmation of the forms having been received, and 35 minutes later, at 12:07 on Wednesday, I received notice with the subject Action Required: Code Signing Application,
[02:07:07] Attestation Letter Required. Oh, great. So the famous Merriam-Webster dictionary defines attestation as an act or instance of attesting something, such as a proving of the existence of something through evidence, or an official verification of something as true or authentic. So apparently, I needed to provide IdenTrust
[02:07:37] with an attestation letter. My lifelong personal and corporate attorney retired from practice a few years ago, and I'm sure that he allowed his license to lapse. I've been using the same CPA tax accountant firm for the past 40 years since 1984. So I asked my California licensed CPA if I could trouble him to use his license because he's
[02:08:06] got to fill out this form, which, you know, basically he's having to justify his own existence as a California licensed CPA to IdenTrust. I asked him if he would attest to Gibson Research Corporation's identity. He didn't hesitate to say yes, so Wednesday afternoon I emailed IdenTrust's three-page attestation letter document to him. The CA/Browser Forum requires either a digital
[02:08:36] signature using the attesting individual's personal certificate, which is not something that my CPA had, or what they termed a wet-signed original. My CPA signed and filled out the PDF, signing it in nice blue ink, so it was very clearly not from some printer. Thursday morning I dropped by his office, picked it up, then swung
[02:09:05] by FedEx to send the originally signed attestation letter to IdenTrust. Late the next morning, last Friday, I received notice that my identity had been established, and a few hours later, a code signing certificate was issued. So, success. My reason for sharing all this is to establish the proper and full context for understanding what has happened to us, to the entire PC industry,
[02:09:35] in response to the threat of malware. This is the nature of the cost and the burden that malware has inflicted upon the world. I dislike what I've just had to go through to obtain the privilege of adding a cryptographic signature to my code as the only available means of proving my identity as
[02:10:04] my code's signer. But as long as our systems are subject to malicious abuse from malicious software, I understand the need to have some unspoofable means of determining the source of any software we allow to run on our computers. As we've seen, all of the PC, desktop, and mobile platforms that are able to run third-party applications, with the notable
[02:10:34] exception of Linux, check and verify the cryptographic signature of any code they're being asked to run before they let their processors near it. So I understand the need for this, and I have no better idea. But what really rubs me the wrong way is the apparent profiteering by the industry's certificate authorities. I get it that the CA/Browser Forum's increasingly stringent
[02:11:04] policies have increased the verification burden upon those CAs, and thus the cost of offering this service. But even that is one-time and non-recurring. Once any new CA has figured out who I and Gibson Research Corporation are, that's not going to ever change, just as it never did for DigiCert. I must have been grandfathered in because I was never asked
[02:11:33] to do all this by DigiCert. These requirements were already in place when I obtained my most recent EV code signing certificate, and as I said, I never needed to go through any of this, presumably because I had already established a long multi-year relationship with them and I was grandfathered in. Looking over the current Baseline Requirements, as they're called in this document, which dictate the behavior
[02:12:03] of all certificate authorities that issue code signing certificates, it became clear that the standing and authenticity of my own CPA had also just been thoroughly researched. Today, you know, I'm calling this podcast Attestation because I want to share what I just learned about the extent of what this attestation means. It's a bit
[02:12:33] eye-opening. The document which governs the conduct of the world's certificate authorities is titled Baseline Requirements for the Issuance and Management of Publicly Trusted Code Signing Certificates, Version 3.8.0. Now, everyone should keep in mind that these requirements are applicable to anyone and everyone who wishes to create code that will be signed and widely published by any
[02:13:02] platform. Trusted code requires that it be signed and timestamped by an unexpired code signing certificate. As we know, unexpired at the time of the signing. Near the top of the baseline requirements is a section of definitions. Under attestation letter, the document says, a letter attesting that subject information is correct,
[02:13:32] written by an accountant, lawyer, government official, or other reliable third party customarily relied upon for such information. Section 3.2.2.1. Authentication of organization identity for non-EV. Remember, I'm not going for EV again because that's just throwing money away at this point. I'm seeing some language on the internet that
[02:14:01] says that the Windows SmartScreen filter gives you immediate benefit if you're using an EV cert. I think that may just be inertia from years past because Microsoft is reportedly, and we've talked about this, no longer giving EV any extra validation whatsoever. So all of this is the minimal verification
[02:14:30] and certification for a code signing certificate. I don't even want to think about what would be required to establish extended validation with a new certificate authority. So that section 3.2.2.1 says, prior to issuing a code signing certificate to an organizational applicant, the CA must verify the subject's legal identity, including any DBA, you know, doing business as,
[02:14:59] proposed for inclusion in a certificate in accordance with section 3.2.2.1.1 under identity and 3.2.2.1.2 under DBA trade name. The CA must also obtain, whenever applicable, a specific registration identifier assigned to the applicant by a government agency in the jurisdiction of the applicant's legal creation, existence,
[02:15:29] or recognition. That was point one. Point two, verify the subject's address in accordance with section 3.2.2.1.1 under identity. Third, verify the certificate requester's authority to request a code signing certificate and the authenticity of the certificate request using a reliable method of communication. That's in all caps, so that's an official term. Reliable
[02:15:59] method of communication in accordance with section 3.2.5, validation of authority. And finally, point four, if the subject's or subject's affiliate's, parent company's, or subsidiary company's date of formation is less than three years prior to the date of the certificate request, thank goodness mine's 37 years, verify the identity of
[02:16:28] the certificate requester. The method used to verify the identity of the certificate requester shall be per section 3.2.3.1 individual identity verification. Okay, so if the corporate entity is less than three years old, then the identity of the requester is also verified. There were several references to section 3.2.2.1.1 under identity, so that definitely comes into play. It says,
[02:16:57] if the subject identity information is to include the name or address of an organization, and it has to, the CA shall verify the identity and address of the organization and that the address is the applicant's address of existence or operation. The CA shall verify the identity and address of the applicant using documentation provided by or through communication with at least one of the following, a government agency in
[02:17:27] the jurisdiction of the applicant's legal creation, existence, or recognition, a third-party database that's periodically updated and considered a reliable data source, a site visit by the CA or a third party who's acting as an agent for the CA, or an attestation letter. Thank goodness. The CA may use the same documentation or communication described in one through four above to verify the applicant's
[02:17:57] identity and address. Alternatively, the CA may verify the address of the applicant, but not the identity of the applicant using a utility bill, bank statement, credit card statement, government-issued tax document, or other form of identification that the CA determines to be reliable. I should note that it has become very difficult for individuals to obtain code signing certificates. It's not impossible. There is something known as an
[02:18:26] IV certificate, an individual validation certificate, but not all CAs offer them. Only a couple do. So how do individuals confirm their identity? The baseline requirements assert that a principal individual associated with the business identity must be validated, that is, I, who represent Gibson Research Corporation as its president and CEO, must be validated in a face-to-face setting.
[02:18:55] The CA may rely upon a face-to-face validation of the principal individual performed by the registration agency, provided that the CA has evaluated the validation procedure and concluded that it satisfies the requirements of the guidelines for face-to-face validation procedures. Okay, and I'm going to skip a few paragraphs of this mind-numbing boilerplate. A personal statement has to be provided and signed, providing a
[02:19:25] full name or names by which a person is or has been previously known, residential address at which she can be located, date of birth, and an affirmation that all information contained in the certificate request is true and correct. A current signed government-issued identification document that includes a photo of the individual and is signed by the individual, such as a passport, driver's license, personal identification card, concealed
[02:19:55] weapons permit, or military ID, at least two secondary documentary evidences to establish his or her identity that include the name of the individual, one of which must be from a financial institution. Acceptable financial institution documents include a major credit card provided that it contains an expiration date and has not expired, a debit card from a regulated financial institution provided that it contains an expiration date and
[02:20:25] has not expired, a mortgage statement from a recognizable lender that is less than six months old, a bank statement from a regulated financial institution that is less than six months old. Acceptable non-financial documents, and it goes on like that. I mean, wow. And then, the third-party validator performing the face-to-face validation must attest to the signing of the personal statement and the identity of the
[02:20:54] signer and identify the original vetting documents used to perform the identification. In addition, the third-party validator must attest on a copy of the current signed government-issued photo identification document that it is a full, true, and accurate reproduction of the original. Now, of course, the certificate authority doesn't know who this
[02:21:23] supposed third-party validator is, right? So, the baseline requirements state about the third-party validator, the CA must independently verify that the third-party validator is a legally qualified Latin notary, which is a special high-end type of notary whose statements aren't questioned. Do they speak in Latin? No, it's weird. I didn't know what it was either, so I did some research, and it is like
[02:21:52] a super special class of notary, or a regular notary or legal equivalent in the applicant's jurisdiction, a lawyer or accountant in the jurisdiction of the individual's residency, and that the third-party validator actually did perform the services and did attest to the signatures of the individual. And that leads me to the final piece I want to share, this far longer and detailed document, which I'm going
[02:22:22] to skip most of, under verification of attestation. The baseline requirements say the CA must confirm the authenticity of the attestation and vetting documents, and that elaborates. Acceptable methods of establishing the foregoing requirements for vetting documents are, the CA must verify the professional status of the third-party validator, meaning my CPA,
[02:22:51] by directly contacting the authority responsible for registering or licensing such third-party validators in the applicable jurisdiction. Second, certification: the third-party validator must submit a statement to the certificate authority which attests that they obtained the vetting documents submitted to the CA for the individual during a face-to-face meeting with the individual.
[02:23:22] In my case, that happened between me and my long-standing CPA last Thursday. And finally, three, the CA must confirm the authenticity of the vetting documents received from the third-party validator. The CA must make a telephone call to the third-party validator and obtain confirmation from them or their assistant that they performed the face-to-face
[02:23:52] validation. The CA may rely upon self-reported information obtained from the third-party validator for the sole purpose of performing this verification process. Oh, whoa. Now, if all of that leaves you feeling somewhat dizzy, you're not alone. I almost feel guilty, Leo, that I was able to pass through that verification gauntlet.
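For what it's worth, the whole gauntlet reduces to a short chain of independent checks, which can be caricatured in a few lines of code. Every name below is invented for illustration; this is a conceptual model of the Baseline Requirements' logic, not any CA's actual system:

```python
# The CA accepts the applicant's identity only if a licensed third-party
# validator attests to it, the attestation followed a face-to-face meeting,
# and the CA confirmed the attestation with the validator by phone.

LICENSED_VALIDATORS = {"ca-cpa-12345"}  # stand-in for a state licensing-board lookup

def ca_accepts(applicant: str,
               attested_by: str,
               met_face_to_face: bool,
               confirmed_by_phone: bool) -> bool:
    """CA-side decision: every link in the chain must hold."""
    validator_licensed = attested_by in LICENSED_VALIDATORS
    return validator_licensed and met_face_to_face and confirmed_by_phone

# A legitimate application passes...
assert ca_accepts("Gibson Research Corporation", "ca-cpa-12345", True, True)
# ...and fails if any single link is missing.
assert not ca_accepts("Shady LLC", "unlicensed-notary", True, True)
```

The point of the structure is that no single party is trusted alone: the applicant vouches for nothing, the validator vouches for the applicant, and the licensing authority vouches for the validator.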
[02:24:21] You're one of the proud, the verified. I'm somewhat surprised that I was accepted by IdenTrust without first agreeing to a full-body cavity search. Although, I'm pretty sure that I would need a new CPA if that happened. So, okay, stepping back from all of those gory details for a moment, think about what all this means, why this was done, and what it does and
[02:24:51] does not achieve in return for all this effort. Our industry is desperately trying to get control of the malware scourge. Among other things, we're seeing attacks at every stage of the software creation process. Source code repositories are being attacked and poisoned. Malicious libraries are given off-by-one character names
[02:25:21] in the hope that a developer will introduce a typo at just the right place to invoke the typosquatted library to devastating effect. Even AI has been used to invoke a malicious library as a result of a weaponized hallucination. And, you know, the most frustrating part of this in the context of today's discussion of code signing is that any of these
[02:25:50] or similar supply chain attacks would result in compiled code that is then code signed in good faith by its publisher and accepted by any commercial OS platform despite inadvertently incorporating that infiltrated malware. In other words, it's not as if blessing code with a signature is able to confer
[02:26:20] any assurance about the behavior of the code that's been signed. It's still got bugs. It might even be malicious. The only thing signing is able to do is assert that not a single bit of the signed code has been altered since its signing, as well as the identity of the signer, as it was known to the certificate authority that issued the signer's certificate.
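That single-bit guarantee comes from the digest step at the heart of any signature scheme. Here's a stdlib-only sketch of just that step; in a real code signing setup, this digest is then wrapped in an RSA or ECDSA signature inside a container format such as Authenticode:

```python
import hashlib

# Stand-in for the bytes of a compiled program.
binary = bytearray(b"...imagine these are the bytes of a compiled program...")
digest_at_signing = hashlib.sha256(binary).hexdigest()

# Flip a single bit anywhere in the binary...
binary[0] ^= 0x01

# ...and the digest no longer matches what the signature covered.
tampered = hashlib.sha256(binary).hexdigest()
assert tampered != digest_at_signing
print("one flipped bit -> signature verification fails")
```

Note what this does and doesn't buy you: the digest changes for any modification after signing, but it says nothing about whether the bytes were trustworthy in the first place.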
[02:26:49] But that said, we're certainly far better off occupying a world where entities who are not interested in deliberately creating malware are able to sign their code and have their unspoofable signatures recognized by the guardians of the platforms we're all using. So what's the point of all this seemingly over-the-top
[02:27:19] attestation? Well, with the world's major commercial platforms having become completely unwilling to run any software that's unsigned, Linux excepted, the bad guys must somehow arrange to get their malware signed, right? One avenue we've seen is to attack the software supply chain in the hope of being incorporated into otherwise legitimate software
[02:27:49] under the code signing signature of some unsuspecting developer. The other much more powerful solution that's available to the bad guys is the direct full frontal approach of obtaining their own legitimate code signing certificate from one of the many trusted certificate authorities. The blockade that now prevents the major commercial OS platforms from executing
[02:28:19] any code that has not been signed has created huge pressure to spoof corporate identities or just make up, synthesize a corporate identity in order to trick certificate authorities into issuing valid code signing certificates to explicitly malicious parties. Fraudulent code signing certificates are a real problem. This explains
[02:28:49] why today it's the reputation of the signing certificate that matters, not just its existence. The CA/Browser Forum understands that what they just put me through was inconvenient as all heck and a pain in the butt. But what choice do we have? They cannot simply take the word of anyone who may be able to recite
[02:29:19] a Boy Scout is trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent. No, that doesn't cut it. They clearly need another trust anchor and that anchor is a licensed attorney or CPA who will be willing to put their own reputation and license on the line to substantiate and attest to the identity
[02:29:49] of the code signing certificate applicant. Given what I just went through, anyone who may have forgotten or may have been putting off obtaining a three-year code signing certificate has about 10 days from today to get that done. So, if you want to get a certificate good for three years, you can from IdenTrust, and I was very impressed with how quickly
[02:30:18] they moved. If you are attempting to establish you or your company's identity with a new certificate authority like IdenTrust, take the need for an attestation letter from an attorney or CPA to heart. It may save you, as it would have for me, another couple of days that you might not have remaining, because you want to squeak into February. And I would expect the code signing certificate authorities to be a bit busy as these
[02:30:48] last days of February expire and three year certificate availability winds down. And remember, if you want to avoid cloud-based pay-as-you-go or limited quantity code signing, having your own signing hardware is now a requirement. And if you want to get that done now, you'll be able to use it whatever you do for three years. I'm glad I'm doing that. I want to get
[02:31:18] this new certificate from IdenTrust since my current certificate with DigiCert lasts through August, at which time I will not be getting another one from them. My plan is to dual-sign my software so that the world has a chance to see this new certificate but also sees that it's cosigned with the, well, now two-and-a-half-year-old
[02:31:47] DigiCert code signing certificate, and then the DigiCert certificate will drop off after it expires. So, boy, I mean, it is a pain in the butt. It doesn't feel very robust, I have to say. No, it's not. I mean, you're right, you could get somebody to fake a CPA or fake an attorney, and, you know, I mean, but what do we do? There's got to be a better way to do this. There just has to be. It just feels like they're not
[02:32:17] improving it; they're just kind of layering stuff on. Well, and the price. I mean, on one hand, okay, they clearly had to go jump through some hoops, but boy, are they making it expensive just to produce code. Yeah, and I feel like that's the point, is to make money off of you producing code, unfortunately. I mean, I like DigiCert, but as I looked around, I found that GlobalSign, and there were like four other CAs.
[02:32:46] I've used GlobalSign for S/MIME certs. Yeah, there are others. Yeah, and ACME is not a solution to this, to any of this, not for code signing. ACME, none of this can be used for code. No, not for code, because ACME is explicitly saying, I control this domain, right? Code signing is, I am this identity. So it's, I am me. Exactly. Yeah, authentication is so hard. But maybe, you know, Sam Altman's got the right idea
[02:33:16] with the orb, the iris-scanning orb. I mean, he recognizes this is an issue. This is going to be an issue in the new world. How do you prove you are who you say you are? Well, and think about it. I mean, a global network decouples you from identity. We've been heralding that as the great liberation. Yes, it frees us. It's, oh my god, it's, you know, we get to be autonomous, and you can be a dog
[02:33:46] if you want to be. Unfortunately, there are instances where it really does matter. Bad guys will abuse that very anonymity, and so it turns out clamping down on it is really hard. Yeah. Neal Stephenson writes about this in his book Fall; or, Dodge in Hell, and what he talks about is having kind of a variety of identities you can assume. You have your real identity, which you can prove, but in
[02:34:16] order to allow anonymity and flexibility and autonomy, you also have other identities that are spawned from your real identity but can't be connected back to your real identity. And I think that we'll end up with something like that. It might be tied to some sort of TPM hardware or something, but we'll end up with something. A chip implanted in your brain at birth or something. It's gotta be. We gotta solve it. Yeah, it's a big, it's a big issue in the
[02:34:47] well, and we're running smack into it with the whole age restriction deal. Exactly. All of our politicians have suddenly decided, well, we don't know how you're gonna solve it, but you know, you guys are smart, nerd harder. So, yeah, yeah, you'll figure it out. What an interesting subject. I feel like authentication is one of the most interesting and thorny problems we have, and it's a necessity we need to solve, which is why I spent seven years on SQRL. That's right. You know, it's really worth
[02:35:16] fixing. Yeah, that's the guy, that's the SQRL guy, Steve Gibson. He's at GRC.com. You might want to check it out. There's a lot of great stuff at GRC. Of course, the two programs he sells, that's how he makes his living: SpinRite, the world's best mass storage maintenance, recovery, and performance-enhancing utility, that's at GRC.com. But there's also his brand new DNS Benchmark Pro, which is inexpensive, $9.99, and that's there, and you
[02:35:46] should probably own both of them. Check it out. While you're there, you can get your email validated and sign up for his two newsletters, or two email mailing lists. Go to grc.com/email. You provide your email address; he, through the magic of something, will verify that you are at that address and not a spammer, and, well, then you can send him email, okay, from that address. So that's good. And then below it, there are
[02:36:15] two unchecked boxes, one for the newsletter for this show, the weekly show notes, which is definitely worth it. Everybody should subscribe to that. That's really great. You'll get them a day or two ahead of the show, and you can read along as you listen and so forth. We are 17 subscribers shy of 20,000 Security Now subscribers. That is really cool. Getting there. And it's a book every, I mean, this is a free magazine, really. It is. It's a free magazine, 21 pages of great stuff
[02:36:45] this week. The other email list is just his announcement list, and he doesn't have many announcements; in fact, he really basically doesn't ever use it. Consultant licenses, and so that will be happening soon. Cool. And that'll be announced, probably.
[02:37:15] Let's see, what else? Lots of other things there, a lot of free utilities, a lot of information. It's really great. And the podcast, of course. All the versions Steve hosts are unique. He has a 16-kilobit audio, admittedly not the highest fidelity, but it is small, and that's its real virtue. There is a 64-kilobit audio that sounds fine. It's still smaller than what we offer, so maybe go there to get that. He also has the show notes for download, so you don't have to subscribe to the newsletter; you can just download it.
[02:37:46] He has transcripts. Elaine Farris, an actual human person, writes those every week. Takes a couple of days after the show; that's how you know it's an actual human person. But those are really
[02:38:17] well, our version is different. We have 128-kilobit audio. Don't ask. We also have video, which is nice if you want to see Steve's mustache. There is video at the YouTube site dedicated to Security Now. That's nice. We have, I was just looking, you have 76,000 subscribers to your YouTube channel, which
[02:38:47] I don't really understand how YouTube works. There's a bell, there's a thing, I don't know, you get notifications, I don't know. We also have a TWiT channel, youtube.com/twit. You can do the same thing there and get notifications when we go live with shows, that kind of thing. We do go live every Tuesday, right after MacBreak Weekly. Supposed to be 1:30; sometimes a little late. We were late today, I'm sorry. That's 1:30 Pacific, 4:30 Eastern, 21:30 UTC. We stream live in the Club TWiT Discord. I hope you're a club member.
[02:39:17] If you're not, go to twit.tv/clubtwit. It's 10 bucks a month. Not free, but you get ad-free versions of all the shows and lots of extra content. And if you're a Club TWiT member, you can also watch this show in the Club TWiT Discord, a great place to hang out. Lots of smart people have been having great conversations in there. You can also watch on YouTube, Twitch, X, Facebook, LinkedIn, and Kick. So we stream on six, seven different platforms every Tuesday.
[02:39:47] And the best way to get it, of course: subscribe in your favorite podcast client. That way you're going to get it automatically, and you can listen whenever you want. You get the audio, get the video, whatever. Thank you, Steve. Have a great week, and we will see you next time, right here on Security Now. I'll be back. Bye. Hello, everybody, Leo Laporte here. You know what a great gift would be, whether for the holidays or just any time, a birthday? A membership in Club TWiT. If you have a TWiT listener in your
[02:40:17] family, somebody who enjoys our programming, and you want to give them a nice gift and support what we do, visit twit.tv/clubtwit. They'll really appreciate it. And so thank you. twit.tv/clubtwit. Security Now.
