AI-powered web browsers are hitting the scene fast, but Steve and Leo unpack why these smart assistants could usher in an era of security chaos most users aren't ready for. Brace yourself for the wild risks, real-world scams, and the privacy questions no one else is asking.
- Secret radios discovered in Chinese-made busses.
- Edge & Chrome introduce LLM-based "scareware" blocking.
- A perfect example of what scareware blocking hopes to prevent.
- Aardvark: OpenAI's new vulnerability scanner for code.
- Italy to require age verification from 48 specific sites.
- Russia to require the use of only Russian software within Russia.
- Russia further clamping down on non-MAX Telegram and WhatsApp messaging.
- 187 new malicious NPM packages. Could AI help with that?
- BadCandy malware has infiltrated Australian Cisco routers.
- Github's 2025 report with the dominance of TypeScript.
- Windows 11 gets new extra-secure Admin Protection feature.
- A bunch of interesting feedback and listener thoughts.
- And why the new AI-driven web browsers may be bringing a whole new world of hurt
Show Notes: https://www.grc.com/sn/SN-1050-Notes.pdf
Hosts: Steve Gibson and Leo Laporte
Download or subscribe to Security Now at https://twit.tv/shows/security-now.
You can submit a question to Security Now at the GRC Feedback Page.
For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6.
Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors:
[00:00:00] It's time for Security Now. Our guru is here, Steve Gibson. We'll talk about secret radios discovered in buses made in China. I wonder why. Bad candy infecting your Cisco router and why you may not want to use one of those new AI browsers. It's all coming up next on Security Now. Podcasts you love. From people you trust. This is TWIT.
[00:00:32] This is Security Now with Steve Gibson. Episode 1050, recorded Tuesday, November 4th, 2025. Here come the AI browsers. It's time for Security Now. Yay! All week long we wait for Tuesday and the advent is kind of like a little weekly advent calendar of Steve Gibson. Open the door and he pops out. Hi Steve! Hello Leo.
[00:01:00] I have advent calendars in my mind because as you know, at December 1st we have to and I'm and I always like to buy advent calendars as gifts. So there's a lot of different things. I gave my son a hot sauce calendar one year. New hot, new little hot sauce bottle. Well, and that apparently worked out well for him. Yes, it did. It did. It did. What is coming up on Security Now this week?
[00:01:22] So, uh, our title for this, uh, first episode of November, as we've moved our clocks back, I figured out that. Oh, and by the way, to answer your question about why I have manually settable clocks, it's that, uh, like we like, we have a big dim red LED clock on the bedside table. Yes. And I mean, I guess I could get something that checked in with WWDV, whatever that was. Yeah. Uh, but it's risky.
[00:01:52] You know, I just bought a clock. That's a wifi clock. And I realized, oh crap, who makes that? How do they keep it up to date? It's on my network. Yeah. So I understand why you would want a clock that you set manually. Yeah. And it's not that big a problem to push a button every six months. So anyway, uh, we're all re synchronized. We're on a new, uh, daylight. We're not, we're off of daylight savings time on standard time, standard time.
[00:02:17] And, uh, uh, which Benito, who is our, is managing the backend of this is glad for, because he's in the Philippines. Where this time doesn't change. And he got to sleep in an extra hour where it's like 3 AM or something. I mean, it's like, don't even just don't mention it. No, no. Okay.
[00:02:37] So we're going to talk on episode 1050, uh, about concerns, which are already being raised by security researchers about this kind of obvious thing. That's that is already started to happen, which is the creation of AI enabled browsers.
[00:02:59] And this is not like a sticky tab in Firefox or an extension add on that lets you easily get the chat GPT or something. This is like a web browser from open AI. And it's like, okay, uh, what does that mean for us? So today's title is here come the AI browsers. Um, and there's some interesting information about that, including in, in pursuing the research.
[00:03:29] I found the guy who coined the term prompt injection and he very clearly explains like what the problem is. So anyway, like, I think some interesting stuff for us, but we're going to talk about new secret radios being discovered in buses purchased from China. Uh, oh yeah. It's like, where aren't they is the question.
[00:03:56] Uh, both edge and Chrome have introduced LLM based scare wear blocking. And we act just by pure coincidence. I also stumbled upon a perfect example of what it is. They want to block in a real life instance, a experience by a Canadian, an elderly Canadian couple, uh, that was covered in, in, uh, Canada's TV CTV. We're going to look at that.
[00:04:26] Also, we've got aardvark, which is, I don't know why you would name anything aardvark, but you know, as I like to say, I guess all the good names were taken, uh, which is open AI's new vulnerability scanner, which is coming, uh, currently in beta. Uh, did you see the name though of, uh, of Google's vulnerability, uh, scanner? It's called the big sleep, which is even arguably worse than aardvark.
[00:04:55] Wow. I guess I beat them to it. Yeah. We got, we got aardvark. You can't have it. So, okay. We'll just go with the big sleep. Oh my God. What the heck? This is what too much money will do to you. Maybe. Or all the good names are taken. You might not be. That might be. Also Italy. Yeah. It's going to be requiring age verification from 48 specific sites. So they're just lining up, uh, Russia get this.
[00:05:23] Good luck with this one is going to require the use of only Russian software within Russia. Okay. Uh, I know. And they're further clamping down on non max max being their state sponsored messaging system. They're clamping down in some interesting ways on telegram and what's app in order to make that more problematical, uh, problematic.
[00:05:48] Anyway, uh, we've also got 187 new malicious NPM packages. And I wonder thinking about aardvark, whether AI might be able to help with that. We'll take a look at the problems there. Also, we've got bad candy malware infiltrating Australian Cisco routers and sad tale of woe there. Uh, GitHub has released their 2025 report with a surprising. Amazing bit of information.
[00:06:19] Python has been kicked out of the number one spot. Oh, by of, of code. By common lisp. Get no, no. Sorry, Leo. Sorry, Leo. I can dream. You can keep it alive with advent of code coming next month. I will. Uh, windows 11 is getting a, a, uh, just got your people who have 11, either 24 H two or 25 H two.
[00:06:44] Can get this a new extra secure admin protection feature announced fully a year ago. Finally available. We'll talk about that. We've also got a bunch of interesting feedback and listener thoughts before looking at why the new AI driven web browsers might be bringing a whole new world of hurt to people who don't know any better. So, you know, it just makes sense. Doesn't it? It does. It does.
[00:07:14] It is like, who wouldn't want an AI browser? That sound mean AI is obviously wonderful. So let's just make a browser that has it built in. Apparently you're able to go. Someone, someone said you can chat with your tabs. It was like, oh, great. That's right. I want to do. I want to chat with my tabs. Can't wait to chat with my tabs. Oh, yum. Um, uh, all right.
[00:07:39] Well, I, I think we suspected this was an issue, but I, I look forward to finding out exactly. Exactly. The painful details are why we tune in every week. And we do have a great picture of the week. So yes. Yeah. Your tax dollars are harder. So yeah, uh, we will talk about all of that in moments. Plus we have the picture of the week, which I have not. I have carefully, I'm in a soundproof booth. I've not looked at, but I will look at it with you together.
[00:08:07] And we can be surprised together in just a moment. But first a word from our sponsor and one we love dearly. Bitwarden, the trusted leader in passwords, pass keys, and secrets management. Bitwarden is consistently ranked number one in user satisfaction by G2 and software reviews. More than 10 million users across 180 countries, over 50,000 businesses.
[00:08:34] And I just saw, was it wired in their review of password managers? Just said the password manager best for most people. So add that to the list of kudos, plenty of them. Certainly the best for me. I've been using Bitwarden for years now, and it is a must as far as I'm concerned. You want to know why? Well, I think, you know, if you listen to the show, but I will give you a stat that might still shock you more than 19 billion passwords are available.
[00:09:02] Billion with a B on the dark web right now. That's not the bad part. Of course, there's probably a lot of passwords. Yeah, I guess they leak out, but here's the bad part. Out of 19 billion passwords, 94% have been reused across accounts. But the problem is, if you reuse a password, as you well know, you are in trouble because the bad guys get your email and password from these data dumps on the dark web.
[00:09:30] And now they try it on every single account they can find. And if you've reused it, chances are that password is going to unlock a few accounts, more than a few. Maybe even ones you really don't want unlocked, like your bank. Info stealer malware threats. Another way bad guys get this information surged by 500% in the last year. Modern hackers, they don't hack accounts.
[00:09:55] They just buy these passwords or steal them and log in with reused passwords and they can get in everywhere. So there's a way now to avoid this. And you may say, well, I have good hygiene, but do your employees, do the people at your business, you count on them not to reuse passwords? Bitwarden now has something new. It's called Bitwarden Access Intelligence.
[00:10:16] It's a new enterprise feature that lets enterprises proactively defend against these kinds of internal credential risks, plus external phishing threats. So there's two core functionalities to this new Bitwarden Access Intelligence. There's risk insights, which lets IT teams identify, prioritize and remediate at risk credentials. So you can see, oh, this password's been, you know, leaked and get rid of it, remediated immediately.
[00:10:46] You also have an advanced phishing blocker, which everybody needs, which alerts and redirects users away from known phishing sites. It does it in real time using a continuously updated open source block list of malicious domains. No, of course, it's not going to stop everything. But even if you stop 50, 60, 70%, you're way ahead of the game, right? Another thing that is so good and Bitwarden supports so well, passwordless, huge passwordless authentication is transforming digital security.
[00:11:15] Bitwarden's at the forefront. The minute they could, they started offering support for pass keys. I've been using Bitwarden's pass keys. It's so much better than, you know, having the pass key attached to a specific device like your phone, because everywhere I use Bitwarden and that's everywhere, I have access to my pass keys. I use it now for Amazon, for Google, both my Google accounts, Workspace and my personal account use pass keys. I use it so much. Microsoft has a new feature. You turn off passwords entirely. I don't need it anymore. Passwordless is incredible.
[00:11:45] Plus, Bitwarden has always supported FIDO2 standards, which really strengthened and simplified the login experience, both with pass keys and with hardware keys, right? Bitwarden pass key support includes enhanced pass key support across web, desktop and mobile platforms, which means you can store and sync pass keys and encrypted. So you're not sending the pass keys out in the clear and encrypted, but that means they're on every device. Every device.
[00:12:14] Two-step login with FIDO2 WebAuthn allows hardware key authentication as a second factor or even a primary method for supported lock-ins. By the way, this is a really good way to defeat phishing because FIDO2 is smart about domains. So you're not going to, you know, your employees might be fooled as I was some years ago by a website that's T-V-V-I-T-T-E-R.
[00:12:43] Looks just like Twitter, but it's V-V, but the FIDO2 pass keys aren't. They go, no, that's not Twitter. I'm not logging in. I'm not giving you a second factor. Biometric unlock enhancements, which are now ubiquitous on mobile and desktop, really help too. They streamline access without compromising security. I've turned that on for Bitwarden on every device. So I don't even have to remember my master password anymore.
[00:13:07] I use my, I mean, I do, of course, and I have it, but I use my fingerprint or face virtually everywhere. Improved autofill experiences for pass keys, for cards, for identities, all designed to work seamlessly across modern browsers and apps. That's Bitwarden. Bitwarden setup is easy. It takes a few minutes. If you're thinking about moving to Bitwarden in your business, it imports from most existing password management solutions. So it is as fluent, as simple as possibly can be.
[00:13:36] And here's, to me, the most important reason I switched to Bitwarden. It's open source, GPL licensed. That's if, when I think encryption, I don't want closed source encryption. I don't want a backdoor. I want that open source encryption so the code can be inspected by myself or, better yet, by somebody who knows what they're doing. Bitwarden does that. It's all open source. It can be regularly audited. In fact, they do have regular audits by third-party experts.
[00:14:05] It meets SOC 2, Type 2, GDPR, HIPAA, CCPA. It's compliant. It's ISO 27001-2002 certified. It's done right. Get started today with Bitwarden's free trial of a Teams or Enterprise plan. Or, if you're an individual, free for life. Or get started for free across all devices. It is an individual user at bitwarden.com slash twit. That's bitwarden.com slash twit. Use it.
[00:14:35] Start using it in your business, too. Because you know what? You may have good hygiene. Pretty much guarantee your employees. Do not. All right, Steve. Let me add my laptop camera so that you can... Oops. Please connect more video input devices. Well, what is... Let's see. Yeah, I can do it. Go ahead. So, this picture of the week was actually taken by one of our listeners.
[00:15:03] You can see how it's very crisp. You know, it's a full-resolution photo taken from the driver's seat as he was driving down the street. And he thought of us. He thought, okay, Steve's going to love this one. Yeah, you better describe this. That's hysterical. You know... Who puts these up? You know? Don't these guys look?
[00:15:33] I have a theory. But first, I'll explain what it is here. So, what we have is we have this beautiful road. It looks like it's in the Midwest somewhere. Is it pretty? Autumn. It's just beautiful. Yeah. And we want to make sure that no passing cars blow too many leaves off the trees. We want it to look picturesque and not denuded. So, we got to keep people's speed under control.
[00:15:58] The sign that is closest to the car as it's driving down the street is very clear. It says, speed limit 25. And then it adds parenthetically on a sign below unless otherwise posted. But, like, 10 yards further down, we see equally clearly a speed limit sign that says 45.
[00:16:28] So, okay. I guess this is, you know, taken together. 25 unless we tell you otherwise. Oh, and we're telling you otherwise. So, my only... The only way I could see this makes it essentially, you know, is if someone in purchasing bought speed limit 25 signs by mistake. We have extras. They got like, uh-oh. We got, you know, I'm in trouble.
[00:16:55] I got 1,000 excess speed limit 25 signs. We got to do something with them. Except we can't lower the speed to 25 everywhere because that would be bad. So, oh, we'll get some unless otherwise posted signs. We'll add those underneath the speed limit 25 signs and stick them up everywhere just in front of the actual speed limit 45 signs. And we're covered.
[00:17:23] It looks like we've got, like, some big plan here. Everything's under control. And the effect is to nullify the speed limit 25 signs, which we have. So, we had to use them. Otherwise, we would have gotten in trouble. I have another theory. Because I noticed the 45 mile an eye sign is considerably lower than the 25. I think... True.
[00:17:45] I think high school students came and put that sign in as a joke. It's got to be. Right? That's the kind of thing a kid would do. So, you think it looks less official because it's not at the normal height of... Yeah. It's shorter than the other ones. Actually, you're right. It could be a spoof. Although, it's not a hand-drawn sign. It's an official sign. So, they wouldn't have to steal it. You know the kids steal it.
[00:18:14] We don't have a street sign on our street. And I know why. Some kids got it in his bedroom. In his bedroom. Yes. Dorm rooms and bedrooms have been famous first for street signs. Yeah. Darren Oakey in our Discord says, I want to produce a sign that says, unless you're in a hurry. And we know it's not fake because a listener took it. So, it's a real deal.
[00:18:41] Take a picture and send it to me saying, Steve, I thought of you and I saw this and had to pull over and take a snap. Thank you. Okay. So, there's news from Oslo, Norway. Their public transportation agency is named Rutter. R-U-T-E-R. Rutter became curious and conducted a security audit. Not, I mean, I can imagine why they might become curious. They're Norwegians and they're like, okay, let's just make sure what's going on here. They're careful.
[00:19:11] A security audit of their Chinese manufactured electric buses. And unfortunately, to no one's surprise, you know, which is why they looked, they found that these buses could be remotely disabled by their Chinese manufacturer. According to a report from a local newspaper, Affenposten, Rutter tested and took two electric
[00:19:38] bus models in, get this Leo, inside a Faraday cage room. So, the fact that you have a bus size Faraday cage room, that's kind of cool. I don't know what you would use it for otherwise. Maybe if you have a bomb that you're worried is triggered by a cell phone. I didn't want, anyway, I don't know. But they have a Faraday cage room that can hold a bus. They found that electric buses from this Chinese company, Yutong, Y-U-T-O-N-G, maybe
[00:20:08] that's how you pronounce it, I don't know, could be remotely disabled via remote control capabilities embedded in the bus's software, in its diagnostics module and its battery and power control systems. So, I guess this is extensive remote control disablement. Similar buses manufactured by a Dutch company, VDL, were found to have no such remote control disabling features.
[00:20:35] So, it's not like this is a universal feature in electric buses. No Chinese buses. So, the issue prompted Reuter to take the action of disabling internet connectivity by removing all of the SIM cards from the onboard modems.
[00:20:55] They run over 300 of these Yutong electric buses in Oslo alone and 550 of them deployed across other cities throughout Norway. Following the news, an interview with a national security expert from the Norwegian Naval Academy revealed his dismay at what he considered the naivete of Norwegian politicians.
[00:21:21] He said, I cannot comprehend and understand why politicians refuse to listen to the security authorities' repeated annual warnings. Well, maybe they just think these security guys are crying wolf and the sky is falling and, you know, this is a big problem that doesn't exist.
[00:21:43] As we've covered here on the podcast in the past, this unfortunately appears to be something of a design pattern for Chinese products. Remote control features have been found in shipping terminal cranes deployed in the U.S., Chinese smart cars and solar panels.
[00:22:06] There's a valid point to be made that many such remote control surveillance systems may have an explainable and a benign purpose, right? They may be needed for debugging and for offering remote support. You know, why send a support team across the world when it's possible to just SSH into a device, restart some processes and fix the problem and move on.
[00:22:34] Unfortunately, the benign purpose argument could rally more support or maybe any support if these Chinese SIM-equipped cellular radios were anywhere documented. But they never are. Nowhere in any of the buses' technical service and reference manuals is there any mention made of these
[00:23:04] surreptitious radios. And they're surreptitious because they're secret. You know, they're like, why would they be secret? They were first placed into a Faraday cage to prevent the buses from phoning home. That's why the Norwegians stuck them in a Faraday cage room, closed the door. That's a big Faraday cage. These are small buses. Yeah. And it's a beautiful bus. Look at that bus, Leo. I've got a picture in the show notes because I looked at it.
[00:23:34] I was just astonished by its engineering cleanliness. I mean, it's a gorgeous looking bus. Unfortunately, apparently it phones home and reports when it's being inspected and reverse engineered. And that's a no-no. Do you think it has a kill switch too, maybe? Or you don't pay your bills? That's the point. Yes. That's a very good point. Maybe they offer financing.
[00:23:59] And so, you know, it's remote repo is the purpose for the whole thing. Who knows? There are cars with that. I mean, that's- Yep. They have been found in cars. And as we know, they've been found in the power supplies of solar array installations coming from China, Chinese inverters.
[00:24:19] So anyway, you know, one hopes that if China were to ever go to war with the rest of the world, that all of the world's technology, which had been purchased from China, wouldn't all stop in unison? You know, basically playing out the day the world stood still theme. But it could. Not saying it will, but wow.
[00:24:51] Edge and Chrome are both with the- Interestingly, the same numbered release last week have introduced large language model-based scareware blocking. In the case of Edge, its new scareware blocker employs a local computer vision model to spot and block full-screen pop-ups and phony warnings.
[00:25:17] The feature was added in Edge version 1.4.2, which, as I said, was released last week because it's compute intensive. No kidding. I mean, you're running vision. A local computer vision model LLM is now in Edge, which, you know, could be a bit of a battery drain because it's so compute intensive.
[00:25:43] It will only run on systems that have more than two gigs of RAM and at least four CPU cores. Now, I opened Edge and I browsed over to edge colon slash slash settings slash question mark search equals scareware. So you could just search for scareware under your search box in settings or use that URL.
[00:26:08] Edge describes it in a help pop-up saying, scareware blocker protects against tech scams. Tech scam sites try to trick you into thinking your computer, they wrote there, I don't know who there is, their computer is damaged. So you call a fake support line through the call. The scammer hopes to gain remote access to your computer.
[00:26:35] If you turn this on, Edge will identify if you've potentially landed on a tech scam site and allow you to return to safety. So that's built in now into Edge. It doesn't seem like that was written by an English speaking person. No, doesn't it? Maybe that will. This might have been taken from the beta or something, but it does seem like they got their pronouns a little, little confused, but that's a screenshot from, from it, my edge.
[00:27:06] So it's the real deal. Okay. I immediately turned mine off and there's a conveniently located switch on that page to do that. And I did so because thank you very much. There's no way I am ever going to fall for some fake tech support scam. You know, I'm not this feature's target audience and neither I would venture are many of this podcast's listeners.
[00:27:32] The fact that the feature is not enabled unless a system has two gig of RAM and a quad core processor strongly suggests that running a real time computer vision AI model on every page that appears, which again is what it has to do is likely to put an unnecessary computing
[00:27:53] and power consumption burden on my system for no useful to me purpose whatsoever. And not to be left behind Chrome's identical version. Number one, four, two has also just added its own large language model to detect scare wear and scams similar to what edge just added.
[00:28:19] Both of them appeared last week in their versions one, four, two. Okay. So here's how Microsoft explains what they've done, which I don't know. It's okay. I, I understood reading this Leo, why some of the things that Paul explains on windows weekly leave me saying what, what, because it's micro speak, which I think is what we have to call it.
[00:28:43] So on October 31st, Leno last Friday's Halloween under their headline, protecting more edge users with expanded scare wear blocker availability and real time protection. Microsoft attempts to explain writing scare wear blocker for Microsoft edge is now enabled by default. That's another key. It was on for me. And that's why I turned it off.
[00:29:09] Enabled by default on most windows and Mac devices. And the impact is already clear. Scare wear blocker shields users from scams before traditional threat intelligence catches them. Behind the scenes, we're improving our systems to help protect even more would be victims. Scare wear blocker uses a local computer vision model to spot full screen scams and stop them
[00:29:39] before users fall into the trap. This all sounds great, but not for us. The model is enabled by default on devices with more than two gig of RAM and four CPU cores. I wonder if it means more than four CPU cores. That's not clear. With more than two gig of RAM and four CPU cores. Sounds like four is enough. Where it won't slow down every day browsing. Uh-huh. IT pros also now have an enterprise policy.
[00:30:09] So yes, this could be enforced at the enterprise level. Enterprise policy they can use to configure scare wear blocker on their desktops and add internal resources to an allow list. Results from the preview were compelling. So apparently this has been under preview. It hasn't been affecting most of us until now. Results from the preview were compelling.
[00:30:31] When scare wear blocker is active, users are protected from fresh scams, hours or even days before they appear on global block lists. Unsurprisingly, AI-powered features like scare wear blocker will forever change the way we protect customers from attacks. And I think this is great. Let me just be clear about that. This seems like a good thing.
[00:30:59] Scare wear, except for the privacy aspect. We'll get there in a minute. Scare wear blocker users stepped up to share feedback and protect other users. When someone reports a scam with scare wear blocker, we work directly with Microsoft defender smart screen to get the scam blocked for other customers using smart screen.
[00:31:22] Okay, so now all edge users with this thing enabled are being tied into a big sensor network. They're part of a sensor net. During the preview, each user report protected an additional 50 users on average. These reports were not limited to the familiar virus alert exclamation point pop-up, which meaning that that's what people are, that's the typical scare wear virus alert pop-up.
[00:31:52] They said, we've seen reports of scams with fake blue screens, fake control panels, and more. Recently, users reported scams posing as law enforcement, accusing them of crimes, and demanding payment to unlock their PCs. When scare wear blocker caught that scam, it had not yet been blocked by defender smart screen or other services like Google safe browsing.
[00:32:20] I'll just note that we know that because it wouldn't have been shown had it been caught. They said, scare wear blocker caught the scam mentioned above that impersonated law enforcement, but before the first user report arrived, 30% of the targeted users had already seen the scam. We saw this throughout the preview.
[00:32:44] Scare wear blocker provided a first line of defense, but in the time before users reported scams and smart screen was able to start blocking, fast-moving scams still reached too many of their targets. Starting in November, meaning now, today, because this came out last week. It was announced on Halloween, last Friday.
[00:33:08] If scare wear blocker detects a suspicious full screen page, the new, and this is where the jargon gets confusing, the new scare wear sensor in Edge 142. So that's something different, right? We got scare wear blocker, and now they're introducing for the first time something called the new scare wear sensor in Edge 142.
[00:33:33] Can notify smart screen about the potential scam immediately. So I think what they're telling us here is that sensor is a proactive feedback to Microsoft's headquarters where smart screen is managed. So if a user encounters something that scare wear blocker on Edge sees, the sensor part is the proactive notification back to Microsoft.
[00:34:00] So they said scare wear sensor in Edge 142 can notify smart screen about the potential scam immediately without sharing screenshots or any extra data beyond what smart screen already receives. This real-time report gives smart screen an immediate heads up to help confirm scams faster and block them worldwide.
[00:34:25] Later, we'll add more anonymous detection signals to help Edge recognize recurring scam patterns. So what we're seeing here is we're seeing large language model technology being deployed in the browser, basically as a proactive filter between the browser and the user's eyeballs to keep them from being confronted
[00:34:54] with something that could be a problem. They said this new scare wear sensor setting, oh, is disabled by default for the time being. So this proactive feedback is not there, but we intend to enable it for users who have smart screen enabled. Since any scam the sensor detects would be a scam that smart screen missed. Okay, so they're acknowledging there that smart screen would get there first.
[00:35:24] And so if the scam sensor detects it, that means that smart screen didn't. They said even with the scare wear sensor disabled, though, as it is currently, scare wear blocker will still work as expected. And Leo, I have to tell you, as I was putting this together yesterday, I was so tempted to like create a spoof page at GRC that people could go to just to like, you know, essentially subject themselves to a scam screen.
[00:35:54] I wouldn't. No. Because you don't want to get added to a database. I would end up being blackballed by Google forever. Exactly. Yes. Yeah. So they said also the scare wear sensor is always disabled for in private mode. Okay, so they're recognizing that there are some privacy consequences here because this scare wear sensor proactively sends stuff back to Microsoft. Yeah. It's looking at every page you look at.
[00:36:23] It is. Yes. We're, you know, shades of replay, right? Yeah. Freaked everybody out. Recall. I'm sure this is just the same technology repurposed. Yep. So they said finally users can choose to disable smart screen entirely, though we strongly recommend leaving it enabled. While the sensor will help provide earlier detection, please continue to report feedback when you hit a scam.
[00:36:49] Manually reporting feedback allows you to share the screenshot of the scam and other context to help block attacks at their source as well as helping identify false positive. Okay. So the sensor is autonomous running in the background and will report without you doing anything. And that's disabled currently, but they plan to turn that on in the future.
[00:37:14] But, but, but when you get the scare wear blocker pop up over the scam, so presumably it'll say something like, whoa, this looks fishy. Uh, this is probably a scam. Please, you know, confirm that, that this is not something that you're expecting or you know what it is and it's benign report this to us.
[00:37:38] In which case you manually click that and then it is sent back to Microsoft for their verification. So they said, even after a user has reported a scam, it may continue to impact other victims before smart screen can start blocking. To address that, we're working to reduce latency and deliver faster smart screen protection for scams reported by scare wear blocker users.
[00:38:06] So point is when you do say you, you, when, when you get a, when you are confronted with a scam and the blocker pops up over it and says, this looks suspicious. When you say, I agree. Thank you. The, that Microsoft is tightening up the feedback loop because they want to protect more people from this. They said behind the scenes, we're also upgrading the end to end pipeline scare wear blockers
[00:38:33] connection to smart screen started off as a promising prototype. And now we're upgrading it to run on the same production scale threat intelligence systems that power smart screen client protection worldwide. Scare wear blocker caught the same scam described above again, recently on a new site. This time though, the improved pipeline responded more rapidly.
[00:39:00] Smart screen protection kicked in after the scam reached just 5% of its intended targets. And most of those exposed would have had protection from scare wear blocker with earlier warning from the sensor, which again, coming soon, but not yet. And more improve, which because it's autonomous, more improvements to the pipeline. We hope to reduce exposure even further.
[00:39:27] So, again, overall and generally, I salute Microsoft for this. While there are those who will be concerned rightfully, I think, I mean, or at least like raise caution about the privacy implications of this in a world where Microsoft wants their users to enable Windows recall and record everything their machine shows them.
[00:39:54] There's no shortage of those who are concerned about privacy. I don't expect to be using Windows 11, but if I were to, I'm pretty sure it would be without recall because I just don't think it's a useful feature for me. I would need to reread Microsoft explanation of this a few more times. I think, I think though we understand what's going on about what's being sent, where and to whom.
[00:40:20] But it's clear that all users of the edge browser, all five of them, unless they disable this feature are becoming sensor operators whose machines, like once the sensor is turned on in the background, whose machines will be sending intelligence back to Microsoft. So why do I salute that? It's because there are many computer users who are far less computer savvy than anyone listening
[00:40:49] to this podcast, you know, and we all know some and we love them. And they are very much more likely to become victims of these sorts of scams than anyone who's listening to this podcast. You know, so this is proactive security from Microsoft. And I think it's good. I don't want it for myself, but I do think it could be very useful for the general population.
[00:41:14] You know, aside from the mildly annoying phoning home aspect, it's, it's got to be sucking up cycles in order to be constantly monitoring and interpreting the pages that it's displaying. And of course, being a Firefox user, this is academic to me because I'm not, you know, I had to go deliberately open edge and go find that, that switch to verify that it had appeared and that I didn't want it. Thank you very much. I've not seen the details about Chrome's implementation of their similar feature.
[00:41:45] I imagine it would be a little bit less windows centric than what Microsoft has done for their own edge browser. You know, it might be entirely local and many people might find that preferable. But in general, Leo, I think, you know, protecting people from what the internet is showing them is a good thing. Go ahead. I'm sorry.
[00:42:12] I was just going to say, and, and it's significant that when they're in, in private mode, it's not doing this because again, people are creeped out by the idea of some AI looking over their shoulder at every page they view. As they frankly should be. And to do this, it has to do that. Right. Yeah. I would turn it off immediately. It's really also the other takeaway is how predominant these scams have become and how
[00:42:39] often people get suckered by them. And that's really a shame. You know, I can't send a Zelle payment anymore without a full page saying, make sure you know who this person is and here are examples of, of scams. And I mean, it's just everywhere. It's an educational process to let people know you're getting, you know, you're really risking getting scammed here. And I have to think it's just because so many people are getting pulled in.
[00:43:08] Probably people our age, Steve, let's face it. It's, it's older people after this sponsor break. I'm going to tell you a story. Okay. A couple who did. Yeah. Yeah. It's, uh, it's sad, but true. Uh, and I, that's why I'm glad to see stuff like this. Oh, I really wish there were a better way to do it than to watch everything you're doing. And I know, you know, that seems like it's a little intrusive and all the Ram and the CPU cycles and so forth. I mean, yeah.
[00:43:37] What about a laptop that's like, you know, trying to stay alive and this thing's busy spinning the fans up in order to do image recognition of the screen. Yeah. I run an LLM. I mean, you gotta, I guess I commend, like you, I commend Microsoft for trying to do something and Google for trying to do something. I don't know if this is the right thing to do, but. Uh, we're going to take a break and come back and talk about more scams in just a bit.
[00:44:04] You're watching security now with Steve Gibson, uh, our show today brought to you by delete me. And it's a, you know, this is a really closely related issue because so much of our personal information now is online and that just helps scammers. It just helps hackers. It just helps bad guys. If you've ever checked and I don't recommend it. If you've ever searched for yourself online, so much of your personal data is on the internet
[00:44:33] for anyone to see much more than you might even imagine. But this, I don't take my word for this. Don't go out and do it. Uh, your, your name will be there. Your contact info, probably your social security number. In some cases, your home address information about your family members. Where's this coming from? It's coming from data brokers completely legally. This is not an illegal action. I don't want to imply that these guys are the bad guys.
[00:45:00] I'm not thrilled about them, but data brokers make their living by collecting as much information as they can about as many people as they can. Pretty much everyone in the United States, because it's not illegal here, including your social security number and anything else they can get, and then selling it on to anybody willing to pay. It's not expensive. So almost anybody can buy it. Not it's law enforcement buys it in the United States, foreign governments buy it, scammers
[00:45:29] buy it, hackers buy it, and there is no law against it. Anyone on the web can buy your private details. And the consequences are, you don't have to have a great imagination to think of what they could be, not just identity theft, but doxing, harassment, hacking. Uh, how much more effective is, is one of these scammers emails or, or phone calls to you if they know some personal information?
[00:45:58] I mean, I, I remember getting emails that purported to be from my landscaper saying, Hey, I'm stuck in Europe. They stole my passport. If you could just send me 1500 bucks, I'll go, I'll pay you back the minute I get. This is, you know, Joe, your landscaper. And he knew the detail. That's what makes this stuff, these scams effective because they know this information. Well, now there's something you can do about it. It's not going to eliminate all scams. It's not going to eliminate all of those text messages you make, but it's going to get your
[00:46:28] private data out of the hands of these data brokers with delete me. Look, I'm, I'm in public. I know my name is out there. This thing is, I know my name is out there. I know my face is out there. I know I'm in public. Look, you might imagine that I'm a greater risk than you are. I'm not. Everybody needs to think about safety, privacy, and security because it's so easy to find personal information about everybody online. We use delete me here to protect us.
[00:46:58] Every company should use delete me. Absolutely. For their middle management and their management. Because if the bad guys could figure out who you are, who your direct reports are, what your phone numbers are, what their phone numbers are, they can scam you. We've been, people keep trying to scam us all the time. Well, they did at least until we started using delete me. Delete me is a subscription service. It will remove your personal information from those data brokers. Hundreds of them. Hundreds of them.
[00:47:26] Because it's such a lucrative business. When you sign up, you're going to give delete me exactly the information you want deleted. You have control of that. They won't delete everything unless you tell them to. It's up to you. But their experts, here's the thing. They know where all these data brokers live. They know exactly. And the data brokers don't make it easy. They hide those pages. They know exactly where to click to get the page that says delete this data. And then they send you regular personalized private reports showing what info they found, where they found it, what they removed.
[00:47:56] But they don't just do that once. Because this is the sad truth. There's new data brokers every single day. And the old ones are not the most ethical people in the world. Once they delete your data, there's nothing to stop them from starting to collect it again. So delete me continues to work for you, constantly monitoring and removing that personal information you don't want on the internet. We get regular emails from delete me saying, yeah, we found some stuff. We deleted it. It really works.
[00:48:25] To put it simply, delete me does all the hard work of wiping you and your family and your employees and your management's personal information from data broker websites. Take control of your data. Keep your private life private. Sign up for delete me at a special discount just for our listeners. Get 20% off your delete me individual plan when you go to join delete me dot com slash twit and use the promo code twit at checkout. Now, the only way to get 20% off is to go join delete me dot com. Join delete me dot com slash twit.
[00:48:55] Join delete me dot com slash twit. All one word. And enter the code twit at checkout. And you know what? You probably want to do this for the elders in your family, the people who don't have your savvy, who could easily be scammed. This is another way to protect them. Join delete me dot com slash twit. Use the offer code twit to get 20% off your individual privacy plan. Join delete me dot com slash twit.
[00:49:24] It's a shame we have to, you know, go to all these lengths to protect ourselves. We do. Yeah. So what's the story of the Canadian couple? OK, so wouldn't you know exactly what a week ago that Canadian CTV News ran a story about exactly this happening, this scam alert problem on a PC to an elderly Canadian couple.
[00:49:53] Elderly, as you were saying, Leo, kind of people are. All right. And we're elderly. So this is not not to put anybody down, but elderly people are often the targets of these scams. Yes. So the story ran as a consumer alert on the TV station. The print piece, which was put online in concert with that, which was made from the television story, carried the headline. We're devastated, quoting this couple.
[00:50:20] And then it said Ontario seniors give away more than one million dollars. What? Scammers. Now, get this. So here's here's here's what we know. Fraud and cyber crime. The story starts out, says cost Canadians more than six hundred and thirty million Canadian dollars last year. Six hundred and thirty million lost to scams. Many of the victims being seniors.
[00:50:50] A couple in their 70s contacted CTV News to say what started with a pop up warning on their computer screen led them to losing their life savings. The Brantford, Ontario couple asked not to be identified as they are devastated after losing all their money in the scam, said it was in March of this year when. So like what?
[00:51:18] A little more over six months ago. They received a warning on their laptop. So they called the number on the screen. The woman said I the woman said I couldn't get rid of it. I tried control alt delete and it wouldn't go away. It wouldn't turn off, unquote. When they called the number, they were told their accounts had been hacked and it appeared the man was involved.
[00:51:46] The hacker in criminal activity. The husband said, quote, they said my SIN number had been compromised and was being used for money laundering by a criminal organization that was involved in child pornography, human trafficking and drugs.
[00:52:07] For the next five months, criminals told them their bank accounts were in jeopardy and they needed to follow instructions to keep their money safe. After two months of grooming, the couple with daily calls claiming to be with the Canadian Anti-Fraud Center, the police and Canada's Treasury Department.
[00:52:31] The scammers started to tell them to start removing money from their accounts and giving it to them so they could keep it safe while the investigation progressed. They were told to use their money to purchase gold bars and to put some in a Bitcoin machine.
[00:52:51] In the end, the couple purchased nine hundred thousand dollars Canadian in gold and one hundred and one nine hundred and ninety dollars in Bitcoin for a total loss of one million ten thousand nine hundred and ninety dollars. Despite warnings from their bank, they still went through with it. The woman said our financial advisor warned us.
[00:53:19] She said this sort of sounds like fraud. But instead of heeding that warning, the couple said they were told their advisor that the couple said they told their advisor they were buying the gold as an investment. Oh, no, he's a nice boy on the phone. He comes from the Canadian government. Yep.
[00:53:42] Eventually, when they had no more money to give the scammers, the criminals cut off all contact with them. That's the first time they realized they'd been duped. The man said, oh, we're devastated. It sounds very foolish that somebody would do something like this. But it was the trust that was built up over five months, which convinced us it must be legitimate.
[00:54:12] Anthony Quinn, the president of the Canadian Association of Retired Persons, C.A.R.P., told CTV News, quote, Quint said he feels banks need to take additional steps to protect vulnerable seniors from scams. The couple said they're now ruined financially and the chance of recovering any of the funds is almost zero.
[00:54:41] The man said, quote, it was money that we invested over our lives. It was money that we inherited. It was money from the sale of our house. It was money we were going to leave to our son. Sadly, the couple also cashed in their RRSPs. That's the Canadian Registered Retirement Savings Plan.
[00:55:04] So at tax time, they'll have a tax bill of more than one hundred thousand dollars, which they said they don't know how they'll pay. Legitimate government agencies, police investigators and banking officials will never ask you to participate in an investigation like this or ask you to buy gold bars to put money or put money in a Bitcoin machine.
[00:55:31] CARP's president, Anthony Quinn, said, quote, Canadian banks should. You know, he he he's the the Association of Retired Persons president. Canadian banks should be doing more to set up an infrastructure to protect seniors so they don't fall prey to these criminals. Well, apparently the bank and their investment advisors, everybody was telling them this sounds wrong. This does not sound legitimate. But as you said, Leo, oh, he's so nice on the phone.
[00:56:01] It's a nice boy. You know, so anyone's initial reaction as it would ours be and I'm sure our listeners would be might be to wonder how this couple could be so well duped. But, you know, their comment about the five month investment in grooming and the five month investment that the criminals made. I'm sure the criminals have figured out this is the way you do it. We're not in a hurry. We're in for the long game here.
[00:56:29] So we're going to build over time a rapport and a relationship with these people. And look at the payday they got. They got a million dollars of these people's money. They're they're literally their life savings. They were carefully groomed over time and began making incremental investments that seemed to be the right thing to do. And eventually at some point it was in for a penny. Right.
[00:56:55] Because, you know, in for a pound, because now with having already paid a bunch, they desperately wanted it not to be the case that this money was all gone. Right. So it's like, oh, we don't want to upset them now. So let's keep giving them more money in order to have our money protected. Now, we've often noted here computers and the Internet remain a huge mystery to most people.
[00:57:20] You know, even people who use them daily have no real idea how they work. And this sad story reminds us that it's the human factor. Right. That still trips people up. You know, this couple was very skillfully conned and conning people is one of the oldest practices there is.
[00:57:41] So what Microsoft and Google with release 142 of Edge and Chrome are hoping to do is to nip these sorts of scams in the bud before anyone even sees them. And, you know, had this been in place back in March when this notice first popped up and they were and this couple might have been proactively warned. I imagine they're using either Edge or Chrome.
[00:58:08] Then maybe they would have been saved a million dollars. So that's why I think despite the fact that this is going to be power consuming, it's going to be sketchy from a standpoint of having something watching your screens. That seems to be the future for for protecting people from all of the junk that's on the Internet. And everyone listening to this podcast has the option option to flip that switch off. Thank you very much. But I don't want my screen to be checked for me.
[00:58:38] I'm competent to check it myself. Whereas, you know, we are the minority. OK, so. Last Thursday, OpenAI told the world about Aardvark with the heading introducing Aardvark, OpenAI's agentic security researcher.
[00:59:04] I don't think that's funny, huh? I should say research technology, not researcher. But yeah. Yeah. So an agentic security researcher. What? Huh? OK. Here's what they wrote as they took the wraps off their new gizmo.
[00:59:27] They said, today we're announcing Aardvark, an agentic security researcher powered by GPT-5. Software security, they wrote, is one of the most critical and challenging frontiers in technology. Each year, tens of thousands of new vulnerabilities are discovered across enterprise and open source code bases.
[00:59:51] Defenders face the daunting tasks of finding and patching vulnerabilities before their adversaries do. At OpenAI, we are working to tip that balance in favor of defenders. Aardvark represents a breakthrough in AI and security research, an autonomous agent that can help developers and security teams discover and fix vulnerabilities at scale.
[01:00:20] Aardvark is now available in private beta to validate and refine its capabilities in the field. Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches.
[01:00:42] Aardvark works by monitoring commits and changes to code bases, identifying vulnerabilities, how they might be exploited, and proposing fixes. Aardvark does not rely on traditional program analysis techniques like fuzzing or software composition analysis.
[01:01:03] Instead, it uses LLM-powered reasoning and tool use to understand code behavior and identify vulnerabilities. Aardvark looks for bugs as a human security researcher might by reading code, analyzing it, writing and running tests, using tools, and more. Wow. Wow. Okay.
[01:01:28] This sounds like something we've talked about and even anticipated, but frankly, this happened sooner than expected. Sooner than I thought this technology was ready for prime time. I guess we're going to find out. They continue to explain what they've created by writing, Aardvark relies on a multi-stage pipeline to identify, explain, and fix vulnerabilities. There's the analysis.
[01:01:53] It begins by analyzing the full repository to produce a threat model reflecting its understanding of the project's security objectives and design. Then, commit scanning. It scans for vulnerabilities by inspecting commit-level changes against the entire repository and threat model as new code is committed.
[01:02:19] So, that tells us that it has built a big context based on its initial analysis of the entire repository. And then, it looks at commit-level changes against the context that it's created. They said, when a repository is first connected, Aardvark will scan its history to identify existing issues. Aardvark explains the vulnerabilities it finds step-by-step, annotating code for human review.
[01:02:50] Then, there's validation. Once Aardvark has identified a potential vulnerability, it will attempt to trigger it in an isolated, sandboxed environment to confirm its exploitability. Aardvark describes the steps taken to help ensure accurate, high-quality, and low-false-positive insights are returned to users. And then, finally, patching.
[01:03:16] Aardvark integrates with OpenAI Codex to help fix the vulnerabilities it finds. It attaches a codex-generated and aardvark-scanned patch to each finding for human review and efficient one-click patching. Wow. So, okay. Aardvark. Now, Leo, I know why they named it that. Because the worst name in history is X. Yes.
[01:03:45] And the best name ever is Aardvark. Yeah. Because when you do – Yeah. Yeah, it's at the top of the list. And if you talk about aardvark, you don't have to say aardvark formerly named this. Right. You know, it's not X formerly called Twitter. No, it's aardvark and there is none other. It's not that all the names are taken, just all the good names are taken. Or maybe all the confusing names. Right.
[01:04:15] Because, you know, Kleenex was a good name because – what? Kleenex? Actually, that does sort of have a connection to cleaning. But anyway. So, they said aardvark works alongside engineers integrating with GitHub, Codex, and existing workflows to deliver clear, actionable insights without slowing development. Bad Rod explains it. What do aardvarks live on? Uh-oh. Bugs. Nice. Right? Good name.
[01:04:45] Good name. Good name. Yes. Yeah. Better than Big Sleep. Yeah. That's a really terrible name. That's depressing. Aardvark is built for security. In our testing, we found it can also uncover bugs such as logic flaws, incomplete fixes, and privacy issues. Aardvark has been in service for several months, running continuously across OpenAI's own internal code bases and those of external alpha partners.
[01:05:14] Within OpenAI, it has surfaced meaningful vulnerabilities and contributed to OpenAI's defensive posture. Partners have highlighted the depth of its analysis with aardvark finding issues that occur only under complex conditions. In benchmark testing on golden repositories, aardvark identified 92% of known and synthetically introduced vulnerabilities.
[01:05:45] So, I guess, golden repositories are test repositories where it's like saying, you know, they're saying, okay, you bug eater, see what you can find here. Demonstrating high recall and real-world effectiveness.
[01:05:59] Aardvark has also been applied to open source projects where it has discovered and we have responsibly disclosed numerous vulnerabilities, 10 of which have received CVE identifiers.
[01:06:15] As beneficiaries of decades of open research and responsible disclosure, we're committed to giving back, contributing tools and findings that make the digital ecosystem safer for everyone. We plan to offer pro bono scanning to select non-commercial open source repositories to contribute to the security of the open source software ecosystem and supply chain.
[01:06:45] Wow, that's very cool. We recently updated our outbound coordinated disclosure policy, which takes a developer-friendly stance focused on collaboration and scalable impact rather than rigid disclosure timelines that can pressure developers. We anticipate tools like Aardvark will result in the discovery of increasing numbers of bugs and want to sustainably collaborate to achieve long-term resilience.
[01:07:13] Software is now the backbone of every industry, which means software vulnerabilities are a systemic risk to businesses, infrastructure, and society. Over 40,000 CVEs were reported in 2024 alone.
[01:07:33] Our testing shows that around 1.2 of all commits introduce bugs, small changes that can have outsized consequences. Aardvark represents a new defender-first model, an agentic security researcher that partners with teams by delivering continuous protection as code evolves.
[01:07:59] By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation. We believe in expanding access to security expertise. We're beginning with a private beta and will broaden availability as we learn more. So, wow. Wow. Okay.
[01:08:23] As I said, sooner than I expected, given the code quality that I get from OpenAI's GPT-5, you know, you need to check your… It can find bugs, though, so that's good. That is good. And if it thinks it finds something, then sticks it into a test harness and validates it by demonstrating its exploitability, how do you argue with that? And at speed. It's like, yes.
[01:08:51] Now, here's what we… We talked about this on Sunday with Alex Stamos, who's on the show. Because the FFMPEG people are very upset with Google's tool that does the same thing. It's agentic bug hunter, the pig sleep, because they say this puts an undue burden on us. It found many bugs in FFMPEG. And their complaint was Google responsibly disclosed it and so forth. They gave them 90 days and stuff. But they didn't give us any help in fixing the bugs.
[01:09:21] And this is a project run by volunteers. In fact, they lost a volunteer because one of the bugs was in a codec that was used by LucasArts for several frames of one game 20 years, 30, 40 years ago. So, smush. But because FFMPEG wants to have every codec in it, it had a codec for smush. And there was a bug in it. And it could be exploited.
[01:09:47] But the problem is there's just one guy who reverse engineered it. And, you know, he's got to jump to and fix it. And so, I think this is why you see the verbiage from OpenAI saying we're going to be cooperative, right? Yes. Yes. Clearly that they've seen some blowback from the slow sleeper and decided that… Big sleeper. Big snooze. Yeah. I mean, FFMPEG was very upset. They lost…
[01:10:17] One of the developers quit because he just said, I can't… There's too much pressure. I can't get this done. But can you think of a better thing to aim it at? I mean, as we know, codecs… Oh, my Lord, are they complex and inherent bug fests. And FFMPEG is everywhere. So, it is a very high risk vulnerability. So, I can't fully blame Google on this either. I mean, maybe they think Google's rich enough they could have helped us with this maybe. Something like that. Yeah.
[01:10:47] And it also says that this is one of the reasons that Aardvark is offering the fixes. Yeah. That's key. So, it's closing the loop and saying, we found a problem. Here's what we found. Here's what we think will fix this for you. Aardvarks are specialist insectivores. Their diet is almost entirely ants and termites. Or, I guess you could say an insect is not a bug. But, you know, it's like a bug hunter. Nice.
[01:11:14] Also, the fact that OpenAI will be offering pro bono scanning for some non-commercial open source projects. Right. You know, like we'd like it to run through OpenSSH. Thank you very much. And OpenSSL. Thank you very much. Exactly. Yeah. You know, and who knows? Linux. Please, you know, check the Linux kernel for us. Find the bugs. Now, one thing that's not clear is where all of this AI LLM interpretation takes place.
[01:11:43] You know, if the code being checked is proprietary, I would imagine that commercial users will be a little reluctant to give some OpenAI code scraper unrestricted access to their code base. That's why they start with open source. Yeah. That's doubly true if it means the shipping the code off to the cloud in order to have it checked, you know, up there.
[01:12:06] But, you know, again, Leo, even if it was, as you said, only used for open source projects or only initially, you know, we can hope that this will eventually provide an additional tool to help improve the security of, you know, of the code that gets produced. So, I did think that that statistic about 1.2% of new code commits introduce a bug. Yeah. That feels about right, doesn't it? Yeah. Because, you know, you fix one thing here and you screw something else up elsewhere.
[01:12:36] So, it's certainly a believable number and it tracks with what we see in the real world. Okay. I've got a bunch of quickies here. An Italian news outlet reported the following last Friday from Milan on the 31st of October, La Presse said,
[01:12:58] AGcom has published on its website a list of sites that starting on November 12th, so I don't have my calendar in front of me, next week, will no longer be accessible through self-certification of age. You know, the famous, yes, I'm 18 button. There are 48 sites they wrote in total on the AGcom list. The AGcom is a regulator.
[01:13:26] Among them are Pornhub, YouPorn, and OnlyFans. The list was compiled in accordance with Article 13-BIS of the Kevano Decree, which introduced the obligation for operators of websites with pornographic content to verify the age of users in order to prevent access by minors.
[01:13:49] AGcom resolution 96-25-1996.com establishes the technical and procedural methods for implementing the verification in accordance with the decree.
[01:14:05] So add Italy to the list, whether we call this more of the spreading move to protect minors or enforcement of longstanding laws that have largely been ignored until now, or maybe a not very subtle means of clamping down on the general availability of sexually explicit content on the Internet, whatever, or maybe all of the above.
[01:14:29] Um, it's unfortunately happening in a vacuum of any privacy preserving technology to actually pull this off on the Internet. But as we've often noted recently, it is happening nevertheless. So, uh, 48 sites will, uh, be off the net, uh, next week. Okay. Now the next bit of news here brought to mind the old rhetorical question.
[01:14:57] Uh, what have you been smoking, which is often followed by, and can I get some, uh, get a load of this one. The news from Russia is that Russian lawmakers are seriously considering passing a law that would force all commercial companies that is adding them to the state run organizations that have all that are already under this law.
[01:15:23] All commercial companies to replace any and all non-Russian foreign made software with Russian based software. The Duma previously passed a law requiring state run organizations to use Russian made software only by 2028.
[01:15:48] So that gives state run organizations a little over two years from now, unfortunately, and to no one's surprise, foreign made software still dominates most Russian industry sectors.
[01:16:01] So, wow, it's difficult enough getting people to simply upgrade the software they already have, you know, to the latest and greatest to say nothing of forcing a switch to some completely different and almost certainly incompatible alternative software. You know, all the while continuing to have uninterrupted operations. I don't know how you do that.
[01:16:28] So I don't know how you get an operating system that's entirely written in by Russians. Yes. And I mean, you have to wonder whether, you know, I mean, they've got to start with, with Linux and then go from there. They have a Russian, uh, Linux, Astra Linux, which is used by the armed forces, but it's based on Debian, which is not Russian. Of course.
[01:16:51] So maybe open source and then like, you know, uh, you know, fork their own rewrites. Sendance. Russian. Yeah. Wow. Yeah. I just, I problematic. I, yeah. And again, one wonders, we're going to see what happens in two years because it is the beginning of January of 2028. There can be no non-Russian software in any state run organizations.
[01:17:22] Wow. No phones. What phone? Wow. What phone is not? Is Russian. Wow. They better get to work. And in other interesting news from Russia, we have Russian telecom operators starting to block the calls and SMS messages used for second factor authentication by both telegram and WhatsApp.
[01:17:48] Both during the initial account registration and subsequent account verification, you know, re-authentication. Russian telecom operators, as we've talked about before, don't want the competition that is created by those platforms, those non-telecom internet-based platforms. And Russia, for their part, wants to force everyone to use its own Russian state-backed messaging app, Max. We've got some listeners who've told us about that.
[01:18:16] So Russia is arranging to encumber and cripple the use of those alternatives messaging platforms bit by bit. And here's the latest instance of that.
[01:18:33] I encountered a short blurb mentioning that an additional 187 new malicious NPM packages had just been discovered and taken down last week.
[01:18:49] What occurred to me was that one of AI's biggest and still unresolved problems is that it costs so much to run it that even with a strong and loyal subscriber base, the AI companies are still losing, currently losing vast sums of money. And the more we use their AI, and the more popular it becomes, the faster they lose money.
[01:19:17] You know, LLMs burn energy, turning it into waste heat. And unfortunately, it's not something where efficiencies of scale apply. So on some level, cool as all this stuff is, it's never been shown yet to be economically viable in its current form.
[01:19:39] That said, if we've seen anything over time, and Leo, you and I at approximately the same age, we've seen technology advance shockingly over our lifetime. What we've seen is that technology can mature at astounding rates and in ways we could never imagine. You know, AI itself, such as we're all now using today, was utterly unknown to us just five years ago.
[01:20:09] We weren't talking about any of this five years ago. Many of us were using 300 baud modems in our earlier life. You referred to that earlier on the podcast. And I remember paying $5,000 for my first 10 megabyte spinning hard drive, which was in an IBM PC XT.
[01:20:33] At that time, back then, it wasn't clear how we would get from where we, you know, from there, where we were there, to where we are today. We didn't know. We were already amazed at where we were at the time. My God, 10 megabytes in a hard drive.
[01:20:58] So, no, we can hope that AI will be able to enjoy the same sort of cost reduction and capability improvement over time. And again, we may not know how today or where it's going to come from. But based upon a lifetime of experience, the odds are that it'll happen anyway, even if we don't know.
[01:21:20] Because we didn't know then how we were going to go beyond a 300 baud modem and a $5,000 10 megabyte hard drive. Look where we are now. So, this is relevant to the continual discovery of hundreds of newly malicious NPM and other repository packages.
[01:21:39] Because it would be so nice to have technologies such as OpenAI's Aardvark guarding the entrance to these repositories and thoroughly checking any package that's submitted or updated in these repositories before their release for public consumption. But there's one big problem with that, right? Who's going to pay for it?
[01:22:07] With AI costing today so much to run and with cost increasing linearly as we use more of it, free and open source repositories would never be able to afford the protection of fancy AI code verifiers. But that's only today. If there's anything we know, it's that tomorrow's technology won't be any more like today's than today's is like yesterday's.
[01:22:35] And the changes that we've seen during our lifetime have been astonishing. So, it's very clear to me, Leo, that someday AI will be cheap and that will truly change everything. Because when it becomes so inexpensive to have this kind of new capability, everything is going to change. I agree. Yeah.
[01:23:03] This is kind of a new area where the idea of AI finding bugs in software. That's very interesting. And we talked about it early on. To me, it is an obvious place where we're going to have traction. I think it makes sense. Yeah. I think, I mean, it is computers working, you know, that can understand code because code is fully deterministic.
[01:23:31] It may make a crappy therapist, but it can sure as crap find bugs in code. You know, I also think it'll make coders better if used properly. For instance, you know, we were talking about regressions and how many bugs are introduced with every patch. Well, a lot of that would be avoided with proper regression testing. And that turns out to be something that AIs do very well as write tests.
[01:23:56] And what's nice about the AI writing the tests is they don't have the same blind spots as the person who wrote the original code has. That's why it's hard to write tests. Right? Yes. Yes. And in fact, it's one of the things that I miss about having a tech support group. Yeah. Partner. The guy that I worked with, I would say, James. Look at this. You know? Yeah. Look at this. Right. And, you know, because you, you, it's, and we've talked about this often. It is, it is impossible to find bugs in your own code. Right.
[01:24:26] Because it's, it's like reading what you've written. You can't. Everybody needs a good editor. Like reading text. Yeah. You, you, you do not see your own typos. Furthermore, as we've seen, most flaws and bugs follow patterns. Like, you know, buffer overflows, uh, uh, right when, uh, free, uh, to memory. And those are things that I could quickly spot. I would think. Right. Because they're patterns. Yeah.
[01:24:54] I really think, I, I think AI is, is going to make a big, big difference in security, but it needs to be affordable. Um, and it's not yet. Well, the good news is the biggest cost to AI is in the training. And, uh, it, you know, that's where the biggest power use is. That's where the most manpower goes. Once it's trained, it can run fairly economically. It's not much, not much worse than, you know, a Google search.
[01:25:20] So I think that we're spending a lot of money right now because we're spending a lot of money on training. On R and D. On R and D. Yeah. But we may benefit in the long run, uh, from, from having these models that are now fully trained, especially if, you know, you say, I'm going to, we're going to train this model to find bugs. The bugs, the way bugs happen hasn't changed much over the last 50 years. Right. It's the same problem. Amazingly. Yes.
[01:25:49] In fact, that's, what's so frustrating. We know how to not do it and we still do it. We know what's wrong. So, uh, maybe, maybe this won't be so expensive in years to come. And the reason is, of course, the architecture of our processors has not changed. They are still von Neumann machines. Yeah. Yeah. Basically the way they were. Right. Last, speaking of Australia there, they were in the news a lot.
[01:26:12] Um, last Friday, the Australian signals directorate, the ASD posted a status update, which detailed the significant infestation of the bad candy malware in several hundred Australian based. And Leo, would you believe that it's Cisco iOS XE devices? I know it's a shock. How did these boxes become infected?
[01:26:39] Would you believe that no one ever bothered to update any of these? There's more than 400 of them. Any of these devices at any time during the two years after a very serious, remotely exploitable vulnerability patch was made available to them. Two years. Um, you know, everybody listening to this podcast, of course you believe that because it's even expected at this point.
[01:27:09] So here's what the Australian, Australian signals directorate had to say about the situation on Friday. Now, when you hear the headline they gave to this posting, you might be inclined to wonder about the fact that Friday was also Halloween in Australia. I checked. I checked. They, it's becoming increasingly popular to celebrate Halloween in Australia. The headline was don't take bad candy from strangers. That's right. How your devices could be implanted and what to do about it.
[01:27:39] They wrote, by the way, that's exactly what we do on Halloween. It was we take candy from strangers, but okay. Exactly. Yes. They said cyber actors are installing an implant dubbed bad candy on Cisco iOS XE devices that are vulnerable to CVE 2023. Yes. Yes. Number, uh, uh, 2198.
[01:28:07] Variations of the bad candy implant have been observed since October of 2023 with renewed activity notable throughout 24 and 25. Bad candy. Bad candy.
[01:28:20] They caught this right as a low equity Lua based web shell and cyber actors have typically applied a non-persistent patch post compromise to mask the device's vulnerability status in relation to that CVE 2023, uh, 2198. In these instances, the presence of the bad candy implant indicates compromise of the Cisco iOS XE device via that CVE.
[01:28:51] The bad candy implant does not persist following a device reboot. So it's a, it just lives in RAM. However, where an actor has accessed account credentials or other forms of persistence, the actor may retain access to the device or network. The patch for the CVE must be applied to prevent re-exploitation.
[01:29:13] So just restarting it and it gets the, the, the, the, the, the, the malware is washed out of RAM, but it comes back up in an exploitable mode access. And here it is, Leo access to the web user interface should also be restricted if enabled. Oh yeah. And then they said, see the general hardening section below.
[01:29:35] And they finished since, since July of 2025, ASD assesses over 400 devices were potentially compromised with bad candy in Australia. As of, as late as October 25, meaning just last month of, you know, a week ago, they said there are still over 150 devices, um, compromised with bad candy in Australia.
[01:30:03] So anyway, the directorates posting goes on at length, but everyone gets the idea here. Once again, just, you know, two years ago in 2023, as recently as two years ago, Cisco iOS XE devices contained a vulnerability in their public facing internet connected HTTP web interface, which allowed for remote exploitation.
[01:30:32] Did those more than 400 devices ever actually require HTTP web interface, remote management, probably not, but it's there nevertheless. And now bad guys have crawled inside, set up shop, stolen whatever credentials they may need for future persistence and use. Who knows what they've done with their access then to the network behind perhaps a little ransomware. What a mess.
[01:31:00] And if Cisco would do more than publish an optional hardening guide for their devices, this might all be prevented. Why not harden by default Cisco and then let people turn things on when they need it. And when the, when you notice that it hasn't been used for six months, ask them if they're sure they'd like to keep this enabled because nobody's using it except bad guys keep trying to log on.
[01:31:29] You know what Alex Stamos suggested. He said, everybody should show Dan themselves. Just put your, put your public IP address in a show Dan and just see what happens. I think it's an interesting thought. Can you search? Cause I know you could say with show Dan, okay, I'm looking for this, uh, you know, hole or this X plate. Can you, can you just say, tell me what's available? He said, everybody should end map themselves or show Dan themselves. You can do with them map. I know. Yeah. Yeah. Yeah. I, you know, I don't know.
[01:31:59] I understand how you could be running a shop, a security, you know, person, an IT person running a shop with these Cisco iOS devices and not check, not update. It's just, I know. I guess it's in a closet somewhere and nobody's really aware it exists and show Dan yourself. They're, they're, they're short staff. They're in a hurry.
[01:32:24] They, they, they set it up and, and maybe they enabled the, the web management, uh, because it's in a satellite location and they actually need it. Except they could have put a firewall in front of that and said, only allow connections from our, from our headquarters IP block. And then China and Russia would never be able to get to it.
[01:32:46] I mean, it is, it is so possible to do this securely and Cisco should not. I mean, it is negligent for them to make it so easy for it to be done in securely and companies are being exploited. I mean, this, these are not theoretical problems. I mean, you know, meltdown was a theoretical chip problem. Probably nobody actually ever got hurt by it.
[01:33:15] These are not theoretical. They're like, they, they, they've, they've seen 400 of these Cisco iOS boxes compromised in Australia alone. I don't know how, uh, okay. By the way, it's not just Australia. It's just, that's the story. That was just their report. And that was just 400 boxes in Australia. Oh, it's probably all over the world. Oh, of course it is. Of course. Okay.
[01:33:43] So last week, GitHub updated the world on their status and AI took center stage. They wrote, if 2025 had a theme, this is GitHub saying this, it would be growth. Every second, more than one new developer on average joined GitHub. One new, one new developer per second during 2025 on average.
[01:34:12] They said over 36 million new GitHub, new developers on GitHub in the past year. It's our fastest absolute growth rate yet. And 180 million plus developers now work and build on GitHub. So think about that 180 total 36 million new ones just in this last year.
[01:34:39] So a surprisingly large percentage of the total are, are within the past year. They said the release of GitHub co-pilot free in late 2024 coincided with a step change in developer signups, exceeding prior projections. Beyond bringing millions of new developers into the ecosystem, we saw record level activity across repositories, pull requests and code pushes.
[01:35:08] Developers created more than 230 new repositories every minute. And don't we wish that they were bug free? More than 230 new repositories every minute. Merged 43.2 million pull requests on average each month, which is up 23% year over year.
[01:35:33] And push nearly 1 billion commits in 2025, which is up 25.1% year over year, including a record of nearly 100 million in August alone. The surge, they wrote, in activity coincides with a structural milestone. Here it comes, Leo.
[01:35:56] For the first time, TypeScript overtook Python and JavaScript in August of 2025 to become the most used language on GitHub. Wow, I'm kind of surprised. Reflecting how developers are reshaping their toolkits. This marks the most significant language shift in more than a decade. They said, and the growth we see is global.
[01:36:26] India alone added more than 5 million developers this year. And Leo, some of them are actually human. Over 14% of all new GitHub accounts. And is on track to account for one in every three new developers on GitHub by 2030. So India developers are pouring into GitHub. They said, this year's data highlights three key shifts.
[01:36:56] First, generative AI is now standard in development. More than 1.1 million public repositories now use an LLM SDK with 693,867 of these projects created in just the past 12 months alone. That's a 178% year-over-year increase from August to August.
[01:37:26] Developers also merged a record 518.7 million pull requests. Moreover, AI adoption starts quickly. 80%, 80% of new developers on GitHub use Copilot in their first week. Second of the three key shifts is TypeScript is now the most used language on GitHub.
[01:37:53] In August 25, TypeScript overtook both Python and JavaScript. Its rise illustrates how developers are shifting toward typed languages. That's the key. Yes. That make agent-assisted code more reliable in production. It doesn't hurt that nearly every major front-end framework now scaffolds with TypeScript by default.
[01:38:19] Even still, Python remains dominant for AI and data science workloads, while the JavaScript TypeScript ecosystem still accounts for more overall activity than Python alone. And the third major change, AI, they said, is reshaping choices, not just code. In the past, developer choice meant picking an IDE, language, or framework.
[01:38:47] In 25, that's changing. We see correlations between the rapid adoption of AI tools and evolving language preferences. This and other shifts suggest AI is influencing not only how fast code is written, but which languages and tools developers are using. And they finish saying, and one of the biggest things in 25, agents are here. Early signals in our data are starting to show their impact,
[01:39:15] but ultimately point to one key thing. We're just getting started, and we expect far greater activity in the months and years ahead. So TypeScript's ascendance is interesting. I don't think we've ever talked about TypeScript much, and I haven't had my own eye on it. But the fact that it has now surpassed Python's use in GitHub projects,
[01:39:43] that's significant. TypeScript can be thought of as a sort of super JavaScript. I've written some JavaScript during this podcast. I don't mean while we're doing the podcast, but during the years of the podcast, the password haystacks page is all client-side JavaScript. And that wacky Latin squares-based encryption system that I created,
[01:40:12] which I named Off the Grid, that used JavaScript to synthesize its Latin squares, also all on the client. And as we know, my own native coding language is Assembler, which is about as unforgiving a coding environment as it's possible to find. So coming there, you know, to like from there, from Assembler to JavaScript,
[01:40:42] was somewhat annoying to me, because whereas coding assembly tends to be far too rigid for most people, I found JavaScript to be far too lax for my taste. Oh, it's terrible. Yeah. the JavaScript language was deliberately designed to be forgiving and easy to use, but that too could be taken too far. Fortunately, I wasn't the only person to feel that way about JavaScript.
[01:41:11] In describing the genesis of JavaScript, which are, sorry, the genesis of TypeScript, which is JavaScript, you know, it's much stricter successor. Wikipedia said, TypeScript originated from the shortcomings of JavaScript when developing large-scale applications, both at Microsoft and among their external customers.
[01:41:39] Because TypeScript was defined by Microsoft. Challenges with dealing with complex JavaScript code led to demand for custom tooling to ease developing of components in that language, in JavaScript. Developers sought a solution that would not break compatibility with the ECMA script standard and its ecosystem.
[01:42:05] So a compiler was developed known as a transpiler to transform a superset of JavaScript with type annotations and classes through TypeScript files. And it transformed it back into vanilla ECMA script five code.
[01:42:27] TypeScript classes were based on the then proposed ECMA script six class specification to make writing prototypal inheritance less verbose and error prone, and type annotations enabled IntelliSense and improved tooling. So, you know, meaning that if you, if you clearly defined your, your type classes in files, then in,
[01:42:56] then Microsoft's IntelliSense in, in JavaScript, in, uh, visual script code could import those files and then help you, uh, to, to use the functions and the classes that you would define. Many of our listeners in future, if not already in practice, I want to share a bit more, uh, from the top of Wikipedia's article.
[01:43:25] They wrote TypeScript is a high level programming language that adds static typing with optional type annotations to JavaScript. It's designed for developing large applications. It transpiles to JavaScript. It's developed by Microsoft as free and open source software released under an Apache license 2.0.
[01:43:49] TypeScript may be used to develop JavaScript applications for both client side and server side execution, as with React, JS, Node.js, Dino, or Bun. Multiple options are available for transpiling. The default TypeScript compiler can be used, or the Babel compiler can be invoked to convert TypeScript into JavaScript.
[01:44:14] TypeScript supports definition files that can contain type information of existing JavaScript libraries, much like a C++ header file can describe the structure of existing object files. This enables other programs to use the values defined in the files as if they were statically typed TypeScript entities. There are third party header files for popular libraries, such as jQuery, MongoDB,
[01:44:43] and D3JS. TypeScript headers for the Node.js library modules are also available, allowing development of Node.js programs within TypeScript. And finally, the TypeScript compiler is written in TypeScript and compiled to JavaScript. It's licensed under the Apache license 2.0. And here's what caught my attention. Anders, who we all know,
[01:45:10] legendary Anders Helgsberg, lead architect of C Sharp and creator of Delphi and Turbo Pascal, when Anders was working with Philippe over at Borland, has worked on developing TypeScript. So, when you hear that Anders is putting his time and focus into a language system, that's worthy of attention all by itself. He's a legend.
[01:45:35] And he's currently recoding the TypeScript transpiler in Go, where it's expected to end up with about a 10x speed improvement. So, everybody's looking forward to that. I'm still not comfortable with many of the decisions that were made during the definition of JavaScript. Having, I mean, even the name, like it's confusing with Java, which it has,
[01:46:04] there's no relationship to. It's like, okay, fine. But, but having Leo being able to have a variable, able to take the value or rather the explicit non-value of NAN, which stands for not a number. Okay. That just rubs me the wrong way. It's like, what? Oh yeah. JavaScript's very lax. It is a, it is a mess. Anyway, all that said, I'm,
[01:46:32] I am sure I would appreciate having a much less JavaScripty, JavaScript. So, the next time I, that I do need to do some web browser client side coding, I'll probably familiarize myself with, with TypeScript and be much more comfortable with it than I was with JavaScript. And obviously it's more popular now than anything else over on GitHub. So everybody else seems to be liking it a lot. So. Yeah. If you had asked me, you know,
[01:47:01] what would replace Python? I would have said JavaScript because JavaScript is kind of now the lingua franca of the web. But anybody who's struggled with JavaScript will welcome TypeScript because it's kind of, you know, you've seen that book, JavaScript, the best parts, which is a really thin book. The good parts. In fact, I think that we, I think we did a picture of the week that had the two side by side. Yeah. JavaScript, the full reference, JavaScript, the good parts. Yeah.
[01:47:30] So this is kind of like that. It's like, well, what if you made JavaScript a typed language, a modern language? And we want people to use statically typed languages because that solves all of these, you know, use after free and buffer over. I don't think I knew that Anders was at Microsoft. Yeah. That's yeah. I did know that. Yeah. That's cool. It is cool. Yeah. Yeah. I mean, he, he, he, you know, I, I sit up and take notice when I, when I hear that. Okay. So nearly, nearly a year ago,
[01:47:59] Microsoft introduced the idea of a new extra tight security feature, which they call administrator protection. One way to think of it is as a sort of super UAC. Of course, you know, we're all familiar with user account control. We're talking about this today because the recently released KB 5067036 previews,
[01:48:27] cumulative update for both windows 11, 24 H2 and 25 H2 finally includes this disabled by default new feature. So this, this KB 5067036 update is one of Microsoft's optional non-security preview releases. Unlike regular patch Tuesday cumulative releases,
[01:48:56] these monthly non-security preview updates do not include security updates and they're optional. You can obtain it by going to windows update, checking for updates, and then downloading and installing it. Once installed, this optional cumulative release will update windows 11, windows 11, 24 H2 to build 26, 100 dot 507, four and windows 11,
[01:49:25] 25 H2 to 26, 100 dot seven, zero one nine. Okay. So what then here's how Microsoft introduced the new feature last November, which we finally have now as of like a few days ago, this is now available. They said in today's digital landscape, the importance of maintaining a robust security posture cannot be overstated.
[01:49:50] A critical aspect of achieving this is ensuring that users operate with the least privilege required. Users with administrative rights on windows have powerful capabilities to modify configurations and make system-wide changes that might impact overall security posture of a windows 11 device. These powerful administrative privileges represent a significant attack vector
[01:50:19] and are frequently abused by malicious actors to gain unauthorized access to user data, compromised privacy, and disable OS security features without a user's knowledge. Recent statistics from Microsoft digital defense report, 2024 indicate that token theft incidents, meaning privilege, token theft incidents, which abuse user privileges have grown to an estimated.
[01:50:46] I couldn't believe this Leo 39,000 per day token theft incidents. So they said administrator protection, this new feature, a new platform security feature in windows 11 aims to protect users while still allowing them to perform necessary functions with just in time administrator privileges.
[01:51:13] The administrator protection requires that a user verify their identity with windows. Hello,
[01:51:28] This new feature is a new feature, And more importantly, helps prevent malware from making silent changes to the system without the user knowing.
[01:51:56] And of course, I'm wondering, OK, how is this not UAC? They said at its core, administrator protection operates on the principle of least privilege. The user is issued a deprivileged user token, even if you're an admin, when they sign into Windows. However, when admin privileges are needed, Windows will request that the user authorize the operation.
[01:52:26] Once the operation is authorized, Windows uses a hidden system generated profile separated user account to create an isolated admin token. This token is issued to the requesting process and is destroyed once the process ends. This ensures that admin privileges do not persist.
[01:52:53] The whole process is repeated when the user tries to perform another task that requires admin privileges. So they finished saying you can enable administrator protection on your device by navigating to the account protection section on the Windows security settings page and switching the toggle to on. A full Windows restart will be required.
[01:53:21] So I'm still I didn't have a chance to play with this. I'm still left on not quite understanding how this isn't UAC because UAC does black out the screen, take it over and say, is this something you know, you've got you've got to elevate your.
[01:53:43] Remember, remember, remember, remember, remember, remember, remember, remember, remember, we talked about this in the very beginning and the invention of this on Windows 7, where a split token was was was generated. And the user normally used a non privilege token. And then Windows could switch over to it. You know, it is possible to disable UAC. You're able to pull the little bar down to, like, you know, reduce how often it's seen.
[01:54:11] And you can even completely turn it off. So you never see it at all. Basically disabling all of that protection. I don't I guess I'm I'm left wondering why that wasn't enough. Well, you know, but. All that requires is that you click on yes, that and so that is clearly a difference here. You must reauthenticate with this administrator protection feature.
[01:54:41] Maybe that's maybe that's what's different. Maybe that's the only thing that's different is that UAC just requires a warm body in the seat and you click on. Yes, I want to do this. Well, malware could potentially click on. Yes, I want to do this. Presumably malware cannot reauthenticate under Windows Hello as you. So this is going to require that you do you do you proactively reauthenticate your identity to Windows.
[01:55:09] Every time you want to do something that requires admin privileges. So I wanted to share this because this is a security based podcast and I'll bet a lot of our listeners are interested. Maybe those who control policy for enterprises, because you can you can turn this on through the whole Windows policy management system. It could be made enterprise wide immediately.
[01:55:40] Right now it's that preview. Apparently it'll be happening next month. Maybe it is what we have patch Tuesdays next Tuesday. So I'm not sure if it'll be next Tuesday or a month from then. But anyway, it's a feature that Microsoft has talked about bringing for a year. It was last November that they first talked about this. Now it's here.
[01:56:06] And Leo, the other thing that's here is our as our sponsor. Yes. Before we get into our feedback from our listeners, because we have a bunch of cool stuff. A little time out. A visit from this little guy here. This is my Thinkst Canary, our sponsor for this segment. One of the sponsors has been with us for I think it's nine years now. Now, this is a this this gizmo is a lifesaver. This is a honeypot.
[01:56:37] Now, you can write your own honeypots. Remember, we were in Boston and we talked to Bill Cheswick, who, you know, he was in search of the wily hackers, very famous security researcher who wrote one of the very first honeypots. And I was asking Bill about it. He said if they're the devil, the devil to write a honeypot is something that is a track. It's like a beast to honey. Right. It's something that looks valuable, looks attractive to bad guys.
[01:57:05] But of course, when they go in, not they don't get stuck in there. They just telegraph their their presence there that they're looking around and something they shouldn't be. So that's the whole idea of a honeypot. It's been used for years in security, but only the very best security people could write honeypots that are good for a variety of reasons. They've got to be attractive to bad guys. They have to, in every respect, look like the real thing. I mean, hackers are pretty sharp and they're very suspicious. They're paranoid, as they should be.
[01:57:35] So they're very careful about the resources they hit on your system. So it has to be absolutely convincing. But it also has to be absolutely secure. You don't want a honeypot that adds insecurity to your system. The whole point of a honeypot is to make your system more secure. Well, these are honeypots created by people who've spent years training companies and governments on how to break into systems. They're pen testers. They're expert hackers.
[01:58:01] They have created the most secure honeypot that is indistinguishable from the real thing. So you get these Thinxed canaries. That's what this is. It's got a little canary on it. You get these things. It looks like about like an external hard drive or something, right? But instead of a USB port, it's got an Ethernet port and a power connector. You plug it in. Now, you can go into your Thinxed canary configuration panel and you can choose what this Thinxed canary appears to be.
[01:58:30] And they see something that looks like, oh, a SharePoint 2019 server or an IAS server or a Linux box or maybe an old Windows 95 machine or a SCADA device. It could be anything, an OpenSSH server. And it is absolutely indistinguishable. So a bad guy is going to look at it and say, oh, I bet there's something good on there. But the minute they attempt to access your fake internal SSH server, you get an alert.
[01:58:59] No false alerts, just the alerts that matter in any way you like it. SMS, Slack, email. They support syslog. They have web hooks. They actually have an API. You can write your own if you wanted to if you're so inclined. The other thing that Thinxed canary will do, the hardware device will create little software files that you can spread around. I have like on my, and you can spread them even on your cloud. So on my Google Drive, there's, you know, Excel spreadsheet that says payroll information, that kind of thing.
[01:59:28] Except not, it's not an Excel spreadsheet. If the bad guy tries to open it, I'm going to get the alert. You can even make it, I have wire guard configurations. You know, that's something a bad guy really wants to get. Oh, the private key for your wire guard is in there. And they can't resist opening it. It's a file and they open it. Something goes wrong. It doesn't open. They go, I don't know. They move on. But you know now they're in there. So what you do is you get your Thinxed canary devices. You choose a profile form.
[01:59:56] You register it with a hosted console for monitoring and notifications. And then you put your feet up and you relax. Attackers who've breached your network or malicious insiders, any adversary cannot help but make themselves known by accessing your Thinxed canary. Now, if you're a big operation, a bank or a casino back end, you might have hundreds of these. They do.
[02:00:19] These Thinx canaries are very, very popular with people who want, you know, layered security, who want to make sure if somebody gets through their outer perimeter defenses and gets into the network, they know they're there. A small operation like ours might just have a handful. Let's say you want five. Visit canary.tools.twit. For $7,500 a year, you're going to get five Thinxed canaries. You get your own hosted console. You get upgrades. You get support. You get maintenance.
[02:00:46] Oh, and by the way, if you use the code TWIT in the how did you hear about us box, you'll also get 10% off the price. And not just for the first year, for life. For as long as you own your Thinxed canary. Here's another thing you should know. You can always return the Thinxed canary if they've got a two-month, 60-day money-back guarantee for a full refund. So there's zero risk. I should tell you that in the nine years Twit has partnered with Thinxed canary, not one person has asked for their money back.
[02:01:14] The refund has never been claimed. Visit canary.tools.twit. Enter the code TWIT in the how did you hear about us box, 10% off for life. All right. Back to you, Steve. So to remind our listeners from last week, it was Michael Cunningham, whose note I shared in which he told us about an employee of their common employer who walked past his desk and simply said, you're evil.
[02:01:43] After he had implemented a minimum password duration policy on top of a password reuse prohibition. Oh, Lord. Making it impossible to quickly change the password five times to come back to the one that the employee wanted. So he heard me share his email on the podcast last week and he took it in to his credit in the constructive spirit in which it was intended.
[02:02:09] And he followed up by writing, just to close the loop. I thoroughly enjoyed your thoughts on this and I totally agree. The company where I am now did away with password changes during the pandemic simply because the cost of dealing with help desk calls due to failed password changes far outweighed any benefit.
[02:02:34] Plus, we have services that monitor for compromised passwords and only make those users change them if they match. Keep up. Oh, that's different. Yes. Yes. He said, keep up the good work. And P.S. Yes. I do now get invited to more parties. Oh, yes. If they're compromised, change them. Yes, of course. Of course.
[02:03:00] So anyway, I Leo, I did want to mention that I thought that you are also right to point out that the minimum 16 character password length requirement when you're only using a single factor for authentication. That's a really good change. I mean, having a long password is key. So if all you're going to use is a password. You taught me that. Password haystacks, baby. You taught me that. Yeah. Yeah. Robert G. in the UK said, hi, Steve.
[02:03:29] I've been enjoying the selection of comments to the NIST password changes you've been sharing. Boy, I mean, this resonated with our listeners, of course. The story of the admin being called evil reminded me of a similar conversation a few years ago, paraphrased below. He said a user quips in a throwaway remark. I use the same handful of passwords for everything. Sorry, Rob. And Rob replies.
[02:03:57] Why do you think I enforced two factor authentication for everyone? It's so I could sleep at night. And he said a combination of knowing what data is behind the login and a full acceptance of, well, yes, users will be users and we'll work around whatever they can was why I didn't fight that battle. Although since friends don't let friends do stupid stuff, his education is continuing. Meaning Rob is continuing to explain to the various people, you know, why they need to be secure.
[02:04:27] Neil in Ohio said, you described in great detail explaining that the attacker can see the targeted resolvers query packet. Oh, he's talking about the process of DNS cache poisoning. He said, you described in great detail explaining that the attacker can see the targeted resolvers query packet and then they can guess what the next one will look like.
[02:04:55] But since they can see the actual request they want to fake a reply to, don't they already have the port and sequence number and they can just make a quick false response with those? He says, I'm missing something. Okay, so I see what Neil means. He's 100% correct.
[02:05:15] If an attacker were somehow able to directly observe the upstream request that a resolver made to the name server, it's querying, and whose reply they wish to spoof, then they could absolutely instantly send the matching malicious reply to the resolver.
[02:05:42] But that would require the attacker to be located in a very specific location so that they were able to monitor the actual network traffic passing between the resolver and the name server.
[02:05:58] However, if the requirement were to observe a resolver's actual query, that would make the attack much more theoretical than practical because no attacker in, for example, some other country would be able to position themselves on the network path between an arbitrary resolver and the name server that it's querying.
[02:06:27] That's why the DNS cache poisoning attack that I described last week began by issuing a DNS request probe to obtain the approximate state of the issuing port number and query IDs. That probe could be sent from anywhere on the internet.
[02:06:50] And the query that it induced from the target resolver could also then be observed from the attacker's location. So if you were in Russia, for example, you could send a query to a resolver in the U.S. And induce it to ask a name server in Russia for help resolving the probing domain's IP.
[02:07:18] When you get that packet from it, then you know what port number and query ID it just came from. And so that gives you the rough equivalent of observing the traffic coming from the resolver. So that explains what, you know, Neil's confusion is that he's absolutely right. If you are on the network, then the attack is way easier to perpetrate.
[02:07:46] But the attack is powerful because you don't need to be on the network. You can do it from anywhere in the world. Ian McCutcheon raises an interesting aspect of the ransomware payment calculation. He said, I've been listening since episode one and have an observation on the shifting economics of ransomware. I've noticed more companies refusing to pay demands. We talked about that also recently down to less than a quarter.
[02:08:16] And I suspect it's not principle, but a new cold calculation. It seems the economics have flipped. My theory is that it's now simply cheaper to refuse the ransom and manage the fallout, offering token identity theft protection and what amounts to lip service. You know, rather than pay the actual demand.
[02:08:41] This optimized response, you know, meaning economically optimized is made easier because there's minimal reputational hit. In fact, companies can spin refusing to pay as standing up to the bullies. It's a calculation that prioritizes the corporate balance sheet over user welfare. Am I being too cynical or do you see these new mechanics at play? Ian.
[02:09:09] So I think his point is something that makes sense. And it hadn't really occurred to me before.
[02:09:16] It's certainly the case that breaches of all kinds and especially ransomware demands have unfortunately now become so common that the public's perception of the attacked company is no longer necessarily as negative as it probably once was like three or four years ago, five years ago. You know, back then it was a big deal. Now, oh, another company.
[02:09:45] OK, that happens. Apparently hackers can get into everything. That's sort of what's that. That's what the ether is now saying. Of course, there are exceptions to that rule. The astonishing cost to the British economy of the Jaguar Land Rover attack now estimated at nearly two billion pounds. That stands alone, making it the single most damaging cyber attack in British history.
[02:10:11] But unless an attack victim screws up that badly, I'd say that Ian has a good point. Most attacks these days now result in a shrug, you know, and some sympathy for the victim just under the assumption that it's not the victim's fault. OK, you know, we probably know better on this podcast. GP writes, good day, Steve and Leo.
[02:10:37] Just a quick word on the MS Teams Wi-Fi tracking policy for your listeners. The forthcoming feature is to allow Teams to update the user's status as in office or remote. The PSA here is that MS Teams has already had that ability for years.
[02:10:59] Teams tracks the geolocation of its users and can even restrict logons by geolocation or even in office or not, and so forth. Previously, this information was available through access logs, either in MS Teams or in the authentication service provider, Duo, Okta, etc. Now it will be a visible indicator of whether the user is in office or remote. Is that an invasion of privacy?
[02:11:29] I guess the user can choose. All the best and keep the work going. OK, so thank you, GP. It certainly makes sense that everything would have been logged somewhere. So the only thing that's really changing then is that this information is being surfaced to make it more accessible on the user interface. Alan wrote security. I love it. He addressed this to security now. Security now.
[02:11:55] My friend Sean is teaching me how to program in Python. And get this, Leo, eight days ago told me I needed to start listening at the beginning of security now and listen to the whole 21-year archive. Now, this was eight days ago. He says, I'm currently on episode 75. Wow.
[02:12:21] Now, but we remember those episodes were, you know, 30 minutes long. Yeah, yeah. So, but he said, having eight hours a day to listen while driving a semi has its benefits. OK, yeah. OK. So he said, considering. I don't know how it helps with Python, but OK. Go ahead. Well, but this guy is sharp.
[02:12:46] He said, considering that I'm 19 years behind as of now, you may have already answered this. But just in case you haven't, if someone were to try to brute force a password, I imagine running a dictionary would be the first stop, followed by common names and combination of names and words. After that, what, though?
[02:13:12] Would it start at the shortest possibilities of upper, lower number and special character combos and work up? If so, couldn't a password made up of 63 plus signs potentially provide a password strong enough to remain uncrackable until our sun explodes? Thank you for your time, Alan. And I'm quite impressed with Alan.
[02:13:41] Since, you know, episode 75 places Alan into our second year of 20 years, it's going to be quite a while before he hears this reply on episode 1050 of the podcast. So I knew that I needed to reply not only here, but also in writing. Here's what I wrote and said to Alan. Although he'll hear it when he's about 90, but OK. Yeah.
[02:14:10] I said, Alan, I would conclude that your friend Sean is not wasting his time teaching you to code in Python since you're clearly sharp enough to learn how to make computers go. And Python is a great first computer language to start with. It might be all you ever need.
[02:14:28] Since you're patiently starting at the beginning of our 20 plus years of weekly podcasts and are currently at episode 75, I knew that if I shared and answered your note during podcast 1050, as I am doing, that you might not hear my reply for quite some time. So I'm also emailing my reply to you. I mentioned that you are clearly sharp enough to succeed at computer programming.
[02:14:54] My opinion was driven by your astute observation that length is what matters most for brute force password cracking resistance. When you get to Security Now episode number 303, you're going to encounter password haystacks, which is a web page and demonstration I created to illustrate the overwhelming power of password length.
[02:15:22] And let me just mention, if I may be immodest for just a moment, that you are going to learn so much about computers, networking, the operation of the global Internet, and very broadly about computers and computing in general, that you're going to be a very different person by the time you catch up, aside from being 90, and emerge at the other end of this journey.
[02:15:51] Congratulations in advance. And also congratulations on your decision to learn to code. I'm strongly biased, but coding is the most fun I have ever had. Oh, yeah. I agree 100%. You can't do it while driving a truck. That's the drawback. So he can listen to the security now, which is not going to hurt. And then he'll be coding when he's not driving his truck for 10 hours a day and when he's not sleeping.
[02:16:19] Or actually, maybe he could code in his sleep. You know, I do. So it could happen. It's funny that you say that because I often, if I'm stumped on a coding problem, like, you know, these advent of code problems. Yes, big, serious, real how to tackle the whole solving. And I often wake up with the answer. My brain does work on it overnight. It's very interesting. Yes. The so-called sleep on it, there's something to it. Oh, yeah.
[02:16:49] Sure. That's how, was it Watson or Crick had the dream of the double helix and solved the DNA issue. Actually, it was a woman named Franklin, but we won't go into that. Go ahead. Continue on. And speaking of passwords, Cameron Patberg wrote, hey, Steve, I'm hopeful you are getting these since I haven't ever received a response or heard my comments make it to the podcast. Not that I expect them to, just feels like it may be a black box sometimes.
[02:17:20] I just wanted to share some thoughts regarding your comment in episode 1049. So that's last week. Regarding the browser should hash the password before sending it. It should be made explicitly clear that the gains from that really only work for websites and not in corporate environments for services.
[02:17:40] It should also be made clear that even if you hash the user password before sending it, that hash becomes their actual password from a technological standpoint. And you still have to hash it after receiving it. Some may ask why. And it comes down to the liability the company holds in storing the values in the password database.
[02:18:05] If you don't run an operation such as hashing after receiving the password and before storing it in the database, you leave yourself open to having every user's password compromised, guaranteed in the case the database ever gets leaked or dumped. An attacker will figure out what's going on and bypass the password. And bypass the JavaScript hashing the password beforehand, probably by using a proxy or script.
[02:18:34] And send a login request with the hashed value, which in this case is the real password. So if they get access to the database, in this case, they can automatically access everyone's account. No cracking passwords or hashes necessary. So what protections do we get from hashing the password in the browser before sending it if we still have to hash it again on the back end?
[02:19:01] I honestly haven't found a good reason why we should do this and would love to have explained to me why I'm wrong. It honestly makes the most sense to just rely on the transport encryption, TLS, as I don't see any benefit of hashing it before sending it. The second thing to bring up is corporate environments and why hashing the password before sending it doesn't work.
[02:19:28] Most corporate environments set up federated services or some other method of sharing credentials for different services. Not every service can be expected to do this extra step, especially when they depend on other protocols that don't support a browser or JavaScript. SMB, LDAPS, Kerberos, etc.
[02:19:49] Hashing the password in the browser before sending it would result in a different password from what the other systems receive because of the need to still hash it once again, as mentioned above. I love your thoughts on this because I really don't see the value of hashing the password in the browser before sending it, what it really does for anyone. And instead, strong transport encryption should be relied upon. Thanks, Cameron.
[02:20:16] So I wanted to first explain that I generally receive around 100 pieces of email from our listeners every week. Wow. As a feedback system, I could never ask for more. This has been a huge success. In fact, you could ask for less, but okay. It does mean, well, I scan through things and lots of things are people saying thank you or pictures of the week that they forgot we've shown previously and so forth.
[02:20:45] But it does mean that I am never going to be able to air everyone's notes and feedback. So everyone should know how much I appreciate all the feedback. They hugely improve the podcast and it gives everyone a sense of community and connection, which is what we want.
[02:21:03] A case in point is Cameron's note, since he's absolutely correct about the need for the recipient of whatever is received from the user to be hashed again, rather than simply being stored and later compared with what the user later resupplies when re-authenticating. And we know that that's important because users could change the browser side hashing, which we've seen with LastPass.
[02:21:31] The reason to employ client side browser-based hashing in the context of, for example, a local password manager is the need to prevent local brute force attacks on the password manager's locally stored and encrypted password or password database.
[02:21:54] You know, this was the reason LastPass maintained an iteration count in their password manager, even though they were not great about increasing it over time and updating and reapplying it when necessary.
[02:22:10] And in the context of users authenticating to a remote server, Cameron's correct that once whatever the user sends arrives at the server side, it must also be hashed in a brute force resistant way to prevent simple replay attacks of that service's stored data.
[02:22:29] Back in the early days of this podcast, we spent a great deal of time going over and over and over this, looking at the specific mechanisms that were being employed on the server side to prevent all manner of attacks.
[02:22:45] As for never hashing on the client side and relying entirely on the server, it's somewhat unnerving to rely entirely on transport layer security, since we know that middle boxes which intercept and decrypt such communications are a real thing. So whenever possible, it would be best to perform both local and remote hashing to deeply obscure a user's password.
[02:23:13] And that's what all of the password managers do, for example. They deeply hash on the client and then they deeply hash server side so that what they're storing can't be cracked either by a brute force attack on the client or a brute force attack on what the server has stored.
[02:23:38] Admittedly, though, this was a much bigger issue back in the early days when password reuse was much more the rule than the exception. Back then, obtaining a user's in the clear, unhashed password, which could be done at a middle box, had the very real likelihood of revealing the same password that they used elsewhere and still hadn't disappeared from use.
[02:24:07] And as we know, password reuse still, you know, is enduring despite the encouragement by password managers to use a unique password for every site. So anyway, great question, Cameron. And you now know that your questions are not disappearing into a black hole somewhere.
[02:24:28] Randy Crum said, Steve, in episode 1039, during your comments about the script case, you were reading the post from Volncheck. In part in the part about honeypots. Oh, and Leo, I was thinking about this when you were talking about things. And the difficulty of creating convincing honeypots, which is the whole issue.
[02:24:54] So Randy said, in the part about honeypots, it casually mentions they, quote, built a showdown query to avoid the decoys. He said, at this point, I stopped the podcast stunned. He said, if this is so easy to do, honeypots wouldn't work at all. Can you dig a little deeper into this and explain? And, you know, we have another attentive listener in Randy.
[02:25:22] Boy, I hadn't even really thought about that because the honeypot's inside the network. It's not visible to showdown. So if you have a SharePoint 2018 with that, in theory, with that exploit set up on your honeypot, it's not really exposing that to the outside world. So showdown wouldn't see it. Can't. Right. Yeah. So he's absolutely right.
[02:25:47] Just to clarify for everyone, Randy was wondering why and how the Volncheck guys were able to craft a showdown query, which distinguished between the truly vulnerable targets from the many decoys and actual vulnerability.
[02:26:09] And the answer must be that this strategy only applied in this specific case and only worked in this instance because the decoys were not very good decoys. They were hastily thrown together just to create a low quality honeypot. They were not sufficiently thorough simulations of truly vulnerable targets. And that's what you want in your honeypot.
[02:26:39] Well, and in the case of the things canaries, these are honeypots for people who have penetrated the network. So they wouldn't expose services publicly. True. Right. Because that's not the point. You know, they're trying to get people. Some honeypots are, in fact, I think. Deliberately, publicly. Bill Cheswick's were public because they're trying to attract people from the outside world. But the canaries are intended to only discover bad guys in your network. They don't care. And they probably shouldn't care about bad guys around the rest of the place. Yeah.
[02:27:09] That makes sense. Yeah. Chris Garner in Sydney, Australia, said, Hi, Steve. Just listen to 1049 and this robot vacuum sending data out. The one that is sucking. And it's not. It's sucking your data rather than your carpet. In Australia, he said, A lot of us recently got free upgrades to our national broadband network speeds.
[02:27:34] For me, it was 50 megabit up to 500 megabit. But I needed a new router. Telstra were quite happily to send this. The guest network was off by default. I only have my ring doorbell, robo rock vacuum, and washing machine. Initially, I just connected these to the trusted network as I needed them working again.
[02:28:02] SN 1049 woke me up. I paused, enabled the guest network, and reconnected these all to the guest network. My wife said, How do you even know to do this? I explained it all in terms that she could understand. She said, I'm lucky I have you. But what about everyone else? She said, Yeah. And what about everyone else? He said, Interesting to think about.
[02:28:29] All these consumers of IoT devices and nearly all of their users completely oblivious to what's going on and what could happen. Cheers. Chris Gullner, Sydney, Australia. Chris, of course, reminds me. I've got to put that Wi-Fi clock on a separate segment. Worth doing. Yeah. Worth doing. Okay. Our last break.
[02:28:56] And now we're going to then we're going to talk about the AI browsers. What could possibly go wrong? It is not our last break. However, we have. Oh. Two more. So you're a popular feller. I'm okay. In that case, we'll take a break in the middle. Yes. Even if you can't count to five, you're a popular fellow. This is number four in a continuing series of excellent sponsors.
[02:29:23] You know what's great about all the sponsors on the show is they're all focused on security and on privacy and the kinds of things we talk about on the show, which is kind of the whole point. If you think about it, about podcasting is we don't know anything about you because it's an RSS feed. We can't sell ads based on the person who's listening. But we know and our sponsors know anybody who listens to this show is clearly interested in technology and in security and in privacy. So we don't have to know anything about you.
[02:29:53] We already do by the fact that you're listening. Now, if you're not interested in that stuff, you're not going to be interested in any of these sponsors. I'm sorry. But why are you listening? Okay. The Nixie clock. Is it on the Wi-Fi? It is. Gosh, darn it. I got to segment that too, don't I? Thank you. But that you could probably trust because you sort of know about its genesis, right? Yeah. I could trust it. Yeah. Yeah. I think. I don't know.
[02:30:23] Who made the Nixie tubes? Does anybody still make Nixie tubes? They're probably Russian. They're only sourced from Russia now. Yeah. Yeah. So, okay. Know what I'm saying? Fortunately, I do not have to worry about compliance here in the Laporte house. Folks, if you are a big enterprise, of course, compliance is job one in many cases.
[02:30:46] This episode of Security Now brought to you by BigID, the next generation AI-powered data security and compliance solution. And the one that all of you ought to be using right now. BigID is the first and only leading data security and compliance solution to uncover dark data through AI classification. This is fascinating. It can help you identify and manage risk.
[02:31:15] It can help you remediate. And remediate as you wish, you know, the way you want to do it. It can map and monitor access controls. Very important with compliance. And it can scale your data security strategy. Along with unmatched coverage for cloud and on-prem data sources. BigID also seamlessly integrates with your existing tech stack. It allows you to coordinate security and remediation workflows. You can take action on data risks to protect against breaches.
[02:31:42] You can annotate, delete, quarantine, and more based on the data all while maintaining an audit trail. And as I said, it works with everything that you're already using. ServiceNow, Palo Alto Networks, Microsoft, Google, AWS, and on. If I started reading the list, we wouldn't be done until tomorrow. With BigID's advanced AI models, you can reduce risk. You can accelerate time to insight. And you can gain visibility and control over all your data.
[02:32:09] Intuit named it the number one platform for data classification and accuracy, speed, and scalability. If you think about it, if you're using AI, it's really important you know where the AI information comes from. Right? Yes. Imagine the dark data problems posed for a business that's been around for 250 years. How about the U.S. Army? You think they have any dark data?
[02:32:37] BigID equipped the U.S. Army to illuminate dark data, to accelerate their cloud migration, which has been a big priority for the armed forces, to minimize redundancy, and to automate data retention. And they have, I mean, if you think about it, a lot of requirements in data retention. U.S. Army Training and Doctrine Command gave BigID this quote.
[02:32:58] The first wow moment with BigID came with being able to have that single interface that inventories a variety of data holdings, including structured and unstructured data across emails, zip files, SharePoint databases, and more. To see that mass and to be able to correlate across those is completely novel. I've never seen a capability that brings this together like BigID does. End quote.
[02:33:25] That's a pretty good endorsement from the U.S. Army. Wow. CNBC recognized BigID as one of the top 25 startups for the enterprise. BigID was named to the Inc. 5000 and Deloitte 500, not just once, but for four years in a row. The publisher of Cyber Defense Magazine says, BigID embodies three major features we judges look for to become winners.
[02:33:49] Understanding tomorrow's threats today, providing a cost-effective solution, and innovating in unexpected ways that can help mitigate cyber risk and get one step ahead of the next breach. End quote. Start protecting your sensitive data wherever your data lives at BigID.com slash security now. Get a free demo to see how BigID can help your organization reduce data risk and accelerate the adoption of generative AI.
[02:34:16] Again, that's BigID.com slash security now. Ooh, there's also a free white paper that provides valuable insights for a new framework. It's called AI Trism. Maybe you've heard about this. It's AI Trust, Risk, and Security Management, T-R-I-S-M, to help you harness the full potential of AI responsibly. And it's there for you to download right now at BigID.com slash security now.
[02:34:44] And we thank them so much for their support of Security Now. Another sponsor is going to be back next year. We love these guys. BigID.com slash security now. Okay, Steve. Let's go on with the show. The Verge's headline last Thursday was, AI browsers are a cybersecurity time bomb. Followed by the article's tease,
[02:35:11] rushed releases, corruptible AI agents, and supercharged tracking make AI browsers home to a host of known and unknown cybersecurity risks. And I thought this reporting by The Verge was quite interesting, especially since there's a feeling in the air, you know, in the industry, that this merging of AI and web browsers is in some way natural.
[02:35:39] And it's like it's destined to be a thing. We also know that this sentiment, while widespread and spreading, is not universal. Since several months back, we discussed Vivaldi's stance, you know, in which they spelled out in their posting, carrying the headline, Vivaldi takes a stand, keep browsing human. So they're just saying no.
[02:36:06] And even then, though, we recognized that that was probably just for now, right? Like AI was going to get them sooner or later. Basically, that's what they said. This is for now. Yeah. But that's good. It's a start. Until we, you know, we see that it's proven and it's a good thing. So you don't have to worry about it creeping in, you know, yet. Okay.
[02:36:32] So let's start today's topic journey because I've got some cool stuff. By looking at what The Verge reported, they wrote, Web browsers are getting awfully chatty. They got even chattier last week after OpenAI and Microsoft kicked the AI browser race into high gear with ChatGPT Atlas and Copilot mode for Edge.
[02:36:59] They can answer questions, summarize pages, and even take actions on your behalf. The experience is far from seamless yet, but it hints at a more convenient hands-off future where your browser does lots of your thinking for you. And who wouldn't want that? Cybersecurity experts warn that that future could also be a minefield of new vulnerabilities and data leaks.
[02:37:26] The signs are already here, and researchers tell The Verge the chaos is only just getting started. Atlas and Copilot mode are part of a broader land grab to control the gateway to the Internet and to bake AI directly into the browser itself. That push is transforming what were once standalone chatbots on separate pages or apps into the very platform you use to navigate the web.
[02:37:56] And they're not alone. Established players are also in the race, such as Google, which is integrating its Gemini AI model into Chrome. Opera, which launched Neon. And the browser company with Dia. Startups are also keen to stake a claim, such as AI startup Perplexity, best known for its AI-powered search engine, which made its AI-powered browser, Comet, freely available to everyone in early October.
[02:38:25] And Sweden's Strawberry, which is still in beta and actively pursuing disappointed Atlas users. In the past few weeks alone, researchers have uncovered vulnerabilities in Atlas, allowing attackers to take advantage of ChatGPT's memory to inject malicious code, grant themselves access privileges, and deploy malware.
[02:38:50] Flaws discovered in Comet could allow attackers to hijack the browser's AI with hidden instructions. Perplexity through a blog and OpenAI's chief information security officer, Dane Stuckey, acknowledged prompt injections as a big threat last week, though both described them as a frontier problem that has no firm solution.
[02:39:16] Hammond Haddadi, professor of human-centered systems at Imperial College in London and chief scientist at web browser company Brave, said, Despite some heavy guardrails being in place, there is a vast attack surface. And what we're seeing is just the tip of the iceberg. With AI browsers, the threats are numerous. Yash Vikaria, a computer science researcher at UC Davis, said, quote,
[02:39:46] They know far more about you and are much more powerful than traditional browsers, unquote, even more than standard browsers. Vikaria says there is an imminent risk from being tracked and profiled by the browser itself. Okay, so let's just pause here to consider that for a moment. One of the things I've often observed is that ChatGPT is clearly maintaining
[02:40:14] a multi-session, multi-week, multi-month conversation context. Over time, for example, it has learned that I'm a Windows coder and I use the original Win32 API that I code in assembly language, but that I prefer to see snippets of sample code in C. My particular set of preferences, you know, they're non-standard enough
[02:40:42] that it quickly became apparent to me that it was learning who I was because days would go by and I would ask a question again and get an answer that was, like, customized for me. So at first it was a bit jarring since it was unexpected, but, you know, it evolved into a convenience since it wasn't necessary for me to keep reminding it who I was
[02:41:10] and the way, you know, the nature of the way my questions were going to be implemented. At the moment, using Firefox's built-in vertical tabs, I have a ChatGPT tab pinned to the top of the tab order. Since I also use Firefox's, you know, control number key shortcut to quickly jump among tabs, I did need to adjust my count
[02:41:38] since that top ChatGPT tab participates in the tab enumeration. So now control one is now my ChatGPT tab and the tab that I may be normally using has become control two and so on. But anyway, I'm making use of ChatGPT as I said in the past. We spent endless hours through the past 20 years of this podcast
[02:42:04] examining every aspect of internet tracking and profiling. Now we're talking about having our web browsers themselves deliberately learning far more about us, not only from our direct dialogues with them, but by being the agents through which we view the world with our browsers. There's one huge difference, though,
[02:42:31] and that's worth considering and keeping in mind, I think. In the case of traditional advertiser tracking and explicit non-advertising tracking through just trackers, the profiling that's being obtained, often despite our express lack of consent, does not directly benefit us. If it serves to increase the advertiser's payouts to the websites we visit by improving ad targeting,
[02:42:59] and then that might be an indirect benefit to us because we're supporting the websites that we're visiting. But generally, it appears that the profiles that are accruing behind our backs are used to line the pockets of the tracking companies who sell this information about us to others. And that might include our own ISPs that have a new income stream about their own customers
[02:43:26] and from which we certainly don't benefit. By comparison, if our web browser is learning about us and presuming that this knowledge is not being shared with the browser's publisher without our knowledge and permission, which that may be a mispresumption, we'll see how this evolves. But if it's learning about us, then a web browser that's able
[02:43:53] to interpose itself between us and the internet for the express purpose of facilitating and improving our browsing experience could indeed be transformative. So I'm not suggesting this is all bad. What I'm suggesting is it's probably going to go. It's probably going to happen. It's probably going to succeed. People are probably going to want it. And unlike with the hundreds of individual tracking agents filling the world,
[02:44:21] if this accrued knowledge about us could be kept local and contained, then the privacy risks may at least be knowable. On the other hand, people said a big no to Windows Recall and the promise was that that would be kept local. So, you know, and our browser, having recall looking at our browser was a lot of what people objected to. So the Verges reporting continues,
[02:44:51] writing, AI memory functions are designed to learn from everything a user does or shares, from browsing to emails to searches, as well as conversations with a built-in AI assistant. This means you're probably sharing far more than you realize and the browser remembers it all. Vicarious says the result is a, quote,
[02:45:19] a more invasive profile than ever before. Hackers would quite like to get a hold of that information, especially if coupled with stored credit card details and login credentials, which are often found on browsers. Another threat is inherent to the rollout of any new technology. No matter how careful developers are, there will inevitably be weaknesses hackers can exploit. This could range from bugs and coding errors
[02:45:48] that accidentally reveal sensitive data to major security flaws that could let hackers gain access to your system. Lukas Olnick, an independent cybersecurity researcher and visiting senior research fellow at King's College London, said, it's early days, so expect risky vulnerabilities to emerge. He points to the early office macro abuses and malicious browser extensions prior to the introduction of permissions
[02:46:17] as examples of previous security issues linked to the rollout of new technologies, and he says, here we go again. Some vulnerabilities are never found and may lead to devastating zero-day attacks, but thorough testing can slash the number of potential problems. With AI browsers, the biggest immediate threat is the market rush because these new Agenic browsers have not been thoroughly tested and validated.
[02:46:47] And I'll just toss in here that my sense that this technology has a large and strong fundamentally uncontrollable aspect has never diminished, by which I mean this notion of teaching an AI agent not to share something it shouldn't with you, not to respond to certain types of questions. We spent the beginning of the emergence of AI
[02:47:15] looking at all the ways it was possible to trick the agents to skip out of their leash. So I continue to be frequently astonished by the dialogues I have with ChatGPT. I really, I just shake my head. I think, holy crap, what is this? And the idea of erecting barriers
[02:47:43] around how it might wish to respond to me seems like a fool's errand. I just, you know, I get the way the technology functions and I just don't know how you really constrain it. And so far, we've seen that those efforts have been worked around. And note that I insist upon placing wish when I say how it wishes to respond. Well,
[02:48:13] that's in air quotes because there's no it there, right? It's it's a very impressive, sophisticated grammar generator that continues to astonish me. So The Verge continues saying, but AI browsers defining feature AI is where the worst threats are brewing. The biggest challenge comes with AI agents that act on behalf of the user.
[02:48:42] Like humans, they're capable of visiting suspect websites, clicking on dodgy links and inputting sensitive information into places sensitive information should not go. But unlike humans, they lack the learned common sense that helps keep us safe online. Agents can also be misled, even hijacked for nefarious purposes. All it takes is the right instructions. Okay,
[02:49:12] so just to segue again for a second, imagine that elderly Canadian couple in their 70s who got fooled. Well, imagine that an AI was similarly gullible, which it may well be, like this elderly 70-year-old couple, and falls for this, and is executing things on your behalf. Yikes. They said so-called prompt injections can range
[02:49:41] from glaringly obvious to subtle, effectively hidden in plain sight in things like images, screenshots, form fields, emails, and attachments, and even something as simple as invisible white text on a white background. Worse yet, these attacks can be very difficult to anticipate and defend against. Automation means bad actors can try and try again until
[02:50:11] the agent does what they want. Interaction with agents allows endless trial and error configurations and explorations of methods to insert malicious prompts and commands. There are simply far more chances for a hacker to break through when interacting with an agent, opening up a huge new space for potential attacks. Shunjin Lee, a professor of cybersecurity at the University of Kent, says, zero-day
[02:50:41] vulnerabilities are exponentially increasing as a result. Even worse, Lee says, as the flaw starts with an agent, detection will also be delayed, meaning potentially bigger breaches. It's not hard to imagine what might be in store. Olnick sees scenarios where attackers use hidden instructions to get AI browsers to send out personal data or steal purchased goods by changing the saved address on a
[02:51:11] shopping site. To make matters worse, Vicaria warns it's, quote, relatively easy to pull off attacks, unquote, given the current state of AI browsers, even with safeguards in place. He says, browser vendors have a lot of work to do in order to make them more safe, secure, and private for end users, yet, here they come. And to that I repeat my skepticism, to the basic feasibility
[02:51:40] of controlling a technology that, to me, just feels fundamentally hostile to being controlled. The Verge finishes by writing, for some threats, experts say the only real way to keep safe using AI browsers is to simply avoid the marquee features entirely. Lee suggests people save AI for only when they absolutely
[02:52:09] need it, and know what they're doing. Browsers should operate in an AI-free mode by default. If you must use the AI agent features, Vicaria advises a degree of hand-holding. When setting a task, give the agent verified websites you know to be safe, rather than letting it figure them out on its own. Nobody's going to do that. It can end up suggesting and using
[02:52:39] a scam site, he warns. I prefer to use AI when I want AI, and the thing is you have AI plugins, you have AI webs, there's plenty of AI everywhere. You don't need to put it into the browser. I know, but Leo, we know what the common user is going to do. They're going to think this is the best thing that has ever happened. Okay. Especially don't want to give my credit card number to an
[02:53:09] AI. That's the last thing. But our browsers have it inside, right? They already have it. Yeah. Yep. Our last break, and then I want to talk about the guy who coined the term prompt injection. Ah, okay. Good. Yeah, I mean, I played with all the agentic browsers. I've, you know, I've got Comet and Dia, and I've got, what's that new one that just came out? I have them all, but,
[02:53:39] and I've played with them all, but I just don't see any real value to be gained by it, and I use AI all the time. I just use it kind of more consciously, and I just, it doesn't seem like a good idea, but anyway, you know, we're not anti-AI. You and I love AI. We're very impressed with it. It's pinned to the top of my tab stack in Firefox. Exactly. And I'm talking to ChatGPT and has learned who
[02:54:09] I am. Yeah. Ask it to draw a picture of you. That's the fun thing, people. I know, it's a fun thing people do. They go, okay, based on what you know about me, draw a picture of me. And often it's very revealing. I'll do that after this break and show you what my AI thinks of me. But first word, the difference is I'm not giving it the credit card number. I'm not. This show brought to
[02:54:39] you by, well, we've talked about zero trust for a long time now. This is the best way to implement zero trust. Threat Locker, ransomware, you know it, killing businesses worldwide. How do they do it? Phishing emails, infected downloads, malicious websites, RDP exploits. Don't be the next victim. Threat Locker's zero trust platform. So brilliant. It takes a proactive deny by default. Those are the three words you want to see.
[02:55:08] Deny by default approach. It blocks every unauthorized action, protecting you from both known and unknown threats. If it's at all confusing, and it shouldn't be, but if it is, there's a great 30 second video on the Threat Locker website. 30 seconds and you'll get it. How their ring fencing works. And it's really trusted by companies that can't afford to be offline for a second, let alone ransomware for a month, like some companies we know. Companies like
[02:55:37] JetBlue use Threat Locker to make sure they keep flying high. The Port of Vancouver keeps the ships going with Threat Locker. Threat Locker shields you from zero day exploits and supply chain attacks while providing complete audit trails for compliance. That's one of the nice side effects of Zero Trust because only actions you authorize can happen. You know exactly who did what, when, where, how, why, and all of that. Great for compliance. Threat Locker's innovative ring fencing technology,
[02:56:07] that's what they call it, isolates, critical applications from weaponization. It stops ransomware cold and, this is really important, limits lateral movement within your network. Just because a bad guy gets in doesn't mean they can go anywhere they want. They can only go where you say they can go. Threat Locker works across all environments, all industries. It supports PCs and Macs. It works flawlessly for a very affordable price. They've got incredible support
[02:56:37] from the U.S. there 24-7 for you. And, of course, you enable comprehensive visibility and control. Mark Tolson, the IT director for the city of Champaign, Illinois, and other, you know, city governments, as we've talked about, very vulnerable. He says, quote, Threat Locker provides that extra key to block anomalies that nothing else can do. If bad actors got in and tried to execute something, I take comfort in knowing Threat Locker will stop that. It works.
[02:57:07] Stop worrying about cyber threats. Get unprecedented protection quickly, easily, and cost-effectively with Threat Locker. Visit threatlocker.com slash twit for a free 30-day trial and learn more about how Threat Locker can help mitigate unknown threats and ensure compliance. That's threatlocker.com slash twit. We are big fans. threatlocker.com slash twit. Thank you for the support. Threat Locker and Steve, on with the show. I asked ChatBT, I said,
[02:57:37] based upon everything you know of me, please draw a picture of me. Yes. And it replied, I can't accurately or ethically create an image of you without a visual reference. If you'd like me to make one, please upload a photo of yourself. I could then create a respectful, artistic, or illustrative version and it said parents, portrait, sketch, avatar, et cetera. So, you should have asked Rock. I'm sure Rock wouldn't have any compunction.
[02:58:06] Any compunction. We just jump right in. Sure. Oh, there's Micah Sargent's picture of himself. Let me show you. That's pretty good. It looks just like you, Micah. Let me pull that up in the discord. That's cute. I don't know how I knew how good looking he is. I think that's actually exactly what Micah looks like. What? Yes. Oh, he uploaded a headshot. That's why. Ah, okay. Okay. So, he made him a podcaster with chihuahuas. See that? That's good.
[02:58:37] That's good. Yeah, but yeah, yeah. That wonder it looks like you, Micah. Yeah. All right. Okay. so, for all of those reasons that we've been. Oh, he says he didn't upload a headshot. Wow. That's amazing. Scary. It's scary. But, but there's a lot, he has a lot of presence on the internet, right? So, if whatever he asked went out, I mean, for whatever reason, chat GPT didn't think to look, I mean, it probably knows my name, but, you know,
[02:59:06] any of us who have a large internet presence. Oh, yeah, if you Google me, I'm, you know, I'm, oh, yeah, he says, I don't want you to have a headshot and it still did it, but it looks just like him. Okay, well, let's see if I can trick it. He tricked it. So, all of the above that we've been talking about is the reason I titled here come the AI browsers, because I think it is so obviously inevitable with everybody
[02:59:36] getting into the game. I mean, we've already got Copilot, you know, enhanced edge. Google is integrating Gemini. I mean, we're going to have AI browsers and, and they're going to, I mean, the hook is, oh, look, it's, you know, it's like, it's much better. I was like, okay, uh-huh. So, without succumbing to catastrophizing hyperbole, there's no sane way to conclude that we're
[03:00:05] not about to pass through an extremely rough patch. I think it's going to happen. It seems obvious to me that every incentive is aligned to encourage bad outcomes here. Just the idea, as I've said, of an AI enhanced web browser is such a hook that those who are in the position to create them are not going to be able to hold back. You know, they're not going to wait for the technology to be tamed.
[03:00:35] They're going to integrate what they've got now and we'll work it out later. And anybody who thinks that it might be a good idea, they're just going to use it without a second thought. So, not surprisingly, all of the new AI browsers are based upon the Chromium engine. That's the best news we could have. At least the underlying web browsing engine itself will not be starting over from scratch, thus needing to root out the
[03:01:05] endless coding errors that have historically plagued Safari, Firefox, and Chrome. This is not to say that nothing bad is going to happen. We've observed many times that today's web standards have become so complex that you really do need to be starting from a solid code base. There's just no way to start from scratch. I'm glad that
[03:01:35] Chromium is the platform. I'm aware there's a project called Ladybird which is working to create a brand new from scratch browser and creating all of its engine components from scratch without using a single line of code from Chromium WebKit Gecko Blink or anything else. I love the idea theoretically of starting over from scratch with a clean design that
[03:02:04] never drags legacy and legacy code forward but today that's beyond a heavy lift. It's by the way taking them forever it shows you how hard it is to do and it ought to ever literally forever it should never happen because I just don't know we got Chromium and we got the Firefox Blink I think that's enough next year we may see how Lady Bird turns out they're
[03:02:34] saying they may be ready to get into beta in 26 so thanks to the solid and all all all all
[03:03:23] of the new AI web browsers and I know just who to do who to go to for that the guy who a little over three years ago in September of 2022 first coined and used the term prompt injection his name is Simon Willison Simon is the co-creator of the Django web framework who became an engineering director at Eventbrite after they
[03:03:53] purchased Lanyard which was a Y Combinator startup he co-founded back in 2010 after that Simon created Dataset an open source tool for exploring and publishing data and he now works full time building open source tools for what he calls data journalism which are built around Dataset and SQLite Simon is an extremely prolific blogger in fact he blogs so much that he offers an optional paid
[03:04:22] subscription to his followers who would prefer to receive less from him if I were Simon I would have been unable to resist naming my blog you know Simon says apparently he has more self control than I to the title of that blog post was the lethal
[03:04:53] trifecta for AI agents private data untrusted content and external communication okay so private data like any of the many things our web browser knows about us our usernames our passwords our credit cards our bank accounts and so on untrusted content G like pretty much anywhere we go on the internet
[03:05:23] and external communication which is the entire point of any web browser put those three characteristics together and you have one big pile of what could possibly go wrong when Simon described the threat posed by this trifecta he wasn't specifically talking about AI web browsers he was talking about AI agents generally but it
[03:05:52] AI are going to arrive for most people will be wrapped up in a web browser so here's what Simon wrote about this trifecta back in June he said if you are a user of LLM systems that use tools you can call them AI agents if you like it is critically important that you understand the risk of combining
[03:06:21] tools with the following three characteristics failing to understand this can let an attacker steal your data the lethal trifecta of capabilities is first access to your private data one of the most common purposes of tools in the first place second exposure to untrusted content any mechanism by which text or images controlled by a malicious
[03:06:51] attacker could become available to your LLM and third the ability to externally communicate in a way that could be used to steal your data he says I often call this exfiltration but I'm not confident that term is widely understood well we all know that the term exfiltration is one of my favorites so everyone here has definitely been exposed to it many times Simon Simon continues saying if
[03:07:21] your agent combines these three features an attacker can easily trick it into accessing your private data and sending it back to the attacker so what's the problem LLMs follow instructions in content this is what makes them so powerful we can feed them instructions written in human language and they will follow those instructions and do our bidding the
[03:07:51] problem is that they don't just follow our instructions they will happily follow any instructions that make it to the model whether or not they came from their operator or from some other source anytime you ask an LLM system to summarize a webpage read an email process a document or even look at an image there's a chance that the content
[03:08:20] you're exposing it to might contain additional instructions which cause it
[03:09:12] means means they don't do exactly the same thing every time there are ways to reduce the likelihood that the LLM will obey these instructions you can try telling it not to obey them in your own prompt but how confident can you be that your protection will work every time especially given the infinite number of different ways that malicious instructions could be phrased this is
[03:09:42] a very common problem researchers report this exploit against production systems all the time production systems in just the past few weeks we've seen it against Microsoft 365 co-pilot GitHub's official MCP server and GitLab's Duo chatbot I've also seen it in effect chat GPT itself April 23 chat GPT plugins May 23 Google
[03:10:12] Bard November 23 writer dot com December 23 Amazon Q January 24 Google notebook LM April 24 GitHub co-pilot chat chat chat chat 24. Google AI Studio, August 24. Microsoft Copilot, August 24. Slack, August 24. Mistral Le Chat, October 24. XAI's Grok, December 24. Anthropics, Cloud iOS app, December 24. Chat
[03:10:41] GPT Operator, February 25. He says, I've collected dozens of examples of this under the exfiltration attacks tag on my blog. And guardrails won't protect you. The really bad news is that we still don't know how to 100% reliably prevent this from happening. Plenty of vendors will sell you
[03:11:06] guardrail products that claim to be able to detect and prevent these attacks. I'm deeply suspicious of these. If you look closely, they'll almost always carry confident claims that they capture 95% of attacks or similar. But in web application security, 95% is very
[03:11:29] much a failing grade. I coined the term prompt injection a few years ago to describe this key issue of mixing together trusted and untrusted content in the same context. I named it after SQL injection, which has the same underlying problem, right? You know, your son is named Bobby Drop
[03:11:58] Tables. That's not good. Unfortunately, that term has become detached from its original meaning over time. A lot of people assume it refers to injecting prompts into LLMs, with attackers directly tricking an LLM into doing something embarrassing. I call those jailbreaking attacks
[03:12:22] and consider them to be a different issue than prompt injection. Developers who misunderstand these terms and assume prompt injection is the same as jailbreaking will frequently ignore this issue as irrelevant to them because they don't see it as their problem if an LLM embarrasses its vendor
[03:12:46] by spitting out a recipe for napalm. The issue really is relevant, both to developers building applications on top of LLMs and to the end users who are taking advantage of these systems by combining tools to match their own needs. As a user of these systems, you need to understand this issue.
[03:13:09] The LLM vendors are not going to save us. We need to avoid the lethal trifecta combination of tools and tools to help ourselves to stay safe. So the key point Simon makes is that in asking an AI web browser to summarize a web page, the content of that page is dumped into the model. And if that page contains
[03:13:37] content of any kind that the model might perceive as instructions it should follow, it might very well believe that its job is to follow those instructions. Given their promise, I'm sure it's unstoppable that consumer web browsers are going to be enhanced with AI. Those pushing this technology out the door
[03:14:01] can't do so fast enough. It's a race. And we know that races tend to forego security for reduced time to market. It also appears that the bad guys are going to be piling on to the emergence of this new and unproven technology with great alacrity. It's a darn good thing that we didn't stop this podcast at 999. Yeah, it's only getting more interesting. Wow.
[03:14:30] The attack surface is just exploding. Exploding. Yeah. It's great. I mean, it's great. It's fun. And just, you know, don't give a million dollars to the bad guys. That's all. That's all. Even though the bad guys are getting smarter and smarter. They are. Yeah. That's what bad guys do, unfortunately. Yeah. That's what makes them bad.
[03:14:56] Steve's one of the good guys in Aren't You Glad? He's on our side. He's at grc.com. That's his website. Easiest place to get some very interesting things. Of course, Spinrite, which is his, I want to say day job. I think this is your day job. Spinrite's your night job. It's his, of course, world-famous tool for mass storage, both recovery, performance enhancement,
[03:15:23] and, you know, just general refreshing and buffing up. It's a really useful tool. If you have mass storage of any kind, even like a Kindle, you need Spinrite. Go to grc.com and look for that. There's lots of free tools, too, like Shields Up and InControl, and, oh, I can go on and on. You just browse around. It's a fun site to check out. You can also get the podcast there. He has several unique formats for it.
[03:15:53] A 16-kilobit audio version, which is small, so it's easy to download. And he did that for Elaine Ferris, who is a very talented transcriptionist who takes the audio, the 16-kilobit audio, and makes a fantastic transcript. That's available at Steve's website. He also has full-quality 64-kilobit audio. If you're not bothering, if you don't have a metered connection, you might want to get the 64-kilobit audio.
[03:16:19] And he has the show notes, which are very complete, really nicely done in a PDF. It includes the picture of the week and all of that stuff. All of those at grc.com. You can actually get the show notes mailed to you ahead of time, usually the day before. If you go to grc.com slash email and provide your email address. Now, the reason for that is to whitelist your address so you can send Steve one of those hundreds of emails he gets every week,
[03:16:45] including the suggestions for picture of the week or questions or comments or suggestions. So grc.com slash email. Give him your email address. You have some way of vetting that, right, to make sure it's not a spammer? What is the – you must have told us, but I just don't remember your magic system for getting email addresses. Or you just assume if somebody does that, that they're good. Well, they give me their address. I then send a confirming email to them. Ah, and a spammer is not going to respond to that. Yeah, yeah, yeah. Right.
[03:17:14] Okay, that makes sense. So that's simple. There are two unchecked checkboxes below that. So you'll get that email. But if you check those boxes, you'll also get a weekly email, the show notes, and a very infrequent – it's only been used once – email for new products from Steve. And the next one, of course, we're waiting for is the DNS Benchmark Pro any day now. So you might want to get on that list too. grc.com slash email. Give him your email address and check or don't check those two boxes below.
[03:17:44] We do the show, of course, every Tuesday right after Mac Break Weekly. That is supposed to be 1.30 Pacific, 4.30 Eastern, 21.30 UTC. Sometimes we're late. But, you know, you catch the tail in the Mac Break Weekly and then you'll hear security now. You can listen or watch on six different platforms. Actually, if you're in the club, you can join us in the Discord, chat along with us. Club members, we really appreciate you.
[03:18:10] They spend $10 a month to support the show, to support all of the shows, to be in the Discord, to get ad-free versions of all the shows if they choose. The club is also a great way of, you know, participating in a lot of special programming we do. We have the Stacey's Book Club and the photography, monthly photography segment with Chris Marquardt and Home Theater Geeks. A lot of great stuff. But the most important reason you might want to join the club is just to support the network, to keep the shows flowing.
[03:18:39] Because we can't do it without you, to be perfectly frank. Twit.tv slash Club Twit. Some good reasons to go there right now. We've got a great coupon, 10% off the annual plan. Great for yourself or for a gift or both, right? There's a two-week free trial. There's family plans. There's corporate plans. All the information, all the benefits and everything spelled out at twit.tv slash Club Twit. If you want to watch us do the show live, everybody is welcome to do that.
[03:19:05] The club members can watch in the Discord, but everybody can watch on YouTube, Twitch, X.com, Facebook, LinkedIn, and Kik. Six different other streams. So this is seven in total as we do the show live every Tuesday afternoon. Of course, you can always download copies, as I said, from Steve's site. Our site, twit.tv slash SN. Audio and video there. We have 128 kilobit audio, and we have video of the show.
[03:19:31] We also put it up on YouTube, so there's a YouTube channel in the video. Great way for sharing clips. Probably the best thing to do if you're new to the show or not is subscribe in your favorite podcast player. That way you'll get it automatically as soon as we're done on a Tuesday evening. Thanks to Benito Gonzalez, who does put the show together and produces it for us. We really appreciate Benito's help. As you mentioned, he's in the Philippines where his day is just beginning as ours winds down. Steve, have a wonderful...
[03:20:01] What a way to start. Yeah, no kidding. Have a wonderful week, Steve. We will see you next Tuesday right here on Security Now. Till then, bye. Security Now.
