SN 1043: Memory Integrity Enforcement - Crypto ATM Scam Epidemic
Security Now (Audio) | September 17, 2025
1043
2:51:21 | 157.07 MB


Apple just rewrote the rules of device security with a chip-level upgrade that could wipe out most iPhone vulnerabilities overnight. Find out how "memory integrity enforcement" aims to make exploits a thing of the past—and why it took half a decade to pull off.

  • Are Bitcoin ATMs anything more than scamming terminals?
  • Ransomware hits the Uvalde school district and Jaguar.
  • Did "Scattered Lapsus Hunters" just throw in the towel?
  • Germany, for one, to vote "no" on Chat Control.
  • Russia's new MAX messenger has startup troubles.
  • Samsung follows Apple's WhatsApp patch chain.
  • Shocker: UK school hacks are mostly by students.
  • HackerOne was hacked.
  • Connected washing machines in Amsterdam hacked.
  • DDoS breaks another record.
  • Bluesky to implement conditional age verification.
  • Enforcement actions for Global Privacy Control.
  • Might Apple have finally beaten vulnerabilities?

Show Notes - https://www.grc.com/sn/SN-1043-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:


[00:00:00] It's time for Security Now. Steve Gibson is here. Who would have thought it? Russia's new enforced messenger has startup problems. What a shock. Steve's going to tell the story of how he hacked the dorm washing machines. And then we're going to talk about an amazing improvement Apple has made to its own chips that may eliminate 90% of security problems. Wow. All that coming up next on Security Now.

[00:00:30] Podcasts you love. From people you trust. This is TWiT. This is Security Now with Steve Gibson. Episode 1043 recorded Tuesday, September 16th, 2025. Memory Integrity Enforcement. It's time for Security Now. Yay! It's Tuesday. That's the show where we explain and help you understand everything that's going on.

[00:01:02] Oh, do we have one today? Oh, this is Steve Gibson. Ladies and gentlemen, get your, is this a propeller hat episode? Well, it's titled Memory Integrity Enforcement. Okay. Which is the technology in the A19 chips that Apple announced a week ago. Is this like ASLR? Is this like...

[00:01:23] Is it like... This is, no, this is... If the only problems that security faced were use-after-free and buffer overruns, or use of memory you don't own... Yeah. ...they would all be gone. Well, that's good. It's huge. It is... Because that's where most of the security exploits start, right?

[00:01:49] Way most. And in fact, I was... Before I remembered that it was possible to have other types of bugs, I was dancing around thinking, well, it's over. They've won. But, oh, there are... It is possible to have a different kind of problem. But, oh, far and away, mostly. Like that dumb Adobe DNG image problem that Apple, you know, that embarrassed Apple a couple weeks ago and was...

[00:02:17] That coupled with the WhatsApp exploit created... It allowed targeted attacks on WhatsApp users. That would have never happened if MIE was in place. I mean, this... They spent five years, although, of course, they had to blow it up. It's half a decade. It's like, okay, yes. Also known as five years to get this done. But anyway, our listeners always say they like our deep propeller head episodes.

[00:02:48] Well, get out your galoshes because it's... This one's going to be deep. But, Leo, I interrupted your... Well, I was just going to say, here's Steve Gibson, so that's good enough. So we've got Security Now, episode 1043 for the 16th. This time, the show notes are properly dated at the top. We're going to look at whether Bitcoin ATMs are ever anything more than just scamming terminals.

[00:03:18] The two instances of ransomware I wanted to talk about. One that hit the, unfortunately, well-known Uvalde School District. And also Jaguar, which had some surprising downstream consequences. We're going to ask the question, did the self-named scattered lapsus hunters hybrid group just throw in the towel?

[00:03:44] Germany has said they're going to vote no on Chat Control. Russia's newly released MAX Messenger is having some startup troubles. I know, who would be surprised? Samsung is following Apple's change in the WhatsApp patch chain. And shocker, Leo, UK school hacks turn out to mostly be carried out by students. They thought they could rein them in, but no.

[00:04:13] We have some numbers. Also, unfortunately, HackerOne was hacked, which is not good. But, again, it's that centralized hack that just keeps on giving. We've also got connected washing machines in Amsterdam having been hacked. The university is going to take measures. DDoS has broken another record. Bluesky has announced they're going to be implementing some conditional age verification in other states.

[00:04:43] We're going to look at enforcement actions coming for global privacy control. That's that GPC notice that sort of replaced DNT, the Do Not Track, which never got off the ground. And we're going to ask the question, might Apple have finally beaten vulnerabilities? Actually, most vulnerabilities, but it's a huge win.

[00:05:06] So we're going to do a deep dive into what it is that they did, the history of this campaign and what this is. This is new hardware introduced last week in the A19 chips. And as Apple put it, they – I don't remember now the exact word. It was – it wasn't astonishing.

[00:05:34] It wasn't – there was some – they said they dedicated a huge – they had a different word. I'll end up sharing it when we get there. A huge percentage of silicon to this. I mean, they are serious about keeping those targeted attacks from happening. Remember, none of – no normal users are ever hit by this anymore.

[00:05:56] You know, that – we covered the news of that hobbyist who'd given up hacking Apple a long time ago, you know, because it was no fun anymore. It got to the point where all the low-hanging fruit was, like, up so high that he just couldn't reach. There is no low-hanging fruit. Yeah. No. No. Well, it's – Anyway. The fruit is now way up there. Yeah, that's right. And costs millions of dollars to pluck.

[00:06:21] So – and, of course, we've got a great picture of the week, which we'll get to after our first announcement. We will show you in moments. But first, my friends, a word from our sponsor. This show brought to you – and it's an appropriate sponsor if you care about privacy – brought to you by DeleteMe. If you've ever wondered how much of your personal data is on the Internet for anyone to see, you could do a search. I don't recommend it. It is not good. It's a lot more than you might imagine.

[00:06:49] Your name, your contact info, your social security number. Yes, Steve and I both found our social security numbers in that big breach some time ago. Your home address. Even information about your family members. And here's the thing. It's not just on the Internet. It's being compiled by data brokers. They're building a dossier on you right now and then selling it online to the highest bidder.

[00:07:13] Not just marketing, but law enforcement, foreign governments, anyone on the web can buy your private details. And you can imagine the consequences. It's not just identity theft and phishing attempts, but doxing, harassment. Well, now you can protect your privacy with DeleteMe. Look, I live in public and I know what that means.

[00:07:36] Anybody who talks about what they think online means you've got to think about your safety and security. But I would say I would go even farther. A lot of people think, well, I'm not that, so I'm okay. But if you, for instance, have a company and you've got managers who have information online, as just about everybody does, that can be used against your company. It happened to us.

[00:08:04] Our CEO was spoofed. They sent out a fake text message to her direct report saying, I'm stuck in a meeting, but I need some Amazon gift cards right now. Please buy $500 worth. Use your Twit credit card. We'll reimburse you and send it to this address. Now, fortunately, her direct reports are our employees. They're smart. They've heard these ads. They know what's going on. But what it did is it opened my eyes to the information that's online.

[00:08:32] Not just who our CEO is and what her cell phone number is; the text came from her number. Who her direct reports are, what their cell phones are. And now in this age of AI, how to craft a message specifically targeted at them. And all of that because it's so easy to find personal information about people online. If you're a company, not just an individual. Obviously, we individuals, we want our privacy too. But if you're a company, you should really think about this as part of your company's security.

[00:09:02] That's why, at Twit, we use and recommend DeleteMe. It's a subscription service. It removes your personal info from hundreds of data brokers. So here's how it works. You sign up. You give DeleteMe the information you want deleted. As much or as little as you want. Now, of course, they treat that information securely. But they need to know what it is that you don't... your Social, say. You don't want to see that online. And then their experts take it from there.

[00:09:30] They will remove it from data brokers. But more than that, they'll continue to monitor. They send you regular personalized privacy reports showing what info they found, where they found it, what they removed. It's not a one-time service. DeleteMe is always working for you. We just got another email from DeleteMe a couple of weeks ago saying, here's what we found and removed. DeleteMe is always working for you, constantly monitoring and removing the personal information you don't want on the internet. To put it simply, DeleteMe does all the hard work of wiping you,

[00:09:59] your family, your company's personal information from data broker websites. Take control of your data. Keep your private life private by signing up for DeleteMe at a special discount for our listeners. Right now, get 20% off your DeleteMe plan when you go to joindeleteme.com slash twit and use the promo code TWIT at checkout. The only way to get 20% off is to go to joindeleteme.com slash twit. Enter the code TWIT at checkout. That's joindeleteme.com slash TWIT. Offer code TWIT.

[00:10:29] There is another company at deleteme.com. That's not the one. You got to go to joindeleteme.com slash TWIT. Please don't get that wrong. And of course, the offer code TWIT helps us. It lets us know you saw it here, but it helps you by getting 20% off. Joindeleteme.com slash TWIT. We thank them so much for their support of Security Now. I think most Security Now listeners know how important privacy is. Now, I guarantee you, Leo. Yeah. Yeah.

[00:10:59] I think we have a pretty good idea. Okay. So this picture raised some questions. I gave it the headline, what exactly is the plan here? All right. I'm going to scroll up. I have not seen it. Okay. There's a tree and a fire hydrant. There's a very important gate around three quarters of the fire hydrant.

[00:11:25] And the business end of the fire hydrant where the hose connects is blocked by the gate. Okay. So, okay. Now, because this email went out yesterday afternoon, I've had some feedback from our listeners with their conjectures answering my question implicit. What exactly is the plan here?

[00:11:49] So, first of all, for those who aren't seeing the video, there is a beautifully painted fire hydrant. The fire hydrant is red. It's wearing a yellow painted cap on top. I mean, it's lovely. And so, okay. The problem is that, you know, a fire hydrant is all about access. You need the fire department needs to hook their hose up if they need water badly.

[00:12:18] And this fire hydrant, beautifully painted, though it is, has been surrounded, as you said, Leo, on three sides. The front business side included by this weird sort of – I mean, it's got to be a custom gate fence thing. It's a beautiful gate. It's got a little star in it. Yeah. Yeah. But, I mean, you just can't go – what, what, do you put in Amazon? I would like a fire hydrant fence?

[00:12:42] I mean, it looks like it was made to order for this fire hydrant. So, you know what? Anyway, the best feedback that I've seen from one of our listeners was, you know, after the guy painted his fire hydrant, he probably was upset at the idea of dogs peeing on it. Yeah. Yeah.

[00:13:06] So, and given the fact that the ground is – the grass below it and in front of it is brown, there may have been some urination occurring in the past. Yeah. So, yeah, that was – I think that's the best idea. I mean, obviously, if the fire department actually needed it, they'd – one of the burly firemen would just grab it and toss it up in the air and get it out of the way. Presumably, it's not cemented into the grass.

[00:13:32] And, I mean, I zoomed in and looked to see whether it could open. Does the gate hinge? Doesn't look to me like it does. It's just some weird, like, okay. It's sort of like in case of fire, break glass, right? So, presumably, in case of fire, throw fence and then get access to the hydrant. It's quite cute, though. It's quite – it's a nice little looking – It's a statement. Setup. Yeah. It's a statement. Yes. Yeah.

[00:14:01] The only question is, what does it say? What is it saying is the question. Exactly. Just keep your dog walking. Don't stop here. So, okay. So, the District of Columbia's Office of the Attorney General has filed – when you hear the facts, this is a well-deserved lawsuit – against the largest crypto ATM operator. It's a company known as Athena Bitcoin.

[00:14:32] And we've talked about the problems, sort of endemic problems with crypto ATMs. Excuse me. I have hiccups. This lawsuit alleges, with, again, ample evidence, as we'll see, that the company knew, Athena Bitcoin knew, its Bitcoin ATMs were being used to collect funds from victims of illegal scam operations.

[00:14:59] But rather than stopping the transfers, it instead charged large hidden fees, then refused to provide victims with refunds when they were due. So, overall, the concept, you know, theoretical, the idea of a Bitcoin ATM, of having one, I think is cool, right? It serves as a real-world interface to a purely ephemeral digital currency.

[00:15:30] But we've learned that the number one enabling factor for ransomware was the emergence of cryptocurrency. One of the principal lessons to be learned broadly from the Internet is that, sadly, any time there's the freedom of anonymity, there will be abuse.

[00:15:50] So, it should come as no surprise that scammers were quick to jump onto Bitcoin ATMs as the means for suckering the uninformed into all manner of online scams. We've previously touched on the problem of Bitcoin, I'm sorry, of ATM abuse, as I said. And now this lawsuit gives us a window into how bad exactly it is.

[00:16:14] What's somewhat surprising is that these Bitcoin ATMs see such low levels of non-fraudulent, which is to say, you know, legitimate use. Believe it or not, only 7% of Athena's Bitcoin ATM transactions were legitimate.

[00:16:36] Officials say that 93% of all deposits made across the seven Bitcoin ATMs, which Athena operates in Washington, D.C., were the result of scams.

[00:16:52] 93% is crap, like, you know, someone sending you email saying that your webcam was on and they saw you doing something that you don't want the world to know about. And unless you pay them, you know, go to your local Bitcoin ATM and send some money, they're going to release this to the world. That kind of nonsense.

[00:17:17] So scammers would trick victims into going to an ATM to transfer funds into the scammer's Bitcoin account. OK, that's bad enough. But the D.C. Attorney General alleges first that Athena knew that allowing users to deposit funds into accounts they don't own would be abused for scams.

[00:17:39] They did nothing to stop the scams beyond displaying what was obviously an ineffective warning on the ATM screens because, you know, nobody took the warning to heart. The attorney general's name is Brian L. Schwalb.

[00:17:58] He claims that Athena instead applied large fees instead of like adequately warning people and making it clear that there was a high percentage, a high likelihood of them being scammed. They charged horrendous fees.

[00:18:13] The fees, which were not visible to the customers, thus hidden, reached up to 26 percent of the transaction amount, which is almost 100 times the fees practiced by Athena's competitors, which go from around 0.24 percent to as high as 3 percent, but not even approaching 26.

[00:18:35] As a consequence, scammed individuals were victimized essentially twice, first by the scammers themselves and then by Athena that was riding along a 26 percent surcharge for the privilege of being scammed in the first place.

[00:18:53] So the median loss per victim, meaning the value where as many victims paid more than it as paid less, was $8,000. So half of the people scammed paid more than $8,000 and the other half paid less. I don't know what the average amount was. The victims' median age was 71.

[00:19:23] So half of the people who were being scammed were older than 71. And the scammers were deliberately, specifically targeting the less technical elderly population in Washington, D.C.

[00:19:41] The attorney general brought the lawsuit as a means of forcing Athena into compliance with anti-fraud measures and to secure financial restitution for its victims, as well as to pay financial penalties to the District of Columbia. He said, Athena knows that its machines are being used primarily by scammers, yet chooses to look the other way so it can continue to pocket sizable hidden transaction fees.

[00:20:12] Today, we're suing to get district residents their hard-earned money back and put a stop to this illegal predatory conduct before it harms anyone else. What do they? So do you think these elderly, by the way, we're both under 71, so we're OK. But do you think that these elderly people, what did they think? They were going to put cash in this machine and get a solid gold Bitcoin? What did they?

[00:20:37] I think they believed that they were going to get something, obviously, in return for giving more than $8,000. Like, we know email comes in and you read it and it motivates you to take some action. So maybe they were thinking they were going to maybe pay a ransomware or something like that, right? Could have been paying a ransom.

[00:21:03] Maybe they believed that their bank was actually going to foreclose on their home and just anything. Yeah. And it wasn't Athena doing this, but they knew there was a reason why people were spending all this money on Bitcoin.

[00:21:20] Well, and when the AG is able to look at the transaction history and follow the money trail, which Athena could just as easily do since they're the people running the ATM, and conclude that only 7 percent was legitimate. Like, you know, for example, what would that 7 percent be? It was still a bad deal because of the fees.

[00:21:51] Right. On top of exactly. Exactly. So the person gives them their bank transaction data and these people take an additional 26 percent just for what is essentially a zero-cost-to-them transaction. Their competitors are charging a quarter of a percent up to 3 percent.

[00:22:13] These guys are charging 26 percent and they're the leading ATM operator in the country, which makes you wonder what they're doing everywhere else. What? Because this is just the Washington, D.C. AG that is going after them. Yeah. So, again, we have great technology and it's good. The bad guys, the scammers are going to find a way to abuse it.

[00:22:41] And in this case, half of the people that were victimized were older, were 71 years or older. So, Leo, not long from now, get ready. Next year, Steve. That's right for me. Yeah, that's right. OK. And speaking of ransomware, the Uvalde School District is shut down all this week following a ransomware attack.

[00:23:11] If that name sounds familiar to our listeners, that's because three years ago in 2022, an 18-year-old former student fatally shot 19 students and two teachers, injuring 17 others. But I doubt that this ransomware attack on the district had anything to do with that. As we know, such attacks are almost always the result of just targets of opportunity.

[00:23:38] Uvalde's cybersecurity was likely wanting and was not adequately protected from, you know, someone clicking on a link that they shouldn't have. The incident impacted the district's phone system, their security cameras, their visitor management, and the thermostatic controls for the schools in the district.

[00:24:05] Consequently, classes will be closed all this week while the district gets back on its feet. And I deliberately wrote in the notes, Uvalde's cybersecurity was likely wanting and was not adequately protected from someone clicking on a link they shouldn't have. I don't know that's the case, but that's almost always now the way we're seeing these things happen.

[00:24:30] And I mentioned this thought before, and it's going to be something people are going to be hearing from me going forward. The evidence clearly shows, and I firmly believe, that the new goal for any enterprise's internal security must be to harden itself against random people inside the organization clicking on links they should not.

[00:24:58] Yeah, the threats coming from inside the organization. Yes, that is exactly right. You know, today's podcast topic is about the tremendous lengths Apple has been forced to go to to harden their system against the inevitability of bugs in software. For a long time, the focus was on eliminating those bugs. But we've learned that's apparently never going to happen.

[00:25:23] So Apple has committed massive resources to being able to immediately terminate any process where misbehavior is detected to protect the phone's owner. Similarly, we've talked many times about the need to train employees not to click on that link in the email that appears to be from their mom.

[00:25:47] Or on that link that says they only have two days remaining before their bank account will be closed unless they respond. Go down to the convenience store and find that Bitcoin machine because that's the solution. Exactly right. Exactly right.

[00:26:05] So telling people, employees not to click on the link is analogous to telling every coder of every piece of software on an iPhone that they may never make another mistake. In other words, you could ask, but you're not going to get it.

[00:26:25] My point is that regardless of how much training employees receive, you know, you're going to have a new hire, somebody on the loading dock who missed the last training because, you know, they couldn't make it. They are. Somebody is going to click on a malicious link. It's inevitable.

[00:26:45] So similar to what Apple has finally been forced to do, the only sane recourse is for enterprises to get very, very serious about hardening their internal security against anyone who might click on anything that they receive over the Internet. Whatever it takes. I'm not suggesting it's easy, but that's the bar. That's where it is now.

[00:27:12] If that means implementing new VLAN network segmentation to give up the massive convenience of having everyone being able to participate as equal peers on the same network, then so be it. That's what's going to be necessary, given all the evidence that we've been seeing for the last year here.

[00:27:35] All of these recent massive ShinyHunters and Salesforce compromises are showing us, as you said, Leo, that the calls are now coming from inside the house. The bad guys have clearly located our Achilles heel and it is us.

[00:27:53] So my message to our listeners who are in charge of such things is that if results are what matter, rather than feel good, but ultimately failure prone measures, it's no longer sufficient to rely upon, quote, adequate training, unquote, of every single last employee. There is no such thing as adequate training, you know. And, of course, you have to include the bosses, too, because they're just probably more prone. And they're arrogant. They don't need it.

[00:28:23] I'm the boss. I don't need that. Exactly. I can click any link I want. That's right. Anyway, we've tried that, right? We've tried the training. It didn't work. So the only thing that will work is seriously thinking about arranging to make clicking on malicious links safe. That is the next frontier for internal enterprise security. We need to figure out how to do that. Do you think that's doable?

[00:28:55] Again, it's yes, I would say it is. But I'm not a person, you know, a CISO inside of an enterprise who needs to figure out how Marge can print. You know, Marge needs a way to print.

[00:29:13] But Marge also needs her computer to, if the computer is malicious, that through no fault of hers, it can't hurt the enterprise, even though it has some privileges on the network, which Marge needs in order to do her job. So I know it's not easy. And it probably requires rethinking the boundaries of trust that exist.

[00:29:42] The easy way to establish an enterprise is just to hook everybody up. That's what Microsoft did when the Internet happened. They put all Windows 95 machines on the Internet. How'd that work? Yikes. There was no firewall. And I created ShieldsUp. It greeted people by name when they came to my website because I was able to get their name and their computer's name. And it was a wake-up call. So we know that change is hard.

[00:30:12] But I think if CISOs continue to imagine that training is the solution, enterprises will continue to fall to ransomware and to data exfiltration and all the embarrassment that follows from that.

[00:30:32] The solution is recognize that internal networks now need to be hardened against its own employees, not because they're malicious, but because the links they may click on could be. Wow. Yeah. I mean, it is a different scale. But that's where we are today. And so I just wanted to clearly throw the gauntlet down.

[00:30:59] I think any rational examination of the types of exploits and problems we've seen for the last year would cause anyone to reach that conclusion. It's, you know, sorry, but training isn't going to cut it.

[00:31:18] People are, I mean... again, the challenge is so difficult because it's the weakest-link process in security. Security has to be perfect. So every single person in an organization has to never even once click a bad link. One mistake is all it takes.

[00:31:46] And so the only way to protect against one mistake is to figure out how to create an internal organization of privilege such that a computer, an employee's computer that falls to malware, the damage it can do is minimal. If it allows a bad guy to get into it, they're frustrated.

[00:32:15] They can't do anything. And that is just not the case in today's enterprise. Houston, we have a problem. And speaking of clicking on a bad link, I wanted to touch on just one more recent ransomware attack because of its consequences, which were somewhat unique and interesting. More than two weeks ago, Jaguar Land Rover's automotive production lines ground to a halt due to a ransomware attack.

[00:32:44] And today, all production remains halted. What? The company has said, yeah, the company has said that it expects that at least three of its production lines may be able to resume operation later this week. But here's the interesting. Yeah. Yeah. Here's the interesting bit.

[00:33:06] According to the BBC, several of Jaguar's smaller suppliers are now facing bankruptcy due to the prolonged production shortage by Jaguar. So talk about a supply chain attack. The loss to Jaguar themselves is estimated to end up being between 50 and 100 million pounds since the attack.

[00:33:32] But the ripple effects of the incident are revealing it to be perhaps one of the most significant, as in the worst, cyber attacks in Britain's history. It's expected to affect Britain's national economic growth stats. It's so bad. So, wow. Wow.

[00:33:55] I don't know what the deal is with Jaguar and their cybersecurity, why all of their production lines are down. Obviously, they weren't set up to be resilient from an attack. And an attack has, you know, hit them hard.

[00:34:12] But interestingly enough, it's also hit their suppliers, who apparently didn't have any margin, any operating margin, to fall back on when Jaguar stopped ordering things from them and stopped paying their bills. I'm sure that what's happened is that Jaguar's accounting systems were taken out, too. So they don't have any payables operation in place. They can't pay their suppliers because they don't know who they owe what.

[00:34:42] I mean, it's a mess. That's yeah. Why would it take three weeks to fix? Oh, my God. Again, I have no visibility into their operations, but it doesn't look good. Okay. So it's impossible for us to know what's actually going on here.

[00:35:05] But that hybrid group that was calling itself, right, self-named, the Scattered Lapsus Hunters. Remember, that was composed of individuals from ShinyHunters, Scattered Spider, and Lapsus$. Remember that they were the ones who threatened Google, saying that they had to terminate two of their threat intelligence group employees or else.

[00:35:33] Well, they posted a rambling goodbye note referring to their attack on Jaguar, by the way, and to moderate intrusions into Google. Now, I would normally share with our listeners a rambling goodbye note. But this one was so rambling, it didn't even clear that bar.

[00:35:59] I'm not going to bother because this one was just all over the place, and as is so often the case with these sorts of things, we're almost certainly never going to know what really happened here. Why was it that after they threatened Google with dire consequences, they suddenly said, okay, goodbye? Maybe Google did not take that lying down. And remember, last week we were saying we hoped they would not.

[00:36:29] But we've been covering the consequences of this group's actions, which, you know, while not really qualifying as a reign of terror, Jaguar might disagree, did at least certainly put this group squarely on the map. It might just be that they ran dry of targets of opportunity, which they had previously acquired.

[00:36:51] Remember, they were the ones who were leveraging all of these attacks against Salesforce. Or perhaps some counter cyber intelligence managed to penetrate their ranks and convince them to stand down. Whatever the case is, I wanted to keep our listeners current with the news that they had formally said goodbye. So we'll see what happens next.

[00:37:16] I have no idea what's going to happen, except, Leo, I do know one thing we're going to take a pause for our next sponsor. Or as they say, station identification. Yes, indeed. And our sponsor this week. Oh, wait a minute. Let me turn on my camera so you can hear me and see me talk about Vanta. This is the show where we like to talk about security solutions.

[00:37:46] And this is a security solution you might be interested in. Compliance, regulations, third party risk and customer security demands all growing and changing fast. Is your manual GRC program actually slowing you down? If you're thinking there must be something more efficient than spreadsheets and screenshots and all manual processes, you're right.

[00:38:11] GRC can be so much easier, all while strengthening your security posture and actually driving revenue for your business. Vanta's trust management platform automates key areas of your GRC program, including compliance, internal and third party risk and customer trust and streamlines the way you gather and manage information. And the impact is real.

[00:38:35] A recent IDC analysis found that compliance teams using Vanta are 129% more productive. So you get more time and energy to focus on strengthening your security posture and scaling your business. Vanta, GRC, how much easier trust can be. Visit vanta.com slash security now to sign up today for a free demo.

[00:39:01] That's V-A-N-T-A dot com slash security now. Wow, we thank them so much for supporting Steve and the work he's doing here on Security Now. Back to you, sir.

[00:39:12] Okay, so many of the governments within the European Union have by no means given up on legislation to obtain some sort of access or control of privately encrypted interpersonal messaging among its member citizens.

[00:39:34] But there is some disunion evidenced in news from last Wednesday posted by the German government, which indicated that they, Germany, will have none of that. Period. Dated September 10th, 2025, Berlin, from the Digital Affairs and State Modernization Committee, they posted: The Digital Affairs Committee met Wednesday afternoon to discuss the status of the CSAM. Of course, we all know what that is.

[00:40:03] Child sexual abuse material regulation, publicly known under the term chat control. Its purpose is to combat sexual violence against children and adolescents online. For over three years, various proposals have been under discussion at the EU level to require providers of messaging and hosting services to detect material related to online sexual child abuse. An agreement has not yet been reached.

[00:40:30] As a representative of the Federal Interior Ministry reported to the members of Parliament, the Danish presidency of the council in office since early July is treating the matter as a high priority, meaning it hasn't been dropped by any means. They said a unified legal basis across the EU is urgently needed given that the current situation is worrying.

[00:40:57] It is clear that private confidential communication must remain private. At the same time, there is an obligation to take action against child abuse online. A representative from the Federal Ministry of Justice pointed out that the matter involves very severe intrusions into privacy, leaving open the question of how deep those intrusions are.

[00:41:24] He also pointed to the strict limits that have already been made clear in EU Court of Justice case law on data retention and emphasized that a regulation is needed which will stand legal scrutiny. Okay. Whoops.

[00:41:41] In other words, the EU already has strong existing law that would make what chat control wants to accomplish illegal under their own law. The article finished writing, In their questions, MPs asked about the joint position of the federal government, the criticism from civil society about the regulation, and the further process in the negotiations.

[00:42:11] The representative from the Interior Ministry explained that the Danish position could not be supported 100%. For example, Germany is opposed to breaking encryption. The goal is to produce a unified compromise proposal, also to prevent an interim regulation from lapsing. So Germany has just said no. They're opposed to breaking encryption. Sorry.

[00:42:39] So this has all the earmarks of being a very heavy lift. This chat control dream of theirs is still facing very stiff headwinds. I don't know what it means for Germany to declare that it's a firm no vote, but the EU's existing personal privacy laws would need to be changed for chat control to be legal, even in the EU that wants it.

[00:43:08] So lots has to happen first. It's a mess. And, you know, who knows what the answer is going to end up being. But maybe governments will go round and round, Leo, for a while and then just end up saying, well, we'll just have to, you know, make better use of the provisions that we have.

[00:43:27] Which is, you know, what the people who absolutely want no exceptions to privacy and encryption in messaging say is the right course of action. I think it's telling that even within the EU, countries can't agree. Some want it. Some don't want it. Some say you can't do this. Some say we have to do this. If they can't agree... Of course, we know that even inside the NSA there's no agreement. Right.

[00:43:54] This is one of those things where the people who say, look, there's no way you can break encryption for some people without breaking it for all people. A notion that other people don't understand. And maybe we need to work harder to get that through to them. Well, and then we also have the issue of communicating with anyone in the EU from outside the EU.

[00:44:21] That presumably means that your messaging will be decrypted, too. Oh, yeah. Good point. Yeah. Much like the UK saying, we want to be able to see everybody's. You know, one thing that often brings this home to them is pointing out that, yeah, okay, so we're going to break encryption for those people, but it will also break it for you. You know that you won't have private communications anymore either. And often that stops legislators cold. They go...

[00:44:50] Oh, right. You mean the government is not going to be an exception? We won't have privacy? They think they do. That's the problem. Oh, no, we've got ways. They want to be able to check everybody else's messages. Privacy for me, but not for thee. Yeah. Right. It turns out that even when there are many Western models to follow, launching a new secure messaging service from scratch is not a slam dunk.

[00:45:17] The news out of Russia is that hackers immediately began selling hacked accounts for Russia's Max Messenger for prices up to 250 U.S. dollars, or access to accounts can be rented by the hour.

[00:45:39] This is the encrypted chat app that the Russian government is forcing phone manufacturers to put on phones in lieu of everything else. Exactly. And blocking the alternatives in order to force their citizenry over. I mean, we've heard from some of our Russian listeners who are saying, yeah, this is so that we're forced to use Max. That's the reason Google's group messaging and Google's conferencing are being blocked now. So.

[00:46:10] Working to combat this abuse, of course, they're not taking it lying down either. Russian officials say they've already blocked more than 67,000 accounts for suspicious activity such as spam, sharing malicious files and, you know, the whole rigmarole. Looks like the Kremlin and our favorite agency, Roskomnadzor, are going to have their hands full. Yes.

[00:46:38] Are going to have their hands full dealing with the consequences of their own messaging service, which they said they wanted. So it couldn't happen to a nicer bunch. It's no surprise. Even though they've got Western models to follow, it's still not an easy thing to do. Yeah. Samsung recently patched a zero day, their own zero day, CVE-2025-21043, which they rated as critical.

[00:47:04] In the Android OS version that ships with Samsung devices, the vulnerability was discovered in Android's libimagecodec.quram.so file. Now, I didn't dig in to see whether it may have been similar to what Apple recently patched. That is, whether it also had to do with decoding the Adobe DNG file format.

[00:47:33] But like the recently patched Apple vulnerability, this one also formed part of an exploit chain that targeted WhatsApp users. So whether WhatsApp was on Apple, where it was, we know, using that Adobe DNG image decompression flaw, or whether it was on a Samsung phone running Android OS...

[00:47:59] There was some flaw in the image codec which was chained with the WhatsApp flaw, allowing spyware to be installed onto the devices of WhatsApp users on Samsung, and presumably more broadly on Android OS.

[00:48:20] So at least on the Apple side, we will see by the end of this podcast why that would not have worked if what Apple has now released with this new hardware had already been in place.

[00:48:36] While I was assembling today's show notes, I was reminded that there's all the difference in the world between a casual mistake made by an employee who clicks on a malicious link they receive and an employee on the inside who wishes to maliciously attack their own employer. You know, that's a higher bar than an oops, I clicked the wrong link.

[00:49:00] An article from the UK's privacy watchdog is what reminded me of this difference. They found and reported that UK students are increasingly behind the hacks of their own schools. OK, insider hacks, right? Because they're, you know, the student is on the school's network and is able to sneak around.

[00:49:26] The UK Information Commissioner's Office, the ICO, says it studied 215 insider-caused breaches within the UK educational sector between 2022 and the middle of last year, 2024, and found that students, to no one's surprise, were behind 57 percent.

[00:49:51] So not, you know, by no means all, it wasn't 97 percent, but more than half, 57 percent of all intrusions. So certainly there are still external actors trying to get in. And where a stolen password was used to breach a school system, students were involved in almost all cases, 97 percent.

[00:50:18] So virtually all stolen passwords were student-based. The underlying motives were cited as being dares, notoriety, a little bit of financial gain, revenge, and rivalries. In other words, basically, you know, because it's possible to do it, all sorts of hijinks. Breaches were blamed on staff leaving devices unattended, students being allowed to use staff devices. Incorrect permissions.

[00:50:48] Yeah, hijinks. Yes, there are some hijinks. Oh, you kids. You rascals. You little rascals. That's right. Right. Incorrect permissions on school resources and in some, though rare, 5 percent of the cases on students using sophisticated techniques to bypass security and network controls. So maybe we have some listeners among the students in the UK who are a little more sophisticated.

[00:51:15] After researching those 215 insider student-caused breaches, the Information Commissioner's Office reached two conclusions. The first one was that an early familiarization with hacking might lead kids down the wrong path and serve as a gateway to a life of cybercrime. Okay, hold on.

[00:51:40] I remember being that age, and I was notorious for all manner of hijinks. Of course, the adventure of the portable dog killer, to name one. But I think it would be a stretch to imagine that some high schooler's success at guessing a teacher's password or perhaps looking underneath the keyboard for it written down on a post-it note would lead to a life of cybercrime.

[00:52:06] You know, after all, everyone is an insider within their own family's home where there are plenty of tantalizing hacking opportunities. So, you know, one school, I would say, is just another of many. The second conclusion the ICO reached was that the responsibility for much of their students' hacking successes lay at the feet of the school's administrators,

[00:52:34] who repeatedly failed to properly and adequately secure their own networks. And, of course, writing one's password on a post-it note under the keyboard is never a good idea. In conclusion, the ICO urged schools to, quote, remove the temptation from their students, unquote, by taking steps to improve their own cybersecurity and data protection practices.

[00:52:59] So, yes, you are trying to herd a wild bunch of, you know, cyber-enabled kids. You know, do yourself a favor by locking the gate if that's what you're trying to do and not allowing them to see what's on the other side because, oh, that might lead them to a life that they regret. Okay, I don't think so. I think they're just having some fun, you know, accepting a dare and so forth.

[00:53:31] It's never a good sign when a security-aware bug bounty company such as HackerOne, one of the leading bug bounty companies, we've talked about them often, itself gets hacked. But this really wasn't on them. The blast radius of the recent Salesloft Drift supply chain attack has been wide and deep, and HackerOne was another entity that got caught up in it.

[00:54:01] They first posted about this shortly after it happened back at the end of August, August 28th, so like three weeks ago. They wrote, recently, hundreds, and that's true, of companies have been responding to an attack that resulted in unauthorized access to Salesforce records connected to the Drift application from Salesloft.

[00:54:30] I'll talk about what that is in a second. They said a situation detailed in reports from Mandiant and others. As part of our commitment, writes HackerOne, to transparency, trust, and our company's value of default to disclosure, we're writing to confirm that HackerOne is among the companies impacted by this incident.

[00:54:53] So, okay, they're trying to obscure themselves a little bit by being among the herd, and it's like, well, we're just one of hundreds. Okay. Anyway, they said our security team received notice of the potential compromise from Salesforce on Friday, August 22nd, and this was confirmed by Salesloft on August 23rd.

[00:55:15] HackerOne's security team immediately initiated incident response procedures, working in partnership with Salesforce and Salesloft to assess the scope and impact of this incident. HackerOne's investigation is ongoing, but we can confirm that a subset of our records in our Salesforce instance was accessed via a compromise of the Drift application.

[00:55:42] Due to HackerOne's strict policies and controls governing data segmentation, we have no reason to suspect that the incident impacted or exposed any customer vulnerability data. We're continuing to conduct forensics on the records that were accessed and will communicate directly with any impacted customers as appropriate.

[00:56:03] Okay, so that's everything we would want and hope to see in a breach disclosure, a straightforward reporting of the event with a promise to follow up when anything more is learned. And that follow-up was posted last Thursday, which is why it came back to my attention. Last Thursday, they wrote, HackerOne continues to investigate the recent Salesloft Drift incident.

[00:56:29] And we are posting here to update you on the status of our investigation as well as provide additional information we're able to share at this time. Based on the information we have to date, a subset of HackerOne's Salesforce data was accessed via the Drift application on August 13th and August 18th. Both the dates and the indicators of compromise are consistent with what Salesloft has reported, which can be found at trust.salesloft.com.

[00:56:57] And don't bother going looking because it's just marketing spiel. They said, we can confirm that all Salesloft Drift connectors are currently offline. And as a precaution, we have rotated all relevant API and service credentials. And I'm going to explain what this terminology here means in a second.

[00:57:18] Due to HackerOne's strict policies and controls governing data segmentation, we have no reason to suspect that the incident impacted or exposed any customer vulnerability data, nor have we found any indication of lateral movement. That's all good. We understand that you may still have questions about this incident, and we appreciate your patience as we continue our investigation.

[00:57:41] HackerOne has engaged a third-party forensics firm to ascertain what records were accessed, and we will communicate directly with impacted customers as appropriate. So basically, they're saying, yes, we were caught up in this. We've verified that our network was penetrated, but we have an architecture.

[00:58:04] Now, this is similar to what I was suggesting ought to be the standard going forward: segmentation, you know, network segmentation, where network segments... I was trying to find another word, but there it is.

[00:58:24] Segments are isolated from one another by purpose so that unless it's actually necessary for some API or individual to have access to some specific set of data, there is no physical access. That's what prevents any damaging lateral movement.
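The isolation being described, where a credential can only ever reach the one segment it was issued for, can be sketched as a deny-by-default policy table. This is a hypothetical illustration; the credential and segment names here are invented and are not HackerOne's actual architecture.

```python
# Hypothetical sketch of purpose-based segmentation: each integration
# credential is scoped to exactly one data segment, so a stolen
# credential cannot be used to move laterally into another segment.
# All names here are invented for illustration.
SEGMENT_POLICY = {
    "drift-connector": {"crm-contacts"},      # marketing chat integration
    "support-portal": {"tickets"},            # customer support app
    "triage-api": {"vulnerability-reports"},  # bug-bounty triage only
}

def access_allowed(credential: str, segment: str) -> bool:
    """Deny by default: a credential may touch only its own segment(s)."""
    return segment in SEGMENT_POLICY.get(credential, set())

# A compromised chat-integration credential can read CRM contacts,
# and nothing else:
print(access_allowed("drift-connector", "crm-contacts"))           # True
print(access_allowed("drift-connector", "vulnerability-reports"))  # False
```

The point is the default: an unknown credential, or a known credential asking for any segment outside its stated purpose, gets nothing, which is exactly what contains lateral movement.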

[00:58:49] We're always now talking about lateral movement, how you get in somewhere and then you move laterally in a network to some other location, and then from there you're able to get access you didn't have from where you began. That's what needs to be contained. So I usually try to find some lesson for us to take away from incidents that we cover, like all of this.

[00:59:12] The problem is that today's modern model of outsourcing services and interconnecting separate enterprises' automated systems with persistent authentication, which is what happened here, inherently brings a risk, which we are and have been seeing play out.

[00:59:36] One of the recent trends I'm sure everyone listening to this podcast has encountered is the increasing, and at least for me annoying, use of automated conversational AI chat windows that typically appear in the lower right corner of a website.

[00:59:59] I have yet to find engaging with one of those annoyances to be fruitful. If you've encountered one of those, it may have been courtesy of Salesloft Drift, since that's what their technology does. That's been the root cause of all of this pain.

[01:00:20] Salesloft Drift describes itself as, quote, a conversational AI chat lead qualification component of the Salesloft platform.

[01:00:33] It's built on or integrates the Drift chat AI agent that engages website visitors in real time, qualifies leads, routes them to the sales team via workflows like Rhythm, and helps convert them into pipeline, unquote. Okay, I don't want to be converted into pipeline, whatever the heck that means.

[01:01:00] All I want to know is whatever happened to that end table that we ordered, but that information is not available through the chatty chat bot. In order to integrate with its client enterprise customers, this Salesloft Drift AI chat thing needs to have access into its customers' networks.

[01:01:23] Consequently, when Salesloft Drift is hacked, all of its many customers' networks then suffer their own respective breaches, as the hackers of the company to which they have outsourced this service obtain the credentials that allow access into every one of those enterprises' internal networks.

[01:01:47] It's an inherently unstable solution with an astonishing blast radius. But, you know, you get to annoy every one of your visitors by asking them unprompted what they need and whether there's anything they want to ask while not ever being able to provide any answers. This today is what we call progress, Leo. It's customer service, baby. Have you seen those things?

[01:02:16] Those annoying little chatty windows in the lower right? It's like, I always close them. Always close them. Oh, and finally, in frustration once, I asked one of them. I said, well, here's what I want to know. And, you know, presumably it's some LLM AI thing. And I got nowhere with it. Finally, I got pissed off and said, I want to talk to a supervisor. And then it gave me a phone number to call. So it's like, okay. Oh, that's ridiculous. For future reference, just be upset with it and tell it you want to talk to a supervisor.

[01:02:46] Give me the number. Just stop it. Okay. So it was a little over a year ago. In episode 975, it was May of 2024. That we last talked about students hacking their university provided washing machines. You'll remember that, Leo. To obtain free laundry services.

[01:03:12] Now, today, a university campus in Amsterdam has shut down its laundry room after its five smart washing machines were hacked in July. Surprise, surprise. Again, that's what you would call an insider attack. Students were able to wash their clothes for free for months. But that will be ending shortly. I know.

[01:03:40] Those five internet connected smart machines are being replaced with dumb washing machines that accept old fashioned coins. Who even has coins anymore? Seems like the students are going to get what they deserve here, needing to somehow now go find coins to put in these slots. Imagine that the university must have been confounded. Why has everyone stopped using our washing machines?

[01:04:08] When we go to empty the coin boxes, they're empty. Imagine that. Now, I'll confess, as I mentioned when we talked about this before, UC Berkeley also provided coin-op washing machines in pre-internet 1973, when I happened to be there. And really, what did they expect?

[01:04:32] The machines had been placed in Ehrman Hall, which was the engineering dorm where I was. It turned out that the coin-op box that had been added as an afterthought to the machine had a sheet metal screw in the back.

[01:04:51] The removal of which created a hole through which a properly shaped length of coat hanger wire could be threaded. Not that you would do anything like this. Not that I would have ever had anything to do with that.

[01:05:08] But with a little bit of fishing around, it turned out, the lever that was normally actuated by the insertion of a quarter into the front could be tricked into believing that that had just happened. So let's just say that I never needed to bring laundry home on the weekends for my mom to wash. And that, my friends, that's what leads kids to hacking.

[01:05:36] That's down the dark path. That's right. It's the gateway drug to future hacking exploits. Wow. Indeed. That's what hacking is, right? It's getting around restrictions. I mean, it's like Wozniak and phone phreaking with a blue box that generated a 2600 Hz tone that disconnected the local line and dropped you into the long haul network. Not that I knew anything about that. No, of course not.

[01:06:05] No, no, no, no, no. Not a thing. Just things that fascinated kids. Okay. I'm just going to start this next piece by reading what was posted. Then I'm going to share my sadness.

[01:06:22] UK, London, Tuesday, last Tuesday, September 9th, FastNetMon, they wrote, today announced that it detected a record scale distributed denial of service attack. You know, DDoS targeting the website of a leading DDoS scrubbing vendor in Western Europe. The attack reached 1.5 billion packets per second.

[01:06:52] Not bits. These are 1.5 billion packets per second. One of the largest packet rate floods publicly disclosed. Now, I'll just pause to say that, remember, we talked about the challenges that flooding attacks present. One is bandwidth. Just the wires are unable to carry the amount of bandwidth that's being generated.

[01:07:19] So packets overflow the incoming buffers of the routers and are dropped. And as a consequence of that, valid packets have a very low probability of making it through the buffer into the router. As a consequence, the valid service is denied.

[01:07:44] The other problem is that every packet that does get into a router needs to be examined for its destination, with the routing table then used to look up which interface that packet should be sent out of. In other words, there is a per-packet routing overhead separate from just the raw bandwidth overhead.

[01:08:09] So when you're generating 1.5 billion packets per second and they are all focused down onto some poor little IP address somewhere, what happens is all the routers everywhere on the globe are dealing with all of those packets. And as they are routed closer and closer to their destination through multiple router hops,

[01:08:37] the overall rate of packets skyrockets to the point where even if the bandwidth weren't being flooded, the number of packets that needed to be examined per second, no router could possibly handle. So this attack, 1.5 billion packets per second, as they wrote, one of the largest packet rate floods publicly disclosed.
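The distinction drawn here, raw bandwidth versus per-packet routing work, can be made concrete with some back-of-the-envelope arithmetic. The 64-byte minimum packet size is an assumption for illustration, not a figure from the show.

```python
# Back-of-the-envelope arithmetic illustrating why packet rate is a
# separate problem from raw bandwidth: every packet costs the router
# a route lookup, regardless of the packet's size.
pps = 1.5e9                   # reported flood: 1.5 billion packets/sec
ns_per_packet = 1e9 / pps     # per-packet time budget, in nanoseconds
print(f"{ns_per_packet:.2f} ns per packet")  # 0.67 ns

# Even minimum-size 64-byte packets at that rate still consume
# substantial bandwidth on their own:
bits_per_second = pps * 64 * 8
print(f"{bits_per_second / 1e12:.2f} Tbps at minimum packet size")  # 0.77 Tbps
```

Two-thirds of a nanosecond per lookup is beyond what a general-purpose router can sustain, which is why a high packet rate can kill a router even when the link itself is not saturated.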

[01:09:04] The malicious traffic, they said, was primarily a UDP flood launched from compromised customer premise equipment. In other words, you know, CPE is the abbreviation, IoT devices and routers. Across, get this, more than 11,000 unique networks, not devices, 11,000 networks worldwide.

[01:09:29] The disclosure, they said, comes only days after Cloudflare reported mitigating an 11.5 terabit per second DDoS attack. 11.5 terabits, trillion bits per second, showing, they said, how attackers are pushing both packet and bandwidth volumes to unprecedented levels. I mean, really, it's just crazy.

[01:09:58] Pavel Odintsov, founder of FastNetMon, said, quote, This event is part of a dangerous trend. When tens of thousands of customer premise equipment devices can be hijacked and used in coordinated packet floods of this magnitude, the risk for network operators grows exponentially.

[01:10:20] The industry must act to implement detection logic at the ISP level to stop outgoing attacks before they scale. Okay, so there, what he's talking about is, as I said, attacks originate from 11,000 networks, right? And it's the concentration, the aggregation of all of that bandwidth as it narrows down on the internet to a single target

[01:10:49] that causes the buffers to overrun and the routers to fail to be able to route that many packets per second. But if it were possible for all 11,000 of those source networks to never transmit the outgoing packets, then there wouldn't be the ability for the traffic to aggregate.

[01:11:14] Anyway, this quote finishes by saying that FastNetMon's advanced platform is designed to handle attacks of this size, using highly optimized C++ algorithms for real-time network visibility. FastNetMon enabled its customer to automatically detect the flood within seconds, preventing disruption to the target service.

[01:11:39] Okay, I'm not sure what highly optimized C++ algorithms have to do with anything. And unfortunately, this Pavel guy is dreaming. We've been talking about the problem of DDoS flooding throughout the entire 20 years of this podcast. And during that time, while attacks have grown astronomically in scale,

[01:12:04] they have also become less possible to prevent. Back in the early days, spoofing source IP addresses was the order of the day. We argued at the time, correctly, that no ISP should emit any packets from their networks that contained a fraudulent source IP. So-called egress filtering could have been employed back then

[01:12:34] to nip those attacks in the bud before the traffic was given the chance to aggregate into an overwhelming flood. That was all true then. But the only reason devices back then were spoofing their source IP addresses was to hide their true IP from their victims.
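That kind of egress filtering amounts to a one-line membership test at the ISP edge: is the source address actually one of ours? This sketch assumes a hypothetical ISP that owns two (documentation-range) prefixes.

```python
# Minimal sketch of BCP 38-style egress filtering, assuming a
# hypothetical ISP that owns two prefixes: any outbound packet whose
# source address falls outside those prefixes is presumed spoofed
# and dropped before it ever leaves the network.
import ipaddress

OWNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),   # documentation prefix
    ipaddress.ip_network("198.51.100.0/24"),  # documentation prefix
]

def should_forward(source_ip: str) -> bool:
    """Forward only packets that legitimately originate inside our network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in OWNED_PREFIXES)

print(should_forward("203.0.113.7"))  # True: a real customer address
print(should_forward("8.8.8.8"))      # False: spoofed source, drop it
```

As the discussion goes on to explain, this check only defeats floods that rely on spoofed sources; today's botnets of compromised devices send packets with their own, perfectly valid addresses.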

[01:12:55] Once you have tens of thousands of individually compromised home routers and IoT devices, hiding is no longer necessary. Who cares if the identity of some of these devices, or all of them for that matter, is known? They're scattered across the globe in faraway countries behind ISPs that will never pick up the phone.

[01:13:21] As a consequence, source IP spoofing as a requirement for packet and bandwidth flooding is far less important today than it once was. There's no way for an ISP now to know that any given outbound traffic is fraudulent because it carries valid source IP addresses.

[01:13:43] The other factor is that it is trivial for a CDN like Cloudflare to drop all incoming readily spoofable UDP traffic. Cloudflare doesn't need UDP traffic. It's a web hosting provider. So what it needs is TCP traffic over port 80 and 443.

[01:14:09] And as we noted recently, even port 80, old unencrypted HTTP instead of HTTPS, is now falling by the wayside too. So now the name of the game is connection flooding. And connection flooding needs the TCP protocol with roundtrip packets. And roundtrip packets prohibit the use of any spoofing.
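The coarse protocol filter a web-only edge can apply, drop all UDP outright, accept TCP only on the web ports, reduces to a few lines. This is purely illustrative; a real CDN edge implements it in hardware.

```python
# Sketch of the coarse edge filter described above: a web host needs
# no inbound UDP at all, and only TCP on the standard web ports.
WEB_PORTS = {80, 443}

def accept(protocol: str, dst_port: int) -> bool:
    """Return True if the edge should let this packet through."""
    if protocol == "udp":
        return False  # readily spoofable UDP floods dropped wholesale
    return protocol == "tcp" and dst_port in WEB_PORTS

print(accept("udp", 53))   # False: reflected UDP flood, dropped
print(accept("tcp", 443))  # True: normal HTTPS connection attempt
```

Which is exactly why attackers have shifted to TCP connection floods: to get past this filter they must complete round trips, and that in turn forces them to use real, unspoofed source addresses.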

[01:14:39] And of course, now, who cares, when today's massive botnets have tens of thousands of individually throwaway agents. We don't care what their IP addresses are. Nobody will ever contact the people who are in control of them, or their ISPs, or their ISPs' ISPs.

[01:15:00] One of the earliest things we talked about on this podcast during our How the Internet Works series was the brilliant genius invention of the idea of opportunistic packet routing. By completely dropping the idea, just forgetting about it, that every communication packet needed to get through the network with 100% reliability,

[01:15:28] the brilliant designers of the Internet invented an incredibly elegant solution for the ages. There's just one problem with it. To this day, and probably forevermore, that incredibly elegant system is utterly and completely vulnerable to packet generation abuse. And there is no way to fix it. None. The reason why.

[01:15:55] This astonishing global network, which we have, is there. It's in place. So that anyone anywhere can send a packet to anyone else anywhere else. Unfortunately, there is nothing to prevent bad guys with thousands of remotely scattered devices under their control, all sending as much packet traffic as they can to anyone they choose.

[01:16:21] The result of this is that frequently targeted companies are choosing to hide behind the growing number of companies who are able to provide comprehensive DDoS protection thanks to having many points of Internet presence themselves, their own massive network bandwidth, which is able to absorb these attacks and the automation in place to block incoming attack traffic once it's been identified.

[01:16:50] It's not an ideal solution, but I suppose it's the price we pay for a system that otherwise works so incredibly well. And Leo, you know the other system that works incredibly well? You mean the system where we do ads to pay for all of this and you drink more coffee? And I get to have some coffee. That system? I like that system. That's the one. We're going to take a little break. We'll have more of security now in just a moment. We're talking about how can you solve the problem?

[01:17:20] You know, obviously training is not enough when employees have unlimited access to everything on the network. Well, there is a solution out there. It's called Zero Trust. This episode of Security Now brought to you by ThreatLocker. You know ransomware is harming businesses and schools and, I mean, everybody worldwide.

[01:17:42] It happens through phishing emails, infected downloads, malicious websites, RDP exploits, that link that no one should be clicking. Don't you be the next victim. ThreatLocker's Zero Trust platform takes a proactive, and this is the key, three words, deny-by-default approach

[01:18:04] that blocks every unauthorized action, protecting you from both known and unknown threats, and that employee who keeps clicking those links. Trusted by global enterprises, companies that can't afford to go down for one minute, let alone three weeks. JetBlue, for instance. Infrastructures like the Port of Vancouver, they use, both use ThreatLocker.

[01:18:26] ThreatLocker shields them and you from zero-day exploits and supply chain attacks while providing complete audit trails for compliance. ThreatLocker's innovative ring-fencing technology isolates critical applications from weaponization, stopping ransomware, and limiting lateral movement within your network. ThreatLocker works across all industries. It supports Mac and PC environments, provides 24-7 US-based support, really good support.

[01:18:55] Not that you're going to need it. It's so easy to use, and it enables comprehensive visibility and control. So ask Mark Tolson. He's got a tough job. He's the IT director for the city of Champaign, Illinois. Imagine. That's one of those jobs where you have to be perfect. The bad guys, just wait. You have to be perfect. He says, quote, ThreatLocker provides the extra key to block anomalies that nothing else can do.

[01:19:24] If bad actors got in and tried to execute something, I take comfort in knowing that ThreatLocker will stop that. End quote. Stop worrying about cyber threats. Get unprecedented protection quickly, easily, and cost-effectively with ThreatLocker. Visit ThreatLocker.com slash twit to get a free 30-day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance at the same time.

[01:19:50] That's ThreatLocker.com slash twit. Thank them so much for their support of Security Now. Good work Steve's doing here. All right. On we go. Okay. So Bluesky is going to implement conditional age verification for South Dakota and Wyoming. As age verification requirements continue to evolve, we got an update last Wednesday from Bluesky.

[01:20:20] Recall that the last time we talked about them, they did go completely dark in Mississippi due to Mississippi's all-or-nothing age verification law. After the first two paragraphs of Bluesky's posting, which didn't really say anything, it was just a marketing spiel, they said,

[01:20:41] In the UK, we complied with a new law that requires platforms to restrict children from accessing adult content. In Mississippi, the law requires us to restrict access to the site for every unverified user. That's the difference. They said,

[01:21:36] These are very similar to the requirements of the UK Online Safety Act. So as we did in the UK, we'll enable the Kids Web Services (which they abbreviate KWS) age verification solution for users in these states. Through KWS, Bluesky users in South Dakota and Wyoming can choose from multiple methods to verify their age.

[01:22:05] But the important part is you don't have to unless you're trying to access adult content. So all users can still remain anonymous unless they are trying to access age-restricted content. That's what Mississippi did not do. They said,

[01:23:05] That's what the UK has done. Following that tragic Mississippi suicide of the young man who was catfished on Instagram, the state of Mississippi has effectively declared war on all social media, regardless of its content. While First Amendment lawsuits are flying, Bluesky decided to just back out of Mississippi until the dust settles.

[01:23:33] What would be good is if Mississippi were to align themselves with South Dakota and Wyoming and just say, OK, it's just the adult content. But, you know, it depends what you define as adult content, though. That's the problem. And these legislators define it much more broadly than you and I might expect when they call stuff adult content.

[01:23:54] And unfortunately, as we know, our U.S. Supreme Court did not make this fight any easier because they said, we don't think it is a First Amendment compromise to require people to provide proof of their age. Right. Well, I mean, that's a huge privacy compromise. Right now, we have no system that allows you to do that without divulging who you are. Guess who's the latest?

[01:24:22] ChatGPT says it's going to attempt to guess your age. Oh, my God. And if it can't guess that you're over 18, it's going to ask for verification. Wow. Wow. This comes in the wake of lawsuits over teen self-harm stories blaming ChatGPT. They're going to create a ChatGPT for kids. So if it thinks you're under 18, it's going to shift you over to that.

[01:24:49] And if it's not sure, it's going to say, OK, you need to give me some ID. And that's, again, hugely problematic. I asked ChatGPT. It says, well, I know you're 68. You told me. But it believed it. And that's the point: it assigned me an age based on what I had told it in a prompt. Well, and I'm sure it knows who I am. It knows me, my email address.

[01:25:18] It knows my account. It can go check. I'm all over the Internet. So it knows what my birthday was. It doesn't have to guess that. The big problem. I mean, for example, I'm a big ChatGPT user. I don't have a problem, you know, disclosing who I am to ChatGPT.

[01:25:39] But, you know, the dicey thing is, for example, porn sites, where people are going to be very self-conscious about, you know, de-anonymizing themselves. And in fact, we're about to talk about that, because the UK is really going overboard here.

[01:26:03] This next story I have, speaking of the UK: they're on the warpath following their July 25th passage of the new age check requirements. And that's what we were talking about with the Online Safety Act, which talks specifically about adult content.

[01:26:20] Only a week after its passage, they announced that they had launched investigations into the compliance of four companies, which collectively run 34 pornography websites to verify that they were now using, quote, highly effective age assurance, unquote, to prevent children from accessing that content.

[01:26:45] At the time, they said that these 34 new cases added to Ofcom's (that's the office in the UK that does this) 11 investigations that were already in progress into 4chan and an online suicide forum, seven file sharing services, and another pair of porn publishers.

[01:27:09] They concluded by saying that they expected to be making further enforcement announcements in the coming weeks and months, which just happened last Thursday. With their apparently proud announcement that another 22 porn sites were now being investigated to verify the effectiveness of their age verification measures.

[01:27:32] So, as I started to say, it's one thing to need to show your ID in order to pick up a medication prescription or before purchasing alcohol. But it's obviously a far more sensitive matter, a personally sensitive matter, to need to produce an ID in order to obtain access to online content that is, to say the least, controversial and probably extremely embarrassing.

[01:27:58] So, it's hardly any surprise to learn that the traffic of the websites that are requiring such proof of age has dropped precipitously. And, Leo, somewhere I saw, and when I went back to look for it, I couldn't find it, but they were actually targeting sites whose traffic had increased since their legislation.

[01:28:27] Because we knew that people were being driven to the sites that did not require age verification and away from the sites that were. This is just a mess. You know, I'm glad Steena is on this because, I mean, you know, she's a bulldozer.

[01:28:47] And she's going to, if she's working with the World Wide Web Consortium and has a nonprofit set up and they are 100%, dare I say, laser focused or laser aimed at this problem, you know, we need a solution and we need it yesterday. Steena being Steena Svalbard, who is the CEO of Yubico and a friend of the show. And, of course, the YubiKey is at number one solution for hardware authentication.

[01:29:17] So, she's working on some sort of privacy-forward ID solution. Yes. She has established a nonprofit. She just won a big award, like Sweden's number one entrepreneur-innovator award. Nice.

[01:29:37] And, I mean, so she's really and since I knew her, I mean, we had, she used to come down because I, what's the big gaming company down here? Zynga? World of Warcraft. Oh, Blizzard. Yeah. Blizzard is down here. Right. And she was providing their identity solutions. And so, we would meet at Starbucks and spend a morning, you know, talking about, you know, all this stuff.

[01:30:07] Let me correct, by the way. I gave her the wrong name, calling her Steena Svalbard. She's Stina Ehrensvärd. Ehrensvärd. Correct that. Yeah. Svalbard is the archipelago close to the Arctic Circle. It's a different place entirely. Yeah. Anyway, so this has been a thing for her.

[01:30:26] And a few months ago, I sent a note just saying, Stina, I hope somebody is looking at age verification, because we need a privacy-forward age verification system where all it does is challenge you: are you at least this old?

[01:30:46] And you just get a go/no-go reply from, you know, a system that cannot be spoofed, that is biometrically locked, that provides the things we need. Anyway, she says, yes, I have a nonprofit that's doing that right now. Good. Good. That's exciting. Yeah, it is. We'll watch with interest, and we'll talk to her when it comes out. Okay.
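The go/no-go challenge Steve describes can be sketched in a few lines. This is purely a hypothetical illustration, not the design of any real system; a real deployment would use public-key signatures and hardware binding rather than the shared HMAC key used here. The idea: an attester who already knows the user's age answers only a signed yes/no to "are you at least this old?", so the website never learns the actual age.

```python
import hmac, hashlib, json

# Illustration only: a real system would use public-key signatures,
# not a shared secret known to both parties.
ATTESTER_KEY = b"demo-shared-secret"

def attest_at_least(age: int, threshold: int) -> dict:
    """Attester side: answer the challenge with a signed boolean."""
    answer = {"at_least": threshold, "ok": age >= threshold}
    payload = json.dumps(answer, sort_keys=True).encode()
    sig = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return {"answer": answer, "sig": sig}

def verify(attestation: dict) -> bool:
    """Website side: check the signature, learn only the boolean."""
    payload = json.dumps(attestation["answer"], sort_keys=True).encode()
    expected = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["sig"]) and attestation["answer"]["ok"]

a = attest_at_least(age=68, threshold=18)
print(verify(a))  # the site learns "yes, at least 18", not 68
```

The point of the shape is that the only bit that crosses the wire to the website is the threshold comparison, which is exactly the privacy-forward property being discussed.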

[01:31:13] We've talked about GPC, the Global Privacy Control, which, as we know, talk about go/no-go, is a signal reminiscent of its predecessor, DNT, Do Not Track. And of course, much as I was for DNT, it never got off the ground, since without enforcement it means absolutely nothing.

[01:31:36] You know, you've got to sue some people in order to get the industry's attention and for them to go, oh, maybe we should, you know, take this seriously. But on the enforcement front, GPC may have a brighter future. The news is that state attorneys general from California, Colorado, and Connecticut, three C's. We've seen these three get together before.

[01:32:01] Colorado, California, and Connecticut, they've announced a joint investigation into companies refusing to comply with global privacy control, which is now a law. Data trackers that refuse to honor the GPC signal are in violation of recently passed state privacy laws.

[01:32:25] Seven other U.S. states also require companies to honor GPC, but they've not joined the enforcement action. They may not need to, or maybe they'll make it 10 states. Anyway, this is great news, since, as I noted, without any enforcement the law means nothing and will likely suffer the same fate as befell DNT. But, you know, there's hope here, because, you know, certainly California is serious about its privacy laws.
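For what it's worth, the GPC signal itself is tiny: a participating browser adds a `Sec-GPC: 1` request header to outgoing requests (and exposes `navigator.globalPrivacyControl` to page scripts). A minimal server-side sketch of detecting it might look like this:

```python
# Minimal sketch of detecting the Global Privacy Control signal on the
# server side. Per the GPC proposal, a participating browser sends the
# request header "Sec-GPC: 1"; a compliant site treats that as an
# opt-out of the sale or sharing of that user's data.

def gpc_opt_out(headers: dict) -> bool:
    """Return True if this request carries the GPC opt-out signal."""
    # Header names are case-insensitive in HTTP, so normalize first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

print(gpc_opt_out({"Sec-GPC": "1"}))      # True: suppress tracking/sale
print(gpc_opt_out({"User-Agent": "x"}))   # False: no signal present
```

Detecting the header is the trivial part; as the discussion above notes, what GPC lacked until now was any legal consequence for ignoring it.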

[01:32:54] And if it's got, what was it, 499 registered data trackers? If California investigates and finds they're not honoring it, they're going to just get kicked out of California. So, yay for enforcement. Listener feedback. Michael Buck wrote, hi, Steve. In episode 1040, you talked about your disappointment with what you called Synology's built-in NAS synchronizer.

[01:33:23] He said, I'm not sure you gave your listeners a fair review of Synology's solutions. He says, I'm a Synology user and have used Synology Drive, which works like SyncThing, Box, and other synchronizing tools. Like you, I have several machines that I use and like to keep files synchronized between these machines. Synology Drive was easy to set up, and I've been using it for years without any problems.

[01:33:50] It keeps my files synchronized between multiple Mac and Linux machines. I also use the tool that Leo mentioned, Hyper Backup. Most Synology NAS machines have an external USB port. My son also has a Synology, and we each purchased a large USB drive and plugged them into each other's NAS USB ports.

[01:34:14] Then we each use Hyper Backup to back up our NAS machines to our own USB drives at each other's location. The data is encrypted, and we don't eat up the disk space on each other's NAS. Thanks for all you and Leo do to provide a great podcast. Cheers, Mike, Spinrite owner, and podcast listener since episode one in Houston, Utah. That's clever. Yeah, that is.

[01:34:41] Okay, so in case anyone else may have been confused by my disappointment with Synology's built-in inter-NAS synchronization, I wanted to take another moment to clarify. There was nothing whatsoever wrong with it. I agree with Mike that it was quick and easy to set up, and I have a strong bias toward what we would refer to as "living off the land" solutions,

[01:35:07] meaning that if Synology provides a means of keeping two of their NASes synchronized, I would be strongly inclined to assume that they know best how to do it. And again, it worked. I would have never been unhappy with it or aware that the system, at least for me, was operating in what appeared to be a far from optimal way,

[01:35:34] unless I had been watching the Synology Drive's massive, apparent, full resynchronization using SoftPerfect's wonderful free NetWorx utility, which I've spoken of before. I have that utility, NetWorx, configured to continually display the SNMP counters on my router's interface.

[01:36:00] So it is showing me not my own machine's bandwidth, but the instantaneous bandwidth usage of my entire LAN, which includes the Synology. What I witnessed, to my extreme chagrin on many occasions, was my network's bandwidth being pinned for a very long period of time after only updating a few files on my NAS.
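As an aside, the arithmetic behind an SNMP bandwidth display like the one just described is simple: sample the interface's octet counter twice and difference the samples, allowing for counter wraparound. A minimal sketch, assuming a classic 32-bit ifInOctets counter:

```python
# Sketch of turning SNMP interface counters into a bandwidth figure:
# sample the octet counter twice, difference, and scale to bits/second.
# ifInOctets is a 32-bit counter, so the subtraction must handle wrap.

COUNTER_MAX = 2**32  # ifInOctets wraps at 2^32

def bits_per_second(octets_then: int, octets_now: int, seconds: float) -> float:
    delta = (octets_now - octets_then) % COUNTER_MAX  # wrap-safe difference
    return delta * 8 / seconds

# Two samples taken 5 seconds apart: 625,000 octets = 1 Mbps.
print(bits_per_second(1_000_000, 1_625_000, 5.0))

# A wrapped counter still yields the right answer:
print(bits_per_second(COUNTER_MAX - 100, 400, 1.0))  # 500 octets -> 4000.0 bps
```

On fast links a 32-bit counter can wrap between samples, which is why modern gear also exposes the 64-bit ifHCInOctets variant.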

[01:36:28] And when I checked the NAS's drive lights, they were all flashing away like mad. So it appeared that updating a small collection of files was basically triggering some sort of wholesale resynchronization of the entire NAS whenever that happened.

[01:36:53] Again, everything worked, but it was certainly not a situation that I wanted to live with. The only change I then made was to shut down Synology's native synchronizer and run SyncThing natively on both NASes, with them synchronizing everything on each end. Now, using SyncThing, when I update a few files on my local NAS, for example, after rebuilding a new instance of the DNS benchmark,

[01:37:22] after a short delay, I'll notice a brief few seconds long blip of outgoing bandwidth as my local SyncThing instance sends those and only those updated files over to the other NAS. So, yes, SyncThing's native synchronization works. No question about it. And it's, you know, I meant to say Synology's native synchronization works.

[01:37:52] It's easy to set up and configure, but it might be worth monitoring its bandwidth usage. Or if that's not easy for you to do, just watch its drive activity lights after you've updated a bunch of files all at once and see if they just go, you know, blip for a few seconds or if it generates, you know, 45 minutes to an hour of frantic drive lighting, because that's what I saw. Greg Williams wrote, Hi, Steve.

[01:38:22] Just a few notes. Cloudflare already has certificate transparency monitoring. He says, although it's in preview and gave me a link. He said, no idea why they didn't use it themselves. And he said, you also mentioned the 1.1.1.1 domain. That's not a domain. It's an IP address that's not owned directly by Cloudflare, but APNIC. He said, see the Wikipedia article.

[01:38:51] And he gave me a pound-sign tail on the URL, which, as we know, jumps you to a section on a page. That section, titled "Prior usage of the IP address," documents other default uses of 1.1.1.1, which he attributes to laziness by other vendors, including Cisco. Signed, cheers. Greg Williams, Brisbane, Australia. Interesting. Okay.

[01:39:21] Yes. So, of course, first of all, Greg is 100% correct about 1.1.1.1 not being a domain. I know better. The numeral 1 is not a TLD, right? It's a numeral 1, which could never be a TLD since the RFC specified minimum length of any TLD is two characters.

[01:39:49] You cannot have a single character top level domain. So, Greg, thank you for the correction. I also got a kick out of Greg's reference to that Wikipedia page, which suggests that it wasn't just this random CA that was using 1.1.1.1 out of laziness. Apparently, Cisco and others have been found to be using it, too, for very much the same reason. So, thank you for that, Greg.

[01:40:17] Buzz said, I've listened to the last show, and as a UK citizen, I can confirm that Apple's ADP is still active for those users who opted in at the start. Good. It is unavailable to any new users. Best regards, Buzz. And Dan Bright said, hi, Steve.

[01:40:38] Regarding last week's talk about the availability of Apple's ADP in the UK, he said, I have it turned on myself and can confirm that Apple has not yet removed it from my account. Kind regards, Dan in Scotland.

[01:40:53] So, anyway, Buzz and Dan's notes were echoed by other listeners who all confirmed that while it's no longer possible to enable fresh UDP, I mean ADP, you're not able to turn on advanced data protection. It has not yet ever been forcibly removed from any UK-based Apple user who has reported in to us.

[01:41:19] So, if the effect of the still inferred and presumed UK notice, which was presumably sent to Apple, if that stands, then the presumption is that Apple will eventually be required to ask all UK users to please flip the switch off.

[01:41:40] Or perhaps Apple will themselves preemptively disable the feature with some future update and just inform their users that the devil made them do it. So, don't know what's going to happen. But it is at least a little bit of a canary for us to get some sense for what's going on because, you know, no one's talking, annoyingly.

[01:42:03] John David Hicken wrote, I'm following the proposals to solve the problem of asserting that age of X is greater than or equal to Y. That's the way he phrased it. He said, zero-knowledge proofs may come in handy here, but it seems to me that there is, and he gets kind of clever here, he says, there is a potential use case that deserves thinking about.

[01:42:29] So, if different states start to impose differing age requirements while attracting the same visitors, then web tracking across those sites may be able to refine upwards the lower limit on a person's guessed age. That's true. He said, I'm not sure if it's a real issue, but somebody will surely try to monetize it. So, anyway, John's thinking is correct and clever.

[01:43:00] That is, if the, and using that equation, age of X is greater than or equal to Y.

[01:43:08] Well, if the Y changes as you move from state to state, and you continue making that assertion, and someone were to follow you as you roamed from state to state and watched whether that assertion was true or not, they would be able to raise the lower bound on X, potentially up to equality, where X equals Y.

[01:43:38] So, again, as I said, clever.
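John's observation can be made concrete in a few lines: each yes/no answer to "are you at least Y?" tightens an interval around the person's true age. A minimal sketch of what a cross-site tracker could compute:

```python
# Sketch of the cross-state age-refinement idea: every threshold
# assertion a tracker observes narrows its estimate of a person's age
# to an interval [low, high).

def refine(assertions):
    """assertions: list of (threshold, answered_yes) pairs."""
    low, high = 0, 130          # prior: age is somewhere in [0, 130)
    for threshold, yes in assertions:
        if yes:
            low = max(low, threshold)    # proven at least this old
        else:
            high = min(high, threshold)  # proven younger than this
    return low, high

# One state requires 18+, another 21+, a third 25+:
print(refine([(18, True), (21, True), (25, False)]))  # (21, 25)
```

With enough distinct thresholds, the interval collapses toward a single value, which is exactly the privacy leak John is warning about: even a "privacy-preserving" boolean assertion leaks a bit every time the threshold changes.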

[01:43:42] The handwriting is certainly on the wall that this previous era that we have all been enjoying of free and full unfettered access to the Internet's content is rapidly drawing to a close thanks to recent legislation in the UK, soon coming to the EU, and already within many state jurisdictions within the United States.

[01:44:09] Internet websites, which inherently have global reach, are being required to comply with the laws which govern their visitors, which often requires that those visitors sacrifice the fully anonymous access that we've been enjoying up to this point to the requirement of an acceptable form of age verification.

[01:44:31] I haven't noted this before, but we may see safe havens for anonymous Internet access spring up in the wake of these new legal restrictions.

[01:44:44] Websites that are compelled to obey the law might geolocate their visitors and limit their age restriction enforcement to only those countries that impose these requirements, much as Bluesky is doing at state granularity here in the US and also for the UK.

[01:45:04] Given that doing so is entirely feasible, that is geolocating your visitor, it would seem to follow logically from country-specific legal requirements. So, for example, anyone coming from the UK, the EU, or the US would be required to provide proof of their age.

[01:45:26] But, for example, Icelandic visitors who are outside the EU and live within a society with very liberal Internet regulations might not be required to give up any identifying information.

[01:45:40] And if that were the case, it would not be a stretch to imagine commercial VPN providers deliberately establishing points of presence in Iceland and offering customers anywhere, including the UK, EU, and US, the option of having their VPN traffic routed out through Icelandic locations. You know, again, all just technology. This is the problem with a global Internet.
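The jurisdiction-aware gating described above boils down to a simple lookup. Here's a minimal sketch; the jurisdiction table is entirely illustrative, an assumption for the example, not a statement of what any actual law requires:

```python
# Sketch of geolocation-based age gating: demand age verification only
# from visitors in jurisdictions that mandate it, and only for gated
# content. The table below is hypothetical, for illustration only.

REQUIRES_AGE_CHECK = {"GB", "US-MS", "US-SD", "US-WY", "DE", "FR"}

def must_verify(jurisdiction: str, wants_adult_content: bool) -> bool:
    """Gate only listed jurisdictions, and only for gated content."""
    return wants_adult_content and jurisdiction in REQUIRES_AGE_CHECK

print(must_verify("GB", True))   # True: show the age-verification flow
print(must_verify("IS", True))   # False: e.g. Iceland, anonymous access
print(must_verify("GB", False))  # False: non-adult content, no check
```

And that simplicity is precisely why the VPN workaround follows: route your traffic out of an exit in a non-listed jurisdiction and, to the site, you are an "IS" visitor.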

[01:46:10] How do you solve these problems? Yep. There's no national jurisdiction that applies globally. And you're enforcing the laws under which your visitors are under. Right. Which varies from country to country, state to state. Ultimately, though, the lowest common denominator ends up winning, right?

[01:46:33] If people get more and more afraid of getting sued or shut down, they just kind of revert to zero free speech, I guess. As I think you correctly generalized, there is a coalition that just wants to see all pornography outlawed on the Internet. Yeah. And so, you know, I mean, it's like there's that, too. You know, that's what they're saying. Okay. We're just going to make it so painful that it will stop being a profitable business.

[01:47:03] Yeah. And I think it's important, the distinction between pornography and adult content. I think there is also a fairly large constituency on the Internet that wants to control what you see, period, and is willing to call a variety of things adult content that others might not consider adult content. Stuff that's not pornography. Yes.

[01:47:24] A week or two ago, I read a really well-written lament from someone who, he or she, I don't remember now, wrote adult non-pornographic content. Oh, yeah. I don't know if it was poetry or. It was erotic. Yeah. Yeah, exactly. Exactly. And it was like, you know, I'm subject to these laws now.

[01:47:54] Right. And, yeah, I think it's really a desire, a strong desire to control what you and I and everybody else can see, to control the flow of information. And I think that's anti-democratic in the long run. But they always use children, you know, let's protect the children as the excuse. And it's not that they're wrong. I mean, the children need protection. I want to protect children. Absolutely. Yeah. Absolutely. Let's take a break.

[01:48:24] And then we're going to start in on memory integrity enforcement. And I'll find a point at about two hours, in another half hour, to take our final break, because we're going to spend from now until the end with, as I said, get your waders on. I'm looking for my propeller hat. Yeah. Yeah. I don't think that's going to do it. I think you need waders. You need to be able to get into some deep stuff here. Oh, I love it. Everybody loves it when you go that way. Let's go.

[01:48:53] We're getting in deep, kids. Hang on. Before we do, though, a last moment of sanity. Let's talk about our sponsor, Bitwarden. Yeah. We love Bitwarden. Bitwarden. We love Bitwarden. The trusted leader in passwords. Yes. Passkeys. And really, in general, secrets management. Bitwarden is consistently ranked number one in user satisfaction by G2 and by software reviews. Bitwarden has more than 10 million users across 180 countries.

[01:49:23] Over 50,000 businesses. These are people who value open source, who understand that any sort of crypto, including password managers, which rely on cryptography, needs to be open source. So you can verify that it's doing what it says it does exactly. No more, no less. And I think open source is the one and only solution for that. So that's one of the reasons I switched to Bitwarden. The other thing I like about Bitwarden is very forward thinking.

[01:49:49] They're always advancing what they do, what they can do. And one of the things that Bitwarden folks realized recently is there is an issue with people using AI and agentic browsers and agentic AI going out on the Internet, say, to look up stuff, but also to buy stuff. Because those AIs have to have your credentials, right, to buy it, your credit card, your password, and that kind of thing. And so now there is a security gap.

[01:50:17] And that's why Bitwarden just launched their very own Bitwarden MCP server. Now, it hasn't been packaged up. You know, the documentation is a little sparse, but it is available right now for you to see and use and examine at Bitwarden's GitHub. What does it do? Well, it enables secure integration between AI agents and credential workflows.

[01:50:38] So the idea is it's a secure, standardized way for AI agents to communicate with Bitwarden, to get your password, to keep it safe, but to log into those sites. Users benefit from a local-first architecture for security, because the Bitwarden MCP server runs on your local machine. So all of that secret stuff, all the client interactions are kept within the local environment, minimizing the exposure to external threats.

[01:51:07] It also integrates with the Bitwarden command line interface. That might not be important to you. I happen to love it. I use Linux and I use the CLI on Linux, and I love it. Users can also opt for self-hosted deployments. This is another thing Bitwarden's famous for. As an individual user, if you wish, you can self-host your vault. I don't do it, because I trust Bitwarden to keep my vault safe. But if you want that extra level of trust, you can self-host.

[01:51:34] And now with the MCP server, you can also self-host that deployment, which means you have greater control over system configuration and data residency. It never leaves your system. What is MCP? It's an open protocol for AI assistants. MCP servers enable AI systems to interact with commonly used applications that could be content repositories like GitHub, business platforms like Salesforce developer environments through a consistent open interface.

[01:52:03] Could even mean other AIs like, you know, Claude. So, driving secure integration with agentic AI, Bitwarden's MCP server represents a foundational step towards secure agentic AI adoption. If you think about it, it's kind of a missing piece of the puzzle. But that's not all. I mean, Bitwarden is always doing important work to keep you secure, to keep you safe, to enhance its capabilities.

[01:52:29] A new report just came out from Infotech's research group titled Streamline Security and Protect Your Organization. This report highlights how enterprises in the Forbes Global 2000 are turning to, yes, Bitwarden, to secure identity and access at scale. The report emphasizes the situation we're in now, which is growing security complexity because you've got globally distributed teams. You've got fragmented infrastructure.

[01:52:53] You've got credentials dispersed, you know, across teams, contractors, devices. Enterprises are addressing these credential management gaps and strengthening their security posture by investing in scalable enterprise-grade solutions like, you got it, Bitwarden. Now, it's easy to move to Bitwarden. Steve and I did it a few years ago. Bitwarden supports importing from most password management solutions.

[01:53:18] It's actually easier than when we did it, but even then it only took a few minutes. And, of course, the Bitwarden open source code is regularly audited by third-party experts. Anyone can look at it. You too. But they hire these experts and they publish the reports. They also meet SOC 2 Type 2, GDPR, HIPAA, and CCPA requirements. They're ISO 27001:2022 certified. Bitwarden does it right.

[01:53:46] One more thing I want to tell you about, then we'll get back to the show. It's coming up just a few days off, September 25th, Bitwarden's 6th Open Source Security Summit. It is a virtual, free industry event. You can register right now for it. You can attend it from anywhere. Absolutely free. Go to the website, opensourcesecuritysummit.com. All one word. opensourcesecuritysummit.com.

[01:54:09] To explore advancements in open source security and see how using open source tools can build trust with customers and consumers. I think it's vital. I really do. Bitwarden is the awesomest. Get started today with a free trial for your business of a Teams or Enterprise plan. Or if you're an individual, get started for free forever across all devices. Unlimited passwords. Unlimited pass keys. It supports hardware keys like the YubiKey.

[01:54:36] If you're an individual user, it's free for life at bitwarden.com slash twit. Now, I paid 10 bucks for the, you know, a year for the kind of premium version. But that's just because I want to support them. You don't have to. Bitwarden.com slash twit. Thank you, Bitwarden, for all you do for all of us. And for supporting Steve Gibson and Security Now. All right. I'm going to massage my temples while you describe memory integrity enforcement. Just, yes. Close your eyes.

[01:55:06] Sit back. Let it just flow over you. Apple's big September 2025 product update announcement last Tuesday included a technical capability advance which garnered much less attention.

[01:55:29] But it was nevertheless perhaps somewhat more important in the long run for Apple's users than their decision, you know, to create, Leo, your new cosmic orange color for the iPhone 17. I'm ready for cosmic orange. I can't wait. I'm so excited.

[01:55:46] Under the covers of any iPhone 17 and its A19 chips lies an advance in hardware technology that goes further than anything Apple, or any company, has previously implemented to prevent coding mistakes from being leveraged into exploitable vulnerabilities that can be used against iPhone users.

[01:56:14] It's worth remembering that if today's incredibly complex code did not contain subtle mistakes, none of these extra fancy prophylactic measures would be required for security. Two weeks ago, everyone needed to update and reboot their iOS and iPad OS devices, and their Macs for that matter.

[01:56:41] After Apple discovered that a subtle flaw in the decompression code for Adobe's DNG lossless image compression format, coupled with a registration bypass flaw in WhatsApp, was being leveraged in the wild, almost certainly by the customers of commercial spyware vendors, those customers largely being governments,

[01:57:05] to install spyware into the iDevices of highly targeted Apple users. Does this affect you and me? No. But Apple is serious about nipping all of this stuff in the bud and being able to claim that they have an utterly bulletproof platform.

[01:57:31] So, were it not for the apparent impossibility of catching all mistakes before they ship, there would be no need to go to these seemingly endless lengths to protect the users of these devices from their abuse. But one of the painful lessons the industry has reluctantly acknowledged, you know, as our understanding of the nature of security has matured, is that mistakes are not disappearing. And they may never.

[01:58:01] Because we're always pushing the boundaries of what's possible for us to build. This created the concept of layered security, described as defense in depth. The idea is to, wherever possible, establish multiple, often redundant, layers of protection, so that the failure of any one or more layers would still leave a system's effective security intact. Furthering this apparently endless effort,

[01:58:31] last Tuesday, Apple's SEAR, S-E-A-R, group, where SEAR stands for Security Engineering and Architecture, informed the world of their latest and greatest hardware-assisted technology that has been incorporated into the A19 processor chips being used by their iPhone 17 and other just-announced devices. Their blog posting was titled

[01:59:00] Memory Integrity Enforcement, A Complete Vision for Memory Safety in Apple Devices. Okay, now I'm going to start by sharing just the first two sentences of their posting, after which we'll need to pause to catch our breath. Apple's team wrote, Memory Integrity Enforcement, MIE, is the culmination of an unprecedented design and engineering effort

[01:59:29] spanning half a decade, as I noted earlier, also commonly known as five years, that combines the unique strengths of Apple Silicon hardware with our advanced operating system security to provide industry-first, always-on, that's one of the keys, memory safety protection across our devices

[01:59:58] without compromising our best-in-class device performance. We believe memory integrity enforcement represents the most significant upgrade to memory safety in the history of consumer operating systems. Okay? That's a long time. At least half a decade. That certainly sets the bar high. Yeah. So the reason we're here today

[02:00:27] with this podcast is to gain an understanding of what Apple has done to justify this claim. Their posting then continues to remind us of the nature of the threats they face and some details of their journey up to this point. I'm going to share that, interrupting to comment and elaborate where needed. They write, there has never been a successful, widespread, malware attack against iPhone.

[02:00:57] Okay? Now, that's true and it's worth remembering. Microsoft might argue that Windows, being a far more open platform compared to Apple's much more controlled environment, faces a much more daunting security challenge. But all of Microsoft's biggest problems

[02:01:25] were of their own making with their own code. All of those early internet worms leveraged fundamental flaws in Microsoft's IIS web server and the many continuing problems with Microsoft's NT LAN manager and their remote desktop protocol. Those were, in every case, enabled by Microsoft's poor coding and insecure protocol designs. Apple

[02:01:54] has objectively done a far better job and their devices are every bit as well connected as Microsoft's. So, Apple continues. The only system level iOS attacks we observe in the wild come from mercenary spyware, which is vastly more complex than regular cyber criminal activity and consumer malware. Mercenary spyware is historically

[02:02:24] associated with state actors and uses exploit chains that cost millions of dollars to target a very small number of specific individuals and their devices. And I'll just note that what Apple is saying is, we don't care, we're going to stop that, even though they've never really had a big problem. They wrote, although the vast majority of users will never be targeted

[02:02:53] in this way, these exploit chains demonstrate some of the most expensive, complex, and advanced attacker capabilities at any given time, and are uniquely deserving of study as we work to protect iPhone against even the most sophisticated threats. Known mercenary spyware chains used against iOS share a common denominator with those targeting Windows and Android. They exploit

[02:03:23] memory safety vulnerabilities which are interchangeable, powerful, and exist throughout the industry. Okay, that's all true. And I'll just say, I could not care less how thin Apple is able to make an iPhone, but the same dogged, crazy, over-the-top passion that they show for making their phones ever thinner,

[02:03:52] a whole different group at Apple is showing the same sort of focus on, darn it, we're not going to let anything attack our devices, period, no matter how much they cost, whoever it is that wants to do it, we're just saying, uh-uh, not here. So, as I noted earlier, despite all the lessons we've learned, even, you know, recently authored code, such as that Adobe DNG file decompressor,

[02:04:22] continues to exhibit exploitable vulnerabilities. So, Apple writes, for Apple, improving memory safety is a broad effort that includes developing with safe languages and deploying mitigations at scale. We created Swift, an easy-to-use memory-safe language, which we employ for new code and targeted component rewrites. In iOS 15, we introduced

[02:04:51] kalloc_type, a secure memory allocator for the kernel, followed in iOS 17 by its user-level counterpart, xzone malloc. These secure allocators take advantage of knowing the type or purpose of allocations so that memory can be organized in a way that makes exploiting most memory corruption vulnerabilities inherently more difficult. In 2018,

[02:05:21] we were the first in the industry to deploy pointer authentication codes, PAC, in the A12 bionic chip to protect code flow integrity in the presence of memory corruption. The strong success of this defensive mechanism and increasing exploitation complexity left no doubt that the deep integration of software and hardware security would be key to addressing some of our greatest security challenges.
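Conceptually, pointer authentication works like a keyed checksum folded into a pointer's unused upper bits, verified before the pointer is used. Here's a rough Python sketch of just the idea; it is purely illustrative, since the real PAC is computed in silicon (ARM's design uses the QARMA cipher, not HMAC) with keys held in CPU registers, and every name below is invented for the example:

```python
import hmac
import hashlib

# Hypothetical stand-ins: real PAC keys live in CPU registers,
# and the real MAC is a hardware cipher, not HMAC-SHA256.
KEY = b"per-boot secret key"
ADDRESS_BITS = 48   # low 48 bits hold the address; upper bits hold the PAC

def sign(pointer: int, context: int) -> int:
    """Fold a truncated MAC of (pointer, context) into the unused upper bits."""
    msg = pointer.to_bytes(8, "little") + context.to_bytes(8, "little")
    pac = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:2], "little")
    return pointer | (pac << ADDRESS_BITS)

def authenticate(signed: int, context: int) -> int:
    """Strip the PAC and re-verify it; a corrupted pointer fails loudly."""
    pointer = signed & ((1 << ADDRESS_BITS) - 1)
    if signed != sign(pointer, context):
        raise MemoryError("pointer authentication failed")
    return pointer

p = sign(0x7FFF1234, context=42)
assert authenticate(p, context=42) == 0x7FFF1234
try:
    # Flip a PAC bit, as memory corruption would:
    authenticate(p ^ (1 << ADDRESS_BITS), context=42)
except MemoryError:
    print("blocked")   # prints "blocked"
```

The design point is that an attacker who corrupts a code pointer also has to forge a matching signature, which requires a key they cannot read.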

[02:05:52] It's worth noting that what they're saying is, we learned something from that A12 Bionic chip experience. They said then, with PAC behind us, we immediately began design and evaluation work to find the most effective way to build sophisticated memory safety capabilities right into Apple silicon. Okay, so to put this into perspective, the earliest efforts at building barriers

[02:06:22] around memory to protect against its misuse were implemented in software. They were useful and effective, but they turned out to fall short of being absolute. As a consequence, while the bar was meaningfully raised, this just meant that the bad guys needed to work a lot harder. You know, we talked about address space layout randomization, for example, and that in turn, with the bad guys needing to work harder,

[02:06:52] the governments needed to pay more as exploits became significantly more rarefied. Unfortunately, for journalists, political activists, and other targeted individuals, governments have no shortage of funds, nor willingness to pay a competitive price. You know, after adding things like address space layout randomization, kernel address space layout randomization, stack cookies, reference counting, and other software-based

[02:07:21] mitigations, all of which, I'll note, we've covered in previous years of this podcast, were eventually worked around by highly motivated attackers. So, the ante had been upped, and it was time to start adding explicit anti-exploitation features to the underlying hardware. Apple wrote, ARM published the Memory Tagging

[02:07:50] Extension, MTE, specification, in 2019. Okay, so that was six years ago. As a tool for hardware to help find memory corruption bugs, MTE is at its core a memory tagging and tag checking system where every memory allocation is tagged with a secret. It's a four-bit secret. The hardware guarantees that later requests

[02:08:20] to access memory are granted only if the request contains the correct secret. If the secrets don't match, the app crashes and the event is logged. This allows developers, again, developers, to identify memory corruption bugs immediately as they occur. Okay, so again, I'm going to pause to highlight this distinction because it's important. ARM's MTE

[02:08:49] was introduced, as I said, six years ago in 2019 with the ARM version 8.5a architecture. Its intention, design, and focus was to assist developers, both the software tools like debuggers and the people using them during code development. Running code under a debugger that would attempt to

[02:09:19] verify and validate every memory access would introduce prohibitive overhead. We'll be talking a lot about overhead in a bit. Everything is about overhead. So ARM's MTE was added to the ARM architecture to allow the hardware while running at speed, full speed, to detect instances of use after free and out of bounds accesses.

[02:09:49] And we'll explain how in a minute. It's not possible to do this at speed without hardware assistance, because you'd have to check every reference to memory and you just can't; it has to be done in the hardware. By tagging memory allocations with what were known as colors, consisting of four-bit tags, so that different allocations receive different coloring, and then checking against those pointer tags at

[02:10:19] runtime, MTE was able to provide a low-overhead, always-available bug-trapping mechanism in hardware. Since we're going to be talking about tagging a lot, let me clarify what's going on here. When an application running on behalf of its user, or some process in the kernel, needs the use of a block of memory,

[02:11:03] for decades the way this has worked is that a memory manager would locate some free memory, increment that memory's usage count to show that it's now in use, and then return a pointer to the requested memory to its requester. From that point on, that memory would be considered to be owned by the requesting application, and it would be free to do anything with it that it wished.
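That classic untagged flow, and the tag check MTE layers on top of it, can be modeled with a toy allocator. This is illustrative Python only, not how any real allocator or ARM's hardware works; all of the names here are invented for the sketch:

```python
import secrets

class TaggedHeap:
    """Toy model of MTE-style tagging (illustrative only): every allocation
    gets a random 4-bit tag, and every access must present the matching tag
    or the access is trapped."""

    def __init__(self, size: int):
        self.memory = bytearray(size)
        self.tags = {}          # base address -> 4-bit tag ("color")
        self.next_free = 0

    def malloc(self, size: int):
        base = self.next_free
        self.next_free += size
        tag = secrets.randbelow(16)          # the 4-bit secret
        self.tags[base] = tag
        return base, tag                     # a "tagged pointer"

    def load(self, base: int, tag: int) -> int:
        # Synchronous check: verified on every access, before data is returned.
        if self.tags.get(base) != tag:
            raise MemoryError("tag check failed: process terminated")
        return self.memory[base]

heap = TaggedHeap(4096)
ptr, tag = heap.malloc(16)
heap.load(ptr, tag)                          # correct tag: access granted
try:
    heap.load(ptr, (tag + 1) % 16)           # wrong tag: trapped immediately
except MemoryError:
    print("blocked")   # prints "blocked"
```

The point of the model is the check on every load: knowing where the memory lives is no longer enough, the 4-bit tag has to travel with the pointer and match the allocation's.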

[02:11:32] Unfortunately, the flexibility of access that was required meant that the memory's ownership could not be enforced. Any other process that knew where the memory was located could also access it. This is what the introduction of MTE changed. Under ARM's Memory Tagging Extension, every allocation carries a tag,

[02:12:02] that color, a 4-bit secret key that would need to be presented any time that memory was accessed. The theory was that while bad guys might be able to arrange to determine where some memory was that had recently been freed or might still be in use, requiring that they also present the correct tag would stop them cold.

[02:12:35] Even so, MTE alone proved to be insufficient for Apple's needs. They wrote, we conducted a deep evaluation and research process to determine whether MTE as designed would meet our goals for hardware-assisted memory safety. Our analysis found that when employed as a real-time defensive measure, the original ARM

[02:13:04] MTE release exhibited weaknesses that were unacceptable to us, and we worked with ARM to address these shortcomings in the new Enhanced Memory Tagging Extension, EMTE, specification released in 2022. So three years after the 2019 release of MTE, working with Apple, ARM released a

[02:13:34] new specification, the Enhanced Memory Tagging Extension, EMTE, in 2022. They said, more importantly, our analysis showed that while EMTE had great potential as specified, a rigorous implementation with deep hardware and operating system support could be a breakthrough that provides an extraordinary new security mechanism. They said, consider that

[02:14:03] MTE can be configured to report memory corruption either synchronously or asynchronously. In the latter mode, memory corruption does not immediately raise an exception, leaving a race window open for attackers. We would not implement such a mechanism. We believe memory safety protections need to be strictly synchronous, on by default,

[02:14:33] and working continuously. But supporting always-on synchronous MTE across key attack surfaces while preserving a great high-performance user experience is extremely demanding for hardware to support. In addition, for MTE to provide memory safety in an adversarial context, we would need to finely tune the operating system to defend the new semantics and the

[02:15:02] confidentiality of memory tags on which MTE relies. Again, I'll just pause to say that MTE, remember, was designed to help developers and debuggers. It was not meant as a proactive security measure. So this exploration that Apple talked about, this deep analysis, was asking, can we use ARM's MTE, released in

[02:15:32] ARM 8.5a as a security measure? And they said, unfortunately, no, it comes up short. They said, ultimately, we determined that to deliver truly best-in-class memory safety, we would carry out a massive engineering effort spanning all of Apple, including updates to Apple Silicon, our operating systems, and our software frameworks. This effort, together with our highly successful

[02:16:01] secure memory allocator work, would transform MTE from a helpful debugging tool into a groundbreaking new security feature. Today, we're introducing the culmination of this effort, Memory Integrity Enforcement, MIE, our comprehensive memory safety defense for Apple platforms. Memory Integrity Enforcement is built

[02:16:31] on the robust foundation provided by our secure memory allocators coupled with Enhanced Memory Tagging Extension, that's the EMTE, from 2022, in synchronous mode, and supported by extensive tag confidentiality enforcement policies, again, for use against malware. MIE is built right into

[02:17:00] Apple hardware and software in all models of iPhone 17 and iPhone Air, and offers unparalleled always-on memory safety protection for our key attack surfaces including the kernel while maintaining the power and performance that users expect. In addition, we're making EMTE available to all Apple developers in Xcode as part of the new enhanced security feature

[02:17:29] that we released earlier this year during the Worldwide Developer Conference. The rest of this post dives into the intensive engineering effort required to design and validate memory integrity enforcement. Okay, so let's get all these abbreviations straight. Originally, to aid in debugging, ARM designed and introduced MTE, memory tagged

[02:17:59] extension, in 2019. But MTE was never designed to be used in an adversarial environment. It was designed to be a debugging aid. So, for example, it was acceptable if it operated asynchronously from the code, notifying a developer of a violation sometime after the fact. That was okay, because they could go back and see what had caused that. Acceptable for a debugger, but in an adversarial setting,

[02:18:29] the damage might have already been done by the time an exception was raised. Thus, Apple's need for synchronous checking. That is, the instant you try to access memory, if you shouldn't be doing it, your butt is terminated. So, what they found was after experiencing for themselves MTE's limitations, three years later in 2022, they worked closely with

[02:18:58] ARM on the development and implementation of an extension to that, EMTE, their enhanced memory tagging extension. Original MTE also allowed non-tagged memory regions. That is, you know, it's like, okay, if you're not going to tag this, that's fine. You know, for example, global or static allocations or untagged regions could be

[02:19:28] accessed without any tag checks, meaning that attackers could exploit out-of-bounds writes into such regions. EMTE addressed this by requiring access from a tagged memory region into non-tagged memory to respect the tag knowledge. This prevented untagged memory from being used as a tag bypass. Again, Apple just looked at every single aspect of this and just said,

[02:19:58] you know, no, no, no, no, no, we need to fix these things. I mean, this is, to me, this represents them really, really getting serious about, you know, nipping this stuff once and for all. EMTE also brings more comprehensive enforcement of tag mismatches, especially in synchronous mode, so that buffer overflows and use after free bugs are blocked immediately, not just signaled later

[02:20:28] or more coarsely. So much more granular control and, as I said, synchronous meaning the instant something tries to make a fetch, if it should not be doing so, the process is terminated and an exception is logged. So there's a lot more to the improvements that EMTE brought over its predecessor MTE, but with their A19 ARM chips, Apple has already moved on to their next generation of even more

[02:20:58] rigorous protections. Leo, let's take our final break and then we're going to continue looking at what Apple has done here. Really interesting stuff. Yeah, this is a take no prisoners, we're through fooling around here, we have our own silicon, we're comfortable with how ARM technology works, we're going to extend this and make what they called a significant commitment in silicon

[02:21:27] in order to just end this whole class of problems. Darren Oki asked this question, maybe it's a dumb question, he says, why don't you just wipe the memory after it's freed, zero it all out each time? But I guess this is not just what you're working with, it's overflows too, right? So yes, it's overflows, and OSes do get around to

[02:21:56] zeroing memory after it's been freed. Right away, right? Exactly, and zeroing right away would introduce a huge amount of overhead: release a large buffer and everything would have to stop while you overwrote it with zeros. So what happens is buffers that are released are put on a dirty chain, and then the operating system uses free time to go zero them and move them over to the ready-to-allocate chain,

[02:22:25] and then all of those free memories are aggregated and consolidated. So there's a whole bunch of stuff going on behind the scenes. That's actually like in our house, because Lisa says I should wash dishes while I'm cooking, but I say I'm going to cook and then I'm going to wash the dishes afterwards. I think that's more efficient personally. Yeah, I would tend to go for the same approach. We'll get back to this, it's really interesting, and very impressive really that

[02:22:55] Apple would say, you know, we're going to tackle this. It is a huge investment. Yeah, that's exciting. We'll find out what Apple did do to enhance MTE in just a moment, but first a word from our sponsor, Melissa, the trusted data quality expert since 1985. Melissa's address validation app is available for merchants in the Shopify app store now. Oh, this is good news. This means if you're using Shopify, you can enhance your

[02:23:25] business's fulfillment and incidentally keep your customers happy with Melissa. Enhanced address correction is certified by leading postal authorities, not just in the U.S., but worldwide. It corrects and standardizes addresses in more than 240 countries and territories. And there's also smart alerts, which is great. It immediately alerts the customer if their information is incorrect or if there's something missing so customers can update that before the order is processed, before that bad data

[02:23:55] gets into your data. A business of any size would benefit from Melissa; their data quality expertise goes far beyond address validation. Sure, that's what they started with, but they do so much more. Data cleansing and validation are essential in fields like healthcare. Imagine this, 2-4% of contact data in healthcare is outdated every month. Your patients are disappearing. Millions of patient records in motion demand precision, which Melissa delivers.

[02:24:25] Boy, we've come a long way now with digital health systems, right? At least we can do this. In the past, it was paper. I don't even know what you would do. But now you can use Melissa's enrichment as part of your data management strategy. This way healthcare organizations can build a more comprehensive view of every patient. By the way, that also helps in predictive analytics, allowing providers to identify patterns in patient behavior or medical needs that can then inform preventative care. Makes you a better

[02:24:54] doctor. Here's another example. eToro's vision was to open up global markets for everyone to trade and invest simply and transparently. But to do that, they needed a streamlined system for identity verification. Because as you know, in pretty much every jurisdiction there are know your customer rules and so forth. After partnering with Melissa for electronic identity verification, eToro received the additional benefit of Melissa's auditor report containing details and an explanation of how

[02:25:24] each user was verified, perfect for the local regulators. The eToro business analyst shared, quote, we find electronic verification is the way to go because it makes the user's life easier. Users register faster and can start using our platform right away. Development of the auditor report was an added benefit of working with Melissa. They knew we needed an audit trail and devised a simple means for us to generate it for whomever needs it whenever they need it. So you can see in healthcare,

[02:25:54] in financial services, there's so many areas where Melissa is more than useful. It's vital. And of course your data is safe, it's compliant, it's secure with Melissa. Melissa's solutions and services of course are GDPR and CCPA compliant. They're ISO 27001 certified. They meet SOC 2 and HIPAA HITRUST standards for information security management. All of these things are so important in every

[02:26:23] business now, right? Get started today with 1,000 records cleaned for free at melissa.com slash twit. That's melissa.com slash twit. Thank you, Melissa, for your support of Security Now. And now, okay, you got to cool off a little bit, have a little tea. I'm not talking to you, I'm talking to our audience. These four-bit tags work, Leo. All right. So Apple's MIE

[02:26:53] can best be seen as an evolution of EMTE, the enhanced MTE, where MIE adds various final touches to EMTE's already very useful protections. So at first glance, for example, these four-bit tags might not appear to be very useful because, you know, four bits, having just 16 possible states,

[02:27:22] cannot contain much security entropy, but the way they're employed is very clever. Tags are applied at a fixed granularity; in ARM's design, memory is tagged in 16-byte granules. One of the guarantees made by the system's memory allocator, now under MIE, is that adjacent allocations of memory will always have differing tags. This

[02:27:52] cleverly nips buffer overflows in the bud. If some adversary were able to arrange to compromise an application to obtain access to both its memory and its associated memory access tag, it would be unable to read or write outside of the application's allocated memory region because those adjacent buffer overflow regions would be guaranteed to be using

[02:28:21] a differing tag, with neither the benign application nor its malicious compromiser having any way of knowing or predicting any adjoining allocation's differing four-bit tag value. Thus, the infamous buffer overwrites are stopped cold. The equally pernicious and ubiquitous use-after-free vulnerabilities are similarly prevented, and this actually addresses

[02:28:50] the question that the listener had a second ago, Leo. Use-after-free vulnerabilities are prevented by having the updated EMTE memory allocator, now Apple's MIE memory allocator, change the access tags whenever memory is freed.
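Those two allocator guarantees, adjacent allocations never sharing a tag and freed memory being retagged, can be sketched in a few lines. Again, this is a conceptual toy in Python, not Apple's implementation; every name is invented for the illustration:

```python
import secrets

class MIEStyleHeap:
    """Toy sketch of two MIE/EMTE-style guarantees (illustrative only):
    adjacent allocations always carry differing tags, and freed memory
    is retagged so stale pointers stop working."""

    def __init__(self):
        self.tags = {}          # slot index -> current 4-bit tag
        self.next_slot = 0

    def malloc(self):
        slot = self.next_slot
        self.next_slot += 1
        tag = secrets.randbelow(16)
        if tag == self.tags.get(slot - 1):   # neighbors must never share a tag
            tag = (tag + 1) % 16
        self.tags[slot] = tag
        return slot, tag

    def free(self, slot, tag):
        self.check(slot, tag)
        # Retag on free: any nonzero offset guarantees the old tag is now stale.
        self.tags[slot] = (tag + 1 + secrets.randbelow(15)) % 16

    def check(self, slot, tag):
        if self.tags.get(slot) != tag:
            raise MemoryError("tag mismatch: process terminated")

heap = MIEStyleHeap()
a = heap.malloc()
b = heap.malloc()
assert heap.tags[a[0]] != heap.tags[b[0]]    # overflow from a into b would mismatch
heap.free(*a)
try:
    heap.check(*a)                           # use-after-free: stale tag is trapped
except MemoryError:
    print("blocked")   # prints "blocked"
```

In this toy, an overflow from one allocation into its neighbor presents the wrong tag, and a stale pointer used after free likewise no longer matches.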

[02:29:19] Thus, in the same way, if an application had been compromised so that malware obtains access to the memory pointer and the tag of its memory after it has been released back to the system, any subsequent attempt by the malware to use that memory after it's been freed will be trapped and blocked immediately. No more use of memory after being freed. So,

[02:29:49] if you'll pardon the pun, armed with this bit of background, Apple's further explanations will make some more sense. Apple wrote, a key weakness of the original MTE specification is that access to non-tagged memory, such as global variables, is not checked by the hardware. This means attackers don't have to face as many defensive constraints when attempting to control core application configuration

[02:30:18] and state. With enhanced MTE, we instead specify that accessing non-tagged memory, like these global variables, from a tagged memory region, meaning one under control, requires knowing that region's tag, making it significantly harder for attackers to turn out-of-bounds bugs in dynamic tagged memory into a way to sidestep EMTE

[02:30:48] by directly modifying non-tagged allocations. And they said, finally, we developed tag confidentiality enforcement to protect the implementation of our secure allocators from technical threats and to guard the confidentiality of EMTE tags, including against side channel and speculative execution attacks. Our typed allocators and EMTE both rely on

[02:31:17] confidentiality of kernel data structures from user applications and of the tags chosen by the allocator. Attackers might attempt to defeat EMTE and in turn memory integrity enforcement, Apple's newest technology, by revealing these secrets. To protect the kernel allocator backing store and tag storage, we use the secure page table monitor, which provides strong guarantees even in the presence of a kernel

[02:31:47] compromise. We also ensure that when the kernel accesses memory on behalf of an application, it's subject to the same tag checking rules as user applications. So Apple not only worked with ARM to create EMTE, but Apple was able to obtain sufficient real-world experience with

[02:32:17] EMTE, examining the many ways that it could be, and still was being, bypassed in the field, that they then further enhanced that already enhanced memory tagging extension to create MIE. I guess they didn't want to go with EEMTE, enhanced EMTE. So, anyway, Apple has clearly essentially taken

[02:32:47] the second generation of MTE, known as EMTE, and moved it to always-on synchronous and as strong as possible. If we were to summarize just sort of in a bullet-pointed fashion the things they did, they made EMTE synchronous so that tag verification occurs immediately before

[02:33:17] memory accesses and any tag mismatch crashes the process to prevent its exploitation. So, this eliminates opportunities where malicious behavior might slip by due to delayed or asynchronous checking, which, due to the overhead, was the way MTE would be used. They also enforce always-on system-wide deployment. MIE is enabled by default

[02:33:47] across Apple's entire kernel and for more than 70 userland processes. Previous and other systems were forced to rely on optional or per-app memory tagging, which, unfortunately, significantly reduced the protection offered. They have secure typed allocators, where Apple's memory allocators have been updated to use type information to isolate objects by type to

[02:34:16] reduce any type-confusion-style overlaps and help with the placement of allocations in memory so that different types get different tags and are less likely to be misused. They also handle re-tagging and memory reuse safely. As I noted, when memory is freed and reused, Apple's system ensures that the freed memory's tag is changed so that stale pointers with old tags will no

[02:34:46] longer match. They also have protection for overflow across adjacent allocations by assuring that adjoining allocations have differing tags. They also no longer allow unchecked access of non-tagged memory: accessing non-tagged memory from a tagged memory region requires knowing that region's tag, so they foreclose that too. And their hardware enforces

[02:35:16] the confidentiality of these tags, which was never done before, because MTE was not really focused on protecting against malicious abuse. It was always focused on helping debuggers to catch bugs. All of this is now done down in the hardware and silicon. Because doing any of this in software would impose prohibitive performance overhead,

[02:35:45] they moved everything that was necessary for MIE down into hardware for the A19 and A19 Pro chips. So I'm just very, very impressed with the scale of Apple's commitment. It is not difficult to imagine what the team behind MIE, who had just spent the last five years of their lives perfecting all of this new super

[02:36:15] hardening technology, were probably feeling when, just two weeks ago, another successful exploit was made against the hardware they had already moved well past, hardware they were poised to replace, as they did last week, with an entirely new system that would almost certainly no longer fall victim to exactly that

[02:36:44] exploit, nor probably to nearly any other attack. As I said, it is the case that not every type of security problem is a use-after-free or a buffer overflow or some sort of memory exploit, but I don't know what the percentage is, 95% of them probably are. I think no one is ever going to suggest that there will never be another

[02:37:14] successful system-level exploit against Apple's latest or future iOS and iPad platforms. But there is a distinct possibility that that could be the case. We heard, as I mentioned before, a while ago from a past early Apple hobbyist and exploit developer who was lamenting that he had long ago hung up his spurs and was no longer attempting to find iPhone exploits because

[02:37:43] they had become insanely difficult to locate and engineer. There will come a time, and we might now be there today, when the cost to develop any new exploit, if it's even possible, has become so high that even the highest and most capable exploit developers join that earlier hacker in giving up on Apple and

[02:38:13] switching to more attackable platforms. Because Apple has just gone all the way and said no. Even though only a tiny percentage of our users are ever being targeted, that's not okay. Of course, that means the people who will still attack Apple are the most strongly motivated actors from nation-states. But I'm saying, even at this point, those are the only people who have

[02:38:45] been attacking Apple. Right. Is this enough to deter them, you think? Yes. Yeah. Interesting. I think what it means is we're going to be rebooting our phones for software security updates much less often. Great. Because Apple won't be in a panic needing to protect us against the latest zero day. We're just going to have many, many fewer zero days.

[02:39:16] As you know, Apple has locked things down so much it's hard for security researchers to actually work on iPhones. But they have opened up a program. In fact, they just opened up applications for the new phones for security researchers to get specially modified iPhones that are less protected so that they can at least work on these things. So I really admire the way Apple has gone. I am so impressed. I mean, this is a

[02:39:45] level of commitment no other company has made. Yeah. Fantastic. Well, that's what happens when you make your own silicon. You can do more. And thank goodness that their decision has been to do more and not save more and charge more. Yeah, they said an unprecedented percentage of their silicon real estate is now devoted just to this.

[02:40:15] Not to making it faster, not to more cores and more neural nonsense. It's, no, this is how we're tagging the memory, and we're going to stop you cold if you don't have the magic token for doing so. And bad guys can't get that. One thing I did notice that worried me was that they have enhanced the branch prediction capabilities. They are not abandoning branch prediction, which we know is one of

[02:40:45] the sources for these timing attacks, like Spectre. Would this help in that kind of event? No, this is a different kind of problem. I think we're going to have to see. Those are side channel attacks, and they are saying that this is also proof against side channel attacks; they have hardened this against that. So the memory leaks, that's what's happening: they leak in these branches. Yes. It's the side channel attack that

[02:41:14] gets the malware the pointer that it can then abuse. So if it can't abuse it, ah, brilliant. It doesn't matter if the bad guys get the pointer. Wow. Thank you for explaining this. I'm venturing that there are very few places you could get this kind of information. You could read the white paper for yourself, but it's going to take somebody like Steve to explain its implications, somebody who's been doing this for a long time and knows exactly where the bodies are buried.

[02:41:44] Good on Apple. Good on Apple. And thank you for explaining this. I'm very impressed. You know, what I love is you don't shy away from the really technical stuff. And you know what? I think our audience appreciates that. That's where the fun is. Yeah. Fantastic. Are you going to buy the new iPhone? No. Well, the reason is, as I mentioned, I did get a 16 last spring when I was worried that China's tariffs

[02:42:14] might cause a problem. Now, what that means is that my trade-in value would be high, so it wouldn't cost me that much to go from a 16 to a 17. I got offered $700 for my iPhone 16 Pro Max, no, actually it was $600, because I got it with 512 gigs, which brought the price of a new iPhone way down. And I thought, you know, at 600 bucks off, maybe not such a

[02:42:44] bad idea. I like the fact that they'll take those trade-ins. Yeah. And I still have, this is by my desktop, and there's the picture of my lovely wife. Yes. I still use my iPhone 12, because this is the one that I had been using, and I was fine with it until I worried that prices of iPhones might go through the roof during those early China tariff

[02:43:13] scares. Yeah, at the start of the Trump administration. So I bought the 16 for that reason. I just updated this to iOS 26, and based on all of the negative feedback, or, you know, the reviews I've been hearing about the glass... You didn't get Liquid Glass. I didn't get Liquid Glass, because the phone is too old. Yeah, there's a secret blessing hidden in there. Well, Steve, thank you so much for this. This is the kind of coverage we really appreciate.

[02:43:43] If you like this, I hope you will support Steve. There are a couple of ways to do that. Of course, we'd love it if you joined Club TWiT, because that supports everything we do; 25% of our operating costs now come from club members like you. If you're not a member: twit.tv/clubtwit. You get ad-free versions of this show and all the shows, specials, access to the Club TWiT Discord, and more. Twit.tv/clubtwit. You can also support Steve by going to his site, grc.com,

and picking up a copy of SpinRite. That's his actual bread and butter; this is how he makes a living. 6.1 is the current version. He's very generous: if you own any prior copy of SpinRite, you get a free upgrade, so get that upgrade. But if you don't, now's the time to get on the SpinRite bandwagon. It's the world's best mass storage maintenance, recovery, and performance-enhancing utility: grc.com/spinrite. But there are other things you

[02:44:43] can do there. In fact, once you get to the site, buy your copy of SpinRite and browse around; there's a lot of cool stuff. For instance, ShieldsUP!, his tool that's so useful for making sure that your router is properly configured. Lots of things like Never10, which keeps your Windows machine from upgrading against your will. A lot of freebies, lots of extra information. And if you have a comment, a suggestion, or, even more importantly, you want to submit a Picture of the Week

[02:45:12] for the show, you can get on his email good-graces list, that's what I'm going to call it, his good-graces list, by going to grc.com/email. Give him your email address; he'll validate it, making sure that you are not a spammer. I don't think spammers are going to jump through that hoop. That way you can email him and he won't put you in the spam bucket. You'll notice when you're there, though, that there are two unchecked checkboxes for two newsletters. By default you're unsubscribed, but do check them.

[02:45:42] One is, of course, the weekly Security Now newsletter, which is very complete, with links and pictures. Somebody in the YouTube chat says, I wish Steve would do these with a whiteboard, which we could set you up with, if you wanted a telestrator. We could set you up; we used to in the Tech TV days. That's what I did: I had Steve's chalkboard up. If you want, I'll work on getting a telestrator for you. Alex Lindsay has a very good setup that you could illustrate with. I think it would be distracting.

[02:46:13] But if you want that kind of extra oomph... Two things you should do. One is, go there to the email list and subscribe to the Security Now newsletter, because that's got a lot of stuff, including images. You could also check the other box, which is a very infrequent newsletter; he's only sent out one email this whole time, but it will announce new products. We're waiting with great anticipation for his DNS Benchmark Pro any day now, and you'll get an email when that is available for download.

[02:46:44] He also has the show itself; I shouldn't give that short shrift. He's got unique versions: a crazy-small 16-kilobit version, which is a little scratchy but it's small, and the full-bandwidth 64-kilobit version. He's got the show notes, and he's got the incredible transcripts written by Elaine Farris, an actual human being who transcribes all these shows. Those take a few days, but once they're up there you can read along as you listen, and you can use them for searching. All of that

[02:47:13] at grc.com. If you want video of the show or the 128-kilobit audio, come to... oops, Apple's doing a little thumbs up. It's rocking. It's rolling. There we go, laser light show. Go to the TWiT website, twit.tv/sn, and you can subscribe. Actually, you can just download it directly; audio and video are there. If you want to subscribe, get a podcast client, then you can subscribe and get it

[02:47:42] automatically. Again, audio or video. There's also, and this is important: if you hear something and you think, you know, I've got to pass this along to our IT department or the boss or whatever, there's a YouTube channel dedicated to Security Now, and that's a great way to send clips of the show to somebody else. YouTube makes that easy, and everybody, but everybody, can watch a YouTube video. I think that's all the busywork I need to do. We do the show, and you can watch it live every Tuesday right after MacBreak Weekly. That should end up being

[02:48:12] around 1:30 Pacific, 4:30 Eastern, 20:30 UTC. The live streams, there are eight of them, include the Club TWiT Discord for the members, but they're also open to all on YouTube, Twitch, TikTok, Facebook, LinkedIn, X, and Kick. You can go to any of those, watch, and chat with us; I'm watching the chat. We love having you in the chat room, but you don't have to. Like I said, you can download it later, or even subscribe and listen at your leisure. Steve,

[02:48:42] have a great week. I just saw a list. Remember Michael Swaine? Yeah, of course. He just published a list from a 1984 hackers conference that you were at. Do you remember this, now 40 years ago? And man, the names of the people at

this list. You know, here we are. Esther Dyson invited me to speak at one of... is that the one? It might be, I don't know. Let me see if I can find his post, because I can't remember the name. But yeah, I remember of course the wonderful Esther Dyson, and it was right about when the Mac came out. So there were a lot of people there from the Apple group, Bill Atkinson.

[02:49:42] Woz was there; Jobs wasn't. But I was just looking at the names on this list and I thought... Roger Von Eck had a conference called Success in Software, and I also spoke at that one, talking about software as an art form. Might have been that. He said it was like a hacker conference; I can't find the post now. But man, the names of the people: Bob Frankston was there. I mean,

[02:50:12] just all the legendary names in 1984. Frankston was also a speaker at Esther's conference, so that might have been it. So maybe it was Esther's, yeah; I can't remember. Frankston came out dressed in animal skins with a musket, because he was a pioneer. I was like, okay, and Esther loved that kind of crap. In 1984 he was considered a pioneer because, years earlier, he had created, what was it, VisiCalc? Yeah, I think it was, was it VisiCalc, or was it,

yeah, it wasn't Lotus 1-2-3, it was VisiCalc, yeah. Well, I can't find it. I wanted to read you the list, because it was a who's who of computer history. And man, you were right there, right in the middle of it. Just wild, just amazing. All right, enough of that. You get going, go have fun with your wife. We'll see you back here next Tuesday. Same to you. Thank you, everybody. Bye.

Security Now, Apple A19 chip security, Apple security update, TWiT, ransomware attacks, use-after-free vulnerabilities, Enhanced Memory Tagging Extension, Memory Integrity Enforcement, Leo Laporte, ARM's MTE, Steve Gibson, buffer overrun prevention