SN 1077: A Browser AI API? - End of Bug Bounties?

Google is sneaking a massive 4.7GB AI model into Chrome, and Mozilla is fighting back as the future of browsers threatens to turn into an AI arms race. Find out what's really happening behind this push and why it's setting off alarm bells across the web.

  • Hackers AI-code a portal, forget to add authentication.
  • The UK's NCSC issues a Mythos warning. Where's CISA?
  • Another (of many) Linux local privilege escalations.
  • AI may be spelling the end of bug bounties.
  • Anthropic releases "Claude Security" mini-Mythos.
  • ChatGPT gets very serious about login security.
  • Syncthing's SyncTrayzor v1 abandoned; v2 created.
  • Google drops an AI API into Chrome; Mozilla objects.

Show Notes - https://www.grc.com/sn/SN-1077-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:

[00:00:00] Cybersecurity Now. Steve Gibson is here. He is armed with the knowledge that Google is now downloading 4.7 gigabytes when you download Chrome. What is it? A local AI model. Steve talks about its implications next on Security Now. This episode is brought to you by OutSystems, a leading AI development platform for the enterprise. Organizations all over the world are creating custom apps and AI agents on the OutSystems platform. And with good reason.

[00:00:28] Build, run and govern apps and agents on one unified platform. Innovate at the speed of AI without compromising quality or control. Trusted by thousands of enterprises worldwide for mission critical apps. Teams of any size and technical depth can use OutSystems to build, deploy and manage AI apps and agents quickly and effectively without compromising reliability and security.

[00:00:52] With OutSystems, you can accelerate ideas from concept to completion. It's the leading AI development platform that's unified, agile and enterprise proven. Allowing you to build your agentic future with AI solutions deeply integrated into your architecture. OutSystems. Build your agentic future. Learn more at OutSystems.com slash TWIT. That's OutSystems.com slash TWIT.

[00:01:42] But that doesn't mean I'm not going to get Steve Gibson on the horn and talk about security, because I know you need your fix. Hello, Mr. G. You know, Leo, you look a little more tan than you did the last time we saw you. I am. See, my hand is light, but my face is a little dark. Do Mai Tais increase skin pigmentation?

[00:02:09] Maybe that's what it is. Maybe that's what it is. Yeah, we're on vacation, but I still want to do the shows, and so I've set up, if you could only see this kooky setup, I am outside on the lanai. You hear the exotic birds tweeting in the back. There are some exotic birds: there are myna birds, there are house sparrows, and there's a bird that looks like a little chicken,

[00:02:35] called a falcon, I can't remember the name of it, but, a chicken bird, and it's very noisy, so you'll know if it decides... But the sparrows are very aggressive. They might think I have something to give them, so they might be coming up here and sitting on my shoulder. Well, it all adds to the ambiance that we have, with you and Lisa on the Big Island. It's so beautiful. Have you been

[00:02:58] to Hawaii, Steve? Oh yeah, I had the second half of my honeymoon in Hawaii. I love Hawaii, I do. So what's coming up on Security Now this week? So, episode 1077. It's funny, every so often I think about 1077, which sort of puts the infamous 999 into context. Yeah. It's been a while. Yeah.

[00:03:22] Actually, it's been more than a year. So there were two main topics which were contesting to win the coveted title of the podcast this week. Google's arguably premature move to build AI into Chrome ended up winning, because Mozilla has said,

[00:03:52] not so fast. Yeah. But we've got a lot of good things to talk about. It turns out that some hackers used AI to code up a portal for stolen credit card verification. They forgot to ask it to add authentication, so, whoops. Also, the UK's security group, the NCSC,

[00:04:21] has issued their own Mythos warning, which caused me to wonder: where is CISA? Why haven't we heard anything from CISA? We're going to touch on that. We've got another of many recent Linux local privilege escalations. This one is bad, and it's affected Linux for years. And yes, AI found it.

[00:04:44] Also some interesting commentary about the ground shifting under AI and vulnerability research: how it's looking like it may spell the end of bug bounties, and why that is probably the right thing to have happen. Also, Anthropic has released what they call Claude Security, as a mini-Mythos.

[00:05:11] ChatGPT has made some changes which demonstrate it's getting very serious about login security. I want to make a comment about something I discovered. Since we last talked about the end of life of SyncTrayzor version 1, which is what I use to sort of bundle Syncthing into a nice little applet for Windows, there's now a replacement for it. And then we're going to talk

[00:05:41] about how Google has sort of surprised everyone by just saying, we think it's time that we add AI support in JavaScript. So lots of fun things to talk about, and of course a great Picture of the Week. So yeah. Oh, and there are a couple of things that happened just now, just so that our listeners know that I'm

[00:06:04] aware of the fact that DigiCert suffered a major breach. Oh! Which allowed 30 EV code signing certificates to get minted behind their back, oh, that's not good, and used. However, their disclosure is being called a reference, a state-of-the-art, this-is-the-way-you-do-it, if you're going to

[00:06:29] say what happened, if you're going to share your post-mortem with the industry. They just updated it 10 minutes ago, so it's still a little bit in flux. We'll take a look at what they had to say: things that went right, things that went wrong, and what they learned. It ended up being a social hack. A malicious screensaver, of all things, got onto two of their tech support members'

[00:06:55] PCs, and it wasn't detected due to a CrowdStrike endpoint security misconfiguration. So anyway, I'm all up to speed on it, but I just didn't have a chance... well, actually, we're still learning a lot and it's still in flux, so we'll have good coverage of that next week. I just wanted to let everybody know that I was aware of it. So we'll take our first break, we'll look at the Picture of the Week, and then we'll get into all this. It really just shows you how anybody

[00:07:23] is vulnerable to this. And as you said when we were at the ThreatLocker Zero Trust World, the threat's coming from inside the house: a network engineer who put a screensaver on his system, and suddenly you're compromised. That's terrible. All right, well, I'm not going to do the commercials from here in Hawaii, I'm on vacation, so I'm going to let the Leo in Petaluma take this one, and then we'll be back with the Picture of the Week. Right, white-skinned Leo? Yes. This episode of

[00:07:50] Security Now is brought to you by Zscaler, the world's largest cloud security platform. You know, the potential rewards of AI are too great to ignore, but so are the risks: loss of sensitive data and attacks against enterprise-managed AI. Generative AI increases opportunities for threat actors, helping them to rapidly create phishing lures, write malicious code, and automate data extraction. There were

[00:08:14] 1.3 million instances of Social Security numbers leaked to AI applications. It's time for a modern approach. Zscaler's Zero Trust plus AI removes your attack surface, secures your data everywhere, safeguards your use of public and private AI, and protects against ransomware and AI-powered phishing attacks. Don't believe me? Check out what Siva, the Director of Security and Infrastructure at

[00:08:39] Wara, says about using Zscaler. AI provides tremendous opportunities, but it also brings tremendous security concerns when it comes to data privacy and data security. The benefit of Zscaler, with ZIA rolled out for us right now, is giving us insight into how our employees are using various gen AI tools: the ability to monitor the activity, make sure that what we consider confidential and sensitive

[00:09:04] information, according to, you know, the company's data classification, does not get fed into the public LLM models, et cetera. Thank you, Siva. With Zero Trust plus AI, you can thrive in the AI era. You can stay ahead of the competition. You can remain resilient even as threats and risks evolve. Learn more at zscaler.com slash security. That's zscaler.com slash security. Now back to Steve and Security Now. Thank you, Leo.

[00:09:35] You know, just to explain behind the scenes: we weren't sure this was going to work at all, and so we thought, well, I'd better pre-record the commercials in case, I don't know, Micah had to jump in or something. And I brought a Starlink Mini and all sorts of backup stuff, and it turned out, wow, who knew, in Hawaii they've got cable modems, high-speed internet. I didn't need to bring anything. So it isn't a complete proof of concept of your ability to roam anywhere on the globe

[00:10:04] and use it. Not yet, not yet. I probably should, though, you know, set it up. We have this nice lawn behind us; there's plenty of space, it's perfect for the Starlink. So I probably should just set it up before I go home and make sure that I can do that. But, you know, it could double as a bird bath, couldn't it? Yes, it could. Or a serving tray. All right. Now, this is going to be another interesting

[00:10:29] experiment: I have the Picture of the Week. Shall we share it? So this was great, shared from a listener, of course, as are they all. I gave this one the caption: attempting to preempt the inevitable question, why has the lobster become so expensive? Okay, so we see a sign. All right, I had seen the lobster part, but I hadn't seen the rest of it. That's hysterical.

[00:10:58] The sign is taped to the window of a buffet or a restaurant or something, explaining, again, preempting the inevitable question. So the sign says: all lobster prices have increased due to higher lobster prices. And I can see in the background there's a little old lady, an elderly

[00:11:22] person, and, you know, she went up to the guy at the restaurant and said, why are the lobster prices so high? And he just pointed to the sign. Lady, yeah, because they're so expensive. That's right, all lobster prices have increased due to high lobster prices. Very nice, very nice. What are you gonna do? Okay. So we begin this week with a story that intersects several security fronts. Last Wednesday's

[00:11:50] Cyber News headline was: Scammers vibe-code server to verify stolen credit cards, leak details of 345,000 cards. I had to read that one twice to make sure I understood what they were saying. So here's what they discovered. They wrote: threat actors, like so many programmers around the world, are no strangers to

[00:12:15] AI assisting in their operations. However, like so many vibe coders, scammers also run into security issues. On April 16th, the Cyber News research team discovered an exposed server owned by a threat actor. The exposed information is controlled by a carding market called Jerry's Store, as in Tom and Jerry; there were some little cartoons of a mouse jumping around on things, posted on the dark web.

[00:12:43] They said: the tool provides credit card validity percentages for each seller. In other words, threat actors use this tool to check if a stolen payment card is still operational. According to our team, Jerry's Store operators extensively used Cursor, an AI-assisted development environment, one of the

[00:13:06] very earliest AI-based coding assistants, from several years back. Cursor, they said, was used to set up the leaking server, without the operators knowing that it was leaking, and to create administrator-facing dashboards. Cursor, they wrote, is a legitimate service developed by the US software company Anysphere. Researchers

[00:13:30] believed that relying on an AI assistant to set up the server was the reason it was exposed. Based on the chat logs our team was able to access, the threat actor received flawed instructions from their AI, imagine that, for building the dashboards. The team explained, quote: we were able to confirm the leak originated from the user

[00:13:54] asking to create a statistics dashboard, and Cursor created an unauthenticated open web directory to serve the web page, ignoring the need, because of course they didn't ask for it, ignoring the need to set up authentication or to ensure that only the intended dashboard would be accessible. In other words, it's just like a regular user: if you don't ask for authentication, you're not going to get authentication.
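For comparison, here's roughly what "asking for authentication" buys you. This is a minimal sketch using only Python's standard library; the credentials, realm, and handler are hypothetical, not anything recovered from the leaked server, but it shows how little code it takes to make a dashboard refuse unauthenticated requests:

```python
import base64
import hmac
from http.server import BaseHTTPRequestHandler

# Hypothetical credentials for illustration only; a real deployment
# would use a secrets store, not literals in source.
USER, PASSWORD = "admin", "s3cret"

def is_authorized(auth_header):
    """Validate an HTTP Basic 'Authorization' header value."""
    if not auth_header or not auth_header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(auth_header[6:]).decode()
    except Exception:
        return False
    # Constant-time comparison to avoid leaking info via timing.
    return hmac.compare_digest(decoded, f"{USER}:{PASSWORD}")

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reject anything without valid credentials before serving content.
        if not is_authorized(self.headers.get("Authorization")):
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="dashboard"')
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"statistics dashboard\n")
```

The point isn't the specific mechanism (Basic auth over plain HTTP would itself be inadequate without TLS); it's that the gate has to be requested explicitly, exactly what the carders never asked their AI for.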

[00:14:21] Anyway, they finished, saying: moreover, the chat history reveals there was sufficient information for the Cursor large language model to identify that it was helping set up a credit card verification service, indicating a lack of sufficient guardrails to prevent abuse. And as you've often heard me say, I don't think you can really control a large language model. Researchers said, quote: it's a lesson

[00:14:49] for developers using Cursor for legitimate uses, showing how it can lead to accidental data leaks. Right, it's just going to write what you ask it to; it's not going to be your security nanny. So Cyber News said that they'd reached out to Cursor for comment and would update their article with any additional information they receive. The fact that the Cursor AI produced a statistics dashboard driven by an unsecured and

[00:15:17] open web directory, allowing unauthenticated remote access, is, I think, a great example of the danger of using AI without being a domain expert, that is, without knowing what to ask for. Because it'll give you what you ask for, but you need to know what that is. I have no doubt that the Cursor AI would have

[00:15:41] easily provided the authentication that was needed if it had been asked to, but apparently the bad guys never thought to ask. So somebody who wasn't really up to speed on web-based application security could easily fail to anticipate all the various ways others might access and

[00:16:08] penetrate their system. So, you know, expecting AI to produce secure solutions by default is probably a fool's errand. In this case, either it never occurred to them that authentication should be required where it was absent, or they didn't know it was going to be absent, or they assumed that the AI would know what it should do and would do it unbidden. The Cyber News article also

[00:16:38] provided some interesting background, reporting a little bit on the underground industry in stolen credit cards, which I thought was interesting. They wrote: operations such as Jerry's Store are integral to the cybercrime infrastructure. Once scammers obtain stolen credit card information, they need to verify which cards can still be exploited. Jerry's Store provides that service. Our team noticed that, to complete

[00:17:05] the task, Jerry's Store operators use legitimate, well-known merchants. The Cyber News team explained, quote: threat actors used multiple legitimate merchant websites, such as Amazon US, Amazon Japan, Grubhub, Sam's Club, Temu, Lyft, e.l.f. Cosmetics, and CountryMax, utilizing hundreds, or in some cases thousands, of accounts that had

[00:17:33] already been established on these platforms to perform credit card validity checks. Attackers created those accounts to register stolen cards and then perform low-risk actions, as they call them. These could include adding cards as a payment method, or making a very small purchase. If the platform accepts the card, threat actors mark the card as valid and sell it to other threat actors on the dark web. Using large merchants like Amazon

[00:18:02] or Grubhub, of course, is a way to mask their activity, since large merchants process billions of payments; tiny transactions on a well-known website don't ring any alarm bells. They wrote: according to our team, the exposed server contained a treasure trove of credit card details, details meaning, you know, everything you need to

[00:18:23] process someone's card. Researchers identified nearly 200,000 credit card details that the service had verified as invalid, and over 145,000 that it had verified as valid. Now, the exposed information includes all the details that you need: credit card

[00:18:51] numbers, expiration dates, the security code, the card holder's name, and their address. Typically, they wrote, valid credit card details are sold for between seven dollars and eighteen dollars each on the dark web, meaning that the value of the valid stolen card data, that's the 145,000 cards that have been verified there, is somewhere between one million and 2.6 million dollars.
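That valuation range checks out arithmetically; a quick back-of-the-envelope sanity check of Cyber News's figures:

```python
# Cyber News's figures: 145,000 cards verified as valid,
# typically sold for $7 to $18 each on the dark web.
valid_cards = 145_000
low_price, high_price = 7, 18

print(valid_cards * low_price)   # prints 1015000, i.e. "between a million..."
print(valid_cards * high_price)  # prints 2610000, i.e. "...and 2.6 million dollars"
```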

[00:19:18] They said: however, our team added that the actual value of the exposed infrastructure may be a lot higher, since Jerry's Store sells much more than just credit card data. That's just one of the types of fraud that they're making available in the store. They said: while it's unclear where Jerry's Store is located, internal tooling and leaked large language model chat logs suggest

[00:19:47] that the marketplace's administrator is fluent in Chinese. The server itself appears to be hosted in Germany by a suspected bulletproof hosting provider. The marketplace, which launched in late 2023, is a well-known credit card vetting tool within the cybercrime underground, aimed primarily at cards

[00:20:09] stolen from victims in the US and the EU. Fluent in Chinese, but not in AI, apparently. You know, this comes up a lot. We're going to see more of these. Whoops, whoa, hold on, camera, I'm over here. Thank you. People blame AI for stuff that they do that's dumb. There was a big story last week, and everybody blamed AI because the

[00:20:34] guy's production database got clobbered. But of course, if you give AI the keys to your production database, it's on you, my friend. And if you're dumb enough to say, hey, just make me a website, but never ask for any authentication layers, AI is going to do what you say. I think this comes from sort of a magical belief about AI: that it's somehow intelligent, or that it's

[00:21:00] going to take care of you, and it's not. And probably it's a hope as much as a belief. Yes, it's like, I hope that AI knows how to do this, and since it seems to know a lot, I'm just going to assume that it does. Well, it does if you tell it to. I mean, you're absolutely right, AI is great at OAuth. If you say, write the login page, use OAuth, make it secure, it will absolutely do that.

[00:21:27] But you have to tell it to; it's not going to assume. And again, it might not. The other thing I wanted to mention is, I have had credit cards stolen due to my own stupidity. And first of all, credit card companies know to look for those low-risk, low-value charges, right? In fact, they used to say if somebody buys sneakers and then tries to fill up a tank of gas, they invalidate that credit card immediately, because that's the first thing somebody who steals a credit

[00:21:53] card is going to do. But times have changed; that's not true anymore. When my credit card was stolen, I mentioned this before, they added it to an Apple Wallet. So they had a previously set-up Apple account, added the card to an Apple Wallet, and then used the credit card through it, obscuring the source of the actual card. Very clever, I thought.

[00:22:18] And I should have known, because when I gave it the six-digit code, it said, okay, we're trying to add this to that Apple Wallet, and I said, what are you talking about? I'm not trying to do an Apple Wallet. That should have been the hint to me that they were doing something funny. You know, it's a cat-and-mouse, Tom-and-Jerry kind of a game. Yeah. It does feel like, as I have said on the podcast

[00:22:40] before, in the early days, probably almost before this podcast, I used to fly up to Northern California to visit my family for the holidays. And this was pre-Expedia and so forth, so I actually had a travel agent from the old, old, old days who I just kept around. And when we would have our conversation, she would invariably say, well, so, Steve, do you have

[00:23:08] the same credit card, or have you lost that one too? Because I was out on the internet poking around. It's like, oh yeah, I don't feel so bad now. Yeah, I did lose that one. Okay. So, last Friday, Ollie Whitehouse, the chief technology officer, you know, the CTO, for the UK's NCSC, their

[00:23:35] National Cyber Security Centre, issued a clear warning at the level of the government. Ollie's warning posting was titled Preparing for a Vulnerability Patch Wave, and it carried the tagline: organizations must act now to prepare for a wave of patches that will address decades of technical debt.

[00:24:01] And I love that term. In this instance, I think that the term technical debt is exactly the right way to express the concept that, you know, the piper may be about to get paid. I have a friend from the Midwest whose favorite term for this would be, they're about to get their just comeuppance.

[00:24:23] Yes, comeuppance indeed. So here's what the UK's NCSC CTO wanted everyone within the United Kingdom, within his sphere of influence, to appreciate. He wrote: whether they are technology producers and vendors, or consumers and operators, all organizations have technical debt, a backlog of

[00:24:48] technical issues that's both expensive and time-consuming, as a result of prioritizing short-term gains over building resilient products. Artificial intelligence, when used by sufficiently skilled and knowledgeable individuals, is showing the ability to exploit this technical debt at scale

[00:25:10] and at pace across the technology ecosystem. As a result, the NCSC expect there will be a forced correction, which is the way he phrases it, we're going to have a forced correction, to address this technical debt across all types of software, including open source, commercial proprietary, and software as a service. This is why we're encouraging all organizations to prepare now for when a

[00:25:39] patch wave arrives: a rush of software updates that will need to be applied across the technology stack to address the disclosure of new vulnerabilities. All organizations must take steps to identify and minimize their internet-facing and other externally exposed attack surfaces as soon as possible. As we've argued for some time,

[00:26:04] you should prioritize technologies on your perimeter and then work inwards, covering cloud instances and on-premises environments. By doing this, organizations can reduce the risk posed by latent vulnerabilities when they become known and exploited by attackers. Where organizations cannot apply updates across their entire environment, they should prioritize applying updates to their external attack surfaces.

[00:26:34] Where capacity extends beyond the external attack surface, organizations should prioritize critical security systems. It's also important for organizations to realize that patching alone will not always suffice. Some technical debt may be present in end-of-life or legacy technology that's out of support and so cannot receive updates. In such instances, organizations will need to replace technologies

[00:27:04] or bring them back within support, especially where they present an external attack surface. Building on the principles contained within our vulnerability management guidance, organizations should make plans to deploy software security updates quickly, more frequently, and at scale, including across their supply chains. We are expecting an influx of updates to address vulnerabilities across all severities,

[00:27:34] and expect a number to be critical. The NCSC recommend three things. First, where automatic secure hot patching is available, that is, patching that does not involve service disruption, this should be enabled as a priority. Okay, well, that's, you know, not hard to do, I imagine. Second, where automatic updates are available, including for embedded devices, this

[00:28:04] should be enabled to reduce the workload on support teams. So yeah, turn on automatic updates and go for it. And third, where neither of the above is available, organizations will need to ensure that processes and risk appetites support frequent and scaled updating, noting the operational trade-offs around disruption and safety-critical systems. A risk-prioritized approach, such as the stakeholder-specific

[00:28:33] vulnerability categorization system, can be used to prioritize installing updates. And then they continue: however, should a critical vulnerability be under active exploitation, especially when affecting an internet-facing system, then it is essential to accelerate the update process. Organizations can refer to the NCSC's new guidance on responding to active exploitation of vulnerabilities for more information.
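The NCSC's ordering, perimeter first, then critical systems, then everything else by severity, amounts to a simple sort over your asset inventory. A hypothetical illustration in Python; the inventory fields and CVSS scores here are made up for the sketch, not part of the NCSC guidance:

```python
# Hypothetical asset inventory; field names and scores are illustrative.
assets = [
    {"name": "hr-portal",   "internet_facing": False, "critical": False, "cvss": 6.1},
    {"name": "edge-vpn",    "internet_facing": True,  "critical": True,  "cvss": 9.8},
    {"name": "billing-db",  "internet_facing": False, "critical": True,  "cvss": 8.2},
    {"name": "public-site", "internet_facing": True,  "critical": False, "cvss": 7.5},
]

def patch_priority(asset):
    # Sort keys in NCSC order: external attack surface first, then
    # critical systems, then raw severity. Python sorts ascending,
    # so booleans are negated and the score is negated.
    return (not asset["internet_facing"], not asset["critical"], -asset["cvss"])

for asset in sorted(assets, key=patch_priority):
    print(asset["name"])
# prints: edge-vpn, public-site, billing-db, hr-portal (one per line)
```

Real prioritization schemes like SSVC also weigh exploitation status and mission impact, which is exactly why the guidance says to accelerate when a flaw is under active exploitation; this sketch only captures the perimeter-inward ordering.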

[00:29:03] To summarize: you should put in place a policy to update by default, where you always apply software updates as soon as possible, and ideally automatically. This should be at the core of your update management process, but we recognize it may not apply in some instances, such as for safety-critical systems or operational technology. Patching alone won't address the systemic problems that, he

[00:29:32] writes, my previous blogs have addressed. I've appealed to technology producers and vendors to ensure systemic technical security debt is minimized by including, where appropriate, memory safety and containment technologies. Similarly, for consumers and operators, a focus on cybersecurity fundamentals, to raise resilience and to reduce the impact of breaches, should be a priority.

[00:30:01] This includes adopting and fully implementing Cyber Essentials, or the Cyber Assessment Framework for organizations operating essential services such as energy, health care, transport, digital infrastructure, and government. Finally, prepare for the patch wave now. In conclusion, the NCSC advise all organizations, irrespective of size, to plan and prepare for the vulnerability patch wave.

[00:30:31] A good start is the NCSC's updated vulnerability management guidance. For larger organizations, we also recommend working to gain assurance from your supply chains, both commercial and open source, so that they're prepared to navigate any required response. One thing that occurred to me as I was going through this is that, in the name of preparedness, and this certainly applies to everybody

[00:31:02] in the UK and out: this notion of gaining assurance from your supply chains. I would say, make sure that the providers of the equipment that you have on the edge, the equipment which is under support, which can obtain updates, have got a greased path through into your email, so that when they do notify you of

[00:31:32] updates, the notice doesn't get routed to some we'll-get-around-to-it-next-month-during-our-monthly-review process. I would say, given what we expect to have happening here over the next couple of months, make sure that the communications inbound from the vendors you are depending upon to have the most recent code running can get to you. And

[00:32:02] largely, what I just shared from the NCSC is a restating of what we already know, right? At the same time, for many of the CIOs and CSOs and IT heads in organizations throughout the UK, where this reigns, a clear statement and posting such as this can provide the cover and the backup they may need to succeed in getting their organizations, and the other C-suite

[00:32:32] executives, to take this seriously, to understand what is probably going to be happening shortly. And as I was seeing this note from the UK's NCSC, I realized that I hadn't seen anything from our own CISA in the US, and that struck me as odd, since the CISA we've all come to know would normally have been shouting about this from the mountaintops.

[00:33:00] So I went digging to see whether maybe I had missed the statement which, it seemed clear, CISA should have made in the wake of the Mythos revelations.

[00:33:12] I found a report published two weeks ago, on April 21st, by Axios, and it exactly addresses the question, where's CISA? The reporting was posted as a scoop, titled: Scoop: CISA lacks access to Anthropic's Mythos.

[00:33:33] And Axios explained, writing: the Cybersecurity and Infrastructure Security Agency, you know, CISA, does not have access to Anthropic's powerful Mythos preview model, even though some other government agencies are using it, two sources tell Axios. This matters because the country's top cyber defense agency, tasked with helping to secure everything from banks to power plants,

[00:34:03] is on the outside looking in, at a time when the industries it works with are deeply concerned about AI-powered cyber attacks overwhelming their defenses.

[00:34:17] Anthropic decided against a public release of Mythos, this is Axios bringing less-informed readers up to speed, due to its unprecedented ability to quickly discover and exploit security vulnerabilities. Instead, Anthropic provided it to more than 40 companies and organizations, who are now testing it and working to shore up their systems.

[00:34:43] CISA is not on that list. Earlier this month, an Anthropic official told Axios the company had briefed CISA and the Commerce Department on Mythos' capabilities. The Commerce Department's Center for AI Standards and Innovation has reportedly been testing Mythos, so they have it.

[00:35:06] The NSA is also among the organizations using Mythos, despite the Department of Defense, which oversees the agency, having declared, quote, "Anthropic is a supply chain risk," unquote. It's unclear if the ongoing turmoil within the agency during the second Trump administration played any role in the agency not moving more swiftly to secure access.

[00:35:33] Spokespeople for CISA and Anthropic both declined any comment for this reporting by Axios. They wrote: The Trump administration has spent the last year, as we know, reducing capacity at CISA; instead, they have opted to give more policy influence to the White House's national cyber director

[00:36:02] and pushing some programs out to the state and local level, so trying to, you know, distribute this instead of having it as centralized as it had been under CISA. CISA's acting director, a guy named Nick Anderson, told lawmakers last week that the agency's resources are, quote, "more limited than I would like," unquote, he said.

[00:36:25] Trump has proposed cutting as much as $707 million more from the agency's budget in the upcoming fiscal year. CISA has already lost more than a third of its workforce and millions of dollars in funding. National cyber director Sean Cairncross is among the Trump officials negotiating broader civilian agency access to Mythos. The Treasury Department has also been negotiating access.

[00:36:55] Sources tell Axios that other organizations with access to Mythos have predominantly been using it to find exploitable security vulnerabilities within their own networks and software. Security teams at critical infrastructure organizations have often looked to CISA to share threat intelligence across their sectors and determine how to prioritize their security strategies. And as we know, those critical infrastructure organizations

[00:37:23] have been very much dependent upon CISA, but also on that blanket of "hold harmless" protection, so that they're free to disclose things they discover, which is still a little bit in limbo.

[00:37:39] So, I hadn't heard about this acting CISA director Nick Anderson, so I checked him out, and he appears to be eminently competent and qualified. He's a decorated U.S. Marine Corps veteran who served as CIO, Chief Information Officer, for Navy Intelligence, and head of the Office of Intelligence, Surveillance and Reconnaissance Systems and Technologies at the U.S. Coast Guard.

[00:38:07] He served on active duty managing intelligence in Iraq, Europe, and Africa, and is a veteran of Operation Iraqi Freedom. He served as Principal Deputy Assistant Secretary at the Department of Energy's Office of Cybersecurity, Energy Security, and Emergency Response, where he led national efforts to secure U.S. energy infrastructure. He also served as federal cybersecurity lead and senior cybersecurity advisor

[00:38:36] to the federal CIO at the White House Office of Management and Budget. So, you know, this guy is certainly competent to be on top of CISA. I've got no complaints with Nick's background. It appears what he needs is more resources and support, and that CISA's lack of access to Mythos is largely due

[00:39:00] to the, as we're now calling it, War Department's unfortunate feud with Anthropic. Anthropic made clear in 2025, at the time that it signed its contract with the Pentagon, that it did not want its AI technology to be used for mass surveillance of people within the United States

[00:39:23] or for fully autonomous weapons systems. As we know, subsequently the Department of War demanded that Anthropic drop those restrictions, and Anthropic refused to do so. They published a public statement explaining their position. Regarding fully autonomous weapons, they wrote: Frontier AI systems

[00:39:46] are simply not reliable enough to power fully autonomous weapons, and without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that highly trained professional troops exhibit every day. Anthropic offered to work with the Department of War on R&D to improve the reliability of these systems, but was turned down.

[00:40:11] So after that, in apparent retaliation and without any evidence, the Pentagon declared Anthropic suddenly to be a supply chain risk. And this is all very unfortunate, since CISA should absolutely have access to Anthropic's Mythos preview. Hopefully the White House's national cyber director, this Sean Cairncross, who appears to understand

[00:40:36] the need, will be able to make something happen. You know, it's clearly ridiculous to have one of the U.S.'s leading AI firms frozen out of the government because, as Secretary Pete Hegseth declared, it is "woke AI," whatever that means in this context. For the time being, it appears that CISA is silent for purely political reasons. Politics should not intrude into this

[00:41:04] at all, and unfortunately it very much has. I mean, CISA is in the doghouse because of what happened in 2020, right? Chris Krebs, right. And now the White House is saying they want approval of all future AI models, period. They're about to draft a proposal that AI models can't be released without government approval. This is exactly the wrong direction to take with this stuff. Well, and I also did see that Anthropic wanted to do a second round;

[00:41:33] they wanted to expand their program by adding an additional 70 organizations that would have access to Mythos preview, and the White House said no. It's like blocking their ability to incrementally roll this out. An incremental disclosure here is exactly what you want: you get the core 40, they have a month

[00:41:57] with it now, and then you widen the circle again and let, you know, the next tier have access to it. Yeah, this is, it's a little infuriating, because political motivation and what's the right thing to do from a security point of view don't necessarily coincide, and that's what you're seeing here, and it makes us all less safe, frankly. Yeah. Okay, break time, and

[00:42:24] then we're going to look at this newest Linux local privilege escalation, and look at how AI is reshaping the bug bounty business. Excellent. Well, I could just sit back and relax, because Petaluma Leo is going to take control. This episode of Security Now is brought to you by Meter, the company building better networks. If you're a network engineer, you know the headaches: legacy providers, inflexible pricing,

[00:42:50] IT resource constraints stretching you thin, complex deployments across fragmented tools. Look, you're mission-critical to the business, but you're working with infrastructure that wasn't built for today's demands. That's why businesses are switching to Meter. Meter delivers full-stack networking infrastructure, wired, wireless, and cellular, that's built for performance and scalability. Meter designs the hardware,

[00:43:16] they write the firmware, they build the software, they manage the deployments, they provide support. Meter offers everything from ISP procurement to security: routing, switching, wireless, firewall. They do cellular. They do power, DNS security, VPN, SD-WAN, multi-site workflows, all in a single solution. Meter's single integrated networking stack scales from major hospitals, branch offices, warehouses, and large campuses

[00:43:45] to data centers, even Reddit. The assistant director of technology for the Webb School of Knoxville said this, quote: "We had more than 20 games on our campus between our two facilities. Each game was streamed via wired and wireless connections, and the event went off without a hitch. We could never have done this before Meter redesigned our network." With Meter, you get a single partner for all your connectivity needs, from first

[00:44:11] site survey to ongoing support, without the complexity of managing multiple providers or tools. One number to call. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility, reimagining what it means for businesses to get and stay online. Meter: built for the bandwidth demands of today and tomorrow. Thanks to Meter so much for supporting Steve and Security Now. And we invite you

[00:44:39] to go to meter.com/securitynow and book a demo. You'll be glad you did. That's M-E-T-E-R, meter.com/securitynow. Book a demo. Okay, Meter. All right, now back to Steve. All right, Steve, on we go with security. So, the news late last week was of the discovery of another serious local privilege escalation discovered in the Linux kernel,

[00:45:06] and it had been there for a long time. And yes, before you ask, it was found by an AI vulnerability discovery system operated by a security firm named Theori. They wrote, quote: "An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system and use that to gain root."

[00:45:33] A simple 732-byte, nine-line Python proof of concept has been posted to GitHub which immediately elevates any normal user to root, and of course that's not something you want to leave unpatched.
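For the curious, the 7.8 "high" severity rating that comes up in a moment is exactly what the standard CVSS v3.1 base-score arithmetic produces for the vector string typical of this kind of local privilege escalation. The vector here (AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) is an assumption, since only the final score is given:

```python
import math

def roundup(x: float) -> float:
    """CVSS-spec rounding: the smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

# CVSS v3.1 metric weights (from the spec) for the assumed vector
# AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H -- typical of a local priv-esc.
av, ac, pr, ui = 0.55, 0.77, 0.62, 0.85   # Local / Low / Low / None
c = i = a = 0.56                          # High impact on C, I, and A

iss = 1 - (1 - c) * (1 - i) * (1 - a)     # Impact Sub-Score
impact = 6.42 * iss                       # Scope: Unchanged
exploitability = 8.22 * av * ac * pr * ui

base_score = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base_score)  # 7.8
```

The local attack vector and low required privileges hold the exploitability term down, while the high confidentiality, integrity, and availability impacts max out the impact term, which is why nearly every root-from-local-user bug lands at exactly 7.8.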

[00:45:56] This is important, and Linux distros, at least the ones known for sure, Debian, Ubuntu, and SUSE, have immediately issued patches for the problem, and overseers of many other distros have as well. Red Hat initially said it was going to defer the fix, but then later changed its guidance to indicate

[00:46:23] that it will be going along with the other distros and will be patching promptly. The CVE has been rated as high severity at a 7.8 out of 10, and it's "only" 7.8, I mean, still, that's bad, which is, you know, about as bad as it gets for a local privilege escalation. But the attacker first needs to get into a non-root account where they're able to then execute this script in order to obtain

[00:46:53] elevation. But on the other hand, anybody who has local access to a machine is also able to use this, so it's a complete breach of, you know, Linux account security. At the end of one of the reports of this, I ran across the statement that AI-assisted vulnerability research recently prompted

[00:47:17] the Internet Bug Bounty, that's the IBB, the Internet Bug Bounty program, to suspend awards until it can understand how to manage the growing volume of reports. I thought that was interesting, and it was, so I went hunting. Here's what I found about that: near the end of March, the Internet Bug Bounty program,

[00:47:42] which is run by HackerOne, paused their acceptance of new vulnerability submissions due to what HackerOne described as an increasing imbalance between vulnerability discoveries and the ability for open source maintainers to remediate them. And of course, yes, AI is the underlying driver of all this.

[00:48:05] Okay, but let's back up a little bit. Recall that the Internet Bug Bounty is a crowdfunded vulnerability reward program that was started 14 years ago, back in 2012, and it's operated through the HackerOne platform. Its purpose and intent is to reward, and thus incentivize,

[00:48:30] independent security researchers to find and responsibly disclose vulnerabilities in widely used open source software. The funding for the program comes from a consortium of major tech companies, including Facebook, GitHub, Shopify, TikTok, and others, who all contribute to a shared bounty pool. The underlying idea is that, since

[00:48:54] everyone depends on open source infrastructure, everyone should share in the cost of helping to secure it. And the vulnerability discovery payout structure is pretty simple: 80 percent of each awarded bounty goes to the researcher who reported the vulnerability, with the remaining 20 percent being contributed to the open source project itself

[00:49:17] where the trouble was found, to support, you know, its repair and remediation. So that helps to fund the remediation work and makes the program go. It's been widely seen as a success, having paid out more than one and a half million dollars since the program began. But, almost predictably, AI has messed everything up. HackerOne stated, quote:

[00:49:45] The discovery landscape is changing. AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and their ability to fix them, you know, remediation capacity, in open source has substantially shifted. So the problem is being called

[00:50:13] "triage fatigue," and the trouble is not just the increased volume of reports, that would be bad, and what's interesting is it's also not the signal-to-noise ratio. The actual problem is the nature of the noise.

[00:50:31] Weirdly, the quality of the noise, while still noise, has increased. We all know Daniel Stenberg, the creator of curl. He expressed it this way. He said: More convincing crap is worse than obvious crap. You can't dismiss it quickly. You have to

[00:50:55] investigate it, and you waste real time getting to the point where you can prove it's nonsense. At scale, this stops feeling like a helpful external contribution model and starts to resemble something closer to a denial of service attack on the people who are responsible for security. Which is, like, yikes, a consequence of AI. So, 31 years ago, way

[00:51:24] turning the clock way back, 31 years ago in 1995, Netscape launched the first widely recognized paid bug bounty program, offering to pay researchers for their responsible reporting of significant bugs which they discovered in Netscape Navigator 2.0. So they were really ahead of the game at that point. Of course, they

[00:51:54] also had a web browser that was ahead of the game. And that model, the notion of paying researchers for responsibly reporting bugs they find, has been functioning vibrantly ever since. So the notion that AI may be driving a fundamental change to this long-standing vulnerability

[00:52:19] discovery and reporting model is important enough, as I said at the top of the show, to be a contender for today's main topic, except that the idea of Google going off half-cocked and adding an explicit AI interface for JavaScript in Chrome also needed ample discussion space today. And we're going to cover Mozilla's pushback

[00:52:42] against that at the end of the podcast. But meanwhile, the company Aikido, which is deep into automated vulnerability discovery as a business, recently interviewed not only curl's Daniel Stenberg, who I just quoted, but also Casey Ellis. Casey's the founder of Bugcrowd, and as such is one of the people who helped establish and formalize

[00:53:07] bounties for bugs, starting back in 2012. Aikido titled their report "Bug bounty isn't dead, but the old model is breaking." I'm going to share what they wrote and also what my intuition immediately suggests about the nature of the change. So they wrote: Bug bounty has been a very hot topic lately.

[00:53:32] We're seeing high-profile programs go offline or fundamentally change. The Internet Bug Bounty, one of the most important programs for open source projects, is pausing submissions, curl is removing payouts, and Node.js is removing its bounty entirely. That's not noise, that's signal. We wanted to understand where bug bounty is actually heading.

[00:54:01] So we sat down with two of the most credible voices on opposite sides of this conversation: Daniel Stenberg, creator of curl, who's living the maintainer reality and recently halted bug bounty payments, and Casey Ellis, the founder of Bugcrowd, one of the people who helped establish the model in the first place. What we found was that the bug bounty model is at a crossroads, and we're in the midst of a big shift.

[00:54:31] Before we get into where the model is headed, let's take a step back and understand why it's been one of the most effective ideas in security over the last decade. It all stems from the idea of letting the internet try to break your stuff before attackers do. And it worked because it gave companies scale they could never hire. As Casey put it, quote: "If you're trying to outsmart a global pool of attackers

[00:54:59] with someone working nine to five, the math for that is wrong," unquote. They said: That's the magic of bug bounty. Instead of relying on a handful of internal people, you tap into a global pool of different skill sets, different perspectives, and different motivations, all attacking your system in ways your internal team never thought of. And that's without the significant overhead required to hire specialist experts

[00:55:29] internally and then work to keep them busy. All this explains why bug bounties became fundamental to modern security programs. What's changing now is not the demand for security, it's the economics of how bug bounties operate. AI has altered the balance, and not in a good way. Finding bugs is now

[00:55:56] cheaper than ever, writing reports is even easier, and submitting them has become effectively frictionless. Meanwhile, the cost of validating those reports, and then actually fixing the issues, has not changed at all. Those final two required steps, validating and then fixing bugs, remain as labor-intensive as ever.

[00:56:22] We are seeing this play out in practice. There are three types of report submitters. There are those companies that use a new approach for legitimate reports. These are reports that use layered AI approaches that combine the strengths of multiple AI models, guardrails, orchestration, and context, such as Aikido's own AI pen testing

[00:56:50] capabilities. And Aikido is, of course, plugging their own solution, as we would expect them to on their own website, but we know that Anthropic also set up their Mythos preview system to do the same. Both are discovering and, importantly, verifying suspected vulnerabilities to produce much higher quality reports, which in the case of Mythos include proof-of-concept exploits. Aikido continues,

[00:57:20] enumerating these three classes of bug sources. They said: Then there are individuals who escalate their research and report writing using AI as a tool. And finally, there are individuals who are able to upskill by virtue of these AI models. They generate reports that seem technically plausible

[00:57:44] but are still completely wrong. Daniel described it perfectly, and this is where we quoted him earlier, saying "more convincing crap is worse than obvious crap." They said: You can't dismiss it quickly. You must investigate it, right, because it looks real, and then you waste real time getting to the proof that it's nonsense.

[00:58:09] At scale, this stops feeling like a helpful external contribution model and starts to resemble something closer to a denial of service attack on the people responsible for security. And the impact, they write, has been truly devastating. The Internet Bug Bounty program paused all new submissions because AI has dramatically increased discovery volume beyond what their maintainers can handle.
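Stenberg's "more convincing crap" complaint is, at bottom, arithmetic: per-report validation cost times volume. A toy model, with every number invented purely for illustration, shows how plausible-looking AI submissions multiply the triage load even when the share of real findings grows:

```python
def validation_hours(n_reports: int, hours_each: float) -> float:
    """Total human triage effort for a batch of similar-quality reports."""
    return n_reports * hours_each

# Before AI: mostly obvious junk, dismissed in minutes, plus a few real bugs.
before = validation_hours(50, hours_each=0.1) + validation_hours(5, hours_each=4.0)

# After AI: far more junk that "looks real" and must be investigated before
# it can be proven wrong, plus more genuine findings arriving faster.
after = validation_hours(200, hours_each=1.5) + validation_hours(20, hours_each=4.0)

print(before, after)  # 25.0 380.0
```

Over fifteen times the maintainer-hours, while the number of humans doing the validating hasn't changed at all; that's the "denial of service" shape of the curve.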

[00:58:38] Node.js lost its bounty when funding disappeared. The reports still come in, but the payouts are gone. And curl removed financial rewards after being flooded with AI-generated reports. Casey emphasized that this isn't a new problem, it's an old one, just massively accelerated. He said: We're doing stupid things faster with more energy.

[00:59:06] Bug bounty, they write, has always had an issue with being a level playing field. One person submits a report, and another person has to validate it.

[00:59:15] That sounds equal on paper, but in practice it has always been difficult for one person to keep up with validation, even before AI existed. Now it's practically impossible. We're now in a world where anyone can generate dozens of reports, make them appear credible, and submit them instantly. On the receiving end, however, the constraints have not changed. It's still humans reviewing, triaging,

[00:59:44] and making decisions. Open source has been the first to feel this impact. Open source is where the pressure has shown up first, largely because it was already operating close to its limits. Most projects are maintained by small teams, often volunteers with limited time and resources, yet they underpin massive portions of the web. Of course, we all think of that xkcd cartoon, right, with the little tiny block that's holding up this whole, you know, creaky

[01:00:14] infrastructure. They said: Add financial incentives, global participation, and now AI-generated submissions, and the system is quickly overwhelmed. The Internet Bug Bounty program said it directly, quote: "AI-assisted discovery has shifted the balance between findings and remediation capability." Translation: we're finding more bugs than we're able to handle.

[01:00:42] So now the bounty is gone, and yet the expectation of reporting remains. But the question is: is the way bug bounty programs have been used, to effectively scale security teams and improve security posture, still viable without financial incentives? Bugcrowd's founder Casey Ellis doesn't necessarily believe so. Every organization should have a vulnerability disclosure

[01:01:12] program, because if you're on the internet, people will find issues. But not every organization is in a position to run a public, reward-driven bounty program. In Casey's words, curl likely should not have had one to begin with. Casey said: "I don't think every organization should run a bounty program. The curl program should not have been a bounty program in the first place," unquote. And yet Daniel's experience shows something more

nuanced. [01:01:41] Daniel views the bounty program as a success because it incentivized real scrutiny of the code. He said, quote: "I've always thought about it as a success because it's a great way to actually encourage people to scrutinize the code."

[01:02:00] So what happens when you remove financial incentives? You'd assume that when you remove financial incentives you'd get rid of AI slop, but that you'd also reduce the likelihood of genuine vulnerabilities being disclosed. However, when curl removed the financial incentives, something interesting happened. The low-quality

[01:02:28] AI-generated noise largely disappeared. Daniel said, quote: "We have stopped getting AI slop security reports. Instead, we get an ever-increasing amount of really good security reports, submitted in a never-before-seen frequency, which put us under serious load," unquote. Okay. So I'm going to interrupt here to mention

[01:02:58] that I have a theory about why that is. Back when discovering vulnerabilities required long hours of painstaking, grueling work to step through and reverse engineer code, it was no fun. The only motivation, and it needed to be significant, was the promise of a big pot-of-gold payout at the end of that tunnel.

[01:03:26] AI-driven vulnerability discovery has changed that. Today, AI makes bugs both fun and easy to find. It allows less skilled users to participate, thus broadening the bug hunter base. And there are plenty of people who would sincerely like to give back and contribute. Until now, they haven't been able to.

[01:03:56] Now they have the means. They don't need a monetary incentive. They truly want to help. I think it makes sense. Aikido continues with their report, writing, Instead of drowning in low-quality reports, maintainers are now dealing with a high volume of genuinely useful findings, many of which are powered by AI-assisted research. The barrier to entry has dropped, not just for bad reports,

[01:04:26] but for good ones too. But this creates a new kind of pressure. Even high-quality reports take time to understand, to validate, and to repair. And many of these good findings still fall into gray areas, bugs that may not meet security thresholds but still require some attention. The result is a sustained and in some ways increased load on already constrained teams.

[01:04:54] So in a strange way, the system has not been relieved. It's been refined. And this is where it gets interesting because while this is painful in the short term, it might actually be a step in the right direction. By removing financial incentives, we strip away a large portion of the noise. What's left is a signal that is on average of higher quality,

[01:05:21] more intentional, and more aligned with actual security outcomes. AI is lowering the barrier for researchers to do meaningful work. It's enabling more people to find real issues faster than ever before. That combination, less noise, more signal, but still overwhelming volume, suggests we're in a transition phase. The historical model is breaking under the pressure,

[01:05:50] but what's emerging underneath it might be better. This would look like a system where disclosure is expected, not incentivized. Rewards are more targeted, not broad. And the focus shifts from more reports to better outcomes. We're not there yet. Right now, we're in the messy middle, where the old model no longer works and the new one hasn't fully formed yet.

[01:06:19] But if this plays out correctly, we don't end up with less bug bounty. We end up with a more sustainable version of it. What we're likely moving toward is a model where vulnerability disclosure becomes a baseline expectation across the industry, rather than something optional or incentivized. Public bounty programs don't go away, but they become more controlled, more targeted,

[01:06:46] and more aligned with organizational maturity. AI will inevitably play a larger role in filtering and triaging the growing incoming volume of reports. It won't solve the problem entirely, but it will become part of how we manage it. We'll also see a shift in what gets rewarded. As automated systems become better at finding low-level issues, the value of those findings will drop.

[01:07:13] Instead, incentives will move toward higher-impact work, the kind that requires creativity, context, and a deeper understanding of the systems. That means researchers will increasingly focus on areas like chaining vulnerabilities, exploiting business logic, and breaking complex or emerging technologies, where automation may continue to struggle. Okay, so think about this from the bounty provider's standpoint.

[01:07:43] Taking Curl as an example, Daniel terminates bug bounty payouts and observes an immediate drop in the total number of reports. But it's the bogus reports, predominantly, that disappear, not the useful reports that describe true problems. Given that, why would he ever resume bounty payouts? The internet bug bounty is likely to observe the same thing.

[01:08:12] As I noted, what appears to be happening is that bugs are now so much easier to discover, even fun to find and report, that it's no longer necessary to dangle a carrot. Actual human altruism, which, believe it or not, in 2026 still exists, is now sufficient to drive what once required the promise of payment.

[01:08:39] It'll take a while for this to percolate throughout the industry. But my prediction is, you know, that the 31 years of bug bounty programs we've had, ever since Netscape first offered payment for reports of bugs in Navigator 2.0, are probably going to wind down over time. And the reason bounty programs are currently overwhelmed by good bug reports is that, unfortunately, our software is very buggy.

[01:09:09] It's going to take a while. I mean, this is that new phase where AI is truly finding problems that were not known to exist. Those will wash out of the system over the next six months or so. And then the volume of really good reports will necessarily drop, because there won't be nearly as many bugs to be found, you know, in real time.

[01:09:37] And as AI then continues to check code before it goes out the door, we're not going to have new bugs introduced into the ecosystem. I think it's really interesting that potentially we are talking about a major shift in the way, you know, bugs are discovered. It won't be nearly as much about money moving forward as it has been in the past, Leo. Okay. You want to take a little break? I do.

[01:10:07] All right. And then we're going to look at a new product from Anthropic, which we might call Mini Mythos. Mythos Light. Mythos Light or Mythos Junior or something. Yes. And it's available to all Claude Enterprise users now. Okay. Oh, cool. You're watching Security Now, Mr. Steve Gibson. We do this show every Tuesday right after Mac Break Weekly. That's about 1.30 Pacific, 4.30 Eastern, 20.30 UTC.

[01:10:38] And you can watch it live if you really want the freshest version of it. Our club members get to watch in the Club TWiT Discord. But there's also, of course, TikTok, X.com, Facebook, LinkedIn, Twitch, YouTube, and Kick. So pick your platform, watch us live, or get it after the fact on Steve's site, grc.com, or our site, twit.tv/sn. We'll have more Security Now right after this. This episode of Security Now is brought to you by

[01:11:07] Bitwarden, the trusted leader in password, passkey, and secrets management. With over 10 million users across 180 countries and more than 50,000 businesses, Bitwarden is consistently ranked number one in user satisfaction by G2 and SoftwareReviews. With Bitwarden Access Intelligence, organizations can identify weak, reused, or exposed credentials and take action immediately, while vault health alerts and password coaching surface risks to individual users

[01:11:37] in real time and guide them to fix issues on the spot, turning one of the most common causes of breaches into something visible, prioritized, and fixable. And now, Bitwarden is introducing the new Agent Access SDK, a powerful way for developers and teams to securely integrate controlled credential access into applications, automation workflows, and AI agents. It enables programmatic, just-in-time access to vault-stored credentials without exposing sensitive data, supporting secure use

[01:12:06] within modern development environments. Now, this release does not incorporate, very important, does not incorporate any AI functionality into the Bitwarden solution, and, maybe even more importantly, does not grant AI systems persistent or unrestricted access to your vault data. The point is, the Agent Access SDK is a separate open-source development toolkit designed to enforce secure, human-approved, and scoped credential access

[01:12:36] for teams that leverage AI agents in their workflow. It's available now in an alpha phase, early days yet, for testing, but they want everybody to use it, not just every Bitwarden customer, but everybody using any password manager anywhere. The Agent Access SDK introduces a secure framework for how agents request, receive, and use credentials, helping define a model for safe credential interaction in agent-driven systems. And I love Bitwarden because they're giving it away. Any password company that wants to use it

[01:13:06] can use it. It's open. Bitwarden now enables passkey login, I love this, for Windows 11, securely unlocking devices at the OS level. Of course, they have to work with Microsoft on this to provide native passkey support. This will extend SSO to automatically log users into more apps, making credential management across devices more seamless than ever. And it works with Windows Hello! Imagine never having to enter your password again. For those who want a lightweight option, Bitwarden Lite

[01:13:36] offers a self-hosted password manager designed for home labs, personal projects, or quick deployments with minimal overhead. And don't worry, Bitwarden's open source code, besides the fact that it's on GitHub, it's GPL-licensed, you can look at it yourself. It's also regularly audited by third-party experts. It meets all the standards: SOC 2 Type 2, GDPR, HIPAA, CCPA, ISO 27001:2022. Of course, it is absolutely secure. Get started today with Bitwarden's free trial

[01:14:05] of a Teams or Enterprise plan, or get started for free across all devices as an individual user at bitwarden.com slash twit. That's bitwarden.com slash twit. And we thank them so much for supporting Security Now. Okay. No more singing. Back to Steve. Yeah. I thought that was really interesting that the first bug bounty was 31 years ago. That's remarkable. That is really amazing.

[01:14:35] Yeah. Yeah. Yeah. It's a program that has worked, but to me, it really makes sense, I mean, finding bugs and contributing, you know, giving back, we know that there is a lot of altruism out there in the world, people who would like to contribute. And so, you know, spending some time working with an AI-enhanced vulnerability-finding system, I think that makes

[01:15:05] just total sense. Well, that's one thing I don't think Netscape could have anticipated 31 years ago, that AI would suddenly be finding all these flaws. For the intervening 30 years, it's been fabulously successful. It's worked really well. Millions of dollars, millions and millions of dollars, have been paid out for, you know, authentic bugs and vulnerabilities that have been found. So the system has worked. Now we have AI able to pick up that burden and carry it forward. There's another category

[01:15:35] of people who are out of work: bug bounty finders. Well, that's true. It's probably not a career path. Although, if you are expert in running AI discovery, then you've got a new way to make some money. Well, actually, that's a good point. That Linux copy fail flaw, they found it not with the AI solely, but because a very smart security researcher pointed the AI in a specific direction and said, hey, I wonder if this is a problem. And then the AI

[01:16:04] was able to go a little step further. So, it was really a partnership. Exactly. Yeah. Okay. So, we can add, apropos of the changes being wrought by AI vulnerability discovery, Anthropic's announcement late last week of Claude Security, which is now entering public beta for their enterprise customers.

[01:16:33] We could think of it as Mythos Jr., and that's sort of how they're casting it. Here's what Anthropic posted about this. They said, Claude Security, which is what they're calling it, Claude Security is now available in public beta to Claude enterprise customers. AI cybersecurity capabilities are advancing fast. Today's models are already highly effective at finding flaws in software code. The next generation will be more capable still, and will be

[01:17:03] particularly effective at autonomously exploiting these flaws. Now is the time for organizations to act to improve their security, preparing for a world in which working software exploits are much easier to discover. Recently, we made Claude Mythos Preview, which can match or surpass even elite human experts at both finding and exploiting software vulnerabilities available to a number of partners

[01:17:33] as part of Project Glasswing. But our cybersecurity efforts go beyond Glasswing. With Claude Security, a much wider set of organizations can put our most powerful generally available model Claude Opus 4.7 to work across their code bases. Opus 4.7 is among the strongest models available for finding and patching software vulnerabilities and for discovering

[01:18:03] complex, context-dependent issues that might otherwise be missed. Claude Security, previously known as Claude Code Security, has already been tested by hundreds of organizations of all sizes in limited research preview, helping teams scan their code bases for vulnerabilities and generate targeted patches. Their feedback has shaped today's release, which makes Claude Security available to all enterprise customers.

[01:18:33] It comes with scheduled and targeted scans, easier integration into audit systems, and improved tracking of triaged findings. No API integration or custom agent build is required. If your organization uses Claude, you can start scanning today. Opus 4.7's capabilities are also being brought to cyber defenders through Claude's integration into software tools that many enterprises already use.

[01:19:02] Our technology partners, including CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, Trend AI, and Wiz, are embedding Opus 4.7 into their tools. In addition, services partners like Accenture, BCG, Deloitte, Infosys, and PwC are now helping organizations deploy Claude integrated security solutions. We're entering a pivotal time

[01:19:32] for cybersecurity. AI is compressing the timeline between vulnerability discovery and exploitation. We believe the right response is to make sure defenders have access to frontier capabilities in the ways most accessible to them, through Claude directly and through our partners. Claude Security can be accessed directly from the Claude.ai sidebar or at claude.ai slash security. To begin,

[01:20:03] select one of your repositories or scope to a specific directory or branch. Then start a scan. While scanning, Claude reasons about code much like a security researcher. Rather than finding vulnerabilities by searching for known patterns, Claude seeks to understand how components interact across files and modules, traces data flows, and reads the source code. Once complete, Claude provides a

[01:20:32] detailed explanation of each of its findings, including its confidence that the vulnerability is real, how severe it is, its likely impact, and how it can be reproduced. It also generates instructions for a targeted patch, which users can open in Claude Code on the web to work through the fix in context. It just sounds fantastic. Over the past two months, we've refined Claude Security in line

[01:21:02] with what we learned from its use in production across hundreds of enterprises. Specifically, we've seen that detection quality is paramount. Teams have told us that high-confidence findings are what really accelerates security work. Claude Security's multi-stage validation pipeline independently examines each finding before it reaches an analyst, which drives down false positives, and

[01:21:31] Claude attaches a confidence rating to every result. This means that the signal that reaches the team is worth acting on. Time from scan to fix is the metric that matters. Early users pointed to this consistently, with several teams going from scan to applied patch in a single sitting instead of days of back and forth between security and engineering teams.

[01:22:00] Teams want ongoing coverage, not one-off audits. We've added the option to schedule scans so teams can set a regular cadence around reviewing and acting on findings. With this release, we've also added the ability to target a scan at a particular directory within a repository, dismiss findings with documented reasons so that future reviewers can trust prior triage decisions, export

[01:22:29] findings as CSV or markdown for existing tracking and audit systems, and send scan results to Slack, JIRA, or other tools via webhooks. Okay. Given the windup we've seen from Mythos over the past month, and the way they describe this, I cannot imagine why any organization whose software might contain

[01:23:00] external exploitable vulnerabilities or bugs would not be jumping on this with all possible speed. As I noted a few weeks back, an organization's own internal software is only closed source to the outside world. To the organization, their own source code is wide open, and there is now an emerging tool that stands a good chance of

[01:23:29] discovering bugs that have until now escaped notice. I would love to be a fly on the wall in the software development dungeons of the world's enterprises, you know, watching their reactions to what they begin seeing from this Claude Security. Basically, anybody is now able to purchase a mini version of Mythos. And I would argue

[01:23:56] that if Mythos is even better at finding bugs, there's still benefit from running this Mythos Jr., you know, Claude Security, over your code base to see if it's able to find something. Certainly, anything it can find, Mythos would too. Mythos may be available in the future, we presume it will be at some point, but you have this now. So, I think this is, you know, a maybe

[01:24:26] in retrospect, a predictable evolution on Anthropic's part, but certainly welcome. I do this anyway. I mean, I don't have Mythos or anything like it. I just have the regular, you know, Claude Opus 4.7 and ChatGPT 5.5. And I always say, in fact, I often have ChatGPT check Claude's work and Claude check ChatGPT's work. Yep. Cross model. And I frequently say, let's do a security audit on these

[01:24:56] repositories. I mean, that by itself is useful. I found all sorts of stuff. I've also had it do security audits on my systems, and it's found errors there too. You know, even just the regular models are useful. I can't wait to see how Mythos does. Yeah. And I think from what they've said, what this adds is the ability, for example, to schedule scans so that your engineering software development team, they're just working along.

[01:25:26] And then periodically the code base is given a scan and a check to see if anything significant has been found. That's a great idea. I think that's brilliant. Okay. So, OpenAI announced that they've decided, I was very impressed by this, I'll just say ahead of time, to make account login security a selling point.

[01:25:56] Their posting was titled, Introducing Advanced Account Security. And they explain, today we're introducing Advanced Account Security, a new opt-in setting for ChatGPT accounts. And you've got it now, Leo. Designed for people at increased risk of digital attacks, as well as for those who want the strongest account protections available. It brings together a set of heightened

[01:26:25] security measures that help safeguard against account takeover, while making those protections easier to activate in one place. Once enrolled, Advanced Account Security protects users in Codex as well. They wrote, people are turning to AI for deeply personal questions and increasingly high-stakes work. Over time, a ChatGPT account can

[01:26:54] hold sensitive personal and professional context and sit at the center of connected tools and workflows. For some people, like journalists, elected officials, political dissidents, researchers, and those who are especially security conscious, the stakes are even higher. This effort is part of our broader cybersecurity action plan to broaden access to the technologies that can help protect communities, critical

[01:27:23] systems, and our national security. We want users to have the controls to make the security and privacy choices that are right for them. At the same time, we want to ensure users understand, and here's a critical part, understand that the increased protection of advanced account security comes with an increased responsibility for account recovery. And so, now they get specific. Advanced account security brings

[01:27:53] together a series of controls that strengthen sign-in protections, tighten account recovery, reduce exposure from compromised sessions, and give users more visibility into account activity. It's available to opt into in the security section of users' ChatGPT accounts on the web. Protection applies to both ChatGPT and Codex accounts that are accessed through that login.

[01:28:22] So, we have stronger sign-in methods. Advanced account security requires passkeys or physical security keys, while disabling password-based login, helping make phishing-resistant sign-in the default for people who need it most. So, password-based login, gone. You must use a passkey or

[01:28:51] physical security key. Next, more secure account recovery. If a user's email account or phone number is compromised, an attacker may try to use one of them to gain access to their ChatGPT account via email or SMS-based recovery. We know that, right? They say, to reduce this risk, advanced account security disables email and SMS

[01:29:21] recovery and requires stronger recovery methods: backup passkeys, security keys, and recovery keys. Because account recovery is restricted to these more secure methods, OpenAI support will not be able to assist with account recovery for users enrolled in advanced account security. Again, with truly heightened security comes

[01:29:51] much more responsibility. You know, they're saying, we can't help you, because you don't want bad guys posing as you to get help from us either. So, you know, now we're talking. You know, hopefully this sort of much more responsible security becomes more commonplace. You know, the only gotcha, of course, is that it makes users entirely responsible for the security

[01:30:20] they claim to want and to cherish. By explicitly removing email and SMS account recovery loops, the most common phishing and other attacks will be thwarted. But, you know, I can see in the case of ChatGPT login, this makes sense. OpenAI explains two additional security enhancements, writing: shorter sessions. Sign-in sessions

[01:30:49] are shortened to reduce the window of exposure if a device or active session is compromised. Users also receive alerts when there's a login to their account, and they can review and manage the active sessions across the various devices they're signed into. And finally, automatic training exclusion. People working with especially sensitive information may opt to not have those conversations used for model training.

[01:31:19] With advanced account security enabled, that preference is automatic. Conversations from those accounts will not be used to train our models. They finish saying, using physical security keys such as YubiKeys is one of the strongest defenses against phishing. To make that level of protection easier to access, we have partnered with Yubico,

[01:31:47] a leader in hardware-based authentication and account protection, to offer our users preferred pricing on a customized bundle of the best-in-class security keys. The YubiKey C-Nano is designed to stay in your laptop. You stick it into a USB port and its head basically pokes out a little bit so you're able to touch the little gold

[01:32:17] metal convex head of it in order to authenticate. And they said that's for low-friction daily authentication, and the YubiKey C-NFC is for backup and use across laptops and mobile devices. We are launching this partnership as part of advanced account security, but the bundle will be available to all eligible users in their security settings on the web,

[01:32:46] so more people can adopt stronger, phishing-resistant account protection. Users will also be able to use any other FIDO-compliant security key or software-based passkeys. So I logged into ChatGPT, which I am no longer using as my daily driver. You know, I've switched to Claude after appreciating how confused an AI's context window would become if I were to share it with my wife, Lori.

[01:33:16] So now we each have our own. Once I was there in ChatGPT, sure enough, the security panel of the settings dialog now has many new features, and I think this is great. I expect to see this sort of enhanced security become a standard feature to more rigorously, I mean, like across the industry, to more rigorously protect the potentially very highly sensitive

[01:33:45] dialogues that many people are having with their AI chat bots. You know, once you appreciate, which Claude recently made explicitly clear, that the entire history of your conversation is by default retained for use in creating a conversational context, the importance of more tightly controlling its access becomes, I think, very clear. Okay. Before we get into our main

[01:34:15] topic, I did want to update everybody on something that I just discovered about Syncthing and SyncTrayzor. Of course, we've spoken often about Syncthing, you know, both Leo and I, and many of our listeners, I know, are huge fans. SyncTrayzor, S-Y-N-C-T-R-A-Y-Z-O-R, is a terrific little Windows GUI wrapper that turns Syncthing

[01:34:44] into more of a Windows app. In the words of its creator, he said, SyncTrayzor is a little tray utility for Syncthing on Windows. It hosts and wraps Syncthing, making it behave more like a native Windows application and less like a command-line utility with a web browser interface. Features include: has a built-in web browser so you don't need to fire up an external browser, optionally starts on

[01:35:14] login so you don't need to set up Syncthing as a service, has a Dropbox-style file download and progress window, the tray icon indicates when synchronization is occurring, alerts you when you have file conflicts, one of your folders is out of sync, folders have finished syncing, and devices connect and disconnect, has a tool to help you resolve file conflicts, can pause devices on metered networks to stop Syncthing transferring data

[01:35:43] for example on a mobile connection or a Wi-Fi hotspot, and contains translations for many languages. Anyway, I've been using both of these for years, SyncTrayzor, which contains Syncthing, and I hope to continue doing so. As we mentioned, Syncthing can also be installed onto a Synology NAS, and I've been using it there for many years, ever since my first Drobo died and I got a Synology to replace it. And at that

[01:36:12] point, as I said, I switched to Synology. Syncthing works perfectly there as well. I'm mentioning all of this since Syncthing on Windows 10 has been noting that version 2.0.16 has been available for some time. Since I heard from several of our listeners that the major version 2 of Syncthing is fully backward compatible with version

[01:36:41] 1.3, which was where version 1 left off, and which is where I'm still stuck on my Windows 7 machine because it won't run anything after version 1.3, I decided it was time to quiet down that new-version-available notice. But when I updated Syncthing, it complained about an unknown command line switch, meaning that the way Syncthing was being launched by SyncTrayzor

[01:37:10] wasn't something it was familiar with. The trouble, of course, was that the version of SyncTrayzor I had was also out of date. So I updated it. That's when I learned that SyncTrayzor's creator had abandoned his baby last August when he archived his GitHub project. At the time, he wrote, I stopped using Syncthing some years ago and I'm afraid I don't have the time to maintain it. Sorry.

[01:37:40] German coding has kindly forked it as SyncTrayzor version 2 and is continuing development. And this fork is recommended by Syncthing. Please switch to SyncTrayzor version 2 after determining that you trust the fork. So I first verified that the Syncthing project does indeed still recommend the use of this forked SyncTrayzor 2, and indeed they do.

[01:38:09] It is recommended among their contributed software. So I wanted to let everyone who's been happily using Syncthing know that indeed major-version cross-compatibility works. I've got Syncthing 2 now in one location on Windows 10 and Syncthing 1.3 still on Windows 7 until I shut down this

[01:38:39] workstation and consolidate my locations, which I'll be doing over the next couple months. I ran the new installer for SyncTrayzor 2. It saw that there was an older version available, offered to upgrade, it did that, everything went smoothly, and everything is working now perfectly. So I just wanted to make a note for those people who may be Windows users using Syncthing. If you don't know about SyncTrayzor, it's a neat little wrapper

[01:39:09] and if you do, you are able to update and everything works great. You don't have to use it though. I mean, you can use Syncthing directly, of course. Absolutely. You're able to use Syncthing and set it up as a Windows service and it just works by itself. Let's take a time out before we get to the subject, the browser AI API. You know, there was a story, I don't know if this is related, that came out earlier today that Chrome is automatically downloading

[01:39:39] a pretty hefty AI model and you can't stop it. Not so nano. Not so nano. I don't know if that's related. Maybe it is. It was initially 22 gigs. I think they got it down to 4.7. I know. I know. I know. Okay. You kind of have to have Chrome, unfortunately. For instance, I'm using Chrome right now because the Restream, which is what we use for this show, works best with Chrome.

[01:40:08] I'm a Firefox user, but I have to have a copy of Chrome and I have to have a 4 gigabyte copy of Nano along with it. Oh, well, we'll take a little break. We'll talk about this and more in just a bit. You're watching Security Now with Steve Gibson, but first this word. This episode of Security Now is brought to you by Hoxhunt. As a security leader, you've been there. The eye rolls during training. The one-size-fits-all phishing simulations

[01:40:38] that your employees spot from a mile away. And the report button that gets ignored more often than not. Your programs are running, but they aren't changing employee behavior. Meanwhile, AI is making real attacks more convincing by the day, and leadership is starting to ask the question you don't have a clear answer to: Is this actually working? Well, Hoxhunt empowers your employees to spot and stop advanced phishing attacks, driving measurable behavior change through personalized, gamified

[01:41:08] micro-training powered by AI and behavioral science. And by the way, as an admin, you'll love it. Hoxhunt does all the heavy lifting. Simulations run automatically, not just in email, but in Slack and Teams, too. They're personalized to each employee, just like the bad guys do, based on role, location, and behavior. And every simulation uses AI to mirror real-world attacks, meaning your employees are being tested on what's actually getting through, not some outdated template they recognize immediately.

[01:41:38] Gamified training keeps the engagement high without feeling punitive. And because every interaction generates a coaching moment, you're not just tracking completion, you're actually building behavioral indicators that tell a real story: reporting rates, repeat-clicker reduction, and time to report, the kind of metrics that hold up when leadership asks you the hard questions. But you don't have to take my word for it. With over 3,500 verified reviews on G2, Hoxhunt is the top-rated security training platform

[01:42:07] recognized for best results and easiest to use. It's also recognized as a Customers' Choice by Gartner, and thousands of companies like Qualcomm, DocuSign, and Nokia trust it to train millions of employees worldwide. Visit hoxhunt.com slash security now today to learn why modern secure companies are making the switch to Hoxhunt. That's hoxhunt.com slash security now. Thank you so much for supporting Steve and the work he's doing, and now back to the work he's doing.

[01:42:38] Actually, back to our discussion of Chrome and AI on security now, Steve. Yeah. So, turns out, and this is actually exactly on point for this nano LLM, Google is planning to define a new API to bring AI into our browsers. This would serve as

[01:43:08] an interface to large language models existing outside the browser or brought in by the browser. Google appears to be mostly targeting local LLMs, but support for cloud-based LLMs is present too. So, this would be a means, just to make this clear, for allowing web pages or browser extensions to invoke a user's

[01:43:37] local or remote large language models for many purposes, such as locally reading and summarizing a web page's content, proofreading a web page document being edited, or reading through someone's web mail to produce summaries or take actions. In other words, it would create a JavaScript large language model prompting API.
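To make the shape of that concrete, here's a minimal sketch of what such a page-level prompting call might look like. The LanguageModel names follow Chrome's Prompt API explainer and should be treated as provisional while the proposal is a draft; the cloud fallback endpoint and its JSON response shape are purely hypothetical placeholders.

```javascript
// Sketch only: feature-detect the proposed Prompt API and fall back to a
// cloud LLM reached with fetch(). The LanguageModel global and its methods
// follow Chrome's explainer and may change; the fallback endpoint and the
// { prompt } / { summary } JSON shapes are hypothetical.
async function summarize(text, cloudEndpoint) {
  if (typeof LanguageModel !== "undefined" &&
      (await LanguageModel.availability()) === "available") {
    // On-device path: open a session against the browser-provided model.
    const session = await LanguageModel.create({
      initialPrompts: [
        { role: "system", content: "Summarize the user's text in one sentence." },
      ],
    });
    return session.prompt(text);
  }
  // Existing path: any page can already reach a cloud LLM over TLS with fetch().
  const resp = await fetch(cloudEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: `Summarize in one sentence:\n${text}` }),
  });
  return (await resp.json()).summary;
}
```

The fallback branch is the part browsers have supported for years; the genuinely new surface area in the proposal is the on-device branch, which exposes whatever model the browser or OS happens to ship.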

[01:44:07] Now, not everyone thinks this is a good idea, and many of those not-everyones include end users who feel uncomfortable with this creeping trend toward AI-fying everything. An early example of this, which we covered at the time, was Vivaldi Browser's CEO, Jon von Tetzchner, who said, we don't see AI as something that our users are asking for,

[01:44:37] rather the opposite. I think a lot of people are reacting to force-fed AI. Jon cited, as a no-thanks example, Microsoft's Recall, compiling a long-term history of everyone's desktop screenshots every five seconds. Giving Recall the label of AI now seems sort of quaint in today's world. We've come a long way in a short time. Tetzchner said that, quote, the future

[01:45:06] of browsers is about who controls the pathway to information and who gets to monetize you, unquote, which frames the race to insert AI into our browsers as a power grab more than as a feature competition. So, the thing that put this on my radar last week was seeing that Vivaldi's Jon von Tetzchner now has some company, notably Mozilla. In a posting to Bluesky

[01:45:36] last Thursday, April 30th, Mozilla's Jake Archibald wrote, Chrome looks to set, I'm sorry, Chrome looks set to ship an LLM prompt API to the web platform. At Mozilla, we oppose this API. We feel it has a large interoperability risk, and Google imposing

[01:46:06] terms and conditions on a web API sets a dangerous precedent. Okay, now, Leo, listen to this. Before I go any further, I want to touch on those terms and conditions, since that alone is a deal breaker for me. Last week, in a thread in Mozilla's GitHub account, Jake wrote, according to Chrome's documentation, to use the prompt API, you must

[01:46:35] acknowledge Google's generative AI prohibited uses policy. Elements of this policy go beyond law. For example, do not engage in generating or distributing content that facilitates sexually explicit content. Do not engage in misinformation, misrepresentation, or misleading activities. This includes facilitating misleading

[01:47:05] claims related to governmental or democratic processes. So, here we have a proposed web browser API that implicitly contains an acceptable use policy. This would be like a web browser refusing to display controversial four-letter words on the grounds that someone might be upset by what a website might wish to have their

[01:47:35] browser display. Hearing this causes me to want to select a couple of four-letter words myself. This is so wrong. Yeah. Now, this is the system prompt for the AI, right? Well, no. There is a system prompt for the AI, which is part of the API. Yeah. But, so, this is saying that the use of the prompt API by JavaScript running in the browser must

[01:48:05] acknowledge this acceptable use policy. Because it sounds more like the kind of thing you tell an AI not to do, and that's understandable. This is for developers. These are rules of the road. Yes, for developers. So, I just thank God we have respected developers at Mozilla to push back, and I hope this also comes to the attention of the EFF, because this seems wrong. Okay, so, to obtain some pro and con

[01:48:34] balance here, let's first look more closely at this new so-called Prompt API, that's the name that they're giving it, which Google has already implemented and moved into Chrome. It's already in Chrome, which is why, Leo, you noted that this multi-gig download is happening, because they're also downloading a model, their so-called Nano model. The explainer for this nascent feature says, and so this is now

[01:49:04] Google speaking, this explainer and the accompanying draft report are in active development by the Web Machine Learning Community Group. They're seeking community feedback, and they're getting some, and support for this proposal to gain working group and implementer adoption. Implementations are experimentally available in Google Chrome and Microsoft Edge. Browsers and operating systems, they write,

[01:49:34] in order to set the context here, browsers and operating systems are increasingly expected to gain access to language models. Okay, I didn't know that, but okay. Language models are known for their versatility. With enough creative prompting, they can help accomplish tasks as diverse as, we have some bullet points, classification, tagging, and keyword extraction of

[01:50:04] arbitrary text. Helping users compose text such as blog posts, reviews, or biographies. Summarizing, for example, articles, user reviews, or chat logs. Generating titles or headlines from article contents. Answering questions based on the unstructured contents of a webpage. Translation between languages. And proofreading. In other words, all of the things, I mean like AI in your

[01:50:33] browser things, that Vivaldi said no thanks to. I don't know if we want to jump into that just yet. They said, the Google Chrome, Microsoft Edge, and Web Machine Learning Community Group teams are exploring purpose-built APIs for some of these use cases, namely translator, language detector, summarizer, writer, rewriter, and proofreader. This proposal

[01:51:03] additionally explores a general-purpose prompt API that allows web developers to prompt a language model directly. This gives web developers access to many more capabilities at the cost of requiring them to do their own prompt engineering. Currently, web developers wishing to use language models must either call out to cloud APIs or bring their own and run them using

[01:51:32] technologies like WebAssembly or WebGPU, usually through JavaScript runtime frameworks. By providing web platform API access to the browser or operating system's existing language model, we can provide the following benefits compared to cloud APIs: local processing of sensitive data, for example, allowing websites to combine AI features with end-to-end encryption. Potentially faster results, since there's

[01:52:01] no server round-trip involved. Offline usage, lower API costs for web developers, and allowing hybrid approaches, such as free users of a website using on-device AI, whereas paid users use a more powerful API-based model. Okay, I'll just interrupt here to note that those, to me, feel like made-up reasons. You know, local processing

[01:52:31] of sensitive data, for example, allowing websites to combine AI features with end-to-end encryption. I get the local processing angle. That's potentially valid, but the end-to-end encryption part makes little sense to me in this context. We already have TLS connections with all websites and we have decades of history and experience with making TLS privacy and security bulletproof. Then there's potentially faster results

[01:53:01] since there's no server round trip involved, they cite. Okay, so the assumption here is that a local, potentially underpowered LLM is going to outperform an LLM in these monster data centers that are being built. Everything I'm seeing says that the cloud blows away local LLMs. And so on for the remaining three benefits. You know, our browsers

[01:53:30] already do have the ability to query cloud-based LLMs using the tried-and-true XMLHttpRequest API, which has been around forever, or the more recent Fetch API. And both of those offer state-of-the-art, mature security and privacy protections. So what really appears to be going on here is Google engineering a means for their Chrome

[01:53:59] and other Chromium-based browsers, notably Edge from Microsoft, to access non-cloud-based LLMs since everyone can already do that, that is, can already access cloud-based LLMs. Their explainer continues, writing, compared to developer-supplied model approaches, using a built-in language model can save the user's bandwidth, storage, and memory resources,

[01:54:29] while using a model that's optimized for the device. This pattern could also provide a lower barrier to entry for web developers by removing the need for developers to serve models and manage their dependencies. Okay, now, I'm not sure that makes sense to me. Again, this presumes that any and all large language models are identical and interchangeable and that the web developer doesn't care

[01:54:59] which one they're interacting with. They're just using a generic LLM that the user has provided to their browser. You know, today that's already not the case. I mean, it's already not the case that all LLMs are identical and interchangeable and I expect model design and capability to diverge more as we move into the future rather than converge. Of course, we'll see how that goes. So next, Google's explainer clearly states its goals.

[01:55:28] Our goals are to provide web developers a uniform JavaScript API for accessing browser-provided language models of varying capabilities. Encapsulate model management and execution details as much as possible, for example, for downloads, updates, templating, and parsing. Guide web developers to gracefully handle failure cases, for example, no browser-provided model being

[01:55:58] available, I guess by always having one. Develop formal implementation guidelines and definitions, for example, initial on-device models and possible cloud services. The following are explicit non-goals, they said. We do not intend to force every browser to ship or expose a language model. In particular, not all devices will be capable of storing or running one. It would be

[01:56:27] comforting to implement this API, I'm sorry, it would be conforming to implement this API by always signaling that no language model is available. In other words, that's acceptable. It may also be viable to implement this API entirely by using cloud services instead of on-device models. We do not intend to provide guarantees of language model quality, stability, or interoperability

[01:56:57] between browsers. In particular, we cannot guarantee that the models exposed by these APIs are particularly good at any given use case. These are left as quality-of-implementation issues, similar to the Shape Detection API. The following are potential goals we're not yet certain of. Allow web developers to know or control whether large language model interactions are done on

[01:57:26] device or by using cloud services. This would allow them to guarantee that any user data they feed into this API does not leave the device, which can be important for privacy purposes. Similarly, we might want to allow developers to request on-device-only language models in case a browser offers both varieties. Allow web developers to know some identifier for the language model in use, separate from the browser

[01:57:56] version. This would allow them to allowlist or blocklist specific models to maintain a desired level of quality or restrict certain use cases to a specific model. Finally, they said, both of these potential goals could pose challenges to interoperability, so we want to investigate how important such functionality is to developers to

[01:58:26] find the right trade-off. So, in other words, the world is not yet necessarily ready for this, or in need of this, so we're unsure how it should work exactly, but we're going to charge ahead because this will be better than nothing. Essentially, what this comes down to, when you strip it away, is Google, and as you started with this, Leo, Google wants

[01:58:54] to add a 4-gigabyte, actually, 4.7 is the number I saw, down from the earlier 22, massive language model to Chrome so that Chrome will become AI-enabled intrinsically, and that would allow Chrome-hosted

[01:59:24] web pages to do lots of things they can't now. So, okay, today's web browsers are littered with yesterday's great ideas that while they may have never achieved critical mass, must still be present and supported since random websites scattered around the world still use them. As one example, it

[01:59:53] may not be fair to single out Flash since it did have its day. There was a time when you could do things with Flash that you only wished you could do in the browser, because JavaScript and scripting in general had not caught up. But boy, was Flash difficult to kill off. And in some places, even today, it won't die. As I look over the Prompt API implementation section, I can empathize

[02:00:23] with Mozilla's gut reaction since this does seem sort of, well, both obvious but also forced and a bit unnatural. For example, this API defines a specific system prompt, as they call it. The specification says the language model can be configured with a special system prompt, which gives it the context for future interactions. The system

[02:00:53] prompt must be the first message, whether passed via the initialPrompts option to the create function, or as the first message of the first prompt() or append() method calls. We then see three examples of these various semantic options they just described. The first one shows a constant, session one,

[02:01:22] set to the output of the LanguageModel.create function, where the initialPrompts system prompt is, pretend to be an eloquent hamster. And then we log to the console the output of that freshly created large

[02:01:52] language model being prompted with, what's your favorite food? So, of course, an eloquent hamster is going to respond to the question, what's your favorite food? I guess that's what, lettuce? I don't know. I think that's what hamsters like. This is an eloquent hamster, which is a different matter entirely. Might be lettuce with caviar. Anyway, my reaction to all of this is that web standards are too important

[02:02:22] to be created in any half-baked fashion. And Mozilla apparently feels that it's too soon to do this. Once a web standard exists, as we know, we've seen this over and over, it is incredibly difficult to deprecate it since, as we saw with Flash, someone somewhere will be using it. Browser bloat and the security implications of that are very real problems.
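The eloquent-hamster example Steve walked through would look roughly like this in code. This is a hedged sketch based on the API shape described in Google's explainer as read above; the LanguageModel global, option names, and method signatures may differ in shipping builds, so the sketch guards for the API being absent:

```javascript
// Hedged sketch of the Prompt API shape described in Google's explainer.
// The LanguageModel global only exists in browsers that ship the API, so
// this guards for its absence; names here follow the explainer as read
// above and may differ in shipping builds.
async function askHamster() {
  if (typeof LanguageModel === "undefined") {
    return "Prompt API not available in this browser";
  }
  // The system prompt must be the first message, here passed via the
  // initialPrompts option to the create() function.
  const session1 = await LanguageModel.create({
    initialPrompts: [
      { role: "system", content: "Pretend to be an eloquent hamster." },
    ],
  });
  // Log the model's response to the follow-up prompt.
  const reply = await session1.prompt("What is your favorite food?");
  console.log(reply);
  return reply;
}
```

Note that because the model is non-deterministic, two calls to askHamster() in a supporting browser would not be expected to return the same answer.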

[02:02:52] Google has never held back, though, in unilaterally declaring web standards. They say, well, we're the dominant browser, we can do whatever we want. I understand Mozilla's reluctance to go along for the ride. And I think people are not going to be happy about 4.7 gigabytes being downloaded to their hard drive. It's really going to change the whole complexion of Chrome. Yeah, it becomes massive. I can understand why Google may

[02:03:22] say, oh, well, maybe for spell checking or local grammar or something. Developers might find a use for this, but it is a little, I think Mozilla is right. This is premature. There's no reason to be doing this now. There's no demand for this now, I don't think. Is there? No. No. And I guess they recognize that you can do this in the cloud now. Browser pages are able to reach back

[02:03:52] out to the cloud and talk to a large language model. That's going on already. They're saying, well, but we want, you know, we've got this cool technology. We've managed to squeeze a large language model down to 4.7 gig. We want it in the browser because we can, because we own the browser. Right, right. And we might imagine down the road some use. Yes. It's hard for me to imagine what that use is. Yeah, I agree. That would justify this.
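The cloud path Steve and Leo are describing is nothing more than an ordinary fetch() over TLS, which any page can use today. The endpoint URL, bearer key, and JSON shape below are made up purely for illustration and don't belong to any real provider's API:

```javascript
// Illustrative only: pages can already reach a cloud-hosted LLM over an
// ordinary TLS-protected fetch(). The endpoint, key, and response shape
// below are hypothetical, not any real provider's API.
async function askCloudModel(prompt, apiKey) {
  const response = await fetch("https://llm.example.com/v1/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  if (!response.ok) {
    throw new Error(`LLM request failed: HTTP ${response.status}`);
  }
  const data = await response.json();
  return data.reply; // hypothetical response field
}
```

The point being made is that this path already gets TLS, CORS, and the rest of the web's mature security machinery for free, without a browser-bundled model.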

[02:04:20] So, Google's working specification goes on and on and on. And it's all extremely specific to the application of today's LLMs. They are creating something as important as an industry-wide specification for what could just be the moment we're in today. I mean, to me, that's the problem: none of this has gelled yet. I mean, it is still a moving target. So, the

[02:04:50] idea of API-ing it to create a web standard seems premature and misguided. Anyway, I've dropped the URL of Google's full specification into the show notes. It's at the top of page 20 for anyone to follow up who may be interested. I want to now switch to Mozilla's response. I have the rather dry conversation thread in Mozilla's GitHub account under their standards

[02:05:19] positions. So, I've dropped that URL into the notes also. But since this podcast endeavors not only to inform but also to entertain our listeners, rather than sharing Mozilla's dry recitation, ah, yes, I want to share The Register's typically feisty and irreverent take on this controversy. Leo, let's take our final break. We're going to look at the flip side of what's

[02:05:49] going on. I can only imagine what The Reg has to say about this. I'm trying to give Chrome the benefit of the doubt, but this is my problem with Google for a while now. They don't go to the IETF or W3C and say, here, we want to do a standard, you know, let's get everybody involved. It's already in there. Yeah, they're so big, they're so dominant. They have something like 90% of the browser space, so they could just do it and it becomes a de facto standard. So I'm with you.

[02:06:19] I'm not necessarily against the idea. And it sounds like in their spec they're saying, well, it doesn't have to be our model. It doesn't have to be Gemma. It could be something else. But I don't know if there's a demand for this. And I know people are going to be very upset. I already see the upset over this giant download. And you don't get a choice. You can't turn it off. It comes with Chrome now. All right. Well, let's take our final break and we'll be back with Mozilla's response as

[02:06:49] seen through the filter of the register. You're watching Security Now with Steve Gibson. More in a moment. This episode of Security Now brought to you by Trusted Tech. If you're managing Microsoft 365 for your company, you are responsible for both the cost and whether it's set up correctly. And I think you might already know, on July 1st, Microsoft's raising prices. So any mistakes in your licensing are about to get more expensive. Most companies

[02:07:19] using Microsoft 365 are either over licensed, paying for unused seats and features, or under licensed, creating compliance and security risks. Sometimes it's both. The result is wasting thousands, sometimes tens of thousands per year on tools your team doesn't use or worse, I guess, missing critical security features you thought you had. You've got to get this just right, and that's why you need Trusted Tech. Trusted Tech helps businesses understand what

[02:07:48] they have, what they actually need, and how to lock in the right setup now before the costs go up. Their team ensures your M365 environment is well supported and aligned with how your business actually operates, and if you need ongoing help, they also offer reactive support for your Microsoft environment through their certified support services. Microsoft licensing is, I don't think I have to tell you, constantly changing. E3 versus E5 versus business premium.

[02:08:18] There are add-ons, the new E7. It's confusing and easy to misconfigure and overpay. And licensing mistakes don't just cost money, they create compliance exposure that's going to get more expensive after July 1st. So even if you think your licensing is dialed in, it's absolutely worth a second look, and no one can do it better than Trusted Tech. Just ask Kevin Turner, you know his name, former Microsoft COO? This is what he said when he talked to Trusted Tech. He said, you, Trusted Tech, you have an

[02:08:47] incredible customer reputation and you have to earn that every single day. The relentless focus you guys have on taking care of customers gives them value and differentiates you in the marketplace. End quote. He was impressed. You will be too. And remember, after July 1st, you're stuck paying more. This is the last chance to fix your licensing before costs go up. Trusted Tech is offering a free Microsoft 365 licensing consultation right now.

[02:09:18] Visit trustedtech.team slash securitynow365 to get a clear data-backed view of your current licenses, what you're wasting, and how to lock in savings before the price increases. Go to trustedtech.team slash securitynow365 and submit a form to get in contact with Trusted Tech's Microsoft licensing engineers. I'm going to say it one more time. Write this down. Trustedtech.team slash

[02:09:49] securitynow365. You got to do it now. Trustedtech.team slash securitynow365. Now, back to Steve. And back to our conversation. Actually, Darren Oki, who is, of course, as you know, one of our most avid AI users in the Club TWiT Discord, says he thinks this may be the most important thing to happen to browsers since AI. He thinks it's really important. I'm not sure I agree. I mean,

[02:10:19] I could see there's some potential. It's a huge change to our browser. It is a big change. I guess nobody disagrees about that. And I think it's also the case that Google is forcing this instead of proposing it. And I don't like that either. But I didn't like it when they forced HTTPS down our throat. Well, and you don't always get the right design when one person does it. That's why so much of what is done correctly is a collaboration.

[02:10:49] And, you know, Mozilla has, you know, even though Firefox is a diminishing percentage of the desktop space, Mozilla as a company has been at the forefront of all of the standards work forever. Yeah. We talked to the CEO of the Mozilla Foundation a few weeks ago on Intelligent Machines. And even then at the time, and this is before they'd added that little switch, he said, we're going to be very judicious about AI in our browsers. And now

[02:11:19] they actually have a switch that says disable all AI features. Right. This is a switch, most notably, Google is not offering. You cannot disable this. Okay. So I'm going to share The Register's typically feisty and irreverent take on this controversy. They also supply a great deal of additional useful background. And when we see that their headline is, quote, maker torches Google for building prompt API

[02:11:48] into browser, you know it's going to be good. The Register wrote, Jake Archibald, Mozilla web developer relations lead, articulated the organization's concerns in a GitHub discussion of the API, which provides a standard way to send and receive prompts and responses from a local machine learning model. Archibald wrote, quote, we continue to oppose this API and feel it has severe

[02:12:18] negative consequences to the interoperability, updatability, and neutrality of the web platform, unquote. The Register writes, the Prompt API, as Google describes it, quote, gives web pages the ability to directly prompt a browser-provided language model. Specifically, and here it comes, it provides a way to send natural language instructions to Google's Gemini Nano model, which is small enough

[02:12:48] to be downloaded for local inference through Chrome. However, writes The Register, it's not small. Google recommends having 22 gigabytes of space available, although the Nano v3 model for desktop use is 4.27 gigabytes. Web developers already have a variety of ways to interact with AI models. They can use cloud service APIs to

[02:13:18] communicate with hosted models, or they can access local models through technologies like JavaScript runtime frameworks, WebAssembly, or WebGPU. Various vendors like OpenAI and Perplexity have shipped browsers that embed access to remotely hosted AI models. Mozilla itself is testing an AI-based smart window in Firefox and is developing tools for AI model scaffolding. The Prompt API aims to

[02:13:48] make it easier to run local inference in a way that takes advantage of browser security mechanisms to produce faster response times to allow offline usage and to provide more cost-effective ways to integrate AI services. For example, providing a free AI fallback if users lack a paid API key. So that's interesting. That suggests that Google wants us to register

[02:14:17] our LLM AI provider accounts with our browser so that random websites we visit will be able to submit their prompts to our AI account. This brings to mind the famous rhetorical question, what could possibly go wrong? The Register continues, Mozilla's concern, as articulated by Archibald, has to do with what the Prompt API means for the web, not to

[02:14:47] mention Google's justification for deployment. First, he worries that Google's own nano model will become the default and that developers will standardize on it in an effort to make the non-deterministic responses of an AI model more predictable. That tendency, he argues, will create pressure for Apple and Mozilla to license nano for the sake of a common user experience.

[02:15:16] Perhaps more significantly, Archibald notes that using the Prompt API requires agreeing to Google's Generative AI Prohibited Use Policy, which prohibits activities that are not necessarily illegal, like generating disturbing content. I'll just pause to say, who determines what content is disturbing? There is nothing that attorneys love more than ambiguous

[02:15:45] language in contractual agreements. It's a built-in full employment guarantee. The Register quotes Jake saying, this seems like a bad direction for an API on the web platform and sets a worrying precedent for more APIs that have browser-specific rules around their usage. Amen to that. Anyway, The Register continues. Finally, Archibald argues that Google

[02:16:14] misrepresented demand for the API by cherry-picking a few social media posts and calling that a groundswell of developer support. Jake posted, quote, the intent to ship on Blink Dev states web developers as strongly positive and links to the explainer for evidence. The evidence provided there does not seem to fit the claim,

[02:16:44] unquote. In an email, Archibald told The Register that the question is whether the Prompt API is good for the web, and Mozilla doesn't believe it is. Jake said, quote, the core problem is interoperability. Prompts are tightly coupled to models. Developers will inevitably tune to the quirks and policies of whatever model they're building against. That's how you end up with

[02:17:14] model-specific code paths, which is the browser compatibility problem all over again. The terms and conditions issue is part of that. If using a web API means accepting a specific vendor's content policy, especially one that goes beyond the law, you're not really building for an open platform anymore, unquote. And just to pause, what he means is remember those days where JavaScript had to determine which browser it

[02:17:44] was in and then would do this code for IE, that code for Firefox, this code for Safari, and that code for Chrome? Those were not good days. Anyway, The Register says, with regard to Google's exaggeration of developer enthusiasm, Archibald said, there are definitely devs interested in AI capabilities, but Google failed to provide evidence of that. The signal is polarized, not strongly

[02:18:13] positive. But either way, developer demand alone does not meet the bar. The question is whether the API can work across implementations without tying the platform to one vendor's model. Google did not immediately respond to a request for comment. However, on Thursday, Rick Byers, the Google Chrome engineer responsible for shipping the Prompt API, chimed in on the GitHub discussion to acknowledge the

[02:18:43] concerns articulated by Archibald. To his credit, he wrote, quote, as one of the Blink API owner approvers for shipping this in Chromium, I admit that I share the concerns here in Mozilla's standards position. Where I differ is in preferring paths that promote experimentation, learning from mistakes, and competition to those

[02:19:13] which err on the side of stalling innovation out of fear of what might happen, unquote. Right. That's a perfectly articulated response to the more cautious, we-should-wait-a-bit-and-see-what-happens stance. The Register concludes their piece by writing, Byers asked the web community to help collect evidence of harm to advance the discussion. Pointing to the debate over other controversial web

[02:19:43] technologies like Encrypted Media Extensions, remember EME, he suggested the outcome has not been as dire as was predicted. But focusing on data so far has not done much for Google's cause. According to a report created in February that compares the performance of Chrome with Gemini Nano and Edge with Phi-4 Mini Instruct, using the Prompt API, these models do

[02:20:12] not provide very good results. The report says, quote, for generative tasks, composition, tag generation, et cetera, 24.29% of Edge's and 15.17% of Chrome's responses failed to complete the task at all. This is in reference to a rubric that defines failure as a score of 2 or less on a scale of 1 to 5. For

[02:20:42] classification tasks, 29.58% of Edge's and 23.93% of Chrome's responses did not label or categorize input correctly. So it's often also wrong. They finished with the report's conclusions, noting, in terms of groundedness and accuracy, Edge failed, which is to say hallucinated, 17% of the time, while Chrome failed 6% of the time.

[02:21:11] Is that good for the web? You could ask Chrome, but you might not get a reliable answer. And that's how The Register signs off. Burn! Burn! So where does this leave us? I guess it leaves me happier than ever that I've stuck with Mozilla. I look at what Google now presents us on a page of search results and it becomes clear that we are the

[02:21:41] product. You know, I search for something specific, and instead of what I ask for, I get sponsored interception advertisements that are promoted to the top of the page and presented before the result that I'm seeking, which I then need to wade down past, along with a bunch of YouTube video links that I have zero interest in. Okay, now in fairness, Google's not alone in doing this. Apple has similarly succumbed in their

[02:22:11] app store. The thing I'm looking for is never first any longer. Even when I search for it by name and spell it correctly, what's first is what someone paid them to show me first in the hope that I wouldn't notice or wasn't sure what it was that I wanted. And on the Google side, in return for tolerating a bunch of advertising, we do receive a ton of services at no charge. You know, I author these show notes every week in Google Docs for free. And, you know, the catch-all

[02:22:40] junk email account I maintain over at Gmail is similarly valuable. All of that means a lot. So, thank you, Google. But all of that seems fundamentally different to me from intermixing the design and establishment of crucial web standards with a single company's commercial interests. Yes, Google has succeeded in leveraging their position as the winner of internet search into the winner

[02:23:10] of the web browser wars. I get it. As I use the internet daily, I am more or less continually being offered the opportunity to improve my life in one way or another by switching to Chrome. I constantly need to decline. Most people have given up declining and they're perfectly happy using Chrome whether or not their lives are any better for it. And that's great. But tremendous responsibility burdens

[02:23:40] Google's dominance with Chrome. You know, they need somebody knowledgeable to push back and to question their actions, if for no other reason than to help them make the best choices. So I'm very pleased that we have Mozilla watching and actively participating. Google may, and likely will, still plow ahead, forcing Mozilla and Apple to either keep up or be

[02:24:09] left behind and become irrelevant. But everyone will likely get a better browser, whether that's Chrome, Edge, Safari, Brave, Vivaldi, or Firefox, if this is a collaborative effort. And Leo, the thing that I think was most significant here is the observation that Archibald made that LLMs are inherently non-deterministic.

[02:24:39] You know, every time you ask a question you get a different answer. And so we're now talking about having the browser interface to one vendor's solution, which has a random number generator at its heart. Not a very good one either. It's got some temperature setting, and apparently they had to sacrifice a lot of reliability

[02:25:08] in order to get the size down to something that was tolerable. They wanted it to be 22 gigs and people said, F off, I am not putting that in my... Is that mass storage? Is that RAM? Where does that 22 gigs live? Somewhere you don't want it to live, probably. And so they've had to squeeze it down in order to make it acceptable, and in the process it's lost its reliability. So I mean

[02:25:38] really if we want to be able to surf the web with any browser we choose and if web pages that we download are going to start wanting to use local large language models whose large language model will it be? And they aren't interchangeable. We know they're not interchangeable. Right. Well, and, you know, Darren, who loves AI, said, well, I can imagine some uses. For instance, it's hard to write software that

[02:26:08] detects misspellings, but the AI could quickly detect a misspelling if somebody's entering it in and correct it. And so there is that convenience. But I also think that this is Google bigfooting the whole process. And it's part of the enshittification of Google. They don't feel any responsibility to anything at this point except their stakeholders to make more money. And that's clearly, you know, this is about dominating the browser space and putting everybody else out of business.

[02:26:38] The other thing that worries me as an AI fan, and I know you're an AI fan too, is that the more we force AI down the throats of unwilling users, the more they're going to hate it. Google's found that out. Microsoft's found that out. Oh, those annoying chat boxes that pop up in the lower right corner of your screen, well, that's going to end up running locally. And it's like... So I don't want to turn people against AI. AI is a real value.

[02:27:07] But by doing this, kind of forcing it down people's throats, you're actually making enemies. And I don't think that's good either for Google or for AI in general. So yeah, I have lots of problems. We'll talk about this. I'm looking forward to a conversation tomorrow. One thought would be unbundling the LLM from the browser, that is, creating an interface but not having it, like, secret. I mean, it's essentially secret right now.

[02:27:37] I mean, I get it. That's the way to minimize friction so that everybody has it because Google wants everybody to have it. It's not a very well-kept secret. I should point out that one of the reasons Google thinks it's okay is because they're already doing this on Android as is Apple doing it on iOS. There are built-in local models on both those systems. Apple touts this all the time and your data stays local on the device. Apple intelligence is a local model. So there is a precedent for this on those platforms.

[02:28:07] I think I still wish, maybe it's a futile wish, that the web would be a standards-based interface, and that everybody should be able to choose the browser of their choice and they should all work well. The only one who doesn't want it to be a standard is the big guy. It's Google, the winner. You're not going to see Vivaldi saying, well, we think a standard should favor us. They can't, nor could Mozilla, but Google can. And clearly

[02:28:36] they do. I agree with you. I think this is a, you know, we saw Google back way down on a number of its proposals. True. The whole anti-tracking technology, they tried several times, but they got real pushback. But they got pushback from people who had invested, I mean, like advertisers. Yes, exactly. Large, large commercial interests.

[02:29:06] And there's no one to push back on this. Well, just remember, as users, maybe as individuals, we don't have much power, but collectively we do. They still need us to use their darn browser. But would somebody leave Chrome to go to Firefox? Well, you and I have. Yes. You and I have, and this is one of the reasons. And fortunately, so far, you can mostly use the internet with a Firefox based browser. Mostly.

[02:29:36] Widevine is another example. DRM in the browser. Yeah, the OpenTable site I use for restaurant reservations. And it doesn't work under Firefox. Needs Chrome. As I said, Restream needs Chrome because of WebRTC and the WebRTC implementation it uses. I think that's, you know, this is an object lesson. This is what happens. And if you don't want, if you want Chrome and Google to be the only player in the internet, this

[02:30:05] is how these things happen. I think we can fight. Hey, great topic, great show as usual. Thanks to all of you for being here. We do Security Now Tuesdays, as I mentioned. Steve's got copies of his show, if you can't watch live, at his website, grc.com. Actually, he has a lot of good stuff at grc.com. The show is there, including unique versions. Only Steve has a 16-kilobit audio version, a 64-kilobit audio version. Those are nice small versions for people who just

[02:30:35] want the audio. They don't want a lot of bandwidth. They don't want 4.7 gigabytes of AI downloaded with every single episode. He also has very nice transcripts written by Elaine Farris. She's going to put that up a couple of days from now. It takes a while. She actually has to physically type it. He also has the show notes there. 20 pages, 21 pages this week, 22 pages of goodness. Now, you can get those show notes ahead of time. He's been sending them out on Sunday lately. If you go to grc.com slash

[02:31:04] email, you put in your email address there. It has two benefits. One, you're now whitelisted. He will validate that you're not a spammer. That allows you to comment, send him questions, submit photos. Oh, there goes another one of those helicopters. My wife might even be on it. Send in your photos for the picture of the week, that kind of thing. But below that email form, there are two checkboxes. One for the weekly newsletter mailing. You can sign up for that. You'll get it

[02:31:34] automatically every Sunday or Monday. The other is for a much less frequent email when he's got a new product. Something like Spinrite, I don't know, 7 could be on the horizon. We're at 6.1, the world's best mass storage maintenance, recovery, and performance-enhancing utility. He also has that fabulous DNS Benchmark Pro, which he has been updating. And a new update I think is coming soon. Anyway, that's how you'd find out about those things. Do get Spinrite. That's Steve's bread and butter.

[02:32:03] You'll find that at GRC, too. And if you have mass storage, you need Spinrite. If you don't have it, get it now, for sure. Also, lots of other free stuff. Steve's kind of prolific, all hand coded in assembly language. Steve, I don't think you're ever going to use Vibe coding tools somehow. You know, I think my first exposure may be creating some homegrown iOS things. Ah, just for yourself? Yes, it's for home automation

[02:32:33] and just for, like, monitoring GRC servers. And I'm thinking, you know, not commercial products, but just for my own purposes. I think it'd be fun to... Looking at my GitHub repo and my projects, I have 22 or 23 little things like that that I vibe coded that are incredibly useful. You know, it's scanning my email now, it's preparing my calendar, it's doing all sorts of great stuff. But there's a big difference between putting that on your

[02:33:03] own hardware and using it for yourself and giving it to the public, and Steve knows that's a much higher calling, higher responsibility. We have copies of the show at our website, twit.tv slash sn. We have 128-kilobit audio, which, amazingly enough, does not sound any better than the 64-kilobit audio. But it's there because Apple downsamples and we need it to be bigger, etc. It's a long story. I don't even want to bore you with it. We also have video, which Steve refuses to do because

[02:33:32] he fought us every step of the way. But we've been offering video for some time now on security now. You can get that at twit.tv slash sn. There is a YouTube channel for the video. More importantly, you can subscribe in your favorite podcast client and get it automatically so you don't even have to think about it. Every Tuesday you'll get a new security now like magic. We will be back next Tuesday. I will be back home, I'm sad to say. I apologize for any stray noises coming in

[02:34:01] from my lanai. It's been very nice to have a little tropical bird tweeting background. It's in the 80s. The breezes are blowing. It's just beautiful. It's really lovely here. Thanks, Steve. Have a great week. We'll see you next time on Security Now. Righto. Hi there, Leo Laporte here. I just wanted to let you know about some of the other shows we do on this network. You probably already know about This Week in Tech. Every Sunday, I bring together some of the top journalists in the

[02:34:31] tech field to talk about the tech stories. It's a wonderful chance for you to keep up on what's going on with tech, plus be entertained by some very bright and fun minds. I hope you'll tune in every Sunday. For This Week in Tech, just go to your favorite podcast client and subscribe. This Week in Tech from the Twit Network. Thank you.
