API Security: Indirect Prompt Injection Threats and the Rise of AI-Driven Exploits

API Security: Indirect Prompt Injection Threats and the Rise of AI-Driven Exploits

API security has evolved from being primarily an infrastructure issue to a complex challenge centered around language and design flaws. Jeremy Snyder, CEO of Firetail, discusses the findings from their latest state of API security report, emphasizing the alarming rise of indirect prompt injection as a significant threat in AI-integrated systems. As APIs underpin much of modern application architecture, understanding how they function and the potential vulnerabilities they present is crucial for organizations aiming to protect themselves from increasingly sophisticated attacks.

Snyder highlights the shared responsibility model in API security, where both developers and security teams must collaborate to ensure robust protection. While infrastructure teams manage the basic security measures, developers are responsible for the design and logic of the APIs they create. This evolving understanding of security responsibilities is essential as threat actors become more adept at exploiting API vulnerabilities, particularly through authorization failures, which continue to be a leading cause of breaches.

The conversation also delves into the distinction between authentication and authorization, illustrating how both are critical to API security. Authentication verifies a user's identity, while authorization determines what actions that user can perform. Snyder emphasizes that many organizations still struggle with authorization issues, which can lead to significant security risks if not properly managed. The report reveals that the time to resolve security incidents remains alarmingly high, while the time for attackers to exploit vulnerabilities has drastically decreased, raising concerns about the effectiveness of current security measures.

As AI technologies become more integrated into applications, the potential for indirect prompt injection attacks increases, necessitating a reevaluation of security practices. Snyder advises organizations to focus on secure design principles and maintain visibility over AI usage within their systems. By implementing governance frameworks and monitoring tools, organizations can better manage the risks associated with shadow AI and ensure that their API security measures are both effective and comprehensive.

 

💼 All Our Sponsors

Support the vendors who support the show:

👉 https://businessof.tech/sponsors/

 

🚀 Join Business of Tech Plus

Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.

👉 https://businessof.tech/plus

 

🎧 Subscribe to the Business of Tech

Want the show on your favorite podcast app or prefer the written versions of each story?

📲 https://www.businessof.tech/subscribe

 

📰 Story Links & Sources

Looking for the links from today’s stories?

Every episode script — with full source links — is posted at:

🌐 https://www.businessof.tech

 

🎙 Want to Be a Guest?

Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:

💬 https://www.podmatch.com/hostdetailpreview/businessoftech

 

🔗 Follow Business of Tech

 

LinkedIn: https://www.linkedin.com/company/28908079

YouTube: https://youtube.com/mspradio

Bluesky: https://bsky.app/profile/businessof.tech

Instagram: https://www.instagram.com/mspradio

TikTok: https://www.tiktok.com/@businessoftech

Facebook: https://www.facebook.com/mspradionews


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

[00:00:13] API security used to be an infrastructure problem. Now it's a language problem and the attackers are fluent. Today on the show, we're joined by Jeremy Snyder, CEO of Firetail, to unpack their latest state of API security report and dive into one of the most alarming threats emerging in AI integrated systems, Indirect Prompt Injection. We'll talk about why generative AI is turning every document and email into a potential attack vector, why authorization failures still dominate breaches,

[00:00:43] and what it really means when Jeremy says there's no AI without APIs. If you care about application security, developer access, or just surviving the next wave of AI powered attacks, don't go anywhere. Welcome to the Business of Tech Lounge, the live version of the Business of Tech podcast. It's Wednesday, May 21st, 2025, and I'm Dave Sobel. I'll be taking questions and comments throughout the show, so make sure to put them in chat. If you have a question, we'll happily respond to it.

[00:01:13] Now I want to thank Sales Builder, our Patreon sponsor whose support makes this show possible. Focus on your IT sales workflow with the power of automation and visit them at salesbuilder.com. That's B-U-I-L-D-R.com. Reminder, I am watching chat. We'll take questions in real time.

[00:01:33] Jeremy Snyder is the founder and CEO of Firetail.io, an API and AI security platform. With a background spanning IT and cybersecurity, Jeremy's held roles in product and sales within cloud security, including positions at Rapid7 and Divi Cloud. He's recognized for his expertise in API security, and has contributed to discussions on the evolving threat landscape of cybersecurity threats. Jeremy, welcome to the show.

[00:02:01] Jeremy Snyder, MD, My pleasure to be here, Dave. Thank you so much. I love that introduction that you gave at the top of the show, though, as well, about the analog to languages and APIs. I'd love to get into that a little bit more. Jeremy Snyder, MD, Well, so let's sort of start right there then, because that's a great place to talk about. Talk to me about it. Traditionally, most of us that have been doing infrastructure work for a long time do think of this as something of an infrastructure security problem.

[00:02:23] Jeremy Snyder, MD, But it's morphed. And having read through the security report that your organization put out, that was my big takeaway. Tell me a little bit more about the thinking about where we are now with API security.

[00:02:35] Jeremy Snyder, MD, Well, I think, first of all, I think you're spot on that the problem has kind of changed from being, let's call it like a network and infrastructure security problem to being kind of an API design flaw problem or an API business logic problem. And that's where I think that language aspect of it really comes in, right? Where in the past, a lot of kind of, let's say, network oriented attacks would have worked against APIs. Those don't work nearly as well anymore.

[00:03:01] Jeremy Snyder, MD, But what does work is that hackers and threat actors understand what APIs are, how APIs work and how to interact with them. Because APIs ultimately underpin so much of the modern internet and so much of kind of modern application architectures, they really are kind of everywhere. And I tell people this, like once you start seeing that this service that you use is powered by an API at the back end, you can't unsee it. And then you just see more and more and more of them.

[00:03:28] And, you know, to kind of make that crystal clear for people, I like to tell people, like, when you pull out your phone to order a ride share or mobile food delivery, very little is actually happening on the phone device itself, right? It's actually sending your geolocation coordinates to a back end service over an API, and then fetching down the list of available services. And that might be things like, hail a car, grab a two wheel ride, like a bicycle or like an electric scooter or something like that.

[00:03:57] Or it might be, you know, order food, whatever is available in that geolocation. Everything from that point forward is API interactions. So like really all of mobile apps are APIs. And so to that point, what it means is that these are the norm. And again, threat actors know that this is the norm for how most modern internet services are built.

[00:04:17] And they know that there's kind of a common playbook of what instructions look like in interacting with an API. This is how I authenticate once I've authenticated, I've got a set of services available for me. These are the inputs that each service might require, etc. And that's where I think that language analogy that you use really comes in very, very clearly.

[00:04:37] Now, this makes me think a lot about like the where the responsibility lies here, right? So I would think for a lot of this, my instinct says, well, the person who's exposing the API has a lot of responsibility for making sure that it is secure, right? That it is only talking to people that it's authorized to, that it's only delivering the right information. Talk to me a little bit about what's happening. Because in terms of the way that I think about it, if I'm again, I'm drawing on my historical past, we talk a lot about like API configurations.

[00:05:07] It's like, well, it feels like it's evolved a lot because hackers are just coming in and just targeting what the API does and want to explain. Yeah. Where's the responsibility lie? How do people address this security problem? Yeah, it's honestly a shared responsibility at this point. And I know that that's like a buzzword that gets thrown around, especially in cloud security context of this kind of shared responsibility model.

[00:05:30] Usually that shared responsibility is between you and whatever service provider that you're contracting with, right? So if you're using a cloud service like Microsoft Azure or AWS or whatever, you've got your responsibility for what's inside your own account and so on. And then they've got their responsibility for keeping the data center secure. The shared responsibility around APIs is a little bit to your point, there is the who's exposing the API and the responsibilities that they have.

[00:05:54] And that's very often is an infrastructure team or a network security team or a cloud security team or whatever that might be. And they're responsible for a lot of kind of basic services around APIs, which is things like the availability of the API, the integrity of the API. So in those two constructs, you would think about like, hey, I'm using something like a load balancer or an API gateway to manage traffic.

[00:06:17] And I'm using something like good certificate management to ensure confidentiality of transmission between the requester and the API itself. But the flip side of it is kind of to what we were talking about a minute ago is hackers just directly target what an API does. And that's on the design of the API. And that really comes back to the responsibility of the developer, the person who built the API. And I think that's actually where a lot of the kind of tension and misunderstanding comes in.

[00:06:45] In so many organizations, you'll still hear a lot of development teams say, well, I'm not responsible for security. But you're responsible for the application that you build. And I think that that is an evolving kind of understanding within a lot of organizations today is that, you know, this kind of, let's say, like the CISA mandate of secure by design. It's not something that is universally adopted yet, but really, really should be.

[00:07:11] As it gets broader and broader adoption, these teams are starting to understand that there really is some developer responsibility about the security of the design of the API that they're putting forward. The security teams that stand them up, they can do their network infrastructure side of things. They can contract with pen testers, ethical hackers, et cetera, to try to identify those flaws. But even when those flaws are identified, the fixes go back to that development team. And so that's really where I see the shared responsibility kind of coming into the API picture.

[00:07:41] Okay. You're a person who lives the cybersecurity world a lot more than I do, right? I tend to, I'm a tourist there much more than I am an actual resident, right? And so one of the things that I push back on is I'm uncomfortable with the idea of shared security. Because if I come at this from a traditional management kind of role, shared security or shared ownership means no ownership, right? Because it's immediately possible to point fingers. No one actually owns it.

[00:08:06] Talk to me about what I'm missing here from a security perspective and the way security people are thinking about it. Because as a management guy, shared responsibility doesn't work. While I hear you, I would actually push back on what you're saying there, which is that even in a managed security environment, if you're an MSP or an MSSP or an MDR provider or whatever, you do what you do for the organization, the user still has their responsibility. So you also have a shared responsibility model there. It's just divided between you and the end user.

[00:08:35] And so there's a little bit of kind of a contradiction, I would say, there where all security is shared responsibility. I'm not somebody who says that everybody's first job is security. I'm not somebody who would say that the organization, like every member of the organization has to have security as their first job. I don't think that's actually the right attitude for most employees and most kind of day-to-day business workers within an organization.

[00:09:04] I think that for most people, the vast majority of people, security should be a secondary thing that they're doing. And they need to have a bare set of minimum kind of understanding of what they're doing. And you talk about, let's say, like security awareness training and anti-phishing testing and so on. And I know that these things make a lot of people's eyes glaze over and they make a lot of people kind of switch off and, you know, frankly, like click through the exercise to get it done on an annual basis.

[00:09:31] But that's the level of responsibility that I do think almost everybody within an organization has. Aside from that, I will agree that the, you know, 90% or the lion chair of the responsibility lies with the info security team or with the managed service provider who's providing information security to the organization. I want to do a quick, I'm taking questions and comments. Edward's got a good one out there talking about the API security deep dives. He's clearly a fan. So Edward, thank you for watching it. If you've got questions, throw it out there.

[00:10:00] Now I want to go from the high level for a moment. I actually want to go a little bit deeper in here because one of the things in both the report and some of the presentations you've done recently is highlight this distinction between authentication and authorization. Right. And from, oh, we've lumped all security now, but we're going to need to break it down and do it. You've highlighted that there's a distinction and that that distinction is critically important. Say more. Tell me more why that's true. Yeah. Yeah.

[00:10:26] And I can give you an example that actually makes it crystal clear because it's pretty easy to understand when you just think about what those two things are for a second. Right. So authentication is who I am. And so for most computer systems, that's a way that I validate who I am. The simplest example is I use a username and password. Right. And hopefully nobody out there is just using a username and password anymore in 2025. Hopefully we've all got multi-factor of some kind or pass keys or something like that. But that is authentication.

[00:10:54] That is establishing your identity in some way. Authorization is actually a two-part question that a lot of people miss one or both parts of. And so authorization is what can I access? And secondary part of it is what can I do with that access? And here's the example that I love to use that kind of illustrates it, I think, in a pretty concrete manner. We're both on LinkedIn. We're connected on LinkedIn. Right. We're first-degree connections. I have my profile. You have your profile.

[00:11:23] I can view my profile. I can view your profile. Right. So for me then to edit my profile, I need to authenticate to LinkedIn to prove that I am who I am. Right. And then LinkedIn will allow me to try to edit my profile. It will then check whether I am allowed to edit my profile. It has a rule that says Jeremy can edit Jeremy's profile. Awesome. Jeremy can view Dave's profile.

[00:11:52] I can authenticate and we're first-degree connections. I know that I can view your profile. I cannot edit your profile. Right. So that level of access and what can I do with that access is the authorization part. In the API context, and we're getting a little deep on this, we talk about something called the principal resource action. So who is the user? What is the data that they're trying to access? What are they trying to do with that access?

[00:12:19] And to go slightly off tangent for a second, and I apologize again if it's going a little deep, APIs should be built in a way following kind of a quote-unquote zero trust philosophy around it. They should be built in a way such that that level of access is not allowed until you have A, authenticated that you are who you say you are, and then B, there is a check that you are allowed to perform the action that you're trying to perform on the data that you're trying to perform that data on.

[00:12:46] So that's the distinction between authentication and authorization, and that's where it comes into play in the API context. And as it turns out, from all of our research and all of our data from the past, I think three years now we've been doing this report, authorization still is by far the number one problem with APIs. Well, so you've given it to me, and by the way, listeners, again, if you've got questions, throw them in the chat and in the comments, and I will happily make sure that Jeremy or I address them.

[00:13:14] But Jeremy, you've released this FireTales State of the API Security Report, and it really does highlight some of these high-level API exposure issues. Give me a little bit of the one, like what was the one or two that really jumped out at you as the big highlight that is new or unique that you want to make sure people are aware of? Well, there's a couple of things. One is, and it's not new, one is just the continuation of this trend line that we've seen over the last several years, right? The number of incidents per year just steadily grows up.

[00:13:44] And, you know, like I said at the beginning, APIs are kind of everywhere. So it's not super surprising to see that that continues to be a problem. I would say some of the interesting developments over the year, and we've chosen a few kind of case studies in there, we're starting to see actual kind of more abusive functions. For the past couple of years, we've seen a lot of kind of data breaches happening over APIs. Somebody gets authenticated access to an API, but again, the authorization check is not there.

[00:14:14] They're able to scrape and download a bunch of data from that API. That's been the key trend for the last couple of years. This year, there was, you know, some fun ones that kind of make you chuckle, but also kind of when you think about them, they have some implications. So one, for instance, around some IoT in-home devices, some smart vacuum cleaners, robot vacuum cleaners that were taken over through an unauthenticated API that allowed threat actors to actually play

[00:14:41] kind of profanity messages over a loudspeaker as these robot vacuums are going around people's houses. And like I said, you know, it kind of makes a lot of people chuckle. But at the same time, you realize all of these devices that are creeping into our homes that are connected or whatnot, many of them actually have API access. And a lot of these APIs can't get updated very easily on these devices except through firmware updates. And if there's a vulnerability on that device, and if that device is somehow reachable, you

[00:15:11] know, there can be some stuff that happens. You know, the case of a robot vacuum cleaner is pretty benign. But if that's, let's say, a refrigerator, or what if it's a medical device within a hospital? So that's something that we see as a little bit of a concerning thing to watch out for. One of the other things that we saw last year in 2024 was we saw our first regulatory, sorry, regulator crackdown around APIs.

[00:15:36] So it was a company called TrackPhone, which is a prepaid division of Verizon Wireless. So Verizon wholly owns the TrackPhone brand. And they actually had what's called a consent decree, which is effectively when a regulator in a regulated industry, which telecommunications is, comes in and says, hey, company X, you've done a really bad job. We could shut you down. We're not going to. We're going to allow you to continue to operate if you consent to this agreement or this decree.

[00:16:07] And in that decree, you commit to taking certain correct actions. In their case, they had three breaches around APIs in a short period of time. And the regulator said, like, you have to fix your API security. You have to undertake certain other actions around cybersecurity and some best practices there. If you don't do that and attest to it within a six month period, we can shut you down. And so that is something that I think is actually pretty impactful, right?

[00:16:32] For a lot of organizations, having the threat of a regulator over your shoulder who says, if you want to stay in business, you got to clean up. That doesn't happen often and it doesn't happen lightly. So that was something new last year that was pretty interesting to observe as well. Now, the thing that really jumped out to me when I was digging through the report was the time to resolution still running around 80, 180 days and the times of tax is just 22 minutes.

[00:17:00] And I really looked at that and I sort of said, like, you know, AI feels like AI tooling space around this has the potential to make this both better or worse. Tell me a little bit about the trend and where you see this going. I think it's going to get worse before it gets better. And I'm not the only one to say that, by the way. I would cite for anybody who's interested, by the way, our report is free, but there's another great annual report that I've been a many year reader of that is kind of an annual

[00:17:30] must read for me. And that is the Verizon DBIR. I think you might've mentioned it on an episode recently. Yeah. Yeah. In there, one of the things they talked about was that, you know, for the last several years, email compromise has been kind of the main initial attack vector, like the main way that a threat actor infiltrates an organization. And for the first time ever vulnerability exploit, I think surpassed it in 2024.

[00:17:55] And so to your question, this mean time to exploit or mean time to attack coming down so quickly, everybody speculates, although I don't know that anybody has hard evidence of this, that this is a result of AI. That, you know, you can find a vulnerability or a vulnerability gets discussed, published, you know, whatever becomes, people become broadly aware of it. But it's very easy to go to one of these AI coding assistants, and some of them are very, very good.

[00:18:23] Give it the right prompt, give it the right context, and get an exploit developed. And, you know, 22 minutes is the time from like kind of knowledge of it to exploit being available. There's like a round of testing that can go in there. I can do a test in a lab to make sure that it works before I get it out into the world. And all of that happens in 22 minutes.

[00:18:46] So I would say almost everybody agrees that this is most likely AI fueled, that makes that kind of mean time to exploit availability down to 22 minutes. And that is a little bit of a wake up call in my mind. So the other thing that really jumped out to you, and by the way, listeners, I'll keep reminding you, if you've got questions, throw them in there. I'm keeping an eye out there. I've got plenty though, so I'll keep on going.

[00:19:13] So Jeremy, the one of the other things that we were discussing, you know, as we prepared to talk a little bit more, you sent me some of the recent stuff around indirect prompt injections. I was really digging into this and going, okay, based on the report you've just put out and the continued growing exposures and risks there, and now this idea of indirect prompt injections, like this feels like this is expanding the attack surface for APIs even further. Give a little bit of your perspective on what's going on with this new style of attack.

[00:19:43] Yeah. So first, let me kind of zoom out for a second to kind of explain what the indirectness means. And it'll actually touch on something that you said at the very beginning of the show, which is why I say there's no AI without APIs. I'll actually start with that piece real quick, because I think it's helpful to understand to frame the whole conversation around this. So first of all, we hear a lot of talk about LLMs and agentic AI and MCP and LLM powered applications and blah, blah, blah, blah, blah. Right.

[00:20:13] At the end of the day, all of this falls under a category of what we call workload AI utilization, as opposed to workforce. And I'll talk about that in just a second. So this is basically taking an application or some software code and integrating it with an AI model, provider, service, whatever you want to substitute into that gap. The way that the application talks to the AI, 99.9% of the time, I've not seen a counter example, is over an API. Okay.

[00:20:42] So when you look at, let's say like chat GPT or the open AI services, or you look at anthropic cloud service, or you look at any of the AI as a service offerings from the cloud provider. So like Google cloud platforms, Gemini, Microsoft Azure has two offerings, open AI and Microsoft Azure AI service. Similarly, if you look at what Amazon has something called bedrock and Sage maker, if you're going to integrate an application with any of those services, it's happening over an API.

[00:21:11] So that's the first thing that I want to kind of just lay out there. And by the way, you've talked on the show before about your AI is only as good, your AI results are only as good as your data. So you think about kind of some of the training exercises where you load data against one of these things and you'll hear like rag and you'll hear training data and training a model or fine tuning a model or whatever you hear around that. Similarly, all of that data transfer is happening over APIs as well. Right? So just to lay that as a groundwork.

[00:21:39] That's why we always say like, there is no way I without APIs, like APIs are actually kind of having a moment with AI adoption, if you will. Right? Right. Now the question about indirect prompt injection. So what ends up happening is that you have some AI powered application and there is something that gets fed to the AI, whether that is just a prompt, like a question, like give me an answer back. And that could be as simple as I'm on an e-commerce website or I'm on a travel website and I'm going

[00:22:08] to go to some AI for a recommendation of what I should purchase or what I should add to my itinerary or whatever. And a lot of AI engines are actually quite good at that. You give them some catalog of items. You give them, let's say a history of the user's purchases or their travel preferences or whatnot. All of this, this is a great AI use case, by the way, right? This recommendation engine use case. But how are you going to invoke that use case? How are you going to have it start off?

[00:22:34] Well, it starts with me, let's say shopping or trying to purchase some tickets and a hotel room or whatever, right? I'm initiating a transaction. At some point, what I'm doing initiates an API call to the application, which then initiates an API call out to the service. Awesome. What if in that first step between me and the initial service, there are vulnerabilities or there are flaws in the application logic?

[00:23:01] So a threat actor who understands where they can interact, not with the AI directly, but with the application that talks to the AI can inject something in there that is, let's say, malformed or has known jailbreak techniques and it passes it via the application to the LLM. This is what's meant by indirect prompt injection. And recently, I've been reading a couple of research reports around some proof of concepts

[00:23:28] of this and how pervasive it is with a lot of early kind of prototypes of AI-powered applications that are available through, let's say, like open source code repository scanning or things like that. And what it really speaks to is actually there's kind of a double importance of chaining your applications together correctly. And the risk of a vulnerability at any point in the chain can get passed down.

[00:23:56] And because so much of the interaction with the LLMs happens over what a lot of people will call a non-deterministic language model, right? So instead of having a very well-structured API call where all of my request parameters are very tightly defined, I have some English that I send through. And that English might get passed across to the AI and it might have bad results. And that's kind of the indirect prompt injection attack.

[00:24:23] And so that's where I think like there's kind of a double importance because there's API coming in, then there's API going out the other side. And by the way, API coming back and then kind of making a round trip to the user. And a lot can go wrong in that scenario. Well, I've been thinking about this too from the perspective of you've talked on some of your lectures and recently about the fact that current LLM tooling kind of doesn't even address many of the first party risks, like all the way down to bad design, poor validation.

[00:24:52] It's been a theme of the day. So trying to get this to actionable stuff, you know, what are the concrete steps that your organization, you are telling the development teams to make sure that they evaluate, particularly as they're integrating LLMs into their APIs, like before they do deployments? Yeah. It actually goes back a little bit to what we were talking about earlier with the general like, hey, if I'm launching an API, who's responsible for securing it, right? And that shared responsibility side of it. The network people can do what the network people can do.

[00:25:21] That remains the case, but the importance of what the developers can do with the security of the design of the API goes up like 10 X or a hundred X in importance because of this chaining property and because of the potential impact of what gets passed through the application out and then back in. And so they have to be kind of doubly careful to make sure that they do control whatever they can control on that side. The LLMs for the most part are kind of black boxes.

[00:25:49] So unless you're a very large, well-resourced organization that is doing some of your own training or let's say running your own foundation models in-house or something like that, nine times out of 10, you're going to be accessing an external model and you can't really control what happens on that side. So what is within your control that you can do where you can kind of do your best to eliminate risks is on your first party application that you're developing. And again, it's kind of inputs and outputs out the backside and then re-inputs back as it

[00:26:19] comes back from the LLM. And all of that is really down to the design of the application and kind of the business logic that's within the application design. That's where the core focus has to be. So when we talk to organizations about what they can and should do, it's really make sure that your source code is well-designed. Make sure that you've got secure design defaults in everything that you're doing. Make sure that you kind of eliminate a lot of fuzziness. And it's almost counterintuitive because people say the great thing about LLMs is that

[00:26:48] you don't have to be super prescriptive when you're typing up a prompt. You can just ask like a general kind of natural language question. And I don't need to give an LLM like very, very prescriptive, you know, line by line, micro detailed instructions on what I want back. But you kind of need to give the application outside of what goes in and out of the LLM. You need to control all of those things because those are the input parameters that are within

[00:27:15] your control where this indirect prompt injection stuff can happen. Now, I want to zoom out as we wrap up our time here, kind of get a broad sort of strategy perspective from you. Now, one of the things I talk a lot about on the show is data governance. I've been talking about a lot of the things that organizations need to put together. And recently, you know, you've highlighted that 90% of AI usage is happening outside the security team's visibility entirely. So how broadly, you know, as we think about the security principles here, how are you advising,

[00:27:44] you know, with an audience of MSPs and MSSPs, how are you as somebody who lives this world advising them to address AI usage, how a governance system should work, how do they manage the shadow AI? Like what's the guidance you're giving them? Well, I think governance is actually the right word to start with thinking about it. And I know governance is a little bit of a fuzzy word that different people will have different interpretations on.

[00:28:09] I'll give you kind of my view on it as somebody who used to run IT and cyber for a couple of companies and then a video game company for a few years, which was a ton of fun, but ultimately like ended in disaster. We can talk about that on another time. Governance in my mind, I generally think of as, is the organization using a technology in the way that is okay or the way that the organization intends to use a technology? So AI is just another example of a technology that you can insert into that.

[00:28:37] So when you think about AI governance, very often the questions will be like, are we using the models and the providers that we have approved for usage? Right. Right. And, you know, honestly, a couple of years back, sorry, a couple months back, the real example that was kind of flying around that everybody was talking about was like, oh crap, DeepSeq. And it's sending all the data to China at the backend. And so like a lot of organizations were like, oh, we have a rule. DeepSeq is not allowed. You can't use DeepSeq. Right.

[00:29:04] So that to me is like an example of a governance rule that, you know, which providers are approved for an organization. And the second part of that would be, well, maybe it's okay for us to do, let's say like an examination of our product catalog using an AI engine for things, again, purposes like a recommendation engine, but we can't send financial data or we can't send HR data to an AI or to an LLM. That would be another example of a governance rule. Right.

[00:29:33] So what data is acceptable to use with an AI system? So this AI governance framework, I think about it very much from that same perspective, but what's the first step in kind of going down a governance path? And the first step that I always tell people is, well, do you know what's happening today? And to the point that you raised a second ago, you know, we're not the ones, by the way, who came out with that 90% of AI usage as shadow AI. That's been kind of highlighted by a couple of much larger organizations who have

[00:30:03] done a broader data samples and research on that area. I think we cite the exact source that I can't remember off the top of my head in our report. Sure. But what it comes down to is like, you have to know what is already happening within your four walls. And I can guarantee you that for most organizations, somebody within the company is already using AI. It might be as simple as somebody in a browser sending some questions to chat GPT, or it might be as deeply embedded as somebody's already built an application that talks to anthropic cloud. Right.

[00:30:33] Right. But if you don't have visibility onto it, you just don't know. So always starting with visibility is the way that we recommend people go about kind of trying to handle or get their arms around this question. Right. And so there's a number of things that you can do. So if you think about, you know, kind of a Gentic or those LLM powered apps that we talked about earlier, well, scan your source code repositories for any references to third party providers. Scan your cloud systems to see if any of these AI as a service offerings are turned on.

[00:31:02] Like that's a great starting point on what we call the workload side of it. And then on the other side, and this is actually a big question for a lot of organizations, do you have any kind of monitoring in place for what websites you're using? And a lot of organizations, this is this gets is where things get a little bit divisive because many organizations will like, I don't want to be snooping on my users. And many users are I don't want our corporation, you know, that the corporate bosses to be snooping on me.

[00:31:29] But I can tell you that in most organizations, people sign off on an IT policy that allows for monitoring of their usage of corporate devices. Right. And in particular, their access to corporate data and how and where they might be sending that data. So most users and most organizations have already opted in by virtue of their employment or by virtue of their acceptance of an IT policy in that regard.

[00:31:55] What we found is a pretty effective method is not trying to, let's say, lock it down or block it, but at least providing visibility onto that side. Just to plug something we've been working on for a second here, we've been building a browser plugin that centralizes visibility of use of different AI providers kind of sits silently and obtrusively in the background. And it just gives an IT team central visibility onto what's happening.

[00:32:22] So, so anyway, visibility is the starting point for all of that. Once you have the visibility and you can kind of analyze what's already happening, then you can sit down and have a conversation about, well, what do we allow to happen or what do we not want to allow to happen? That's typically the process that I recommend most organizations start with. Well, I'm going to end this right there, Jeremy, because you've literally just given the prescription for a managed services offering right there. Yeah. I'm going to make a highlight for our MSP audience. That's exactly what to focus on.

[00:32:52] Jeremy, if people are interested in learning more and getting the report, what's the best way for them to get in touch? And I know you've got a podcast as well. Yeah. So firetail.io or .ai, we have both. And I think one directs to the other. So either one will end up on the same website, but that's the easiest place. The report is available there. If you don't see it right away, just go on to the blog section or the resources page and it should be available there. Free download for anybody. Also linked from the top of that page is the Modern Cyber Podcast.

[00:33:19] We interview a lot of security experts across various areas. We spend a lot of time focusing on kind of new technologies that are emerging, whether that is cloud, whether that is new applications, whether that is AI. And we try to kind of bring visibility onto new threats and new risks that are emerging in the technology landscape. And I know you've got plenty to talk about because I can't even keep up with them on my show. Well, Jeremy, this has been fantastic. Thank you so much for joining.

[00:33:47] We look forward to having you back on the show at a future panel. Awesome, Dave. Thank you so much. It's been a pleasure. Now, next up, I want to highlight an interview that's dropping this weekend. Garrison Hovsan, who's the CEO of EasyDMark, delved in with me into the complexities of email authentication and the importance of implementing DMAR for businesses of all sizes.

[00:34:09] He shared some insights on how simplifying the deployment process can enhance security, protect brand reputation, and ultimately drive customer satisfaction. Here's a preview of that interview. Now, DMAR is one of those areas where it's been traditionally seen as really complex to deploy. You know, what are the, as you rolled this out, particularly with MSPs and small business customers,

[00:34:33] what have you observed have been the biggest friction points on the deployment of a DMAR solution? As always in security risks, if you do something wrong, totally valid emails will be rejected. Imagine an IT guy or an MSP. You receive complaint from your boss or business people or from your customers about rejected emails. Why are my emails not delivered to the inbox?

[00:35:03] Why I'm not sending my invoices, etc., etc. The first place is risk, risk of the failure, mistakes. And the DMAR itself is really very hard. 70% of organizations which try to deploy DMAR themselves, they fail.

[00:35:24] It's really very simple from the first place for the really very small organizations which are using only one, two sending sources. But it goes really very, very hard when you have multiple sending sources. And for MSPs, risk is even higher. They don't have the full information. They don't have the full visibility. They don't have the full control.

[00:35:53] Organizations, lots of them delegate the controls or they expect help from MSPs. But at the same time, they do something for marketing, for support, for billing, invoicing, etc. And it can become a really very messy situation. I know my first 100 MSPs by face. Now, we have hundreds of hundreds of MSPs.

[00:36:23] I wish to know them all, but at least several hundred direct customers and MSPs, I never meet the one customer who knows all their sending sources. Initially, everyone tells that I know what's going on in my infrastructure. Okay, that means the integration is just five minutes. I'll help you. I'll deploy email authentication. I'll show the gaps. I'll show the risks just for help. I don't need to sell anything.

[00:36:53] And yeah, we discovered lots of interesting, interesting cases. We provided nice visualizations, visibility, transparency over the infrastructures. And by removing these frictions, visibility, control, risks, risk for failures, MSPs become more comfortable. But to deploy security stuff.

[00:37:24] I know MSPs can totally relate to many parts of that interview. My Patreons already have the interview now if you want to get access early. It's a benefit of supporting the show. Visit patreon.com slash MSB radio to sign up and get access now. The interview will drop on the YouTube channel and podcast feed this weekend. I want to thank Sales Builder, our Patreon sponsor, whose support makes this show possible. Focus on your IT sales workflow with the power of automation and visit them at salesbuilder.com.

[00:37:52] That's B-U-I-L-D-R dot com. Vendors, you too can get your name mentioned of the live show. It's a simple monthly subscription. Visit patreon.com slash MSP radio to learn more. And listeners, you can support the show. Like, share, and follow on your favorite platforms. Or support directly on Patreon with our Give What You Want model. You set what you think the content is worth and you get access to videos early.

[00:38:17] If you have a question and are listening to the recording, send it in at question at MSP radio dot com. And we'll answer it on next week's live show. Thanks for joining me for the Business of Tech Lounge. And I will see you next time. We'll see you next time.