The Future of AI Security: Risk Assessment and Management for Generative Applications with Sahil Agarwal

The Future of AI Security: Risk Assessment and Management for Generative Applications with Sahil Agarwal

Sahil Agarwal, co-founder and CEO of Enkrypt.ai, discusses the critical importance of security and compliance in the realm of artificial intelligence (AI) models. His company focuses on helping enterprises adopt generative AI while managing the associated risks. Agarwal explains that the mission of Enkrypt.ai has evolved from developing encryption algorithms to creating comprehensive solutions that provide ongoing management and monitoring of AI applications. This shift aims to ensure that businesses can safely integrate AI technologies without exposing themselves to brand, legal, or security risks.

Agarwal highlights the dual approach of Enkrypt.ai, which includes an initial risk assessment followed by continuous monitoring and management. The risk assessment involves simulating attacks on AI systems to identify vulnerabilities, while the ongoing management ensures that any identified risks are mitigated effectively. This iterative process creates a feedback loop that enhances the security posture of generative applications, allowing businesses to operate with greater confidence.

The conversation also touches on the economic challenges surrounding generative AI, where many companies invest heavily in projects that struggle to reach production due to unresolved security and compliance issues. Agarwal notes that while there is a democratization of AI technology, the real value lies in how enterprises apply these models. He emphasizes the need for businesses to adopt a proactive approach to security, particularly as they scale their use of AI agents and chatbots.

Finally, Agarwal addresses the pressing issue of data leakage, particularly when using third-party AI models. He advises organizations to keep sensitive data on the client side and to choose trusted solutions to mitigate risks. By implementing robust security measures and maintaining a vigilant posture, businesses can harness the power of AI while safeguarding their proprietary information.

 

💼 All Our Sponsors

Support the vendors who support the show:

👉 https://businessof.tech/sponsors/

 

🚀 Join Business of Tech Plus

Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.

👉 https://businessof.tech/plus

 

🎧 Subscribe to the Business of Tech

Want the show on your favorite podcast app or prefer the written versions of each story?

📲 https://www.businessof.tech/subscribe

 

📰 Story Links & Sources

Looking for the links from today’s stories?

Every episode script — with full source links — is posted at:

🌐 https://www.businessof.tech

 

🎙 Want to Be a Guest?

Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:

💬 https://www.podmatch.com/hostdetailpreview/businessoftech

 

🔗 Follow Business of Tech

 

LinkedIn: https://www.linkedin.com/company/28908079

YouTube: https://youtube.com/mspradio

Bluesky: https://bsky.app/profile/businessof.tech

Instagram: https://www.instagram.com/mspradio

TikTok: https://www.tiktok.com/@businessoftech

Facebook: https://www.facebook.com/mspradionews


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

[00:00:02] So what is security and compliance for artificial intelligence models? I was intrigued, and so when somebody approached me to talk a little bit more about it, I said, absolutely. Sahil Agarwal joins me from Encrypt.AI. They focus specifically on solving this problem, analyzing it for customers, and providing ongoing management and monitoring to it. On this bonus episode, The Business of Tech.

[00:00:27] Today's episode is supported by Huntress. Most cybersecurity solutions are built for massive enterprises with big budgets, not Huntress. They're the fully managed cybersecurity platform built for all businesses, not just the 1%. Huntress purposely builds security solutions like EDR, ITDR, SIM, and security awareness training to equip their team of elite threat hunters to handle the heavy lifting of security for you.

[00:00:52] When threat actors strike, Huntress' 24x7 Global Sock shuts them down before they're even on anyone else's radar. But they do more than just chase alerts, they lead the charge in industry research and knowledge, bringing expert protection and peace of mind. That's why users on G2 rate their EDR number one for growing businesses. To see how their expert threat hunting team gets the job done, visit Huntress.com slash MSP Radio.

[00:01:23] Well, Sahil, welcome to the show.

[00:01:54] Thank you.

[00:02:30] Thank you. Thank you. Thank you.

[00:03:08] Gotcha. Thank you. Thank you. It could be a security risk, such as a prompt injection, where you could have data linkage. So these are some of the risks at a very high level that we would capture, or we would first find if your system is actually prone to such risks.

[00:03:33] And then go ahead and deploy solutions to mitigate these risks in the alternative applications. Now, this feels like it has two components. There's an initial kind of assessment piece where you would get an organization who's new to you, and you'd say like, hey, let's take a look at where you're at and what the risks are. And then it feels like there's a second part, which is the ongoing maintenance and monitoring and management of that. Walk me through kind of the approach and how those two work together.

[00:04:01] Yeah, that's a very, very good question. And that's actually a part of a lot of the research that the team has been doing. Okay. So what red teaming or what risk assessment is doing is, let's say you have an agent or a copilot with a particular use case. And there are certain guidelines that you would like it to follow. There are certain policies you would like it to follow or regulations that may apply.

[00:04:24] So what risk assessment is doing is in real time dynamically prompting your system and attacking it, simulating those attacks to see whether a system responds in an adversarial manner. So that's how the risk assessment is done. But what it's giving you is a detailed report of all the risks and vulnerabilities that exist.

[00:04:51] With that detailed data set, you can go ahead and refine the mitigation system itself. So that your mitigation system is much more efficient. Rather than putting a brute force, rather than brute force in a solution there, you can have a much more efficient and elegant system in terms of mitigation for your application.

[00:05:13] And that flows back into the risk assessment because if there's a false positive or false negative, we can put that back into the risk assessment itself. So they form a flywheel, which is an ever-improving system to help make sure your gender applications work as you want them to. Now, this feels like traditional security, right? Because there's a lot of looking for system vulnerabilities, looking to push the limits. We do an assessment that we do on...

[00:05:42] Is this all the same when it comes to generative AI? Are there new risks? What are we looking for more broadly when we add generative AI to organizations that we now have to look for? I mean, more broadly because you're essentially simulating human beings, right? At a very meta level, if you think about it, right?

[00:06:03] And it's not traditional vulnerability management, but it's also that is a copilot or an agent that you're bringing into your organization. Is that person biased, for example? Or is that person prone to, say, toxic stuff in front of your employees or your users? Is that person looking to leak sensitive information from your system to your competitors? Or praise your competitors in front of your users?

[00:06:33] Things like that, right? Where your brand, your business are facing liabilities and risk. Just from the fact that you have more agents or more copilots or more chatbots in your system. Now, one of the things I've been really curious to talk to owners and developers and product people like you is the idea of the cost models here.

[00:06:59] Because one of the things that's really interesting to me is that I feel like this is a space that we're still all figuring out, right? Because generative AI broadly is not making any money. OpenAI spends nearly double what it generates in revenue. And while I know this is about to be seriously disrupted, we've seen it with DeepSeq and the cost models around that. But fundamentally, nobody's making money at the model model. And now we're trying to layer stuff on top of that. Talk to me the way you're thinking about the cost models of using AI in products.

[00:07:29] Yeah. So economics is always, I mean, it is a big factor when you're talking to our customers as well. They need to be getting enough ROI. But it's a chicken and egg problem. Because what we're also seeing is companies have been spending millions, tens of millions, hundreds of millions of dollars in generative AI projects. And they're excited about it. But none of them or a very small percentage of those projects are actually going into production.

[00:08:01] Because of the fact that you need to solve for the risk and security and compliance around the generative AI projects itself. So it's like they need to be, they need to spend that money on projects. They need to spend money on a solution like encrypt, which is only then when they'll start getting value. Right. So I think that aspect is still there.

[00:08:27] In terms of the model level, obviously, like we're seeing a democratization of the technology itself. There's open source, which is now at the same level, if not better than some of the largest closed source model providers out there. And that will continue to happen. I think models will continue to get better. They'll continue to get smaller and in the hands of more and more people.

[00:08:57] I think the value there will lie in the application lab or from an enterprise perspective. And I hate to say it, but ultimately, if agents are coming and being deployed at scale, it's about how do I get a linear workforce with a 24-7 AI agent that's doing the work that I have other people manage for me. Right.

[00:09:27] So I think that's where I see the field heading towards. Gotcha. Now, I want to take the quick step back again to make sure that I've made this concrete for Lister. I'm sure you've got a use case of the use of the encrypted AI's technology. Talk me through a really great use case of what an ideal customer looks like and how you've implemented. Yeah. So some of our customers, they're SaaS application providers.

[00:09:53] Like in the industries, they're adding generative activities to their own product. Before Encrypt, when they've been testing their product in front of users, they've faced some major hurdles. They've been in the news for not so good reasons because of generative AI.

[00:10:14] And that's where we come in, where we identify all the potential vulnerabilities and risks in their application and show it to them. And they were happy, but also surprised at the same time or not so happy at the same time that such things exist. Because I think to your point, everyone's learning at the same time, including us.

[00:10:42] We're also finding out a lot of these challenges ourselves as well. And so it's essentially head of AI, head of product sort of roles or AI-focused roles that become our customers.

[00:11:00] And once you have the risks identified, we deployed guardrails to mitigate those risks, which ultimately gave them a monitoring solution as well with respect to how their AI applications are being used by their users. Gotcha. One of the risks that I hear a lot about is this idea of data leakage, right? The idea that proprietary information will leak out, will get shared with models, or the models themselves will share back information with employees. How much is that happening in the field?

[00:11:30] Be a real lay of the land of what's happening around data leakage. I think it depends on the use case. If you're just using models from different model providers, whether that's OpenAI, Anthropic, LAMAs of the World, or DeepSea.

[00:11:50] If you're hosting your own model, whether DeepSea or LAMAs or any other open source model, then that data leak problem is much less pronounced. Because you can make sure that nothing is going outside your network boundary itself.

[00:12:07] If you're using OpenAI or Anthropic or the cross-flow model providers, cloud service providers like Azure and AWS, Bedrock, have come in to act as data brokers, where your data is being guaranteed to be safe by these companies. So I think those are not major challenges. I think the biggest challenge is if you're using models like DeepSea hosted within China itself.

[00:12:37] Or if you're using some SaaS applications that have generative capabilities and you have no idea how they're managing your data itself, right? So they may be calling OpenAI's consumer API, where OpenAI doesn't provide any guarantees, for example. Or they may be using some, like their own in-front may not be completely secure itself.

[00:13:06] Now that feels like it's going to be a good portion of the market, though, because a lot of this is going to get baked into products. How are you advising clients to manage the risk around that? So one of the things that we advise is make sure your PII or PHI or any form of sensitive data that you consider proprietary,

[00:13:30] if there are ways to make them stay on the client side and not give it to the end application, I think that's a much cleaner way to ensure you're getting the right service, but your data is staying protected as well. Or go with trusted solutions.

[00:13:54] You can have a sandbox where you can try out these new models or new applications, but having the right security posture is a must to stay ahead. Sahil Argarwal is the co-founder and CEO of Encrypt AI, an AI security and compliance program. He's earned a PhD in applied mathematics from Yale and has extensive experience in AI model development and deployment.

[00:14:21] Prior to founding Encrypt AI in 2022, he led an AI initiatives at Encrypt AI focused on secure and reliable solutions for clients, such as the U.S. Department of Defense. Sahil, if people are interested in learning more, where can they reach out and get more information? They can always reach out to me via email sahil at encryptai.com or reach out to me on LinkedIn. I'm always happy to respond as fast as possible. I really appreciate you joining me today. Thanks for joining me. Thank you, Dave. Thank you for having me.

[00:14:52] Today's episode is supported by Huntress. Most cybersecurity solutions are built for massive enterprises with big budgets, not Huntress. They're the fully managed cybersecurity platform built for all businesses, not just the 1%. Huntress purposely builds security solutions like EDR, ITDR, SIM, and security awareness training to equip their team of elite threat hunters to handle the heavy lifting of security for you.

[00:15:17] When threat actors strike, Huntress' 24x7 Global Sock shuts them down before they're even on anyone else's radar. But they do more than just chase alerts. They lead the charge in industry research and knowledge, bringing expert protection and peace of mind. That's why users on G2 rate their EDR number one for growing businesses. To see how their expert threat hunting team gets the job done, visit Huntress.com slash MSB Radio.

[00:15:48] The Business of Tech is written and produced by me, Dave Sobel, under ethics guidelines posted at businessof.tech. If you've enjoyed the show, make sure you've subscribed or followed on your favorite platform. It's free and helps directly. Give us a review, too. If you want to support the show, visit patreon.com slash MSB Radio, and you'll get access to content early. Or buy our Why Do We Care merch at businessof.tech.

[00:16:17] Have a question you want answered? We take listener questions, send them in, ideally as a voice memo or video to question, at MSB Radio.com. I answer listener questions live on our Wednesday live show on YouTube and LinkedIn. If you've got a comment or a thought on a story, put it in the comments if you're on YouTube or reach out on LinkedIn if you're listening to the podcast. And if you want to advertise on the show, visit MSB Radio.com slash engage.

[00:16:44] Once again, thanks for listening, and I will talk to you again on our next episode. Part of the MSB Radio Network.