The episode identifies a structural shift in how AI adoption is being managed within IT environments: control and accountability are now central concerns, overtaking simple discussions of AI usage or feature deployment. Shadow AI—unmanaged or improperly governed AI agents—has emerged as a tangible risk vector. Government entities, such as the White House, and technology vendors including Microsoft, Cisco, and OpenAI are framing AI not only as a productivity tool but increasingly as a source of operational and security liabilities that demand more robust oversight.
A key example comes from an incident reported by TechRepublic in which an AI agent within a coding workflow deleted both a production database and its backups, resulting in a prolonged, business-impacting recovery from a three-month-old backup. In parallel, The Hacker News highlighted findings from scans of one million exposed AI services, characterizing the market's current AI security posture as lacking, with many endpoints unintentionally reachable from the public Internet. Microsoft's public transition of Agent365 from preview to release was directly tied to fears over the risks associated with shadow AI, indicating industry recognition of autonomous agents as a new attack surface requiring governance.
Supporting developments further validate this trend. Cisco's open-sourcing of AI Bill of Materials (AI-BOM) tooling, Wiz's tracking of non-human identities tied to AI workloads, and OpenAI's rollout of advanced account security all signal a growing industry emphasis on making AI deployments auditable and restrictable. Practices such as phishing-resistant authentication—driven by token theft campaigns analyzed by Microsoft—and continuous permission monitoring, as advocated by Material Security, are now increasingly viewed as necessary safeguards rather than optional enhancements. Providers like Enforcer and products such as Copilot Manager are explicitly focused on surfacing shadow AI usage and enforcing credential discipline, underlining the growing demand for proof of controls.
MSPs and IT service providers now face greater operational complexity and contract risk tied to AI automation. Client expectations are shifting from baseline AI access to demonstrable governance—requiring non-human identity inventories, documented permission boundaries, and validated recovery frameworks for AI-powered workflows. Token harvesting and persistent OAuth grants increase the likelihood that MSPs will be held responsible not just for prevention, but for rapid containment, rollback, and producing evidence during security incidents. Failure to meet tightened SLAs around backup immutability, authentication protections, and agent visibility could soon become a material contract exposure.
03:50 Govern the Agent
06:24 MSP at Risk
09:54 Why Do We Care?
Supported by:
CometBackup
ScalePad
Upcoming event:
The Pivotal Point of IT: Building Services for the AI-First Era
Date: May 13 at 1 p.m. EDT
Register: https://go.acronis.com/davesobelaiera
💼 All Our Sponsors
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
🚀 Join Business of Tech Plus
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
🎧 Subscribe to the Business of Tech
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
📰 Story Links & Sources
Looking for the links from today’s stories?
Every episode script — with full source links — is posted at:
🎙 Want to Be a Guest?
Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:
💬 https://www.podmatch.com/hostdetailpreview/businessoftech
🔗 Follow Business of Tech
LinkedIn: https://www.linkedin.com/company/28908079
YouTube: https://youtube.com/mspradio
Bluesky: https://bsky.app/profile/businessof.tech
Instagram: https://www.instagram.com/mspradio
TikTok: https://www.tiktok.com/@businessoftech
Facebook: https://www.facebook.com/mspradionews
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
[00:00:02] Shadow AI is being treated like a usage problem. It's not. The moment an agent can take actions, unmanaged adoption becomes unmanaged privilege. That changes what clients are buying from MSPs. They're not buying AI access, they're buying policy enforcement, credential controls, and the ability to prove, under pressure, what executed, under which identity, how fast you can reverse it, and whether the AI was deployed with the least-privilege
[00:00:31] and blast-radius controls you'd apply to a privileged system admin. This is the Business of Tech. I'm Dave Sobel. We're seeing a cluster of public signals that AI is moving from advice to action across real systems, and that the security posture around it is getting stress tested in the open. Start at the policy layer. Politico reports the White House is pressing technology companies to step up support,
[00:01:00] specifically around AI-driven cyber attacks. When the administration frames AI as a driver of cyber risk, it's no longer a speculative research problem, it's being handled as an operational threat category. Then look at how platform vendors are framing the problem. VentureBeat notes Microsoft has taken its Agent365 platform out of preview, and it ties that release to Shadow AI becoming an enterprise threat.
[00:01:27] Whatever you think of the framing, the signal is that autonomous agents are being positioned as a risk surface that needs governance at scale. Now look at what happens when those tools touch production systems. Tech Republic describes an incident in which an AI agent in a coding workflow reportedly deleted a company's production database and backups, forcing a restore from a roughly three-month-old backup.
[00:01:54] The failure mode isn't just deletion. It's deletion plus stale recovery, turning an automation mistake into a business impact event. Finally, The Hacker News reports on a scan of one million exposed AI services and characterizes the state of security as bad. At that scale, the key takeaway is that AI-related endpoints are already widely reachable from the Internet, intentionally or not.
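The "deletion plus stale recovery" failure mode can be checked mechanically: an immutability guarantee means nothing if the newest immutable copy is months old. A minimal sketch of that combined check; the backup records, field names, and 24-hour SLA here are illustrative assumptions, not any backup platform's actual API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical backup metadata; real records would come from your backup platform.
backups = [
    {"workload": "prod-db", "taken_at": datetime(2025, 1, 5, tzinfo=timezone.utc), "immutable": True},
    {"workload": "prod-db", "taken_at": datetime(2025, 4, 1, tzinfo=timezone.utc), "immutable": False},
]

MAX_AGE = timedelta(days=1)  # example SLA: newest immutable copy under 24 hours old

def newest_immutable_age(records, now):
    """Age of the most recent immutable backup, or None if there isn't one."""
    immutables = [r["taken_at"] for r in records if r["immutable"]]
    return (now - max(immutables)) if immutables else None

now = datetime(2025, 4, 2, tzinfo=timezone.utc)
age = newest_immutable_age(backups, now)
if age is None or age > MAX_AGE:
    # The recent backup exists but is mutable, so only the stale copy counts.
    print(f"SLA breach: newest immutable backup is {age} old (limit {MAX_AGE})")
```

The point of the sketch: currency and immutability are validated together, so a fresh-but-deletable backup cannot mask a stale recovery position.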
[00:02:22] Put it together: government is naming AI as a cyber driver, major vendors are naming Shadow AI as an enterprise risk, we have public incidents of destructive agent actions in production, and the Internet is full of reachable AI surface area. If you're listening to this and you haven't hit follow yet, on Apple Podcasts, search The Business of Tech. It takes five seconds and you'll get tomorrow's show automatically. Here's what I'm hearing from MSPs on backup.
[00:02:52] They want control. Control over storage. Control over costs. Control over what happens when something breaks. Comet Backup gives you that. Bring your own storage. White label it for your clients and keep margins where they belong. With you. It's why Comet Backup keeps showing up when MSPs ask each other what actually works. See for yourself at CometBackup.com.
[00:03:18] A quick heads up, Acronis is hosting a live event on May 13th called The Pivotal Point of IT, building services for the AI-first era. Their CEO will be laying out Acronis' vision for AI-first service delivery for MSPs, including a new partner program and what they're calling a major platform announcement. If you want to hear directly from Acronis on where they're taking all of this, registration link is at go.acronis.com slash Dave Sobel AI-era. No spaces.
[00:03:50] The mechanism is straightforward. The moment AI stops being a feature you use and becomes a thing that acts, the operational problem shifts from prompt quality to permission design. Agents connect to systems, assume identities, receive scopes, and execute actions. So the real question becomes, what can it touch, what can it change, and what can you roll back quickly if it's wrong?
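Those three questions — what can it touch, what can it change, what can you roll back — can be encoded as a policy gate that runs before an agent executes anything. A minimal sketch; the action names, scope strings, and policy shape are hypothetical illustrations of permission design, not a real framework's API.

```python
# Hypothetical agent policy: scopes bound what it can touch; the reversibility
# flag bounds what it can change without a rollback path.
AGENT_POLICY = {
    "scopes": {"repo:read", "repo:write"},
    "reversible_only": True,
}

# Illustrative catalog of actions the agent might request.
ACTIONS = {
    "open_pull_request": {"scope": "repo:write", "reversible": True},
    "drop_database":     {"scope": "db:admin",   "reversible": False},
}

def allowed(action: str, policy=AGENT_POLICY) -> bool:
    """Gate an agent action on scope membership and reversibility."""
    meta = ACTIONS[action]
    if meta["scope"] not in policy["scopes"]:
        return False
    if policy["reversible_only"] and not meta["reversible"]:
        return False
    return True

print(allowed("open_pull_request"))  # in scope and reversible
print(allowed("drop_database"))      # out of scope and irreversible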
[00:04:16] When organizations can't answer those questions fast, they buy tooling and services that make the environment enumerable and auditable again. That's why we're seeing the rise of inventories that look different from traditional software management. The Register tracks the move from SBOMs to AI-BOMs, bills of materials that include models, data sets, prompts, agents, and dependencies.
[00:04:39] Cisco is open sourcing AI-BOM tooling, and Wiz is tracking non-human identities tied to AI workloads. The common idea is simple. You can't govern what you can't enumerate, especially when what you're enumerating is an agent with tools, permissions, and data access. Identity is the second half of the control problem, and it's tightening. The Next Web reports OpenAI rolling out advanced account security for ChatGPT and Codex,
[00:05:06] partnering with Yubico, and pushing high-risk users toward phishing-resistant authentication, citing reports of stolen credentials circulating in criminal markets. When a single AI account can expose sensitive data or trigger downstream actions, weak authentication becomes an operational liability. You see the same pattern in the channel. Technology Reseller covers Enforcer launching Copilot Manager for MSPs
[00:05:31] to surface Copilot adoption and Shadow AI usage across tenants, down to activity and data movement. The product isn't trying to make AI smarter, it's trying to make AI legible, so someone can administer it. Even the OAuth backdoor story fits the same mechanism. The Hacker News highlights OAuth grants as persistent, often unmonitored access paths: refresh tokens, third-party app scopes, and API behaviors that bypass perimeter assumptions.
[00:06:00] Material Security's answer is continuous monitoring and automated remediation, because point-in-time reviews don't work when access is persistent and behavior changes over time. This is what's driving the shift. Control layers like inventory, identity, and continuous permission governance are being built because the environment can't stay coherent on its own once software can act.
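One way to approximate that continuous grant review is a recurring sweep that flags grants which are stale, broadly scoped, or both. A minimal sketch; the grant export, field names, and thresholds are assumptions (the scope strings echo Microsoft Graph naming but the schema is illustrative, not any IdP's API).

```python
from datetime import datetime, timedelta, timezone

# Hypothetical OAuth grant export; real data would come from your identity provider.
grants = [
    {"app": "calendar-sync", "scopes": ["Calendars.Read"],
     "last_used": datetime(2025, 3, 30, tzinfo=timezone.utc)},
    {"app": "old-reporting", "scopes": ["Mail.ReadWrite", "Files.ReadWrite.All"],
     "last_used": datetime(2024, 11, 1, tzinfo=timezone.utc)},
]

BROAD = {"Mail.ReadWrite", "Files.ReadWrite.All", "Directory.ReadWrite.All"}
STALE_AFTER = timedelta(days=90)  # example policy, not a standard

def review(grants, now):
    """Return grants flagged for revocation review, with the reasons."""
    flagged = []
    for g in grants:
        reasons = []
        if now - g["last_used"] > STALE_AFTER:
            reasons.append("stale")
        if BROAD & set(g["scopes"]):
            reasons.append("broad scope")
        if reasons:
            flagged.append((g["app"], reasons))
    return flagged

now = datetime(2025, 4, 1, tzinfo=timezone.utc)
for app, reasons in review(grants, now):
    print(f"revoke-review {app}: {', '.join(reasons)}")
```

Run on a schedule rather than at audit time, a sweep like this is what turns a point-in-time review into the continuous posture the episode describes.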
[00:06:25] For MSPs, this collapses normal operations and security incidents into the same customer experience, because the workflow converges. Privileged access gets used legitimately or abusively. Changes happen fast, and the MSP is judged on containment and recovery regardless of root cause. The customer doesn't care if it was a breach, a bad OAuth grant, or an over-permissioned agent.
[00:06:51] They care that production changed and services were impacted, and that the MSP can prove what executed, under which identity, and how quickly it can be reversed. That burden is already forming upstream. Policy pressure around AI-driven cyberattacks shows up in downstream practical ways. Updated security questionnaires, procurement language that asks for proof of controls, and tighter incident reporting expectations.
[00:07:18] MSPs are often the ones asked to produce that evidence under time pressure. And notice who bears the exposure. The business absorbs downtime and data loss, customers absorb service disruption, and the operator, often the MSP, absorbs the emergency response burden and the prove-what-happened burden. Microsoft's own reporting illustrates why. In a write-up covered by The Hacker News, Microsoft detailed a multi-stage phishing campaign
[00:07:47] using adversary-in-the-middle tactics to harvest credentials and authentication tokens in real-time, designed to bypass multi-factor authentication. When attackers capture tokens, they don't break in. They sign in, then act as a legitimate user across mail, files, apps, and admin services. That turns every identity and consent decision into an operational decision. What gets logged? What gets revoked?
[00:08:14] And how fast you can contain it once it's already inside the tenant. Now pair that with recovery. MSP360, in a PR Newswire distributed release, positioned expanded immutability support as protection against ransomware that targets backup repositories, and noted immutability is increasingly tied to audit readiness and cyber insurance expectations. The market signal is that attackers with privileged access often try to make recovery impossible.
[00:08:44] If the MSP treats this as security plus support, the MSP becomes the cleanup crew. Token revocations, mailbox restores, emergency access, backup validation, and post-incident reporting, often under flat rate expectations. Or the MSP becomes the provider that governs the automation layer. That's identity, access paths, permission boundaries, and recoverability. So containment and recovery are deliberate services, not surprise work.
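Governing that automation layer starts with the register itself: which agents exist, under which non-human identities, with which permissions and data access. A minimal sketch of such an inventory; the record fields and example entries are assumptions about what an MSP would capture, not a settled AI-BOM standard.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One entry in a hypothetical AI agent asset register."""
    name: str
    identity: str                                   # non-human identity it runs under
    permissions: list[str] = field(default_factory=list)
    data_access: list[str] = field(default_factory=list)

# Illustrative register built from existing telemetry.
register = [
    AgentRecord("code-assistant", "svc-code@tenant", ["repo:write"], ["source-code"]),
    AgentRecord("report-bot", "svc-report@tenant",
                ["Mail.Send", "Files.ReadWrite.All"], ["mailboxes", "sharepoint"]),
]

def identities_with(permission: str) -> list[str]:
    """Answer the incident-time question: which identities hold this permission?"""
    return [a.identity for a in register if permission in a.permissions]

print(identities_with("Files.ReadWrite.All"))
```

Even this flat structure makes containment a query ("revoke every identity holding scope X") instead of an investigation, which is the "deliberate service, not surprise work" distinction the episode draws.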
[00:09:16] This episode is brought to you by ControlMap. Growing MSPs are using ControlMap to build recurring revenue by expanding their GRC services. Starting now, ControlMap is offering a free plan for MSPs looking to get started with providing compliance as a service. Create a free account and run an assessment. Track key items like policies, risks, and evidence in one place. It's a practical way to prove value to a client before deciding to expand your compliance offering.
[00:09:44] Try ControlMap for free today. Visit scalepad.com slash Dave to get started. That's scalepad.com slash Dave. Why do we care? This isn't really about AI security in the abstract. It's about misclassifying AI as an adoption issue when it's really a control plane issue. If an MSP treats agents just like another productivity layer, it will underinvest in identity design,
[00:10:14] permission boundaries, logging, rollback, and recovery validation. And when an agent acts under an identity the MSP configured or governs, the MSP is pulled into the accountability chain as the operator of record. So what to consider? Treat non-human identity enumeration as a billable discovery service. AI-BOM isn't a settled standard, so don't wait for one. Deliver an AI agent asset register.
[00:10:43] What agents exist, what identities they use, what permissions they hold, and what data and workloads they can access using existing telemetry. Validate backup currency and immutability together, not separately. Immutability protects against deletion. It doesn't protect against staleness. Client SLAs should specify both the immutability guarantee
[00:11:08] and the maximum acceptable backup age for AI-adjacent workloads. Establish a hard policy on phishing-resistant authentication for any account with AI agent permissions or admin consent rights. Token harvesting means MFA alone is insufficient for high-privilege identities. If this trend continues, by the end of 2027, at least one widely publicized multi-tenant agent-related incident
[00:11:37] will drive MSP contracts and customer security addenda to require, in writing: first, non-human identity inventory; second, documented permission boundaries for agents; and third, measured recovery objectives for AI-adjacent workflows. If we don't see those clauses appearing, this thesis is wrong. This is the Business of Tech. Want more from the Business of Tech?
[00:12:04] Join Business of Tech Plus for ad-free episodes, early interviews, extended cuts, subscriber-only shows, and exclusive member perks and analysis. Sign up at businessof.tech slash plus. And follow this show on your podcast app, and if you're on YouTube, hit subscribe and the bell so you never miss a story. Reviews and comments help spread the word, too. Interested in advertising? Head to mspradio.com slash engage.
[00:12:33] The Business of Tech is written and produced by me, Dave Sobel, under ethics guidelines posted at businessof.tech. Thanks for listening. I'll see you on the next episode. Part of the MSP Radio Network.

