Data Gaps, Not Hype, Block Productive AI for MSPs: Insights from Dr. Fern Halper

The episode reveals a persistent and widening governance gap as organizations rush to implement AI without adequate data foundations or operational controls. According to observations from Dr. Fern Halper, current AI adoption is overwhelmingly characterized by top-down pressure, especially around generative and agentic AI, but is constrained by immaturity in governance, data integration, and organizational readiness. Microsoft’s bundling of Copilot in E7 licenses highlights this structural shift, as “consumerized” AI solutions proliferate without corresponding investments in foundational data and oversight.

Supporting this view, new research cited by Dr. Fern Halper indicates that nearly half of organizations are under executive mandates to pursue AI, but most remain stalled in the experimental or pilot phase. The failure to move beyond pilots is not primarily a technology limitation but stems from inadequate data quality, lack of lineage controls, fragmented data governance, and persistent data silos. The report identifies that only about 35–45% of organizations deploying generative or agentic AI have come up through a cycle of machine learning and data foundation development.

Secondary examples reinforce the governance and risk exposure. MSPs and end-customers are increasingly relying on off-the-shelf or prebuilt AI (such as Copilot or ChatGPT) for individual productivity, rather than building production-ready, data-driven applications contextualized with proprietary information. This often leads to uncontrolled proliferation of “shadow AI”—tools deployed outside formal oversight—further compounding compliance and data protection risks. As organizations start experimenting with agentic AI, the risks escalate, since these systems not only generate outputs but can take direct action, magnifying the impact of weak governance and access controls.

For MSPs, IT service providers, and technology leaders, the operational consequence is heightened responsibility around governance, auditability, and data management. The unchecked spread of shadow AI introduces contractual and regulatory exposure, particularly as clients seek to incorporate AI tools without formal policies or understanding of associated risks. Providers should prioritize baseline governance frameworks, client-facing AI literacy training, and infrastructure capable of accommodating unstructured data, lineage requirements, and auditing. Failing to address these priorities increases the risk of service breakdowns and complicates SLA enforcement as AI systems broaden operational scope.

Supported by: 
JumpCloud 
HaloPSA 
Acronis 

Upcoming event: The Pivotal Point of IT: Building Services for the AI-First Era 
Date: May 13 at 1 p.m. EDT 
Register: https://go.acronis.com/davesobelaiera

 

💼 All Our Sponsors

Support the vendors who support the show:

👉 https://businessof.tech/sponsors/

 

🚀 Join Business of Tech Plus

Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.

👉 https://businessof.tech/plus

 

🎧 Subscribe to the Business of Tech

Want the show on your favorite podcast app or prefer the written versions of each story?

📲 https://www.businessof.tech/subscribe

 

📰 Story Links & Sources

Looking for the links from today’s stories?

Every episode script — with full source links — is posted at:

🌐 https://www.businessof.tech

 

🎙 Want to Be a Guest?

Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:

💬 https://www.podmatch.com/hostdetailpreview/businessoftech

 

🔗 Follow Business of Tech

 

LinkedIn: https://www.linkedin.com/company/28908079

YouTube: https://youtube.com/mspradio

Bluesky: https://bsky.app/profile/businessof.tech

Instagram: https://www.instagram.com/mspradio

TikTok: https://www.tiktok.com/@businessoftech

Facebook: https://www.facebook.com/mspradionews


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

[00:00:00] Most organizations are racing to implement AI, but according to research, the majority are still in the experimental phase, with few actually deploying AI against their own data in production. The gap isn't the technology; it's the data foundation, the governance, and the organizational maturity that determines who succeeds and who stalls. Joining me today is Dr. Fern Halper, who spends her time researching this space on this episode of the Business of Tech.

[00:00:29] Dr. Fern Halper, you are the VP of Research at TDWI and the founder of the AI Foundations Group. Welcome to the Business of Tech. Thank you so much. Happy to be here. Now, you've spent 30 years watching organizations try to succeed with data and AI, from those early machine learning days at Bell Labs through the generative AI moment we're in right now. What's the biggest thing that hasn't changed, despite all the hype?

[00:00:59] Organizations oftentimes don't have their data foundations in place. They haven't thought about how to operationalize models. They aren't thinking about governance. They haven't thought enough about the cultural impact. You know, they really, it's a problem of scale.

[00:01:17] You know, I remember being back at AT&T a long time ago, and AT&T, of course, is very, very advanced, but we were building churn models and we could predict who would disconnect AT&T's service. And so we went to the call center people and said, we have this model. And they said, well, OK, but we can't implement it. You know, at the time it was because of the compute power, but it's the same sort of idea.

[00:01:46] Like we don't have the people to implement this. We don't know how to operationalize this. We don't have the right data infrastructure in place. All of those things are still, you know, fairly constant. And you've got new research here that shows nearly half of organizations are feeling that top down pressure to implement AI, yet most are still in the experimental phase. Like, why does the pilot phase keep becoming the destination instead of a launching pad?

[00:02:16] I think that a lot of it has to do with we're in this new, exciting era of AI, which when people think about this new AI era, they're not thinking about what I call traditional AI, machine learning, predictive analytics. You know, when people were putting that in place, they were thinking about their data foundations or organization foundations, the skills that they needed, how to operationalize it, how to govern it.

[00:02:42] In my research at TDWI, I see about 35 to 45% of organizations that are actually implementing generative and agentic AI may have actually sort of come up through the machine learning and traditional AI. So they thought about this stuff. Now, with generative and agentic AI, well, especially starting with generative AI, organizations felt the pressure and they felt like they needed to do something. And so they viewed AI as a tool.

[00:03:12] I'm just going to use this tool. It's going to help me improve productivity. It's going to, you know, write my emails for me. It's going to do my marketing campaigns. But they didn't think about the fact that they were going to hit some sort of value ceiling because they didn't have all of these foundations in place that those people who had actually implemented, you know, traditional AI were further along and more likely to succeed because they're more mature.

[00:03:37] So now these people who are trying to implement things like a co-pilot, right, or maybe just a chat bot that is talking to your customers and asking what's your problem, you know, with your IT equipment or whatever. And they put an answer in, but there's no way to sort of trace back to even the employee because maybe you don't have an employee database. So you're not actually even understanding who it is that you're talking to.

[00:04:05] Or if you're trying to implement, you know, a customer chat bot, you don't know who your customer is. So you don't know if they're a loyalty customer or anything about them, you know, in terms of the context of the person that you're talking to, to actually be able to respond in a way that was going to give people more satisfaction. And why? Because you didn't have these foundations in place, if that makes sense to you. Well, it does.

[00:04:30] And for the foundations, you build a five-dimension AI readiness model, right, to assess this. That includes organizational readiness, data readiness, skills and tools, operational readiness, and governance. Those are themes that we've been talking about on the show a lot. And I'm really curious, when you assess organizations, like which of those dimensions is most consistently underestimated? And what is that underestimation costing organizations? Yeah, that's a really good question.

[00:04:58] So organizations typically score highest in the organizational dimension. You know, so they think that they sort of have executive support. They think that they have a strategy. And, you know, maybe there's the funding. But they really fall short on the data foundation and governance. And, you know, when you think about what's happening now with generative and agentic AI, it's not that surprising. I mean, all of these things are a moving target. It's a journey.

[00:05:23] You never sort of are completely ready for every type of data that your company may have. But now with generative and agentic AI, you're making use of not just structured data. You know, now you're dealing with all of this unstructured data. I was just doing a survey on agentic AI, and I was totally stunned by the percentage of organizations that said that their primary data source was unstructured data for all of this. I mean, we know this, right?

[00:05:51] Look at call center notes or incident trouble tickets, you know, that you might get. And you want to be able to figure out what people are concerned about. And that's a good use case for generative AI. But so organizations are really trying to deal with all of these data foundation issues. They're still really concerned about data silos. They're concerned about data quality. You know, now they have to deal with semantics, right?

[00:06:15] Business definitions and metadata and all of that that they weren't used to necessarily dealing with. So that's, you know, sort of on the data foundation side. And then on the governance side, I think that there's a lot of apathy around data governance. Organizations, a lot of them, they don't want to deal with it for whatever reason, even though they know garbage in, garbage out as just one little part of that. But again, that's another moving target, just as they thought that they got their

[00:06:45] data governance down for their structured data. Now they're dealing with their unstructured data. And not only do they have to deal with data governance, now they have to deal with AI governance, you know, and making sure that the models are versioned and they're documented and, you know, they're not decaying and all of that. We'll be right back after this message. This episode is supported by Halo. Halo. There's a moment many MSPs eventually reach.

[00:07:11] The PSA they started with worked well early on, but as the business grows, workflows get harder to manage, automation becomes complicated, and the systems start shaping how the company operates. Halo PSA is designed for service providers who want more control over how their operations run, from ticketing and service delivery to billing and workflow automation. That's one reason Halo PSA often comes up when MSPs start evaluating their next PSA platform.

[00:07:40] You can learn more at usehalo.com. And we're back. What was interesting to me is your book draws a really sharp line between consumerized AI, like using ChatGPT or Copilot to make an individual work more efficient, and the idea of building AI applications against your own company data for that competitive advantage.

[00:08:05] For these providers that are advising clients, how should they be framing the distinction, and what does it mean for what their actual clients need? To me, the distinction is very clear. The consumerized AI or the off-the-shelf tools, the copilots, basically. There are thousands of vendors out there now that have tools that perform various functions,

[00:08:31] and that's sort of the consumerized path that a lot of organizations are going down. You know, oh, I'm going to use Claude to generate the code that's going to help me build my website, right? That's, look how fast I built my website, and that's great. And there is some productivity gains there. You know, or I'm going to use this tool to help to generate my marketing content, you know, advertising this.

[00:09:02] But, and that's one path, you know, versus going down the data path that, you know, then you're asking, you know, what's the business need? You know, what's the use case that I'm going to build, you know? So you have a much clearer understanding of that. Let's pick on Microsoft for a moment. Microsoft really is, and now they've included their new E7 bundles, right? They've got their Frontier Model bundle that they're going to include copilot for every license.

[00:09:30] And so for some customers, they would say, hey, I need help just with that, getting that rolled out to my customer. And for many smaller businesses, doing that in an organized way feels like work that makes sense in this business world. And I'm trying to understand, like, for you, would that fit into the consumerized side, or is that a business implementation? Like, how do you frame that example? Okay, yeah. That example I would definitely see as consumerized AI.

[00:10:00] And it can help organizations improve their productivity if it's helping them sort through their mail or, you know, putting the most important emails on top or whatever. You know, they set up their copilots to do. But to me, that's consumerized AI. It's just making use of an off-the-shelf tool versus actually thinking about, and that may improve individual productivity, and that's great.

[00:10:27] Versus what I think of as more company enterprise AI, you know, which is actually building applications that are going to either serve internally or externally, you know, your customers. And those applications should be using your data. You can do a one-off project where you take those call center notes and run it through an LLM, and then it classifies and summarizes what the issues are.

[00:10:54] But really what you want to do is set up something, you know, that's more production-oriented and ongoing. That's taking the call center notes, you know, as they're happening day by day or whatever your frequency is and running it through some sort of system that has like a RAG, a retrieval augmented generation system, you know, that provides you with the context and spits out the results. You know, that's like an AI application.
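The retrieval-then-augment flow Dr. Halper describes can be sketched in a few lines. This is an illustrative toy, not anything from TDWI or the episode: the bag-of-words scoring stands in for a real embedding model, and the note texts and function names are invented for the example. A production system would plug a trained embedding model and an LLM call into the same shape.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector; a real RAG pipeline would use a trained embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing tokens, so mismatched vocabularies are fine.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, notes, k=2):
    # Rank stored notes by similarity to the question and keep the top k.
    q = embed(query)
    return sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)[:k]

def build_prompt(query, notes):
    # Augment the user question with retrieved context before sending it to an LLM.
    context = "\n".join(f"- {n}" for n in retrieve(query, notes))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical call-center notes, the kind of unstructured data discussed above.
notes = [
    "Customer reports VPN drops every afternoon on the branch firewall.",
    "Printer driver reinstall resolved spooler crash for accounting team.",
    "VPN tunnel renegotiation fails after firmware update on edge router.",
]
prompt = build_prompt("Why does the VPN keep disconnecting?", notes)
```

The point of the shape is the one Dr. Halper makes: the generation step only becomes an ongoing application once retrieval runs against your own, continuously updated data rather than a one-off upload.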

[00:11:20] I would think even in that individualized, you know, value delivery model, you've still got to do some work about data governance and make sure that the compliance is right and that the models have access to the right data. Like, how do you frame, like, the level of reuse that's possible with those governance frameworks? Or do you have to do that first and get it right? Like, is there an ordering piece?

[00:11:46] Like, help me understand, particularly for smaller companies, the way their technical advisors should be thinking about the rollout and what their governance role is. Yeah, that's another good question because a lot of organizations, they don't think about governance up front. I'm a firm believer of thinking about governance up front.

[00:12:05] And certainly what we've seen is in companies of all sizes, you know, this proliferation of shadow AI, right, where organizations, the CEO says, everyone get on the AI train. And this happens with small companies, too. And so all of a sudden, everyone's using their own tools or, you know, they get a license for something and then someone buys a license for something else and something else.

[00:12:30] And then all of a sudden, you know, you have shadow AI going on, you know, as a riff off of shadow IT. And so what I see organizations that succeed doing is that they say, okay, we know that, you know, people are going to buy their own tools, but they have to sort of go through this process in the front end. So at least we know what tools they're using.

[00:12:54] And then at least we can try to train them in some way in AI literacy and to understand the company's policies. I think that you have to do that, you know, up front. I've talked to small companies where they've said, and granted this was a number of months ago, but, oh, I just used, you know, pick your favorite LLM and uploaded a confidential RFP. Should I not have done that? Like, that sounds obvious.

[00:13:24] Like, no, of course you should not have done that, you know. But if people don't know what they don't know, you have to tell them. Sometimes small companies don't even think about governance, or they think about compliance in terms of putting something on their website right at the bottom of the page, like you can unsubscribe or whatever. But they're not actually thinking about governance and guardrails in this way, because maybe they haven't even thought about what their company data actually includes.

[00:13:53] An upfront process, it can be very small, you know, just some governance steps. You know, you're bringing in a new tool, make sure, you know, pass it by whatever little committee, you know, you form. And, you know, they can give you pointers and, you know, then you feel good about it. But other companies I talk to, they actually have sort of governance training in place, you know, literacy training. And everyone has to take it.

[00:14:20] You have to take a minimum of five hours based on your role and your persona in the organization. You know, it can go from five hours up to, you know, a lot more than that. But I think even small companies need to have that. We'll be right back after this message. A quick heads up, Acronis is hosting a live event on May 13th called The Pivotal Point of IT, building services for the AI First era.

[00:14:48] Their CEO will be laying out Acronis' vision for AI First service delivery for MSPs, including a new partner program and what they're calling a major platform announcement. If you want to hear directly from Acronis on where they're taking all of this, registration link is at go.acronis.com slash Dave Sobel AI era. No spaces. And we're back. Well, I'm wondering also is if we're about to make the problem even worse.

[00:15:18] So you've described the shift to agentic AI, you know, moving from systems that produce outputs to systems that take action. You've described it as a governance inflection point. If we're thinking about this foundationally, you know, and already in environments that may, you know, already not have that, you know, put together well enough, what's going to change and be worse when their client's AI starts executing decisions rather than just generating recommendations? Yeah, it's so much more complex now.

[00:15:48] And this is why I see a lot of organizations putting on, you know, the brakes and saying, whoa, whoa, you know, hold on a second. I heard of one organization where the CEO actually said, let's, you know, just try out the agents and see how they work. And then all of a sudden they had over, you know, 10 or 20,000 agents. This was a larger company.

[00:16:12] And, you know, how are they managing those agents and the inputs to the agents, the outputs to the agents? A lot of times if people are just building their own single agents, then, you know, you have a lot of duplicated effort, right? If really you should be thinking about it in a systems way and the kinds of agents that you actually want to build that would benefit, you know, more than one person.

[00:16:37] And so they're not just sort of building their own agents, but the governance, you know, around those agents is quite concerning. And it's just an area that's right now being looked at more fully. But if you think about it from an auditing perspective, you know, you have your input controls and your output controls and whatever's happening inside that agent. You can't just give an agent your access permissions. You know what I mean? Like that.

[00:17:07] And I think that Copilot, right, they did that initially, that the agent, your agent sort of inherited your permissions, you know, but that can get you into trouble. We've highlighted a couple of the concerns. We've got data quality. We've got governance. We've got things like lineage integration. Like these all feel like not side issues. They're core issues. But most organizations don't have most of this in order. Where do you recommend people get started?

[00:17:35] Like where are the realistic places to start and make meaningful improvements? I think data quality and data lineage, you're going to need data lineage for agents. I mean, data quality is, it's table stakes. You know, you have to make sure that your data is high quality. Like maybe if your data is a little bit garbagey for some of your marketing applications, okay, you know, you'll see what happens.

[00:18:01] You know, but if, you know, you're building a bidding agent or you're, you know, comparing prices and the prices that you had in your database, you know, that suppliers were providing you were incorrect or they weren't updated. You know, then you have a bigger problem. So you have to sort of prioritize where you need to look at the data quality issues.
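That prioritization, checking data quality where an agent will act on it, can be sketched as a pre-flight audit. Everything here is a hypothetical illustration of the bidding-agent example above: the record shape, field names, and staleness threshold are assumptions, not a standard or anything from the episode.

```python
from datetime import date

def audit_prices(records, today, max_age_days=30):
    """Flag supplier price records an agent should not act on.

    Assumed record shape: {"supplier": str, "price": float or None, "updated": date}.
    The 30-day staleness threshold is illustrative; tune it per use case.
    """
    issues = []
    for r in records:
        if r["price"] is None or r["price"] <= 0:
            issues.append((r["supplier"], "missing or invalid price"))
        elif (today - r["updated"]).days > max_age_days:
            issues.append((r["supplier"], "stale price"))
    return issues

# Hypothetical supplier feed; in practice use date.today() instead of a fixed date.
records = [
    {"supplier": "Acme", "price": 19.99, "updated": date(2024, 5, 28)},
    {"supplier": "Globex", "price": None, "updated": date(2024, 5, 30)},
    {"supplier": "Initech", "price": 42.00, "updated": date(2024, 1, 2)},
]
issues = audit_prices(records, today=date(2024, 6, 1))
```

A gate like this, run before the agent bids, is the kind of targeted data-quality control the conversation argues for: cheap for low-stakes marketing data, mandatory where an agent takes action on prices.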

[00:18:26] Of all these different trends that you're tracking, agentic AI, data products, generative BI, like AI governance, which one do you think will have the most disruptive impact on our end customers? Like the ones that these MSPs and IT service providers are serving? Like what do you think is going to be the most disruptive and why that one? If they're servicing companies that are building agents, then, you know, that could certainly be very disruptive.

[00:18:56] I mean, think about all the data that those agents are generating, all the data that they're creating. I think that's going to be very, very, could be very disruptive, you know, just in terms of volume and, you know, what happens when the volume starts increasing? And can their systems handle it if, you know, they're managed service providers? So, you know, what are their SLAs going to look like? You know, the uptime, all of that. You, like many of us, have been looking at this for a little while and you've got several of these trends.

[00:19:26] You've looked at, you know, helping organizations get ready through a lot of different waves, from databases, cloud, big data, and now AI. And one of the things we always have this conversation around is readiness first. But I want to get your reaction to something because it seems that some of the organizations that won in some of the prior cycles weren't necessarily the ones that were most prepared. They were the ones that were the most opportunistic.

[00:19:51] Like are there versions of AI success that don't focus on data readiness? And, like, what would be your response to those that say, like, hey, I need to be a first mover and take advantage of this versus being the most prepared? Yeah, I think that there's definitely versions of AI that don't have to deal with data. If you're a company and you can come up with, you know, some sort of product that's an AI product, more power to you, you know.

[00:20:20] That's what I think. It doesn't all have to be that way. And certainly there's thousands of companies out there that are building innovative products to sell to their customers. And certainly that can be a path to success. For an MSP owner serving those smaller and mid-sized customers, the clients who are asking about AI, but they don't have a chief data officer or their dedicated data team,

[00:20:46] like, what are the practical things those providers should be doing for their clients today? Yeah, I think that they should be thinking about their clients and thinking about the kinds of AI tooling, you know, that might actually help their clients get started on their AI journey. I guess every sort of AI service that I've seen associated with a company that wasn't providing AI services to begin with and then put some sort of AI front end into their service.

[00:21:17] I have not been impressed by that. I don't think that that's the answer. So, you know, I think it's, you know, understanding what your clients' needs are and seeing if there's any way that you can support it in terms of, you know, what you're providing. We'll be right back after this message. Today's episode is supported by JumpCloud for MSPs. Imagine delivering intelligent, secure IT for every client from one unified platform.

[00:21:47] JumpCloud eliminates tool sprawl by bringing identity, device, and access management under one roof. Easily manage multiple clients via a multi-tenant portal, intelligently automate onboarding, and push patches across Mac, Windows, and Linux, all from a single pane of glass. The result? Tighter proactive security, fewer mistakes, and faster service delivery. To explore JumpCloud for MSPs, visit jumpcloud.com slash MSP radio.

[00:22:18] And we're back. Sometimes knowing what not to do is the right answer. Dr. Fern Halper is the VP of Research at TDWI and founder of the AI Foundations Group, where she helps businesses and technology leaders cut through the AI hype, build AI literacy, and make informed decisions about enterprise AI adoption. Her book is Data Makes the World Go Round, out now from Wiley. Fern, where should people go to find the book and stay connected with your work?

[00:22:47] Well, they can find the book on any of the major outlets, Amazon, Barnes & Noble, you know, and any of those. They can get the book on that. To stay connected with my work, go to the AIfoundationsgroup.com. I'll send you some of my reports. You can go to TDWI.org and look for some of our research there as well. Well, thanks so much for joining me today. Thank you for having me. It was a pleasure talking to you.

[00:23:17] Want more from the Business of Tech? Join Business of Tech Plus for ad-free episodes, early interviews, extended cuts, subscriber-only shows, and exclusive member perks and analysis. Sign up at businessof.tech slash plus. And follow this show on your podcast app. And if you're on YouTube, hit subscribe and the bell so you never miss a story. Reviews and comments help spread the word, too. Interested in advertising?

[00:23:44] Head to mspradio.com slash engage. The Business of Tech is written and produced by me, Dave Sobel, under ethics guidelines posted at businessof.tech. Thanks for listening. I'll see you on the next episode. Produced by Picture This Video. Part of the MSP Radio Network.