The podcast features a discussion on the intersection of artificial intelligence (AI), law, and ethics, with insights from attorney Brad Gross and AI governance expert Juliette Powell. As AI becomes increasingly integrated into the services provided by managed service providers (MSPs) and technology companies, the risks associated with compliance and unintended bias are growing. The conversation explores how businesses can navigate legal pitfalls while developing responsible and transparent AI strategies, emphasizing the importance of accountability and ethical considerations in AI deployment.
Juliette Powell highlights the need for organizations to establish AI governance frameworks that are adaptable to evolving technologies. She questions the notion of what is considered "ethically sound" and emphasizes the importance of understanding the organizational culture and the people using AI tools. The discussion reveals that many employees do not utilize AI tools effectively, leading to gaps in implementation and understanding. This sets the stage for a deeper exploration of the responsibilities that organizations must take on as they integrate AI into their operations.
Brad Gross addresses the legal implications of AI, particularly concerning privacy and intellectual property. He stresses that while MSPs can delegate tasks to AI, they cannot delegate responsibility. This means that organizations must maintain a human element in their AI strategies to mitigate risks. The conversation also touches on the current lack of comprehensive laws governing AI in the U.S., with a focus on the need for transparency and disclosure regarding data usage in AI systems.
The episode concludes with a discussion on the importance of accountability within organizations. Both experts agree that having a designated individual or team responsible for AI governance is crucial for success. They emphasize that MSPs should manage client expectations by clarifying the shared responsibilities in AI compliance. This collaborative approach is essential for navigating the complexities of AI integration and ensuring that organizations can leverage technology effectively while adhering to legal and ethical standards.
💼 All Our Sponsors
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
🚀 Join Business of Tech Plus
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
🎧 Subscribe to the Business of Tech
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
📰 Story Links & Sources
Looking for the links from today’s stories?
Every episode script — with full source links — is posted at:
🎙 Want to Be a Guest?
Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:
💬 https://www.podmatch.com/hostdetailpreview/businessoftech
🔗 Follow Business of Tech
LinkedIn: https://www.linkedin.com/company/28908079
YouTube: https://youtube.com/mspradio
Bluesky: https://bsky.app/profile/businessof.tech
Instagram: https://www.instagram.com/mspradio
TikTok: https://www.tiktok.com/@businessoftech
Facebook: https://www.facebook.com/mspradionews
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
[00:00:14] We're diving into the intersection of artificial intelligence, law, and ethics with attorney Brad Gross and AI governance expert Juliette Powell. As AI becomes embedded in how MSPs and technology providers deliver services, the risks from compliance to unintended bias are only growing. We'll explore how businesses can stay ahead of legal pitfalls while building responsible, transparent AI strategies.
[00:00:41] Welcome to the Business of Tech Lounge, the live version of the Business of Tech Podcast. It's Wednesday, April 16th, 2025, and I'm Dave Sobel. Now we'll be taking questions and comments throughout the show, so make sure to put them in chat. If you have a question, we will happily respond to it in real time. Now I want to thank Sales Builder, our Patreon sponsor, whose support makes this show possible.
[00:01:07] Focus on your IT sales workflow with the power of automation and visit them at salesbuildr.com. That's B-U-I-L-D-R dot com. A reminder, I am keeping an eye on the chat. We'll take those questions in real time. Now Brad Gross is a leading legal authority specializing in managed service providers and technology contracts. He's known for providing strategic legal counsel to IT service providers, focusing on compliance, risk mitigation, and revenue optimization.
[00:01:37] Brad frequently speaks at industry events and contributes to discussions on the evolving legal landscape of technology services. Brad, welcome to the show. Thank you. Good to be here. Looking forward to this discussion today. Me too. And Juliette Powell is an AI governance expert and co-author of The AI Dilemma. She focuses on the intersection of technology, ethics, and society, advocating for responsible AI development and implementation.
[00:02:05] Juliette's work emphasizes the importance of ethical considerations in technological advancements, aiming to guide organizations in navigating the complexities of AI integration. Juliette, welcome to the show. Hey, thank you. Now I am super excited. Listeners, if you've got questions, throw them in chat.
[00:02:23] But Juliette, I'm going to have you kick us off here with a starting thought: how can organizations really develop AI governance frameworks that are both ethically sound and adaptable to evolving technologies? Well, I always question the idea of "ethically sound," because I want to ask the question: ethically sound for whom?
[00:02:44] I think that if you've been in a corporate environment and implementing AI solutions and tools, last year, 2024 is very, very different from what we're seeing this year. This year has been all about agentic AI and a lot of, you know, trying to figure out what the actual ROI of throwing technology at our human problems actually is.
[00:03:09] And I think part of that gap, if you will, is the fact that we have relied so much on the technology and we haven't necessarily looked at the organizational culture, the people themselves who are using the tools. The majority of employees that have access to these tools don't even use them. And those that do are super users. So there are so many different gaps that we can talk about today. But more importantly, I'm very curious to hear what people that are watching the show want to ask us.
[00:03:37] Well, likewise. And Brad, the moment we talk about who is using it, Juliette's brought up the fact that we have a bunch of people not using it at all, and then some who have leaned heavily into it and become super users. We haven't even talked about the potential legal risks and challenges that come with that. What are the ones, particularly around privacy and intellectual property, that providers need to start thinking about? Sure.
[00:04:02] So I think that the overriding rule of thumb that you have to think about when you approach AI from an MSP's perspective is that you can delegate a task, but you can't delegate responsibility. Ultimately, responsibility is going to be on the MSP.
[00:04:20] So while you might implement generative AI or any other AI model to make things faster, simpler, easier, to come up with proactive instead of reactive solutions and so on, there has to be a human element. Because if there isn't a human element and something goes wrong, it's not going to be as simple as saying, well, that's the computer that made the mistake. MSPs are not going to be able to delegate responsibility.
[00:04:47] So in their zeal to implement these kinds of solutions, and we can talk about some of those legal implications and how to tackle them in advance, they have to keep in mind: AI is not meant to eliminate the human factor, and from a legal perspective, it shouldn't.
[00:05:04] Now, interestingly, Juliette, if I think about the framework Brad has just thrown out, that organizations have to remember they keep the responsibility, does leading with that help us think about how frameworks might be implemented here? If we're assigning human responsibility for the way the technology is used, this might feel like a familiar framework.
[00:05:33] How might you guide organizations to think about this? Well, my book, which is based on my dissertation at Columbia on this very topic, looks not just through the lens of the corporation, but also through the four logics of power that really control AI. And some of those are within the organization and others are outside of it. So you've, of course, got the perspective of the corporation, but then you've got the perspective of the engineers that are within the corporation.
[00:06:01] They've got their own minds, their own points of view. Many times we've seen, especially in the last few years, how, you know, there are major surges where these employees get hired and they get fired and they get hired and they get fired again. I think a lot of people have lost their commitment to a specific brand or a specific company when it comes to engineers of AI in general, especially because it's where the money is.
[00:06:25] And, you know, if there ever was a time to follow the money, it's now. But then you've also got the governmental perspective that, you know, again, depending on which administration will look at AI from a very, very different perspective. If we look back at the first time that Trump was in power in the White House, that was 2019, where we really got a sense of how much AI was part of the country's strategy in his view.
[00:06:55] And we've seen that being reinforced on an ongoing basis with a lot of the, you know, the restrictions against Chinese technologies and Chinese companies. But there is also a massive push for AI supremacy in the United States, which is, you know, again, a very, very different outlook for the corporation that doesn't necessarily do anything with defense or military.
[00:07:21] And so those priorities are going to be very different as well. And of course, we've got social justice. How does all of this technology, all of these different mindsets, and the way that we use, deploy, and accelerate these technologies actually affect people? Not just people here in the United States, but people all over the world. And do we care about any of this when it comes to actually just going ahead and making money?
[00:07:47] We are, of course, capitalists here. So I feel like there's just a massive balancing act that has to happen, and that doesn't necessarily happen when companies are trying to figure out what their strategy should be for the future. What does responsibility mean? Again, responsibility for whom? If it's responsibility towards shareholders, then it is absolutely maximizing the opportunity to make money.
[00:08:14] So how do you balance risk versus benefit? I think that there's a calculus of intentional risk that needs to happen, not just in the IT department, but also for the organization as a whole. Right now what I'm seeing is that much of this calculus is happening within the framework of AI and whoever the leader is that is responsible for AI.
[00:08:38] But that person also has to be accountable to the board who has to genuinely understand what the implications are, not just in dollars and cents, but also in terms of brand reputation, how their customers and their future customers will respond to the decisions that they make today.
[00:08:57] And because we're encoding so much of this into our systems and these systems are in many cases becoming more and more interoperable, we're not necessarily seeing the unintended negative consequences that can happen in the medium to longer term.
[00:09:14] So to me, all of these things work together as we come up with plans for organizations, not just to develop and deploy better AI, but also in terms of being able to scale their companies, because a lot of the pilots that we're seeing just don't scale. And interestingly, I think that's one of the spaces where we're going to see the marketplace allow different people to try different things. I think implied in what you were talking about there is the fact that there is not one size fits all answer.
[00:09:43] Lots of people will be looking for the one thing. There are going to be lots of different strategies that perform at different levels, and the market will figure out which ones work. There may be a space where people who lean into a very ethically considered, slower-moving pace find a path that succeeds, versus someone who moves much faster. Or perhaps fast will be the winner and slow will not. We'll have to see.
[00:10:08] But that leads to you, Brad. You've implied a little bit with what you said that there's got to be some thinking right now about where the risk is. Is it in copyright? Is it in intellectual property, or something else? What are the bounds now? Yeah, it's more in privacy right now. That's really where things are going to be focused.
[00:10:33] Look, there are not a lot of laws out there right now governing AI. And in fact, of the 89 or 90 bills that are being considered right now, very, very few address MSPs directly. But if you take a look at them in their overall context, you see that a lot of them are aimed at privacy and disclosure. Okay.
[00:11:00] People are going to want to know how their data is being used, whether it's AI or not. And what we find is that these laws are largely saying: listen, you can't use AI to manipulate data in a way that is not just disruptive, but incorrect. Misleading, that's the word I'm thinking of, right? You can't change it to be misleading.
[00:11:29] You have to disclose what kind of data you're receiving and where that data is going. Is it going into training models and so forth? That is the fundamental basis that all of these laws are trying to work toward, with some degree of success. But for the most part, I've reviewed them and none of them are really ready for prime time.
[00:11:52] By the way, fun fact, there is a bill being considered in New Hampshire that says that residents of New Hampshire are allowed to use AI to defend themselves in alignment with the Second Amendment. True story. Right? Think about that. Second Amendment, firearms and so forth.
[00:12:14] And they're now contemplating a bill that says we could use AI for self-protection in alignment with the Second Amendment. Now, what that means, I don't know. And I'm thinking Terminator. But, you know, that's me. I don't think we have to go to Terminator. I think we just have to look at what's happening in Israel right now in terms of defense. There are so many AI-driven and automated systems in place there.
[00:12:43] So just go on YouTube at this point. Well, they're already thinking about it. But I digress. So I think the areas that we're going to be focusing on now really are privacy and disclosure. That's a good point, because in the U.S., federally, we still have about zero laws that actually cover American privacy. We can get into state-level laws and the patchwork there, but at a federal level, there's essentially nothing that does that.
[00:13:10] And that has played out in companies leaning into different strategies. For example, Apple has leaned very heavily into positioning around privacy. How effective that is might be a different conversation, but we can at least say from a positioning perspective that they're leaning that way. Versus other providers who are not. OpenAI, for example, I think we could classify as distinctly not leaning into that style. And then there's the spectrum in between.
[00:13:37] What I'd be curious, Juliette, to think about is this: these are intentional choices, but to get there, business leaders have had to make some decisions. Are there particular key ones that you highlight as driving the organization, ones that our MSP audience would be advising their customers on, even though the customers have to make those decisions for their own business? Well, I just want to go back to one thing about the legalities around AI.
[00:14:06] So I agree with everything that was said in terms of the United States. But I do think it's important to underscore that the Artificial Intelligence Act coming out of the EU does affect artificial intelligence that is developed and deployed all over the world, as long as Europeans' personal data is in that training data.
[00:14:26] Right. And the majority of organizations that are using off-the-shelf technology do not know and do not have access. There is no transparency around the training data that is being used. We're seeing more and more organizations that are creating their own large language models saying, oh, don't worry about it, we've got your back. We've seen OpenAI going from being a not-for-profit to a for-profit.
[00:14:54] And I don't know if either one of you had a chance to see Sam Altman on stage at TED just in the past couple of weeks. It's worth watching, because he essentially addresses the decisions that he's made in one context and in another context. And I think that for those of us who are leaders of organizations that are implementing AI, you know, we're trying to take our cues from somewhere.
[00:15:23] And if we can't really take a cue from somebody who is at the forefront of AI, like Sam Altman of OpenAI, I wonder how well the rest of us are going to fare.
[00:15:35] Now, keeping all of that in mind, I do think that, again, as I was saying before, the strategy of really thinking about pilots as opposed to deploying at scale makes a lot of sense for the majority of organizations, regardless of their size. And to me, that's really, really key.
[00:15:55] One of the key components of that, to me, at least, is really empowering those employees that are using AI, both at work as well as in their personal lives, so that each one can teach one how to use these tools to the benefit, not just of employees, but of the organization as a whole. And that connection between, look what I know how to do that can add value to the organization just empowers the entire organization, A, as a learning organization.
[00:16:25] But secondly, also to use the best tools that work for your particular culture, your particular strategy, your particular growth. And I think all of those things aren't as top-down as they used to be. Really getting that solutioning happening at all levels of the organization adds to the acceleration that technology and culture, brought to bear together in one strategy, can really power.
[00:16:55] Well, it's incredibly important to position it that way. I was literally just speaking with the New York IAMCP group, the association of Microsoft channel professionals helping small customers. And most of them were talking specifically about that problem: the fact that, as you alluded to, there's a super user group and then everyone else who hasn't done anything. But the super user group, particularly in small and midsize organizations, is moving really fast. And you brought up something that I think is really interesting.
[00:17:22] And I'm curious, Brad, to get your take on this: most organizations are using software, particularly SaaS software, that's very internationally focused. If you're a Microsoft partner, you'll be using a bunch of stuff that is deployed globally, and thus also deployed under EU guidelines, because vendors are trying to be as consistent as possible.
[00:17:47] And most businesses, even small ones, are having international connections very easily now based on doing business globally. How much do you have to factor in the EU's movements in here into your operations, particularly as an MSP or advising clients? Yeah, cross-border transactions are becoming more ubiquitous, right?
[00:18:07] And even with the advent of GDPR and so forth, businesses are becoming increasingly aware that they have data that extends beyond the residents of their locales, beyond the borders of the U.S.
[00:18:24] So to the extent that companies are engaged in international transactions, to the extent that they are collecting, aggregating, viewing, accessing the data of EU residents, yeah, you're going to now start to have to worry about compliance with EU regs, including the new AI laws that are being considered in the EU. And that's where it starts to get complex, right?
[00:18:51] We live in a complex world and it's only going to become more complex. I think the things that MSPs, whether they're dealing domestically or internationally, are going to have to think about relate to disclosure. And this is coming in the future, if it's not already here. Both the EU acts and any contemplated laws here in the U.S. are going to require disclosure, right?
[00:19:18] You're going to have to disclose if you're using AI in decision-making processes and what data is being used. I think ultimately there will be an assessment requirement, though it's not here yet.
[00:19:30] But I do think in light of how things are going in the legal field, I think that at some point certain industries, especially those with mission critical life and death type services, will probably be required to demonstrate on some sort of regular basis that the AI that they're using is being used correctly, is being used for the purposes for which it was intended.
[00:19:54] In fact, I would expect that there will be a whole cottage industry that tests the AI for companies. How do you like that? Remember that we're predicting it right here on your show. Right, predicting new business lines. Oh, there's going to be. We wrote about that. Oh, did you? Okay, there you go. Perfect segue there, too, because Juliette, you've given some thought to the disclosure piece and what those disclosures should contain.
[00:20:24] What guidance can you give? Again, we're talking to a group of people whose job is to give technology advice to small and mid-sized organizations. They're trying to guide their customers. How do we give them guidance on what they need to do to be transparent to their clients on AI usage, and how can they guide their customers on the kinds of disclosures they need to make? So one of the disclosures that I think becomes relevant to more organizations is the advent of chatbots.
[00:20:52] Everybody and their brother has a chatbot these days, or is developing a chatbot. We're improving our chatbots. And a lot of these are deployed both internally for employees to facilitate their navigation of various systems and outwardly, right? For customer service purposes, for example, right?
[00:21:10] And under the guidelines of the EU Artificial Intelligence Act, the piece that really comes into view for the majority of us who are using chatbots is that if for any reason anyone mistakes your chatbot for an actual human interaction,
then in the risk framework under which the Artificial Intelligence Act operates, you're de facto in the highest risk category, which means that if fined, right, that is 7% of your annual revenue, plus global subsidiaries, plus any damages. That's a lot of money. Now, I'm not a lawyer, but I certainly know how to count.
[00:22:03] And I would think that just knowing that means it's in everybody's interest to inform yourself and make sure that, at the very least, this isn't something that could potentially bring you down, or at least hinder you from being able to compete with others, right? And Brad is a lawyer, so knowing that, I can couch this. And I know how to count too, by the way. Right. And you know how to count. They teach you that in law school, how to count.
[00:22:32] But knowing that, I can ask you this from the perspective of, hey, I know this isn't legal advice. Are there particular easy guidelines that we can start walking through and say, these are very much the kinds of disclosures you need to be very clear about from day one? Yeah, I think that if you're following good privacy practices, best practices of disclosure: how you're using data, where it's going, who has access to it, how you get rid of it, and so on.
[00:23:00] And if you apply that to the AI context, you're well into and well far down the road of complying with what will be future AI privacy regulations. So, you know, again, like I said, you can't delegate responsibility. Delegate a task, but not responsibility. So ultimately, you have to think about what kind of data are you getting, right? Realize you're on the hook for this.
[00:23:27] So make sure you take a pretty good audit of how you're conducting your business. Where is the data coming from? What does it contain? How is it being used by AI? Is it generative? Is it being put into a training model and, you know, things are starting to get regurgitated? Is it some other model that they're using?
[00:23:46] And then from there, once you understand what data you have, its scope, what it is, and how it's being used, you then have to disclose that to your customers. Again, I think there are going to be certain industries that will require this at some point. That's not here yet. It's coming, but it's not here yet. Disclose it. Is the data being used in training models? Is it going to be deleted?
[00:24:14] Might it be used in conjunction with other data? If you're following good data practices right now, I think that you're pretty well down the road. You know, my crystal ball doesn't work any better than yours. So I don't know what strange curve balls they might throw out, you know, in the future.
[00:24:32] But given that there is a lack of legislation on point, at least in the US, my advice would be: think about it as if it were your data. Right? If it was your data and you were giving it to a company, would you want to know if it was being fed into AI? Would you want to know if decisions based on your data were being made totally by AI or had some human interaction? Would you want to know if it's being fed into a training model?
[00:25:00] If you would, then think about it in that light for your own business, and disclose it. Disclose it in a privacy policy. Disclose it when your customers sign up. Just disclose it. Now, you talk about the crystal ball. Mostly broken, but at the same time, I'm happy to use it because it's the only tool I've got sometimes. And I've been thinking a lot about the positioning here. What I'm kind of wondering about is a world of haves and have-nots.
[00:25:30] And it feels like there's, particularly Brad's talking here about the data preparation. And the instant thought that I was having was, okay, yes, most organizations have actually not done a particularly good job of their data practices. But there's a group of people that have. And those are the ones that are starting to actually be able to leverage the technologies and be successful with it, even before AI, because they've got a good handle on their data.
[00:25:55] Then there's the majority, for whom this is still very messy and very difficult, and who aren't even necessarily ready for cybersecurity conversations, much less advanced data conversations. That's another podcast. But where I'm starting to wonder is how much this is going to be a have and have-not of automation. I want to get this to some actionable stuff.
[00:26:18] And, Juliette, if I think about this, I don't want a bunch of these customers that may be in a position of, hey, I'm not very well positioned, I haven't been doing these things, to start rushing through the process and end up in a very messy, worse situation. If they're an organization that has sort of said, hey, you know, I haven't necessarily done right either by my own organization or by my customers.
[00:26:42] Knowing nothing is simple, but are there obvious first steps that people should work on to answer this data readiness question, to be able to move on to the next step, that you've thought through with the book? Well, I think first and foremost, we have to talk about accountability. Right. Who is responsible for the AI deployment? Who is responsible for the data?
[00:27:09] Who is responsible for collecting it, for cleaning it, for turning it into business intelligence and ultimately into action? Is that one person? Is it a committee? Or does nobody really know? And I think there are a lot of organizations where nobody really knows, because nobody is asking those questions. They just want to accelerate their process.
[00:27:33] They want to optimize and automate without putting in the strategic thinking that is necessary. So I always go back to accountability. Right. Who is the one person in your organization that is responsible when the poop hits the fan? And all of this, right, has definitely been a theme of everything we've been talking about here: accountability, accountability, accountability. Right. So, I mean, I'm going to turn... I'll tell you. I'll tell you, just on that, just continuing with that.
[00:28:03] What we often find in the corporate setting is that not only are they not sure who's responsible, to your point, but we find a corollary of that: they make so many different people responsible, spreading responsibility so thin, that no one takes responsibility. Right. So either you have no idea, or you say, oh, this whole group of people. And then when you speak to that group, they say, I thought... I thought... Right.
[00:28:32] And the finger pointing starts, and here we go. So I agree with everything you just said. And that's where the governance structure comes in, right, where you're actually delegating specific roles and responsibilities, defining them well in advance of putting a person into that position. And all of that needs to happen as a priority, not as an afterthought. You need to get ahead of this stuff. And a lot of people are trying to hire people like me, and I'm sure people like you, Brad, to advise them.
[00:29:02] But you need somebody that's actually on the inside who is accountable because advisors are not accountable to your actual business decisions. That's on you. So you need to have the right people in place that aren't just IT heads. They also have to understand business strategy. Those two things need to go together. Right. Okay. Now I've got a really good question for that. I want to hear from both of you on because what you've just outlined is like I'm on board, right?
[00:29:30] So thematically, the advice for a customer is: they need to have somebody in their organization who's in charge of that. But here's the interesting bit. The audience generally watching this show is managed services providers, the IT outsourcers who've been hired by customers to advise them and help them implement technology strategies.
[00:29:50] There's almost a conflicting problem there: the end customer is hiring an outside expert to come in and tell them all of the things they need to do, but the answer is also, you need to have somebody internally who's in charge of this, and yet they've hired the technology provider to do that. There's a tension there. And I'd be curious. You're saying no, and I want your thoughts: okay, am I wrong?
[00:30:17] And if so, how does the technology provider work with the customer to mutual success? Well, it seems to me that, you know, one is very much thinking about the short and medium term, and the person on the inside is thinking about the long term, right? Their goal within the organization is to allow those other people to come in and provide the services that will allow the strategy to be executed in the way the organization wants it to be.
[00:30:47] Right? Somebody who's coming in like me, I'm just there to help you do your job, to help you hit all of those benchmarks you're looking to hit. But ultimately, you want somebody who's got some skin in the game, who's in the organization. At least that's my take. I'd be very curious to hear yours. Brad, I'm going to ask you, and then I'll weigh in as well. But Brad, your take on that. Yeah, I agree. I agree 100%.
[00:31:14] And let me tell you, I'll say it in a slightly different way, going back to my theme that you can't delegate responsibility. You can delegate a task, but not responsibility. If a customer of an MSP is calling on the MSP to make them, you know, compliant with all AI privacy and so on and so forth, and the MSP says, yeah, we're going to do that for you, then I think you have a mismanaged expectation on both sides.
[00:31:43] The customer thinks the MSP is going to provide this panacea, and the MSP probably doesn't realize that they can't provide a panacea in isolation. It has to be done with the customer. And that mismanaged expectation eventually will cause friction, which causes legal issues. So I think MSPs would be well advised to set up the expectations at the beginning of the relationship and say, listen, yeah, we'll help you achieve compliance.
[00:32:13] We will help you get there. But part of that process might very well require someone embedded in your organization, or a specific named officer who's educated, who understands the issues, to be the focal point for your company, to be the voice, to be that central point through which all things AI compliance related will go.
[00:32:41] And I think the MSP that walks in and says, no, you won't need that, is asking for problems. There's a mismanaged expectation. Yeah, we're all sort of nodding vigorously. And what I think has actually happened is we've hit the conclusion here: okay, we've identified the problems. What you're actually looking for is kind of a joint venture between the client and the provider, where you say, no, customer, I can't take all of the responsibility for you.
[00:33:06] What I can do is work with you, help you implement, guide you through the process. But ultimately, accountability ends with each point of the chain: the MSP has to own it for their own organization, and the customer has to sign up to their part. You can have joint responsibility. It's totally possible. But we've highlighted that you have to identify who, in the end, is responsible.
[00:33:31] And what we've gotten to is saying, okay, that's the end destination, and the process of getting there is the work that our providers and listeners here need to be doing. That's exactly the right thing. Exactly. I think there has to be an allocation of responsibilities. And if the allocation of responsibilities falls entirely on the MSP, then let them know how to reach me, because they could pay my retainer right now. It'll be fine. I'm ready for it.
[00:34:00] You know, if you're not managing your customers' expectations and allocating responsibilities clearly in advance, you're going to have a problem. Well, I think we've exactly hit the point that we need to give them good advice, but I want to make sure they've got resources available to reach out. So if people are interested in reaching out, Juliette, what's the best place for listeners who want a little more information or to dive deeper? Sure.
[00:34:26] Well, if you want to reach out to me directly, you can just go to JuliettePowell.com or reach out to me on LinkedIn. I'm always happy to say hello to new people. And Brad, what's the best way for people to reach out to you? Agreed. Same way. Reach out to me on LinkedIn. I'm there. Or they could email me at brad@bradleygross.com. I'd be happy to discuss this issue. This is what we do. Well, Brad, Juliette, thank you so much for joining me.
[00:34:49] This has been a lively and educational conversation, which is exactly what I hoped for when I got an author and a lawyer to come in and mix it up on AI governance. So thank you both for spending some time with me. Fantastic. Thank you. And I look forward to seeing you both again. I feel like I've learned a lot today. I'll have you both back for sure. Now, I do want to dive in on another little topic before we end today, because I was curious about AI's impact on various industries.
[00:35:18] And I had Jason Willis-Lee, a specialized medical translator, on for an interview to share some of his insights on navigating the disruption caused by AI in the translation industry. He and I discussed the importance of niching down for success, the evolving role of human translators in post-editing machine-generated content, and how diversifying services can create new opportunities in the current landscape. Here's a preview of that interview.
[00:35:49] How good is the translation, and where are they falling down, from a broad translation perspective? So they're very good, Dave. Compared to 10 years ago, the big machines, Google Translate, DeepL, are very, very good. And now we have all of these other tools. I use Claude. I use a professional version of ChatGPT. Where they lack, I think, is in nuance and in the specialist niches, such as law and such as medicine. Yeah. Gotcha.
[00:36:18] Now, what I would think, then, is one argument might be that a good value of a translator would be to look at the machine translation and adjust for nuance. That would theoretically accelerate the workload, but at the same time, it's not necessarily as interesting work, potentially, in that you are reviewing it. Talk to me a little bit about how you're addressing putting the human in to get that nuance for the customer. Yeah. So a lot of the work is editing now.
[00:36:48] It's post-editing of machine translation, PEMT. We've had that on the market for years. Now it's kind of a given that you will be adding the human aspect to the machine. So a lot of my work is editing machine-generated output. I think the human is essential, Dave, because you have to have this hybrid service; that's how the customer gets the quality. It's having the combination with the machine. So you can still make money. You can still leverage these tools to give you a rough idea of what it is.
[00:37:17] But you have to have the human oversight. I think that's the important thing. Human oversight, that seems to be the theme of the day. We dive a little further into the disruption in the translation industry, and it's a fascinating insight into the way this is going to play out over time for MSPs and their customers. My Patreons already have this interview. If you want to get access early, it's a benefit of supporting the show. Visit patreon.com/mspradio to sign up and get access to that interview.
[00:37:46] It'll drop on the YouTube channel and the podcast feed this weekend. Now I want to thank SalesBuildr, our Patreon sponsor, whose support makes this show possible. Focus on your IT sales workflow with the power of automation, and visit them at salesbuildr.com. That's B-U-I-L-D-R dot com. And vendors, you too can get your name mentioned on the live show. It's a simple monthly subscription. Visit patreon.com/mspradio to learn more.
[00:38:13] And listeners, the number one thing you can do to help the show grow is support us by liking, sharing, and following on your favorite platform. It really makes a difference. The more you tell your colleagues and friends, the more the show grows, so we can keep doing these fun and entertaining conversations each week. You can support directly on Patreon with our give-what-you-want model. You set what you think the content is worth, and you get access to videos early.
[00:38:37] If you have a question and are listening to the recording, send it in at question@mspradio.com. Thanks for joining me for the Business of Tech Lounge, and I will see you next time.

