This episode sheds light on a significant issue within organizations: the gap in priorities and perspectives between leadership and IT technicians. While C-suite executives rank customer satisfaction as the top metric, technicians are more focused on promptly resolving technical issues. That misalignment can create friction and real challenges within the organization.
Generative AI in healthcare presents both promise and concerns. The episode highlighted that while there are investments and startups focusing on generative AI in healthcare, there are significant limitations and challenges to consider. Studies have shown that generative AI struggles with complex medical queries and can produce incorrect diagnoses, raising concerns about its efficacy. Additionally, worries exist about perpetuating stereotypes and biases in healthcare settings.
Holodeck is a cutting-edge system that leverages AI to generate interactive 3D environments for training robots. The system has been found to outpace human-created tools in terms of realism and accuracy, showcasing the potential of AI in creating diverse testing scenarios. Holodeck can generate a wide range of indoor environments based on user requests, making it a powerful tool for testing and training purposes.
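As a rough illustration of the staged generation described in the episode (floor and walls first, then doors and windows, then object retrieval, then a constrained layout pass), the pipeline might be sketched like this. Every function and the tiny asset catalog below are hypothetical stand-ins for the hidden LLM queries and Objaverse search the real system uses:

```python
# Hypothetical sketch of Holodeck-style staged scene generation.
# The real system drives an LLM with structured hidden queries and
# searches the Objaverse asset library; these stubs only mimic the stages.

def plan_geometry(request):
    # Stand-in for an LLM query returning room dimensions and wall slots.
    return {"width": 5, "depth": 4}, ["north", "south", "east", "west"]

def plan_openings(request, walls):
    # Stand-in for placing doorways and windows on specific walls.
    return [{"wall": walls[0], "type": "door"},
            {"wall": walls[2], "type": "window"}]

def search_assets(request):
    # Stand-in for an asset-library search keyed on words in the request.
    catalog = {"cat": ["cat tower"], "apartment": ["coffee table", "sofa"]}
    found = []
    for word, items in catalog.items():
        if word in request.lower():
            found.extend(items)
    return found

def solve_layout(objects, walls):
    # Trivial constraint pass: one object per wall slot, floor-aligned,
    # so nothing ends up extending horizontally out of a wall.
    return [{"object": o, "wall": walls[i % len(walls)], "on_floor": True}
            for i, o in enumerate(objects)]

def generate_scene(request):
    floor, walls = plan_geometry(request)
    openings = plan_openings(request, walls)
    objects = search_assets(request)
    return {"floor": floor, "walls": walls, "openings": openings,
            "layout": solve_layout(objects, walls)}

scene = generate_scene("a 1B1B apartment of a researcher who has a cat")
```

The point of the staging is that each step constrains the next, which is how the real system avoids physically implausible layouts.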
💼 All Our Sponsors
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
🚀 Join Business of Tech Plus
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
🎧 Subscribe to the Business of Tech
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
📰 Story Links & Sources
Looking for the links from today’s stories?
Every episode script — with full source links — is posted at:
🎙 Want to Be a Guest?
Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:
💬 https://www.podmatch.com/hostdetailpreview/businessoftech
🔗 Follow Business of Tech
LinkedIn: https://www.linkedin.com/company/28908079
YouTube: https://youtube.com/mspradio
Bluesky: https://bsky.app/profile/businessof.tech
Instagram: https://www.instagram.com/mspradio
TikTok: https://www.tiktok.com/@businessoftech
Facebook: https://www.facebook.com/mspradionews
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
[00:00:00] It's Wednesday, April 17th, 2024. I'm Dave Sobel. Welcome to the Business of Tech Lounge.
[00:00:18] Today on the show, AI and healthcare with some insight on adoption rates, a dose of the future
[00:00:25] infused with AI as the holodeck arrives, frameworks and policies with John Gillum,
[00:00:31] a preview of this weekend's interview on the nuclear event in cybersecurity and
[00:00:36] your questions and comments. Now I want to thank Sales Buildr, our Patreon sponsor,
[00:00:43] whose support makes this show possible. Focus on your IT sales force workflow with the power of
[00:00:49] automation and visit them at salesbuildr.com. That's B-U-I-L-D-R dot com. Want to get your logo
[00:00:57] right here? Vendors, you can do so with our vendor Patreon program. It's a simple monthly
[00:01:03] subscription and visit patreon.com slash MSP radio to sign up. I can't do the show without
[00:01:09] support and thanks again to Sales Buildr for theirs. Now I do take questions and
[00:01:15] comments throughout the show, so make sure to put them in the chat. We'll have a dedicated
[00:01:20] question section in the show later with all those listeners submitted questions,
[00:01:24] but we'll throw those chat comments up anytime. But first, our top story. Now I want to dive
[00:01:32] into Auvik's recent IT trends 2024 industry report. The report identifies a persistent
[00:01:40] shortage of skilled workers as the primary challenge for IT teams in 2024. As a consequence,
[00:01:48] there's an increased investment in IT with budgets expanding to accommodate needs for
[00:01:53] automation and outsourcing. There's a notable increase in investments in automation technologies.
[00:02:00] 24% more IT professionals plan to invest in automation in 2024 compared to 2023.
[00:02:08] Almost 96% of IT professionals, so almost all, are using at least one AI or ML tool to improve
[00:02:17] efficiency. A significant portion of IT teams are outsourcing tasks, particularly network-related
[00:02:24] functions, to manage workforce shortages. Nearly three out of four respondents indicated that
[00:02:30] at least some network-related tasks are being outsourced. Nearly 58% of C-suite executives
[00:02:38] expressed high confidence that their organization's network toolset meets the needs of remote workers,
[00:02:44] whereas only 35% of IT technicians shared this confidence. This disparity suggests that
[00:02:51] upper management may not be fully aware of the operational challenges faced by their IT teams,
[00:02:58] potentially leading to decisions that do not align with the technical realities and needs.
[00:03:04] There's also a notable difference in how customer satisfaction is prioritized across
[00:03:10] organizational levels. About 46% of C-suite executives list customer satisfaction as the
[00:03:16] most important metric, compared to only 26% of technicians. This suggests that while executives
[00:03:23] focus on customer outcomes as a measure of success, technicians may be more concerned with
[00:03:28] resolving immediate technical issues, perhaps due to being overburdened with tasks.
[00:03:34] The majority of respondents, at 86%, reported increased budgets in 2024, with nearly 50%
[00:03:41] saying they expect to see an increase of at least 20% from 2023. In terms of specific
[00:03:48] investments, 48% of survey respondents shared that they're investing most heavily in SaaS management
[00:03:54] and monitoring tools, 46% shared they're investing in Wi-Fi management, and another 46% shared they're
[00:04:00] investing in cloud monitoring and management. So why do we care? It's this disparity. It's
[00:04:10] always the disparity. When you look at the differences between leadership and those doing
[00:04:15] the work, that misalignment causes the most friction. And in fact, I think this is the critical
[00:04:21] area to focus upon, particularly when you look at the difference between priorities of C-suite
[00:04:26] executives focusing on customer satisfaction and technicians not doing the same, or that disconnect
[00:04:33] between focusing on business outcomes where their technicians are not doing the same.
[00:04:38] And in fact, it's a very similar gap. We're seeing roughly 20% difference between the two.
[00:04:43] And in that 20%, that's the key opportunity. Those that are able to execute here and outperform
[00:04:50] are the ones that are going to do far better. It's why we care. I don't want you to focus
[00:04:55] so much on the tool sets or where they're necessarily investing. I want you to focus
[00:05:00] on the cultural gaps between leadership and those that are doing work. And it's important
[00:05:06] for you to make sure internally as an IT service provider and as a managed services
[00:05:10] provider that you're teaching your people the right skills to make sure they can communicate
[00:05:15] effectively with your customer base. That's the takeaway here that I think is most interesting.
[00:05:22] I think there's areas of growth for both providers and for their customers to focus on
[00:05:28] that interdependency. And by the way, I want to note that 96% of IT professionals saying they're
[00:05:34] using at least one AI or ML tool. I'm not surprised by that, but it's how effectively they're
[00:05:40] using it that we're going to want to dig into further. Now, if you got a question or a comment,
[00:05:45] make sure to put it in the chat. If you're watching live, we will take any questions during
[00:05:50] our question section later. Now, you know, I love a good use case. So let's dive into
[00:05:57] TechCrunch's look at generative AI making its way into healthcare. And not everyone is
[00:06:02] convinced of its readiness. While there are investments and startups focused on generative
[00:06:08] AI in healthcare, concerns remain about its limitations and efficacy. Studies have shown
[00:06:14] that generative AI struggles with complex medical queries and can produce incorrect
[00:06:18] diagnoses. Additionally, there are worries about perpetuating stereotypes and biases.
[00:06:25] So let's quote directly from the article, quote, only about half, at 53%, of US consumers said
[00:06:33] they thought generative AI could improve healthcare, for example by making it more accessible
[00:06:38] or shortening appointment wait times. Fewer than half said they expected generative AI to make
[00:06:43] medical care more affordable. Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare
[00:06:50] Network, the US Department of Veterans Affairs' largest health system, doesn't think that the
[00:06:56] cynicism is unwarranted. Borkowski warned that generative AI's deployment could be premature
[00:07:02] due to its significant limitations and the concerns about its efficacy. Well, one of the key
[00:07:09] issues with generative AI is its inability to handle complex medical queries or emergencies,
[00:07:15] he told TechCrunch. Its finite knowledge base, that is, the absence of up-to-date clinical
[00:07:20] information and the lack of human expertise make it unsuitable for providing comprehensive
[00:07:25] medical advice or treatment recommendations. Several studies suggest there's credence to these
[00:07:31] points. In a paper in the journal JAMA Pediatrics, OpenAI's generative AI chatbot, ChatGPT,
[00:07:38] which some healthcare organizations have piloted for limited use cases, was found to misdiagnose
[00:07:44] pediatric diseases 83% of the time. And with OpenAI's GPT-4 as a diagnostic assistant, physicians at
[00:07:51] Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong
[00:07:56] diagnosis as its top answer nearly two times out of three. And in another section, quote, Arun Thirunavukarasu,
[00:08:04] a clinical research fellow at the University of Oxford, said there's nothing unique about
[00:08:09] generative AI precluding its deployment in healthcare settings, quote, more mundane
[00:08:14] applications of generative AI technology are feasible in the short and long term and include
[00:08:19] text correction, automatic documentation of notes and letters, and improved search
[00:08:23] features to optimize electronic patient records, he said. There's no reason why
[00:08:27] generative AI technology, if effective, couldn't be deployed in these sorts of roles immediately.
[00:08:33] But while generative AI shows promise in specific narrow areas of medicine, experts like
[00:08:39] Borkowski point to the technical and compliance roadblocks that must be overcome before
[00:08:44] generative AI can be used and trusted as an all around assistive healthcare tool.
[00:08:50] Significant privacy and security concerns surround using generative AI in healthcare,
[00:08:54] Borkowski said. The sensitive nature of medical data and the potential for misuse or unauthorized
[00:09:00] access pose severe risks to patient confidentiality and trust in the healthcare system.
[00:09:06] Furthermore, the regulatory and legal landscape surrounding the use of generative AI
[00:09:10] in healthcare is still evolving with questions around liability, data protection,
[00:09:16] and the practice of medicine by non human entities still needing to be solved.
[00:09:21] Another thought leader, bullish as he is around generative AI in healthcare, says there needs to
[00:09:26] be rigorous science behind tools that are patient facing. So why do we care? It's the studies,
[00:09:36] it's the thinking strategically around what the right use cases are. I don't necessarily
[00:09:42] think that everything is going to be effective in all cases right away, but there are obvious easy
[00:09:48] places to find those initial investments. Note taking, text correction, automatic documentation,
[00:09:55] summarization of information, obvious first use cases that will get adoption quickly.
[00:10:01] Beyond that, you're going to be looking at what further results are coming
[00:10:06] out of these kinds of studies to help guide your clients through the kinds of investments
[00:10:11] they're going to be making. It's not going to happen overnight, and that's a good thing because
[00:10:16] we want to have thought through those various pieces. And you can use those first rounds of
[00:10:22] implementations to help guide with policy and procedure to make sure that you're not putting
[00:10:27] confidential information into untrustworthy agents or making sure that your people understand
[00:10:33] the correct use. And ultimately, the thing that I'm finding most consistent across all
[00:10:39] of these use cases is the places where humans remain in the loop are the most effective.
[00:10:46] So that's why we care. We continue to look for use cases that are effective and are working out
[00:10:51] well for customers in order to leverage them further in expansive service offerings.
[00:10:56] Now reminder, we continue to take questions and I'll watch that chat. If you've got something
[00:11:01] to say, we do appreciate it. And by the way, to those of you who liked the videos,
[00:11:06] Thank you very much. It's much appreciated. Now, last weekend, there was an interview that came out
[00:11:12] with John Gillum whose company focuses on AI detection. He joined me for an interview that
[00:11:18] released last weekend. Here's a segment of that interview.
[00:11:22] I think that leads me to then to where my next question is like, how do you
[00:11:26] tell people to best use AI detectors to be effective?
[00:11:30] Yeah. So most of our use case is within the world of marketing. So where we have people that
[00:11:37] have writers on staff that are creating content, they're happy to pay those writers $100,
[00:11:44] $1,000 for an article, whatever it might be. They're not super happy when they find out
[00:11:48] that that content was copied and pasted out of ChatGPT in five seconds. That doesn't
[00:11:52] feel like a fair value exchange. And so the way we recommend our tech to be used is
[00:11:58] within the copy editor. So whoever's receiving the content from writers,
[00:12:03] use the tool as a spot check and build up a history with each writer to understand,
[00:12:09] okay, this person has always had 0% AI detected. They had one article that was a 50%
[00:12:16] sort of a flip of a coin. The classifier was unclear. And then it was back to 0%.
[00:12:21] So probably that's the situation where the classifier or detector got it wrong.
[00:12:26] Let that go. There's another situation where you would see a trend on a writer that used to get 0%
[00:12:32] is now getting 90%, 100%, 100%. Pretty good sign that that writer is now using AI and if that's
[00:12:37] not within your policies, then you should not be using that content.
[00:12:43] Gotcha. So how are you then advising organizations to put their policies together?
[00:12:48] What are the questions that you think are most important to answer?
[00:12:52] Yeah, this comes back to that first question that you asked around understanding
[00:12:55] within the organization where the risk exists of AI having input into content that you might
[00:13:02] not want. And so if you have a part of your organization where absolutely no content,
[00:13:08] no AI usage is allowed, some law firms are taking that approach, some medical
[00:13:12] organizations are taking that approach where we want no chance that AI is being used.
[00:13:17] We want whoever's writing this to have 100% responsibility on the words that have been
[00:13:23] used. And in those situations, then the policy will be different than say the marketing team where
[00:13:28] there's a little bit of sort of let time build up, let some data build up where there's a sort
[00:13:34] of a stronger, more binary approach potentially within law firms. So I think it again, it comes
[00:13:40] back to sort of a nuanced approach but making it clear to the editors who are then in charge of
[00:13:47] policing whatever policy gets enforced, but creating a policy that accurately reflects the risks at
[00:13:53] that portion of the organization.
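To make John's workflow concrete: the spot-check approach he describes (score each submission, keep a running per-writer history, and act on trends rather than single flagged articles) can be sketched as a simple heuristic. The function name and thresholds below are illustrative assumptions, not any particular detector product's actual behavior:

```python
# Illustrative sketch of the per-writer trend heuristic described above.
# 'scores' is a chronological list of AI-detection percentages per article;
# the 80% flag threshold and 2-article streak are assumed values.

def assess_writer(scores, flag_threshold=80, streak=2):
    """Return 'clear', 'isolated_flag', or 'trend' for a writer's history."""
    if not scores or max(scores) < flag_threshold:
        return "clear"
    # Count trailing consecutive high-scoring articles to detect a trend.
    run = 0
    for s in reversed(scores):
        if s >= flag_threshold:
            run += 1
        else:
            break
    return "trend" if run >= streak else "isolated_flag"

# The coin-flip case John mentions: one 50% article, then back to 0%.
print(assess_writer([0, 0, 50, 0]))        # -> clear
# A single recent high score: probably the detector got it wrong.
print(assess_writer([0, 0, 0, 90]))        # -> isolated_flag
# A writer who used to score 0% now scoring 90-100% repeatedly.
print(assess_writer([0, 0, 90, 100, 100])) # -> trend
```

The design choice mirrors his advice: a lone mid-range score gets let go, while only a sustained run of high scores triggers a policy conversation.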
[00:13:57] So ultimately we have to ask why do we care? I love this conversation with John because it was
[00:14:03] the application of practical tools in a way that helped enforce policy and it is a great
[00:14:10] example of human in the loop. Really, if you've missed this, I would encourage you to
[00:14:14] dive in here because he walks through the various ways that the organizations that leverage his
[00:14:19] toolkit use the product to be thoughtful. And one of the things is that it's much more about finding
[00:14:26] trends and patterns than it is about specific implementations. Here, what John was talking
[00:14:32] about is when he's engaged with a marketing firm that's hired an organization
[00:14:37] for content management and they want to look at each writer. What they're
[00:14:42] doing over time is running it through the detector and getting a sense of how that
[00:14:47] person works over time. They are able to then look at those exceptions and make a
[00:14:53] determination whether or not that's actually an exception, or look for a larger pattern of
[00:14:58] increasing usage of AI generation tools and break the relationship they have. I found it
[00:15:04] to be a really insightful set of uses and the way we could think about applying policies and
[00:15:10] procedures in our day-to-day operations. Encourage you to listen to it. It is on the YouTube feed
[00:15:16] and the podcast feed now. Now, remember we're continuing to take questions and you can submit
[00:15:21] them anytime in the chat window or if you're listening to the recording, go ahead and submit
[00:15:26] for next week and send to question@mspradio.com. Now I've got one more story before we dive
[00:15:33] into questions and it's a use case I just had to write up.
[00:15:38] Holodeck, a system for generating interactive 3D environments has been developed to train robots
[00:15:44] in navigating real-world environments. It's leveraging large language models and Holodeck
[00:15:48] can generate a wide range of indoor environments based on user requests. The system has been
[00:15:54] found to outpace human created tools in terms of realism and accuracy and has shown
[00:15:59] positive effects on the agent's ability to navigate new spaces. Holodeck addresses the
[00:16:05] challenge of efficiently generating diverse environments for robot training, expanding
[00:16:10] the scope of research beyond residential spaces. Quoting the article, named for its Star Trek
[00:16:17] forebear, Holodeck generates a virtually limitless range of indoor environments using AI
[00:16:23] to interpret users' requests. We can use language to control it, says Yang. You can
[00:16:28] easily describe whatever environment you want and train the embodied AI agents. Holodeck leverages
[00:16:35] the knowledge embedded in large language models, the systems underlying ChatGPT and other chatbots.
[00:16:42] Language is a very concise representation of the entire world, says Yang. Indeed,
[00:16:46] LLMs turn out to have a surprisingly high degree of knowledge about the design of spaces
[00:16:52] thanks to the vast amounts of text they ingest during training. In essence,
[00:16:57] Holodeck works by engaging an LLM in conversation using a carefully structured series of hidden
[00:17:02] queries to break down user requests into specific parameters. And just like Captain Picard might
[00:17:08] ask Star Trek's Holodeck to simulate a speakeasy, researchers can ask Penn's Holodeck to create
[00:17:14] a 1B1B apartment of a researcher who has a cat. The system executes this query by dividing it
[00:17:20] into multiple steps. First, the floor and walls are created, then the doorway and windows.
[00:17:26] Next, Holodeck searches Objaverse, a vast library of pre-made digital objects, for the
[00:17:32] sort of furnishings you might expect in such a space. A coffee table, a cat tower, and so on.
[00:17:38] Finally, Holodeck queries a layout module which the researchers designed to constrain
[00:17:43] the placement of objects so you don't wind up with a toilet extending horizontally from the
[00:17:48] wall. Now, why do we care? I love pulling interesting use cases again over and over,
[00:17:57] and we can always borrow from science fiction to get a sense of what's coming. Think this through,
[00:18:02] they're using the Unity engine to build virtual environments on demand for testing scenarios.
[00:18:08] It's incredibly powerful. If you need an infinite test variety to test robots in,
[00:18:14] you can now do so with this simulation. They can actually generate the graphical representations
[00:18:22] of full environments fully rendered using Unity. This is exactly the kind of thinking that we want
[00:18:27] to do. You might want to do it from a designer perspective for a single environment, but some
[00:18:33] use cases require infinite numbers, and that's where the recursive use of these kinds
[00:18:38] of tools is particularly interesting. Now, we care because we want to extend the application
[00:18:44] and thinking again to our own customers. Are there scenarios where we can leverage these kinds of
[00:18:51] technologies to create multiple scenarios we haven't before, or be able to create data sets that
[00:18:58] we might not have been able to test against prior? That's the kind of thinking we want to use
[00:19:02] when considering these tools. Now, we're coming up on those questions. Make sure to put them
[00:19:07] in chat if you've got any last minute ones. I do love taking them. Focus first, always on
[00:19:13] those user submitted ones. Remember, bring your questions live and you'll get a live response.
[00:19:18] This is a great part of the show to get involved with our ongoing discussions.
[00:19:24] Let's take our first submitted question. What are the potential impacts of cybersecurity
[00:19:30] company acquisitions on the service quality and client relationships of MSPs?
[00:19:37] The acquisition question. You know, this one comes around every so often for me to get
[00:19:42] my thoughts on what happens when companies are acquired. I start with my basic advice.
[00:19:49] Anytime a company is acquired, do not panic and understand what your contractual obligations are
[00:19:55] because ultimately the company wants to continue to engage and receive your customer
[00:20:00] engagement and receive your money on an ongoing basis. So they're going to protect that
[00:20:05] contract as best as they can. And they are obligated to do that continuing based
[00:20:10] on the contract that you have with the previous organization. And remember, post acquisition,
[00:20:16] there's always 12, 18 or 24 months of merger as the two organizations come together.
[00:20:24] Oftentimes when you consider changes in organizations, they don't come until after
[00:20:29] that process is concluded. So the time to think about that and be engaged is not at
[00:20:35] the time of transaction. It's either two years down the road when they are in a position to
[00:20:41] start making changes or on your contract end. So from my perspective, I think about it from
[00:20:49] that rhythm and I advise people to not worry on day one. Now here's the other thing that I
[00:20:55] think is important specific to the listener's question around cybersecurity. There are,
[00:21:00] frankly, too many cybersecurity companies out there to support the market. I'm anticipating
[00:21:06] we're going to see consolidation of those vendors into a smaller set. Either companies will get
[00:21:14] acquired for talent or technology and get rolled up into more robust organizations. So you should
[00:21:21] be in a position where you're expecting more acquisitions in the cybersecurity space. It's
[00:21:28] been heavily funded by VCs and private equity over the past number of years, meaning those companies
[00:21:35] are looking for ways to extract that value out. And they're going to look to do that with mergers
[00:21:41] and acquisitions. So you should expect that there are going to be more of them to have less
[00:21:45] cybersecurity companies over time, frankly, because we have too many. It's great taking
[00:21:51] questions. I love doing it. Make sure you send them in for next week. If you didn't get a
[00:21:56] chance for yours to be heard this week. Now I want to highlight an upcoming interview,
[00:22:02] something that's coming up this coming weekend, an interview with Rodrigo Larrero on cybersecurity
[00:22:07] defense. Here's a clip from that interview. You've also offered that this, that AI may be part of
[00:22:14] the defense. How do you think that that fits in as a countermeasure? So more than just that,
[00:22:22] I think AI is the only defense that we can have against this. I mean, it's essentially an arms
[00:22:31] race, right? You don't bring a knife to a gunfight and the attackers are going to use AI. There's
[00:22:39] nothing you can do about that. So if on our side, on the defense side, you are hesitant about
[00:22:48] about the use of AI, you don't want your people to use AI, you are essentially putting your cyber
[00:22:55] defenses with knives, arming them with knives to fight off gunslingers. So that's why I think that
[00:23:04] we must use AI. One of the most obvious applications of AI is in terms of a cybersecurity
[00:23:12] analyst. Our industry is desperately lacking qualified cybersecurity analysts. So AI can
[00:23:23] contribute to increase the effectiveness and the productivity of those cybersecurity analysts,
[00:23:29] because right now, the few that we have are already overwhelmed with responding to incidents in
[00:23:36] order to increase and protect against these heightened attackers. The only way we can do that is
[00:23:46] through the use of AI. Rodrigo has some bold points. In fact, he thinks AI may be a nuclear level
[00:23:54] event in cyber. I do push back in the interview and I encourage you all to listen to that
[00:23:59] this coming weekend and we'll discuss again next week in the live show. A reminder for
[00:24:04] listeners that my Patreon supporters already have this video and you can get all my interview
[00:24:10] content early as a supporter. Visit patreon.com/mspradio to sign up now. I want to
[00:24:17] thank Sales Buildr, our Patreon sponsor, whose support made this show possible. Focus on your
[00:24:22] IT sales workflow with the power of automation and visit them at salesbuildr.com. That's B-U-I-L-D-R dot com.
[00:24:30] And vendors, you too can get your name mentioned on the live show.
[00:24:35] It's a simple monthly subscription.
[00:24:38] Visit patreon.com slash MSP radio for more.
[00:24:41] And listeners, thank you for all your support.
[00:24:44] And here's what you can do to help continue with the show.
[00:24:47] Make sure to like share and follow on all your favorite platforms and share it with
[00:24:52] your friends and encourage them to listen as well.
[00:24:55] Or you can support directly on Patreon with our give what you want model.
[00:25:00] You set what you think the content is worth.
[00:25:02] And if you have a question and are listening to the recording, send it in at question at
[00:25:07] MSPradio.com.
[00:25:09] Next week, the live show is on Wednesday, so catch us live on Wednesday, 3pm Eastern.
[00:25:15] Thanks for joining me for the Business of Tech Lounge, and I will see you next time.

