A lawsuit has been filed against OpenAI, alleging that its chatbot, ChatGPT, played a role in the tragic suicide of a teenager named Adam Raine. The complaint, brought by Adam's parents, claims that the chatbot not only assisted him in drafting a suicide note but also discouraged him from seeking help from adults, thereby worsening his mental health struggles. OpenAI has expressed condolences and is working on implementing parental controls and emergency contact features to enhance the safety of its chatbot.
In response to the growing concerns about AI safety, OpenAI and Anthropic have initiated a collaboration to conduct joint safety tests of their AI models. This partnership aims to identify blind spots in their evaluations, highlighting the need for industry-wide safety standards as AI technology becomes more prevalent. Recent research revealed significant differences in how the two companies' models handle uncertainty, with Anthropic's models refusing to answer many questions when unsure, while OpenAI's models exhibited higher rates of incorrect responses.
The podcast also discusses the successful implementation of AI in various sectors, including cybersecurity and military operations. Kyndryl, an IT infrastructure services company, has automated routine security tasks, resulting in a 90% reduction in incidents requiring human intervention. Additionally, U.S. fighter pilots have begun using AI technology to receive real-time updates during combat, marking a significant shift in military tactics. Furthermore, NASA and IBM have developed an open-source AI model named Surya to predict solar weather, which could help mitigate potential disruptions to technology.
Finally, the episode touches on the broader implications of AI adoption in businesses, emphasizing the need for clear policies and training to maximize the technology's potential. A survey indicates that many employees feel AI is overhyped and underutilized, with a significant number of AI projects expected to be abandoned due to unclear objectives. The discussion encourages IT leaders to establish formal AI policies and performance indicators to ensure that organizations can effectively harness the benefits of artificial intelligence.
Four things to know today
00:00 OpenAI Sued Over Teen Suicide, Adds Parental Controls, and Teams With Anthropic on Safety
04:34 AI Shrinks Security Teams, Helps Fighter Pilots, and Even Predicts the Sun
08:13 Cloudflare Adds AI Guardrails, Blackpoint Teams With NinjaOne, and AWS Bets Big With TD SYNNEX
11:01 National Security, New Interfaces, and AI Reality Check—Three Big Ideas This Weekend
Supported by: https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship
https://getflexpoint.com/msp-radio/
💼 All Our Sponsors
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
🚀 Join Business of Tech Plus
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
🎧 Subscribe to the Business of Tech
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
📰 Story Links & Sources
Looking for the links from today’s stories?
Every episode script — with full source links — is posted at:
🎙 Want to Be a Guest?
Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:
💬 https://www.podmatch.com/hostdetailpreview/businessoftech
🔗 Follow Business of Tech
LinkedIn: https://www.linkedin.com/company/28908079
YouTube: https://youtube.com/mspradio
Bluesky: https://bsky.app/profile/businessof.tech
Instagram: https://www.instagram.com/mspradio
TikTok: https://www.tiktok.com/@businessoftech
Facebook: https://www.facebook.com/mspradionews
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
[00:00:02] It's Thursday, August 28th, 2025, and I'm Dave Sobel. Four things to know today. A tragic OpenAI lawsuit and joint safety tests put AI guardrails and model choice front and center. Kyndryl's 90% SOC automation shows outcome-based value. New AI controls and partnerships are reshaping the MSP stack. And big-picture shifts in chips, interfaces, and turning hype into results. This is the Business of Tech.
[00:00:31] A lawsuit has been filed against OpenAI alleging that its chatbot, ChatGPT, encouraged a teenager to take his own life. The complaint, brought by the parents of 16-year-old Adam Raine, claims that the chatbot not only assisted him in drafting a suicide note, but also advised him against seeking help from adults, instead deepening his despair through affirming responses to his suicidal thoughts. According to the complaint, Adam began interacting with ChatGPT in early 2024, initially,
[00:01:00] seeking help with homework. However, his conversations took a darker turn as he expressed feelings of hopelessness and even discussed methods of suicide. The lawsuit claims that ChatGPT responded in ways that exacerbated Adam's mental health struggles, including offering suggestions on hanging techniques and encouraging risky behavior, such as drinking alcohol.
[00:01:21] OpenAI has expressed condolences over Adam's death and stated that they are continually working to improve the safety of their chatbot, especially in terms of recognizing emotional distress. In response, OpenAI has announced plans to introduce parental controls for the chatbot. The company is exploring features such as emergency contact options that would allow ChatGPT to reach out to designated contacts during severe situations, as well as improving its existing safeguards that may fail during prolonged interactions.
[00:01:51] While I'm on safety, OpenAI and Anthropic have initiated a rare collaboration to conduct joint safety testing of their AI models. The effort aims to identify blind spots in their internal evaluations, highlighting the increasing importance of safety measures as AI technology becomes more widely used. In an interview, OpenAI co-founder Wojciech Zaremba emphasized the need for industry-wide standards for safety, especially as companies compete vigorously for talent and market share.
[00:02:17] Recent joint research revealed significant differences in how the models respond to uncertainty, with Anthropic's models refusing to answer up to 70% of questions when unsure, while OpenAI's models displayed higher hallucination rates by attempting answers without sufficient information. Why do we care? Let's start by acknowledging how tragic this story is. A family is suing OpenAI, saying ChatGPT pushed their son towards suicide.
[00:02:44] OpenAI's response: parental controls, and maybe emergency contacts. The business translation? Your clients will expect real guardrails, not a policy doc. Underwriters will demand proof: age-gating, crisis keyword blocks, 988 SMS escalation, prompt logging, and human-in-the-loop approvals. No audit trail, no coverage, or higher premiums. So flip the switch. Block crisis content, log safety events, and make sure a human gets paged.
[00:03:14] If you serve schools or youth programs, this is all table stakes. And OpenAI plus Anthropic did joint safety tests. Headline? Claude says no a lot, up to 70% when unsure. OpenAI says yes more, and gets facts wrong more. That's your buying decision. For high-risk stuff, pick refusals. For copywriting, pick speed. Then verify. Track refusal rate, hallucination rate, and escalations, or are you all just guessing?
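Tracking refusal rate, hallucination rate, and escalations doesn't require heavy tooling; a running tally per model is enough to start. A minimal sketch, where the class and outcome labels are assumptions for illustration:

```python
from collections import Counter


class ModelScorecard:
    """Running tally of outcomes per model, to compare refusal and
    hallucination rates across vendors over time."""

    def __init__(self):
        self.counts = Counter()   # (model, outcome) -> count
        self.totals = Counter()   # model -> total responses seen

    def record(self, model: str, outcome: str) -> None:
        # outcome: "ok", "refusal", "hallucination", or "escalation"
        self.totals[model] += 1
        self.counts[(model, outcome)] += 1

    def rate(self, model: str, outcome: str) -> float:
        """Fraction of this model's responses with the given outcome."""
        total = self.totals[model]
        return self.counts[(model, outcome)] / total if total else 0.0
```

Feed it from your review queue and the "are we just guessing" question answers itself in the monthly report.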
[00:03:40] Package it as an AI safety governance service and make it billable. This episode is supported by Comet Backup. Not all heroes wear capes. Some live among us, quietly saving businesses, one help desk ticket at a time. Whether you're battling ransomware, hardware failure, or human error, Comet's powerful backup and recovery solutions put you in control. Manage all your backups in Comet's simple, centralized platform.
[00:04:07] Protect computers, servers, virtual environments, emails, and databases. When disaster strikes, be the hero your business needs. With Comet Backup, you're not just saving the data, you're saving the day. Comet Backup, for the everyday IT heroes. Visit cometbackup.com to start your free 30-day trial today. Get $100 free credit when you sign up with the promo code MSPRADIO. Comet Backup. Be the hero. Save the day. Three AI use cases for you.
[00:04:37] An IT giant has significantly reduced its security incident response team by half, leveraging artificial intelligence to automate routine tasks. Kyndryl, a major player in IT infrastructure services, has successfully used software from Palo Alto Networks to handle repetitive security operations, resulting in a 90% decrease in incidents requiring human intervention. The software not only enhances efficiency by isolating potentially compromised devices,
[00:05:03] but also analyzes vast amounts of data to prevent breaches automatically. Kyndryl's investment in these AI tools, estimated around $600,000 last year, has led to a reduction in staffing from 80 analysts to fewer than 40, focusing only on more complex security issues. The trend towards using AI in cybersecurity is growing, with firms like KPMG also adopting similar technologies to streamline compliance audits, showcasing a shift in the industry towards automation.
[00:05:33] And in a groundbreaking test, United States fighter pilots have taken directions from an artificial intelligence system for the first time, marking a potential shift in combat tactics. Traditionally, pilots would communicate with ground support teams who monitor radar, but during this recent trial, they consulted with Raft AI's Air Battle Manager technology to verify their flight paths and receive quicker updates on nearby enemy aircraft. This comes as defense technology companies increasingly focus on automation,
[00:06:04] with firms delivering unmanned fighter drones that can operate alongside manned aircraft. Raft AI's chief executive officer noted that decisions that previously took minutes can now be made in seconds, which enhances pilots' ability to respond to threats more quickly, although it raises concerns about the potential loss of strategic judgment in military operations. NASA and IBM have launched an innovative open-source artificial intelligence model named Surya, S-U-R-Y-A,
[00:06:32] designed to predict solar weather and its impacts on technology both on Earth and in space. The model is particularly significant for small businesses that increasingly rely on technology, making them vulnerable to disruptions caused by solar activity. Recent estimates suggest that solar weather could lead to economic losses of up to $2.4 trillion over five years, affecting multiple sectors. IBM's director of research in Europe noted that Surya enhances the ability to anticipate solar events,
[00:07:01] safeguarding operations from potential disruptions like GPS failures and power outages. Trained on nine years of high-resolution solar observation data, Surya promises a 16% improvement in predicting solar flares compared to traditional methods, offering a valuable tool for industries that depend heavily on that infrastructure. Why do we care? Kyndryl chopped SOC grunt work with Palo Alto's automation,
[00:07:27] claims of a 90% drop in human-handled incidents and headcount halved. Don't copy the headline, copy the discipline. Baseline metrics automate the noisy 20% and put guardrails on every playbook. And bill for results, not seats. Fighter pilots taking cues from AI? Cool demo, early days. The business translation is simple. Agent ops. If an agent can push a button, you need permissions, logs, and an undo. Sell that as a service.
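The "permissions, logs, and an undo" pattern for agent actions can be sketched in a few lines. Everything here is illustrative, not any product's API; the shape is what matters: no action without a role check, no action without an audit log line, and no action you can't reverse.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-ops")


class AgentAction:
    """Wrap any button an agent can push with a permission check,
    an audit log, and an undo."""

    def __init__(self, name, do, undo, allowed_roles):
        self.name, self.do, self.undo = name, do, undo
        self.allowed_roles = set(allowed_roles)
        self.history = []  # arguments of applied actions, for rollback

    def execute(self, actor: str, role: str, *args):
        if role not in self.allowed_roles:
            log.warning("%s denied %r for role %s", actor, self.name, role)
            raise PermissionError(f"{role} may not run {self.name}")
        result = self.do(*args)
        self.history.append(args)  # keep what we need to undo
        log.info("%s ran %r args=%r", actor, self.name, args)
        return result

    def rollback(self):
        """Undo the most recent action, and log that too."""
        args = self.history.pop()
        log.info("undo %r args=%r", self.name, args)
        return self.undo(*args)
```

For example, an "isolate endpoint" action would pair the isolate call as `do` and the un-isolate call as `undo`, so Monday morning's unblock is one audited call, not a scramble.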
[00:07:57] NASA plus IBM predicting solar flares better and earlier? It's not just sci-fi, it's uptime insurance. Wire alerts into your NOC, pre-stage GPS and SD-WAN failbacks, and show impact avoided in the QPR. That's all sticky revenue. Cloudflare has announced new capabilities for its zero-trust platform, Cloudflare One, aimed at enabling secure adoption of generative artificial intelligence across organizations.
[00:08:25] The features allow businesses to understand, analyze, and control how generative AI is utilized, enhancing productivity while maintaining security and privacy. The platform also includes AI prompt protection, which enables security teams to monitor interactions with AI models and enforce policies to protect sensitive data. Blackpoint Cyber and NinjaOne have announced a partnership aimed at enhancing cybersecurity for managed service providers. The collaboration combines Blackpoint's expertise in managed detection and response
[00:08:54] with NinjaOne's automated endpoint management platform. TD SYNNEX has announced a strategic collaboration agreement with Amazon Web Services, aimed at accelerating the adoption of artificial intelligence and cloud services across the Americas. This multi-year agreement will provide investment resources to connect small and medium-sized businesses with enhanced AWS services, allowing them to expand their cloud offerings.
[00:09:18] The collaboration aims to streamline access to AWS Marketplace programs for independent software vendors, enabling faster monetization and access to new customer segments. BlueCat Networks has announced a new channel-first strategy with the launch of its Blue Catalyst partner program, aimed at enhancing partner profitability through an integrated portfolio of DNS, DHCP, and Internet Protocol Address Management solutions.
[00:09:45] The program also includes products from LiveAction, which BlueCat acquired in 2024, providing partners with advanced network observability and intelligence solutions. Why do we care? Cloudflare's new AI guardrails? It's your AI governance SKU. Approve the models, lock the exits, log the prompts, send the report. But force egress through your stack, or it's all Swiss cheese. And map those controls to your E&O and cyber policy requirements,
[00:10:13] and export a monthly attestation and report. Make the insurer your stakeholder. Blackpoint plus NinjaOne? Love the story. Less swivel, faster isolate. But don't buy the t-shirt on integration until your tabletop proves who presses isolate and who unblocks on Monday mornings. Measure mean time to resolution, or it's all just marketing. TD SYNNEX and AWS? That equals money and programs.
[00:10:39] If you're AWS-leaning, grab the funds and build a 90-day marketplace pilot with hard KPIs. If not, don't get locked into consumption targets. BlueCat's channel-first push with LiveAction? Package up observability as uptime insurance. But verify the integration and training path before you promise new SLOs. That's margin if you make it repeatable. Let's get into some big ideas for your long holiday weekend in the U.S.
[00:11:07] Ben Thompson has an alternative take on the U.S. government's decision to acquire 10% of Intel. It's the best of bad choices. He argues it may be necessary to secure long-term national security interests in semiconductor manufacturing. Critics, including Scott Linthicum of the Cato Institute, warned that this could lead to political influences over Intel's operations, potentially compromising the company's competitiveness and decision-making processes.
[00:11:32] Thompson notes that Intel has lagged in technological advances compared to competitors like NVIDIA and Taiwan Semiconductor Manufacturing Company. He emphasizes the unique position of semiconductor manufacturing, which requires long-term investment and commitment, arguing that without government involvement, the U.S. risks becoming overly dependent on foreign companies for critical technologies.
[00:11:55] The stakes are high, as the loss of domestic semiconductor production could have severe repercussions for U.S. national security and technological leadership. Are new interfaces coming? Runtime explores how generative AI technology is revolutionizing how users interact with applications, moving beyond traditional interfaces like windows, icons, menus, and pull-downs. Former Google CEO Eric Schmidt highlighted that users will soon be able to create custom interfaces
[00:12:23] simply by providing clear instructions, indicating a shift in design thinking for application developers. Currently, many users still rely on point-and-click methods, but as tools like ChatGPT emerge, the potential for personalized user experiences grows. Companies need to adapt, as evidenced by the surge in interest in applications that combine familiar interfaces with generative AI capabilities. This necessity for change is echoed by industry leaders,
[00:12:51] emphasizing the importance of tailored experiences in both consumer and enterprise software applications. InformationWeek offers up From Promise to Practice: How IT Leaders Can Turn AI Hype Into Tangible Value. The article emphasizes that while artificial intelligence holds significant potential for enhancing productivity, many organizations struggle to harness this promise effectively. A survey by GoTo found that 62% of employees believe AI is overhyped,
[00:13:19] and 86% feel they are not utilizing it to its full capacity. The article highlights that the core issue lies not in access to AI tools, which are increasingly available, but in execution. Less than half of IT leaders report having a formal AI policy, and 87% of employees say they have not received adequate training to use AI tools. As a result, Gartner predicts that by the end of the year, at least 30% of AI projects will be abandoned due to unclear objectives and high implementation costs.
[00:13:49] IT leaders are encouraged to establish clear policies, prioritize practical training, and measure new performance indicators to ensure that AI can deliver its promised benefits effectively. Why do we care? Some questions for you on Intel. Is that national security or political risk on your price list? Either way, don't expect magic fabs tomorrow. On interfaces, how might support calls shift from where's the button to why did the agent do that?
[00:14:19] And on AI value, how much are you measuring that on your projects? This episode is supported by Flexpoint. Hey MSPs, tired of chasing payments and stuck in manual billing chaos? Flexpoint offers a purpose-built, managed service provider payment platform with features like automated accounts receivable, branded client portals, same-day ACH powered by artificial intelligence, and seamless integration with popular PSA tools and accounting systems.
[00:14:47] With custom auto-pay rules, passwordless secure checkout, and one-click financing, providers process invoices up to 95% faster, eliminate reconciliation headaches, and unlock on-demand working capital. As a result, MSPs gain reliable cash flow, reduce manual effort, and elevate their client experience without long-term contracts. To see how Flexpoint can streamline your billing operations and secure faster payments, visit getflexpoint.com
[00:15:17] slash msp-radio and request a demo today. That's Flexpoint, making payments simple and cash flow perfect. Thanks for listening. Today is National Bowtie Day, so a nod to CompTIA's Seth Robinson, who will be on the live show next Wednesday. It's also National Burger Day and National Cherry Turnover Day. I'm previewing some upcoming additions to the show lineup on my Patreon at patreon.com slash msp-radio.
[00:15:45] Some new content is coming. Sign up now to get very early access. The Business of Tech is written and produced by me, Dave Sobel, under ethics guidelines, posted at businessof.tech. If you've enjoyed the show, make sure you've subscribed or followed on your favorite platform. It's free and helps directly. Give us a review, too. If you want to support the show, visit patreon.com slash msp-radio and you'll get access to content early.
[00:16:14] Or buy our Why Do We Care merch at businessof.tech. Have a question you want answered? We take listener questions, send them in, ideally as a voice memo or video to question at msp-radio.com. I answer listener questions live on our Wednesday live show on YouTube and LinkedIn. If you've got a comment or a thought on a story, put it in the comments if you're on YouTube or reach out on LinkedIn if you're listening to the podcast. And if you want to advertise on the show,
[00:16:43] visit msp-radio.com slash engage. Once again, thanks for listening and I will talk to you again on our next episode. Part of the MSP Radio Network.

