OpenAI has released research in collaboration with the MIT Media Lab that explores the emotional impact of using ChatGPT. The study indicates that while over 400 million people engage with ChatGPT weekly, only a small number form emotional connections with the chatbot, which is primarily marketed as a productivity tool. Notably, female users reported a decrease in socialization after four weeks of use, and those who interacted with the chatbot in a voice different from their own experienced heightened feelings of loneliness. The findings suggest that users who bond with ChatGPT may face increased loneliness and emotional dependency, prompting OpenAI to submit these studies for peer review.
In the competitive landscape of generative artificial intelligence, OpenAI is reportedly facing significant financial challenges, with annual operating costs estimated between $7 billion and $8 billion. AI scholar Kai-Fu Lee points out that as foundational models become more commoditized, OpenAI may struggle to compete with cheaper alternatives like DeepSeek, which operates at just 2% of OpenAI's costs. Lee emphasizes that the economics of the AI industry are shifting toward open-source models, which are cheaper to produce and operate, suggesting that while OpenAI is not on the brink of collapse, the market may soon be dominated by a few key players.
The podcast also discusses the evolving capabilities of AI models, highlighting the latest version of ChatGPT, which can now blend text and image generation and respond to voice commands. Additionally, DeepSeek has upgraded its AI model, showing improved performance in coding and reasoning tasks, while Google has introduced its Gemini 2.5 Pro model, which boasts enhanced reasoning capabilities and a large token context window. These advancements indicate a trend where AI models are becoming more versatile and capable of handling complex tasks, emphasizing the importance of deployment flexibility and cost efficiency in the evolving AI landscape.
Finally, the episode addresses ongoing privacy concerns surrounding AI technologies, including a new complaint against OpenAI in Europe for generating false information and a settlement reached by Clearview AI regarding privacy violations. The discussion highlights the legal implications of using generative AI tools, particularly in relation to GDPR compliance. Additionally, the podcast examines the lack of diversity in IT leadership, revealing that despite efforts towards diversity, equity, and inclusion, the demographic makeup of IT leadership remains largely unchanged, underscoring the need for continued focus on inclusive leadership in the tech industry.
Four things to know today
00:00 Talking to ChatGPT Might Hurt Your Mood—And OpenAI’s Bottom Line
04:15 Who’s Winning the AI Arms Race? Depends If You Want Comics, Code, or Context
08:10 From Encrypted Chats to AI Slip-Ups—More in “What Could Possibly Go Wrong?”
12:00 All Talk, No Change? IT Leadership Still Looks the Same in 2025
Supported by: https://syncromsp.com/
Event: https://www.nerdiocon.com/
💼 All Our Sponsors
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
🚀 Join Business of Tech Plus
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
🎧 Subscribe to the Business of Tech
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
📰 Story Links & Sources
Looking for the links from today’s stories?
Every episode script — with full source links — is posted at:
🎙 Want to Be a Guest?
Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:
💬 https://www.podmatch.com/hostdetailpreview/businessoftech
🔗 Follow Business of Tech
LinkedIn: https://www.linkedin.com/company/28908079
YouTube: https://youtube.com/mspradio
Bluesky: https://bsky.app/profile/businessof.tech
Instagram: https://www.instagram.com/mspradio
TikTok: https://www.tiktok.com/@businessoftech
Facebook: https://www.facebook.com/mspradionews
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
[00:00:02] It's Wednesday, March 26, 2025, and I'm Dave Sobel. Four things to know today. Talking to ChatGPT might hurt your mood and OpenAI's bottom line. Who's winning the AI arms race? Depends if you want comics, code, or context. From encrypted chats to AI slip-ups, more in what could possibly go wrong. And all talk, no change: IT leadership still looks the same in 2025. This is the Business of Tech.
[00:00:32] OpenAI has released its first research exploring how using ChatGPT impacts emotional well-being in collaboration with the MIT Media Lab. The studies reveal that more than 400 million people engage with ChatGPT weekly, yet only a small subset emotionally connects with the chatbot, which is primarily marketed as a productivity tool.
[00:00:54] Researchers found that female users were slightly less likely to socialize after four weeks of using ChatGPT, while those interacting with the chatbot in a voice different from their own reported heightened loneliness. The research involved analyzing nearly 40 million interactions and surveying over 4,000 users about their feelings. The findings indicate that individuals who form a bond with ChatGPT may experience increased loneliness and emotional dependence.
[00:01:25] OpenAI plans to submit these studies for peer review. And from ZDNet, OpenAI, which is reportedly losing billions annually, may face increasing challenges in the competitive landscape of generative artificial intelligence, according to AI scholar Kai-Fu Lee. He highlighted that as foundational models like OpenAI's become more commoditized, competing with cheaper alternatives like DeepSeek could become difficult.
[00:01:50] Lee noted that while OpenAI incurs operating costs of around $7 to $8 billion annually, its competitor operates on just 2% of that expense. He predicts that the economics of the AI industry favor open source models, which are cheaper to produce and operate. Lee suggests that while OpenAI may not be on the brink of collapse, a few key players could ultimately dominate the market.
[00:02:15] He emphasizes that the current AI environment remains highly competitive, with frequent releases of new models expected. Why do we care? OpenAI's research into how ChatGPT affects emotional well-being is notable because it edges into territory few AI companies are willing to publicly examine: user psychology and the consequences of habitual interaction. This isn't just about ethics. It has real implications for enterprise use.
[00:02:42] While ChatGPT is marketed as a productivity tool, this study shows users may still anthropomorphize it. For IT service firms building chatbots or virtual agents, this points to a design risk. Over-personalizing bots could drive emotional attachment that causes harm, especially in B2C deployments like healthcare, education, or customer support. Kai-Fu Lee's commentary surfaces a critical economic reality.
[00:03:11] The AI arms race is brutal. And OpenAI's cost structure might be unsustainable in the long run, especially against leaner open source alternatives. Foundational models are quickly becoming commoditized. IT service providers building AI-enhanced offerings should be cautious about long-term dependence on expensive proprietary APIs. Open source alternatives may become not just viable, but preferable due to cost and flexibility.
[00:03:41] This episode is supported by Syncro. Syncro, the integrated remote monitoring and management and professional services automation platform, is designed for mid-sized and growing managed service providers. Its latest innovations include an AI-powered smart ticket management system with automatic ticket classification, guided resolution steps using pre-approved scripts, and a natural language smart search function. These tools streamline ticket handling and improve response times.
[00:04:10] Discover more at SyncroMSP.com. OpenAI has launched a new image generator for its ChatGPT chatbot, enabling it to create images based on detailed user instructions. This allows users to describe complex scenarios, such as a four-panel comic strip, which the system can generate instantly. Unlike previous versions, the latest version of ChatGPT can blend multiple concepts to produce unique images.
[00:04:38] This is part of a broader trend in artificial intelligence where chatbots are evolving to combine text and image generation. The new GPT-4o-powered system in ChatGPT also supports voice commands and can respond to images and videos. OpenAI announced this version will be accessible to both free and paid users, including those subscribed to ChatGPT Plus and ChatGPT Pro services, enhancing its capabilities in the evolving field of artificial intelligence.
[00:05:06] DeepSeek announced a significant upgrade, named V3-0324, to its V3 AI model, which was first released in December. The new version shows improved performance on coding for web development and on reasoning tasks, scoring nearly 20 points higher on the American Invitational Mathematics Examination benchmark than its predecessor. DeepSeek emphasizes that while the model has enhanced writing quality, it is still recommended for use in less complex reasoning tasks.
[00:05:36] Users can access the upgraded model through Hugging Face or DeepSeek's website, although potential security and privacy concerns remain regarding the model's safety. Google unveiled its latest artificial intelligence model, Gemini 2.5 Pro, which the company claims is its most intelligent model to date. This new release comes just three months after the introduction of Gemini 2.0.
[00:05:57] Gemini 2.5 Pro features enhanced reasoning capabilities and improved performance, with the ability to process multimodal inputs, including text, audio, images, and videos. It boasts an impressive 1 million token context window, set to expand to 2 million tokens soon. In advanced reasoning benchmark tests, Gemini 2.5 Pro has achieved a state-of-the-art score of 18.8% on Humanity's Last Exam, a dataset designed to evaluate human-like reasoning.
[00:06:27] Pricing details are expected to be released soon. Why do we care? Well, the GPT-4o upgrade is less about raw horsepower and more about mainstream capability: text, image generation, and voice support built into one chatbot. The decision to include free-tier access is a notable shift toward user-base expansion and competitive defense against open-source and big-tech rivals. DeepSeek's upgraded model is more than a technical footnote.
[00:06:55] It shows that open-source models are advancing fast, especially in reasoning and coding. That also comes with a clear message. These models are good enough for a wide range of enterprise tasks, even if they're not yet top-tier in complex reasoning. And the Gemini 2.5 Pro launch is focused on reasoning depth and token context, two of the few remaining differentiators at the top end of the model stack.
[00:07:20] With a 1 million token window and 2 million coming, Gemini is built to support long-form reasoning and enterprise document processing at scale. The battle lines are being drawn not on raw power alone, but on deployment flexibility, cost efficiency, and ecosystem integration. OpenAI is commoditizing access to AI features, Google is targeting enterprise depth, and DeepSeek is pulling open source closer to parity.
[00:07:46] For IT services and enterprise buyers, the why-do-we-care is clear. AI is fragmenting into fit-for-purpose tiers. The differentiator won't be which model you choose, but how intelligently you architect around it. The winners will be those who can pivot fast, integrate flexibly, and operate AI safely across this multimodal landscape.
[00:08:10] Democratic senators questioned top national security officials, including Director of National Intelligence Tulsi Gabbard and CIA Director John Ratcliffe, during a Senate Intelligence Committee hearing regarding their involvement in the Signal chat that discussed military plans, which reportedly included a journalist. Senators expressed concern over the potential leak of sensitive information and criticized the incompetence of including a reporter in the discussions. Gabbard and Ratcliffe frequently responded with
[00:08:40] "I don't recall" when asked about specific military targets or operational details, raising doubts among senators about whether classified information was discussed. Virginia Senator Mark Warner emphasized the potential danger of releasing such information, stating that it could jeopardize American lives. As this episode went to publication, The Atlantic released the transcripts.
[00:09:03] OpenAI is facing a new privacy complaint in Europe regarding its AI chatbot, ChatGPT, which has been accused of generating false information, including a claim about a Norwegian individual being convicted of child murder. The complaint, supported by the privacy rights group NOYB, highlights that ChatGPT can produce defamatory content that violates the European Union's General Data Protection Regulation, or GDPR.
[00:09:31] A data protection lawyer at NOYB emphasized that under GDPR, personal data must be accurate, and users have the right to rectify false information. Confirmed breaches of GDPR can result in penalties of up to 4% of a company's global annual turnover. The complaint follows previous issues where ChatGPT produced incorrect personal data without a means for individuals to correct it.
[00:09:56] Clearview AI has reached a settlement in a class-action privacy lawsuit estimated to be worth over $50 million. A federal judge approved the settlement, which addresses allegations that the company violated the privacy rights of millions of Americans by scraping their facial images from the Internet without consent. Notably, the settlement structure allows plaintiffs and their lawyers to have a stake in Clearview's future value rather than receiving a one-time payment.
[00:10:22] This approach stems from the company's financial constraints, as nearly all Americans could be considered class members due to their online presence. The case was tried in Illinois, where Clearview faced accusations of infringing upon the state's Biometric Information Privacy Act. Despite the settlement, the company does not admit liability. Additionally, 22 state attorneys general expressed concerns that the settlement may not sufficiently prevent future violations.
[00:10:49] In a separate agreement in 2022, Clearview pledged to limit access to its database for private companies and Illinois government agencies for five years. Why do we care? The controversy around the Signal chat is a case study in what happens when high-level personnel use unofficial or uncontrolled communication channels, even when those platforms are encrypted. Signal may be secure, but introducing non-cleared participants like journalists into the chat is a major failure of operational controls.
[00:11:17] The same issue applies in enterprises where execs default to personal messaging apps. See my detailed analysis yesterday. Generative AI can fabricate false statements about real people, and most tools, including ChatGPT, do not yet support real-time data rectification or deletion as required under GDPR. For IT consultants integrating these tools into customer-facing applications, this is a legal exposure point.
[00:11:44] Clearview's legal troubles stem from scraping images without consent, a tactic still common among data-hungry startups. IT buyers must ask, is the model or dataset powering this tool compliant with relevant data and biometric laws? Each quarter, this podcast releases our research data on the demographic makeup of IT leadership, broken down by race and sex. By surveying public websites, we're looking to track the change over time.
[00:12:13] This quarter, we surveyed 300 companies and 4,336 humans, split 48% vendors and 48% technology providers. We found that 88% are white and 3.2% are Black. The breakdown is also 78.4% male. The data remains similar between vendors and tech providers, and while last quarter was only slightly different, this quarter is again much the same.
[00:12:37] When we look at publicly traded or Fortune 100 companies, the numbers do improve for women, but they hold steady at 26%. The racial divide remains within 2%. This data is essentially identical to the data reported throughout 2024. Why do we care? Because despite the surge in DEI initiatives, AI-driven hiring platforms, and public commitments to equity, the face of IT leadership hasn't materially changed. We're entering a time where this is not prioritized.
[00:13:07] From a data perspective, it's a new test. Does this harden the status quo further, or do we see changes in either direction? Homogeneous leadership limits perspective, reduces creativity and problem-solving, and undermines efforts to design for broad user bases, especially in global markets. The language may change, the politics will shift, but inclusive leadership remains a strategic imperative, not a social luxury. Retreating from that is a short-term play with long-term costs.
[00:13:38] Thanks for listening. Today is National Spinach Day, and it's Make Up Your Own Holiday Day. I declare baseball opening day a holiday tomorrow. NerdioCon will be held in Palm Springs, California, from April 7th through 9th. Visit NerdioCon.com to learn all about it. The Business of Tech is written and produced by me, Dave Sobel, under ethics guidelines posted at businessof.tech.
[00:14:03] If you've enjoyed the show, make sure you've subscribed or followed on your favorite platform. It's free and helps directly. Give us a review, too. If you want to support the show, visit patreon.com slash MSP radio, and you'll get access to content early. Or buy our Why Do We Care merch at businessof.tech. Have a question you want answered?
[00:14:26] We take listener questions, send them in, ideally as a voice memo or video to question at MSP radio.com. I answer listener questions live on our Wednesday live show on YouTube and LinkedIn. If you've got a comment or a thought on a story, put it in the comments if you're on YouTube, or reach out on LinkedIn if you're listening to the podcast. And if you want to advertise on the show, visit MSP radio.com slash engage. Once again, thanks for listening. And I will talk to you again on our next episode.
[00:14:59] Part of the MSP Radio network.

