DeepSeek Dilemma: Cost Savings vs. Security Risks, SuperOps' $25 Million Boost, OpenAI's Irony
Business of Tech: Daily 10-Minute IT Services Insights
January 30, 2025
Episode 1538 | 00:14:29 | 13.41 MB


The episode highlights the fallout of DeepSeek, an open-source large language model that has gained traction for its cost-effectiveness and performance. Microsoft has integrated DeepSeek's R1 model into its Azure AI Foundry and GitHub, while businesses like ZoomInfo report substantial cost reductions by switching from OpenAI's models to R1. However, the rise of DeepSeek raises concerns about data privacy, bias, and security, particularly given its reported vulnerabilities and the influence of the Chinese government.

Sobel also covers the U.S. Copyright Office's recent clarification regarding AI-generated content, stating that such content can receive copyright protection if a human significantly contributes to or modifies it. This announcement aims to address the legal gray areas surrounding AI and copyright, providing clearer guidelines for IT service providers and content creators. The episode emphasizes the importance of human creativity in copyright law while acknowledging that the subjective nature of "significant human involvement" could still lead to legal disputes.

In addition to copyright issues, the episode discusses SuperOps' successful $25 million Series C funding round, which positions the company to enter the direct IT market with an AI-powered endpoint management tool. This move reflects a broader trend in the IT service market toward automation-first solutions, as companies like SuperOps and Syncro seek to enhance operational efficiency. Sobel advises IT leaders to remain cautious about efficiency claims and to evaluate real-world performance before adopting new technologies.

Finally, the episode delves into the ethical implications of data usage in AI development, particularly in light of OpenAI's allegations against DeepSeek for potentially misusing its API. Sobel critiques the hypocrisy of OpenAI's outrage over data theft, given its own history of data collection practices. The discussion raises fundamental questions about the future of AI development, the legal frameworks governing it, and the potential for commoditized AI models to benefit both service providers and customers. As the landscape evolves, Sobel encourages listeners to stay informed and engaged with these critical issues.

 

Four things to know today

 

00:00 Cheap AI Comes at a Price: DeepSeek’s Rise Sparks Security and Bias Concerns

04:31 AI Can Be Copyrighted—As Long As a Human Puts in the Work, Says U.S. Copyright Office

07:32 SuperOps Lands $25M—And a Spot in the Direct IT Market With AI in Tow

09:01 AI’s Ultimate Irony: OpenAI Furious Over Data Theft… After Doing the Same?

 

Supported by: https://www.huntress.com/mspradio/

 

💼 All Our Sponsors

Support the vendors who support the show:

👉 https://businessof.tech/sponsors/

 

🚀 Join Business of Tech Plus

Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.

👉 https://businessof.tech/plus

 

🎧 Subscribe to the Business of Tech

Want the show on your favorite podcast app or prefer the written versions of each story?

📲 https://www.businessof.tech/subscribe

 

📰 Story Links & Sources

Looking for the links from today’s stories?

Every episode script — with full source links — is posted at:

🌐 https://www.businessof.tech

 

🎙 Want to Be a Guest?

Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:

💬 https://www.podmatch.com/hostdetailpreview/businessoftech

 

🔗 Follow Business of Tech

 

LinkedIn: https://www.linkedin.com/company/28908079

YouTube: https://youtube.com/mspradio

Bluesky: https://bsky.app/profile/businessof.tech

Instagram: https://www.instagram.com/mspradio

TikTok: https://www.tiktok.com/@businessoftech

Facebook: https://www.facebook.com/mspradionews


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

[00:00:02] It's Thursday, January 30th, 2025, and I'm Dave Sobel with four things to know today. Cheap AI comes at a price: DeepSeek's rise sparks security and bias concerns. AI can be copyrighted as long as a human puts in the work, says the U.S. Copyright Office. SuperOps lands $25 million and a spot in the direct IT market. And, AI's ultimate irony: OpenAI is furious over data theft after doing the same?

[00:00:32] This is the Business of Tech. The DeepSeek fallout continues to impact the AI market. Microsoft has integrated DeepSeek's R1 model into its Azure AI Foundry and into GitHub following its open-source release. A smaller, distilled version will soon be available for local use on Copilot Plus PCs.

[00:00:52] Meanwhile, Apple researchers have looked into the techniques behind DeepSeek, an open-source large language model that outperforms competitors while significantly lowering costs. By leveraging sparsity, activating only essential neural network parameters, DeepSeek optimizes its performance. Apple's study indicates that increasing sparsity reduces training loss, improving accuracy while cutting computational demands.

[00:01:20] And demand for the lower-cost model is surging. The Information reports that businesses rapidly adopting R1 have seen substantial cost reductions. ZoomInfo, for example, projects a two-thirds drop in AI expenses by switching from OpenAI's reasoning model to R1, which it plans to integrate into a sales-lead feature. Other adopters include Perplexity and Together AI, responding to growing customer interest.

[00:01:46] While DeepSeek's consumer app raises data privacy concerns, businesses can deploy the model via their own cloud infrastructure. European AI firms see DeepSeek's emergence as a game-changer, challenging cost assumptions and U.S. dominance in AI. The model's low cost and strong performance offer hope to European tech companies, which often struggle with limited funding compared to their U.S. counterparts. However, risks do remain.

[00:02:15] Experts warn of potential bias due to Chinese government influence. A Promptfoo report finds DeepSeek to be three times more biased and four times more likely to generate insecure code than competitors. The model avoids answering 85% of prompts on sensitive China-related topics and is vulnerable to jailbreaking, making its censorship mechanisms both rigid and ineffective.

[00:02:41] Prompt Security's CEO, Itamar Golan, reports Fortune 500 companies are seeking ways to safeguard proprietary data from DeepSeek. And regulatory scrutiny is mounting. Italy has blocked the app, with its Data Protection Authority investigating DeepSeek's data collection and storage practices. Meanwhile, the U.S. Navy has warned personnel not to use the AI for any work-related or personal tasks, citing security concerns. Why do we care?

[00:03:09] IT services organizations should remain focused on two elements of this story. First, the disruption to the cost model, which is good for customers and those building products and solutions. As AI moves quickly to commodity, that's good for customers. Second, worry about data management concerns. Where does the data go? How is it used? What biases are introduced?

Companies considering DeepSeek should implement rigorous security reviews and bias testing before adoption, especially if operating in highly regulated industries. With as many breaches and security concerns as I report on this show, it should be obvious that cybersecurity is not just about technology, but also the human expertise needed to interpret and respond to complex threats.

[00:04:01] Huntress is focused on elevating SMBs and MSPs around the world. Huntress has a suite of fully-managed cybersecurity solutions powered by a 24x7 human-led SOC dedicated to continuous monitoring, expert investigation, and rapid response. And the proof is the execution. Huntress is the number one rated EDR for SMBs on G2. Want to know more about the platform?

[00:04:27] Visit Huntress.com slash MSP Radio to learn more. The U.S. Copyright Office announced that certain forms of artificial intelligence-generated content can receive copyright protection, provided that a human significantly contributes to or modifies the content. This clarification stems from a new report titled Copyright and Artificial Intelligence, Part 2: Copyrightability, which emphasizes that human creativity remains central to copyright law.

[00:04:57] The report outlines three key scenarios where AI-generated material can qualify for copyright: when human-authored content is included, when a human modifies the AI output, and when the human contribution is expressive and creative. Following a review of over 10,000 public comments, the Copyright Office found no immediate need for new legislation, asserting that existing copyright laws are adequate.

[00:05:23] The Register of Copyrights highlighted the ongoing importance of human creativity in the copyright system. While I'm on regulation, an artificial intelligence advisory panel established during President Donald Trump's first administration has presented him with 10 priority recommendations as he begins his second term. The National Artificial Intelligence Advisory Committee aims to enhance the U.S.'s leadership in AI through actionable policies.

[00:05:50] The recommendations include launching a national AI literacy campaign to raise public awareness and comfort with AI technology and fostering collaboration across federal agencies. The U.S. Secretary of State, Marco Rubio, announced a freeze on nearly all foreign aid, including funding aimed at helping allies defend against cyber attacks. This move is part of a comprehensive government review to ensure that foreign assistance aligns with the America First agenda.

[00:06:17] The pause affects all foreign aid administered by the State Department and the U.S. Agency for International Development. The Bureau of Cyberspace and Digital Policy, established under the previous administration, has also seen its funding halted. The State Department has not disclosed which specific programs are affected or how this freeze will impact national security. Why do we care?

[00:06:39] The U.S. Copyright Office's stance on AI-generated content clarifies a major legal gray area that directly affects IT service providers, content creators, and AI developers. While the decision reinforces the primacy of human creativity in copyright law, it also provides a framework for businesses leveraging AI in content production, giving companies using AI-generated content a clearer path and reducing legal uncertainty.

[00:07:07] While the Copyright Office's position provides some clarity, it doesn't eliminate legal risks. The requirement for significant human involvement is still subjective, leaving room for litigation. Companies using AI-generated content should document human contributions carefully to strengthen copyright claims. This process is an area of interest for providers. I'll revisit copyright a bit later, too.

[00:07:34] SuperOps has successfully raised $25 million in Series C funding, led by March Capital, with participation from existing investors Addition and Z47, bringing its total funding to $54.4 million. The company is now entering the direct IT market with an innovative AI-powered endpoint management tool aimed at enhancing the productivity of IT teams. SuperOps has seen significant growth, tripling its customer base and expanding its reach to 104 countries over the past year.

[00:08:04] The new tool is designed to help IT teams tackle challenges such as remote work and cybersecurity threats. The company claims its AI guide, Monika, has been shown to improve operational efficiency by up to 30%. Why do we care? I simply want to highlight the direct IT market shift, which comes recently after Syncro's similar announcement. This is nothing new, as major players like Kaseya, ConnectWise, and N-able have long done this, too. Disclosure: I'm a shareholder in N-able.

[00:08:34] SuperOps' expansion into direct IT management with AI-powered tools is a sign that the IT service market is shifting toward automation-first solutions. However, IT leaders should remain skeptical of efficiency claims and carefully evaluate real-world performance before making adoption decisions. For MSPs, this is yet another indication that AI-driven automation is reshaping IT operations. Ignoring these trends could leave slower adopters at a competitive disadvantage.

[00:09:00] I'm only doing one big idea this week. Microsoft is investigating whether DeepSeek improperly utilized OpenAI's Application Programming Interface, or API, following claims that DeepSeek used OpenAI's models to enhance its own. Reports indicate that DeepSeek, the company known for its reasoning model, may have extracted significant amounts of data through OpenAI's APIs in late 2024.

[00:09:26] Microsoft, a major stakeholder in OpenAI, alerted the company about these suspicious activities. Notably, OpenAI's terms of service explicitly prohibit using their output to train competing models. OpenAI expressed outrage over these claims. OpenAI's CEO, Sam Altman, previously criticized DeepSeek and emphasized the challenges of innovating in AI.

[00:09:49] Meanwhile, OpenAI is facing a lawsuit from the New York Times over its use of articles in training; OpenAI argues that such use falls under fair use protections. And Bloomberg's Matt Levine highlights the ongoing copyright debate around AI and its impact on content creators. Writers expressed concerns that companies like OpenAI use their published work to train large language models, potentially undermining their livelihoods.

[00:10:14] His article emphasizes that if someone's writings are republished without credit, it's a significant issue. Conversely, AI companies argue that their models learn from existing text rather than directly copy it. The discussion raises fundamental questions about the nature of creativity and its influence in the digital age. David Sacks, a prominent figure in the AI industry, has stated that there is substantial evidence of knowledge distillation,

[00:10:39] where one AI model uses outputs from another for training, indicating a potential breach of ethical guidelines in AI development. Why do we care? OpenAI really doesn't like that others stole from them when they stole from others. A potential breach in ethical guidelines? Didn't this start from that? OpenAI hoovered up the entire internet and even built products to grab every bit of material they could find and are unapologetic for that work,

[00:11:08] yet outraged when someone does it to them. This is the ultimate case of do as I say, not as I do. The irony is very hard to ignore. AI companies can't have it both ways. Either mass data collection is fair game for all, or new, stricter ethical guidelines need to be applied universally. OpenAI's legal and moral high ground here is shaky at best.

[00:11:33] Now, for IT service providers, this matters because it reinforces the fragile and inconsistent legal foundation on which AI development currently stands. If OpenAI succeeds in enforcing restrictions on model training data, it could set a precedent that limits how future AI models are developed. On the other hand, if OpenAI loses the argument, whether in court or in the marketplace, this could further normalize AI models training on competitors' outputs, accelerating an arms race in AI development.

[00:12:02] Frankly, losing the argument is in customers' and providers' best interest. Commoditized models are good for the services business. Are you ready to get your brand in front of the tech leaders shaping the future of managed services? Here at the Business of Tech, we offer flexible sponsorship opportunities to meet your needs, whether it's live show sponsorship, podcast advertising, event promotion, or custom webinars.

[00:12:33] From affordable exposure options to exclusive sponsorships, our offerings are designed to fit businesses and vendors of all sizes looking to make an impact. Prices start at just $500 per month, making our packages a fraction of typical event sponsorship costs. Be a part of the conversation that matters to IT service providers worldwide. Join us at MSP Radio and amplify your message where it counts.

[00:13:02] Visit MSPRadio.com slash Engage today to explore all the ways we can help you grow. Thanks for listening. Today is National Croissant Day, National Draw a Dinosaur Day, and National Inane Answering Message Day. The Business of Tech is written and produced by me, Dave Sobel, under ethics guidelines posted at businessof.tech.

[00:13:29] If you've enjoyed the show, make sure you've subscribed or followed on your favorite platform. It's free and helps directly. Give us a review, too. If you want to support the show, visit patreon.com slash MSP Radio, and you'll get access to content early. Or buy our Why Do We Care merch at businessof.tech. Have a question you want answered?

[00:13:53] We take listener questions, send them in, ideally as a voice memo or video, to question at MSPRadio.com. I answer listener questions live on our Wednesday live show on YouTube and LinkedIn. If you've got a comment or a thought on a story, put it in the comments if you're on YouTube, or reach out on LinkedIn if you're listening to the podcast. And if you want to advertise on the show, visit MSPRadio.com slash Engage. Once again, thanks for listening, and I will talk to you again on our next episode.

[00:14:25] Part of the MSP Radio Network.