Thu Apr-11-2024: Navigating the Copyright Minefield,AI Safety Research,AI Use Cases

Today's episode covers the challenges of navigating the copyright minefield and the impact of generative AI on content creation. Legislation targeting AI copyright compliance and fines for lack of creator disclosures is discussed. Kaseya introduces a generative AI assistant to empower sales teams, reducing documentation time. Additionally, AI safety research is examined, revealing that only 2% of AI research is dedicated to the topic despite an increase in global AI safety research papers.

 

Three things to know today

 

00:00 Navigating the Copyright Minefield: Generative AI's Challenge to Content Creation

04:05 Legislation Targets AI Copyright Compliance, Introduces Fines for Lack of Creator Disclosures

08:07 Kaseya Empowers Sales Teams with Generative AI Assistant, Slashing Documentation Time

 

 

Supported by:

https://atakama.com/mspradio/

https://huntress.com/mspradio/

 

 

💼 All Our Sponsors

Support the vendors who support the show:

👉 https://businessof.tech/sponsors/

 

🚀 Join Business of Tech Plus

Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.

👉 https://businessof.tech/plus

 

🎧 Subscribe to the Business of Tech

Want the show on your favorite podcast app or prefer the written versions of each story?

📲 https://www.businessof.tech/subscribe

 

📰 Story Links & Sources

Looking for the links from today’s stories?

Every episode script — with full source links — is posted at:

🌐 https://www.businessof.tech

 

🎙 Want to Be a Guest?

Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:

💬 https://www.podmatch.com/hostdetailpreview/businessoftech

 

🔗 Follow Business of Tech

 

LinkedIn: https://www.linkedin.com/company/28908079

YouTube: https://youtube.com/mspradio

Bluesky: https://bsky.app/profile/businessof.tech

Instagram: https://www.instagram.com/mspradio

TikTok: https://www.tiktok.com/@businessoftech

Facebook: https://www.facebook.com/mspradionews


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

[00:00:00] It's Thursday, April 11, 2024 and I'm Dave Sobel, three things to note today.

[00:00:07] Navigating the copyright minefield, Generative AI's challenge to content creation, legislation

[00:00:14] targeting AI copyright compliance and introducing fines for lack of creator disclosures, and

[00:00:20] Kaseya empowers sales teams with Generative AI assistant, slashing documentation time.

[00:00:26] This is The Business of Tech.

[00:00:30] In today's digital work environment, the browser is the most widely used operating system, yet

[00:00:35] remains a significant security blind spot.

[00:00:38] Atakama has introduced a browser security platform purpose-built for MSPs.

[00:00:44] Built in collaboration with over 40 MSPs, this solution focuses on managing and protecting

[00:00:49] today's window of work, the browser.

[00:00:52] Atakama's browser extension empowers MSPs to set data policies, gain critical insights

[00:00:58] and optimize the user experience.

[00:01:00] Atakama recently announced the limited-time 50 for 50 program, discounting the per-user

[00:01:05] price by 50% when you deploy the plugin to 50% or more of your client endpoints.

[00:01:12] Visit atakama.com slash mspradio to learn more and see the platform in action.

[00:01:18] That's A-T-A-K-A-M-A dot com slash MSP radio to learn more.

[00:01:24] When I take time off, I get the luxury of reviewing all the stories to bring you the most interesting

[00:01:30] ones so we don't miss anything.

[00:01:32] There are some good ones, so let's start with the AI News.

[00:01:37] Despite the hype around AI safety, a new study reveals that only 2% of AI research

[00:01:42] is dedicated to the topic.

[00:01:44] American institutions and companies contribute the most to AI safety research, but it

[00:01:48] still pales compared to the overall volume of AI studies.

[00:01:53] While the number of AI safety research papers has increased globally, there's debate about

[00:01:57] the definition of AI safety and its focus on existential risks.

[00:02:02] The study suggests that more funding and resources should be allocated to AI safety research to keep

[00:02:08] up with industry advancement.

[00:02:11] Anthropic's Claude 3 Opus has surpassed OpenAI's GPT-4 on Chatbot Arena, marking a significant

[00:02:18] moment in the history of AI language models.

[00:02:21] This achievement is notable as GPT-4 has consistently held the top position on the leaderboard.

[00:02:28] The victory of Claude 3 Opus highlights the importance of diversity among top vendors in the AI space.

[00:02:33] The Chatbot Arena, run by the Large Model Systems Organization, plays a crucial role in measuring

[00:02:38] the performance of AI Chatbots.

[00:02:41] OpenAI is expected to release a new successor to GPT-4 Turbo later this year.

[00:02:47] I also want to highlight coverage in Nextgov, which looks at copyright concerns around AI.

[00:02:53] Generative AI tools have the potential to create content that may infringe on copyright protections.

[00:02:59] The challenge lies in how individuals and companies can be liable for copyright violations when

[00:03:04] using generative AI tools.

[00:03:06] Selective prompting strategies can result in unintentional copyright infringement,

[00:03:11] and there's a need to establish guardrails to prevent such violations.

[00:03:15] Methods for detecting copyright infringement, establishing content provenance, and ensuring

[00:03:20] model training reduces similarity to copyrighted material have been suggested.

[00:03:25] The article notes that policymakers and regulation may play a role in providing best practices

[00:03:30] for copyright safety, such as filtering or restricting model outputs and regulating

[00:03:35] dataset construction and model training.

[00:03:38] Why do we care?

[00:03:40] The value providers offer in the future will lie in their expertise in guiding customers here.

[00:03:45] That will include understanding which models and products perform well for which scenarios,

[00:03:50] and the risks and concerns that product use will result in.

[00:03:54] That's why we care.

[00:03:56] In order to deliver those services, we'll need research and data so we should care

[00:04:00] about how much funding is provided there.

[00:04:02] And we'll get to laws in a moment.

[00:04:07] So let's review some legislation and start with things that have passed.

[00:04:11] Tennessee has passed a first-in-the-nation bill, the Ensuring Likeness Voice and Image Security

[00:04:16] (ELVIS) Act, to protect musicians from artificial intelligence by penalizing the unauthorized copying of a

[00:04:22] performer's voice.

[00:04:24] The use of AI technology in mimicking public figures led to discussion around regulations,

[00:04:29] and this new law aims to safeguard the music industry in Tennessee, which generates

[00:04:33] billions of dollars for the state and supports thousands of jobs and venues.

[00:04:38] In policy but not law, the White House has introduced extensive AI guidelines for the

[00:04:43] federal government requiring agencies to stop using AI systems that infringe on Americans'

[00:04:49] rights or safety.

[00:04:52] Policy also emphasizes transparency and internal oversight of AI use, including

[00:04:56] appointing chief AI officers and establishing AI governance boards.

[00:05:01] The guidelines aim to ensure responsible AI adoption while harnessing its potential to

[00:05:06] improve public services and address societal challenges.

[00:05:10] The headliner in Bills That Have Been Introduced is a bipartisan U.S. federal data protection

[00:05:16] law called the American Privacy Rights Act, introduced by Congresswoman Cathy McMorris Rodgers

[00:05:22] and Senator Maria Cantwell.

[00:05:24] The law aims to give U.S. citizens greater control over their personal data, strengthen

[00:05:29] cybersecurity standards, and prohibit discrimination based on personal information.

[00:05:34] It also seeks to eliminate the patchwork of state privacy laws and establish a national

[00:05:39] data privacy and security standard.

[00:05:42] A bill introduced in the U.S. House of Representatives would require those training AI models to

[00:05:48] disclose any copyrighted works used and it would apply retroactively.

[00:05:53] The bill aims to increase transparency and protect creators' rights in the age

[00:05:57] of AI.

[00:05:58] The bill also stipulates that failure to comply with the disclosure requirement could result in

[00:06:01] a penalty of at least $5,000.

[00:06:04] California has introduced a Right to Disconnect bill requiring employers to specify work

[00:06:09] hours and prohibiting them from requiring employees to respond to work-related communications off

[00:06:14] the clock.

[00:06:15] Assemblyman Matt Haney introduced the proposal to update protections for the modern era.

[00:06:21] In political moves, Republicans who voted against federal bills for broadband funding

[00:06:26] are now taking credit for the local broadband projects funded by those bills.

[00:06:31] Despite opposing the bills, politicians like Senator Tommy Tuberville, Governor Ron DeSantis,

[00:06:37] Governor Greg Gianforte, Senator John Cornyn, and Senator Shelley Moore Capito are promoting

[00:06:43] and celebrating the benefits of the broadband subsidies.

[00:06:47] This is despite the elimination of the Affordable Connectivity Program which

[00:06:51] announced reduced support amounts for May as the program winds down.

[00:06:55] Finally, a law is taking effect.

[00:06:57] All but the smallest ISPs are now required to publish broadband nutrition labels on their

[00:07:03] plans, providing consumers with information on costs, fees, speeds, and more.

[00:07:08] The labels aim to make it easier for consumers to compare plans and avoid hidden fees.

[00:07:13] Major broadband providers have fought against this requirement while some critics argue

[00:07:17] it doesn't address the issue of regional broadband monopolies.

[00:07:21] Verizon, Google Fiber, and T-Mobile have already released labels ahead of the deadline.

[00:07:27] Why do we care?

[00:07:29] Tennessee moves to protect its industry.

[00:07:31] As a producer of music, they distinctly care about this risk.

[00:07:35] The headline here is the possibility of a U.S. federal level privacy law.

[00:07:40] This is bipartisan, so that's a positive.

[00:07:43] Congress has limited time to legislate, so we'll see what happens here.

[00:07:47] I'm going to observe that companies should have a policy about requirements for work hours

[00:07:52] and expectations for after-hours communications, particularly in the IT space.

[00:07:58] Does it need a law?

[00:08:00] Regardless, it's a wise investment for savvy owners looking for higher employee retention.

[00:08:08] Let's also cover some AI use cases.

[00:08:12] Texas is implementing an automated scoring engine powered by artificial intelligence

[00:08:17] to grade open-ended questions on state exams, aiming to replace a majority of human graders.

[00:08:24] The Texas Education Agency expects to save $15 to $20 million annually

[00:08:30] by reducing the need for temporary human scorers.

[00:08:33] While some educators are concerned about the system, TEA assures safety nets are in place,

[00:08:39] including human re-scoring and processes for addressing AI confusion.

[00:08:43] The new scoring system was trained using 3000 exam responses that had already received human grading.

[00:08:51] The FDA has approved an AI-driven test developed by Prenosis to predict the risk of sepsis,

[00:08:58] making it the first algorithmic AI-driven diagnostic tool for sepsis to receive FDA

[00:09:04] authorization.

[00:09:05] The test uses an algorithm trained on over 100,000 blood samples and clinical data

[00:09:11] to classify a patient's risk of sepsis into four categories.

[00:09:15] Other companies such as Epic Systems have also developed AI-driven diagnostics for sepsis,

[00:09:21] but questions remain about the accuracy of their algorithms.

[00:09:25] A group of graduate students from UC Berkeley has developed an AI-powered tool called

[00:09:30] PolyWatch to detect congressional insider trading.

[00:09:34] The tool analyzes publicly available stock filings, hearing schedules,

[00:09:38] committee assignments and sponsored travel to identify suspicious activities.

[00:09:43] PolyWatch aims to increase transparency and hold public officials accountable for their stock trades.

[00:09:48] Although still in its early stages, the tool has the potential to fundamentally change how

[00:09:53] journalists, researchers, and the general public track and monitor stock trades by members of

[00:09:59] Congress. New York City Mayor Eric Adams is defending the city's AI Chatbot,

[00:10:05] MyCity, despite reports that it has been giving incorrect advice that would lead to breaking

[00:10:10] the law. The chatbot, launched as a pilot program, aims to provide business owners with

[00:10:15] accurate information but has been found to be wrong several times.

[00:10:19] The city acknowledges the issues and plans to fix them in collaboration with Microsoft,

[00:10:23] the provider of the AI service. Kaseya has partnered with Winn.AI, a sales tech startup, to

[00:10:30] enhance sales efficiency through generative AI. Kaseya has licensed Winn.AI's real-time AI

[00:10:36] assistant for over a thousand sales professionals, resulting in an 80% reduction in call documentation

[00:10:42] time. Winn.AI's AI assistant enables hands-free note-taking during customer conversations,

[00:10:49] improving interactions and reducing administrative workloads.

[00:10:54] Why do we care? Sure, IT providers and MSPs are probably dying to hate on Kaseya,

[00:11:00] but it's the use case you care about, the obvious one of helping with documentation.

[00:11:05] Summarization and documentation are strong AI use cases. More revolutionary will be data

[00:11:11] analytics solutions in medicine or politics. And Mayor Adams is a warning about not doing

[00:11:17] your diligence before release. With as many breaches and security concerns as I report in this show,

[00:11:25] it should be obvious that cybersecurity is not just about technology, but also the human expertise

[00:11:31] needed to interpret and respond to complex threats. Huntress is focused on elevating

[00:11:37] SMBs and MSPs around the world. Huntress has a suite of fully-managed cybersecurity solutions

[00:11:44] powered by a 24x7 human-led SOC dedicated to continuous monitoring, expert investigation,

[00:11:50] and rapid response. And the proof is the execution. Huntress is the number one rated EDR for SMBs on

[00:11:59] G2. Want to know more about the platform? Visit huntress.com slash mspradio to learn more.

[00:12:07] Thanks for listening! Today is National Pet Day. We hugged dogs yesterday, so today all the pets get

[00:12:15] some attention. And if you're not into that, it's also National Cheese Fondue Day and National

[00:12:20] Eight Track Tape Day. Have a question you want answered? We take those. Ideally,

[00:12:25] as voicemail or video, to question at mspradio.com. I answer listener questions live

[00:12:31] each week on our Wednesday live show on YouTube and LinkedIn. And if you got a thought or

[00:12:35] comment, put it in comments if you're on YouTube or reach out on LinkedIn if you're listening to the

[00:12:39] podcast. Talk to you again tomorrow! The business of tech is written and produced by me, Dave

[00:12:52] Sobel, under ethics guidelines posted at businessof.tech. If you like the content,

[00:12:52] please make sure to hit that like button, follow, or subscribe. It's free and easy

[00:12:58] and the best way to support the show and help us grow. You can also check out our Patreon

[00:13:03] where you can join the business of tech community at patreon.com slash mspradio or buy our Why Do We

[00:13:11] Care merch at businessof.tech. Finally, if you're interested in advertising on this show,

[00:13:17] visit mspradio.com slash engage. Once again, thanks for listening to me and I will talk to you

[00:13:24] again on our next episode of the Business of Tech, part of the MSP Radio network.