Mitigating Risks of AI in Organizations: Insights from Jon Gillham

In this bonus episode of the Business of Tech, Jon Gillham, founder of Originality.ai, discusses the detection of AI use and its impact on content creation. He shares insights on identifying whether content is human-created or AI-generated, highlighting the importance for companies of understanding the origins of their content. Gillham's inspiration for focusing on the intersection of AI and its physical impact on people stems from his experience running a content marketing company.

 

💼 All Our Sponsors

Support the vendors who support the show:

👉 https://businessof.tech/sponsors/

 

🚀 Join Business of Tech Plus

Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.

👉 https://businessof.tech/plus

 

🎧 Subscribe to the Business of Tech

Want the show on your favorite podcast app or prefer the written versions of each story?

📲 https://www.businessof.tech/subscribe

 

📰 Story Links & Sources

Looking for the links from today’s stories?

Every episode script — with full source links — is posted at:

🌐 https://www.businessof.tech

 

🎙 Want to Be a Guest?

Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:

💬 https://www.podmatch.com/hostdetailpreview/businessoftech

 

🔗 Follow Business of Tech

 

LinkedIn: https://www.linkedin.com/company/28908079

YouTube: https://youtube.com/mspradio

Bluesky: https://bsky.app/profile/businessof.tech

Instagram: https://www.instagram.com/mspradio

TikTok: https://www.tiktok.com/@businessoftech

Facebook: https://www.facebook.com/mspradionews


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

[00:00:00] As we think about policies for AI, how do we build them?

[00:00:07] What are the parameters?

[00:00:08] Well, maybe somebody who's been thinking about detection of AI has some insights for us.

[00:00:14] Jon Gillham, who is the founder of Originality.ai, a service that does detection of AI use,

[00:00:21] joins me on this bonus episode of The Business of Tech.

[00:00:27] Want to reach an audience of thousands of MSPs and IT service providers?

[00:00:31] Put your ad right here on The Business of Tech, and be on the show that 64% of MSPs report

[00:00:37] having listened to.

[00:00:39] A recurring top-50 tech news podcast, there are affordable options for you to reach our

[00:00:45] audience, and we can support any budget.

[00:00:48] Podcast listeners are more engaged, have a higher level of brand retention and are

[00:00:53] more willing to listen to ads here than through any other avenue.

[00:00:58] Want to know more?

[00:00:59] There's information at MSPRadio.com/engage, including a button to book a time to

[00:01:06] talk.

[00:01:07] I look forward to that discussion.

[00:01:11] Jon, thanks for joining me today.

[00:01:13] Yeah, thanks for having me, Dave. Excited to be here and chat about sort of an interesting

[00:01:17] aspect of the world of AI.

[00:01:21] You brought up a really interesting question when we were talking beforehand and I wanted

[00:01:25] to bring it to my listeners.

[00:01:27] What inspired you to focus on the intersection of AI and its potential physical impact on

[00:01:33] people?

[00:01:35] So what Originality.ai does is it's an AI content detector that identifies if content

[00:01:42] has been human-created or AI-created.

[00:01:48] It's important for a lot of companies to truly understand whether their content was AI-created,

[00:01:54] because there's a whole bunch of risks that come with that, or whether it was human-created.

[00:01:59] What inspired us was we had run our own content marketing company, we had then eventually

[00:02:05] sold that content marketing company, and we had seen this sort of intersection of

[00:02:10] where are humans in the loop, where are AIs in the loop, and that was what had

[00:02:15] inspired us to try and build our own detector that was able to understand when humans were

[00:02:21] creating content or AIs were creating content.

[00:02:25] And how do you link this to physical harm?

[00:02:27] Like what's the physical harm angle for how an AI can actually hurt people?

[00:02:32] There was a situation where AIs had written books and those books had been

[00:02:40] published on Amazon, and then we were involved in some research with the Guardian.

[00:02:45] The books that were published on Amazon had been

[00:02:49] recommending that if foragers, so people that were picking mushrooms, were unsure if

[00:02:55] a mushroom was safe, they should just taste a little bit to check.

[00:02:59] And so sort of that ability of AI to hallucinate, and then to not end up with

[00:03:05] any human in the loop, is a really interesting risk trajectory that a lot of

[00:03:11] companies are exposed to if they aren't keeping their policy tight around the use of AI.

[00:03:19] So what are you recommending then as the approach?

[00:03:22] Well, let's start with the companies that are thinking about using it.

[00:03:25] Like, how do they implement it correctly to make sure that they're avoiding these kinds

[00:03:30] of harms for their own customers?

[00:03:33] Yeah. So every company is going to have different risk profiles.

[00:03:36] Some need to be concerned about the impact of mass-producing AI content and

[00:03:40] then getting slapped by Google.

[00:03:43] That's one of the risks they need to manage.

[00:03:45] Others are concerned about reputational harm, such as publishing hallucinated news

[00:03:51] stories like a lot of companies have been caught doing.

[00:03:54] And for some there's legal harm, where if they're working for a law firm,

[00:03:59] they need to make certain that lawyers are not using generative AI to

[00:04:03] just shortcut the work.

[00:04:05] And so it's a nuanced approach, as I think your audience is probably

[00:04:10] familiar with.

[00:04:11] You know, the answer to most things is, well, what's the right solution for me?

[00:04:15] Well, it depends.

[00:04:16] That's pretty commonly the answer, and the right answer.

[00:04:20] And I think it's the same situation here, where companies need to have an

[00:04:25] understanding of where content is being produced in their organization,

[00:04:29] and then understand the risk associated with it: if it's marketing content,

[00:04:33] if it's legal content, if it's contractual content. Wherever writers

[00:04:37] are typing in words, if there's an incentive for them to use AI and that

[00:04:44] produces a risk for the company, they need to understand it.

[00:04:48] And then our tool is there to help them truly understand how much AI has

[00:04:54] potentially polluted parts of their work that they wish it wasn't involved in.

[00:05:01] Now talk to me a little bit about detection, because from everything

[00:05:04] that I've read, there have been a number of attempts. You know, OpenAI had an

[00:05:07] attempt to do this, and oftentimes they find that they're not very good

[00:05:12] in terms of detecting it.

[00:05:13] Talk to me a little bit about the approach and how you make sure that

[00:05:17] you can actually detect AI generated versus human.

[00:05:21] Yeah, no, great question.

[00:05:22] And there's a ton of, I mean, as with any new technology, there's a ton

[00:05:25] of misunderstanding about the capabilities of it.

[00:05:28] Just like when ChatGPT first came out, people thought it was sort

[00:05:31] of a super-intelligent device people were chatting with, and that it would

[00:05:36] be capable of things far beyond what it sort of has turned out to be capable of.

[00:05:42] So most detectors are AIs themselves.

[00:05:45] So a good way to think about it is that it's like the good Terminator,

[00:05:48] sort of movie two and on, where it's an AI that's capable of detecting

[00:05:53] other AI.

[00:05:54] And so we train it with a ton of human content, train it with a ton

[00:05:57] of AI content and then it learns to tell the difference between the two.
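For readers who want to see the shape of that "train on both, learn the difference" idea, here is a minimal sketch. It is not Originality.ai's model; real detectors train far larger models on far larger corpora, and the sample texts and labels below are invented stand-ins.

```python
# Minimal sketch of a human-vs-AI text classifier. Not Originality.ai's
# architecture; the tiny sample texts are invented for illustration.
# Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = [
    "honestly i rewrote this intro three times and it's still clunky",
    "Quick note before lunch: the deploy broke again, fix attached.",
]
ai_texts = [
    "In today's fast-paced digital landscape, organizations must leverage synergies.",
    "Furthermore, it is important to note that content plays a crucial role.",
]

# Label human samples 0 and AI samples 1, then fit a simple classifier.
texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# predict_proba returns [P(human), P(AI)] for each document.
print(detector.predict_proba(["It is crucial to leverage digital synergies."]))
```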

[00:06:04] In the case of some detectors getting the reputation of not working:

[00:06:10] they're classifiers, so

[00:06:14] they don't work in the same way that the weatherman doesn't work.

[00:06:16] There's sort of a probability of

[00:06:22] it being AI, and there's a probability of it being human. On most of

[00:06:26] the open data sets that we can test against, we're 99% accurate on AI

[00:06:30] content, and then we have about a two-and-a-half percent false positive rate.

[00:06:35] OpenAI, because they were viewed as the authority, because it was their

[00:06:40] content that was being checked, they were viewed with a much harsher

[00:06:43] lens than I think a lot of other companies.

[00:06:46] And they had built a classifier, knowing that, that was so

[00:06:50] tuned to not produce false positives that it was

[00:06:55] pretty useless at detecting AI content. But it was also still not perfect,

[00:06:58] and it still would produce some false positives, just a really low number.

[00:07:01] And so you're always balancing false positives with accuracy with these

[00:07:04] classifiers, and they're never perfect.

[00:07:07] And so that's been where sort of this, I think,

[00:07:11] misunderstanding that AI detectors don't work comes from.

[00:07:14] It's like, they do, depending on your use case and the accuracy that you require.
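The false-positive-versus-accuracy balance Gillham describes can be made concrete with a toy threshold sweep. All scores below are hypothetical classifier outputs, not figures from any real detector:

```python
# Toy illustration of the tradeoff between catching AI content and
# avoiding false positives. All scores are invented.

human_scores = [0.02, 0.05, 0.10, 0.35, 0.55, 0.08, 0.12, 0.03]  # human-written docs
ai_scores = [0.97, 0.91, 0.88, 0.62, 0.99, 0.95, 0.45, 0.93]     # AI-generated docs

def rates(threshold: float) -> tuple[float, float]:
    """Return (share of AI docs flagged, share of human docs wrongly flagged)."""
    detected = sum(s >= threshold for s in ai_scores) / len(ai_scores)
    false_pos = sum(s >= threshold for s in human_scores) / len(human_scores)
    return detected, false_pos

for t in (0.3, 0.5, 0.7, 0.9):
    detected, false_pos = rates(t)
    print(f"threshold={t:.1f}  AI detected={detected:.0%}  false positives={false_pos:.0%}")

# A very high threshold (like OpenAI's retired classifier) drives false
# positives toward zero but misses more AI content; a lower threshold
# catches more AI at the cost of occasionally flagging human writing.
```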

[00:07:19] OK, help me understand a couple of the use cases.

[00:07:21] So I'd put forward that, like, spell check is an AI, some very basic one.

[00:07:28] I would assume spell check doesn't fire off the detector.

[00:07:32] Let's start with that baseline, right?

[00:07:34] Spell check, no.

[00:07:36] OK, now let's talk about a tool like Grammarly, right?

[00:07:38] So there's a tool like Grammarly that then goes through and does

[00:07:41] grammar correction.

[00:07:42] Like how is that interpreted by an AI detector?

[00:07:45] Yeah.

[00:07:46] So, we have different models.

[00:07:48] We have a model for where companies are saying no AI, period,

[00:07:52] the risk is too high:

[00:07:53] we have our 3.0 Turbo model, and Grammarly will trigger

[00:07:59] that model to detect it.

[00:08:02] Human-created, AI-edited will get detected frequently by

[00:08:09] our most sensitive model for AI.

[00:08:12] Or we have a 2.0 Standard model, where I think most people will

[00:08:16] say, well, that's OK.

[00:08:18] That human-created content with light AI editing with a tool like Grammarly,

[00:08:24] and them putting their author's name to it,

[00:08:27] like, that's human-created.

[00:08:28] Most people want that to skate through, and that's where sort of

[00:08:30] our 2.0 Standard model is most effective.

[00:08:34] OK, that helps me understand a little bit.

[00:08:36] So now I'm trying to make the line kind of a

[00:08:39] little bit tangible in my head.

[00:08:41] So it makes sense for me, like, something like your Grammarly.

[00:08:43] Well, but what if I'm working with a chatbot and we're kind of

[00:08:47] collaborating, right?

[00:08:48] I'm talking with it and it creates some stuff and I start

[00:08:51] pasting in words and I'm building my own paragraphs,

[00:08:54] and then I write a line to transition.

[00:08:57] I tweak this word. Like, now it's a hybrid, right?

[00:08:59] It's kind of a mix.

[00:09:02] How do you detect that?

[00:09:03] And how does it classify it?

[00:09:05] Yeah.

[00:09:06] So that will end up getting classified as AI.

[00:09:09] And there's potentially nothing wrong with that.

[00:09:11] Like for example, we have writers that are on our AI research

[00:09:15] team and they create content for our website.

[00:09:18] We know that they use AI, and we just let the world

[00:09:21] know: here's the author behind it,

[00:09:23] and AI was used in the creation of this document.

[00:09:27] And they use AI.

[00:09:28] It's a great use case of AI.

[00:09:29] And so in that case, it correctly will identify

[00:09:32] that as AI-generated.

[00:09:37] Because the way we think about it is: we

[00:09:40] definitely want to call something AI if an input comes into a

[00:09:44] black box, that black box is the AI, and the output of that

[00:09:47] black box is no longer recognizable as that input.

[00:09:50] Then it has been modified by AI enough that there's no

[00:09:55] other mechanism to identify the original source.

[00:09:58] Because if that wasn't the case, then a paraphrasing tool

[00:10:02] would be able to turn any content into unique content.

[00:10:06] And so that's why we classify what you just

[00:10:09] described as AI.

[00:10:11] It doesn't mean it gets thrown out.

[00:10:13] It just means that it should be classified as AI.

[00:10:15] And then our view is transparently communicating when

[00:10:20] it's been a hybrid, the same way as, if

[00:10:22] you had a co-author, you would credit your co-author.

[00:10:27] So I think that leads me then to my next

[00:10:30] question, which is, how do you tell people to best use AI

[00:10:35] detectors to be effective?

[00:10:37] Yeah.

[00:10:38] So most of our use case is within the world of marketing.

[00:10:42] So where we have people that have writers on staff

[00:10:46] that are creating content, they're happy to pay those

[00:10:48] writers $100, $1,000 for an article, whatever it might

[00:10:51] be. They're not super happy when they find out that

[00:10:54] that content was copied and pasted out of

[00:10:56] ChatGPT in five seconds.

[00:10:57] That doesn't feel like a fair value exchange.

[00:11:00] And so the way we recommend our tech be used

[00:11:03] is with the copy editor.

[00:11:06] So whoever's receiving the content from writers,

[00:11:09] use the tool as a spot check and build up a look-

[00:11:12] back history with each writer to understand,

[00:11:15] okay, this person has always had, you know,

[00:11:18] 0% AI detected.

[00:11:19] They had one article that was at 50%, sort of a flip

[00:11:23] of a coin.

[00:11:23] The classifier was unclear, and then it was back to 0%.

[00:11:27] So, well, you know, probably that's a situation

[00:11:29] where the classifier or detector got it wrong.

[00:11:32] Let that go.

[00:11:33] There are other situations where you would see a

[00:11:35] trend: a writer that used to get 0% is now getting

[00:11:38] 90%, 100%, 100%.

[00:11:41] Pretty good sign that that writer is now using AI, and

[00:11:43] if that's not within your policies, then you should

[00:11:46] not be using that content.
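Here is a sketch of what that per-writer spot-check history could look like in code. The scores would come from whatever detection API the editor wires in; the function name, the 0-100 score format, and the sample numbers below are all invented for illustration:

```python
# Sketch of the per-writer spot-check history described above: flag
# sustained upward trends in detector scores, not one-off spikes.
from statistics import mean

def review(history: dict[str, list[float]], writer: str, score: float,
           window: int = 3, threshold: float = 80.0) -> str:
    """Record this article's detector score and flag sustained trends."""
    scores = history.setdefault(writer, [])
    scores.append(score)
    recent = scores[-window:]
    if len(recent) == window and mean(recent) >= threshold:
        return f"review {writer}: sustained AI scores {recent}"
    return f"ok {writer}: score {score:.0f}% (an isolated spike may be a false positive)"

history: dict[str, list[float]] = {}
# A writer with a single 50% result stays in good standing...
for s in [0.0, 0.0, 50.0, 0.0]:
    print(review(history, "writer_a", s))
# ...while a sustained jump to 90-100% gets flagged for a conversation.
for s in [0.0, 90.0, 100.0, 100.0]:
    print(review(history, "writer_b", s))
```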

[00:11:49] Gotcha.

[00:11:50] So how are you then advising organizations to put

[00:11:53] their policies together?

[00:11:54] What are the questions that you think are most

[00:11:56] important to answer?

[00:11:58] Yeah, I think this comes back to that first

[00:11:59] question that you asked, around understanding where within

[00:12:01] the organization the risk exists of AI

[00:12:06] having input into content where you might not want

[00:12:09] it.

[00:12:09] And so if you have a part of your organization

[00:12:12] where absolutely no AI usage is allowed,

[00:12:15] some law firms are taking that approach, some

[00:12:17] medical organizations are taking that approach,

[00:12:20] where we want no chance that AI is being used.

[00:12:23] We want whoever's writing this to have 100%

[00:12:26] responsibility for the words that have been

[00:12:29] used. And in those situations, the policy

[00:12:32] will be different than, say, the marketing

[00:12:34] team, where there's a little bit of sort of let

[00:12:36] time build up, let some data build up, versus

[00:12:39] a sort of stronger, more binary

[00:12:42] approach potentially within law firms.

[00:12:44] So I think, again, it comes back to sort

[00:12:47] of a nuanced approach, but making it clear to

[00:12:50] the editors who are then in charge of

[00:12:54] policing whatever policy gets enforced, and

[00:12:57] creating a policy that accurately reflects the

[00:13:00] risks in that portion of the organization.

[00:13:04] Gotcha. Now, you've spent a lot of time thinking

[00:13:05] about this. What measures or regulations do you

[00:13:08] think need to be in place to mitigate those

[00:13:10] risks and communicate them? Like, what's on the company?

[00:13:13] What's regulatory? Give me a little bit of a sense

[00:13:15] of where your head is at on this.

[00:13:17] Yeah, I think regulatory, so if we look

[00:13:22] at sort of the legal regulatory framework

[00:13:24] that's going to come out related to sort

[00:13:26] of transparent use of AI, I think that's

[00:13:29] going to really focus on the mediums that will

[00:13:32] do the most societal harm. And so that is

[00:13:35] photos and video, where the sort of societal

[00:13:39] harm that comes from a misunderstood

[00:13:42] video, or a totally fabricated video

[00:13:44] that people believe to be true, can be significant.

[00:13:47] And so I think that's where the regulatory

[00:13:49] focus is going to end up being. I think

[00:13:51] it's going to come down a lot to companies'

[00:13:53] focus, and private enterprises' focus, around

[00:13:56] how they want to police AI

[00:14:00] use in the organization. And in the end,

[00:14:03] I think what's going to happen is that

[00:14:06] the importance of putting your name

[00:14:09] against a document is going to increase

[00:14:11] within organizations. And that's going to

[00:14:13] continue to mean something, and it can

[00:14:15] mean something more going forward, where, you

[00:14:18] know, documents won't sort of live on

[00:14:21] with it being unclear who created them. I

[00:14:24] think that sort of demand for

[00:14:26] document control is going to

[00:14:29] become more significant to smaller

[00:14:31] organizations, and the importance

[00:14:34] of the author will increase over

[00:14:37] time. Now, you've probably spent a lot

[00:14:41] of time working with organizations to

[00:14:44] help them implement this. Like, what are

[00:14:45] the kind of requirements, from a data

[00:14:48] management and preparation perspective,

[00:14:51] that need to be in place before applying these technologies

[00:14:53] for an organization

[00:14:55] to be effective? So the organizations

[00:14:57] that have been the most effective at

[00:14:58] implementing this have been really clear

[00:15:01] up front about what part of the

[00:15:03] organization they're wanting to apply it

[00:15:04] to. And so when they

[00:15:07] try and sort of apply

[00:15:08] it across the entire organization and

[00:15:10] try and feed everything in, it just

[00:15:12] doesn't work, because it's unclear,

[00:15:14] like, well, we learned something, but

[00:15:17] who's behind this document? And so

[00:15:19] what we've seen, and I think this is pretty

[00:15:21] consistent across most change

[00:15:23] management practices, is getting very

[00:15:25] clear on a small part of the

[00:15:26] organization that is the most at risk,

[00:15:28] but also then potentially the most

[00:15:30] well documented and being able to

[00:15:32] apply detection within that part of

[00:15:35] the organization. So if it's a law

[00:15:37] firm, applying it to

[00:15:41] briefs, that's sort of a controlled,

[00:15:44] understood, already well-

[00:15:47] documented location that this can

[00:15:49] slip into, where

[00:15:52] basically they can apply an API into

[00:15:54] their process and not institute a

[00:15:56] whole new process. That has been the

[00:15:58] most effective. So copy editors,

[00:16:01] like publishing houses, where they

[00:16:03] have a clear milestone of document

[00:16:06] comes in, editor does a task,

[00:16:08] document goes out,

[00:16:11] applying detection via an

[00:16:13] API at a known, existing

[00:16:16] point within a small part of the

[00:16:17] organization has been the most

[00:16:19] effective.

[00:16:21] Jon Gillham is the founder of

[00:16:23] Originality.ai, known for his

[00:16:25] insightful exploration of how

[00:16:27] AI can pose physical and data

[00:16:29] risks to individuals. With a keen

[00:16:31] focus on the intersection of

[00:16:32] technology and human safety,

[00:16:34] Jon brings a unique perspective

[00:16:36] to the discussion on AI. He joins

[00:16:38] us from up north in Ontario.

[00:16:40] Jon, thanks for joining me.

[00:16:42] If people are interested in

[00:16:43] learning more, how can they get in

[00:16:44] touch?

[00:16:45] Yeah, you can reach me at

[00:16:47] jon@originality.ai, that's my

[00:16:49] email. I'm happy to talk to anyone

[00:16:51] there, and then you can find me on

[00:16:52] LinkedIn or X.

[00:16:55] Jon, thanks for joining me today.

[00:16:57] All right, thanks, Dave.

[00:17:01] The Business of Tech is written

[00:17:02] and produced by me, Dave Sobel,

[00:17:04] under ethics guidelines posted

[00:17:07] at businessof.tech.

[00:17:09] If you like the content, please

[00:17:10] make sure to hit that like button

[00:17:12] and follow or subscribe.

[00:17:14] It's free and easy and the best

[00:17:16] way to support the show and help

[00:17:18] us grow.

[00:17:19] You can also check out our

[00:17:20] Patreon, where you can join the

[00:17:22] Business of Tech community, at

[00:17:23] patreon.com/mspradio,

[00:17:26] or buy

[00:17:28] our "Why Do We Care?" merch

[00:17:29] at businessof.tech.

[00:17:31] Finally, if you're interested in

[00:17:33] advertising on this show, visit

[00:17:35] MSPRadio.com/engage.

[00:17:37] Once again, thanks

[00:17:40] for listening, and I will

[00:17:41] talk to you again on our next

[00:17:43] episode of The Business of Tech.

[00:17:48] Part of the MSP Radio network.