How many buzzwords do we have in the MSP world? MSP, MSSP, Web 2.0, AI? At any rate, we now have AI as a buzzword to deal with. Kidding aside, AI is a lot more than a buzzword. Join me as I discuss AI and the risks of AI with Jim Harryman of Kinetic Technology Group.
[00:00:00] Welcome to MSP 1337. I'm your host, Chris Johnson. This is a show dedicated to cybersecurity challenges and solutions,
[00:00:15] a journey we take together, not alone.
[00:00:17] Welcome everybody to another episode of MSP 1337. This week I'm joined by Jim Harryman
[00:00:29] of Kinetic Technology Group. Jim, welcome to the show.
[00:00:32] Good to be with you, Chris.
[00:00:34] So it's that time to talk about things that we aren't necessarily all that familiar with, but that are
[00:00:40] making their way into our everyday existence. Let's talk about the risks of AI, and maybe not the traditional model of what you see in the media right now,
[00:00:52] like fill-in-the-blank product that's the next ChatGPT, but more from a professional side, a company or organizational side,
[00:01:03] more the work environment versus the consumer side. I can come up with a lot of reasons that consumers will or won't use ChatGPT, and I don't necessarily have a whole lot of concern about that, because unless you're feeding it very sensitive information, whatever.
[00:01:22] But in the context of this, I want to talk about the risks of AI and the impacts it might have on your organization, like what are the things that we should be wary of. And for those listening,
[00:01:35] in an upcoming Secure Outcomes town hall we have slated to talk about the risks of AI and things that you might use.
[00:01:42] And so maybe today let's look at it through the lens of:
[00:01:45] AI is attached to vendors, right? It's not something that I've built internally to my organization, probably.
[00:01:51] So thinking about the risks of AI is probably an extension of evaluating a vendor. Maybe I've stretched too far, but Jim, talk me through it a little bit: if you were evaluating a vendor tomorrow, what might that look like?
[00:02:09] And then I'll add some things that I might want to consider if I knew that they were also using AI as part of their product offering.
[00:02:18] First off, I think that evaluating a vendor is an ever-evolving process, right? For us it has gone from just, do you meet these one, two, three, four criteria, to now it's a much longer list, morphing into probably a lot more that's going to be done.
[00:02:38] So there's probably a questionnaire coming this year from us to our vendors. It's just an ever-evolving process for us, and as it relates to AI,
[00:02:54] yes, absolutely you have to take all those things into consideration, and the risks are just mind-blowing, just thinking about all the risks with AI and data getting uploaded.
[00:03:14] It comes back to what we talk about regularly, you and I, in the groups we're involved with: where did my data just go? It's a question of data management, right? Data management.
[00:03:27] So I think it comes back to simple CIS controls on data management and categorization and all those things that go into place. I was reading something the other day, I guess it was a Microsoft-related study, about
[00:03:52] whether the administrative rights that are given are actually used within the cloud structures, right? And I don't even remember the stat, but an amazing number of organizations don't have any type of data management really in place,
[00:04:17] and that just freaks me out, you know. But we look at it as opportunity in our industry, right? At least I do. Also on that, Jim, just think about the conversation we had, literally, I think it was in the last ten days, on one of the last-Monday-of-the-month calls.
[00:04:38] The conversation brought up privileged access controls and fill-in-the-blank cloud vendor. Someone was struggling with, "I have six pages of permissions to navigate that I can modify and make changes to, but the reality is it doesn't actually create a net-new privileged access user type, it just modifies the permissions for that individual user." So the question was, does that count? How do I
[00:05:08] articulate that with the vendor that I'm referring to? I've got 25 employees or two employees or however many it is, and they all have 25 varying degrees of permissions, and unless I go in and look at each user I have no idea what they are. And so we were all kind of in agreement: I think I would just say that's a vendor we classify as privileged access, for anybody that's assigned access to that cloud service.
[00:05:36] And I think, kind of to your point, isn't that the fundamental challenge in this? The unknown: okay, if the vendor uses AI, is that disclosed? If the vendor uses AI to deliver data to a third party, is that being disclosed?
[00:05:54] And now you're going down this rabbit hole of, at this point can I use any vendor? Because I haven't clarified for each one of them where my data might or might not go.
[00:06:03] What you're doing.
[00:06:06] It's just, I mean, a particular radio host that I used to listen to would talk about getting the duct tape and wrapping it around your head so it doesn't explode.
[00:06:19] Like putting your own Faraday cage on your own brain.
[00:06:22] Yeah, there are just so many things, man, so many things. You think about, I mean, what is it,
[00:06:32] the Copilot stuff with Microsoft, right? That is a good place to start in how this could impact us as
[00:06:43] managed service providers. It could impact our clients: somebody goes in and turns something on, and we're like, well, crap.
[00:06:52] I mean, just the things that we don't know yet, you know what I mean? What is the audit trail of all this stuff? How is that impacting our ability to find things
[00:07:06] that may have gone wrong? The mistakes that can be made with the data being generated: is additional sensitive information being generated,
[00:07:21] and is it categorized at that moment? It's just, oh my gosh, mind-blowing.
[00:07:31] It's funny you say that. I was thinking about some of the recent changes we've seen, like with ChatGPT. In our work at CompTIA we have an internal version of it, and now when you type stuff in,
[00:07:44] it kicks back where it got the information from to generate what it's providing you. When we first started using it, it was just like, boom, here you go, and I was like, wow, ChatGPT is really smart, it created all this content for me
[00:07:56] out of thin air, based on what information it was fed, and we obviously know that it's not that.
[00:08:02] Right, right, but along those same lines, I think about when we leverage vendors for any stretch of the imagination:
[00:08:12] what does it do with our data? Forget the AI for a minute. I don't know that we ever really, until we started having these conversations around
[00:08:20] security safeguards, talking through the things that we should expect from our vendors, or the things that our clients should expect from us,
[00:08:28] spent very much time talking through this. And forget the whole "I can come up with 12 or 15 or 20 data classification types"; let's just talk about the three easy ones: public, sensitive,
[00:08:41] and then sensitive-private, or something along those lines, where you don't have to get elaborate.
[00:08:45] You can probably pretty easily classify data when you have three or four categories to put it into.
[00:08:53] But do we do that with our vendors? Do we really ask those questions? And I'm not saying this like, oh, vendors bad, we should ask better questions, but more because of AI.
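The three-bucket scheme Chris sketches (public, sensitive, sensitive-private) can be pinned down in a few lines. A minimal sketch follows; the example data types and the idea of flagging anything above public for vendor scrutiny are illustrative assumptions, not anything stated in the episode:

```python
from enum import IntEnum

class DataClass(IntEnum):
    """The three easy buckets from the conversation, lowest to highest risk."""
    PUBLIC = 1
    SENSITIVE = 2
    SENSITIVE_PRIVATE = 3

# Hypothetical examples of mapping data types into the buckets.
CLASSIFICATION = {
    "marketing_brochure": DataClass.PUBLIC,
    "client_contact_list": DataClass.SENSITIVE,
    "client_credentials": DataClass.SENSITIVE_PRIVATE,
}

def needs_vendor_scrutiny(data_type: str) -> bool:
    """Anything above PUBLIC should trigger the 'where does my data go?' questions."""
    return CLASSIFICATION[data_type] > DataClass.PUBLIC
```

Three or four coarse buckets like these are usually enough to decide which vendor conversations actually need the data-handling questions.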
[00:09:05] We started, with the awareness around the Trustmark and CIS and other frameworks, running our organizations based on a set of standards.
[00:09:14] Are we asking the right questions? I think AI amplified that, because suddenly it's, what are the risks or the challenges that we need to be aware of when we're evaluating a vendor that might be using something that's, quite honestly, unknown?
[00:09:32] Yeah, man.
[00:09:36] Maybe we should have a different topic.
[00:09:38] Yeah.
[00:09:41] I just realized I kind of went in a circle. What are the ramifications? Forget security for a minute. I think it was the New York Times
[00:09:52] that was the first to file suit against ChatGPT for not having permission to scrape the data that's behind, you know, a gated community. And that begs the question: wait a second,
[00:10:03] if your data was secured properly, ChatGPT wouldn't have had access to just go behind the gate. It's like the picture where we put up a security gate and the sidewalk goes on either side of the gate.
[00:10:19] Oh my goodness, dude, it is all just so overwhelming, and we can sit here and
[00:10:29] chase our tails all over this topic, because it is just so vast, with so many unknowns and, golly, just so many rabbit trails to chase potentially.
[00:10:44] So we could focus on the things we do know and use them to apply to this, right? Let's go back to how we kind of started, because we really didn't answer this question.
[00:10:53] We should all have a vendor evaluation sheet, right? Like, I am looking at doing business with fill-in-the-blank vendor because they're going to solve my backup problem, or whatever it might be.
[00:11:03] Start there. Jim, what does the process look like for you? I think the process itself doesn't necessarily change as much as the questions that might be included in that evaluation.
[00:11:17] Yeah, like I said, our list of things has morphed over the last several years, and when it comes to our service provider evaluations, going back to the Microsoft Copilot thing, right,
[00:11:43] a company my size is not going to be sending Microsoft a vendor evaluation for them to fill out and basically respond to me.
[00:11:58] Okay, okay, so let's tone it down a little bit. A vendor evaluation
[00:12:04] is how your organization evaluates its vendors; it doesn't necessarily mean you're sending the vendor a questionnaire. Right. To be fair, because Microsoft is the giant, we're not going to try and get them on the phone when they don't answer support phone calls. But when you are looking at a vendor, you ask questions like:
[00:12:27] do they send data to a third party?
[00:12:31] You might ask yourself: what APIs do they allow? Is it an open API, a REST API? Does it involve encapsulating all of the tunnel traffic? Does it allow me to distinguish between what is or isn't being sent? What does support look like for technical issues? What kind of escalation path do we have? More of your generic, general questions.
[00:12:56] All of those things have become the standard for what we use as an evaluation when we get into the AI discussion.
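The question list Jim and Chris rattle off (third-party data sharing, API type, traffic encapsulation, outbound-data visibility, support escalation, and now AI use) could be kept as a simple internal record. This is a sketch under assumptions: the field names are illustrative, not an actual Kinetic evaluation form:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class VendorEvaluation:
    """One record per vendor; None means unknown or undisclosed, i.e. a question to chase."""
    name: str
    sends_data_to_third_parties: Optional[bool] = None
    api_type: Optional[str] = None            # e.g. "open", "REST"
    encapsulates_tunnel_traffic: Optional[bool] = None
    can_scope_outbound_data: Optional[bool] = None
    has_escalation_path: Optional[bool] = None
    uses_ai: Optional[bool] = None

    def open_questions(self) -> list[str]:
        """Fields still unanswered before the vendor is approved."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]
```

Treating "unknown" as its own answer is the point: with AI in the mix, an undisclosed `uses_ai` is itself a finding.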
[00:13:08] The unknowns for me right now: basically, we are not, as an organization, going to be using this except as
[00:13:19] an experimental practice. Probably more from a consumption standpoint than participating in generating content for something else. Exactly. So that is us. I know for a fact that I have vendors, particularly on the marketing side of things, that are using
[00:13:42] the language models to build out certain things for my marketing processes. More power to them if it lowers my price and solves the problem. Precisely, you hope it does. So, getting back to the larger issue for me,
[00:14:04] when I was reading this survey the other day, going back to the Microsoft Copilot deal, it said everything is segregated, tenanted, it's only going to affect you, so you shouldn't have to worry about things getting
[00:14:27] interwoven with other things. When it says "shouldn't," that bothers me, right? "Shouldn't" is not a definitive "it's not going to do that." That's where it all comes down to for me. With our staff it's been like,
[00:14:53] look, we're going to test some things, we're going to do some things sparingly. If we encounter a client that is wanting to do this kind of stuff, let's sit down and have a conversation with them before we start enabling these things, because there are going
[00:15:11] to be implications of that. Back to the vendor thing, I think it has to be added into the scope of what we're doing, yeah,
[00:15:20] on the evaluation side. Because if it's not... look, lots of our vendors have been using some AI for years, for sure, to do lots of different things, right? It's just that now it's everywhere.
[00:15:37] I'll give you an example of an AI that I've used for years. It's called Meeter, and all it does is look at my calendars, and it highlights and identifies which is the right link to click on, so that I don't have to go into the calendar item to click on the link,
[00:15:56] then have it open a browser tab, then have it ask me if I want to launch the native application. It skips all of that. So you can call that whatever level of sophisticated AI you want, but it is doing that dynamically; it is reading what it sees.
[00:16:11] And lo and behold, I found out that I believe it's part of, I want to say, Bardeen, which is kind of affiliated with the Google Bard project, but it literally is using an AI algorithm to look at what's there to determine, that's Teams, that's Google Meet, etc.
[00:16:31] Right, and that's me consuming something that's really pretty closed. It's running on my desk. It's not like it's learning over time by pulling stuff from the internet; it literally just looks at my calendar and says, if you click here and the app is open, it will launch it.
[00:16:48] The things that concern me aren't even really all that crazy when it comes to the ChatGPT cycles. It's more along the lines of: do I know what this app actually has the capability of doing, and where it's collecting data from to help me make a decision? Like, garbage in, garbage out. So I'll give an example of another one that's super simple. I don't remember the name of it; I can just tell you what it does. I upload, say, this podcast.
[00:17:16] It takes the podcast and, based on having listened to the episode, can extrapolate from that and give me recommendations of what it thinks would make sense as LinkedIn statements to promote this episode.
[00:17:32] Well, that's actually way better than probably what I could come up with on my own, because that would take me a long time: to go back, listen to the episode, and go, okay, in ten words or less, create a statement that I can post, 120 characters or whatever it is, and actually have
[00:17:51] it be put in front of someone who is making a decision as to whether or not they should listen to this episode. I think that is a very powerful use of AI. It's self-contained, right? I don't care what it's doing with the audio, really; there's nothing sensitive in there, and it's giving me something meaningful back.
[00:18:08] Now, if we were talking about how I'm building out an application that may involve, you know, the aeronautics in a fighter jet, and I'm struggling with solving a problem, I probably don't want to just go and use an open ChatGPT and ask it, hey, I'm really struggling with this section of code, can you point me in the right direction? That's probably not
[00:18:33] a good use of the resource. It might give you the answer, but your code is now also out there on the internet. Exactly. I think ultimately, for me, it all goes back to working a program on security and how we're going to do this. If we have a good set of standards in place in our organization, we can try to stay ahead of these things,
[00:19:03] and it doesn't mean the controls won't change, that things won't be edited as technology evolves, which it always does.
[00:19:14] But I do think that if we have a program in place when it comes to this, it's like, look, we need to make users and people aware of what the impact and the risks of using AI can be, regardless of what it is, right? Being able to monitor the
[00:19:33] activity of the use of that tool within your organization, right? Being able to manage and categorize and classify the data that you have, so that something sensitive doesn't get leaked out, or new sensitive data isn't created and put in the wrong location.
[00:19:57] And then, interestingly enough, back to another conversation that we've had in the past: browser security, and things of that nature. That is not always an easy task to implement and manage.
[00:20:13] With AI being built into it now, how many things do we use today that have AI natively built in? And I'm not quite sure there are many that are like, oh, if you would like to, go into privacy and settings,
[00:20:26] you can toggle the little thing off to say, don't use AI. I don't know that that's realistic anymore. I think about the cost it must be to maintain search engines, especially search engines that are really filtering, where you can pay to be ranked higher, all these things that they're
[00:20:42] going to use AI to help determine, so that they don't have to pay somebody to do it. Remember back in the day when you typed in your search criteria and you were pretty certain there was somebody on the other end going, oh, we should give them these results?
[00:20:56] Yeah.
[00:20:57] No kidding, it's like a concierge for a search engine.
[00:21:01] So, in the time that we have left, I'd like to do this: I would like to talk through how one might approach this, or the things that you might need to put in place. I'll throw one out there that I think is super easy. In your acceptable use,
[00:21:15] I don't want to say policy, but in the acceptable use section of your employee handbook, there should definitely be some language around what is appropriate in using
[00:21:26] GPTs or LLMs, large language models. And you need to do this in such a way that it's not like you can't use them at all, because the reality is then you just set yourself up for failure, because we just articulated, well, it's built into your browser.
[00:21:43] And you're also going to have the scenario of, well, what happens if they didn't know there was an LLM involved in what they are using? How do you address that? So that would be the first one that I would throw out there: have some language that acknowledges
[00:21:58] you recognize that it exists.
[00:22:00] You want people to take advantage of tools and resources that help them be more effective at their jobs, but at the same time it's not a free-for-all; you're not just out there like, whichever one you want to use is fine.
[00:22:11] Yeah, I think that goes back to the education aspect of it, making sure, encouraging them not to just rely on the output.
[00:22:23] And, like you said, acceptable use: here's how we expect you to utilize this, and so on and so forth, but at the same time being cautious about what is being reported back to you when it's generating content, especially if there's sensitive data involved, right?
[00:22:45] Which kind of goes to the second one: you might actually need a policy specifically tied to risk around AI.
[00:22:56] Certainly, yeah, or at least how you evaluate the products and services that you as an organization are using and consuming, because obviously we understand we're not going to get rid of all of them. What does that look like?
[00:23:09] Right, and in all candor, we have not established that yet in our own organization. We've discussed it; it's on our docket to put into place, but it has all really just been, hey, don't do this, right? We give certain people access to do some testing, but
[00:23:32] it's been very limited up to this point. Though I know that by and large there's probably other activity going on, at least we know it doesn't have anything to do with the data that we're managing at the moment, so that's the good news.
[00:23:52] So I have this: we have a risk application evaluation form. It's not just for AI; it's more for when you evaluate vendors that might be using AI, and obviously the vendor could be AI-only, but that's a whole other conversation.
[00:24:09] But they wrote this in, and I thought it is worth reading. It says: AI tech adds an additional layer of use and complexity, along with uncertainty around the integration of company data, black-box tech,
[00:24:26] and a suite of development that creates a number of unknowns, which beg the questions this questionnaire is designed to help with. So, questions like: does this AI use a third-party AI tool to help with the AI that they're doing? Or: how is the data controlled and monitored? And I think, at the end of the day, what's interesting about this is:
[00:24:52] forget the AI for a moment. Aren't these questions we should be asking about any vendor or client that we're going to work with? Does the client need data from us that we haven't provided them? Is the client giving us data that we should or shouldn't be storing on their behalf? Does this create liability for us that's
[00:25:13] not necessary? We take on risk every day when we onboard clients, but is it a necessary risk? And so then that got me into what we kind of built out as this profile, or this evaluation template, and it has questions like:
[00:25:26] a risk evaluation where no risk is a zero and severe risk is a five, and then you add in things like risk multipliers: if it's collecting data about staff or clients or intellectual property, your risk multiplier might be a lot higher. And so then that got me going down this rabbit hole: okay,
[00:25:45] have I done a risk evaluation of my organization to determine, across these areas, where my multipliers need to be bigger because the impact is far more significant? So it could be that the risk evaluation has a score of two, but my multiplier is a nine,
[00:26:02] versus the severity is a five but my multiplier is a zero or a one. All of a sudden you see how some of these things can actually balance themselves out. So it just got me thinking, okay, A, we need a policy; B, you probably need Mike Stewart's, you know, Anchor Networks' vendor evaluation form that's 25 pages.
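The scoring Chris describes, a zero (no risk) to five (severe) rating scaled by a context multiplier for things like staff data, client data, or intellectual property, can be sketched in a few lines. The specific multiplier values below are illustrative assumptions, not numbers from an actual form:

```python
def weighted_risk(severity: int, multiplier: int) -> int:
    """Severity 0 (no risk) to 5 (severe), scaled by an impact multiplier."""
    if not 0 <= severity <= 5:
        raise ValueError("severity must be between 0 and 5")
    return severity * multiplier

# The balancing effect from the conversation: a low-severity item touching
# high-impact data can outscore a severe item with negligible impact.
score_a = weighted_risk(2, 9)  # severity 2, multiplier 9 -> 18
score_b = weighted_risk(5, 1)  # severity 5, multiplier 1 -> 5
```

This is why the multipliers have to come from your own organizational risk evaluation first: the same vendor finding scores very differently depending on whose data it touches.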
[00:26:25] But along those lines, what are your thoughts, Jim? I feel like, to go further, we've talked about email security and web browser security, which I think go hand in hand with this, because now we've suddenly introduced the fact that, wait a second,
[00:26:42] what are Firefox or Edge or Chrome doing, and what AI are they using? Do I need to reevaluate what we allow in our organization from a browser standpoint as a result of this conversation?
[00:26:53] Yeah, as far as other tactics or areas that we could focus on besides those, I think you start getting into probably a higher level of data loss prevention, right? DLP
[00:27:12] technologies exist, but not to a great degree in the smaller business market that we typically traverse. That can get pretty extensive and expensive when you start going that route. Now, granted, both Microsoft and Google have
[00:27:40] some capabilities within the infrastructure they have to limit that, but that only goes so far, and so you get into further endpoint-protection-type things, software to do that. So I think that is another area where we're going to be seeing more,
[00:28:07] and hopefully see more accessibility in the small business market. That's what I'm hopeful for, because I think it's pretty limited at this point.
[00:28:17] Yeah, and at the end of the day, it's funny, we're focusing on risk, which I think I somewhat tried to cram like a square peg into a round hole. You know, we both are considered highly, highly trained and highly knowledgeable on everything that is AI, which is obviously why we decided to have this conversation today.
[00:28:36] But the thing that comes to mind for me is going back to the fundamentals, or the basics, that we've been doing through CIS, through the Trustmark, fill in the blank. That is: have you done a risk assessment?
[00:28:47] Have you done a third-party vendor management risk profile? Have you done data classification? I think about the six or seven things that we've all kind of come to the conclusion you have to be doing.
[00:29:02] And I think the acceptable use policy just continues to get a little bit bigger and bigger every time we turn around. Remember what Mike said about, "do I need to have a B.Y.O.D. policy, or do I need to have a statement in my acceptable use policy about B.Y.O.D.?"
[00:29:17] And that's probably it, because you're not going to get rid of it; you just can't make it go away.
[00:29:23] But I think you can really narrowly define a B.Y.O.D. scope, so that it kind of goes hand in hand with the AI stuff. Like, hey, maybe I don't care if you're accessing
[00:29:34] email, using this as a loose example. Maybe on your personal phone you can access your work email through the app that you can download.
[00:29:44] But maybe I have it set up so you can't really do anything with it other than reply to email; you can't print, you can't do attachments, whatever.
[00:29:50] That might be the extent, right? So we've allowed B.Y.O.D., but we haven't allowed B.Y.O.D. to just be anything, and I think that's far easier to define than to say, no personal devices, period.
[00:30:03] And then they go, well, how am I supposed to run my token generator that I need to use for authenticating? And you can continue the dominoes on this forever and always, and come up with scenarios over and over again that would put you in a position of,
[00:30:21] come on, don't you understand the B.Y.O.D. policy is no B.Y.O.D.? If I tell you which B.Y.O.D. is allowed, that's a much easier position than assuming that if it's not in here, I shouldn't do it, right?
[00:30:37] I think, yeah, go ahead. Well, I was just going to say, in our organization we have a B.Y.O.D. policy, and that is that there is no B.Y.O.D. We issue devices to every staff member: phones, tablets, laptops, everything. Right, but ten years ago you didn't. You're right, ten years ago we didn't.
[00:31:01] And so to that, I think as an organization grows, the ability to deliver on no B.Y.O.D. becomes realistic. But I also know of scenarios that can happen. What happens if your laptop just gets fried for whatever reason? Let's say it was because of a natural disaster, or you dumped water on it,
[00:31:21] and until we get you another computer, we've modified the profile so you're allowed to remote in from this personal device that is getting you to Windows X, right? Even though you can say no to all things B.Y.O.D., I think the flip side is you can always find a reason to justify it. So if you don't have a very clear picture, I always think, if you say, this part is allowed,
[00:31:48] the person who's reading that understands that nothing else is allowed. But if I say no B.Y.O.D. is allowed at all, they're like, yeah, but what about this scenario, what about this scenario? I just think we create our own scenarios that go on and on and on, solely because of that.
[00:32:03] Oh, for sure. Again, it's a rabbit trail. Our stance on it is what it's been, and we try to take into account all those things, but all those different exceptions could come into play at any given point; you cross those bridges when you get there. Right, so I've got one to stump you, then:
[00:32:26] what about browsers? Are those not devices? If I load Firefox on my Mac and it's not one of the browsers that is or isn't in my software inventory list, how do you cross that bridge?
[00:32:42] Well, we evaluate that stuff pretty much monthly in our organization, so we do have restrictions set; we do have an approved list of applications that are usable. If somebody were able to load something that wasn't, we would see it. Granted, it might take a month.
[00:33:07] Okay, so I have the approved browser, but I logged in with my personal profile to sync all my bookmarks. Now what? Now, see, now you're getting technical.
[00:33:19] The reason why I brought that up in the context of B.Y.O.D. is, remember the last time we talked about browsers? Most of the time you don't think of browsers as a device, and you don't think about them in the B.Y.O.D. context. I just wanted to throw that out there at the end of this, with AI and all these other pieces that we're navigating, to say, hey, for those of you listening, this is not easy. The world we're living in is changing. A browser is now considered a device, so take that for what it's worth. I'm not sure how you physically
[00:33:49] touch it and feel it, but it's real. It is its own operating system, for sure. Yeah, it may be living on a shared device, but it has embedded applications inside it.
[00:34:05] Yeah, and I know we've talked about that multiple times, but I don't know that we've ever done it in the context of: my browser is my device, and you can't take it from me. Good luck prying it from my cold... oh wait, it doesn't matter, you can't touch it.
[00:34:21] So this was great. I really appreciate you taking the time. This conversation was really meant to spark the thought process around your plans, or the plans of anybody listening to this, about AI. It's not too late to start.
[00:34:36] Hopefully many of you have something in place that is at least educating. We could have done a whole episode on how you educate your staff on AI and how to use it appropriately, but maybe that's an episode for the future.
[00:34:50] Yeah, when you run this one through that AI tool that you were talking about, see if it makes some suggestions, you know, "Jim Harryman doesn't really know anything about AI," which is why I told you that.
[00:35:03] Right, right. This is very validating: it obviously used source material that's true to give me an output that also validates the source. What could be better? Obviously they've got this all figured out.
[00:35:17] So for those of you listening, this has been an episode of MSP 1337. Thanks, and have a great week.

