The episode delves into the impact of artificial intelligence (AI) on cybersecurity, particularly focusing on the rise of AI-driven phishing attacks. Bryant G. Tow, Chief Security Officer at LeapFrog Services, discusses how cybercriminals are leveraging generative AI to create more convincing phishing schemes, which can lead to identity theft. Despite the advancements in attack methods, Tow emphasizes that the fundamental defenses against these threats remain unchanged. He highlights the importance of understanding the evolving landscape of cyber threats and the necessity for organizations to adapt their security measures accordingly.
Tow elaborates on the concept of an "arms race" in cybersecurity, where defenders must continuously improve their strategies to keep pace with increasingly sophisticated attacks. He points out that while phishing remains a common entry point for cyber threats, the use of AI is transforming these attacks into more personalized and effective schemes. The conversation shifts to the implications of deepfake technology, which can create realistic impersonations of individuals, further complicating the security landscape. Tow warns that the ability to produce convincing deepfake videos and audio can lead to significant risks for organizations.
The discussion also touches on the challenges of insider threats, particularly when employees intentionally disregard security policies. Tow stresses the importance of establishing clear acceptable use policies and implementing a zero-trust framework to mitigate these risks. He notes that most insider threats are accidental, but organizations must be prepared to address malicious actions as well. Effective governance, training, and monitoring are essential components in managing insider threats and ensuring compliance with security protocols.
Finally, the episode highlights the evolving role of government agencies like the Cybersecurity and Infrastructure Security Agency (CISA) in addressing cybersecurity challenges. Tow reflects on recent changes in leadership and the potential for new perspectives on cybersecurity governance. He expresses hope that the shift in focus will lead to more accessible resources and support for organizations navigating the complex landscape of cyber threats. The conversation underscores the need for continuous adaptation and vigilance in the face of emerging technologies and evolving attack methods.
💼 All Our Sponsors
Support the vendors who support the show:
👉 https://businessof.tech/sponsors/
🚀 Join Business of Tech Plus
Get exclusive access to investigative reports, vendor analysis, leadership briefings, and more.
👉 https://businessof.tech/plus
🎧 Subscribe to the Business of Tech
Want the show on your favorite podcast app or prefer the written versions of each story?
📲 https://www.businessof.tech/subscribe
📰 Story Links & Sources
Looking for the links from today’s stories?
Every episode script — with full source links — is posted at:
🎙 Want to Be a Guest?
Pitch your story or appear on Business of Tech: Daily 10-Minute IT Services Insights:
💬 https://www.podmatch.com/hostdetailpreview/businessoftech
🔗 Follow Business of Tech
LinkedIn: https://www.linkedin.com/company/28908079
YouTube: https://youtube.com/mspradio
Bluesky: https://bsky.app/profile/businessof.tech
Instagram: https://www.instagram.com/mspradio
TikTok: https://www.tiktok.com/@businessoftech
Facebook: https://www.facebook.com/mspradionews
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
[00:00:45] Let's focus a bit on security today. What is the real impact of AI on security, and what isn't? What can we learn about insider threats from what's happening with DOGE, and what impacts on CISA are we tracking? Bryant G. Tow joins me today. Welcome to the Business of Tech Lounge, the live version of the Business of Tech Podcast. It's Wednesday, February 19th, 2025, and I'm Dave Sobel. We'll be taking questions and comments throughout the show, so make sure to put them in chat. If you've got a question,
[00:01:15] we will happily respond to it. And I want to thank SalesBuildr, our Patreon sponsor whose support makes this show possible. Focus on your IT sales workflow with the power of automation and visit them at salesbuildr.com. That's B-U-I-L-D-R dot com. Now, a reminder, I am watching chat. We'll take those questions live. Bryant G. Tow is the Chief Security Officer at LeapFrog Services, bringing over 25 years of experience in technology, cyber risk management, and physical risk management.
[00:01:45] In his role, he leads a team that assists clients in developing comprehensive security programs, encompassing strategy, governance, and operations, with a focus on managing risk through LeapFrog's Ring of Security methodology. His leadership extends to various industry organizations, including the Department of Homeland Security Sector Coordinating Council, ISSA, ISACA, and as a board member and vice president of the InfraGard National Members Alliance. Bryant, welcome to the show.
[00:02:14] Thank you, Dave. Wouldn't my mother be proud? Yeah, it's the big achievement, right? To be on live internet broadcasting. Yeah, there we go. There we go. Thanks for having me. Well, I'm excited to talk to you on a couple of different topics. First, I kind of want to baseline a little bit of AI impact is what we're seeing in cybersecurity. And what interested me was an article that felt like just an acceleration, and I want to sort of start from there.
[00:02:37] I noted that a recent phishing campaign targeting Gmail users revealed what they called an alarming use of artificial intelligence by cybercriminals. The FBI warned in May of last year about the rising threat of those attacks, which leverage AI to create convincing communications that can lead to identity theft.
[00:02:57] The AI-driven scams often begin with calls claiming that a user's Gmail account has been compromised, urging the target to provide their recovery account. Victims then receive seemingly legitimate emails from Google that add to the deception. Right? This use of generative AI feels like we're making phishing better. That seems like a really basic bit. Am I right to think about this as acceleration? Is there more going on with AI that we want to think about from a cyber perspective?
[00:03:26] Yeah, there is. There is. And first of all, let me say I love that you send articles so we kind of have a common point where we can start. That is fantastic. But what I do want to throw you a bit of a curveball back and go to the very end. And at the end of that article, I thought it was very well written, by the way.
[00:04:13] But at the end of that article, it gives you the list of things to prevent the attacks. Right? It's like five things. And if you erased the AI off of it, there is zero change. It is the exact same thing. My point to all this is, yes, there is an accelerant. There's an arms race, if you will, between all these things. And we'll get into that.
[00:04:13] Phishing is such the elementary version of the thing, kind of where people will start the conversations, but certainly not where it needs to be ending up. And I've got some ideas perhaps for you on that. But the interesting thing to me about that was even at the end of the article, literally, if you took the AI out of the topic of the things that you need to do to prevent, quote, AI phishing, remove the AI, it's all the exact same defenses.
[00:04:39] So my thought as a result of that is how widespread, how effective is this actually being if the defense mechanisms are not altered in any way?
[00:04:57] Okay. So I'll bite on that one because in particular, where I wanted to go with the accelerant was one of the things that I think generative AI in particular, I'm making it generative versus just broadly AI, is that it's going to make these attacks a lot more consistent and look a lot more legitimate. So for example, one of the things that I find in using generative AI tools is they're quite good at transforming existing elements.
[00:05:22] I could say, make this email into 5,000 different customized versions for this list of users. So rather than having to build this by hand, now you can systemize it and you have a certain degree of quality control that you can make it look consistent. And I think it's what I look at this and say, well, you're right. It's not different. It is opening a bit of a fire hose of accelerant. Markedly better. Absolutely. Yeah.
[00:05:49] Wouldn't there have to be some changes to the defense to handle that much more of a volume of attacks? Yeah. Okay. So more of the same doesn't really affect us, like I was saying, but more of the accelerant, and I like that you use that word, is very, very true. The days of the Nigerian prince, the very, very poor English and those kinds of things, and websites that look a little bit off.
[00:06:17] Those days are gone. Right. So the fundamental shift in the defense, if you will, and this is where I mentioned the arms race, is that we have to be as good as they are at getting better. And there are still a lot of signatures, if you will, and things that are very noticeable in patterns and heuristics and those kinds of things.
[00:06:45] When AI is used to develop those things, they are still fairly noisy, such that most of your, you know, spam email filters, Mimecast and those kinds of things, are able to pick them up fairly easily. And that is going to evolve. They're going to figure out what that is and how to get around those things. So, like I said, it's an arms race. But yeah, the defense techniques don't really change; we just have to be a whole lot better at it.
[00:07:14] So I think you're exactly right. Now, what's missing? The next step beyond all of this is actually how AI is being used in those ways. Right. And I would say probably the biggest things that concern me more than anything, the top two, as I'll say, are these deepfake images.
[00:07:37] There was a new model that was released a couple weeks ago that I'm not going to brag about too much and give it a name. But the deepfake videos, the voice impersonations and the video impersonations, can literally be modeled with less than 10 seconds of footage. You know, we used to see Tom Cruise all the time, or some celebrity, where there's this monumental amount of learning capability around the movies and whatnot.
[00:08:06] But that time has become so reduced. And I'm going to give you a quick example that we found out about. I guess it's been a month or two ago where a CEO was dialing into a board of directors Zoom meeting and was claiming to be on the plane.
[00:08:28] It was a deepfake video of that CEO having problems with their video and voice connection. And that video needed to be only about 10 seconds of him, in the deepfake, going, I can't really hear, and messing with the camera and doing all these things. That's all it is. Well, this isn't working, I'm going to chat. So now he's right back on chat. Everybody in the meeting saw the physical version of him.
[00:08:58] That was a deepfake. We're going to be using this and investing in that, and blah, blah, blah, and I need twenty-five million dollars to be sent to this account. And it went, all because of a deepfake video. Right. So those things, I mean, we see even public officials being impersonated, as an example. It's really, really interesting. So, you know, just seeing or hearing somebody anymore is really not enough.
[00:09:24] Well, I mean, it goes even further than that. We did the test with my own producer; I was able to fool my own producer with AI versions of me already. I mean, it's incredibly cheap and easy to produce. I don't think it is a stretch to say we'll be at the level of, remember the sort of 90s soundboard fake call-in shows where they would play a celebrity? It doesn't feel like it's that much further where, almost in real time, you're typing it in and it translates.
[00:09:53] You don't even need to have the 10-second fake. We'll quickly be at the point where someone's just typing and the AI avatar will do it. So what does that change in terms of the way that security professionals need to be working with organizations to change procedures? How does that have to change? Yeah. So the thing I both love and hate about this entire topic is that there are no blinky lights.
[00:10:21] This is not a cyber problem where you're going to install a tool and it's going to fix this problem for you. Right. You mentioned in the opening the Ring of Security methodology. Right. Which is basically people, processes, technology, facilities. Right. And the whole core of that is looking beyond IT for where the inherent risks are within the business. Right.
[00:10:46] So having said that, again, the fundamental difference in the defense is in validation of the payment request and payment source before it goes. The fundamental defense doesn't change. The capability, and I love your word of accelerant, the persuasion capability of that is now, you know, off the charts.
[00:11:12] But the fundamental change is still having things like passcode lists that each group has, so that when you say, OK, you want me to change this, what's next on our passcode list? A list that is not exchanged digitally; it's exchanged in the mail, or even by handing it to somebody, or something like that. There's a list. And then, you know, OK, ABC, XYZ, one, two, three, four, whatever that next code is.
[00:11:39] That person has that code, and it is not in any kind of digital format. So, I mean, that kind of a thing that you put in, there's no tool for that. Right. That is a process. There is a procedure. There's a governance model. And there are dozens of them that we could talk about and put in as part of the processes. But a deepfake capability is merely an accelerant to the persuasion power to get to that.
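The passcode-list procedure Tow describes, a pre-shared list of one-time codes distributed off any digital channel, can be sketched in a few lines. This is a minimal illustration of the idea, not LeapFrog's actual implementation; the class name and codes here are made up:

```python
# Sketch: out-of-band passcode verification for sensitive change requests.
# Assumes a printed, sequentially consumed list of one-time codes that was
# distributed on paper or in person (never digitally). Illustrative only.

class PasscodeVerifier:
    def __init__(self, codes):
        # codes: the ordered one-time passcodes from the printed list
        self._codes = list(codes)
        self._index = 0  # position of the next unused code

    def verify(self, spoken_code):
        """Check a caller's code against the next unused entry.

        A code is consumed on a successful match, so a replayed code
        (e.g. one captured from a deepfaked call) fails the second time.
        """
        if self._index >= len(self._codes):
            return False  # list exhausted; issue a fresh printed list
        if spoken_code == self._codes[self._index]:
            self._index += 1
            return True
        return False

verifier = PasscodeVerifier(["ABC-XYZ-1234", "QRS-TUV-5678"])
assert verifier.verify("ABC-XYZ-1234") is True   # first use succeeds
assert verifier.verify("ABC-XYZ-1234") is False  # replay fails
```

Because each code is consumed on use and the list never exists in digital form, a convincing deepfake voice alone cannot satisfy the check, which is the point of the procedure.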
[00:12:07] Those are business email compromises. Right. That's what the FBI labeled them years ago, BEC attacks. But the BEC attacks are the same. The defense for BEC attacks is the same, but they're getting so much better and becoming so much more successful with it. And I hope that organizations are finally looking towards alternate forms of authentication for payment changes and those kinds of things. Right.
[00:12:35] My wife thought I was crazy when I said we needed a family passcode in order to start talking about these things. Safe words. I mean, we've been doing that. Coming out of, you know, when we were kids, we had a safe word that we would always use with our family. Right. Right. So that feels to me, and I want to get a little bit of validation, like this is the portion of security that's much more important than the back-end software element of it.
[00:13:03] Like, I actually think that, again, there's an accelerant element going on to both defenders' and attackers' abilities. But I feel like it's this portion of the conversation that's much more important. Give me a little bit of your sense of that balance between what we should be looking at from the software side versus what we should be looking at from the process side. Yeah. So if we go back to our Ring of Security methodology, what we discovered over the years is that even if you do your technology perfectly,
[00:13:32] you're still covering only about half of your attack surface. Now, that number can't really be quantified exactly, because frankly it varies from industry to industry, with IoT-heavy industries and so forth. Right. But the point to that is, if you go back and look at the headline breaches over the past year and you do the root cause analysis on any of those breaches,
[00:14:02] when you get down to your fifth why, right, if you're using the Five Whys methodology developed at Toyota that we've all been using for root cause analysis for years. When you get down into that, all of them, not some, not a majority, all of them will come back to an error, a flaw. Somebody missed something in governance. Something happened somewhere that was not taken care of. And I know it's a bit stale, but it affected all of us in the Equifax breach.
[00:14:30] Right. Those patches had been out for six months. So where exactly was the failure? Was it in resources? Was it in change management? You know, where was the failure that those patches did not get put in place to stop that breach before it happened? And it cost them, I can't remember what the number was, something like close to a billion dollars. And most people on the planet, certainly in North America, were affected by it. Right.
[00:14:56] So it always comes down to that. We often blame the technology. But when you're talking about this balancing act, you know, having proper governance in place, and we can go down the list. You know, as we move things into SaaS applications, like the ones we're using even right now, there's third-party risk. There are entire conventions put on by the Global Business Resiliency Foundation on nothing but TPRM.
[00:15:25] It even has its own acronym now that it didn't have years ago. Right. Third-party risk is absolutely huge in what we do. So those are just a couple of things we can list in governance that are critical. Yeah. Now, I'm going to remind listeners, if you're watching and you want to throw a question or comment in there, we will take those in real time. But talking about governance is actually a good opportunity to pivot to my next topic.
[00:15:50] And I'm going to set the ground rules for listeners a little bit: I'm going to bring up something that is often politically charged. I do not want to have a political conversation here, but I'm using it to stimulate a conversation that I think is interesting. We've all been tracking what is going on with the Department of Government Efficiency and their actions moving around the federal government, you know, with some level of uncertainty about what's going on there specifically.
[00:16:16] The intention appears to be to do a level of audit; whether or not the access to sensitive systems has been fully legal is a currently open question. They've been looking at the Treasury Department and OPM, the Office of Personnel Management, and the Department of Labor; workers have filed lawsuits. That's not the bit I want to talk about.
[00:16:39] What's interesting to me here is that this feels instinctively like the idea of an insider threat, and what happens when insiders do not follow standards and norms in organizations. Where I wanted to get a little sense from your perspective is, I'm thinking about what would happen in an organization if internal actors started ignoring rules.
[00:17:05] The classic one that I might think of is, there are oftentimes owners that simply say, I won't do two-factor authentication. Right. They just refuse to do it. And then you either have to change the policy, or bad things happen and the policy becomes effectively unenforceable. Talk to me a little bit about the best way of handling policy enforcement in organizations when insiders refuse to follow policies.
[00:17:31] Yeah. So insider threats are huge, as you can imagine. But I will say this just about insider threats overall. And then I want to go down into the details as you ask. Most of the insider threats that we see in our security operations center are typically accidents. Somebody that's accidentally accessed something or clicked on something or whatever.
[00:17:59] And there are very, very few that we see in our SOC. Even when I was managing the security operations centers on the services side at Unisys and CSC, when I was chief security officer there, in a career I've probably come across five or six, certainly countable on two hands, that were very malicious, beyond trying to steal a customer list or some of those kinds of things.
[00:18:26] All right. So most of those kinds of things are typically accidental. So how do you prevent them? It's in, you know, zero trust. Right. We really have to assume the zero-trust framework. And that is a very, very nebulous term, and it means a lot of things to a lot of different people, really, depending on the vendor that's trying to sell it to you in a particular case.
[00:18:49] But for the purposes of our discussion, it's just the presumption of breach inside the organization. So the proper segmentation, the role-based access control on need, well, we all know need-to-know, right? Giving them the access that they need, but taking that even a bit further: the access that they need when they need it. So maybe even some time controls around there.
[00:19:20] But we look at things like data throttling. There's no reason for a normal user to suddenly be trying to move two gigabytes of data, so we should be alerting on that. So there are technologies and controls, you know, that you put in place around a zero-trust framework that would essentially, I don't say stop, because we don't ever speak in absolutes in the security world, but it would certainly slow it down.
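The data-throttling idea, alerting when a user moves far more data than their role normally would, can be sketched as a simple threshold check. The role names and limits below are illustrative assumptions, not the defaults of any real product:

```python
# Sketch: a data-egress threshold alert of the kind described above.
# Roles and thresholds are illustrative; real tooling would pull these
# from policy and baseline behavior per user.

NORMAL_USER_LIMIT_BYTES = 2 * 1024**3  # 2 GB: flag transfers above this

def check_egress(user_role, bytes_moved):
    """Return an alert record if a transfer exceeds the role's threshold."""
    limits = {
        "normal": NORMAL_USER_LIMIT_BYTES,
        "backup_service": 500 * 1024**3,  # backup jobs legitimately move more
    }
    limit = limits.get(user_role, NORMAL_USER_LIMIT_BYTES)
    if bytes_moved > limit:
        return {"alert": True, "role": user_role, "bytes": bytes_moved}
    return {"alert": False}

assert check_egress("normal", 3 * 1024**3)["alert"] is True
assert check_egress("backup_service", 3 * 1024**3)["alert"] is False
```

As Tow notes, a control like this slows an insider down or surfaces the attempt for review rather than guaranteeing a stop.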
[00:19:46] Or it would alert enough that you would notice, and there would be some alternate actions, something along those lines. Right. Okay. And I get that. It makes sense that most of it is accidental. Yeah. And I think we understand there's a procedure of correction and training that handles the accidental. Let's talk about something a little bit more interesting here. What happens when it's malicious, right?
[00:20:10] What are the avenues that organizations have to push back on that? Okay. That's a very good and very interesting question. I'm going to throw you another curveball on governance. Ready? Okay. Right. It starts at the top. Right. Acceptable use policies. And now I'm going to come back to you with AI, right? So, acceptable use policies. And I'll give you an example.
[00:20:40] Going into a network and finding a gambling site that's being run on your network. True story. We went into a network in the Midwest, that's actually a very long story, but we found a terabyte of child pornography operating inside an insurance company, being used, you know, for spam or whatever. And the defense is: you didn't tell me not to. Okay. Right.
[00:21:07] So because you didn't have that acceptable use policy in place to tell somebody very specifically not to use your network in that way, you could now potentially be criminally liable for aiding and abetting, or whatever the legal term is. Not a lawyer. Yeah. Right. So it starts there. And then, coming back to your initial topic on AI.
[00:21:29] When we're going in and talking to clients about adoption of AI, it starts with acceptable use. Telling your users what they are allowed to do with AI. What data is allowed to be used and is not allowed to be used. I think that's going to come up in your Doge comments, right? Because there's some questions about that type of data.
[00:21:54] And there have been some accusations of that data going into AI, which, I would absolutely, 100%, I'm not even a little bit surprised that Elon Musk and his team are using an AI, agentic AI, I should say, in summarizing some of these things.
[00:22:15] I feel very, very confident that it is not going into ChatGPT, especially on the public side. Right. So I would suspect, and I do not know this, but I would suspect that, being Elon, there is a standup of Grok in some way that they're using privately for that. Now, I can neither confirm nor deny that.
[00:22:41] But knowing how these things work and knowing Elon Musk and his typical methods of operation, that would be my suspicion. Right. So it's a great example.
[00:22:53] And again, I'm using it as an example to talk about the fact that many Americans might be concerned that their private information stored within these government systems may be being used by AI systems in ways we are unclear about, and may be managed in ways that we're uncomfortable with.
[00:23:42] I'm using that. How do you enforce that insider threat? What are the effective ways to use policy, procedure, governance? I don't know, the law? Like what are the ways that you actually enforce that? Because I would think and I want you to push back if they are able to be disregarded, they become nonfunctional. non-functional right yeah so um starting back with with the acceptable use right so it did it
[00:24:10] it starts there uh and you try to govern it uh in such a way to and most of this comes around training i mean you certainly i mean you want to ask samsung uh you're right there they've become the poster child for this exact topic when they have their engineers posting code for something as innocuous uh as as just simply reviewing the code for errors or maybe taking a critical meeting and putting the minutes from that meeting in and asking for a summary those are very uh you know common
[00:24:39] things that we would be using um you know basic generative ai for uh but the way i like to describe that is that salt in the ocean right once you pour that in uh it's it's there so uh preventing it you know along well first of all uh we have to be able to alert on we have to know know when it's happening uh and that is a very difficult thing to do on ai because your data is going out
[00:25:06] through a browser uh right it's on your opening you know firefox or chrome edge whatever uh and you're interacting via text through a secure channel uh tls or what have you uh into these these bots and then information is being popped up uh on the screen it's not like you know a malware code is coming in that has to be you know deep found and sandbox and detonated and all these kind of terms
[00:25:32] that we use around that uh it's literally coming in and out in uh you know some uploads maybe some documents some pictures going back and forth perhaps and some text so it is very very difficult to spot i know there are some technologies and i don't want to get into names but there are a few technologies that are being developed i'm seeing uh we're actually experimenting with a couple of these
[00:25:54] that are uh pre-browser in i'm gonna call it the kill chain right so when you go through as i thought so as that data is being put in and it goes in and then it goes into the the the agent and then comes back knowing about it as i started off with and getting something that is a detection agent and an alerting agent and perhaps even a prevention agent at the browser level before it goes is that's where
[00:26:23] the industry really needs to be focusing on and and what that actually looks like so i'm hoping that that is you know we talked about an arms race uh on the good guy's side that needs to be the absolute forefront um of the battle uh in coming back to your topic of the insider threat and keeping that data from going out first of all we have to um tell our users uh and govern it that it doesn't then we have to be able
[00:26:49] to be alerted on it when it does and then we have an action to take uh as a result and then depending on what that is whether or not we're we're simply detecting on it whether or not we're blocking it uh whether or not we're taking action against the user itself and deleting or or or suspending accounts maybe or something like that so there's there's a workflow that goes along with all of all of that uh gotcha and a reminder for anybody watching if you've got questions or comments throw them in
[00:27:15] the chat and we will definitely address them now brian as i started right you you brought up a little bit of the stuff that i was even going to in terms of thinking about the governance and the and the oversight of both i want to get a little bit of your reaction and thinking particularly with your experience working with department of homeland security one of the areas that we've been that i've been keeping an eye on is developments over at the cyber security and infrastructure security agency or cissa as well as some of the stuff going on at department of homeland security around some
[00:27:41] of their advisory boards there's there feels like there's a refocusing of their some of their efforts as somebody who spends a lot of time thinking about threat detection running a sock you know what are the resources and key elements that you're keeping an eye on in those agencies to make sure that like you know we have a continued steady hand and we're getting what what we need from those organizations as a security community yeah so coming back to your comment uh there about your
[00:28:07] about seeing a shift right so uh biden had an executive order on on ai that was uh as you would expect um from that ideology if you will that was very uh i'm gonna make up my own word and call it guard railie right so we want to make sure that and we're doing all and we want to make sure it's protected and it's all this and um you know uh with the new administration uh that's in there that uh as we've
[00:28:35] seen with DOGE and a lot of other things that is a lot more aggressive, a lot more private innovation. It lifted that executive order and put something else in place that inspires that at the private level. So with Jen Easterly's departure, and I'm forgetting the name of the person they just put in, maybe you have it in front of you, but he doesn't have any cyber
[00:28:59] experience, right? Yeah, there's literally no cyber experience from this guy, and I should have the notes in front of me, so I apologize for that. But the general thinking, if you're asking for my opinion on it, is something I've seen in some of the other cabinet
[00:29:21] positions and so forth: having a fresh opinion, a fresh take. Maybe not having a cyber background will lead somebody like that, someone with an excellent administrative history and a record of success in what they have done, to
[00:29:44] come in and ask questions in a different way, in a way that changes how we look at things or gives us a new direction. I know when I talk to boards of directors for our clients, I spend a good bit of time talking to people who are, say, more advanced in their careers and don't have a fundamental understanding of cybersecurity, and you
[00:30:13] know, I try to explain to them what the risks are, how we quantify and measure risk, and those kinds of things, without being technical about it. So my hope is that the fundamentals change over there because of the executive orders and the new leadership, and maybe that gives us a different lens to put things through. Coming back to the original thing we were
[00:30:41] talking about, that cyber is beyond IT, there may be some thinking on that side; this is speculation and opinion. All the good work that Jen Easterly and the team did with the standards, the alerting processes, the public outreach, the programs CISA had; I've really thought that under Jen's leadership CISA was more
[00:31:09] accessible to people like us than the agency had ever been before. The direction of that agency made it less about what the government is doing and what the government wants you to do, and more about what you need help with from the government. There just seemed to be a real fundamental shift there, so I do hope that continues
[00:31:35] with with her depart with her departure well i think you've definitely given me something to to keep an eye on brian this has been a fascinating conversation really appreciate you uh spending some time with me if listeners are interested in reaching out what's the best way for them to do so yeah so uh leapfrogservices.com is the website you'll find um information on the ring of security there as you can imagine my email address is first dot last so brian.tao at leapfrogservices.com you can email me
[00:32:04] there and i would be happy to answer any questions or continue the conversation and uh you know a lot of things that we i mean when you got 30 minutes on a couple topics right so getting into like dark web and and dark gbt's and how that's affecting uh cyber and and attackers and what it's enabling them to do on the ai side right so there's lots and lots of topics that we can talk about certainly we could be going for hours but we do need to be careful with everybody's time brian this has been such fun thanks for joining me today absolutely thank you dave y'all have a good day and i want to preview an
[00:32:33] upcoming interview that I've got releasing soon. James D. Wilton joined me; he's the author of Capturing Value. He delved into some innovative SaaS pricing strategies, particularly the rise of outcome-based pricing and its implications for service providers. He shared his insights on how businesses can effectively create value for customers while navigating the challenges of pricing, differentiation, and packaging to enhance profitability and customer engagement. Here's a
[00:33:00] preview of that interview. The one that everybody is talking about right now, which is also very relevant for services, is outcome-based pricing. Instead of pricing in a traditional way and scaling by traditional metrics, like number of seats in a SaaS company or number of hours of different people in a service company, you are charging for something that relates to the outcome the offering gets, so how much value is created, and charging based on
[00:33:29] that. I will say it's interesting, because outcome-based pricing has always been there, and it's always been held up as this thing which is, in theory, the gold standard of pricing. It's perfect, right, because you can charge for exactly the value you are producing. But it's practically quite difficult to do, because you need to get the customer to agree that that's the right level, that that's the right way to price, from the beginning.
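The contrast Wilton draws, scaling price by a proxy metric such as seats versus charging a share of the measured outcome, can be sketched in a few lines. This is a toy illustration only: the function names, the 10% value share, and the dollar figures are hypothetical, not from the interview or the book.

```python
def seat_based_price(seats, price_per_seat):
    """Traditional SaaS pricing: scale by a proxy metric (number of seats)."""
    return seats * price_per_seat

def outcome_based_price(measured_value, value_share):
    """Outcome-based pricing: charge an agreed share of the value created."""
    return measured_value * value_share

# Example: 50 seats at $30/seat vs. a 10% share of $20,000 in measured value.
print(seat_based_price(50, 30))          # 1500
print(outcome_based_price(20_000, 0.10)) # 2000.0
```

The practical difficulty Wilton notes lives entirely in the `measured_value` input: both sides must agree up front on how the outcome is measured and what share is fair.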
[00:33:55] We talk about value and delivering value to customers all the time, and we dove right into how to do value pricing and what it means. This interview is already available for my Patreon supporters; they already have it. If you want to sign up there, it'll drop on the weekend on YouTube and the podcast feed for everyone else. If you're interested, I really do encourage you to listen. Visit patreon.com/mspradio to get access to that interview right now. Now I want to thank Salesbuildr, our Patreon
[00:34:22] sponsor, whose support makes this show possible. Focus on your IT sales workflow with the power of automation, and visit them at salesbuildr.com, that's B-U-I-L-D-R dot com. And vendors, you too can get your name mentioned on this live show; it's a simple monthly subscription. Visit patreon.com/mspradio to learn more. And listeners, you can support the show: like, share, and follow on your
[00:34:46] favorite platforms, or support directly on Patreon with our give-what-you-want model. You set what you think the content is worth, and you get access to the videos early. If you have a question and are listening to the recording, send it in at question at mspradio.com. Thanks for joining me for the Business of Tech lounge, and I will see you next time.

