We are finally talking about Application Software Security with Matt Lee of Pax8. We went deep on a few safeguards and found some tangents.
[00:00:00] Okay, it says recording now. And it says it's being recorded. Okay, we're good, guys. We've been recording for — oh no, we've been on for 44 minutes. Okay, and now it's recording. I have a red dot, I promise. Welcome everybody to another episode of MSP 1337.
[00:00:22] It is that time of the month where it is Fireside Chat with Cyber Matt Lee of Pax8. Matt, welcome to the show. Man, I appreciate it. This has been a fun run. We're at number, I think, 16, yeah? Application Software Security, and yes, we are actually recording.
[00:00:38] But you know what? It's good, because we've only done this once before. We did our best to recreate it, but at this point I'm going to cut you off, Chris, in the script and say — no, I tease him.
[00:00:46] Hey man, but MSPs don't use application software. They don't develop any applications. So why would we care, Chris? Well, I think, to add to that, I don't know how many times I've actually heard MSPs talk about things like, we've been
[00:01:00] working on standing up a side of our company that builds applications. We're not quite there yet. Yeah, and we built our own portal and we used our own ITSM and we do all those things.
[00:01:11] And yet the majority of MSPs say they don't develop software. And even if you're not one of those, I want you to pay attention to this set of safeguards, because you probably write your own scripts, because you probably
[00:01:21] do copy scripts from Stack Exchange or wherever you're going to get them. Right. Try them out, hope they're good. Or somewhere. Yeah. Yeah. And I think the other part of this too is, and when we were talking about this earlier
[00:01:35] when I thought we were recording, is getting into things like when you launch cloud applications or web apps that are available to consume. It doesn't ask you whether or not you built them custom.
[00:01:49] It just says that as part of this control, you have a responsibility to maintain and patch and be cognizant of things like what third-party libraries those products or services might consume, that you now have an obligation or responsibility for.
[00:02:03] It raises questions around how many things we could be doing through open-source paths and that kind of thing, but the reality is there's a whole lot of work involved when you go down that path. Well, there's a perfect example we can use in this.
[00:02:18] For example, my good buddy Kelvin Tegelaar, who does CIPP, right? You might host it yourself, but that means you're responsible for keeping it up to date. That's what they mean here, and I'll read the overview to you for application software security.
[00:02:32] Manage the security life cycle of in-house developed, got it. Okay, cool, that's where everybody stops reading. Cool, we don't develop software. Comma space hosted, wait a minute, hosted, comma space or acquired software to prevent comma space detect comma space and remediate security weaknesses before they can impact
[00:02:52] the enterprise. That's what it means to prevent — isolation, methodologies that allow you to protect it and micro-segment as necessary. Then there's the nature of detect: that means monitoring and finding stuff. It seems logical. Well, the first one is in-house developed, right?
[00:03:11] Yeah, and in-house developed, hosted, or acquired are separate entities, right? Very much so. Yeah, so that's the way I want people to think about this set of safeguards as we go into it. 16.1, Chris, is a process-oriented safeguard.
[00:03:27] It's one of those ones moving to govern, as we see that play out in 8.1, which we'll talk about at some point. Yep. As of this episode, 8.1 is live. It is now public, it is available. You can download it. You can see that they're shifting towards governance.
[00:03:42] Okay, and it says establish and maintain. That's the one you can always tell, right? Establish and maintain — that means it's process-oriented. A secure application development process. You're like, okay, I'm checking out right now. I don't make applications. In this process, address such items as secure application design standards.
[00:03:59] That means, like, before you make an actual application or a script, ensure that it's designed properly. Jason Slagel did a great job talking about some of those problems with, like, AI development. So, secure coding practices, right?
[00:04:10] Like actually know where your code is, how it's made, how it's checked in, checked out — and looking at you, every vendor that's been pwned by a supply chain attack. Developer training: make sure your people know what the hell they're doing.
[00:04:23] That's the whole premise of Jason's talk last I saw it. Right, vulnerability management. Wait, wait, wait, wait, wait. Chris, wait. Why is vulnerability management called out? I thought control seven handled that. What's the difference here?
[00:04:36] Well, I mean, we're talking about software that you are building, and/or software where the libraries are being pulled in from somewhere else. You're saying to me that if I'm building a piece of software, I'm responsible somehow
[00:04:54] to ensure that it has vulnerability management — of the crap I create and the vulnerabilities I make? It puts me on the other side of the fence, as if I'm that mysterious vendor person we always talk about who's asked to manage vulnerabilities, not me.
[00:05:05] This is blowing my mind right now, Chris. I don't know. It is a mind-blower. And I also think we've got to remember Control 6, because if you're building custom software, what kind of user privileges and permissions are you baking into this?
[00:05:20] And you're singing to my heart right now, Chris. I'm glad we're far apart and not close together right now. It's too much. Too much. Yeah. Okay. Continue. Vulnerability management. Security of third-party code — like actually giving a shit about the security of the stuff
[00:05:35] you ingest which means you have to understand it. You have to look at it. You have to be responsible for keeping an eye on it. Right. I think there's been several great examples of when you go get something from GitHub.
[00:05:44] Make sure there's thousands and thousands and thousands of people using it. And tons of people forking it and tons of pull requests and tons of responses. Like, do that. Not the one that has two files. Yeah. Two files.
[00:05:56] Nobody's touched it. Matt Lee wrote it. It might have some bad stuff in it. You going that way? But I think this is an area where — forget for a minute whether or not you may or may not develop software or applications.
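That "thousands of users, tons of forks, recently touched" sniff test can be turned into a rough score. This is just a sketch with made-up thresholds — not an official metric — applied to metadata you'd pull from a repo by hand or via an API:

```python
from datetime import datetime, timezone

def repo_health_score(stars, forks, open_pull_requests, last_commit):
    """Rough vetting heuristic for a third-party repo before you ingest it.

    Thresholds are illustrative only: the idea is simply that 'lots of eyes
    and recent activity' beats 'two files nobody's touched'.
    """
    score = 0
    if stars >= 1000:
        score += 1  # widely used
    if forks >= 100:
        score += 1  # actively forked
    if open_pull_requests >= 5:
        score += 1  # community engagement
    days_stale = (datetime.now(timezone.utc) - last_commit).days
    if days_stale <= 90:
        score += 1  # recently maintained
    return score  # 0 (walk away) .. 4 (worth evaluating further)
```

A popular, active project scores 4; the two-file repo nobody has touched since 2019 scores 0 — and either way this is a screen, not a substitute for actually reading the code.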
[00:06:08] This is an area where we know that a lot of vendors whose products we consume are using these libraries, are using these services.
[00:06:19] And I think we need more than just the "I trust you, my vendor, to take care of these things." Yeah. I'll take where you're going with this a little bit further, which is: take 16,
[00:06:28] what you garner from it, and use it to ask your vendors if they're doing these things. There. I think that's exactly where you were going with this. That was where I was going. That's where I'm at. So all right. Continuing: application security testing procedures.
[00:06:41] We're going to get into that. But — I thought pen testing was... Yep. But if you're developing code, you now need that to be pen tested. Wait a minute, Matt. Are you telling me there's two types of pen testing?
[00:06:51] Like, there might be a focus that is very specifically on pen testing my environment — my users, my humans, the software I consume and buy as an organization — and the stuff I make? Yeah. And then you get into the types of that.
[00:07:04] They might do authenticated or privileged-access pen tests to see: can I elevate privileges based on the credentials I've been given to do this penetration test?
[00:07:17] All right, 16.2. Now that we've got a process we're building, what we'll find is that all of the rest of 16 modifies 16.1. If you're looking at my visualizations, you'll see that's a solid line. That means a parameter being given to 16.1.
[00:07:31] So 16.2, while it's also a process-oriented safeguard — I think, in fact, most of 16 is. Yes. It says establish and maintain a process to accept and address reports of software vulnerabilities. Like, there was a time, Chris — it wasn't so long ago. I feel like an old man.
[00:07:45] I don't know if this is how it creeps up on you or if it's the gray beard, but I feel older. But there was a time in the last decade that software companies that we know and love were
[00:07:54] actively suing people to shut them up when they found a vulnerability. Yep. It wasn't quite that simple in one of the cases you might be thinking of, but basically the same theory applies. Most people don't have a vulnerability management process.
[00:08:07] We didn't have a very authoritative vulnerability process until just recently. ConnectWise, same thing — they just set up a vulnerability management process that's much more official. So you start seeing this shift. And you could go back further in time.
[00:08:20] Remember when everybody was watching when browsers would release another version of their application? So you go from 5 to 5.3 and you're like, hey, we're not deploying this to everybody. We're going to put it in the lab first and go make sure that everything we
[00:08:36] use still works and is fine. And that goes back to regression testing and validation over time that we've dealt with. We did not assume the vendor was right. We did not assume the vendor was responsible for verifying all of the flaws found.
[00:08:51] We took it upon ourselves to say, we have a responsibility and an obligation to our own staff and our own clients to make sure that we've checked the integrity of the apps and services we're going to use. Yeah, and we don't do that anymore.
[00:09:05] And even then, like, we've never been great at doing any of that. No. And we'll get into some of this with that testing piece I talked about earlier. Continuing on in the safeguard, it talks about addressing vulnerabilities. So now we need to accept and address reports of vulnerabilities.
[00:09:20] And it says you need to provide a means for external entities to report. That's where it falls apart, right? A lot of people didn't have a trust center or someplace to actually report this, or they didn't participate in something like a bug bounty program. Right.
[00:09:31] With some of the early ones — no, that was more than 10 years ago, before that started, but it wasn't much talked about. I know, and even then it's still something people look at as, look at these idiots, bug bounty hunters just want to make a profit off of us.
[00:09:44] And, you know, you have problems and you're not paying people enough to clearly fix them yourselves, and therefore we are an externalized force. Okay, I'll get off my damn pulpit. I'm not going to preach from there. Okay. I'm good.
[00:09:53] But the point is that the use of vulnerability tracking systems should include severity ratings and metrics for measuring timing from identification to analysis to remediation. Like, if I have code and it's found to have some deserialization or remote code execution
[00:10:08] flaw, or an IDOR, an auth bypass, whatever it might be — all the different things. If those things are there, we need to know when they were reported and how quickly we fixed
[00:10:19] them, which is a sign of how healthy your application development process is around security. Because, Chris, here's where it gets weird. I'm going to get on my pulpit a little bit. The problem isn't that they don't know about the vulnerabilities.
[00:10:33] The problem is that they cannot prioritize them over that next feature that is needed for the next quarter's sales growth — the next MSP saying, I'm not joining this platform until you have this feature. All right: feature X, feature X, feature X goes in front of everything.
[00:10:45] Everything, every sprint from now forward: feature X, feature X. What about that big security bug? Hide it. Don't worry about it. We'll deal with it — we've got to have it. And I'm not saying people are that callous, but I will say that the forces of capitalism
[00:10:56] that drive the continued growth we all love are the same forces that, without regulatory or other checks and balances, allow for this to exist in applications. This is the problem we live with, because we do not have that regulatory oversight.
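The tracking-system metrics mentioned a moment ago — when a vulnerability was reported and how fast it got fixed — reduce to simple date arithmetic. A minimal sketch, with a made-up ticket format standing in for whatever your real tracker exports:

```python
from datetime import datetime

def mean_time_to_remediate(tickets):
    """Average days from report to fix across closed vulnerability tickets.

    Each ticket is (severity, reported, fixed); the trend of this number
    over time is the health signal for your security process.
    """
    if not tickets:
        return 0.0
    return sum((fixed - reported).days for _, reported, fixed in tickets) / len(tickets)

closed = [
    ("critical", datetime(2024, 7, 1), datetime(2024, 7, 3)),   # RCE, fixed in 2 days
    ("medium",   datetime(2024, 7, 1), datetime(2024, 7, 15)),  # fixed in 14 days
]
```

Here the mean works out to 8.0 days; tracking that number per severity band is what exposes the "feature X jumped the queue again" problem.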
[00:11:09] You're not having that conversation, but yeah. I mean, this isn't new, right? This has been endemic from the very beginning. Very beginning. Yeah, it's garden-variety. And in some respect it is understandable, because you're building something to make money.
[00:11:25] And I think that's the balancing act of recognizing the risk. And one of the things, to your point, that we're not seeing a lot of is: I think vendors in the early stages don't
[00:11:36] often spend enough time understanding what the potential impact to the organization would be if the vulnerability was exploited, because they're so worried about getting that next iterative release. And simply put, the people that invest in these companies have not been forced to, or have
[00:11:54] not been bitten hard enough, to change their expectations of startups. So they don't ask those questions, right? Like, when investors give money to a startup, it's because of what it has the potential to do, not what it has the potential to break. Right. Oh, 100%.
[00:12:11] I don't think we're going to get that aggressive. Yeah. All right. Okay, 16.3. So 16.2 says have a way to accept vulnerability reports. 16.3: perform root cause analysis. This is human-defined, Chris.
[00:12:29] This is actually the finish of my exposition of my earlier point, which is: if we have to make a control that says perform root cause analysis, it means that people aren't doing it. When reviewing vulnerabilities, root cause analysis is the task.
[00:12:43] And they're even having to educate in this control, to tell people what the hell it is. It is the task of evaluating the underlying issues that create vulnerabilities in code — not fixing the vulnerability, fixing the root cause of that vulnerability. It's backed up a few stages, right?
[00:12:57] I think if we put that into context, it speaks to maturity, right? Like, this isn't band-aids. This isn't "we've addressed the symptom, we've suppressed it."
[00:13:07] It's: we genuinely have a path forward so this is no longer going to come up again. It's like the old imagery of the big giant warning banner coming out of the machine and then going straight into the shredder, right? Straight up. No — we fix the problem.
[00:13:21] Well, the best one is like, you know you don't have to worry about it if you just take the paper out of the fax machine. Or what is it — Homer Simpson covering up the blinking light on his dash, right?
[00:13:32] That's the perfect image for this, right? He's like, done. Okay. So it says you need to review vulnerabilities and provide root cause analysis. And the essential point is to move beyond fixing just an individual vulnerability and move to actually fixing the cause.
[00:13:47] And let's unpack that. What could that be? Let's say you have something that lends itself, two stages down in the code, to causing problems. You fix it by putting something here to stop it — but you could redesign that element to not have the flaw at all.
[00:14:02] That's the difference between those two, right? The work to go redesign an element, which might be thousands and thousands of lines long, into a new design. I will say this is probably where machine learning and AI help the most for code. Absolutely.
[00:14:17] As time goes on, because they can take thousands and thousands of lines of code and find where the actual problem exists. Well, or, I mean, just the redesign capability — "redesign this so that this
[00:14:28] flaw can't exist" — is something I see in the very near future, maybe in the next year, if it's not here now. Yeah. And you're starting to see it with, like, GitHub and GitGuardian and other players that are training models to analyze that.
[00:14:39] But I mean, just taking a developer in their IDE and saying, help me redesign this and test it — and its regression testing — like that kind of capability. I just listened to a podcast episode of Pivot — Kara Swisher, is it? That's who does it. She wrote the book Burn Book.
[00:14:59] It's nice. Anyway, she interviewed the CISO for ChatGPT, and that was one of the things they talked about. It's not about how we can replace coders to write better code. It's about how tools can fix the brownfield mess of crap that exists and get people above water
[00:15:17] without spending another headcount, when you're buying something for 300 bucks a year per user. And this is something that's not just in the programming space. They were talking largely about, like, garbage in, garbage out. Where do you scrape data from? How do you use it?
[00:15:30] I mean, is it good data? They were talking about, like, the whole fake news thing and how you can emotionally convince somebody to do something because it aligns with their ideology.
[00:15:40] Well, what was interesting about that is the point that this has been happening for hundreds if not thousands of years, right? The beauty of what is happening with AI, and what we can do with large language models
[00:15:54] and the ability to rewrite code — what you now have is oversight that says, we no longer want to accept that; we want this to be better. Vendors are building products — they don't want them to break.
[00:16:07] They don't want to spend money developing new programs thinking, oh, sure, I hope there's a big vulnerability so I can be in the news again. Yeah, no doubt. And I think we are in the day today where that will change. Not to get too theoretical on
[00:16:19] you all — sorry, getting off the topic of the CIS controls. Okay, so, perform root cause analysis. Moving on: establish and manage an inventory of third-party software components. I won't belabor the point with all the components, but basically it's the beginning
[00:16:31] of SBOM — they even quote it in here: bill of materials, software bill of materials, SBOM. It also covers components slated for future use — things I might use — which is kind of interesting that they would call that out.
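A component inventory can start this small. The field names below loosely echo CycloneDX, but this is a hand-rolled sketch, not the spec — note the "planned" scope for the components slated for future use that the safeguard calls out:

```python
# Minimal third-party component inventory (SBOM sketch); entries are made up.
sbom = {
    "components": [
        {"name": "openssl",     "version": "3.0.13", "scope": "required"},
        {"name": "left-pad",    "version": "1.3.0",  "scope": "required"},
        {"name": "some-ml-lib", "version": "0.9.1",  "scope": "planned"},  # slated for future use
    ]
}

def shipped_components(bom):
    """Components you actually ship today -- the ones whose risk you own now."""
    return [c["name"] for c in bom["components"] if c["scope"] == "required"]
```

`shipped_components(sbom)` returns `['openssl', 'left-pad']`; everything in that list is something you now assess, patch, and answer for.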
[00:16:43] And any risk a third-party component could pose — like, you're basically starting to say, hey, this third-party piece I'm using for financial management... Now, here's a perfect example: one major player that I used as an MSP had a flaw
[00:16:56] where authenticated scans kept flagging a Chrome binary that had basically 2,000 vulnerabilities on it and was 100 versions old. And I said, nobody in my employ is using that — I was losing my mind. As it turned out, it was baked into the thick client.
[00:17:10] And it was the sole component for signing into QuickBooks. And it was because that package was old and they hadn't paid for a new package — and even if they had, it would still be old. It was like version 71 versus 67.
[00:17:21] And we're on, like, Chrome 190 or 200-something now, so that tells you when this was, too. But the point was, that was a third-party component that, when you're assessing the risk — it's now handling the financials.
[00:17:35] I will assess it higher. Okay, the point is: have a list of your third-party components. We have a lot to get through on this one, so I'm going to move this along. Yeah, we can move a little faster.
[00:17:42] Some of these get a little bit nerdy anyway. The next one is use up-to-date and trusted third-party components. The first says have a list; the next one says make sure it's up to date and trusted. Goes back to what I was talking
[00:17:51] about earlier, like, yeah — look at how many people are using it. Is it up to date? Are they doing updates? I think this is a specific one that applies as if you were doing third-party vendor evaluation as an MSP.
[00:18:03] Like, this is one where you can do some checking on your own. Yeah, you can go to BuiltWith, at least for SaaS applications. Yeah. For the most part, you can get into certain things that help you understand that.
[00:18:12] But really, it's just a good thing to understand when asking these questions. Tell me about your application security process. I know you can't share everything but do you do these types of things? Are you meeting these types of components?
[00:18:21] I think that's a great question I would ask, right? Walk me through your API. So you're saying it's TLS version 1.0 — is that what you said? Okay, we're done here. I'm going to see myself out. Yeah, all right.
[00:18:32] But I think that's still happening, and I think there's a lot in this space that doesn't necessarily understand the nature of that — when you look at it through the lens of what does that expose us to, because of what version of an API may be getting
[00:18:43] used when there's newer versions available, and have been for a long time. Well, that doesn't even get us into the sunsetting of legacy access methodologies, which we described before. Like, we didn't turn off the old APIs. Yeah, I know.
[00:18:55] Don't get me going on it — you're trying to pull my chain. We only have 10 more. Okay. So: establish and maintain — we're back in process land again — a severity rating system. Right? Now, if you're building it and you're creating these reports of vulnerabilities,
[00:19:08] all they're saying is, find a way to rate them. If you have a pure 10, maybe the 10 gets dealt with first. If you've got a three, maybe she sits for a minute, right? Like, it might wait for a slower day.
[00:19:19] Maybe it's more of a feature that you just didn't know you had. And even this comes up with, like, regreSSHion, for example, right? We're talking about regreSSHion, the SSH vulnerability that's out there right now that requires patching. I think it takes 10,000 simultaneous connections just
[00:19:33] to overwhelm a 32-bit system. They don't have any theoretical exploitation of it on a 64-bit system, which would take orders of magnitude more than that and not be easy to achieve. So when do you patch that?
[00:19:43] Well, that 10 might be a three, or that nine might be a three, right? Like, it comes down to some of that. But the point is, as you're doing this in your own application, you need to have a severity rating system,
[00:19:52] and a way to set a minimum level of security acceptability for releasing code. They kind of packed that in here. It facilitates prioritization of what's discovered, when it's discovered, and what's fixed first. One can always find bugs, right?
[00:20:04] Like, let's be honest, there's always going to be a bug trail. How exploitable is that bug? What is it worth? What can I do with it? Those are the things that matter. And you need to be able to investigate that, or pay for third-party SAST/DAST capabilities, right?
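Setting a "minimum level of security acceptability for releasing code" can be as plain as a gate over open findings. The scores here use the familiar 0-10 CVSS scale, and the 3.9 cutoff (top of the "Low" band) is an example policy, not something the control mandates:

```python
def release_gate(open_findings, max_acceptable=3.9):
    """Return (ok_to_release, blockers) given open findings with CVSS-style scores.

    Anything scoring above max_acceptable blocks the release; blockers come
    back sorted so the 10 gets dealt with before the three.
    """
    blockers = sorted(
        (f for f in open_findings if f["score"] > max_acceptable),
        key=lambda f: f["score"],
        reverse=True,
    )
    return (not blockers, blockers)

findings = [
    {"id": "VULN-1", "score": 9.8},  # remote code execution
    {"id": "VULN-2", "score": 3.1},  # low-risk information leak
]
```

With these (made-up) findings the gate fails on VULN-1 while the 3.1 is allowed to sit for a minute — exactly the prioritization the safeguard is after.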
[00:20:17] All right, 16.7. 16.7 — holy God. It's a blue triangle in my visual because it's a tool, essentially. They're saying: use standard, industry-recommended hardening configuration templates for application infrastructure components. Now, I will call out that this is very, very complimentary to the creators of these controls, right?
[00:20:40] In the sense that CIS sells hardened images. Those hardened images are like two cents more per compute hour on AWS. So there might be some reason this is called out. That aside, I don't disagree with their statement.
[00:20:52] What they're saying is: get shit to a baseline configuration. Like, make sure when you roll it out, use something like Terraform, or Kubernetes and kube clusters, to check that the deployment meets what it was actually designed to be doing.
[00:21:05] And granted — anybody that's a developer, make fun of Matt, the security guy here, because I'm not massively a developer. I'm just a baby at understanding these concepts. There's something to add to this, though, that I think is important here too.
[00:21:18] You talked about the configuration and I think that's where this gets almost over the top. You think about the platforms that are out there that you can use, Google Cloud and some of the others where you can go in and build your own application
[00:21:29] Just because you can build the application, it doesn't mean you've done any of these things, right? Like, this is asking: where is this application being hosted? Have you put the parameters around it — you know, the web application firewalls, the fill-in-the-blank?
[00:21:43] And it's really saying — and Chris, this is the interesting part — don't do it yourself: either get a template or an automated methodology. That's what they're saying. And if you look at it, they say on underlying servers,
[00:21:54] on databases, on web servers — that's how those are delineated. It's actually saying you're not going to save money just because you're building it yourself if you're not making sure these things are always right. Right. Yeah, because if you're doing this yourself,
[00:22:06] you have to do all these things as well, to your point. So that raises questions — yeah, go ahead. It also calls out cloud containers, which means you need to have some Terraform or Kubernetes methodology,
[00:22:17] something to bring that out that way. PaaS, right? Or infrastructure as code — IaC, if you will. And SaaS — wait. So what they're saying is, if I'm using an application I built in Amazon and I use some type of Amazon
[00:22:32] Cognito service, the way it's configured matters — with a standard template, right? What they're getting at is: if you're going to build something, everything it touches, everything it consumes, everything it's made of needs to have standardized templates and configurations.
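Reduced to its core, a hardening template is a desired-state document plus a drift check — the same thing Terraform or a CIS hardened image gives you at scale. The settings below are made up for illustration:

```python
# Hypothetical hardened baseline for one infrastructure component.
BASELINE = {
    "tls_min_version": "1.2",
    "public_access": False,
    "logging_enabled": True,
}

def drift(actual, baseline=BASELINE):
    """Map of setting -> (actual, wanted) wherever a deployment deviates from the template."""
    return {
        key: (actual.get(key), wanted)
        for key, wanted in baseline.items()
        if actual.get(key) != wanted
    }

deployed = {"tls_min_version": "1.0", "public_access": False, "logging_enabled": True}
```

`drift(deployed)` flags `tls_min_version` as `('1.0', '1.2')` — the TLS 1.0 conversation from a few minutes ago, caught automatically instead of in a vendor interview.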
[00:22:48] Holy crap, let that sink in for a second. Like, this is why building custom software oftentimes is not the best path to take, especially when you're doing it just to save money. Yeah, oh, 100% — preaching. All right, 16.8. That's it. Come on, y'all. This has to be said.
[00:23:07] Can I just walk away now? Can I just quit? Separate production and non-production systems. So yeah, man, I think this is an interesting one, because if you go back in time, it was very common to have separate dev and production environments, because that's how we started
[00:23:24] out in the programming space. We knew that compiling something wrong could bring a server down. So you didn't look at it from the other perspective, right? And that's why I loaded up on having to have this conversation.
[00:23:35] What I think it means, though, is yes, we would have separated those environments, but they might have shared the same database, or a copy of the database. I hope not. In my mind, I think this goes further, if you really take this
[00:23:46] down into your own heart: I need separate, clean data that's not going to be PII if it's leaked, that's not going to be in that environment, that doesn't have a clear path across. In fact, one of the MSPs that's part of the happening —
[00:23:58] I was sharing with him the fact that I'd found his dev environment, and it wasn't protected by Cloudflare. It was actually still behind their Comcast modem. And they had made a mistake in showing me that, because that told me they were both probably there, right?
[00:24:12] And so the first thought was, oh, I think that's separated. Well, I hope it is. And that's the point: you shouldn't hope it is. If you're going to build something for sandboxing, keep it fucking separate from the thing you want to keep protected. Well, and permissions, right?
[00:24:24] So your dev environment should not have the same permissions as what it would take to publish to your production environment. 100%. It's a CI/CD pipeline, right? It's — it's, yeah — it's constant integration, constant development. Oh, gosh. I hope I didn't screw that up.
[00:24:38] CI/CD — see, this is where I get to tell him he's an idiot. So as he's pulling that up — I mean, it's continuous integration, continuous delivery or continuous deployment. So that was the chance to self-correct. Matt, you lose one Matt point from yourself. OK.
[00:24:54] But the point is — Matt points, I do get to make them; it's like printing your own money, it's amazing. OK. So the point is, you need to have separate environments. And that means fully separate. All of it.
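Keeping PII out of the non-production side can be a one-way transform when you build dev datasets from production records. A sketch with made-up field names — hashing keeps referential integrity (the same customer always maps to the same token) while the real name and email never cross the boundary:

```python
import hashlib

def sanitize_for_dev(record):
    """Strip direct identifiers from a production row before it enters dev."""
    token = hashlib.sha256(record["email"].encode()).hexdigest()[:8]
    return {
        "customer_id": record["customer_id"],
        "name": f"user-{token}",
        "email": f"user-{token}@example.test",
        "balance": record["balance"],  # non-PII fields pass through unchanged
    }

prod_row = {"customer_id": 42, "name": "Ada L.", "email": "ada@corp.example", "balance": 17.5}
dev_row = sanitize_for_dev(prod_row)
```

If the dev environment leaks — sitting behind that Comcast modem with no Cloudflare — what leaks is tokens, not customers.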
[00:25:04] So that way, if it's compromised, it doesn't have a breach impact on the data that you are held responsible for, or share responsibility for. Use shit data, keep separate systems, have separate accounts. Don't give them cross-permissions. All of the things. Go take Beau Bullock's
[00:25:16] Breaching the Cloud course. You'll walk off and go, oh my God, I'm completely wrong in everything I do in my life. No, I tease. But it is fantastic at teaching you some of those separations in that development space. So, OK: 16.9.
[00:25:28] Ensure all software development personnel receive training. But what about 14, and the end of 14's specificity of role-specific training? This is a reinforcer, right? Because at the end of 14, you conduct role-specific training. And they even call out developers getting secure development practices.
[00:25:47] So why? Well, because 14.x comes before IG2 of 16.9 — or comes after 16.9, because it is in IG3. So the point is that this, at IG2, is the first time you will see that developers need their own training, far before you see role-specific training.
[00:26:05] And you know, this is one — I don't think it says it in here, but this is where learning to develop under OWASP guidelines, or some sort of industry standard around how to develop or program with security as a core component of what you're building, comes in.
[00:26:21] And I have misled the audience — it is actually before. 14.9 comes right before this, so I apologize. You will have seen role-specific training. But we will add extra to this, saying: writing secure code, right? Jim Manico — Manicode does that both in person and in a lab-based form. Fantastic.
[00:26:37] If you're a developer out there looking for somebody, I would plug Manicode every time. Then there's the specific development environment. Like, that means they need training in Python if you're writing in Python, or heaven forbid, Ruby on Rails if you're writing in Ruby — in Ruby on Rails, rather.
[00:26:51] Somebody's still hand-writing Ruby on Rails? Oh, I know. I heard it the other day and I was like, oh, god. Jason Slagel and I had a fantastic talk about it. He educates me a lot on this stuff, because I'm ignorant. Training can include general security principles. Great.
[00:27:02] Like, teach developers why it matters. Why does an IDOR matter — which is an insecure direct object reference? And I'll just do the fun one here since we're here. What is an IDOR? Alyssa Miller does this the best.
[00:27:13] She goes: imagine you're driving up to a really, really fancy club, and right in front of you, you see this swag Lamborghini. They park, they hand the keys to the valet, they get their ticket — number 68 — and they go on their merry way.
[00:27:26] And I drive up in my '76 Pinto, popping and backfiring as she sputters up. And I walk out in my nice suit and I hand them the keys, and they begrudgingly hand me number 69. What if I went around the corner,
[00:27:39] and after they drove that horrible piss bucket away, I just changed it to 68 and walked back? And they then, seeing my nice tuxedo, handed me the keys to that Lamborghini, and I drove away. That's an insecure direct object reference.
[00:27:51] I've been able to modify something handed to me, re-present it to the authority figure, and gain access to something I shouldn't have had. And that's just one of hundreds of those types of vulnerabilities that people need to be aware of.
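The valet-ticket analogy maps directly to code. Here's a minimal Python sketch of an IDOR and its fix — the ticket numbers, cars, and caller names are purely illustrative, not from any real system:

```python
# Hypothetical valet-ticket API. The vulnerable lookup trusts whatever
# ticket number the caller presents; the checked lookup also verifies
# that the ticket actually belongs to the caller (the "mediation").

TICKETS = {68: "Lamborghini", 69: "'76 Pinto"}
OWNERS = {68: "alice", 69: "matt"}

def retrieve_car_vulnerable(ticket: int) -> str:
    # IDOR: anyone who edits their ticket number gets someone else's car.
    return TICKETS[ticket]

def retrieve_car_checked(ticket: int, caller: str) -> str:
    # Mediate the reference: the object must belong to the requester.
    if OWNERS.get(ticket) != caller:
        raise PermissionError("ticket does not belong to caller")
    return TICKETS[ticket]
```

The fix is not hiding the ticket number — it's refusing to honor a reference the caller doesn't own.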
[00:28:03] And you see a lot of that with forged requests, right? Sure. SSRF — server-side request forgery — CSRF, cross-site request forgery: all kinds of different methods to do things, right? And so the argument is that training can include general security or application security
[00:28:20] standard practices — things that help prevent those types of attacks — conduct training at least annually, and design a way to promote security within the development team: gamification, other methodologies of measurement, and building a culture of security among the developers.
[00:28:33] Chris, this triggers you a little bit sometimes — is that tactical or strategic? It sounds very tactical. I think it can be a little bit of both, because I think you want this on two different levels, right?
[00:28:46] You want them to not just be begrudgingly doing training. You want them to be excited and wanting to participate, like it's genuinely important. Not punishing failure; building trust, gaining camaraderie.
[00:29:03] All of those things are parts and elements of the outcome of a good culture, but I actually think it really comes down to this: make it easy for people to do the right thing, and easy to make up for mistakes when they fail.
[00:29:13] Those are all just basic tenets and principles that need to exist. If you never go through the training, you're never going to have the opportunity to embrace that, right? You don't know what you don't know. Exactly. 16.10: apply secure design principles in application architectures.
[00:29:26] Now they're getting back into this architectural discussion, saying that when you're designing your software at the very beginning, you need the concept of least privilege, and you need the concept of mediation — something that mediates between the entities, right? Think about it this way.
[00:29:41] Imagine one of the challenges of a microservice architecture. Microservice architectures, APIs, and central applications — some of those microservices are not aware of each other or of what the others do; they just understand they can be asked for this and will give back that.
[00:29:56] So one of the challenges is that we need to make sure the inputs supplied by the user are mediated against all of these different endpoints, to ensure they aren't violating business logic — whether post-authentication or pre-authentication, right? Sure.
[00:30:11] And so it says examples of this would be ensuring explicit error checking is performed and documented for all input, right? So that means we're actually performing checks against all input, including size, data type, and acceptable ranges — like, what characters should this format even contain?
[00:30:25] These are the defenses against the injection methodologies that let me execute queries, or other methodologies like remote code execution. And it can be even simpler than that. I remember when I did some low-level development, my job was to create form validation.
[00:30:42] So like when you put your values in — yeah. Put a value in, and what if, in a field that says it supports text, you type a quote, OR 1=1, and it returns okay? I now know I have access to
[00:30:54] SQL data. Right, right. There are a lot of things along those lines. So one of the things I was responsible for was making sure that anything plugged into those fields was validated before it could actually talk to the server. Exactly.
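The explicit input checking described here — size, data type, acceptable range, allowed characters — can be sketched like this. Field names, limits, and formats are hypothetical, just to show the shape of server-side validation before anything reaches the database:

```python
import re

# Illustrative server-side validators: check length, type, range, and
# allowed characters before the value is ever used in a query.

ZIP_RE = re.compile(r"^\d{5}$")  # exactly five digits, nothing else

def validate_quantity(raw: str) -> int:
    if len(raw) > 4:
        raise ValueError("too long")          # size check
    if not raw.isdigit():
        raise ValueError("not a number")      # data-type check
    value = int(raw)
    if not 1 <= value <= 1000:
        raise ValueError("out of range")      # acceptable-range check
    return value

def validate_zip(raw: str) -> str:
    if not ZIP_RE.match(raw):
        raise ValueError("bad zip format")    # character/format check
    return raw
```

A payload like `' OR 1=1` never survives these checks, so it never gets the chance to talk to the server.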
[00:31:07] So the mediation is 100% — you're mediating in that regard, or filtering in that regard. Or checkpointing. Now, this is weird, because they pack two things in here, which I both like and don't like, because they're
[00:31:20] really saying — they almost made a control of its own and then a safeguard underneath it. Here's what they did. They first talked about secure design principles: the concept of least privilege, that's one type; enforcing mediation, that's another type.
[00:31:32] But then they go into this idea of minimizing attack surface. Well, what does that mean? We just talked about that — those are methods you just covered under mediation. Why are we deviating to minimizing attack surface, such as turning off unprotected ports and services?
[00:31:44] Wait, that's Control 4. What are you talking about? Right. But I don't want to fully agree with that, because I think a lot of development today is being done in an environment that has already been pre-packaged to give you a space to do it.
[00:31:59] And I think this is a great place for it to be said a second time. You still need to go make sure that the things that are not going to be used are not available to be exploited. That's correct.
[00:32:11] Inside another package, inside another set of files, inside another thing. I understand that concept. I'm simply saying there are thematic elements that got brought into their own world in 16 that coexist outside it, and it's because human nature is not to do it if
[00:32:24] it's not explicitly called out. I'll give you an example: WordPress. You can host multiple websites in a single WordPress instance, right? And they all have their own different users, their own different plugins, their own different everything.
[00:32:38] Well, one of the things that can happen with that is: what if you didn't update the plugins on all of the websites to the latest version first? Yeah, there might be an exploit that can be used. And now your attacker walks over to the other site.
[00:32:48] It's analogous but not direct, but I see what you're saying there. Yeah. You just don't know, right? But it even goes into talking about removing unnecessary default accounts — renaming them — to your point.
[00:33:00] It's very valid, especially in that example, where you just turn on a WordPress site and it comes with a default account. Yeah. All right. So 16.11, moving on: leverage vetted modules or services for application security components.
[00:33:13] What they're saying is, don't go make your own stupid authentication platform — go pay Auth0, right? Right. Or use one that's already out there. Don't reinvent your own identity provider. Don't say "I made my own authentication." That's what I'm saying.
[00:33:26] I'm going to write my policies from scratch — yeah, yeah, yeah. And it gets weird when you get into companies that are actually building new authentication platforms, right? You get into very interesting aspects there with what that says about them. It gets interesting. Okay.
[00:33:40] So: leverage vetted modules such as identity management, encryption, and auditing. Basically, don't write your own stupid encryption. Come on, y'all. No. And sorry if anybody out there is writing their own encryption — just kidding. I mean, that one just sounds nerdy. Okay. But seriously.
[00:33:53] Using platform features for critical security functions reduces developers' workload and minimizes the likelihood of design and implementation errors. That's straight up, right? Going and using something like Cognito or Azure Active Directory for identity,
[00:34:04] things of that nature, or Azure Key Vault, can be way better than you trying to do that yourself. Standardize — use standardized mechanisms for identification, authentication, and authorization,
[00:34:18] and make those mechanisms available to applications. Use only standardized, currently accepted, and extensively reviewed encryption algorithms. They really hit that hard, y'all. Yes, they're saying don't make your own encryption. I mean, unless you're going to get it standards-body approved, have people in
[00:34:34] academia pick it apart, and have threat actors or hackers go after it — something of that idea. But why would you? There are so many accepted cryptographic types. Again, go join one of those working groups if that's your thing.
[00:34:46] People get wrapped around the axle of, "but I can save money if I do it myself." I doubt that. Maybe that one's only you, Chris. Okay. Operating systems provide mechanisms to create and maintain secure audit logs.
[00:35:00] What they're saying is that as you're doing these things, make sure you're grabbing the logs and the information about them. They're basically just telling you the stuff is there — you have it for logging, for encryption. Don't reinvent the wheel if you have OS-native logging
[00:35:13] you can rely on. Okay. Right. 16.12: implement code-level security checks. This is where you get into SAST and DAST, right? Static application security testing, dynamic application security testing. Apply static and dynamic analysis tools — that's SAST and DAST — within the application life cycle.
[00:35:32] As you build it, prior to release. Okay, let's say you're following the rules — are you, though? Verify that secure coding practices are being followed. You're now checking. This is governance, boys and girls. I would be very surprised if 16.12 isn't some extensibility of governance.
[00:35:47] It's a tool-based safeguard, so probably not — but this is definitely a governance check. This is saying: if my people have been trained by Jim at Manicode on how to code securely, and we're doing those practices in the actual CI/CD life cycle of that application,
[00:36:00] are we building in both static and dynamic? Static means looking at the code as it sits, right — just to define it. And it's open, it's clear-box, not opaque: it's looking at the code itself, whereas dynamic is actually interacting with the code as it's running.
[00:36:15] That's the big delineation: static is code sitting on paper; dynamic is code as it's running. So that means in multiple phases of your application's life cycle — from code in development to actually live code — go check it and interact with it. And there are tools out there.
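To make the static side tangible, here's a toy pass that walks source code without ever running it and flags calls to a couple of risky functions. Real SAST tools do vastly more than this; the flagged-call list is arbitrary, just to show "code sitting on paper" being inspected:

```python
import ast

# Toy SAST-style check: parse Python source into an AST and flag any
# direct call to eval/exec, reporting the line numbers. The code being
# scanned is never executed.

RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list[int]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(node.lineno)
    return findings
```

A dynamic (DAST) check would instead send inputs at the running application and watch how it behaves — same goal, opposite vantage point.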
[00:36:29] There are automations out there that will do these things very well for you, and there can be manual code reviews too. Okay. Moving on to 16.13: application pen testing, y'all. Right, Chris? So what's the difference between DAST and SAST versus a pen test?
[00:36:44] Why is this called out separately in your mind, Chris? I hate going down this rabbit hole, but I think it's because it's two different things, for one. When you talk about application pen testing, there are a lot more rules
[00:37:01] oftentimes, because of where it sits. Like, you're not just going to go pen test 365, right? There are some rules around what you can do. So, in my humble opinion, it's about being able to have a scope for a
[00:37:17] pen test that lives within the boundaries of the application itself and is not just out there. You're not wrong about that. It is about that application, and maybe not about attacking any of its DAST- or SAST-type findings.
[00:37:32] And we use a lot of tools this way too. That's different — no, this one's human, and that's the reason I wanted to call it out: SAST and DAST are going to be very tool-oriented, but this calls out specifically, for critical applications,
[00:37:47] authenticated pen testing — so application pen testing specifically, and then authenticated pen testing, meaning I already have a credential to the system. Right. It's better suited to finding business logic vulnerabilities — those IDORs I talked about — versus code scanning and automated security testing. Why?
[00:38:02] Because those are good at finding things like open redirects, injection flaws — SQL injection or remote code execution capabilities — remote file lookups, and what's exposed. Right. But what this is saying is: okay, I'm now a user. What can I do?
[00:38:16] And I'll give you a great example of what I live with. Right. We had a JSON Web Token that was intended to be able to do just billing and payments. Right. So you can come in and pay your bill.
[00:38:25] But should it have the ability to go look at a quote? Well, what could I do with a quote that I could abuse? You have to dig into that and then change the application design to say: no, that's not what this is for. You didn't sign in normally —
[00:38:34] this JWT isn't allowing you to get to that point. And so you have to look for those things by actually trying it and doing it. Those aren't things that computers are the best at doing today. Right.
[00:38:43] Those are going to be things where we have to go, "I wonder if I can do this." Right — the computer doesn't wonder. It can identify those entry points, though. It can identify where those places likely are, where I can put something in.
[00:38:54] But it can't get curious like that, in my mind. So that's why application pen testing is humans. They want you to do it with both an authenticated and an unauthenticated user, to find business logic vulnerabilities and other things that have been missed by traditional code scanning capability. Okay.
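The JWT example above boils down to an explicit authorization check on the token's scope. Here's a hedged sketch with made-up claim and scope names, and assuming the token's signature has already been verified by a vetted library (that step is deliberately not shown):

```python
# Illustrative business-logic check: a token scoped only to billing
# must not unlock the quoting endpoint, even for a valid user.

def authorize(claims: dict, required_scope: str) -> bool:
    # After signature verification (not shown), enforce scope explicitly.
    return required_scope in claims.get("scopes", [])

# A token minted purely for paying bills — hypothetical claim layout.
billing_token = {"sub": "customer-123", "scopes": ["billing:pay"]}

def view_quote(claims: dict) -> str:
    if not authorize(claims, "quotes:read"):
        raise PermissionError("token not scoped for quotes")
    return "quote-details"
```

An automated scanner sees a valid token and a 200 response on billing; it takes a human wondering "what else will this token open?" to find that `view_quote` should refuse it.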
[00:39:08] And sometimes this is a volume game, right? How many times do I try that one thing and not get through — but then on the thousandth try? Throttling, lockout capabilities, things like that — those usually don't come into play there.
[00:39:18] And what I mean by that is, I would say that probably happens more in SAST and DAST — only because you don't want to test against a deployed application with a volume-exhaustive type of thing, because that gets very close to DDoS-type behavior, right?
[00:39:31] And so typically, pen testers avoid that. Not to be nitpicky — in my very limited experience — but okay. So: conduct threat modeling. All right, guys, this is the last one. Like, we're done. This is 16.14 — the end, the holy grail for developers.
[00:39:43] We are done once we're done with this, right? Yeah — actually, I tease. Okay. Because we haven't talked at all about regression or any of the other things that need to happen in development. But this is a great start on all the things you need to do for security.
[00:39:55] Okay. Conduct threat modeling. Threat modeling is the process of identifying and addressing application security design flaws. This goes back to that point about root cause that we talked about earlier. This is finding those fundamental flaws within a design before code is created.
[00:40:10] It's asking: what if I do this? Why is it number 14? Yeah — it's the last one because you've got to get good at everything else first. You're not going to know what sucks until you get punched in the face. That's right.
[00:40:21] The reality is, this is a learning experience. And I think it's because most people have brownfield code — and most of these things speak toward learning on that — rather than greenfield. I got into a little conversation — unpopular opinion — with Copeland about this, right?
[00:40:37] Saying, you know, most people should start green, and everybody's like, no, that never happens — and the argument was a long one. But anyway: threat modeling is the process of identifying and finding those flaws before the code is created. And it's conducted by specifically trained individuals who evaluate the application design,
[00:40:56] gauge security risks for each entry point — again, back to that entry-point statement I made — and the access level, to your POLP statement from earlier. The goal is to map out the application's architecture so that it never has more than this privilege:
[00:41:10] even if you got to this section at high privilege, it cannot go here. Right? I'm over-summarizing, but that's the statement of this, right? Yeah. And then it says: in a structured way, to understand its weaknesses.
[00:41:22] What they're basically saying is, yeah, it's always going to have those weaknesses. It'll always be a check and a balance — you always have to balance productivity with security. That's the reality of it. But you can do some threat modeling to understand where your best choices are,
[00:41:33] so that you're not left with, maybe, a forward slash that allows you to re-run the setup wizard and get access to all those juicy, juicy machines. And that slash sounds very familiar — almost like a real one or something. Oh, yes. Okay.
[00:41:45] I think that's the natural conclusion of these. I think we have rather professionally covered 16. For those of you listening, what we're really trying to get at — as Matt so eloquently deciphered these safeguards — is to remember that
[00:42:00] you may not be building an application, but are you modifying one? Might you be building a program? Ask the questions, yeah, right? Ask the questions. And I know this is a tough one.
[00:42:11] It's tough to get into that rhythm of getting confident and asking those questions, because they're not easy — and the answers are potentially even more complicated than the questions. But if you don't start doing it, you're never going to get good at it.
[00:42:26] And if you're participating in the trust community, this is the place to go and ask. And if you're a big dev shop going down this path, make sure you're doing these things. Look at it in its small pieces.
[00:42:36] Go pull down my visualization — and I've got to go, I do have back-to-backs — but go pull down my visualization and look at every yellow triangle, and tell me if you're doing every yellow triangle in some form or fashion. Or yellow trapezoids, rather — sorry.
[00:42:46] Did you just give homework to the vendors listening to the show? It's weird — can I do that? Is that allowed? Sure, you absolutely can. Thank you, brother. I'll be reviewing homework whenever it gets turned in. For those of you listening, thanks, and have a great week.