CrowdStrike CEO Discusses Deepfakes and AI
WSJ Tech News Briefing · June 20, 2024 · 00:11:55

At WSJ Tech Live: Cybersecurity, CrowdStrike CEO George Kurtz discussed how improvements in artificial intelligence can help cybersecurity practitioners defend systems, but it can also be used by attackers. He spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election and threats posed by China and Russia. Zoe Thomas hosts. The interview was recorded on June 6, 2024. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices


[00:00:01] Welcome to Tech News Briefing. It's Thursday, June 20th. I'm Zoe Thomas for The Wall Street Journal. CrowdStrike is an industry leader in cybersecurity. Like others in the industry, the cybersecurity software company is facing a host of challenges this year, from advances in artificial intelligence

[00:00:21] that both companies and cyber criminals can use to make their jobs easier, to risks posed by government-sponsored hackers in the lead-up to the 2024 election. At WSJ Tech Live: Cybersecurity earlier this month, CrowdStrike CEO George Kurtz discussed

[00:00:36] these cyber threats with our reporter Dustin Volz. Here are highlights from their conversation. What are the strategic priorities for CrowdStrike as you sort of look at the back half of this year and into the next year?

[00:00:50] Well, as a company, we're always focused on being on the leading edge of protecting new technologies and security is a complicated space. If you think about security, there isn't just one behemoth that does everything, right? If you go to RSA, you've got 5,000-plus

[00:01:05] companies that are out there because the technology curve has almost gone up exponentially. As the technology curve increases its slope, you need security that parallels that. Not any one company can do that. Obviously, we handle our area and we focus on it, but there's

[00:01:20] many other companies that a CISO needs to be able to protect their environment. I think that's a big part of it. As AI emerges in terms of an attack vector, as people think about how they use AI and we're still in the formative stages for many companies and

[00:01:37] what does it all mean, but ultimately you're going to have your AI pipeline, everything from gathering the data, curating the data, creating the training models, and then moving into inference and everything in between. You're going to need guardrails across all of those.

[00:01:52] On AI, clearly that's a huge game changer, not just for the industry but technology writ large. There's been so much focus on the perils of AI in the cybersecurity landscape of being able to write malware. We've seen it used crudely in the disinformation space,

[00:02:05] but it's clearly also can be very useful in the defense space. Do you see AI as more of a threat to cybersecurity or more of an opportunity or is it a net zero? How do you

[00:02:16] think about dealing with those next-gen AI threats versus the opportunities and the ability to help protect customers? It's a foundational element that I started CrowdStrike with as an AI-native company. At the time it was machine learning and being able to use that to detect these attacks that

[00:02:36] I was seeing before. Obviously, in the later years you've got generative AI, which came out well past when I started CrowdStrike. I think it's a real boon for security practitioners because part of security and what we need to do is to really what I call bend time.

[00:02:54] In security, every company here is going to have some level of incident. What you want to do is make sure that incident doesn't turn into a breach. Any big company is going to

[00:03:02] have that. If you can bend time, that's going to help you deal with that incident and make sure that it doesn't grow into something much bigger. As an example, our breakout times, which we track and publish on our website, have come down dramatically. Now it's 62 minutes,

[00:03:18] meaning that if an attacker gets onto a system, it on average takes 62 minutes before they can move laterally somewhere else. We've seen this as low as two minutes and seven seconds.

[00:03:30] If you can bend time and you can get in front of this, you have a much greater probability of stopping the adversary. That's one piece. The second piece then is what does it mean

[00:03:38] for the adversary? What does AI mean? Well, the defenders are going to be able to use it to help bend time and make their job easier but so is the adversary. What I think we're

[00:03:49] seeing is this democratization of skill sets that can be brought to the masses. In the grand scheme of things, there's 8 billion plus people in the world. In the grand scheme of things, there's a relative handful of people that really have the know-how to create

[00:04:03] these exploits and malware and disassemble things. There's a lot of them but in the grand scheme of 8 billion, it's not much. What you're going to see is many more threat actors emerge. On AI and the business implications of that,

[00:04:16] there's concerns about the AI industry and where it's going to top out. Do you have concerns that there could be some sort of AI bubble in the cybersecurity industry? That is fueling a lot of the growth you're seeing currently and that that could come crashing down at

[00:04:27] some point when maybe AI defense systems aren't as robust as maybe people want them to be? I don't see an AI bubble. I think in some cases, the LLMs will be commoditized and then

[00:04:37] how you actually create value around those. A lot of them started as massive large language models. What we're seeing is that you can create smaller ones that give pretty good efficacy but are much more cost effective and are bespoke to the application that you're

[00:04:54] building it for. I think there's going to be a lot of innovation around leveraging AI and how you actually use it and what you use and how you create it and how you train it and those sort of things. I don't see it as a bubble in cybersecurity.

[00:05:09] I think we're really in the early innings. When we show customers that you can take eight hours of grunt work in a security operation center and do it in 10 minutes, when you could write a situational report in 10 seconds, that's incredible. I think it's only going

[00:05:21] to fuel cybersecurity in terms of allowing people to move faster and get better outcomes. Coming up, deepfakes and election risks. We'll find out how CrowdStrike is dealing with both. That's after the break.

[00:06:02] We're back with more from WSJ Tech Live: Cybersecurity. Before the break, the CEO of CrowdStrike, George Kurtz, and our reporter Dustin Volz discussed the role of generative AI in the industry. Now we'll hear why the CEO says deepfakes are escalating risks for the 2024 US election.

[00:06:20] Let's move on to some threats. There's a lot of elections around the globe this year. How are you and how is CrowdStrike thinking about election security, which can mean so many things? Of course, it can mean attacks on the infrastructure itself of voting systems. It can mean disinformation.

[00:06:32] It can mean hack and leak. How are you thinking about what Russia might be up to, what China might be up to in this election in 2024, helping to identify what's going on and reassure the American voter that it's going to be a secure election?

[00:06:44] Sure. If you think about 2024 versus, say, 2016, I think there's a lot of expertise that has been gained by the likes of Russia and China, Iran, others. The technology environment has dramatically changed. The deep fake technology today is so good. I think that's one of the

[00:07:01] areas they really worry about. In 2016, we used to track this and you would see people actually have conversations with just bots. That was in 2016. They're literally arguing or they're promoting their cause and they're having an interactive conversation. There's

[00:07:17] nobody behind the thing. I think it's pretty easy for people to get wrapped up into that's real or there's a narrative that we want to get behind. A lot of it can be driven and

[00:07:27] has been driven by other nation states. What we've seen in the past, we spent a lot of time researching this with our CrowdStrike intelligence team, is it's a little bit like a pebble in a pond. You'll take a topic or you'll hear a topic, anything related to geopolitical

[00:07:43] environment. The pebble gets dropped in the pond and all these waves ripple out. It's this amplification that takes place. If now in 2024, with the ability to create deepfakes, and some of our internal guys have made some funny spoof videos with me in it just to

[00:08:00] show me how scary it is, you could not tell that it was not me in the video. I think that's one of the areas that I really get concerned about. There's always concern about infrastructure

[00:08:09] and those sort of things. Those areas, a lot of it is still paper voting and the like. Some of it isn't, but how you create the false narrative to get people to do things that a nation state wants them to do. That's the area that really concerns me.

[00:08:22] Volt Typhoon, or I believe you guys call them Vanguard Panda, but the US government has adopted the Volt Typhoon moniker. It's a Chinese threat actor that's seemingly getting into all sorts of US critical infrastructure, not for espionage purposes, but to pre-position, as

[00:08:37] government officials have said, for a potential disruptive attack or attacks on our infrastructure, perhaps ahead of a conflict over Taiwan. This is very serious. I feel like the level of rhetoric I'm hearing from these officials is in some ways unprecedented about a cyber

[00:08:53] threat. You guys have tracked this threat actor. What can you tell us about how serious this is? Well, the threat actor is very capable and the threat is real. It should be taken seriously. If you look at the critical infrastructure they target, whether it's power, water,

[00:09:07] wind, shipping, et cetera, it goes across the board. It's something you have to really be concerned about. Obviously, the thought is if there is some level of outbreak with Taiwan and China that some of these capabilities could be activated, which would greatly impact

[00:09:24] the response of the US government. You have to be concerned about it. It's real. We've been involved. We've tracked and seen these threat actors and helped some of the organizations respond to it. At this point, I think the government has done a decent job of raising

[00:09:39] the flag and it's something that we have to be concerned with. But it's different. China spent a lot of time stealing data and copying it. This is not stealing data. This is preparing the environment to activate it at some point in the future.

[00:09:54] This was a great discussion, George. Thank you so much. Thanks so much. Thank you. That was WSJ reporter Dustin Volz speaking with CrowdStrike CEO George Kurtz at WSJ Tech Live Cybersecurity earlier this month. And that's it for Tech News Briefing. Today's

[00:10:10] show was produced by Julie Chang with supervising producer Catherine Millsop. I'm Zoe Thomas for The Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.