What Did We Learn From CrowdStrike’s Congressional Hearing?

A CrowdStrike executive answered questions about the July 19 global IT outage during a congressional subcommittee hearing.

Carrie Pallardy, Contributing Reporter

September 26, 2024


On Sept. 24, Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, answered questions during a Cybersecurity and Infrastructure Protection Subcommittee hearing regarding the global IT outage that began on July 19 and caused widespread disruption.  

The outage impacted 8.5 million Windows devices around the world, plunging businesses and critical services into chaos. Insurance company Parametrix estimates that the outage caused Fortune 500 companies $5.4 billion in losses.  

Given the unprecedented scale and cost of the outage, government scrutiny was inevitable. What did we learn from the livestreamed hearing about the incident and CrowdStrike’s plans to prevent anything like it from happening in the future?  

CrowdStrike Takes Responsibility 

CrowdStrike took responsibility for the outage in its immediate aftermath. The company’s founder and CEO George Kurtz released a statement on July 19 apologizing for the incident. But he was not the one to answer questions at the hearing.  

“It was a little shocking to me that CEO George Kurtz didn't agree to also testify,” Josh Aaron, CEO of Aiden Technologies, a company that provides IT automation solutions for Windows environments, tells InformationWeek.  

Meyers maintained that apologetic stance in both his prepared witness testimony and his responses to the members of the Congressional subcommittee. He also emphasized that the outage was not a breach or the result of a cyberattack.  


CrowdStrike previously released a root cause analysis of the incident, and Meyers answered questions regarding the cause during the hearing. A content configuration update triggered the historic global outage. The company sent new threat detection configurations to sensors running on Microsoft Windows devices, but its Falcon sensor’s rules engine did not understand the configurations.  

“Think about a chessboard, trying to move a chess piece to someplace where there’s no square. That’s effectively what happened inside the sensor,” Meyers explained during the hearing. “So, when it tried to process the rule, it was not able to do what the rule was asking it to do, which triggered the issue within the sensor.”  
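Meyers’ chessboard analogy maps onto a familiar software failure: a configuration that references an input the engine was never built to supply. The sketch below is illustrative only (the function and field names are hypothetical, not CrowdStrike’s code); it shows how a rule that indexes one field beyond what a sensor provides fails the moment it is evaluated, and why a bounds check turns a dangerous memory access into a recoverable error.

```python
# Illustrative sketch (hypothetical names): a rules engine evaluates a
# configuration that references more input fields than the sensor supplies --
# the "chess piece moved to a square that does not exist" scenario.

def evaluate_rule(rule_field_indices, sensor_fields):
    """Return the field values a rule asks for, validating each index first."""
    values = []
    for idx in rule_field_indices:
        if idx >= len(sensor_fields):
            # Without this bounds check, the lookup would read past the end
            # of the available fields -- the class of failure at issue here.
            raise ValueError(
                f"rule references field {idx}, "
                f"but only {len(sensor_fields)} fields exist"
            )
        values.append(sensor_fields[idx])
    return values

sensor_fields = [f"field_{i}" for i in range(20)]  # sensor supplies 20 fields

good_rule = [0, 5, 19]   # every index exists
bad_rule = [0, 5, 20]    # index 20 is one past the end

print(evaluate_rule(good_rule, sensor_fields))
try:
    evaluate_rule(bad_rule, sensor_fields)
except ValueError as err:
    print(f"rejected: {err}")
```

In a memory-unsafe kernel component there is no `ValueError` to catch; an unchecked out-of-range access can crash the machine, which is why the validation step itself became the focus of the hearing.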

Changes to the Way Updates Roll Out

The configurations that caused the outage were validated through the company’s standard procedures. Meyers shared that the “…validator itself was in place for over a decade, and we’ve released 10 to 12 of these updates every single day since we started using the configuration updates.”  


But a “perfect storm” of issues caused the sensor failure.  

“I think Meyers did a good job of explaining… this was a perfect storm for the organization and a validator that had been in use for 10 years… worked reliably until it didn't,” Harold Rivas, CISO at cybersecurity firm Trellix, tells InformationWeek. “And, of course, these black swan events can happen. So, it should be a prompt for the industry as a whole to say, ‘Where could I see this type of impact within my product, within my solution?’” 

For CrowdStrike, this means changes to the way it rolls out content updates. Going forward, the company will leverage a system of concentric rings and offer its customers more control.  

Sensor software updates and rapid response content will go through internal testing before being rolled out to early adopters. Then, these updates will go out to progressively larger groups.  
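The concentric-ring approach can be sketched as a simple gating loop: an update advances to the next, larger ring only while the current ring stays healthy. This is a minimal sketch of the general pattern, not CrowdStrike’s implementation; the ring names and sizes are invented for illustration.

```python
# Minimal sketch of a concentric-ring (staged) rollout. Ring names and
# sizes are hypothetical -- the point is the gating logic, which limits
# the blast radius of a bad update to the rings already reached.

RINGS = [
    ("internal", 100),          # internal test fleet
    ("early_adopters", 5_000),  # opt-in customer devices
    ("broad", 500_000),
    ("general", 8_500_000),
]

def roll_out(update_id, is_healthy):
    """Push an update ring by ring; halt at the first unhealthy ring."""
    deployed = []
    for ring_name, ring_size in RINGS:
        deployed.append(ring_name)  # push the update to this ring
        if not is_healthy(update_id, ring_name):
            # Stop here: later, larger rings never receive the update.
            return deployed, f"halted after {ring_name}"
    return deployed, "complete"

# A healthy update reaches every ring; a bad one is caught early.
print(roll_out("update-001", lambda uid, ring: True))
print(roll_out("update-002", lambda uid, ring: ring != "early_adopters"))
```

The customer-control piece Meyers described fits naturally on top of this: a customer choosing to be an early adopter is simply opting its fleet into a smaller, earlier ring.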

“I think that they’re taking an important step in offering greater control to their customers in introducing this idea of early adopters,” says Rivas.  

Committee Questions Kernel Access  

The Congressional subcommittee asked Meyers some fairly technical questions about CrowdStrike’s kernel access: Is it necessary? Is it dangerous? Are there alternatives?  

Meyers told the subcommittee that kernel visibility is critical when trying to secure operating systems and to ensure that threat actors themselves do not access the kernel and tamper with security tools.  


“Ultimately, CrowdStrike can't function unless it’s at the kernel level. That’s how their end-point detection response system works. And there’s no way to really change it,” Michael McLaughlin, principal of government relations and co-leader of the cybersecurity and data privacy practice group at law firm Buchanan Ingersoll & Rooney, explains following the hearing. “And so the only thing that CrowdStrike can do is to ensure that any code that they are injecting at that point has gone through rigorous testing and that the quality control mechanisms are in place.”  

Restitution for Impacted Parties  

Meyers did field a question about “making victims whole,” but we gained little insight into what that could look like.  

Meyers shared that CrowdStrike has been focused on getting its customers operational. He reemphasized the company’s apologies and spoke about rebuilding trust. But he did not speak to any legal proceedings or insurance coverage.  

“I would imagine that they [CrowdStrike] are … working through some sort of settlement or they’re working through their insurance policy to determine how do they make their customers whole,” says McLaughlin. He also points out that customer contracts likely have indemnification and liability limitation clauses.  

Some customers may move forward with legal action. Delta, for example, has publicly called the company out.  

“Those types of one-off suits … I think those are going to persist, and CrowdStrike is probably going to have to fight those out,” says McLaughlin.  

Increasing Focus on Single Points of Failure  

The CrowdStrike incident was eye-opening: one seemingly small issue had massive repercussions for IT systems around the world. The Congressional subcommittee had questions about single points of failure and their vulnerability to simple error or malicious actors.  

Identifying these single points of failure, which exist well beyond just CrowdStrike, and mitigating their risk is “a team sport,” according to Meyers.  

Vendors have a role to play in evaluating their systems. “I think this is going to force not just CrowdStrike but all EDR vendors … to take a serious look at the way that they put [out] rapid updates,” Aaron says.  

McLaughlin also points to the government’s role. “The government needs to step in and say, ‘Look, these … choke points that we’re creating … these single points of failure, that’s going to be what nation states target,’” he says. “How do we mitigate that?”  

About the Author

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
