ThreatLocker CEO Talks Supply Chain Risk, AI’s Cybersecurity Role, and Fear

The endpoint security and zero-trust firm looks toward the future with an eye on AI’s possibilities -- both on the security side and on the threat side.

Shane Snider, Senior Writer, InformationWeek

November 7, 2024

Pictured: ThreatLocker CEO Danny Jenkins. Image provided by ThreatLocker.

It’s no secret that cybersecurity concerns are growing. This past year has seen massive breaches, including the National Public Data incident (with 2.7 billion records exposed) and attacks on Snowflake customers such as Ticketmaster, Advance Auto Parts, and AT&T. More than 165 companies were impacted by the Snowflake-linked breaches alone, according to a Mandiant investigation.

According to Check Point Research, global cyber-attacks increased by 30% in the second quarter of 2024, to 1,636 weekly attacks per organization. An IBM report says the average cost of a data breach globally rose 10% in 2024, to $4.88 million.

So, it’s probably not that surprising that Orlando, Fla.-based cybersecurity firm ThreatLocker has ballooned to 450 employees since its 2017 launch. InformationWeek caught up with ThreatLocker CEO Danny Jenkins at the Gartner IT Symposium/Xpo in Orlando last month.

(Editor’s note: The following interview is edited for clarity and brevity.)

Can you give us a little overview on what you were talking about at the event?

What we’re talking about is that when you’re installing software on your computer, that software has access to everything you have access to. People often don’t realize that if they download a game, and there’s a back door or a vulnerability in that game, it could steal their files, grant someone access to their computer, or reach out over the internet and send their data. So, what we were really talking about was supply chain risk. The biggest thing is vulnerabilities: the things a vendor didn’t intend, but that accidentally grant someone access to your data. You can really enhance your security through sensible controls and limiting what those applications can access, rather than trying to find every bad thing in the world.
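To make that point concrete: any program a user runs inherits the user’s permissions. The minimal Python sketch below is illustrative only -- the file path and function name are hypothetical, and it is not ThreatLocker code -- but it shows how a freshly downloaded "game" could enumerate a user’s documents with no exploit or elevation at all.

```python
from pathlib import Path


def quietly_enumerate_documents() -> list[Path]:
    """An installed app runs as the user, so it can list and read
    anything the user can -- no exploit or privilege escalation required."""
    docs = Path.home() / "Documents"  # hypothetical location of sensitive files
    return [p for p in docs.rglob("*") if p.is_file()] if docs.exists() else []


if __name__ == "__main__":
    files = quietly_enumerate_documents()
    print(f"A back-doored 'game' could already see {len(files)} of your files.")
```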


AI has been the major recurring theme throughout the symposium. Can you talk a little about the way we approach these threats and how that is going to change as more businesses adopt emerging technologies like GenAI?

What’s interesting is that we’re actually doing a session on how to create successful malware, and we’re going to talk about how we’re able to use AI to create undetectable malware versus the old way. If you think about AI, and you think about two years ago, if you wanted to create malware, there was a limited number of people in the world who could do that -- you’d have to be a developer, you’d have to have some experience, you’d have to be smart enough to avoid protections. That pool of people was quite small. Today, you can just ask ChatGPT to create a program to do whatever you want, and it will spit out the code instantly. The number of people who have the ability to create malware has now drastically increased … the way to defend against that is to change the way you think about security. The way most companies think about security now is they’re looking for threats in their environment -- but that’s not effective. The better way of approaching security is really to say, "I’m just going to block what I don’t need, and I don’t care if it’s good and I don’t care if it’s bad. If it’s not needed in my business, I’m going to block it from happening."
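To illustrate the default-deny approach Jenkins describes, here is a minimal sketch -- not ThreatLocker’s implementation; the approved-hash set and function names are assumptions for illustration -- of an allowlist check that refuses to run anything the business hasn’t explicitly approved:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 hashes of the only executables the business
# has approved. In a real deny-by-default product this list would be centrally
# managed; the value below is a placeholder, not a real binary's hash.
APPROVED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}


def sha256_of(path: Path) -> str:
    """Hash the binary so the decision is based on content, not file name."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def may_execute(path: Path) -> bool:
    """Default deny: allow execution only if the binary is explicitly approved."""
    return path.is_file() and sha256_of(path) in APPROVED_SHA256


if __name__ == "__main__":
    target = Path("downloads/free_game.exe")  # hypothetical download
    verdict = "allowed" if may_execute(target) else "blocked"
    print(f"{target} is {verdict}.")
```

The hashing detail is incidental; the point is that the default answer is "no," and approval is the exception, so there is no need to decide whether any given binary is good or bad.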


As someone working in security, is the pace of AI adoption in enterprise a concern?

I think the concern is the pace and the fear. AI has been around for a long time. What we’ve seen in the last two years is generative AI, and that’s what’s scaring people. If you think about self-driving cars, you think about machine learning -- the ability to see data, manipulate it, and learn from it. What’s scary is that the consumer is now seeing AI that produces things; before, it was always stuff in the background that you never really thought about. You never really thought about how your car is able to determine if something’s a trash can or a person. Now this thing can draw pictures, write documents better than I do, and create code. Am I worried about AI taking over the world from that perspective? No. But I am concerned about the tool set that we’ve now given people who may not be ethical.


Before, if you were smart enough to write successful malware, at least in the Western Hemisphere, you were smart enough to get a job, and you weren’t going to risk going to jail. The people who were creating successful malware or successful cyber-attacks before were people in countries where there were no opportunities, like Russia. Now, you don’t need to be that smart to create successful cyber-attacks, and that’s what concerns me. If you give someone who doesn’t have the capacity to earn a living access to tools that allow them to steal data, the path they are going to follow is cyber crime. Just like other crime: when the economy is down and people don’t have jobs, people steal and crime goes up. Cyber crime used to be limited to people who had an understanding of technology. Now, the whole world will have access, and that’s what scares me -- and GenAI has facilitated that.

How do you see your business changing in the next 5-10 years because of AI adoption?

Ultimately, it changes the way people think about security, to where they have to start adopting more zero-trust approaches and more restrictive controls in their environment. That’s how it has to go -- there is no alternative. Before, there was a 10% chance you were going to get damaged by an attack; now it’s an 80% chance.

If you’re the CIO of an enterprise, how should you be looking at building out these new technologies and building on these new platforms? How should you be thinking about the security side of it?

At the end of the day, you have to consider the internal politics of the business. We’ve gone from a world where IT people and CIOs -- who often come from introverted backgrounds and don’t communicate with boards -- were seen as the people who make our computers work, not the people who protect our business … now the board is saying we have to bring in a security department. I feel like if you’re the CIO, you should be leading the conversation with your security team … as a CIO, you should be driving that.

What was one of your biggest takeaways from the event overall?

I think the biggest thing I’m seeing in the industry is that fear is increasing, and rightly so. We’re seeing more people willing to say, "I need to solve my problem. I know we’re sitting ducks right now." That’s because we’re on the technology side and we live and breathe this stuff. But what we don’t always understand is the customer’s perspective and how to solve their problems.

About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal and the Raleigh News & Observer, and most recently was a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
