Innovation Relies on Safeguarding AI Technology to Mitigate its Risks
This session explains how to get the best of generative AI in your organization by securing your AI systems to avoid security pitfalls and mitigate risks.
As artificial intelligence (AI) continues to advance and be adopted at a blistering pace, there are many ways AI systems can be vulnerable to attacks. Whether being fed malicious data that enables incorrect decisions or being hacked to gain access to sensitive data and more, there are no shortage of challenges in this growing landscape.
Today, it's more vital than ever to consider taking steps to ensure that generative AI models, applications, data, and infrastructure are protected.
In this archived panel discussion, Sara Peters (upper left in video), InformationWeek’s editor-in-chief; Anton Chuvakin (upper right), senior staff security consultant, office of the CISO, for Google Cloud; and Manoj Saxena (lower middle), CEO and executive chairman of Trustwise AI, came together to discuss the importance of applying rigorous security to AI systems.
This segment was part of our live virtual event titled, “State of AI in Cybersecurity: Beyond the Hype.” The event was presented by InformationWeek and Dark Reading on October 30, 2024.
A transcript of the video follows below. Minor edits have been made for clarity.
Sara Peters: All right, so let's start here. The topic is securing AI systems, and that can mean a lot of different things. It can mean cleaning up the data quality of the model training data or finding vulnerable code in the AI models.
It can also mean detecting hallucinations, avoiding IP leaks through generative AI prompts, detecting cyber-attacks, or avoiding network overloads. It can be a million different things. So, when I say securing AI systems, what does that mean to you?
What are the biggest security risks or threats that we need to be thinking about right now? Manoj, I'll send that to you first.
Manoj Saxena: Sure, again, thanks for having me on here. Securing AI broadly, I think, means taking a proactive approach not only to the outside-in view of security, but also the inside-out view of security. Because what we're entering is this new world that I call prompt to x. Today, it's prompt to intelligence.
Tomorrow, it will be prompt to action through an agent. The day after tomorrow, it will be prompt to autonomy, where you will tell an agent to take over a process. So, what we are going to see in terms of securing AI are the external vectors that are going to be coming into your data, applications and networks.
They're going to get amplified because of AI. People will start using AI to create new threat vectors outside-in, but also, there will be a tremendous number of inside-out threat vectors that will be going out.
This could be a result of employees not knowing how to use the system properly, or the prompts may end up creating new security risks like sensitive data leakage, harmful outputs or hallucinated output. So, in this environment, securing AI would mean proactively securing outside-in threats as well as inside-out threats.
Anton Chauvkin: So, to add to this, we build a lot of structure around this. So, I will try to answer without disagreeing with Manoj, but by adding some structure. Sometimes I joke that it's my 3am answer if somebody says, Anton secure AI! What do you mean by this? I'll probably go to the model that we built.
Of course, that's part of our safe, secure AI framework approach. When I think about securing AI, I think about models, applications, infrastructure and data. Unfortunately, it's not an acronym, because the acronym would be MADE, and it'll be really strange.
But after somebody said it's not an acronym, obviously, everybody immediately thought it's an acronym. The more serious take on this is that if I say securing AI, I think about securing the model, the applications around it, the infrastructure under it, and the data inside it.
I probably won't miss anything that's within the cybersecurity domain, if I think about these four buckets. Ultimately, I've seen a lot of people who obsess about one, and all sorts of hilarious and sometimes sad results happen. So, for example, I go and say the model is the most important, and I double down on prompt injection.
Then, SQL injection into my application kills me. If I don't want to do it in the cloud for some reason, and I try to do it on premise, my infrastructure is let go. My model is fine, my application is great, but my infrastructure is let go. So, ultimately, these four things are where my mind goes when I think about securing AI systems.
MS: Can I just add to that? I think that's a good way to look at the stack and the framework. I would add one more piece to it, which is around the notion of securing the prompts. This is prompt security and filtering, prompt defense against adversarial attacks, as well as real time prompt validation.
You're going to be securing the prompt itself. Where do you think that fits in?
AC: We always include it in the model, because ultimately, the prompt issues to us are AI specific issues. Nothing in the application infrastructure data is AI specific, because these exist, obviously, for non-applications. For us, when we talk about prompt, it always sits inside the M part of the model.
SP: So, Google's secure AI framework is something that we can all look for and read. It's a thorough and interesting read, and I recommend to our audience to do that later. But you guys have just covered a wide variety of different things already when I asked the first question.
So, if I'm a CIO or a CISO, what should I be evaluating? How do I evaluate the security of a new AI tool during the procurement phase when you have just given me all these different things to try to evaluate? Anton, why don't you start with that one?
About the Author
You May Also Like