How Safe and Secure Is GenAI Really?

No leaning for or against, no fanfare, no sensationalism. Just an honest look at what professionals hailing from different disciplines see as generative AI safety and security concerns.

Pam Baker, Contributing Writer

August 8, 2024


Fear of the unknown can be crippling -- while concerns based in fact are at the heart of meaningful risk mitigation. But when it comes to discussions about the safety of artificial intelligence, a little of both always seems to be present. Sometimes that’s by design or convenience rather than ignorance. For example, fear can be used to convince legislators to protect the profits of legacy vendors from the rise of innovative startups, or to give conspiracy theorists new propaganda feeds.

Legitimate concerns, on the other hand, reveal where the real AI dangers lie: generating volumes of disinformation and deepfakes, writing malicious code, and producing a tsunami of data leaks. The question is where we stand amid the quickening pace of AI evolution. Are we safe or not? Are we reacting to fear or responding to concerns?

The answer isn’t readily apparent given the newness of AI and the resulting market frenzy. The issue is further complicated by another kind of fear: the fear of missing out (FOMO).

“Companies are jumping in feet first into AI when they are not even covering the basic vulnerabilities, like the data being wide open. It may sound harsh, but it’s a dumpster fire out there,” says Matt Radolec, VP of Incident Response and Cloud Operations at Varonis. Radolec recently gave a keynote, “How to Avoid Your First AI Breach,” at the RSA Conference in San Francisco.


Time to take a step back from the AI rush and investigate the issues. 

Identifying AI Risks

Dumpster fire or not, neither avoidance nor panic is useful. It’s better to prioritize risks and calmly evaluate threats.

“When it comes to cyberattacks, the key motivator is money. That’s why old-fashioned techniques like phishing and ransomware continue to thrive. As for ChatGPT or other generative AI apps, there are really not many opportunities to make money with hacking. In fact, there haven’t been high-profile breaches or issues -- so far,” says Muddu Sudhakar, co-founder of Aisera, a provider of artificial intelligence, enterprise GPT, and generative AI (GenAI) software. 

While not yet high-profile, there are reports of AI breaches happening now.

“According to our survey of security leaders, 77% of companies reported identifying breaches to their AI in the past year. The remaining were uncertain whether their AI models had seen an attack. The top sources of AI breaches include criminal hacking individuals or groups, third-party service providers, automated botnets, and competitors,” says Malcolm Harkins, chief security and trust officer at HiddenLayer, an AISEC platform provider.


There is also a plethora of other threats. One is the weaponization of AI to facilitate and accelerate phishing and ransomware attacks.

“In today’s business world with geographically distributed workforces, AI tools enable increasingly credible methods to execute successful social engineering attacks. Controls that were difficult to circumvent, such as voice verification for identity on a password reset, will become obsolete. While GenAI tools like ChatGPT may not yet produce convincing spear phishing outputs, it could be used to improve base level quality issues with most phishing campaigns such as addressing poor grammar and obviously inaccurate information,” says Chad Thunberg, chief information security officer (CISO) at Yubico, the manufacturer of Yubikey, a hardware authentication device. 

Other AI threats include election disruption, the instigation of public riots, widespread fraud, and deepfake attacks.

These types of threats recently prompted the Federal Communications Commission (FCC) to announce a ruling that makes robocalls using AI-generated voices illegal.

“This isn't really surprising and was a unanimous vote. While this affects so-called legitimate companies that are making automated calls, it unfortunately won't do much to dissuade nation states and criminal threat actors from continuing to adopt AI-generated calls. After all, AI serves as both a force accelerator, as it will allow those threat actors to operate at large scale without having to increase the size of their workforce. At the same time, the ability of AI to generate convincing-enough speech in another language will serve to open new markets to threat actors who might have previously employed linguists,” says Kayne McGladrey, IEEE Senior Member. 


“We're moving to a point where inbound voice calls without authentication aren't suitable for sensitive communications, much like email. Alternatively, we might all have to create a 'safe word' that's known between two parties and can be used to authenticate one another in calls,” McGladrey adds.
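To make the “safe word” idea concrete, here is a minimal sketch assuming the phrase was agreed on in advance and only a salted hash of it is stored; the function names, parameters, and example phrases are illustrative, not a prescribed scheme:

```python
# Minimal sketch of verifying a pre-shared "safe word" during a call.
# Assumes the phrase was registered in advance and only a salted hash
# is stored; names and parameters are illustrative.
import hashlib
import hmac
import os

def register_safe_word(passphrase: str) -> tuple[bytes, bytes]:
    """Store a random salt and a salted, slow hash instead of the phrase itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify_safe_word(spoken: str, salt: bytes, stored_digest: bytes) -> bool:
    """Compare in constant time so timing differences leak nothing."""
    candidate = hashlib.pbkdf2_hmac("sha256", spoken.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)

# Illustrative usage: the callee challenges the caller for the agreed phrase.
salt, digest = register_safe_word("blue heron at noon")
print(verify_safe_word("blue heron at noon", salt, digest))        # True
print(verify_safe_word("phrase a deepfake guessed", salt, digest))  # False
```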

AI flaws also create vulnerabilities, with data leaks and data harvesting ranking high on the list. Data leaks can lead to broad and prolonged exposures, such as ChatGPT’s first data breach, which exposed user data to about 1.2 million OpenAI users over the span of nine hours. Data harvesting creates new vulnerabilities through wide exposures, too, and some have been high-profile, such as the Samsung proprietary code harvest, in which the company’s code is now part of ChatGPT’s training data and available for public use.

“It isn’t clear how private your private instance is. At Microsoft, probably in good shape. Ideally you would host your own LLM behind your firewall if you want to be absolutely certain that the data would never be shared even by accident by the AI,” says Kevin Surace, chair of Token, father of the Virtual Assistant, and a frequent speaker on AI. 
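For teams weighing Surace’s suggestion, here is a minimal sketch of what self-hosting can look like, assuming an open-weight model served locally through the Hugging Face transformers library; the model name and prompt below are illustrative, not recommendations:

```python
# Minimal sketch: running an open-weight LLM entirely on local infrastructure
# so prompts and outputs never leave the company network. Assumes the
# `transformers` and `torch` packages and a locally downloaded model; the
# model name below is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any locally hosted open-weight model
    device_map="auto",                           # use local GPUs if available
)

# The prompt, and everything generated from it, stays behind the firewall.
prompt = "Summarize our internal incident-response runbook in three bullet points."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

The trade-off is that the organization then owns patching, access control, and monitoring for the model it hosts, rather than relying on a vendor’s assurances.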

Be aware that data breaches can potentially come from trusted sources, too. 

“It's dangerous to give that much information to Google and OpenAI. These big, centralized corporations have one business model: your data. The more you give them, the more they know about you,” says Jonathan Schemoul, founder at aleph.im. He is a blockchain and AI developer whose work and research focuses on data privacy and decentralization issues around current GenAI business models. 

Yet all these scary scenarios are just the tip of the AI iceberg coming our way.

Developing Risks 

AI and security researchers are also noting developing risks on the horizon. For example, GenAI can be used to fuel various forms of underground crime that may evade current laws, at least temporarily. In any case, it stands ready to accelerate many of the worst activities found on the Dark Web.

“To give one particularly scary example: Stable Diffusion is the world's most popular text-to-image model. In December 2023, some Stanford researchers discovered that one of the training datasets that Stable Diffusion was trained on -- called LAION-5B -- contained over 1,600 images of child pornography. But virtually no companies or agencies have infrastructure in place to track which applications use which models, and which models are trained on which datasets,” says Marc Frankel, CEO at Manifest, a supply chain technology security company. 
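One way to start closing that gap is to record model and dataset provenance for each application, in the spirit of a bill of materials extended to AI components. Here is a minimal sketch; the field names are assumptions for illustration, not any standard schema:

```python
# Illustrative sketch of an "AI bill of materials" record: which application
# uses which model, and which datasets that model was trained on.
# Field names are assumptions, not a standard schema.
import json

ai_bom_entry = {
    "application": "product-image-generator",
    "model": {
        "name": "stable-diffusion-v1-5",
        "source": "open weights, third-party",
    },
    "training_datasets": [
        {"name": "LAION-5B", "known_issues_reviewed": False},
    ],
    "last_provenance_review": "2024-07-01",
}

print(json.dumps(ai_bom_entry, indent=2))
```

Even a simple record like this makes it possible to answer, after a disclosure like the LAION-5B finding, which deployed applications are affected.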

Another scenario is the use of GenAI to hack websites at a scale never seen before.

“The authors describe how LLM agents can autonomously hack websites. For example, when given a list of critical severity CVE descriptions, GPT-4 is already capable of exploiting 87% of the vulnerabilities compared to 0% in other models that were tested including GPT-3.5, open source LLMs. As more frameworks, such as CrewAI and those that operate locally using open source models, become available and open source LLMs grow in maturity, the time between vulnerability disclosure and widespread, automated exploits will shrink. By leveraging agentic AI, attackers can exploit one-day vulnerabilities within minutes of their public disclosure,” says Pascal Geenens, director of threat intelligence for Radware, who cited these numbers and information from the research paper “LLM Agents can Autonomously Exploit One-day Vulnerabilities.” 

“This means that many of the tools that businesses rely on for the protection of their digital assets will no longer be sufficient. To be effective, organizations will need more sophisticated tools that employ behavior-based protections as well as machine learning and AI algorithms to detect and mitigate these new AI-generated nefarious scripts, injections, and bot attacks. The idea is to fight fire with fire,” adds Geenens. 
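As a rough illustration of what “behavior-based” means in practice, the sketch below models normal per-session request behavior and flags statistical outliers; the features, numbers, and model choice are assumptions for illustration, not anything Radware has described:

```python
# Rough sketch of behavior-based detection: learn what normal per-session
# web traffic looks like, then flag statistical outliers such as an automated
# exploit agent hammering many paths at high speed. Features and numbers
# are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one client session:
# [requests per minute, distinct URL paths hit, error (4xx/5xx) ratio]
normal_sessions = np.array([
    [12, 5, 0.02],
    [8, 3, 0.00],
    [15, 7, 0.05],
    [10, 4, 0.01],
    [9, 6, 0.03],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_sessions)

# A burst of rapid requests probing many paths and triggering many errors,
# the kind of pattern an autonomous exploit agent might produce.
suspicious_session = np.array([[400, 180, 0.65]])
print(detector.predict(suspicious_session))  # -1 means flagged as anomalous
```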

Other developing risks are rising at the speed of AI, too.  

“We are seeing life-changing technological advances and life-threatening new risks -- from disinformation to mass surveillance to the prospect of lethal autonomous weapons,” said U.N. Secretary-General Antonio Guterres in the opening session of the AI Seoul Summit, according to a report by the Associated Press. AP also reported that “Google, Meta, and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on their cutting-edge systems if they can’t rein in the most extreme risks.” 

While that all sounds well and good, as the Associated Press points out, “It’s not the first time that AI companies have made lofty-sounding but non-binding safety commitments.” Signs of those promises fading fast are evident in several places. An examination of companies that say they adhere to responsible AI principles typically reveals more than a few delivering little more than lip service to the concept. Worse, AI safety teams have been cut or eliminated entirely: “Microsoft, Meta, Google, Amazon, and Twitter are among the companies that have cut members of their ‘responsible AI teams,’ who advise on the safety of consumer products that use artificial intelligence,” according to a Financial Times report.

The Upshot 

While the cybersecurity community is working hard to mitigate many of these risks, it’s difficult to keep pace with the rapid-fire advances in GenAI. Meanwhile, the GenAI community is consumed with market pressures to produce more features, faster.  

“A large majority of the AI community is not focused on security of the AI vs the ability to generate and extract value. This is not to say that security of GenAI systems is not being considered from both researchers and practitioners, it is just, unfortunately lagging behind,” says Jeff Schwartzentruber, senior machine learning scientist at eSentire, a managed detection and response company. 

While GenAI vendors are also working hard to add guardrails and other safety measures to their models, those models are still typically far from secure.

“The crown jewels of sophisticated AI models are their weights -- the learnable parameters derived by training the model on massive data sets. If a hacker gets access to the model weights, he owns the AI, and at a fraction of the cost that it took to create it,” says Dan Lahav, co-founder, CEO, and AI security researcher at Pattern Labs.

Given the insanely high GenAI adoption rates, these security vulnerabilities and oversights can spread like wildfire. 

“From my conversations among the AI community, many SaaS providers are woefully unprepared for the additional exposure these systems can create, given the infancy of GenAI and the rapidly changing ecosystem,” says Schwartzentruber.  

Essentially, companies should be on full alert and approach GenAI with a buyer-beware posture.

“Software is virtually the only thing we buy without knowing what's in it. For 100 years, the FDA has required Kellogg's to put ingredients on the side of a box of Raisin Bran. The FTC requires ingredient tags on our t-shirts. But when it comes to Zoom or Microsoft Word, or the software in our pacemakers or cars, historically we've effectively flown blind. Lack of transparency in AI is no different -- in fact, arguably it's worse,” says Frankel. 

Stay abreast of emerging data and AI protection technologies because many existing security products may not be particularly helpful. 

“It is not so much about being more risky rather it is about there being very different types of risks requiring different mitigation strategies,” says Vivek Singh, Associate Professor of Library and Information Science at Rutgers University School of Communication and Information. 

About the Author

Pam Baker

Contributing Writer

A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
