It’s in every organization’s best interest to implement security measures that counter threats in order to protect artificial intelligence investments.

Guest Commentary

September 15, 2020

5 Min Read

Security and privacy concerns are the top barriers to adoption of artificial intelligence, and for good reason. Both benign and malicious actors can threaten the performance, fairness, security and privacy of AI models and data.

This isn’t something enterprises can ignore as AI becomes more mainstream and promises them an array of benefits. In fact, on the recent Gartner Hype Cycle for Emerging Technologies, 2020, more than a third of the technologies listed were related to AI.

At the same time, AI also has a dark side that often goes unaddressed, especially since the current machine learning and AI platform market has not produced consistent or comprehensive tooling to defend organizations. This means organizations are on their own. What's worse, according to a Gartner survey, consumers believe that the organization using or providing the AI should be held accountable when it goes wrong.

It is in every organization’s interest to implement security measures that counter threats in order to protect AI investments. Threats and attacks against AI not only compromise AI model security and data security, but also compromise model performance and outcomes.

There are two ways that criminals commonly attack AI, and there are actions technical professionals can take to mitigate such threats. But first, let's explore the three core risks to AI.

Security, liability and social risks of AI

Organizations that use AI are subject to three types of risks. Security risks are rising as AI becomes more prevalent and embedded into critical enterprise operations. There might be a bug in the AI model of a self-driving car that leads to a fatal accident, for instance.

Liability risks are increasing as decisions affecting customers are increasingly driven by AI models using sensitive customer data. As an example, incorrect AI credit scoring can hinder consumers from securing loans, resulting in both financial and reputational losses.

Social risks are increasing as “irresponsible AI” causes adverse and unfair consequences for consumers by making biased decisions that are neither transparent nor readily understood. Even slight biases can result in the significant misbehavior of algorithms.

How criminals commonly attack AI

The above risks can result from the two common ways that criminals attack AI: malicious inputs (perturbations) and query attacks.

Malicious inputs to AI models can come in the form of adversarial AI, manipulated digital inputs or malicious physical inputs. Adversarial AI may take the form of social engineering of humans using an AI-generated voice, which can be used for any type of crime and is considered a "new" form of phishing. For example, in March 2019, criminals used an AI-synthesized voice to impersonate a CEO and demand a fraudulent transfer of $243,000 to their own accounts.

Query attacks involve criminals sending queries to organizations' AI models to figure out how those models work, and they may be black box or white box attacks. Specifically, a black box query attack determines the uncommon, perturbated inputs to use for a desired output, such as financial gain or avoiding detection. Some academics have been able to fool leading translation models by manipulating the input, resulting in an incorrect translation.
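To make the mechanics concrete, the sketch below (in Python) illustrates a black box query attack in miniature: the attacker repeatedly perturbs an input and queries the victim model until it returns the desired output. The query_model function is a hypothetical stand-in for an exposed prediction API, not any real service, and the noise schedule is an illustrative assumption.

```python
# Illustrative sketch of a black box query attack, assuming the attacker can
# only observe predicted labels from an exposed scoring endpoint.
import numpy as np

def query_model(x: np.ndarray) -> int:
    # Hypothetical placeholder for the victim model's prediction API.
    raise NotImplementedError

def black_box_perturbation_attack(x, target_label, budget=1000, eps=0.05, seed=0):
    rng = np.random.default_rng(seed)
    for i in range(budget):                                   # each iteration costs one query
        noise = rng.normal(0.0, eps * (1 + i / budget), size=x.shape)
        candidate = x + noise                                  # the "uncommon, perturbated" input
        if query_model(candidate) == target_label:
            return candidate                                   # found an input yielding the desired output
    return None                                                # attack failed within the query budget
```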

A white box query attack regenerates a training dataset to reproduce a similar model, which might result in valuable data being stolen. In one example, a voice recognition vendor fell victim to a new, foreign vendor that counterfeited its technology and then sold it, allowing the foreign vendor to capture market share based on stolen IP.
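For illustration, here is a minimal, hedged sketch of how such model "stealing" can work when the attacker can query a prediction endpoint: harvest labels for synthetic probe inputs, then train a surrogate that mimics the victim. The victim_predict function is a hypothetical placeholder, and the probe distribution and surrogate model are illustrative choices, not a description of the incident above.

```python
# Illustrative sketch of model extraction via queries, assuming only
# prediction access to the victim model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def victim_predict(X: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder for the victim model's prediction endpoint.
    raise NotImplementedError

def extract_surrogate(n_queries=10_000, n_features=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_queries, n_features))    # synthetic probe inputs sent as queries
    y = victim_predict(X)                           # labels harvested from the victim
    surrogate = DecisionTreeClassifier(max_depth=10)
    surrogate.fit(X, y)                             # surrogate now approximates the victim's behavior
    return surrogate
```

Monitoring for unusually high query volumes from a single client is one practical way to detect this pattern.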

New security pillars to make AI trustworthy

It is paramount for IT leaders to acknowledge the threats against AI in their organization in order to assess and shore up both the security pillars they already have in place (human-focused and enterprise security controls) and the new security pillars (AI model integrity and AI data integrity).

AI model integrity encourages organizations to explore adversarial training for employees and reduce the attack surface through enterprise security controls. The use of blockchain for provenance and tracking of the AI model and the data used to train the model also falls under this pillar as a way for organizations to make AI more trustworthy.
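As one small illustration of the provenance idea, the sketch below hashes the model artifact and its training data and appends the hashes to a local log; in a real deployment that record would be anchored to a blockchain or another tamper-evident ledger. The file paths and log name are assumptions made for the example.

```python
# Minimal provenance sketch: fingerprint the model and its training data so
# later tampering or substitution can be detected.
import hashlib
import json
import time

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_provenance(model_path: str, data_path: str, log_path: str = "provenance.log") -> dict:
    entry = {
        "recorded_at": time.time(),
        "model_sha256": sha256_of(model_path),
        "training_data_sha256": sha256_of(data_path),
    }
    with open(log_path, "a") as log:    # append-only record; a ledger would anchor this entry
        log.write(json.dumps(entry) + "\n")
    return entry
```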

AI data integrity focuses on data anomaly analytics, like distribution patterns and outliers, as well as data protection, like differential privacy or synthetic data, to combat threats to AI.
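The sketch below shows both ideas in miniature using only NumPy: flagging statistical outliers in a training feature, and releasing an aggregate statistic with Laplace noise for epsilon-differential privacy. The threshold and epsilon values are illustrative assumptions, not recommendations.

```python
# Minimal sketch of data anomaly analytics (outlier flagging) and a
# differentially private aggregate release.
import numpy as np

def flag_outliers(feature: np.ndarray, z_threshold: float = 3.0) -> np.ndarray:
    # Boolean mask of records whose value deviates strongly from the distribution.
    z = (feature - feature.mean()) / feature.std()
    return np.abs(z) > z_threshold

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float = 1.0) -> float:
    # Release the mean with Laplace noise calibrated for epsilon-differential privacy.
    clipped = np.clip(values, lower, upper)        # bound each record's influence
    sensitivity = (upper - lower) / len(values)    # sensitivity of the clipped mean
    return float(clipped.mean() + np.random.laplace(scale=sensitivity / epsilon))
```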

To secure AI applications, technical professionals focused on security technology and infrastructure should do the following: 

  • Minimize the attack surface for AI applications during development and production by conducting a threat assessment and applying strict access control and monitoring of training data, models and data processing components.

  • Augment the standard controls used to secure the software development life cycle (SDLC) by addressing four AI-specific aspects: threats during model development, detection of flaws in AI models, dependency on third-party pretrained models and exposed data pipelines.

  • Defend against data poisoning across all data pipelines by protecting and maintaining data repositories that are current, high-quality and inclusive of adversarial samples. An increasing number of open-source and commercial solutions can be used for improving robustness against data poisoning, adversarial inputs and model leakage attacks. A minimal poisoning-check sketch follows this list.
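As referenced above, this minimal sketch shows one simple poisoning check under the assumption that poisoned or mislabeled records tend to sit far from the rest of their labeled class; production pipelines would combine several such checks with the open-source and commercial tools mentioned in the list.

```python
# Illustrative poisoning filter: drop records unusually far from their class centroid.
import numpy as np

def filter_suspected_poison(X: np.ndarray, y: np.ndarray, z_threshold: float = 3.0):
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)                        # class centroid in feature space
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        z = (dists - dists.mean()) / dists.std()
        keep[idx[z > z_threshold]] = False                    # flag likely poisoned/mislabeled records
    return X[keep], y[keep]                                   # cleaned training set
```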

It’s hard to prove when an AI model was attacked unless the fraudster is caught red-handed and the organization performs forensics of the fraudster’s system thereafter. At the same time, enterprises aren’t going to simply stop using AI, so securing it is essential to operationalizing AI successfully in the enterprise. Retrofitting security into any system is much more costly than building it in from the outset, so secure your AI today.


Avivah Litan is a Vice President and Distinguished Analyst in Gartner Research. She specializes in blockchain innovation, securing AI, and detecting fake content and goods using a variety of technologies and methodologies. To learn more about the security risks to AI, join Gartner analysts at the Gartner Security & Risk Management Summit 2020, taking place virtually this week in the Americas and EMEA.

 


