Artificial Intelligence and the Balance of Power in Cyberspace

Intelligence operations have been upended time and again by new technology, so with AI, what should we expect for the balance of offense and defense?

August 13, 2023

[SPONSORED ARTICLE]

Intelligence operations have been upended time and again by the implementation of a new technology -- secret writing, codebreaking, telegraphic and radio interception, spy planes, and distant orbiting imagery satellites, to name a few. Eventually these technologies were democratized, made cheaper and more widely available, and spread not only to new states but to individuals. Each also spawned a new field of countermeasures, thus establishing a new normal.

With this history in mind, I have been thinking about AI and what it portends for the long-term balance between offense and defense in cyberspace. While we can undoubtedly expect a few years of headline-grabbing operations incorporating some form of AI, what’s harder to discern is how much worse the business risk truly is compared with the same operations carried out with less novel technology.

However, by looking at the fundamental changes AI is bringing -- and how malicious cyber actors and defenders alike might benefit from those changes -- some degree of security planning and risk mitigation is possible now. Let’s walk through some of those advantages.

Speed and the Value of Data and Access

The ability of generative AI to undertake routine, often highly technical, tasks -- code writing and execution, mapping of networks, reporting and synthesizing information -- has obvious implications for network defenders, who can anticipate an increase in the speed of lateral movement by APT groups. We may also see AI improve the discovery of vulnerabilities, both in the abstract and in particular targeted systems, shrinking the time from targeting to execution while creating new exploitation opportunities.
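To make that concrete for defenders, here is a minimal Python sketch of one way to watch for faster lateral movement: flagging accounts that touch an unusual number of previously unseen hosts within a short window. The event format, threshold values, and the flag_fast_movers helper are illustrative assumptions, not a reference to any particular product or log schema.

# A deliberately simple heuristic: flag any account whose count of
# first-time host contacts inside a sliding window exceeds a baseline.
from collections import defaultdict
from datetime import datetime, timedelta

AuthEvent = tuple[str, str, datetime]  # (account, destination_host, timestamp)

def flag_fast_movers(events: list[AuthEvent],
                     window: timedelta = timedelta(hours=1),
                     max_new_hosts: int = 5) -> set[str]:
    """Return accounts that contact more than max_new_hosts previously
    unseen hosts within any single sliding window."""
    flagged: set[str] = set()
    seen_hosts: dict[str, set[str]] = defaultdict(set)  # hosts ever touched
    first_contacts: dict[str, list[datetime]] = defaultdict(list)

    for account, host, ts in sorted(events, key=lambda e: e[2]):
        if host in seen_hosts[account]:
            continue  # a repeat contact is not lateral expansion
        seen_hosts[account].add(host)
        first_contacts[account].append(ts)
        # keep only first-contact timestamps still inside the window
        first_contacts[account] = [t for t in first_contacts[account]
                                   if ts - t <= window]
        if len(first_contacts[account]) > max_new_hosts:
            flagged.add(account)
    return flagged

# Example: an account fanning out to six new hosts in six minutes gets flagged.
start = datetime(2023, 8, 13, 3, 0)
burst = [("svc-backup", f"host-{n}", start + timedelta(minutes=n)) for n in range(6)]
print(flag_fast_movers(burst))  # {'svc-backup'}

This is a crude heuristic by design; the point is only that the behavioral signature of AI-accelerated operations -- more new systems touched, faster -- is itself something defenses can key on.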

If that assumption proves to be true, it could also shift decision-making by cyber aggressors toward a model that increasingly emphasizes quick strikes to take advantage of short-lived vulnerabilities. This posture naturally increases the odds of being detected and removed from a targeted network, but if the payoff in data theft or damage increases with scalable AI-powered operations, the risk/reward balance could tilt in favor of greater aggression over stealth.

To some degree that tilt has happened already -- APT28’s cavalier attitude toward detection when hacking political targets betrays an underlying belief that the costs of being caught are low and the rewards for a big score high, even against targets that might have value for long-term intelligence collection. Likewise, the hack of the US Office of Personnel Management by China-origin espionage actors revealed a preference for taking large volumes of information when the opportunity presented itself rather than lying low.

Now imagine cyber threat groups discovering a vulnerability in a similarly valuable system in the future: Perhaps they know beforehand what information they are targeting, but perhaps they do not. AI holds the promise of ingesting large volumes of disparate information and finding valuable trends and correlations -- overwhelmingly for positive uses, but potentially also helping spies and extortionists piece together the puzzle in front of them, or enhancing more sophisticated and targeted phishing operations. Today, a target that holds valuable information but lacks the particular piece a cyber threat group is seeking might be one in which the APT group loiters for months or years.

Likewise, an APT group with a zero-day might pass on using it against that target at all, given the opportunity costs. But a world of cyber operations in which AI supercharges speed and scale while also greatly increasing the value of sensitive but fragmentary data and decreasing opportunity costs is probably also a world in which that preference for action becomes more common. Longer-term, it also looks like a world that incentivizes organizations that are currently fairly open about their work to keep proprietary information hidden from public discovery, using increased secrecy as a defense against foreseen but hard-to-predict algorithmic threats.
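As a toy illustration of the correlation dynamic described above, the sketch below groups fragmentary documents by textual similarity so that related pieces surface together. Real systems would use far richer models than TF-IDF, and the fragments and threshold here are invented; it assumes scikit-learn is installed.

# Toy correlation of fragmentary documents: related pieces surface together.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fragments = [
    "Q3 budget draft for the turbine control upgrade",
    "vendor invoice: turbine control firmware, phase 2",
    "cafeteria menu for the week of May 5",
    "meeting notes: firmware upgrade schedule slipping to Q3",
]

vectors = TfidfVectorizer().fit_transform(fragments)
similarity = cosine_similarity(vectors)

# Print pairs of fragments whose similarity crosses an arbitrary threshold.
for i in range(len(fragments)):
    for j in range(i + 1, len(fragments)):
        if similarity[i, j] > 0.15:
            print(f"related ({similarity[i, j]:.2f}): {fragments[i]!r} <-> {fragments[j]!r}")

The same mechanics that let an analyst reconstruct a storyline from scraps are what make large, unsorted data troves more valuable to a thief than they used to be.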

Influence Operations

Because some of the most prominent online covert influence campaigns in recent years have been carried out by sophisticated intelligence services, there is a tendency to look for influence operations online that are themselves sophisticated in their messaging. But the reality is that influence operations, like other forms of advertising, mostly seek to evoke a general feeling about the target of the campaign rather than to make a convincing, logical argument. For that reason, even many known disinformation campaigns can remain effective after exposure. Likewise, a range of potential malicious influence actors may produce targeted content that, upon review, looks amateurish or comically out of touch with the target audience. It’s important to keep in mind, though, that such campaigns can still be effective for their sponsors, especially in the long run.

Generative AI gives malign influence campaigns a lot of potential for that reason: even if the average viewer detects the fake, it can leave a deep impression. A realistic, shocking image of a politician running for office, or of a major brand behaving badly, can still become the reference point for discussion about that person or company, however transparent the fake is. Expect a flood of “good enough,” automatically generated propaganda, especially from the largest states, which can afford the computational resources to operate at societal scale.

There are also likely to be a number of powerful state actors for whom AI will fill a critical need: language. Even states with long histories of successful influence campaigns may not have native-language speakers on hand in sufficient numbers for every corner of the globe they wish to carry their message to -- witness the exceedingly poor English in some Russian influence operations targeting the 2016 US presidential election, or Pyongyang’s focus on Polish financial-system targets in late 2016, due in part to the large number of language-capable expatriate North Korean IT workers there. What could emerge is a world in which many more state and non-state actors -- not just the current global intelligence powers or English-speaking states, and notably including the developing world -- must be contested in every other country, as language fades as a barrier to diplomatic or operational success.

Personnel

The ability of AI systems to replace white collar, skilled intellectual work is in the news. Certainly, we intelligence analysts are paying attention to the potential impact on our own jobs!

In many offensive cybersecurity roles, AI use in the years ahead probably looks a lot more like AI-assistance than AI-replacement: Just as stealth fighter jets increasingly come with clusters of drones they command while still keeping their human pilot, at least in the near-term APT groups will probably look to use AI to augment their work. Tools like Metasploit and Cobalt Strike already support offensive cyber workflows without replacing skilled operators, and AI in the near term will likely fill a similar but potentially more powerful role.

But in many cases demand for certain skills greatly exceeds supply, even for the largest and most powerful nation-state groups. In addition to perhaps helping overcome the shortage of linguists needed for influence operations, AI could also let hacking groups expand into more niche skills like ICS-specific exploitation.

Criminal groups that can deploy ransomware or steal credit cards from end users on their own, but rely on larger criminal enterprises to carry out more targeted operations or gain initial access to targets, could find that AI reduces their need to interconnect with other kinds of cyber criminals -- reducing opportunities for disruption by law enforcement.

Just as importantly, AI could help defenders “scale talent” by making expert cybersecurity and intelligence knowledge available to security generalists on the frontlines of defense, or to small- and medium-sized enterprises that struggle to compete for top talent in these fields. This is doubly important for resource-intensive expert tasks like tracking insider threats and anomalous behavior.
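As a small sketch of what “scaling talent” might look like in practice, the example below uses an off-the-shelf anomaly detector as a stand-in for expert review of user behavior. The features and numbers are invented for illustration, a real deployment would need careful feature engineering and tuning, and it assumes numpy and scikit-learn are installed.

# An off-the-shelf anomaly detector standing in for expert review of
# user behavior. Each row: [logins_per_day, off_hours_fraction, mb_downloaded].
import numpy as np
from sklearn.ensemble import IsolationForest

baseline = np.array([
    [8, 0.05, 40], [7, 0.02, 35], [9, 0.10, 55],
    [6, 0.00, 30], [8, 0.04, 45], [7, 0.06, 38],
])
today = np.array([
    [8, 0.05, 42],    # an ordinary-looking day
    [30, 0.80, 900],  # heavy off-hours activity and a large download
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)
for row, verdict in zip(today, model.predict(today)):  # -1 means anomalous
    label = "anomalous" if verdict == -1 else "normal"
    print(f"{row} -> {label}")

The design point is that the scarce expertise gets baked into the feature choices and escalation playbook once, while day-to-day triage runs without an expert in the loop.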

AI also holds promise in the boardroom, perhaps by explaining on a more regular basis how the cybersecurity threats an organization faces apply to other areas of business risk, and by ensuring that the security team’s actions are aligned with executives’ understanding of bigger-picture risk mitigation.

Key Factors

The key factors predicting how AI-driven cyber operations will affect business risk are some of the same ones underlying how AI will affect businesses more broadly: how quickly and equitably the technology spreads, and how easily non-specialists can harness it.

Whether offense or defense first adopts AI broadly and effectively into its operations will affect how APT groups choose to pursue their targets for years to come: targeted and quiet or large-scale and in swarms, in larger or smaller networks of human experts. For overall cyber resilience, it will be key that defensive applications of these technologies become available to many industries: security does not improve if a bank sees a net benefit from these developments while the power company suffers ever more outages.

Likewise, adoption of these technologies by strategic leaders and employees without cyber backgrounds will dictate a lot of the balance. Cyber intelligence is always about informing decision-making, not just cataloging threats and indicators, or even improving detection. Cyber intelligence has to be incorporated alongside insight into physical, reputational, and other threats to influence strategic business risk decisions. Automating defenses with AI might improve detection, but unless those processes are explainable to non-experts, those gains may not translate as easily into improved business judgments. 
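To make that explainability point concrete, here is a minimal, hypothetical sketch: pair each automated verdict with a plain-language reason a non-expert can act on. The rule names and thresholds are invented for illustration.

# Translate raw behavioral features into business-readable reasons.
def explain(features: dict[str, float]) -> list[str]:
    reasons = []
    if features.get("off_hours_fraction", 0) > 0.5:
        reasons.append("most activity occurred outside business hours")
    if features.get("mb_downloaded", 0) > 500:
        reasons.append("data volume moved far exceeds this role's norm")
    if features.get("new_hosts_contacted", 0) > 5:
        reasons.append("account reached unusually many systems for the first time")
    return reasons or ["flagged by composite model; no single behavior stands out"]

print(explain({"off_hours_fraction": 0.8, "mb_downloaded": 900}))

However the underlying detection works, it is the readable rationale -- not the raw score -- that a board or business-line owner can weigh against other kinds of risk.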

Christopher Porter is the Head of Threat Intelligence at Google Cloud. From 2019 to June 2022 he was the National Intelligence Officer for Cyber, leading the US Intelligence Community’s analysis of foreign cyber threats and threats to US elections. As a member of the National Intelligence Council, Christopher oversaw production of National Intelligence Estimates, was the primary cyber intelligence advisor to the Director of National Intelligence, developed the analytic portion of the Unified Intelligence Strategy, and led the IC Cyber Analysis Leadership Council, made up of the other cyber leaders of CIA, FBI, NSA, DOD, and other agencies.

Serving under both the Trump and Biden Administrations, he frequently briefed and wrote for the President of the United States, senior Cabinet officials, the Gang of Eight and other legislative leaders in the US House of Representatives and Senate, and civilian and military cyber leaders throughout the executive branch.

From 2016 to 2019, Christopher was the Chief Technology Officer for Global Cybersecurity Policy and the Chief Intelligence Strategist at cybersecurity company FireEye.

As a Senior Fellow at the Atlantic Council, Christopher contributes to the Cyber Statecraft Initiative within the Scowcroft Center for Strategy and Security. His research focuses on cyber diplomacy broadly, practical methods of reaching and enforcing cyber arms control agreements, and artificial intelligence as a security issue.
