Truth imperiled by coordinated attacks using deepfake videos and new NLG-powered fake news articles.

Guest Commentary

July 22, 2019


The term “Information Age” was coined in the 1960s, and the concept started to gain momentum in the 1970s. This development, tied to the advance of computing technology and sometimes called the Information Revolution, was expected at the time to bring a more informed, idealistic and virtuous world. Useful information would be readily available to anyone who went looking, and greater transparency would bring dictators to heel.

We are now well down the Information Superhighway, and some of those hopes have been realized. In an Orwellian twist, though, that same access to information has led to unanticipated and not always pleasant outcomes. For one, the divide between information and disinformation is now nearly nonexistent, with both carried at the speed of light via social media and other electronic platforms. In a sad turn from promise to reality, truth is under attack unlike at any time in human history.


“Fake News” is at the forefront of this assault, though there is nothing new about fake news. It used to be called “Yellow Journalism,” pushed by William Randolph Hearst and Joseph Pulitzer in the 1890s to sell newspapers through sensationalized, and sometimes manufactured, news. Even then, there was already a tradition of fake news: in the election of 1800, Thomas Jefferson convinced many Americans that John Adams wanted to attack France. Although the claim was false, voters bought it, and Jefferson won the election. Now fake news has been democratized, as anyone can blast it out into the world. You could say that fake news is an American tradition, and like Star Wars and Coca-Cola it has been successfully exported worldwide, from Ukraine to Sri Lanka.

Created by a wide range of people, from demagogues to teenagers in Macedonia struggling to make a living, fake news has become a disinformation pandemic spearheaded by false or misleading social media posts and digital advertising. Infamously, fake news placed in social media was central to the controversy surrounding the 2016 U.S. elections. For a democracy, this is a disaster.

Fake news arsenal grows

While clearly dangerous, fake news is on the verge of becoming even more so through the application of artificial intelligence-fueled “deepfake” videos. With widely available AI tools now able to produce these videos, fake news can be supercharged. Video content, long seen as an unassailable tool for verification and confirmation of the truth, has become as vulnerable to political distortion as anything else. Ominously, it is becoming more difficult to discern fake from real.

Nascent legislative efforts are under way, from California to Washington, D.C., to ban deepfake videos, but detecting them and discriminating fake from legitimate content is increasingly problematic. Speaking recently about the improving quality of deepfake videos, Hao Li, an expert in AI-driven computer vision and an associate professor at the University of Southern California, said it is likely that at some point it will not be possible to detect them.

As noted by the Brookings Institution, deepfake videos may be the inevitable next step in the attack on truth. A further threat has emerged, however, with AI-enabled natural language generation (NLG). Initially created by the non-profit OpenAI, this capability allows a user to produce realistic and coherent text about a topic of their choosing. While positive applications exist, such as writing basic news articles and improving dialog and speech recognition, less redeeming uses, including impersonation and the generation of misleading news articles, are also possible. OpenAI believed the capabilities of its GPT-2 text generator were so powerful and potentially harmful that it declined to make the fully featured version of the system available to the public.

According to a recent report in the New York Times, not everyone agrees this NLG technology should be restricted. The Allen Institute for Artificial Intelligence, for one, sees things differently. It, too, is developing a news generator, known as “Grover.” According to a researcher cited in the story, the institute believes the tools the two labs created must be released so other researchers can learn to identify them. Per a TechCrunch story, Grover’s creators believe we will only get better at fighting generated fake news by putting the tools that create it out there to be studied.

Deepfake videos are created with generative adversarial networks (GANs), which pair two neural networks in opposition to improve the final content. One network generates content while the other rates how convincing it is. If the rating is too low, the generator produces another iteration, and over many rounds it learns what is convincing and what isn’t. Text generators such as GPT-2 and Grover are large language models rather than GANs, but detection follows a similar adversarial logic: a discriminator is trained to tell machine-generated text from human writing. Grover today is reasonably adept at spotting the text it produces, as well as text from OpenAI’s GPT-2.
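The generate-rate-iterate dynamic described above can be caricatured in a few lines of code. The sketch below is a toy hill-climbing loop, not a real neural-network GAN, and all the names in it are hypothetical: a “generator” proposes a value, a “discriminator” scores how plausible it looks next to the real data, and the generator keeps only the tweaks that earn a higher score.

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # stand-in for the "real data" the generator must imitate

def discriminator(value):
    """Return a score in (0, 1]; 1.0 means indistinguishable from real."""
    return 1.0 / (1.0 + abs(value - REAL_MEAN))

generated = 0.0  # the generator starts far from the truth
for _ in range(200):
    candidate = generated + random.uniform(-0.5, 0.5)  # propose a tweak
    # Keep the tweak only when the discriminator finds it more convincing
    if discriminator(candidate) > discriminator(generated):
        generated = candidate

print(generated)  # ends close to REAL_MEAN after the adversarial loop
```

In a real GAN both sides are neural networks updated by gradient descent, and the discriminator improves alongside the generator; the feedback loop, however, is the same: each rejection teaches the generator what “convincing” looks like.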

As with cybersecurity, when those making detection software close one attack vector, the attackers open another. For example, after it became known that deepfake videos could be detected by their unrealistic eye blinking, the creators improved their GANs to produce better eye blinking in the final video. Now this forensic technique no longer works. The same will likely apply to text generation tools.

Nobody knows where it goes

People are creating text-based fake news stories today without these tools. The difference with AI is that creation shifts from low-volume, handcrafted pieces to mass production. The potential now exists for thousands of unique, auto-generated articles to be placed by bots and to flood media channels in an instant: an AI-powered flash mob of fake news produced at digital scale. The quality of these generated stories is already quite good, and it will likely improve and become more persuasive very soon. Scarier still is a coordinated attack in which text-based fakes and deepfake videos are unleashed simultaneously, with the video seemingly prompting the thousands of text-based responses that reinforce and legitimize the fake news. This could easily be a 2020 election scenario.

It is not that far-fetched. Recently, a “Birtherism”-style campaign against Sen. Kamala Harris surged in the wake of her strong performance in a debate among Democratic presidential candidates. Per a Yahoo article, the attack went viral thanks in part to an assist from Twitter accounts identified as bots. Again, there are precedents for this in the pre-AI era, including the fake “Canuck Letter” that effectively ended the presidential bid of Senator Edmund Muskie in 1972. In 2012, a legitimate video surfaced of Mitt Romney that also hurt his presidential effort. Fast forward to 2020: an arsenal of fake news tools is now available, increasingly difficult to debunk and spread through an efficient distribution engine. Any number of fake news attack scenarios are possible, driven at the speed of light by coordinated, bot-spread deepfake videos and NLG texts and stories.

We are now a very long way from the informed and virtuous world envisioned at the dawn of the Information Revolution. It is even plausible that AI-powered fakes could spark a real revolution.

Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Expertise.

About the Author(s)

Guest Commentary

The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and use their knowledge and experiences to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We are focusing on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice to our audience on those topics from people who have deep experience in these topics and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.

