Bringing an AI-driven tool into the battle between opposing worldviews may never move the needle of public opinion, no matter how many facts you’ve trained its algorithms on.

James Kobielus, Tech Analyst, Consultant and Author

October 1, 2020


Disinformation is when someone knows the truth but wants us to believe otherwise. Better known as “lying,” disinformation is rife in election campaigns. However, under the guise of “fake news,” it’s rarely been as pervasive and toxic as it’s become in this year’s US presidential campaign.

Sadly, artificial intelligence has been accelerating the spread of deception to a shocking degree in our political culture. AI-generated deepfake media are the least of it.

Instead, natural language generation (NLG) algorithms have become a more pernicious and inflammatory accelerant of political disinformation. In addition to its demonstrated use by Russian trolls these past several years, AI-driven NLG is becoming ubiquitous, thanks to a recently launched algorithm of astonishing prowess. OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) is probably generating a fair amount of the politically oriented disinformation that the US public is consuming in the run-up to the November 3 general election.

The peril of AI-driven NLG is that it can plant plausible lies in the popular mind at any time in a campaign. If a political battle is otherwise evenly matched, even a tiny NLG-engineered shift in either direction can swing the balance of power before the electorate realizes it’s been duped. In much the same way that an unscrupulous trial lawyer “mistakenly” blurts out inadmissible evidence and thereby sways a live jury, AI-driven generative-text bots can irreversibly influence the jury of public opinion before they’re detected and squelched.

Introduced this past May and currently in open beta, GPT-3 can generate many types of natural-language text based on a mere handful of training examples. Its developers report that, leveraging 175 billion parameters, the algorithm “can generate samples of news articles that human evaluators have difficulty distinguishing from articles written by humans.” It is also, per this recent MIT Technology Review article, able to generate poems, short stories, songs, and technical specs that can pass as human creations.
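
To make the few-shot idea concrete, here is a minimal sketch of how a developer might prompt GPT-3 through OpenAI’s Python client as it existed around launch. The prompt text, engine name, and parameters are illustrative assumptions for this article, not a recipe or an excerpt from OpenAI’s documentation.

```python
# Illustrative sketch only: a few-shot prompt sent to GPT-3 via OpenAI's
# Python client (circa 2020). Prompt content and parameters are invented
# placeholders, not anything from the article.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# Few-shot prompting: a handful of examples followed by an open-ended cue.
prompt = (
    "Write a short news-style paragraph in the voice of a press release.\n\n"
    "Example 1: ...\n"
    "Example 2: ...\n"
    "New paragraph:"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine name at the time (assumed here)
    prompt=prompt,
    max_tokens=120,
    temperature=0.8,    # higher temperature yields more varied output
)

print(response["choices"][0]["text"].strip())
```

The point of the sketch is how little the model needs: a few examples and a cue, and it produces fluent text at whatever scale the caller can afford.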

The promise of AI-powered disinformation detection

If that news weren’t unsettling enough, Microsoft separately announced a tool that can efficiently train NLG models with up to a trillion parameters, several times the number GPT-3 uses.

What this and other technical advances point to is a future where propaganda can be efficiently shaped and skewed by partisan robots passing themselves off as authentic human beings. Fortunately, there are technological tools for flagging AI-generated disinformation and otherwise engineering safeguards against algorithmically manipulated political opinions.

Not surprisingly, these countermeasures -- which have been applied to both text and media content -- also leverage sophisticated AI to work their magic. For example, Google is one of many tech companies reporting that its AI is becoming better at detecting false and misleading information in text, video, and other content in online news stories.

Unlike ubiquitous NLG, AI-generated deepfake videos remain relatively rare. Nevertheless, considering how hugely important deepfake detection is to public trust in digital media, it wasn’t surprising when several Silicon Valley powerhouses announced their respective contributions to this domain: 

  • Last year, Google released a huge database of deepfake videos that it created with paid actors to support creation of systems for detecting AI-generated fake videos.

  • Early this year, Facebook announced that it would take down deepfake videos if they were “edited or synthesized -- beyond adjustments for clarity or quality -- in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” Last year, it released 100,000 AI-manipulated videos for researchers to develop better deepfake detection systems.

  • Around that same time, Twitter said that it will remove deepfaked media if it is significantly altered, shared in a deceptive manner, and likely to cause harm. 

Promising a more comprehensive approach to deepfake detection, Microsoft recently announced that it has submitted a new deepfake detection tool to the AI Foundation’s Reality Defender initiative. The new Microsoft Video Authenticator can estimate the likelihood that a video or even a still frame has been artificially manipulated. It can provide an assessment of authenticity in real time on each frame as the video plays. The technology, which was built from the Face Forensics++ public dataset and tested on the DeepFake Detection Challenge Dataset, works by detecting the blending boundary between deepfaked and authentic visual elements. It also detects subtle fading or greyscale elements that might not be detectable by the human eye.
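
Microsoft hasn’t published Video Authenticator’s internals or an API, so the following is only a generic sketch of the per-frame scoring pattern the announcement describes: walk through a video’s frames and attach a manipulation-likelihood score to each one. The `score_frame` detector here is a hypothetical placeholder, not Microsoft’s model.

```python
# Illustrative sketch of per-frame manipulation scoring, in the spirit of
# tools like Video Authenticator. Only the frame-iteration pattern is real;
# the detector is a stand-in.
import cv2  # OpenCV, used here only to decode video frames

def score_frame(frame) -> float:
    """Hypothetical stand-in for a trained deepfake detector.

    A real detector would look for artifacts such as blending boundaries
    or subtle greyscale fading around manipulated regions.
    """
    return 0.0  # placeholder score; plug in a trained model here

def authenticate_video(path: str) -> None:
    capture = cv2.VideoCapture(path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of stream
        confidence = score_frame(frame)  # likelihood the frame is manipulated
        print(f"frame {frame_index}: manipulation likelihood {confidence:.2f}")
        frame_index += 1
    capture.release()
```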

Founded 3 years ago, Reality Defender is detecting synthetic media with a specific focus on stamping out political disinformation and manipulation. The current Reality Defender 2020 push is informing US candidates, the press, voters, and others about the integrity of the political content they consume. It includes an invite-only webpage where journalists and others can submit suspect videos for AI-driven authenticity analysis.

For each submitted video, Reality Defender uses AI to create a report summarizing the findings of multiple forensics algorithms. It identifies, analyzes, and reports on suspiciously synthetic videos and other media. Following each auto-generated report is a more comprehensive manual review of the suspect media by expert forensic researchers and fact-checkers. The service does not analyze intent but instead reports manipulations, to help responsible actors understand the authenticity of media before circulating misleading information.
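
The “many forensics algorithms feeding one summary report” workflow is easy to picture in code. The sketch below is a hypothetical illustration of that pattern, not Reality Defender’s actual pipeline; the detector interface, scores, and threshold are invented for demonstration.

```python
# Hypothetical ensemble-report pattern: run several forensic detectors on a
# media file, then summarize their scores for a human reviewer.
from statistics import mean
from typing import Callable, Dict

# Each detector takes a path to a media file and returns a manipulation
# score in [0, 1]; higher means more likely manipulated.
Detector = Callable[[str], float]

def build_report(media_path: str, detectors: Dict[str, Detector]) -> dict:
    scores = {name: detect(media_path) for name, detect in detectors.items()}
    return {
        "media": media_path,
        "per_detector_scores": scores,
        "aggregate_score": mean(scores.values()),
        # Anything flagged here would go on to manual forensic review.
        "flagged_for_manual_review": any(s > 0.7 for s in scores.values()),
    }
```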

Another industry initiative for stamping out digital disinformation is the Content Authenticity Initiative. Established last year, this digital-media consortium is giving digital-media creators a tool to claim authorship and giving consumers a tool for assessing whether what they are seeing is trustworthy. Spearheaded by Adobe in collaboration with The New York Times Company and Twitter, the initiative now has participation from companies in software, social media, and publishing, as well as human rights organizations and academic researchers. Under the heading of “Project Origin,” they are developing cross-industry standards for digital watermarking that enable better evaluation of content authenticity. This is to ensure that audiences know the content was actually produced by its purported source and has not been manipulated for other purposes.
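
At its core, content provenance of this kind rests on a familiar cryptographic pattern: the publisher signs a digest of its content, and anyone downstream verifies that signature against the publisher’s public key. The sketch below illustrates that general idea with an Ed25519 signature; it is a generic illustration, not Project Origin’s actual specification or watermarking scheme.

```python
# Generic provenance sketch: sign a content digest at the publisher,
# verify it at the consumer. Not the Project Origin standard.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign a SHA-256 digest of the content with a private key.
publisher_key = Ed25519PrivateKey.generate()
content = b"Article body or media bytes go here."
signature = publisher_key.sign(hashlib.sha256(content).digest())

# Consumer side: verify the signature against the publisher's public key.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(content).digest())
    print("Content matches the publisher's signature.")
except InvalidSignature:
    print("Content was altered or was not signed by this publisher.")
```

Note what this does and does not establish: a valid signature proves the content came unaltered from the key holder, which is exactly the point the next section turns on.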

What happens when collective delusion scoffs at efforts to flag disinformation

But let’s not get our hopes up that deepfake detection is a challenge that can be mastered once and for all. As noted here on Dark Reading, “the fact that [the images are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology.”

And it’s important to note that ascertaining a piece of content’s authenticity is not the same as establishing its veracity.

Some people have little respect for the truth. People will believe what they want. Delusional thinking tends to be self-perpetuating, so it’s often fruitless to expect that people who suffer from this condition will ever allow themselves to be proved wrong.

If you’re the most bald-faced liar who’s ever walked the Earth, all that any of these AI-driven content verification tools will do is provide assurances that you actually did generate this nonsense and that not a measly morsel of balderdash was tampered with before reaching your intended audience.

Fact-checking can become a futile exercise in a toxic political culture such as we’re experiencing. We live in a society where some political partisans lie repeatedly and unabashedly in order to seize and hold power. A leader may use grandiose falsehoods to motivate their followers, many of whom have embraced outright lies as cherished beliefs. Many such zealots -- such as anti-vaxxers and climate-change deniers -- will never change their opinions, even if every last supposed fact upon which they’ve built their worldview is thoroughly debunked by the scientific community.

When collective delusion holds sway and knowing falsehoods are perpetuated to hold power, it may not be enough simply to detect disinformation. For example, the “QAnon” people may become adept at using generative adversarial networks to generate incredibly lifelike deepfakes to illustrate their controversial beliefs.

No amount of deepfake detection will shake extremists’ embrace of their belief systems. Instead, groups like these are likely to lash out against the AI that powers deepfake detection. They will unashamedly invoke the current “AI is evil” cultural trope to discredit any AI-generated analytics that debunk their cherished deepfake hoax.

People like these suffer from what we might call “frame blindness”: they are so entirely blinkered by their narrow worldview, and cling so stubbornly to the tales they tell themselves to sustain it, that they ignore all evidence to the contrary and fight vehemently against anyone who dares to differ.

Keep in mind that one person’s disinformation may be another’s article of faith. Bringing an AI-driven tool into the battle between opposing worldviews may never move the needle of public opinion, no matter how many facts you’ve trained its algorithms on.

About the Author(s)

James Kobielus

Tech Analyst, Consultant and Author

James Kobielus is an independent tech industry analyst, consultant, and author. He lives in Alexandria, Virginia.
