Why IT Departments Need to Consider Deepfakes - InformationWeek


Commentary | 9/21/2020 08:00 AM
Lisa Morgan

Why IT Departments Need to Consider Deepfakes

It's hard to tell what's real and what isn't. If one of your executives or your company is the victim of deepfakes, what's IT going to do about it?

Deepfakes (aka synthetic media) can spread misinformation and disinformation very effectively. The 2020 US election is just one example, but the use of deepfakes isn't limited to politics. In fact, representatives from a big brand company recently asked Avivah Litan, vice president and distinguished analyst at Gartner Research, what they could do if deepfakes were used to undermine the reputation of the brand or CEO. Unfortunately, her reply was "nothing," because there's no way they can stop the social sharing of content.

Image: Andy Shell - stock.adobe.com

"[T]he companies that have to solve this problem are the social media networks in terms of spreading deepfakes around the world," said Litan. "Even if there are solutions now, no one has the wherewithal to implement them except for the digital giants because the content spreads through their platforms."

Litan estimates that 90% detection rates may be possible by analyzing the content, who's submitting it, the kinds of devices it's coming from, and traffic patterns -- which is how bots and crime operations are already detected.
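The approach Litan describes resembles the weighted-signal scoring that bot-detection pipelines already use. The sketch below is a hypothetical illustration only, not Gartner's or any platform's actual system; every signal name and weight is an assumption made up for the example.

```python
# Toy risk-scoring sketch: combine normalized submission signals (each 0..1)
# into one score, the way bot-detection systems weight content, account,
# device, and traffic features. All names and weights are hypothetical.

def risk_score(signals: dict) -> float:
    """Return a 0..1 risk score from normalized signal values."""
    weights = {
        "content_anomaly": 0.4,   # e.g. artifacts flagged by a media classifier
        "account_age_risk": 0.2,  # newly created accounts score higher
        "device_risk": 0.2,       # emulators / headless clients score higher
        "traffic_burst": 0.2,     # coordinated, bursty sharing patterns
    }
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return min(1.0, score)

suspicious = risk_score({
    "content_anomaly": 0.9,
    "account_age_risk": 1.0,
    "device_risk": 0.8,
    "traffic_burst": 0.7,
})                                      # high score: likely coordinated fake
benign = risk_score({"content_anomaly": 0.1})  # low score: nothing unusual
```

In practice these signals would come from trained classifiers and platform telemetry rather than hand-set weights, but the structure -- many weak signals fused into one decision -- is the same.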

"If [the digital giants] took all the resources they've spent on targeted advertising and spent it instead on detecting fake news, fake content, deepfakes, we'd have a solution," said Litan.

While there's little financial incentive for social networks to combat deepfakes and "cheap fakes," they're nevertheless under pressure to take some responsibility for the content that's posted and shared on their sites.

Facebook published some information designed to help users spot fake news. However, impassioned users aren't all that discerning, if everyday Facebook experiences are any indication. Following the 2016 US election, Facebook stated that it was working to stop misinformation and false news. Earlier this year, the company said it was going to ban deepfakes, but it has been criticized for not doing enough.

Avivah Litan, Gartner

Meanwhile, Twitter announced earlier this year that it was actively targeting fake COVID-19 content using automated technologies and broadening its definition of harm to combat content that contradicts authoritative sources. Twitter also started labeling harmful and misleading information about COVID-19. Twitter even labeled some of President Trump's tweets as potentially misleading and manipulated media, which was not without political backlash.

Fake content is a growing problem

The reality is that while it's easy to dismiss fake news as the ramblings of extremists or a tool of politicians seeking election, it's also a threat to corporations and their executives, and that threat will soon become more evident.

Clearly, deepfakes or cheap fakes weaponized against a company or executive could cause serious and expensive PR problems. However, fakes could also be used as a means of social engineering. For example, a voice deepfake caused a UK energy CEO to fall victim to a $243,000 scam.

"As soon as [bad actors] discover these deepfake websites, everyone will start worrying about it real fast," said Litan. "If you think about how money gets stolen and how data gets breached, it's often through the social engineering of employees."

Forget about spear phishing. Instead, create a video or audio clip of "the boss" demanding a password or a financial transaction.

To help address the problem of fakes, Microsoft recently announced Microsoft Video Authenticator, which can identify subtle features in photographs and video that the human eye can't detect. It then assigns a confidence score that reflects the likelihood of artificially manipulated media.

Microsoft concurrently announced another new technology that can detect manipulated content and assure people that they're viewing authentic content. That solution consists of two tools. One enables a content producer to add digital hashes and certificates to a piece of content. The other is a reader for content consumers that checks the certificates and matches the hashes. The reader can be implemented as a browser extension or "in other forms", which most likely translates to embedded in applications.
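The hash-and-certificate scheme described above can be sketched in a few lines: the producer publishes a digest of the content plus a signature over that digest, and the reader recomputes the digest and checks the signature before trusting the media. This is a minimal sketch under simplifying assumptions, not Microsoft's implementation; real systems use X.509 certificates and public-key signatures, while HMAC with a shared key stands in here so the example stays self-contained.

```python
# Sketch of content provenance via hash + signature. HMAC with a shared
# demo key is a stand-in for the certificate-based signing that real
# provenance tools use; the key name below is hypothetical.
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # hypothetical; real tools use certificate keys

def publish(content: bytes) -> tuple:
    """Producer side: return (sha256 digest, signature over the digest)."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest, signature

def verify(content: bytes, digest: str, signature: str) -> bool:
    """Reader side: authentic only if the digest matches and the signature checks out."""
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (hashlib.sha256(content).hexdigest() == digest
            and hmac.compare_digest(signature, expected))

clip = b"original newsroom footage"
digest, sig = publish(clip)
ok = verify(clip, digest, sig)                         # untouched content passes
tampered = verify(b"deepfaked footage", digest, sig)   # altered content fails
```

The design point is that any edit to the content changes its digest, so a manipulated clip can no longer match the producer's published hash, regardless of how convincing the manipulation looks to a human viewer.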

"I think [fakes] will become a more widespread and understood problem within 18 months to two years," said Litan. "Especially now with all the political sensitivities, imagine if some CEO [allegedly] or a baseball team said black lives don't matter or we don't support the movement at all. That could get very worrisome as it starts happening. So far, it's happened mainly to politicians but it hasn't hit enterprises just yet."

Misinformation and disinformation tactics

Part of the problem with fakes is confirmation bias. Specifically, individuals are inclined to believe content that aligns with their beliefs, irrespective of its authenticity.

An even sadder truth is that misinformation and disinformation tactics long predate anyone alive today. However, at no time in history has it been cheaper or more convenient to reach thousands, millions, or even billions of people with authentic or fake messages.

It's only a matter of time before deepfakes and cheap fakes become a very real corporate issue, which IT, legal, compliance, risk management, PR and the C-suite will need to address collectively. Right now, there really isn't anything IT can do about it, other than lobby politicians and the digital giants to fix the problem, which no one will want to hear.

Follow up with these related InformationWeek articles:

How to Detect Fakes During Global Unrest Using AI and Blockchain

Expect AI Flash Mobs of Fake News

Is It Possible to Automate Trust?

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.