What CIOs Can Learn from an Attempted Deepfake Call
An employee recognized something was wrong when they received an audio deepfake of the LastPass CEO.
On April 10, password management company LastPass published a blog post detailing a thwarted social engineering attempt powered by a deepfake audio impersonation of the company’s CEO.
The targeted employee recognized the communication as suspicious and reported it to LastPass’s internal security team. No harm was done, but the incident serves as a reminder that deepfake technology is another tool in the threat actor’s arsenal -- one that will increasingly be put to work.
What lessons can CIOs and other enterprise security leaders learn from this deepfake incident as they consider the rising risk of threats fueled by generative AI (GenAI)?
Deepfake Technology
Image manipulation is not new. The term “photoshop” has been a part of our vocabulary since the 1990s, points out Toby Lewis, head of threat analysis at cybersecurity company Darktrace. “As a vector, as a technique, it’s not new. What we see with generative AI is absolutely the power to really supercharge that, and ultimately, lower the skill barrier for an attacker,” he tells InformationWeek.
The technology to create audio and visual deepfakes is accessible -- often for free -- and GenAI is only going to make it more powerful.
“We're kind of at the stage … especially with video deepfakes where there's ‘Take a look at hands. Take a look at ears.’ There's all these sorts of visual tells that you can keep an eye out for,” says Mike Kosak, senior principal intelligence analyst at LastPass and author of the company’s blog on the incident. “As the technology advances, that's just going to get more and more convincing … and those tells are going to fade away.”
The targeted LastPass employee, a sales executive, received calls, texts, and a voicemail on WhatsApp from someone pretending to be the company’s CEO. The employee recognized it for what it was -- a social engineering attempt -- and reported it.
“The fact that they [the threat actor] were able to put something together that was at least passable, I think is what's really interesting and represents that … proliferation and that escalation that we've been concerned about with this technology for years now,” says Kosak.
Deepfake Fallout
For LastPass, this incident is a good learning opportunity, not a financial disaster. “The whole reason that this didn't result in some multi-million dollars’ worth of loss was because somebody went ‘That looks a bit weird. I'm going to report it to internal security,’ and just that simple step was just all it needed to stop this in its early tracks,” says Lewis.
But deepfake technology has played a role in successful social engineering attacks with devastating consequences. In one case reported by CNN, threat actors used deepfake technology to impersonate a company’s CFO on a video call. A finance worker in Hong Kong, believing they were speaking with colleagues, paid $25 million to the scammers.
Verify First
Threat actors who impersonate executives are preying on employees’ willingness to follow directions from superiors and their reluctance to question those directions.
“Some people are very vulnerable to social engineering and AI type threats. Some are vulnerable to urgency or obedience. If your boss sends you something at 4:30, there are some people [who] will go into that email, click a link without even verifying that it was from their boss,” says Shaun McAlmont, CEO of NINJIO, a cybersecurity awareness training company.
At LastPass, the employee recognized the urgency of the messages they received as typical of a social engineering attack and felt comfortable enough to tell the internal security team. “We’ve put a huge effort into education and culture and really kind of trying to educate everybody that we can within the company about this,” says Kosak.
With deepfakes becoming increasingly convincing, many organizations may need to embrace a cultural shift. “We used to say, ‘Trust but verify.’ I think we're pushing that envelope now to say, we have to verify everything before we trust,” says McAlmont.
Deepfake Training and Awareness
Defending against deepfake threats takes a multifaceted strategy. “There's a three-pronged approach where there's education, there's culture, and there's technology,” says Kosak.
NINJIO focuses on educating people on cybersecurity risks, like deepfakes, with short, engaging videos. “If you can deepfake a voice and a face or an image based on just a little bit of information or maybe three to four seconds of that voice tone, that's sending us down a path that is going to require a ton of verification and discipline from the individual’s perspective,” says McAlmont.
He argues that an hour or two of annual training is insufficient as threats continue to escalate. More frequent training can help increase employee vigilance and build a culture in which employees talk openly about cybersecurity concerns.
When it comes to training around deepfakes, awareness is key; these threats aren’t going away. What does a deepfake sound or look like? (Pretty convincing, in many cases.) What are the common signs that the person you hear or see isn’t who they claim to be? (A sense of urgency, and outreach via unexpected channels.) What are the potential consequences of falling for a deepfake? (Often financial.)
“When you go through what the potential ramifications can be or you actually show it so somebody could feel the shock of the fact that was an absolute fake, it really gets people's attention,” says McAlmont.
Even when equipped with awareness of these threats, people remain vulnerable to social engineering attempts that use deepfakes to convincingly masquerade as a superior. That cultural shift to “verify first” means employees need to be comfortable reaching out to superiors, but executives and security teams need to be on board as well. “Having people comfortable being challenged … that's important, too,” says Kosak.
While people undeniably have a role to play in combating deepfake threats, they should not be the sole line of defense, according to Lewis. “Whilst the human being can be trained to a degree to spot these sorts of attacks, there's so many … variables that kind of creep in,” he says. People get distracted. People make mistakes. The pressure to obey a presumed superior might be too much.
Lewis argues that technology and process are necessary elements of defense. AI-powered systems can help to detect and flag anomalous behavior. “It's about being able to take huge volumes of data and identify patterns that might not be possible for the human being to spot at the same speed and at the same sort of level,” he explains.
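Lewis doesn’t detail the mechanics, but the core idea -- establish a baseline of normal behavior, then flag observations that fall far outside it -- can be illustrated with a minimal sketch. The Python below is a hypothetical example for illustration only (the z-score check, sample data, and threshold are assumptions, not any vendor’s method):

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a new observation that deviates sharply from the historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_value != mu
    z = abs(new_value - mu) / sigma  # how many standard deviations from normal
    return z > threshold

# Hypothetical baseline: recent wire-transfer amounts approved by one employee
baseline = [4200, 3900, 4500, 4100, 4300, 4000, 4400]
print(is_anomalous(baseline, 4600))    # False -- within the normal range
print(is_anomalous(baseline, 250000))  # True -- worth a second look
```

Real systems baseline far richer signals than a single number, but the principle is the same: the machine surfaces the outlier so a human never has to notice it unaided.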
Implementing checks and balances that make it more difficult for a deepfake scam to be successful is also a useful strategy. “There should be a few things that get in [the] way regardless of whether somebody clicks on the link or … somebody thinks that this is a genuine person,” says Lewis. For example, enterprises can put multistep protocols in place when someone is attempting to change banking details or authorize a major financial transaction.
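As a concrete, hypothetical illustration of such a multistep protocol, the sketch below requires an out-of-band callback plus an independent second approver before a high-value payment or a bank-detail change goes through. The threshold, field names, and checks are assumptions for this example, not a prescribed workflow:

```python
from dataclasses import dataclass, field

HIGH_VALUE = 10_000  # hypothetical threshold requiring extra scrutiny

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    new_bank_details: bool = False
    callback_verified: bool = False      # confirmed via a known-good phone number
    approvals: set = field(default_factory=set)

def authorize(req: PaymentRequest) -> bool:
    """Require independent checks so one convincing voice can't move money alone."""
    checks = []
    # 1. Out-of-band callback: never trust the inbound channel by itself.
    checks.append(req.callback_verified)
    # 2. High-value payments or bank-detail changes need a second, independent approver.
    if req.amount >= HIGH_VALUE or req.new_bank_details:
        checks.append(len(req.approvals - {req.requester}) >= 1)
    return all(checks)

req = PaymentRequest(requester="cfo", amount=250_000, new_bank_details=True)
req.callback_verified = True
req.approvals.add("controller")
print(authorize(req))  # True only once every independent check has passed
```

The point of the design is that no single convincing phone call -- real or synthetic -- can satisfy every check on its own.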
Awareness of deepfake technology, and its potent powers of manipulation, is not only important for enterprise teams working to defend their organizations. It also matters on an individual and national level, particularly as it will fuel disinformation and misinformation during an election year for the US and many other countries.
“We're just…at the forefront of this threat as it continues to develop and advance,” says Kosak.