Deepfakes will cause over $250 million in damages in 2020. That was the stark prediction made by Jeff Pollard, a principal analyst at Forrester. While deepfakes are nothing new, the way they’re being used is.
While the media have extensively covered deepfakes for their ability to spread fake news, cybercriminals have found ways to profit from them. Last year, we saw a worrying trend evolving – deepfakes as a form of identity theft.
With one out of fifteen people worldwide having already experienced identity theft, this trend is of great concern. Now that cybercriminals are harnessing the tech behind deepfakes, we can expect that figure to increase. This is particularly true in the business arena.
That’s why, in this post, we’ll explain why security awareness training has to include information on identifying deepfakes.
The term “deepfake” usually refers to video footage in which one person’s face is swapped with another’s. With artificial intelligence, it’s also possible to create fake videos from scratch.
AI can analyze facial features from several angles. The information is then used to apply a new face or create a fake video. With this tech, it’s possible to make anyone appear to say anything.
Think you’d spot the difference? Think again. Have a look at the video below for some interesting examples of the tech at work.
It’s a mistake to think that’s all there is to it, though. Last year, the CEO of a UK company transferred around $243,000 to fraudsters after being caught out. The criminals used deepfake tech to create a convincing facsimile of the voice of the company’s Chief Operating Officer. The CEO thought he was speaking to the real deal.
Is this call a warning of what to expect in the future? We certainly think so.
Frauds like the one discussed above show how criminals can take identity theft to the next level. If we don’t up our game to match, we could easily fall victim to the same trick.
You already know you should be careful with email links and attachments. If a client sends in a request that seems out of the ordinary, you know how to verify it. What if that client calls you up on the phone or on Skype to make the request?
Would you even think twice?
The truth is that most of us wouldn’t. When you can see or hear the voice of someone familiar to you, you’re bound to let your guard down.
That, of course, is what the fraudsters are counting on. They know you’ll trust your eyes and ears in this case.
The attack above was the first of its type reported in Europe, and it foreshadows more to come. Part of the reason is that email is getting harder to exploit: modern anti-spam software has AI built into it, making it far more accurate at weeding out potential threats coming in via email.
In addition to that, internet users are getting better at recognizing phishing attempts. They know to be careful about what information they give out. They also know that they should be cautious about social engineering techniques. Essentially, the concept of stranger danger takes on a new meaning in the digital age.
That’s what makes this form of attack so effective. As far as you know, you’re not dealing with a stranger. We can expect more cybercriminals to start making use of this tech going forward. Why wouldn’t they if it’s so effective?
What makes it even more interesting is that the fraudsters in the case mentioned above carried on a conversation with the CEO. Experts believe that they used voice-changing software to fool him.
What’s of more concern, though, is that bots could conduct these attacks. Cybercriminals are just like entrepreneurs – they also want to streamline their operations. AI has made it possible for bots to mimic human speech patterns very well.
In the future, we might see an army of bots conducting coordinated attacks. Depending on the level of research the fraudster conducts, this could be devastating.
Say, for example, that you’re a mid-level employee. You get a call from a client asking for a refund. Your company has procedures in place, and you explain them to your client. The client gets angry – she wants the money now. There’s nothing you can do, so she hangs up on you.
A short while later, your CEO or her secretary phones to find out what’s going on. Were you rude to the client? What did you say? Why is she so angry? The CEO then tells you to call back, apologize, and get all the details.
As a mid-level employee, would you ignore the request? You recognized the voices of both the client and the CEO (or her secretary). With the emphasis on top-notch customer service in business today, you want to make things right. You process the transaction.
In reality, both “callers” were bots, and the entire transaction is fraudulent.
Fraudsters will inevitably cash in on this tech. At the moment, no widely available software can reliably detect spoofed voice calls. And while companies are developing technology to detect fake videos, integrating it into video calling may prove difficult.
Hypervigilance is the only way to succeed here. To protect yourself, you’ll need to:
Be careful about the photos and videos of you online. The more media that is out there, the more AI has to work with. Google your name, and see what comes up.
Look for signs that you’re communicating with a bot. A human can’t type a considered response within seconds; if you’re getting instant answers, you may be dealing with a bot. Also, pay attention to the speech patterns used: when we’re in a rush, we tend to type fragmented responses. Ask yourself, does this conversation seem natural?
Verify transactions that require you to give out sensitive information or transfer funds. Got a Skype call from your CEO? Great. When you’re finished on Skype, call him on his office line and confirm the instructions.
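To make the “look for bot signals” advice above concrete, here’s a minimal sketch in Python of how you might score a chat transcript for bot-like behavior. The thresholds and the transcript data are made up for illustration; real bot detection is far harder than this.

```python
from statistics import median

def looks_like_bot(replies, latency_threshold=2.0):
    """Heuristic check on a chat transcript.

    `replies` is a list of (seconds_to_respond, message_text) tuples.
    The threshold is an illustrative guess, not a validated value.
    """
    if not replies:
        return False

    latencies = [seconds for seconds, _ in replies]
    texts = [text for _, text in replies]

    # Signal 1: instant, uniform response times. Humans need time
    # to read a message and type a considered answer.
    instant = median(latencies) < latency_threshold

    # Signal 2: every reply is a polished, fully punctuated sentence.
    # Rushed humans tend to send short, fragmented messages.
    polished = all(
        text and text[0].isupper() and text.rstrip().endswith((".", "!", "?"))
        for text in texts
    )

    return instant and polished

# Sub-second, perfectly punctuated replies trip both signals.
suspicious = [(0.4, "I understand your concern."),
              (0.5, "Please process the refund immediately."),
              (0.3, "Thank you for your cooperation.")]

# Slower, fragmented replies look human.
human_like = [(6.1, "hang on"),
              (14.2, "checking with my manager"),
              (9.0, "ok can you resend the invoice?")]
```

Either signal alone is weak: a fast typist can reply quickly, and a careful one punctuates. Requiring both, over a whole conversation, is what makes the heuristic worth anything.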
As you can see, there’s not a lot we can do about deepfake attacks at the moment. You’re unlikely to be able to distinguish these attacks from the real thing. Going forward, adopt the watchwords: be vigilant and verify.
There’s no foolproof way to stop these attacks. That said, if we understand the potential risks and are careful going forward, we can ward off a lot of trouble.