As much as the widespread acceptance of AI is impacting the very nature of work, it’s also reshaping the threats to businesses’ security, integrity, and reputation. Social engineering is nothing new, but deepfake voicemails and other reality-bending scams now test even the most alert among us.
In this article, we dive deep into the despicable and alarming practice of deepfake voicemails. We examine how they work, expose their telltale signs, and provide real-world examples. Most importantly, we offer concrete advice on how to recognize these scams and prevent them from doing your company or employees harm.
How Do Deepfake Voicemail Scams Work?
Voice synthesis technology has reached a point where a short snippet of audio provides enough data to convincingly mimic the speaker’s voice. Naturally, who better to impersonate than a member of a company’s C-suite with the authority and plausible motives to make sudden and impactful decisions?
People in such positions are often in the limelight. They may speak at conferences and as podcast guests, or they may talk about the company in freely available promotional videos. This abundance of quality audio is more than enough to clone a voice to such a degree that it mimics not just the speaker’s tone, but even their mannerisms and accent.
Having developed the clone, cybercriminals can now create and send voicemails targeting various company personnel. Money is the most common demand — the “CEO” will urgently request that funds be transferred to the fraudsters’ account to finalize a business deal. Alternatively, they’ll ask for login details, business records, technical data, and other sensitive information that could compromise the company and enrich the scammers.
Are Voicemail Deepfakes Effective?
Voice deepfakes are a relatively recent development, and companies may understandably be reluctant to disclose successful scam attempts. Even so, the data that does exist is sobering.
According to Pindrop, a company specializing in voice identification, the number of reported voice deepfake-related incidents rose thirteenfold in 2024. This staggering increase alone suggests that far more companies and employees are being targeted, and sheer volume makes successful attacks more likely.
Right-Hand, a cybersecurity company specializing in human risk management, conducted its own survey on the matter. 70% of the companies that participated claimed to have been targeted by voicemail deepfakes. Right-Hand also ran a test in which 25% of participants failed to recognize deepfakes.
Real-world Cases and Consequences
There’s not enough data as of yet to form a clear general picture. However, individual real-world incidents still offer insights into the scammers’ practices and victims’ responses.
One of the most infamous cases came to light in early 2024, when an employee at an undisclosed Hong Kong company transferred $25 million after being persuaded by a video call featuring a deepfake of the CFO.
Another notable incident later that year targeted employees of the cloud security company Wiz. Many received a voicemail from the “CEO” asking for their credentials, which should have raised alarm bells on its own. Luckily, the scammers had cloned the voice from a recording of a conference talk. Because the real CEO has public speaking anxiety, his voice on stage sounds noticeably different from his everyday speech, so employees quickly sensed that something wasn’t right.
How to Spot Deepfake Voicemails?
Deepfakes have grown more sophisticated, but they’re far from foolproof. Attentive listeners who are aware of the scam’s modus operandi and its growing prevalence can spot and react to telltale signs.
The recordings and models a cloned voice is based on are imperfect. On the one hand, this shows up as audio artifacts: glitches, random popping and hissing, or suspiciously clean sound inconsistent with the supposed setting, say, a crowded room or an outdoor environment.
On the other hand, the voice itself often sounds off. It may lack the natural pauses and “uhh” sounds you’d expect a human to make, or it may pause in unnatural, inappropriate places. Another giveaway is the tendency to stumble over less common words, such as foreign names the original speaker would pronounce correctly.
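The “missing pauses” cue is something dedicated detection tools can quantify. As a toy illustration only, and not a real deepfake detector, the short Python sketch below measures what fraction of an audio signal is silence; the `pause_ratio` function, the 20 ms frame size, and the silence threshold are all illustrative assumptions, and the two signals are synthetic stand-ins for actual recordings:

```python
import numpy as np

def pause_ratio(signal, sr, frame_ms=20, threshold=0.02):
    """Fraction of fixed-size frames whose RMS energy falls below
    the silence threshold -- a rough proxy for how often the speaker pauses."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return float((rms < threshold).mean())

sr = 16000  # 16 kHz sample rate
rng = np.random.default_rng(0)

# Stand-in for natural speech: bursts of sound separated by half-second pauses
natural = np.concatenate(
    [seg for _ in range(5) for seg in (rng.normal(0, 0.3, sr), np.zeros(sr // 2))]
)
# Stand-in for a flat synthetic voice: continuous sound with no pauses
synthetic = rng.normal(0, 0.3, len(natural))

print(f"natural:   {pause_ratio(natural, sr):.2f}")   # roughly a third of frames are silent
print(f"synthetic: {pause_ratio(synthetic, sr):.2f}")  # essentially none
```

A markedly low pause ratio doesn’t prove a voicemail is fake, but it’s the kind of statistical oddity that automated tools combine with many other signals.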
Finally, there’s the message itself. Deepfake creators are aware of their clones’ shortcomings, so they keep the message short, urgent, and to the point. This may clash with the real person’s communication style and personality, which should give recipients pause.
How Can Companies and Individuals Minimize Deepfake Voicemail Impact?
Due to the insidious nature of this type of impersonation, mitigation and protection require a two-pronged approach.
Raising employee awareness through training is crucial. It’s hard to fool a vigilant person who takes precautions like verifying the request by reaching out to the real person through an alternate channel, such as the company’s official communications platform. Awareness and common sense go a long way.
Voicemail deepfakes will continue to evolve, meaning companies need to meet them on a technological level. Dedicated recognition tools are still in their infancy, but more traditional alternatives remain effective.
For example, companies that value cybersecurity already use a VPN for PC as a means of safeguarding their remote employees. These business VPNs encrypt employees’ connections, minimizing the chance of leaks and stolen data. More importantly, a VPN-secured connection also doubles as an access control measure: even if someone successfully stole login credentials via a deepfake voicemail, they still couldn’t reach company networks and resources without connecting through the VPN.
Aside from such reactive measures, companies would be wise to invest in proactive ones, like the best identity theft protection services. These monitor the dark web for leaked logins, employees’ personal data, and sensitive company information. Companies get notified as soon as such data surfaces, giving them time to change affected credentials and preempt financial fraud.
Conclusion
We’re witnessing the very beginnings of what is bound to become a highly sophisticated and damaging threat in the future. Recognizing deepfakes’ disruptive potential early and acting now will put you in a much better position to meet the evolving challenges head-on.
