Deepfake Video Calls: The New Business Scam

The $25 Million Deepfake That Should Terrify Every Business Owner

How a 10-minute video call just changed everything we know about digital trust


Last Tuesday morning, a finance director in Hong Kong joined what seemed like a routine video call with her company's CFO. She could see his face. She could hear his voice. Everything looked normal.

Ten minutes later, she authorized a $25 million payment.

The CFO was fake. The entire video call was an AI-generated deepfake.

And if you think this couldn't happen to you, you need to read what comes next.

The Digital Trust Crisis Nobody Saw Coming

Here's a number that should make you pause: In 2023, researchers counted about 500,000 deepfake videos circulating online. By the end of 2025, that number exploded to over 8 million.

That's a sixteen-fold increase in just two years.

We're not talking about obvious fakes anymore—the kind where someone's mouth doesn't quite match their words or the video looks glitchy. Modern deepfakes are so sophisticated that cybersecurity experts can barely tell the difference. Real-time synthetic actors can now mimic someone's face, voice, and mannerisms during a live video call.

Think about that for a moment. Every video call you take could potentially be with someone who isn't really there.

It's Already Happening at Scale

The $25 million fraud case isn't an isolated incident. One Indonesian financial organization reported blocking over 1,100 deepfake fraud attempts in a single quarter. That's more than 12 attempts every single day.

These aren't amateur scammers sending obvious phishing emails anymore. They're sophisticated criminals using AI to impersonate CEOs, CFOs, lawyers, and even family members. They study their targets through social media, recorded videos, and public appearances. Then they create digital doubles that can fool even people who work closely with the person being impersonated.

The scariest part? Most victims don't realize they've been fooled until it's too late.

The "Liar's Dividend" Problem

But here's where things get really dark. Deepfakes aren't just enabling fraud—they're destroying trust in everything we see and hear online.

Security researchers call it the "liar's dividend." When deepfakes become commonplace, people start questioning whether ANY video is real. Politicians caught on camera doing something wrong can simply claim it's a deepfake. Genuine whistleblower footage gets dismissed as AI manipulation.

We're entering a world where "I'll believe it when I see it" no longer works. Because seeing isn't believing anymore.

How to Protect Yourself Right Now

The good news? You don't have to be defenseless. Here are five critical steps every business and individual should implement immediately:

1. Create secret verification phrases.

Agree on specific words or phrases with your team that only real humans would know. Change them regularly. If someone on a video call can't provide the verification phrase, don't proceed—no matter how real they look.

2. Use multi-channel confirmation for high-stakes decisions.

Never authorize payments or sensitive actions based on a single video call. Require confirmation through a separate phone call to a known number, an in-person meeting, or a verification through your company's secure portal.

3. Slow down on urgent requests.

Deepfake scammers rely on creating artificial urgency. "I need this payment approved in the next hour." "This is confidential—don't tell anyone." These are red flags. Real emergencies can wait 15 minutes for proper verification.

4. Watch for subtle tells.

While deepfakes are getting better, they're not perfect. Look for unnatural eye movements, slight delays in lip-syncing, backgrounds that seem off, or lighting that doesn't quite match. Trust your instincts.

5. Educate your entire team.

Security is only as strong as your most vulnerable employee. Make sure everyone in your organization understands deepfake threats and knows the verification protocols.
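To make step 1 concrete, here is a minimal sketch of how a rotating verification phrase could work in practice. Everything here is illustrative: the word list, the secret, and the `daily_phrase` helper are hypothetical, not a product or standard. The idea is simply that both parties derive the same phrase locally from a shared secret and the current date, so the phrase rotates automatically and never has to be sent over the call itself.

```python
import hmac
import hashlib
import datetime

# Illustrative word list; a real deployment would use a larger one.
WORDS = ["amber", "falcon", "granite", "harbor", "juniper",
         "lantern", "meadow", "orchid", "quartz", "saffron"]

def daily_phrase(shared_secret: bytes, day: datetime.date, n_words: int = 3) -> str:
    """Deterministically map (secret, date) to a short word phrase."""
    digest = hmac.new(shared_secret, day.isoformat().encode(), hashlib.sha256).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:n_words])

# Both parties compute the phrase independently and compare it on the call.
secret = b"rotate-me-quarterly"        # distributed out of band, in person
today = datetime.date(2026, 1, 15)     # fixed date for a reproducible example
print(daily_phrase(secret, today))
```

Because the phrase is derived rather than memorized, there is nothing for an attacker to overhear and reuse the next day; compromising it requires the secret itself, which should only ever be shared face to face or through a channel you already trust.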

What This Means for 2026

We're at an inflection point. The technology behind deepfakes is advancing faster than our ability to detect them. By some estimates, deepfake creation tools are improving 10 times faster than detection methods.

This isn't a problem for tomorrow. It's a crisis for today.

The finance director in Hong Kong did everything she thought she was supposed to do. She verified the person on the call looked right and sounded right. She followed her company's approval process. And she still got fooled.

That should tell you everything you need to know about how serious this has become.

The Bottom Line

We're entering an era where digital verification isn't just about passwords and two-factor authentication anymore. It's about fundamentally rethinking how we confirm someone's identity in a world where AI can fake anything.

The question isn't whether deepfake fraud will affect your business or personal life. The question is whether you'll be ready when it does.

Because right now, somewhere in the world, someone is training an AI on your face, your voice, and your mannerisms. The technology is already here. The criminals are already using it. And the $25 million fraud case in Hong Kong is just the beginning.

The only question that matters: What are you going to do about it?


About RedHub AI

We help businesses implement AI safely and responsibly. Our mission is to ensure you get the productivity benefits of AI without the security nightmares. Learn more about our AI security assessments and deepfake protection protocols at RedHub.ai.

🛡️ Protect Your Business

Download our free "Deepfake Defense Checklist" with 15 verification protocols you can implement today.

Download Free Checklist →

What verification methods does your company use? Have you encountered suspected deepfakes? Share your experience in the comments below and help others stay safe.

About the Author

Todd Brooks is a technology strategist and AI security expert at RedHub.ai. He helps organizations navigate the opportunities and risks of artificial intelligence, ensuring they harness AI's power while protecting against emerging threats like deepfakes and AI-driven fraud.

Connect with RedHub.ai:
X | LinkedIn | Facebook | Instagram

© 2026 RedHub.ai. All rights reserved.

Frequently Asked Questions

What are deepfake video calls and how do they work?

Deepfake video calls are real-time AI-generated video communications where a fraudster impersonates someone else using sophisticated artificial intelligence technology. Modern deepfake systems use generative AI models trained on video footage, voice recordings, and images of the target person. The AI analyzes facial movements, voice patterns, speech cadence, and mannerisms to create a convincing digital double that can interact in real time during a video call. These systems have become so advanced that they can mimic micro-expressions, natural eye movements, and even the unique speaking style of the impersonated person. The Hong Kong case demonstrated that deepfakes can now fool even people who work closely with the person being impersonated, making them one of the most dangerous cybersecurity threats facing businesses today.

How much money has been stolen through deepfake fraud?

The most publicized case involved $25 million stolen from a Hong Kong company when a finance director was fooled by a deepfake video call impersonating the company's CFO. However, this represents just one incident in a rapidly growing wave of deepfake fraud. One Indonesian financial organization reported blocking over 1,100 deepfake fraud attempts in a single quarter—that's more than 12 attempts every day. Global losses from deepfake fraud are estimated to be in the hundreds of millions, though many cases go unreported due to embarrassment or fear of reputational damage. Security experts warn that as the technology becomes more accessible and sophisticated, financial losses could reach billions annually by 2027. The real cost extends beyond direct financial theft—companies also suffer reputational damage, loss of customer trust, increased insurance premiums, and the expense of implementing enhanced security protocols.

What are the warning signs of a deepfake video call?

While modern deepfakes are increasingly sophisticated, several warning signs can help you identify them. Watch for unnatural eye movements or a lack of natural blinking patterns; deepfake AI often struggles with realistic eye behavior. Look for slight delays or mismatches between lip movements and speech, especially during rapid talking or emotional expressions. Pay attention to the background and lighting inconsistencies—the person may look real, but their environment might have subtle anomalies. Notice if the person avoids certain head angles or movements, as deepfake systems sometimes work best from specific perspectives. Be suspicious of unusual urgency or pressure tactics combined with requests for sensitive actions like wire transfers or password changes. Audio quality that seems unnaturally perfect or occasionally robotic can also be a tell. Trust your instincts—if something feels off about a video call, even if you can't pinpoint exactly what, implement verification protocols before proceeding with any sensitive requests.

How can businesses protect themselves from deepfake fraud?

Protecting your business from deepfake fraud requires a multi-layered approach combining technology, protocols, and human awareness. First, establish secret verification phrases that change regularly and are known only to authorized personnel—require these phrases for any high-stakes video communications. Implement multi-channel confirmation for sensitive decisions: never authorize payments or data access based solely on a video call; require additional verification through a separate phone call to a known number or in-person confirmation. Create a culture where employees feel empowered to slow down and verify urgent requests, even from apparent executives. Deploy AI-powered deepfake detection tools that analyze video streams for manipulation artifacts. Restrict the amount of video and audio content of executives available publicly, as this material is used to train deepfake models. Conduct regular training so all employees understand deepfake threats and know your verification procedures. Consider implementing biometric verification systems and digital certificates for high-security communications. Most importantly, create a 'zero trust' environment where verification is standard practice, not an insult.
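The multi-channel confirmation rule described above can be captured in a few lines of code. This is a hypothetical sketch, not a real payment system: the `PaymentRequest` class and channel names are invented for illustration. The point it demonstrates is that a release decision should count *distinct* channels, so that a video call on its own can never unlock a transfer.

```python
from dataclasses import dataclass, field

# Policy constant: how many independent channels must confirm a request.
REQUIRED_DISTINCT_CHANNELS = 2

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    confirmations: set = field(default_factory=set)  # names of confirming channels

    def confirm(self, channel: str) -> None:
        """Record a confirmation; duplicates from the same channel collapse."""
        self.confirmations.add(channel)

    def releasable(self) -> bool:
        """True only once enough *different* channels have confirmed."""
        return len(self.confirmations) >= REQUIRED_DISTINCT_CHANNELS

req = PaymentRequest(25_000_000, "Vendor Ltd")
req.confirm("video_call")
assert not req.releasable()          # a single video call is never enough
req.confirm("callback_known_number") # separate call to a known number
assert req.releasable()              # second independent channel unlocks it
```

Because confirmations are stored as a set, repeating the same channel (say, two video calls) does not count twice, which is exactly the property that defeats a deepfaked caller.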

Are deepfake detection tools effective?

Current deepfake detection tools offer some protection but face significant challenges in an arms race with deepfake creation technology. The sobering reality is that deepfake generation tools are improving approximately 10 times faster than detection methods. Modern detection systems analyze various factors including facial biometrics, voice patterns, pixel-level anomalies, and behavioral indicators to identify synthetic media. Some tools achieve 90-95% accuracy in controlled laboratory conditions. However, real-world performance is often lower because deepfake creators constantly develop new techniques to evade detection. The most sophisticated deepfakes, like those used in the $25 million Hong Kong fraud, can bypass most commercial detection tools. This is why security experts recommend a layered defense approach: use detection tools as one component, but never rely on them exclusively. Combine technological solutions with human verification protocols, behavioral training, and procedural safeguards. The goal isn't perfect detection—it's creating enough barriers and verification steps that fraudsters move on to easier targets. As deepfake technology evolves, detection tools must continuously update, making ongoing investment in security infrastructure essential.

What is the 'liar's dividend' and why does it matter?

The 'liar's dividend' is a disturbing phenomenon where the existence of deepfake technology undermines trust in all digital media, even when it's authentic. As deepfakes become more prevalent and sophisticated, bad actors gain a powerful new defense: they can claim that genuine evidence against them is fake. A politician caught on camera making controversial statements can dismiss it as a deepfake. Corporate executives recorded engaging in unethical behavior can claim AI manipulation. Whistleblower footage and investigative journalism can be discredited without any actual proof of forgery. This creates a crisis of epistemic trust—we're entering an era where 'seeing is believing' no longer applies. The liar's dividend erodes public discourse, makes accountability harder to enforce, and creates space for disinformation to flourish. For businesses, this means that even real video evidence in legal proceedings, HR investigations, or compliance matters may be challenged as potential deepfakes. Organizations must now consider how to create verifiable, tamper-evident records of important communications and events. The broader societal impact is profound: as trust in digital media collapses, we may see increased polarization, difficulty establishing factual truth, and weakened democratic institutions.

How quickly is deepfake technology advancing?

Deepfake technology is advancing at an alarming pace that has caught even cybersecurity experts off guard. In 2023, researchers counted approximately 500,000 deepfake videos online. By the end of 2025, that number exploded to over 8 million—a sixteen-fold increase in just two years. The quality improvements are equally dramatic. Early deepfakes from 2018-2020 were relatively easy to detect with obvious visual glitches, unnatural movements, and poor audio synchronization. Today's deepfakes can operate in real time during live video calls, accurately mimicking facial expressions, voice patterns, and even subtle mannerisms. The barrier to entry is also dropping rapidly. What once required specialized technical knowledge and expensive hardware can now be accomplished with consumer-grade computers and freely available software. AI models like generative adversarial networks (GANs) and diffusion models are becoming more powerful and accessible. Experts predict that by 2027, deepfake technology will be so advanced and widespread that distinguishing real from fake will be nearly impossible without specialized forensic analysis. This exponential growth curve means that security measures implemented today may be obsolete within months, requiring constant vigilance and adaptation.

What should I do if I suspect I've been targeted by a deepfake?

If you suspect you've encountered a deepfake during a video call or received a deepfake message, take immediate action. First, do not comply with any requests for money transfers, password changes, or sensitive information sharing—stop all actions immediately. End the call politely but firmly, saying you need to verify the request through alternative channels. Immediately contact the person being impersonated through a known, verified communication method (their direct phone number, official email, or in-person). Do not use contact information provided during the suspicious call. Document everything: take screenshots if possible, note the time and date, record what was requested, and preserve any communications. Report the incident to your IT security team, compliance department, or cybersecurity officer. If financial fraud was attempted or occurred, contact law enforcement immediately and file a report with the FBI's Internet Crime Complaint Center (IC3) or your country's equivalent. Change any passwords or security credentials that may have been compromised. Conduct a security review to identify how the attacker obtained information about your organization or the impersonated individual. Consider implementing enhanced verification protocols to prevent future attempts. Finally, use this as a training opportunity—share the incident (appropriately) with your team to raise awareness and prevent others from falling victim.

Can deepfakes impersonate anyone, or just public figures?

While early deepfakes primarily targeted public figures and celebrities who had abundant video footage available online, modern deepfake technology can impersonate virtually anyone with surprising ease. The amount of training data required has dropped dramatically—creating a convincing deepfake once required hours of high-quality video, but current AI models can generate convincing results with just minutes of footage or even a handful of photos combined with a brief voice recording. This democratization of deepfake creation means that ordinary business executives, managers, and employees are now vulnerable targets. Criminals research their victims through social media, company websites, conference presentations, LinkedIn videos, and Zoom recordings. They study speech patterns, mannerisms, and contextual details to make their impersonations more convincing. The Hong Kong fraud case involved impersonating a CFO, not a celebrity—demonstrating that any professional with even a modest digital footprint can be targeted. Small business owners, family members, and individuals can all be deepfaked. The attackers often combine publicly available information with social engineering techniques to create scenarios that seem perfectly legitimate. This means everyone needs to be aware of their digital footprint and implement verification protocols for sensitive communications, regardless of their public profile.

What legal protections exist against deepfake fraud?

Legal protections against deepfake fraud are still evolving and vary significantly by jurisdiction, creating a challenging landscape for victims. In the United States, several states including California, Texas, and Virginia have enacted specific anti-deepfake laws, though these often focus on election interference and non-consensual intimate imagery rather than business fraud. Federal law can address deepfake fraud through existing statutes covering wire fraud, identity theft, and computer fraud, but prosecuting across international borders remains difficult since many deepfake scammers operate from countries with weak cybercrime enforcement. The European Union's AI Act includes provisions addressing deepfakes, requiring transparency about synthetic media, though enforcement mechanisms are still being developed. In Asia, where the $25 million fraud occurred, countries like China, South Korea, and Singapore have introduced deepfake-specific legislation, but legal remedies often prove slow and inadequate for victims seeking recovery. The primary legal challenge is attribution—proving who created and deployed the deepfake, especially when sophisticated criminals use anonymization techniques. For businesses, this means legal recourse alone is insufficient protection. Companies should focus on prevention through robust security protocols, comprehensive insurance coverage including cyber fraud policies, and contractual provisions requiring multi-factor verification for significant transactions. Working with cybersecurity legal specialists to understand jurisdiction-specific rights and implementing ironclad verification procedures offers better protection than relying on after-the-fact legal remedies.
