
Deepfake Scams Are on the Rise in 2025. Here’s How to Prepare Your Team.

[Illustration: an employee at her laptop while a large screen shows a deepfake video call with a suited executive, under the headline "Deepfake Scams Are on the Rise in 2025. Here's How to Prepare Your Team."]

Deepfake scams are no longer rare—they’ve become a routine threat in 2025. From fake video calls to AI-generated voice messages, attackers are using increasingly sophisticated techniques that are faster and harder to detect. The consequences? Companies are losing money, leaking sensitive data, and falling victim to threats they’re unprepared to defend against.

What makes this worse is that most businesses aren’t covered. Cyber insurance policies often exclude deepfake-related incidents, leaving companies to bear the financial and reputational damage alone.

The best defense is proactive training. Show your team what these threats look like before they happen. Simulation tools like Brightside AI can help you prepare by creating realistic deepfake scenarios that teach employees how to spot and respond to them.

What Are Deepfakes?

Deepfakes are AI-generated media that mimic real people’s appearances and voices with uncanny accuracy. The term combines “deep learning” (a type of machine learning) with “fake.” In practice, this means someone could generate a convincing video of your CEO or HR lead saying things they never said.

In 2025, creating a deepfake is alarmingly easy. Free tools available online allow anyone to fake a voice in minutes or create a video in just a few hours. These fakes are often good enough to fool even cautious employees—especially when paired with urgent requests that demand immediate action.

How Attackers Use Deepfakes

Deepfakes are being used in increasingly creative ways to execute scams. Here’s how attackers exploit them:

  • Live Video Calls: A scammer impersonates a company leader on platforms like Zoom or Teams, asking an employee to take urgent actions such as transferring funds or sharing confidential data. (Example: A Hong Kong office lost $25 million after a deepfake CFO video call.)

  • Recorded Video Calls: Attackers send pre-recorded deepfake videos via email or chat, claiming these are messages from executives with urgent instructions.

  • Voice Calls (Vishing): Using AI-generated audio, scammers mimic the voice of someone familiar to the victim, making fraudulent requests during phone calls.

  • Video Messages: Short synthetic clips posing as internal memos, customer requests, or HR updates often accompany phishing emails for added credibility.

These attacks are difficult to detect and verify in real time. Employees rarely have the tools or time to fact-check trusted faces and voices under pressure.
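One practical safeguard against the channels above is a hard rule: high-risk requests that arrive over video, voice, or chat must be confirmed on a separately established channel before anyone acts on them. Here is a minimal sketch of such a policy check; the channel names, action names, and the $10,000 threshold are illustrative assumptions, not from any standard or from Brightside AI:

```python
# Hypothetical policy check: flag requests that arrive over easily
# deepfaked channels and have not yet been confirmed out of band.
# Channel/action names and the threshold are illustrative assumptions.

HIGH_RISK_CHANNELS = {"video_call", "voice_call", "video_message", "chat"}
WIRE_THRESHOLD_USD = 10_000  # assumed policy limit; tune per organization

def requires_out_of_band_check(channel: str, action: str, amount_usd: float = 0.0) -> bool:
    """Return True if the request must be confirmed on a known, separate channel."""
    if channel not in HIGH_RISK_CHANNELS:
        return False
    if action == "wire_transfer" and amount_usd >= WIRE_THRESHOLD_USD:
        return True
    # Non-monetary but sensitive actions always require confirmation.
    return action in {"share_credentials", "share_customer_data", "change_payment_details"}

# The Hong Kong-style scenario: a large transfer requested on a live video call.
print(requires_out_of_band_check("video_call", "wire_transfer", 25_000_000))  # True
```

A rule like this removes the judgment call from the employee in the moment: the policy, not the convincing face on the screen, decides whether a callback is required.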

Why This Problem Isn’t Insured

Even if your business has cybersecurity insurance, it’s unlikely that deepfake scams are covered. Here’s why:

  • Hard to Prove: Demonstrating that a video or voice was fake after the fact is challenging and often inconclusive.

  • New and Evolving Threats: Insurance policies haven’t kept pace with the rapid evolution of AI-driven scams.

  • Blame on Human Error: Insurers may argue that employee negligence—such as failing to verify requests—is responsible for the loss.

As a result, companies are left footing the bill for damages caused by wire fraud, data breaches, or other consequences of deepfake scams.

What You Can Do

Detection software alone isn’t enough. Attackers will continue finding ways around it. The best strategy? Train your team by exposing them to realistic deepfake scenarios in a safe environment. This helps employees learn how to identify red flags before real threats occur.

Brightside AI offers an effective solution for this kind of training. It allows security teams to simulate deepfake attacks tailored to their organization’s workflows. Here’s how you can use Brightside AI for your team:

How to Run a Deepfake Simulation Using Brightside AI

Brightside AI enables organizations to run realistic deepfake simulations quickly and effectively. These simulations aren't just generic tests—they're tailored to each employee based on real-world data that Brightside collects.

The system scans open sources, checks for dark web exposures, and evaluates publicly available data about your employees. Using this information, Brightside's AI personalizes the simulation, making the attack feel more realistic to the target. This increases training impact and reveals who’s most at risk.

Here’s how to create a simulation:

1. Choose the Simulation Type

Brightside offers three types of simulations:

  • Live Video Call — best for testing responses under pressure (realism: high).

  • Recorded Video Call — best for email or phishing scenarios (realism: medium).

  • Video Message — best for standalone memos or HR notices (realism: medium).

Pick the format based on what kind of threat you want to simulate.

2. Select Your Target(s)

Identify which employees or groups will receive the simulation. Brightside provides helpful metrics such as:

  • Employee vulnerability scores

  • Training progress (e.g., courses completed)

  • Simulation history (e.g., past performance)

3. Pick the Caller (Avatar)

Choose the fake “person” who will appear in the video:

  • Upload a photo of someone you want to impersonate (e.g., CEO).

  • Or reuse an existing photo from your library.

Brightside uses this image to generate a deepfake avatar that appears in the video call, message, or recording.

4. Set the Environment

Customize details like:

  • The video platform (e.g., Google Meet, Microsoft Teams, Zoom).

  • The background (e.g., home office, meeting room).

These small details help the deepfake blend into the employee’s normal work context.

5. Add the Voice

Choose between two options:

  • Upload a voice sample for maximum realism.

  • Use text-to-speech technology for quick setup.

The AI also adjusts tone, urgency, and phrasing based on what it knows about the employee’s communication patterns and past behavior—making the message more convincing.

6. Launch the Simulation

Review your setup and launch the simulation. Brightside will:

  • Deliver the simulated deepfake attack.

  • Monitor employee actions (e.g., clicks, responses).

  • Log results automatically for post-simulation analysis.

You’ll also get insight into how each individual responded—useful for adjusting your awareness training strategy.
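The six setup steps above can be summarized as a single configuration object. The sketch below is purely illustrative: every field and method name is an assumption made for clarity and does not reflect Brightside AI's actual interface or API.

```python
from dataclasses import dataclass, field

# Hypothetical simulation config mirroring the six setup steps.
# All names here are illustrative assumptions, not Brightside AI's real API.

@dataclass
class DeepfakeSimulation:
    simulation_type: str        # step 1: "live_video_call", "recorded_video_call", or "video_message"
    targets: list[str]          # step 2: employees or groups receiving the simulation
    avatar_photo: str           # step 3: image used to generate the deepfake caller
    platform: str               # step 4: e.g. "zoom", "teams", "google_meet"
    background: str = "office"  # step 4: scene behind the caller
    voice_source: str = "tts"   # step 5: "tts" or "uploaded_sample"
    launched: bool = field(default=False, init=False)

    def launch(self) -> dict:
        """Step 6: mark the simulation live and return a summary for logging."""
        self.launched = True
        return {"type": self.simulation_type, "targets": len(self.targets), "live": True}

sim = DeepfakeSimulation(
    simulation_type="recorded_video_call",
    targets=["finance_team"],
    avatar_photo="ceo.jpg",
    platform="teams",
)
print(sim.launch())  # {'type': 'recorded_video_call', 'targets': 1, 'live': True}
```

Thinking of the setup this way makes it easy to see which choices (type, target, avatar, environment, voice) drive how realistic the resulting attack feels.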

Why Simulations Matter

Most employees have never encountered a real deepfake. Simulations help them build critical habits such as:

  • Pausing before acting on urgent video messages.

  • Double-checking unusual behavior from familiar faces.

  • Reporting suspicious calls or emails promptly.

Personalized simulations for effective employee training

Brightside’s personalized simulations and courses improve cybersecurity training—start your free demo, no card required.

Key Takeaways

  1. Deepfake scams are projected to cost companies $40 billion annually by 2027.

  2. Cyber insurance policies rarely cover losses from deepfake scams due to challenges in proving fraud.

  3. Employee training through tools like Brightside AI reduces vulnerability by 70% in simulated attacks.

  4. Simulations expose employees to realistic threats, helping them develop better responses.

Final Thoughts

Deepfakes aren’t science fiction anymore—they’re here, and companies face these attacks daily. Brightside AI provides an easy-to-use solution for preparing your team against one of today’s fastest-growing threats.

The attackers are using AI—it’s time your defenses did too.

Learn more about different types of scams in this article.

Subscribe to the newsletter “All about human risks”

Subscribe to our newsletter to receive a quick overview of the latest news on human risk and the ever-changing landscape of phishing threats.