Case Study

Phishing Simulations vs AI Phishing: What Works in 2026

Written by

Brightside Team

Published on

Mar 2, 2026

An employee joins a video call with “leaders” they recognize. The voices sound right. The faces look right. Then comes an urgent request to move money. In the Arup case, that request led to $25 million in transfers, and the people on the call were AI-generated deepfakes.

That story isn’t here to scare you. It’s here to make one point clear: if your training program only teaches people to spot weird spelling in emails, it’s not training for the attacks your business will face this year.

Before we get practical, let’s define a few terms in plain language.

What is social engineering?

Social engineering is when an attacker tricks a person into doing something that helps the attacker. The attacker doesn’t “break into” a system first. They persuade someone to open the door for them.

That “something” could be:

  • Sharing a password.

  • Approving a wire transfer.

  • Buying gift cards.

  • Installing software.

  • Changing bank details for a vendor.

What is AI-driven social engineering?

AI-driven social engineering is the same idea, but the attacker uses AI to make the trick more believable, faster to produce, and easier to scale.

AI helps attackers:

  • Write cleaner, more natural messages.

  • Personalize messages to a job role or department.

  • Hold longer conversations that feel “human.”

  • Impersonate voices, and sometimes faces.

Quick definitions (so you don’t get lost)

  • Phishing: Fake emails that try to get you to click, log in, pay, or share info.

  • Smishing: The same thing, but through SMS text messages.

  • Vishing: The same thing, but through voice calls.

  • Deepfake: Fake audio or video that looks or sounds like a real person.

  • Hybrid attack: An attack that moves across channels, like email plus a phone call.

Now the key question.

If attackers can use AI to sound like your CFO on a call, what should training look like?

The 2026 reality check: Why this keeps getting worse

A lot of leaders still treat AI threats like a “future” issue. That’s not what major threat intel teams are saying.

Google Cloud’s Cybersecurity Forecast 2026 says threat actors will use AI to increase the speed, scope, and effectiveness of attacks, and it frames 2026 as an “AI arms race” with defenders also using AI agents to improve security operations. It also calls out “Shadow Agent” risks (unsanctioned AI agents and tools in the business) and says identity and access management needs to evolve alongside this shift.

That matters for training because people work inside those messy conditions:

  • New tools roll out fast.

  • Policies lag behind.

  • Teams feel pressure to “just get it done.”

  • Attackers exploit the gaps.

AI phishing vs “classic” phishing: What the numbers say

Even if your email filtering is strong, you still have a people problem. Why? Because AI improves the parts of phishing that humans usually rely on to detect a scam.

A few stats show how fast this moved:

  • The average phishing-related data breach cost is reported as $4.88 million.

  • A widely cited metric reports a 1,265% increase in phishing emails since the rise of widely available generative AI tools.

  • A KnowBe4-cited figure says 82.6% of phishing emails include AI-generated content.

  • IBM researchers showed an AI could assemble a phishing campaign in 5 minutes using 5 prompts, compared with 16 hours for a human team.

Here’s the simple takeaway: attackers can test more ideas, faster. And they can tune messages until they land.

So what breaks first? Usually, the training program.

Three reasons legacy awareness programs are failing

Most organizations are not failing because they “don’t care.” They’re failing because the training model is outdated.

1) Training teaches recognition, not decisions

Old-school training often says: “Look for these signs.” Bad spelling. Strange sender. Weird link.

But AI removes many of those tells. So the real skill people need is decision-making under pressure.

Ask yourself: when an employee gets a message that looks clean and sounds normal, what do they do next?

That’s not a recognition problem. That’s a response problem.

2) One-size-fits-all training ignores how attacks actually work

Attackers don’t treat your company like a single blob. They target:

  • Finance, for payments and bank changes.

  • HR, for payroll and personal data.

  • IT, for password resets and access.

  • Executives, because “authority” moves people.

If the same training goes to everyone, you miss the point. The same is true for simulations. If everyone gets the same template, people learn to spot the template, not the risk.

3) Email-only simulations don’t match multi-channel attacks

Attackers don’t stay in one channel. They start in email, then follow up with a call. Or they call first, then send “the link you asked for.”

Vishing has also surged. One report cites vishing attacks up 442% year over year, driven in part by voice cloning. If your training never includes voice, your people are practicing the wrong sport.

What modern training should do instead

Better training doesn’t try to make everyone a human spam filter. It builds habits that reduce loss.

Here’s the model that works in the real world:

  1. Practice realistic scenarios.

  2. Measure behavior that matters.

  3. Trigger follow-up learning right after mistakes.

  4. Repeat, with increasing realism.

This is where platforms like Brightside AI fit, because they combine structured learning with simulations across email and voice, plus deepfake coverage.

Let’s break it down in practical terms.

Build a modern training program (step by step)

Step 1: Use role-based, personalized spear phishing simulations

People don’t fall for “Dear user” anymore. They fall for messages that match their job.

Brightside supports AI-powered spear phishing simulations that personalize scenarios using employee profile data such as department, role, language, location, tenure, and even work tools used. That means a marketer can receive a believable lure tied to ad platforms, while a finance employee gets a payment or invoice scenario.

What this accomplishes:

  • Less “template spotting.”

  • More real judgment practice.

  • Better data on which teams face which risks.
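To make the idea concrete, here is a minimal sketch of role-based lure selection. This is illustrative only: the scenario catalog, field names, and matching logic are assumptions for the example, not Brightside's actual data model.

```python
# Illustrative sketch: pick a lure scenario that matches the employee's
# role instead of sending everyone the same template.
from dataclasses import dataclass, field


@dataclass
class Employee:
    name: str
    department: str
    tools: list = field(default_factory=list)


# Hypothetical scenario catalog keyed by department.
SCENARIOS = {
    "Finance": "Urgent invoice approval from a known vendor",
    "Marketing": "Ad platform billing alert requiring re-login",
    "IT": "Password-reset ticket escalation",
}
DEFAULT = "Generic account-security notice"


def pick_scenario(employee: Employee) -> str:
    """Return the most role-relevant lure for this employee."""
    return SCENARIOS.get(employee.department, DEFAULT)


marketer = Employee("Ana", "Marketing", ["Google Ads"])
print(pick_scenario(marketer))  # Ad platform billing alert requiring re-login
```

Even this toy version shows why role-based lures beat "Dear user": the message connects to something the target actually works with.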

Step 2: Add vishing simulations, not just phishing

If your training never includes phone calls, you’re leaving a wide open path for attackers.

Brightside’s vishing simulator runs realistic AI-powered phone calls that test whether employees can handle voice-based social engineering. The simulated caller can be configured with a persona, an objective (like getting a password reset link), and a set of tactics such as authority, urgency, or social proof.

This matters because voice adds pressure. People respond differently when they hear a confident voice saying, “I need this now.”

Step 3: Practice hybrid attacks (email plus voice)

Hybrid attacks are where many teams break. The email seems normal. Then a call arrives that references the email. Now the employee thinks, “Oh, it’s real. They know about it.”

Brightside supports Hybrid Attack simulations that combine vishing with a phishing email containing a trackable link, so teams practice multi-channel awareness.

How to run a hybrid vishing simulation (example playbook)

  1. Pick a goal, like “get the target to click a link in a follow-up email” or “get a password reset link.”

  2. Create a caller persona (name, role, organization) and include a few realistic details, like a ticket number or recent account activity.

  3. Choose tactics and set the tone, for example formal with medium urgency, then escalate if the employee hesitates.

  4. Select the voice, either a preset voice or a custom cloned voice if your policy allows that for executive impersonation testing.

  5. Launch as a hybrid simulation so the call and email support the same story.

  6. Measure outcomes, then trigger follow-up training for employees who fail.
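The playbook above can be captured in a single scenario definition. The structure below is a hypothetical sketch for illustration; the field names are not a real product schema.

```python
# Illustrative scenario definition for a hybrid vishing + phishing
# simulation. Every field name here is an assumption, not a real API.
hybrid_scenario = {
    "goal": "get the target to click a link in a follow-up email",
    "persona": {
        "name": "Sam Reed",             # hypothetical caller identity
        "role": "IT Support",
        "organization": "Internal Helpdesk",
        "details": ["ticket #48213", "recent VPN login alert"],
    },
    "tactics": ["authority", "urgency"],
    "tone": {"style": "formal", "urgency": "medium",
             "escalate_on_hesitation": True},
    "voice": "preset:neutral",          # or a cloned voice, policy permitting
    "channels": ["voice", "email"],     # call and email tell the same story
    "follow_up": "trigger training for employees who fail",
}


def validate(scenario: dict) -> bool:
    """Basic sanity check: a hybrid run needs a goal and both channels."""
    has_goal = bool(scenario.get("goal"))
    has_channels = set(scenario.get("channels", [])) >= {"voice", "email"}
    return has_goal and has_channels


print(validate(hybrid_scenario))  # True
```

Writing the scenario down like this forces the important decisions (goal, tactics, escalation behavior) before anything reaches an employee.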

This is the same structure attackers use. You’re just running it safely and on purpose.

Step 4: Train for deepfakes, but focus on the right skill

Deepfake detection tools may help in some cases, but training should assume the content can look real.

Brightside includes deepfake simulations, plus training topics like deepfake identification and CEO fraud. In practice, this should lead to one skill: verification.

Instead of teaching “spot the fake,” teach:

  • “Don’t approve high-risk requests based on voice or face alone.”

  • “Use a second channel you already trust.”

  • “Follow the process, even if the request feels urgent.”

Step 5: Measure behavior that matters (not vanity metrics)

Click rates are easy to track. They’re also easy to misread.

Brightside tracks simulations across five actions: Delivered → Opened → Clicked → Entered credentials → Reported. It also marks a simulation failed if the attack goal is achieved before the employee reports it, within a configurable window.

That’s a big deal because it rewards the behavior you want in real life:

  • Reporting early.

  • Escalating before damage happens.

  • Slowing down when things feel urgent.

Brightside also provides reporting that includes metrics like click rate, credential submission rate, attachment open rate, and report rate, plus a NIST-weighted simulation failure rate and month-over-month trends.
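As a rough sketch, the rates described above can be computed from per-employee simulation results. The field names and the simplified failure rule below are assumptions based on this article's description; the real rule also involves a configurable reporting window, which is not modeled here.

```python
# Illustrative metric computation from simulation results.
# Field names and the failure rule are assumptions, not a real API.
from datetime import datetime


def rates(results: list) -> dict:
    """Percentage rates across the funnel actions described above."""
    n = len(results)

    def rate(fld):
        return round(100 * sum(r.get(fld, False) for r in results) / n, 1)

    return {
        "click_rate": rate("clicked"),
        "credential_submission_rate": rate("entered_credentials"),
        "report_rate": rate("reported"),
    }


def simulation_failed(goal_time, report_time) -> bool:
    """Simplified reading of the rule: failed if the attack goal was
    achieved before the employee's report (or no report ever came)."""
    if goal_time is None:
        return False                    # attacker never reached the goal
    return report_time is None or report_time > goal_time


sample = [
    {"clicked": True, "entered_credentials": False, "reported": True},
    {"clicked": True, "entered_credentials": True, "reported": False},
    {"clicked": False, "entered_credentials": False, "reported": True},
    {"clicked": False, "entered_credentials": False, "reported": False},
]
print(rates(sample))
# {'click_rate': 50.0, 'credential_submission_rate': 25.0, 'report_rate': 50.0}

early_report = simulation_failed(datetime(2026, 3, 1, 10, 0),
                                 datetime(2026, 3, 1, 9, 45))
print(early_report)  # False: reported before the goal was achieved
```

The point of tracking it this way is that an early report flips a "click" from a loss into a win, which matches the behavior you want in production.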

For security leaders, metrics like these are operational metrics, not just training metrics.

Common questions (and straight answers)

Does security awareness training still work if attacks use AI?

It can, but only if training matches the real attacks people face. AI makes phishing cleaner and faster to produce, so training has to focus on decisions and verification habits, not just spotting obvious mistakes.

What’s the difference between phishing and vishing?

Phishing is usually email-based. Vishing is voice-based. Vishing often relies on authority and urgency, and AI voice tools can make impersonation more convincing.

How often should teams run simulations?

Annual training leaves long gaps. Short, repeated practice works better because it builds habit. Brightside supports simulation periodicity options including quarterly, yearly, and evergreen continuous programs.

How do you run a vishing simulation without causing panic?

Use clear internal communication, set expectations, and avoid humiliating people. Keep scenarios realistic but respectful. Brightside’s vishing simulator includes a “Try in browser” option for testing templates before launching, and admins can review and tune the scenario before it reaches employees.

What should you track besides click rate?

Track reporting behavior and repeat patterns. Brightside includes report rate, credential submission rate, and trend indicators, which help answer “Are we improving?” instead of just “Did someone click?”

Where Brightside AI fits

A good article should be honest about what the platform does and doesn’t do.

Brightside is a training and simulation platform. It does not detect or respond to breaches in real time, and it does not monitor employee communications.

What Brightside does do is help you run training that matches AI-era attacks:

  • Personalized spear phishing simulations using employee profile data.

  • GenAI-powered vishing calls with configurable goals, tactics, urgency, and tone.

  • Hybrid simulations that test multi-channel escalation (voice plus email).

  • Deepfake simulations and training topics tied to modern impersonation risks.

  • Reporting that helps you identify high-risk groups and track improvement over time.

Try our vishing simulator

Experience the most advanced voice phishing simulator built for security teams. Create scenarios, test voice cloning, and explore automation features.

What should you be training for?

If an attacker can write a perfect email in seconds, and follow it up with a convincing voice call, what are you really training your people to do?

Google Cloud’s Cybersecurity Forecast 2026 frames this year as a new era where AI increases attacker capability and defenders respond with AI-driven operations. The human side has to keep up, too.

Modern training doesn’t aim for perfect employees. It aims for better decisions:

  • Pause.

  • Verify.

  • Report early.

  • Follow the process.

That’s how you stop AI-driven social engineering from turning one rushed moment into a major loss.