AI Spear Phishing in 2026: Statistics, Trends & CISO Action Guide

Articles

Written by

Brightside Team

The phone call lasted four minutes. The voice on the other end was calm, authoritative, and unmistakably familiar: a near-perfect replica of the CEO's southern Italian accent. The caller asked a financial executive to approve an urgent transaction. The executive was suspicious, not because anything sounded wrong, but because the request felt slightly off. So he asked a question only the real CEO would know how to answer. The caller couldn't. The fraud attempt failed.

That attack on Ferrari didn't start with an email. There was no suspicious link, no mismatched sender domain, no awkward phrasing to flag. The only thing that stopped it was a procedural instinct: a human decision to verify through a channel the attacker couldn't fake. Technology played no role in the defense.

Most security training programs aren't preparing employees for that moment. They're preparing them for a different kind of attack entirely, one that looks less and less like what attackers are actually sending.

Key AI Spear Phishing Statistics for 2026

Before getting into what changed and why it matters, here's where things actually stand.

82.6% of phishing emails now use AI in some form, whether for text generation, personalization, or obfuscation. That's not a fringe trend. It's the baseline. If your filters are catching AI-generated emails at the same rate they caught template-based ones five years ago, they're not performing as well as you think.

AI-generated phishing achieves a 54% click-through rate, compared to 12% for traditional campaigns[1]. That difference isn't small. It reflects a fundamental shift in quality: AI removes the grammar errors, implausible contexts, and cultural mismatches that employees were trained to spot. That figure comes from simulation environments, so exact real-world rates vary, but the directional claim holds across multiple independent sources — AI phishing performs significantly better than manual phishing.

Phishing attacks linked to generative AI surged 1,265% in the year after ChatGPT's public launch, and credential phishing attacks specifically increased 703% in the second half of 2024 alone. AI didn't improve attack quality incrementally. It made high-volume, personalized campaigns economically viable for the first time.

Voice phishing (vishing, meaning AI-powered phone calls designed to extract credentials or authorize payments) increased 442% between 2023 and 2024. Deepfake incidents grew 680% year-over-year, with Q1 2025 alone recording more incidents than in all of 2024. AI-assisted Business Email Compromise incidents rose 37% according to the FBI's 2025 IC3 report.

The numbers are significant. But the more important story is where these attacks are happening, and that's what the statistics alone don't tell you.

How AI Transformed Spear Phishing Between 2021 and 2026

It didn't happen overnight. It happened in three distinct phases, each one raising the floor for what attackers could accomplish without specialized expertise.

2021–2022: Machine Learning and the Detection Arms Race

In the early part of this five-year window, AI in cybersecurity mostly meant machine learning for detection and classification: spam filters, URL classifiers, anomaly detection, fraud scoring. Phishing was still largely template-driven. Emails arrived with tell-tale signs: suspicious sender domains, awkward phrasing, implausible scenarios. Security teams built classifiers to catch these patterns, and those classifiers worked reasonably well.

Attackers responded by testing their payloads against the same classifiers. The evasion game started here, but the tools required were still specialized, and the skill gap between a sophisticated attacker and a script kiddie remained wide.

2023: ChatGPT and the End of the Grammar-Error Tell

Broadly usable large language models changed everything about the interface between intent and execution. Suddenly, any operator could describe what they wanted and receive a grammatically perfect, contextually tailored phishing email in seconds. The translation cost between "I want to impersonate a CFO requesting a wire transfer" and a convincing email that could pass a trained reader's inspection collapsed to near zero.

The impact was measurable almost immediately. Academic research on LLMs in cybersecurity went from a handful of papers before 2023 to more than 80 papers published in that year alone. Security teams started reporting that AI-generated phishing was harder to distinguish from legitimate communications. The grammar-error tell, one of the most reliable heuristics employees had been taught, stopped working.

2024–2026: Voice, Video, and the Multi-Channel Shift

This is where the threat became genuinely different in kind, not merely in degree. Generative AI moved beyond improving email lures. It made voice and video impersonation viable for attackers who previously couldn't afford the expertise or infrastructure.

Voice cloning now requires as little as 20 to 30 seconds of audio. That audio exists publicly for almost every senior executive in the world: earnings calls, conference recordings, interview clips, LinkedIn videos. A convincing deepfake video can be created in under an hour using freely available tools. The $2.3 million fraud against an Australian local government wasn't the work of a sophisticated nation-state actor. The attackers used deepfaked voice and video of city officials to approve fraudulent payments, with tools that are now commercially accessible.

The FBI warned in December 2024 that criminals are exploiting generative AI to commit fraud at larger scale and with increased believability. ENISA's 2025 threat report found that AI-supported phishing represented more than 80% of observed social-engineering activity by early 2025. Email defenses had matured. Attackers noticed. They moved to channels where authentication is still social rather than cryptographic.

That shift is the real story. And it's the one most training programs haven't caught up to yet.

Why Email-Only Simulations Are No Longer Enough

Most security awareness programs were built for a 2019 threat model. They teach employees to look for suspicious links, mismatched sender domains, and unexpected requests. Those instincts are still worth having, but they cover a shrinking share of the actual attack surface.

Three specific gaps have opened up as attackers moved channels.

Gap 1: Channel coverage. Security training simulates email. Attacks now arrive via phone calls, WhatsApp messages, video conferences, calendar invites, and SMS. An employee who passes every phishing simulation you run has never been tested on the attack vector that cost a Hong Kong firm $25 million in a single deepfaked video conference.

Gap 2: Detection instincts. Security training teaches employees to spot visual signals: typos, suspicious URLs, mismatched headers. Voice and video attacks bypass all of those instincts entirely. Human detection accuracy for high-quality deepfake videos is only 24.5%. You can't train someone to spot a deepfake by showing them badly formatted emails.

Gap 3: Simulation realism. Template-based simulations deliver a static message and measure whether the employee clicked a link. Real AI-powered vishing calls adapt in real time to what the target says. An employee who knows to be suspicious of unexpected emails has no practiced response for a caller who already knows their name, their role, their manager's name, and is responding dynamically to every objection they raise.

The question isn't whether your employees can spot a bad email. It's whether they've ever had to decide if the voice on the phone is real.

5 Actions Every CISO Should Take Now

These aren't aspirational recommendations. They're the specific controls that address the gaps described above, controls the Ferrari attack, the Australian local government incident, and dozens of less-publicized cases have shown to be necessary.

1. Deploy phishing-resistant MFA across all high-value accounts.
AI can crack 85.6% of common passwords in under 10 seconds. Password-based authentication is critically vulnerable at the accounts that matter most: privileged access, finance, and executive accounts. Hardware keys or passkeys remove the credential-theft surface that phishing and vishing attacks are designed to exploit. This is the highest-impact single change most organizations haven't fully completed.
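To make the policy concrete, here's a minimal Python sketch of the sign-in gate this implies. The role names and factor taxonomy are illustrative assumptions, not any particular identity provider's API; the point is that privileged roles get no sign-in path that relies on a phishable factor.

```python
from enum import Enum

class Factor(Enum):
    PASSWORD = "password"
    SMS_OTP = "sms_otp"
    TOTP = "totp"
    PUSH = "push"                    # phishable via MFA fatigue or real-time relay
    FIDO2_PASSKEY = "fido2_passkey"  # origin-bound, resists credential relay
    HARDWARE_KEY = "hardware_key"

# Only origin-bound credentials survive a real-time phishing proxy.
PHISHING_RESISTANT = {Factor.FIDO2_PASSKEY, Factor.HARDWARE_KEY}

# Hypothetical role names: scope the hard requirement to high-value accounts first.
PRIVILEGED_ROLES = {"finance", "executive", "it_admin"}

def sign_in_allowed(role: str, factors: set) -> bool:
    """Privileged roles must present at least one phishing-resistant factor."""
    if role in PRIVILEGED_ROLES:
        return bool(factors & PHISHING_RESISTANT)
    # Everyone else still needs a second factor beyond the password.
    return len(factors - {Factor.PASSWORD}) >= 1

# A finance account with password + SMS OTP is rejected; a passkey passes.
assert not sign_in_allowed("finance", {Factor.PASSWORD, Factor.SMS_OTP})
assert sign_in_allowed("finance", {Factor.FIDO2_PASSKEY})
```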

2. Extend simulation programs beyond email to voice and video.
If your current platform only simulates email phishing, your program has a blind spot the size of a phone call. Employees need to practice recognizing vishing attacks, including AI-powered calls that adapt to their responses, before a real attacker calls them. Vishing simulations that use live AI voice agents are now available and give employees a rehearsed response rather than an untested one.

3. Implement procedural verification for high-value transactions.
Technology didn't stop the Ferrari attack. A procedural question did. Establish out-of-band confirmation workflows for wire transfers, credential resets, and access changes, regardless of how convincing the requester sounds or looks. A policy that says "no transfer above $X is authorized by phone call alone" costs nothing to implement and removes the entire attack surface that deepfake voice and video fraud exploits.
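That policy is simple enough to express directly. The sketch below is illustrative Python with a hypothetical threshold and request shape, but it captures the key design choice: how convincing the requester sounded is deliberately not an input to the authorization decision.

```python
from dataclasses import dataclass

# Illustrative threshold; the callback number for verification comes from
# HR records or a directory, never from the request itself.
OOB_THRESHOLD_USD = 10_000

@dataclass
class TransferRequest:
    amount_usd: float
    requested_by: str            # identity the caller or emailer claims
    channel: str                 # "email", "phone", "video_call", ...
    oob_confirmed: bool = False  # set only after a callback on a known-good number

def transfer_authorized(req: TransferRequest) -> bool:
    """No transfer above the threshold is authorized on the inbound channel alone."""
    if req.amount_usd < OOB_THRESHOLD_USD:
        return True
    # Note what is deliberately NOT an input: how real the voice or video seemed.
    return req.oob_confirmed

req = TransferRequest(amount_usd=250_000, requested_by="cfo", channel="phone")
assert not transfer_authorized(req)   # blocked until the out-of-band callback
req.oob_confirmed = True              # confirmed via an independent channel
assert transfer_authorized(req)
```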

4. Run hybrid attack simulations combining voice and email.
Attackers coordinate across channels. A phishing email arrives, followed by a vishing call from someone claiming to be IT support asking why the employee hasn't clicked the link yet. Employees who consistently pass email simulations still fail when a coordinated call follows. Training programs need to simulate this pattern specifically: not email and voice in isolation, but both together, as in the sketch below.
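Here's what such a coordinated scenario might look like as a definition, in an illustrative Python structure. The field names and stage names are invented for this sketch, not any vendor's campaign API; the detail that matters is that the call explicitly references the earlier email.

```python
# Stage names, pretexts, and fields are invented for illustration.
hybrid_campaign = {
    "name": "it-support-followup",
    "stages": [
        {"channel": "email", "pretext": "password_reset_notice",
         "delay_minutes": 0},
        {"channel": "voice", "pretext": "it_support_chaser",
         "references_stage": 0,   # the caller mentions the email just sent
         "delay_minutes": 20},
    ],
    # Success is a verified report, not merely "didn't click".
    "pass_criteria": "employee verifies via the official helpdesk before acting",
}

for stage in hybrid_campaign["stages"]:
    print(f'{stage["delay_minutes"]:>3} min -> {stage["channel"]}: {stage["pretext"]}')
```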

5. Establish AI account governance for employee-facing AI tools.
AI tools are becoming orchestration layers over sensitive systems. A compromised ChatGPT or Microsoft Copilot account connected to code repositories, email, and internal documents is no longer a minor incident. Phishing-resistant MFA, session monitoring, and least-privilege access apply to AI accounts exactly as they apply to any other privileged system. Most organizations haven't added AI accounts to their privileged access management scope yet.
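A first step is simply knowing which AI accounts sit outside your PAM scope. The sketch below is a hypothetical inventory check in Python; the account records and scope names are invented, but the query itself, AI accounts whose connectors reach sensitive systems and that PAM doesn't cover, is the audit worth running.

```python
# Account records and scope names are illustrative, not a real inventory format.
SENSITIVE_SCOPES = {"code_repos", "email", "internal_docs"}

accounts = [
    {"user": "copilot-svc@example.com", "type": "ai_assistant",
     "scopes": {"code_repos", "email"}, "in_pam": False},
    {"user": "chatgpt-team@example.com", "type": "ai_assistant",
     "scopes": {"internal_docs"}, "in_pam": True},
    {"user": "jdoe@example.com", "type": "human",
     "scopes": {"email"}, "in_pam": False},
]

def pam_gaps(accounts: list) -> list:
    """AI accounts whose connectors reach sensitive systems but sit outside PAM."""
    return [a["user"] for a in accounts
            if a["type"] == "ai_assistant"
            and a["scopes"] & SENSITIVE_SCOPES
            and not a["in_pam"]]

print(pam_gaps(accounts))  # ['copilot-svc@example.com']
```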

Top 5 AI-Powered Spear Phishing Training Platforms in 2026

Disclaimer: This article is published on the Brightside AI blog, so read the comparison below with that in mind. Platforms are listed alphabetically and assessed only on publicly documented capabilities; we haven't given Brightside credit for anything that isn't verifiable from public sources, and we've applied the same standard to every platform on this list.

Adaptive Security

Adaptive Security positions itself as an AI-threat-focused human risk platform built for enterprise-scale posture management. Its strengths are broad: executive exposure monitoring, AI-powered phishing personalization, and automation workflows that give security teams visibility into organizational risk without manual effort. It fits large enterprises best, particularly those that need posture automation alongside simulation, where the CISO needs both a risk dashboard and a training program from a single platform. On voice simulation, Adaptive's capability is growing but less explicitly documented than that of some of the AI-native challengers.

Best for: Large enterprises that need AI-powered posture monitoring and phishing simulation in one platform.

Arsen

Arsen is a Paris-based platform that covers phishing, smishing, and vishing simulation with strong European market positioning. Its multi-channel simulation capability is genuine, and it has live adaptive vishing available, a meaningful differentiator from platforms that offer only voicemail-style or template-based voice scenarios. Arsen's threat monitoring and collaboration-tool delivery options make it relevant for European organizations navigating NIS2 compliance requirements alongside their simulation programs. Deepfake simulation isn't currently a documented core feature, which matters if that vector is a priority for your threat model.

Best for: European organizations that need multilingual, multi-channel simulation with a strong compliance posture.

Brightside AI

Brightside AI is a Swiss simulation-first platform that covers all three modern attack vectors (email phishing, live AI vishing, and deepfake awareness) in a single system[5]. Its vishing simulator is live and adaptive: an AI agent conducts a real outbound phone call, responds dynamically to what the target says, and follows a configurable social engineering strategy (authority impersonation, fear/threat, commitment escalation) selected or recommended at setup. Admins can upload a one- to two-minute voice recording to create a custom executive voice clone for targeted simulations. Hybrid attacks, meaning a coordinated email and phone campaign launched from one workflow, are natively supported.

Phishing simulations are personalized using employee profile data, with AI selecting and filling templates based on role, department, location, and work tools. Difficulty is mapped to the NIST Phish Scale. Reporting tracks NIST-weighted simulation failure rates, vishing-specific metrics including answer rate and call duration, and month-over-month trend indicators across the full program.

Best for: Organizations that need realistic simulation across email, phone, and deepfake scenarios in one platform, particularly in financial services, legal, healthcare, and crypto sectors where social engineering risk is highest.

Jericho Security

Jericho Security's core strength is the speed and quality of AI-generated phishing pretexts. It covers email, voice, and SMS with a focus on conversational multi-channel realism and rapid personalization at scale. For organizations that need to run high-volume, highly personalized phishing campaigns quickly, think red team exercises, large enterprise rollouts, or organizations that want to vary pretexts frequently, Jericho's AI generation pipeline is a real asset. Its vishing capability is present, though the workflow is less explicitly documented in public materials than Brightside's step-by-step simulation builder.

Best for: Organizations that need rapid, high-volume personalized phishing simulation across multiple channels with strong AI-generated pretext variety.

Proofpoint

Proofpoint operates at enterprise scale and integrates human risk management directly into its broader threat intelligence and security control stack. Its phishing simulation is tied to real threat intelligence: simulations can reflect the actual campaigns targeting your industry and region, rather than generic templates. The suspicious-message reporting button and integration with Proofpoint's email security suite create a feedback loop between simulation and live threat detection that smaller platforms can't match. On voice and deepfake simulation, Proofpoint is less differentiated than the AI-native challengers, but for organizations already running Proofpoint for email security, the integration value is significant.

Best for: Large enterprises already in the Proofpoint stack that need phishing simulation deeply integrated with threat intelligence and email security controls.

Try our vishing simulator

Experience the most advanced voice phishing simulator built for security teams. Create scenarios, test voice cloning, and explore automation features.

The Shift Has Already Happened. Training Programs Need to Follow.

The Ferrari executive who answered that phone call didn't have specialized deepfake detection training. He didn't use a detection tool. He asked a question. That worked once, against one attacker, who hadn't anticipated it. It's not a repeatable defense at organizational scale.

The 1,265% surge in AI phishing isn't the real threat. The real threat is that most employees have never practiced saying no to a voice that sounds exactly like their CFO, adapting in real time to every objection they raise, referencing a real project they worked on last week. That scenario isn't hypothetical anymore. It's happening at organizations that thought their training programs were solid.

Organizations that rehearse these scenarios before attackers deploy them will be significantly better positioned than those still running email-only simulations. The attack has moved. The training needs to follow it.