The "Verified" Threat: How Identity Breaches and AI Agents Are Fueling Corporate Fraud

Written by
Brightside Team
TL;DR: Age verification laws created the largest concentration of biometric identity data in history, most of it sitting inside third-party vendors that have been breached repeatedly. That data now fuels AI-generated vishing attacks that nearly match real human scammers in plausibility — 3.4 vs 3.6 out of 5. The attack format is hybrid: a convincing phishing email followed minutes later by a voice call from someone who sounds exactly like the target's manager. Technical defenses alone can't stop it. The only proven layer is people who have already seen this attack before it reaches them for real.
In 2026, choosing the wrong cybersecurity awareness training platform is no longer just a budget mistake. It's a gap that criminals are already exploiting — and the fuel powering their attacks is sitting inside something most security teams haven't reviewed in months: their compliance vendor stack.
Here's what that looks like in practice. Your company's finance director gets a phone call from what sounds exactly like the CEO. The voice has the right accent, the right pace, and the right level of authority. It uses the director's first name. It mentions a recent internal project by name. It even references the CEO's home city.
The call ends with an urgent request: wire $200,000 to a supplier account before end of day. The director complies. The following morning, the CEO has no idea what they're talking about.
The CEO never made that call. An AI did. And the personal details that made it so convincing came from a database breach at an identity verification vendor your organization used to stay compliant with a new digital privacy regulation.
This isn't a future scenario. It's happening right now, at scale, against businesses that would describe their security posture as solid. Below, we break down the threat pipeline, show you what it's already cost organizations like yours, and review the five training platforms actually equipped to prepare your employees for it.
How Digital Identity Laws Are Creating the World's Largest Fraud Database
Why Businesses Are Being Forced to Collect Sensitive User Data
Over the last two years, governments around the world rushed to pass digital identity and age verification laws. More than 25 U.S. states now require digital identity verification for certain online services. The UK's Online Safety Act is fully enforced, with fines up to £18 million or 10% of global revenue. Australia introduced fines of up to AU$49.5 million for platforms that fail to verify user ages. The EU is mandating a continental Digital Identity Wallet for all member states by the end of 2026.
For businesses operating online, compliance means integrating third-party identity verification vendors and collecting government-issued IDs, facial biometrics, and dates of birth from users. The problem is what happens to all that data once it's collected.
Why Third-Party Verification Vendors Are Single Points of Catastrophic Failure
Instead of dozens of individual companies each holding small amounts of user data, compliance requirements funnel enormous volumes of sensitive information into a small number of major verification vendors. These vendors simultaneously serve dozens or hundreds of platforms, which means one successful breach exposes users across the entire client portfolio.
The breach record since 2024 is not a run of bad luck. It's a pattern.
In June 2024, security researchers discovered that a company called AU10TIX had left its administrative credentials exposed online for over 18 months. AU10TIX isn't a small player — it handles identity verification for TikTok, Uber, X, PayPal, LinkedIn, Coinbase, Bumble, and Fiverr. The credentials had been stolen from a staff member's computer in late 2022, posted to a public Telegram channel in March 2023, and were still working when researchers found them in June 2024. Anyone with access could view names, dates of birth, nationalities, government ID numbers, passport images, driver's license scans, and facial recognition data for users across all of those platforms. AU10TIX held ISO 27001 certification for four consecutive years while this was happening.
In November 2025, a database belonging to a company called IDMerit was found sitting on the internet with no password protection. It contained approximately one billion personal records across 26 countries, including 203 million from the United States — full names, national ID numbers, dates of birth, addresses, phone numbers, and identity verification logs. The database was discovered on November 11, 2025. It wasn't publicly disclosed until February 18, 2026 — 99 days later.
Discord's story is instructive in a different way. When a hack exposed 70,000 government ID images through their verification vendor 5CA in October 2025, Discord dropped 5CA and moved to a company called Persona. Researchers then found that Persona's code was publicly accessible on a government-authorized endpoint, revealing that the platform performs 269 distinct checks per verification transaction, including facial recognition against watchlists, screening against politically exposed persons databases, and something called "Selfie Suspicious Entity Detection." Discord dropped Persona too. The company burned through at least three verification vendors in five months.
In December 2025, Veriff was breached, exposing government-issued IDs and driver's license numbers for thousands of consumers. Three class-action lawsuits followed.
One vendor breach. Millions of users across dozens of platforms. All exposed at once — with no way to undo it.
Why Biometric Data Breaches Are Permanent, Not Temporary
Stolen passwords can be reset. Stolen credit card numbers can be cancelled. Stolen facial biometrics, passport scans, and fingerprints cannot be changed. Ever.
Research from Michigan State University found that biometric templates can be reverse-engineered with 60 to 80% success rates, meaning criminals can reconstruct a working facial scan from stolen data and use it to fool authentication systems. Biometric identity theft incidents rose by 1,300% in 2024 alone.
When a person's verified identity is stolen, it retains its value for life. The data collected to prove someone is who they claim to be becomes the perfect toolkit for a criminal to impersonate that exact person.
How Stolen Identity Data Gets Turned into AI-Powered Scam Calls
What an Autonomous AI Scam Agent Actually Does
Once verified identity data leaks onto dark web markets, criminals feed it into autonomous AI agent systems built to conduct convincing, multi-turn phone conversations.
In 2025, researchers at Rutgers University published a peer-reviewed study called ScamAgents: How AI Agents Can Simulate Human-Level Scam Calls. The paper describes an AI framework that combines large language models, goal-driven planning, contextual memory, and real-time text-to-speech synthesis to conduct fully automated scam calls without any human involvement after setup.
The part that changes the threat model is how it bypasses safety measures. When researchers tried to get major AI models including GPT-4 and Claude 3.7 to produce harmful content with a single direct request, those models refused 84 to 100% of the time. When the same goal was broken into a sequence of smaller, apparently innocent steps spread across a multi-turn conversation, the refusal rate dropped to just 17 to 32%.
The system doesn't ask the AI to "run a scam." It asks it to "confirm a routine policy update." Then "address the caller's concern about mail delays." Then "reassure them about the limited scope of the request." Each step looks harmless. The full sequence extracts a Social Security number.
Why Adding Leaked Identity Data Makes These Calls Nearly Undetectable
An AI agent armed with a target's legal name, home address, date of birth, and professional background can reference accurate personal details throughout a call. Each accurate detail builds trust and disarms skepticism.
The Rutgers study rated AI-generated scam dialogues at 3.4 out of 5 for plausibility and 3.6 out of 5 for persuasiveness when evaluated by independent human raters. Real-world human scam calls scored 3.6 and 3.9. The gap is small — and closing. For an unprepared employee, there's no practical way to tell the difference.
How Compliance Regulations Are Accidentally Training Users to Fall for Scams
Here's a paradox worth sitting with. The same regulations designed to protect people are training hundreds of millions of users to behave in exactly the way criminals want.
When every major platform suddenly asks users to upload a government ID or complete a facial scan, people stop questioning those requests. It feels normal. It feels expected.
When Australia launched its social media age verification requirements in December 2025, the country's National Anti-Scam Centre immediately identified four distinct scam types exploiting the law. Platform impersonation scams told users their accounts would be deleted without verification. Government impersonation scams claimed users had violated the law and faced fines. These scams appeared within days, not months.
The Direct Business Cost: Documented Losses and Attack Patterns
How Deepfake Executive Impersonation Leads to Wire Transfer Fraud
In January 2024, a finance worker at the Hong Kong office of the engineering firm Arup attended a video conference where every participant, including the CFO and multiple colleagues, was an AI-generated deepfake assembled from publicly available video and audio. The employee made 15 wire transfers totaling $25.6 million. Police later determined the deepfakes were built from existing online conference recordings. Arup's own Chief Information Officer later recreated a convincing deepfake of himself using open-source software. It took him 45 minutes.
A UAE bank lost $35 million when criminals used AI voice cloning to impersonate a company director, supported by fake corroborating emails. At Ferrari, an executive received a phone call from an AI voice clone of the CEO. The attack was stopped only because the executive asked a personal verification question the deepfake couldn't answer.
Deloitte projects that generative AI-enabled fraud will grow from $12.3 billion in 2023 to $40 billion by 2027. AI-generated deepfake scams rose 700% in 2025 alone. And here's the number that puts the human side of this in perspective: independent research found that human accuracy at identifying high-quality deepfakes sits at just 24.5% — worse than a coin flip. When attackers can source a verified identity profile from a breached compliance vendor and build a voice clone in under an hour, asking employees to spot the fake is not a security strategy.
How the Hybrid Attack Pattern Works Against Everyday Employees
Executives aren't the only targets. Rank-and-file employees face a different but equally sophisticated angle — one that security researchers call a Hybrid Attack: a coordinated combination of email phishing and voice phishing, timed to hit the same employee within minutes of each other.
Here's how it plays out. An employee receives an email from what appears to be the IT department. The message references a compliance verification update tied to a recent regulatory deadline. It contains a link. The email is personalized — it uses the employee's real name, references their department, and uses formatting that matches internal communications. Minutes later, their phone rings. The caller sounds exactly like their direct manager. Not approximately like them — exactly. The voice references the same compliance update, mentions a specific internal project by name, and adds urgency: this needs to be completed before end of business today.
The employee clicks the link.
Each channel makes the other more credible. The email primes the employee to expect contact. The voice call confirms it. The two working together are far more effective than either one alone — and the attack is specifically designed to overcome skepticism rather than rely on its absence.
Business Email Compromise already cost organizations $2.77 billion in 2024, with cumulative losses of $55.5 billion since 2013. When attackers anchor that kind of social engineering to real verified identity data and a voice clone indistinguishable from the actual executive, the conviction rate climbs further. The number of identity theft cases reported in the U.S. in the first three quarters of 2025 had already exceeded the total for all of 2024.
Why Your Vendor's Security Failure Becomes Your Legal and Reputational Problem
Your users submitted their government IDs because your platform required them to. Your terms of service directed them to that vendor. When their data leaks, they won't be angry at the vendor. They'll be angry at you.
Integrating a cheap or unvetted identity vendor to reduce compliance costs doesn't transfer the liability. It multiplies your exposure while reducing your control.
5 Best Cybersecurity Awareness Training Platforms for 2026
Let's be direct about what's actually in your control here.
You can't choose which vendors governments mandate for identity verification. You can't force those vendors to meet security standards that their own ISO certifications apparently can't guarantee. You can't pull your executives' data from databases that have already been breached, or prevent the dark web market for that data from operating.
What you can control is what happens inside your organization when an attack arrives. You can decide whether your employees have seen these attack patterns before. You can decide whether they know what a real hybrid attack feels like, how a cloned voice is deployed, and what to do when urgency and authority are being used against them simultaneously.
The Rutgers ScamAgents paper concludes exactly this: technical defenses alone aren't enough. The researchers call for multi-layered approaches that include user education as a core layer — not an afterthought — because no technical guardrail can fully stop an attack that distributes its intent across multiple convincing conversational turns.
Three things have to be true about training for it to work against this threat.
Simulations need to match the real attack. Running a phishing test with a generic email template isn't the same as running a hybrid simulation that combines a convincing email with a follow-up voice call. Simulations that only test one channel are training employees for a simpler version of the threat than the one they'll actually face.
Voice has to be part of the training. Most security awareness programs are built around email. That made sense five years ago. When attackers can clone a senior executive's voice from a three-second audio clip, employees need to have experienced what that sounds like in a safe environment before they encounter it under real pressure.
Follow-up training needs to happen immediately after a failure. The moment an employee fails a simulation is when they're most receptive to learning. An explanation delivered right then — in plain language, walking through exactly which tactic was used and why it worked — is worth more than a compliance video sent three days later.
These three criteria are the implicit scorecard for the platform reviews below.
1. KnowBe4
KnowBe4 is the most established name in security awareness training, holding over 16% market share and serving organizations of every size globally. Its platform delivers the industry's largest library of phishing simulations and compliance training content spanning 35+ languages, covering email phishing, smishing, vishing, and regulatory compliance. Its AI-driven adaptive learning adjusts training paths based on individual behavior, and its auditor-friendly reporting dashboards make compliance documentation straightforward. For large enterprises that need standardized training at scale, KnowBe4 remains the dependable legacy choice. Where it shows its age is in campaign management, which still requires significant manual admin work, and in reporting that tracks activity completion more than it measures whether employees actually behave differently after training.
Pros:
Largest training content library in the industry, covering a broad range of threat scenarios
Strong enterprise integrations with SIEMs, HR platforms, and identity providers
AI-driven personalized learning paths based on individual employee behavior
35+ language support for global and multinational deployments
Auditor-friendly reporting dashboards built for compliance documentation
Cons:
Campaign setup and content management require significant manual admin oversight
Reporting measures activity completion rather than genuine behavior change, making real ROI hard to quantify
2. Brightside
What sets Brightside apart is a GenAI Vishing Simulator that does what no other platform on this list offers: it lets admins clone real voices, define the psychological tactics the AI should use, set scenario-specific goals, and deploy fully automated multi-turn voice calls that mirror exactly what criminal AI agents produce in the wild. Employees hear the real thing in a safe environment before they encounter it from a criminal — and when they fail a simulation, immediate follow-up training fires automatically, at the moment of maximum learning impact.
The platform doesn't stop at vishing. It covers the full hybrid attack pattern through coordinated email-and-voice campaigns, phishing, smishing, and deepfake awareness content, all delivered through interactive story-driven courses built to keep employees engaged rather than clicking through static slides. The admin portal provides detailed behavioral reporting with department-level risk visibility, key enterprise integrations, and everything is backed by Swiss data privacy standards, which matters significantly given what this article has shown about what can go wrong when identity data is handled carelessly. Brightside is a Swiss award-winning platform recognized for bringing genuine AI-native threat simulation to the security awareness market.
Pros:
GenAI Vishing Simulator with voice cloning, contextual goal-setting, and hybrid attack capability
Covers phishing, vishing, deepfake, and smishing simulations in one platform
Interactive, story-driven course format drives engagement well beyond passive video modules
Swiss-engineered data privacy standards with a clean, intuitive admin and employee portal UX
Immediate automated follow-up training triggered by simulation failure, at the moment of maximum learning impact
Cons:
As a newer platform, total content library volume is smaller than legacy providers like KnowBe4
Best suited for organizations that have prioritized AI-driven social engineering; may exceed requirements for basic compliance-only programs
3. Hoxhunt
Hoxhunt takes a behavioral science approach to security awareness. Rather than punishing employees who fail simulations, it rewards those who identify and report threats — building genuine motivation over time rather than anxiety around getting caught. Its adaptive engine continuously adjusts simulation difficulty based on each user's performance, serving simpler scenarios to employees who are still learning and more advanced attacks to those who are ready. Everyone gets a program that feels relevant to where they actually are.
Hoxhunt runs continuous phishing assessments across email, SMS, and Slack and Teams channels, with strong employee risk scoring and minimal admin overhead due to automated campaign management. It's particularly popular across European enterprises focused on building a long-term security culture rather than running point-in-time annual tests.
Pros:
Adaptive simulation difficulty automatically calibrates to each individual employee's performance level
Reward-based model increases voluntary engagement and long-term participation rates
Continuous phishing assessments across email, SMS, and collaboration channels
Strong employee risk scoring and organizational risk mapping for security team reporting
Minimal admin overhead due to automated campaign management logic
Cons:
Limited emphasis on vishing and voice-based simulation compared to platforms purpose-built for AI threat scenarios
The reward model may not align with the culture of every organization, particularly those in highly regulated or formal industries
4. Riot
Forget the separate training portal. Riot's conversational AI chatbot, Albert, delivers bite-sized security lessons directly inside Slack and Microsoft Teams through messages that look and feel like normal workplace conversation. For fast-moving technology companies and remote-first teams, this removes the biggest friction point in training adoption: the requirement to log into something new.
Riot includes phishing simulation and reporting within the same chat interface, making it a genuinely lightweight option for organizations that want solid security fundamentals without a heavy implementation project. It's a particularly strong fit for fast-scaling startups where adoption speed matters as much as depth.
Pros:
Native Slack and Microsoft Teams integration delivers training inside existing daily workflows
Conversational chatbot format drives high completion rates among tech-savvy, time-constrained employees
Lightweight deployment with minimal IT configuration required
Phishing simulations and reporting available within the same interface as training
Fast rollout makes it effective for rapidly scaling teams needing immediate coverage
Cons:
Primarily focused on email and messaging threat vectors, with limited coverage of voice and deepfake attack scenarios
Content depth and customization are narrower than full enterprise platforms, which may not satisfy auditors in heavily regulated industries
5. Jericho Security
Start with the credential that matters most here: Jericho Security is trusted by the U.S. Department of Defense and won four Global InfoSec Awards at RSA Conference 2025. That track record reflects what makes it genuinely different from template-based platforms.
Jericho uses generative AI to analyze each organization's public digital footprint and build attack simulations specifically tailored to its industry, internal language, and targeted roles. No templates — every simulation is unique. It covers email, voice, messaging, and video channels in a single platform and integrates dark web data to reflect information attackers could realistically already have in hand. For finance, healthcare, government, and technology organizations facing sophisticated targeted attacks, this level of specificity is difficult to match.
Pros:
Generative AI produces unique, personalized spear-phishing simulations without template libraries
Multi-channel simulation coverage including email, voice, messaging, and video
Recognized by the U.S. Department of Defense and multiple industry awards for technical innovation
Dark web data integration allows simulations to reflect real leaked organizational information
Strong fit for finance, healthcare, government, and technology sectors facing targeted attacks
Cons:
Positioned toward mid-market and enterprise organizations; smaller teams may find pricing and complexity disproportionate to their needs
As a newer entrant, its reporting maturity and integration ecosystem are still developing compared to more established vendors
Try our vishing simulator
Experience the most advanced voice phishing simulator built for security teams. Create scenarios, test voice cloning, and explore automation features.
What Your Security Team Can Do Right Now
1. Audit your identity vendor stack. Ask every third-party identity vendor the following questions directly: Do you store biometric data beyond the verification transaction? What's your breach notification timeline? Do you conduct watchlist screening or risk scoring beyond what we requested? Require SOC 2 Type II certification, third-party penetration test results from the past 12 months, and a contractual clause guaranteeing breach notification within 48 hours. Remember: AU10TIX held ISO 27001 certification throughout an 18-month credential exposure. Certifications are a floor, not a guarantee.
2. Push for data minimization. NIST SP 800-63 Revision 4 (2025) recommends requesting a boolean "over 18" confirmation rather than collecting a full birthdate or government ID scan. Wherever technically possible, push vendors toward on-device processing, minimal data retention, and zero-knowledge architectures.
3. Move employees off SMS-based two-factor authentication. SIM swap attacks — where criminals use leaked national ID data to transfer a phone number — are a documented downstream consequence of identity database breaches. Authenticator apps and hardware security keys close this attack vector significantly.
4. Replace static training with simulation-led programs. Rules-based phishing modules don't prepare employees for hybrid attacks that combine email with a coordinated AI voice call. The platforms reviewed above are built for this. Choose one that simulates the actual threat your employees will face — including the voice component.
Frequently Asked Questions
Are digital identity laws directly responsible for identity data breaches?
The laws don't cause breaches. But the compliance infrastructure they require creates concentrated third-party databases that become high-value targets. The breach record between 2024 and 2026 confirms it — AU10TIX, IDMerit, Veriff, 5CA, and Persona all suffered significant failures within 18 months, exposing over a billion personal records combined, with several incidents going undisclosed for months.
How do AI scam agents bypass modern AI safety filters?
They break a malicious goal into a sequence of apparently harmless sub-steps spread across multiple conversation turns. No single message triggers a content filter. Peer-reviewed research published in 2025 showed this approach reduced LLM refusal rates from over 84% down to just 17% across GPT-4 and Claude 3.7 — a collapse in protection that existing single-turn safety systems weren't designed to catch.
What is a hybrid vishing attack?
A hybrid vishing attack combines a phishing email and a voice call into a single coordinated campaign. The email arrives first, establishing a plausible scenario and a sense of urgency. Minutes later, the target receives a phone call from someone who sounds exactly like their manager or a trusted internal contact, reinforcing the same scenario. Each channel makes the other more credible — which is precisely why the two-stage format is significantly more effective than either one alone.
What's the difference between phishing simulation and vishing simulation?
Phishing simulation sends fake malicious emails to test whether employees click links or submit credentials. Vishing simulation delivers fake phone calls to test whether employees can be manipulated through live conversation. AI-powered vishing adds voice cloning and multi-turn goal-driven dialogue — matching what real criminal AI agents now produce — which means employees need to train against the voice-based version specifically.
Which security awareness training platform is best for small businesses?
It depends on the threat you're most concerned about. Riot is the most accessible option for small teams due to its lightweight Slack and Teams integration, low setup overhead, and fast rollout. Brightside suits small businesses that have specifically identified AI vishing and hybrid attacks as a priority threat and need simulation capabilities that match what criminals are currently deploying.
How often should organizations run security awareness simulations?
NIST and ISO 27001 frameworks recommend continuous rather than annual testing. Quarterly phishing simulations are the minimum standard. For vishing and hybrid attack scenarios, at least one targeted simulation per team per year is a reasonable baseline, scaled upward based on role sensitivity and industry risk profile. For high-value targets like finance teams and executives, more frequent testing is worth the investment.
What should an employee do if they receive a suspicious identity verification request?
Don't submit any identity document or credentials through a link received by email, SMS, or direct message. Navigate directly to the platform's official domain in your browser instead. Confirm the request is legitimate by contacting IT through a separate, known channel. Then report the attempt to your security team — even if you're fairly sure it was genuine. The pattern of attempts tells the security team something important about what's being targeted.
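The "navigate directly, never click" habit can be partially mechanized in a reporting tool. Below is a rough Python sketch of such a check; the domain names are hypothetical placeholders, and real lookalike detection is considerably more involved (punycode, homoglyphs, open redirects).

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would load this from policy.
TRUSTED_DOMAINS = {"example-platform.com", "login.example-platform.com"}

def is_trusted_link(url: str) -> bool:
    """Accept a link only if its host is an exact match for, or a subdomain
    of, a known-good domain. Lookalike hosts such as
    example-platform.com.evil.io fail both checks."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

Even a simple check like this catches the most common lure in the verification scams described above: a legitimate-looking brand name buried inside an attacker-controlled domain.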
The Threat Has Already Evolved. Has Your Training?
The breaches will continue. The tools will get cheaper. The attacks will get faster, more personalized, and more convincing.
The finance director in our opening scenario wasn't careless. They were unprepared for a category of attack that barely existed at scale two years ago. A verified identity profile assembled from a breached compliance database, loaded into an AI voice agent, produces something that defeats most conventional security instincts. It sounds right. It knows accurate details. It adapts in real time to pushback. There's no suspicious link to hover over. And research shows the average employee has just a 24.5% chance of detecting a high-quality deepfake — worse odds than guessing.
What's in your control is what happens when an attack reaches your employees. Have they seen this before? Do they know what a cloned executive voice sounds like? Do they know how urgency and authority are used together to short-circuit judgment? Have they experienced a hybrid attack in a safe environment where failing was a learning opportunity rather than a $25 million wire transfer?
That preparation has to happen before attackers get their first attempt. The window to build it is open right now. It won't stay open indefinitely.


