DORA, NIS2, and AI Security Training: What European Companies Must Teach Employees Now

How-To

Written by

Brightside Team

Organizations in DORA or NIS2 scope can no longer treat security awareness as a once-a-year checkbox. DORA raises expectations for operational resilience across the EU financial sector, and NIS2 requires management-body training while supporting regular employee education as part of broader cyber risk management. At the same time, phishing, voice scams, and impersonation attacks are getting harder to spot because attackers are now using AI to build them. That's not because DORA or NIS2 are AI-specific laws — they're not. AI matters here because it changes the threat landscape your employees are walking into every day.

This guide explains exactly what in-scope organizations should teach employees, managers, and executives right now. You'll learn which topics are regulatory baseline, which are AI-era best practice, and how to evaluate the best tools to make it happen.

Who This Applies To

Before getting into what to teach, it's worth being precise about who these regulations actually cover.

DORA applies to EU financial entities, including banks, insurance companies, investment firms, crypto-asset service providers, and payment institutions, along with critical ICT third-party service providers that support them. If your organization provides ICT services to a financial entity, you may also fall within its scope.

NIS2 applies to essential and important entities across a broader range of sectors, including energy, transport, healthcare, digital infrastructure, and public administration. It doesn't cover every company in Europe. Whether you're in scope depends on your sector and size.

If your organization isn't directly covered by either regulation, the training guidance in this article still applies as strong operational practice. But it's not a direct legal obligation for you.

Why This Matters Now

Regulations aside, there's a practical reason this conversation is urgent. Vishing incidents surged between 260% and 442% year over year in 2025. Deepfake-enabled fraud caused over $200 million in losses in Q1 2025 alone. Attackers are now making live AI phone calls that adapt in real time, generating personalized phishing emails based on employee job titles and tools, and cloning executive voices from short audio recordings.

The training most organizations are running wasn't designed for any of that. It was designed for a threat environment that no longer exists.

On the regulatory side, DORA has been in force since January 2025, and NIS2 is already active across EU member states. Both push organizations toward stronger governance, clearer accountability, and more repeatable evidence of cyber resilience. A generic annual awareness video doesn't satisfy that intent, even if someone signs off on a completion report.

The Problem With Most Security Training Programs

Most security training programs share a few common weaknesses.

  • They run once a year, then disappear for eleven months.

  • They treat everyone the same, whether someone is a CFO or a junior account manager.

  • They focus entirely on email phishing and ignore voice attacks, impersonation calls, and deepfake scenarios.

  • They track whether employees finished a module, not whether behavior actually changed.

  • They're difficult to tie to any real evidence of resilience when an auditor or board asks.

The result is a program that looks good on paper and fails in practice. Employees recognize the company's own phishing test because it looks the same every quarter. Managers don't know what to do when someone calls claiming to be from IT and asks for credentials. Executives get no training on how attackers might impersonate them.

That gap is where real incidents happen.

What Happens If You Get This Wrong

Getting training wrong creates several compounding problems.

The first is operational. Employees who haven't practiced recognizing voice-based social engineering are more likely to comply when an attacker calls pretending to be a vendor or a colleague. Without drilled response habits, people freeze or give in under pressure.

The second is governance. NIS2 explicitly requires training for management bodies and supports regular employee training tied to cyber risk-management practices. DORA expects ICT security awareness and resilience training that's matched to each person's role and responsibilities. If your program can't show that management has been trained, that simulations have been run, or that results have been reviewed, you're not meeting the spirit of either regulation.

The third is audit evidence. If you can't show what was trained, who completed it, how employees performed in simulations, and what remediation followed failures, you have limited documentation to support governance reviews or incident response evaluations.

What DORA and NIS2 Actually Mean for Training

DORA in Simple Terms

DORA is a resilience framework, not just a risk awareness campaign requirement. It requires financial entities to manage ICT risk actively, maintain reliable incident response procedures, and test their operational continuity. Training is part of that picture because people are part of ICT risk.

The regulation expects training to support ICT risk awareness, resilience behaviors, and incident readiness, and it expects that complexity to be matched to each person's role. A software engineer needs different training than a customer service agent. An executive needs different training than both.

NIS2 in Simple Terms

NIS2 Article 20 requires that management bodies of covered entities follow training so they can identify risks and assess cybersecurity risk-management practices. It also encourages covered organizations to offer regular training to employees as part of their broader security program.

Article 21 includes cyber hygiene practices and cybersecurity training as part of the minimum security measures organizations must implement. While the management-body obligation is the clearest explicit training requirement, employee training is woven into what the regulation expects organizations to do.

Where They Overlap

Both regulations push in the same direction.

  • Governance and accountability need to be visible and documented.

  • Risk management should be active, not passive.

  • Incident preparedness requires practice, not just policy.

  • Training needs to connect to operational behavior, not just annual participation metrics.

If you're designing a training program that satisfies DORA and aligns with NIS2, you're essentially building the same thing: a continuous, role-based, evidence-driven program that keeps pace with real threats.

What Employees Must Be Taught Now

Here's where the regulations meet the real world. There are two buckets.

Regulatory Baseline

These are the topics that align closely with what DORA and NIS2 actually describe.

  • Cyber hygiene, including safe password practices, device hygiene, and avoiding risky behaviors.

  • Phishing and social engineering awareness.

  • MFA setup and access control practices.

  • Secure communications, especially around sensitive or financial information.

  • How to report a suspicious email, call, or message, and who to escalate to.

  • Business continuity basics for people whose roles matter during an incident.

AI-Era Best Practice

These topics aren't written into DORA or NIS2 text, but they reflect how attacks actually work in 2026. They belong in any program that's trying to build real resilience rather than just satisfy a checklist.

  • How to spot AI-generated phishing emails, which often look more polished and personalized than older attacks.

  • Voice phishing: recognizing suspicious calls, using callback verification, and not trusting caller ID.

  • Deepfake awareness: understanding that audio and video of known people can be faked, and what that means for approvals, instructions, and wire transfers.

  • Executive impersonation drills, so staff practice what to do when someone calls claiming to be the CEO.

  • Hybrid attack awareness: recognizing when a phishing email and a follow-up phone call are part of the same coordinated attack.

By Role

Not everyone needs every topic at the same depth.

All employees should cover cyber hygiene, phishing awareness, incident reporting, MFA, and the basics of AI-generated threats.

Managers and team leads need to go further on approval verification, payment validation, vendor checks, and what escalation ownership looks like in practice.

Executives and board members need training on governance responsibilities, decision-making under pressure, and the specific risk of voice and deepfake impersonation targeting them by name.

IT and security teams need to practice incident response drills, run and interpret simulation results, measure employee behavior over time, and produce audit-ready evidence.

What Organizations Typically Do vs What They Should Do

| What organizations typically do | What they should do instead |
| --- | --- |
| Annual awareness module | Continuous, role-based training program |
| Generic phishing emails | Multi-channel phishing, vishing, and deepfake simulations |
| Same content for everyone | Different paths for employees, managers, executives, and IT |
| Completion-based reporting | Reporting rate, failure trends, and remediation tracking |
| Manual campaign setup | AI-assisted scenario creation and automated follow-up |
| Compliance-only mindset | Resilience-first program with compliance evidence |

The gap between these two columns is where most organizations are sitting right now. They're delivering training, but they're not building the kind of behavioral resilience that protects against modern attacks or satisfies regulators who want to see an active, functioning program.

Annual completion is a baseline, not a strategy.

What Good Looks Like

A strong training program in 2026 has five characteristics.

Role-based learning paths. Different employees face different risks and need different knowledge. A finance manager approving payments is a higher-risk target than most, and the training should reflect that. IT administrators have a different attack surface than HR. Building separate paths by role isn't a nice-to-have; it's how you make training relevant.

Realistic, repeated simulations. A single annual phishing test tells you almost nothing about behavioral trends. A program that runs phishing, vishing, and deepfake simulations on a recurring schedule, with varying difficulty and attack types, shows you whether people are actually getting better over time.

Automatic follow-up after failures. When someone clicks a phishing link or hands over credentials in a vishing simulation, that's a teachable moment. The best platforms trigger a short, targeted follow-up module immediately, while the experience is still fresh, rather than waiting for the next annual cycle.
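As a rough sketch of what that trigger logic looks like — the failure types, module names, and `assign` callback below are illustrative assumptions, not any specific platform's API:

```python
# Hypothetical mapping from failure type to a short remediation module.
# Names are illustrative; a real platform would define its own taxonomy.
FOLLOW_UP_MODULES = {
    "phishing_click": "Spotting AI-generated phishing (5 min)",
    "vishing_disclosure": "Callback verification habits (5 min)",
    "credential_submit": "Credential safety refresher (5 min)",
}

def on_simulation_failure(employee_email, failure_type, assign):
    """Assign a targeted micro-module immediately after a failed simulation."""
    module = FOLLOW_UP_MODULES.get(failure_type)
    if module is not None:
        assign(employee_email, module)  # e.g. a call into your LMS or platform API
    return module
```

The point of the design is immediacy: the assignment fires from the failure event itself, not from a quarterly review of results.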

Behavior-based reporting. Click rates, report rates, credential submission rates, and month-over-month risk trends tell you far more than completion percentages. If your training dashboard only shows who finished a module, you're missing most of the picture.
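To make the arithmetic concrete, here is a minimal sketch of computing those rates from raw simulation results — the `SimResult` record and its fields are assumptions for illustration, not any vendor's export format:

```python
from dataclasses import dataclass

# Hypothetical per-employee simulation outcome; field names are illustrative.
@dataclass
class SimResult:
    month: str       # e.g. "2026-01"
    clicked: bool    # clicked the lure
    reported: bool   # reported the simulation
    submitted: bool  # entered credentials

def behavior_metrics(results):
    """Aggregate behavior-based rates (percent) over a batch of results."""
    n = len(results)
    if n == 0:
        return {}
    return {
        "click_rate": 100 * sum(r.clicked for r in results) / n,
        "report_rate": 100 * sum(r.reported for r in results) / n,
        "credential_rate": 100 * sum(r.submitted for r in results) / n,
    }

def monthly_trend(results, metric="click_rate"):
    """Month-over-month series for one metric, sorted by month."""
    by_month = {}
    for r in results:
        by_month.setdefault(r.month, []).append(r)
    return {m: behavior_metrics(rs)[metric] for m, rs in sorted(by_month.items())}
```

A falling click rate alongside a rising report rate is the trend you actually want to see; completion percentages can sit at 100% while both of those numbers stay flat.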

Privacy-conscious administration. Aggregate risk reporting helps security teams identify high-risk groups and trends. Surfacing personal employee-level failure data unnecessarily creates trust issues and, depending on how it's handled, may create GDPR considerations. Well-designed platforms keep reporting useful without overexposing individuals.

How to Operationalize It

Here's a practical approach to building this kind of program.

  1. Start with a gap analysis. Map what you're currently delivering against what DORA and NIS2 expect for your organization's role and sector. Identify missing topics, missing roles, and missing evidence trails.

  2. Focus on high-risk roles first. Finance, HR, legal, executives, and IT tend to carry the most risk. Executives are high-value impersonation targets. Finance teams are high-value fraud targets. IT has privileged access. Start your training investments where the risk is highest.

  3. Build a role-to-topic matrix. Map each employee group to its required regulatory baseline topics and its AI-era best practice topics. This makes rollout systematic and gives you a defensible record of how the program was designed.

  4. Run continuous simulations. Replace one-off annual campaigns with recurring simulations that vary by attack type, difficulty, and channel. Your employees should never know when the next simulation is coming, because attackers don't announce themselves.

  5. Review results with leadership. Bring risk trends, reporting rates, and simulation outcomes into management reviews. This serves two purposes: it keeps leadership informed and accountable, which satisfies governance expectations, and it builds the evidence you need if you're ever asked to demonstrate your program's effectiveness.
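The role-to-topic matrix from step 3 can live as simple structured data, which also doubles as a defensible record of how the program was designed. The role names and topics below are examples drawn from this article, not regulatory wording:

```python
# Example role-to-topic matrix. Roles and topics are illustrative;
# adapt them to your own gap analysis rather than treating them as a standard.
BASELINE = {"cyber hygiene", "phishing awareness", "MFA", "incident reporting"}

ROLE_TOPICS = {
    "employee":  BASELINE | {"AI-generated threat basics"},
    "manager":   BASELINE | {"approval verification", "payment validation", "vendor checks"},
    "executive": BASELINE | {"governance responsibilities", "deepfake impersonation"},
    "it":        BASELINE | {"incident response drills", "simulation analysis"},
}

def missing_topics(role, completed):
    """Topics still outstanding for one person, given their completed modules."""
    return ROLE_TOPICS[role] - set(completed)
```

Keeping the matrix in one place makes rollout systematic: assignment, gap reporting, and audit evidence all read from the same definition.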

Top 5 AI-Powered Security Training Tools for Companies

The platforms below are worth evaluating if you're building or upgrading a security training program that covers modern attack vectors, supports role-based delivery, and produces useful evidence for compliance and leadership reviews.

1. Brightside AI

Brightside AI is a Swiss security awareness platform built for the AI-threat era. It brings together structured training courses, AI-powered phishing simulations, live AI vishing, deepfake awareness scenarios, and hybrid attack campaigns in a single workflow.

The platform's vishing simulator is particularly strong. It runs live, adaptive AI phone calls where the AI agent takes on a configured persona, pursues a specific social engineering goal, and responds dynamically to what the employee says. Admins can clone executive voices from short recordings, combine voice calls with coordinated phishing emails in a hybrid campaign, and test scenarios in-browser before launch.

Phishing simulations are personalized using employee profile data including role, department, location, tools used, and tenure, so a finance manager doesn't get the same test as a marketer.

Reporting covers click rate, report rate, credential submission rate, and month-over-month risk trends at organizational and group level, with exportable data for audit and leadership reviews. Follow-up training is triggered automatically when someone fails a simulation.

Best for: Organizations that want phishing, vishing, deepfakes, and structured training in one place, with meaningful reporting and minimal admin overhead.

2. KnowBe4

KnowBe4 is the market's largest security awareness platform, serving over 65,000 organizations. It offers a wide content library, phishing simulation tools, and AI-assisted module suggestions that adapt to user risk profiles.

Its scale is its biggest advantage. Enterprise teams get a tested workflow, a large library of templates, and integrations with most HR and identity providers. For organizations that value breadth and organizational continuity, it's the safest incumbent choice.

Best for: Large enterprises that need a proven, scalable platform with wide content coverage.

3. Hoxhunt

Hoxhunt takes a behavioral science approach. Its simulations adapt based on individual performance, and it focuses on positive reinforcement rather than punitive testing. It reports solid phishing reduction outcomes among its customer base.

The platform works well for organizations that want to improve employee reporting behavior and build a healthier security culture rather than just run tests. Its deepfake and vishing coverage is more limited compared to platforms that focus on simulation breadth.

Best for: Teams that want an adaptive, gamified approach centered on behavioral change.

4. SoSafe

SoSafe is a European platform with a clear awareness and behavioral science positioning. It's well suited to organizations that care about cultural fit and want a platform built with European buyers in mind.

Its multi-language support and European roots make it a natural fit for NIS2-oriented programs across German-speaking and broader EMEA markets. Multi-vector simulation coverage is less developed than platforms that focus on phishing, vishing, and deepfake breadth.

Best for: European organizations that want a platform with a strong EMEA fit and awareness-first culture.

5. Jericho Security

Jericho Security positions itself around AI-driven simulations and is worth evaluating if you're looking for a newer entrant with a focused emphasis on AI-generated attack realism. Based on available product information, it covers phishing and vishing scenarios with an AI-forward approach.

Buyers should validate platform breadth, admin workflow, and reporting depth against their specific needs before committing.

Best for: Security-first buyers who want AI-heavy simulation capabilities and are comfortable evaluating a newer platform.

Comparison at a Glance

| Tool | Live AI Vishing | Deepfake Sim | AI Personalization | Multilingual | Hybrid Attacks |
| --- | --- | --- | --- | --- | --- |
| Brightside AI | Yes | Yes | Yes | Yes | Yes |
| KnowBe4 | Tier-dependent | No | Yes | Yes | No |
| Hoxhunt | Limited | Limited | Yes | Yes | No |
| SoSafe | Limited | No | Yes | Yes | No |
| Jericho Security | Yes | Yes | Yes | Limited | No |

What to Look For in a DORA- and NIS2-Ready Platform

If you're evaluating platforms specifically for in-scope DORA or NIS2 programs, these criteria matter more than feature lists.

  • Role-based curricula. Can you assign different content to executives, managers, IT, and general staff without manual workarounds?

  • Management and board training support. NIS2 explicitly mentions management-body training. Does the platform support that with appropriate content?

  • Simulation breadth. Does it cover phishing, vishing, and deepfake scenarios, or only one channel?

  • Audit-friendly reporting. Can you export evidence of completions, failure rates, and remediation for compliance reviews?

  • Automated remediation. Does the platform trigger follow-up training automatically, or does that require manual setup every time?

  • Multilingual delivery. European workforces are multilingual. Your training platform should be too.

  • Privacy boundaries. What data does the platform expose at the individual level, and how does that interact with your GDPR obligations?

Common Mistakes to Avoid

  • Assuming DORA or NIS2 applies to every European company. Check your sector and size before drawing compliance conclusions.

  • Treating deepfake and AI vishing training as an explicit regulatory text requirement. It's risk-based best practice, not a quoted legal mandate. Frame it that way internally so you're not overclaiming to auditors.

  • Measuring training by completions alone. Completions tell you who opened a module. They don't tell you whether anyone's behavior changed.

  • Skipping management-body training. NIS2 is explicit about this, and it's one of the most frequently overlooked gaps in European security training programs.

  • Sending the same simulation repeatedly. Employees learn to recognize your tests, which produces artificially low failure rates without any real improvement in behavior.

  • Choosing a platform that can't produce useful evidence. A training program that doesn't generate reporting you can show leadership or regulators creates work, not protection.

Conclusion

DORA and NIS2 are useful forcing functions, but satisfying a regulation isn't the real goal. Building an organization where people can actually recognize and respond to the attacks they're going to face is.

That means covering more than email phishing. It means running live voice simulations. It means giving executives specific practice for the scenarios they're most likely to encounter. It means tracking whether employees are getting better over time, not just whether they clicked through a module.