The 'Root Permissions' problem: why Agentic AI poses unique data security risks



Think about the last time you gave someone the keys to your house. You probably thought carefully about who you trusted with that level of access. Now imagine giving those same keys to a robot that never sleeps, can make copies of itself, and might hand those keys to strangers without asking you first.

That's essentially what happens when organizations deploy agentic AI systems with broad data access. These AI agents need extensive permissions to do their jobs effectively. But this creates a dangerous situation that security experts call the "root permissions" problem.

What Is Agentic AI and Why Should You Care?

Agentic AI is the broad category of artificial intelligence that encompasses systems capable of autonomous decision-making and goal-directed behavior. Unlike traditional chatbots and simple AI tools that wait for your questions and provide answers, agentic AI systems can independently plan, reason, and take actions to achieve specific objectives without constant human oversight.

AI agents are the practical implementations of agentic AI—they're the specific software programs that actually perform tasks on your behalf. These agents can book flights, send emails, schedule meetings, and even make purchases without asking for permission each time. They work across multiple systems simultaneously, accessing your calendar, email, banking apps, and social media accounts all at once to complete a single task. This sounds convenient, and it is. But it also creates serious security risks that most organizations haven't fully considered.

Meredith Whittaker, a leading voice in digital privacy and security, has been sounding the alarm about these risks. As President of the Signal Foundation—the organization behind the encrypted messaging app Signal—Whittaker brings decades of experience in technology and privacy advocacy to her warnings about agentic AI. Before joining Signal, she spent over a decade at Google, where she co-founded the company's Open Research group and later became a prominent critic of big tech's approach to AI ethics and worker rights.

Speaking at SXSW 2025, Whittaker didn't mince words about the dangers of agentic AI. She described using these systems as "putting your brain in a jar," warning that they need access to almost everything you do online to function properly. "For agentic AI to work as advertised—to book your concert tickets, manage your calendar, message your friends—it needs access to all of that data," she explained. "Your browsing history, your credit card, your calendar, your contacts, your messaging apps."

What Does "Root Permission" Mean in Simple Terms?

In computer security, "root permission" means having complete control over a system. It's like having a master key that opens every door in a building. When we talk about agentic AI having root permissions, we mean these systems often get access to far more data and systems than any single person would normally have.

Whittaker uses a technical but telling analogy to explain this concept: "It's like giving AI root permissions to all the relevant databases, and there's no way to do that securely with encryption right now." She points out that the infrastructure for secure, encrypted access across multiple systems simply doesn't exist yet, forcing organizations to choose between functionality and security.

Here's why this happens: AI agents need to complete complex tasks that might involve multiple steps across different platforms. To book a business trip, for example, an AI agent might need to:

  • Check your calendar for available dates

  • Access your email to find travel preferences

  • Use your credit card information to make purchases

  • Connect to airline and hotel booking systems

  • Send confirmation details to your colleagues

Each of these steps requires different permissions. Rather than setting up complex, limited access for each task, many organizations simply give AI agents broad permissions that cover everything they might need to do.
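
To make the contrast concrete, here is a minimal sketch in Python of the difference between one broad grant and narrowly scoped, per-step permissions. The scope names are invented for illustration.

```python
# A minimal sketch contrasting one broad grant with per-step, narrowly scoped
# grants for the trip-booking example. Scope names are hypothetical.

BROAD_GRANT = {"calendar:*", "email:*", "payments:*", "bookings:*", "contacts:*"}

# Minimal scopes the agent actually needs for each step of the task.
STEP_SCOPES = {
    "check_availability": {"calendar:read"},
    "find_preferences":   {"email:read"},
    "purchase_tickets":   {"payments:charge", "bookings:write"},
    "notify_colleagues":  {"email:send"},
}

def scopes_for(step: str) -> set[str]:
    """Return only the permissions required for the current step."""
    return STEP_SCOPES.get(step, set())

if __name__ == "__main__":
    for step in STEP_SCOPES:
        print(f"{step}: grant {sorted(scopes_for(step))} "
              f"instead of {len(BROAD_GRANT)} broad scopes")
```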

The Privilege Gap: When AI Agents Have More Power Than Humans

Traditional computer security follows a simple principle: give people only the access they need to do their jobs. A marketing employee doesn't need access to payroll systems. An accountant doesn't need to see customer service tickets. This approach, called "least privilege," helps limit damage if someone's account gets compromised.

But agentic AI breaks this model. These systems often need access to multiple departments' data and systems to complete their tasks. An AI agent helping with customer service might need to access:

  • Customer databases

  • Billing systems

  • Product information

  • Shipping details

  • Return policies

  • Previous support tickets

This creates what security experts call a "privilege gap." If an attacker compromises an AI agent, they potentially gain access to far more systems and data than they could by compromising any single human employee's account.
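
To see the gap in concrete terms, compare the combined permissions a single cross-department agent holds with what any one employee holds. A toy illustration, with department and scope names made up:

```python
# A toy illustration of the privilege gap: one cross-department agent holds the
# union of scopes that would normally be split across several employees.
# Department and scope names are made up.

human_scopes = {
    "support_rep":     {"tickets:read", "customers:read"},
    "billing_clerk":   {"billing:read", "billing:write"},
    "logistics_staff": {"shipping:read"},
}

# The customer-service agent needs a bit of everything to resolve cases end to end.
agent_scopes = set().union(*human_scopes.values()) | {"returns:write"}

for role, scopes in human_scopes.items():
    print(f"{role}: {len(scopes)} scopes")
print(f"customer-service agent: {len(agent_scopes)} scopes "
      "(compromising it exposes every department at once)")
```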

Why Traditional Security Approaches Don't Work

Most organizations use identity and access management (IAM) systems designed for humans. These systems assume that users will log in, perform specific tasks, and log out. They rely on static permissions that don't change very often.

AI agents work differently. They operate continuously, switch between tasks rapidly, and need different permissions depending on what they're doing at any given moment. Traditional IAM systems can't keep up with this dynamic behavior.

Static API keys and broad service accounts make the problem worse. These credentials don't expire, can't be easily monitored, and often provide more access than necessary. When AI agents use these credentials, it becomes nearly impossible to track what they're doing or limit their access appropriately.
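
A common alternative is to issue agents short-lived, narrowly scoped credentials instead of static keys. Here is a rough sketch of the idea; the token format and helper names are illustrative, not any particular vendor's API.

```python
# A sketch of short-lived, scoped credentials for an agent, as an alternative to
# static API keys. Token format and helper names are illustrative.
import secrets
import time
from dataclasses import dataclass

@dataclass
class AgentToken:
    value: str
    scopes: frozenset
    expires_at: float  # Unix timestamp after which the token is invalid

def issue_token(scopes: set, ttl_seconds: int = 300) -> AgentToken:
    """Mint a credential that carries only the requested scopes and expires quickly."""
    return AgentToken(secrets.token_urlsafe(32), frozenset(scopes), time.time() + ttl_seconds)

def is_allowed(token: AgentToken, required_scope: str) -> bool:
    """Reject expired tokens and anything outside the granted scopes."""
    return time.time() < token.expires_at and required_scope in token.scopes

token = issue_token({"calendar:read"}, ttl_seconds=120)
print(is_allowed(token, "calendar:read"))    # True while the token is fresh
print(is_allowed(token, "payments:charge"))  # False: scope was never granted
```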

Real-World Risks: What Happens When Things Go Wrong

The risks of over-privileged AI agents aren't theoretical. Here are some scenarios that keep security professionals awake at night:

  • Data Breaches at Scale: If an attacker compromises an AI agent with broad database access, they could potentially steal customer records, financial information, and trade secrets all at once. The agent's legitimate access makes this theft harder to detect.

  • Confused Deputy Attacks: Attackers can trick AI agents into performing actions on their behalf. For example, an attacker might manipulate an AI agent into transferring money or sharing confidential information by crafting requests that seem legitimate to the AI (a simple mitigation is sketched after this list).

  • Privilege Escalation: AI agents might be programmed to request additional permissions when they encounter obstacles. Attackers can exploit this behavior to gradually gain access to more sensitive systems.

  • Untraceable Actions: Because AI agents can perform thousands of actions per minute, it becomes difficult to audit their behavior or trace the source of security incidents.
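
For the confused deputy scenario in particular, one common mitigation is to check every action an agent attempts against what the original user actually delegated, rather than trusting whatever instruction reached the agent. A simplified sketch, with names chosen for illustration:

```python
# A simplified guard against confused-deputy requests: every action the agent
# attempts is checked against what the original user delegated, not against
# whatever instruction reached the agent. Names are illustrative.

DELEGATED_ACTIONS = {
    "alice": {("email", "send"), ("calendar", "write")},
}

def authorize(delegating_user: str, resource: str, action: str) -> bool:
    """Allow the action only if the human the agent acts for actually delegated it."""
    return (resource, action) in DELEGATED_ACTIONS.get(delegating_user, set())

# The agent received an (attacker-crafted) instruction to move money, but Alice
# never delegated payment actions, so the request is refused.
print(authorize("alice", "payments", "transfer"))  # False
print(authorize("alice", "email", "send"))         # True
```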


The Encryption Problem: Breaking Down Security Barriers

Meredith Whittaker raises another critical concern about agentic AI and data security. These systems often need to access encrypted information to function properly. When an AI agent summarizes your private messages or helps you respond to emails, it must read the actual content of those communications.

This requirement breaks the security model of encrypted messaging apps. Signal, WhatsApp, and other secure platforms protect your messages by encrypting them so that only you and the recipient can read them. But if an AI agent needs to process these messages, the encryption becomes meaningless.

Recent research shows that even when AI communications appear encrypted, attackers can analyze patterns in the data to extract meaningful information. This technique, called token pattern analysis, can reveal sensitive details about conversations without actually decrypting the messages.

Cloud Processing: Another Layer of Risk

Most AI agents don't run on your local device. They process data in the cloud, which means your sensitive information travels across the internet to remote servers. This creates additional opportunities for interception and unauthorized access.

Cloud processing also means that your data might be stored on servers you don't control, in countries with different privacy laws, or alongside data from other organizations. The complexity of cloud infrastructure makes it harder to ensure that your information stays secure.

The Speed Problem: When AI Moves Too Fast to Stop

Human employees make mistakes, but they usually make them slowly. An employee might accidentally send an email to the wrong person or delete an important file, but these errors happen one at a time and can often be caught and corrected quickly.

AI agents operate at machine speed. They can make thousands of decisions and perform thousands of actions in the time it takes a human to read a single email. If an AI agent starts behaving incorrectly or gets compromised, it can cause massive damage before anyone notices.

This speed also makes incident response more challenging. By the time security teams detect a problem, a compromised AI agent might have already accessed hundreds of systems and stolen massive amounts of data.

Why Current Monitoring Isn't Enough

Traditional security monitoring focuses on detecting unusual human behavior. Security systems look for patterns like:

  • Logging in from unusual locations

  • Accessing systems outside normal business hours

  • Downloading large amounts of data

  • Attempting to access unauthorized systems

These patterns don't work well for AI agents, which legitimately operate 24/7, access multiple systems simultaneously, and process large amounts of data as part of their normal function. That makes it much harder to distinguish legitimate AI behavior from malicious activity.
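
Effective monitoring therefore has to compare an agent against its own expected behavior rather than against human norms. A rough sketch of a per-agent baseline check follows, with thresholds and field names invented for illustration:

```python
# A rough per-agent baseline check: instead of flagging "activity at 3 a.m."
# (normal for an agent), compare current behavior against that agent's own
# recorded profile. Thresholds and field names are invented for illustration.

BASELINES = {
    "support-agent-1": {
        "max_requests_per_min": 600,
        "allowed_systems": {"crm", "billing", "tickets"},
    },
}

def flag_anomalies(agent_id: str, requests_per_min: int, systems_touched: set) -> list:
    baseline = BASELINES[agent_id]
    alerts = []
    if requests_per_min > baseline["max_requests_per_min"]:
        alerts.append(f"request rate {requests_per_min}/min exceeds baseline")
    unexpected = systems_touched - baseline["allowed_systems"]
    if unexpected:
        alerts.append(f"touched systems outside baseline: {sorted(unexpected)}")
    return alerts

print(flag_anomalies("support-agent-1", 2000, {"crm", "payroll"}))
```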


The Stochastic Challenge: When AI Becomes Unpredictable

AI systems are "stochastic," which means they don't always produce the same output for the same input. This unpredictability is actually a feature in many cases—it's what makes AI-generated content feel natural and creative rather than purely robotic.

But unpredictability becomes dangerous when combined with extensive system access. An AI agent might make decisions that seem reasonable to the AI but violate company policies or create security risks. Because these decisions emerge from complex machine learning models, they can be difficult to predict or prevent.

The Broader Implications: A Call for Systemic Change

Whittaker's concerns about agentic AI go beyond just technical problems. She sees these systems as part of a larger pattern of how power works in the digital world. Her experience co-founding the AI Now Institute—which studies how AI affects society—shapes how she thinks about these issues.

"This isn't just about new technology," she said in a recent interview. "It's about how power structures are changing in digital systems. When you give AI agents broad access and let them act independently, you're creating new kinds of concentrated power that are hard to challenge or control."

Whittaker has long criticized how a few big tech companies control essential digital infrastructure. She thinks agentic AI could make this concentration of power even worse.

"The same companies that built surveillance capitalism are building these AI systems," she pointed out. "Adding AI doesn't magically make them care about privacy."

Building Better Security for AI Agents

The good news is that security experts are developing new approaches to address these challenges. Here are some promising solutions:

  • Just-in-Time Access: Instead of giving AI agents permanent access to all systems they might need, organizations can grant permissions only when needed and automatically revoke them when tasks are complete (a minimal sketch follows this list).

  • Dynamic Permission Management: New systems can monitor AI agent behavior in real-time and adjust permissions based on current context and risk levels.

  • Verifiable Delegation: These frameworks allow humans to delegate specific permissions to AI agents with clear boundaries and audit trails.

  • Context-Aware Authorization: Advanced systems consider factors like current threat levels, the sensitivity of requested data, and the AI agent's recent behavior when making access decisions.
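
A just-in-time grant can be as simple as attaching an expiry and automatic revocation to every permission, so access disappears when the task does. A minimal sketch, not tied to any particular IAM product:

```python
# A minimal just-in-time access sketch: permissions are tied to a specific task
# and expire automatically. Illustrative only, not a particular IAM product's API.
import time

class JITAccessManager:
    def __init__(self):
        self._grants = {}  # (agent_id, scope) -> expiry timestamp

    def grant(self, agent_id: str, scope: str, ttl_seconds: int = 300) -> None:
        """Grant a scope only for the duration of the current task."""
        self._grants[(agent_id, scope)] = time.time() + ttl_seconds

    def revoke_task(self, agent_id: str) -> None:
        """Drop every grant for an agent as soon as its task completes."""
        self._grants = {k: v for k, v in self._grants.items() if k[0] != agent_id}

    def check(self, agent_id: str, scope: str) -> bool:
        expiry = self._grants.get((agent_id, scope))
        return expiry is not None and time.time() < expiry

iam = JITAccessManager()
iam.grant("travel-agent", "calendar:read", ttl_seconds=120)
print(iam.check("travel-agent", "calendar:read"))  # True during the task
iam.revoke_task("travel-agent")
print(iam.check("travel-agent", "calendar:read"))  # False once the task is done
```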

Practical Steps for Organizations

If your organization uses or plans to use agentic AI, here are concrete steps you can take to reduce security risks:

  • Start with an Inventory: Document all AI agents currently in use and classify them by the sensitivity of data they access and the potential impact if they're compromised (a simple sketch of such a record follows this list).

  • Implement Least Privilege: Give AI agents only the minimum permissions needed for their specific tasks. Regularly review and reduce these permissions when possible.

  • Monitor Everything: Log all AI agent actions and set up alerts for unusual behavior. This includes data access patterns, system interactions, and external communications.

  • Use Strong Authentication: Implement multi-factor authentication and regularly rotate credentials used by AI agents.

  • Plan for Incidents: Develop specific incident response procedures for AI-related security breaches, including how to quickly disable compromised agents.

  • Regular Security Reviews: Conduct periodic assessments of AI agent permissions and behavior to identify potential security gaps.
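
For the inventory step, even a small structured record of each agent, the data it touches, and a rough impact rating makes later reviews much easier. A sketch with arbitrary example fields and ratings:

```python
# A sketch of the "start with an inventory" step: record every deployed agent,
# what data it can reach, and a rough impact rating. Fields and ratings are
# arbitrary examples, not a standard.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    owner: str
    data_categories: set = field(default_factory=set)  # e.g. {"customer_pii", "billing"}
    impact_if_compromised: str = "low"                  # "low" | "medium" | "high"

inventory = [
    AgentRecord("support-assistant", "support-team", {"customer_pii", "tickets"}, "high"),
    AgentRecord("report-summarizer", "finance-team", {"internal_reports"}, "medium"),
]

# Surface the agents that deserve the strictest controls and most frequent reviews.
for record in sorted(inventory, key=lambda r: r.impact_if_compromised != "high"):
    print(record.name, record.impact_if_compromised, sorted(record.data_categories))
```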

The Human Element: Keeping People in the Loop

While AI agents can operate autonomously, maintaining human oversight is crucial for security. Organizations should implement "human-in-the-loop" controls that require human approval for sensitive actions.

This doesn't mean humans need to approve every AI action—that would defeat the purpose of automation. Instead, organizations can define specific scenarios that require human review (a simplified approval check is sketched after the list below), such as:

  • Accessing highly sensitive data

  • Making financial transactions above certain thresholds

  • Modifying security settings

  • Communicating with external parties
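
In practice this can be a small policy layer that pauses the agent and requests human sign-off whenever an action matches one of these scenarios. A simplified sketch, with thresholds and category names chosen for illustration:

```python
# A simplified human-in-the-loop gate: routine actions run automatically, while
# actions matching the sensitive scenarios above are held for human approval.
# Thresholds and category names are illustrative.

APPROVAL_THRESHOLD_EUR = 1000

def needs_human_approval(action: str, amount_eur: float = 0.0,
                         data_sensitivity: str = "normal",
                         external_recipient: bool = False) -> bool:
    if data_sensitivity == "high":
        return True
    if action == "payment" and amount_eur > APPROVAL_THRESHOLD_EUR:
        return True
    if action == "change_security_settings":
        return True
    if external_recipient:
        return True
    return False

print(needs_human_approval("payment", amount_eur=250))              # False: routine
print(needs_human_approval("payment", amount_eur=5000))             # True: above threshold
print(needs_human_approval("send_email", external_recipient=True))  # True: external party
```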

Looking Ahead: The Future of AI Security

The security challenges posed by agentic AI will only grow as these systems become more capable and widespread. Organizations that address these issues proactively will be better positioned to benefit from AI while protecting their data and systems.

New technologies specifically designed for AI security are emerging. These include specialized monitoring tools, AI-aware access management systems, and automated response capabilities that can react to AI-related threats at machine speed.

Regulatory frameworks are also evolving to address AI security risks. The European Union's AI Act, for example, requires organizations to implement specific security measures for high-risk AI systems.

Why This Matters Now

The "root permissions" problem isn't a future concern—it's happening today. Organizations across industries are deploying AI agents with broad system access, often without fully understanding the security implications.

Meredith Whittaker's warnings about AI agents needing "root-level" access to function effectively highlight a fundamental tension in AI deployment. Organizations want the benefits of AI automation, but they also need to protect sensitive data and maintain security.

The solution isn't to avoid AI entirely. Instead, organizations need to approach AI deployment with the same careful consideration they would give to any other technology that handles sensitive data. This means implementing appropriate security controls, monitoring AI behavior, and maintaining human oversight where necessary.

Outsmart AI-driven threats

Attackers use AI to exploit what your team exposes online. Brightside scans digital footprints with OSINT and runs real-world simulations—from deepfakes to targeted phishing—to reveal your weakest links.

Brightside’s personalized courses improve cybersecurity training. Start your free demo, no card required.

Taking Action

The first step in addressing the root permissions problem is recognizing that it exists. Many organizations deploy AI agents without fully considering the security implications of giving these systems broad data access.

Security teams need to work closely with AI developers and business stakeholders to understand how AI agents operate and what data they access. This collaboration is essential for developing effective security controls that protect data without unnecessarily limiting AI capabilities.

Organizations should also invest in training for their security teams. Traditional cybersecurity skills remain important, but protecting against AI-related threats requires understanding how these systems work and what makes them vulnerable.

The root permissions problem represents a new frontier in cybersecurity. Organizations that take it seriously and implement appropriate safeguards will be better positioned to harness the benefits of AI while protecting their most valuable assets: their data and their customers' trust.

As AI agents become more sophisticated and widespread, the stakes will only get higher. The time to address these security challenges is now, before they become critical vulnerabilities that threaten entire organizations.

Subscribe to the newsletter “All about human risks”

Subscribe to our newsletter to receive a quick overview of the latest news on human risk and the ever-changing landscape of phishing threats.