Agentic AI: new attack surface, new defences

How to stop your agents becoming somebody else’s backdoor



As AI agents and multi-agent systems move from demo to production workloads, agents are making decisions, calling tools, moving money and touching sensitive data.

That also makes them a target.

So hear from AWS and Palo Alto Networks (PAN) about how to stay secure when you plug Amazon Bedrock Agents and multi-agent workflows into your infrastructure, how MCP and agent-to-agent (A2A) links may expand the attack surface, and why just adding guardrails isn’t enough.

Join Cristiano Marciel of AWS and Kanthi Sarella of PAN, who are talking to The Reg’s Tim Phillips about how to build security into agentic architectures from day one: covering OWASP-style risks for agents and MCP, where cloud-native guardrails help, and where you need runtime protection that understands agents in context.

And stay to the end for a demo of PAN’s Prisma AIRS, a runtime security layer for AI systems that watches what agents actually do rather than what they were supposed to do.

You will learn:

  • How agentic AI changes your threat model.
  • Where MCP, A2A and multi-agent patterns introduce new classes of vulnerability.
  • How to use guardrails and runtime controls so AI agents stay productive and secure.