AI Agent Security
The real risks of autonomous agents: prompt injection, exfiltration via MCP, sandboxing, permissions, security hooks. AI agent red teaming.
What you'll be able to do
Identify attack vectors specific to AI agents
Implement sandboxing and permission management mechanisms
Configure security hooks to detect and block malicious behavior
Conduct a red teaming exercise on an AI agent
Define a security policy for deploying agents in production
Program
The AI agent threat model
- The 3 attack surfaces: prompt, tools, context
- Direct and indirect prompt injection: live demonstrations
- Injection vectors: web pages, emails, documents, API responses
- Defense techniques: data/instruction separation, validation, human confirmation
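The data/instruction separation and validation ideas above can be sketched in a few lines. This is a minimal illustration, not a complete defense: the delimiter tag, the keyword patterns, and the function names are all assumptions for the example, and a real deployment would layer this with human confirmation on sensitive tool calls.

```python
import re

# Naive injection screen: patterns are illustrative only, not exhaustive.
SUSPICIOUS = re.compile(
    r"(ignore (all )?previous instructions|you are now|system prompt)",
    re.IGNORECASE,
)

def wrap_untrusted(content: str) -> str:
    """Wrap external content (web page, email, API response) in explicit
    delimiters so the model can be instructed to treat it as data only."""
    return f"<untrusted_data>\n{content}\n</untrusted_data>"

def screen_for_injection(content: str) -> bool:
    """Return True if the content matches a known injection pattern."""
    return SUSPICIOUS.search(content) is not None
```

A keyword screen like this is easy to bypass on its own, which is precisely why the course pairs it with structural separation and human-in-the-loop confirmation.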
Exfiltration, sandboxing and permissions
- Exfiltration via agent tools: files, URLs, APIs
- MCP and attack surface: malicious servers, tool poisoning, escalation
- Sandboxing: filesystem, network, process isolation
- Workshop: configuring a secure environment for an agent
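One building block of filesystem isolation is a path allowlist: every file access an agent tool requests is resolved and checked against a workspace root. The sketch below assumes a hypothetical workspace path and function name; it illustrates the principle, not a full sandbox (which would also cover network and process isolation).

```python
from pathlib import Path

# Hypothetical workspace root the agent is confined to.
WORKSPACE = Path("/tmp/agent-workspace").resolve()

def is_path_allowed(requested: str) -> bool:
    """Resolve the requested path (following '..' and symlinks) and
    check that it stays inside the workspace root."""
    try:
        resolved = (WORKSPACE / requested).resolve()
    except OSError:
        return False
    return resolved == WORKSPACE or WORKSPACE in resolved.parents
```

Resolving before checking matters: a naive string-prefix test is defeated by `../` traversal and symlinks, both classic exfiltration routes.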
Security hooks and red teaming
- Detection hooks: secrets, suspicious URLs, dangerous commands
- Prevention hooks: blocking git push --force, limiting outgoing requests
- Red teaming in pairs: attacking and defending an agent
- Hook bypass: testing protection robustness
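A pre-execution hook in the spirit of this session can be sketched as a pattern review of the shell command an agent wants to run. The patterns and function name here are illustrative assumptions, not a vetted blocklist; the red-teaming exercise is exactly about finding bypasses in rules like these.

```python
import re

# Illustrative deny patterns: destructive git push, dangerous delete,
# and an AWS-style access key id leaking into a command.
BLOCKED_PATTERNS = [
    re.compile(r"git\s+push\s+.*--force"),
    re.compile(r"rm\s+-rf\s+/(\s|$)"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def review_command(command: str) -> tuple[bool, str]:
    """Return (allowed, reason). A real hook would also log the
    decision and escalate denials to a human."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(command):
            return False, f"blocked by pattern: {pattern.pattern}"
    return True, "ok"
```

Regex blocklists are a first layer only: trivial obfuscation (aliases, variable expansion, encoding) gets past them, which motivates the hook-bypass exercise above.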
Security policy and monitoring
- Writing your organization's agent security policy
- Production monitoring: what to log, KPIs, alerting
- Incident response: detection, containment, investigation, recovery
- Action plan: the first 5 hardening actions
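For the monitoring point above, one concrete starting point is a structured audit record per agent tool call. The field names below are assumptions for illustration, not a standard schema; the point is that every tool invocation and every hook decision should be logged in a machine-parseable form for KPIs and alerting.

```python
import json
import time

def audit_record(agent_id: str, tool: str, args: dict, decision: str) -> str:
    """Serialize one agent tool call as a JSON log line.
    decision is e.g. "allow", "deny", or "needs_review"."""
    return json.dumps({
        "ts": time.time(),
        "agent_id": agent_id,
        "tool": tool,
        "args": args,
        "decision": decision,
    })
```

Emitting one JSON line per tool call makes denial rates and unusual tool usage easy to turn into the KPIs and alerts the session covers.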
Practical info
Duration: 1 to 2 days (7 to 14 hours)
Audience: CISO, CTO, DevSecOps, developers, security architects
Prerequisites: basic IT security knowledge; familiarity with AI concepts
Group size: 1 to 6 people
Format: 30% theory, 70% offensive and defensive workshops in sandbox environments. Each participant leaves with an AI security policy and configured hooks.
Assessment: entry-level assessment, red teaming exercises, end-of-training knowledge evaluation
Trainer: Colombani.ai, AI developer and cybersecurity expert
Lead time: 2 weeks minimum between enrollment and training start
Pricing
€1,100 / person / day
€6,500 / group (4-10 people) / 2 days
Custom pricing on request. Each program is adapted to your situation; contact us for a personalized quote.
Accessibility
Program accessible to people with disabilities. Contact the disability officer in advance to discuss accommodations.
Ulysse Trin — [email protected] — 06 58 58 37 11
Post-training support
Frequently asked questions
Do I need pentesting skills for this training?
No, but basic IT security knowledge is needed. The training is progressive: we start with the threat model before moving to red teaming.
Is red teaming done on real agents?
Yes, on agents configured in a sandbox for the exercise. You attack and defend real agents in a controlled environment.
Does this training cover MCP risks?
Yes, an entire session is dedicated to MCP risks: malicious servers, tool poisoning, privilege escalation. It's a critical attack vector for agents.
Related programs
Claude Code in Production
Code with an AI agent, organize your repository, deploy as a team. CLAUDE.md, hooks, subagents, multi-agent workflows. From terminal to production setup in 2 days.
AI Compliance: AI Act & GDPR
The AI Act classifies autonomous AI agents as high-risk systems. Classification, documentation, risk management. Prepare your compliance before August 2026.
Request the full program
Bespoke program, tailored to your industry. First call is free.