Skip to content
govern

AI Agent Security

The real risks of autonomous agents: prompt injection, exfiltration via MCP, sandboxing, permissions, security hooks. AI agent red teaming.

1 to 2 days (7 to 14 hours) CISO, CTO, DevSecOps, developers, security architects

What you'll be able to do

Identify attack vectors specific to AI agents

Implement sandboxing and permission management mechanisms

Configure security hooks to detect and block malicious behavior

Conduct a red teaming exercise on an AI agent

Define a security policy for deploying agents in production

Program

Day 1 — Morning

The AI agent threat model

  • The 3 attack surfaces: prompt, tools, context
  • Direct and indirect prompt injection: live demonstrations
  • Injection vectors: web pages, emails, documents, API responses
  • Defense techniques: data/instruction separation, validation, human confirmation
Day 1 — Afternoon

Exfiltration, sandboxing and permissions

  • Exfiltration via agent tools: files, URLs, APIs
  • MCP and attack surface: malicious servers, tool poisoning, escalation
  • Sandboxing: filesystem, network, process isolation
  • Workshop: configuring a secure environment for an agent
Day 2 — Morning (optional)

Security hooks and red teaming

  • Detection hooks: secrets, suspicious URLs, dangerous commands
  • Prevention hooks: block push --force, limit outgoing requests
  • Red teaming in pairs: attacking and defending an agent
  • Hook bypass: testing protection robustness
Day 2 — Afternoon (optional)

Security policy and monitoring

  • Writing your organization's agent security policy
  • Production monitoring: what to log, KPIs, alerting
  • Incident response: detection, containment, investigation, recovery
  • Action plan: the first 5 hardening actions

Practical info

Duration

1 to 2 days (7 to 14 hours)

Target audience

CISO, CTO, DevSecOps, developers, security architects

Prerequisites

Basic IT security knowledge. Familiarity with AI concepts.

Group size

1 to 6 people

Pedagogy

30% theory, 70% offensive and defensive workshops on sandbox environments. Each participant leaves with an AI security policy and configured hooks.

Assessment

Entry-level assessment, red teaming exercises, end-of-training knowledge evaluation.

Trainer

Colombani.ai, AI developer and cybersecurity expert.

Access delay

2 weeks minimum between enrollment and training start.

Pricing

Open enrollment

€1,100 / person / day

In-company

€6,500 / group (4-10 pers.) / 2 days

Pricing on request. Each program is adapted to your situation, contact us for a personalized quote.

Accessibility

Program accessible to people with disabilities. Contact the disability officer in advance to discuss accommodations.

Ulysse Trin — [email protected] — 06 58 58 37 11

Post-training support

Red teaming report written during the exercise
Agent security policy (template + completed version)
Suite of configured and tested security hooks
Security checklist for agent deployment
Security support for 30 days

Frequently asked questions

Do I need pentesting skills for this training? +

No, but basic IT security knowledge is needed. The training is progressive: we start with the threat model before moving to red teaming.

Is red teaming done on real agents? +

Yes, on agents configured in a sandbox for the exercise. You attack and defend real agents in a controlled environment.

Does this training cover MCP risks? +

Yes, an entire session is dedicated to MCP risks: malicious servers, tool poisoning, privilege escalation. It's a critical attack vector for agents.

Request the full program

Bespoke program, tailored to your industry. First call is free.

Voir cette page en français