AI Security Capstone Project Program
A 12-week applied program where your team secures a production-grade AI system end-to-end — threat-modelled, hardened, adversarially tested and operationalised — mentored throughout by Axiom Prime experts.
Twelve weeks. One real AI system. Shipped securely, together.
Most AI security training stops at the lab door. The Capstone Project Program is different: your team selects a real production-grade AI system inside your organisation — an LLM-backed product, an internal copilot, an agentic workflow, or a classical ML pipeline — and spends twelve weeks securing it end-to-end under the guidance of Axiom Prime mentors.
The program blends weekly mentor sessions, structured delivery sprints and applied labs. By week 12 your team will have produced a documented threat model, a working set of controls, an adversarial test report and an executive readout — and, just as importantly, the muscle memory to repeat the playbook on the next system.
Capstone is the natural next step after CAISA, CCAIP, CDAIP, CAISO or CAIDSP — but it also stands alone for security teams who already hold credentials elsewhere and need to operationalise AI security inside a live environment.
What your team will do
- Secure a production-grade AI system end-to-end — from data pipeline through model serving to downstream consumers.
- Apply a structured AI threat-modelling methodology (STRIDE / PASTA / MITRE ATLAS) against your team's real target system.
- Stand up continuous controls: red-team automation, prompt-injection defences, model integrity monitoring, and incident playbooks.
- Translate AI security risk into board-level language, including residual risk, compensating controls and roadmap trade-offs.
- Operate as a high-functioning AI security squad — mixed roles working to a delivery cadence under expert mentorship.
What you'll walk away with
- A documented threat model and security architecture for one production-grade AI system.
- A working set of preventive, detective and responsive controls deployed against the target system.
- An adversarial test report covering OWASP LLM Top 10, model extraction, data poisoning and jailbreak resilience.
- An executive readout pack — risk register, control inventory, residual-risk statement and 12-month roadmap.
- A team that has shipped real AI security work together and can repeat the playbook on the next system.
Six phases. One end-to-end secured AI system.
Discovery & Scoping
- Target AI system selection & stakeholder alignment
- Asset, data-flow & trust-boundary mapping
- Success criteria, delivery cadence & RACI
- Baseline maturity assessment
Threat Modelling & Architecture
- STRIDE / PASTA applied to the live system
- MITRE ATLAS adversary mapping
- Zero-trust architecture for AI workloads
- Prioritised control backlog
Build — Preventive & Detective Controls
- Prompt-injection & output-handling defences
- Model & data supply-chain integrity
- Telemetry, guardrails & abuse detection
- Identity, secrets & key management
Adversarial Testing
- OWASP LLM Top 10 assessment
- Model extraction & membership inference
- Data poisoning & jailbreak campaigns
- Findings triage & remediation sprint
Operationalise
- Incident response runbooks for AI failures
- Continuous red-team automation
- On-call, escalation & vendor management
- Metrics, KPIs & reporting cadence
Capstone Readout
- Executive briefing & live demo
- Residual-risk statement & sign-off
- 12-month AI security roadmap
- Team retrospective & playbook handover
Tangible artefacts your CISO can act on.
- Threat model & security architecture: a living artefact covering data, model, infra and downstream consumers, mapped to STRIDE, PASTA and MITRE ATLAS.
- Control set: preventive, detective and responsive controls deployed against the target system, with owners and SLAs.
- Adversarial test report: findings across OWASP LLM Top 10, extraction, poisoning and jailbreak, with remediation status.
- Executive readout: a board-ready narrative with risk register, residual-risk statement and 12-month roadmap.
- Operational runbooks: incident response, on-call, escalation and red-team automation tailored to your stack.
- Repeatable playbook: a squad that has shipped AI security work together and can repeat it on the next system.
Expert Mentorship
Each cohort is paired with a dedicated Axiom Prime mentor — a senior practitioner with hands-on experience securing LLM products, agentic systems and ML pipelines. Weekly working sessions, async reviews and on-demand office hours.
Program Structure
- 12-week applied program
- Teams of 4–8 practitioners
- ~6–8 hours per participant per week
- Live system as the target — not a sandbox
For mixed security & AI teams.
Pre-requisites: Participants should hold one of the Axiom Prime certifications (CAISA, CCAIP, CDAIP, CAISO or CAIDSP) or equivalent industry experience. The cohort works best as a cross-functional squad spanning architecture, engineering, AppSec and GRC.
Sample Capstone project ideas.
Real targets past cohorts have tackled. We work with you to scope the right system for your team — these are starting points, not a fixed menu.
Harden a Customer-Facing LLM Chatbot
Secure a production support copilot against prompt injection, system-prompt extraction and PII leakage; deploy guardrails, abuse detection and red-team automation.
Secure a Clinical Decision-Support Model
Threat-model a diagnostic ML pipeline end-to-end — data lineage, model integrity, inference monitoring — and deliver a HIPAA-aligned control set.
Defend a Fraud-Detection Pipeline
Build defences against adversarial evasion, data poisoning and membership inference; instrument drift and integrity monitoring across the MLOps lifecycle.
Risk-Assess an Agentic Document Workflow
Map trust boundaries for an agent that reads, summarises and acts on contracts; design tool-use authorisation, output validation and human-in-the-loop controls.
Secure a Predictive-Maintenance Model
Protect an OT-adjacent ML system from sensor spoofing and model extraction; design a safety-aware incident response runbook spanning IT and OT.
Govern an AI-Personalisation Engine
Stand up bias, abuse and data-leakage controls on a recommendation system; produce a board-ready residual-risk statement and 12-month roadmap.
What teams say after week 12.
"By week 12 we had a real threat model, working guardrails and an executive pack our CISO took straight to the risk committee. Nothing else we've done has produced that kind of artefact density."
"The Capstone forced our security and ML platform teams to actually ship together. We came out with a playbook we've now repeated on two more LLM products."
"Our mentor pushed hard on residual risk and the trade-offs we kept ducking. The week-12 readout was the first time the board genuinely understood our AI exposure."
Turn certified individuals into a shipping AI security team.
Reserve a Capstone cohort for your organisation, or talk to us about scoping the right target AI system for your team to secure.
