// PROGRAMS/CAPSTONE
APPLIED · 12-WEEK TEAM PROGRAM

AI Security Capstone Project Program

A 12-week applied program where your team secures a production-grade AI system end-to-end — threat-modelled, hardened, adversarially tested and operationalised — mentored throughout by Axiom Prime experts.

Duration
12 Weeks
Format
Team Cohort
Target
Live System
Mentorship
Expert-led
// OVERVIEW

Twelve weeks. One real AI system. Shipped securely, together.

Most AI security training stops at the lab door. The Capstone Project Program is different: your team selects a real production-grade AI system inside your organisation — an LLM-backed product, an internal copilot, an agentic workflow, or a classical ML pipeline — and spends twelve weeks securing it end-to-end under the guidance of Axiom Prime mentors.

The program blends weekly mentor sessions, structured delivery sprints and applied labs. By week 12 your team will have produced a documented threat model, a working set of controls, an adversarial test report and an executive readout — and, just as importantly, the muscle memory to repeat the playbook on the next system.

Capstone is the natural next step after CAISA, CCAIP, CDAIP, CAISO or CAIDSP — but it also stands alone for security teams who already hold credentials elsewhere and need to operationalise AI security inside a live environment.

// PROGRAM OBJECTIVES

What your team will do

  • Secure a production-grade AI system end-to-end — from data pipeline through model serving to downstream consumers.
  • Apply a structured AI threat-modelling methodology (STRIDE / PASTA / MITRE ATLAS) against your team's real target system.
  • Stand up continuous controls: red-team automation, prompt-injection defences, model integrity monitoring, and incident playbooks.
  • Translate AI security risk into board-level language, including residual risk, compensating controls and roadmap trade-offs.
  • Operate as a high-functioning AI security squad — mixed roles working to a delivery cadence under expert mentorship.
// OUTCOMES

What you'll walk away with

  • A documented threat model and security architecture for one production-grade AI system.
  • A working set of preventive, detective and responsive controls deployed against the target system.
  • An adversarial test report covering OWASP LLM Top 10, model extraction, data poisoning and jailbreak resilience.
  • An executive readout pack — risk register, control inventory, residual-risk statement and 12-month roadmap.
  • A team that has shipped real AI security work together and can repeat the playbook on the next system.
// 12-WEEK ROADMAP

Six phases. One end-to-end secured AI system.

01
Weeks 1–2

Discovery & Scoping

  • Target AI system selection & stakeholder alignment
  • Asset, data-flow & trust-boundary mapping
  • Success criteria, delivery cadence & RACI
  • Baseline maturity assessment
02
Weeks 3–4

Threat Modelling & Architecture

  • STRIDE / PASTA applied to the live system
  • MITRE ATLAS adversary mapping
  • Zero-trust architecture for AI workloads
  • Prioritised control backlog
03
Weeks 5–7

Build — Preventive & Detective Controls

  • Prompt-injection & output-handling defences
  • Model & data supply-chain integrity
  • Telemetry, guardrails & abuse detection
  • Identity, secrets & key management
04
Weeks 8–9

Adversarial Testing

  • OWASP LLM Top 10 assessment
  • Model extraction & membership inference
  • Data poisoning & jailbreak campaigns
  • Findings triage & remediation sprint
05
Weeks 10–11

Operationalise

  • Incident response runbooks for AI failures
  • Continuous red-team automation
  • On-call, escalation & vendor management
  • Metrics, KPIs & reporting cadence
06
Week 12

Capstone Readout

  • Executive briefing & live demo
  • Residual-risk statement & sign-off
  • 12-month AI security roadmap
  • Team retrospective & playbook handover
// DELIVERABLES

Tangible artefacts your CISO can act on.

AI Threat Model

Living artefact covering data, model, infra and downstream consumers — mapped to STRIDE, PASTA and MITRE ATLAS.

Control Inventory

Preventive, detective and responsive controls deployed against the target system, with owners and SLAs.

Adversarial Test Report

Findings across OWASP LLM Top 10, extraction, poisoning and jailbreak — with remediation status.

Executive Readout Pack

Board-ready narrative: risk register, residual-risk statement and 12-month roadmap.

Runbooks & Playbooks

Incident response, on-call, escalation and red-team automation tailored to your stack.

Team Capability Uplift

A squad that has shipped AI security work together — and can repeat it on the next system.

Expert Mentorship

Each cohort is paired with a dedicated Axiom Prime mentor — a senior practitioner with hands-on experience securing LLM products, agentic systems and ML pipelines. Weekly working sessions, async reviews and on-demand office hours.

Program Structure

  • 12-week applied program
  • Teams of 4–8 practitioners
  • ~6–8 hours per participant per week
  • Live system as the target — not a sandbox
// WHO IT'S FOR

For mixed security & AI teams.

Prerequisites: Participants should hold one of the Axiom Prime certifications (CAISA, CCAIP, CDAIP, CAISO or CAIDSP) or equivalent industry experience. The cohort works best as a cross-functional squad spanning architecture, engineering, AppSec and GRC.

Security architects & engineers
AI/ML platform & MLOps teams
Application security leads
GRC & risk officers
Incident response & SOC leads
CISO office & security program managers
// SAMPLE PROJECTS

Sample Capstone project ideas.

Real targets past cohorts have tackled. We work with you to scope the right system for your team — these are starting points, not a fixed menu.

Customer Experience

Harden a Customer-Facing LLM Chatbot

Secure a production support copilot against prompt injection, system-prompt extraction and PII leakage; deploy guardrails, abuse detection and red-team automation.

Healthcare

Secure a Clinical Decision-Support Model

Threat-model a diagnostic ML pipeline end-to-end — data lineage, model integrity, inference monitoring — and deliver a HIPAA-aligned control set.

Financial Services

Defend a Fraud-Detection Pipeline

Build defences against adversarial evasion, data poisoning and membership inference; instrument drift and integrity monitoring across the MLOps lifecycle.

Legal & Compliance

Risk-Assess an Agentic Document Workflow

Map trust boundaries for an agent that reads, summarises and acts on contracts; design tool-use authorisation, output validation and human-in-the-loop controls.

Industrial / OT

Secure a Predictive-Maintenance Model

Protect an OT-adjacent ML system from sensor spoofing and model extraction; design a safety-aware incident response runbook spanning IT and OT.

Retail & E-commerce

Govern an AI-Personalisation Engine

Stand up bias, abuse and data-leakage controls on a recommendation system; produce a board-ready residual-risk statement and 12-month roadmap.

// FROM PAST COHORTS

What teams say after week 12.

"By week 12 we had a real threat model, working guardrails and an executive pack our CISO took straight to the risk committee. Nothing else we've done has produced that kind of artefact density."
Priya Raman
Head of AppSec · Regional Bank, Singapore
"The Capstone forced our security and ML platform teams to actually ship together. We came out with a playbook we've now repeated on two more LLM products."
Daniel Okafor
Director of AI Engineering · Health-tech Scale-up
"Our mentor pushed hard on residual risk and the trade-offs we kept ducking. The week-12 readout was the first time the board genuinely understood our AI exposure."
Mei Ling Tan
CISO · Public-Sector Agency

Turn certified individuals into a shipping AI security team.

Reserve a Capstone cohort for your organisation, or talk to us about scoping the right target AI system for your team to secure.