Certified AI DevSecOps Professional
A 3-day advanced programme positioning senior security professionals at the intersection of AI engineering and offensive security — AI-augmented architecture, LLM application security, AI-assisted pentesting and adversarial AI defence.
Attacking AI is inseparable from designing it securely.
Organisations are deploying LLM-based applications, AI chatbots, and agentic workflows at speed — each representing an attack surface that traditional application security was never designed to evaluate. Security architects who cannot threat-model AI systems and penetration testers who cannot test prompt injection vulnerabilities are leaving material gaps that adversaries are actively exploiting.
This programme addresses the OWASP Top 10 for LLM Applications at the technical depth required for professional assessment work. The adversarial AI module — covering model extraction, membership inference, model inversion, and systematic AI red-team methodology — aligns with current global industry standards.
Aligned to SkillsFuture for Digital Workplace and addressing Singapore's shortage of AI-literate senior security professionals, the Certified AI DevSecOps Professional prepares participants to occupy the rare and highly valued position at the intersection of AI engineering and offensive security.
Skills you'll gain
- Apply AI tools to accelerate security architecture design — AI-assisted threat modelling (STRIDE/PASTA), zero-trust design for AI systems, and automated design review.
- Use AI to dramatically speed up penetration testing — OSINT reconnaissance, CVE analysis, payload generation, and attack research.
- Design and execute AI-assisted red team exercises including responsible use of AI-generated social engineering.
- Become proficient in LLM application security — the OWASP Top 10 for LLMs — including prompt injection taxonomy, system prompt extraction, and model behaviour manipulation.
- Model and defend against advanced adversarial AI attacks: model extraction, membership inference, data poisoning, and jailbreaking.
- Apply AI for strategic threat intelligence: actor profiling, campaign correlation, and dark web monitoring.
- Govern offensive AI programmes responsibly: ethical frameworks, authorisation requirements, and responsible disclosure.
After certification you'll be able to
- Explain the expanded attack surface introduced by LLM-based applications including prompt injection, system prompt exposure, and AI supply chain risks.
- Apply AI tools to STRIDE / PASTA threat modelling, design review, and zero-trust architecture for AI and agentic systems.
- Use AI tools for offensive operations: OSINT, AI-assisted payload generation, vulnerability pattern recognition and reconnaissance automation.
- Analyse adversarial AI attack techniques: prompt injection, model extraction, membership inference, jailbreaking and data poisoning.
- Evaluate and design secure architectures for LLM-based applications: prompt injection prevention, output validation, trust boundaries.
- Govern offensive security AI programmes: authorisation frameworks, responsible use policies, and disclosure management.
- Communicate AI security risk to senior leadership in business and regulatory language.
Three days. Seven modules. Built on real offensive tooling.
AI-Augmented Security Architecture & Threat Modelling
- ›Structuring AI outputs for professional security deliverables
- ›STRIDE & PASTA threat modelling with AI assistance
- ›MITRE ATT&CK + MITRE ATLAS technique mapping
- ›OWASP Threat Dragon for AI architectures
- ›Case Study: Threat-modelling an AI customer service chatbot
LLM Application Security & OWASP LLM Top 10
- ›Direct, indirect & multi-turn prompt injection
- ›System prompt extraction & confidentiality failures
- ›LLM01–LLM10: full OWASP LLM Top 10 walkthrough
- ›Insecure output handling: XSS, SSRF, code injection via LLM output
- ›AI supply chain security & model artefact provenance
AI-Assisted Penetration Testing & OSINT
- ›AI-accelerated target profiling & attack-surface enumeration
- ›Nmap output analysis with AI
- ›AI-assisted CVE analysis & payload generation
- ›Burp Suite + AI for web application testing
- ›Lab: Full AI-augmented pentest kill chain on HTB target
Red Team Operations & AI-Generated Social Engineering
- ›Red teaming LLM applications — adversarial input generation
- ›Jailbreaking methodology and detection
- ›Responsible use of AI-generated phishing & pretexting
- ›Authorisation, scoping and ethical guardrails
- ›Lab: Red team exercise against vulnerable LLM application
Adversarial AI Defence & Advanced Attack Techniques
- ›Model extraction & capability reverse engineering
- ›Membership inference & model inversion attacks
- ›Data poisoning across the ML lifecycle
- ›Systematic AI red teaming methodology
- ›Defensive controls & detection engineering for adversarial AI
Threat Intelligence, Governance & Capstone
- ›AI for strategic threat intelligence & campaign correlation
- ›AI-assisted attribution analysis & its limits
- ›Governing offensive AI programmes — authorisation & disclosure
- ›Communicating AI security risk to the board
- ›Capstone: 3-hour applied AI security assessment exercise
Industry-standard offensive security tooling.
AI threat modeller & research assistant — STRIDE generation, attack-surface analysis, security review checklists.
Web/API attack surface mapping, AI-assisted request analysis and payload construction.
Reconnaissance, scan-result interpretation and AI-augmented exploit research.
Vulnerable-app labs and visual threat modelling for AI architectures.
Live target environments for AI-augmented penetration testing kill chains.
Adversary TTP mapping for traditional and AI-specific threats.
Integrated Red-Team Labs
Cloud-accessible lab environment with vulnerable LLM applications, Hack The Box targets, Burp Suite, OWASP WebGoat and Claude.ai. Instructor-led exercises across the full AI-augmented kill chain.
Capstone & Certification
- 3-hour applied capstone exercise
- 70% aggregate passing score
- ISO 17024 certified credential
- 3-day intensive programme (24 hours)
For senior security practitioners.
Pre-requisites: Minimum 3 years of cybersecurity experience in architecture, pentesting or red team roles. Familiarity with threat modelling, web application security and at least one testing tool (Nmap, Burp Suite, Metasploit or equivalent).
What CAIDSP alumni say.
"CAIDSP completely reshaped how we secure our AI/LLM pipelines — from threat-modeling design reviews to runtime guardrails in production."
"The AI-assisted pentesting labs were exceptional. We've integrated several techniques into our internal AppSec testing playbook."
"Excellent balance between offensive and defensive AI tradecraft. My platform team finally speaks the same language as our security team."
"We used the OWASP LLM Top 10 walkthroughs to harden our GenAI products in the very first sprint after the course."
Operate at the intersection of AI engineering and offensive security.
Reserve your seat in the next CAIDSP cohort, or talk to us about private on-site delivery for your security team.
