FOUNDATIONAL · ASSOCIATE TRACK

Certified AI Security Associate

A 2-day intensive foundational programme blending core cybersecurity principles with AI-specific risks, adversarial techniques, threat modeling and the OWASP LLM Top 10 — built for practitioners new to AI security.

Duration: 2 Days
Level: Foundational
Format: Intensive
Capstone: Scenario
// OVERVIEW

Traditional cybersecurity is no longer enough.

AI systems are rapidly expanding enterprise capabilities — but also introducing new attack surfaces across data, models and AI-driven applications. This course provides a foundational understanding of AI security, combining core cybersecurity principles with AI-specific risks and attack techniques.

Participants will learn why AI security matters, how AI systems are attacked, and how to apply basic threat modeling and defensive thinking. By the end, learners will be able to identify risks, understand attack patterns, and recommend basic controls for AI systems.

// KEY FOCUS AREAS
  • Cybersecurity fundamentals applied to AI
  • AI/ML pipeline risks and vulnerabilities
  • Adversarial AI threats and attack techniques
  • Threat modeling for AI systems
  • OWASP LLM Top 10 risks
// LEARNING OUTCOMES

After certification you'll be able to:

  • Explain core cybersecurity and AI security concepts.
  • Identify AI-specific threats and attack techniques.
  • Understand AI system vulnerabilities and attack surfaces.
  • Apply basic threat modeling to AI use cases.
  • Interpret OWASP LLM Top 10 risks.
  • Recommend foundational AI security controls.
// COURSE OUTLINE

Two days. Six modules. Foundations to applied scenario.

01
Day 01 — Module 1

Cybersecurity Foundations for AI

  • CIA Triad — Confidentiality, Integrity, Availability
  • AAA — Authentication, Authorization, Accounting
  • Security controls: preventive, detective, corrective
  • Threat actors: APTs, insiders, cybercriminals
  • Cyber Kill Chain & digital trust in modern enterprises
02
Day 01 — Module 2

Introduction to AI Security & Attack Surface

  • AI security vs traditional application security
  • AI/ML lifecycle: Data → Training → Model → Inference → Deployment
  • Why AI is vulnerable: data dependency, model opacity, API exposure
  • Real-world misuse: deepfakes & AI-driven phishing
  • Workshop: map an AI system & identify attack surfaces
03
Day 01 — Module 3

AI Threats & Adversarial Techniques

  • Evasion attacks, data poisoning & model extraction
  • Model inversion & backdoor attacks
  • Data leakage & memorization risks
  • LLM risks: prompt injection, jailbreaking, hallucinations
  • Exercise: match attack types to AI lifecycle stages
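Prompt injection, one of the LLM risks listed above, can be made concrete with a minimal sketch. The function names and the marker list below are illustrative assumptions, not course material; real injection detection is far harder than a phrase denylist, which is exactly why the module treats it as a serious threat.

```python
# Illustrative only: why prompt injection works, and a naive
# (easily bypassed) detective control. Names are hypothetical.

SYSTEM_PROMPT = "You are a support bot. Never reveal internal data."

def build_prompt(user_input: str) -> str:
    # Vulnerable pattern: untrusted text is concatenated straight into
    # the prompt, so instructions and data share a single channel and
    # the model cannot reliably tell them apart.
    return f"{SYSTEM_PROMPT}\nUser: {user_input}"

# Simple denylist of common injection phrases.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "you are now",
    "system prompt",
)

def looks_like_injection(user_input: str) -> bool:
    # Flag inputs containing known injection phrasing before they
    # reach the model. Trivial paraphrases will slip past this.
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

The point of the sketch is the vulnerable pattern in `build_prompt`, not the filter: string matching illustrates the idea of a detective control but offers no real guarantee.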
04
Day 02 — Module 4

Threat Modeling for AI Systems

  • Assets, threats, vulnerabilities & entry points
  • STRIDE simplified for AI systems
  • Attack-path mapping basics
  • MITRE ATT&CK & MITRE ATLAS (AI threat lens)
  • Workshop: guided AI threat model
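The STRIDE-for-AI exercise above can be sketched as a simple lookup from STRIDE category to an example AI threat. The examples are illustrative pairings for discussion, not an authoritative taxonomy:

```python
# Hypothetical STRIDE-to-AI-threat mapping, for workshop-style
# discussion only; each pairing is one example, not a definition.
STRIDE_AI_EXAMPLES = {
    "Spoofing":               "deepfake voice used to pass voice authentication",
    "Tampering":              "training-data poisoning that implants a backdoor",
    "Repudiation":            "missing inference logs hide who queried the model",
    "Information disclosure": "model inversion recovers training records",
    "Denial of service":      "adversarial queries exhaust inference capacity",
    "Elevation of privilege": "prompt injection drives an agent to call privileged tools",
}

def stride_example(category: str) -> str:
    """Return an example AI threat for a STRIDE category."""
    return STRIDE_AI_EXAMPLES[category]
```

A guided threat model would attach several threats per category to each asset and entry point; the single-example table just shows the shape of the mapping.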
05
Day 02 — Module 5

OWASP LLM Top 10 Risks

  • Prompt Injection & Insecure Output Handling
  • Sensitive Information Disclosure & Model Denial of Service
  • Supply Chain Vulnerabilities & Overreliance on AI
  • Mapping risks to enterprise environments
  • Real-world misuse scenarios
06
Day 02 — Module 6

Defensive Principles & Capstone

  • Secure AI basics: input validation, output filtering, access control
  • Monitoring: logging & anomaly detection
  • Human risks: social engineering & deepfake awareness
  • Awareness of AI security tooling
  • Capstone: AI chatbot scenario — STRIDE-lite + OWASP LLM Top 10
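The input-validation and output-filtering basics above can be sketched in a few lines. This is a hypothetical sketch, not the course's reference implementation; the limits and the secret-token patterns are assumed placeholders:

```python
import re

# Illustrative defensive basics for an AI chatbot (names hypothetical):
# a preventive input check plus a corrective output filter.

MAX_INPUT_CHARS = 2000
# Placeholder patterns resembling common API-key formats.
SECRET_PATTERN = re.compile(r"(sk-[A-Za-z0-9]{8,}|AKIA[0-9A-Z]{16})")

def validate_input(user_input: str) -> bool:
    # Preventive control: reject oversized input and control characters
    # (newlines and tabs are allowed).
    if len(user_input) > MAX_INPUT_CHARS:
        return False
    return not any(ord(ch) < 32 and ch not in "\n\t" for ch in user_input)

def filter_output(model_output: str) -> str:
    # Corrective control: mask strings that look like credentials
    # before the response leaves the system.
    return SECRET_PATTERN.sub("[REDACTED]", model_output)
```

In practice these controls sit alongside access control, logging and anomaly detection; regex redaction catches only known token shapes, which is why the module pairs filtering with monitoring.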

Capstone Exercise

End-to-end scenario built around an AI chatbot / virtual assistant. Participants identify attack surfaces, map threats with STRIDE-lite, analyse risks via the OWASP LLM Top 10 and recommend foundational security controls.

Certification

  • 2-day intensive programme
  • Foundational level — no prerequisites
  • ISO/IEC 17024-accredited credential
  • Pathway into Professional-tier tracks
// TARGET AUDIENCE

For practitioners new to AI security.

No prior AI experience required. Designed for cybersecurity, IT, development and risk professionals who need a structured grounding in how AI systems are attacked and defended.

SOC Analysts
IT & Security Teams
Developers
Risk Professionals
Compliance & Audit Staff
Beginners in AI Security
// ALUMNI VOICES

What CAISA alumni say.

"CAISA gave me the structured grounding I needed — moving from generic cyber awareness to clearly understanding how AI systems get attacked."
Sneha P.
Security Analyst · Banking, Mumbai
"The OWASP LLM Top 10 module was a game changer. I now run AI security reviews for every new model deployment in my team."
Akira N.
Application Security Engineer · SaaS, Tokyo
"Two days, zero filler. The capstone forced us to actually map a real AI threat model — not just memorise definitions."
Lucia F.
IT Risk Officer · Insurance, Madrid
"Best foundational AI security course I've taken. Our entire SOC junior tier went through CAISA before being trusted with AI alerts."
Daniel K.
SOC Lead · MSSP, Dubai

Start your AI security journey.

Join the foundational cohort and progress into Axiom Prime's Professional-tier certifications across DefenAI, CyberAI, SecOps, DevSecOps and GRC.

Enroll in CAISA