Certified AI GRC Professional
A 3-day intensive programme delivering an end-to-end AI GRC capability — governance frameworks, regulatory alignment, risk identification, control design, assurance and audit readiness.
Traditional GRC was not built for AI risk.
AI is rapidly transforming enterprises — but it is also introducing new categories of risk that traditional GRC models are not equipped to handle. Biased decisions, regulatory exposure, adversarial attacks and model unpredictability all demand AI-specific governance, risk and compliance frameworks.
The Certified AI GRC Professional equips practitioners with a practical, end-to-end AI GRC capability — covering governance structures, regulatory alignment, risk identification, control design, assurance and audit readiness. The programme blends frameworks, real-world scenarios and hands-on exercises, culminating in a capstone where participants produce a complete AI governance pack.
- Establish AI governance aligned with global frameworks
- Identify and assess AI-specific risks across the lifecycle
- Implement privacy, security and control mechanisms
- Execute AI assurance and compliance validation
- Build audit-ready documentation and operating models
After certification, you'll be able to:
- Differentiate AI governance from traditional GRC and apply Responsible AI principles.
- Map AI initiatives to NIST AI RMF, EU AI Act and MAS FEAT requirements.
- Build AI risk registers and run AI Impact Assessments (AIIA).
- Design preventive, detective and corrective controls for high-risk AI systems.
- Operationalise continuous AI assurance — fairness, robustness, explainability and drift.
- Stand up an AI governance operating model with Three Lines of Defense.
- Produce an audit-ready AI governance pack including model cards & evidence logs.
Three days. Six modules. From frameworks to audit-ready governance.
AI Governance & Global Regulatory Landscape
- AI Governance vs Traditional GRC
- Responsible AI & Digital Trust principles
- NIST AI RMF overview
- EU AI Act — risk-based classification
- MAS FEAT principles & mapping AI governance into enterprise GRC
AI Risk Management & Impact Assessment
- AI risk taxonomy: ethical, legal, operational, security
- Risk identification across the AI lifecycle
- AI Risk Registers — structure & scoring
- AI Impact Assessments (AIIA) and prioritisation models
- Workshop: classify AI use cases and build a Risk Register + AIIA
AI Data Governance, Privacy & Lifecycle Risks
- AI data lifecycle governance
- Privacy in AI systems — GDPR & PDPA concepts
- Data leakage, memorisation & re-identification risks
- Dataset misuse and provenance
- Controls: consent traceability, data minimisation, access control
AI Security, Threat Modeling & Control Design
- Adversarial ML — evasion, poisoning, extraction
- STRIDE for AI & MITRE ATLAS (conceptual)
- LLM risks: prompt injection, hallucination, data exposure
- Preventive, detective, corrective controls + Policy-as-Code
- Workshop: AI threat model & control design for a high-risk system
AI Assurance, Testing & Continuous Compliance
- Assurance dimensions: fairness, robustness, explainability
- Testing frameworks (AI Verify concepts)
- Monitoring: drift detection, bias monitoring, performance risk
- Embedding continuous compliance into pipelines
- Evidence collection for ongoing assurance
AI Governance Operating Model & Audit Readiness
- Roles, responsibilities & Three Lines of Defense
- Governance artifacts: risk registers, AIIAs, model cards, audit logs
- Audit readiness & evidence-based compliance
- Embedding AI governance into enterprise workflows
- Capstone: AI hiring / credit-scoring governance pack
Capstone Simulation
Scenario: AI-based hiring / credit-scoring system. Teams develop a complete AI Risk Register, perform an AI Impact Assessment, define controls and governance policies, review an AI assurance report and produce an audit-ready AI governance pack.
Certification
- 3-day intensive programme
- Instructor-led + hands-on labs + capstone
- ISO/IEC 17024-accredited credential
- NIST AI RMF · EU AI Act · MAS FEAT aligned
For governance, risk and compliance leaders.
Designed for GRC, audit, privacy, legal and security leaders accountable for the safe and compliant deployment of AI across the enterprise.
What CAGRC alumni say.
"CAGRC gave our team a clear, practical path to operationalise the EU AI Act in 90 days — not in theory, in actual policy and audit artifacts."
"The mapping of NIST AI RMF, ISO 42001 and EU AI Act into a single working framework was worth the entire course on its own."
"The audit methodology and AI risk register templates are now a permanent part of our governance toolkit."
"Finally a GRC course that doesn't just quote regulations — it teaches you how to govern AI in the real enterprise."
Operationalise AI governance.
Join the next CAGRC cohort and walk away with a complete, audit-ready AI governance pack aligned to global frameworks.
Enroll in CAGRC