Training Module
AI Fundamentals 2
Understand AI uncertainty, limitations, and common failure modes across predictive and generative AI systems
Training module overview
Many organisations underestimate how often AI behaviour shifts with context: changing input data, operational constraints, unclear objectives, or brittle integrations. When teams lack a shared view of AI limitations, they either over-control (“block everything”) or under-control (“trust the output”), both of which create governance and delivery problems.
This full-day domain fundamentals module explains the main sources of uncertainty and the most common failure modes across AI system types (including predictive and generative AI). It focuses on recognising how failures emerge in practice (e.g., data issues, model behaviour, system interactions, and human misuse), and how managers, implementers, and auditors can ask better questions and interpret technical artefacts realistically. It does not teach risk assessment methodology, lifecycle scoping/inventory methods, or operational control design; those are covered in the dedicated follow-up modules and the relevant generic foundation modules.
Target audience
AI management system managers and implementers working with technical teams
Governance, risk, and compliance professionals who need realistic AI expectations
Product owners and process owners responsible for AI-enabled services
Internal auditors who need AI domain fluency (not audit craft)
Agenda
Why limitations and uncertainty are central to AI governance
AI outputs as signals, not facts
Typical organisational failure patterns (over-trust vs. over-blocking)
Where uncertainty comes from in AI systems
Data uncertainty, label ambiguity, and operational context shifts
Model uncertainty vs. system uncertainty (integration, workflow, human factors)
Model behaviour limits (predictive ML)
Generalisation limits, spurious correlations, and distribution shift
Performance trade-offs and threshold effects (what “good” depends on)
Model behaviour limits (generative AI)
Hallucinations, instruction-following limits, and prompt sensitivity
Context window constraints and knowledge/update limits
Data-related failure modes
Training/serving skew, leakage, and silent data pipeline changes
Quality issues: missingness, bias in sampling, and provenance gaps
System and socio-technical failure modes
Automation bias, misuse, feedback loops, and gaming
Interface failures: brittle integrations, latency constraints, and fallback behaviour
Workshop: failure-mode walkthrough of a case system
Identify likely failure modes across data, model, system, and human use
Formulate concrete “what to ask for” evidence prompts (artefacts and signals)
Course ID:
HAM-AIF-2
Audience:
Auditor
Manager
Executive
Domain:
Artificial Intelligence
Available in:
English
Duration:
7 h
List price:
CHF 550
Excl. VAT. VAT may apply depending on customer location and status.
What you get
Learning outcomes
Explain the main sources of uncertainty in AI systems and how they differ by system type
Recognise common predictive ML failure modes (shift, leakage, threshold effects, brittleness)
Recognise common generative AI failure modes (hallucination patterns, prompt sensitivity, context limits)
Identify data pipeline and provenance issues that typically drive AI degradation over time
Describe socio-technical failure modes (misuse, automation bias, feedback loops) and how they appear in operations
Use a structured walkthrough to articulate likely failure modes for an AI-enabled service and the evidence that would clarify them
Learning materials
Slide deck
Participant workbook
Certificate of completion
Templates & tools
AI uncertainty map (sources and observable signals)
Failure mode catalogue (predictive, generative, and system-level)
Case walkthrough canvas (data → model → integration → use)
Evidence prompt set for technical walkthroughs (questions to request artefacts and operational signals)
AI-assisted summarisation prompt set for consolidating model/system artefact notes (optional)
Prerequisites
This module assumes baseline familiarity with core AI concepts and system types (data, training vs. inference, and common AI architecture patterns). Participants should also be comfortable reading high-level technical descriptions (services, APIs, data stores) without needing to implement them.
Helpful background includes:
Basic understanding of digital services and dependencies (applications, interfaces, data flows)
Familiarity with common IT control concepts (access control, logging, encryption) at a conceptual level
Helpful preparatory modules
The modules below provide ideal preparation for this course, but are not strictly required in order to follow it.
AI Foundations I: AI Concepts & System Types
Learn core AI concepts, AI system types, and the technical building blocks that underpin modern AI-enabled products and services
7 h
Continuous learning
Follow-up modules
After completing this module, the following modules are ideal for further deepening the participant's competence.

Ready to achieve mastery?
Bring ISO requirements into everyday practice to reduce avoidable issues and strengthen the trust of your customers and stakeholders.
