Training Module
AI Risk, Impact & Harm Assessment
Understand how to assess AI impacts and harms, document results, and connect them to risk decisions in an AI management system

Are your AI risk decisions grounded in real impact analysis?
AI risks cannot be managed credibly without understanding who may be affected and how harm can occur. This module shows how to assess impacts and link them to documented risk decisions.
Training module overview
Generic statements about “AI risk” rarely improve decisions. Effective governance requires a structured understanding of who may be affected, how harm can occur, and how impacts translate into risk judgments.
This module explains how to interpret ISO/IEC 42001 expectations for impact and harm assessment, define clear assessment units, and identify affected parties. Participants develop harm scenarios, analyse impact pathways, and apply organisational risk criteria to evaluate severity and likelihood. The focus is on producing documented assessments that are traceable, defensible, and clearly linked to risk decisions and residual risk acceptance.
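The assessment flow described above (assessment unit, affected parties, harm scenarios, severity and likelihood, treatment decision, residual risk acceptance) can be sketched as a minimal data structure. This is purely illustrative: the field names are assumptions for this sketch, not terms prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass, field

# Illustrative record for one harm scenario; field names are
# assumptions for this sketch, not ISO/IEC 42001 terminology.
@dataclass
class HarmScenario:
    affected_party: str   # who may be harmed
    impact_pathway: str   # how the harm can occur
    harm_category: str    # e.g. physical, financial, rights-related
    severity: int         # rated against the organisation's criteria
    likelihood: int

@dataclass
class ImpactAssessment:
    assessment_unit: str                 # the AI system or use case in scope
    scenarios: list[HarmScenario] = field(default_factory=list)
    treatment_decision: str = ""         # documented decision rationale
    residual_risk_accepted_by: str = ""  # traceability to the accepting role

    def highest_risk(self) -> int:
        # A simple severity x likelihood screen; in practice the
        # organisation's own risk method defines how ratings combine.
        return max((s.severity * s.likelihood for s in self.scenarios), default=0)
```

A structure like this makes each assessment traceable from the affected party through the harm scenario to the documented decision, which is the property auditors look for.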
Applicable environments
This module applies to organisations implementing or operating an AI Management System (AIMS) in line with ISO/IEC 42001. It focuses on how the standard’s requirements are interpreted and applied in practice within real organisational contexts.
The content is relevant for organisations seeking certification as well as for those using ISO/IEC 42001 as a reference framework to structure responsibilities, processes, and controls in the AI management domain.
Target audience
People involved in designing, building, operating, or improving an AIMS aligned with ISO/IEC 42001
Executives and department heads accountable for the effectiveness and performance of an AIMS
Those responsible for processes, policies, applications, risks, or risk controls related to AI
Auditors of ISO/IEC 42001 who want to deepen their understanding of management-side best practices (not audit technique)
Decision support
Is this module for you?
It is a good fit if you…
need to assess AI impacts and harms in concrete, decision-ready terms.
struggle to explain who could be harmed, how, and why it matters.
must justify deployment, approval, or continuation decisions.
want impact and harm assessments that hold up in audits and oversight.
support or review ISO/IEC 42001 risk and assurance artefacts.
If most of the points above apply, this module is likely a good fit.
It may not be the best fit if you…
are primarily looking for general AI risk concepts or AI fundamentals.
expect technical risk modelling or quantitative risk methods.
want to design or implement operational controls.
already operate a mature, consistent AI impact and harm assessment process.
Agenda
What ISO/IEC 42001 means by risk, impact, and harm assessment
Defining the assessment unit and boundaries
Impact pathways and harm categories in AI use
Applying the organisation’s risk method to AI harms
Treatment and decision rationale in ISO/IEC 42001 terms
Documented information and traceability expectations
Audit-facing view: what “good” looks like in evidence (without audit craft)
Case-based workshop
Learning outcomes
Key outcomes
Interpret ISO/IEC 42001 expectations for impact and harm assessment and relate them to organisational decision‑making
Define assessment units and identify affected parties and stakeholders for AI systems
Describe harm scenarios and impact pathways and apply risk criteria to evaluate severity and likelihood
Additional capabilities
Produce documented information that traces assessments to treatment decisions and residual risk acceptance
Recognise implementation gaps and prevent “paper assessments” that lack credibility
Integrate impact and harm assessments into existing risk and compliance routines
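As a concrete illustration of the outcome "apply risk criteria to evaluate severity and likelihood", a severity-by-likelihood matrix lookup might be sketched as follows. The band thresholds here are assumptions for the example; in practice an organisation applies its own documented risk criteria.

```python
# Hypothetical 5x5 risk matrix; the thresholds are illustrative only
# and would be replaced by the organisation's documented risk criteria.
def risk_level(severity: int, likelihood: int) -> str:
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must be rated 1-5")
    score = severity * likelihood
    if score >= 15:
        return "high"    # escalate; treatment required before approval
    if score >= 6:
        return "medium"  # treatment or a documented acceptance rationale
    return "low"         # acceptable within normal approval authority
```

The point of such a mapping is not the arithmetic but the traceability: every rating and resulting band can be tied back to the criteria the organisation has already adopted.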
Additional benefits
Learning materials
Slide deck
Participant workbook
Templates & tools
Practical, reusable artefacts to apply the module directly to your organisation.
AI impact and harm assessment template
Affected parties & harm mapping canvas
Risk criteria worksheet (linking to the organisation’s existing risk criteria)
Assessment review triggers checklist
Supporting AI prompt set
Confirmation
Certificate of completion
Module ID
HAM-AI-S-02
Domain
Audience
Auditor
Manager
Language
English
Delivery
Live virtual
Duration
7 h
List price
CHF 550
Excl. VAT. VAT may apply depending on customer location and status.
Delivery & learning format
Virtual live teaching
This module is delivered live, with a strong focus on discussion, practical application, and direct interaction with the instructor.
Sessions work through realistic examples, clarify concepts in context, and apply methods directly to participants’ organisational realities.
Custom delivery options
For organisations with specific constraints or learning objectives, the module can be adapted in format or scope, including in-house delivery and contextualised case material.
Not sure if this module is right for you?
For an optimal learning experience
Preparation guidance
This module is designed as part of a modular training approach. Topics are deliberately distributed across modules and are not repeated in full, in order to avoid unnecessary redundancy. Each module is self-contained and can be taken on its own. Where prior knowledge or experience is helpful, this is indicated below so you can decide whether any preparation is useful for you.
Assumed background
This module assumes participants can already work with a management system and a basic risk process, and can understand AI systems at a practical level.
Helpful background includes:
Familiarity with management system roles, responsibilities, and documented information practices
Working knowledge of the organisation’s risk criteria and approval/escalation expectations
Basic understanding of AI system lifecycle concepts and typical ways AI can fail or be misused
Preparatory modules
Foundational modules (depending on background)
Useful if you are new to the underlying concepts or want a shared baseline before attending this module.
Risk Management Foundations
Learn the fundamentals of identifying, evaluating, treating, and monitoring risks and opportunities across management systems
7 h
AI Fundamentals II
Understand AI uncertainty, limitations, and common failure modes across predictive and generative AI systems
7 h
AI System Scope, Lifecycle & Inventory
Define AI system scope, lifecycle boundaries, and a maintained AI system inventory aligned to ISO/IEC 42001
7 h
Supporting modules (optional)
Helpful if you want to deepen related skills, but not required to participate effectively.
AI Fundamentals I
Learn core AI concepts, AI system types, and the technical building blocks that underpin modern AI-enabled products and services
7 h
Continuous learning
Follow-up modules
After completing this module, the following modules are ideal for further deepening your competence. If you are looking for a structured learning path, modules can also be taken as part of a professional track.
Operational Control of AI Systems
Understand how to define, implement, and maintain operational controls for AI systems across deployment, change, and monitoring
Duration
7 h
List price
CHF 550
AI System Scope, Lifecycle & Inventory
Define AI system scope, lifecycle boundaries, and a maintained AI system inventory aligned to ISO/IEC 42001
Duration
7 h
List price
CHF 550

Ready to improve your management systems?
We support continuous improvement by embedding ISO requirements into everyday practice and daily operations.
