Training Module

AI Risk, Impact & Harm Assessment

Assess AI impacts and harms, document findings, and connect them to risk decisions in an AI management system

Abstract digital interface with layered circular indicators and data rings, representing structured analysis of AI risks, impacts, and potential harms to support defensible governance decisions.

Are your AI risk decisions grounded in real impact analysis?

AI risks cannot be managed credibly without understanding who may be affected and how harm can occur. This module shows how to assess impacts and link them to documented risk decisions.


Training module overview

Generic statements about AI risk rarely improve decisions. Effective governance requires a structured understanding of who may be affected, how harm can occur, and how impacts translate into risk judgments.

This module explains how to interpret ISO/IEC 42001 expectations for impact and harm assessment, define clear assessment units, and identify affected parties. Participants develop harm scenarios, analyse impact pathways, and apply organisational risk criteria to evaluate severity and likelihood. The focus is on producing documented assessments that are traceable, defensible, and clearly linked to risk decisions and residual risk acceptance.

Applicable environments

This module applies to organisations implementing or operating an AI Management System (AIMS) in line with ISO/IEC 42001. It focuses on how the standard’s requirements are interpreted and applied in practice within real organisational contexts.

The content is relevant for organisations seeking certification as well as for those using ISO/IEC 42001 as a reference framework to structure responsibilities, processes, and controls in the AI management domain.

Target audience

  • People involved in designing, building, operating, or improving an AIMS aligned with ISO/IEC 42001

  • Executives and department heads accountable for the effectiveness and performance of an AIMS

  • Those responsible for AI-related processes, policies, applications, risks, or risk controls

  • ISO/IEC 42001 auditors who want to deepen their understanding of management-side best practices (not audit technique)

Decision support

Is this module for you?

It is a good fit if you…

  • need to assess AI impacts and harms in concrete, decision-ready terms.

  • struggle to explain who could be harmed, how, and why it matters.

  • must justify deployment, approval, or continuation decisions.

  • want impact and harm assessments that hold up in audits and oversight.

  • support or review ISO/IEC 42001 risk and assurance artefacts.

If most of the points above apply, this module is likely a good fit.

It may not be the best fit if you…

  • are primarily looking for general AI risk concepts or AI fundamentals.

  • expect technical risk modelling or quantitative risk methods.

  • want to design or implement operational controls.

  • already operate a mature, consistent AI impact and harm assessment process.

Agenda

  • What ISO/IEC 42001 means by risk, impact, and harm assessment

  • Defining the assessment unit and boundaries

  • Impact pathways and harm categories in AI use

  • Applying the organisation’s risk method to AI harms

  • Treatment and decision rationale in ISO/IEC 42001 terms

  • Documented information and traceability expectations

  • Audit-facing view: what “good” looks like in evidence

  • Case-based workshop


Learning outcomes

Key outcomes

  • Interpret ISO/IEC 42001 expectations for impact and harm assessment and relate them to organisational decision-making

  • Define assessment units and identify affected parties and stakeholders for AI systems

  • Describe harm scenarios and impact pathways and apply risk criteria to evaluate severity and likelihood

Additional capabilities

  • Produce documented information that traces assessments to treatment decisions and residual risk acceptance

  • Recognise implementation gaps and prevent “paper assessments” that lack credibility

  • Integrate impact and harm assessments into existing risk and compliance routines

Additional benefits

Learning materials

  • Slide deck

  • Participant workbook

Templates & tools

Practical, reusable artefacts to apply the module directly to your organisation.

  • AI impact and harm assessment template

  • Affected parties & harm mapping canvas

  • Risk criteria worksheet (linking to the organisation’s existing risk criteria)

  • Assessment review triggers checklist

  • Supporting AI prompt set

Confirmation

  • Certificate of completion

Module ID

HAM-AI-S-02

Discipline

ISO clause

6: Planning

Audience

Manager

Languages

English

Delivery

Live virtual

Duration

7 h

List price

CHF 550

Excl. VAT. VAT may apply depending on customer location and status.

Delivery & learning format

Virtual live teaching

This module is delivered live, with a strong focus on discussion, practical application, and direct interaction with the instructor.

Sessions work through realistic examples, clarify concepts in context, and apply methods directly to participants’ organisational realities.

Custom delivery options

For organisations with specific constraints or learning objectives, the module can be adapted in format or scope, including in-house delivery and contextualised case material.

Not sure if this module is right for you?

Send a short message and describe your context.

For an optimal learning experience

Preparation guidance

This module is part of a modular training approach: topics are deliberately distributed across modules rather than repeated in full. Each module is self-contained and can be taken on its own. Where prior knowledge or experience is helpful, it is indicated below so you can decide whether any preparation would be useful for you.

Assumed background

This module assumes participants can already work with a management system and a basic risk process, and can understand AI systems at a practical level.

Helpful background includes:

  • Familiarity with management system roles, responsibilities, and documented information practices

  • Working knowledge of the organisation’s risk criteria and approval/escalation expectations

  • Basic understanding of AI system lifecycle concepts and typical ways AI can fail or be misused

Preparatory modules

Foundational modules (depending on background)

Useful if you are new to the underlying concepts or want a shared baseline before attending this module.

Risk Management

Systematically identify, evaluate, treat & monitor risks and opportunities across management systems

7 h

AI Limitations & Failure Modes

AI uncertainty, limitations & common failure modes across predictive and generative AI systems

7 h

AI System Lifecycle & Inventory

Define AI system scope, set lifecycle boundaries, and maintain an AI system inventory aligned with ISO/IEC 42001

7 h

Supporting modules (optional)

Helpful if you want to deepen related skills, but not required to participate effectively.

AI Systems & Architectures

Core AI concepts, AI system types, AI agents, and the technical building blocks behind modern AI-enabled products and services

7 h

Continuous learning

Follow-up modules

After completion of this module, the following modules are ideal to further deepen your competence. If you are looking for a structured learning path, modules can also be taken as part of a professional track.

Office scene with people standing, walking and sitting

Ready to improve your management systems?

We support continuous improvement by embedding ISO requirements into everyday practice and daily operations.
