Training Module
AI Risk, Impact & Harm Assessment

Understand how to assess AI impacts and harms, document results, and connect them to risk decisions in an AI management system

Understand

Implement

Manage

Audit

Training module overview

Many organisations have risk registers that mention “AI” but cannot explain, in concrete terms, who might be harmed, how harm could occur, and what makes the residual risk acceptable. The result is fragile approval decisions, unclear accountability, and assurance activities that focus on document presence rather than decision quality.

This ISO/IEC 42001 specialisation module focuses on how the standard expects organisations to assess AI-related impacts and harms and to document the resulting decisions. It applies (but does not re-teach) generic risk-management methods, and it stays separate from AI concepts, lifecycle inventory work, and operational control implementation.

Target audience

  • AI management system managers and implementers (ISO/IEC 42001)

  • Risk, compliance, privacy, and product governance professionals working with AI-enabled processes

  • Control owners and system owners accountable for AI deployment approvals

  • Internal auditors and assurance professionals reviewing ISO/IEC 42001 readiness and system effectiveness

Agenda

What ISO/IEC 42001 means by risk, impact, and harm assessment

  • Distinguishing AI-specific impact/harm assessment from generic enterprise risk assessment

  • Where assessment outputs are expected to inform governance decisions and controls

Defining the assessment unit and boundaries

  • What exactly is being assessed: AI system, use case, deployment context, users and affected parties

  • Boundary decisions that change the assessment (interfaces, human involvement, downstream use)

Impact pathways and harm categories in AI use

  • From system behaviour to real-world effects (error, misuse, unfair outcomes, automation side-effects)

  • Typical harm categories and who can be harmed (individuals, groups, organisation, wider context)

Applying the organisation’s risk method to AI harms

  • Turning harm scenarios into clear risk statements and linking to existing risk criteria

  • Recording assumptions, uncertainty, and confidence in a way that supports decisions

Treatment and decision rationale in ISO/IEC 42001 terms

  • Connecting assessment results to constraints, controls, and usage conditions (without designing controls)

  • Residual risk acceptance, escalation, and approval evidence

Documented information and traceability expectations

  • Minimum viable artefacts: assessment record, decision log, review triggers, change history

  • Traceability to obligations, internal requirements, and monitoring inputs (without teaching doc architecture)

Audit-facing view: what “good” looks like in evidence (without audit craft)

  • Common evidence patterns auditors expect for impact/harm assessment and approvals

  • Typical failure modes: generic statements, missing affected parties, unowned assumptions

Workshop (case-based, Halderstone-provided)

  • Build an impact & harm assessment for a provided AI use case and connect it to risk decisions

  • Peer review for clarity, traceability, and governance readiness (case-based, not organisation-specific)

Course ID:

HAM-ARIA-1

Audience:

Auditor

Manager

Domain:

Artificial Intelligence

Available in:

English

Duration:

7 h

List price:

CHF 550

Excl. VAT. VAT may apply depending on customer location and status.

What you get

Learning outcomes

  • Interpret ISO/IEC 42001 expectations for assessing AI impacts and harms and positioning them within an AI management system

  • Define a practical assessment unit (system/use case/context) and identify relevant affected parties

  • Describe credible harm scenarios and impact pathways in a way that supports governance decisions

  • Apply the organisation’s existing risk criteria to AI harm scenarios and document the rationale without inventing parallel scoring schemes

  • Produce audit-ready documented information that shows traceability from assessment to decisions and follow-up triggers

  • Recognise common implementation gaps and “paper assessments” that fail under scrutiny (management or audit)

Learning materials

  • Slide deck

  • Participant workbook

  • Certificate of completion

Templates & tools

  • ISO/IEC 42001-aligned AI Impact & Harm Assessment Record template

  • Affected Parties & Harm Mapping canvas

  • Risk Criteria Linkage worksheet (explicitly uses the organisation’s existing risk criteria)

  • Residual Risk Acceptance & Escalation decision log

  • Assessment Review Triggers checklist (change- and monitoring-driven)

  • Audit Evidence Checklist for AI risk/impact/harm assessment (evidence types, not audit techniques)

  • AI prompt set for structured harm-scenario brainstorming and summarisation (supporting judgement)

Prerequisites

This module assumes participants can already work with a management system and a basic risk process, and have a practical understanding of AI systems.

Helpful background includes:

  • Familiarity with management system roles, responsibilities, and documented information practices

  • Working knowledge of the organisation’s risk criteria and approval/escalation expectations

  • Basic understanding of AI system lifecycle concepts (data → training → deployment → monitoring) and typical ways AI can fail or be misused (conceptual, not technical deep-dive)

Strongly recommended preparatory modules

Risk Management Foundations: Consistent Risk and Opportunity Logic Across Management Systems

Learn the fundamentals of identifying, evaluating, treating, and monitoring risks and opportunities across management systems.

7 h

AI Foundations II: AI Limitations, Uncertainty & Failure Modes

Understand AI uncertainty, limitations, and common failure modes across predictive and generative AI systems

7 h

AI System Scope, Lifecycle & Inventory (ISO/IEC 42001)

Define AI system scope, lifecycle boundaries, and a maintained AI system inventory aligned to ISO/IEC 42001

7 h

Helpful preparatory modules

The modules below support an optimal learning experience, but they are not strictly necessary for participants to follow this module.

AI Foundations I: AI Concepts & System Types

Learn core AI concepts, AI system types, and the technical building blocks that underpin modern AI-enabled products and services

7 h

Ready to achieve mastery?

Bring ISO requirements into everyday practice to reduce avoidable issues and strengthen the trust of your customers and stakeholders.
