How we support you
Depending on your starting point, we support organisations across four clearly defined phases, from initial design through independent assurance to future-oriented development.
We help organisations establish practical governance for artificial intelligence across strategy, risk, control, accountability and assurance. This includes structuring AI management systems, embedding oversight into the AI lifecycle, and creating documentation and evidence that support responsible use, internal governance and external scrutiny.
01 Design
Establishing clear structures and accountability
AI governance framework and policy design, including AI Management Systems (AIMS) aligned with ISO/IEC 42001
Definition of roles, responsibilities and decision rights
AI system classification and risk categories
Integration into existing management systems (e.g. ISMS, QMS)
Design of documentation and evidence structures
02 Operate
Making AI governance work in daily practice
AI risk and system impact assessments
Operational processes for AI lifecycle management
Controls for data quality, model changes and human oversight
Incident and issue handling for AI-related risks
Enablement of key roles (management, product owners, compliance)
03 Assure
Providing confidence and audit readiness
Independent reviews of AI governance and AIMS structures
Control effectiveness and implementation checks
Outsourced internal audit based on ISO/IEC 42001
Certification readiness assessments
Supplier and third-party AI reviews
Preparation for internal and external audits
04 Evolve
Keeping governance effective as technology and regulation change
Monitoring regulatory and technological developments
Scenario analysis for future AI use cases
Maturity assessments and improvement roadmaps for AIMS
Sparring partner for executives on strategic AI decisions
Integration of new requirements into existing systems
Typical situations and challenges
Organisations typically contact us when one or more of the following situations arise.
AI tools are already in use, but roles and responsibilities are unclear
Management asks whether current AI usage is compliant and defensible
Concerns about legal, ethical or reputational risks of AI systems
Preparation for new regulations
Lack of transparency over data sources, models or decision logic
Pressure from customers, auditors or regulators
Typical starting points for engagement
Engagements often start with a focused assessment or review, such as the following.
AI risk assessment
AI system impact assessment
ISO/IEC 42001 readiness assessment
AI supplier & third-party review
AI policy & documentation review
Why Halderstone
Our approach
We focus on governance that works in practice, not paper frameworks
Strong experience with management systems and audits
Clear separation between design, operation and assurance
Independent, technology-agnostic perspective
Suitable for both early-stage AI adoption and regulated environments
What we deliberately do not do
Build or operate AI models ourselves
Offer generic, template-driven compliance solutions